\n Abstract:\n

\n Cryptography is a fundamental pillar of cybersecurity.\n

\n However, quantum computing would change classical cryptography drastically. Most of the public-key cryptosystems that constitute the security infrastructure of the Internet will be broken by quantum computers. Recently, quantum attacks have also been found on basic symmetric-key cryptosystems such as CBC-MAC. Advances in the development of full-scale quantum computers make the transition to quantum-safe cryptography pressing.\n

\n In this talk, I will introduce the threats that cryptography faces and how to establish a formal foundation of cryptography in a quantum world. In the first part, I will describe my work on designing quantum algorithms that solve several long-standing algebraic problems efficiently, an exponential speedup over the best known classical algorithms. Then I will show how our quantum algorithms can break some cryptosystems in the family of lattice-based cryptography, which is considered a promising candidate to resist quantum attacks [1]. In the second part, I will discuss the difficulties of, and strategies for, analyzing the security of classical cryptographic constructions in the presence of quantum attacks. I will describe my recent work [2] on establishing the quantum security of several popular message authentication schemes based on block ciphers, such as NMAC and HMAC.\n

\n This affirms that domain extension for block ciphers is feasible even against a strong type of quantum attack, despite the recent breaks of many symmetric-key constructions (e.g., CBC-MAC and Galois/Counter Mode).\n
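For context, HMAC is one of the block-cipher/hash-based message authentication schemes whose quantum security the talk addresses. A minimal sketch of generating and verifying an HMAC-SHA256 tag with Python's standard library follows; the key and message are illustrative, not from the talk:

```python
import hmac
import hashlib

key = b"shared-secret-key"          # illustrative shared key
msg = b"transfer $100 to alice"     # illustrative message

# Tag generation: HMAC-SHA256 of the message under the shared key.
tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify(key: bytes, msg: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

print(verify(key, msg, tag))            # a valid tag is accepted
print(verify(key, b"tampered", tag))    # a forged message is rejected
```

A quantum adversary in the strong model the talk considers may query such a MAC in superposition; the cited result [2] shows the NMAC/HMAC construction remains secure nonetheless.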

\n I will also discuss briefly quantum cryptography, which can supplement and expand the capability of classical cryptography. Beyond cryptography, I will point out a few other exciting directions that quantum computing can offer.\n

\n [1] Efficient quantum algorithms for computing class groups and solving the principal ideal problem in arbitrary degree number fields, by Jean-François Biasse and Fang Song. (SODA 2016)\n

\n [2] Quantum Security of NMAC and Related Constructions, by Fang Song and Aaram Yun. (Crypto 2017)\n

UID:20180226T153000Z-37439@calendar.tamu.edu
URL:http://calendar.tamu.edu/live/events/37439-cybersecurity-invited-speaker-presentation-with
LAST-MODIFIED:20180219T221007Z
ATTACH;FMTTYPE=image/jpeg:http://calendar.tamu.edu/live/image/gid/53/width/80/height/80/crop/1/src_region/0,0,400,400/1956_tamuengineering-avatar-instagram.jpg
X-LIVEWHALE-TYPE:events
X-LIVEWHALE-ID:37439
X-LIVEWHALE-TIMEZONE:America/Chicago
X-LIVEWHALE-IMAGE:http://calendar.tamu.edu/live/image/gid/53/width/80/height/80/crop/1/src_region/0\,0\,400\,400/1956_tamuengineering-avatar-instagram.jpg
X-LIVEWHALE-CONTACT-INFO:Dr. Daniel Ragsdale
X-LIVEWHALE-SUMMARY:Please join us as Fang Song presents his talk\, "Get cryptography ready in a quantum world."
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20180228T161000
DTEND;TZID=America/Chicago:20180228T171000
LOCATION:Bright Building
GEO:30.619003;-96.338807
SUMMARY:CSCE 681 Open Seminar with Boris Hanin
DESCRIPTION:Abstract:\n\n Due to the compositional nature of deep neural nets\, the function they compute often produces gradients whose magnitude is either very close to 0 or very large. This so-called vanishing and exploding gradient problem is often already present at initialization and is a major impediment to gradient-based optimization techniques. The purpose of this talk is to give a rigorous answer to the question of which neural architectures have exploding and vanishing gradients for feed-forward neural nets with ReLU activations. The results discussed apply to both convolutional and fully connected networks\, and they represent the first more or less complete characterization of exploding and vanishing gradients for feed-forward networks.
X-ALT-DESC;FMTTYPE=text/html:\n Abstract:\n

\n Due to the compositional nature of deep neural nets, the function they compute often produces gradients whose magnitude is either very close to 0 or very large. This so-called vanishing and exploding gradient problem is often already present at initialization and is a major impediment to gradient-based optimization techniques. The purpose of this talk is to give a rigorous answer to the question of which neural architectures have exploding and vanishing gradients for feed-forward neural nets with ReLU activations. The results discussed apply to both convolutional and fully connected networks, and they represent the first more or less complete characterization of exploding and vanishing gradients for feed-forward networks.\n
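As a concrete illustration (mine, not from the talk), the sketch below pushes a gradient vector backward through a randomly initialized deep ReLU network in pure Python. The per-layer weight variance determines whether the gradient norm vanishes, explodes, or stays moderate with depth; the depth, width, and variances chosen are arbitrary:

```python
import math
import random

def backprop_norm(depth: int, width: int, sigma2: float, seed: int = 0) -> float:
    """Norm of a gradient vector propagated backward through `depth`
    random ReLU layers with i.i.d. N(0, sigma2/width) weights."""
    rng = random.Random(seed)
    # Forward pass: record each layer's ReLU activation mask.
    x = [1.0] * width
    layers = []
    for _ in range(depth):
        W = [[rng.gauss(0.0, math.sqrt(sigma2 / width)) for _ in range(width)]
             for _ in range(width)]
        pre = [sum(W[i][j] * x[j] for j in range(width)) for i in range(width)]
        mask = [1.0 if p > 0 else 0.0 for p in pre]          # ReLU derivative
        x = [p * m for p, m in zip(pre, mask)]
        layers.append((W, mask))
    # Backward pass: g <- W^T (mask * g) for each layer, in reverse.
    g = [1.0] * width
    for W, mask in reversed(layers):
        gm = [gi * mi for gi, mi in zip(g, mask)]
        g = [sum(W[i][j] * gm[i] for i in range(width)) for j in range(width)]
    return math.sqrt(sum(gi * gi for gi in g))

# With ReLU, only half the units are active on average, so the squared
# gradient norm scales roughly like (sigma2/2)^depth: sigma2 below 2
# vanishes, above 2 explodes, and 2/width ("He" initialization) is stable.
for sigma2 in (1.0, 2.0, 4.0):
    print(sigma2, backprop_norm(depth=50, width=30, sigma2=sigma2))
```

This only probes initialization variance; the talk's characterization concerns which architectures (depth/width configurations) exhibit the problem in a rigorous sense.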

UID:20180228T221000Z-37634@calendar.tamu.edu
URL:http://calendar.tamu.edu/live/events/37634-csce-681-open-seminar-with-boris-hanin
LAST-MODIFIED:20180221T231456Z
ATTACH;FMTTYPE=image/jpeg:http://calendar.tamu.edu/live/image/gid/53/width/80/height/80/crop/1/src_region/0,0,600,600/1440_csce_1x1-primary.png
X-LIVEWHALE-TYPE:events
X-LIVEWHALE-ID:37634
X-LIVEWHALE-TIMEZONE:America/Chicago
X-LIVEWHALE-IMAGE:http://calendar.tamu.edu/live/image/gid/53/width/80/height/80/crop/1/src_region/0\,0\,600\,600/1440_csce_1x1-primary.png
X-LIVEWHALE-CONTACT-INFO:Dr. Atlas Wang\nDr. Ben Hu
X-LIVEWHALE-SUMMARY:Please join us as Boris Hanin presents his talk\, "Which Neural Net Architectures Give Rise to Exploding and Vanishing Gradients?"
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20180228T161000
DTEND;TZID=America/Chicago:20180228T172500
LOCATION:Bright Building
GEO:30.619003;-96.338807
SUMMARY:CSCE 681 Open Seminar with Bihan Wen
DESCRIPTION:Abstract\n Techniques exploiting the sparsity of signals in a transform domain or dictionary have been popular in signal processing and computer vision. While synthesis-model-based methods\, such as the well-known dictionary learning\, are widely used\, the emerging sparsifying transform learning technique has received interest recently. It allows cheap and exact computations\, and demonstrates promising performance in various applications including image and video processing\, medical image reconstruction\, and computer vision. In this talk\, I will provide an overview of the transform learning problem. Several advanced data-driven transform models that we proposed will be discussed. Extending from local patch sparsity\, I will show how non-local transform learning can be applied in image and video applications\, and demonstrate state-of-the-art results.
X-ALT-DESC;FMTTYPE=text/html:

\n Abstract:\n

\n Techniques exploiting the sparsity of signals in a transform domain or dictionary have been popular in signal processing and computer vision. While synthesis-model-based methods, such as the well-known dictionary learning, are widely used, the emerging sparsifying transform learning technique has received interest recently. It allows cheap and exact computations, and demonstrates promising performance in various applications including image and video processing, medical image reconstruction, and computer vision. In this talk, I will provide an overview of the transform learning problem. Several advanced data-driven transform models that we proposed will be discussed. Extending from local patch sparsity, I will show how non-local transform learning can be applied in image and video applications, and demonstrate state-of-the-art results.\n
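To make the "cheap and exact computations" point concrete: in the transform model, sparse coding is exact hard thresholding of the transformed signal, unlike the NP-hard sparse coding of the synthesis model. A minimal sketch (mine, not from the talk) using a fixed orthonormal DCT as a stand-in for a learned transform; the signal and sparsity level are illustrative:

```python
import math

def dct_matrix(n: int) -> list:
    """Orthonormal DCT-II matrix; rows act as the transform's analysis atoms."""
    C = []
    for k in range(n):
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        C.append([scale * math.cos(math.pi * k * (2 * j + 1) / (2 * n))
                  for j in range(n)])
    return C

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

def transpose(M):
    return [list(row) for row in zip(*M)]

n = 8
W = dct_matrix(n)                           # stand-in for a learned transform
x = [math.sin(0.3 * j) for j in range(n)]   # smooth test signal

# Transform-model sparse coding is exact and cheap: apply W, then
# hard-threshold, keeping the s largest-magnitude coefficients.
z = matvec(W, x)
s = 3
kept = sorted(range(n), key=lambda i: -abs(z[i]))[:s]
z_sparse = [z[i] if i in kept else 0.0 for i in range(n)]

# W is orthonormal, so W^T inverts it; reconstruct from the sparse codes.
x_hat = matvec(transpose(W), z_sparse)
err = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, x_hat)))
print(round(err, 4))
```

The talk's methods learn W from data (and extend sparsity from local patches to non-local groups), but the thresholding step above is what makes the transform model's sparse coding exact rather than approximate.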

UID:20180228T221000Z-34320@calendar.tamu.edu
URL:http://calendar.tamu.edu/live/events/34320-csce-681-open-seminar-with-bihan-wen
LAST-MODIFIED:20171214T145927Z
ATTACH;FMTTYPE=image/jpeg:http://calendar.tamu.edu/live/image/gid/53/width/80/height/80/crop/1/src_region/0,0,600,600/1440_csce_1x1-primary.png
X-LIVEWHALE-TYPE:events
X-LIVEWHALE-ID:34320
X-LIVEWHALE-TIMEZONE:America/Chicago
X-LIVEWHALE-IMAGE:http://calendar.tamu.edu/live/image/gid/53/width/80/height/80/crop/1/src_region/0\,0\,600\,600/1440_csce_1x1-primary.png
X-LIVEWHALE-CONTACT-INFO:Dr. Zhangyang (Atlas) Wang
X-LIVEWHALE-SUMMARY:Please join us as Bihan Wen presents his talk\, "Transform Learning for Non-Local Image and Video Modeling."
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20180301T161000
DTEND;TZID=America/Chicago:20180301T171000
LOCATION:Bright Building
GEO:30.619003;-96.338807
SUMMARY:IAP Distinguished Lecturer Presentation with Juan Garay
DESCRIPTION:Abstract:\n\n As the first and most popular decentralized cryptocurrency to date\, Bitcoin has ignited much excitement\, not only for its novel realization of a central bank-free financial instrument\, but also as an alternative approach to classical problems in distributed computing and cryptographic protocols\, such as reaching consensus in the presence of misbehaving parties.\n\n In this talk\, after a brief introduction to the innovative and distributedly-maintained data structure known as the "blockchain\," we present the first formalization of the Bitcoin core protocol\, identifying its fundamental properties\, and showing how a distributed public ledger can be built "on top" of them. This rigorous cryptographic treatment shows that such a ledger is robust if and only if the majority of the mining power is honest. This brings us to the second part of the talk: Why then does Bitcoin work\, given that the real-world system (the size of existing "mining pools\," in particular) does not necessarily adhere to this assumption? Using an approach that mixes game theory and cryptography\, we show how natural incentives in combination with a high monetary value of Bitcoin can explain why Bitcoin continues to work even though majority coalitions are in fact possible.
X-ALT-DESC;FMTTYPE=text/html:\n Abstract:\n

\n As the first and most popular decentralized cryptocurrency to date, Bitcoin has ignited much excitement, not only for its novel realization of a central bank-free financial instrument, but also as an alternative approach to classical problems in distributed computing and cryptographic protocols, such as reaching consensus in the presence of misbehaving parties.\n

\n In this talk, after a brief introduction to the innovative and distributedly-maintained data structure known as the "blockchain," we present the first formalization of the Bitcoin core protocol, identifying its fundamental properties, and showing how a distributed public ledger can be built "on top" of them. This rigorous cryptographic treatment shows that such a ledger is robust if and only if the majority of the mining power is honest. This brings us to the second part of the talk: Why then does Bitcoin work, given that the real-world system (the size of existing "mining pools," in particular) does not necessarily adhere to this assumption? Using an approach that mixes game theory and cryptography, we show how natural incentives in combination with a high monetary value of Bitcoin can explain why Bitcoin continues to work even though majority coalitions are in fact possible.\n
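The "mining power" in the abstract refers to hashing power spent on proof-of-work puzzles: finding a hash with enough leading zeros is expensive, while verifying one is a single hash. A toy sketch follows (mine, not from the talk; the difficulty and block contents are illustrative and far simpler than Bitcoin's actual block header format):

```python
import hashlib

def mine(block_data: bytes, difficulty: int) -> int:
    """Scan nonces until SHA-256(block_data || nonce) has `difficulty`
    leading zero hex digits -- the puzzle miners race to solve."""
    target = "0" * difficulty
    nonce = 0
    while True:
        h = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if h.startswith(target):
            return nonce
        nonce += 1

def valid(block_data: bytes, nonce: int, difficulty: int) -> bool:
    # Verification costs one hash -- cheap for every participant to check.
    h = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
    return h.startswith("0" * difficulty)

block = b"prev_hash|tx_root|"   # illustrative stand-in for a block header
nonce = mine(block, difficulty=4)
print(nonce, valid(block, nonce, 4))
```

Because each hash attempt succeeds independently with fixed probability, the fraction of total hashing power a party controls equals its chance of extending the chain next; this is the quantity behind the "honest majority of mining power" condition above.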

UID:20180301T221000Z-36882@calendar.tamu.edu
URL:http://calendar.tamu.edu/live/events/36882-iap-distinguished-lecturer-presentation-with-juan
LAST-MODIFIED:20180205T141955Z
ATTACH;FMTTYPE=image/jpeg:http://calendar.tamu.edu/live/image/gid/53/width/80/height/80/crop/1/src_region/0,0,600,600/1440_csce_1x1-primary.png
X-LIVEWHALE-TYPE:events
X-LIVEWHALE-ID:36882
X-LIVEWHALE-TIMEZONE:America/Chicago
X-LIVEWHALE-IMAGE:http://calendar.tamu.edu/live/image/gid/53/width/80/height/80/crop/1/src_region/0\,0\,600\,600/1440_csce_1x1-primary.png
X-LIVEWHALE-CONTACT-INFO:Taffie Behringer
X-LIVEWHALE-SUMMARY:Please join us as Dr. Juan Garay presents his talk\, "But Why Does It Work? A Cryptographer's Take on Bitcoin."
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20180301T180000
DTEND;TZID=America/Chicago:20180301T190000
LOCATION:MSC Bethancourt
SUMMARY:CSE Spring Awards Banquet
UID:20180302T000000Z-35437@calendar.tamu.edu
URL:http://calendar.tamu.edu/live/events/35437-cse-spring-awards-banquet
LAST-MODIFIED:20180110T215133Z
X-LIVEWHALE-TYPE:events
X-LIVEWHALE-ID:35437
X-LIVEWHALE-TIMEZONE:America/Chicago
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20180321T093000
DTEND;TZID=America/Chicago:20180321T103000
LOCATION:Bright Building
GEO:30.619003;-96.338807
SUMMARY:CSE Faculty Candidate Presentation with Nima Khademi Kalantari
DESCRIPTION:Abstract: The field of computer graphics\, specifically computational photography and rendering\, has seen tremendous progress over the past decades and\, as a result\, is an essential part of the film\, gaming\, and camera industries. However\, the current state-of-the-art algorithms use complex optimization systems with heuristic components\, and thus\, are typically slow and produce sub-optimal results. Deep learning has the potential to revolutionize computer graphics by modeling problems in a data-driven and systematic way.
However\, the major challenge in applying deep learning to synthesis applications\, such as view synthesis and high dynamic range imaging\, is the task complexity and lack of large-scale training data.\n In this talk\, I show how to address these problems by incorporating the underlying physical process of these applications into deep learning. Instead of solving one complex problem with deep learning\, which is often intractable\, I propose to break it into two smaller sub-problems that are physically motivated and easier to learn on limited training data. Specifically\, I propose a novel\, general two-stage framework\, which can be applied (with slight modifications) to a variety of problems\, including light field image synthesis\, high dynamic range image and video generation\, and Monte Carlo denoising.
X-ALT-DESC;FMTTYPE=text/html:\n Abstract: The field of computer graphics, specifically computational photography and rendering, has seen tremendous progress over the past decades and, as a result, is an essential part of the film, gaming, and camera industries. However, the current state-of-the-art algorithms use complex optimization systems with heuristic components, and thus, are typically slow and produce sub-optimal results. Deep learning has the potential to revolutionize computer graphics by modeling problems in a data-driven and systematic way. However, the major challenge in applying deep learning to synthesis applications, such as view synthesis and high dynamic range imaging, is the task complexity and lack of large-scale training data.

\n In this talk, I show how to address these problems by incorporating the underlying physical process of these applications into deep learning. Instead of solving one complex problem with deep learning, which is often intractable, I propose to break it into two smaller sub-problems that are physically motivated and easier to learn on limited training data. Specifically, I propose a novel, general two-stage framework, which can be applied (with slight modifications) to a variety of problems, including light field image synthesis, high dynamic range image and video generation, and Monte Carlo denoising.\n