STAT 991: Topics in Deep Learning (UPenn)
STAT 991: Topics in Deep Learning is a seminar class at UPenn started in 2018. It surveys advanced topics in deep learning based on student presentations.
Fall 2019
- Lecture notes (~170 pages, ~30 MB; mostly covering notes from previous semesters).
Lectures
Lectures 1 and 2: Introduction and uncertainty quantification (jackknife+, and Pearce et al., 2018), presented by Edgar Dobriban.
Lecture 3: The Neural Tangent Kernel (NTK) by Jiayao Zhang. See also the accompanying blog post on the Off the Convex Path blog.
Lecture 4: Adversarial robustness by Yinjun Wu.
Lecture 5: ELMo and BERT by Dan Deutsch.
Lecture 6: TCAV by Ben Auerbach (adapted from Been Kim's slides).
Lecture 7: Spherical CNN by Arjun Guru and Claudia Zhu.
Lecture 8: DNNs and approximation by Yebiao Jin.
Lecture 9: Deep Learning and PDEs by Chenyang Fang.
Bias and Fairness by Chetan Parthiban.
Lecture 10: Generalization by Bradford Lynch.
Double Descent by Junhui Cai, adapted from slides by Misha Belkin and Ryan Tibshirani.
Lecture 11: Deep Learning in Practice by Dewang Sultania, adapting some slides from CIS 700. Colab notebook.
Lecture 12: Hindsight Experience Replay by Achin Jain.
Lecture 13: Deep Learning and Chemistry by Chris Koch.
Text summarization by Jamaal Hay.
Lecture 14: Deep Learning and Langevin Dynamics, with lecture notes, by Kan Chen.
Deep Learning in Asset Pricing by Wu Zhu.
Topics
- Potential topics: Uncertainty quantification, Adversarial Examples, Symmetry, Theory and Empirics, Interpretation, Fairness, ...
- Potential papers:
Uncertainty quantification
Predictive inference with the jackknife+. Slides. (A code sketch follows this list.)
High-Quality Prediction Intervals for Deep Learning: A Distribution-Free, Ensembled Approach
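As a concrete illustration of the jackknife+ paper above, here is a minimal sketch of jackknife+ prediction intervals. The linear base model, the variable names, and the quantile shortcut are illustrative assumptions, not the paper's code.

```python
# Minimal sketch of jackknife+ prediction intervals (Barber et al., 2019).
# The linear base model is an illustrative assumption; any regressor works.
import numpy as np
from sklearn.linear_model import LinearRegression

def jackknife_plus_interval(X, y, x_new, alpha=0.1):
    """Return an approximate (1 - alpha) jackknife+ interval at x_new."""
    n = len(y)
    lower, upper = np.empty(n), np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i                          # leave out point i
        model = LinearRegression().fit(X[keep], y[keep])
        resid = abs(y[i] - model.predict(X[i:i + 1])[0])  # leave-one-out residual
        pred = model.predict(x_new.reshape(1, -1))[0]     # leave-one-out prediction
        lower[i], upper[i] = pred - resid, pred + resid
    # The paper uses order statistics of these endpoints; np.quantile is
    # a close shortcut for large n.
    return np.quantile(lower, alpha), np.quantile(upper, 1 - alpha)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=100)
print(jackknife_plus_interval(X, y, rng.normal(size=3)))
```

The resulting interval has a distribution-free coverage guarantee of at least 1 - 2*alpha under exchangeability, which is the paper's main result.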
Adversarial Examples
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
Certified Adversarial Robustness via Randomized Smoothing (a code sketch follows this list)
On Evaluating Adversarial Robustness
VC Classes are Adversarially Robustly Learnable, but Only Improperly
Adversarial Examples Are Not Bugs, They Are Features
See section 6.1 of my lecture notes for a collection of materials.
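To make the randomized smoothing paper above concrete, here is a minimal sketch of the smoothed classifier's prediction step. The base classifier, noise level, and sample count are placeholder assumptions; the paper additionally certifies an l2 robustness radius via a binomial confidence bound.

```python
# Minimal sketch of prediction with randomized smoothing (Cohen et al., 2019).
# `base_classifier` is a hypothetical stand-in for any model returning a label.
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000, n_classes=10):
    """Majority vote of the base classifier under Gaussian input noise."""
    rng = np.random.default_rng(0)
    counts = np.zeros(n_classes, dtype=int)
    for _ in range(n_samples):
        noisy = x + sigma * rng.normal(size=x.shape)  # perturb the input
        counts[base_classifier(noisy)] += 1
    return int(counts.argmax())  # g(x) = argmax_c P(f(x + noise) = c)

# Toy base classifier (a stand-in): the sign of the first coordinate.
f = lambda x: int(x[0] > 0)
print(smoothed_predict(f, np.array([0.1, -0.3]), n_classes=2))
```

The certified radius in the paper scales as (sigma / 2) * (Phi^{-1}(p_A) - Phi^{-1}(p_B)), where p_A and p_B are the probabilities of the top two classes under the noise.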
Symmetry
Learning SO(3) Equivariant Representations with Spherical CNNs
Invariance reduces Variance: Understanding Data Augmentation in Deep Learning and Beyond
Theory and empirical wonders
Understanding deep learning requires rethinking generalization
Spectrally-normalized margin bounds for neural networks
Neural Tangent Kernel: Convergence and Generalization in Neural Networks. See also GNTK, its graph-network analogue. (A code sketch follows this list.)
Gradient Descent Provably Optimizes Over-parameterized Neural Networks
Mean-field theory of two-layers neural networks. YouTube talk.
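For the NTK paper above, here is a minimal sketch of the empirical tangent kernel of a toy one-hidden-layer ReLU network; the architecture, the 1/sqrt(m) scaling, and the names are illustrative assumptions.

```python
# Minimal sketch of the empirical neural tangent kernel for the toy network
# f(x) = a^T relu(W x) / sqrt(m); architecture and scaling are assumptions.
import numpy as np

def empirical_ntk(X, W, a):
    """K[i, j] = <grad_theta f(x_i), grad_theta f(x_j)>, theta = (W, a)."""
    m = W.shape[0]
    grads = []
    for x in X:
        h = W @ x                                      # pre-activations
        g_a = np.maximum(h, 0.0) / np.sqrt(m)          # gradient w.r.t. a
        g_W = np.outer(a * (h > 0), x) / np.sqrt(m)    # gradient w.r.t. W
        grads.append(np.concatenate([g_a, g_W.ravel()]))
    G = np.stack(grads)                                # one gradient per input
    return G @ G.T                                     # NTK Gram matrix

rng = np.random.default_rng(0)
m, d = 512, 5
W, a = rng.normal(size=(m, d)), rng.normal(size=m)
print(empirical_ntk(rng.normal(size=(3, d)), W, a))    # 3 x 3 kernel
```

As the width m grows, this Gram matrix concentrates around a deterministic kernel that stays nearly constant during training, which is the paper's central object.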
Interpretation
Sanity checks for saliency maps
Scalability and Federated Learning
Communication-Efficient Learning of Deep Networks from Decentralized Data (the FedAvg paper; a code sketch follows this list)
Federated Learning: Challenges, Methods, and Future Directions
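As a sketch of the federated averaging (FedAvg) algorithm from the first paper above, here is one server-side aggregation round; `local_update` is a hypothetical stand-in for a few epochs of local training on a client's data.

```python
# Minimal sketch of one round of federated averaging (FedAvg).
# `local_update` is a hypothetical stand-in for client-side training.
import numpy as np

def fedavg_round(global_weights, client_datasets, local_update):
    """Average client models, weighted by local dataset size."""
    n_total = sum(len(d) for d in client_datasets)
    averaged = np.zeros_like(global_weights)
    for data in client_datasets:
        local_w = local_update(global_weights.copy(), data)  # train locally
        averaged += (len(data) / n_total) * local_w          # weighted average
    return averaged                                          # next global model

# Tiny usage with a placeholder "training" step:
clients = [np.ones((10, 2)), np.ones((20, 2)), np.ones((5, 2))]
dummy_update = lambda w, data: w + 0.1 * len(data)
print(fedavg_round(np.zeros(3), clients, dummy_update))
```

Clients share only model weights, never raw data, and run several local steps per communication round, which is where the communication savings come from.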
Fairness
[TBA]
Applications
Climate, energy, healthcare...
Other resources
Course on Coursera. A good way to learn the basics.
Stanford classes: CS231N (Computer vision). CS224N (NLP). Cheat sheet.
Conferences: NeurIPS, ICML, ICLR
Convenient ways to run code online: https://colab.research.google.com/notebooks/welcome.ipynb, https://www.kaggle.com/kernels
Keras is a user-friendly library for DL. It has an R interface; see this book.
Foundations of Deep Learning program at the Simons Institute for the Theory of Computing. Workshops: 1, 2, 3. Reading groups and papers.
IAS Special Year on Optimization, Statistics, and Theoretical Machine Learning
Materials from previous editions
Lecture notes
The materials draw inspiration from many sources, including David Donoho's course Stat 385 at Stanford, Andrew Ng's Deep Learning course on deeplearning.ai, CS231n at Stanford, David Silver's RL course, Tony Cai's reading group at Wharton. They may contain factual and typographical errors. Thanks to several people who have provided parts of the notes, including Zongyu Dai, Georgios Kissas, Jane Lee, Barry Plunkett, Matteo Sordello, Yibo Yang, Bo Zhang, Yi Zhang, Carolina Zheng. The images included are subject to copyright by their rightful owners, and are included here for educational purposes.
- Lecture notes (~170 pages, ~30 MB).
Compared to other sources, these lecture notes are aimed at people with a basic knowledge of probability, statistics, and machine learning. They start with basic concepts from deep learning, and aim to cover selected important topics up to the cutting edge of research.
The entire LaTeX source is included to encourage reuse (subject to appropriate licenses).
Spring 2019
Topics: sequential decision-making (from bandits to deep reinforcement learning), distributed learning, AutoML, Visual Question Answering.
Presentations
- Lecture 1: Bandits. Presented by Edgar Dobriban. (A minimal bandit code sketch follows this list.)
- Lecture 2: Contextual Bandits. Presented by Bo Zhang.
- Lecture 2b: Contextual Bandits for Mobile Health. Presented by Halley Young.
- Lectures 4-9: Reinforcement learning, following David Silver's course.
- Lecture 11: Hierarchical Reinforcement Learning. Presented by Barry Plunkett.
- Lecture 12: AutoML. Presented by Yi Zhang.
- Lecture 13: Visual Question Answering. Presented by Reno Kriz.
- Lecture 13b: Visual Question Answering: Part 2. Presented by Soham Parikh.
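As referenced in Lecture 1 above, here is a minimal bandit sketch: the classical UCB1 rule (Auer et al., 2002). The Bernoulli arms and parameter values are illustrative assumptions, not the lecture's code.

```python
# Minimal sketch of UCB1 for multi-armed bandits (Auer et al., 2002).
# Bernoulli arms are an illustrative assumption.
import numpy as np

def ucb1(arm_means, horizon, seed=0):
    """Play each arm once, then pick the arm with the highest upper confidence bound."""
    rng = np.random.default_rng(seed)
    k = len(arm_means)
    counts, sums = np.zeros(k), np.zeros(k)
    for t in range(horizon):
        if t < k:
            arm = t                                           # initialization
        else:
            ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
            arm = int(np.argmax(ucb))                         # optimism rule
        reward = float(rng.random() < arm_means[arm])         # Bernoulli reward
        counts[arm] += 1
        sums[arm] += reward
    return sums.sum()                                         # total reward

print(ucb1(arm_means=[0.3, 0.5, 0.7], horizon=2000))
```

The exploration bonus shrinks as an arm is sampled more often, which is what gives UCB1 its logarithmic regret.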
Fall 2018
Topics: basics (deep feedforward networks, training, CNNs, RNNs); Generative Adversarial Networks, Learning Theory, Sequence Learning, Neuroscience, etc.
Presentations
- Lectures 1-3: Lectures based on Edgar Dobriban's notes.
- Lecture 4: Generative Adversarial Networks. Presented by Zilu Zhou.
- Lecture 5: Theory for Generative Adversarial Networks. Presented by Hadi Elzayn.
- Lecture 6: Learning Theory for Neural Networks. Presented by Jacob Seidman.
- Lecture 7: Gradient Based Optimization. Presented by Matteo Sordello.
- Lecture 8: Sequence Learning. Presented by Carolina Zheng.
- Lecture 9: Robotics. Presented by Ty Nguyen.
- Lecture 10: Autoencoders, Physics-Informed Neural Networks. Presented by Yibo Yang and Georgios Kissas.
- Lecture 11: Neuroscience Inspired Deep Learning. Presented by Huy Le.
- Lecture 12: Approximation and Estimation for Deep Learning Networks. Guest lecture by Jason Klusowski.
- Lecture 13: Deep Learning in Marketing Research. Presented by Mingyung Kim.
Materials for future editions
More recent developments and additions.
Papers
Applications and Methods
- Few-Shot Adversarial Learning of Realistic Neural Talking Head Models
- Language Models are Few-Shot Learners, from OpenAI, introducing GPT-3. See the YouTube video explanation by Yannic Kilcher. A point of debate: is it more than just elaborate pattern-matching (a lookup table)?
- A Simple Framework for Contrastive Learning of Visual Representations, introducing SimCLR, a prominent method for self-supervised learning
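For the SimCLR paper above, here is a minimal sketch of its NT-Xent contrastive loss. The batch layout (rows 2i and 2i+1 are embeddings of two augmented views of the same image) and the names are illustrative assumptions.

```python
# Minimal sketch of the NT-Xent loss used by SimCLR (Chen et al., 2020).
# Rows 2i and 2i+1 of z are assumed to be two views of the same image.
import numpy as np

def nt_xent_loss(z, tau=0.5):
    """Mean contrastive loss over a batch of 2N paired embeddings."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / tau                                # scaled similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    pos = np.arange(len(z)) ^ 1                        # partner of row i
    log_prob = sim[np.arange(len(z)), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()

rng = np.random.default_rng(0)
print(nt_xent_loss(rng.normal(size=(8, 16))))          # 4 positive pairs
```

Each embedding is pulled toward its augmented partner and pushed away from every other example in the batch; the temperature tau controls how sharp that contrast is.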
Theory
- Machine Learning from a Continuous Viewpoint; Towards a Mathematical Understanding of Neural Network-Based Machine Learning: what we know and what we don't
- The large learning rate phase of deep learning: the catapult mechanism
- Symmetry & critical points for a model shallow neural network; also: Analytic Characterization of the Hessian in Shallow ReLU Models: A Tale of Symmetry
- Prevalence of Neural Collapse during the terminal phase of deep learning training
- Traces of Class/Cross-Class Structure Pervade Deep Learning Spectra
Books and other educational materials
- Dive into Deep Learning: an interactive deep learning book with code, math, and discussions, based on the NumPy interface; see also the Reddit post
- Deep Learning with PyTorch, a course by Alfredo Canziani at NYU, co-taught with Yann LeCun. Has slides, videos, code, etc.
- EPFL EE-559 – Deep Learning, by Francois Fleuret
- Deep Learning courses by Marc Lelarge, taught at Ecole Polytechnique, ENS, etc.
- AtHomeWithAI, a curated resource list by DeepMind
- Machine Learning Summer School 2020, Tuebingen; see also materials from previous years at mlss.cc
- Eastern European Machine Learning Summer School (EEML) 2020, with recorded lectures; see, e.g., the lecture by Misha Belkin on the theory of deep learning.
Implementation and reproducibility
- AI Research, Replicability and Incentives, by Denny Britz