18.337J/6.338J: Parallel Computing and Scientific Machine Learning (Spring 2023)

Professor Alan Edelman (and Philip the Corgi)

MW 3:00 to 4:30 @ Room 2-190

TA and Office hours: (To be confirmed)

Piazza Link

Canvas will be used only for homework and project (+ proposal) submissions and for lecture videos.

Classes are recorded and will be uploaded to Canvas. Another great resource is Chris Rackauckas' videos of the Spring 2021 class; see the SciMLBook.

Julia:

A basic overview of the Julia programming environment for numerical computations that we will use in this course for simple computational exploration. This (Zoom-based) tutorial will cover what Julia is and the basics of interaction, scalar/vector/matrix arithmetic, and plotting. We'll be using it as just a "fancy calculator" at first, and no "real programming" will be required.

If possible, try to install Julia on your laptop beforehand using the instructions at the above link. Failing that, you can run Julia in the cloud (see instructions above).
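If you want to check your installation, here is a minimal sketch of that "fancy calculator" style (the plotting lines assume the Plots.jl package has been added, so they are left commented out):

```julia
# A taste of "fancy calculator" Julia: scalars, vectors, matrices.
using LinearAlgebra

x = 2.0^10                 # scalar arithmetic
v = [1.0, 2.0, 3.0]        # a vector
A = [1.0 2.0; 3.0 4.0]     # a 2x2 matrix

norm(v)                    # Euclidean length of v
A * v[1:2]                 # matrix-vector product
A \ [1.0, 0.0]             # solve A*y = b for y

# Plotting (requires the Plots.jl package):
# using Plots
# plot(sin, 0, 2π)
```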

Announcement:

There will be several homeworks, followed by the final project. Everyone needs to present their work and submit a project report.

1-page Final Project proposal due: March 24

Final Project presentations : April 26 to May 15

Final Project reports due: May 15

Grading:

50% for problem sets, 10% for the final project proposal, and 40% for the final project. Problem sets and final projects will be submitted electronically.

HW

| # | Notebook | Due |
|---|----------|-----|
| 1 | HW1 (for the matrix calculus problems, do not use indices) | Thursday, February 16, 2023 |
| 2 | HW2 | Wednesday, March 1, 2023 |
| 3 | HW3 | Wednesday, March 15, 2023 |
| 4 | HW4 | Wednesday, April 19, 2023 |

Lecture Schedule (tentative)

| # | Day | Date | Topic | SciML lecture | Materials |
|---|-----|------|-------|---------------|-----------|
| 1 | M | 2/6 | Intro to Julia. My Two Favorite Notebooks. | | [Julia is fast], [AutoDiff], [autodiff video] |
| 2 | W | 2/8 | Matrix Calculus I and The Parallel Dream | | See [IAP 2023 Class on Matrix Calculus], [handwritten notes], [The Parallel Dream] |
| 3 | M | 2/13 | Matrix Calculus II | | [handwritten notes], [Corgi in the Washing Machine], [2x2 Matrix Jacobians] |
| 4 | W | 2/15 | Serial Performance | 2 | [handwritten notes], [Serial Performance .jl file], [Loop Fusion Blog] |
| 5 | T | 2/21 | Intro to PINNs and Automatic Differentiation I: Forward Mode AD | 3 and 8 | ODE and PINNs, intro to PINN handwritten notes, autodiff handwritten notes |
| 6 | W | 2/22 | Automatic Differentiation II: Reverse Mode AD | 10 | pinn.jl, reverse mode AD demo, handwritten notes |
| 7 | M | 2/27 | Dynamical Systems & Serial Performance on Iterations | 4 | Lorenz many ways, Dynamical Systems, handwritten notes |
| 8 | W | 3/1 | HPC & Threading | 5 and 6 | pi.jl, threads.jl, HPC Slides |
| 9 | M | 3/6 | Parallelism | | Parallelism in Julia Slides, reduce/prefix notebook |
| 10 | W | 3/8 | Prefix (and more) | | ppt slides, reduce/prefix notebook, ThreadedScans.jl, CUDA blog |
| 11 | M | 3/13 | Adjoint Method Example | 10 | Handwritten Notes |
| 12 | W | 3/15 | Guest Lecture - Chris Rackauckas | | |
| 13 | M | 3/20 | Vectors, Operators and Adjoints | | Handwritten Notes |
| 14 | W | 3/22 | Adjoints of Linear, Nonlinear, ODE | 11 | Handwritten Notes, 18.335 adjoint notes (Johnson) |
| | | | Spring Break | | |
| 15 | M | 4/3 | Guest Lecture, Billy Moses | | Enzyme AD |
| 16 | W | 4/5 | Guest Lecture, Keaton Burns | | Dedalus PDE Solver |
| 17 | M | 4/10 | Adjoints of ODEs | | Handwritten Notes |
| 18 | W | 4/12 | Partitioning | | |
| | M | 4/17 | Patriots' Day | | |
| 19 | W | 4/19 | Fast Multipole and Parallel Prefix | | Unfinished Draft |
| 20 | M | 4/24 | | | |
| 21 | W | 4/26 | Project Presentation I | | |
| 22 | M | 5/1 | Project Presentation II | | |
| 23 | W | 5/3 | Project Presentation III | | |
| 24 | M | 5/8 | Project Presentation IV | | |
| 25 | W | 5/10 | Project Presentation V | | |
| | M | 5/15 | Class Cancelled | | |


Lecture Summaries and Handouts

Class Videos

Lecture 1: Syllabus, Introduction to Performance, Introduction to Automatic Differentiation

Setting the stage for this course, which will involve high-performance computing, mathematics, and scientific machine learning, we looked at two introductory notebooks. The first, [Julia is fast](https://github.com/mitmath/18337/blob/master/lecture1/Julia%20is%20fast.ipynb), primarily reveals just how much performance languages like Python can leave on the table; many people never compare languages, so they are unlikely to be aware of the gap. The second, [AutoDiff](https://github.com/mitmath/18337/blob/master/lecture1/AutoDiff.ipynb), reveals the "magic" of forward-mode automatic differentiation, showing how a compiler can effectively "rewrite" a program through software overloading while still maintaining performance. This is a whole new way to see calculus: not the way you learned it in a first-year class, and not finite differences either.
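To make the overloading idea concrete, here is a minimal dual-number sketch in the spirit of that notebook (a simplified illustration, not the notebook's exact code):

```julia
# Forward-mode AD via operator overloading: a Dual carries a value and its
# derivative, and overloaded arithmetic propagates both by the chain rule.
struct Dual <: Number
    val::Float64   # f(x)
    der::Float64   # f'(x)
end

import Base: +, *, sin
+(a::Dual, b::Dual) = Dual(a.val + b.val, a.der + b.der)
*(a::Dual, b::Dual) = Dual(a.val * b.val, a.der * b.val + a.val * b.der)  # product rule
sin(a::Dual) = Dual(sin(a.val), cos(a.val) * a.der)                        # chain rule

f(x) = x * sin(x) + x       # ordinary Julia code, written for generic "numbers"
d = f(Dual(1.0, 1.0))       # seed der = 1 to differentiate with respect to x
# d.val == f(1.0) and d.der == sin(1.0) + cos(1.0) + 1.0, the exact derivative
```

Seeding `der = 1.0` asks "how sensitive is the output to this input?", and every overloaded operation carries that sensitivity along exactly, with no finite-difference truncation error.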

Lecture 2: The Parallel Dream and Intro to Matrix Calculus

We worked through an example, [The Parallel Dream](https://github.com/mitmath/18337/blob/master/lecture1/the_dream.ipynb).
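For a sense of what the "dream" looks like in code, here is a hedged sketch of loop threading (my own illustration, not the notebook's contents):

```julia
# The parallel dream: take a serial loop and hand it to all available threads.
using Base.Threads

function serial_sum(n)
    s = 0.0
    for i in 1:n
        s += sin(i)
    end
    return s
end

function threaded_sum(n)
    partial = zeros(nthreads())        # one accumulator per thread: no data races
    @threads :static for i in 1:n
        partial[threadid()] += sin(i)  # :static pins iterations to threads
    end
    return sum(partial)
end

# Started as `julia --threads=auto`, one hopes threaded_sum(10^8) beats
# serial_sum(10^8) by roughly the thread count.
```

Whether the threaded version actually delivers an `nthreads()`-fold speedup depends on memory traffic, scheduling overhead, and the loop body itself, questions this course takes up in detail.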

Lecture and Notes

Homeworks

HW1 will be due Thursday, Feb 16. This is really just a getting-started homework.

HW1

Final Project

For the second half of the class, students will work on the final project. A one-page final project proposal must be submitted by Friday, March 24, through Canvas.

The last three weeks of class (tentative) will be student presentations.

Possible Project Topics

Here's a list of current projects of interest to the Julia Lab.

One possibility is to review an interesting algorithm not covered in the course and develop a high-performance implementation. Some examples include:

Another possibility is to work on state-of-the-art performance engineering: implementing a new auto-parallelization or performance enhancement. For these types of projects, implementing an application for benchmarking is not required; one can instead benchmark the effects on already existing code to find cases where the change is beneficial (or leads to performance regressions), as in the sketch below. Possible examples are:
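For those benchmarking comparisons, a minimal sketch using BenchmarkTools.jl is below; `baseline` and `candidate` are hypothetical stand-ins for an existing code path and a proposed enhancement:

```julia
# Benchmark an existing code path against a proposed enhancement.
# Assumes the BenchmarkTools.jl package is installed.
using BenchmarkTools

baseline(x)  = sum(abs2, x)           # the code as it exists today
candidate(x) = sum(xi -> xi * xi, x)  # the "optimized" variant under test

x = rand(10^6)
@btime baseline($x)    # interpolate with $ so setup isn't timed
@btime candidate($x)
# Compare across representative inputs and report regressions, not just wins.
```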

Additionally, Scientific Machine Learning is a wide-open field with lots of low-hanging fruit. Instead of a review, a suitable research project can be chosen for the final project. Possibilities include: