Building Blocks for AI Systems

This is a (biased) view of great work studying the building blocks of efficient and performant foundation models. This GitHub repo was originally put together to aggregate materials for a NeurIPS keynote, but we're also hoping to highlight great work across AI Systems. If you think we're missing something, please open an issue or PR!

Slides from Chris Ré's NeurIPS Keynote: https://cs.stanford.edu/~chrismre/papers/NeurIPS23_Chris_Re_Keynote_DELIVERED.pptx

Courses

Courses are a great resource for getting started in this space, and it's great that so many have open materials! Here's a partial list -- it's biased toward Stanford courses, so please reach out if you know of other helpful resources!

If you just want to follow along with the major pieces from the talk, check out these blog posts:

An older set of resources on Data-Centric AI.

The rest of this README is split up into resources by topic.

Table of contents:

Foundation Models for Systems

Foundation models are changing the way we build systems for classical problems like data cleaning. See the SIGMOD keynote on this topic and Ihab Ilyas and Xu Chen's textbook on the subject, Data Cleaning. The ML for Systems workshops and community are also great.
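To make this concrete, here's a minimal sketch of the idea (in the spirit of work like "Can Foundation Models Wrangle Your Data?"): cast a classical task such as entity matching as a prompt to a language model. The `call_llm` function is a hypothetical stand-in for whatever model API you use.

```python
# A minimal sketch of using a foundation model for a classical data task
# (entity matching). `call_llm` is a hypothetical placeholder, not a real API.
def entity_matching_prompt(record_a: dict, record_b: dict) -> str:
    return (
        "Do these two product records refer to the same entity? Answer Yes or No.\n"
        f"Record A: {record_a}\n"
        f"Record B: {record_b}\n"
        "Answer:"
    )

record_a = {"title": "iPhone 14 Pro 128GB", "brand": "Apple"}
record_b = {"title": "Apple iPhone14 Pro (128 GB)", "brand": None}
prompt = entity_matching_prompt(record_a, record_b)
# answer = call_llm(prompt)  # hypothetical model call; a good model answers "Yes"
print(prompt)
```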

Blog Posts

Papers

Hardware-Aware Algorithms

Hardware-aware algorithms for today's ML primitives. Canonical resources:

Jim Gray's Turing Award Profile.
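To illustrate the flavor of these algorithms, here's a minimal sketch (not taken from any particular paper) of the core idea behind IO-aware kernels like FlashAttention: compute in tiles small enough to live in fast memory, rather than materializing large intermediates in slow memory.

```python
# A minimal sketch of memory-hierarchy-aware tiling: compute A @ B one small
# output tile at a time so the working set fits in fast on-chip memory.
import torch

def blocked_matmul(A: torch.Tensor, B: torch.Tensor, block: int = 128) -> torch.Tensor:
    """Compute A @ B one (block x block) output tile at a time."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = torch.zeros(m, n, dtype=A.dtype)
    for i in range(0, m, block):
        for j in range(0, n, block):
            acc = torch.zeros(min(block, m - i), min(block, n - j), dtype=A.dtype)
            for p in range(0, k, block):
                # Each pair of input tiles is small enough to stay in fast memory.
                acc += A[i:i+block, p:p+block] @ B[p:p+block, j:j+block]
            C[i:i+block, j:j+block] = acc
    return C

# Sanity check against the library kernel.
A, B = torch.randn(256, 512), torch.randn(512, 384)
assert torch.allclose(blocked_matmul(A, B), A @ B, atol=1e-4)
```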

Blog Posts

Papers

Can We Replace Attention?

Alternatives to attention that scale sub-quadratically in sequence length. Canonical text on signal processing: Discrete-Time Signal Processing. High-level overview of this space: From Deep to Long Learning.
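As a concrete example of why these alternatives can be sub-quadratic, here's a minimal sketch of an FFT-based long convolution, the workhorse behind many state-space and long-convolution layers: a length-N causal convolution costs O(N log N) via the FFT, versus O(N^2) for full attention. The per-channel filter here is a plain random kernel purely for illustration; real layers parameterize it carefully.

```python
# A minimal sketch of a causal long convolution computed with the FFT,
# the O(N log N) sequence mixer used by many attention alternatives.
import torch

def fft_causal_conv(u: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
    """Causal convolution of input u (batch, seqlen, dim) with kernel k (seqlen, dim)."""
    seqlen = u.shape[1]
    fft_len = 2 * seqlen  # zero-pad to avoid circular wrap-around
    u_f = torch.fft.rfft(u, n=fft_len, dim=1)
    k_f = torch.fft.rfft(k, n=fft_len, dim=0)
    y = torch.fft.irfft(u_f * k_f.unsqueeze(0), n=fft_len, dim=1)
    return y[:, :seqlen]  # keep only the causal part of the linear convolution

u = torch.randn(4, 1024, 64)      # (batch, sequence length, model dim)
k = torch.randn(1024, 64) * 0.01  # one illustrative filter per channel
y = fft_causal_conv(u, k)
print(y.shape)  # torch.Size([4, 1024, 64])
```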

Blog Posts

Papers

Attention Approximations

There's also a rich literature on approximating attention (sparse, low-rank, etc.), and it's just as exciting! Here's a partial list of great ideas in this area:
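To make the low-rank flavor concrete, here's a minimal sketch of kernelized (linear) attention in the spirit of linear transformers / Performer-style methods, not any specific paper: replacing softmax(QK^T)V with phi(Q)(phi(K)^T V) avoids forming the N x N attention matrix, dropping the cost from O(N^2 d) to O(N d^2).

```python
# A minimal sketch of low-rank / kernelized attention. Non-causal variant,
# with a simple elu+1 feature map, purely for illustration.
import torch

def linear_attention(q, k, v, eps=1e-6):
    """q, k, v: (batch, seqlen, dim)."""
    phi_q = torch.nn.functional.elu(q) + 1  # positive feature map
    phi_k = torch.nn.functional.elu(k) + 1
    kv = torch.einsum("bnd,bne->bde", phi_k, v)  # (batch, dim, dim), no N x N matrix
    z = 1.0 / (torch.einsum("bnd,bd->bn", phi_q, phi_k.sum(dim=1)) + eps)
    return torch.einsum("bnd,bde,bn->bne", phi_q, kv, z)

q, k, v = (torch.randn(2, 4096, 64) for _ in range(3))
out = linear_attention(q, k, v)
print(out.shape)  # torch.Size([2, 4096, 64])
```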

Synthetics for Language Modeling

In research on efficient language models, synthetic tasks (e.g. associative recall) are crucial for understanding and debugging issues before scaling up to expensive pretraining runs.

Code

We've created a simple GitHub playground for understanding and testing language model architectures on synthetic tasks: HazyResearch/zoology.
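As a taste of what such a synthetic looks like, here's a minimal, self-contained sketch of an associative recall task (zoology provides a much fuller, configurable version): the model sees key-value pairs followed by a query key and must produce the associated value.

```python
# A minimal sketch of associative recall data generation, for illustration only.
import torch

def make_associative_recall_batch(batch_size=32, num_pairs=8, vocab_size=64, seed=0):
    g = torch.Generator().manual_seed(seed)
    keys = torch.randint(0, vocab_size // 2, (batch_size, num_pairs), generator=g)
    values = torch.randint(vocab_size // 2, vocab_size, (batch_size, num_pairs), generator=g)
    # Interleave key/value tokens, then append one previously seen key as the query.
    # (A fuller version would sample keys without replacement to avoid ambiguity.)
    pairs = torch.stack([keys, values], dim=-1).reshape(batch_size, -1)
    query_idx = torch.randint(0, num_pairs, (batch_size,), generator=g)
    query = keys[torch.arange(batch_size), query_idx]
    inputs = torch.cat([pairs, query.unsqueeze(1)], dim=1)  # (batch, 2*num_pairs + 1)
    targets = values[torch.arange(batch_size), query_idx]   # (batch,)
    return inputs, targets

x, y = make_associative_recall_batch()
print(x.shape, y.shape)  # torch.Size([32, 17]) torch.Size([32])
```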

Blog Posts

Papers

Truly Sub-Quadratic Models

ML models are also quadratic along another dimension: model width. Can we develop models that grow sub-quadratically with model width?

The canonical textbook for a lot of this stuff: Structured Matrices and Polynomials.
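One way to see the scaling argument: replace a dense d x d projection (O(d^2) parameters and FLOPs) with a structured matrix. The toy block-diagonal layer below, a much cruder cousin of butterfly / Monarch-style factorizations, uses O(d^2 / b) parameters for b blocks; it's only meant to illustrate the counting, not to be a drop-in layer.

```python
# A minimal sketch of a structured (block-diagonal) replacement for a dense layer.
import torch
import torch.nn as nn

class BlockDiagonalLinear(nn.Module):
    def __init__(self, dim: int, num_blocks: int):
        super().__init__()
        assert dim % num_blocks == 0
        self.num_blocks = num_blocks
        self.block_dim = dim // num_blocks
        # num_blocks * block_dim^2 = dim^2 / num_blocks parameters.
        self.weight = nn.Parameter(torch.randn(num_blocks, self.block_dim, self.block_dim) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Split features into blocks and apply one small matmul per block.
        *prefix, dim = x.shape
        x = x.reshape(-1, self.num_blocks, self.block_dim)
        y = torch.einsum("nbd,bde->nbe", x, self.weight)
        return y.reshape(*prefix, dim)

dense = nn.Linear(4096, 4096, bias=False)
structured = BlockDiagonalLinear(4096, num_blocks=16)
print(sum(p.numel() for p in dense.parameters()))       # 16777216
print(sum(p.numel() for p in structured.parameters()))  # 1048576
```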

Blog Posts

Papers

Quantization, Pruning, and Distillation

Quantization, pruning, and distillation are great techniques to improve efficiency. Here's a short overview of some of the ideas in this area:
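As a concrete example, here's a minimal sketch of symmetric per-channel int8 weight quantization; production schemes (GPTQ, AWQ, SmoothQuant, and friends) are considerably more careful about outliers and activation statistics.

```python
# A minimal sketch of symmetric per-row int8 weight quantization.
import torch

def quantize_int8(weight: torch.Tensor):
    """weight: (out_features, in_features). Returns int8 weights + per-row scales."""
    scale = weight.abs().amax(dim=1, keepdim=True) / 127.0
    q = torch.clamp(torch.round(weight / scale), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.to(torch.float32) * scale

w = torch.randn(4096, 4096)
q, scale = quantize_int8(w)
error = (dequantize(q, scale) - w).abs().max()
print(q.dtype, error)  # torch.int8 and a small reconstruction error
```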

Systems for Inference

Inference is an increasingly important cost for LLMs, since a model will be served many more times than it is trained. Here are some papers and posts on systems for inference; there's a lot to do!
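Much of this work builds around the KV cache: keys and values for past tokens are cached so each decode step attends over the cache instead of recomputing the whole prefix. Here's a minimal sketch of that loop; systems like vLLM's PagedAttention manage this memory far more carefully (paging, sharing, preemption).

```python
# A minimal sketch of KV-cached autoregressive decoding (single head, no projections).
import torch

def decode_step(q, k_new, v_new, k_cache, v_cache):
    """One decode step. q, k_new, v_new: (batch, 1, dim)."""
    k_cache = torch.cat([k_cache, k_new], dim=1)  # cache grows by one token per step
    v_cache = torch.cat([v_cache, v_new], dim=1)
    attn = torch.softmax(q @ k_cache.transpose(1, 2) / q.shape[-1] ** 0.5, dim=-1)
    out = attn @ v_cache                          # (batch, 1, dim)
    return out, k_cache, v_cache

batch, dim = 2, 64
k_cache = torch.empty(batch, 0, dim)
v_cache = torch.empty(batch, 0, dim)
for _ in range(16):  # 16 decode steps
    q = torch.randn(batch, 1, dim)
    k_new, v_new = torch.randn(batch, 1, dim), torch.randn(batch, 1, dim)
    out, k_cache, v_cache = decode_step(q, k_new, v_new, k_cache, v_cache)
print(k_cache.shape)  # torch.Size([2, 16, 64])
```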

High-Throughput

Foundation models will increasingly be used to serve back-of-house tasks like document processing (not just chat interfaces). These workloads will require different systems than our current inference solutions. This work is still very new, but hopefully there's a lot more to come soon!

New Data Types

Most ML models focus on text or images, but there's a wide variety of other modalities that present unique challenges (e.g., long context). New modalities will drive advances in model architectures and systems. A few are compiled below: