Home

Awesome

Modded-NanoGPT

This is a modified variant of the PyTorch GPT-2 trainer from Andrej Karpathy's llm.c repo, which attains the same final validation loss in only:

It has been hyperoptimized by the community, and has become a good baseline from which to perform research on the architecture/optimizer/etc.

It uses the following techniques:

The training has attained this speed due to the contributions of myself, @Grad62304977, @jxbz, @bozavlado, @brendanh0gan, @KoszarskyB, & @fernbear.bsky.social.


Running the current record

To install and execute the training, run the following commands. They should all complete within 20 minutes on an 8xH100 with a decent internet connection. If the torch install command updates your CUDA installation, you may need to reboot.

git clone https://github.com/KellerJordan/modded-nanogpt.git && cd modded-nanogpt
pip install -r requirements.txt
pip install --pre torch==2.6.0.dev20241203+cu124 --index-url https://download.pytorch.org/whl/nightly/cu124 --upgrade # install torch 2.6.0
python data/cached_fineweb10B.py 10 # downloads only the first 1.0B training tokens to save time
./run.sh

The result will be a transformer with 124M active parameters trained for 1480 steps on 0.75B tokens of FineWeb [1], achieving ~3.278 mean validation loss (with up to 0.005 inter-run stddev). For comparison, the default llm.c PyTorch trainer yields >3.28 validation loss after training for 19560 steps on 10B tokens.

Note: torch.compile will take a long time on the first run.

Running it on fewer GPUs or with less memory

Running with Docker

For cases where CUDA or NCCL versions aren't compatible with your current system setup, Docker can be a helpful alternative. This approach standardizes the CUDA, NCCL, cuDNN, and Python versions, reducing dependency issues and simplifying setup. Note: an NVIDIA driver must already be installed on the system (this path is useful if only the NVIDIA driver and Docker are available).

sudo docker build -t modded-nanogpt .
sudo docker run -it --rm --gpus all -v $(pwd):/modded-nanogpt modded-nanogpt python data/cached_fineweb10B.py 18
sudo docker run -it --rm --gpus all -v $(pwd):/modded-nanogpt modded-nanogpt sh run.sh

World record history

The following is the progression of world records for the task of training a model with 124M active parameters to 3.28 validation loss on FineWeb in the minimal amount of time on an 8xH100 machine.

| # | Record time | Description | Date | Log | Contributors |
|---|-------------|-------------|------|-----|--------------|
| 1 | 45 minutes | llm.c baseline | 05/28/24 | log | @karpathy, llm.c contributors |
| 2 | 31.4 minutes | Architectural modernizations & tuned learning rate | 06/06/24 | log | @kellerjordan0 |
| 3 | 24.9 minutes | Introduced the Muon optimizer | 10/04/24 | none | @kellerjordan0, @jxbz |
| 4 | 22.3 minutes | Muon improvements | 10/11/24 | log | @kellerjordan0, @bozavlado |
| 5 | 15.2 minutes | Pad embeddings & architectural improvements | 10/14/24 | log | @Grad62304977, @kellerjordan0 |
| 6 | 13.1 minutes | Distributed the overhead of Muon | 10/18/24 | log | @kellerjordan0 |
| 7 | 12.0 minutes | Upgraded PyTorch from 2.4.1 to 2.5.0 | 10/18/24 | log | @kellerjordan0 |
| 8 | 10.8 minutes | Untied embed and lm_head | 11/03/24 | log | @Grad62304977, @kellerjordan0 |
| 9 | 8.2 minutes | Shortcuts & tweaks | 11/06/24 | log | @Grad62304977, @kellerjordan0 |
| 10 | 7.8 minutes | Bfloat16 activations | 11/08/24 | log | @kellerjordan0 |
| 11 | 7.2 minutes | U-net & 2x lr | 11/10/24 | log | @brendanh0gan |
| 12 | 5.03 minutes | FlexAttention | 11/19/24 | log | @KoszarskyB |
| 13 | 4.66 minutes | Attention window warmup | 11/24/24 | log | @fernbear.bsky.social |
| 14 | 4.41 minutes | Value Embeddings | 12/04/24 | log | @KoszarskyB |
| 15 | 3.95 minutes | U-net pattern for value embeds, assorted code improvements | 12/08/24 | log | @leloykun, @YouJiacheng |
| 16 | 3.80 minutes | MFU tweaks | 12/10/24 | log | @YouJiacheng |

Speedrun rules

All new record attempts:

  1. Must not modify the train or validation data pipelines. (Except to change batch size, seqlen, attention structure etc. I.e., just don't change the underlying tokens.)
  2. Must use ≤ 124M active parameters per token. (So MoE is fine; and extra embedding layers can be added since they only contribute hidden_dim active params.)
  3. Must attain ≤ 3.28 val loss. Unfortunately, due to high inter-run variance, new record attempts must provide enough run logs to attain a statistical significance level of p < 0.01 that their average val loss is lower than 3.28. You can see how to conduct a t-test here; a sketch of such a test is also shown below.
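
For illustration, here is a minimal sketch of such a significance test, assuming SciPy is available (the per-run losses below are hypothetical, not real record data):

from scipy import stats

# hypothetical final val losses from a set of record-attempt runs
val_losses = [3.276, 3.279, 3.274, 3.277, 3.275]
t_stat, p_two_sided = stats.ttest_1samp(val_losses, popmean=3.28)
# convert to a one-sided p-value for the alternative "mean val loss < 3.28"
p_one_sided = p_two_sided / 2 if t_stat < 0 else 1 - p_two_sided / 2
print(f"t = {t_stat:.3f}, one-sided p = {p_one_sided:.4g}")
print("qualifies at p < 0.01" if p_one_sided < 0.01 else "not significant")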

Other than that, go crazy! Anything is fair game.

<!--Note: The original llm.c baseline is intended to be closer to a replication of GPT-2 than to an optimized LLM training. So it's no surprise that there is room to improve; as @karpathy has said, 'llm.c still has a lot of pending optimizations.' In addition, many of the techniques used in these records are completely standard, such as rotary embeddings. The goal of this benchmark/speedrun is simply to find out which techniques actually work, and maybe come up with some new ones.--> <!--The goal of this benchmark is simply to find out all the techniques which actually work, because I'm going crazy reading all these LLM training papers which claim a huge benefit but then use their own idiosyncratic non-competitive benchmark and therefore no one in the community has any idea if it's legit for months.--> <!--[LLM](https://arxiv.org/abs/2305.14342) [training](https://arxiv.org/abs/2402.17764) [papers](https://arxiv.org/abs/2410.01131)--> <!--I mean hello??? We're in a completely empirical field; it is insane to not have a benchmark. Ideally everyone uses the same LLM training benchmark, and then reviewing LLM training papers becomes as simple as checking if they beat the benchmark. It's not like this would be unprecedented, that's how things were in the ImageNet days. The only possible 'benefit' I can think of for any empirical field to abandon benchmarks is that it would make it easier to publish false results. Oh, I guess that's why it happened. Hilarious to think about how, in the often-commented-upon and ongoing collapse of the peer review system, people blame the *reviewers* -- yeah, those guys doing free labor who everyone constantly musters all of their intelligence to lie to, it's *their* fault! My bad, you caught me monologuing.-->

Notes


Notable forks


Q: What is the point of NanoGPT speedrunning?

A: The officially stated goal of NanoGPT speedrunning is as follows: gotta go fast. But for something a little more verbose involving an argument for good benchmarking, here's some kind of manifesto, adorned with a blessing from the master. https://x.com/karpathy/status/1846790537262571739

Q: What makes "NanoGPT speedrunning" not just another idiosyncratic benchmark?

A: Because it is a competitive benchmark. In particular, if you attain a new speed record (using whatever method you want), there is an open invitation for you to post that record (on arXiv or X) and thereby vacuum up all the clout for yourself. I will even help you do it by reposting you as much as I can.

<!--On the contrary, for example, the benchmark used in the [Sophia](https://arxiv.org/abs/2305.14342) paper does *not* have this property. There is no such open invitation for anyone to compete on the benchmark they used. In particular, if, for a random and definitely not weirdly specific example, you happen to find better AdamW hyperparameters for their training setup than the ones they used which significantly close the gap between AdamW and their proposed optimizer, then there is no clear path for you to publish that result in *any* form. You could try posting it on X.com, but then you would be risking being perceived as aggressive/confrontational, which is *not a good look* in this racket. So if you're rational, the result probably just dies with you and no one else learns anything (unless you're in a frontier lab, in which case you can do a nice internal writeup. Boy I'd love to get my hands on those writeups).-->

"Artificial intelligence advances by inventing games and gloating to goad others to play" - Professor Ben Recht

Q: NanoGPT speedrunning is cool and all, but meh it probably won't scale and is just overfitting to val loss

A: This is hard to refute, since "at scale" is an unbounded category (what if the methods stop working only for >100T-parameter models?), which makes the claim impossible to fully disprove. I would also agree that some of the methods used in the speedrun are unlikely to scale. But if the reader cares about 1.5B models, they might be convinced by this result:

Straightforwardly scaling up the speedrun (10/18/24 version) to 1.5B parameters yields a model with GPT-2 (1.5B)-level HellaSwag performance 2.5x more cheaply than @karpathy's baseline ($233 instead of $576):

[reproducible log]


Muon optimizer

Muon is defined as follows:
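
A minimal sketch of the update for a single 2D weight matrix, assuming the usual momentum-then-orthogonalize form (hyperparameter values and names here are illustrative, not the repo's exact code):

import torch

def muon_update(param, grad, momentum_buf, lr=0.02, momentum=0.95, ns_steps=5):
    # accumulate SGD momentum for this 2D weight matrix
    momentum_buf.mul_(momentum).add_(grad)
    # Nesterov-style lookahead on the momentum (assumed; plain momentum also works)
    update = grad.add(momentum_buf, alpha=momentum)
    # orthogonalize the update direction, then take the step
    update = NewtonSchulz5(update, steps=ns_steps)
    param.data.add_(update, alpha=-lr)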

Where NewtonSchulz5 is the following Newton-Schulz iteration [2, 3], which approximately replaces G with U @ V.T where U, S, V = G.svd().

import torch

@torch.compile
def zeroth_power_via_newtonschulz5(G, steps=5, eps=1e-7):
    # Orthogonalize a 2D matrix G, approximately replacing it with U @ V.T from its SVD,
    # using a quintic Newton-Schulz iteration with tuned coefficients.
    assert len(G.shape) == 2
    a, b, c = (3.4445, -4.7750, 2.0315)
    # normalize so the spectral norm is at most ~1, as required for convergence
    X = G.bfloat16() / (G.norm() + eps)
    # work in the wide orientation; transpose back at the end
    if G.size(0) > G.size(1):
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        B = b * A + c * A @ A
        X = a * X + B @ X
    if G.size(0) > G.size(1):
        X = X.T
    return X.to(G.dtype)
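
As a quick sanity check of the "approximately replaces G with U @ V.T" claim, one can compare the iteration's output against the exact orthogonal factor from an SVD (an illustrative snippet, not part of the training code):

import torch

torch.manual_seed(0)
G = torch.randn(1024, 768)
U, S, Vh = torch.linalg.svd(G, full_matrices=False)
approx = zeroth_power_via_newtonschulz5(G)
# the bfloat16 iteration matches U @ V.T only loosely (its singular values land in a
# band around 1 rather than exactly at 1), which is sufficient for the optimizer
print((approx - U @ Vh).norm() / (U @ Vh).norm())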

For this training scenario, Muon has the following favorable properties:

Provenance

Many of the choices made in designing this optimizer were obtained experimentally through our pursuit of CIFAR-10 speedrunning. In particular, we experimentally arrived at the following practices:

Our use of a Newton-Schulz iteration for orthogonalization traces to Bernstein & Newhouse (2024) [4], who suggested it as a way to compute Shampoo [5, 6] preconditioners, and theoretically explored Shampoo without preconditioner accumulation. In particular, Jeremy Bernstein @jxbz sent us the draft, which prompted us to experiment with various Newton-Schulz iterations as the orthogonalization method for this optimizer. Had we used an SVD instead of a Newton-Schulz iteration, this optimizer would have been too slow to be useful. Bernstein & Newhouse also pointed out that Shampoo without preconditioner accumulation is equivalent to steepest descent in the spectral norm, and therefore Shampoo can be thought of as one way to smooth out spectral steepest descent. The proposed optimizer can be thought of as a second way of smoothing spectral steepest descent, with a different set of memory and runtime tradeoffs compared to Shampoo.
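
To make that equivalence concrete, here is a small numerical check (an illustrative sketch, not code from the repo) that the Shampoo update without preconditioner accumulation, (G Gᵀ)^(-1/4) G (Gᵀ G)^(-1/4), reduces to the orthogonalized gradient U @ V.T:

import torch

def inv_fourth_root(M):
    # inverse fourth root of a symmetric positive-definite matrix via its eigendecomposition
    vals, vecs = torch.linalg.eigh(M)
    return vecs @ torch.diag(vals.clamp(min=1e-12) ** -0.25) @ vecs.T

torch.manual_seed(0)
G = torch.randn(32, 32, dtype=torch.float64)
U, S, Vh = torch.linalg.svd(G)
shampoo_no_accum = inv_fourth_root(G @ G.T) @ G @ inv_fourth_root(G.T @ G)
print((shampoo_no_accum - U @ Vh).abs().max())  # ~0, up to numerical error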


Startup script

Here's a good startup script for a fresh 8xH100 instance.

sudo apt-get update
sudo apt-get install vim tmux python3-pip python-is-python3 -y
git clone https://github.com/KellerJordan/modded-nanogpt.git
cd modded-nanogpt
tmux

pip install numpy==1.23.5 huggingface-hub tqdm
pip install --upgrade torch &  # the trailing '&' lets torch install in the background while the data downloads
python data/cached_fineweb10B.py 18

References

  1. Guilherme Penedo, et al. "The FineWeb datasets: Decanting the web for the finest text data at scale." arXiv preprint arXiv:2406.17557 (2024).
  2. Nicholas J. Higham. Functions of Matrices. Society for Industrial and Applied Mathematics, 2008. Equation 5.22.
  3. Günther Schulz. "Iterative Berechnung der reziproken Matrix." Z. Angew. Math. Mech., 13:57–59, 1933.
  4. Jeremy Bernstein and Laker Newhouse. "Old Optimizer, New Norm: An Anthology." arXiv preprint arXiv:2409.20325 (2024).
  5. Vineet Gupta, Tomer Koren, and Yoram Singer. "Shampoo: Preconditioned stochastic tensor optimization." International Conference on Machine Learning. PMLR, 2018.
  6. Rohan Anil, et al. "Scalable second order optimization for deep learning." arXiv preprint arXiv:2002.09018 (2020).
  7. Alexander Hägele, et al. "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations." arXiv preprint arXiv:2405.18392 (2024).

Citation

@misc{modded_nanogpt_2024,
  author       = {Keller Jordan and Jeremy Bernstein and Brendan Rappazzo and
                  @fernbear.bsky.social and Boza Vlado and You Jiacheng and
                  Franz Cesista and Braden Koszarsky and @Grad62304977},
  title        = {modded-nanogpt: Speedrunning the NanoGPT baseline},
  year         = {2024},
  url          = {https://github.com/KellerJordan/modded-nanogpt},
  note         = {Accessed: 2024-12-09}
}
<img src="img/dofa.jpg" alt="itsover_wereback" style="width:100%;">