llm.🔥

This project is a port of Andrej Karpathy's llm.c to Mojo, currently in beta. Visit llm.c for a detailed explanation of the original project.

Please note that this repository has not yet been updated to Mojo version 24.5 due to an issue that remains unresolved in the stable release: parallelize won't work with closures that capture local variables. The issue is fixed in the nightly release, but not in the stable version of Mojo 24.5.
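For reference, the failing pattern is a `@parameter` closure passed to `parallelize` that captures a local variable. A minimal sketch is shown below; the names `worker` and `scale` are illustrative, not code from this repository, and the syntax approximates the Mojo 24.x releases:

```mojo
from algorithm import parallelize

fn main():
    var scale: Float32 = 2.0  # local variable captured by the closure

    @parameter
    fn worker(i: Int):
        # Capturing the local `scale` here is what stable Mojo 24.5
        # rejects; the nightly release compiles it.
        var x = scale * Float32(i)
        _ = x

    parallelize[worker](64)
```

The training loop in train_gpt2.mojo relies on this kind of capturing closure for its parallel kernels, which is why the port stays on Mojo 24.4 until the fix lands in a stable release.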

Implementation

Prerequisite

Before using llm.🔥 for the first time, please run the following preparatory commands:

pip install -r requirements.txt
python prepro_tinyshakespeare.py  
python train_gpt2.py

How to use

magic shell -e mojo-24-4
mojo train_gpt2.mojo

For a more detailed step-by-step guide including additional setup details and options, please refer to our detailed usage instructions.

Mojo 24.5 nightly

An initial version for the nightly release of Mojo 24.5 is now available for testing:

magic shell -e nightly
mojo train_gpt2_nightly.mojo

Please note that the nightly releases are often subject to breaking changes, so you may encounter issues when running train_gpt2_nightly.mojo.

Benchmarks

Basic benchmark results: (M2 MacBook Pro)

| Implementation | Average Training Loop Time |
| --- | --- |
| train_gpt2.mojo | 1819 ms |
| train_gpt2.c (with OpenMP) | 1849 ms |
| train_gpt2.c (no OpenMP) | 7473 ms |
| train_gpt2_basic.mojo | 54509 ms |

Test

We ported test_gpt2.c from the original repository to Mojo to validate our port's functionality. For instructions on how to run this test and insights into the results it yields, please see our guide here.

Development Roadmap

At this stage, there are no plans for further development of this app beyond keeping up with Mojo's latest versions. It primarily serves as a proof of concept, showcasing Mojo's ability to implement C-like applications in terms of speed and low-level programming. That said, I'm always open to new ideas or collaboration opportunities, so feel free to reach out.

Changelog

License

MIT