<div align="center">
  <img src="./docs/pictures/icon.png" width="150"/>
</div>

# A PyTorch Native LLM Training Framework

An Industrial-Level Framework for Ease of Use
- 🔥 **PyTorch Native**: veScale is rooted in PyTorch-native data structures, operators, and APIs, enjoying the ecosystem of PyTorch that dominates the ML world.

- 🛡 **Zero Model Code Change**: veScale decouples distributed system design from model architecture, requiring zero or near-zero modification of users' model code.

- 🚀 **Single Device Abstraction**: veScale provides single-device semantics to users, automatically distributing and orchestrating model execution across a cluster of devices (see the first sketch after this list).

- 🎯 **Automatic Parallelism Planning**: veScale parallelizes model execution with a synergy of strategies (tensor, sequence, data, ZeRO, and pipeline parallelism) under semi- or full automation [coming soon].

- ⚡ **Eager & Compile Mode**: veScale supports not only Eager-mode automation for parallel training and inference but also Compile-mode for ultimate performance [coming soon].

- 📀 **Automatic Checkpoint Resharding**: veScale manages distributed checkpoints automatically, with online resharding across different cluster sizes and different parallelism strategies (see the second sketch after this list).
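To make the single-device abstraction and zero-model-code-change points concrete, below is a minimal sketch of parallelizing a plain `nn.Module`. The import paths (`vescale.dtensor.device_mesh`, `vescale.dtensor.placement_types`, `vescale.dmodule.api`) and the sharding-plan schema are modeled on veScale's public examples but should be treated as assumptions; see `vescale/examples` for authoritative usage.

```python
import torch
import torch.nn as nn

# Assumed veScale import paths (modeled on its public examples):
from vescale.dtensor.device_mesh import init_device_mesh
from vescale.dtensor.placement_types import Replicate, Shard
from vescale.dmodule.api import parallelize_module


class MLP(nn.Module):
    """An ordinary single-device model: no distributed code inside."""

    def __init__(self, dim: int = 1024):
        super().__init__()
        self.fc1 = nn.Linear(dim, 4 * dim)
        self.fc2 = nn.Linear(4 * dim, dim)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))


# Describe the cluster as a logical device mesh (here, 4-way tensor parallel).
mesh = init_device_mesh("cuda", (4,), mesh_dim_names=("TP",))

# The sharding plan lives outside the model, so the model code is untouched.
# The schema below ("parameter"/"forward" keys, Shard/Replicate placements)
# is illustrative.
sharding_plan = {
    "parameter": {
        "fc1.weight": [Shard(0)],  # column-shard the first linear layer
        "fc2.weight": [Shard(1)],  # row-shard the second linear layer
    },
    "forward": {
        "input": [[Replicate()]],  # replicate activations entering the module
    },
}

# The returned module keeps single-device semantics: call it like a local
# model, while execution is distributed and orchestrated across the mesh.
dmlp = parallelize_module(MLP(), mesh, sharding_plan)
out = dmlp(torch.randn(8, 1024, device="cuda"))
```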
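Similarly, here is a sketch of the checkpoint workflow behind the resharding bullet. The `vescale.checkpoint` save/load calls follow the snippet in veScale's checkpoint announcement, but treat the exact signatures as assumptions; the key property is that a checkpoint written under one cluster size or parallelism layout can be read back under another.

```python
import vescale.checkpoint  # assumed module path, per veScale's checkpoint docs

# Continuing the sketch above: `dmlp` is the parallelized module, and `doptim`
# stands in for a veScale distributed optimizer wrapping its parameters.
checkpoint_state = {"model": dmlp, "optimizer": doptim}

# Save: each rank persists only its own shards, with deduplication and async I/O.
vescale.checkpoint.save("checkpoints/step_1000", checkpoint_state)

# Load: may run on a different cluster size or parallelism strategy;
# shards are resharded online to match the new layout.
vescale.checkpoint.load("checkpoints/step_1000", checkpoint_state)
```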
## Latest News
- [2024-07-25] veScale's pipeline parallelism was open sourced, including the API, graph parser, stage abstraction, schedules, and execution runtime, along with an nD distributed timeline.

- [2024-05-31] veScale's fast checkpointing system was open sourced, featuring automatic checkpoint resharding, caching, load balancing, fast copying, deduplication, and asynchronous I/O.

- [2024-05-21] veScale's examples (Mixtral, LLaMA2, and nanoGPT) were open sourced, with bit-wise correctness of training loss curves.

- [2024-05-13] veScale made its debut at MLSys 2024 as a poster.

- [2024-04-16] Our internal LLM training system was presented at NSDI 2024.
## Coming Soon

veScale is still in its early phase. We are refactoring our internal LLM training system components to meet open-source standards. The tentative timeline is as follows:
- High-level nD parallel API for extreme ease of use

- Power-user plan API for easy customization of nD parallel training

- End-to-end vescale/examples with 5D parallel training (TP, SP, DP, ZeRO, PP)
## Table of Contents

- Parallel
  - Overview
  - Tensor Parallel & Sequence Parallel
  - Data Parallel
  - Optimizer Parallel
  - Pipeline Parallel
  - nD Device Mesh
- Plan

## We Are Hiring!

## License
The veScale Project is under the Apache License v2.0.