<!--- Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. --> <!--- SPDX-License-Identifier: Apache-2.0 -->

Optimizing Multi-task Training through Dynamic Pipelines

Official repository for the paper DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines (camera-ready link pending).

During multi-task training, the model commonly receives input sequences of highly different lengths due to the diverse contexts of different tasks. Padding (all sequences to the same length) or packing (short examples into long sequences of the same length) is usually adopted to prepare input samples for model training, but neither is efficient in memory or computation. This project adopts a dynamic micro-batching approach to tackle sequence length variation. Each input global batch is split into multiple variable-length micro-batches, each of which comprises a (potentially different) number of samples of similar sequence lengths. These micro-batches are then efficiently organized into pipelines, enabling efficient 3D-parallel (data, tensor and pipeline) multi-task model training.
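To make the splitting idea concrete, here is a minimal, illustrative sketch in Python. It is not DynaPipe's actual algorithm (the real splitter is cost-model driven and lives in dynapipe/data_opt); the function name and the token_budget heuristic are assumptions used only for illustration.

from typing import List

def split_into_microbatches(seq_lengths: List[int],
                            token_budget: int = 4096) -> List[List[int]]:
    # Illustrative sketch only: group sample indices of similar length into
    # variable-size micro-batches. Each micro-batch is padded only to the
    # longest sequence it contains, so grouping similar lengths keeps the
    # padding overhead low.
    order = sorted(range(len(seq_lengths)), key=lambda i: seq_lengths[i])
    micro_batches, current = [], []
    for idx in order:
        # Padded size of a micro-batch = (#samples) * (max length in it).
        max_len = max(seq_lengths[i] for i in current + [idx])
        if current and (len(current) + 1) * max_len > token_budget:
            micro_batches.append(current)
            current = []
        current.append(idx)
    if current:
        micro_batches.append(current)
    return micro_batches

# A global batch with highly varied sequence lengths is split into three
# micro-batches of similar-length samples: [[0, 6, 2, 4], [3, 1, 7], [5]].
print(split_into_microbatches([32, 1024, 48, 980, 64, 2048, 40, 1100]))

Note that the number of samples per micro-batch varies (more short samples can share one micro-batch), which is exactly the flexibility the dynamic micro-batching approach exploits.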

Main features of this project include:

- Dynamic micro-batch splitting driven by cost models, grouping samples of similar sequence lengths into variable-length micro-batches (dynapipe/data_opt)
- Optimized pipeline schedules computed for the resulting variable-length micro-batches (dynapipe/schedule_opt)
- A pipeline executor and distributed instruction store for running the generated execution plans (dynapipe/pipe)
- A modified CUDA caching memory allocator from PyTorch for memory-efficient training (dynapipe/memory_opt)

System Diagram

[System diagram figure]

Getting Started

Dependencies

Redis

The distributed instruction store uses Redis as the underlying key-value store. A Redis server needs to be installed on every machine participating in training; our code will set up and initialize the Redis server automatically.

Python Dependencies

Please see requirements.txt for the required Python packages. Install them by running

pip3 install -r requirements.txt

Installation

Clone this repository and run

pip3 install -e .

Then, build the C++ extensions by running

cd dynapipe/data_opt
make
cd ../memory_opt
python3 setup.py build

Pipeline Instructions

To use this project, the Pipeline Instructions (defined here) need to be implemented in the intended training framework (e.g., Megatron-LM). A reference implementation of the instructions in Megatron-LM can be found here.

Using this project

Please note that this project is experimental and has only been tested when integrated with Megatron-LM (please refer to the linked repository for detailed usage).

This project interacts with the training framework mainly through the following two interfaces:

Data Loader

We wrap the micro-batch splitting and execution plan generation process into a DynaPipeDataLoader. It takes the normal PyTorch data loader arguments plus a few additional ones; please see here for the full list of arguments. The returned iterator yields, for each iteration, a tuple of micro-batched data and the corresponding execution plan, which is consumed by the pipeline executor. See here for an example of using the DynaPipeDataLoader in Megatron-LM.
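The sketch below shows the intended usage pattern. The DynaPipeDataLoader name comes from this section, but the import path, the toy dataset, and the constructor arguments shown are assumptions; consult the linked argument list for the real DynaPipe-specific arguments.

from torch.utils.data import Dataset

# Import path is an assumption; see the repository for the actual location.
from dynapipe.pipe.data_loader import DynaPipeDataLoader

class ToyDataset(Dataset):
    # Variable-length samples, mimicking a multi-task mixture.
    def __len__(self):
        return 1024
    def __getitem__(self, idx):
        return {"input_ids": list(range(idx % 512 + 1))}

# Only the standard PyTorch arguments are shown; the DynaPipe-specific
# arguments (documented in the linked argument list) are omitted here.
data_loader = DynaPipeDataLoader(ToyDataset(), batch_size=64, num_workers=2)

for micro_batches, execution_plan in data_loader:
    # Each iteration yields the variable-length micro-batches together with
    # the execution plan describing how the pipeline executor should run them.
    pass  # hand both to the pipeline executor (see the next section)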

Pipeline Executor

The pipeline executor reads in execution plans and calls the Pipeline Instruction Implementations, which are registered to the executor through the register_handler function. To run the pipeline executor, call the execute function with the corresponding execution plan in each iteration. See here for an example of using the pipeline executor in Megatron-LM.
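The following sketch shows how the two calls could fit together in a training loop. register_handler and execute are the interfaces named above; how the executor object is obtained, the handler's signature, and the exact arguments of the registration call are assumptions here.

# Hypothetical handler for one type of pipeline instruction; a real
# implementation calls into the training framework (e.g., Megatron-LM).
def forward_pass_handler(instruction):
    # Run the forward pass for the micro-batch described by `instruction`.
    ...

# Register the implementation with the executor (the exact signature of
# register_handler, and how `executor` is constructed, are assumptions).
executor.register_handler(forward_pass_handler)

# In each training iteration, execute the plan produced by the
# DynaPipeDataLoader for that iteration.
for micro_batches, execution_plan in data_loader:
    executor.execute(execution_plan)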

Environment Variables

In addition to the above two interfaces, this project can also be configured through the following environment variables:

Code Structure

├── dynapipe
│   : main source folder
│   ├── data_opt
│   │   : code for micro-batch splitting and cost models
│   ├── memory_opt
│   │   : contains the modified cuda caching memory allocator 
│   │     from PyTorch
│   ├── pipe
│   │   : contains implementation of pipeline instructions,
│   │     executor, and the distributed instruction store
│   ├── schedule_opt
│   │   : code for computing pipeline schedule
│   └── utils
│       : other util codes like logger
├── scripts
│   : utility scripts for various purposes 
├── tests
│   : unit tests of different modules

Security

See CONTRIBUTING for more information.

License

This project is licensed under the Apache-2.0 License.