
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<div align="center">
  <img src="assets/logo.jpg" width="390"/>
  <div>&nbsp;</div>
  <div align="center">
    <b><font size="5">project website</font></b>
    <sup>
      <a href="https://space.bilibili.com/3493095748405551?spm_id_from=333.337.search-card.all.click">
        <i><font size="4">HOT</font></i>
      </a>
    </sup>
    &nbsp;&nbsp;&nbsp;&nbsp;
    <b><font size="5">PKU-Alignment Team</font></b>
    <sup>
      <a href="https://space.bilibili.com/3493095748405551?spm_id_from=333.337.search-card.all.click">
        <i><font size="4">welcome</font></i>
      </a>
    </sup>
  </div>
  <div>&nbsp;</div>


📘Documentation | 🛠️Quick Start | 🚀Algorithms | 👀Evaluation | 🤔Reporting Issues

</div>

<div align="center">

Our 100K Instruction-Following Datasets

</div>

Align-Anything aims to align any-modality large models (any-to-any models), including LLMs, VLMs, and others, with human intentions and values. More details about the definition and milestones of alignment for large models can be found in AI Alignment. Overall, the framework supports fine-tuning and evaluation across many modalities and alignment algorithms, as detailed below.

Note: We provide a quick start guide to help users quickly understand the code structure and development details.

<table>
  <tr>
    <td><details><summary>prompt</summary>Small white toilet sitting in a small corner next to a wall.</details></td>
    <td><details><summary>prompt</summary>A close up of a neatly made bed with two night stands</details></td>
    <td><details><summary>prompt</summary>A pizza is sitting on a plate at a restaurant.</details></td>
    <td><details><summary>prompt</summary>A girl in a dress next to a piece of luggage and flowers.</details></td>
  </tr>
  <tr>
    <td colspan="4" align="center"><b>Before Alignment (Chameleon-7B)</b></td>
  </tr>
  <tr>
    <td><img src="https://github.com/Gaiejj/align-anything-images/blob/main/chameleon/before/1.png?raw=true" alt="before alignment 1" style="max-width: 100%; height: auto;"></td>
    <td><img src="https://github.com/Gaiejj/align-anything-images/blob/main/chameleon/before/2.png?raw=true" alt="before alignment 2" style="max-width: 100%; height: auto;"></td>
    <td><img src="https://github.com/Gaiejj/align-anything-images/blob/main/chameleon/before/3.png?raw=true" alt="before alignment 3" style="max-width: 100%; height: auto;"></td>
    <td><img src="https://github.com/Gaiejj/align-anything-images/blob/main/chameleon/before/4.png?raw=true" alt="before alignment 4" style="max-width: 100%; height: auto;"></td>
  </tr>
  <tr>
    <td colspan="4" align="center"><b>After Alignment (Chameleon 7B Plus)</b></td>
  </tr>
  <tr>
    <td><img src="https://github.com/Gaiejj/align-anything-images/blob/main/chameleon/after/1.png?raw=true" alt="after alignment 1" style="max-width: 100%; height: auto;"></td>
    <td><img src="https://github.com/Gaiejj/align-anything-images/blob/main/chameleon/after/2.png?raw=true" alt="after alignment 2" style="max-width: 100%; height: auto;"></td>
    <td><img src="https://github.com/Gaiejj/align-anything-images/blob/main/chameleon/after/3.png?raw=true" alt="after alignment 3" style="max-width: 100%; height: auto;"></td>
    <td><img src="https://github.com/Gaiejj/align-anything-images/blob/main/chameleon/after/4.png?raw=true" alt="after alignment 4" style="max-width: 100%; height: auto;"></td>
  </tr>
</table>

Alignment fine-tuning can significantly enhance the instruction-following capabilities of large multimodal models. After fine-tuning, Chameleon 7B Plus generates images that are more relevant to the prompt.

## Quick Start

### Easy Installation

```bash
# clone the repository
git clone git@github.com:PKU-Alignment/align-anything.git
cd align-anything

# create a virtual environment
conda create -n align-anything python==3.11
conda activate align-anything
```

```bash
# We tested on an H800 computing cluster, and this CUDA version works well.
# You can adjust the version to match your own computing cluster.
conda install nvidia/label/cuda-12.2.0::cuda
export CUDA_HOME=$CONDA_PREFIX
```

If your CUDA is installed in a different location, such as /usr/local/cuda/bin/nvcc, you can set the environment variable as follows:

```bash
export CUDA_HOME="/usr/local/cuda"
```

Finally, install align-anything by running:

```bash
# We provide quick installation extras for training and evaluation.
# If you only need the training or evaluation module,
# install the corresponding dependencies:
pip install -e .[train]    # install the training dependencies
pip install -e .[evaluate] # install the evaluation dependencies

# If you need all dependencies, use the following command:
pip install -e .[all]
```
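After installation, you can run a quick sanity check to confirm that PyTorch (pulled in by the extras above) can see your GPU. This is a generic check, not an align-anything command:

```python
# Minimal sanity check: verify that PyTorch is installed and CUDA is visible.
import torch

print('torch version:', torch.__version__)
print('CUDA available:', torch.cuda.is_available())
if torch.cuda.is_available():
    # Print the name of the first visible GPU.
    print('device:', torch.cuda.get_device_name(0))
```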

### Training

We provide several quick-start scripts in the ./scripts directory. These scripts automatically download the model and dataset, then run training or evaluation.

For example, scripts/llava_dpo.sh is the script for the Text + Image -> Text modality; you can run it with:

```bash
cd scripts
bash llava_dpo.sh
```

### Evaluation

After training, you can evaluate the model by running the scripts/llava_eval.sh script.

```bash
cd scripts
bash llava_eval.sh
```

You can simply modify the parameters in the script to suit your needs, e.g., set MODEL_NAME_OR_PATH to your own model or TRAIN_DATASETS to your own dataset. For more details, please refer to the Advanced Usage section.

## Algorithms

We support basic alignment algorithms for different modalities, each of which may involve additional algorithms. For instance, in the text modality, we have also implemented SimPO, KTO, and others.

| Modality | SFT | RM | DPO | PPO |
|----------|-----|----|-----|-----|
| Text -> Text (t2t) | ✔️ | ✔️ | ✔️ | ✔️ |
| Text+Image -> Text (ti2t) | ✔️ | ✔️ | ✔️ | ✔️ |
| Text+Image -> Text+Image (ti2ti) | ✔️ | ✔️ | ✔️ | ✔️ |
| Text+Audio -> Text (ta2t) | ✔️ | ✔️ | ✔️ | ✔️ |
| Text+Video -> Text (tv2t) | ✔️ | ✔️ | ✔️ | ✔️ |
| Text -> Image (t2i) | ✔️ | ⚒️ | ✔️ | ⚒️ |
| Text -> Video (t2v) | ✔️ | ⚒️ | ✔️ | ⚒️ |
| Text -> Audio (t2a) | ✔️ | ⚒️ | ✔️ | ⚒️ |
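The preference-based trainers above share a simple core. As a reference point, here is a minimal, generic sketch of the DPO objective in PyTorch; this is our own illustration of the standard DPO loss, not align-anything's actual implementation, and the function name and signature are hypothetical:

```python
import torch
import torch.nn.functional as F

def dpo_loss(
    policy_chosen_logps: torch.Tensor,    # log-probs of chosen responses under the policy
    policy_rejected_logps: torch.Tensor,  # log-probs of rejected responses under the policy
    ref_chosen_logps: torch.Tensor,       # the same quantities under the frozen reference model
    ref_rejected_logps: torch.Tensor,
    beta: float = 0.1,                    # strength of the implicit KL penalty
) -> torch.Tensor:
    """Standard DPO objective: push the policy's log-ratio on chosen
    responses above its log-ratio on rejected ones."""
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```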

## Evaluation

We support evaluation benchmarks for Text -> Text, Text+Image -> Text, Text -> Image, and the other modalities listed below.

| Modality | Supported Benchmarks |
|----------|----------------------|
| t2t | ARC, BBH, Belebele, CMMLU, GSM8K, HumanEval, MMLU, MMLU-Pro, MT-Bench, PAWS-X, RACE, TruthfulQA |
| ti2t | A-OKVQA, LLaVA-Bench(COCO), LLaVA-Bench(wild), MathVista, MM-SafetyBench, MMBench, MME, MMMU, MMStar, MMVet, POPE, ScienceQA, SPA-VL, TextVQA, VizWizVQA |
| tv2t | MVBench, Video-MME |
| ta2t | AIR-Bench |
| t2i | ImageReward, HPSv2, COCO-30k(FID) |
| t2v | ChronoMagic-Bench |
| t2a | AudioCaps(FAD) |

## Wandb Logger

We support wandb logging. By default, logging is set to offline. If you need to view wandb logs online, set the WANDB_API_KEY environment variable before starting training:

```bash
export WANDB_API_KEY="..."  # your W&B API key here
```
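Alternatively, you can set the same variables from Python before training starts; this relies on wandb's standard environment variables, not an align-anything-specific API:

```python
import os

# Set these before any training / wandb initialization happens.
os.environ['WANDB_API_KEY'] = '...'  # your W&B API key here
os.environ['WANDB_MODE'] = 'online'  # wandb's standard switch; the default here is offline
```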

## Advanced Usage

### Training

Q (Training Model Registration): What models are supported for training? What should I pay attention to if I want to use my own model?

A: Model registration in align-anything is two-fold:

1. Models that have been manually supported by the align-anything team:

   | Modality | Models |
   |----------|--------|
   | Text -> Text | meta-llama/Llama-3.1-8B-Instruct series (Llama3 and Llama2 are also supported) |
   | Text+Image -> Text | LLaVA series, LLaVA-Next series, openbmb/MiniCPM-V, and LLaMA-3.2-Vision-Instruct |
   | Text+Image -> Text+Image | facebook/chameleon-7b |
   | Text+Audio -> Text | Qwen/Qwen2-Audio-7B-Instruct |
   | Text+Video -> Text | Qwen/Qwen2-VL-7B-Instruct |
   | Text -> Image | CompVis/stable-diffusion-v1-4 |
   | Text -> Video | ali-vilab/text-to-video-ms-1.7b |
   | Text -> Audio | cvssp/audioldm-s-full-v2 |

2. Your own models: you can also use your own model for training; refer to here for model registration (the corresponding docs will be uploaded later).

Q (Training Dataset Registration): What datasets are supported for training? What should I pay attention to if I want to use my own dataset?

A: We provide datasets_formatter for dataset registration. Its core function is mapping dataset keys to the conversation format.

Basically, we support three types of dataset formats:

| Type | Description |
|------|-------------|
| `format_supervised_sample` | Maps the dataset to the supervised training format (for SFT). |
| `format_preference_sample` | Maps the dataset to the preference training format (for RM, DPO, KTO, etc.). |
| `format_prompt_only_sample` | Maps the dataset to the prompt-only training format (for PPO). |

We introduce the following examples below; you can refer to here for more details.

<details>
<summary>Supervised format example (Alpaca)</summary>

```python
from typing import Any  # required by the type annotations below

@register_template('Alpaca')
class Alpaca(BaseFormatter):

    def format_supervised_sample(self, raw_sample: dict[str, Any]) -> tuple[list[dict[str, Any]], dict]:
        prompt = ' '.join((raw_sample['instruction'], raw_sample['input']))
        response = raw_sample['output']
        return [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": response},
        ], {}
```

</details>

<details>
<summary>Preference format example (AA_TI2T)</summary>

```python
from typing import Any  # required by the type annotations below

@register_template('AA_TI2T')
class AA_TI2T(BaseFormatter):
    system_prompt: str = ""

    def format_preference_sample(self, raw_sample: dict[str, Any]) -> tuple[list[dict[str, Any]], list[dict[str, Any]], dict[str, Any]]:
        better_id = int(raw_sample['overall_response'])
        worse_id = 2 if better_id==1 else 1

        if better_id not in [1, 2] or worse_id not in [1, 2]:
            return [], [], {}

        raw_better_response = raw_sample[f'response_{better_id}']
        raw_worse_response = raw_sample[f'response_{worse_id}']
        prompt = raw_sample['question']
        image = raw_sample['image'].convert('RGBA')
        better_conversation = [
            {'role': 'user', 'content': [
                    {'type': 'image'},
                    {'type': 'text', 'text': prompt},
                ]
            },
            {'role': 'assistant', 'content': [{'type': 'text', 'text': raw_better_response}]},
        ]
        worse_conversation = [
            {'role': 'user', 'content': [
                    {'type': 'image'},
                    {'type': 'text', 'text': prompt},
                ]
            },
            {'role': 'assistant', 'content': [{'type': 'text', 'text': raw_worse_response}]},
        ]

        meta_info = {
            'image': image,
            'better_response': raw_better_response,
            'worse_response': raw_worse_response,
        }

        return better_conversation, worse_conversation, meta_info
```

</details>

<details>
<summary>Prompt-only format example (AA_TA2T)</summary>

```python
from typing import Any  # required by the type annotations below

@register_template('AA_TA2T')
class AA_TA2T(BaseFormatter):
    system_prompt: str = 'You are a helpful assistant.'

    def format_prompt_only_sample(self, raw_sample: dict[str, Any]) -> tuple[list[dict[str, Any]], dict[str, Any]]:
        prompt = raw_sample['prompt']
        audio_path = raw_sample['audio_path']

        conversation = [
            {'role': 'system', 'content': [{'type': 'text', 'text': self.system_prompt}]},
            {'role': 'user', 'content': [
                    {'type': 'audio', 'audio_url': audio_path},
                    {'type': 'text', 'text': prompt},
                ]},
        ]

        return conversation, {'audio_path': audio_path}
```

</details>
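To make the mapping concrete, here is a hypothetical raw sample in the Alpaca schema and the conversation that the `format_supervised_sample` method above would produce (the sample values are invented for illustration):

```python
# A made-up Alpaca-style raw sample.
raw_sample = {
    'instruction': 'Translate the following sentence to French:',
    'input': 'Hello, world!',
    'output': 'Bonjour, le monde !',
}

# Alpaca.format_supervised_sample(raw_sample) returns:
# (
#     [
#         {'role': 'user', 'content': 'Translate the following sentence to French: Hello, world!'},
#         {'role': 'assistant', 'content': 'Bonjour, le monde !'},
#     ],
#     {},  # empty meta-info dict
# )
```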

### Evaluation

Q (Evaluation Model Registration): What models are supported for evaluation? What should I pay attention to if I want to use my own model?

A: Registering your model for evaluation with align-anything is easy: you only need to add your model's special tokens to the ./align_anything/configs/eval_template.py file.

For example, if you want to use liuhaotian/llava-v1.5-7b for evaluation, add the following template to the ./align_anything/configs/eval_template.py file:

```python
@register_template('Llava')
class Llava:
    system_prompt: str = ''
    user_prompt: str = 'USER: \n<image>{input}'
    assistant_prompt: str = '\nASSISTANT:{output}'
    split_token: str = 'ASSISTANT:'
    separator: str = '###'
```
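At evaluation time, the `{input}` and `{output}` placeholders are filled in to assemble the model prompt. The following is a rough illustration of that substitution (our own sketch, not the framework's actual code path):

```python
user_prompt = 'USER: \n<image>{input}'
assistant_prompt = '\nASSISTANT:{output}'

question = 'What is shown in this image?'
# At inference, 'output' is left empty so the model completes the assistant turn.
prompt = user_prompt.format(input=question) + assistant_prompt.format(output='')
print(prompt)
# USER:
# <image>What is shown in this image?
# ASSISTANT:
```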


All evaluation scripts can be found in the ./scripts directory. The ./scripts/evaluate.sh script runs model evaluation on the benchmarks; parameters that require user input have been left empty.

You can modify the configuration files for the benchmarks in this directory to suit specific evaluation tasks and models, and adjust inference parameters for vLLM or DeepSpeed based on your generation backend. For more details about the evaluation pipeline, refer to here.
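For instance, if your generation backend is vLLM, the inference parameters correspond to vLLM's standard sampling options. A minimal, generic vLLM sketch follows (illustrative only; the model path is a placeholder, and align-anything normally drives vLLM through its benchmark configs rather than direct calls like this):

```python
from vllm import LLM, SamplingParams

# Load a model with vLLM; replace the placeholder with your checkpoint path.
llm = LLM(model='your_model_name_or_path')

# Typical knobs you would tune in the benchmark configs.
params = SamplingParams(temperature=0.2, top_p=0.9, max_tokens=512)

outputs = llm.generate(['What is the capital of France?'], params)
print(outputs[0].outputs[0].text)
```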

### Inference

#### Interactive Client

```bash
python3 -m align_anything.serve.cli --model_name_or_path your_model_name_or_path
```

<img src="assets/cli_demo.gif" alt="cli_demo" style="width:600px;">

#### Interactive Arena

```bash
python3 -m align_anything.serve.arena \
    --red_corner_model_name_or_path your_red_model_name_or_path \
    --blue_corner_model_name_or_path your_blue_model_name_or_path
```

<img src="assets/arena_demo.gif" alt="arena_demo" style="width:600px;">

## Report Issues

If you have any questions while using align-anything, don't hesitate to ask on the GitHub issue page; we will reply within 2-3 working days.

## Citation

Please cite this repo if you use its data or code.

```bibtex
@misc{align_anything,
  author = {PKU-Alignment Team},
  title = {Align Anything: training all modality models to follow instructions with unified language feedback},
  year = {2024},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/PKU-Alignment/align-anything}},
}
```

## License

align-anything is released under Apache License 2.0.