# LLM-Finetuning

## PEFT Fine-Tuning Project 🚀

Welcome to the PEFT (Parameter-Efficient Fine-Tuning) project repository! This project focuses on efficiently fine-tuning large language models using LoRA and Hugging Face's `transformers` library.
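At its core, LoRA (which most of the notebooks below build on) freezes the pretrained weight matrix `W` and learns only a low-rank update `ΔW = B @ A`, shrinking the trainable parameter count from `d*k` to `r*(d+k)`. The idea can be sketched in plain NumPy (an illustrative toy, not the actual `peft` implementation; all names and dimensions here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 512, 512, 8        # layer dimensions and LoRA rank (r << d, k)
W = rng.normal(size=(d, k))  # frozen pretrained weight, never updated

# LoRA adapter: only A and B are trained.
A = rng.normal(scale=0.01, size=(r, k))
B = np.zeros((d, r))         # B starts at zero, so the adapter is a no-op at init

def lora_forward(x, alpha=16.0):
    """y = x @ (W + (alpha/r) * B @ A).T without materializing the merged matrix."""
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.normal(size=(4, k))
y = lora_forward(x)

# Trainable parameters drop from d*k to r*(d+k):
full_params = d * k          # 262144
lora_params = r * (d + k)    # 8192
```

Because `B` is initialized to zero, the adapted model exactly reproduces the frozen model before training begins; only the tiny `A`/`B` matrices receive gradients.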

## Fine-Tuning Notebook Table 📑

| Notebook Title | Description | Colab Badge |
|---|---|---|
| 1. Efficiently Train Large Language Models with LoRA and Hugging Face | Details and code for efficient training of large language models using LoRA and Hugging Face. | Open in Colab |
| 2. Fine-Tune Your Own Llama 2 Model in a Colab Notebook | Guide to fine-tuning your own Llama 2 model using a Colab notebook. | Open in Colab |
| 3. Guanaco Chatbot Demo with LLaMA-7B Model | Showcase of a chatbot demo powered by the LLaMA-7B model. | Open in Colab |
| 4. PEFT Finetune-Bloom-560m-tagger | Project details for PEFT Finetune-Bloom-560m-tagger. | Open in Colab |
| 5. Finetune_Meta_OPT-6-1b_Model_bnb_peft | Details and guide for fine-tuning the Meta OPT-6-1b model using bitsandbytes and PEFT. | Open in Colab |
| 6. Finetune Falcon-7b with BNB Self Supervised Training | Guide to fine-tuning Falcon-7b using BNB self-supervised training. | Open in Colab |
| 7. FineTune LLaMa2 with QLoRa | Guide to fine-tuning the Llama 2 7B pre-trained model using the PEFT library and the QLoRA method. | Open in Colab |
| 8. Stable_Vicuna13B_8bit_in_Colab | Guide to fine-tuning the Stable Vicuna 13B model in 8-bit precision. | Open in Colab |
| 9. GPT-Neo-X-20B-bnb2bit_training | Guide on training the GPT-NeoX-20B model using bfloat16 precision. | Open in Colab |
| 10. MPT-Instruct-30B Model Training | MPT-Instruct-30B is a large language model from MosaicML, trained on a dataset of short-form instructions. It can be used to follow instructions, answer questions, and generate text. | Open in Colab |
| 11. RLHF_Training_for_CustomDataset_for_AnyModel | How to run RLHF training on any LLM with a custom dataset. | Open in Colab |
| 12. Fine_tuning_Microsoft_Phi_1_5b_on_custom_dataset(dialogstudio) | How to train Microsoft Phi 1.5 with TRL SFT training on a custom dataset (DialogStudio). | Open in Colab |
| 13. Finetuning OpenAI GPT3.5 Turbo | How to fine-tune GPT-3.5 Turbo on your own data. | Open in Colab |
| 14. Finetuning Mistral-7b FineTuning Model using Autotrain-advanced | How to fine-tune Mistral-7b using AutoTrain-Advanced. | Open in Colab |
| 15. RAG LangChain Tutorial | How to use RAG with LangChain. | Open in Colab |
| 16. Knowledge Graph LLM with LangChain PDF Question Answering | How to build a knowledge graph with PDF question answering. | Open in Colab |
| 17. Text to Knowledge Graph with OpenAI Functions with Neo4j and LangChain Agent Question Answering | How to build a knowledge graph from text or PDF documents with question answering. | Open in Colab |
| 18. Convert the Document to Knowledge Graph using LangChain and OpenAI | Shows the easiest way to convert any document into a knowledge graph for your next RAG-based application. | Open in Colab |
| 19. How to train a 1-bit Model with LLMs? | How to train a model with 1-bit and 2-bit quantization using the HQQ framework. | Open in Colab |
| 20. Alpaca_+_Gemma2_9b_Unsloth_2x_faster_finetuning | How to fine-tune Gemma 2 9B with Unsloth. | Open in Colab |
| 21. RAG Pipeline Evaluation Using MLflow Best Industry Practice | A comprehensive guide to evaluating a RAG (Retrieval-Augmented Generation) pipeline using MLflow, following industry best practices. | Open in Colab |
| 22. Evaluate a Hugging Face LLM with mlflow.evaluate() | A comprehensive guide to evaluating a Hugging Face large language model (LLM) using `mlflow.evaluate()`. | Open in Colab |
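Several of the notebooks above (e.g. the QLoRA, bitsandbytes, and HQQ ones) depend on low-bit weight quantization. The basic symmetric absmax scheme they build on can be sketched in a few lines of NumPy (a simplified illustration, not the actual bitsandbytes or HQQ code):

```python
import numpy as np

def quantize_absmax_int4(w):
    """Symmetric absmax quantization to signed 4-bit integers in [-8, 7]."""
    scale = np.abs(w).max() / 7.0  # map the largest magnitude onto 7
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from 4-bit codes and the scale."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)

q, scale = quantize_absmax_int4(w)
w_hat = dequantize(q, scale)

# 4-bit codes take a quarter of fp16 storage (an eighth of fp32), at the
# cost of a bounded rounding error of at most half a quantization step.
max_err = np.abs(w - w_hat).max()
```

Real quantizers such as bitsandbytes apply this per block of weights (one scale per block) rather than per tensor, which keeps the rounding error small even when a few outlier weights are present.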

## Contributing 🤝

Contributions are welcome! If you'd like to contribute to this project, feel free to open an issue or submit a pull request.

## License 📝

This project is licensed under the MIT License.


Created with ❤️ by Ashish