
<div align="center"> <h1>Awesome Totally Open ChatGPT</h1> <a href="https://github.com/sindresorhus/awesome"><img src="https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg"/></a> </div>

ChatGPT is GPT-3.5 finetuned with RLHF (Reinforcement Learning from Human Feedback) for human instruction and chat.
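The RLHF recipe mentioned above starts from human preference data: a reward model is trained to score the preferred response higher than the rejected one, and the chat policy is then optimized against that reward. A minimal sketch of the pairwise preference loss at the heart of reward-model training (plain Python with illustrative scalar rewards, not code from any listed project):

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry pairwise loss used to train RLHF reward models:
    -log(sigmoid(r_chosen - r_rejected)). It is small when the chosen
    response is scored higher than the rejected one, large otherwise."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss shrinks as the reward model ranks the preferred answer higher.
print(preference_loss(2.0, 0.0))  # small: correct ranking with a margin
print(preference_loss(0.0, 2.0))  # large: ranking is reversed
```

In a full RLHF pipeline this loss trains the reward model; the language model itself is then finetuned with an RL algorithm (typically PPO) to maximize that learned reward.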

Alternatives are projects featuring different instruction-finetuned language models for chat. Projects are not counted if they are:

Tags: Bare / Standard / Full / Complicated (see the template below).

Other relevant lists:

## Table of Contents

  1. The template
  2. The list

## The template

Append the new project at the end of the file:

## [{owner}/{project-name}]({https://github.com/link/to/project})

Description goes here

Tags: Bare/Standard/Full/Complicated
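An entry's conformance to the template above can be checked mechanically. A small sketch (the regex and function name are illustrative helpers, not part of this repo's tooling):

```python
import re

# Heading format from the template: ## [owner/project](https://github.com/...)
ENTRY_RE = re.compile(
    r"^## \[(?P<owner>[\w.-]+)/(?P<name>[\w.-]+)\]"
    r"\(https://github\.com/[\w./-]+\)$"
)

def is_valid_entry_heading(line: str) -> bool:
    """Return True if the line matches the list's entry-heading template."""
    return ENTRY_RE.match(line) is not None

print(is_valid_entry_heading(
    "## [lucidrains/PaLM-rlhf-pytorch](https://github.com/lucidrains/PaLM-rlhf-pytorch)"
))  # True
```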

## The list

## [lucidrains/PaLM-rlhf-pytorch](https://github.com/lucidrains/PaLM-rlhf-pytorch)

Implementation of RLHF (Reinforcement Learning from Human Feedback) on top of the PaLM architecture. Basically ChatGPT, but with PaLM.

Tags: Bare

## [togethercomputer/OpenChatKit](https://github.com/togethercomputer/OpenChatKit)

OpenChatKit provides a powerful, open-source base to create both specialized and general purpose chatbots for various applications.

Related links:

Tags: Full

## [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui)

A Gradio web UI for running large language models such as GPT-J 6B, OPT, GALACTICA, LLaMA, and Pygmalion.

Tags: Full

## [KoboldAI/KoboldAI-Client](https://github.com/KoboldAI/KoboldAI-Client)

This is a browser-based front-end for AI-assisted writing with multiple local & remote AI models. It offers the standard array of tools, including Memory, Author’s Note, World Info, Save & Load, adjustable AI settings, formatting options, and the ability to import existing AI Dungeon adventures. You can also turn on Adventure mode and play the game like AI Dungeon Unleashed.

Tags: Full

## [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant)

OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and can retrieve information dynamically to do so.

Related links:

Tags: Full

## [tatsu-lab/stanford_alpaca](https://github.com/tatsu-lab/stanford_alpaca)

This is the repo for the Stanford Alpaca project, which aims to build and share an instruction-following LLaMA model.

Tags: Complicated

Other LLaMA-derived projects:

## [BlinkDL/ChatRWKV](https://github.com/BlinkDL/ChatRWKV)

ChatRWKV is like ChatGPT, but powered by the RWKV (100% RNN) language model and open source.

Tags: Full

## [THUDM/ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B)

ChatGLM-6B is an open bilingual language model based on the General Language Model (GLM) framework, with 6.2 billion parameters. With quantization, users can deploy it locally on consumer-grade graphics cards (only 6 GB of GPU memory is required at the INT4 quantization level).

Related links:

Tags: Full

## [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf)

This repository provides an overview of all components used to create BLOOMZ, mT0, and xP3, introduced in the paper *Crosslingual Generalization through Multitask Finetuning*.

Related links:

Tags: Standard

## [carperai/trlx](https://github.com/carperai/trlx)

A repo for distributed training of language models with Reinforcement Learning from Human Feedback (RLHF), supporting online RL up to 20B parameters and offline RL for larger models. Basically what you would use to finetune GPT into ChatGPT.

Tags: Bare

## [databrickslabs/dolly](https://github.com/databrickslabs/dolly)

Databricks’ dolly-v2-12b is an instruction-following large language model trained on the Databricks machine learning platform and licensed for commercial use. It is based on pythia-12b and finetuned on ~15k instruction/response records (databricks-dolly-15k) generated by Databricks employees across the capability domains from the InstructGPT paper.

Related links:

Tags: Standard

## [LianjiaTech/BELLE](https://github.com/LianjiaTech/BELLE)

The goal of this project is to promote the development of an open-source community for Chinese large-scale conversational models. It optimizes Chinese performance on top of the original Stanford Alpaca, and the model finetuning uses only data generated via ChatGPT (no other data). This repo contains:

  - 175 Chinese seed tasks used for generating the data
  - code for generating the data
  - 0.5M generated examples used for fine-tuning the model
  - a model finetuned from BLOOMZ-7B1-mt on the data generated by this project

Related links:

Tags: Standard

## [ethanyanjiali/minChatGPT](https://github.com/ethanyanjiali/minChatGPT)

A minimal example of aligning language models with RLHF, similar to ChatGPT.

Related links:

Tags: Standard

## [cerebras/Cerebras-GPT](https://github.com/cerebras/Cerebras-GPT)

Seven open-source GPT-3-style models with parameter counts ranging from 111 million to 13 billion, trained using the Chinchilla formula. The model weights have been released under a permissive license (Apache 2.0, specifically).

Related links:

Tags: Standard
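The "Chinchilla formula" mentioned for Cerebras-GPT refers to compute-optimal scaling, commonly approximated as ~20 training tokens per parameter. A quick illustrative calculation under that assumption (the 20:1 ratio is the widely cited rule of thumb, not a figure from this list):

```python
def chinchilla_tokens(params: float, tokens_per_param: float = 20.0) -> float:
    """Approximate compute-optimal training-token count for a model size,
    using the ~20 tokens/parameter rule of thumb from the Chinchilla paper."""
    return params * tokens_per_param

# Endpoints of the Cerebras-GPT size range described above.
for p in (111e6, 13e9):
    print(f"{p:.0f} params -> ~{chinchilla_tokens(p):.2e} tokens")
```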

## [TavernAI/TavernAI](https://github.com/TavernAI/TavernAI)

Atmospheric adventure chat, using the Pygmalion AI language model by default and supporting other backends such as KoboldAI, ChatGPT, and GPT-4.

Tags: Full

## [Cohee1207/SillyTavern](https://github.com/Cohee1207/SillyTavern)

SillyTavern is a fork of TavernAI 1.2.8 that is under more active development and has added many major features. At this point they can be thought of as completely independent programs. On its own, Tavern is useless, as it's just a user interface; you need access to an AI backend that can act as the roleplay character. Various backends are supported: the OpenAI API (GPT), KoboldAI (running locally or on Google Colab), and more.

Tags: Full

## [h2oai/h2ogpt](https://github.com/h2oai/h2ogpt)

h2oGPT - The world's best open source GPT

Related links:

Tags: Full

## [mlc-ai/web-llm](https://github.com/mlc-ai/web-llm)

Bringing large language models and chat to web browsers. Everything runs inside the browser with no server support.

Related links:

Tags: Full

## [Stability-AI/StableLM](https://github.com/Stability-AI/StableLM)

This repository contains Stability AI's ongoing development of the StableLM series of language models and will be continuously updated with new checkpoints.

Related links:

Tags: Full

## [clue-ai/ChatYuan](https://github.com/clue-ai/ChatYuan)

ChatYuan: a large language model for dialogue in Chinese and English. (The repos are mostly in Chinese.)

Related links:

Tags: Full

## [OpenLMLab/MOSS](https://github.com/OpenLMLab/MOSS)

MOSS: An open-source tool-augmented conversational language model from Fudan University. (Most examples are in Chinese)

Related links:

Tags: Full