<div align="center"> <h1>console-chat-gpt v6
</h1>Your Ultimate CLI Companion for Chatting with AI Models
Enjoy seamless interactions with ChatGPT, MistralAI, Claude by Anthropic, Grok by xAI and Gemini by Google directly from your command line. </br>Elevate your chat experience with efficiency and ease.
<h3> </h3> <h4 align="center"> <a href="https://github.com/amidabuddha/consoleChatGPT/blob/main/LICENSE.md"> <img src="https://img.shields.io/github/license/amidabuddha/consoleChatGPT" alt="Released under the Apache license." /> </a> <img src="https://img.shields.io/badge/Python-3.10+-blue" alt="Working on Python 3.10+" /> <img src="https://img.shields.io/github/stars/amidabuddha/consoleChatGPT"/> <img src="https://img.shields.io/github/issues/amidabuddha/consoleChatGPT"/> <img src="https://img.shields.io/github/forks/amidabuddha/consoleChatGPT"/> <img src="https://img.shields.io/badge/platform-Linux%20%7C%20macOS-blue"/> </h4> </div>Table of Contents
DISCLAIMER: The intention and implementation of this code are entirely unrelated to OpenAI, MistralAI, Anthropic, xAI, Google AI or any other related parties. There is no affiliation or relationship with any of these companies or their subsidiaries in any form.
## Features
- :new: Anthropic Prompt caching fully supported :new:
- :new: Model Context Protocol (MCP) supported! If you are already using MCP servers, just copy your `claude_desktop_config.json` to the root directory and rename it to `mcp_config.json` to start using it with any model (see the example after this list)! :new:
- :star: Unified chat completion function, separated into an independent library to be used in any application for a seamless cross-provider API experience. The source code is available in Python and TypeScript. :star:
- :star: Streaming with all supported models; disabled by default, it may be enabled in the `settings` menu. :star:
- OpenAI Assistants Beta fully supported.
- AI Managed mode: automatically determines which model to use based on the complexity of the task.
- Configuration File: easily customize the app's settings through the `config.toml` file for complete control over how the app works. Also supported in-app via the `settings` command.
- Role selection: users can define the role of the AI in the conversation, allowing for a more personalized and interactive experience.
- Temperature Control: adjust the temperature of generated responses to control creativity and randomness in the conversation.
- Command Handling: the app responds to various commands entered by the user for easy and intuitive interaction.
- Image input: with selected models.
- Error Handling: clear and helpful error messages to easily understand and resolve any issues.
- Conversation History: review previous interactions and save conversations for future reference, providing context and continuity.
- Graceful Exit: smoothly handles interruptions, ensuring conversations are saved before exiting to avoid loss of progress.
- A nice team: actively adding features, open to ideas and fixing bugs.
Overall, this app focuses on providing a user-friendly and customizable experience with features that enhance personalization, control, and convenience.
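If you already have MCP servers set up for Claude Desktop, enabling them here is just a file copy. The snippet below is a minimal sketch assuming the macOS default location of `claude_desktop_config.json` and that your current directory is the project root; adjust the source path for your platform.

```shell
# Reuse an existing Claude Desktop MCP configuration
# (source path is the macOS default; adjust for your own setup)
cp ~/Library/Application\ Support/Claude/claude_desktop_config.json ./mcp_config.json
```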
## Installation and Usage
The script works fine on Linux and macOS terminals. For Windows, it is recommended to use WSL.
- Clone the repository:

  ```shell
  git clone https://github.com/amidabuddha/console-chat-gpt.git
  ```

- Go inside the folder:

  ```shell
  cd console-chat-gpt
  ```

- Install the necessary dependencies:

  ```shell
  python3 -m pip install -r requirements.txt
  ```

- Get your API key from OpenAI, MistralAI, Anthropic, xAI, or Google AI Studio, depending on your selected LLM.

- The `config.toml.sample` will be automatically copied into `config.toml` upon first run, with a prompt to enter your API key/s. Feel free to change any of the other defaults that are not available in the `settings` in-app menu as per your needs.

- Run the executable:

  ```shell
  python3 main.py
  ```

  Pro-tip: Create an alias for the executable to run from anywhere (see the example after this list).

- Use the `help` command within the chat to check the available options.

- Enjoy
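One way to set up the alias mentioned in the pro-tip above is to add a line to your shell profile. This is a minimal sketch assuming a bash/zsh shell, a repository cloned to `~/console-chat-gpt`, and an illustrative alias name of `chatgpt`; adjust the path, profile file and alias name for your setup.

```shell
# Define a permanent alias (assumed repo location: ~/console-chat-gpt; alias name is just an example)
echo 'alias chatgpt="python3 ~/console-chat-gpt/main.py"' >> ~/.zshrc
source ~/.zshrc

# The chat can now be started from any directory
chatgpt
```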
## Examples
- Prompt example:
- Markdown visualization example:
- Settings and help:
You can find more examples on our Examples page.
## Contributing
Contributions are welcome! If you find any bugs, have feature requests, or want to contribute improvements, please open an issue or submit a pull request.