Appilot

Appilot [ˈæpaɪlət] stands for application-pilot. It is an experimental project that helps you operate applications using GPT-like LLMs.

Features

Demo

Chat to deploy llama-2 on AWS:

https://github.com/seal-io/appilot/assets/5697937/0562fe29-8e97-42ba-bbf6-eaa5b5fefc41

Other use cases:

Quickstart

Prerequisites:

  1. Clone the repository:

     ```shell
     git clone https://github.com/seal-io/appilot && cd appilot
     ```

  2. Run the following command to create the envfile:

     ```shell
     cp .env.example .env
     ```

  3. Edit the .env file and fill in OPENAI_API_KEY.

  4. Run the following command to install. It creates a venv and installs the required dependencies:

     ```shell
     make install
     ```

  5. Run the following command to get started:

     ```shell
     make run
     ```

  6. Ask Appilot to deploy an application, e.g.:

     ```
     > Deploy a jupyterhub.
     ...
     > Get url of the jupyterhub.
     ```
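As one way to script steps 2 and 3 above without opening an editor, you can rewrite the key with `sed`. The snippet below fabricates a minimal `.env.example` so it is self-contained; in the real repository the template already exists, and `sk-your-key-here` is a placeholder for your actual key.

```shell
# Fabricated minimal template so this snippet is self-contained;
# in the cloned repository, .env.example already exists.
printf 'OPENAI_API_KEY=""\nTOOLKITS="kubernetes"\n' > .env.example

# Step 2: create the envfile from the template.
cp .env.example .env

# Step 3, non-interactively: replace the empty key with a placeholder value.
sed -i 's/^OPENAI_API_KEY=.*/OPENAI_API_KEY="sk-your-key-here"/' .env
```

Note that `sed -i` as written targets GNU sed; on BSD/macOS sed the flag requires a suffix argument (`sed -i ''`).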

Usage

Configuration

Appilot is configurable via environment variables or the envfile:

| Parameter | Description | Default |
| --- | --- | --- |
| OPENAI_API_KEY | OpenAI API key; access to the gpt-4 model is required. | "" |
| OPENAI_API_BASE | Custom OpenAI API base. You can integrate other LLMs as long as they serve in the same API style. | "" |
| TOOLKITS | Toolkits to enable. Currently supports Kubernetes and Walrus. Case-insensitive. | "kubernetes" |
| NATURAL_LANGUAGE | Natural language the AI uses to interact with you, e.g., Chinese, Japanese, etc. | "English" |
| SHOW_REASONING | Show AI reasoning steps. | True |
| VERBOSE | Output in verbose mode. | False |
| WALRUS_URL | URL of Walrus; valid when the Walrus toolkit is enabled. | "" |
| WALRUS_API_KEY | API key of Walrus; valid when the Walrus toolkit is enabled. | "" |
| WALRUS_SKIP_TLS_VERIFY | Skip TLS verification for the Walrus API. Use when testing with self-signed certificates. Valid when the Walrus toolkit is enabled. | True |
| WALRUS_DEFAULT_PROJECT | Project name for the default context; valid when the Walrus toolkit is enabled. | "" |
| WALRUS_DEFAULT_ENVIRONMENT | Environment name for the default context; valid when the Walrus toolkit is enabled. | "" |
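For reference, a minimal envfile for the default Kubernetes toolkit might look like the fragment below; the key value is a placeholder, and the other lines simply restate the defaults from the table above.

```shell
# .env — minimal configuration for the default Kubernetes toolkit
OPENAI_API_KEY="sk-your-key-here"   # placeholder; use your real key
TOOLKITS="kubernetes"               # default toolkit
NATURAL_LANGUAGE="English"          # language Appilot replies in
SHOW_REASONING=True                 # print AI reasoning steps
```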

Using Kubernetes Backend

Follow the steps in the Quickstart to run with the Kubernetes backend.

Using Walrus Backend

Prerequisites: Install Walrus.

Walrus serves as the application management engine. It provides features like hybrid infrastructure support, environment management, etc. To enable Walrus backend, edit the envfile:

  1. Set TOOLKITS=walrus
  2. Fill in OPENAI_API_KEY, WALRUS_URL and WALRUS_API_KEY
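Putting the two steps together, the Walrus-related part of the envfile might look like the fragment below; the URL and both keys are placeholders.

```shell
# .env — Walrus backend (placeholder values)
TOOLKITS="walrus"
OPENAI_API_KEY="sk-your-key-here"
WALRUS_URL="https://walrus.example.com"
WALRUS_API_KEY="your-walrus-api-key"
```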

Then you can run Appilot to get started:

```shell
make run
```

Run with Docker

You can run Appilot in a Docker container when using the Walrus backend.

Prerequisites: Install Docker.

  1. Create an envfile from the template:

     ```shell
     cp .env.example .env
     ```

  2. Configure the .env file.

  3. Run the following command:

     ```shell
     docker run -it --env-file .env sealio/appilot:main
     ```

Using LLM alternatives to GPT-4

You can use other LLMs as the reasoning engine of Appilot, as long as they serve inference APIs in an OpenAI-compatible way.

  1. Configure the .env file, then set OPENAI_API_BASE=https://your-api-base.

  2. Run Appilot as normal.
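For example, pointing Appilot at a self-hosted, OpenAI-compatible inference server might look like the fragment below; both values are placeholders, and the exact base URL depends on how your server exposes its API.

```shell
# .env — use an OpenAI-compatible LLM server instead of api.openai.com
OPENAI_API_BASE="https://your-api-base"   # placeholder endpoint
OPENAI_API_KEY="your-server-key"          # key expected by that server
```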

How it works

The following is the architecture diagram of Appilot:

[Architecture diagram: appilot-arch]

License

Copyright (c) 2023 Seal, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License in the LICENSE file.

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.