AIAC

Artificial Intelligence Infrastructure-as-Code Generator.

<kbd><img src="demo.gif" style="width: 100%; border: 1px solid silver;" border="1" alt="demo"></kbd>


Description

aiac is a library and command line tool to generate IaC (Infrastructure as Code) templates, configurations, utilities, queries and more via LLM providers such as OpenAI, Amazon Bedrock and Ollama.

The CLI allows you to ask a model to generate templates for different scenarios (e.g. "get terraform for AWS EC2"). It composes an appropriate request to the selected provider, and either stores the resulting code in a file, prints it to standard output, or both.

Users can define multiple "backends" targeting different LLM providers and environments using a simple configuration file.

Use Cases and Example Prompts

aiac can be used for a variety of tasks, including:

  * Generating IaC
  * Generating configuration files
  * Generating CI/CD pipelines
  * Generating policy as code
  * Generating utilities
  * Building command lines
  * Building queries

Instructions

Before installing/running aiac, you may need to configure your LLM providers or collect some information.

For OpenAI, you will need an API key in order for aiac to work. Refer to OpenAI's pricing model for more information. If you're not using the API hosted by OpenAI (for example, you may be using Azure OpenAI), you will also need to provide the API URL endpoint.
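
If you prefer not to store the key directly in the configuration file, you can keep it in an environment variable and reference it from the configuration instead (see the Configuration section below); the key value here is only a placeholder:

export OPENAI_API_KEY="sk-..."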

For Amazon Bedrock, you will need an AWS account with Bedrock enabled, and access to relevant models. Refer to the Bedrock documentation for more information.

For Ollama, you only need the URL of the local Ollama API server, including the /api path prefix. This defaults to http://localhost:11434/api. Ollama does not provide an authentication mechanism, though one may be in place if the server sits behind a proxy; this scenario is not currently supported by aiac.

Installation

Via brew:

brew tap gofireflyio/aiac https://github.com/gofireflyio/aiac
brew install aiac

Using docker:

docker pull ghcr.io/gofireflyio/aiac

Using go install:

go install github.com/gofireflyio/aiac/v5@latest

Alternatively, clone the repository and build from source:

git clone https://github.com/gofireflyio/aiac.git
cd aiac
go build

aiac is also available in the Arch User Repository (AUR) as aiac (which compiles from source) and aiac-bin (which downloads a compiled executable).
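
For example, using an AUR helper such as yay (the choice of helper is just an assumption; any AUR helper or a manual makepkg build works):

yay -S aiac-bin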

Configuration

aiac is configured via a TOML configuration file. Unless a specific path is provided, aiac looks for a configuration file in the user's XDG_CONFIG_HOME directory, specifically ${XDG_CONFIG_HOME}/aiac/aiac.toml. On Unix-like operating systems, this will default to "~/.config/aiac/aiac.toml". If you want to use a different path, provide the --config or -c flag with the file's path.
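
For example, to point aiac at a configuration file in a non-standard location (the path below is just a placeholder):

aiac -c /path/to/aiac.toml terraform for AWS EC2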

The configuration file defines one or more named backends. Each backend has a type identifying the LLM provider (e.g. "openai", "bedrock", "ollama"), and various settings relevant to that provider. Multiple backends of the same LLM provider can be configured, for example for "staging" and "production" environments.

Here's an example configuration file:

default_backend = "official_openai"   # Default backend when one is not selected

[backends.official_openai]
type = "openai"
api_key = "API KEY"
# Or 
# api_key = "$OPENAI_API_KEY"
default_model = "gpt-4o"              # Default model to use for this backend

[backends.azure_openai]
type = "openai"
url = "https://tenant.openai.azure.com/openai/deployments/test"
api_key = "API KEY"
api_version = "2023-05-15"            # Optional
auth_header = "api-key"               # Default is "Authorization"
extra_headers = { X-Header-1 = "one", X-Header-2 = "two" }

[backends.aws_staging]
type = "bedrock"
aws_profile = "staging"
aws_region = "eu-west-2"

[backends.aws_prod]
type = "bedrock"
aws_profile = "production"
aws_region = "us-east-1"
default_model = "amazon.titan-text-express-v1"

[backends.localhost]
type = "ollama"
url = "http://localhost:11434/api"     # This is the default

Notes:

  1. Every backend can have a default model (via configuration key default_model). If not provided, calls that do not define a model will fail.
  2. Backends of type "openai" can change the header used for authorization by providing the auth_header setting. This defaults to "Authorization", but Azure OpenAI uses "api-key" instead. When the header is either "Authorization" or "Proxy-Authorization", the header's value for requests will be "Bearer API_KEY". If it's anything else, it'll simply be "API_KEY".
  3. Backends of type "openai" and "ollama" support adding extra headers to every request issued by aiac, by utilizing the extra_headers setting (see the example below).
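
For example, the following backend sketch sets a default model and adds an extra header (both the model name and the header are hypothetical):

[backends.local_ollama]
type = "ollama"
url = "http://localhost:11434/api"
default_model = "codellama:13b"                  # Hypothetical model name
extra_headers = { X-Request-Source = "aiac" }    # Hypothetical header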

Usage

Once a configuration file is created, you can start generating code; you only need to refer to backends by name. You can use aiac from the command line, or as a Go library.

Command Line

Listing Models

Before starting to generate code, you can list all models available in a backend:

aiac -b aws_prod --list-models

This will return a list of all available models. Note that depending on the LLM provider, this may list models that aren't accessible or enabled for the specific account.
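
The output is a plain list of model identifiers. For a Bedrock backend it might look something like the following (illustrative only; the actual list depends on your account, region and enabled models):

amazon.titan-text-express-v1
anthropic.claude-3-sonnet-20240229-v1:0
meta.llama3-70b-instruct-v1:0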

Generating Code

By default, aiac prints the extracted code to standard output and opens an interactive shell that allows conversing with the model, retrying requests, saving output to files, copying code to clipboard, and more:

aiac terraform for AWS EC2

This will use the default backend in the configuration file and the default model for that backend, assuming they are indeed defined. To use a specific backend, provide the --backend or -b flag:

aiac -b aws_prod terraform for AWS EC2

To use a specific model, provide the --model or -m flag:

aiac -m gpt-4-turbo terraform for AWS EC2

You can ask aiac to save the resulting code to a specific file:

aiac terraform for eks --output-file=eks.tf

You can use a flag to save the full Markdown output as well:

aiac terraform for eks --output-file=eks.tf --readme-file=eks.md

If you prefer aiac to print the full Markdown output to standard output rather than the extracted code, use the -f or --full flag:

aiac terraform for eks -f

You can run aiac in non-interactive mode, which simply prints the generated code to standard output (and optionally saves it to files with the above flags), by providing the -q or --quiet flag:

aiac terraform for eks -q

In quiet mode, you can also send the resulting code to the clipboard by providing the --clipboard flag:

aiac terraform for eks -q --clipboard

Note that in this case aiac will not exit until the contents of the clipboard change. This is due to the mechanics of the clipboard.

Via Docker

All the same instructions apply, except that you execute a Docker image and mount your configuration file into the container:

docker run \
    -it \
    -v ~/.config/aiac/aiac.toml:~/.config/aiac/aiac.toml \
    ghcr.io/gofireflyio/aiac terraform for ec2

As a Library

You can use aiac as a Go library:

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/gofireflyio/aiac/v5/libaiac"
)

func main() {
    // Load the default configuration path. You can also provide an explicit
    // path with libaiac.New("/path/to/aiac.toml").
    aiac, err := libaiac.New()
    if err != nil {
        log.Fatalf("Failed creating aiac object: %s", err)
    }

    ctx := context.TODO()

    // List the models available in the chosen backend.
    models, err := aiac.ListModels(ctx, "backend name")
    if err != nil {
        log.Fatalf("Failed listing models: %s", err)
    }

    fmt.Println("Available models:", models)

    // Start a chat against a specific backend and model.
    chat, err := aiac.Chat(ctx, "backend name", "model name")
    if err != nil {
        log.Fatalf("Failed starting chat: %s", err)
    }

    res, err := chat.Send(ctx, "generate terraform for eks")
    if err != nil {
        log.Fatalf("Failed sending prompt: %s", err)
    }

    // Follow-up messages continue the same conversation.
    res, err = chat.Send(ctx, "region must be eu-central-1")
    if err != nil {
        log.Fatalf("Failed sending prompt: %s", err)
    }

    fmt.Println(res)
}
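
To use the library in your own project, first add it as a dependency using standard Go tooling:

go get github.com/gofireflyio/aiac/v5@latest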

Upgrading from v4 to v5

Version 5.0.0 introduced a significant change to the aiac API in both the command line and library forms, as per feedback from the community.

Changes in Configuration

Before v5, there was no concept of a configuration file or named backends. Users had to provide all the information necessary to contact a specific LLM provider via command line flags or environment variables, and the library allowed creating a "client" object that could only talk with one LLM provider.

Backends are now configured only via the configuration file. Refer to the Configuration section for instructions. Provider-specific flags such as --api-key, --aws-profile, etc. (and their respective environment variables, if any) are no longer accepted.

Since v5, backends are also named. Previously, the --backend and -b flags referred to the name of the LLM provider (e.g. "openai", "bedrock", "ollama"). Now they refer to whatever name you've defined in the configuration file:

[backends.my_local_llm]
type = "ollama"
url = "http://localhost:11434/api"

Here we configure an Ollama backend named "my_local_llm". When you want to generate code with this backend, you will use -b my_local_llm rather than -b ollama, as multiple backends may exist for the same LLM provider.
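
For example, generating code with this backend would look like this:

aiac -b my_local_llm terraform for AWS EC2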

Changes in CLI Invocation

Before v5, the command line was split into three subcommands: get, list-models and version. Due to the hierarchical nature of the CLI, flags were sometimes not accepted if provided in the "wrong" location. For example, the --model flag had to be provided after the word "get", otherwise it was not accepted. In v5 there are no subcommands, so the position of flags no longer matters.

The list-models subcommand is replaced with the flag --list-models, and the version subcommand is replaced with the flag --version.

Before v5:

aiac -b ollama list-models

Since v5:

aiac -b my_local_llm --list-models
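
Similarly, the version is now printed via a flag rather than a subcommand:

aiac --version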

In earlier versions, the word "get" was actually a subcommand and not truly part of the prompt sent to the LLM provider. Since v5, there is no "get" subcommand, so you no longer need to add this word to your prompts.

Before v5:

aiac get terraform for S3 bucket

Since v5:

aiac terraform for S3 bucket

That said, adding either the word "get" or "generate" will not hurt, as v5 will simply remove it if provided.

Changes in Model Usage and Support

Before v5, the models for each LLM provider were hardcoded in each backend implementation, and each provider had a hardcoded default model. This significantly limited the usability of the project, and required us to update aiac whenever new models were added or deprecated. On the other hand, we could provide extra information about each model, such as its context lengths and type, as we manually extracted them from the provider documentation.

Since v5, aiac no longer hardcodes any models, including default ones. It will not attempt to verify the model you select actually exists. The --list-models flag will now directly contact the chosen backend API to get a list of supported models. Setting a model when generating code simply sends its name to the API as-is. Also, instead of hardcoding a default model for each backend, users can define their own default models in the configuration file:

[backends.my_local_llm]
type = "ollama"
url = "http://localhost:11434/api"
default_model = "mistral:latest"

Before v5, aiac supported both completion models and chat models. Since v5, it only supports chat models. Since none of the LLM provider APIs actually note whether a model is a completion model or a chat model (or even an image or video model), the --list-models flag may list models which are not actually usable, and attempting to use them will result in an error being returned from the provider API. We decided to drop support for completion models because they require setting a maximum number of tokens for the API to generate (at least in OpenAI), which we can no longer do without knowing the model's context length. Chat models are not only much more useful, they also do not have this limitation.

Other Changes

Most LLM provider APIs, when returning a response to a prompt, will include a "reason" for why the response ended where it did. Generally, the response should end because the model finished generating a response, but sometimes the response may be truncated due to the model's context length or the user's token utilization. When the response did not "stop" because it finished generation, the response is said to be "truncated". Before v5, if the API returned that the response was truncated, aiac returned an error. Since v5, an error is no longer returned, as it seems that some providers do not return an accurate stop reason. Instead, the library returns the stop reason as part of its output for users to decide how to proceed.
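
As a rough illustration when using the library, you can inspect the stop reason on the response returned from chat.Send and decide how to proceed (this continues the example from the "As a Library" section; the field names StopReason and FullOutput are assumptions here and may differ from the actual response type):

res, err := chat.Send(ctx, "generate terraform for eks")
if err != nil {
    log.Fatalf("Failed sending prompt: %s", err)
}

// Stop reasons are provider-specific (e.g. "stop", "length", "max_tokens");
// it is up to the caller to decide whether the response should be treated
// as truncated.
log.Printf("model stopped with reason: %s", res.StopReason) // StopReason is an assumed field name
fmt.Println(res.FullOutput)                                 // FullOutput is an assumed field name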

Example Output

Command line prompt:

aiac dockerfile for nodejs with comments

Output:

FROM node:latest

# Create app directory
WORKDIR /usr/src/app

# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./

RUN npm install
# If you are building your code for production
# RUN npm ci --only=production

# Bundle app source
COPY . .

EXPOSE 8080
CMD [ "node", "index.js" ]

Troubleshooting

Most errors you are likely to encounter come from the LLM provider API, e.g. OpenAI or Amazon Bedrock, rather than from aiac itself.

License

This code is published under the terms of the Apache License 2.0.