
<picture> <source media="(prefers-color-scheme: dark)" srcset="./images/banner-white.png" width="600"> <img alt="Text changing depending on mode. Light: 'So light!' Dark: 'So dark!'" src="./images/banner-black.png" width="600"> </picture> <br/>


k8sgpt is a tool for scanning your Kubernetes clusters, diagnosing, and triaging issues in simple English.

It codifies SRE experience into its analyzers and helps to pull out the most relevant information and enrich it with AI.

It integrates out of the box with OpenAI, Azure, Cohere, Amazon Bedrock, Google Gemini, and local models.

<a href="https://www.producthunt.com/posts/k8sgpt?utm_source=badge-featured&utm_medium=badge&utm_souce=badge-k8sgpt" target="_blank"><img src="https://api.producthunt.com/widgets/embed-image/v1/featured.svg?post_id=389489&theme=light" alt="K8sGPT - K8sGPT&#0032;gives&#0032;Kubernetes&#0032;Superpowers&#0032;to&#0032;everyone | Product Hunt" style="width: 250px; height: 54px;" width="250" height="54" /></a> <a href="https://hellogithub.com/repository/9dfe44c18dfb4d6fa0181baf8b2cf2e1" target="_blank"><img src="https://abroad.hellogithub.com/v1/widgets/recommend.svg?rid=9dfe44c18dfb4d6fa0181baf8b2cf2e1&claim_uid=gqG4wmzkMrP0eFy" alt="Featured|HelloGitHub" style="width: 250px; height: 54px;" width="250" height="54" /></a>

<img src="images/demo4.gif" width="650" />

CLI Installation

Linux/Mac via brew

$ brew install k8sgpt

or

brew tap k8sgpt-ai/k8sgpt
brew install k8sgpt
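
Afterwards, you can verify the installation succeeded (assuming the binary is on your PATH):

k8sgpt version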
<details> <summary>RPM-based installation (RedHat/CentOS/Fedora)</summary>

32 bit:

<!---x-release-please-start-version-->
sudo rpm -ivh https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.3.46/k8sgpt_386.rpm
<!---x-release-please-end-->

64 bit:

<!---x-release-please-start-version-->
sudo rpm -ivh https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.3.46/k8sgpt_amd64.rpm
<!---x-release-please-end--> </details> <details> <summary>DEB-based installation (Ubuntu/Debian)</summary>

32 bit:

<!---x-release-please-start-version-->
curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.3.46/k8sgpt_386.deb
sudo dpkg -i k8sgpt_386.deb
<!---x-release-please-end-->

64 bit:

<!---x-release-please-start-version-->
curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.3.46/k8sgpt_amd64.deb
sudo dpkg -i k8sgpt_amd64.deb
<!---x-release-please-end--> </details> <details> <summary>APK-based installation (Alpine)</summary>

32 bit:

<!---x-release-please-start-version-->
wget https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.3.46/k8sgpt_386.apk
apk add --allow-untrusted k8sgpt_386.apk
<!---x-release-please-end-->

64 bit:

<!---x-release-please-start-version-->
wget https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.3.46/k8sgpt_amd64.apk
apk add --allow-untrusted k8sgpt_amd64.apk
<!---x-release-please-end--> </details> <details> <summary>Failing Installation on WSL or Linux (missing gcc)</summary>

When installing k8sgpt via Homebrew on WSL or Linux, you may encounter the following error:

==> Installing k8sgpt from k8sgpt-ai/k8sgpt
Error: The following formula cannot be installed from a bottle and must be
built from the source.
  k8sgpt
Install Clang or run `brew install gcc`.

Installing gcc as suggested will not fix the problem. Instead, install the build-essential package:

   sudo apt-get update
   sudo apt-get install build-essential
</details>

Windows
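
Download the latest Windows binary of k8sgpt for your architecture from the GitHub releases page, extract it to your desired location, and add that location to your system PATH variable.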

Operator Installation

To install K8sGPT within a Kubernetes cluster, please use our k8sgpt-operator; installation instructions are available here.

This mode of operation is ideal for continuous monitoring of your cluster and can integrate with your existing monitoring such as Prometheus and Alertmanager.

Quick Start
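
- Currently the default AI provider is OpenAI, so you will need an OpenAI API key; running k8sgpt generate opens a browser link to create one.
- Run k8sgpt auth add to save the key in K8sGPT.
- Run k8sgpt analyze to scan your cluster, adding --explain for an AI-generated explanation of each issue.
- Add --with-doc to enrich results with the official Kubernetes documentation.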

Analyzers

K8sGPT uses analyzers to triage and diagnose issues in your cluster. It ships with a set of built-in analyzers, and you can also write your own.

Built in analyzers

Some analyzers are enabled by default (for example the Pod, Service, Deployment, ReplicaSet, StatefulSet, CronJob, Node, PersistentVolumeClaim, and Ingress analyzers that appear in the stats output later in this guide), while others are optional. See the official documentation for the full list.

Examples

Run a scan with the default analyzers

k8sgpt generate
k8sgpt auth add
k8sgpt analyze --explain
k8sgpt analyze --explain --with-doc

Filter on resource

k8sgpt analyze --explain --filter=Service

Filter by namespace

k8sgpt analyze --explain --filter=Pod --namespace=default

Output to JSON

k8sgpt analyze --explain --filter=Service --output=json

Anonymize during explain

k8sgpt analyze --explain --filter=Service --output=json --anonymize
<details> <summary> Using filters </summary>

List filters

k8sgpt filters list

Add default filters

k8sgpt filters add [filter(s)]

Examples:
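
Using analyzer names that appear elsewhere in this guide, you can add a single filter or several at once:

k8sgpt filters add Service
k8sgpt filters add Ingress,Pod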

Remove default filters

k8sgpt filters remove [filter(s)]

Examples:
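
Likewise, you can remove a single default filter or several at once:

k8sgpt filters remove Service
k8sgpt filters remove Ingress,Pod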

</details> <details> <summary> Additional commands </summary>

List configured backends

k8sgpt auth list

Update configured backends

k8sgpt auth update $MY_BACKEND1,$MY_BACKEND2..

Remove configured backends

k8sgpt auth remove -b $MY_BACKEND1,$MY_BACKEND2..

List integrations

k8sgpt integrations list

Activate integrations

k8sgpt integrations activate [integration(s)]
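
For example, to activate the Trivy integration (one of the integrations K8sGPT supports; confirm the available names with k8sgpt integrations list):

k8sgpt integrations activate trivy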

Use integration

k8sgpt analyze --filter=[integration(s)]

Deactivate integrations

k8sgpt integrations deactivate [integration(s)]

Serve mode

k8sgpt serve

Analysis with serve mode

grpcurl -plaintext -d '{"namespace": "k8sgpt", "explain" : "true"}' localhost:8080 schema.v1.ServerAnalyzerService/Analyze
{
  "status": "OK"
}
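
If you are unsure which gRPC services and methods the server exposes, grpcurl can enumerate them via server reflection (which the example above already relies on, since no .proto file is passed):

grpcurl -plaintext localhost:8080 list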

Analysis with custom headers

k8sgpt analyze --explain --custom-headers CustomHeaderKey:CustomHeaderValue

Print analysis stats

k8sgpt analyze -s

The stats mode helps with debugging and understanding where analysis time is spent by displaying per-analyzer timings, for example:
- Analyzer Ingress took 47.125583ms 
- Analyzer PersistentVolumeClaim took 53.009167ms 
- Analyzer CronJob took 57.517792ms 
- Analyzer Deployment took 156.6205ms 
- Analyzer Node took 160.109833ms 
- Analyzer ReplicaSet took 245.938333ms 
- Analyzer StatefulSet took 448.0455ms 
- Analyzer Pod took 5.662594708s 
- Analyzer Service took 38.583359166s

Diagnostic information

To collect diagnostic information, use the following command to create a dump_<timestamp>.json file in your local directory.

k8sgpt dump
</details>

LLM AI Backends

K8sGPT uses your chosen LLM (generative AI provider) when you want to explain the analysis results using the --explain flag, e.g. k8sgpt analyze --explain. You can use the --backend flag to specify a configured provider (openai by default).

You can list available providers using k8sgpt auth list:

Default:
> openai
Active:
Unused:
> openai
> localai
> ollama
> azureopenai
> cohere
> amazonbedrock
> amazonsagemaker
> google
> huggingface
> noopai
> googlevertexai
> ibmwatsonxai
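
For example, to run an explained analysis against a specific configured backend from the list above:

k8sgpt analyze --explain --backend localai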

For detailed documentation on how to configure and use each provider, see here.

To set a new default provider

k8sgpt auth default -p azureopenai
Default provider set to azureopenai

Key Features

<details> <summary> Anonymization </summary>

With this option, the data is anonymized before being sent to the AI backend. During the analysis execution, k8sgpt retrieves sensitive data (Kubernetes object names, labels, etc.). This data is masked when sent to the AI backend and replaced by a key that can be used to de-anonymize the data when the solution is returned to the user.
  1. Error reported during analysis:
Error: HorizontalPodAutoscaler uses StatefulSet/fake-deployment as ScaleTargetRef which does not exist.
  2. Payload sent to the AI backend:
Error: HorizontalPodAutoscaler uses StatefulSet/tGLcCRcHa1Ce5Rs as ScaleTargetRef which does not exist.
  3. Payload returned by the AI:
The Kubernetes system is trying to scale a StatefulSet named tGLcCRcHa1Ce5Rs using the HorizontalPodAutoscaler, but it cannot find the StatefulSet. The solution is to verify that the StatefulSet name is spelled correctly and exists in the same namespace as the HorizontalPodAutoscaler.
  4. Payload returned to the user:
The Kubernetes system is trying to scale a StatefulSet named fake-deployment using the HorizontalPodAutoscaler, but it cannot find the StatefulSet. The solution is to verify that the StatefulSet name is spelled correctly and exists in the same namespace as the HorizontalPodAutoscaler.

Note: Anonymization does not currently apply to events.

Further Details

In a few analyzers, such as Pod, the event messages fed to the AI backend are not known beforehand, so they are not masked for the time being.

</details> <details> <summary> Configuration management</summary>

k8sgpt stores config data in the $XDG_CONFIG_HOME/k8sgpt/k8sgpt.yaml file. The data is stored in plain text, including your OpenAI key.

Config file locations:

| OS      | Path                                             |
| ------- | ------------------------------------------------ |
| MacOS   | ~/Library/Application Support/k8sgpt/k8sgpt.yaml |
| Linux   | ~/.config/k8sgpt/k8sgpt.yaml                     |
| Windows | %LOCALAPPDATA%/k8sgpt/k8sgpt.yaml                |
</details> <details> <summary> Remote caching </summary>

There may be scenarios where caching remotely is preferred. In these scenarios, K8sGPT supports AWS S3 or Azure Blob storage integration.

<em>Note: You can configure and use only one remote cache at a time.</em>

Adding a remote cache
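
A sketch of both supported backends (flag names follow the k8sgpt documentation; run k8sgpt cache add --help to confirm them for your version):

k8sgpt cache add s3 --region <aws-region> --bucket <bucket-name>
k8sgpt cache add azure --storageacc <storage-account-name> --container <container-name>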

Listing cache items

k8sgpt cache list

Purging an object from the cache

Note: purging an object using this command will delete upstream files, so it requires appropriate permissions.

k8sgpt cache purge $OBJECT_NAME

Removing the remote cache

Note: this will not delete the upstream S3 bucket or Azure storage container.

k8sgpt cache remove
</details> <details> <summary> Custom Analyzers</summary>

There may be scenarios where you wish to write your own analyzer in a language of your choice. K8sGPT supports this: implement the analyzer against the K8sGPT schema and serve it for consumption. Define the analyzer within the K8sGPT configuration and it will be added to the scanning process. You will also need to enable the following flag during analysis:

k8sgpt analyze --custom-analysis

Here is an example localhost analyzer written in Rust. When it is running on localhost:8080, the K8sGPT config can pick it up with the following additions:

custom_analyzers:
  - name: host-analyzer
    connection:
      url: localhost
      port: 8080

This makes it possible to pass host OS information (from this analyzer example) through to K8sGPT, which uses it as additional context during normal analysis.

See the docs on how to write a custom analyzer

Listing configured custom analyzers

k8sgpt custom-analyzer list

Adding a custom analyzer without installing it

k8sgpt custom-analyzer add --name my-custom-analyzer --port 8085

Removing custom analyzer

k8sgpt custom-analyzer remove --names "my-custom-analyzer,my-custom-analyzer-2"
</details>

Documentation

Our official documentation is available here.

Contributing

Please read our contributing guide.

Community

Find us on Slack

<a href="https://github.com/k8sgpt-ai/k8sgpt/graphs/contributors"> <img src="https://contrib.rocks/image?repo=k8sgpt-ai/k8sgpt" /> </a>

License

FOSSA Status