Kubernetes Terraform Provider

The k8s Terraform provider enables Terraform to deploy Kubernetes resources. Unlike the official Kubernetes provider, it handles raw manifests, leveraging controller-runtime and the Unstructured API directly to let developers work with any Kubernetes resource natively.

This project is a hard fork of ericchiang/terraform-provider-k8s.

Installation

The Go Get way

Use go get to install the provider:

go get -u github.com/banzaicloud/terraform-provider-k8s

Register the plugin in ~/.terraformrc (see Documentation for Windows users):

providers {
  k8s = "/$GOPATH/bin/terraform-provider-k8s"
}

The Terraform Plugin way (enables versioning)

Download a release from the Release page and make sure the name matches the following convention:

OS       Version  Name
Linux    0.4.0    terraform-provider-k8s_v0.4.0
Linux    0.3.0    terraform-provider-k8s_v0.3.0
Windows  0.4.0    terraform-provider-k8s_v0.4.0.exe
Windows  0.3.0    terraform-provider-k8s_v0.3.0.exe

Install the plugin as described in the Terraform Third-party Plugin documentation:

Operating system     User plugins directory
Windows              %APPDATA%\terraform.d\plugins
All other systems    ~/.terraform.d/plugins
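
For example, on Linux or macOS the downloaded binary can be installed by hand (a sketch; the 0.4.0 file name is illustrative):

mkdir -p ~/.terraform.d/plugins
# move the release binary into the user plugins directory and mark it executable
mv terraform-provider-k8s_v0.4.0 ~/.terraform.d/plugins/
chmod +x ~/.terraform.d/plugins/terraform-provider-k8s_v0.4.0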

Usage

By default the provider uses your default Kubernetes configuration, but it accepts some optional configuration parameters; see the Configuration section (these parameters are the same as for the official Kubernetes provider).

terraform {
  required_providers {
    k8s = {
      version = ">= 0.8.0"
      source  = "banzaicloud/k8s"
    }
  }
}

provider "k8s" {
  config_context = "prod-cluster"
}

The k8s Terraform provider introduces a single Terraform resource, a k8s_manifest. The resource has a content field that holds a raw manifest in JSON or YAML format.

variable "replicas" {
  type    = "string"
  default = 3
}

data "template_file" "nginx-deployment" {
  template = "${file("manifests/nginx-deployment.yaml")}"

  vars {
    replicas = "${var.replicas}"
  }
}

resource "k8s_manifest" "nginx-deployment" {
  content = "${data.template_file.nginx-deployment.rendered}"
}

# creating a second resource in the nginx namespace
resource "k8s_manifest" "nginx-deployment" {
  content   = "${data.template_file.nginx-deployment.rendered}"
  namespace = "nginx"
}

In this case, manifests/nginx-deployment.yaml is a templated Deployment manifest.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: ${replicas}
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

The Kubernetes resources can then be managed through Terraform.

$ terraform apply
# ...
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
$ kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3         3         3            3           1m
$ terraform apply -var 'replicas=5'
# ...
Apply complete! Resources: 0 added, 2 changed, 0 destroyed.
$ kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   5         5         5            3           3m
$ terraform destroy -auto-approve
# ...
Destroy complete! Resources: 2 destroyed.
$ kubectl get deployments
No resources found.

NOTE: If the YAML formatted content contains multiple documents (separated by ---), only the first non-empty document will be parsed. This is because Terraform is designed to represent a single infrastructure object with a single Terraform resource on the provider side:

resource types correspond to an infrastructure object type that is managed via a remote network API -- Terraform Documentation

You can work around this easily with the following snippet (however, we still suggest using separate resources):

locals {
  resources = split("\n---\n", data.template_file.nginx-deployment.rendered)
}

resource "k8s_manifest" "nginx-deployment" {
  count = length(local.resources)

  content = local.resources[count.index]
}
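
If the rendered template may contain empty documents (for example a trailing --- separator), a for expression can filter them out first; a minimal sketch, assuming Terraform 0.12+:

locals {
  # keep only documents that are non-empty after trimming whitespace
  resources = [
    for doc in split("\n---\n", data.template_file.nginx-deployment.rendered) :
    doc if trimspace(doc) != ""
  ]
}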

Helm workflow

Requirements

Get a versioned chart into your source code and render it:

Helm 2:

helm fetch stable/nginx-ingress --version 1.24.4 --untardir charts --untar
helm template --namespace nginx-ingress ./charts/nginx-ingress --output-dir manifests/

Helm 3:

helm pull stable/nginx-ingress --version 1.24.4 --untardir charts --untar
helm template --namespace nginx-ingress nginx-ingress ./charts/nginx-ingress --output-dir manifests/

Apply the main.tf with the k8s provider:

# terraform 0.12.x
locals {
  nginx-ingress_files = fileset(path.module, "manifests/nginx-ingress/templates/*.yaml")
}

data "local_file" "nginx-ingress_files_content" {
  for_each = local.nginx-ingress_files
  filename = each.value
}

resource "k8s_manifest" "nginx-ingress" {
  for_each  = data.local_file.nginx-ingress_files_content
  content   = each.value.content
  namespace = "nginx"
}
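
Rendered Helm templates often contain multiple YAML documents in a single file. A sketch of how the fileset pattern could be combined with the split workaround from the note above (this would replace the k8s_manifest resource in the previous snippet; the "<filename>:<index>" keying is just one possible choice):

locals {
  # flatten every rendered file into individual, non-empty YAML documents,
  # keyed by "<filename>:<index>" so each document keeps a stable address
  nginx-ingress_documents = merge([
    for filename, f in data.local_file.nginx-ingress_files_content : {
      for idx, doc in split("\n---\n", f.content) :
      "${filename}:${idx}" => doc if trimspace(doc) != ""
    }
  ]...)
}

resource "k8s_manifest" "nginx-ingress" {
  for_each  = local.nginx-ingress_documents
  content   = each.value
  namespace = "nginx"
}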

Configuration

There are generally three ways to configure the Kubernetes provider: via a config file, via an in-cluster service account token, or via statically defined credentials.

File config

The provider always first tries to load a config file from a given (or default) location. Depending on whether you have a current context set, this may require setting config_context_auth_info and/or config_context_cluster and/or config_context.
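
For example, if your kubeconfig has no current context set, the cluster and user can be selected explicitly (a sketch; the names are illustrative):

provider "k8s" {
  config_context_cluster   = "chosen-cluster"
  config_context_auth_info = "chosen-user"
}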

Setting default config context

Here's an example of how to set a default context and avoid all provider configuration:

kubectl config set-context default-system \
  --cluster=chosen-cluster \
  --user=chosen-user

kubectl config use-context default-system

Read more about kubectl in the official docs.
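
With a default context in place, the provider block itself can stay empty; the config file is then loaded from its default location:

provider "k8s" {}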

In-cluster service account token

If no other configuration is specified, and the provider detects that it is running in a Kubernetes pod, it will try to use the service account token from the /var/run/secrets/kubernetes.io/serviceaccount/token path. In-cluster execution is detected solely by the presence of both the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT environment variables with non-empty values.

provider "k8s" {
  load_config_file = "false"
}

If any other setting is specified, either in a config file or as static provider configuration, the in-cluster service account token will not be tried.

Statically defined credentials

Another way is to statically define TLS certificate credentials:

provider "k8s" {
  load_config_file = "false"

  host = "https://104.196.242.174"

  client_certificate     = "${file("~/.kube/client-cert.pem")}"
  client_key             = "${file("~/.kube/client-key.pem")}"
  cluster_ca_certificate = "${file("~/.kube/cluster-ca-cert.pem")}"
}

or username and password (HTTP basic authentication):

provider "k8s" {
  load_config_file = "false"

  host = "https://104.196.242.174"

  username = "username"
  password = "password"
}

If you have both a valid config file and static configuration, the static one acts as an override: any static field overrides its counterpart loaded from the config.
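
For example, the following hypothetical setup still loads the kubeconfig, but the statically defined host overrides the server address found in it:

provider "k8s" {
  # everything else (credentials, CA, context) still comes from the config file
  host = "https://104.196.242.174"
}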

Argument Reference

The provider accepts the same configuration arguments as the official Kubernetes provider. The ones used throughout this document are:

- host - (Optional) The hostname (in form of URI) of the Kubernetes API server.
- username - (Optional) The username for HTTP basic authentication against the API server.
- password - (Optional) The password for HTTP basic authentication against the API server.
- client_certificate - (Optional) PEM-encoded client certificate for TLS authentication.
- client_key - (Optional) PEM-encoded client certificate key for TLS authentication.
- cluster_ca_certificate - (Optional) PEM-encoded root certificate bundle for TLS authentication.
- config_context - (Optional) The context to use from the config file.
- config_context_auth_info - (Optional) The auth info (user) entry to use from the config file.
- config_context_cluster - (Optional) The cluster entry to use from the config file.
- load_config_file - (Optional) Set to false to disable loading the local config file (for in-cluster or purely static configuration).

Release

gpg --fingerprint $MY_EMAIL
export GPG_FINGERPRINT="THEF FING ERPR INTO OFTH  EPUB LICK EYOF YOU!"
goreleaser release --rm-dist -p 2

Testing

Create a kind cluster with the attached configuration file:

kind create cluster --config hack/kind.yaml

Once the cluster is running, run the setup script:

./hack/setup-kind.sh

Finally, run the integration test suite:

make EXAMPLE_DIR=test/terraform test-integration