# Kip, the Kubernetes Cloud Instance Provider
Kip is a Virtual Kubelet provider that allows a Kubernetes cluster to transparently launch pods onto their own cloud instances. The kip pod is run on a cluster and will create a virtual Kubernetes node in the cluster. When a pod is scheduled onto the Virtual Kubelet, Kip starts a right-sized cloud instance for the pod’s workload and dispatches the pod onto the instance. When the pod is finished running, the cloud instance is terminated. We call these cloud instances “cells”.
When workloads run on Kip, your cluster size naturally scales with the cluster workload, pods are strongly isolated from each other, and the user is freed from managing worker nodes and strategically packing pods onto nodes. This results in lower cloud costs, improved security, and reduced operational overhead.
## Requirements
To build Kip you need:
- Go 1.14+ (older versions may work)
- deepcopy-gen: `go install k8s.io/code-generator/cmd/deepcopy-gen`
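With those in place, a build might look like the following. This is a sketch assuming a standard Go toolchain; the `./cmd/kip` package path is an assumption about the repository layout:

```sh
# Install the deepcopy-gen code generator used during the build
go install k8s.io/code-generator/cmd/deepcopy-gen

# Build the provider binary (./cmd/kip is an assumed package path)
go build ./cmd/kip
```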
## Installation
There are two ways to get Kip up and running.
- Use the provided Terraform scripts to create a new Kubernetes cluster with a single Kip node. There are instructions for AWS and GCP.
- Add Kip to an existing Kubernetes cluster. This option is documented below.
### Install Kip into an existing cluster
To deploy Kip into an existing cluster, you'll need to set up cloud credentials that allow the Kip provider to manipulate cloud instances, networking, and other cloud resources.
#### Step 1: Credentials
In AWS, Kip can either use API keys supplied in the Kip provider configuration file (`provider.yaml`) or use the instance profile of the machine the Kip pod is running on.
On Google Cloud, Kip can use the OAuth scopes attached to the Kubernetes node it runs on. Alternatively, the user can supply a service account key in `provider.yaml`.
AWS Credentials Option 1 - Configuring AWS API keys:
You can configure the AWS access key Kip will use in your provider configuration by changing `accessKeyID` and `secretAccessKey` under the `cloud.aws` section. See below for how to create a kustomize overlay with your custom provider configuration.
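For reference, a minimal sketch of that part of `provider.yaml`; the `region` key and the credential values are illustrative assumptions, only `accessKeyID` and `secretAccessKey` under `cloud.aws` come from the documentation above:

```yaml
cloud:
  aws:
    region: us-east-1                 # assumed illustrative key
    accessKeyID: AKIAIOSFODNN7EXAMPLE # placeholder value
    secretAccessKey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY # placeholder value
```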
AWS Credentials Option 2 - Instance Profile Credentials:
In AWS, Kip can use credentials supplied by the instance profile attached to the node the pod is dispatched to. To use an instance profile, create an IAM policy with the minimum Kip permissions then apply the instance profile to the node that will run the Kip provider pod. The Kip pod must run on the cloud instance that the instance profile is attached to.
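As a hedged example, an existing instance profile can be associated with the node that will run the Kip pod using the AWS CLI; the instance ID and profile name below are placeholders:

```sh
# Attach an IAM instance profile to the node running the Kip pod
# (instance ID and profile name are placeholders)
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=kip-provider-profile
```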
GCP Credentials Option 1 - instance service account:
In GCE, Kip can use the service account attached to an instance. Kip requires the `https://www.googleapis.com/auth/compute` scope in order to launch instances.
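For illustration, the scope can be granted when creating the node that will run the Kip pod; the instance name, zone, and service account below are placeholders:

```sh
# Create a node whose service account carries the compute scope
# (instance name, zone, and service account are placeholders)
gcloud compute instances create kip-node \
  --zone us-central1-c \
  --service-account my-account@my-project.iam.gserviceaccount.com \
  --scopes https://www.googleapis.com/auth/compute
```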
GCP Credentials - Service Account private key:
Alternatively, Kip can use service account credentials supplied manually in `provider.yaml`. Add your email and key to `cloud.gce.credentials`. Example:
```yaml
cloud:
  gce:
    projectID: "my-project"
    credentials:
      clientEmail: my-account@my-project.iam.gserviceaccount.com
      privateKey: "-----BEGIN PRIVATE KEY-----\n[base64-encoded private key]-----END PRIVATE KEY-----\n"
    zone: us-central1-c
    vpcName: "default"
    subnetName: "default"
```
#### Step 2: Apply the manifests
The resources in `deploy/manifests/kip` create ServiceAccounts, Roles, and a StatefulSet to run the provider. Kip is not stateless; the manifests also create a PersistentVolumeClaim to store the provider's data.
Once credentials are set up, apply `deploy/manifests/kip/base` to create the Kubernetes resources needed to support and run the provider.
In AWS:

```sh
$ kustomize build deploy/manifests/kip/base | kubectl apply -f -
```

In GCE:

```sh
$ kustomize build deploy/manifests/kip/overlays/gcp | kubectl apply -f -
```
The manifests are rendered with kustomize, so you can create your own overlays on top of the base template. For example, to override `provider.yaml`, Kip's configuration file:
```sh
$ mkdir -p deploy/manifests/kip/overlays/local-config
$ cp deploy/manifests/kip/base/provider.yaml deploy/manifests/kip/overlays/local-config/provider.yaml
# Edit your provider configuration file.
$ vi deploy/manifests/kip/overlays/local-config/provider.yaml
$ cat > deploy/manifests/kip/overlays/local-config/kustomization.yaml <<EOF
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
configMapGenerator:
- behavior: merge
  files:
  - provider.yaml
  name: config
EOF
$ kustomize build deploy/manifests/kip/overlays/local-config | kubectl apply -f -
```
After applying, you should see a new kip pod in the kube-system namespace and a new node named "kip-provider-0" in the cluster.
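You can check both with kubectl:

```sh
# The virtual node should appear alongside your regular worker nodes
$ kubectl get nodes

# The kip pod should be running in kube-system
$ kubectl get pods -n kube-system
```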
## Running Pods on Kip
To assign pods to run on the virtual kubelet node, add the following node selector to the pod spec in manifests.
```yaml
spec:
  nodeSelector:
    type: virtual-kubelet
```
If you enabled taints on your virtual node (they are disabled by default in the example manifests; remove `--disable-taint` from the command line flags to enable them), add the necessary tolerations too:
```yaml
spec:
  tolerations:
  - key: virtual-kubelet.io/provider
    operator: Exists
```
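Putting both together, a minimal pod that lands on the virtual node could look like this; the pod name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kip-example          # placeholder name
spec:
  nodeSelector:
    type: virtual-kubelet
  tolerations:
  - key: virtual-kubelet.io/provider
    operator: Exists
  containers:
  - name: app
    image: nginx:1.19        # placeholder image
```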
## Uninstall
If you used the provided Terraform config to create your cluster, you can remove the VPC and the cluster via:

```sh
$ terraform destroy -var-file <env.tfvars>
```
If you deployed Kip in an existing cluster, make sure you first remove all the pods and deployments that Kip has created. Then remove the kip StatefulSet via:

```sh
$ kubectl delete -n kube-system statefulset kip
```
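To find pods that are still assigned to the virtual node before removing the StatefulSet, a field selector helps; the node name matches the one created during installation:

```sh
# List pods still running on the virtual node
$ kubectl get pods --all-namespaces --field-selector spec.nodeName=kip-provider-0
```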
## Current Status

### Features
- Networking, including host network mode, cluster IPs, DNS, HostPorts and NodePorts
- Pods will be started on a cloud instance that matches the pod's resource requests/limits. If no requests/limits are present in the pod spec, Kip will fall back to a default cloud instance type specified in `provider.yaml` (see the sizing example after this list)
- GPU instances
- Logs
- Exec
- Stats
- Readiness/Liveness probes
- Service account token automounts in pods.
- Security Groups
- Attaching instance profiles to Cells via annotations
- The following volume types are supported:
- EmptyDir
- ConfigMap
- Secret
- HostPath
- Projected ConfigMaps and Secrets
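To illustrate the right-sizing behavior mentioned above, here is a sketch of a pod whose requests drive the instance selection; the name, image, and request values are arbitrary placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sized-example        # placeholder name
spec:
  nodeSelector:
    type: virtual-kubelet
  containers:
  - name: app
    image: nginx:1.19        # placeholder image
    resources:
      requests:
        cpu: "2"             # Kip picks a cell with at least 2 vCPUs...
        memory: 4Gi          # ...and at least 4 GiB of memory
```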
### Limitations
- Stateful workloads and PersistentVolumes are not supported.
- No support for updating ConfigMaps and Secrets for running Pods and Cells.
- Virtual Kubelet has limitations on what it supports in the Downward API, e.g. `pod.status.podIP` is not supported
- VolumeMounts do not support the `readOnly`, `subPath`, and `subPathExpr` attributes
- VolumeMount `mountPropagation` is always `Bidirectional`
- Unsupported pod attributes:
- EphemeralContainers
- ReadinessGates
- Lifecycle handlers
- TerminationGracePeriodSeconds
- ActiveDeadlineSeconds
- VolumeDevices
- TerminationMessagePolicy FallbackToLogsOnError is not implemented
- The following PodSecurityContext fields are not supported:
- FSGroup
- RunAsNonRoot
- ShareProcessNamespace
- HostIPC
- HostPID
We are actively working on adding missing features. One of the main objectives of the project is to provide full support for all Kubernetes features.
## FAQ
Q. I’ve seen the name Milpa mentioned in the logs and source code. What is Milpa?
A. Kip’s source code was adapted from an earlier project developed at Elotl called Milpa. We will be migrating away from that name in coming releases. Milpa started out as a standalone unikernel (and later container) orchestration system, and it was natural to move a subset of its functionality into an open-source virtual-kubelet provider.
Q. How long does it take to start a workload?
A. In AWS and GCE, instances boot in under a minute; pods are usually dispatched to the instance in about 45 seconds. Depending on the size of the container image, a pod will be running in 60 to 90 seconds. In our experience, starting pods in Azure can be a bit slower, with startup times between 1.5 and 3 minutes.
Q. Does it work with the Horizontal Pod Autoscaler and Vertical Pod Autoscaler?
A. Yes, it does. However, to apply a VPA resize with the provider, the pod must be dispatched to a new cloud instance.
Q. Are DaemonSets supported?
A. Yes, though they might not work the way you intend: the pod will start on a separate cloud instance, not on the node itself. It's possible to patch a DaemonSet so it does not get dispatched to the Kip virtual node, as shown below.
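One way to do that patch is a node affinity rule that excludes the virtual node; this sketch reuses the `type: virtual-kubelet` node label from the node selector section above:

```yaml
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: type
                operator: NotIn
                values:
                - virtual-kubelet
```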
Q. Are you a Kubernetes-conformant runtime?
A. We are not 100% conformant at this time, but we are working to get as close to full conformance as possible. Currently Kip passes 70-80% of conformance tests, but we hope to push that above 90% soon.
Q. What cloud providers does Kip support?
A. Kip is currently GA on AWS and GCE. We are actively working on Azure support.
Q. What components make up the Kip system?
A. The following repositories are part of the Kip system:
- Itzo contains the cell agent and code for building cell images
- Tosi is used for downloading images to cells
- Cloud-Init is a minimal cloud-init implementation
Q. We have our own custom-built image. Can we use it for running cells?
A. Yes, take a look at Bring your Own Image.
Q. Can Kip use a kubeconfig file for API server access?
A. Yes, see kubeconfig for more information.