
K-Bench

K-Bench is a framework to benchmark the control and data plane aspects of a Kubernetes infrastructure. K-Bench provides a configurable way to prescriptively create and manipulate Kubernetes resources at scale and eventually provide the relevant control plane and data plane performance metrics for the target infrastructure. Example operations include CREATE, UPDATE, LIST, DELETE, RUN, and COPY on different types of Kubernetes resources, including Pod, Deployment, Service, and ReplicationController.

K-Bench has the following features:

Architecture

<img src="documentation/kbench-overview.jpg">

The above diagram shows an overview of the benchmark. Upon starting, a JSON config file is parsed for infrastructure and operation information. Then a sequence of operations is generated, each of which may contain a list of actions such as create, list, scale, etc. The operations run one by one, optionally in a blocking manner. Actions inside one operation, however, run in parallel using goroutines. The actions supported for different types of resources are defined in their respective resource managers. The resource managers also provide a metrics collection mechanism and produce Wavefront-consumable data. The benchmark uses client-go to communicate with the Kubernetes cluster.
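To make the client-go path above concrete, here is a minimal, hypothetical sketch of the kind of call a resource manager ends up issuing for a pod CREATE action (the namespace, pod name, label, and image are illustrative placeholders, not what K-Bench itself uses):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig that kubectl uses (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// A CREATE action on the Pod resource type boils down to a request like this.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "example-pod",                       // placeholder name
			Labels: map[string]string{"app": "example"}, // k-labels/u-labels are attached here
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "c", Image: "k8s.gcr.io/pause:3.1"}},
		},
	}
	created, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", created.Name, "at", created.CreationTimestamp)
}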

K-Bench is extremely flexible in that it allows virtually any supported action to be performed with user-chosen parameters on selected resource objects serially, in parallel, or in a hybrid manner. To achieve this, a crucial problem to address is how actions and resources are handled or partitioned by different threads; we call this workload dispatch. In K-Bench, dispatch for actions is straightforward: the configuration parser scans the entire config file and determines the maximum concurrency for each operation by summing up the Count fields of the different resource types in the operation. The dispatcher spawns and maintains all goroutines so that the corresponding actions of different resource types in an operation are fully parallelized. Different actions on the same resource in an operation share the same goroutine and are executed in order. To achieve dispatch for resource objects, K-Bench maintains two types of labels, namely k-labels and u-labels, for each resource object. K-Bench assigns each goroutine a TID and each operation an OID, which are attached as k-labels to the relevant objects. Other metadata such as the resource type, app name, and benchmark name are also attached as k-labels when a resource is created. K-Bench provides predefined label matching options such as MATCH_GOROUTINE and MATCH_OPERATION to select objects created by a specified goroutine in certain operations. User labels passed through the benchmark configuration are attached to resources as u-labels, which can also be used for resource dispatch.
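To make the label-based dispatch concrete, the sketch below shows how objects created by a given goroutine in a given operation can be selected with a client-go label selector, which is essentially what options such as MATCH_GOROUTINE and MATCH_OPERATION resolve to. The label keys "kbench-oid" and "kbench-tid" are illustrative placeholders, not K-Bench's actual internal k-label keys:

package main

import (
	"context"
	"strconv"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/kubernetes"
)

// listPodsForRoutine selects the pods that a particular goroutine (TID) created in a
// particular operation (OID) by matching on the labels attached at creation time.
func listPodsForRoutine(ctx context.Context, c kubernetes.Interface, ns string, oid, tid int) (*corev1.PodList, error) {
	selector := labels.Set{
		"kbench-oid": strconv.Itoa(oid), // placeholder k-label key for the operation ID
		"kbench-tid": strconv.Itoa(tid), // placeholder k-label key for the goroutine ID
	}.AsSelector().String() // renders as "kbench-oid=2,kbench-tid=5"

	return c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
}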

Control Plane Metrics

After a successful run, the benchmark reports metrics (e.g., number of requests, API invoke latency, throughput) for the executed operations on various resource types. One resource type whose metrics need special consideration is Pod, as its operations are typically long-running and asynchronous. For Pod (and related resource types such as Deployment), we introduce two sets of metrics, server-side and client-side, to better summarize its performance from different perspectives. The server-side metrics for Pod in K-Bench inherit the definitions suggested by the Kubernetes SIG groups (the exact way those Pod metrics are defined can be found in the e2e density and performance tests; see density_test.go). The client-side metrics, collected through an event callback mechanism, are a more accurate reflection of the time taken for Pod states to transition end-to-end. The table below describes all the supported metrics:

| Metric [1] | Definition | Applied Resource Type | Notes, References & Sources |
| --- | --- | --- | --- |
| Pod creation latency (server) | scheEvent.FirstTimestamp (the FirstTimestamp of a scheduling event associated with a pod) - pod.CreationTimestamp (the CreationTimestamp of the pod object) | Pod, Deployment | density.go |
| Pod scheduling latency (server) | pod.Status.StartTime (the server timestamp indicating when a pod is accepted by kubelet but the image is not pulled yet) - scheEvent.FirstTimestamp (the FirstTimestamp of a scheduling event associated with a pod) | Pod, Deployment | density.go |
| Pod image pulling latency (server) | pulledEvent.FirstTimestamp (the FirstTimestamp of an event with "Pulled" as the reason associated with a pod) - pod.Status.StartTime (the timestamp indicating when a pod is accepted by kubelet but the image is not pulled yet) | Pod, Deployment | a new metric defined in pod_manager.go for K-Bench |
| Pod starting latency (server) | max(pod.Status.ContainerStatuses[...].State.Running.StartedAt) (the StartedAt timestamp of the last container that enters the running state inside a pod) - pulledEvent.FirstTimestamp (the FirstTimestamp of an event with "Pulled" as the reason associated with a pod) | Pod, Deployment | density.go |
| Pod startup total latency (server) | max(pod.Status.ContainerStatuses[...].State.Running.StartedAt) (the StartedAt timestamp of the last container that enters the running state inside a pod) - pod.CreationTimestamp (the CreationTimestamp of the pod object) | Pod, Deployment | density.go |
| Pod client-server e2e latency | the first time the client watches pod.Status.Phase become Running - pod.CreationTimestamp (the server-side CreationTimestamp of the pod object) | Pod, Deployment | similar to the "watch" latency in the e2e test |
| Pod scheduling latency (client) | the first time the client watches that pod.Status.Conditions[...] has a PodScheduled condition - the first time the client watches the pod (which does not yet have a PodScheduled condition) | Pod, Deployment | a new metric defined in pod_manager.go in K-Bench |
| Pod initialization latency (client) | the first time the client watches that pod.Status.Conditions[...] has a PodInitialized condition - the first time the client watches that pod.Status.Conditions[...] has a PodScheduled condition | Pod, Deployment | a new metric defined in pod_manager.go in K-Bench |
| Pod starting latency (client) | the first time the client watches pod.Status.Phase become Running - the first time the client watches that pod.Status.Conditions[...] has a PodInitialized condition | Pod, Deployment | a new metric defined in pod_manager.go; note that there is no client-side watch event for image pulling, so this metric includes image pulling |
| Pod startup total latency (client) | the first time the client watches pod.Status.Phase become Running - the first time the client watches the pod (which does not yet have a PodScheduled condition) | Pod, Deployment | a new metric defined in pod_manager.go |
| Pod creation throughput | sum(number of running pods of every operation that has pod actions / 2) / sum(median Pod startup total latency of every operation that has pod actions) | Pod, Deployment | a new metric defined in pod_manager.go |
| API invoke latency | latency for an API call to return | All resource types | a new metric defined in pod_manager.go |
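As a simplified, hypothetical illustration of how the client-side numbers above can be derived from watch events (K-Bench's actual implementation uses an event callback mechanism in pod_manager.go and tracks all the intermediate conditions; the helper below is made up and only computes the client-server e2e latency):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// e2eLatency watches a single pod and returns the time at which the client first
// sees pod.Status.Phase become Running, minus the server-side CreationTimestamp.
func e2eLatency(ctx context.Context, c kubernetes.Interface, ns, name string) (time.Duration, error) {
	w, err := c.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{
		FieldSelector: "metadata.name=" + name,
	})
	if err != nil {
		return 0, err
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		if pod.Status.Phase == corev1.PodRunning {
			// Client-side wall clock minus the server-side creation timestamp.
			return time.Since(pod.CreationTimestamp.Time), nil
		}
	}
	return 0, fmt.Errorf("watch closed before pod %s/%s reached Running", ns, name)
}

Because such a calculation subtracts a server-side timestamp from a client-side clock reading, the client and the cluster nodes must have synchronized clocks (see the note in the quickstart guide below).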

Data Plane Workloads and Metrics

| Metric [1] | Resource Category | Benchmark | Notes |
| --- | --- | --- | --- |
| Transaction throughput | CPU/Memory | Redis Memtier | Maximum achievable throughput aggregated across pods in a cluster |
| Transaction latency | CPU/Memory | Redis Memtier | Latency for the injected SET/GET transactions |
| Pod density | CPU/Memory | Redis Memtier | Transaction throughput and latency for a given pod density |
| I/O bandwidth (IOPS) | I/O | FIO | Synchronous and asynchronous read/write bandwidth for 70-30, 100-0, and 0-100 read-write ratios and various block sizes on various K8s volumes |
| I/O latency (ms) | I/O | Ioping | Disk I/O latency on ephemeral and persistent K8s volumes |
| Network bandwidth | Network | Iperf3 | Inter-pod TCP and UDP performance with varying pod placements across nodes and zones |
| Network latency (ms) | Network | Qperf | Inter-pod network latency for TCP and UDP packets with varying pod placements |

Infrastructure Diagnostic Telemetry

In addition to the above metrics that the benchmark reports, K-Bench can be configured to report Wavefront- and Prometheus-defined metrics that include memory, CPU, and storage utilization of nodes, namespaces, and pods; cluster-level statistics; bytes transferred and received rates between pods; uptime; infrastructure statistics; etc.

To use Wavefront monitoring of the nodes, one can install the Waverunner component using pkg/waverunner/install.sh. Invoking this script without any parameters will give the help menu. To start telemetry, invoke pkg/waverunner/WR_wcpwrapper.sh as follows:

./WR_wcpwrapper.sh -r <run_tag> -i <Host_IP_String> -w <Wavefront_source> [-o <output_folder> -k <ssh_key_file> -p <host_passwd>]

The above command defaults to /tmp for output folder and a null host password.
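For example (the run tag, host IP, Wavefront source, and key path below are placeholder values):

./WR_wcpwrapper.sh -r kbench-telemetry-run -i 10.0.0.1 -w my-wavefront-source -o /tmp -k ~/.ssh/id_rsa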

To use Prometheus as your metrics monitoring mechanism, configure the PrometheusManifestPaths option in the K-Bench config file. Please see the Top Level Configuration Options section below and the Prometheus README.

K-Bench Quickstart Guide

To use K-Bench, clone this repo and install the benchmark; you can then use it to run workloads against your Kubernetes cluster by following the instructions below.

Install using Script

On a Linux box (tested on Ubuntu 16.04), just invoke:

./install.sh

to install the benchmark.

If you would like the kbench binary to be copied to /usr/local/bin so that you can run it directly without specifying the full path, run the install script with sudo.

On systems like Ubuntu, just being able to use sudo is enough and one does not explicitly need to be the "root" user. Also, please ensure that the K8s nodes and the client on which you run K-Bench have their times synchronized, as K-Bench uses both client- and server-side timestamps to calculate latencies.

Run the Benchmark

Once the installation completes, you can start using K-Bench. To run the benchmark, make sure your ~/.kube/config file or the KUBECONFIG environment variable points to a valid and running Kubernetes cluster. To verify this, you may install kubectl (which expects a ~/.kube/config file in place; you can copy it from the master node) and simply run:

kubectl get nodes

Once you verify that you have a running Kubernetes cluster, the workload can be run directly using the kbench Go binary or using the run.sh script. The default benchmark config file ./config/default/config.json specifies the workload to run. You can modify the config file to run a workload of your choice. After that, simply run:

kbench

or

./run.sh

If your config file is at a different location, use the -benchconfig option when invoking the kbench binary directly:

kbench -benchconfig filepath

If filepath is a directory, the benchmark runs the config files in it one by one.
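For example, to run the configuration shipped with the cp_light_4client test, or all config files under its directory:

kbench -benchconfig ./config/cp_light_4client/config.json

kbench -benchconfig ./config/cp_light_4client/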

When using the run.sh script, invoking this script with -h provides the following help menu:

Usage: ./run.sh -r <run-tag> [-t <comma-separated-tests> -o <output-dir>]
Example: ./run.sh -r "kbench-run-on-XYZ-cluster"  -t "cp_heavy16,dp_netperf_internode,dp_fio" -o "./"

Valid test names:

all || all_control_plane || all_data_plane || cp_heavy_12client || cp_heavy_8client || cp_light_1client || cp_light_4client || default || dp_fio || dp_network_internode || dp_network_interzone || dp_network_intranode || dp_redis || dp_redis_density || predicate_example

To get details about each of the existing workloads, please check the individual README or config.json in the config/<test-name> folder. For more details about how to configure a workload, please check the examples under the ./config directory, or read the Benchmark Configuration section of this document.

Adding a new test to use with run.sh

Add a new folder config/<test-name>, include the run configuration as config/<test-name>/config.json, and run the test by providing <test-name> as input to the -t option of run.sh.
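For example, to add and run a hypothetical test named my_test, starting from a copy of the default config (which you then edit to describe your workload):

mkdir -p config/my_test

cp config/default/config.json config/my_test/config.json

./run.sh -r "my-first-run" -t "my_test"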

Alternative Installation Method: Install Manually with Go (old way with GOROOT and GOPATH)

First, you need to set up your Go environment. Download Go, unzip it to a local directory (e.g., /root/go), and point your GOROOT environment variable there. Also, set your GOPATH (e.g., /root/gocode). The instructions below are an example for your reference (assuming you download Go to /root/go):

cd /root/go

gunzip go***.linux-amd64.tar.gz

tar -xvf go***.linux-amd64.tar

mkdir /root/gocode && cd gocode/

export GOPATH=/root/gocode

export GOROOT=/root/go

export PATH=$PATH:/root/go/bin

Clone or download the benchmark source code to $GOPATH/src/k-bench (create this directory if it does not exist) using Git or through other means.

mkdir -p $GOPATH/src

mkdir -p $GOPATH/src/k-bench
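For example, using Git (the repository URL is a placeholder; substitute the location you obtained K-Bench from):

git clone <k-bench-repo-url> $GOPATH/src/k-bench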

After you have all the files under $GOPATH/src/k-bench, cd to that directory.

It is also handy to include into your PATH variable locations where Go typically places and finds binaries and tools:

export PATH=$PATH:$GOROOT/bin:$GOPATH/bin

Now, you are ready to build the benchmark. To build, you can either use the below command to install the kbench binary into $GOPATH/bin:

go install cmd/kbench.go

or run (under the $GOPATH/src/k-bench directory) the below to generate the kbench executable under $GOPATH/src/k-bench/bin:

mkdir -p bin && cd bin && go build k-bench/cmd/kbench.go

Benchmark Configuration

The benchmark is highly configurable through a JSON config file. The ./config/default/config.json file is provided as an example (this file is also the default benchmark config file if the user does not specify one through the -benchconfig option). More config examples can be found under the ./config directory and its subdirectories.

Top Level Configuration Options

At top level, the benchmark supports the following configuration options:

Operation Configuration

In each operation of the "Operations" array, users can specify one or more resource types; each resource type can have a list of actions to perform, and each action may accept options. Below are example resource types (a subset of all supported types) with the corresponding actions and options:

The benchmark also supports other resource types including ConfigMap, Event, Endpoints, ComponentStatus, Node, LimitRange, PersistentVolume, PersistentVolumeClaim, PodTemplate, ResourceQuota, Secret, ServiceAccount, Role, RoleBinding, ClusterRole, ClusterRoleBinding, etc.

In addition to resource types, an operation can also specify a RepeatTimes option to run the operation a given number of times.
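As a rough, hypothetical sketch of the shape described above: Operations, Count, and RepeatTimes are option names mentioned in this document, while every other key and value below is illustrative only, so treat ./config/default/config.json as the authoritative reference.

{
  "Operations": [
    {
      "Pods": {
        "Actions": [
          { "Act": "CREATE" },
          { "Act": "LIST" }
        ],
        "Count": 5
      },
      "RepeatTimes": 2
    }
  ]
}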

For more supported resources, actions, and configuration options in K-Bench, please check out the sample config files under ./config or the source code.

Operation Predicate

To simplify synchronization and orchestrate operation execution flow, the benchmark supports predicates, which block an operation's execution until certain conditions are met. A predicate is configured through the options below:

For examples of how to use predicates, check the config file samples under ./config/predicate_example.

Contributing to the Benchmark

Please contact the project members and read CONTRIBUTING.md if you are interested in making contributions.

Project Leads

Karthik Ganesan (ganesank@vmware.com) for questions and comments

Contributors

Yong Li, Helen Liu

Footnotes

  1. For each latency-related metric, four values are reported: median, min, max, and 99th percentile.