
Deis Workflow is no longer maintained. Please read the announcement for more detail.

09/07/2017: Deis Workflow v2.18, the final release before entering maintenance mode
03/01/2018: End of Workflow maintenance; critical patches no longer merged

Hephy is a fork of Workflow that is actively developed and accepts code contributions.

Deis Workflow End to End Tests v2

Deis (pronounced DAY-iss) Workflow is an open source Platform as a Service (PaaS) that adds a developer-friendly layer to any Kubernetes cluster, making it easy to deploy and manage applications on your own servers.

For more information about Deis Workflow, please visit the main project page at https://github.com/deis/workflow.

We welcome your input! If you have feedback, please submit an issue. If you'd like to participate in development, please read the "Development" section below and submit a pull request.

About

The code in this repository is a set of Ginkgo and Gomega based integration tests that execute commands against a running Deis cluster using the Deis CLI.
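
For illustration, here is a minimal sketch of the style these specs follow (a hypothetical example, not taken verbatim from the suite): shell out to the Deis CLI with Gomega's gexec helper and assert on the exit code.

package tests_test

import (
	"os/exec"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
	"github.com/onsi/gomega/gexec"
)

// Hypothetical spec: run a Deis CLI command against the cluster and
// expect it to exit cleanly.
var _ = Describe("deis apps", func() {
	It("lists applications", func() {
		cmd := exec.Command("deis", "apps:list")
		session, err := gexec.Start(cmd, GinkgoWriter, GinkgoWriter)
		Expect(err).NotTo(HaveOccurred())
		Eventually(session).Should(gexec.Exit(0))
	})
})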

Development

The Deis project welcomes contributions from all developers. The high level process for development matches many other open source projects. See below for an outline.

Prerequisites

Before you run the tests, you'll need a full Deis cluster up and running in Kubernetes. Follow the instructions in the Deis Workflow documentation to get one running.
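
One quick way to confirm the cluster is ready (a hedged check, assuming Workflow is installed in the default deis namespace) is to verify that all Workflow pods are running:

$ kubectl --namespace=deis get pods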

Run the Tests

There are three options for executing the tests: two run the test suite against Deis Workflow installed on a remote Kubernetes cluster, and one installs the same tests directly into a Kubernetes cluster and executes them there.

Remote Execution

Both options for remote execution of the test suite require the DEIS_CONTROLLER_URL environment variable to be exported. Its value should be the controller endpoint you would normally use with the deis register or deis login commands:

$ export DEIS_CONTROLLER_URL=http://deis.your.cluster

Tests execute in parallel by default. If you wish to control the number of executors, export a value for the GINKGO_NODES environment variable:

$ export GINKGO_NODES=5

If this is not set, Ginkgo will automatically choose a number of test nodes (executors) based on the number of CPU cores on the machine executing the tests. It is important to note, however, that test execution is constrained more significantly by the resources of the cluster under test than by the resources of the machine executing the tests. The number of test nodes, therefore, should be explicitly set and scaled in proportion to the resources available in the cluster.

For reference, Workflow's own CI pipeline uses the following:

Test Nodes | Kubernetes Worker Nodes | Worker Node CPU | Worker Node Memory
5          | 3                       | 4 vCPUs         | 15 GB

Setting the GINKGO_NODES environment variable to 1 runs all tests in the suite serially, for example:
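
$ export GINKGO_NODES=1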

Native Execution

If you have Go 1.5 or greater installed and working, along with the Glide dependency management tool for Go, you may clone this repository into your $GOPATH:

git clone git@github.com:deis/workflow-e2e.git $GOPATH/src/github.com/deis/workflow-e2e

One-time execution of the following will resolve the test suite's own dependencies:

$ make bootstrap

To execute the entire test suite:

$ make test-integration

To run a single test or set of tests, you'll need the Ginkgo tool installed on your machine:

$ go get github.com/onsi/ginkgo/ginkgo

You can then use the --focus option to run subsets of the test suite:

$ ginkgo --focus="deis apps" tests

Containerized Execution

If you do not have Go 1.5 or greater installed locally, but do have a Docker daemon running locally (or are using docker-machine), you can execute the tests against a remote cluster from within a container.

In this case, you may clone this repository into a path of your own choosing (does not need to be on your $GOPATH):

git clone git@github.com:deis/workflow-e2e.git /path/of/your/choice

Then build the test image and execute the test suite:

$ make docker-build docker-test-integration

Within the Cluster

A third option is to run the test suite from within the very cluster that is under test.

To install the helm chart and start the tests, assuming helm and its corresponding server component tiller are installed:

helm repo add workflow-e2e https://charts.deis.com/workflow-e2e
helm install --verify workflow-e2e/workflow-e2e --namespace deis

To monitor tests as they execute:

$ kubectl --namespace=deis logs -f workflow-e2e tests

Special Note on Resetting Cluster State

All tests clean up after themselves; however, in the case of test failures or interruptions, automatic cleanup may not always complete as intended. This can leave projects, users, or other state behind, which may impact future executions of the test suite against the same cluster. (Often all tests will fail.) If you see this behavior, run the commands below to clean up. (Replace deis-workflow-qoxhz with the name of the deis/workflow pod in your cluster.)
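
First, to find the name of that pod, you can list the pods in the deis namespace (your pod's name suffix will differ):

$ kubectl --namespace=deis get pods | grep deis-workflow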

$ kubectl exec -it deis-workflow-qoxhz python manage.py shell
Python 2.7.10 (default, Aug 13 2015, 12:27:27)
[GCC 4.9.2] on linux2
>>> from django.contrib.auth import get_user_model
>>> m = get_user_model()
>>> m.objects.exclude(username='AnonymousUser').delete()
>>> m.objects.all()

Note that this is an ongoing issue for which we're planning a more comprehensive fix.