inlets-operator

Get public TCP LoadBalancers for local Kubernetes clusters

When using a managed Kubernetes engine, you can expose a Service as a "LoadBalancer" and your cloud provider will provision a TCP cloud load balancer for you, and start routing traffic to the selected service inside your cluster. In other words, you get ingress to an otherwise internal service.

The inlets-operator brings that same experience to your local Kubernetes cluster by provisioning a VM on the public cloud and running an inlets server process there.

Within the cluster, it runs the inlets client as a Deployment, and once the two are connected, it updates the original service with the IP, just like a managed Kubernetes engine.

Deleting the Service, or annotating it with operator.inlets.dev/manage=0, will cause the cloud VM to be deleted.

See also:

Change any LoadBalancer from <pending> to a real IP

Once the inlets-operator is installed, any Service of type LoadBalancer will get an IP address, unless you exclude it with an annotation.

kubectl run nginx-1 --image=nginx --port=80 --restart=Always
kubectl expose pod/nginx-1 --port=80 --type=LoadBalancer

$ kubectl get services -w
NAME               TYPE           CLUSTER-IP        EXTERNAL-IP       PORT(S)   AGE
service/nginx-1    LoadBalancer   192.168.226.216   <pending>         80/TCP    78s
service/nginx-1    LoadBalancer   192.168.226.216   104.248.163.242   80/TCP    78s

You'll also find a Tunnel Custom Resource created for you:

$ kubectl get tunnels

NAMESPACE   NAME             SERVICE   HOSTSTATUS     HOSTIP         HOSTID
default     nginx-1-tunnel   nginx-1   provisioning                  342453649
default     nginx-1-tunnel   nginx-1   active         178.62.64.13   342453649

We recommend exposing an Ingress Controller or Istio Ingress Gateway; see also: Expose an Ingress Controller.
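
For example, if ingress-nginx was installed with its Helm chart (an assumption; the namespace and Service name below are that chart's defaults), its controller Service is already of type LoadBalancer, so the operator will assign it a public IP:

# Watch the EXTERNAL-IP change from <pending> to a public address
kubectl get svc -n ingress-nginx ingress-nginx-controller -w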

Plays well with other LoadBalancers

Want to create tunnels for all LoadBalancer services, but ignore one or two? Disable the inlets-operator for a particular Service by adding the annotation operator.inlets.dev/manage with a value of 0:

kubectl annotate service nginx-1 operator.inlets.dev/manage=0

Want to ignore all services, and only create Tunnels for the ones you annotate?

Install the chart with annotatedOnly: true, then run:

kubectl annotate service nginx-1 operator.inlets.dev/manage=1
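
If you are not sure where to set annotatedOnly, here is a minimal sketch of enabling it at install time with Helm; the release, repo and chart names are assumptions matching the standard chart, and your usual provider settings still apply:

helm upgrade inlets-operator --install inlets-operator/inlets-operator \
  --set annotatedOnly=true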

Using IPVS for your Kubernetes networking?

For IPVS, you need to declare a Tunnel Custom Resource instead of relying on a Service of type LoadBalancer:

apiVersion: operator.inlets.dev/v1alpha1
kind: Tunnel
metadata:
  name: nginx-1-tunnel
  namespace: default
spec:
  serviceRef:
    name: nginx-1
    namespace: default
status: {}
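
Save the manifest to a file and apply it, then watch the Tunnel move from provisioning to active (the file name below is just an example):

kubectl apply -f nginx-1-tunnel.yaml
kubectl get tunnels -n default -w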

You can pre-define the auth token for the tunnel if you need to:

spec:
  authTokenRef:
    name: nginx-1-tunnel-token
    namespace: default
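
The authTokenRef points at a Kubernetes Secret, which could be created with something like the sketch below; the key name "token" is an assumption, so check the operator's reference documentation for the exact key it expects:

# The key name "token" is an assumption - verify it against the operator docs
kubectl create secret generic nginx-1-tunnel-token \
  --namespace default \
  --from-literal=token="$(openssl rand -base64 32)"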

Who is this for?

Your cluster could be running anywhere: on your laptop, in an on-premises datacenter, within a VM, or on your Raspberry Pi. Ingress and LoadBalancers are core building blocks of Kubernetes clusters, so working Ingress is especially important for local and self-hosted clusters like these.

There is no need to open a firewall port, set up port-forwarding rules, configure dynamic DNS, or resort to any of the usual hacks. You get a public IP and it will "just work" for any TCP traffic you may have.

How does it compare to other solutions?

Any Service of type LoadBalancer can be exposed within a few seconds.

Since exit-servers are created in your preferred cloud (around a dozen providers are supported already), you only pay for the cost of the VM, and where possible, the cheapest plan has already been selected for you. For example, with Hetzner (coming soon) that's about 3 EUR / mo, and with DigitalOcean it comes in at around 5 USD; both of these VPSes come with generous bandwidth allowances, global regions and fast network access.

Conceptual overview

In this animation by Ivan Velichko, you see the operator in action.

It detects a new Service of type LoadBalancer, provisions a VM in the cloud, and then updates the Service with the IP address of the VM.

Demo GIF

There's also a video walk-through of exposing an Ingress Controller.

Installation

Read the installation instructions for different cloud providers

See also: Helm chart
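
As a rough sketch, a Helm-based install for DigitalOcean might look like the following; the namespace, secret name, token path, repo URL and chart values here are assumptions for illustration, so follow the installation instructions for your provider for the exact steps (a licence secret may also be required):

# Store the cloud provider's API token as a secret (path and names are examples)
kubectl create secret generic inlets-access-key \
  --namespace kube-system \
  --from-file inlets-access-key=$HOME/do-access-token

# Add the chart repo and install the operator for DigitalOcean in lon1
helm repo add inlets-operator https://inlets.github.io/inlets-operator/
helm repo update

helm upgrade inlets-operator --install inlets-operator/inlets-operator \
  --namespace kube-system \
  --set provider=digitalocean \
  --set region=lon1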

Expose an Ingress Controller or Istio Ingress Gateway

Unlike other solutions, this:

Configuring ingress:

Other use-cases

Provider Pricing

The host provisioning code used by the inlets-operator is shared with inletsctl; both tools use the configuration in the grid below.
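
For example, roughly the same exit server could be created outside of Kubernetes with inletsctl; the provider, region and token path below are placeholders:

inletsctl create \
  --provider digitalocean \
  --region lon1 \
  --access-token-file ~/do-access-token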

These costs should be treated as estimates; they will depend on your bandwidth usage and how many hosts you decide to create. You can check your cloud provider's dashboard, API, or CLI at any time to view your exit-nodes. The hosts in the grid were chosen because they are the lowest-cost options the maintainers could find.

Provider                             Price per month   Price per hour   OS image       CPU   Memory   Boot time
Google Compute Engine*               ~$4.28            ~$0.006          Ubuntu 22.04   1     614MB    ~3-15s
DigitalOcean                         $5                ~$0.0068         Ubuntu 22.04   1     1GB      ~20-30s
Scaleway                             5.84€             0.01€            Ubuntu 22.04   2     2GB      3-5m
Amazon Elastic Compute Cloud (EC2)   $3.796            $0.0052          Ubuntu 20.04   1     1GB      3-5m
Linode                               $5                $0.0075          Ubuntu 22.04   1     1GB      ~10-30s
Azure                                $4.53             $0.0062          Ubuntu 22.04   1     0.5GB    2-4min
Hetzner                              4.15€             0.007€           Ubuntu 22.04   1     2GB      ~5-10s

Video walk-through

In this video walk-through, Alex guides you through creating a Kubernetes cluster on your laptop with KinD, installing ingress-nginx (an IngressController) and cert-manager, and then, once the inlets-operator has created a LoadBalancer in the cloud, obtaining a TLS certificate from LetsEncrypt.

Video demo

Tutorial: Expose a local IngressController with the inlets-operator

Contributing

Contributions are welcome; see the CONTRIBUTING.md guide.

Also in this space

Author / vendor

inlets and the inlets-operator are brought to you by OpenFaaS Ltd.