<p align="center"><img height="200" src="./.github/logo/conmon-rs.png"></p>

A pod-level OCI container runtime monitor.
The goal of this project is to provide a container monitor in Rust. The scope of conmon-rs encompasses the scope of the C iteration of conmon, including daemonizing, holding open the container's standard streams, and writing the exit code.
However, the goal of conmon-rs also extends past that of conmon, attempting to become a monitor for a full pod (or a group of containers). Instead of the container engine creating one conmon per container (as well as subsequent conmons per container exec), the engine will spawn a single conmon-rs instance when a pod is created. That instance will listen over a UNIX domain socket for new requests to create containers and to exec processes within them.
## Obtain the latest version
We provide statically linked binaries for every successfully built commit on `main` via our Google Cloud Storage Bucket. Our provided `get` script can be used to download the latest version:
```shell
curl https://raw.githubusercontent.com/containers/conmon-rs/main/scripts/get | bash
```
It is also possible to select a specific git SHA or the output binary path:
```shell
curl https://raw.githubusercontent.com/containers/conmon-rs/main/scripts/get | \
    bash -s -- -t $GIT_SHA -o $OUTPUT_PATH
```
The script automatically verifies the created sigstore signatures if the local system has `cosign` available in its `$PATH`.
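For illustration, the verification step corresponds roughly to a `cosign verify-blob` invocation like the sketch below. This is a hedged approximation, not the script's actual code: the `conmonrs` binary name and the `.sig`/`.cert` file names are assumptions.

```shell
# Hypothetical sketch of the signature check the get script performs.
# The binary and signature file names below are assumptions.
BINARY=conmonrs
STATUS=skipped

if command -v cosign >/dev/null 2>&1 && [ -f "${BINARY}.sig" ] && [ -f "${BINARY}.cert" ]; then
    # Verify the downloaded binary against its detached sigstore signature.
    cosign verify-blob \
        --certificate "${BINARY}.cert" \
        --signature "${BINARY}.sig" \
        "${BINARY}" && STATUS=verified
else
    echo "cosign or signature files not available, skipping verification"
fi
```

The provided `get` script performs this check for you, so a manual invocation is only needed when downloading the binaries by other means.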
More information about how to use conmon-rs can be found in the usage documentation.
If you want to create a new conmon-rs release, please refer to the release documentation.
## Architecture
The whole application consists of two main components:
- The Rust server: conmon-rs/server (docs)
- A golang client: pkg/client (docs)
The golang client acts as the main interface: it takes care of creating the server instance via the Command Line Interface (CLI) as well as communicating with the server via Cap'n Proto. The client itself hides the raw Cap'n Proto parts and exposes dedicated golang structures to provide a clean API surface.
The following flow chart explains the client and container creation process:
<p align="center"><img src=".github/img/conmon-rs.png" height=700 width=auto></p>

## Goals
- Single conmon per pod (post MVP/stretch)
- Keeping RSS under 3-4 MB
- Support exec without respawning a new conmon
- API with RPC to make it extensible (should support golang clients)
- Act as pid namespace init
- Join network namespace to solve running hooks inside the pod context
- Use pidfds (though retrieving the exit code via a pidfd is not supported today)
- Use io_uring
- Plugin support for seccomp notification
- Logging rate limiting (double buffer?)
- Stats
- IPv6 port forwarding
## Future development
In the future, conmon-rs may:
- Be extended to mirror the functionality for each runtime operation.
  - This reduces the number of exec calls the container engine must make, as well as the amount of memory it uses.
- Be in charge of configuring the namespaces for the pod.
  - This takes over functionality that pinns has historically provided.