A minimal Ubuntu base image modified for Docker-friendliness
Baseimage-docker only consumes 8.3 MB RAM and is much more powerful than Busybox or Alpine. See why below.
Baseimage-docker is a special Docker image that is configured for correct use within Docker containers. It is Ubuntu, plus:
- Modifications for Docker-friendliness.
- Administration tools that are especially useful in the context of Docker.
- Mechanisms for easily running multiple processes, without violating the Docker philosophy.
You can use it as a base for your own Docker images.
Baseimage-docker is available for pulling from the Docker registry and GHCR (GitHub Container Registry)!
What are the problems with the stock Ubuntu base image?
Ubuntu is not designed to be run inside Docker. Its init system, Upstart, assumes that it's running on either real hardware or virtualized hardware, but not inside a Docker container. But inside a container you don't want a full system; you want a minimal system. Configuring that minimal system for use within a container has many strange corner cases that are hard to get right if you are not intimately familiar with the Unix system model. This can cause a lot of strange problems.
Baseimage-docker gets everything right. The "Contents" section describes all the things that it modifies.
<a name="why_use"></a>
Why use baseimage-docker?
You can configure the stock Ubuntu image yourself from your Dockerfile, so why bother using baseimage-docker?
- Configuring the base system for Docker-friendliness is no easy task. As stated before, there are many corner cases. By the time that you've gotten all that right, you've reinvented baseimage-docker. Using baseimage-docker will save you from this effort.
- It reduces the time needed to write a correct Dockerfile. You won't have to worry about the base system and you can focus on the stack and the app.
- It reduces the time needed to run docker build, allowing you to iterate your Dockerfile more quickly.
- It reduces download time during redeploys. Docker only needs to download the base image once, during the first deploy. On subsequent deploys, only the changes you make on top of the base image are downloaded.
Related resources: Website | GitHub | Docker registry | Discussion forum | Twitter | Blog
Table of contents
- What's inside the image?
- Inspecting baseimage-docker
- Using baseimage-docker as base image
- Container administration
- Building the image yourself
- Removing optional services
- Conclusion
<a name="whats_inside"></a>
What's inside the image?
<a name="whats_inside_overview"></a>
Overview
Looking for a more complete base image, one that is ideal for Ruby, Python, Node.js and Meteor web apps? Take a look at passenger-docker.
Component | Why is it included? / Remarks |
---|---|
Ubuntu 24.04 LTS | The base system. |
A correct init process | Main article: Docker and the PID 1 zombie reaping problem. <br><br>According to the Unix process model, the init process -- PID 1 -- inherits all orphaned child processes and must reap them. Most Docker containers do not have an init process that does this correctly. As a result, their containers become filled with zombie processes over time. <br><br>Furthermore, docker stop sends SIGTERM to the init process, which stops all services. Unfortunately most init systems don't do this correctly within Docker since they're built for hardware shutdowns instead. This causes processes to be hard killed with SIGKILL, which doesn't give them a chance to correctly deinitialize things. This can cause file corruption. <br><br>Baseimage-docker comes with an init process /sbin/my_init that performs both of these tasks correctly. |
Fixes APT incompatibilities with Docker | See https://github.com/dotcloud/docker/issues/1024. |
syslog-ng | A syslog daemon is necessary so that many services - including the kernel itself - can correctly log to /var/log/syslog. If no syslog daemon is running, a lot of important messages are silently swallowed. <br><br>Only listens locally. All syslog messages are forwarded to "docker logs".<br><br>Why syslog-ng?<br>I've had bad experience with rsyslog. I regularly run into bugs with rsyslog, and once in a while it takes my log host down by entering a 100% CPU loop in which it can't do anything. Syslog-ng seems to be much more stable. |
logrotate | Rotates and compresses logs on a regular basis. |
SSH server | Allows you to easily login to your container to inspect or administer things. <br><br>SSH is disabled by default and is only one of the methods provided by baseimage-docker for this purpose. The other method is through docker exec. SSH is also provided as an alternative because docker exec comes with several caveats.<br><br>Password and challenge-response authentication are disabled by default. Only key authentication is allowed. |
cron | The cron daemon must be running for cron jobs to work. |
runit | Replaces Ubuntu's Upstart. Used for service supervision and management. Easier to use than SysV init, more lightweight than Upstart, and supports restarting daemons when they crash. |
setuser | A tool for running a command as another user. Easier to use than su, has a smaller attack vector than sudo, and unlike chpst this tool sets $HOME correctly. Available as /sbin/setuser. |
install_clean | A tool for installing APT packages that automatically cleans up after itself. All arguments are passed to apt-get -y install --no-install-recommends, and after installation the APT caches are cleared. To include recommended packages, add --install-recommends. |
Baseimage-docker is very lightweight: it only consumes 8.3 MB of memory.
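As a sketch of how these components might fit together in a downstream Dockerfile (the redis-server package is only an illustration, and `<VERSION>` should be pinned to a real version):

```dockerfile
FROM phusion/baseimage:<VERSION>
CMD ["/sbin/my_init"]

# install_clean passes its arguments to
# `apt-get -y install --no-install-recommends` and clears the
# APT caches afterwards, keeping the layer small.
RUN install_clean redis-server
```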
<a name="docker_single_process"></a>
Wait, I thought Docker is about running a single process in a container?
The Docker developers advocate the philosophy of running a single logical service per container. A logical service can consist of multiple OS processes.
Baseimage-docker only advocates running multiple OS processes inside a single container. We believe this makes sense because at the very least it would solve the PID 1 problem and the "syslog blackhole" problem. By running multiple processes, we solve very real Unix OS-level problems, with minimal overhead and without turning the container into multiple logical services.
Splitting your logical service into multiple OS processes also makes sense from a security standpoint. By running processes as different users, you can limit the impact of vulnerabilities. Baseimage-docker provides tools to encourage running processes as different users, e.g. the setuser
tool.
Do we advocate running multiple logical services in a single container? Not necessarily, but we do not prohibit it either. While the Docker developers are very opinionated and have very rigid philosophies about how containers should be built, Baseimage-docker is completely unopinionated. We believe in freedom: sometimes it makes sense to run multiple services in a single container, and sometimes it doesn't. It is up to you to decide what makes sense, not the Docker developers.
<a name="fat_containers"></a>
Does Baseimage-docker advocate "fat containers" or "treating containers as VMs"?
There are people who think that Baseimage-docker advocates treating containers as VMs because Baseimage-docker advocates the use of multiple processes. Therefore, they also think that Baseimage-docker does not follow the Docker philosophy. Neither of these impressions is true.
The Docker developers advocate running a single logical service inside a single container. But we are not disputing that. Baseimage-docker advocates running multiple OS processes inside a single container, and a single logical service can consist of multiple OS processes.
It follows that Baseimage-docker also does not deny the Docker philosophy. In fact, many of the modifications we introduce are explicitly in line with the Docker philosophy. For example, using environment variables to pass parameters to containers is very much the "Docker way", and baseimage-docker provides a mechanism to easily work with environment variables in the presence of multiple processes that may run as different users.
<a name="inspecting"></a>
Inspecting baseimage-docker
To look around in the image, run:
docker run --rm -t -i phusion/baseimage:<VERSION> /sbin/my_init -- bash -l
where <VERSION> is one of the baseimage-docker version numbers.
You don't have to download anything manually. The above command will automatically pull the baseimage-docker image from the Docker registry.
<a name="using"></a>
Using baseimage-docker as base image
<a name="getting_started"></a>
Getting started
The image is called phusion/baseimage, and is available on the Docker registry.
# Use phusion/baseimage as base image. To make your builds reproducible, make
# sure you lock down to a specific version, not to `latest`!
# See https://github.com/phusion/baseimage-docker/blob/master/Changelog.md for
# a list of version numbers.
FROM phusion/baseimage:<VERSION>
# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]
# ...put your own build instructions here...
# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
<a name="adding_additional_daemons"></a>
Adding additional daemons
A daemon is a program which runs in the background of the system, such as a web server.
You can add additional daemons (for example, your own app) to the image by creating runit service directories. You only have to write a small shell script which runs your daemon; runsv will start your script, and - by default - restart it upon exit, after waiting one second. The shell script must be called run, must be executable, and is to be placed in the directory /etc/service/<NAME>. runsv will switch to the directory and invoke ./run after your container starts.
Be certain that you do not start your container in interactive mode (-it) with another command, as runit must be the first process to run. If you do, your runit service directories won't be started. For instance, docker run -it <name> bash will bring you to bash in your container, but you'll lose all your daemons.
Here's an example showing how a runit service directory can be made for a memcached server. In memcached.sh, or whatever you choose to name your file (make sure this file is chmod +x):
#!/bin/sh
# `/sbin/setuser memcache` runs the given command as the user `memcache`.
# If you omit that part, the command will be run as root.
exec /sbin/setuser memcache /usr/bin/memcached >>/var/log/memcached.log 2>&1
In an accompanying Dockerfile:
RUN mkdir /etc/service/memcached
COPY memcached.sh /etc/service/memcached/run
RUN chmod +x /etc/service/memcached/run
A given shell script must run without daemonizing or forking itself; this is because runit will start and restart your script on its own. Usually, daemons provide a command line flag or a config file option for preventing such behavior - essentially, you just want your script to run in the foreground, not the background.
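For example, nginx daemonizes by default but offers a flag to stay in the foreground. A hypothetical runit run script for it (the nginx path and flags are illustrative of the pattern, not part of baseimage-docker) might look like:

```shell
#!/bin/sh
# /etc/service/nginx/run (must be chmod +x).
# "daemon off" keeps nginx in the foreground so runsv can supervise
# it; exec replaces the shell so signals reach nginx directly.
exec /usr/sbin/nginx -g "daemon off;"
```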
<a name="running_startup_scripts"></a>
Running scripts during container startup
The baseimage-docker init system, /sbin/my_init, runs the following scripts during startup, in the following order:
- All executable scripts in /etc/my_init.d, if this directory exists. The scripts are run in lexicographic order.
- The script /etc/rc.local, if this file exists.
All scripts must exit correctly, i.e. with exit code 0. If any script exits with a non-zero exit code, booting will fail.
Important note: If you are executing the container in interactive mode (i.e. when you run a container with -it), rather than daemon mode, you are sending stdout directly to the terminal (-i interactive, -t terminal). If you are not calling /sbin/my_init in your run declaration, /sbin/my_init will not be executed, and therefore your scripts will not be called during container startup.
The following example shows how you can add a startup script. This script simply logs the time of boot to the file /tmp/boottime.txt.
In logtime.sh:
#!/bin/sh
date > /tmp/boottime.txt
In the Dockerfile:
RUN mkdir -p /etc/my_init.d
COPY logtime.sh /etc/my_init.d/logtime.sh
RUN chmod +x /etc/my_init.d/logtime.sh
<a name="shutting_down"></a>
Shutting down your process
/sbin/my_init handles termination of child processes at shutdown. When it receives a SIGTERM, it passes the signal on to the child processes for correct shutdown. If your process is started with a shell script, make sure you exec the actual process, otherwise the shell will receive the signal and not your process.
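The difference exec makes can be seen with a plain shell, no container required. This is a minimal sketch: without exec the payload runs as a child with its own PID (so a signal delivered to the wrapper shell never reaches it), while with exec the payload takes over the wrapper's PID:

```shell
# Without exec: the payload runs as a child, so its PID differs
# from the wrapper shell's PID.
no_exec=$(sh -c 'echo $$; sh -c "echo \$\$"')

# With exec: the payload replaces the wrapper and keeps its PID,
# so it receives signals directly.
with_exec=$(sh -c 'echo $$; exec sh -c "echo \$\$"')

echo "without exec: $no_exec"
echo "with exec:    $with_exec"
```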
/sbin/my_init will terminate processes after a timeout (30 seconds by default; see the note below). This can be adjusted by setting environment variables:
# Give child processes 5 minutes to shut down
ENV KILL_PROCESS_TIMEOUT=300
# Give all other processes (such as those which have been forked) 5 minutes to shut down
ENV KILL_ALL_PROCESSES_TIMEOUT=300
Note: Prior to 0.11.1, the default values for KILL_PROCESS_TIMEOUT and KILL_ALL_PROCESSES_TIMEOUT were 5 seconds. In version 0.11.1+ the default process timeout has been adjusted to 30 seconds to allow more time for containers to terminate gracefully. The default timeout of your container runtime may supersede this setting; for example, Docker currently applies a 10 second timeout by default before sending SIGKILL, upon docker stop or receiving SIGTERM.
<a name="environment_variables"></a>
Environment variables
If you use /sbin/my_init as the main container command, then any environment variables set with docker run --env or with the ENV command in the Dockerfile will be picked up by my_init. These variables will also be passed to all child processes, including /etc/my_init.d startup scripts, Runit and Runit-managed services. There are however a few caveats you should be aware of:
- Environment variables on Unix are inherited on a per-process basis. This means that it is generally not possible for a child process to change the environment variables of other processes.
- Because of the aforementioned point, there is no good central place for defining environment variables for all applications and services. Debian has the /etc/environment file, but it only works in some situations.
- Some services change environment variables for child processes. Nginx is one such example: it removes all environment variables unless you explicitly instruct it to retain them through the env configuration option. If you host any applications on Nginx (e.g. using the passenger-docker image, or using Phusion Passenger in your own image) then they will not see the environment variables that were originally passed by Docker.
- We ignore HOME, SHELL, USER and a bunch of other environment variables on purpose, because not ignoring them would break multi-user containers. See https://github.com/phusion/baseimage-docker/pull/86. A workaround for setting the HOME environment variable looks like this: RUN echo /root > /etc/container_environment/HOME. See https://github.com/phusion/baseimage-docker/issues/119.
my_init provides a solution for all these caveats.
<a name="envvar_central_definition"></a>
Centrally defining your own environment variables
During startup, before running any startup scripts, my_init imports environment variables from the directory /etc/container_environment. This directory contains files named after the environment variable names; the file contents are the environment variable values. This directory is therefore a good place to centrally define your own environment variables, which will be inherited by all startup scripts and Runit services.
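The import mechanism can be sketched in a few lines of plain shell. This is a simplification of what my_init does, using a temporary directory in place of /etc/container_environment so the sketch runs anywhere:

```shell
# Stand-in for /etc/container_environment.
env_dir=$(mktemp -d)

# One file per variable: the file name is the variable name and the
# file contents are the value.
printf 'Apachai Hopachai\n' > "$env_dir/MY_NAME"

# Import every file as an environment variable. $(cat ...) strips
# the trailing newline, just like my_init does.
for f in "$env_dir"/*; do
    export "$(basename "$f")=$(cat "$f")"
done

echo "$MY_NAME"
```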
For example, here's how you can define an environment variable from your Dockerfile:
RUN echo Apachai Hopachai > /etc/container_environment/MY_NAME
You can verify that it works, as follows:
$ docker run -t -i <YOUR_NAME_IMAGE> /sbin/my_init -- bash -l
...
*** Running bash -l...
# echo $MY_NAME
Apachai Hopachai
Handling newlines
If you've looked carefully, you'll notice that the echo command actually prints a newline. Why does $MY_NAME not contain a newline, then? It's because my_init strips the trailing newline. If you intended the value to have a newline, you should add another newline, like this:
RUN echo -e "Apachai Hopachai\n" > /etc/container_environment/MY_NAME
<a name="envvar_dumps"></a>
Environment variable dumps
While the previously mentioned mechanism is good for centrally defining environment variables, it by itself does not prevent services (e.g. Nginx) from changing and resetting environment variables for child processes. However, the my_init mechanism does make it easy for you to query what the original environment variables are.
During startup, right after importing environment variables from /etc/container_environment, my_init will dump all its environment variables (that is, all variables imported from container_environment, as well as all variables it picked up from docker run --env) to the following locations, in the following formats:
- /etc/container_environment
- /etc/container_environment.sh - a dump of the environment variables in Bash format. You can source the file directly from a Bash shell script.
- /etc/container_environment.json - a dump of the environment variables in JSON format.
The multiple formats make it easy for you to query the original environment variables no matter which language your scripts/apps are written in.
Here is an example shell session showing you what the dumps look like:
$ docker run -t -i \
--env FOO=bar --env HELLO='my beautiful world' \
phusion/baseimage:<VERSION> /sbin/my_init -- \
bash -l
...
*** Running bash -l...
# ls /etc/container_environment
FOO HELLO HOME HOSTNAME PATH TERM container
# cat /etc/container_environment/HELLO; echo
my beautiful world
# cat /etc/container_environment.json; echo
{"TERM": "xterm", "container": "lxc", "HOSTNAME": "f45449f06950", "HOME": "/root", "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "FOO": "bar", "HELLO": "my beautiful world"}
# source /etc/container_environment.sh
# echo $HELLO
my beautiful world
<a name="modifying_envvars"></a>
Modifying environment variables
It is even possible to modify the environment variables in my_init (and therefore the environment variables in all child processes that are spawned after that point in time) by altering the files in /etc/container_environment. Each time my_init runs a startup script, it resets its own environment variables to the state in /etc/container_environment, and re-dumps the new environment variables to container_environment.sh and container_environment.json.
But note that:
- Modifying container_environment.sh and container_environment.json has no effect.
- Runit services cannot modify the environment like that. my_init only activates changes in /etc/container_environment when running startup scripts.
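For example, a hypothetical /etc/my_init.d startup script could publish a boot timestamp to later startup scripts and runit services simply by writing a file. The sketch below writes to a stand-in directory (ENV_DIR) so it runs outside a container; inside a container it would write to /etc/container_environment directly:

```shell
set -e

# Stand-in for /etc/container_environment (illustrative).
ENV_DIR=${ENV_DIR:-$(mktemp -d)}

# my_init re-reads the directory after each startup script, so
# anything written here is visible to later scripts and services.
date -u '+%Y-%m-%dT%H:%M:%SZ' > "$ENV_DIR/BOOT_TIME"

cat "$ENV_DIR/BOOT_TIME"
```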
<a name="envvar_security"></a>
Security
Because environment variables can potentially contain sensitive information, /etc/container_environment and its Bash and JSON dumps are by default owned by root and accessible only to the docker_env group (so that any user added to this group will have these variables automatically loaded).
If you are sure that your environment variables don't contain sensitive data, then you can also relax the permissions on that directory and those files by making them world-readable:
RUN chmod 755 /etc/container_environment
RUN chmod 644 /etc/container_environment.sh /etc/container_environment.json
<a name="logging"></a>
System logging
Baseimage-docker uses syslog-ng to provide a syslog facility to the container. Syslog-ng is not managed as a runit service (see below). Syslog messages are forwarded to the console.
Log startup/shutdown sequence
In order to ensure that all application log messages are captured by syslog-ng, syslog-ng is started separately before the runit supervisor process, and shut down after runit exits. This uses the startup script facility provided by this image. It avoids a race condition that would exist if syslog-ng were managed as a runit service: runit would kill syslog-ng in parallel with the container's other services, causing log messages to be dropped during a graceful shutdown if syslog-ng exits while logs are still being produced by other services.
<a name="upgrading_os"></a>
Upgrading the operating system inside the container
Baseimage-docker images contain an Ubuntu operating system (see OS version at Overview). You may want to update this OS from time to time, for example to pull in the latest security updates. OpenSSL is a notorious example. Vulnerabilities are discovered in OpenSSL on a regular basis, so you should keep OpenSSL up-to-date as much as you can.
While we release Baseimage-docker images with the latest OS updates from time to time, you do not have to rely on us. You can update the OS inside Baseimage-docker images yourself, and it is recommended that you do this instead of waiting for us.
To upgrade the OS in the image, run this in your Dockerfile:
RUN apt-get update && apt-get upgrade -y -o Dpkg::Options::="--force-confold"
<a name="container_administration"></a>
Container administration
One of the ideas behind Docker is that containers should be stateless, easily restartable, and behave like a black box. However, you may occasionally encounter situations where you want to login to a container, or to run a command inside a container, for development, inspection and debugging purposes. This section describes how you can administer the container for those purposes.
<a name="oneshot"></a>
Running a one-shot command in a new container
Note: This section describes how to run a command inside a *new* container. To run a command inside an existing running container, see Running a command in an existing, running container.
Normally, when you want to create a new container in order to run a single command inside it, and immediately exit after the command exits, you invoke Docker like this:
docker run YOUR_IMAGE COMMAND ARGUMENTS...
However, the downside of this approach is that the init system is not started. That is, while invoking COMMAND, important daemons such as cron and syslog are not running. Also, orphaned child processes are not properly reaped, because COMMAND is PID 1.
Baseimage-docker provides a facility to run a single one-shot command, while solving all of the aforementioned problems. Run a single command in the following manner:
docker run YOUR_IMAGE /sbin/my_init -- COMMAND ARGUMENTS ...
This will perform the following:
- Runs all system startup files, such as /etc/my_init.d/* and /etc/rc.local.
- Starts all runit services.
- Runs the specified command.
- When the specified command exits, stops all runit services.
For example:
$ docker run phusion/baseimage:<VERSION> /sbin/my_init -- ls
*** Running /etc/rc.local...
*** Booting runit daemon...
*** Runit started as PID 80
*** Running ls...
bin boot dev etc home image lib lib64 media mnt opt proc root run sbin selinux srv sys tmp usr var
*** ls exited with exit code 0.
*** Shutting down runit daemon (PID 80)...
*** Killing all processes...
You may find that the default invocation is too noisy. Or perhaps you don't want to run the startup files. You can customize all this by passing arguments to my_init. Invoke docker run YOUR_IMAGE /sbin/my_init --help for more information.
The following example runs ls without running the startup files and with fewer messages, while running all runit services:
$ docker run phusion/baseimage:<VERSION> /sbin/my_init --skip-startup-files --quiet -- ls
bin boot dev etc home image lib lib64 media mnt opt proc root run sbin selinux srv sys tmp usr var
<a name="run_inside_existing_container"></a>
Running a command in an existing, running container
There are two ways to run a command inside an existing, running container.
- Through the docker exec tool. This is a built-in Docker tool, available since Docker 1.4. Internally, it uses Linux kernel system calls to execute a command within the context of a container. Learn more in Login to the container, or running a command inside it, via docker exec.
- Through SSH. This approach requires running an SSH daemon inside the container, and requires you to set up SSH keys. Learn more in Login to the container, or running a command inside it, via SSH.
Both ways have their own pros and cons, which you can learn about in their respective subsections.
<a name="login_docker_exec"></a>
Login to the container, or running a command inside it, via docker exec
You can use the docker exec tool on the Docker host OS to login to any container that is based on baseimage-docker. You can also use it to run a command inside a running container. docker exec works by using Linux kernel system calls.
Here's how it compares to using SSH to login to the container or to run a command inside it:
- Pros
- Does not require running an SSH daemon inside the container.
- Does not require setting up SSH keys.
- Works on any container, even containers not based on baseimage-docker.
- Cons
- If the docker exec process on the host is terminated by a signal (e.g. with the kill command or even with Ctrl-C), then the command that is executed by docker exec is not killed and cleaned up. You will either have to do that manually, or you have to run docker exec with -t -i.
- Requires privileges on the Docker host to be able to access the Docker daemon. Note that anybody who can access the Docker daemon effectively has root access.
- Not possible to allow users to login to the container without also letting them login to the Docker host.
<a name="docker_exec_usage"></a>
Usage
Start a container:
docker run YOUR_IMAGE
Find out the ID of the container that you just ran:
docker ps
Now that you have the ID, you can use docker exec to run arbitrary commands in the container. For example, to run echo hello world:
docker exec YOUR-CONTAINER-ID echo hello world
To open a bash session inside the container, you must pass -t -i so that a terminal is available:
docker exec -t -i YOUR-CONTAINER-ID bash -l
<a name="login_ssh"></a>
Login to the container, or running a command inside it, via SSH
You can use SSH to login to any container that is based on baseimage-docker. You can also use it to run a command inside a running container.
Here's how it compares to using docker exec to login to the container or to run a command inside it:
- Pros
- Does not require root privileges on the Docker host.
- Allows you to let users login to the container, without letting them login to the Docker host. However, this is not enabled by default because baseimage-docker does not expose the SSH server to the public Internet by default.
- Cons
- Requires setting up SSH keys. However, baseimage-docker makes this easy for many cases through a pregenerated, insecure key. Read on to learn more.
<a name="enabling_ssh"></a>
Enabling SSH
Baseimage-docker disables the SSH server by default. Add the following to your Dockerfile to enable it:
RUN rm -f /etc/service/sshd/down
# Regenerate SSH host keys. baseimage-docker does not contain any, so you
# have to do that yourself. You may also comment out this instruction; the
# init system will auto-generate one during boot.
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
Alternatively, to enable sshd only for a single instance of your container, create a folder with a startup script. The contents of that script should be:
### In myfolder/enable_ssh.sh (make sure this file is chmod +x):
#!/bin/sh
rm -f /etc/service/sshd/down
ssh-keygen -P "" -t dsa -f /etc/ssh/ssh_host_dsa_key
Then, you can start your container with
docker run -d -v `pwd`/myfolder:/etc/my_init.d my/dockerimage
This will initialize sshd on container boot. You can then access it with the insecure key as described below, or by using the methods to add a secure key. Furthermore, you can publish the port to your machine with -p 2222:22, allowing you to ssh to 127.0.0.1:2222 instead of looking up the container's IP address.
<a name="ssh_keys"></a>
About SSH keys
First, you must ensure that you have the right SSH keys installed inside the container. By default, no keys are installed, so nobody can login. For convenience reasons, we provide a pregenerated, insecure key (PuTTY format) that you can easily enable. However, please be aware that using this key is for convenience only. It does not provide any security because this key (both the public and the private side) is publicly available. In production environments, you should use your own keys.
<a name="using_the_insecure_key_for_one_container_only"></a>
Using the insecure key for one container only
You can temporarily enable the insecure key for one container only. This means that the insecure key is installed at container boot. If you docker stop and docker start the container, the insecure key will still be there, but if you use docker run to start a new container, then that container will not contain the insecure key.
Start a container with --enable-insecure-key:
docker run YOUR_IMAGE /sbin/my_init --enable-insecure-key
Find out the ID of the container that you just ran:
docker ps
Once you have the ID, look for its IP address with:
docker inspect -f "{{ .NetworkSettings.IPAddress }}" <ID>
Now that you have the IP address, you can use SSH to login to the container, or to execute a command inside it:
# Download the insecure private key
curl -o insecure_key -fSL https://github.com/phusion/baseimage-docker/raw/master/image/services/sshd/keys/insecure_key
chmod 600 insecure_key
# Login to the container
ssh -i insecure_key root@<IP address>
# Running a command inside the container
ssh -i insecure_key root@<IP address> echo hello world
<a name="enabling_the_insecure_key_permanently"></a>
Enabling the insecure key permanently
It is also possible to enable the insecure key in the image permanently. This is not generally recommended, but is suitable for e.g. temporary development or demo environments where security does not matter.
Edit your Dockerfile to install the insecure key permanently:
RUN /usr/sbin/enable_insecure_key
Instructions for logging into the container are the same as in the section Using the insecure key for one container only.
<a name="using_your_own_key"></a>
Using your own key
Edit your Dockerfile to install an SSH public key:
## Install an SSH public key of your choice.
COPY your_key.pub /tmp/your_key.pub
RUN cat /tmp/your_key.pub >> /root/.ssh/authorized_keys && rm -f /tmp/your_key.pub
Then rebuild your image. Once you have that, start a container based on that image:
docker run your-image-name
Find out the ID of the container that you just ran:
docker ps
Once you have the ID, look for its IP address with:
docker inspect -f "{{ .NetworkSettings.IPAddress }}" <ID>
Now that you have the IP address, you can use SSH to login to the container, or to execute a command inside it:
# Login to the container
ssh -i /path-to/your_key root@<IP address>
# Running a command inside the container
ssh -i /path-to/your_key root@<IP address> echo hello world
<a name="docker_ssh"></a>
The docker-ssh tool
Looking up the IP of a container and running an SSH command quickly becomes tedious. Luckily, we provide the docker-ssh tool, which automates this process. This tool is to be run on the Docker host, not inside a Docker container.
First, install the tool on the Docker host:
curl --fail -L -O https://github.com/phusion/baseimage-docker/archive/master.tar.gz && \
tar xzf master.tar.gz && \
sudo ./baseimage-docker-master/install-tools.sh
Then run the tool as follows to login to a container using SSH:
docker-ssh YOUR-CONTAINER-ID
You can look up YOUR-CONTAINER-ID by running docker ps.
By default, docker-ssh will open a Bash session. You can also tell it to run a command and then exit:
docker-ssh YOUR-CONTAINER-ID echo hello world
<a name="building"></a>
Building the image yourself
If for whatever reason you want to build the image yourself instead of downloading it from the Docker registry, follow these instructions.
Clone this repository:
git clone https://github.com/phusion/baseimage-docker.git
cd baseimage-docker
Start a virtual machine with Docker in it. You can use the Vagrantfile that we've already provided.
First, install the vagrant-disksize plugin:
vagrant plugin install vagrant-disksize
Then, start the virtual machine:
vagrant up
vagrant ssh
cd /vagrant
Build the image:
make build
If you want to call the resulting image something else, pass the NAME variable, like this:
make build NAME=joe/baseimage
You can also change the ubuntu base image to debian, as these distributions are quite similar:
make build BASE_IMAGE=debian:stretch
The image will be named phusion/baseimage-debian-stretch. Use the NAME variable in combination with the BASE_IMAGE variable to call it joe/stretch instead:
make build BASE_IMAGE=debian:stretch NAME=joe/stretch
To verify that the various services are started when the image is run as a container, add test to the end of your make invocation, e.g.:
make build BASE_IMAGE=debian:stretch NAME=joe/stretch test
<a name="removing_optional_services"></a>
Removing optional services
The default baseimage-docker installs the syslog-ng, cron and sshd services during the build process.
In case you don't need one or more of these services in your image, you can disable their installation through the image/buildconfig file that is sourced within image/system_services.sh. Do this at build time by passing a variable in with --build-arg, as in docker build --build-arg DISABLE_SYSLOG=1 image/, or you may set the variable in image/Dockerfile with an ENV setting above the RUN directive.
These represent build-time configuration, so setting them in the shell environment at build time will not have any effect. Setting them in child images' Dockerfiles will also have no effect.
You can also set them directly in the ./image/buildconfig file, as shown in the following example. For instance, to prevent sshd from being installed into your image, set the DISABLE_SSH variable to 1:
### In ./image/buildconfig
# ...
# Default services
# Set to 1 to disable the corresponding service
export DISABLE_SYSLOG=0
export DISABLE_SSH=1
export DISABLE_CRON=0
Then you can proceed with the make build command.
<a name="conclusion"></a>
Conclusion
- Using baseimage-docker? Tweet about us or follow us on Twitter.
- Having problems? Want to participate in development? Please post a message at the discussion forum.
- Looking for a more complete base image, one that is ideal for Ruby, Python, Node.js and Meteor web apps? Take a look at passenger-docker.
- Need a helping hand? Phusion also offers consulting on a wide range of topics, including Web Development, UI/UX Research & Design, Technology Migration and Auditing.
<img src="https://avatars.githubusercontent.com/u/830588?s=200&v=4">
Please enjoy baseimage-docker, a product by Phusion. :-)