Docker PG Backup

A simple docker container that runs PostgreSQL / PostGIS backups (PostGIS is not required; it will back up any PostgreSQL database). It is primarily intended to be used with our docker-postgis image. By default it creates a backup once per night (at 23h00) in a directory tree neatly ordered by year / month.

Getting the image

There are various ways to get the image onto your system:

The preferred way (though it uses the most bandwidth for the initial image) is to get our docker trusted build like this:

docker pull kartoza/pg-backup:${POSTGRES_MAJOR_VERSION}-${POSTGIS_MAJOR_VERSION}.${POSTGIS_MINOR_RELEASE}

Where the environment variables are

POSTGRES_MAJOR_VERSION=17
POSTGIS_MAJOR_VERSION=3
POSTGIS_MINOR_RELEASE=5 

We highly suggest that you use a tagged image that matches the PostgreSQL image you are running, i.e. kartoza/pg-backup:17-3.5 for backing up a kartoza/postgis:17-3.5 database. The latest tag may change and may not successfully back up your database.
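For example, to pin the backup image to the same release as the database image:

docker pull kartoza/postgis:17-3.5
docker pull kartoza/pg-backup:17-3.5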

To build the image yourself do:

git clone https://github.com/kartoza/docker-pg-backup.git
cd docker-pg-backup
./build.sh # Builds the latest version, corresponding to the latest PostgreSQL version

Running the image

To create a running container do:

POSTGRES_MAJOR_VERSION=17
POSTGIS_MAJOR_VERSION=3
POSTGIS_MINOR_RELEASE=5 
docker run --name "db"  -p 25432:5432 -d -t kartoza/postgis:${POSTGRES_MAJOR_VERSION}-${POSTGIS_MAJOR_VERSION}.${POSTGIS_MINOR_RELEASE}
docker run --name="backups"  --link db:db -v $(pwd)/backups:/backups  -d kartoza/pg-backup:${POSTGRES_MAJOR_VERSION}-${POSTGIS_MAJOR_VERSION}.${POSTGIS_MINOR_RELEASE}

Specifying environment variables

You can also set environment variables to pass a username, password and other connection details for the database.

Note: To avoid interpolation issues with the CRON_SCHEDULE environment variable, you need to provide its value as a quoted string, i.e. CRON_SCHEDULE='*/1 * * * *' or CRON_SCHEDULE="*/1 * * * *".
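As a minimal sketch, here is the backup container started above with a quoted schedule and the DUMPPREFIX used later in this document; the 0 23 * * * expression reproduces the default nightly 23h00 backup:

# Quote the cron expression so the shell does not expand the asterisks.
docker run --name="backups" --link db:db \
  -v $(pwd)/backups:/backups \
  -e CRON_SCHEDULE='0 23 * * *' \
  -e DUMPPREFIX=PG \
  -d kartoza/pg-backup:17-3.5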

Here is a more typical example using docker-compose:
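The following is only a minimal sketch of such a compose file. The POSTGRES_USER, POSTGRES_PASS, POSTGRES_DBNAME, POSTGRES_HOST and POSTGRES_PORT variable names are assumptions based on the kartoza images and are not defined in this document; check the docker-compose.yml shipped with the repository for the authoritative version:

services:
  db:
    image: kartoza/postgis:17-3.5
    environment:
      POSTGRES_USER: docker        # assumed variable names for the kartoza/postgis image
      POSTGRES_PASS: docker        # assumed
      POSTGRES_DBNAME: gis         # assumed
    volumes:
      - db-data:/var/lib/postgresql

  dbbackups:
    image: kartoza/pg-backup:17-3.5
    environment:
      POSTGRES_HOST: db            # assumed: hostname of the db service
      POSTGRES_PORT: "5432"        # assumed
      POSTGRES_USER: docker        # assumed
      POSTGRES_PASS: docker        # assumed
      DUMPPREFIX: PG
      CRON_SCHEDULE: "0 23 * * *"  # quoted, see the note above
    volumes:
      - ./backups:/backups
    depends_on:
      - db

volumes:
  db-data: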

Filename format

By default, the generated backup archive will be stored in the /backups directory (inside the container):

/backups/$(date +%Y)/$(date +%B)/${DUMPPREFIX}_${DB}.$(date +%d-%B-%Y).dmp

As a concrete example, with DUMPPREFIX=PG and a PostGIS database named gis, the backup archive would be something like:

/backups/2019/February/PG_gis.17-February-2019.dmp

If you specify ARCHIVE_FILENAME instead (the default value is empty), the filename will be fixed according to this prefix. For example, with ARCHIVE_FILENAME=latest the backup archive would be something like:

/backups/latest.gis.dmp

Backing up to S3 bucket

The script uses s3cmd to back up files to an S3 bucket.

You can read more about the configuration options in the s3cmd documentation.

For a typical usage of this, look at the docker-compose-s3.yml file.

Mounting Configs

The image supports mounting extra configuration files into the container.

The environment variable ${EXTRA_CONFIG_DIR} controls the location of the folder they are read from.

If you need to mount an s3cfg file, you can run the container with:

-e EXTRA_CONFIG_DIR=/settings
-v /data:/settings

where the s3cfg file is located in /data on the host.
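Putting this together, here is a sketch with placeholder credentials; the s3cfg keys follow s3cmd's standard configuration format:

# Create an s3cmd configuration file on the host (placeholder values).
cat > /data/s3cfg <<'EOF'
[default]
access_key = YOUR_ACCESS_KEY
secret_key = YOUR_SECRET_KEY
host_base = s3.amazonaws.com
host_bucket = %(bucket)s.s3.amazonaws.com
use_https = True
EOF

# Mount /data into the container and point EXTRA_CONFIG_DIR at the mount path.
docker run --name="backups" --link db:db \
  -e EXTRA_CONFIG_DIR=/settings \
  -v /data:/settings \
  -v $(pwd)/backups:/backups \
  -d kartoza/pg-backup:17-3.5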

Restoring

The image provides a simple restore script. There are two ways to restore, depending on where the backup files are located.

Restore from file based backups

You need to specify some environment variables first, in particular TARGET_DB and TARGET_ARCHIVE.

Note: The restore script will try to drop the TARGET_DB database if it matches an existing one, so make sure you know what you are doing. It will then create a new database and restore the content from TARGET_ARCHIVE.

It is generally good practice to restore into a new, empty database and then manually drop and rename the databases afterwards.

For example, if your original database is named gis, you can restore it into a new database called gis_restore.

If you specify these environment variables in your docker-compose.yml file, you can then execute a restore like this:

docker-compose exec dbbackups /backup-scripts/restore.sh
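Alternatively, assuming the restore script reads TARGET_DB and TARGET_ARCHIVE from the container's environment, recent docker-compose versions let you pass them at exec time instead of setting them in docker-compose.yml (the archive path here is illustrative):

docker-compose exec \
  -e TARGET_DB=gis_restore \
  -e TARGET_ARCHIVE=/backups/2019/February/PG_gis.17-February-2019.dmp \
  dbbackups /backup-scripts/restore.sh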

Restoring from S3 bucket

The script uses s3cmd to restore files from an S3 bucket to a PostgreSQL database.

To restore from an S3 bucket, first exec into your running container, then launch /backup-scripts/restore.sh with two parameters.

You can read more about the configuration options in the s3cmd documentation.

For a typical usage of this, look at the docker-compose-s3.yml file.

Credits

Tim Sutton (tim@kartoza.com)

Admire Nyakudya (admire@kartoza.com)

Rizky Maulana (rizky@kartoza.com)