Infrastructure Risk Visualisation Tool

This project provides interactive data visualisations of risk analysis results.

About

The tool introduces the infrastructure systems and hazards considered in the analysis, then presents results as modelled for the whole system at a fine scale.

Other functionality:

This README covers requirements and steps through how to prepare data for visualisation and how to run the tool.

Architecture

The tool runs as a set of containerised services:

The services are orchestrated using docker compose.

N.B. The app was built with Docker Engine version 20.10.16 and Compose version 2.5.0. It may not work with other versions.

Usage

Data preparation

The visualisation tool runs using prepared versions of analysis data and results:

See the ETL directory for details.

Data to be served from the vector and raster tileservers should be placed on the host within tileserver/<data_type>. These folders are made available to the running tileservers as docker bind mounts.
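As a sketch (assuming the repository root as the working directory, and an aqueduct dataset as an illustration), the bind-mounted folders can be created like this:

```shell
# Create the host folders that the tileservers bind-mount
mkdir -p tileserver/raster/data/aqueduct
mkdir -p tileserver/vector

# Copy prepared ETL outputs into place, e.g.:
#   cp path/to/etl/output/*.tif tileserver/raster/data/aqueduct/
#   cp path/to/etl/output/*.mbtiles tileserver/vector/
```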

For example, in tileserver/raster/data/aqueduct there might live TIF files like these:

coastal_mangrove__rp_100__rcp_baseline__epoch_2010__conf_None.tif
coastal_mangrove__rp_25__rcp_baseline__epoch_2010__conf_None.tif
coastal_mangrove__rp_500__rcp_baseline__epoch_2010__conf_None.tif
coastal_nomangrove_minus_mangrove__rp_100__rcp_baseline__epoch_2010__conf_None.tif

And in tileserver/vector/, mbtiles files like these:

airport_runways.mbtiles
airport_terminals.mbtiles
buildings_commercial.mbtiles
buildings_industrial.mbtiles

Environment

Environment variables for the various services (and the ETL workflow) are stored in env files. Example files are given in envs/dev-example. These can be placed in envs/dev to get started.

Production env files should be placed in envs/prod.
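To get started in development, the example files can be copied across; a minimal sketch (the mkdir is only there so the snippet runs standalone — in the repository, envs/dev-example already exists):

```shell
# envs/dev-example ships with the repository; created here only so
# this snippet is self-contained
mkdir -p envs/dev-example

# Copy the example env files into envs/dev, then edit them to suit
cp -r envs/dev-example envs/dev
```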

Deploy

To deploy the stack we use the docker compose tool.

Development

The set of long-running services can include:

If you're running your own frontend development server, or connecting to a remotely hosted database, or not using the autopackage API, you may not need all these services.

To this end, we use profiles to define 'core' services which always run, and optional services. A bare docker compose -f docker-compose-dev.yaml up will run only the core services (those without a profiles attribute).
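For illustration, a profiles attribute in a compose file looks like the hypothetical fragment below (service names and images are examples, not the project's actual configuration):

```yaml
services:
  db:                          # no profiles attribute: a core service, always started
    image: postgres:14
  web-server:
    image: nginx:stable
    profiles: ["web-server"]   # only started with --profile web-server
```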

For example, when running your own frontend development server to add a new raster layer, the following should suffice: docker compose -f docker-compose-dev.yaml up. This will bring up db, tiles-db, backend and vector-tileserver.

To run the core services with a standard frontend: docker compose -f docker-compose-dev.yaml --profile web-server up.

To run the core services alongside the autopackage services: docker compose -f docker-compose-dev.yaml --profile autopkg up.

To run all of these behind traefik (every long-running service): docker compose -f docker-compose-dev.yaml --profile traefik --profile web-server --profile autopkg up.

There are also a few short-lived 'utility containers', which can be run to perform particular tasks:

When starting from a clean slate, run the recreate-metadata-schema service to create the tables in db that backend relies upon. If the backend service complains that the raster_tile_sources database table is not available, you likely need to create these tables first. To do so, bring the db service up as described above, then run docker compose -f docker-compose-dev.yaml up recreate-metadata-schema to (re)create the tables. Note that this will drop any data currently in the database.

Production

To run local builds of production containers we use the docker-compose-prod-build.yaml file. See [below](#updating-a-service) for more details.

To deploy containers into a production environment: docker compose -f docker-compose-prod-deploy.yaml up -d

Updating a service

To update a service:

As an example, below we update the backend on a development machine:

```shell
# Edit docker-compose-prod-build.yaml image version:
#     image: ghcr.io/nismod/gri-backend:1.5.0

# Build
docker compose -f docker-compose-prod-build.yaml build backend

# Log in to the container registry
# see: https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry

# Push
docker push ghcr.io/nismod/gri-backend:1.5.0
```

On the production remote, pull the image and restart the service:

```shell
# Pull image
docker pull ghcr.io/nismod/gri-backend:1.5.0

# Edit docker-compose-prod-deploy.yaml image version (or sync up):
#     image: ghcr.io/nismod/gri-backend:1.5.0

# Restart service
docker compose up -d backend
```

Adding new data layers

To add a raster data layer (for example, the iris set of tropical cyclone return period maps) see the ETL directory.

IRV AutoPackage Service

Provides an API for extracting data (and hosting the results) from various layers, using pre-defined boundaries.

See irv-autopkg for more information.

Acknowledgements

This tool has been developed through several projects.