
Ocean Navigator


Overview

Ocean Navigator is a data visualization tool that enables users to quickly and easily discover and view 3D ocean model output.

The model outputs are stored as NetCDF4 files. File management is now handled by an SQLite3-based index that incrementally scans the files belonging to a dataset and updates a corresponding table, so the Python layer opens only the exact files required for a given computation. This replaces the THREDDS aggregation approach, which served all the files in a dataset as a single NetCDF file and could not scale to the sheer size of the datasets we deal with.
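As a rough sketch of that idea (the real schema is defined by the indexing tool, so the table and column names below are assumptions only), a lookup reduces to a small query against the index:

```python
import sqlite3

# Hypothetical schema: the real table and column names are defined by the
# indexing tool, so treat this purely as a sketch of the idea.
def files_for_range(db_path, variable, start, end):
    """Return only the NetCDF files needed to cover [start, end] for one variable."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(
            "SELECT filepath FROM files "
            "WHERE variable = ? AND timestamp BETWEEN ? AND ?",
            (variable, start, end),
        ).fetchall()
    return [r[0] for r in rows]

# Example: only the files covering a two-day window of temperature data are opened.
# files_for_range("giops_day.sqlite3", "votemper", 1546300800, 1546473600)
```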

The server-side component of the Ocean Navigator is written in Python 3 and uses Flask to provide the web API. Conceptually, it is broken down into three components.

The user interface is written in JavaScript using the React framework. This allows for a single-page, responsive application that offloads as much processing as possible from the server onto the user's browser. For example, if the user chooses to load points from a CSV file, the file is parsed in the browser and only the necessary parts of the result are sent back to the server for plotting.

The main display uses the OpenLayers mapping API to let the user pan around the globe to find an area of interest. The user can also pick an individual point to get more information about it, draw a transect on the map, or draw a polygon to extract a map or statistics for an area.


Development

Local Installation

The instructions for performing a local installation of the Ocean Data Map Project are available at: https://github.com/DFO-Ocean-Navigator/Navigator-Installer/blob/master/README.md

SQLite3 backend

Since we now use a home-grown indexing solution, there is currently no "server" hosting the files through a URL. You also need to install the dependencies for the NetCDF indexing tool, then download a released binary for Linux systems here. Go through its README for basic setup and usage details.

The workflow to import new datasets into the Navigator has also changed:

  1. Run the indexing tool linked above.
  2. Modify datasetconfig.json so that the url attribute points to the absolute path of the generated .sqlite3 database (see the sketch after this list).
  3. Restart the web server.
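For step 2, here is a minimal sketch of what the edit amounts to; the dataset key and paths below are placeholders rather than values from the real configuration:

```python
import json

CONFIG = "datasetconfig.json"
DATASET_KEY = "giops_day"                      # placeholder dataset name
DB_PATH = "/data/indices/giops_day.sqlite3"    # absolute path to the generated database

with open(CONFIG) as f:
    config = json.load(f)

# Point the dataset's url attribute at the generated SQLite index.
config[DATASET_KEY]["url"] = DB_PATH

with open(CONFIG, "w") as f:
    json.dump(config, f, indent=2)
```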

Running the webserver for development

Assuming the above installation script succeeded, your PATH should point to ${HOME}/miniconda/3/amd64/bin and the navigator conda environment should be activated.
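As a quick sanity check (a sketch only, not part of the project's tooling), you can confirm from Python that the expected interpreter and conda environment are active:

```python
import os
import sys

# The installer is expected to place miniconda under ${HOME}/miniconda/3/amd64;
# adjust this path if your installation differs.
expected_prefix = os.path.expanduser("~/miniconda/3/amd64")

print("Python interpreter:", sys.executable)
print("Active conda env:  ", os.environ.get("CONDA_DEFAULT_ENV"))

if not sys.executable.startswith(expected_prefix):
    print("Warning: the interpreter does not live under", expected_prefix)
if os.environ.get("CONDA_DEFAULT_ENV") != "navigator":
    print("Warning: the navigator conda environment does not appear to be active")
```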

Running the webserver for production

The launch-web-service.sh script automatically determines how many processors are available, the platform's IP address, and which port above 5000 can be used, then prints the IP and port information. The IP:PORT can be copied into a web browser to access the Ocean Navigator web service, either locally or shared with others. The script also copies everything written to stdout into the ${HOME}/launch-on-web-service.log file.
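As an illustration only (this is not the script itself, just a sketch of the checks it is described as performing), discovering the processor count, the host IP, and a usable port above 5000 might look like:

```python
import multiprocessing
import socket

# Number of processors available for worker processes.
cpus = multiprocessing.cpu_count()

# The platform's primary IP address: connect a UDP socket toward a public
# address (no traffic is actually sent) and read back the local endpoint.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.connect(("8.8.8.8", 80))
    ip = s.getsockname()[0]

# First usable port above 5000: probe until nothing is listening.
port = 5001
while True:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as probe:
        if probe.connect_ex((ip, port)) != 0:
            break
    port += 1

print(f"{cpus} processors available; serve on {ip}:{port}")
```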

Coding Style (JavaScript)

JavaScript is a dynamically-typed language, so it is especially important to write clear and concise code that demonstrates its exact purpose.

Coding Style (Python)

Coming soon...

Automate CLASS4 pickle generation

To generate the class4.pickle file daily, create a crontab entry for the user account hosting the Ocean Navigator instance. Use the command crontab -e to add the line @daily ${HOME}/Ocean-Data-Map-Project/bin/launch-pickle.sh. Once a day at midnight, the launch-pickle.sh script will then index all the CLASS4 files.

Proper handling of the datasetconfig.json and oceannavigator.cfg configuration files

To provide production-ready, off-site configuration files, we have implemented a new configurations repository. After cloning the Ocean-Data-Map-Project repository, you will need to perform an additional step of updating any defined submodules. The following command changes your working directory to your local Ocean-Data-Map-Project directory and then updates the submodules recursively.
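Assuming the repository was cloned into ${HOME}, that command is typically cd ${HOME}/Ocean-Data-Map-Project && git submodule update --init --recursive.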