---
title: Air Quality Forecasting
emoji: šŸ“ˆ
colorFrom: yellow
colorTo: gray
sdk: streamlit
sdk_version: 1.39.0
app_file: streamlit_src/app.py
pinned: false
---

# Air Quality Forecast

<a target="_blank" href="https://cookiecutter-data-science.drivendata.org/"> <img src="https://img.shields.io/badge/CCDS-Project%20template-328F97?logo=cookiecutter" /> </a>

Air pollution is a significant environmental concern, especially in urban areas, where high levels of nitrogen dioxide and ozone can have a negative impact on human health, ecosystems, and the overall quality of life. Given these risks, monitoring and forecasting air pollution levels is an important task, as it allows timely action to reduce harmful effects.

In the Netherlands, cities like Utrecht face air quality challenges due to urbanization, transportation, and industrial activity. A system that provides accurate, robust real-time air quality monitoring and reliable forecasts of future pollution levels would allow authorities and residents to take preventive measures and adjust their activities based on the expected air quality. This project focuses on time-series forecasting of air pollution levels, specifically NO<sub>2</sub> and O<sub>3</sub> concentrations, for the next three days. The task is framed as a regression problem: predicting continuous values from historical environmental data. The project also provides infrastructure for real-time prediction based on recent measurements.
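
As a rough illustration of this regression framing, the sketch below builds lagged NO<sub>2</sub>/O<sub>3</sub> features and fits a multi-output regressor for the three-day horizon. The file path, column names, and the choice of XGBoost here are assumptions for illustration only; the actual pipeline lives in `air_quality_forecast/`.

```python
# Minimal, hypothetical sketch of the forecasting task as a regression problem.
# "data/processed/train.csv" and the column names "date", "NO2", "O3" are
# placeholders; the real feature engineering is in air_quality_forecast/data_pipeline.py.
import pandas as pd
from sklearn.multioutput import MultiOutputRegressor
from xgboost import XGBRegressor

df = pd.read_csv("data/processed/train.csv", parse_dates=["date"], index_col="date")

# Features: pollutant levels from the previous three days (lagged values).
features = pd.concat(
    {f"{col}_lag{lag}": df[col].shift(lag) for col in ("NO2", "O3") for lag in (1, 2, 3)},
    axis=1,
)
# Targets: NO2 and O3 concentrations one, two, and three days ahead.
targets = pd.concat(
    {f"{col}_plus{h}": df[col].shift(-h) for col in ("NO2", "O3") for h in (1, 2, 3)},
    axis=1,
)

data = pd.concat([features, targets], axis=1).dropna()
model = MultiOutputRegressor(XGBRegressor(n_estimators=200))
model.fit(data[features.columns], data[targets.columns])
```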


## Streamlit Application

Explore the interactive air quality forecast for Utrecht through our Streamlit app on Hugging Face Spaces:

Air Quality Forecasting App

## šŸš€ How to Run the App

To launch the Utrecht Air Quality Monitoring application on localhost, follow these steps:

1. Navigate to the `streamlit_src` folder in your terminal, where the app files are located.

2. Run the Streamlit application by entering the following command:

   ```bash
   streamlit run app.py
   ```

> [!TIP]
> **Alternative path:** If you are not in the `streamlit_src` folder, provide the full path to `app.py`. For example, from the repository root: `streamlit run streamlit_src/app.py`.

## šŸš€ How to Run the Scripts

### Setting Up

**Clone the repository:** Start by cloning the repository to your local machine.

```bash
git clone https://github.com/atodorov284/air-quality-forecast.git
cd air-quality-forecast
```

**Set up the environment:** Install all dependencies listed in `requirements.txt` from the repository root:

```bash
pip install -r requirements.txt
```

### Running Source Code

First, navigate to the `air_quality_forecast` folder, which contains the source code for the project:

```bash
cd air_quality_forecast
```

**šŸ“Š View the MLflow dashboard:** To track experiments, run `model_development.py`, which starts an MLflow server on localhost at port 5000.

```bash
python model_development.py
```

> [!TIP]
> If the server does not start automatically, manually run the MLflow UI using:
>
> ```bash
> mlflow ui --port 5000
> ```
>
> You might need to grant admin permissions for this process.
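
For orientation, a run logged against that local server looks roughly like the minimal sketch below; the experiment name, parameter, and metric value are placeholders, not necessarily what `model_development.py` logs.

```python
# Hypothetical sketch of logging a run to the local MLflow server at port 5000.
import mlflow

mlflow.set_tracking_uri("http://127.0.0.1:5000")
mlflow.set_experiment("air-quality-forecast")  # placeholder experiment name

with mlflow.start_run(run_name="xgboost-baseline"):
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("rmse_no2", 7.3)  # placeholder value, not a reported result
```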

**šŸ”„ Retraining the model or making predictions on new data:** Instructions for using the retraining protocol and for making predictions on new data can be found in the `README.md` inside the `air_quality_forecast` directory.

> [!NOTE]
> The retrain datasets need to be under `data/retrain` and the prediction dataset needs to be under `data/inference`.
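
As a loose illustration of that layout, the snippet below stages a new measurements file under `data/inference/`. Only the directory location comes from this README; the file name and columns are hypothetical, so check the `README.md` in `air_quality_forecast` for the actual expected format.

```python
# Hypothetical sketch of staging data for prediction; the file name and columns
# are placeholders, only the data/inference location is taken from this README.
from pathlib import Path
import pandas as pd

inference_dir = Path("data/inference")
inference_dir.mkdir(parents=True, exist_ok=True)

new_measurements = pd.DataFrame(
    {"date": ["2024-10-01"], "NO2": [22.5], "O3": [41.0]}  # placeholder rows
)
new_measurements.to_csv(inference_dir / "new_data.csv", index=False)
```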


> [!IMPORTANT]
> The notebooks in this project were used as scratch space for analysis and data merging and do not reflect our thorough methodology (the source code is under `air_quality_forecast`). Some extra scripts for generating the plots in the report can be found under `extra_scripts`.


## šŸ“– Viewing the Documentation

The project documentation is generated using Sphinx and can be viewed as HTML files. To access the documentation:

1. Navigate to the `_build/html/` directory inside the `docs` folder:

   ```bash
   cd docs/_build/html
   ```

2. Open the `index.html` file in your web browser, either by double-clicking it in your file explorer or by using one of the following commands:

   ```bash
   open index.html      # macOS
   xdg-open index.html  # Linux
   start index.html     # Windows
   ```

## šŸ“‚ Project Folder Structure

```
ā”œā”€ā”€ LICENSE               <- Open-source MIT license
ā”œā”€ā”€ Makefile              <- Makefile with convenience commands like `make data` or `make train`
ā”œā”€ā”€ README.md             <- The top-level README for developers using this project.
ā”œā”€ā”€ data                  <- Folder containing data used for training, testing, and inference
ā”‚   ā”œā”€ā”€ inference         <- Data for inference predictions
ā”‚   ā”œā”€ā”€ model_predictions <- Folder containing model-generated predictions
ā”‚   ā”œā”€ā”€ other             <- Additional data or miscellaneous files
ā”‚   ā”œā”€ā”€ processed         <- The final, canonical data sets for modeling. Contains the train-test split.
ā”‚   ā””ā”€ā”€ raw               <- The original, immutable data dump.
ā”‚
ā”œā”€ā”€ .github               <- Contains automated workflows for reproducibility, flake8 checks, and scheduled updates. 
ā”‚
ā”œā”€ā”€ docs                  <- Contains files to make the HTML documentation for this project using Sphinx
ā”‚
ā”œā”€ā”€ mlruns                <- Contains all the experiments run using MLflow.
ā”‚
ā”œā”€ā”€ mlartifacts           <- Contains the artifacts generated by mlflow experiments.
ā”‚
ā”œā”€ā”€ notebooks             <- Scratch Jupyter notebooks (not to be evaluated; source code is in air_quality_forecast)
ā”‚
ā”œā”€ā”€ pyproject.toml        <- Project configuration file with package metadata for 
ā”‚                            air-quality-forecast and configuration for tools like black
ā”‚
ā”œā”€ā”€ reports               <- Generated analysis as HTML, PDF, LaTeX, etc.
ā”‚
ā”œā”€ā”€ requirements.txt      <- The requirements file for reproducing the analysis environment, e.g.
ā”‚                            generated with `pip freeze > requirements.txt`
ā”‚
ā”œā”€ā”€ setup.cfg             <- Configuration file for flake8
ā”‚
ā”œā”€ā”€ configs               <- Configuration folder for the hyperparameter search space (for now)
ā”‚
ā”œā”€ā”€ saved_models          <- Folder with the saved models in `.pkl` and `.xgb`.
ā”‚
ā”œā”€ā”€ extra_scripts         <- Some extra scripts in R and .tex to generate figures
ā”‚
ā”œā”€ā”€ streamlit_src         <- Streamlit application source code
ā”‚   ā”œā”€ā”€ controllers       <- Handles application logic and data flow for different app sections
ā”‚   ā”œā”€ā”€ json_interactions <- Manages JSON data interactions for configuration and storage
ā”‚   ā”œā”€ā”€ models            <- Contains model loading, preprocessing, and prediction logic
ā”‚   ā””ā”€ā”€ views             <- Manages the UI components for different app sections
ā”‚
ā””ā”€ā”€ air_quality_forecast  <- Source code used in this project.
    ā”‚
    ā”œā”€ā”€ api_caller.py             <- Manages API requests to retrieve air quality and meteorological data
    ā”œā”€ā”€ data_pipeline.py          <- Loads, extracts, and preprocesses the data. Final result is the train-test under data/processed
    ā”œā”€ā”€ get_prediction_data.py    <- Prepares input data required for generating forecasts
    ā”œā”€ā”€ main.py                   <- Main entry point for executing the forecasting pipeline
    ā”œā”€ā”€ model_development.py      <- Trains the models using k-fold CV and Bayesian hyperparameter tuning
    ā”œā”€ā”€ parser_ui.py              <- Manages configuration settings and command-line arguments
    ā”œā”€ā”€ prediction.py             <- Generates forecasts using the trained model
    ā””ā”€ā”€ utils.py                  <- Utility functions for common tasks across scripts
```
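
To give a feel for how the pieces fit together, the sketch below loads a saved model from `saved_models/` and produces a forecast. The file names are placeholders, and the real logic lives in `air_quality_forecast/prediction.py`.

```python
# Hypothetical sketch of generating a forecast from a saved model; the .pkl file
# name and the inference CSV are placeholders, not the project's actual artifacts.
import pickle
import pandas as pd

with open("saved_models/model.pkl", "rb") as f:  # placeholder file name
    model = pickle.load(f)

latest = pd.read_csv("data/inference/new_data.csv", parse_dates=["date"])  # placeholder input
forecast = model.predict(latest.drop(columns=["date"]))  # predicted NO2 and O3 for the next three days
print(forecast)
```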