PASHA: Efficient HPO and NAS with Progressive Resource Allocation
Abstract
Hyperparameter optimization (HPO) and neural architecture search (NAS) are methods of choice to obtain the best-in-class machine learning models, but in practice they can be costly to run. When models are trained on large datasets, tuning them with HPO or NAS rapidly becomes prohibitively expensive for practitioners, even when efficient multi-fidelity methods are employed. We propose an approach to tackle the challenge of tuning machine learning models trained on large datasets with limited computational resources. Our approach, named PASHA, extends ASHA and is able to dynamically allocate maximum resources for the tuning procedure depending on the need. The experimental comparison shows that PASHA identifies well-performing hyperparameter configurations and architectures while consuming significantly fewer computational resources than ASHA.
How to use
This repository extends the open-source Syne Tune library and provides a Jupyter notebook to reproduce our experiments.
To run the notebook, you need to install Syne Tune. We installed the full version of Syne Tune using pip install -e '.[extra]'.
To run experiments using NASBench201, you first need to build the dataset using the provided Syne Tune scripts. In particular, run python nasbench201_import.py, located in syne_tune/blackbox_repository/conversion_scripts/scripts. Note that this step is memory-intensive and requires at least 32GB of RAM.
The Jupyter notebook that replicates our experiments is located in the notebooks directory and is called PASHA-ICLR23.ipynb.
PASHA is also available directly within the Syne Tune library, together with a short tutorial.
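As a quick illustration, below is a minimal sketch of launching a tuning job with PASHA as the scheduler. It assumes PASHA is exposed in syne_tune.optimizer.baselines alongside ASHA and accepts the same arguments (the search space and training script mirror the Getting started example further below); please refer to the Syne Tune tutorial for the authoritative usage.

from syne_tune import Tuner, StoppingCriterion
from syne_tune.backend import LocalBackend
from syne_tune.config_space import randint
from syne_tune.optimizer.baselines import PASHA  # assumed to be exposed like ASHA

# Search space for the train_height.py script shown in the Getting started section
config_space = {
    'steps': 100,
    'width': randint(1, 20),
    'height': randint(1, 20),
}

tuner = Tuner(
    trial_backend=LocalBackend(entry_point='train_height.py'),
    # PASHA starts with a small maximum resource level and increases it only when needed
    scheduler=PASHA(
        config_space, metric='mean_loss', resource_attr='epoch', max_t=100,
    ),
    stop_criterion=StoppingCriterion(max_wallclock_time=15),
    n_workers=4,  # how many trials are evaluated in parallel
)
tuner.run()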
How to cite
If you find PASHA useful for your research, please consider citing:
@inproceedings{bohdal2023pasha,
title={PASHA: Efficient HPO and NAS with Progressive Resource Allocation},
author={Bohdal, Ondrej and Balles, Lukas and Wistuba, Martin and Ermis, Beyza and Archambeau, Cedric and Zappella, Giovanni},
booktitle={ICLR},
year={2023}
}
The original description of the Syne Tune library follows below.
Syne Tune: Large-Scale and Reproducible Hyperparameter Optimization
This package provides state-of-the-art distributed hyperparameter optimizers (HPO) with the following key features:
- wide coverage (>20) of different HPO methods for asynchronous optimization with multiple workers, including:
- advanced multi-fidelity methods supporting model-based decisions (BOHB and MOBSTER)
- transfer-learning optimizers that achieve better and better performance when used repeatedly
- multi-objective optimizers that can tune multiple objectives simultaneously (such as accuracy and latency)
- you can run HPO in different environments (locally, AWS, simulation) by changing one line of code (see the sketch after this list)
- out-of-the-box tabulated benchmarks available for several domains with efficient simulations that allow you to get results in seconds while preserving the real dynamics of asynchronous or synchronous HPO with any number of workers
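To illustrate the point about switching execution environments, here is a minimal sketch: only the trial backend object changes, while the scheduler and Tuner code stay the same. The SageMaker and simulator alternatives are only referenced in comments here; the examples listed later in this README (e.g. launch_height_sagemaker.py) show the authoritative setup.

from syne_tune.backend import LocalBackend

# Run trials as subprocesses on the current machine.
trial_backend = LocalBackend(entry_point='train_height.py')

# To run trials on AWS SageMaker or against a simulated (tabulated) benchmark instead,
# swap only this backend object; everything passed to the Tuner stays unchanged
# (see launch_height_sagemaker.py in the examples/ folder for the SageMaker variant).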
Installing
To install Syne Tune from pip, you can simply do:
pip install 'syne-tune[extra]==0.3.0'
or to get the latest version from git:
pip install --upgrade pip
git clone https://github.com/awslabs/syne-tune.git
cd syne-tune
pip install -e '.[extra]'
See the FAQ entry "What are the different installation options supported?" for more installation options.
See our change log for what changed in the latest version.
Getting started
To enable tuning, you have to report metrics from a training script so that they can be communicated to Syne Tune. This can be accomplished by calling report(), for instance report(epoch=epoch, loss=loss), as shown in the example below:
# train_height.py
import logging
import time

from syne_tune import Reporter
from argparse import ArgumentParser

if __name__ == '__main__':
    root = logging.getLogger()
    root.setLevel(logging.INFO)

    parser = ArgumentParser()
    parser.add_argument('--steps', type=int)
    parser.add_argument('--width', type=float)
    parser.add_argument('--height', type=float)
    args, _ = parser.parse_known_args()

    report = Reporter()
    for step in range(args.steps):
        dummy_score = (0.1 + args.width * step / 100) ** (-1) + args.height * 0.1
        # Feed the score back to Syne Tune.
        report(step=step, mean_loss=dummy_score, epoch=step + 1)
        time.sleep(0.1)
Once you have a script reporting metrics, you can launch a tuning as follows:
from syne_tune import Tuner, StoppingCriterion
from syne_tune.backend import LocalBackend
from syne_tune.config_space import randint
from syne_tune.optimizer.baselines import ASHA

# hyperparameter search space to consider
config_space = {
    'steps': 100,
    'width': randint(1, 20),
    'height': randint(1, 20),
}

tuner = Tuner(
    trial_backend=LocalBackend(entry_point='train_height.py'),
    scheduler=ASHA(
        config_space, metric='mean_loss', resource_attr='epoch', max_t=100,
        search_options={'debug_log': False},
    ),
    stop_criterion=StoppingCriterion(max_wallclock_time=15),
    n_workers=4,  # how many trials are evaluated in parallel
)
tuner.run()
The above example runs ASHA with 4 asynchronous workers on a local machine.
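Once the tuning job has finished, you can load and inspect its results. Below is a short sketch, assuming the load_experiment helper from syne_tune.experiments and its best_config, results and plot accessors; launch_plot_results.py in the examples/ folder shows the full workflow.

from syne_tune.experiments import load_experiment

# Load the results of a tuning job by its name (available as tuner.name after tuner.run()).
tuning_experiment = load_experiment(tuner.name)

# Best configuration found and the dataframe of all reported results.
print(tuning_experiment.best_config())
print(tuning_experiment.results.head())

# Plot the best metric value over wallclock time.
tuning_experiment.plot()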
Examples
You will find the following examples in the examples/ folder, illustrating different functionalities provided by Syne Tune:
- launch_height_baselines.py: launches HPO locally, tuning a simple script train_height_example.py for several baselines
- launch_height_ray.py: launches HPO locally with Ray Tune scheduler
- launch_height_moasha.py: shows how to tune a script reporting multiple objectives with multi-objective Asynchronous Hyperband (MOASHA)
- launch_height_standalone_scheduler.py: launches HPO locally with a custom scheduler that cuts any trial that is not in the top 80%
- launch_height_sagemaker_remotely.py: launches the HPO loop on SageMaker rather than on a local machine; trials can be executed either on the remote machine or distributed again as separate SageMaker training jobs
- launch_height_sagemaker.py: launches HPO on SageMaker to tune a SageMaker PyTorch estimator
- launch_height_sagemaker_custom_image.py: launches HPO on SageMaker to tune an entry point with a custom Docker image
- launch_plot_results.py: shows how to plot results of an HPO experiment
- launch_fashionmnist.py: launches HPO locally, tuning a multi-layer perceptron on Fashion MNIST; this employs an easy-to-use benchmark convention
- launch_huggingface_classification.py: launches HPO on SageMaker to tune a SageMaker Hugging Face estimator for sentiment classification
- launch_tuning_gluonts.py: launches HPO locally to tune a gluon-ts time series forecasting algorithm
- launch_rl_tuning.py: launches HPO locally to tune an RL algorithm on the cartpole environment
FAQ and Tutorials
You can check our FAQ to learn more about Syne Tune functionalities.
- What are the different installation options supported?
- How can I run on AWS and SageMaker?
- What are the metrics reported by default when calling the Reporter?
- How can I utilize multiple GPUs?
- What is the default mode when performing optimization?
- How are trials evaluated on a local machine?
- What does the output of the tuning contain?
- Where can I find the output of the tuning?
- How can I enable trial checkpointing?
- Which schedulers make use of checkpointing?
- Is the tuner checkpointed?
- Where can I find the output of my trials?
- How can I plot the results of a tuning?
- How can I specify additional tuning metadata?
- How do I append additional information to the results which are stored?
- I don’t want to wait, how can I launch the tuning on a remote machine?
- How can I run many experiments in parallel?
- How can I access results after tuning remotely?
- How can I specify dependencies to remote launcher or when using the SageMaker backend?
- How can I benchmark experiments from the command line?
- What different schedulers do you support? What are the main differences between them?
- How do I define the search space?
- How can I visualize the progress of my tuning experiment with Tensorboard?
Do you want to know more? Here are a number of tutorials.
- Basics of Syne Tune
- Using the built-in schedulers
- Choosing a configuration space (a minimal sketch follows after this list)
- Using the command line launcher to benchmark schedulers
- Using and extending the list of benchmarks
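As a taste of what the configuration space tutorial covers, here is a minimal sketch of defining a search space. It assumes the randint, uniform, loguniform and choice primitives from syne_tune.config_space (randint is already used in the Getting started example above); the hyperparameter names are purely illustrative.

from syne_tune.config_space import randint, uniform, loguniform, choice

config_space = {
    'epochs': 100,                            # constants are passed through unchanged
    'num_layers': randint(1, 8),              # integer sampled uniformly from [1, 8]
    'dropout': uniform(0.0, 0.5),             # float sampled uniformly from [0.0, 0.5]
    'learning_rate': loguniform(1e-5, 1e-1),  # float sampled log-uniformly
    'optimizer': choice(['adam', 'sgd']),     # categorical choice
}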
Security
See CONTRIBUTING for more information.
Citing Syne Tune
If you use Syne Tune in a scientific publication, please cite the following paper:
"Syne Tune: A Library for Large Scale Hyperparameter Tuning and Reproducible Research" First Conference on Automated Machine Learning 2022
@inproceedings{
salinas2022syne,
title={Syne Tune: A Library for Large Scale Hyperparameter Tuning and Reproducible Research},
author={David Salinas and Matthias Seeger and Aaron Klein and Valerio Perrone and Martin Wistuba and Cedric Archambeau},
booktitle={First Conference on Automated Machine Learning (Main Track)},
year={2022},
url={https://openreview.net/forum?id=BVeGJ-THIg9}
}
License
This project is licensed under the Apache-2.0 License.