<div align="center"> <img src="docs/img/qdax_logo.png" alt="qdax_logo" width="140"></img> </div>QDax: Accelerated Quality-Diversity
QDax is a tool to accelerate Quality-Diversity (QD) and neuro-evolution algorithms through hardware accelerators and massive parallelization. QD algorithms usually take days/weeks to run on large CPU clusters. With QDax, QD algorithms can now be run in minutes! ⏩ ⏩ 🕛
QDax has been developed as a research framework: it is flexible, easy to extend and build on, and can be used for any problem setting. Get started with a simple example and run a QD algorithm in minutes here!
- QDax paper
- QDax documentation
## Installation
QDax is available on PyPI and can be installed with:
```bash
pip install qdax
```
To install QDax with CUDA 12 support, use:
```bash
pip install qdax[cuda12]
```
Alternatively, the latest commit of QDax can be installed directly from source with:
```bash
pip install git+https://github.com/adaptive-intelligent-robotics/QDax.git@main
```
Installing QDax via `pip` installs a CPU-only version of JAX by default. To use QDax with NVIDIA GPUs, you must first install CUDA, cuDNN, and JAX with GPU support.

Alternatively, we provide and recommend Docker or conda environments for using the repository, both of which provide GPU support by default. Detailed steps to do so are available in the documentation.
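As a quick, optional sanity check (a minimal sketch, not part of the official setup steps), you can confirm that JAX detects your accelerator before running QDax:

```python
import jax

# List the devices visible to JAX; on a correctly configured GPU machine
# this should include a GPU/CUDA device rather than only the CPU.
print(jax.devices())
print(jax.default_backend())  # e.g. "gpu" or "cpu"
```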
## Basic API Usage
For a full, interactive example of how QDax works, we recommend starting with the tutorial-style Colab notebook. It demonstrates the MAP-Elites algorithm used to evolve a population of controllers on a chosen Brax environment (Walker by default).
However, a summary of the main API usage is provided below:
```python
import jax
import functools
from qdax.core.map_elites import MAPElites
from qdax.core.containers.mapelites_repertoire import compute_euclidean_centroids
from qdax.tasks.arm import arm_scoring_function
from qdax.core.emitters.mutation_operators import isoline_variation
from qdax.core.emitters.standard_emitters import MixingEmitter
from qdax.utils.metrics import default_qd_metrics

seed = 42
num_param_dimensions = 100  # num DoF arm
init_batch_size = 100
batch_size = 1024
num_iterations = 50
grid_shape = (100, 100)
min_param = 0.0
max_param = 1.0
min_bd = 0.0
max_bd = 1.0

# Init a random key
random_key = jax.random.PRNGKey(seed)

# Init population of controllers
random_key, subkey = jax.random.split(random_key)
init_variables = jax.random.uniform(
    subkey,
    shape=(init_batch_size, num_param_dimensions),
    minval=min_param,
    maxval=max_param,
)

# Define emitter
variation_fn = functools.partial(
    isoline_variation,
    iso_sigma=0.05,
    line_sigma=0.1,
    minval=min_param,
    maxval=max_param,
)
mixing_emitter = MixingEmitter(
    mutation_fn=lambda x, y: (x, y),
    variation_fn=variation_fn,
    variation_percentage=1.0,
    batch_size=batch_size,
)

# Define a metrics function
metrics_fn = functools.partial(
    default_qd_metrics,
    qd_offset=0.0,
)

# Instantiate MAP-Elites
map_elites = MAPElites(
    scoring_function=arm_scoring_function,
    emitter=mixing_emitter,
    metrics_function=metrics_fn,
)

# Compute the centroids
centroids = compute_euclidean_centroids(
    grid_shape=grid_shape,
    minval=min_bd,
    maxval=max_bd,
)

# Initialize repertoire and emitter state
repertoire, emitter_state, random_key = map_elites.init(init_variables, centroids, random_key)

# Run MAP-Elites loop
for i in range(num_iterations):
    (repertoire, emitter_state, metrics, random_key,) = map_elites.update(
        repertoire,
        emitter_state,
        random_key,
    )

# Get contents of repertoire
repertoire.genotypes, repertoire.fitnesses, repertoire.descriptors
```
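As an illustration of what can be done with the repertoire afterwards (a sketch, not part of the original example; it assumes, as in this arm task, that genotypes are a plain array and that empty cells store a fitness of `-inf`):

```python
import jax.numpy as jnp

# Index of the best individual in the repertoire
# (empty cells are assumed to hold -inf fitness, so they are never selected).
best_index = jnp.argmax(repertoire.fitnesses)

best_fitness = repertoire.fitnesses[best_index]
best_descriptor = repertoire.descriptors[best_index]
best_genotype = repertoire.genotypes[best_index]
```

For tasks whose genotypes are pytrees (e.g. neural network parameters), each leaf would need to be indexed, for instance with `jax.tree_util.tree_map`.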
## QDax core algorithms
QDax currently supports a range of QD algorithms; the full list is available in the documentation.
## QDax baseline algorithms
The QDax library also provides implementations for some useful baseline algorithms:
| Algorithm | Example |
| --- | --- |
| DIAYN | |
| DADS | |
| SMERL | |
| NSGA2 | |
| SPEA2 | |
| Population Based Training (PBT) | |
## QDax Tasks
The QDax library also provides implementations of several standard Quality-Diversity tasks. All of these implementations, along with their descriptions, are provided in the tasks directory.
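As a rough sketch of how a custom task could be plugged into the same pipeline (hypothetical code, not taken from the library; it assumes the scoring-function interface used in the MAP-Elites example above, i.e. a batch of genotypes and a random key mapped to fitnesses, descriptors, extra scores, and the key):

```python
import jax.numpy as jnp

def sphere_scoring_function(genotypes, random_key):
    # Hypothetical task: maximize the negated sphere function, and use the
    # first two parameters of each genotype as its behavior descriptor.
    fitnesses = -jnp.sum(genotypes**2, axis=-1)
    descriptors = genotypes[:, :2]
    extra_scores = {}
    return fitnesses, descriptors, extra_scores, random_key
```

Such a function could then replace `arm_scoring_function` when instantiating `MAPElites` in the example above.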
## Contributing
Issues and contributions are welcome. Please refer to the contribution guide in the documentation for more details.
## Related Projects
- EvoJAX: Hardware-Accelerated Neuroevolution. EvoJAX is a scalable, general-purpose, hardware-accelerated neuroevolution toolkit. Paper
- evosax: JAX-Based Evolution Strategies
## Citing QDax
If you use QDax in your research and want to cite it in your work, please use:
```bibtex
@article{chalumeau2024qdax,
  title={Qdax: A library for quality-diversity and population-based algorithms with hardware acceleration},
  author={Chalumeau, Felix and Lim, Bryan and Boige, Raphael and Allard, Maxime and Grillotti, Luca and Flageat, Manon and Mac{\'e}, Valentin and Richard, Guillaume and Flajolet, Arthur and Pierrot, Thomas and others},
  journal={Journal of Machine Learning Research},
  volume={25},
  number={108},
  pages={1--16},
  year={2024}
}
```
## Contributors
QDax was developed and is maintained by the Adaptive & Intelligent Robotics Lab (AIRL) and InstaDeep.
<div align="center"> <img align="center" src="docs/img/AIRL_logo.png" alt="AIRL_Logo" width="220"/> <img align="center" src="docs/img/instadeep_logo.png" alt="InstaDeep_Logo" width="220"/> </div> <div align="center"> <a href="https://github.com/limbryan" title="Bryan Lim"><img src="https://github.com/limbryan.png" height="auto" width="50" style="border-radius:50%"></a> <a href="https://github.com/maxiallard" title="Maxime Allard"><img src="https://github.com/maxiallard.png" height="auto" width="50" style="border-radius:50%"></a> <a href="https://github.com/Lookatator" title="Luca Grilloti"><img src="https://github.com/Lookatator.png" height="auto" width="50" style="border-radius:50%"></a> <a href="https://github.com/manon-but-yes" title="Manon Flageat"><img src="https://github.com/manon-but-yes.png" height="auto" width="50" style="border-radius:50%"></a> <a href="https://github.com/Aneoshun" title="Antoine Cully"><img src="https://github.com/Aneoshun.png" height="auto" width="50" style="border-radius:50%"></a> <a href="https://github.com/felixchalumeau" title="Felix Chalumeau"><img src="https://github.com/felixchalumeau.png" height="auto" width="50" style="border-radius:50%"></a> <a href="https://github.com/ranzenTom" title="Thomas Pierrot"><img src="https://github.com/ranzenTom.png" height="auto" width="50" style="border-radius:50%"></a> <a href="https://github.com/Egiob" title="Raphael Boige"><img src="https://github.com/Egiob.png" height="auto" width="50" style="border-radius:50%"></a> <a href="https://github.com/valentinmace" title="Valentin Mace"><img src="https://github.com/valentinmace.png" height="auto" width="50" style="border-radius:50%"></a> <a href="https://github.com/GRichard513" title="Guillaume Richard"><img src="https://github.com/GRichard513.png" height="auto" width="50" style="border-radius:50%"></a> <a href="https://github.com/flajolet" title="Arthur Flajolet"><img src="https://github.com/flajolet.png" height="auto" width="50" style="border-radius:50%"></a> <a href="https://github.com/remidebette" title="Rémi Debette"><img src="https://github.com/remidebette.png" height="auto" width="50" style="border-radius:50%"></a> </div>