Welcome to EnvPool!

EnvPool is a C++-based batched environment pool built on pybind11 and a thread pool. It delivers high performance (~1M raw FPS on Atari games / ~3M raw FPS with the Mujoco physics engine on a DGX-A100) and compatible APIs (it supports both gym and dm_env, both sync and async execution, and both single-player and multi-player environments).
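
For a quick taste of the batched API, here is a minimal synchronous sketch (assuming an Atari build of EnvPool; the exact reset/step return signature follows the classic gym API and may differ across gym/gymnasium versions):

import numpy as np
import envpool

# Create a pool of 16 Pong environments behind the gym API;
# envpool.make_dm gives the dm_env flavor instead.
env = envpool.make("Pong-v5", env_type="gym", num_envs=16)

obs = env.reset()                     # batched: obs.shape == (16, 4, 84, 84)
act = np.zeros(16, dtype=int)         # one action per environment
obs, rew, done, info = env.step(act)  # steps all 16 envs in one call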

Here are EnvPool’s highlights:

  • Compatible with OpenAI gym APIs and DeepMind dm_env APIs;

  • Manage a pool of envs and interact with them through batched APIs by default;

  • Support both synchronous execution and asynchronous execution (see the async sketch after this list);

  • Support both single-player and multi-player environments;

  • Easy C++ developer API for adding new envs: see Customized C++ environment integration;

  • Free ~2x speedup with only a single environment;

  • 1 million Atari frames / 3 million Mujoco steps simulated per second with 256 CPU cores, ~20x the throughput of a Python subprocess-based vector env;

  • ~3x the throughput of a Python subprocess-based vector env on a low-resource setup such as 12 CPU cores;

  • XLA support with the JAX jit function (see the sketch after this list);

  • Compared with existing GPU-based solutions (Brax / Isaac Gym), EnvPool is a general solution for parallelizing various kinds of RL environments;

  • Compatible with some existing RL libraries, e.g., Stable-Baselines3, Tianshou, ACME, CleanRL (Solving Pong in 5 mins), rl_games (2 mins Pong, 15 mins Breakout, 5 mins Ant and HalfCheetah).
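
As referenced in the synchronous/asynchronous bullet above, here is a minimal async loop sketch, again assuming the classic gym-style 4-tuple return and a zero action standing in for a real policy. With batch_size smaller than num_envs, each recv returns whichever envs finish stepping first:

import numpy as np
import envpool

# 64 envs in the pool; each recv() returns a batch of the 16 ready first.
env = envpool.make("Pong-v5", env_type="gym", num_envs=64, batch_size=16)
env.async_reset()  # start all environments without blocking

for _ in range(1000):
    obs, rew, done, info = env.recv()       # wait for any 16 ready envs
    env_id = info["env_id"]                 # ids of the envs in this batch
    act = np.zeros(len(env_id), dtype=int)  # a real policy would go here
    env.send(act, env_id)                   # hand back actions, don't block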

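The XLA bullet above refers to stepping EnvPool from inside jit-compiled JAX code. A rough sketch, assuming the dm_env flavor and an EnvPool build with XLA support; the policy function here is a hypothetical placeholder for a real network:

import jax
import jax.numpy as jnp
import envpool

env = envpool.make("Pong-v5", env_type="dm", num_envs=8)
handle, recv, send, step = env.xla()  # XLA-compatible env functions

def policy(obs):
    # hypothetical placeholder: always take action 0
    return jnp.zeros(obs.shape[0], dtype=jnp.int32)

def actor_step(i, loop_var):
    handle0, states = loop_var
    action = policy(states.observation.obs)
    handle1 = send(handle0, action, states.observation.env_id)
    return recv(handle1)  # (new handle, next batch of states)

@jax.jit
def run_actor_loop(num_steps, init_var):
    # the whole interaction loop is compiled, env steps included
    return jax.lax.fori_loop(0, num_steps, actor_step, init_var)

env.async_reset()
handle, states = recv(handle)
run_actor_loop(100, (handle, states))
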
Installation

EnvPool is currently hosted on PyPI. It requires Python >= 3.7.

You can install EnvPool with the following command:

$ pip install envpool

After installation, open a Python console and type

import envpool
print(envpool.__version__)

If no error occurs, you have successfully installed EnvPool.

EnvPool is still under development; you can also check out the documentation for the stable version at envpool.readthedocs.io/en/stable/.
