# MDP Playground
A Python package to inject low-level dimensions of hardness in RL environments. There are toy environments to design and debug RL agents, and complex environment wrappers for Gym environments (including Atari and Mujoco) to test robustness to these dimensions in complex environments.
## Getting started
There are 4 parts to the package:
1) **Toy Environments**: The base toy environment in [`mdp_playground/envs/rl_toy_env.py`](mdp_playground/envs/rl_toy_env.py) implements the toy environment functionality, including discrete and continuous environments, and is parameterised by a `config` dict which contains all the information needed to instantiate the required toy MDP. Please see [`example.py`](example.py) for some simple examples of how to use these. For further details, please refer to the documentation in [`mdp_playground/envs/rl_toy_env.py`](mdp_playground/envs/rl_toy_env.py).
2) **Complex Environment Wrappers**: Similar to the toy environments, these are parameterised by a `config` dict which contains all the information needed to inject the dimensions into Gym environments (tested with Atari, Mujoco and ProcGen). Please see [`example.py`](example.py) for some simple examples of how to use these. The generic Gym wrapper (for Atari, ProcGen, etc.) is in [`mdp_playground/envs/gym_env_wrapper.py`](mdp_playground/envs/gym_env_wrapper.py) and the Mujoco-specific wrapper is in [`mdp_playground/envs/mujoco_env_wrapper.py`](mdp_playground/envs/mujoco_env_wrapper.py).
3) **Experiments**: Experiments are launched using [`run_experiments.py`](run_experiments.py). Config files for experiments are located inside the [`experiments`](experiments) directory. Please read the [instructions](#running-experiments) below for details on how to launch experiments.
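To illustrate the idea behind the toy environments, here is a minimal, self-contained sketch (not the actual MDP Playground API — class and config-key names are made up for illustration) of an MDP parameterised by a `config` dict, where a **delay** dimension holds back rewards for a configurable number of steps:

```python
import random
from collections import deque

class DelayedRewardToyMDP:
    """Illustrative sketch, NOT the package's RLToyEnv: a tiny discrete MDP
    parameterised by a config dict, with a configurable reward delay."""

    def __init__(self, config):
        self.n_states = config.get("state_space_size", 8)
        self.delay = config.get("delay", 0)
        self.rng = random.Random(config.get("seed", 0))
        # A randomly chosen rewarding state; reaching it earns reward 1.
        self.rewarding_state = self.rng.randrange(self.n_states)
        # FIFO buffer of pending rewards implements the delay dimension.
        self.reward_buffer = deque([0.0] * self.delay)
        self.state = 0

    def step(self, action):
        # Toy transition dynamics: the action directly selects the next state.
        self.state = action % self.n_states
        immediate = 1.0 if self.state == self.rewarding_state else 0.0
        # Emit the reward that was earned `delay` steps ago.
        self.reward_buffer.append(immediate)
        return self.state, self.reward_buffer.popleft()

config = {"state_space_size": 4, "delay": 2, "seed": 0}
env = DelayedRewardToyMDP(config)
# Stepping into the rewarding state pays off only 2 steps later.
rewards = [env.step(env.rewarding_state)[1] for _ in range(3)]
# rewards == [0.0, 0.0, 1.0]
```

Other dimensions (sequence length, reward density, P/R noise, ...) are injected analogously in the real package via keys of the `config` dict; see [`example.py`](example.py) for the actual usage.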
## Running experiments from the main paper
For reproducing experiments from the main paper, please continue reading.
For general install and usage instructions, please see [here](#installation).
### Installation for running experiments from the main paper
We recommend using `conda` environments to manage virtual `Python` environments for running the experiments. Unfortunately, you will have to maintain two environments: one for the "older" **discrete toy** experiments and one for the "newer" **continuous and complex** experiments from the paper. As mentioned in the Appendix section **Tuned Hyperparameters** in the paper, this is because of issues with Ray, the library that we used for our baseline agents.
```bash
# For the spider plots, experiments for all the agents and dimensions will need to be run from the experiments directory, i.e., for discrete: dqn_p_r_noises.py, a3c_p_r_noises, ..., dqn_seq_del, ..., dqn_sparsity, ..., dqn_image_representations, ...
# and then follow the instructions in plot_experiments.ipynb

# For the bsuite debugging experiment, please run the bsuite sonnet dqn agent on our toy environment while varying reward density. Commit https://github.com/deepmind/bsuite/commit/5116216b62ce0005100a6036fb5397e358652530 from the bsuite repo should work fine.
```
For plotting, please follow the instructions [here](#plotting).
## Installation
For reproducing experiments from the main paper, please see [here](#running-experiments-from-the-main-paper).
For continued usage of MDP Playground, which is still under active development, please continue reading.
### Production use
We recommend using `conda` to manage environments. After setting up the environment, you can install MDP Playground in two ways:
#### Manual
From the repository root, run `pip install -e .[extras]`.
This might be the preferred way if you want easy access to the included experiments.
#### From PyPI
Alternatively, MDP Playground can also be installed from PyPI. Just run:
```bash
pip install mdp_playground[extras]
```
You can run experiments using [`run_experiments.py`](run_experiments.py).<br>
The `exp_name` is a prefix for the filenames of CSV files where stats for the experiments are recorded. The CSV stats files will be saved to the current directory.<br>
The command line arguments also usually have defaults. Please refer to the documentation inside [`run_experiments.py`](run_experiments.py) for further details on the command line arguments. (Or run it with the `-h` flag to bring up help.)
The config files for experiments from the [paper](https://arxiv.org/abs/1909.07750) are in the `experiments` directory.<br>
The name of the file corresponding to an experiment is formed as: `<algorithm_name>_<dimension_names>.py` for the toy environments<br>
And as: `<algorithm_name>_<env>_<dimension_names>.py` for the complex environments<br>
Some sample `algorithm_name`s are: `dqn`, `rainbow`, `a3c`, `ddpg`, `td3` and `sac`<br>
Some sample `dimension_name`s are: `seq_del` (for **delay** and **sequence length** varied together) and `p_r_noises` (for **P** and **R noises** varied together).<br>
For example, for algorithm **DQN** when varying dimensions **delay** and **sequence length**, the corresponding experiment file is [`dqn_seq_del.py`](experiments/dqn_seq_del.py).
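The naming convention above can be sketched as a small helper (a hypothetical function for illustration only, not part of the package; the `halfcheetah` env name below is likewise just an assumed example):

```python
def experiment_config_file(algorithm, dimensions, env=None):
    """Build an experiment config filename following the convention above:
    <algorithm_name>_<dimension_names>.py for toy environments, and
    <algorithm_name>_<env>_<dimension_names>.py for complex environments."""
    parts = [algorithm] + ([env] if env else []) + [dimensions]
    return "_".join(parts) + ".py"

# Toy-environment experiment: DQN varying delay and sequence length together.
assert experiment_config_file("dqn", "seq_del") == "dqn_seq_del.py"
# Complex-environment experiment (env name is an assumption for illustration).
assert experiment_config_file("ddpg", "p_noise", env="halfcheetah") == "ddpg_halfcheetah_p_noise.py"
```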
The CSV stats files will be saved to the current directory and can be analysed in [`plot_experiments.ipynb`](plot_experiments.ipynb).
## Plotting
To plot results from experiments, please make sure that you installed MDP Playground manually for production use (see [here](#manual)), then run `jupyter-notebook` and open [`plot_experiments.ipynb`](plot_experiments.ipynb) in Jupyter. There are instructions within each of the cells on how to generate and save plots.
## Documentation
The documentation can be found at: https://automl.github.io/mdp-playground/