Schmidt, D. (2022). Dojo: A Benchmark for Large Scale Multi-Task Reinforcement Learning. In Accepted Papers, Workshop on Agent Learning in Open-Endedness (ALOE) at ICLR 2022. https://doi.org/10.34726/4263
We introduce Dojo, a reinforcement learning environment intended as a benchmark for evaluating RL agents' capabilities in multi-task learning, generalization, transfer learning, and curriculum learning. In this work, we motivate our benchmark, compare it to existing methods, and empirically demonstrate its suitability for studying cross-task generalization. We establish a multi-task baseline across the whole benchmark as a reference for future research and discuss the results achieved and the issues encountered. Finally, we provide experimental protocols and evaluation procedures to ensure that results are comparable across experiments. We also supply tools that allow researchers to easily understand their agents' performance across a wide variety of metrics.
There are no proceedings for ALOE 2022, but authors could opt to have links to their accepted papers (such as this one) displayed on the workshop website.