<div class="csl-bib-body">
<div class="csl-entry">Schmidt, D. (2022). Dojo: A Benchmark for Large Scale Multi-Task Reinforcement Learning. In <i>ALOE 2022. Accepted Papers</i>. Workshop on Agent Learning in Open-Endedness (ALOE) at ICLR 2022. https://doi.org/10.34726/4263</div>
</div>
-
dc.identifier.uri
http://hdl.handle.net/20.500.12708/177469
-
dc.identifier.uri
https://doi.org/10.34726/4263
-
dc.description
There are no proceedings for ALOE 2022, but authors could opt to have links to their accepted papers (such as this one) displayed on the workshop website.
-
dc.description.abstract
We introduce Dojo, a reinforcement learning environment intended as a benchmark for evaluating RL agents' capabilities in the areas of multi-task learning, generalization, transfer learning, and curriculum learning. In this work, we motivate our benchmark, compare it to existing methods, and empirically demonstrate its suitability for the purpose of studying cross-task generalization. We establish a multi-task baseline across the whole benchmark as a reference for future research and discuss the achieved results and encountered issues. Finally, we provide experimental protocols and evaluation procedures to ensure that results are comparable across experiments. We also supply tools allowing researchers to easily understand their agents' performance across a wide variety of metrics.
en
dc.language.iso
en
-
dc.rights.uri
http://creativecommons.org/licenses/by/4.0/
-
dc.subject
Machine Learning
en
dc.subject
Reinforcement Learning
en
dc.subject
Benchmarks
en
dc.title
Dojo: A Benchmark for Large Scale Multi-Task Reinforcement Learning
en
dc.type
Inproceedings
en
dc.type
Konferenzbeitrag
de
dc.rights.license
Creative Commons Namensnennung 4.0 International
de
dc.rights.license
Creative Commons Attribution 4.0 International
en
dc.identifier.doi
10.34726/4263
-
dc.type.category
Poster Contribution
-
tuw.booktitle
ALOE 2022. Accepted Papers
-
tuw.peerreviewed
true
-
tuw.researchTopic.id
I4a
-
tuw.researchTopic.name
Information Systems Engineering
-
tuw.researchTopic.value
100
-
tuw.linking
https://openreview.net/forum?id=rHr8BXvZIZ5
-
tuw.publication.orgunit
E194-06 - Forschungsbereich Machine Learning
-
tuw.publication.orgunit
E194 - Institut für Information Systems Engineering
-
dc.identifier.libraryid
AC17204372
-
dc.description.numberOfPages
13
-
dc.rights.identifier
CC BY 4.0
de
dc.rights.identifier
CC BY 4.0
en
tuw.event.name
Workshop on Agent Learning in Open-Endedness (ALOE) at ICLR 2022