Experimentation on real robots is very costly in terms of time and money. For this reason, a large part of the reinforcement learning community uses simulators, such as those available through OpenAI Gym, to develop and benchmark algorithms. However, insights gained in simulation do not necessarily translate to real robots, in particular for tasks involving complex interaction with the environment.
The purpose of this competition is to alleviate this problem by allowing participants to experiment on a real robot as easily as on a simulator. The tasks comprise a range of dexterous manipulation problems, such as pushing, grasping, flipping, and spinning objects.
We developed robot hardware and software suitable for this setup. The robots will be hosted at our institute, and participants will execute their algorithms remotely through a simple, convenient interface.
Each team will have a budget of more than 100 robot hours, with no restrictions on the algorithms it executes.
We hope that this competition and potential follow-ups will
i) lead to a more inclusive and coordinated robotic learning community and
ii) enable the generation of orders of magnitude more data than is possible in individual labs with typical robot platforms.
These factors may help advance the state of the art in dexterous manipulation, which has myriad applications, such as construction, cleaning, care of the sick and elderly, agriculture, and assembly.