Note that this year's edition will be organized on the crowdAI platform!
Last update: Fri Jul 13 23:52:00 CEST 2018
The previous editions, in 2016 and 2017, were successful, with many participants submitting high-quality bots. Nonetheless, the bots have not yet reached human level in any track, so there is much room for improvement.
The participants of the Visual Doom AI competition are supposed to submit an agent (in Python or Lua) that plays Doom using mainly visual input. Our ViZDoom framework gives real-time access to the screen as the only information the agent can base its decisions on. This year, there are two completely different tracks.
Track 1 challenges agents to beat single-player levels as fast as possible. Levels vary in difficulty, so the entry threshold is low - you do not need sophisticated knowledge to start and can learn as you go!
Track 2 is a full-on Doom deathmatch (as in previous years) on unknown maps. Agents will compete in multiplayer games, and the best frag collector will emerge victorious.
Although participants are allowed to use any technique to develop a controller, the design and efficiency of the Visual Doom AI environment encourage the use of machine learning methods such as deep reinforcement learning.
The task is to create a bot that completes randomly generated levels of Doom, filled with monsters and resources. The PyOblige random level generator is provided for training. The final evaluation will also use maps produced by this generator.
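As an illustration, generating a training map with PyOblige might look like the sketch below. The configuration keys, values, and file name here are our assumptions for illustration; consult the PyOblige documentation for the options actually used in the competition.

```python
# Hypothetical sketch: generating a random training WAD with PyOblige.
# Config keys/values below are illustrative, not the competition settings.
import oblige

generator = oblige.DoomLevelGenerator()
generator.set_seed(666)
generator.set_config({
    "size": "micro",      # small maps for quick training episodes
    "health": "more",
    "weapons": "sooner",
})

wad_path = "train_666.wad"
num_maps = generator.generate(wad_path)  # writes the WAD, returns map count
print("Generated", num_maps, "maps in", wad_path)
```

The generated WAD can then be loaded into ViZDoom as a scenario for training episodes.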
Bots will be ranked by their total completion time over a set of levels; the per-level times will be summed. Death will reset progress on a level without resetting its timer. Each level will have a time limit, which will also count as the maximum (worst) time for that level.
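As a concrete illustration, the ranking rule above could be sketched as follows. The function name and the use of `None` for an unfinished level are our assumptions, not the organizers' actual evaluation code.

```python
# Hypothetical sketch of the Track 1 scoring rule: sum per-level
# completion times, capping each level at its time limit; a level
# that was never finished (None) scores the full limit.
def track1_total_time(level_times, level_limits):
    total = 0.0
    for time, limit in zip(level_times, level_limits):
        # None means the bot ran out of time without finishing the level.
        total += limit if time is None else min(time, limit)
    return total

# Example: three levels with 120 s limits; level 2 was never completed.
print(track1_total_time([45.0, None, 80.0], [120.0, 120.0, 120.0]))  # 245.0
```

Lower totals rank higher, so dying (and thus losing progress while the timer keeps running) is costly but never worse than the per-level cap.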
Generated maps may contain various monsters, hazardous surfaces (acid and lava), weapons and items, and doors (which need to be opened with the USE key). Difficulty will be moderated as the competition progresses: at the beginning, test evaluation will take place on very simple maps, and if the submitted controllers beat them with ease, the difficulty will be increased.
The task is to create bots that fight against each other in a regular deathmatch, where different weapons and items are available. Five maps are provided for training and more maps can be found at Doomworld. The final evaluation will take place on several (secret) testing maps.
Bots will fight against each other for 10 minutes on selected maps. We will sum the results over at least 10 games on different maps. The controllers will be ranked by the number of frags, which for this competition is defined as:
frags = number of killed opponents - number of suicides
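The formula above is a direct subtraction; a trivial sketch (function and parameter names are ours):

```python
# Track 2 ranking metric: frags = killed opponents minus suicides.
def frags(killed_opponents, suicides):
    return killed_opponents - suicides

# Example: 12 opponents killed, 3 suicides.
print(frags(12, 3))  # 9
```

Note that the score can be negative if a bot kills itself more often than it kills opponents.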
To be announced.
Would you like to sponsor the competition? Contact us.
These dates are not fixed yet and may change.
Each team is allowed a single submission with one bot. Teams and bots are not allowed to cooperate.
Evaluation of the submitted controllers will take place on the crowdAI platform. We will announce more details soon.
Before the actual evaluation, the submitted controller will have to pass a simple qualification test: complete a simple level without monsters for the singleplayer track, or win a simple fight against Doom's built-in bot for the multiplayer track.
We also reserve the right to disqualify bots that behave randomly, are evidently unintelligent, or have programmed malicious behaviour.
During the contest, ViZDoom will run with somewhat limited capabilities. This will be enforced by loading _vizdoom.cfg and the +vizdoom_nocheat flag; nevertheless, here is the list of settings and methods your agents are not allowed to use during the competition:
All paths and configs are set up so that the programs must be run from the team directory, so run the agent from its directory and do not change the working directory at runtime.
TECHNICALLY ALLOWED BUT POINTLESS.
More information about how the code will be run and how to prepare agents will be published soon. However, as in last year's competition, the submission will have to contain a Dockerfile. For examples, see the submissions from previous years at https://github.com/mihahauke/vizdoom_cig2017.
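For orientation, a submission Dockerfile might look roughly like the sketch below. The base image, package installation, and entrypoint file name are all illustrative assumptions; follow the official submission instructions and the previous years' examples linked above rather than this sketch.

```dockerfile
# Hypothetical minimal submission Dockerfile (image, packages, and
# file names are illustrative, not the organizers' requirements).
FROM python:3.6
RUN pip install vizdoom
COPY . /home/agent
# The rules require the agent to run from its own directory.
WORKDIR /home/agent
CMD ["python", "agent.py"]
```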
Selected submissions might be published on this website after the competition is finished.