The competition is over. The results can be found here.
Doom has been considered one of the most influential titles in the game industry, as it popularized the first-person shooter (FPS) genre and pioneered immersive 3D graphics. Even though more than 20 years have passed since Doom's release, the methods for developing AI bots have not improved significantly in newer FPS productions. In particular, bots still have to "cheat" by accessing the game's internal data, such as maps, locations of objects, and positions of (player or non-player) characters. In contrast, a human can play FPS games using a computer screen as the only source of information. Can AI effectively play Doom using only raw visual input?
The participants of the Visual Doom AI competition submit a controller (in C++, Python, or Java) that plays Doom. The provided software gives real-time access to the screen buffer, which is the only information the agent can base its decisions on. The winner of the competition will be chosen in a deathmatch tournament.
Although the participants are allowed to use any technique to develop a controller, the design and efficiency of the Visual Doom AI environment allows and encourages participants to use machine learning methods such as deep reinforcement learning.
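As an illustration of what a learning-based controller might do with raw visual input, the sketch below downsamples a grayscale frame and picks an action epsilon-greedily. This is not official competition code: the frame format, action set, and Q-values are stand-in assumptions for a learned policy.

```python
import random

# Assumed action set for illustration; the real buttons (attack, turn,
# move, etc.) are exposed through the environment's configuration.
ACTIONS = [
    [1, 0, 0],  # attack
    [0, 1, 0],  # turn left
    [0, 0, 1],  # turn right
]

def downsample(frame, factor=4):
    """Average-pooling downsample of a 2-D grayscale frame (list of
    lists of floats), to shrink the input a learner has to process."""
    h, w = len(frame), len(frame[0])
    out = []
    for y in range(0, h - h % factor, factor):
        row = []
        for x in range(0, w - w % factor, factor):
            block = [frame[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def epsilon_greedy(q_values, epsilon=0.1):
    """Pick a random action index with probability epsilon,
    otherwise the index of the highest Q-value."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda i: q_values[i])
```

In a real agent, the downsampled frame would feed a neural network that produces the Q-values; here they would simply be supplied by whatever model the participant trains.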
Different weapons and items are available. Two maps are provided for training. The final evaluation will take place on three maps unknown to the participants beforehand.
Your controller will fight against all other controllers for 10 minutes on a single map. Each game will be repeated 12 times for Track 1 and 4 times for Track 2, which involves three maps. The controllers will be ranked by the number of frags.
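The ranking rule above can be sketched as a small helper that totals frags over the repeated games; the input format is an assumption for illustration, not the competition's actual scoring code.

```python
def rank_by_frags(results):
    """results: {bot_name: [frags in each repeated game]}.
    Returns bot names sorted by total frags, best first."""
    totals = {bot: sum(frags) for bot, frags in results.items()}
    return sorted(totals, key=totals.get, reverse=True)

# Hypothetical Track 1 excerpt: each bot played the same map repeatedly.
standings = rank_by_frags({
    "alpha": [3, 2, 4],
    "bravo": [10, 0, 1],
    "gamma": [1, 1, 1],
})
```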
If there are many submissions, we will introduce elimination rounds.
During the contest, ViZDoom's capabilities will be somewhat limited. This will be enforced by loading _vizdoom.cfg and the +vizdoom_nocheat flag. In any case, here is the list of settings and methods your agents are not allowed to use during the showdown:
All paths and configs assume that the programs are run from the team directory, so run your agent from its own directory and do not change the working directory during runtime.
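A simple way to stay robust to the working-directory requirement is to anchor every resource path at the agent's own location rather than relying on the process working directory. This is a generic sketch, not part of the competition API; `agent.cfg` is a hypothetical file name.

```python
import os

def resource_path(filename, base=None):
    """Return an absolute path to a bundled resource, anchored at the
    agent's own directory instead of the current working directory."""
    if base is None:
        try:
            base = os.path.dirname(os.path.abspath(__file__))
        except NameError:  # interactive session: no __file__
            base = os.getcwd()
    return os.path.join(base, filename)

# Example: locate a (hypothetical) config file next to the agent script.
config = resource_path("agent.cfg")
```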
TECHNICALLY ALLOWED BUT POINTLESS.
For more information about how the code will be run and how to prepare agents please refer to the submission page.
We reserve the right to disqualify bots that behave randomly and/or are evidently unintelligent.
To make a submission for the competition follow the submission page guidelines.
In the spirit of open science, all submissions will be published on this website after the competition is finished.
$ python -V
Python 2.7.9

gcc version 4.9.2 (Ubuntu 4.9.2-10ubuntu13)

$ python -c "import numpy; print(numpy.__version__)"
1.11.0

$ python -c "import scipy; print(scipy.__version__)"
0.14.1

$ python -c "import theano; print(theano.__version__)"
Using gpu device 0: GeForce GTX 960 (CNMeM is disabled, cuDNN 5004)
0.8.2

$ python -c "import lasagne; print(lasagne.__version__)"
Using gpu device 0: GeForce GTX 960 (CNMeM is disabled, cuDNN 5004)
0.2.dev1

$ python -c "import cv2; print(cv2.__version__)"
3.1.0

$ python -c "import skimage; print(skimage.__version__)"
0.12.3

$ python -c "import keras; print(keras.__version__)"
Using Theano backend.
Using gpu device 0: GeForce GTX 960 (CNMeM is disabled, cuDNN 5004)
1.0.4

$ python -c "import tensorflow; print(tensorflow.__version__)"
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcuda.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcurand.so locally
0.9.0rc0

$ java -version
java version "1.8.0_91"
Java(TM) SE Runtime Environment (build 1.8.0_91-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.91-b14, mixed mode)
More available on demand.