Angry Birds Level Generation Competition
Organisers
Matthew Stephenson, Maastricht University (matthew.stephenson888@gmail.com)
Jochen Renz, Australian National University (jochen.renz@gmail.com)
Lucas Ferreira, UC Santa Cruz (lucasnfe@gmail.com)
Julian Togelius, New York University, US (julian@togelius.com)
Description
This year we will run our fourth Angry Birds Level Generation Competition. The goal of this competition is to build computer programs that can automatically create fun and challenging Angry Birds levels. What makes this competition harder than similar ones is that the generated levels must be stable under gravity, robust in the sense that a single action should not destroy large parts of the generated structure, and, most importantly, the levels should be fun to play, visually interesting and challenging to solve. Participants will be able to ensure the solvability and difficulty of their levels by using open-source Angry Birds AI agents that were developed for the Angry Birds AI competition. This competition will evaluate each level generator based on the overall fun or enjoyment factor of the levels it creates. Aside from the main prize for “most enjoyable levels”, two additional prizes for “most aesthetic levels” and “most challenging levels” will also be awarded. This evaluation will be done by an impartial panel of judges. Restrictions will be placed on which objects can be used in the generated levels (in order to prevent pre-generation of levels). We will generate 100 levels for each submitted generator and randomly select a fraction of those for the competition. There will be a penalty if levels are too similar. Each entrant will be evaluated for all prizes. More details on the competition rules can be found on the competition website aibirds.org. The competition will be based on the physics game implementation “Science Birds” by Lucas Ferreira, built using Unity3D.
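The rules above do not say how level similarity is measured, so the sketch below is only one possible illustration, not the official evaluation: it compares levels via Jaccard similarity over coarsely quantized block placements. Every function, data format and threshold here (block_signature, jaccard, similarity_penalty, the 0.8 cut-off) is hypothetical.

```python
from itertools import combinations

def block_signature(level, grid=0.5):
    """Represent a level as a set of (block type, snapped x, snapped y) tuples.
    `level` is assumed to be an iterable of (block_type, x, y) records."""
    return {(b, round(x / grid), round(y / grid)) for b, x, y in level}

def jaccard(a, b):
    """Jaccard similarity between two block-signature sets."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

def similarity_penalty(levels, threshold=0.8):
    """Count level pairs whose block signatures overlap more than `threshold`."""
    sigs = [block_signature(lvl) for lvl in levels]
    return sum(1 for s1, s2 in combinations(sigs, 2) if jaccard(s1, s2) > threshold)
```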
The video for the competition (using the baseline level generator and agents) is the following
Bot Bowl I
Organisers
Niels Justesen, PhD student, IT University of Copenhagen
Nicolai Overgaard Larsen, Danish Eurobowl Captain, former Eurobowl Committee Chairman.
Sebastian Risi, Associate Professor, IT University of Copenhagen
Julian Togelius, Associate Professor, New York University
Description
Bot Bowl I is an AI competition in the board game Blood Bowl. The competition uses the Fantasy Football AI (FFAI) framework [1], which simulates the game and offers an API for scripted bots and machine learning algorithms in Python. Blood Bowl is a major challenge due to the complexity introduced by having multiple actions each turn. For more details on why we think Blood Bowl should be the next board game challenge for AI, please read our paper [2]. The Bot Bowl I competition will feature one track, which uses the traditional board size of 26×15 squares with 11 players on each side. Participants are, however, limited to using a predefined human team. In future competitions, we plan to allow all teams and the option to customize rosters.
[1] https://github.com/njustesen/ffai
[2] Justesen, Niels, Sebastian Risi, and Julian Togelius. Blood Bowl: The Next Board Game Challenge for AI. FDG 2018, 1st Workshop on Tabletop Games, (2018).
Fighting Game AI Competition
Description
What are promising techniques for developing general fighting-game AIs whose performance is robust against a variety of settings and opponents? As the platform, the Java-based FightingICE is used, which also supports Python programming and the development of visual-based deep learning AIs. Two leagues (Standard and Speedrunning) are associated with each of the three character types: Zen, Garnet, and Lud, where the character data of the last one is not revealed. The Standard League considers the winner of a round to be the AI whose hit points (HP) are above zero at the time its opponent's HP reaches zero. In the Speedrunning League, the league winner for a given character type is the AI with the shortest average time to beat our sample MCTS AI. The competition winner is decided by considering both leagues' results based on the 2015 Formula-1 scoring system.
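The 2015 Formula-1 system awards 25, 18, 15, 12, 10, 8, 6, 4, 2 and 1 points to the top ten finishers. The sketch below only illustrates how such points could be accumulated across leagues; the exact way the organisers combine the Standard and Speedrunning results is not specified here, so the aggregation (league names, total_points helper) is an assumption for illustration.

```python
# Points for positions 1-10 under the 2015 Formula-1 scoring system.
F1_POINTS = [25, 18, 15, 12, 10, 8, 6, 4, 2, 1]

def points_for_rank(rank):
    """Return the F1-style points for a 1-based rank (0 beyond tenth place)."""
    return F1_POINTS[rank - 1] if 1 <= rank <= len(F1_POINTS) else 0

def total_points(rankings):
    """Sum points over several league rankings.

    `rankings` maps a league name to an ordered list of entrant names,
    best first (illustrative aggregation, not the official procedure)."""
    totals = {}
    for ordered_entrants in rankings.values():
        for position, entrant in enumerate(ordered_entrants, start=1):
            totals[entrant] = totals.get(entrant, 0) + points_for_rank(position)
    return totals

# Example: two leagues for one character type.
print(total_points({
    "Standard-Zen": ["BotA", "BotB", "BotC"],
    "Speedrunning-Zen": ["BotB", "BotA", "BotC"],
}))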
First TextWorld Problems: A Reinforcement and Language Learning Challenge
Organisers
Marc-Alexandre Côté
Wendy Tay
Tavian Barnes
Eric Yuan
Adam Trischler
Description
The goal of this competition is to build an AI agent that can efficiently play and win simplified text-based games. We hope to highlight the limitations of existing Reinforcement Learning models when combined with Natural Language Processing. Therefore, any agent that doesn't show learning behaviors will be penalized. Enter your submission for a chance to win $2000 USD and more in prizes! The competition runs until June 1st, 2019.
The agent must navigate and interact within a text environment, i.e. the agent perceives the environment through text and acts in it using text commands. The agent would need skills like:
- language understanding
- dealing with a combinatorial action space
- efficient exploration
- memory
- sequential decision-making
In this competition, all the games share a similar theme (cooking in a modern house), similar text commands, and similar entities (i.e. interactable objects within the games). To better understand the games, check out the Jupyter notebook found in the starting kit.
The simplified games were generated using TextWorld (https://www.microsoft.com/en-us/research/project/textworld/). TextWorld is an open-source framework that both generates and interfaces with text-based games. You can use TextWorld to train your agents.
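As a starting point, here is a minimal interaction loop sketched against the `textworld.start` interface shown in the TextWorld README; the exact API may differ in the version shipped with the starting kit. The game file name is a placeholder and the fixed command list is only a stand-in for a real agent policy.

```python
import textworld  # pip install textworld

# Placeholder path: a game generated with TextWorld (e.g. via `tw-make`);
# replace it with one of the cooking games from the starting kit.
env = textworld.start("my_game.ulx")
game_state = env.reset()

# Stand-in policy: cycle through a few generic commands instead of a real agent.
commands = ["look", "open fridge", "go east", "examine cookbook"]

reward, done = 0, False
for step in range(20):
    if done:
        break
    game_state, reward, done = env.step(commands[step % len(commands)])

env.render()  # print the latest observation
```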
Geometry Friends Game AI Competition
Organisers
Rui Prada, INESC-ID and Instituto Superior Técnico, Universidade de Lisboa
Francisco S. Melo, INESC-ID and Instituto Superior Técnico, Universidade de Lisboa
João Dias, INESC-ID and Instituto Superior Técnico, Universidade de Lisboa
Description
The goal of the competition is to build AI agents for a 2-player collaborative physics-based puzzle platformer game (Geometry Friends). Each agent controls a different character (circle or rectangle) with distinct characteristics. Their goal is to collaborate in order to collect a set of diamonds in a set of levels as fast as possible. The game presents problems of combined task and motion planning and promotes collaboration at different levels. Participants can tackle cooperative levels with the full complexity of the problem, or single-player levels to deal with task and motion planning without the complexity of collaboration.
General Video Game AI (GVGAI) Competition
For all questions regarding the proposal, please contact Jialin Liu via liujl@sustc.edu.cn.
Scope and Topics
The General Video Game AI (GVG-AI) Competition explores the problem of creating agents for general video game playing. How would you create a single agent that is able to play any game it is given? Could you program an agent that is able to play a wide variety of games, without knowing which games are to be played and without a forward model? How would you create a generator to design game rules or levels?
Five GVGAI competitions are proposed:
GVGAI Single-Player Planning Track
Main organiser: Diego Perez-Liebana (diego.perez@qmul.ac.uk)
Submission via http://www.gvgai.net
GVGAI Two-Player Planning Track
Main organiser: Raluca Gaina (r.d.gaina@qmul.ac.uk)
Submission via http://www.gvgai.net
GVGAI Single-Player Learning Track
Main organisers: Philip Bontrager (pjb411@nyu.edu), Ruben Rodriguez Torrado (rubentorrado@gmail.com), Hao Tong (htong6@outlook.com)
Submission via http://www.aingames.cn
GVGAI Level Generation Track
Main organiser: Ahmed Khalifa (aak538@nyu.edu)
Submission via http://www.gvgai.net
GVGAI Rule Generation Track
Main organiser: Ahmed Khalifa (aak538@nyu.edu)
Submission via http://www.gvgai.net
GVGAI Steering Committee
Jialin Liu, Diego Pérez Liébana, Julian Togelius, Simon M. Lucas
Submission instructions
The participants are invited to submit their agent via http://www.gvgai.net or http://www.aingames.cn, depending on the track. Submission instructions of each track will be provided separately on the corresponding webpage.
Submission deadline
Agent submission: 15th July 2019, 23:59 (GMT)
Demo Video
Demo video for Learning Track:
Hanabi Competition
Organisers
Joseph Walton-Rivers, University of Essex (jwalto@essex.ac.uk)
Description
Write an agent capable of playing the cooperative, partially observable card game Hanabi. Agents are written in Java and submitted via our online submission system. In Hanabi, agents cannot see their own cards but can see the other agents' cards. On their turn, agents can either play a card from their hand, discard a card from their hand, or spend an information token to tell another player about a feature (rank or suit) of the cards they have. The players must try to play cards for each suit in rank order. If the group makes 3 errors when executing play actions, the game is over. Agents will be paired with either copies of their own agent or a set of unknown agents. The winner is the agent that achieves the highest score over a set of unknown deck orderings.
Agents will play with a set of unknown policies (you don't know how they are making their decisions) to form the team; for each game, the agent will be paired with a set of n-1 other agents. Its scores will then be compared with the scores of the other competition entrants. Deck orderings will remain consistent for one round before a different deck ordering is chosen. Player positions will also be randomized between rounds to avoid a participant's agent always playing first.
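The framework and agents are Java, but the evaluation procedure described above is easy to illustrate. The Python sketch below is purely illustrative (it is not the organisers' code): it averages an agent's score over a set of fixed deck orderings while shuffling seating each round; `play_game` is a hypothetical function standing in for one full Hanabi game.

```python
import random

def evaluate(agent, partners, deck_seeds, play_game, games_per_round=1):
    """Average `agent`'s score over fixed deck orderings with shuffled seating.

    `play_game(players, deck_seed)` is a hypothetical callable that runs one
    Hanabi game and returns the team score; `partners` are the n-1 other agents.
    """
    scores = []
    for seed in deck_seeds:               # one deck ordering per round
        for _ in range(games_per_round):
            players = [agent] + list(partners)
            random.shuffle(players)        # randomize player positions
            scores.append(play_game(players, seed))
    return sum(scores) / len(scores)
```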
Learning Track - agents play multiple games with the same group (to allow for strategy learning)
This track was added at the request of last year's competition entrants. It plays similarly to the mixed track, but rather than the paired agents changing every round, the agent set is fixed. This gives agents the opportunity to learn from the observed moves.
Hearthstone AI competition
Organisers
Alexander Dockhorn, University of Magdeburg, Germany (alexander.dockhorn@ovgu.de)
Sanaz Mostaghim, University of Magdeburg, Germany (sanaz.mostaghim@ovgu.de)
Description
The collectible online card game Hearthstone provides a rich testbed and poses unique demands for developing artificial intelligence agents. The game is a turn-based card game between two opponents, using constructed decks of thirty cards along with a selected hero with a unique power. Players use their limited mana crystals to cast spells or summon minions to attack their opponent, with the goal of reducing the opponent's health to zero. The competition aims to promote the stepwise development of fully autonomous AI agents in the context of Hearthstone. During the game, both players need to play the best combination of hand cards while facing a large amount of uncertainty. The upcoming card draw, the opponent's hand cards, as well as some hidden effects played by the opponent can influence the player's next move and its succeeding rounds. Predicting the opponent's deck from previously seen cards, and estimating the chances of drawing cards of the player's own deck, can help in finding the best cards to play. Card playing order, card effects, as well as attack targets have a large influence on the player's chances of winning the game. Besides using premade decks, players have the opportunity to create a deck of 30 cards from the over 1000 available in the current game, most of them providing unique effects and card synergies that can help in developing combos. Generating a strong deck is a key step towards consistently winning against a diverse set of opponents.
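For instance, "estimating the chances of drawing cards of the player's own deck" often comes down to simple hypergeometric reasoning. The sketch below is illustrative only and not part of the competition framework; `chance_to_draw` is a hypothetical helper computing the probability of seeing at least one remaining copy of a card within the next few draws.

```python
from math import comb  # Python 3.8+

def chance_to_draw(copies_left, deck_size, draws):
    """Probability of drawing at least one of `copies_left` copies
    within the next `draws` cards of a `deck_size`-card remaining deck."""
    if copies_left <= 0 or draws <= 0 or deck_size <= 0:
        return 0.0
    draws = min(draws, deck_size)
    # Complement: probability that none of the drawn cards is a wanted copy.
    none = comb(deck_size - copies_left, draws) / comb(deck_size, draws)
    return 1.0 - none

# Example: 2 copies left in a 20-card remaining deck, looking 3 draws ahead.
print(round(chance_to_draw(2, 20, 3), 3))  # ~0.284
```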
Competition website
You can find more information on this year’s competition and the evaluation of last year’s submissions on our webpage. It also features a list of previously submitted bots and their source code as well as information about how to get started.
The competition will encourage submissions to the following two separate tracks, which will be available in the second year of this competition:
Premade Deck Playing
In the “Premade Deck Playing” track, participants will receive a list of decks and play out all combinations of them against each other. Determining and exploiting the characteristics of the player's and the opponent's decks will help in winning the game. This track will feature an updated list of decks to better represent the current meta-game.
User Created Deck Playing
The “User Created Deck Playing” track invites participants to create their own decks or to choose from the vast number of decks available online. Finding a deck that can consistently beat multiple other decks will play a key role in this competition track. Additionally, it gives participants the chance to optimize their agent's strategy to the characteristics of their chosen deck.
Author's Schedule
For the deadline for submitting papers, please check the IEEE CoG 2019 website.
MicroRTS AI Competition
Organisers
Santiago Ontañon (so367@drexel.edu)
Description
Several AI competitions around RTS games have been organized in the past (such as the ORTS competitions and the StarCraft AI competitions), which has spurred a new wave of research into RTS AI. However, as has been reported numerous times, developing bots for RTS games such as StarCraft involves a very large amount of engineering, which often relegates the research aspects of the competition to a second plane. The microRTS competition has been created to motivate research on the basic questions underlying the development of AI for RTS games, while minimizing the amount of engineering required to participate. Also, a key difference with respect to the StarCraft competition is that the AIs have access to a 'forward model' (i.e., a simulator) with which they can simulate the effect of actions or plans, thus allowing planning and game-tree search techniques to be developed easily. This will be the third edition of the competition, after the 2017 and 2018 editions hosted at the IEEE CIG conferences.
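microRTS itself is written in Java, so the Python sketch below is only a language-neutral illustration of why a forward model matters: an agent can clone the state, simulate each candidate action, and keep the one whose simulated outcome scores best. All class and method names here (ForwardModel, state.apply, state.tick, evaluate) are hypothetical, not the microRTS API.

```python
import copy

class ForwardModel:
    """Hypothetical stand-in for a simulator that can clone and advance a state."""
    def simulate(self, state, action, steps=10):
        next_state = copy.deepcopy(state)   # never mutate the real game state
        next_state.apply(action)            # hypothetical: issue the action
        for _ in range(steps):
            next_state.tick()               # hypothetical: advance the simulation
        return next_state

def one_step_lookahead(state, legal_actions, model, evaluate):
    """Greedy search: simulate every legal action and keep the best-scoring one."""
    scored = [(evaluate(model.simulate(state, a)), a) for a in legal_actions]
    return max(scored, key=lambda pair: pair[0])[1] if scored else None
```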
Short Video Competition
Organisers
Simon Lucas, Queen Mary University of London, UK (simon.lucas@qmul.ac.uk)
Alexander Dockhorn, University of Magdeburg, Germany (alexander.dockhorn@ovgu.de)
Competition Outline
The aim of the competition is to promote the production of short videos highlighting any research that is relevant to IEEE CoG. The videos may be related to CoG papers, but this is not necessary. A similar competition was run for IEEE CIG 2018; it attracted 8 entries and led to an interesting presentation session that was very well received. Links to the top 3 entries are given in the appendix below.
The videos should be informative and well presented. Participants must submit a video that is no longer than 5 minutes; there is no lower limit. The video should include a title page at the beginning. Each video must mention that it is an entry for the IEEE CoG 2019 Short Video Competition.
To enter the competition at least one author of the video must be registered for the conference.
The information required is the video title, authors, a brief description (approx. 150 words), and a link to the video, which can be hosted on any easily viewable video streaming service such as YouTube, Youku, or ieee.tv.
The deadline for entries is Saturday 10th August (anywhere in the world).
Links to all accepted videos will be published on the conference website after the conference. A panel will select the set of finalist videos to be judged by the audience during the short video competition plenary session at the conference.
The winner will be chosen by an audience vote at the end of this session. The organisers reserve the right to exclude any video they deem to be offensive or inappropriate.
Sponsorship: $1,000 USD of prize money will be provided by the IEEE CIS Education Committee, to be divided in the ratio 500:300:200 for 1st, 2nd, and 3rd place.
Appendix: Leading entries in IEEE CIG 2018 Short Video Competition
(ranking of 8 finalists decided by audience vote; note in this case we did no prior shortlisting so the 8 finalists were the entire set of entries)
Winner: Vanessa Volz
Evolving Mario Levels in the Latent Space of a Deep Convolutional Generative Adversarial Network
2nd Place: Niels Justesen
Automated Curriculum Learning by Rewarding Temporally Rare Events
3rd Place: Raluca Gaina
General Win Prediction: CIG 2018 Short Video Competition
StarCraft AI Competition
Organisers
Kyung-Joong Kim, GIST
Seonghun Yoon, Sejong Univ.
Description
The IEEE CoG StarCraft competitions have seen quite some progress in the development and evolution of new StarCraft bots. Participants have used a variety of approaches to build their bots, which has fertilized game AI research with methods such as HMMs, Bayesian models, CBR, potential fields, and reinforcement learning. However, it is still quite challenging to develop AI for the game, because a bot has to handle a large number of units and buildings while considering resource management and high-level tactics. The purpose of this competition is to advance RTS game AI and to address challenging issues such as uncertainty, real-time processing, and unit management. Participants submit bots that use BWAPI to play 1v1 StarCraft matches.
Strategy Card Game AI Competition
Description
Legends of Code and Magic (LOCM) is a small implementation of a strategy card game, designed for AI research. Its advantage over the engines of real card games is that it is much simpler for agents to handle, and thus allows testing more sophisticated algorithms and quickly implementing theoretical ideas. All card effects are deterministic, so nondeterminism is introduced only by the ordering of cards and the unknown opponent's deck. The game board consists of two lanes (similarly to TES: Legends), so it favors deeper strategic thinking. Also, LOCM is based on the fair arena mode, i.e., before every game, both players create their decks secretly from symmetrical yet limited choices. Because of that, deckbuilding is dynamic and cannot simply be reduced to using human-created top-meta decks. This competition aims to play the same role for the Hearthstone AI Competition as microRTS plays for the various StarCraft AI contests: to encourage advanced research, free of the drawbacks of working with a full-fledged game. In this domain, that means, among other things, embedding deckbuilding into the game itself (limiting the usage of premade decks) and allowing efficient search beyond one turn of depth. The contest is based on LOCM 1.2, the same version as in the CEC 2019 Competition. The one-lane 1.0 version of the game was used for a CodinGame contest in August 2018.