
Blizzard and DeepMind release StarCraft II AI toolset

DeepMind and Blizzard have released a toolset for AI learning using StarCraft II, including a machine learning API, a dataset of game replays for imitation learning, and a set of mini games.
Written by Corinne Reichert, Contributor
(Image: DeepMind)

Google's artificial intelligence (AI) lab DeepMind and game development studio Blizzard have announced the release of a set of tools aimed at accelerating AI research through real-time strategy game StarCraft II.

The toolset, labelled SC2LE, includes a machine-learning API from Blizzard; an open-source release of DeepMind's PySC2 toolset; a dataset of 65,000 anonymised game replays, which will grow to more than 500,000 in the coming weeks and will aid imitation learning for sequence prediction and long-term memory; a set of mini games for testing AI performance on specific StarCraft II tasks, such as collecting minerals, collecting gas, and selecting units; and a joint paper outlining the environment and initial baseline results on AI performance.
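To give a sense of how the pieces fit together, here is a minimal sketch using the open-source PySC2 package and its bundled random agent, stepping through a single episode of one of the mini games. It assumes a PySC2 2.x release; argument names differ slightly in earlier versions.

```python
# Minimal PySC2 sketch: run the bundled random agent on a mini game.
# Assumes PySC2 2.x; argument names vary across releases.
from absl import flags
from pysc2.agents import random_agent
from pysc2.env import run_loop, sc2_env
from pysc2.lib import features

flags.FLAGS(["sc2"])  # PySC2 uses absl flags internally; parse them first.

agent = random_agent.RandomAgent()
with sc2_env.SC2Env(
        map_name="CollectMineralShards",  # one of the bundled mini games
        players=[sc2_env.Agent(sc2_env.Race.terran)],
        agent_interface_format=features.AgentInterfaceFormat(
            feature_dimensions=features.Dimensions(screen=84, minimap=64)),
        step_mul=8,  # game steps per agent action
        visualize=False) as env:
    run_loop.run_loop([agent], env, max_episodes=1)
```

PySC2 also ships a command-line entry point (python -m pysc2.bin.agent --map CollectMineralShards) that runs the same random agent without writing any code.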

According to a blog post by DeepMind research scientist Oriol Vinyals, program manager Stephen Gaffney, and software engineer Timo Ewalds, testing AI in games that were not designed for such research and in which human players excel "is crucial to benchmark agent performance".

"Part of StarCraft's longevity is down to the rich, multi-layered gameplay, which also makes it an ideal environment for AI research," the blog post said.

"For example, while the objective of the game is to beat the opponent, the player must also carry out and balance a number of sub-goals, such as gathering resources or building structures.

"In addition, a game can take from a few minutes to one hour to complete, meaning actions taken early in the game may not pay-off for a long time. Finally, the map is only partially observed, meaning agents must use a combination of memory and planning to succeed."

There are around 100 million possible actions in a game of StarCraft, they added, whereas Atari games, which DeepMind has also used for AI research, offer only around 300 basic actions. According to DeepMind, the popularity of the game also means there is a significant amount of replay data to learn from, as well as a large pool of opponents for the AI to play against.

DeepMind said it has isolated elements including unit type, health, and map visibility, breaking the game down into "feature layers", with the mini games providing manageable chunks in which agents can learn basic actions.
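In PySC2 those layers are exposed directly on each observation. The following hypothetical agent, written against PySC2 2.x, pulls out the unit-type, health, and visibility layers mentioned above and otherwise does nothing:

```python
# Hypothetical agent illustrating feature-layer access (assumes PySC2 2.x).
from pysc2.agents import base_agent
from pysc2.lib import actions, features

class LayerInspector(base_agent.BaseAgent):
    def step(self, obs):
        super(LayerInspector, self).step(obs)
        screen = obs.observation.feature_screen  # stack of per-pixel layers
        unit_type = screen[features.SCREEN_FEATURES.unit_type.index]
        health = screen[features.SCREEN_FEATURES.unit_hit_points.index]
        visibility = screen[features.SCREEN_FEATURES.visibility_map.index]
        # A real agent would feed these layers into a model; this one no-ops.
        return actions.FUNCTIONS.no_op()
```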

"Our initial investigations show that our agents perform well on these mini games. But when it comes to the full game, even strong baseline agents, such as A3C, cannot win a single game against even the easiest built-in AI," the blog post said, adding that one agent had failed to complete trivial tasks such as keeping its workers mining.

"Our hope is that the release of these new tools will build on the work that the AI community has already done in StarCraft, encouraging more DeepRL research and making it easier for researchers to focus on the frontiers of our field."

DeepMind had announced in November 2016 that it would be using StarCraft II as a testing platform for AI and machine-learning research, opening the environment to researchers worldwide.

"We've worked closely with the StarCraft II team to develop an API that supports something similar to previous bots written with a 'scripted' interface, allowing programmatic control of individual units and access to the full game state (with some new options as well)," DeepMind said at the time.

"Ultimately, agents will play directly from pixels, so to get us there, we've developed a new image-based interface that outputs a simplified low-resolution RGB image data for map and minimap, and the option to break out features into separate 'layers', like terrain heightfield, unit type, unit health, etc."

An AI agent therefore has to draw on memory, mapping, and long-term planning, and must adapt its plans as new information is continually gathered, challenges that translate into hierarchical planning and reinforcement learning.

DeepMind has also used complex games such as Go to test AI, with its AI AlphaGo defeating world champion Ke Jie in May.

Go, an ancient board game that originated in China, has an estimated 10^761 possible games, compared with around 10^120 for chess.

DeepMind then retired AlphaGo to focus instead on using AI to create advanced algorithms that could help scientists develop cures for diseases, reduce energy consumption, and invent new materials.

DeepMind is working with Moorfields Eye Hospital and University College London Hospitals (UCLH) Trust in the United Kingdom on reading scans via algorithms. It also partnered with the National Health Service to experiment with using machine learning to plan the use of radiotherapy for individual head and neck cancer patients, which could improve waiting times for procedures and free up more time for doctors nationwide.
