Coevolution in hierarchical AI for strategy games

A Systematic Review of Coevolution in Real-Time Strategy Games

IEEE Access

Real-time strategy (RTS) games are a subgenre of strategy video games. Given their importance in practical decision-making and digital entertainment over the last two decades, many researchers have explored different algorithmic approaches for controlling agents within RTS games and for learning effective strategies and tactics. Among these techniques, coevolutionary algorithms have proved to be some of the most popular and successful for developing such games, in which players can compete or cooperate to achieve the game's mission. However, as many alternative designs exist, with their analyses and applications reported across diverse publications, a review covering the evolution of such algorithms would be valuable for researchers and practitioners in this domain. This paper aims to provide a systematic review that highlights why and how coevolution is used in RTS games and analyses recent work. The review follows procedural steps to identify, filter, analyse, and discuss the existing literature. This structured review articulates the purposes of using coevolution in RTS games and highlights several open questions for future research in this domain.

A Combined Tactical and Strategic Hierarchical Learning Framework in Multi-agent Games

This paper presents a novel approach to modeling a generic cognitive framework in game agents that provides tactical behavior generation as well as strategic decision making in modern multi-agent computer games. The core of our framework consists of two characterization concepts we term the tactical and strategic personalities, embedded in each game agent. Tactical actions and strategic plans are generated according to the weights defined in their respective personalities. The personalities are continually improved as the game proceeds through a learning process based on reinforcement learning. In addition, the strategies selected at each level of the agents' command hierarchy affect the personalities, and hence the decisions, of other agents. The learning system improves the combat performance of the game agents and is decoupled from the action selection mechanism to ensure speed. The variability in tactical behavior and the decentralized strategic decision making improve realism and increase entertainment value. Our framework is implemented in a real game scenario as an experiment and is shown to outperform various scripted opponent team tactics and strategies, as well as one with a randomly varying strategy.
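
As a rough illustration of the personality idea described above (not the paper's actual implementation), the Python sketch below treats a tactical personality as a weight vector over candidate actions, samples actions in proportion to those weights, and nudges the weights with a simple reinforcement-style update. The class and method names (Personality, select_action, reinforce) and the example actions are hypothetical.

```python
# Minimal illustrative sketch: a "tactical personality" as a weight vector
# over candidate actions, with weighted stochastic action selection and a
# simple reinforcement-style update. All names here are hypothetical.
import random

class Personality:
    def __init__(self, actions, learning_rate=0.1):
        self.weights = {a: 1.0 for a in actions}  # initial uniform preference
        self.lr = learning_rate

    def select_action(self):
        # Sample an action in proportion to its current weight.
        total = sum(self.weights.values())
        r = random.uniform(0, total)
        acc = 0.0
        for action, w in self.weights.items():
            acc += w
            if r <= acc:
                return action
        return action  # fallback for floating-point edge cases

    def reinforce(self, action, reward):
        # Shift the chosen action's weight toward the observed reward signal,
        # keeping weights positive so selection probabilities stay valid.
        self.weights[action] = max(1e-3, self.weights[action] + self.lr * reward)

# Usage: an agent picks a tactic from its personality, then updates it after combat.
agent = Personality(["flank", "hold_position", "retreat"])
chosen = agent.select_action()
agent.reinforce(chosen, reward=1.0 if chosen == "flank" else -0.5)
```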

Artificial intelligence design for real-time strategy games

2011

For over a decade now, real-time strategy (RTS) games have challenged human and artificial intelligence (AI) alike, as one of the top genres in terms of overall complexity. RTS is a prime example of a problem featuring multiple interacting, imperfect decision makers. Elaborate dynamics, partial observability, and a rapidly diverging action space render rational decision making somewhat elusive. Humans deal with this complexity using several abstraction layers, making decisions at different levels of abstraction. Current agents, on the other hand, remain largely scripted and exhibit static behavior, leaving them extremely vulnerable to flaw exploitation and no match for human players. In this paper, we propose to mimic the abstraction mechanisms used by human players when designing AI for RTS games. A non-learning agent for StarCraft showing promising performance is proposed, and several research directions towards the integration of learning mechanisms are discussed at the end of the paper.
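
A minimal sketch of the layered-abstraction idea, assuming a three-level split into strategy, tactics, and unit micro-management; the classes, the game_state fields, and the decision rules are all hypothetical placeholders rather than the agent described in the paper.

```python
# Illustrative sketch only: each layer decides at its own level of abstraction
# and passes its decision down to the next layer. Names are hypothetical.
class StrategyLayer:
    def decide(self, game_state):
        # Highest abstraction: choose an overall goal (e.g. attack vs. expand).
        return "attack" if game_state["army_supply"] > game_state["enemy_supply"] else "expand"

class TacticsLayer:
    def decide(self, goal, game_state):
        # Mid level: translate the strategic goal into a concrete objective.
        if goal == "attack":
            return {"type": "assault", "target": game_state["enemy_base"]}
        return {"type": "build_expansion", "target": game_state["free_base"]}

class MicroLayer:
    def decide(self, objective, units):
        # Lowest level: issue per-unit orders toward the tactical objective.
        return [(u, "move", objective["target"]) for u in units]

# One decision tick through the hierarchy.
state = {"army_supply": 80, "enemy_supply": 60,
         "enemy_base": (120, 40), "free_base": (30, 90)}
goal = StrategyLayer().decide(state)
objective = TacticsLayer().decide(goal, state)
orders = MicroLayer().decide(objective, ["marine_1", "marine_2"])
```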

Hall-Of-Fame Competitive Coevolutionary Algorithms for Optimizing Opponent Strategies in a New Game

2012

This paper describes the application of competitive coevolution as a mechanism of self-learning in a two-player real-time strategy (RTS) game. The paper presents this (war) RTS game, developed by the authors as an open-source tool, and describes its built-in coevolutionary engine developed to find winning strategies. This engine applies a competitive coevolutionary algorithm that uses the concept of a Hall-of-Fame to establish a long-term memory employed in the evaluation process. An empirical analysis of the performance of two different versions of this coevolutionary algorithm is conducted in the context of the RTS game. Moreover, the paper also shows, by example, the potential of this coevolutionary engine as a prediction tool by inferring the initial conditions (i.e., army configuration) under which a battle was fought when the final result is known.
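
A minimal sketch of competitive coevolution with a Hall-of-Fame archive, under the assumption that play_match(a, b) returns 1 if strategy a beats strategy b and that mutate produces a variant of a strategy. The toy encoding, the match function, and the truncation-selection scheme are illustrative stand-ins, not the engine described in the paper.

```python
# Sketch: two populations coevolve against each other; fitness counts wins
# against the rival population plus past champions kept in a Hall-of-Fame.
import random

def evaluate(strategy, rivals, play_match):
    # Fitness = number of wins against current rivals and archived champions.
    return sum(play_match(strategy, r) for r in rivals)

def coevolve(pop_a, pop_b, play_match, mutate, generations=50, hof_size=10):
    hof_a, hof_b = [], []  # Hall-of-Fame: long-term memory of past champions
    for _ in range(generations):
        fit_a = [evaluate(s, pop_b + hof_b, play_match) for s in pop_a]
        fit_b = [evaluate(s, pop_a + hof_a, play_match) for s in pop_b]

        # Archive each population's current champion (bounded memory).
        hof_a.append(max(zip(fit_a, pop_a), key=lambda t: t[0])[1])
        hof_b.append(max(zip(fit_b, pop_b), key=lambda t: t[0])[1])
        hof_a, hof_b = hof_a[-hof_size:], hof_b[-hof_size:]

        # Truncation selection plus mutation to form the next generations.
        def next_gen(pop, fit):
            ranked = [s for _, s in sorted(zip(fit, pop), key=lambda t: -t[0])]
            parents = ranked[: max(1, len(pop) // 2)]
            return [mutate(random.choice(parents)) for _ in range(len(pop))]

        pop_a, pop_b = next_gen(pop_a, fit_a), next_gen(pop_b, fit_b)
    return hof_a, hof_b

# Toy usage: a "strategy" is a single number and the higher number wins.
champs_a, champs_b = coevolve(
    pop_a=[random.random() for _ in range(8)],
    pop_b=[random.random() for _ in range(8)],
    play_match=lambda a, b: 1 if a > b else 0,
    mutate=lambda s: s + random.gauss(0, 0.1),
)
```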

Computational Intelligence in Strategy Games

Presented are issues in designing smart, believable software agents capable of playing strategy games, with particular emphasis on the design of an agent capable of playing Cyberwar XXI, a complex war game. The architecture of a personality-rich, advice-taking game-playing agent that learns to play is described. The suite of computational-intelligence tools used by the advisers includes evolutionary computation and neural networks.

New Generation of Artificial Intelligence for Real-Time Strategy Games

Advanced Research and Trends in New Technologies, Software, Human-Computer Interaction, and Communicability

Artificial intelligence in computer games still lags well behind academic artificial intelligence research. Computer power and memory resources have increased exponentially over the last few years, so improved game artificial intelligence should no longer hinder the performance of a game. Improvements to game artificial intelligence are necessary because appropriate artificial intelligence for more advanced players does not exist today. This chapter discusses artificial intelligence for real-time strategy computer games, which are ideal test beds for research on movement, tactics, and strategy. Open-source real-time strategy game development tools are presented and compared, and an enhanced combat artificial intelligence algorithm is proposed.

AI for general strategy game playing

Computer strategy games (games such as those in the Civilization, StarCraft, Age of Empires and Total War series, and board game adaptations such as Risk and Axis and Allies) have been popular since soon after computer games were invented, and are a popular genre among a wide range of players. Strategy games are closely related to classic board games such as Chess and Go, but although there has been no shortage of work on AI for playing classic board games, there has been remarkably little work on strategy games. This chapter addresses the understudied question of how to create AI that plays strategy games, through building and comparing AI for general strategy game playing.

Building human-level AI for real-time strategy games

2011

Video games are complex simulation environments with many real-world properties that need to be addressed in order to build robust intelligence. In particular, real-time strategy games provide a multi-scale challenge that requires both deliberative and reactive reasoning processes. Experts approach this task by studying a corpus of games, building models for anticipating opponent actions, and practicing within the game environment. We motivate the need for integrating heterogeneous approaches by enumerating a range of competencies involved in gameplay and discuss how they are being implemented in EISBot, a reactive planning agent that we have applied to the task of playing real-time strategy games at the same granularity as humans.
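
As a hedged illustration of combining deliberative and reactive reasoning in one agent loop (not EISBot itself), the sketch below replans a high-level strategy infrequently while per-frame reactive rules can override it; the game interface (my_units, under_fire, order, step) and the plan object are hypothetical.

```python
# Illustrative sketch: slow deliberative replanning plus fast reactive overrides.
# The game object and plan_strategy callable are assumed, hypothetical interfaces.
def agent_loop(game, plan_strategy, frames=1000, replan_every=240):
    plan = plan_strategy(game.state())  # deliberative: expensive, infrequent
    for frame in range(frames):
        if frame % replan_every == 0:
            plan = plan_strategy(game.state())  # revise the high-level plan
        for unit in game.my_units():
            # Reactive: per-frame rules take priority over the standing plan.
            if game.under_fire(unit):
                game.order(unit, "retreat", game.nearest_safe_point(unit))
            else:
                game.order(unit, "execute", plan.task_for(unit))
        game.step()
```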

A Review of Real‐Time Strategy Game AI

AI Magazine, 2014

This literature review covers AI techniques used for real‐time strategy video games, focusing specifically on StarCraft. It finds that the main areas of current academic research are in tactical and strategic decision making, plan recognition, and learning, and it outlines the research contributions in each of these areas. The paper then contrasts the use of game AI in academe and industry, finding the academic research heavily focused on creating game‐winning agents, while the industry aims to maximize player enjoyment. It finds that industry adoption of academic research is low because it is either inapplicable or too time‐consuming and risky to implement in a new game, which highlights an area for potential investigation: bridging the gap between academe and industry. Finally, the areas of spatial reasoning, multiscale AI, and cooperation are found to require future work, and standardized evaluation methods are proposed to produce comparable results between studies.