Computational Intelligence in Strategy Games
Related papers
Computational intelligence in games
2006
Video games provide an opportunity and challenge for soft computational intelligence methods, much as symbolic games did for "good old-fashioned artificial intelligence." This article reviews the achievements and future prospects of one particular approach, that of evolving neural networks, or neuroevolution. This approach can be used to construct adaptive characters in existing video games, and it can serve as a foundation for a new genre of games based on machine learning. Evolution can be guided by human knowledge, allowing the designer to control the kinds of solutions that emerge and encouraging behaviors that appear visibly intelligent to the human player. Such techniques may allow building video games that are more engaging and entertaining than current ones, as well as games that can serve as training environments for people. Techniques developed in these games may also be widely applicable in other fields, such as robotics, resource optimization, and intelligent assistants.
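As a deliberately tiny illustration of the neuroevolution idea summarized above (not code from the article), the sketch below evolves the weights of a small fixed-topology feed-forward network. The network sizes, the truncation-selection scheme, and the stand-in fitness function are all assumptions chosen for brevity; in a real game, fitness would be the agent's in-game score.

```python
import random

# Minimal neuroevolution sketch: evolve the flat weight vector of a
# one-hidden-layer network. All sizes and parameters are illustrative.
N_INPUTS, N_HIDDEN, N_OUTPUTS = 4, 6, 2
GENOME_LEN = N_INPUTS * N_HIDDEN + N_HIDDEN * N_OUTPUTS

def activate(genome, inputs):
    """Feed-forward pass; the genome is the concatenated weight matrices."""
    w1 = genome[:N_INPUTS * N_HIDDEN]
    w2 = genome[N_INPUTS * N_HIDDEN:]
    hidden = [max(0.0, sum(inputs[i] * w1[i * N_HIDDEN + h] for i in range(N_INPUTS)))
              for h in range(N_HIDDEN)]
    return [sum(hidden[h] * w2[h * N_OUTPUTS + o] for h in range(N_HIDDEN))
            for o in range(N_OUTPUTS)]

def fitness(genome):
    """Stand-in fitness: negative squared error against a fixed target.
    In a real game this would be the agent's score over one or more episodes."""
    out = activate(genome, [0.5, -0.2, 0.8, 0.1])
    target = [1.0, -1.0]
    return -sum((o - t) ** 2 for o, t in zip(out, target))

def evolve(pop_size=30, generations=40, mut_sigma=0.1, rng=random.Random(0)):
    """Truncation selection plus Gaussian mutation over weight vectors."""
    pop = [[rng.uniform(-1, 1) for _ in range(GENOME_LEN)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 5]  # keep the top fifth
        pop = elite + [
            [w + rng.gauss(0, mut_sigma) for w in rng.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=fitness)

best = evolve()
```

Replacing `fitness` with an episode of gameplay turns this loop into the kind of neuroevolution the article describes; guiding evolution with human knowledge would correspond to seeding the population or constraining mutations.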
AI for general strategy game playing
Computer strategy games — games such as those in the Civilization, StarCraft, Age of Empires and Total War series, and board game adaptations such as Risk and Axis & Allies — have been popular since soon after computer games were invented, and are a popular genre among a wide range of players. Strategy games are closely related to classic board games such as Chess and Go, but while there has been no shortage of work on AI for playing classic board games, there has been remarkably little work on strategy games. This chapter addresses the understudied question of how to create AI that plays strategy games, through building and comparing AI for general strategy game playing.
A review of computational intelligence in RTS games
2013 IEEE Symposium on Foundations of Computational Intelligence (FOCI), 2013
Real-time strategy (RTS) games offer a wide variety of fundamental AI research challenges, most of which have applications outside the game domain. This paper provides a review of computational intelligence in real-time strategy games. It starts with the challenges posed by real-time strategy games, then reviews the different tasks involved in overcoming these challenges. It then describes the techniques used to address these challenges and relates each technique to the tasks it serves. Finally, it presents a set of frameworks used as test-beds for the techniques employed. This paper is intended to be a starting point for future researchers on this topic.
Steps toward Building of a Good AI for Complex Wargame-Type Simulation Games
2002
One of the key areas for the application of Artificial Intelligence to the game domain is the design of challenging artificial opponents for human players. Complex simulations such as historical wargames can be seen as natural extensions of classical games, where AI techniques such as planning or learning have already proved powerful. Yet the parallel nature of more recent games introduces new levels of complexity, which can be tackled at various levels. This paper focuses on the question of finding good representations for AI design in a popular historical wargame, which implies finding relevant granularities for the various tasks involved. This work is based on the partially automated use of the rules of the game, as well as some common sense and historical military knowledge, to design relevant heuristics. The resulting improvement in representation will help the application of techniques such as Reinforcement Learning.
Artificial intelligence design for real-time strategy games
2011
For over a decade now, real-time strategy (RTS) games have been challenging intelligence, human and artificial (AI) alike, as one of the top genres in terms of overall complexity. RTS is a prime example of a problem featuring multiple interacting imperfect decision makers. Elaborate dynamics, partial observability, and a rapidly diverging action space render rational decision making somewhat elusive. Humans deal with the complexity using several abstraction layers, making decisions at different levels of abstraction. Current agents, on the other hand, remain largely scripted and exhibit static behavior, leaving them extremely vulnerable to flaw abuse and no match for human players. In this paper, we propose to mimic the abstraction mechanisms used by human players when designing AI for RTS games. A non-learning agent for StarCraft showing promising performance is proposed, and several research directions towards the integration of learning mechanisms are discussed at the end of the paper.
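The layered-abstraction idea can be sketched as a chain of decision functions, each consuming a coarser or finer view of the game state. Everything below (the `GameState` fields, the thresholds, and the command strings) is a hypothetical toy, not the StarCraft agent from the paper.

```python
from dataclasses import dataclass

# Hypothetical, heavily simplified game state; real RTS state is far richer.
@dataclass
class GameState:
    minerals: int
    army_size: int
    enemy_army_size: int

def strategic_layer(state):
    """Highest abstraction: choose a long-term posture."""
    return "attack" if state.army_size > state.enemy_army_size else "build"

def tactical_layer(state, posture):
    """Middle abstraction: turn the posture into a concrete objective."""
    if posture == "attack":
        return "push_enemy_base"
    return "train_units" if state.minerals >= 50 else "gather_minerals"

def reactive_layer(objective):
    """Lowest abstraction: map the objective to per-unit commands."""
    return {
        "push_enemy_base": "move units toward enemy base",
        "train_units": "queue unit at barracks",
        "gather_minerals": "send workers to minerals",
    }[objective]

def decide(state):
    # Decisions flow top-down through the abstraction layers.
    posture = strategic_layer(state)
    objective = tactical_layer(state, posture)
    return reactive_layer(objective)

print(decide(GameState(minerals=80, army_size=5, enemy_army_size=12)))
```

A scripted agent hard-codes all three layers, which is what makes it static; the learning directions the paper discusses would amount to replacing one or more layers with a learned policy while keeping the abstraction boundaries.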
Adaptive Intelligence for Turn-based Strategy Games
Proceedings of the Belgian-Dutch Artificial …, 2008
Computer games are an increasingly popular form of entertainment. Typically, the quality of AI opponents in computer games leaves a lot to be desired, which poses many attractive challenges for AI researchers. In this respect, Turn-based Strategy (TBS) games are of particular interest. These games are focused on high-level decision making rather than low-level behavioural actions. Moreover, they allow the players sufficient time to consider their moves. To design a TBS AI efficiently, in this paper we propose a game AI architecture named ADAPTA (Allocation and Decomposition Architecture for Performing Tactical AI). It is based on task decomposition using asset allocation, and promotes the use of machine learning techniques. In our research we concentrated on one of the subtasks of the ADAPTA architecture, namely the Extermination module, which is responsible for combat behaviour. Our experiments show that ADAPTA can successfully learn to outperform static opponents. It is also capable of generating AIs which defeat a variety of static tactics simultaneously.
Massive Multiplayer Online Strategy games present several unique challenges to players and designers. There is the need to constantly adapt to changes in the game itself, and the need to achieve a certain level of simulation and realism. The latter typically implies battles involving several distinct armies, combat phases and different terrains; resource management, which involves buying and selling goods and combining many different kinds of resources to fund the player's nation; and cutthroat diplomacy, which dictates the pace of the game. However, these constant changes and simulation mechanisms make a game harder to play, increasing the amount of effort required to play it properly. As some of these games take months to be played, players who become inactive have a negative impact on the game. This work aims to demonstrate how to create versatile agents for playing Massive Multiplayer Online Turn Based Strategy Games, while keeping close attention to their playing performance. In a test measuring this performance, the results showed similar survival performance between humans and AIs.
Towards more intelligent adaptive video game agents
Proceedings of the 9th conference on Computing Frontiers - CF '12, 2012
This paper provides a computational intelligence perspective on the design of intelligent video game agents. The paper explains why this is an interesting area to research, and outlines the most promising approaches to date, including evolution, temporal difference learning and Monte Carlo Tree Search. Strengths and weaknesses of each approach are identified, and some research directions are outlined that may soon lead to significantly improved video game agents with lower development costs.
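Of the approaches listed above, Monte Carlo Tree Search is perhaps the easiest to show in miniature. The sketch below is a self-contained UCT implementation for a toy Nim variant (take 1 to 3 stones; whoever takes the last stone wins); the choice of game, node layout, and iteration budget are illustrative assumptions, not taken from the paper.

```python
import math
import random

MOVES = (1, 2, 3)  # a move removes 1-3 stones from the pile

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones = stones          # stones left, with this node's player to move
        self.parent, self.move = parent, move
        self.children = []
        self.visits = 0
        self.wins = 0.0               # wins for the player who moved INTO this node
        self.untried = [m for m in MOVES if m <= stones]

def uct_select(node, c=1.4):
    """Pick the child maximizing the UCB1 upper confidence bound."""
    return max(node.children,
               key=lambda ch: ch.wins / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def rollout(stones, rng):
    """Random playout; return 1 if the player to move at `stones` wins."""
    turn = 0
    while stones > 0:
        stones -= rng.choice([m for m in MOVES if m <= stones])
        turn ^= 1
    return 1 if turn == 1 else 0      # the player who took the last stone won

def mcts(stones, iterations=3000, rng=random.Random(1)):
    root = Node(stones)
    for _ in range(iterations):
        node = root
        # Selection: descend through fully expanded nodes.
        while not node.untried and node.children:
            node = uct_select(node)
        # Expansion: add one untried child.
        if node.untried:
            m = rng.choice(node.untried)
            node.untried.remove(m)
            child = Node(node.stones - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # Simulation, from the viewpoint of the player to move at `node`.
        result = rollout(node.stones, rng)
        # Backpropagation, flipping perspective at each level.
        while node is not None:
            node.visits += 1
            node.wins += 1 - result
            node, result = node.parent, 1 - result
    return max(root.children, key=lambda ch: ch.visits).move

best_move = mcts(5)
```

From 5 stones the optimal move is to take 1, leaving a multiple of 4 for the opponent, and the search recovers this without any game-specific heuristic, which is exactly the appeal of MCTS noted above.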
A multi-agent architecture for game playing
2007
General Game Playing, a relatively new field in game research, presents new frontiers in building intelligent game players. The traditional premise for building a good artificially intelligent player is that the game is known in advance, so the player can be pre-programmed to play it accordingly. General game players challenge game programmers by not identifying the game until the beginning of game play. In this paper we explore a new approach to intelligent general game playing employing a self-organizing, multi-agent evolutionary learning strategy. In order to decide on an intelligent move, specialized agents interact with each other and evolve competitive solutions, sharing the learnt experience and using it to train themselves in a social environment. In an experimental setup using a simple board game, the evolutionary agents, which train themselves from their own experiences without prior knowledge of the game, prove to be as effective as other strong dedicated heuristics. This approach offers a potential for new intelligent game-playing program designs in the absence of prior knowledge of the game at hand.