Learning to play strong poker

Opponent Modeling in Poker

2007

In recent years much progress has been made on computer gameplay in games of complete information such as chess and Go. Computers have surpassed the ability of top chess players and are well on their way to doing so at Go. Games of incomplete information, on the other hand, are far less studied. Despite significant financial incentives, computerized poker players still perform at a level well below that of poker professionals.

HoldemML: A framework to generate No Limit Hold'em Poker agents from human player strategies

6th Iberian Conference on Information Systems and Technologies (CISTI 2011), 2011

Developing computer programs that play Poker at human level is considered a challenge for the AI research community, due to the game's incomplete information and stochastic nature. Because of these characteristics, a competitive agent must manage luck and use opponent modeling to succeed in the short term and therefore be profitable. In this paper we propose creating No Limit Hold'em Poker agents that copy the strategies of the best human players, by analyzing past games between them. To accomplish this goal, we first determine the best players in a set of game logs by identifying those with the highest winning expectation. Next, we define a classification problem to represent a player's strategy, by associating each game state with the action performed. To validate and test the defined player model, the HoldemML framework was created. This framework generates agents by classifying the data present in the game logs, with the goal of copying the best human players' tactics. ...
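The classification formulation described above, mapping a game state to the action a strong player performed, can be sketched with a minimal majority-vote classifier over discretized states. The state features (hand-strength, pot-odds, and position buckets) and the sample log are hypothetical illustrations, not the features or data used by HoldemML:

```python
from collections import Counter, defaultdict

# Hypothetical hand-history rows: (hand_strength, pot_odds, position) -> action.
log = [
    (("strong", "cheap", "late"), "raise"),
    (("strong", "cheap", "late"), "raise"),
    (("weak", "expensive", "early"), "fold"),
    (("weak", "cheap", "late"), "call"),
    (("weak", "expensive", "early"), "fold"),
]

# Majority-vote "classifier": for each discretized game state, copy the
# action the modeled player took most often in that state.
policy = defaultdict(Counter)
for state, action in log:
    policy[state][action] += 1

def predict(state):
    """Return the most frequent logged action for this state, or a safe default."""
    if state in policy:
        return policy[state].most_common(1)[0][0]
    return "fold"  # conservative default for unseen states

print(predict(("strong", "cheap", "late")))  # raise
```

A real system would replace the majority vote with a learned classifier that generalizes to unseen states, but the supervised setup, states as inputs and logged actions as labels, is the same.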

Opponent Classification in Poker

2010

Modeling games has a long history in the Artificial Intelligence community. Most of the games considered solved in AI are perfect information games. Imperfect information games like Poker and Bridge represent a domain with a great deal of uncertainty, which poses additional challenges, such as modeling the behavior of the opponent.

Mini-poker game

2023

This project focuses on developing reinforcement learning algorithms for a simplified version of a poker game. The approach begins with finding the best agent capable of defending, using a model where the agent always receives the opposing bet and decides whether to fold or call based on its hand and its estimate of the adversary's hand. Four different agents were developed using multi-armed bandit and Q-learning algorithms, and their approaches were compared to select the best defender agent. After identifying the best defender, the project moves on to developing an attack strategy that can be combined with the top-performing defender to create a complete agent capable of competing effectively against a human player.
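The defender's fold-or-call decision can be sketched as a tabular value-learning agent over hand-strength states. The game parameters below (10 hand buckets, a bet of 1 into a pot of 2, uniform opponent hands) are invented for illustration, not taken from the project:

```python
import random

random.seed(0)

# Hypothetical mini-poker: each player draws a hand strength in 0..9.
# The defender faces a bet of 1 into a pot of 2 and chooses fold or call.
# Reward: fold -> 0; call -> +2 on a win, -1 otherwise (ties lose here).
ACTIONS = ("fold", "call")
ALPHA, EPSILON, EPISODES = 0.1, 0.1, 20000

Q = {(h, a): 0.0 for h in range(10) for a in ACTIONS}

def reward(hand, opp, action):
    if action == "fold":
        return 0.0
    return 2.0 if hand > opp else -1.0

for _ in range(EPISODES):
    hand, opp = random.randrange(10), random.randrange(10)
    # Epsilon-greedy action selection.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(hand, a)])
    r = reward(hand, opp, action)
    # One-step update; no discounting since each hand is a single decision.
    Q[(hand, action)] += ALPHA * (r - Q[(hand, action)])

# The learned policy should call with strong hands and fold weak ones.
policy = {h: max(ACTIONS, key=lambda a: Q[(h, a)]) for h in range(10)}
```

Because each episode is a single decision, this reduces to a contextual bandit update; a Q-learning agent in a multi-street game would add a discounted bootstrap term for the next state's value.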