Abstract
Poker is one of the most widely played card games in the world, largely because of its unique blend of skill and luck. The psychology of a successful player, who projects confidence through bets and bluffs while acting on imperfect information, makes poker a very “human” game. Our group aims to build a deep reinforcement learning agent capable of outperforming existing poker bots, demonstrating RL’s ability to “solve” non-deterministic problems. We build on the existing framework Neuron Poker [1], which provides an OpenAI Gym-style environment for training and evaluating poker agents and makes it easy to create and integrate new poker “players”, which we do in this project. Using this framework, we create a novel PPO agent and a modified DQN agent that outperform the agents Neuron Poker already offers. We also experiment with other DQN variants, such as Dueling DQN and Double DQN, to enhance performance. We present a deep reinforcement learning alternative for playing poker, one that relies purely on reinforcement learning rather than on probability tables or exhaustive outcome trees.
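To make the Gym-style interaction concrete, the sketch below shows a minimal episode loop with a random baseline player. The environment id `neuron_poker-v0`, the `RandomPlayer` class, and its `act` method are illustrative assumptions rather than the framework’s exact API; the point is only the observe–act–step cycle that our PPO and DQN agents plug into.

```python
# Minimal sketch of a Gym-style episode loop for a poker agent.
# The environment id and the player interface are illustrative placeholders,
# not neuron_poker's exact API.
import random

import gym


class RandomPlayer:
    """Hypothetical baseline player that picks a random legal action."""

    def act(self, observation, legal_actions):
        return random.choice(legal_actions)


def play_episode(env_id="neuron_poker-v0"):
    # Assumes the poker environment has been registered with gym under this id.
    env = gym.make(env_id)
    player = RandomPlayer()

    observation = env.reset()
    done = False
    total_reward = 0.0
    while not done:
        # Simplification: treat every action in the discrete space as legal.
        legal_actions = list(range(env.action_space.n))
        action = player.act(observation, legal_actions)
        observation, reward, done, info = env.step(action)
        total_reward += reward
    return total_reward
```

A learned agent such as our PPO or modified DQN player replaces `RandomPlayer` by mapping the observation to an action with a neural network instead of sampling uniformly.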