
Libratus: A Texas Hold-Em Poker AI


Four of the best professional poker players in the world – Dong Kim, Jason Les, Jimmy Chou, and Daniel McAulay – were recently beaten by Libratus, a poker-playing AI developed at Carnegie Mellon University and run on the Pittsburgh Supercomputing Center's Bridges supercomputer. Over 20 days of continuous play (10 hours a day), the four professionals together played a whopping 120,000 hands of heads-up No-Limit Texas Hold'em against Libratus, and lost.

A player may face 10^160 different situations in No-Limit Texas Hold'em: far more than there are atoms in the observable universe. Extensive machine learning was needed to compute, and to prioritize the computation of, the most rewarding actions in these situations. Libratus works by running extensive simulations, taking into account the way the professionals play, and figuring out the best counter-strategy. It is not without flaws, but any "holes" the players found in Libratus' strategy could not be exploited for long: the algorithm would quickly learn and adapt to prevent further exploitation. The professionals argue the experience was completely different from playing a human opponent, as Libratus would make both tiny and huge bets and would continuously change its strategy and plays.
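
Libratus' counter-strategy computation is built on counterfactual regret minimization (CFR), a self-play technique that shifts a strategy toward the actions it "regrets" not having played. The sketch below shows only the core regret-matching idea on a tiny game (rock-paper-scissors) against a fixed, exploitable opponent; the game, payoffs, and function names are illustrative assumptions and not Libratus' actual implementation.

```python
import numpy as np

ACTIONS = ["rock", "paper", "scissors"]
N = len(ACTIONS)

# Row player's payoff: PAYOFF[a, b] is the reward for playing a against b.
PAYOFF = np.array([
    [ 0, -1,  1],   # rock   vs rock, paper, scissors
    [ 1,  0, -1],   # paper
    [-1,  1,  0],   # scissors
])

def strategy_from_regrets(regrets):
    """Regret matching: play each action in proportion to its positive regret."""
    positive = np.maximum(regrets, 0)
    total = positive.sum()
    return positive / total if total > 0 else np.full(N, 1.0 / N)

def best_response(opponent_strategy, iterations=20_000, seed=0):
    rng = np.random.default_rng(seed)
    regrets = np.zeros(N)
    strategy_sum = np.zeros(N)
    for _ in range(iterations):
        strategy = strategy_from_regrets(regrets)
        strategy_sum += strategy
        my_action = rng.choice(N, p=strategy)
        opp_action = rng.choice(N, p=opponent_strategy)
        # Regret: how much better each alternative would have scored this round.
        regrets += PAYOFF[:, opp_action] - PAYOFF[my_action, opp_action]
    return strategy_sum / strategy_sum.sum()  # average strategy converges

# Against an opponent who over-plays rock, the learned strategy shifts toward paper.
print(dict(zip(ACTIONS, best_response(np.array([0.5, 0.25, 0.25])).round(2))))
```

The same principle, scaled up massively and combined with abstraction and real-time subgame solving, is what allows a poker AI to keep closing the "holes" its opponents discover.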

The video below provides more detailed information and also shows the million-dollar margin by which Libratus won at the end of the twenty-day poker (training) marathon:

Video: Human-Computer Interactions in Reinforcement Learning


Reinforcement learning is an area of machine learning inspired by behavioral psychology, concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward (Wikipedia, 2017). Normally, reinforcement learning occurs autonomously: the algorithm seeks to maximize (or minimize) a score that is computed from predefined constraints. By repeatedly experimenting and assessing strategies, the algorithm learns which actions are most effective, i.e., which actions optimize that score.
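
To make that score-maximizing loop concrete, here is a minimal, self-contained sketch of tabular Q-learning on a toy "corridor" environment. The environment, the +1 reward for reaching the goal, and all hyperparameters are illustrative assumptions, not part of the systems discussed in this post.

```python
import random

N_STATES = 6          # corridor cells 0..5; the agent starts at 0, the goal is cell 5
ACTIONS = [-1, +1]    # step left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q[state][action_index]: estimated cumulative reward of taking that action in that state.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Toy environment: +1 reward (and episode end) when the goal cell is reached."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
        a = random.randrange(2) if random.random() < EPSILON else max(range(2), key=lambda i: Q[state][i])
        next_state, reward, done = step(state, ACTIONS[a])
        # Q-learning update: nudge the estimate toward reward + discounted best future value.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

print([round(max(q), 2) for q in Q])  # values rise toward the goal: "go right" has been learned
```

Everything here is automatic: the score comes from a hand-coded reward function, which is exactly the part the approach below replaces.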

The approach in the video below is radically different. Instead of a predefined scoring function, human feedback is used to assign a score to each action sequence (each iteration/experiment). This is particularly useful for complex behaviors, such as a backflip, for which it is hard to predefine the constraints and actions that lead to the "most effective" backflip. For us humans, however, it is relatively easy to recognize a good backflip when we see one. The video below shows how the researchers therefore integrated human-computer interaction into their reinforcement learning algorithm. After observing the algorithm perform a sequence of actions, a human indicates to what extent the goal (i.e., a backflip) is achieved; in the paper, this takes the form of comparing two short clips and marking which one looks more like a backflip. This human assessment then functions as the score which the algorithm tries to maximize.
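
A minimal sketch of the core idea, under some assumptions: human feedback arrives as pairwise comparisons between two trajectory segments, and a small reward model is fitted so that the preferred segment gets the higher predicted return (a Bradley-Terry style loss, as in the paper). The linear reward model, the synthetic "human" judgements, and the training details below are illustrative, not those of the original authors.

```python
import numpy as np

rng = np.random.default_rng(0)
FEATURE_DIM = 4

def segment_return(w, segment):
    """Sum of the learned per-step reward r(s) = w·features(s) over a segment."""
    return (segment @ w).sum()

def preference_grad(w, seg_a, seg_b, human_prefers_a):
    """Gradient of the Bradley-Terry loss: the preferred segment should score higher."""
    p_a = 1.0 / (1.0 + np.exp(segment_return(w, seg_b) - segment_return(w, seg_a)))
    target = 1.0 if human_prefers_a else 0.0
    return (p_a - target) * (seg_a.sum(axis=0) - seg_b.sum(axis=0))

# Toy data: a hidden "true" reward stands in for the human's judgement.
true_w = np.array([1.0, -0.5, 0.0, 2.0])
w = np.zeros(FEATURE_DIM)

for _ in range(2000):
    seg_a = rng.normal(size=(10, FEATURE_DIM))   # two random 10-step segments
    seg_b = rng.normal(size=(10, FEATURE_DIM))
    human_prefers_a = segment_return(true_w, seg_a) > segment_return(true_w, seg_b)
    w -= 0.05 * preference_grad(w, seg_a, seg_b, human_prefers_a)

# The learned reward points in the same direction as the human's hidden preference.
print(np.round(w / np.linalg.norm(w), 2), np.round(true_w / np.linalg.norm(true_w), 2))
```

In the full approach, the agent is then trained with ordinary reinforcement learning against this learned reward model instead of a hand-coded score, and new comparisons are requested for the behavior the agent is currently producing.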

This approach can be really valuable for organizations seeking to improve their machine learning applications. The paper on the principle (Deep Reinforcement Learning from Human Preferences) can be found here. The scholars conclude that this human-preference-based approach achieves very good training results, while its cost is comparable to the simple bulldozer approach of training a neural network from scratch on GPU servers.