Just like humans, computers learn from experience. The purpose of A/B testing is typically to collect data to decide whether intervention A or intervention B is better: one group receives intervention A while another group receives intervention B. As the data from these two groups come in, the computer can statistically estimate which intervention is more effective, and the more data it has, the more certain that estimate becomes. Here lies a trade-off: we need to collect data on both interventions to be certain which is best, yet we don't want to keep running an inferior intervention, say B, once we are fairly sure that intervention A is better.
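To make that trade-off concrete, here is a minimal sketch of the classic A/B approach: split traffic evenly, wait, then run a two-proportion z-test. The true conversion rates and group size below are purely illustrative assumptions.

```python
import math
import random

random.seed(42)

# Assumed true rates for illustration only; in practice these are unknown.
TRUE_RATE_A, TRUE_RATE_B = 0.10, 0.12
N = 1000  # visitors assigned to each group

conversions_a = sum(random.random() < TRUE_RATE_A for _ in range(N))
conversions_b = sum(random.random() < TRUE_RATE_B for _ in range(N))
p_a, p_b = conversions_a / N, conversions_b / N

# Two-proportion z-test: is the observed difference larger than chance?
p_pool = (conversions_a + conversions_b) / (2 * N)
se = math.sqrt(p_pool * (1 - p_pool) * (2 / N))
z = (p_b - p_a) / se

print(f"A: {p_a:.3f}  B: {p_b:.3f}  z = {z:.2f}")
# |z| > 1.96 roughly corresponds to significance at the 5% level.
# With too few visitors the test stays inconclusive, yet half of all
# visitors kept receiving the weaker intervention the whole time.
```

Note how the cost of certainty is explicit here: every visitor sent to the losing arm is a missed opportunity, which is exactly what bandit algorithms try to reduce.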
In his post, Corné de Ruijt of Endouble writes about multi-armed bandit algorithms, which try to optimize this trade-off: “Multi-armed bandit algorithms try to overcome the high missed opportunity cost involved in learning, by exploiting and exploring at the same time. Therefore, these methods are in particular interesting when there is a high lost opportunity cost involved in the experiment, and when exploring and exploiting must be performed during a limited time interval.”
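As one illustration of exploring and exploiting at the same time, here is a sketch of epsilon-greedy, one of the simplest bandit strategies (Corné's post does not prescribe this particular algorithm; the rates and the EPSILON value are assumptions for the example):

```python
import random

random.seed(42)

TRUE_RATES = {"A": 0.10, "B": 0.12}  # assumed; unknown to the algorithm
EPSILON = 0.1  # fraction of visitors spent on exploration

counts = {arm: 0 for arm in TRUE_RATES}
successes = {arm: 0 for arm in TRUE_RATES}

def estimate(arm):
    """Current estimated conversion rate of an arm."""
    return successes[arm] / counts[arm] if counts[arm] else 0.0

for _ in range(1000):
    # Explore a random arm with probability epsilon,
    # otherwise exploit the arm that currently looks best.
    if random.random() < EPSILON:
        arm = random.choice(list(TRUE_RATES))
    else:
        arm = max(TRUE_RATES, key=estimate)
    counts[arm] += 1
    successes[arm] += random.random() < TRUE_RATES[arm]

for arm in TRUE_RATES:
    print(f"{arm}: shown {counts[arm]} times, estimated rate {estimate(arm):.3f}")
```

Unlike the fixed 50/50 split of an A/B test, this scheme shifts most traffic toward the better-performing arm while it is still learning, which is where the savings in missed opportunity cost come from.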
In the full article, you can read Corné’s comparison of the multi-armed bandit approach to traditional A/B testing using a recruitment and selection example. If you are interested in how you can apply this and other algorithms to optimize your own daily decisions, I highly recommend the book Algorithms to Live By: The Computer Science of Human Decisions, available on Amazon or the Dutch bol.com.