A Bayesian approach to the multi-armed bandit problem, applied to A/B/C/D... testing to decide which arm has the highest winning probability.
A traditional A/B testing approach is not very practical in cases like this, because running A/B experiments requires investing time, money, and resources before any outcome can be observed. Moreover, if we keep trying every arm even after the data collected so far suggests that one arm is better than the others, we lose the opportunity to earn more by betting on the arm with the best odds while the experiment is still running. A Bayesian bandit addresses this by shifting traffic toward the better-performing arms as evidence accumulates, instead of splitting it evenly until the experiment ends.
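The standard Bayesian strategy for this trade-off is Thompson sampling: keep a Beta posterior over each arm's win probability, sample from every posterior each round, and pull the arm with the highest sample. The sketch below (the source does not name a specific algorithm, and the arm win rates here are hypothetical) shows how exploration naturally concentrates on the best arm:

```python
import random

def thompson_sampling(true_probs, n_rounds=5000, seed=0):
    """Thompson sampling over Bernoulli arms with Beta(1, 1) priors."""
    rng = random.Random(seed)
    n_arms = len(true_probs)
    wins = [0] * n_arms    # observed successes per arm
    losses = [0] * n_arms  # observed failures per arm
    pulls = [0] * n_arms   # how often each arm was chosen
    for _ in range(n_rounds):
        # Draw one sample per arm from its Beta(wins+1, losses+1) posterior
        samples = [rng.betavariate(wins[a] + 1, losses[a] + 1)
                   for a in range(n_arms)]
        # Play the arm whose sampled win probability is highest
        arm = max(range(n_arms), key=lambda a: samples[a])
        # Simulate the (normally unknown) true arm to get a reward
        reward = 1 if rng.random() < true_probs[arm] else 0
        wins[arm] += reward
        losses[arm] += 1 - reward
        pulls[arm] += 1
    return pulls, wins

if __name__ == "__main__":
    # Hypothetical true win rates for arms A, B, C, D
    pulls, wins = thompson_sampling([0.10, 0.20, 0.40, 0.30])
    print(pulls)  # most pulls should go to arm C (index 2)
```

Because arms with uncertain posteriors occasionally produce high samples, the algorithm keeps exploring them, but as evidence accumulates the sampling concentrates on the arm with the highest estimated win rate.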