Contextual Bandits in R - simulation and evaluation of Multi-Armed Bandit Policies
Updated Jul 25, 2020 - R
This repository contains an assignment for the Decision Making course at Aarhus University. It applies Multi-Armed Bandit algorithms, specifically the epsilon-greedy algorithm, to optimizing click-through rates in digital advertising, balancing exploration of new ads against exploitation of ads that have performed well so far.
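As a minimal sketch of the approach described above, the following R snippet simulates epsilon-greedy ad selection. The three click-through rates, the epsilon value, and the number of rounds are all illustrative assumptions, not values from the assignment itself.

```r
set.seed(42)

# Hypothetical true click-through rates for three ads (assumed for illustration)
true_ctr <- c(0.05, 0.10, 0.15)
epsilon  <- 0.1      # probability of exploring a random ad
n_rounds <- 10000

counts  <- rep(0, length(true_ctr))  # times each ad was shown
rewards <- rep(0, length(true_ctr))  # total clicks observed per ad

for (t in seq_len(n_rounds)) {
  if (runif(1) < epsilon || all(counts == 0)) {
    arm <- sample(length(true_ctr), 1)           # explore: pick a random ad
  } else {
    arm <- which.max(rewards / pmax(counts, 1))  # exploit: pick best estimate
  }
  click <- rbinom(1, 1, true_ctr[arm])           # simulate a click (Bernoulli)
  counts[arm]  <- counts[arm] + 1
  rewards[arm] <- rewards[arm] + click
}

estimated_ctr <- rewards / pmax(counts, 1)
```

With enough rounds, `estimated_ctr` converges toward the true rates and the highest-CTR ad accumulates most of the impressions, illustrating the exploration/exploitation trade-off.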