
Fictitious play and Q-learning were examined for the computation of equilibria in zero-sum games


Tsili123/Computation-of-Equilibria-in-Zero-Sum-Games


MSc in AI Demokritos Multi-Agent Reinforcement Learning Assignment

Fictitious play and Q-learning were examined for the computation of equilibria in zero-sum games. We tested convergence to Nash equilibria in several two-player (two-agent) games: matching pennies, rock-paper-scissors, and selling damaged goods. Three pairings were evaluated: fictitious play agent vs. fictitious play agent (FP vs FP), Q-learning agent vs. Q-learning agent (Q-learning vs Q-learning), and fictitious play agent vs. Q-learning agent (FP vs Q-learning).
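To illustrate the FP vs FP pairing, here is a minimal sketch of fictitious-play self-play on matching pennies (not the repository's actual code): each agent plays a best response to the empirical frequency of the opponent's past actions, and in this zero-sum game the empirical strategies converge toward the mixed Nash equilibrium (0.5, 0.5). The payoff matrix, function name, and initialisation are illustrative assumptions.

```python
import numpy as np

# Matching pennies payoffs for the row player; the column player receives -A.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def fictitious_play(A, rounds=10_000):
    """FP vs FP: each agent best-responds to the opponent's empirical action counts."""
    beliefs1 = np.ones(2)  # player 1's counts of player 2's actions (1s break the initial tie)
    beliefs2 = np.ones(2)  # player 2's counts of player 1's actions
    for _ in range(rounds):
        # Player 1 best-responds to player 2's empirical mixed strategy.
        p2 = beliefs1 / beliefs1.sum()
        a1 = int(np.argmax(A @ p2))
        # Player 2 (payoff matrix -A) best-responds to player 1's empirical strategy.
        p1 = beliefs2 / beliefs2.sum()
        a2 = int(np.argmax(-(A.T @ p1)))
        beliefs2[a1] += 1
        beliefs1[a2] += 1
    # Empirical strategies of player 1 and player 2, respectively.
    return beliefs2 / beliefs2.sum(), beliefs1 / beliefs1.sum()

s1, s2 = fictitious_play(A)
print(s1, s2)  # both empirical strategies approach the uniform equilibrium
```

The play itself cycles (pure best responses never mix), but the *empirical frequencies* converge, which is the notion of convergence tested in the experiments above.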

The report is included here.
