# Runepicker Helper

This repository contains Satrium's and TiFu's entry for the Riot Games API Challenge 2017.

Demo: http://runehelper.heimerdinger.support/

## Goal

We want to enable all players to quickly create, customize, and select good rune pages based on automatically generated recommendations, improving the overall quality of the rune pages used in games.

## What does it do?

Runepicker Helper is a web/Electron app that allows a user to automatically generate a rune page for any champion during champion select. The page is generated by a machine learning model trained on one million games. To avoid locking the user into the generated page, it can be freely modified before use.

Additionally, the Electron app can access the League Client Update's (LCU) internal API directly: it creates and selects the generated page in the client with a single click. Creating and selecting the page takes well under a second, so you never again have to start a game with the wrong rune page because you weren't fast enough to set one up.
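As a rough illustration of that one-click flow, the sketch below builds the request payload for the LCU's rune page endpoint. The `/lol-perks/v1/pages` route and the `riot:<token>` Basic-auth scheme are the client's known local API conventions, but the page name, style IDs, and perk IDs here are purely illustrative, and the port and token must be read from the client's lockfile at runtime:

```python
import base64

def lcu_auth_header(token):
    """Basic-auth header the LCU expects: username 'riot', lockfile token as password."""
    cred = base64.b64encode(f"riot:{token}".encode()).decode()
    return {"Authorization": f"Basic {cred}"}

def build_rune_page(name, primary_style, sub_style, perk_ids):
    """JSON body for POST /lol-perks/v1/pages; 'current: True' marks it as the active page."""
    return {
        "name": name,
        "primaryStyleId": primary_style,
        "subStyleId": sub_style,
        "selectedPerkIds": perk_ids,
        "current": True,
    }

# Illustrative page: 8100/8200 are the Domination/Sorcery style IDs;
# the perk IDs are placeholders for whatever the model predicted.
page = build_rune_page("Runepicker: Ahri Mid", 8100, 8200,
                       [8112, 8139, 8138, 8135, 8229, 8237])
headers = lcu_auth_header("lockfile-token-goes-here")
```

Sending the payload is then a single local HTTPS `POST` to `https://127.0.0.1:<port>/lol-perks/v1/pages` with those headers (the LCU uses a self-signed certificate, so certificate verification has to be handled accordingly).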

## How does it work?

We trained a set of neural networks to predict the primary style, sub style, and chosen perks for a given champion and lane combination. This simple approach achieves 95% top-2 accuracy for primary style prediction and 85% accuracy for sub style prediction.
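The top-2 metric used above counts a prediction as correct if the true style is among the two highest-scoring outputs. A minimal NumPy sketch of that metric (the probabilities and labels below are toy values, not model output):

```python
import numpy as np

def top_k_accuracy(probs, labels, k=2):
    """Fraction of rows whose true label is among the k highest-scoring classes."""
    topk = np.argsort(probs, axis=1)[:, -k:]  # indices of the k largest scores per row
    return float(np.mean([labels[i] in topk[i] for i in range(len(labels))]))

# Toy example with 3 classes: row 0 ranks class 1 then 2, row 1 ranks class 0 then 1.
probs = np.array([[0.1, 0.5, 0.4],
                  [0.7, 0.2, 0.1]])
top_k_accuracy(probs, labels=[2, 1], k=2)  # both true labels are in the top 2 -> 1.0
```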

Predicting individual perks is harder because few of the available features indicate whether a player should pick a specific perk (e.g. Poro Ward). Even so, this simple model achieves above 50% accuracy on most perks.

## Repository Structure

* `backend`
  * the Python backend
* `crawler`
  * based on cassiopeia
  * crawls games and saves them in a database
* `data_transformation`
  * extracts information from the database generated by the crawler into another database
  * the database generated by this module is then used to train the neural networks
* `documentation`
  * the presentation of the project used on our website
* `frontend`
  * our Angular-based frontend code
* `ml`
  * code to train the models using Keras and TensorFlow