Riot Games API Challenge 2015
Itemify

A repository for Alvin Lin's and Justin He's web application for the 2015 Riot Games API Challenge 2.0. This web app analyzes tens of thousands of high-ranking games to determine the best item build for each champion. It then constructs an item set JSON from that build, which you can download and load into League of Legends. You can also download builds for all champions at once and load them into your game, so you are prepared no matter which champion you play.
Hosted at itemify.herokuapp.com.

Attribution

Itemify isn't endorsed by Riot Games and doesn't reflect the views or opinions of Riot Games or of anyone officially involved in producing or managing League of Legends. League of Legends and Riot Games are trademarks or registered trademarks of Riot Games, Inc. League of Legends © Riot Games, Inc. We do not own any of the image assets used on this website. All code is our intellectual property; you may use it freely, provided that you credit us as its source.

Overview

All data analysis is done on the backend with Python, even though the server runs on Node.js. The scripts of interest are all located in /dataset/scripts. These scripts fetch all the necessary data and parse it; each one is documented and contains a detailed description of its function. /dataset/builds contains all the JSON files generated by the scripts. When League of Legends releases a new patch, all we have to do is run:

/dataset/scripts/get_stats_from_seed.py
/dataset/scripts/get_stats.py
/dataset/scripts/generate_champion_builds.py

to update the data. Due to rate limiting on our Riot API key, get_stats.py usually takes about 18 hours, since we aggregate data from roughly 150,000 games.

Data Aggregation

Our algorithm for aggregating game data starts with a few summoners known as "seed" summoners. We query their past games and the past games of the teammates from those games, then recursively query those teammates' teammates until we have enough data. To generate champion builds, we assign each item an effectiveness score for each champion it has been built on. For each player who built that item on that champion, we add 2.0 if they won the game, plus the player's KDA ((kills + (assists / 2)) / deaths). Every item built on that champion is then sorted into buckets based on its type (starter, jungle, endgame, etc.) and sorted within each bucket by effectiveness.
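A minimal sketch of that per-player scoring (the function name and the zero-death handling are our illustration here, not the repository's actual code):

```python
def effectiveness_contribution(won, kills, deaths, assists):
    """Score one player's game for an item/champion pair.

    A win contributes a flat 2.0; the player's KDA,
    (kills + assists / 2) / deaths, is added on top.
    """
    # Assumption: treat zero deaths as one to avoid division by zero.
    kda = (kills + assists / 2) / max(deaths, 1)
    return (2.0 if won else 0.0) + kda
```

An item's total effectiveness for a champion is then the sum of these contributions over every player who built it.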

Data Aggregation Scripts

/dataset/scripts contains all the scripts that help us aggregate and organize game build data. In addition to these files, a file named .api_key must be present to store the API key used to query Riot's servers.
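For illustration, the key can be read like this (a sketch; the scripts' actual loading code may differ):

```python
def load_api_key(path=".api_key"):
    """Return the Riot API key stored in the .api_key file,
    stripping any trailing newline."""
    with open(path) as f:
        return f.read().strip()
```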

Classes

riot_api.py contains RiotApi, the class that takes care of the actual HTTP request to Riot's servers and returns the result of each request.
data_aggregator.py contains DataAggregator, the class that takes care of aggregating static data and live data for each summoner. It aggregates the item and champion JSONs as well as the summoner ID and recent builds for each summoner.
data_analyzer.py contains DataAnalyzer, the class that takes care of pulling important fields for filtering from the raw champion/item/summoner JSONs.
item_set_generator.py contains ItemSetGenerator and ItemSetBlockItems, classes used to help generate a valid item set JSON. They do not write the file themselves; they simply return the item set as a Python dict, which we dump to a file.
util.py contains Util, a class that contains utility methods which make our lives easier when parsing data.
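As an illustration of the kind of dict ItemSetGenerator returns, here is a hand-written example in the item set shape that League of Legends accepts; the title, block names, and item IDs are examples, not the repository's actual output:

```python
import json

# Hypothetical item set dict; the generator builds something of this shape.
item_set = {
    "title": "Itemify Example Build",
    "type": "custom",
    "map": "any",
    "mode": "any",
    "blocks": [
        {
            "type": "Starter Items",
            "items": [{"id": "1055", "count": 1}, {"id": "2003", "count": 1}],
        },
        {
            "type": "Endgame Items",
            "items": [{"id": "3031", "count": 1}],
        },
    ],
}

# The generator only returns the dict; serializing and writing it out
# is the caller's job.
serialized = json.dumps(item_set, indent=2)
```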

Runnables

get_items_champions.py pulls the items and champions from Riot's API and dumps the data into files.
get_item_assets.py pulls the image assets for every item from Data Dragon and dumps them into the static images folder.
get_stats_from_seed.py gets champion build data from the hard-coded seed summoners and initializes the temporary files QUERIED_SUMMONERS and UNQUERIED_SUMMONERS, which store the IDs of the summoners we have and have not queried.
get_stats.py gets champion build data using the temporary files QUERIED_SUMMONERS and UNQUERIED_SUMMONERS after they have been initialized, so it must be run after get_stats_from_seed.py.
recheck_queried_ids.py checks all the IDs in QUERIED_SUMMONERS against all the IDs in UNQUERIED_SUMMONERS and removes from UNQUERIED_SUMMONERS any duplicates and any summoners that have already been queried. Summoner IDs that have already been queried can make their way into UNQUERIED_SUMMONERS when they are teammates of summoners currently being queried.
generate_champion_builds.py is the most interesting script: it takes stats.json (generated by get_stats.py and get_stats_from_seed.py) and analyzes the data to determine the most effective build for each champion, dumping each build into its own file.
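The core of the de-duplication that recheck_queried_ids.py performs can be sketched as a pure function (our illustration; the script itself operates on the two temporary files):

```python
def prune_unqueried(queried, unqueried):
    """Drop already-queried IDs and duplicates from the unqueried list,
    preserving the order of first appearance."""
    queried_set = set(queried)
    # dict.fromkeys de-duplicates while keeping insertion order.
    return [sid for sid in dict.fromkeys(unqueried) if sid not in queried_set]
```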

Note

This project is no longer maintained. Please contact me at alvin.lin.dev@gmail.com if you would like to take over this project.