# NBA-prediction


Predicts scores of NBA games using regularized matrix completion.

## The Model

For a given NBA game, if you could accurately predict each team's offensive rating (points per 100 possessions) and the pace of the game (possessions per game), you could estimate the final score of the game.
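The arithmetic behind that claim is a simple rescaling. A quick illustration (the function name is ours, not the repository's):

```python
def predicted_points(off_rtg, pace):
    """Expected points = (points per 100 possessions) * (possessions / 100)."""
    return off_rtg * pace / 100.0

# e.g. an offensive rating of 110 over a 98-possession game
predicted_points(110.0, 98.0)  # -> 107.8
```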

Predicting a team's offensive rating against another team is tricky. It depends on how good the offensive team is at scoring and how good the defending team is at defending. Most importantly, though, it depends on the specific matchup between the two teams. This is reminiscent of recommendation systems, where a recommendation depends on the type of user, the type of product, and the affinity between the two. Furthermore, in a given season only some offensive ratings between teams are available (from the games already played). The strategy in this model is to use matrix completion techniques to estimate the unseen offensive ratings. These are then combined with pace estimates to predict final scores.

### Matrix completion

Here, we look at two methods for matrix completion, Maximum Margin Matrix Factorization (MMMF) and Singular Value Decomposition (SVD), both discussed in:

Hastie, Trevor, Robert Tibshirani, and Martin Wainwright. Statistical learning with sparsity: the lasso and generalizations. CRC Press, 2015.

### Maximum Margin Matrix Factorization (MMMF)

The objective of MMMF is to approximate an m x n matrix Z by factoring it into

    M = A B^T

where A is an m x r matrix and B is an n x r matrix. Effectively, this places a rank constraint r on the approximation M.

This can be estimated by solving the following

    \min_{A,B} \sum_{(i,j) \in \Omega} (z_{ij} - (AB^T)_{ij})^2 + \lambda (\|A\|_F^2 + \|B\|_F^2)

where Omega is the set of known entries of Z, so only known values enter the loss; unknown entries are ignored (equivalently, their residuals are treated as zero).
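One standard way to attack this objective (a sketch under our own assumptions, not necessarily this repository's implementation) is alternating ridge regression: with B fixed, each row of A solves a small ridge problem over that row's observed entries, and symmetrically for B.

```python
import numpy as np

def mmmf(Z, mask, r=5, lam=1.0, n_iter=100, seed=0):
    """Sketch of MMMF via alternating ridge regression.

    Z    : (m, n) matrix; values outside Omega are ignored
    mask : (m, n) boolean array, True on the observed entries (Omega)
    """
    rng = np.random.default_rng(seed)
    m, n = Z.shape
    A = rng.normal(scale=0.1, size=(m, r))
    B = rng.normal(scale=0.1, size=(n, r))
    reg = lam * np.eye(r)
    for _ in range(n_iter):
        # Holding B fixed, each row of A is a ridge solution over row i's observed entries
        for i in range(m):
            obs = mask[i]
            Bo = B[obs]
            A[i] = np.linalg.solve(Bo.T @ Bo + reg, Bo.T @ Z[i, obs])
        # Holding A fixed, each row of B is a ridge solution over column j's observed entries
        for j in range(n):
            obs = mask[:, j]
            Ao = A[obs]
            B[j] = np.linalg.solve(Ao.T @ Ao + reg, Ao.T @ Z[obs, j])
    return A @ B.T
```

Each inner solve is convex, but the joint problem over A and B is not, which is the non-convexity issue discussed below.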

While intuitive, this approach has two problems. First, it is a two-dimensional family of models, indexed by r (the rank of the factorization) and lambda (the strength of the regularization), which requires a lot of tuning. Second, the optimization problem is non-convex and, in practice, did not find global minima when used to predict NBA offensive ratings. Because of this, we turned to SVD.

### Singular Value Decomposition Using Nuclear Norm

SVD, not explained here, can be used to compute a rank-q approximation M of a matrix Z. This amounts to the following optimization

    \min_{\operatorname{rank}(M) = q} \|Z - M\|_F^2
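When no entries are missing, this problem has a closed-form solution (the Eckart-Young theorem): truncate the SVD of Z after its q largest singular values. A quick NumPy illustration:

```python
import numpy as np

def rank_q_approx(Z, q):
    """Best rank-q approximation of Z in Frobenius norm, via truncated SVD."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U[:, :q] * s[:q]) @ Vt[:q]
```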

If values are missing from Z, you can instead require M to reproduce the known values exactly while imputing the missing ones

    \min_{M} \operatorname{rank}(M) \quad \text{subject to} \quad m_{ij} = z_{ij}, \; (i,j) \in \Omega

where Omega is the set of known entries. However, this problem is NP-hard, and it also overfits, since the known values must be reproduced exactly. Instead, you can simultaneously predict the unknown values and approximate the known values by solving the following optimization

    \min_{\operatorname{rank}(M) \le q} \sum_{(i,j) \in \Omega} (z_{ij} - m_{ij})^2

Like MMMF, this problem is non-convex. However, it can be relaxed to the following convex optimization problem

    \min_{M} \frac{1}{2} \sum_{(i,j) \in \Omega} (z_{ij} - m_{ij})^2 + \lambda \|M\|_*

where ||M||*, the nuclear norm of M (the sum of its singular values), serves as a convex surrogate for rank. The resulting algorithm, called soft-impute, is studied extensively in:

Mazumder, Rahul, Trevor Hastie, and Robert Tibshirani. "Spectral regularization algorithms for learning large incomplete matrices." Journal of machine learning research 11.Aug (2010): 2287-2322.
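The soft-impute iteration itself is short: fill the missing entries with the current estimate, take an SVD, soft-threshold the singular values by lambda, and repeat. A minimal NumPy sketch (a simplified illustration, not this repository's implementation):

```python
import numpy as np

def soft_impute(Z, mask, lam=1.0, n_iter=100):
    """Sketch of soft-impute (Mazumder et al., 2010).

    Z    : (m, n) matrix; values outside Omega are ignored
    mask : (m, n) boolean array, True on the observed entries (Omega)
    """
    M = np.where(mask, Z, 0.0)           # initialize unknown entries at zero
    for _ in range(n_iter):
        filled = np.where(mask, Z, M)    # keep known values, impute the rest
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - lam, 0.0)     # soft-threshold the singular values
        M = (U * s) @ Vt
    return M
```

Each iteration only needs an SVD of the filled-in matrix, and larger lambda values zero out more singular values, giving lower-rank (more regularized) completions.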

## Example Code

To make predictions, use the following code:

```python
>>> model = NBAModel(update=True)
>>> model.get_scores('PHO', 'WAS')
PHO WAS
92.9092883132 97.1806398788
```

which predicts the Suns will lose to the Wizards 93-97.

Note: scraping all the data required to run the algorithm is slow, but this only needs to be done once. On subsequent runs, you can pass update=False to use the cached data.

## Model Tuning and Test Error

The optimization strategy above is parameterized by lambda, which controls the extent of regularization. Using a validation set (10% of the sample), we determined lambda = 25 to be the optimal value.


Using lambda = 25 on a held-out test set, the model estimates a team's final score with an MSE of 6.7. Not bad.
