We used the following packages developed at the MILA lab:
- Theano. A general GPU-accelerated Python math library with an interface similar to numpy (see [3, 4]); a minimal usage sketch follows this list. See http://deeplearning.net/software/theano/
- Blocks. A deep-learning and neural network framework for Python based on Theano. As Blocks evolves very rapidly, we suggest you use commit
1e0aca9171611be4df404129d91a991354e67730, against which our code is known to work. See https://github.com/mila-udem/blocks
- Fuel. A data pipelining framework for Blocks. As with Blocks, we suggest you use commit
ed725a7ff9f3d080ef882d4ae7e4373c4984f35a. See https://github.com/mila-udem/fuel
We also used the scikit-learn Python library for its mean-shift clustering algorithm. numpy, cPickle and h5py are also used in various places.
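To give an idea of what "an interface similar to numpy" means in practice, here is a minimal Theano sketch (not taken from the archive): a symbolic expression is built with numpy-like operations and compiled into a callable, optionally GPU-accelerated, function.

```python
import numpy as np
import theano
import theano.tensor as T

x = T.vector('x')                          # symbolic input, manipulated like a numpy array
expr = (x ** 2).sum()                      # numpy-like expression graph
f = theano.function([x], expr)             # compiled function (runs on the GPU if so configured)
print(f(np.arange(5, dtype='float32')))    # -> 30.0
```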
Here is a brief description of the Python files in the archive:
config/*.py: configuration files for the different models we experimented with; the model that gives our best (winning) solution is dest_mlp_tgtcls_1_cswdtx_alexandre, the one used in the training step below
data/*.py: files related to the data pipeline:
__init__.py: contains some general statistics about the data
csv_to_hdf5.py: converts the CSV data file into an HDF5 file usable directly by Fuel
hdf5.py: utility functions for working with the HDF5 file
init_valid.py: initializes the HDF5 file for the validation set
make_valid_cut.py: generates a validation set using a list of time cuts. Cut lists are stored in Python files in
data/cuts/ (we used a single cut file)
transformers.py: Fuel pipeline for transforming the training dataset into structures usable by our model
data_analysis/*.py: scripts for various statistical analyses on the dataset
cluster_arrival.py: the script used to generate the mean-shift clustering of the destination points, producing the 3392 target points
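For illustration, the mean-shift step can be reproduced with scikit-learn roughly as follows. This is only a sketch: the input file name and the bandwidth value are assumptions, not the exact settings used by cluster_arrival.py.

```python
import numpy as np
from sklearn.cluster import MeanShift

# (lat, lon) of the final position of every training trip -- hypothetical file name
destinations = np.load("destinations.npy")

# bandwidth value is an assumption; the real script's parameters may differ
ms = MeanShift(bandwidth=0.001, bin_seeding=True)
ms.fit(destinations)

target_points = ms.cluster_centers_   # cluster centres used as target points (~3392 of them)
np.save("arrival_clusters.npy", target_points)
```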
model/*.py: source code for the various models we tried
__init__.py: contains code common to all the models, including the code for embedding the metadata
mlp.py: contains code common to all MLP models
dest_mlp_tgtcls.py: contains the code for our MLP destination prediction model, which uses the target points for its output layer (a rough sketch of this idea follows the list)
error.py: contains the functions for calculating the error based on the Haversine distance (sketched below)
ext_saveload.py: contains a Blocks extension for saving and reloading the model parameters so that training can be interrupted
ext_test.py: contains a Blocks extension that runs the model on the test set and produces an output CSV submission file
train.py: contains the main code for training and testing
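Our reading of dest_mlp_tgtcls.py (from its name and the clustering step above) is that the network outputs a softmax over the target points and predicts the destination as their weighted mean. The following numpy sketch is an assumption about that output layer, not the actual Blocks code:

```python
import numpy as np

def predict_destination(softmax_output, target_points):
    """Weighted mean of the target points, one row per example.

    softmax_output: (batch, n_clusters) probabilities produced by the MLP
    target_points:  (n_clusters, 2) latitude/longitude of the mean-shift centres
    """
    return softmax_output @ target_points   # (batch, 2) predicted (lat, lon)
```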
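error.py itself is written with Theano/Blocks; the underlying Haversine (great-circle) distance, written as a plain numpy sketch:

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two points given in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2.0) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2.0) ** 2)
    return 2.0 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))
```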
How to reproduce the winning results?
There is a helper script
prepare.sh which might help you (it performs steps 1-6 below and some other checks), but be aware that if you encounter an error, the script will re-execute all the steps from the beginning (before the actual training, steps 2, 4 and 5 are quite long).
Note that some scripts expect the repository to be in your PYTHONPATH (go to the root of the repository and type, for example, export PYTHONPATH="$PWD:$PYTHONPATH").
1. Set the TAXI_PATH environment variable to the path of the folder containing the CSV files.
2. Run data/csv_to_hdf5.py "$TAXI_PATH" "$TAXI_PATH/data.hdf5" to generate the HDF5 file (which is generated in TAXI_PATH, alongside the CSV files). This takes around 20 minutes on our machines.
3. Run data/init_valid.py valid.hdf5 to initialize the validation set HDF5 file.
4. Run data/make_valid_cut.py test_times_0 to generate the validation set. This can take a few minutes.
5. Run data_analysis/cluster_arrival.py to generate the arrival point clustering. This can take a few minutes.
6. Create a folder model_data and a folder output (next to the training script), which will receive, respectively, regular saves of the model parameters and the many submission files generated from the model at regular intervals.
7. Run ./train.py dest_mlp_tgtcls_1_cswdtx_alexandre to train the model. Output solutions are generated in output/ every 1000 iterations. You can interrupt training with three consecutive Ctrl+C at any time. The training script is set to stop after 10 000 000 iterations, but a result file produced after fewer than 2 000 000 iterations is already the winning solution. We trained our model on a GeForce GTX 680 card and it took about an afternoon to generate the winning solution. When running the training script, set the THEANO_FLAGS environment variable so that Theano runs on the GPU (for example, THEANO_FLAGS=floatX=float32,device=gpu).
More information can be found in this pdf.