Python code for the estimation of a link-based route choice model with decomposed utilities (a global-local model)
For more details, please see the paper
Oyama, Y. (2024) Global path preference and local response: A reward decomposition approach for network path choice analysis in the presence of visually perceived attributes. Transportation Research Part A: Policy and Practice 181: 103998 (Open Access).
If you find this code useful, please cite the paper:
```bibtex
@article{oyama2024globalocal,
    title = {Global path preference and local response: A reward decomposition approach for network path choice analysis in the presence of visually perceived attributes},
    journal = {Transportation Research Part A: Policy and Practice},
    volume = {181},
    pages = {103998},
    year = {2024},
    issn = {0965-8564},
    doi = {https://doi.org/10.1016/j.tra.2024.103998},
    url = {https://www.sciencedirect.com/science/article/pii/S0965856424000466},
    author = {Yuki Oyama},
}
```
I have prepared synthetic data generated on the Sioux Falls network. Two synthetic datasets are available in the data folder:
- `data_G0.csv`: simulated by a global model, i.e., the original recursive logit model.
- `data_L0.csv`: simulated by a global-local model, wherein capacity is assumed to have a local impact.
You can specify which dataset to use in `run_estimation.py`. The corresponding part of the code is:

```python
# for the prepared synthetic dataset: choose
# "data_G0.csv" for data generated by a global model
# "data_L0.csv" for data generated by a local model
obs_data = pd.read_csv(os.path.join(data_dir, 'data_L0.csv'))
```
Estimate a global-local model with both global and local attributes by specifying `vars_g` (global attribute names) and `vars_l` (local attribute names), together with their initial values and lower/upper bounds:

```
python run_estimation.py --vars_g "length" --init_beta_g -1 --lb_g None --ub_g 0 --vars_l "caplen" --init_beta_l 0 --lb_l None --ub_l None
```
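For intuition, the sketch below shows one way such bound arguments could be parsed, where the string `None` means "unbounded" in the `(lower, upper)` form that bounded optimizers such as SciPy's L-BFGS-B expect. This is an illustrative assumption, not the repository's actual parser:

```python
import argparse

def parse_bound(s):
    """Convert a CLI token to a float bound; the string 'None' means unbounded."""
    return None if s == "None" else float(s)

# Hypothetical parser mirroring the flag names used above
parser = argparse.ArgumentParser()
parser.add_argument("--vars_g", nargs="+", default=[])
parser.add_argument("--init_beta_g", nargs="+", type=float, default=[])
parser.add_argument("--lb_g", nargs="+", type=parse_bound, default=[])
parser.add_argument("--ub_g", nargs="+", type=parse_bound, default=[])

args = parser.parse_args(
    ["--vars_g", "length", "--init_beta_g", "-1", "--lb_g", "None", "--ub_g", "0"]
)
# Pair the bounds as (lower, upper) tuples, one per parameter
bounds = list(zip(args.lb_g, args.ub_g))
print(bounds)  # [(None, 0.0)]
```

Here `--lb_g None --ub_g 0` becomes the one-sided constraint `(None, 0.0)`, restricting the length coefficient to be non-positive.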
Estimate a global-local model with only global attributes (corresponding to the original recursive logit model):

```
python run_estimation.py --vars_g "length" "caplen" --init_beta_g -1 -1 --lb_g None None --ub_g 0 None
```
For cross-validation, split the data into estimation and validation samples by setting `test_ratio` greater than zero:

```
python run_estimation.py --n_samples 10 --test_ratio 0.2
```
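The split itself can be sketched as a random hold-out of a `test_ratio` share of observations, repeated `n_samples` times. The function name and interface below are assumptions for illustration, not the repository's API:

```python
import numpy as np

def split_observations(obs_ids, test_ratio, rng):
    """Randomly hold out a test_ratio share of observations for validation."""
    obs_ids = np.asarray(obs_ids)
    perm = rng.permutation(len(obs_ids))
    n_test = int(len(obs_ids) * test_ratio)
    # (estimation sample, validation sample)
    return obs_ids[perm[n_test:]], obs_ids[perm[:n_test]]

rng = np.random.default_rng(0)
est, val = split_observations(range(100), test_ratio=0.2, rng=rng)
print(len(est), len(val))  # 80 20
```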
For bootstrapping, resample K sets of observations and estimate the model with each of them, by setting `n_samples` greater than 1 and `test_ratio` to zero:

```
python run_estimation.py --n_samples 200 --test_ratio 0 --isBootstrap True
```

Note: this returns only the K sets of estimation results, so you should compute standard errors or confidence intervals from them afterwards.
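As a sketch of that post-processing step: given K bootstrap estimates of a parameter, the bootstrap standard error is their sample standard deviation and a 95% percentile confidence interval comes from the 2.5th and 97.5th percentiles. The array below is synthetic stand-in data, not output of the code:

```python
import numpy as np

# Synthetic stand-in for K = 200 bootstrap estimates of one coefficient
rng = np.random.default_rng(42)
beta_hats = rng.normal(loc=-1.0, scale=0.1, size=200)

se = beta_hats.std(ddof=1)                                # bootstrap standard error
ci_low, ci_high = np.percentile(beta_hats, [2.5, 97.5])   # 95% percentile CI
print(f"SE = {se:.3f}, 95% CI = [{ci_low:.3f}, {ci_high:.3f}]")
```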