Merge pull request #75 from cimat-ris/use_config_files

Use config files

jbhayet committed Apr 29, 2024
2 parents f0427ed + 50de88f commit 6f5312a
Showing 60 changed files with 2,210 additions and 2,352 deletions.
3 changes: 3 additions & 0 deletions .gitignore
@@ -131,3 +131,6 @@ dmypy.json
# Dataset related
*sdd_raw*
*.pickle
*.pth
*.pdf
*.csv
28 changes: 18 additions & 10 deletions README.md
@@ -2,33 +2,40 @@
[![Tests status](https://github.com/cimat-ris/trajpred-bdl/actions/workflows/python-app.yml/badge.svg)](https://github.com/cimat-ris/trajpred-bdl/actions/workflows/python-app.yml)
# trajpred-bdl

## To install the libraries

```bash
pip install -e .
```

## Training

To train a simple deterministic model:

- ```
+ ```bash
python scripts/train_deterministic.py
```

To train a simple deterministic model with variances as output (DG):

- ```
+ ```bash
python scripts/train_deterministic_gaussian.py
```

To train a model made of an ensemble of DGs (DGE):

- ```
+ ```bash
python scripts/train_ensembles.py
```

To train a deterministic model with dropout at inference (DD):
- ```
+
+ ```bash
python scripts/train_dropout.py
```

To train a deterministic-variational model (DV):
- ```
+ ```bash
python scripts/train_variational.py
```

@@ -37,15 +44,15 @@ python scripts/train_variational.py

With any of the training scripts above, you can pass the `--no-retrain` option to produce testing results from a previously trained model:

- ```
+ ```bash
python scripts/train_ensembles.py --no-retrain --pickle --examples 10
```

## Calibration: a postprocess step

After training, a model saves its results in a `pickle` file. The calibration step then takes this output from the trained model and can be executed as follows:

- ```
+ ```bash
# training the desired model
$ python scripts/train_torch_deterministic_gaussian.py --pickle --no-retrain

@@ -72,14 +79,15 @@ The `test_calibration.py` script uses Isotonic regression to compute the calibra
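As context for the isotonic-regression calibration mentioned above, here is a generic Pool-Adjacent-Violators (PAVA) sketch of an isotonic fit. This is illustrative only, not the repository's `test_calibration.py` implementation, and the sample numbers are made up:

```python
# Pool-Adjacent-Violators (PAVA): least-squares nondecreasing fit of a sequence.
def isotonic_fit(y):
    blocks = []  # each block: [mean, weight]
    for v in y:
        blocks.append([float(v), 1])
        # merge adjacent blocks while the monotonicity constraint is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2 = blocks.pop()
            m1, w1 = blocks.pop()
            w = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / w, w])
    fit = []
    for m, w in blocks:
        fit.extend([m] * w)
    return fit

# e.g. smoothing observed coverage frequencies over nominal confidence levels
observed = [0.05, 0.30, 0.25, 0.60, 0.55, 0.90]
calibrated = isotonic_fit(observed)  # nondecreasing version of `observed`
```

Calibration then maps each nominal confidence level to its (now monotone) empirical counterpart.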

* Clone the [bitrap repository](https://github.com/umautobots/bidireaction-trajectory-prediction).
* The train/test partitions from the [Trajectron++](https://github.com/StanfordASL/Trajectron-plus-plus) repository are now present in the `datasets/trj++` directory as `.pkl` files.
* Modify line 30 of *bitrap_np_ETH.yml* to set the path to where the .json file is located. You may also change BATCH_SIZE or NUM_WORKERS.

* To train bitrap, run
- ```
+ ```bash
python scripts/train_bitrap.py --config_file bitrap_np_ETH.yml --seed n
```
By changing the seed, you will build different models for an ensemble.
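The per-seed runs can be scripted; a sketch of such a loop (the seed values are arbitrary, and `echo` is kept so the loop only prints the commands rather than launching training):

```shell
# Build one ensemble member per seed; drop `echo` to launch the real runs.
for seed in 1 2 3 4 5; do
  echo python scripts/train_bitrap.py --config_file bitrap_np_ETH.yml --seed "$seed"
done
```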

* To generate calibration data from bitrap, run
- ```
+ ```bash
python tests/test_bitrap.py
```
2 changes: 1 addition & 1 deletion cfg/bitrap_np_eth.yml
@@ -27,7 +27,7 @@ MODEL:
DEC_HIDDEN_SIZE: 256
DATASET:
NAME: 'eth'
- ETH_CONFIG: '/home/jbhayet/opt/repositories/devel/bidireaction-trajectory-prediction/configs/ETH_UCY.json'
+ ETH_CONFIG: '../bitrap/configs/ETH_UCY.json'
ROOT: 'datasets/trj++'
TRAJECTORY_PATH: 'datasets/trj++'
DATALOADER:
2 changes: 1 addition & 1 deletion cfg/bitrap_np_hotel.yml
@@ -27,7 +27,7 @@ MODEL:
DEC_HIDDEN_SIZE: 256
DATASET:
NAME: 'hotel'
- ETH_CONFIG: '/home/jbhayet/opt/repositories/devel/bidireaction-trajectory-prediction/configs/ETH_UCY.json'
+ ETH_CONFIG: '../bitrap/configs/ETH_UCY.json'
ROOT: 'datasets/trj++'
TRAJECTORY_PATH: 'datasets/trj++'
DATALOADER:
2 changes: 1 addition & 1 deletion cfg/bitrap_np_univ.yml
@@ -27,7 +27,7 @@ MODEL:
DEC_HIDDEN_SIZE: 256
DATASET:
NAME: 'univ'
- ETH_CONFIG: '/home/jbhayet/opt/repositories/devel/bidireaction-trajectory-prediction/configs/ETH_UCY.json'
+ ETH_CONFIG: '../bitrap/configs/ETH_UCY.json'
ROOT: 'datasets/trj++'
TRAJECTORY_PATH: 'datasets/trj++'
DATALOADER:
2 changes: 1 addition & 1 deletion cfg/bitrap_np_zara1.yml
@@ -27,7 +27,7 @@ MODEL:
DEC_HIDDEN_SIZE: 256
DATASET:
NAME: 'zara1'
- ETH_CONFIG: '/home/jbhayet/opt/repositories/devel/bidireaction-trajectory-prediction/configs/ETH_UCY.json'
+ ETH_CONFIG: '../bitrap/configs/ETH_UCY.json'
ROOT: 'datasets/trj++'
TRAJECTORY_PATH: 'datasets/trj++'
DATALOADER:
2 changes: 1 addition & 1 deletion cfg/bitrap_np_zara2.yml
@@ -27,7 +27,7 @@ MODEL:
DEC_HIDDEN_SIZE: 256
DATASET:
NAME: 'zara2'
- ETH_CONFIG: '/home/jbhayet/opt/repositories/devel/bidireaction-trajectory-prediction/configs/ETH_UCY.json'
+ ETH_CONFIG: '../bitrap/configs/ETH_UCY.json'
ROOT: 'datasets/trj++'
TRAJECTORY_PATH: 'datasets/trj++'
DATALOADER:
31 changes: 31 additions & 0 deletions cfg/deterministic_dropout_ethucy.yaml
@@ -0,0 +1,31 @@
dataset:
  pickle_dir: 'pickle/'
  validation_proportion: 0.1
  use_neighbors: False
  batch_size: 256
  person_max: 70 # Maximum number of persons in a frame
  obs_len: 8 # Observation length (trajlet size)
  pred_len: 12 # Prediction length
  delim: "," # Delimiter
  dt: 0.4 # Delta time (time between two discrete time samples)
  max_overlap: 1 # Maximal overlap between trajlets
model:
  num_layers: 2
  dropout_rate: 0.2
  hidden_dim: 128
  embedding_dim: 128
  input_dim: 2
  output_dim: 2
train:
  initial_lr: 0.001
  epochs: 220
  no_retrain: False
  teacher_forcing: False
  save_dir: "training_checkpoints/"
  model_name: "deterministic"
misc:
  plot_losses: True
  plot_dir: "images/"
  show_test: False
  samples_test: 10
  model_samples: 10
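Config files like the one above are presumably consumed by the training scripts; a minimal sketch of reading one (the inline excerpt and the PyYAML usage are assumptions, not the repository's actual loader code):

```python
import yaml  # PyYAML; assumed available

# A small excerpt in the same shape as the config above.
cfg_text = """
model:
  hidden_dim: 128
  dropout_rate: 0.2
train:
  initial_lr: 0.001
  epochs: 220
"""
cfg = yaml.safe_load(cfg_text)        # nested dicts, scalars typed automatically
lr = cfg["train"]["initial_lr"]       # → 0.001 (parsed as float)
hidden = cfg["model"]["hidden_dim"]   # → 128 (parsed as int)
```

Keeping hyperparameters in YAML, as this PR does, lets the same script train every variant by swapping the config path.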
35 changes: 35 additions & 0 deletions cfg/deterministic_ethucy.yaml
@@ -0,0 +1,35 @@
dataset:
  id_dataset: 0 # 0: ETHUCY
  id_test: 2
  pickle: False
  pickle_dir: 'pickle/'
  validation_proportion: 0.1
  use_neighbors: False
  batch_size: 512
  person_max: 70 # Maximum number of persons in a frame
  obs_len: 8 # Observation length (trajlet size)
  pred_len: 12 # Prediction length
  delim: "," # Delimiter
  dt: 0.4 # Delta time (time between two discrete time samples)
  max_overlap: 1 # Maximal overlap between trajlets
model:
  num_layers: 2
  dropout_rate: 0.2
  hidden_dim: 128
  embedding_dim: 8
  input_dim: 2
  output_dim: 2
train:
  initial_lr: 0.001
  epochs: 800
  no_retrain: False
  teacher_forcing: False
  save_dir: "training_checkpoints/"
  model_name: "deterministic"
misc:
  plot_losses: True
  plot_dir: "images/"
  show_test: True
  samples_test: 10
  log_level: 20
  seed: 1234
31 changes: 31 additions & 0 deletions cfg/deterministic_gaussian_ethucy.yaml
@@ -0,0 +1,31 @@
dataset:
  pickle_dir: 'pickle/'
  validation_proportion: 0.1
  use_neighbors: True
  batch_size: 256
  person_max: 70 # Maximum number of persons in a frame
  obs_len: 8 # Observation length (trajlet size)
  pred_len: 12 # Prediction length
  delim: "," # Delimiter
  dt: 0.4 # Delta time (time between two discrete time samples)
  max_overlap: 1 # Maximal overlap between trajlets
model:
  num_layers: 2
  dropout_rate: 0.2
  hidden_dim: 128
  embedding_dim: 128
  input_dim: 2
  output_dim: 2
train:
  initial_lr: 0.001
  epochs: 220
  no_retrain: False
  teacher_forcing: False
  save_dir: "training_checkpoints/"
  model_name: "deterministic_gaussian"
misc:
  plot_losses: False
  plot_dir: "images/"
  show_test: False
  samples_test: 10
  model_samples: 3
36 changes: 36 additions & 0 deletions cfg/deterministic_gaussian_sdd.yaml
@@ -0,0 +1,36 @@
dataset:
  id_dataset: 1 # 1: SDD (0: ETHUCY)
  id_test: 2
  pickle: True
  pickle_dir: 'pickle/'
  validation_proportion: 0.1
  use_neighbors: False
  batch_size: 256
  person_max: 70 # Maximum number of persons in a frame
  obs_len: 8 # Observation length (trajlet size)
  pred_len: 12 # Prediction length
  delim: "," # Delimiter
  dt: 0.4 # Delta time (time between two discrete time samples)
  max_overlap: 1 # Maximal overlap between trajlets
model:
  num_layers: 2
  dropout_rate: 0.2
  hidden_dim: 128
  embedding_dim: 128
  input_dim: 2
  output_dim: 2
train:
  initial_lr: 0.001
  epochs: 1 # 220 for a full run
  no_retrain: True
  teacher_forcing: False
  save_dir: "training_checkpoints/"
  model_name: "deterministic_gaussian"
misc:
  plot_losses: False
  plot_dir: "images/"
  show_test: True
  samples_test: 10
  log_level: 20
  seed: 1234
  model_samples: 3
37 changes: 37 additions & 0 deletions cfg/deterministic_variational_ethucy.yaml
@@ -0,0 +1,37 @@
dataset:
  id_dataset: 0 # 0: ETHUCY
  id_test: 2
  pickle: False
  pickle_dir: 'pickle/'
  validation_proportion: 0.1
  use_neighbors: False
  batch_size: 256
  person_max: 70 # Maximum number of persons in a frame
  obs_len: 8 # Observation length (trajlet size)
  pred_len: 12 # Prediction length
  delim: "," # Delimiter
  dt: 0.4 # Delta time (time between two discrete time samples)
  max_overlap: 1 # Maximal overlap between trajlets
model:
  num_layers: 2
  dropout_rate: 0.2
  hidden_dim: 128
  embedding_dim: 128
  input_dim: 2
  output_dim: 2
train:
  initial_lr: 0.001
  epochs: 2 # 220 for a full run
  num_mctrain: 10
  no_retrain: False
  teacher_forcing: False
  save_dir: "training_checkpoints/"
  model_name: "deterministic_variational"
misc:
  plot_losses: True
  plot_dir: "images/"
  show_test: True
  samples_test: 10
  log_level: 20
  seed: 1234
  model_samples: 10
37 changes: 37 additions & 0 deletions datasets/edinburgh/preprocess_edinburgh.py
@@ -0,0 +1,37 @@
import os, sys
import pandas as pd
import matplotlib.pyplot as plt

sys.path.append('.')
sys.path.append('../../../OpenTraj')  # make the local OpenTraj checkout importable
from opentraj.loaders.loader_edinburgh import load_edinburgh

months = {
    #'Jan' : {'01','02','04','05','06','07','08','10','11','12','13','14','15','16','17','18','19'},
    #'May' : {'29','30','31'},
    #'Jun' : {'02','03','04','05','06','08','09','11','12','14','16','17','18','20','22','24','25','26','29','30'},
    #'Jul' : {'01','02','04','11','12','13','14','17','18','19','20','21','22','23','25','26','27','28','29','30'},
    #'Aug' : {'01','24','25','26','27','28','29','30'},
    'Sep' : {'01','02','04','05','06','10','11','12','13','14','16','18','19','20','21','22','23','24','25','27','28','29','30'},
    'Oct': {'02','03','04','05','06','07','08','09','10','11','12','13','14','15'},
    'Dec' : {'06','11','14','15','16','18','19','20','21','22','23','24','29','30','31'}
}
# Fixme: set proper OpenTraj directory
edi_root = os.path.join('../../../OpenTraj', 'datasets', 'Edinburgh', 'annotations')
edi_data = {key: pd.DataFrame() for key in months.keys()}

for month, videos_per_month in months.items():
    scene_video_ids = [day+month for day in videos_per_month]
    traj_datasets_per_scene = []

    for scene_video_id in scene_video_ids:
        annot_file = os.path.join(edi_root, 'tracks.'+scene_video_id+'.txt')
        print(annot_file)
        itraj_dataset = load_edinburgh(annot_file, title="Edinburgh",
                                       use_kalman=False, scene_id=scene_video_id, sampling_rate=4)  # original framerate=9
        trajs = list(itraj_dataset.get_trajectories())
        traj_datasets_per_scene.append(pd.concat([itraj_dataset.data.iloc[:, :4], itraj_dataset.data.iloc[:, 8:9]], axis=1))

    if len(traj_datasets_per_scene) > 0:
        df = pd.concat(traj_datasets_per_scene)
        df.to_pickle(month+'.pickle')

12 changes: 7 additions & 5 deletions datasets/sdd/preprocess_sdd.py
@@ -2,6 +2,8 @@
import pickle
import pandas as pd

sys.path.append('../../../OpenTraj')
sys.path.append('../../../OpenTraj/opentraj')
from opentraj.toolkit.loaders.loader_sdd import load_sdd
sys.path.append('.')

@@ -16,8 +18,7 @@
    'quad' : 4
}
# Fixme: set proper OpenTraj directory
- sdd_root = os.path.join('/<dirTo>/OpenTraj', 'datasets', 'SDD')
-
+ sdd_root = os.path.join('../../../OpenTraj', 'datasets', 'SDD')
sdd_data = {key: pd.DataFrame() for key in scenes.keys() }

for scene_name, total_videos_per_scene in scenes.items():
@@ -36,6 +37,7 @@
                                drop_lost_frames=False, use_kalman=False, label='Pedestrian')
        traj_datasets_per_scene.append(pd.concat([itraj_dataset.data.iloc[:, : 4], itraj_dataset.data.iloc[:, 8:9]], axis=1))

-    pickle_out = open(scene_name+'.pickle',"wb")
-    pickle.dump(pd.concat(traj_datasets_per_scene), pickle_out, protocol=2)
-    pickle_out.close()

+    df = pd.concat(traj_datasets_per_scene)
+    df.to_pickle(scene_name+'.pickle')
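The change above trades a manual `pickle.dump(..., protocol=2)` for `DataFrame.to_pickle`; a tiny round-trip sketch (column names and the temp-file path are hypothetical, not from the repository):

```python
import os
import tempfile

import pandas as pd

# Toy trajectory-like frame with made-up columns.
df = pd.DataFrame({"frame_id": [0, 1], "agent_id": [7, 7],
                   "pos_x": [0.1, 0.2], "pos_y": [1.0, 1.1]})
path = os.path.join(tempfile.gettempdir(), "demo_scene.pickle")
df.to_pickle(path)               # handles file open/close and protocol choice
restored = pd.read_pickle(path)  # round-trips dtypes and index exactly
```

`to_pickle` is also symmetric with `read_pickle`, so the consumers of these scene pickles need no separate deserialization code.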
