Use config files #75

Merged: 55 commits, Apr 29, 2024

Commits
34c0ea1
New version of the training script for the deterministic model
jbhayet Mar 23, 2024
539caf4
Update .gitignore
jbhayet Mar 23, 2024
aee686a
Some rewriting of the basic model
jbhayet Mar 23, 2024
51b2111
Modifications in the config loading; modifications in the dataset loa…
jbhayet Mar 23, 2024
98dde2f
Model and util scripts modified and simplified
jbhayet Mar 24, 2024
6e4cb3e
yaml configuration files
jbhayet Mar 24, 2024
9c72ab6
Corrected a few problems with plots for example
jbhayet Mar 24, 2024
c7cfe39
Updated config files
jbhayet Mar 30, 2024
85bad07
Updates of the MC dropout model
jbhayet Mar 30, 2024
cf29312
Modified minadefde
jbhayet Apr 1, 2024
1baa5ac
Rewriting the ensemble script
jbhayet Apr 2, 2024
5f10b98
Made the name of this config param more general
jbhayet Apr 2, 2024
a0f16c6
Revisited test_teaser (visualization of KDEs before/after calibration)
jbhayet Apr 3, 2024
bed285f
Added these two files to separate some of the functionalities
jbhayet Apr 3, 2024
7e6752e
Some rewriting and code simplification
jbhayet Apr 3, 2024
a882f63
Slight changes
jbhayet Apr 3, 2024
c1264f0
Cleaned the imports
jbhayet Apr 3, 2024
b766d0d
SDD example working (although something weird with the data) + change n…
jbhayet Apr 4, 2024
c95a14e
Started to integrate an efficient version of HDR with KNNs
jbhayet Apr 4, 2024
3f167a7
Restructured module
jbhayet Apr 5, 2024
acb03b0
Added Edinburgh dataset
jbhayet Apr 5, 2024
803473d
SDD test file
jbhayet Apr 5, 2024
a51b4ea
Started implementation of HDR-KNN
jbhayet Apr 5, 2024
1499bfd
Modified imports to reflect the new structure
jbhayet Apr 5, 2024
410e372
Added setup file
jbhayet Apr 5, 2024
0a85536
Modified imports
jbhayet Apr 5, 2024
e4960bc
Corrected imports in some of the scripts
jbhayet Apr 5, 2024
c39457c
More corrections
jbhayet Apr 5, 2024
26847d9
Changed get_model_name into get_model_filename
jbhayet Apr 5, 2024
02f33da
Correction of get_model_name
jbhayet Apr 5, 2024
7bcaec3
More corrections
jbhayet Apr 5, 2024
10a59c1
Upgraded the variational example
jbhayet Apr 6, 2024
26ce2d6
Removed mentions of IMAGES_DIR
jbhayet Apr 6, 2024
de043ea
Updated test
jbhayet Apr 6, 2024
a996631
Add instructions for installing the package
jbhayet Apr 8, 2024
f94d573
Revisiting bitrap training
jbhayet Apr 8, 2024
5f4ca97
Removed script redundant with test_calibration
jbhayet Apr 8, 2024
f17684e
Parameters
jbhayet Apr 8, 2024
e7e7895
Coming back to a hybrid way for getting parameters, some of them thro…
jbhayet Apr 9, 2024
8ad636d
Changed the np.gradient calls to compute derivatives
jbhayet Apr 9, 2024
ced52c4
Changed default value
jbhayet Apr 9, 2024
fa4c8a5
Updated tests example
jbhayet Apr 10, 2024
1799a14
Updated config files
jbhayet Apr 10, 2024
96d9c35
Updated setup file
jbhayet Apr 18, 2024
cf8c26f
Simplifying some of the test scripts
jbhayet Apr 18, 2024
64cff0f
Added files for packages
jbhayet Apr 18, 2024
5aaa0b5
Unifying the prototypes of the calibration function (no timestep)
jbhayet Apr 18, 2024
49eb3ee
Simplification in the calibration code
jbhayet Apr 18, 2024
cad1156
More simplifications in the calibration routines
jbhayet Apr 18, 2024
a68acad
Improvements to the test_calibration script
jbhayet Apr 19, 2024
5189905
Updated eval with bitrap; ready for uncertainty evaluation
jbhayet Apr 25, 2024
3ec76e7
Simplifying the generation of data for calibration
jbhayet Apr 25, 2024
ecbffbb
Simplified calibration data generation
jbhayet Apr 26, 2024
173a5ef
More simplifications
jbhayet Apr 26, 2024
50de88f
Major bug in the HDR approximation by KNN
jbhayet Apr 27, 2024
Files changed
3 changes: 3 additions & 0 deletions .gitignore
@@ -131,3 +131,6 @@ dmypy.json
# Dataset related
*sdd_raw*
*.pickle
*.pth
*.pdf
*.csv
28 changes: 18 additions & 10 deletions README.md
@@ -2,33 +2,40 @@
[![Tests status](https://github.com/cimat-ris/trajpred-bdl/actions/workflows/python-app.yml/badge.svg)](https://github.com/cimat-ris/trajpred-bdl/actions/workflows/python-app.yml)
# trajpred-bdl

## To install the libraries

```bash
pip install -e .
```

## Training

To train a simple deterministic model:

```
```bash
python scripts/train_deterministic.py
```

To train a simple deterministic model with variances as output (DG):

```
```bash
python scripts/train_deterministic_gaussian.py
```

To train a model made of an ensemble of DGs (DGE):

```
```bash
python scripts/train_ensembles.py
```

To train a deterministic model with dropout at inference (DD):
```

```bash
python scripts/train_dropout.py
```

To train a deterministic-variational model (DV):
```
```bash
python scripts/train_variational.py
```

@@ -37,15 +44,15 @@ python scripts/train_variational.py

With any of the training scripts above, you can use the `--no-retrain` option to produce testing results without retraining:

```
```bash
python scripts/train_ensembles.py --no-retrain --pickle --examples 10
```

## Calibration: a postprocess step

After a model is trained, its results are saved in a `pickle` file. The calibration step then uses this output and can be executed as follows:

```
```bash
# training the desired model
$ python scripts/train_torch_deterministic_gaussian.py --pickle --no-retrain

@@ -72,14 +79,15 @@ The `test_calibration.py` script uses Isotonic regression to compute the calibration

* Clone the [bitrap repository](https://github.com/umautobots/bidireaction-trajectory-prediction).
* The train/test partition from the [Trajectron++](https://github.com/StanfordASL/Trajectron-plus-plus) repository are now present in the datasets/trj++ directory as .pkl files.
* Modify line 30 of *bitrap_np_ETH.yml* to set the path to where the .json file is located. You may also change BATCH_SIZE or NUM_WORKERS.

* To train bitrap, run
```
```bash
python scripts/train_bitrap.py --config_file bitrap_np_ETH.yml --seed n
```
By changing the seed, you will be building different models for an ensemble.

* To generate calibration data from bitrap, run
```
```bash
python tests/test_bitrap.py
```
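The calibration step described in the README above starts from the pickled predictions of a trained model. As a rough illustration only, the sketch below shows how an isotonic-regression recalibration of confidence levels could look; the pickle path, the `hdr_level` field, and the exact calibration map are assumptions, not the actual contents of the `test_calibration.py` script.

```python
# Hedged sketch of isotonic-regression recalibration of confidence levels.
# The pickle path and the "hdr_level" field are hypothetical placeholders;
# the repository's real logic lives in the test_calibration.py script.
import pickle
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Load the outputs saved by a training script run with --pickle (assumed layout).
with open("pickle/calibration_data.pickle", "rb") as f:
    results = pickle.load(f)

# results["hdr_level"] is assumed to hold, per test trajectory, the smallest
# nominal confidence level whose highest-density region contains the ground truth.
nominal = np.linspace(0.0, 1.0, 21)
empirical = np.array([(results["hdr_level"] <= a).mean() for a in nominal])

# Fit a monotone map between nominal and empirical coverage; applying it to the
# model's confidence levels gives recalibrated levels.
recalibrator = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
recalibrator.fit(nominal, empirical)
calibrated_levels = recalibrator.predict(nominal)
```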
2 changes: 1 addition & 1 deletion cfg/bitrap_np_eth.yml
@@ -27,7 +27,7 @@ MODEL:
  DEC_HIDDEN_SIZE: 256
DATASET:
  NAME: 'eth'
  ETH_CONFIG: '/home/jbhayet/opt/repositories/devel/bidireaction-trajectory-prediction/configs/ETH_UCY.json'
  ETH_CONFIG: '../bitrap/configs/ETH_UCY.json'
  ROOT: 'datasets/trj++'
  TRAJECTORY_PATH: 'datasets/trj++'
DATALOADER:
2 changes: 1 addition & 1 deletion cfg/bitrap_np_hotel.yml
@@ -27,7 +27,7 @@ MODEL:
  DEC_HIDDEN_SIZE: 256
DATASET:
  NAME: 'hotel'
  ETH_CONFIG: '/home/jbhayet/opt/repositories/devel/bidireaction-trajectory-prediction/configs/ETH_UCY.json'
  ETH_CONFIG: '../bitrap/configs/ETH_UCY.json'
  ROOT: 'datasets/trj++'
  TRAJECTORY_PATH: 'datasets/trj++'
DATALOADER:
2 changes: 1 addition & 1 deletion cfg/bitrap_np_univ.yml
@@ -27,7 +27,7 @@ MODEL:
  DEC_HIDDEN_SIZE: 256
DATASET:
  NAME: 'univ'
  ETH_CONFIG: '/home/jbhayet/opt/repositories/devel/bidireaction-trajectory-prediction/configs/ETH_UCY.json'
  ETH_CONFIG: '../bitrap/configs/ETH_UCY.json'
  ROOT: 'datasets/trj++'
  TRAJECTORY_PATH: 'datasets/trj++'
DATALOADER:
2 changes: 1 addition & 1 deletion cfg/bitrap_np_zara1.yml
@@ -27,7 +27,7 @@ MODEL:
  DEC_HIDDEN_SIZE: 256
DATASET:
  NAME: 'zara1'
  ETH_CONFIG: '/home/jbhayet/opt/repositories/devel/bidireaction-trajectory-prediction/configs/ETH_UCY.json'
  ETH_CONFIG: '../bitrap/configs/ETH_UCY.json'
  ROOT: 'datasets/trj++'
  TRAJECTORY_PATH: 'datasets/trj++'
DATALOADER:
2 changes: 1 addition & 1 deletion cfg/bitrap_np_zara2.yml
@@ -27,7 +27,7 @@ MODEL:
  DEC_HIDDEN_SIZE: 256
DATASET:
  NAME: 'zara2'
  ETH_CONFIG: '/home/jbhayet/opt/repositories/devel/bidireaction-trajectory-prediction/configs/ETH_UCY.json'
  ETH_CONFIG: '../bitrap/configs/ETH_UCY.json'
  ROOT: 'datasets/trj++'
  TRAJECTORY_PATH: 'datasets/trj++'
DATALOADER:
31 changes: 31 additions & 0 deletions cfg/deterministic_dropout_ethucy.yaml
@@ -0,0 +1,31 @@
dataset:
  pickle_dir: 'pickle/'
  validation_proportion: 0.1
  use_neighbors: False
  batch_size: 256
  person_max: 70   # Maximum number of persons in a frame
  obs_len: 8       # Observation length (trajlet size)
  pred_len: 12     # Prediction length
  delim: ","       # Delimiter
  dt: 0.4          # Delta time (time between two discrete time samples)
  max_overlap: 1   # Maximal overlap between trajlets
model:
  num_layers: 2
  dropout_rate: 0.2
  hidden_dim: 128
  embedding_dim: 128
  input_dim: 2
  output_dim: 2
train:
  initial_lr: 0.001
  epochs: 220
  no_retrain: False
  teacher_forcing: False
  save_dir: "training_checkpoints/"
  model_name: "deterministic"
misc:
  plot_losses: True
  plot_dir: "images/"
  show_test: False
  samples_test: 10
  model_samples: 10
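The commit history mentions a hybrid scheme in which a few parameters come from the command line while the rest are read from YAML files such as the one above. A minimal sketch of that idea, with an assumed `--config-file` flag and an illustrative `load_config` helper (not the repository's actual parsing code), is:

```python
# Illustrative sketch of a hybrid parameter scheme: a couple of command-line
# flags plus a YAML config like cfg/deterministic_dropout_ethucy.yaml above.
# Flag names and the helper are assumptions, not the repository's own code.
import argparse
import yaml

def load_config(path):
    """Read a YAML configuration file into a nested dictionary."""
    with open(path, "r") as f:
        return yaml.safe_load(f)

parser = argparse.ArgumentParser()
parser.add_argument("--config-file", default="cfg/deterministic_dropout_ethucy.yaml")
parser.add_argument("--no-retrain", action="store_true")
args = parser.parse_args()

config = load_config(args.config_file)
# A command-line flag can override the value stored in the file.
config["train"]["no_retrain"] = args.no_retrain or config["train"]["no_retrain"]

# Nested sections map directly onto dictionary keys.
batch_size = config["dataset"]["batch_size"]   # 256
obs_len    = config["dataset"]["obs_len"]      # 8 observed positions
hidden_dim = config["model"]["hidden_dim"]     # 128
save_dir   = config["train"]["save_dir"]       # "training_checkpoints/"
```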
35 changes: 35 additions & 0 deletions cfg/deterministic_ethucy.yaml
@@ -0,0 +1,35 @@
dataset:
  id_dataset: 0    # 0: ETHUCY
  id_test: 2
  pickle: False
  pickle_dir: 'pickle/'
  validation_proportion: 0.1
  use_neighbors: False
  batch_size: 512
  person_max: 70   # Maximum number of persons in a frame
  obs_len: 8       # Observation length (trajlet size)
  pred_len: 12     # Prediction length
  delim: ","       # Delimiter
  dt: 0.4          # Delta time (time between two discrete time samples)
  max_overlap: 1   # Maximal overlap between trajlets
model:
  num_layers: 2
  dropout_rate: 0.2
  hidden_dim: 128
  embedding_dim: 8
  input_dim: 2
  output_dim: 2
train:
  initial_lr: 0.001
  epochs: 800
  no_retrain: False
  teacher_forcing: False
  save_dir: "training_checkpoints/"
  model_name: "deterministic"
misc:
  plot_losses: True
  plot_dir: "images/"
  show_test: True
  samples_test: 10
  log_level: 20
  seed: 1234
31 changes: 31 additions & 0 deletions cfg/deterministic_gaussian_ethucy.yaml
@@ -0,0 +1,31 @@
dataset:
  pickle_dir: 'pickle/'
  validation_proportion: 0.1
  use_neighbors: True
  batch_size: 256
  person_max: 70   # Maximum number of persons in a frame
  obs_len: 8       # Observation length (trajlet size)
  pred_len: 12     # Prediction length
  delim: ","       # Delimiter
  dt: 0.4          # Delta time (time between two discrete time samples)
  max_overlap: 1   # Maximal overlap between trajlets
model:
  num_layers: 2
  dropout_rate: 0.2
  hidden_dim: 128
  embedding_dim: 128
  input_dim: 2
  output_dim: 2
train:
  initial_lr: 0.001
  epochs: 220
  no_retrain: False
  teacher_forcing: False
  save_dir: "training_checkpoints/"
  model_name: "deterministic_gaussian"
misc:
  plot_losses: False
  plot_dir: "images/"
  show_test: False
  samples_test: 10
  model_samples: 3
36 changes: 36 additions & 0 deletions cfg/deterministic_gaussian_sdd.yaml
@@ -0,0 +1,36 @@
dataset:
  id_dataset: 1    # 0: ETHUCY, 1: SDD
  id_test: 2
  pickle: True
  pickle_dir: 'pickle/'
  validation_proportion: 0.1
  use_neighbors: False
  batch_size: 256
  person_max: 70   # Maximum number of persons in a frame
  obs_len: 8       # Observation length (trajlet size)
  pred_len: 12     # Prediction length
  delim: ","       # Delimiter
  dt: 0.4          # Delta time (time between two discrete time samples)
  max_overlap: 1   # Maximal overlap between trajlets
model:
  num_layers: 2
  dropout_rate: 0.2
  hidden_dim: 128
  embedding_dim: 128
  input_dim: 2
  output_dim: 2
train:
  initial_lr: 0.001
  epochs: 1        # 220
  no_retrain: True
  teacher_forcing: False
  save_dir: "training_checkpoints/"
  model_name: "deterministic_gaussian"
misc:
  plot_losses: False
  plot_dir: "images/"
  show_test: True
  samples_test: 10
  log_level: 20
  seed: 1234
  model_samples: 3
37 changes: 37 additions & 0 deletions cfg/deterministic_variational_ethucy.yaml
@@ -0,0 +1,37 @@
dataset:
  id_dataset: 0    # 0: ETHUCY
  id_test: 2
  pickle: False
  pickle_dir: 'pickle/'
  validation_proportion: 0.1
  use_neighbors: False
  batch_size: 256
  person_max: 70   # Maximum number of persons in a frame
  obs_len: 8       # Observation length (trajlet size)
  pred_len: 12     # Prediction length
  delim: ","       # Delimiter
  dt: 0.4          # Delta time (time between two discrete time samples)
  max_overlap: 1   # Maximal overlap between trajlets
model:
  num_layers: 2
  dropout_rate: 0.2
  hidden_dim: 128
  embedding_dim: 128
  input_dim: 2
  output_dim: 2
train:
  initial_lr: 0.001
  epochs: 2        # 220
  num_mctrain: 10
  no_retrain: False
  teacher_forcing: False
  save_dir: "training_checkpoints/"
  model_name: "deterministic_variational"
misc:
  plot_losses: True
  plot_dir: "images/"
  show_test: True
  samples_test: 10
  log_level: 20
  seed: 1234
  model_samples: 10
37 changes: 37 additions & 0 deletions datasets/edinburgh/preprocess_edinburgh.py
@@ -0,0 +1,37 @@
import os, sys
import pandas as pd
import matplotlib.pyplot as plt

from opentraj.loaders.loader_edinburgh import load_edinburgh
sys.path.append('.')

months = {
#'Jan' : {'01','02','04','05','06','07','08','10','11','12','13','14','15','16','17','18','19'},
#'May' : {'29','30','31'},
#'Jun' : {'02','03','04','05','06','08','09','11','12','14','16','17','18','20','22','24','25','26','29','30'},
#'Jul' : {'01','02','04','11','12','13','14','17','18','19','20','21','22','23','25','26','27','28','29','30'},
#'Aug' : {'01','24','25','26','27','28','29','30'},
'Sep' : {'01','02','04','05','06','10','11','12','13','14','16','18','19','20','21','22','23','24','25','27','28','29','30'},
'Oct': {'02','03','04','05','06','07','08','09','10','11','12','13','14','15'},
'Dec' : {'06','11','14','15','16','18','19','20','21','22','23','24','29','30','31'}
}
# Fixme: set proper OpenTraj directory
edi_root = os.path.join('../../../OpenTraj', 'datasets', 'Edinburgh','annotations')
edi_data = {key: pd.DataFrame() for key in months.keys() }

for month, videos_per_month in months.items():
    scene_video_ids = [day+month for day in videos_per_month]
    traj_datasets_per_scene = []

    for scene_video_id in scene_video_ids:
        annot_file = os.path.join(edi_root, 'tracks.'+scene_video_id+'.txt')
        print(annot_file)
        itraj_dataset = load_edinburgh(annot_file, title="Edinburgh",
                                       use_kalman=False, scene_id=scene_video_id, sampling_rate=4)  # original framerate=9
        trajs = list(itraj_dataset.get_trajectories())
        traj_datasets_per_scene.append(pd.concat([itraj_dataset.data.iloc[:, : 4], itraj_dataset.data.iloc[:, 8:9]], axis=1))

    if len(traj_datasets_per_scene) > 0:
        df = pd.concat(traj_datasets_per_scene)
        df.to_pickle(month+'.pickle')

12 changes: 7 additions & 5 deletions datasets/sdd/preprocess_sdd.py
@@ -2,6 +2,8 @@
import pickle
import pandas as pd

sys.path.append('../../../OpenTraj')
sys.path.append('../../../OpenTraj/opentraj')
from opentraj.toolkit.loaders.loader_sdd import load_sdd
sys.path.append('.')

@@ -16,8 +18,7 @@
'quad' : 4
}
# Fixme: set proper OpenTraj directory
sdd_root = os.path.join('/<dirTo>/OpenTraj', 'datasets', 'SDD')

sdd_root = os.path.join('../../../OpenTraj', 'datasets', 'SDD')
sdd_data = {key: pd.DataFrame() for key in scenes.keys() }

for scene_name, total_videos_per_scene in scenes.items():
@@ -36,6 +37,7 @@
drop_lost_frames=False, use_kalman=False, label='Pedestrian')
traj_datasets_per_scene.append(pd.concat([itraj_dataset.data.iloc[:, : 4], itraj_dataset.data.iloc[:, 8:9]], axis=1))

pickle_out = open(scene_name+'.pickle',"wb")
pickle.dump(pd.concat(traj_datasets_per_scene), pickle_out, protocol=2)
pickle_out.close()

df = pd.concat(traj_datasets_per_scene)
df.to_pickle(scene_name+'.pickle')
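Both preprocessing scripts now write one pickle per scene (or per month) via `DataFrame.to_pickle`. As a quick sanity check, such a file can be read back with pandas; the sketch below assumes the `quad` scene pickle produced by `preprocess_sdd.py` is in the current directory.

```python
# Minimal sketch: read back a pickle written by the preprocessing scripts above.
# Assumes preprocess_sdd.py has produced quad.pickle in the current directory.
import pandas as pd

df = pd.read_pickle("quad.pickle")
print(df.shape)     # rows = annotated samples; columns = the five kept columns
print(df.columns)   # iloc[:, :4] plus iloc[:, 8:9] of the original annotations
print(df.head())
```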
