
More datasets #3

Merged · 18 commits · Jan 8, 2024
7 changes: 7 additions & 0 deletions cmd/conf/datamodule/mimiciii.yaml
@@ -0,0 +1,7 @@
+_target_: fdiff.dataloaders.datamodules.MIMICIIIDatamodule
+data_dir: ${hydra:runtime.cwd}/data
+random_seed: ${random_seed}
+fourier_transform: ${fourier_transform}
+standardize: ${standardize}
+batch_size: 64
+n_feats: 50
6 changes: 6 additions & 0 deletions cmd/conf/datamodule/nasdaq.yaml
@@ -0,0 +1,6 @@
+_target_: fdiff.dataloaders.datamodules.NASDAQDatamodule
+data_dir: ${hydra:runtime.cwd}/data
+random_seed: ${random_seed}
+fourier_transform: ${fourier_transform}
+standardize: ${standardize}
+batch_size: 64
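
Both new datamodule configs follow the standard Hydra `_target_` pattern: every key other than `_target_` is passed as a keyword argument to the class it names. A minimal sketch of the instantiation, assuming the constructors accept exactly the keys listed above; the `${...}` interpolations only resolve inside a Hydra run, so literals stand in for them here:

```python
# Sketch: how Hydra turns one of the datamodule configs above into an
# object. The constructor signature is inferred from the config keys and
# may not match fdiff's actual class exactly.
from hydra.utils import instantiate

cfg = {
    "_target_": "fdiff.dataloaders.datamodules.NASDAQDatamodule",
    "data_dir": "./data",  # stands in for ${hydra:runtime.cwd}/data
    "random_seed": 42,     # stands in for ${random_seed}
    "fourier_transform": True,
    "standardize": True,
    "batch_size": 64,
}

# instantiate() imports the _target_ class and calls it with the
# remaining keys as keyword arguments.
datamodule = instantiate(cfg)
```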
4 changes: 3 additions & 1 deletion cmd/conf/metrics/default.yaml
@@ -5,7 +5,9 @@ metrics:
 - _target_: fdiff.sampling.metrics.SlicedWasserstein
   _partial_: true
   random_seed: ${random_seed}
-  num_directions: 10000
+  num_directions: 1000
+  save_all_distances: true
 - _target_: fdiff.sampling.metrics.MarginalWasserstein
   _partial_: true
   random_seed: ${random_seed}
+  save_all_distances: true
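
The metrics are declared with `_partial_: true`, so Hydra does not construct them outright: `instantiate()` returns a `functools.partial` whose remaining arguments are bound later, when the sampler has the generated and reference data in hand. A sketch of the mechanism; the call-time argument name is an assumption, not fdiff's actual signature:

```python
# Sketch of the _partial_ mechanism used by the metrics config above.
from hydra.utils import instantiate

metric_factory = instantiate(
    {
        "_target_": "fdiff.sampling.metrics.SlicedWasserstein",
        "_partial_": True,
        "random_seed": 42,  # stands in for ${random_seed}
        "num_directions": 1000,
        "save_all_distances": True,
    }
)

# metric_factory is a functools.partial, not yet a metric; the sampler
# finishes the call later, e.g. (hypothetical argument name):
# metric = metric_factory(original_samples=train_data)
```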
2 changes: 1 addition & 1 deletion cmd/conf/sample.yaml
@@ -1,6 +1,6 @@
 model_path: ${hydra:runtime.cwd}/lightning_logs
 model_id: ub0lv98f
-num_samples: 1000
+num_samples: 10000
 num_diffusion_steps: 1000
 random_seed: 42

2 changes: 1 addition & 1 deletion cmd/conf/score_model/default.yaml
@@ -4,7 +4,7 @@ d_model: 72
 num_layers: 10
 n_head: 12
 lr_max: 1.0e-3
-fourier_noise_scaling: False
+fourier_noise_scaling: ${fourier_transform}
 likelihood_weighting: False
 
 defaults:
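
Tying `fourier_noise_scaling` to the top-level `fourier_transform` flag via interpolation means the two options can no longer be toggled inconsistently by hand. Outside a full Hydra run, the same resolution can be reproduced with OmegaConf alone:

```python
# Illustration of the ${fourier_transform} interpolation above.
from omegaconf import OmegaConf

cfg = OmegaConf.create(
    {
        "fourier_transform": True,
        "score_model": {"fourier_noise_scaling": "${fourier_transform}"},
    }
)

# The interpolation is resolved lazily, on access.
assert cfg.score_model.fourier_noise_scaling is True
```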
16 changes: 13 additions & 3 deletions cmd/conf/trainer/callbacks/default.yaml
Expand Up @@ -3,6 +3,16 @@
   monitor: val/loss
   filename: "epoch={epoch}-val_loss={val/loss:.2f}"
   auto_insert_metric_name: false
+- _target_: pytorch_lightning.callbacks.EarlyStopping
+  monitor: val/loss
+  patience: 20
+- _target_: fdiff.utils.callbacks.SamplingCallback
+  every_n_epochs: 10
+  sample_batch_size: ${datamodule.batch_size}
+  num_samples: 200
+  num_diffusion_steps: 1000
+  metrics:
+    - _target_: fdiff.sampling.metrics.SlicedWasserstein
+      _partial_: true
+      random_seed: ${random_seed}
+      num_directions: 200
+    - _target_: fdiff.sampling.metrics.MarginalWasserstein
+      _partial_: true
+      random_seed: ${random_seed}
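
Taken together, the config above and the `setup_datamodule()` call added in cmd/train.py imply roughly the following callback contract. This is a hypothetical sketch only; the real implementation lives in fdiff.utils.callbacks and will differ in its details:

```python
# Hypothetical sketch of the interface implied by the config above;
# not the actual fdiff.utils.callbacks.SamplingCallback.
import pytorch_lightning as pl


class SamplingCallbackSketch(pl.Callback):
    def __init__(self, every_n_epochs, sample_batch_size, num_samples,
                 num_diffusion_steps, metrics):
        self.every_n_epochs = every_n_epochs
        self.sample_batch_size = sample_batch_size
        self.num_samples = num_samples
        self.num_diffusion_steps = num_diffusion_steps
        # With _partial_: true, each entry arrives as a functools.partial.
        self.metrics = metrics
        self.datamodule = None

    def setup_datamodule(self, datamodule):
        # Hydra builds the callbacks before the datamodule exists, so the
        # trainer script injects it after construction (see cmd/train.py).
        self.datamodule = datamodule

    def on_train_epoch_end(self, trainer, pl_module):
        if (trainer.current_epoch + 1) % self.every_n_epochs != 0:
            return
        # Draw num_samples from the diffusion model in batches of
        # sample_batch_size, evaluate each metric against real data from
        # self.datamodule, and log the results.
        ...
```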
1 change: 1 addition & 0 deletions cmd/conf/trainer/default.yaml
@@ -5,5 +5,6 @@ gradient_clip_val: 1.0
 enable_progress_bar: true
 logger:
   _target_: pytorch_lightning.loggers.WandbLogger
+  log_model: true
 defaults:
   - callbacks: default
6 changes: 6 additions & 0 deletions cmd/train.py
@@ -11,6 +11,7 @@

 from fdiff.dataloaders.datamodules import Datamodule
 from fdiff.models.score_models import ScoreModule
+from fdiff.utils.callbacks import SamplingCallback
 from fdiff.utils.extraction import dict_to_str, get_training_params
 from fdiff.utils.wandb import maybe_initialize_wandb

@@ -50,6 +51,11 @@ def __init__(self, cfg: DictConfig) -> None:
         training_params = get_training_params(self.datamodule, self.trainer)
         self.score_model = self.score_model(**training_params)
 
+        # Possibly set up the datamodule in the sampling callback
+        for callback in self.trainer.callbacks:  # type: ignore
+            if isinstance(callback, SamplingCallback):
+                callback.setup_datamodule(datamodule=self.datamodule)
+
     def train(self) -> None:
         assert not (
             self.score_model.scale_noise and not self.datamodule.fourier_transform
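
The injection loop is needed because Hydra instantiates the trainer, and with it the callbacks, straight from the config, before the datamodule exists; once both objects are built, the datamodule is handed to any `SamplingCallback` so it can compare generated samples against real ones. The assertion in `train()` then guards the companion config change: noise scaling in the score model (`scale_noise`) only makes sense when the data is actually mapped to the frequency domain, which the `${fourier_transform}` interpolation in score_model/default.yaml now enforces on the config side as well.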