Example workflow #45

Merged · 1 commit · Sep 25, 2023
docs/usage.md — 84 additions, 0 deletions (new file)
# Using VisCy

This page briefly describes the workflow of training,
evaluating, and deploying a virtual staining model with VisCy.

## Preprocessing

VisCy ships a small preprocessing script that computes per-channel intensity statistics
(mean, standard deviation, median, and inter-quartile range),
which are used to normalize images during training and inference.
Run it with:

```sh
python -m viscy.cli.preprocess -c config.yaml
```

An example of the config file is shown below:

```yaml
zarr_dir: /path/to/ome.zarr
preprocessing:
  normalize:
    # indices of the channels to compute statistics on
    channel_ids: [0, 1, 2]
    # statistics are computed in local blocks to avoid high RAM usage
    block_size: 32
    # number of CPU cores to parallelize over
    num_workers: 16
```
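For intuition, the four statistics listed above can be sketched with NumPy. This is an illustrative stand-in, not VisCy's implementation: the real script computes them block-wise and in parallel workers to bound memory use, which is omitted here.

```python
import numpy as np

def intensity_stats(img: np.ndarray) -> dict:
    """Compute the four normalization statistics that VisCy's
    preprocessing records for a channel. Illustrative sketch only:
    the real script works block-wise and in parallel."""
    q1, med, q3 = np.percentile(img, [25, 50, 75])
    return {
        "mean": float(img.mean()),
        "std": float(img.std()),
        "median": float(med),
        "iqr": float(q3 - q1),
    }

# Toy single-channel stack (Z, Y, X) drawn from a standard normal.
stats = intensity_stats(np.random.default_rng(0).normal(size=(8, 128, 128)))
```

For a standard normal image, the mean and median are near 0, the standard deviation near 1, and the inter-quartile range near 1.35.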

> **Note:** This script is subject to change.
> It may be moved into the main CLI in the future.

## CLI

Training, testing, inference, and deployment can be performed with the `viscy` CLI.

See `viscy --help` for a list of available commands and their help messages.

### Training

Train a model with the `fit` subcommand of the main CLI:

```sh
viscy fit -c config.yaml
```

An example of the config file can be found [here](../examples/configs/fit_example.yml).

By default, TensorBoard logs and checkpoints are saved
in the `default_root_dir/lightning_logs/` directory.
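For orientation, a trimmed `fit` config might look like the sketch below. The keys are taken from the example configs in this PR; all values are placeholders to be replaced with your own.

```yaml
trainer:
  max_epochs: 100
  default_root_dir: /path/to/output # lightning_logs/ is created under this
model:
  architecture: null # choose a VisCy architecture
  model_config: {}
  loss_function: null
  lr: 0.001
data:
  data_path: /path/to/ome.zarr
  source_channel: null
  target_channel: null
  batch_size: 16
  num_workers: 8
ckpt_path: null # set to resume from a checkpoint
```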

### Testing

By default, testing evaluates the model with regression metrics.
To also compute segmentation metrics,
supply ground-truth masks and a Cellpose segmentation model.

```sh
viscy test -c config.yaml
```

An example of the config file can be found [here](../examples/configs/test_example.yml).
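The segmentation-metric options live in the `model` and `data` sections of the config. The fields below are the ones from `test_example.yml`, with placeholder values; the inline comments are our reading of the field names, not documented semantics.

```yaml
model:
  test_cellpose_model_path: /path/to/cellpose_model # placeholder path
  test_cellpose_diameter: 30.0 # placeholder diameter, in pixels
  test_evaluate_cellpose: false # whether to also score the Cellpose masks
data:
  ground_truth_masks: /path/to/masks # placeholder path
```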

### Inference

Run inference on a dataset and save the results to OME-Zarr:

```sh
viscy predict -c config.yaml
```

An example of the config file can be found [here](../examples/configs/predict_example.yml).

### Deployment

Export a trained model to ONNX format for deployment:

```sh
viscy export -c config.yaml
```

An example of the config file can be found [here](../examples/configs/export_example.yml).
examples/configs/export_example.yml — 21 additions, 0 deletions (new file)
# lightning.pytorch==2.0.4
seed_everything: true
trainer:
  accelerator: auto
  strategy: auto
  devices: auto
  num_nodes: 1
  precision: 32-true
model:
  architecture: null
  model_config: {}
  loss_function: null
  lr: 0.001
  schedule: Constant
  log_num_samples: 8
  test_cellpose_model_path: null
  test_cellpose_diameter: null
  test_evaluate_cellpose: false
export_path: null
ckpt_path: null
format: onnx
examples/configs/predict_example.yml — 1 addition, 1 deletion

@@ -65,5 +65,5 @@ predict:
   augment: true
   caching: false
   normalize_source: false
-  return_predictions: null
+  return_predictions: false
 ckpt_path: null
examples/configs/test_example.yml — 73 additions, 0 deletions (new file)
# lightning.pytorch==2.0.4
seed_everything: true
trainer:
  accelerator: auto
  strategy: auto
  devices: auto
  num_nodes: 1
  precision: 32-true
  callbacks: null
  fast_dev_run: false
  max_epochs: null
  min_epochs: null
  max_steps: -1
  min_steps: null
  max_time: null
  limit_train_batches: null
  limit_val_batches: null
  limit_test_batches: null
  limit_predict_batches: null
  overfit_batches: 0.0
  val_check_interval: null
  check_val_every_n_epoch: 1
  num_sanity_val_steps: null
  log_every_n_steps: null
  enable_checkpointing: null
  enable_progress_bar: null
  enable_model_summary: null
  accumulate_grad_batches: 1
  gradient_clip_val: null
  gradient_clip_algorithm: null
  deterministic: null
  benchmark: null
  inference_mode: true
  use_distributed_sampler: true
  profiler: null
  detect_anomaly: false
  barebones: false
  plugins: null
  sync_batchnorm: false
  reload_dataloaders_every_n_epochs: 0
  default_root_dir: null
model:
  architecture: null
  model_config: {}
  loss_function: null
  lr: 0.001
  schedule: Constant
  log_num_samples: 8
  test_cellpose_model_path: null
  test_cellpose_diameter: null
  test_evaluate_cellpose: false
data:
  data_path: null
  source_channel: null
  target_channel: null
  z_window_size: null
  split_ratio: 0.8
  batch_size: 16
  num_workers: 8
  yx_patch_size:
    - 256
    - 256
  augment: true
  caching: false
  normalize_source: false
  ground_truth_masks: null
  train_z_scale_range:
    - 0.0
    - 0.0
  train_noise_std: 0.0
  train_patches_per_stack: 1
ckpt_path: null
verbose: true