
DLMBL 2023 exercise #36

Merged: 32 commits merged into 21d-upscale-decoder from dlmbl2023 on Aug 23, 2023

Conversation

@ziw-liu (Collaborator) commented Aug 18, 2023

No description provided.

Review threads on pyproject.toml (3, outdated, resolved)
@mattersoflight marked this pull request as ready for review August 20, 2023 14:09
@mattersoflight (Member) commented:

@ziw-liu I've tested the whole exercise on the dlmbl1 node. In addition to the 3 solutions prototyped above, can you suggest how to log all validation samples? At this point, the subsample of the validation set comes out empty in multiple runs.

(screenshot: logged validation sample panel rendering empty)

With solutions in hand, I'll write the exercise prompt. Earlier exercise prompts are commented out so that you can just run the script by clicking Run Above in the last cell.

@ziw-liu (Collaborator, Author) commented Aug 20, 2023

> can you suggest how to log all validation samples? At this point, the subsample of the validation set comes out empty in multiple runs.

I think I know what's happening here. The logged validation samples are the first sample of each of the first `log_num_samples` batches, so if there is only 1 validation step in total, only 1 sample gets logged. I can modify the behavior to sample only from the first batch of the epoch, with the caveat that `log_num_samples` will then have to be smaller than the batch size.

Edit: see f7229a8
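
For context, a toy sketch of the two sampling behaviors (hypothetical numbers, not VisCy's actual code):

```python
import torch

log_num_samples = 3
# Toy validation loader: 2 batches of 4 samples each.
batches = [torch.arange(i * 4, i * 4 + 4) for i in range(2)]

# Old behavior: first sample of each of the first `log_num_samples`
# batches. With only 2 validation batches, only 2 samples are logged.
before = [b[0] for i, b in enumerate(batches) if i < log_num_samples]

# New behavior (f7229a8): first `log_num_samples` samples of the
# first batch. Requires log_num_samples <= batch size.
after = list(batches[0][:log_num_samples])

print(len(before), len(after))  # 2 3
```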


# %% [markdown]
"""
We now look at some metrics of performance. Loss is a differentiable metric. But, several non-differentiable metrics are useful to assess the performance of the model. We typically evaluate the model performance on a held out test data. We will use the following metrics to evaluate the accuracy of regression of the model:
@ziw-liu (Collaborator, Author) commented on this hunk:

Minor language edit: these 3 metrics are actually differentiable, because the developers of torchmetrics have made sure that all the operators used in the implementation are differentiable.
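
A quick way to verify this claim, assuming SSIM is one of the metrics in question (`structural_similarity_index_measure` is torchmetrics' functional SSIM):

```python
import torch
from torchmetrics.functional import structural_similarity_index_measure

pred = torch.rand(2, 1, 32, 32, requires_grad=True)
target = torch.rand(2, 1, 32, 32)

# The functional metric is composed of differentiable torch ops,
# so gradients flow back to the prediction tensor.
ssim = structural_similarity_index_measure(pred, target)
ssim.backward()
print(pred.grad is not None)  # True
```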

@ziw-liu changed the base branch from main to 21d-upscale-decoder August 21, 2023 20:45
mattersoflight and others added 3 commits August 21, 2023 23:06
Introducing capitalization to highlight vision and single-cell aspects of the pipeline.
@mattersoflight (Member) commented Aug 23, 2023

@ziw-liu the conflicts in engine.py seem to be related to how the loss is logged in the two branches. Please resolve them and merge this branch into #37. I can then start a smaller PR with updates from the course.

<<<<<<< dlmbl2023
        self.log("loss/validate", loss, batch_size=self.batch_size, sync_dist=True)
        if batch_idx == 0:
            self.validation_step_outputs.extend(
=======
        self.log("loss/validate", loss, sync_dist=True)
        if batch_idx < self.log_num_samples:
            self.validation_step_outputs.append(
>>>>>>> 21d-upscale-decoder
                self._detach_sample((source, target, pred))
            )
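
One plausible resolution, sketched here only as an illustration (the actual merged code may differ): keep `sync_dist=True` without the explicit `batch_size` (an earlier commit notes that Lightning reduces the metrics automatically) and keep the per-batch sampling from 21d-upscale-decoder.

```python
# Hypothetical resolved hunk in the validation step of engine.py:
self.log("loss/validate", loss, sync_dist=True)
if batch_idx < self.log_num_samples:
    self.validation_step_outputs.append(
        self._detach_sample((source, target, pred))
    )
```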

@ziw-liu merged commit 76c3b31 into 21d-upscale-decoder Aug 23, 2023
3 checks passed
@ziw-liu deleted the dlmbl2023 branch August 23, 2023 16:51
@mattersoflight restored the dlmbl2023 branch August 23, 2023 18:29
mattersoflight added a commit that referenced this pull request Aug 30, 2023
* pixelshuffle decoder

* Allow sampling multiple patches from the same stack (#35)

* sample multiple patches from one stack

* do not use type annotations from future
it breaks jsonargparse

* fix channel stacking for non-training samples

* remove batch size from model
the metrics will be automatically reduced by lightning

* add flop counting script

* 3D output head

* add datamodule target dims mode

* remove unused argument and configure drop path

* move architecture argument to model level

* DLMBL 2023 exercise (#36)

* updated intro and paths

* updated figures, tested data loader

* setup.sh fetches correct dataset

* finalized the exercise outline

* semi-final exercise

* parts 1 and 2 tested, part 3 outline ready

* clearer variables, train with larger patch size

* fix typo

* clarify variable names

* trying to log graph

* match example size with training

* reuse globals

* fix reference

* log sample images from the first batch

* wider model

* low LR solution

* fix path

* seed everything

* fix test dataset without masks

* metrics solution
this needs a new test dataset

* fetch test data, compute metrics

* bypass cellpose import error due to numpy version conflicts

* final exercise

* moved files

* fixed formatting - ready for review

* viscy -> VisCy (#34) (#39)

Introducing capitalization to highlight vision and single-cell aspects of the pipeline.

* trying to log graph

* log graph

* black

---------

Co-authored-by: Shalin Mehta <shalin.mehta@gmail.com>
Co-authored-by: Shalin Mehta <shalin.mehta@czbiohub.org>

* fix channel dimension size for example input

#40

* fix argument linking

* 3D prediction writer
sliding windows are blended with uniform average

* update network diagram

* upgrade flop counting

* shallow 3D (2.5D) SSIM metric

* ms-ssim

* mixed loss

* fix arguments

* fix inheritance

* fix weight checking

* squeeze metric

* aggregate metrics

* optional clamp to stabilize gradient of MS-SSIM

* fix calling

* increase epsilon

* disable autocast for loss

* restore relu for clamping

* plot all architectures with network_diagram script

---------
Co-authored-by: Shalin Mehta <shalin.mehta@czbiohub.org>
@mattersoflight deleted the dlmbl2023 branch September 16, 2023 00:52
ziw-liu added a commit that referenced this pull request Nov 1, 2023
* (commit series repeated verbatim from the Aug 30 commit message above, from "pixelshuffle decoder" through "disable autocast for loss")

* shuffle validation data for logging

this hurts cache hit rate, but can avoid logging neighboring windows

* simplify decoder structure

* pop-head

* fix head expansion

* init conv weights

* update diagnostic scripts

* fix center slice metrics for 3D output (#51)

* Configure the number of image samples logged at each epoch and batch (#49)

* log sample size at epoch and batch levels

* update example configs

* do not shuffle validation dataset

* fix upsampling weight initialization

* fix merge

* fix merge error

* fix formatting

---------

Co-authored-by: Shalin Mehta <shalin.mehta@gmail.com>
Co-authored-by: Shalin Mehta <shalin.mehta@czbiohub.org>