
Unable to load two channels as inputs to do fluorescence to phase image translation #40

Closed
edyoshikun opened this issue Aug 24, 2023 · 1 comment · Fixed by #37
Labels
bug Something isn't working

Comments

@edyoshikun
Contributor

edyoshikun commented Aug 24, 2023

I was running the MBL DL2023 example notebook and ran into this issue at the end, when trying to predict the phase image from two fluorescence channels.

tune_data = HCSDataModule(
    data_path,
    source_channel= ["Nuclei","Membrane"],
    target_channel="Phase",
    z_window_size=1,
    split_ratio=0.8,
    batch_size=BATCH_SIZE,
    num_workers=10,
    architecture="2D",
    yx_patch_size=YX_PATCH_SIZE,
    augment=True,
)
tune_data.setup("fit")

tune_config = {
    "architecture": "2D",
    "num_filters": [24, 48, 96, 192, 384],
    "in_channels": 2,
    "out_channels":1,
    "residual": True,
    "dropout": 0.1,  # dropout randomly turns off weights to avoid overfitting of the model to data.
    "task": "reg",  # reg = regression task.
}
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[30], line 58
     42 n_epochs = 50 # Set this to 50 or the number of epochs you want to train for.
     44 trainer = VSTrainer(
     45     accelerator="gpu",
     46     devices=[GPU_ID],
   (...)
     55         ),
     56 )  
---> 58 trainer.fit(fluor2phase_model, datamodule=fluor2phase_data)
     61 # Visualize the graph of fluor2phase model as image.
     62 model_graph_fluor2phase = torchview.draw_graph(
     63     fluor2phase_model,
     64     fluor2phase_data.train_dataset[0]["source"],
     65     depth=2,  # adjust depth to zoom in.
     66     device="cpu",
     67 )

File ~/conda/envs/04_image_translation/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py:532, in Trainer.fit(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
    530 self.strategy._lightning_module = model
    531 _verify_strategy_supports_compile(model, self.strategy)
--> 532 call._call_and_handle_interrupt(
    533     self, self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
    534 )
...
    458                     _pair(0), self.dilation, self.groups)
--> 459 return F.conv2d(input, weight, bias, self.stride,
    460                 self.padding, self.dilation, self.groups)

RuntimeError: Given groups=1, weight of size [24, 2, 3, 3], expected input[1, 1, 512, 512] to have 2 channels, but got 1 channels instead
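For reference, the same error can be reproduced outside VisCy with a plain PyTorch convolution; this is only an illustration of the channel mismatch, not the notebook code:

import torch
import torch.nn as nn

# First convolution of a model configured with in_channels=2: weight shape [24, 2, 3, 3].
conv = nn.Conv2d(in_channels=2, out_channels=24, kernel_size=3, padding=1)

conv(torch.rand(1, 2, 512, 512))  # works: input has 2 channels
conv(torch.rand(1, 1, 512, 512))  # RuntimeError: expected input to have 2 channels, but got 1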
@ziw-liu
Collaborator

ziw-liu commented Aug 24, 2023

Oh this is due to a bug in the example input array. You can work around this by passing enable_model_summary=False to VSTrainer.__init__() before a fix is available.
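A minimal sketch of the workaround, assuming VSTrainer forwards keyword arguments to lightning.pytorch.Trainer (GPU_ID and the other settings are the ones from the notebook cell above; max_epochs is shown only for completeness):

trainer = VSTrainer(
    accelerator="gpu",
    devices=[GPU_ID],
    max_epochs=n_epochs,
    enable_model_summary=False,  # skip the summary pass that feeds example_input_array through the model
)
trainer.fit(fluor2phase_model, datamodule=fluor2phase_data)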

The channel dimension of example_input_array is hardcoded to 1:

VisCy/viscy/light/engine.py

Lines 164 to 169 in 76c3b31

self.example_input_array = torch.rand(
    1,
    1,
    example_depth,
    *example_input_yx_shape,
)
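A possible shape of the fix (the in_channels name here is hypothetical; the actual change is in #37) would be to size that dimension from the number of source channels instead of hardcoding 1:

self.example_input_array = torch.rand(
    1,
    in_channels,  # number of source channels, e.g. 2 for ["Nuclei", "Membrane"]
    example_depth,
    *example_input_yx_shape,
)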

@ziw-liu ziw-liu added the bug Something isn't working label Aug 24, 2023
@ziw-liu ziw-liu linked a pull request Aug 24, 2023 that will close this issue
mattersoflight added a commit that referenced this issue Aug 30, 2023
* pixelshuffle decoder

* Allow sampling multiple patches from the same stack (#35)

* sample multiple patches from one stack

* do not use type annotations from future
it breaks jsonargparse

* fix channel stacking for non-training samples

* remove batch size from model
the metrics will be automatically reduced by lightning

* add flop counting script

* 3d output head

* add datamodule target dims mode

* remove unused argument and configure drop path

* move architecture argument to model level

* DLMBL 2023 exercise (#36)

* updated intro and paths

* updated figures, tested data loader

* setup.sh fetches correct dataset

* finalized the exercise outline

* semi-final exercise

* parts 1 and 2 tested, part 3 outline ready

* clearer variables, train with larger patch size

* fix typo

* clarify variable names

* trying to log graph

* match example size with training

* reuse globals

* fix reference

* log sample images from the first batch

* wider model

* low LR solution

* fix path

* seed everything

* fix test dataset without masks

* metrics solution
this needs a new test dataset

* fetch test data, compute metrics

* bypass cellpose import error due to numpy version conflicts

* final exercise

* moved files

* fixed formatting - ready for review

* viscy -> VisCy (#34) (#39)

Introducing capitalization to highlight vision and single-cell aspects of the pipeline.

* trying to log graph

* log graph

* black

---------

Co-authored-by: Shalin Mehta <shalin.mehta@gmail.com>
Co-authored-by: Shalin Mehta <shalin.mehta@czbiohub.org>

* fix channel dimension size for example input

#40

* fix argument linking

* 3D prediction writer
sliding windows are blended with uniform average

* update network diagram

* upgrade flop counting

* shallow 3D (2.5D) SSIM metric

* ms-ssim

* mixed loss

* fix arguments

* fix inheritance

* fix weight checking

* squeeze metric

* aggregate metrics

* optional clamp to stabilize gradient of MS-SSIM

* fix calling

* increase epsilon

* disable autocast for loss

* restore relu for clamping

* plot all architectures with network_diagram script

---------
Co-authored-by: Shalin Mehta <shalin.mehta@czbiohub.org>
ziw-liu added a commit that referenced this issue Nov 1, 2023
* pixelshuffle decoder

* Allow sampling multiple patches from the same stack (#35)

* sample multiple patches from one stack

* do not use type annotations from future
it breaks jsonargparse

* fix channel stacking for non-training samples

* remove batch size from model
the metrics will be automatically reduced by lightning

* add flop counting script

* 3d output head

* add datamodule target dims mode

* remove unused argument and configure drop path

* move architecture argument to model level

* DLMBL 2023 exercise (#36)

* updated intro and paths

* updated figures, tested data loader

* setup.sh fetches correct dataset

* finalized the exercise outline

* semi-final exercise

* parts 1 and 2 tested, part 3 outline ready

* clearer variables, train with larger patch size

* fix typo

* clarify variable names

* trying to log graph

* match example size with training

* reuse globals

* fix reference

* log sample images from the first batch

* wider model

* low LR solution

* fix path

* seed everything

* fix test dataset without masks

* metrics solution
this needs a new test dataset

* fetch test data, compute metrics

* bypass cellpose import error due to numpy version conflicts

* final exercise

* moved files

* fixed formatting - ready for review

* viscy -> VisCy (#34) (#39)

Introducing capitalization to highlight vision and single-cell aspects of the pipeline.

* trying to log graph

* log graph

* black

---------

Co-authored-by: Shalin Mehta <shalin.mehta@gmail.com>
Co-authored-by: Shalin Mehta <shalin.mehta@czbiohub.org>

* fix channel dimension size for example input

#40

* fix argument linking

* 3D prediction writer
sliding windows are blended with uniform average

* update network diagram

* upgrade flop counting

* shallow 3D (2.5D) SSIM metric

* ms-ssim

* mixed loss

* fix arguments

* fix inheritance

* fix weight checking

* squeeze metric

* aggregate metrics

* optional clamp to stabilize gradient of MS-SSIM

* fix calling

* increase epsilon

* disable autocast for loss

* shuffle validation data for logging

this hurts cache hit rate, but can avoid logging neighboring windows

* simplify decoder structure

* pop-head

* fix head expansion

* init conv weights

* update diagnostic scripts

* fix center slice metrics for 3D output (#51)

* Configure the number of  image samples logged at each epoch and batch (#49)

* log sample size at epoch and batch levels

* update example configs

* do not shuffle validation dataset

* fix upsampling weight initialization

* fix merge

* fix merge error

* fix formatting

---------

Co-authored-by: Shalin Mehta <shalin.mehta@gmail.com>
Co-authored-by: Shalin Mehta <shalin.mehta@czbiohub.org>