Releases: okunator/cellseg_models.pytorch


0.1.25 — 2024-07-05

Features

  • Image encoders are now imported exclusively from timm models.
  • Add enc_out_indices to the model classes, enabling selection of which encoder layers to use as the encoder outputs.

Removed

  • Removed the original-implementation SAM and DINOv2 image encoders from this repo; both are now available through timm models.
  • Removed the cellseg_models_pytorch.training module, which was left unused after the example notebooks were updated.

Examples

  • Updated example notebooks.
  • Added new example notebooks utilizing the UNI foundation model from MahmoodLab.
  • Added new example notebooks utilizing the Prov-GigaPath foundation model from Microsoft Research.
  • NOTE: These examples load the weights from the Hugging Face model hub. Permission to use the model weights is required to run them.
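Loading such gated backbones typically goes through timm's hf-hub naming scheme. A sketch of that pattern; the repo ids below match the published model cards, but access must first be granted on the hub, so the actual model load is left commented out:

```python
# timm resolves "hf-hub:<repo_id>" model names against the Hugging Face hub.
def hub_model_name(repo_id: str) -> str:
    return f"hf-hub:{repo_id}"

uni = hub_model_name("MahmoodLab/UNI")
gigapath = hub_model_name("prov-gigapath/prov-gigapath")

# import timm
# encoder = timm.create_model(uni, pretrained=True)  # requires hub access + auth token
```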

Chore

  • Update the timm requirement to version 1.0.0 or newer.

Breaking changes

  • Drop support for Python 3.9.
  • The self.encoder in each model has changed, so trained weights from previous versions of the package will not work with this version.


0.1.24 — 2023-10-13

Style

  • Update the Inferer.infer() method API to accept arguments related to saving the model outputs.

Features

  • Add CPP-Net (https://arxiv.org/abs/2102.06867).

  • Add an option for mixed-precision inference.

  • Add an option to interpolate model outputs to a given size in all of the segmentation models.

  • Add a DINOv2 backbone.

  • Add support for the .geojson, .feather, and .parquet file formats when running inference.
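Mixed-precision inference of this kind is usually built on torch's autocast context; a generic sketch under that assumption (the stand-in model and the library's actual flag name are illustrative):

```python
import torch

model = torch.nn.Conv2d(3, 8, 3, padding=1)  # stand-in for a segmentation model
x = torch.randn(1, 3, 32, 32)

# Autocast runs eligible ops in a lower-precision dtype during the forward pass.
with torch.inference_mode(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = model(x)

print(out.shape)  # torch.Size([1, 8, 32, 32])
```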

Docs

  • Add a CPP-Net training example with the Pannuke dataset.

Fixes

  • Fix resize transformation bug.


0.1.23 — 2023-08-28

Features

  • Add a stem-skip module (a long skip connection for the input-resolution feature map).

  • Add a UnetTR transformer-encoder wrapper class.

  • Add a new Encoder wrapper for timm- and UnetTR-based encoders.

  • Add stem-skip support and upsampling-block options to all current model architectures.

  • Add a masking option to all the criterions.

  • Add MAELoss

  • Add BCELoss

  • Add base class for transformer based backbones

  • Add the SAM-VitDet image encoder with support for loading pre-trained SAM weights.

  • Add CellVIT-SAM model.
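The masking option for criterions can be sketched generically: compute a per-pixel loss without reduction, zero out the masked-away pixels, and average over the rest. The function name and signature here are illustrative, not the library's exact API:

```python
import torch
import torch.nn.functional as F

def masked_bce(logits, targets, mask):
    """Per-pixel BCE where mask == 0 pixels are excluded from the mean."""
    loss = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    loss = loss * mask                          # zero out ignored pixels
    return loss.sum() / mask.sum().clamp(min=1) # mean over the kept pixels only

logits = torch.randn(1, 1, 8, 8)
targets = torch.randint(0, 2, (1, 1, 8, 8)).float()
mask = torch.ones(1, 1, 8, 8)
mask[..., :4] = 0  # ignore the first four columns
loss = masked_bce(logits, targets, mask)
```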

Docs

  • Add notebook example on training Hover-Net with lightning from scratch.

  • Add notebook example on training StarDist with lightning from scratch.

  • Add notebook example on training CellPose with accelerate from scratch.

  • Add notebook example on training OmniPose with accelerate from scratch.

  • Add notebook example on finetuning CellVIT-SAM with accelerate.

Fixes

  • Fix TimmEncoder to store feature info.

  • Fix the Up block to support both transposed-conv and bilinear upsampling, and fix data-flow issues.

  • Fix the StardistUnet class to output all the decoder features.

  • Fix the Decoder, DecoderStage, and long-skip modules to work with upscale factors instead of output dimensions.


0.1.22 — 2023-07-10

Features

  • Add MPS (Apple Silicon) support for inference.
  • Add cell class probabilities to the saved geojson files.
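MPS support presumably comes down to the standard torch device-selection pattern; a small sketch of how an inference device is typically picked:

```python
import torch

# Pick the best available inference device; "mps" covers Apple-Silicon GPUs
# and falls back to CUDA or CPU elsewhere.
if torch.backends.mps.is_available():
    device = "mps"
elif torch.cuda.is_available():
    device = "cuda"
else:
    device = "cpu"

print(device)
```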


0.1.21 — 2023-06-12

Features


0.1.20 — 2023-01-13

Fixes

  • Enable writing folder and HDF5 datasets that contain only images.

  • Enable writing datasets without patching.

  • Add the long-missing HDF5-reading utility function to FileHandler.

Features

  • Add HDF5 input-file reading to the Inferer classes.

  • Add an option to write the Pannuke dataset to an HDF5 database in PannukeDataModule and LizardDataModule.

  • Add a generic model-builder function get_model to models.__init__.py.

  • Rewrite the segmentation benchmarker; it can now take in HDF5 datasets.
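A generic builder like get_model usually follows a small registry pattern: map a name to a model class and construct it with forwarded kwargs. A self-contained sketch with stand-in classes (not the library's actual registry or model names):

```python
# Stand-in model classes for illustration only.
class HoverNetStub:
    def __init__(self, **kwargs):
        self.kwargs = kwargs

class StardistStub:
    def __init__(self, **kwargs):
        self.kwargs = kwargs

_MODELS = {"hovernet": HoverNetStub, "stardist": StardistStub}

def get_model(name: str, **kwargs):
    """Look up a model class by name and construct it with the given kwargs."""
    try:
        cls = _MODELS[name.lower()]
    except KeyError:
        raise ValueError(f"Unknown model '{name}'. Options: {sorted(_MODELS)}")
    return cls(**kwargs)

model = get_model("hovernet", n_classes=5)
```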


0.1.19 — 2023-01-04

Features

  • Add the PyTorch Lightning built-in auto_lr_finder option to SegmentationExperiment.


0.1.18 — 2023-01-03

Features

  • Add the Multi-Scale Convolutional Attention (MSCA) module (SegNeXt).
  • Add TokenMixer & MetaFormer modules.


0.1.17 — 2022-12-29

Features

  • Add transformer modules.
  • Add exact, sliced, and memory-efficient (xformers) self-attention computations.
  • Add transformer modules to the Decoder modules.
  • Add common transformer MLP activation functions: StarReLU, GEGLU, approximate GELU.
  • Add the Linformer self-attention mechanism.
  • Add support for model initialization from a YAML file in MultiTaskUnet.
  • Add a new cross-attention long-skip module; works with long_skip='cross-attn'.
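Of the attention variants above, the "exact" computation is the standard scaled dot-product attention. A minimal single-head sketch in torch (real modules apply learned Q/K/V projections; here Q = K = V = x for brevity):

```python
import torch

def exact_self_attention(x: torch.Tensor) -> torch.Tensor:
    """Single-head exact self-attention: softmax(Q K^T / sqrt(d)) V."""
    d = x.shape[-1]
    scores = x @ x.transpose(-2, -1) / d**0.5  # (B, N, N) attention scores
    weights = torch.softmax(scores, dim=-1)    # each row sums to 1
    return weights @ x                         # (B, N, d) attended output

x = torch.randn(2, 16, 32)  # batch of 2, 16 tokens, embedding dim 32
out = exact_self_attention(x)
```

The sliced and memory-efficient variants compute the same result while bounding the (B, N, N) score matrix's memory footprint, which is what makes xformers attractive for large token counts.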

Refactor

  • Added more verbose error messages for the abstract wrapper modules in modules.base_modules.
  • Added more verbose error catching for xformers.ops.memory_efficient_attention.


0.1.16 — 2022-12-14

Dependencies

  • Bump the outdated numpy and scipy version pins.