
πŸš€ add subset to slice datasets #2038

Closed
wants to merge 7 commits into from

Conversation

MightyStud

πŸ“ Description

This PR adds the ability to use only a fraction (subset) of a dataset. As far as I know, you can currently only use the entire dataset for training/testing; with this PR you can work with a subset of the data, which is quite handy when experimenting.

✨ Changes

Add a `subset` argument to the `Folder` class.

Select what type of change your PR is:

  • 🐞 Bug fix (non-breaking change which fixes an issue)
  • πŸ”¨ Refactor (non-breaking change which refactors the code base)
  • [X ] πŸš€ New feature (non-breaking change which adds functionality)
  • πŸ’₯ Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • πŸ“š Documentation update
  • πŸ”’ Security update

βœ… Checklist

Before you submit your pull request, please make sure you have completed the following steps:

  • πŸ“‹ I have summarized my changes in the CHANGELOG and followed the guidelines for my type of change (skip for minor changes, documentation updates, and test enhancements).
  • [X ] πŸ“š I have made the necessary updates to the documentation (if applicable).
  • [ X] πŸ§ͺ I have written tests that support my changes and prove that my fix is effective or my feature works (if applicable).

For more information about code review checklists, see the Code Review Checklist.

@alexriedel1
Contributor

alexriedel1 commented May 2, 2024

Hey, this functionality is already built into Lightning:
https://lightning.ai/docs/pytorch/stable/common/trainer.html#trainer-class-api

limit_train_batches (Union[int, float, None]) – How much of training dataset to check (float = fraction, int = num_batches). Default: 1.0.

limit_val_batches (Union[int, float, None]) – How much of validation dataset to check (float = fraction, int = num_batches). Default: 1.0.

limit_test_batches (Union[int, float, None]) – How much of test dataset to check (float = fraction, int = num_batches). Default: 1.0.

You can pass Trainer arguments to the `Engine` class.
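The float-vs-int behaviour of those `limit_*_batches` arguments can be sketched with a small, hypothetical helper (`resolve_limit` below is illustrative only and is not part of Lightning's API):

```python
from typing import Union


def resolve_limit(total_batches: int, limit: Union[int, float, None]) -> int:
    """Return how many batches to run, mirroring the documented semantics:
    float = fraction of the dataset, int = absolute batch count, None = all.
    Simplified sketch; not Lightning's actual implementation.
    """
    if limit is None:
        return total_batches
    if isinstance(limit, bool):  # bool is a subclass of int; reject it explicitly
        raise TypeError("limit must be an int, float, or None")
    if isinstance(limit, int):
        return min(limit, total_batches)
    if not 0.0 <= limit <= 1.0:
        raise ValueError("a float limit must be a fraction in [0.0, 1.0]")
    return int(total_batches * limit)


# With 100 validation batches:
print(resolve_limit(100, 0.25))  # fraction -> 25 batches
print(resolve_limit(100, 10))    # absolute -> 10 batches
print(resolve_limit(100, None))  # no limit -> 100 batches
print(resolve_limit(100, 0))     # disabled -> 0 batches
```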

@samet-akcay
Contributor

Hi @MightyStud, thanks for creating this PR. As @alexriedel1 mentioned above, this is already possible with Lightning. For details, see:
https://lightning.ai/docs/pytorch/stable/common/trainer.html#limit-train-batches
https://lightning.ai/docs/pytorch/stable/common/trainer.html#limit-test-batches
https://lightning.ai/docs/pytorch/stable/common/trainer.html#limit-val-batches

Here is how you could use this:

```python
from anomalib.data import MVTec
from anomalib.engine import Engine
from anomalib.models import Patchcore

# Create the datamodule and model, whichever data and model you want to use.
datamodule = MVTec()
model = Patchcore()

# Default used by the Trainer.
engine = Engine(limit_val_batches=1.0)

# Run through only 25% of the validation set each epoch.
engine = Engine(limit_val_batches=0.25)

# Run for only 10 batches.
engine = Engine(limit_val_batches=10)

# Disable validation.
engine = Engine(limit_val_batches=0)
```

@samet-akcay
Contributor

I think we can close this PR, as the feature is already available. We would welcome more of your contributions in the future. Thanks a lot for the effort!
