🔒 Validate path before processing (#1668)
* Merge feature/lightning-version-upgrade to feature/custom-trainer (#1297)

Upgrade to Lightning 2.0.5 (#1221)

* Adapt code and configs to PL 2.0.5

* Pre-commit checks.

* Fix a function call.

* Fix function calls.

* pytorch_lightning -> lightning.pytorch

* Add lightning to requirements

---------

Co-authored-by: Weilin Xu <mzweilin@gmail.com>
Co-authored-by: Samet <samet.akcay@intel.com>

* Partially restore tests (#1298)

* Upgrade to Lightning 2.0.5 (#1221)

* Adapt code and configs to PL 2.0.5

* Pre-commit checks.

* Fix a function call.

* Fix function calls.

* pytorch_lightning -> lightning.pytorch

* Add lightning to requirements

---------

Co-authored-by: Samet <samet.akcay@intel.com>

* partially restore tests

* Address PR comments

---------

Co-authored-by: Weilin Xu <mzweilin@gmail.com>
Co-authored-by: Samet <samet.akcay@intel.com>
Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>

* [Custom Trainer] Remove future annotations (#1299)

* remove __future__

* Update changelog

---------

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>

* [Custom Trainer] Refactor postprocessing (#1302)

* remove __future__

* Update changelog

* 🚚 Trainer -> AnomalibTrainer

* add post-processor

* Refactor callback

* Remove handler

* Address PR comments

---------

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>

* [Custom Trainer] refactor normalization callbacks (#1310)

* remove __future__

* Update changelog

* 🚚 Trainer -> AnomalibTrainer

* add post-processor

* Refactor callback

* Refactor normalization callback

* Match convention

* Refactor imports

* Address PR comments

* Fix path

* Refactor callbacks

* Fix module path

---------

* [Custom Trainer] Refactor thresholding (#1311)

* remove __future__

* Update changelog

* 🚚 Trainer -> AnomalibTrainer

* add post-processor

* Refactor callback

* Refactor normalization callback

* Refactor thresholding

* Match convention

* Refactor imports

* Address PR comments

* Fix path

* Refactor callbacks

* Fix module path

* Address PR comments

* Apply suggestions from code review

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

---------

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* [Custom Trainer] Refactor Metrics (#1312)

* remove __future__

* Update changelog

* 🚚 Trainer -> AnomalibTrainer

* add post-processor

* Refactor callback

* Refactor normalization callback

* Refactor thresholding

* Refactor metrics configuration

* Match convention

* Refactor imports

* Address PR comments

* Fix path

* Refactor callbacks

* Fix module path

* Address PR comments

* Address PR comments

---------

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>

* [Custom Trainer] Refactor visualization callback (#1313)

* remove __future__

* Update changelog

* 🚚 Trainer -> AnomalibTrainer

* add post-processor

* Refactor callback

* Refactor normalization callback

* Refactor thresholding

* Refactor metrics configuration

* Move visualizer callbacks into trainer

* Match convention

* Refactor imports

* Address PR comments

* Fix path

* Refactor callbacks

* Fix module path

* Address PR comments

* Remove comment

---------

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>

* [Custom Trainer] Initial engine design (#1348)

* Initial engine design

* Address PR comments

* Circular import + trainer->engine

* Update src/anomalib/engine/engine.py

* revert import in __init__

---------

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>
Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* [Custom Trainer] Partially fix tests (#1359)

* Partially fix test

* Address PR comments

---------

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>

* [Custom Trainer] Add CLI support (#1381)

* Support trainer methods

* support yaml serialization

* add hpo command

* Add benchmark + train entrypoint

* Add export arguments

* Partially address PR comments

* Add export subcommands + refactor

* Address PR comments

---------

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>

* [Custom Trainer] Partially restore tests (#1391)

* Fix tests

* Patch get experiment logger

* Sort imports

* Add stfpm config

---------

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>

* Add new ruff rules (#1390)

* Add rules to pyproject.toml file

* Only include padim and stfpm in tests

* Fix notebook tests

* Fix notebook tests

* Code quality/enable rules (#1394)

* pyflakes

* pycodestyle

* pep8-naming (`N`)

* Enable Ruff rules - Part III (#1397)

* pyflakes

* pycodestyle

* pep8-naming (`N`)

* pyupgrade (`UP`)

* flake8-bandit (`S`)

* Enabled UP, ANN, S, BLE, FBT, B

* Fix typo

* Revert F1AdaptiveThreshold parent classes

* Fix some of the tests

* ignore boolean-positional-value-in-call (FBT003)

* Addressed pr comments

* Remove duplicated lines

* Enable Ruff Rules - Part 4 (#1402)

* pyflakes

* pycodestyle

* pep8-naming (`N`)

* pyupgrade (`UP`)

* flake8-bandit (`S`)

* Enabled UP, ANN, S, BLE, FBT, B

* Fix typo

* Revert F1AdaptiveThreshold parent classes

* Fix some of the tests

* ignore boolean-positional-value-in-call (FBT003)

* Addressed pr comments

* Remove duplicated lines

* flake8-builtins (`A`) and flake8-commas (`COM`)

* flake8-comprehensions (`C4`)

* flake8-datetimez (`DTZ`), flake8-debugger (`T10`), flake8-errmsg (`EM`)

* [Custom Trainer] Switch to manual optimization for ganomaly (#1404)

* implement manual optimizers for ganomaly

* cleanup

* Enable Ruff Rules - Part 5 (#1403)

* pyflakes

* pycodestyle

* pep8-naming (`N`)

* pyupgrade (`UP`)

* flake8-bandit (`S`)

* Enabled UP, ANN, S, BLE, FBT, B

* Fix typo

* Revert F1AdaptiveThreshold parent classes

* Fix some of the tests

* ignore boolean-positional-value-in-call (FBT003)

* Addressed pr comments

* Remove duplicated lines

* flake8-builtins (`A`) and flake8-commas (`COM`)

* flake8-comprehensions (`C4`)

* flake8-datetimez (`DTZ`), flake8-debugger (`T10`), flake8-errmsg (`EM`)

* flake8-pie (`PIE`)

* flake8-raise (`RSE`)

* flake8-return (`RET`)

* flake8-self (`SLF`)

* flake8-simplify (`SIM`)

* flake8-unused-arguments (`ARG`)

* Update src/anomalib/models/components/flow/all_in_one_block.py

Co-authored-by: Ashwin Vaidya <ashwinnitinvaidya@gmail.com>

* addressed pr comments

---------

Co-authored-by: Ashwin Vaidya <ashwinnitinvaidya@gmail.com>

* [Custom Trainer] Add import checks (#1393)

* Add checks

* Add checks for wandb

* move exception handling to method

* fix pre-commit issue

---------

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>

* [Custom Trainer] Remove max epochs > 1 (#1400)

Remove max epochs>1 from default param list

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>

* [Custom Trainer] Add default values (#1395)

* Add checks

* Add default values to datasets + padim model

* update default values

* Remove merge artifact

* Change gt path

---------

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>

* Enable Ruff Rules - Part 6 (#1407)

* pyflakes

* pycodestyle

* pep8-naming (`N`)

* pyupgrade (`UP`)

* flake8-bandit (`S`)

* Enabled UP, ANN, S, BLE, FBT, B

* Fix typo

* Revert F1AdaptiveThreshold parent classes

* Fix some of the tests

* ignore boolean-positional-value-in-call (FBT003)

* Addressed pr comments

* Remove duplicated lines

* flake8-builtins (`A`) and flake8-commas (`COM`)

* flake8-comprehensions (`C4`)

* flake8-datetimez (`DTZ`), flake8-debugger (`T10`), flake8-errmsg (`EM`)

* flake8-pie (`PIE`)

* flake8-raise (`RSE`)

* flake8-return (`RET`)

* flake8-self (`SLF`)

* flake8-simplify (`SIM`)

* flake8-unused-arguments (`ARG`)

* flake8-use-pathlib (`PTH`)

* eradicate (`ERA`)

* pylint (`PL`)

* tryceratops (`TRY`), flynt (`FLY`), Perflint (`PERF`)

* NumPy-specific rules (`NPY`)

* Ruff-specific rules (`RUF`)

* Remove pylint ignore comments

* Fix tests

* Fix tests

* Enable Ruff Rules - Part 7 (#1408)

* pyflakes

* pycodestyle

* pep8-naming (`N`)

* pyupgrade (`UP`)

* flake8-bandit (`S`)

* Enabled UP, ANN, S, BLE, FBT, B

* Fix typo

* Revert F1AdaptiveThreshold parent classes

* Fix some of the tests

* ignore boolean-positional-value-in-call (FBT003)

* Addressed pr comments

* Remove duplicated lines

* flake8-builtins (`A`) and flake8-commas (`COM`)

* flake8-comprehensions (`C4`)

* flake8-datetimez (`DTZ`), flake8-debugger (`T10`), flake8-errmsg (`EM`)

* flake8-pie (`PIE`)

* flake8-raise (`RSE`)

* flake8-return (`RET`)

* flake8-self (`SLF`)

* flake8-simplify (`SIM`)

* flake8-unused-arguments (`ARG`)

* flake8-use-pathlib (`PTH`)

* eradicate (`ERA`)

* pylint (`PL`)

* tryceratops (`TRY`), flynt (`FLY`), Perflint (`PERF`)

* NumPy-specific rules (`NPY`)

* Ruff-specific rules (`RUF`)

* Remove pylint ignore comments

* Fix tests

* Fix tests

* mccabe (`C90`)

* pygrep-hooks (`PGH`)

* flake8-todos (`TD`)

* flake8-fixme (`FIX`)

* pandas-vet (`PD`)

* Fix random_split tests

* Fix pre-commit

* Fixed the logger test

* Fix the typos in todos

* Enable Ruff Rules - Part 8 (#1412)

* pyflakes

* pycodestyle

* pep8-naming (`N`)

* pyupgrade (`UP`)

* flake8-bandit (`S`)

* Enabled UP, ANN, S, BLE, FBT, B

* Fix typo

* Revert F1AdaptiveThreshold parent classes

* Fix some of the tests

* ignore boolean-positional-value-in-call (FBT003)

* Addressed pr comments

* Remove duplicated lines

* flake8-builtins (`A`) and flake8-commas (`COM`)

* flake8-comprehensions (`C4`)

* flake8-datetimez (`DTZ`), flake8-debugger (`T10`), flake8-errmsg (`EM`)

* flake8-pie (`PIE`)

* flake8-raise (`RSE`)

* flake8-return (`RET`)

* flake8-self (`SLF`)

* flake8-simplify (`SIM`)

* flake8-unused-arguments (`ARG`)

* flake8-use-pathlib (`PTH`)

* eradicate (`ERA`)

* pylint (`PL`)

* tryceratops (`TRY`), flynt (`FLY`), Perflint (`PERF`)

* NumPy-specific rules (`NPY`)

* Ruff-specific rules (`RUF`)

* Remove pylint ignore comments

* Fix tests

* Fix tests

* mccabe (`C90`)

* pygrep-hooks (`PGH`)

* flake8-todos (`TD`)

* flake8-fixme (`FIX`)

* pandas-vet (`PD`)

* ignore ANN101 ANN102 and ANN103

* Fix random_split tests

* Fix pre-commit

* ANN partly done

* Remove kwargs: Any

* flake8-annotations (`ANN`)

* Enabled tests

* Revert padim configs

* Enable Ruff Rules - Part 9 (#1419)

* pyflakes

* pycodestyle

* pep8-naming (`N`)

* pyupgrade (`UP`)

* flake8-bandit (`S`)

* Enabled UP, ANN, S, BLE, FBT, B

* Fix typo

* Revert F1AdaptiveThreshold parent classes

* Fix some of the tests

* ignore boolean-positional-value-in-call (FBT003)

* Addressed pr comments

* Remove duplicated lines

* flake8-builtins (`A`) and flake8-commas (`COM`)

* flake8-comprehensions (`C4`)

* flake8-datetimez (`DTZ`), flake8-debugger (`T10`), flake8-errmsg (`EM`)

* flake8-pie (`PIE`)

* flake8-raise (`RSE`)

* flake8-return (`RET`)

* flake8-self (`SLF`)

* flake8-simplify (`SIM`)

* flake8-unused-arguments (`ARG`)

* flake8-use-pathlib (`PTH`)

* eradicate (`ERA`)

* pylint (`PL`)

* tryceratops (`TRY`), flynt (`FLY`), Perflint (`PERF`)

* NumPy-specific rules (`NPY`)

* Ruff-specific rules (`RUF`)

* Remove pylint ignore comments

* Fix tests

* Fix tests

* mccabe (`C90`)

* pygrep-hooks (`PGH`)

* flake8-todos (`TD`)

* flake8-fixme (`FIX`)

* pandas-vet (`PD`)

* ignore ANN101 ANN102 and ANN103

* Fix random_split tests

* Fix pre-commit

* ANN partly done

* Remove kwargs: Any

* flake8-annotations (`ANN`)

* Enabled tests

* Revert padim configs

* Add auto fixes

* Fix docstrings

* Update src/anomalib/utils/metrics/binning.py

Co-authored-by: Ashwin Vaidya <ashwinnitinvaidya@gmail.com>

* Update src/anomalib/models/dfkde/lightning_model.py

Co-authored-by: Ashwin Vaidya <ashwinnitinvaidya@gmail.com>

* Update src/anomalib/models/rkde/feature_extractor.py

Co-authored-by: Ashwin Vaidya <ashwinnitinvaidya@gmail.com>

* Fixed pre-commit

---------

Co-authored-by: Ashwin Vaidya <ashwinnitinvaidya@gmail.com>

* Merge main into feature/custom_trainer (#1420)

* Address tiler issues (#1411)

* fix tiler

* deprecate random tile locations

* restore random tiling in tile method

* check tiling section in config

* disable tiling for ganomaly

* pad -> padding

* Refactor Reverse Distillation to match official code (#1389)

* Non-mandatory early stopping

* Added conv4 and bn4 to OCBE

* Loss as in the official code (flattened arrays)

* Added comment on how to use torchvision model as an encoder to reproduce results in the paper

* Remove early stop from config, change default anomaly_map_mode to add

* pre-commit fix

* Updated results

* Update src/anomalib/models/reverse_distillation/README.md

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Update src/anomalib/models/reverse_distillation/README.md

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Update src/anomalib/models/reverse_distillation/README.md

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Remove early_stopping

* Update src/anomalib/models/reverse_distillation/lightning_model.py

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Easier to read code

---------

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

---------

Co-authored-by: Dick Ameln <dick.ameln@intel.com>
Co-authored-by: abc-125 <63813435+abc-125@users.noreply.github.com>
Co-authored-by: Samet Akcay <samet.akcay@intel.com>
Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>

* Enable Ruff Rules - Part 10 (#1423)

* pyflakes

* pycodestyle

* pep8-naming (`N`)

* pyupgrade (`UP`)

* flake8-bandit (`S`)

* Enabled UP, ANN, S, BLE, FBT, B

* Fix typo

* Revert F1AdaptiveThreshold parent classes

* Fix some of the tests

* ignore boolean-positional-value-in-call (FBT003)

* Addressed pr comments

* Remove duplicated lines

* flake8-builtins (`A`) and flake8-commas (`COM`)

* flake8-comprehensions (`C4`)

* flake8-datetimez (`DTZ`), flake8-debugger (`T10`), flake8-errmsg (`EM`)

* flake8-pie (`PIE`)

* flake8-raise (`RSE`)

* flake8-return (`RET`)

* flake8-self (`SLF`)

* flake8-simplify (`SIM`)

* flake8-unused-arguments (`ARG`)

* flake8-use-pathlib (`PTH`)

* eradicate (`ERA`)

* pylint (`PL`)

* tryceratops (`TRY`), flynt (`FLY`), Perflint (`PERF`)

* NumPy-specific rules (`NPY`)

* Ruff-specific rules (`RUF`)

* Remove pylint ignore comments

* Fix tests

* Fix tests

* mccabe (`C90`)

* pygrep-hooks (`PGH`)

* flake8-todos (`TD`)

* flake8-fixme (`FIX`)

* pandas-vet (`PD`)

* ignore ANN101 ANN102 and ANN103

* Fix random_split tests

* Fix pre-commit

* ANN partly done

* Remove kwargs: Any

* flake8-annotations (`ANN`)

* Enabled tests

* Revert padim configs

* Add auto fixes

* Fix docstrings

* Enabled "PLW2901",  # `for` loop variable `row` overwritten by assignment target

* Enable Ruff Rules - Part 11 (#1425)

* pyflakes

* pycodestyle

* pep8-naming (`N`)

* pyupgrade (`UP`)

* flake8-bandit (`S`)

* Enabled UP, ANN, S, BLE, FBT, B

* Fix typo

* Revert F1AdaptiveThreshold parent classes

* Fix some of the tests

* ignore boolean-positional-value-in-call (FBT003)

* Addressed pr comments

* Remove duplicated lines

* flake8-builtins (`A`) and flake8-commas (`COM`)

* flake8-comprehensions (`C4`)

* flake8-datetimez (`DTZ`), flake8-debugger (`T10`), flake8-errmsg (`EM`)

* flake8-pie (`PIE`)

* flake8-raise (`RSE`)

* flake8-return (`RET`)

* flake8-self (`SLF`)

* flake8-simplify (`SIM`)

* flake8-unused-arguments (`ARG`)

* flake8-use-pathlib (`PTH`)

* eradicate (`ERA`)

* pylint (`PL`)

* tryceratops (`TRY`), flynt (`FLY`), Perflint (`PERF`)

* NumPy-specific rules (`NPY`)

* Ruff-specific rules (`RUF`)

* Remove pylint ignore comments

* Fix tests

* Fix tests

* mccabe (`C90`)

* pygrep-hooks (`PGH`)

* flake8-todos (`TD`)

* flake8-fixme (`FIX`)

* pandas-vet (`PD`)

* ignore ANN101 ANN102 and ANN103

* Fix random_split tests

* Fix pre-commit

* ANN partly done

* Remove kwargs: Any

* flake8-annotations (`ANN`)

* Enabled tests

* Revert padim configs

* Add auto fixes

* Fix docstrings

* Enabled "PLW2901",  # `for` loop variable `row` overwritten by assignment target

* Add google style to pydocstyle

* Remove dashed line from Returns

* Remove dashed line from Args

* Remove dashed line from Example and Raises

* Removed left-over dashed lines

* Cleanup pyproject.toml file

* [Custom Trainer] Add a verbose help output structure to the CLI (#1396)

* Add Verbosity Help-Formatter class

* Add Help-Formatter unit-tests

* Fix some strings

* Fix pre-commit ruff stuff

* Fix help_formatter's pre-commit

* Add new configs (#1418)

* Add new configs

* Add draem config

* Fix docstring

* Update src/anomalib/models/cflow/lightning_model.py

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Linting

* Remove --- from docstrings

* Remove any from return type

* Fix linting issues from feature/custom_trainer

---------

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>
Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Add CLI Tests (#1426)

* Add new configs

* Add draem config

* Stash cli tests

* Stash cli tests with minor changes

* Stash changes

* Fix reverse distillation

* Fix EfficientAD

* Match ai_vad params to config params

* Fix ucsd and ai_vad configs

* Uncomment validation step

* Refactor directory structure

* Rename method

* use ucsd for aivad

* fix ucsd path + modify model checkpoint callback for tests

* fix dfkde config

* Restructure tests + fix normalization test

* Revert file

* add v1 to tox

* Skip testing ai_vad

* Increase train and test size

* use mvtec dataset

---------

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>

* Restructure test directories. (#1438)

* Restructured the test directories

* Fixed typo

* Fix imports

* Fix config path in export tests

* Replace black with ruff formatter (#1439)

* [Custom Trainer] Refactor export (#1440)

* Refactor export

* Fix entrypoint tests

* remove match statements

---------

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>

* Add ruff checks to tests (#1455)

* Fix tests + add ruff check to tests

* Limit gradio version

* Path->str

---------

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>

* Add dummy image dataset generator helper class (#1444)

* Created tests/v1 directory

* update license year

* Add beantech generator

* Refactor mvtec-ad and beantech

* Add visa dataset generator

* Add DummyImageGenerator

* Use DummyImageGenerator in dummy mvtec dataset generator

* Use DummyImageGenerator in dummy mvtec dataset generator

* Use DummyImageGenerator in dummy mvtec3d dataset generator

* Restructured the test directories

* Fixed typo

* Fix imports

* Fix config path in export tests

* Add kolektor dataset

* add ucsdped generator

* Fix tests

* Revert conftest.py

* add method for generating avenue dataset

* Revert conftest.py

* add method for generating shanghaitech dataset

* swap order of typing for better parsing of normalization type

* cleanup

* Dynamically create DataFormat enum

* Add examples to docstring

* address pr comments and rename dataset.py to data.py

* Fix pre-commit issues

---------

Co-authored-by: Dick Ameln <dick.ameln@intel.com>

* Remove configurable parameters (#1453)

* Refactor export

* Fix entrypoint tests

* remove match statements

* Fix tests + remove get_configurable_params + fix hpo,benchmarking

* Path->str

* Update src/anomalib/models/__init__.py

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Update src/anomalib/utils/sweep/config.py

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Update tools/inference/README.md

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Update tools/inference/README.md

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* str->Path

* str->Path

* Fix model checkpoint path

* typing + path + test order

* Update src/anomalib/utils/sweep/config.py

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Mark tests as xfail

* Fix notebook

---------

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>
Co-authored-by: Dick Ameln <amelndjd@gmail.com>
Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* [v1] - Tests: Add datamodule tests. (#1456)

* Created tests/v1 directory

* update license year

* Add beantech generator

* Refactor mvtec-ad and beantech

* Add visa dataset generator

* Add DummyImageGenerator

* Use DummyImageGenerator in dummy mvtec dataset generator

* Use DummyImageGenerator in dummy mvtec dataset generator

* Use DummyImageGenerator in dummy mvtec3d dataset generator

* Restructured the test directories

* Fixed typo

* Fix imports

* Fix config path in export tests

* Add kolektor dataset

* add ucsdped generator

* Fix tests

* Revert conftest.py

* add method for generating avenue dataset

* Revert conftest.py

* add method for generating shanghaitech dataset

* swap order of typing for better parsing of normalization type

* cleanup

* create the data tests files

* Dynamically create DataFormat enum

* Add examples to docstring

* add conftest.py

* address pr comments and rename dataset.py to data.py

* add some changes

* Add test_datasets to the integration tests

* Change order

* Added datamodule tests

* Format ruff

* Address pre-commit issues

* Fix video tests

* Add the rest of the datamodule tests

---------

Co-authored-by: Dick Ameln <dick.ameln@intel.com>

* [Custom Trainer] Add train subcommand (#1465)

Add train subcommand

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>

* Refactor `Tensor` annotation to `torch.Tensor` (#1477)

* Modify Tensor to torch.Tensor

* list[Tensor] to list[torch.Tensor]

* TODO: Fix formatting issues

* torch_all to torch.all

* Remove redundant import

* Apply ruff format

* Fix the tests

* Refactor tests Part 1 (#1473)

* Refactor CLI tests

* Select a random model

* Fix test for all the models

---------

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>

* Add API tests Part 2 (#1474)

* Partially migrate unit tests Part 3 (#1480)

* Refactor CLI tests

* Add api tests

* Select a random model

* Fix test for all the models

* Fix API tests

* refactor pre-post processing + get model ckpt from fixture

* Add tests for custom transforms

* Address PR comments

* Refactor ckpt_path fixture

* Update conftest.py

* Update __init__.py

* Update __init__.py

---------

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>
Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Reorg Part I: Data (#1483)

* Update Anomalib data with the new structure

* Update dataset imports and remove unused imports

* ruff format in engine.py

* Move TaskType to utils/types

* Revert TaskType import from anomalib

* Revert tox.ini

* Refactor imports and fix import errors

* Fix import errors and update type annotations

* Fix imports in jupyter notebooks

* Refactor import statements in test_visualizer.py

* Reorg Part II: Remove `pre_processor` and `post_processor` subpackages (#1485)

* Update Anomalib data with the new structure

* Update dataset imports and remove unused imports

* ruff format in engine.py

* Move TaskType to utils/types

* Revert TaskType import from anomalib

* Revert tox.ini

* Refactor imports and fix import errors

* Fix import errors and update type annotations

* Fix imports in jupyter notebooks

* Remove pre-processor subpackage from anomalib

* Remove unused imports and update import paths

* Refactor import statements in test_visualizer.py

* Remove unused code and deprecate Denormalize and ToNumpy classes

* Remove empty code cell

* Add a description why input image is read from path

* Fix bug in superimpose

* Migrate deploy tests Part 4 (#1481)

* Refactor CLI tests

* Add api tests

* Select a random model

* Fix test for all the models

* Fix API tests

* refactor pre-post processing + get model ckpt from fixture

* Add tests for custom transforms

* Migrate deploy tests

* trained_padim_path->ckpt_path

* Split normalization line

* Fix normalization class path

---------

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>

* Migrate model components unit tests Part 5 (#1482)

* Refactor CLI tests

* Add api tests

* Select a random model

* Fix test for all the models

* Fix API tests

* refactor pre-post processing + get model ckpt from fixture

* Add tests for custom transforms

* Migrate deploy tests

* Migrate model component tests

* Migrate visualizer callback + cli tests

* Fix lightning entrypoint test

* trained_padim_path->ckpt_path

* Add todo

* Fix TaskType import

* Apply suggestions from code review

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

---------

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>
Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Reorg Part III: Move the main anomalib components from `anomalib.utils` to `anomalib` (#1487)

* Update Anomalib data with the new structure

* Update dataset imports and remove unused imports

* ruff format in engine.py

* Move TaskType to utils/types

* Revert TaskType import from anomalib

* Revert tox.ini

* Refactor imports and fix import errors

* Fix import errors and update type annotations

* Fix imports in jupyter notebooks

* Remove pre-processor subpackage from anomalib

* Remove unused imports and update import paths

* Refactor import statements in test_visualizer.py

* Move callbacks from utils under anomalib

* Fix import statements in benchmarking and CLI modules

* Move CLI under anomalib

* Add benchmark to pipelines

* Move hpo to pipelines

* Move sweep to pipelines

* Move loggers to anomalib

* Move metrics to anomalib

* Move callbacks from utils to test/utils

* Move config to anomalib.utils

* Fix the metric imports

* Remove unused code and deprecate Denormalize and ToNumpy classes

* Remove empty code cell

* Add a description why input image is read from path

* Fix bug in superimpose

* Move anomalib.utils.config.config to anomalib.utils.config

* Fix config import

* Merge feature/custom_trainer

* Migrate tools test Part 6 (#1488)

* Refactor CLI tests

* Add api tests

* Select a random model

* Fix test for all the models

* Fix API tests

* refactor pre-post processing + get model ckpt from fixture

* Add tests for custom transforms

* Migrate deploy tests

* Migrate model component tests

* Migrate visualizer callback + cli tests

* Fix lightning entrypoint test

* trained_padim_path->ckpt_path

* Migrate metrics tests

* Migrate tools + remove nightly

---------

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>

* 🚜 Refactor padim and patchcore models (#1300)

* Fix metadata path

* Ignore hidden directories in folder dataset

* Add check for mask_dir for segmentation tasks in Folder dataset

* add is_fitted

* self.model._is_fitted to self.model.is_fitted

* Format anomaly module

* Remove on_save_checkpoint

* Refactor padim

* Add __repr__ to anomaly score threshold

* Revert patchcore config

* Add memory bank modules for anomaly detection

* Add explanation to MemoryBankTorchModule docstring.

* Update memory bank module imports and fix typo in Padim model

* Rename Dynamic Buffer Module to Memory Bank Module in docstring.

* Revert "Add explanation to MemoryBankTorchModule docstring."

This reverts commit 44c991450f7c78eee2b0ceb0e7c855c3893a0801.

* Refactor memory bank modules based on Dick's suggestion

* Fix model attribute assignment in lightning models

* Add MemoryBankMixin to anomaly detection models

* revert padim and patchcore

* Reorder inheritance in anomaly detection models

---------

Co-authored-by: Ashwin Vaidya <ashwin.vaidya@intel.com>

* Migrate unit tests Part 7 (#1490)

* Refactor CLI tests

* Add api tests

* Select a random model

* Fix test for all the models

* Fix API tests

* refactor pre-post processing + get model ckpt from fixture

* Add tests for custom transforms

* Migrate deploy tests

* Migrate model component tests

* Migrate visualizer callback + cli tests

* Fix lightning entrypoint test

* trained_padim_path->ckpt_path

* Migrate metrics tests

* Migrate tools + remove nightly

* Increase coverage

* Migrate remaining tests

* Fix imports

* Fix import

* Update tests/unit/deploy/test_inferencer.py

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Update test_get_logger.py

---------

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>
Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Fix circular import in cdf normalizer (#1494)

* fix circular import in cdf normalizer

* fix pre-commit

---------

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* 🛠️ Refactor: Split models to image and video (#1493)

* Move AiVad to anomalib.video

* Move cfa to models.image

* Add Ganomaly model to image models

* Add Fastflow model to image models

* Add EfficientAd anomaly detection model to image models

* Move dfm, dfkde and draem

* Add CS-Flow model implementation for image-based defect detection

* Add cflow to image models

* Add padim to image models

* Add patchcore to image models

* Add Reverse distillation to image models.

* Add rkde to image models

* Add stfpm to image models.

* Add image models for handling image datasets in anomalib

* Update copyright year in model files

* Update import statement for Fastflow model

* Update image and video model documentation

* Update src/anomalib/models/video/README.md

Co-authored-by: Dick Ameln <amelndjd@gmail.com>

* Update clip_length_in_frames parameter in AvenueDataset and Avenue classes

* Remove folder references

* Fix a typo in readme

* Fix shape of image in batch

* Update clip_length_in_frames parameter

---------

Co-authored-by: Dick Ameln <amelndjd@gmail.com>

* v1: Update the readme file (#1503)

* Fix metadata path

* Ignore hidden directories in folder dataset

* Add check for mask_dir for segmentation tasks in Folder dataset

* Replace docs

* Add each inferencing scripts as a details section

* update readme

* Add training

* modify getting started

* Make getting started a subsection

* tmp

* Added inference section

* Refactor Lightning inference code

* Update entry point in setup.py

* Add training api example to readme

* Update training command in README

* Fix bug in login functionality

* Update HPO and Logging Documentation

* refactor getting started section

* Update HPO and benchmarking commands

* Update the image

* Update README.md

Co-authored-by: Dick Ameln <amelndjd@gmail.com>

* Update README.md

Co-authored-by: Dick Ameln <amelndjd@gmail.com>

* Remove getting started section

* Update README.md

Co-authored-by: Dick Ameln <amelndjd@gmail.com>

* Update README.md

Co-authored-by: Dick Ameln <amelndjd@gmail.com>

* Address the reviewer comments

---------

Co-authored-by: Dick Ameln <amelndjd@gmail.com>

* Extend Engine Tests (#1509)

* Add validate + predict

* Add train

* Add export tests + refactor export cli command

* Fix tests

* Fix jupyternotebook

---------

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>

* v1: Create the new documentation via `sphinx-design` and `myst` (#1518)

* removed docs

* Created the new docs

* Finished get started

* Remove jupyter notebook from docs

* Add mvtec to data

* Add reference guide

* Add section for each image datasets

* Add base data modules and datasets

* v1 - 📝 Update and Enhance Docstrings (#1532)

* removed docs

* Created the new docs

* Finished get started

* Remove jupyter notebook from docs

* Add mvtec to data

* Add reference guide

* initial commit

* Remove subprocess from btech

* Remove unused import and commented out code

* Add section for each image datasets

* Add base data modules and datasets

* Refactor dataset classes and add docstrings

* Add initial draft for backbone docs

* Update the docstring in folder_3d

* Update mvtec 3d docstring

* Added feature extractor tutorial

* Add a readme file to the docs

* Update folder data docstring

* Update kolektor docstrings

* Update kolektor docstring

* Update mvtec docstring

* Update visa docstrings

* Update avenue docstring

* Update cfa docstring

* Update cflow docstring

* Update csflow docstring

* Update csflow docstring

* Update dfkde docstrings

* Update dfm docstring

* Update draem docstring

* Update efficient ad docstring

* Update ganomaly docstring

* Update padim and patchcore docstrings

* Update reverse distillation docstring

* update rkde docstring

* update stfpm docstring

* Update ai-vad docstring

* Update feature extractors

* add docstring to sparse random projection

* Update dimensionality reduction components

* Exclude prompt from copying

* Normalizing flow update

* Add pro

* Add feature extractor docs

* update aupr

* Update aupr

* Update aupro

* Update auroc

* Update docs/source/markdown/guides/how_to/models/feature_extractors.md

* Update f1 and manual thresholds

* add minmax

* add optimal f1

* update utils

* update comet

* add tensorboard

* update wandb

* Add callbacks

* Update deploy docstrings

* Fix pre-commit on Blasz changes

* Change the requirement file in readthedocs config file

* Partially address pr comments

* Fix the model to padim for the cli integration tests

---------

Co-authored-by: Blaž Rolih <61357777+blaz-r@users.noreply.github.com>
Co-authored-by: Blaz Rolih <blaz.rolih@gmail.com>

* Fix AI-VAD issues (#1524)

* partially fix empty bbox issue

* allow empty region detections

* add torch implementation of gmm (WIP)

* make knn mem bank persistent

* set val_split_mode to same_as_test by default to enforce determinism

* add unit tests and docstrings to gmm

* improve typing of knn estimator

* remove todo

* update buffer name

* fix minor mistakes in gmm implementation

* remove unnecessary tensor conversion

* fix visualization when predicting with video model

* add __init__.py to components.cluster

* check for empty bboxes in feature extractor

* reduce default batch size

* cast deep features to float

* fix device issue

* add unit tests for feature extractors

* add license header

* disable random model selection in integration tests

* typing and docstrings

* add test case for non-convergence warning

* 📝  v1 - Docs: Create a dedicated section for each model. (#1540)

* Initial commit for model components

* FIx the grid in model components

* Add image models

* Add video models

* Fix titles and do some cleanup

* reduce the sphinx version as it fails the readthedocs builds

* Fix examples for reading transforms from albumentations Compose object and deserializing from a yaml file

* Update __init__.py

* OpenVINO NNCF updates (#1534)

* Update versions of openvino and nncf

* All export functions return the model path

* Change default OV device to AUTO

* Minor changes on openvino API

* Fix pre-commit issues

* Restored onnx dependency

* Added OV export tests

* Drop export tests

* Rename path var

* Renamed test paths

* [Docs] Add average score to the FastFlow's performance results tables (#1587)

Add average score in the tables of performance results

* Update the paper title in CS-FLOW and CFLOW readme (#1579)

* Fix csflow name in readme

* Update cflow name in readme

* v1 - [Refactor] Reflect the changes in #1562 into v1 (#1595)

Reflect the changes in #1562 into v1

* ✏️ Refactor `ExportMode` to `ExportType` (#1594)

* Update export_mode to export_type

* Fix typo typel -> model

* Revert the python version in the notebook

* 📚 v1 - Modify the PR template (#1596)

* Modify the PR template

* Update pull_request_template.md

* added emojis to the checklist

* [Bug] v1: Fix default input normalization method (#1583)

Fix default input normalization method

* Modify `Engine.predict` (#1514)

* Add validate + predict

* Add train

* Add export tests + refactor export cli command

* Fix tests

* Fix jupyternotebook

* Update engine.predict + expand tests

* Fix lightning entrypoint test

* Address PR comments

* Use only Padim

* Fix commands

* Move padim to common args

* Address 1st PR comment

* Address PR comments

* Fix aivad tests

* Fix missing docstring

* Rename config to args

* Add missing ckpt_path warning in predict

* Remove ckpt_path as required parameter

* Add tests for image path in predict

* Fix image path in predict

* Address PR comments

* Fix missing checkpoint path

* Fix fastflow precommit issue

* Fix tests

* Fix test

---------

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>

* Fix issue with incorrect image save location (#1515)

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>

* Upgrade gradio version to 4.x (#1608)

* upgrade gradio version to 4.x

* refactor variable names

* refactor

---------

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>

* ✍ InferenceDataset->PredictDataset (#1544)

* ✍ InferenceDataset->PredictDataset

* update predict dataset references

* update predict dataset references

* update predict dataset references

* update predict dataset references

* update predict dataset references

* update notebook

---------

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>

* `FeatureExtractor` -> `TimmFeatureExtractor` (#1543)

Deprecate FeatureExtractor

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>

* Add `LearningType` and refactor enums (#1598)

* add LearningType and move enums to separate module

* add enum definitions

* move shared enums to root init

* place version above enums

* 📘 Add custom data tutorial (#1571)

* Add custom data tutorial

* Add the custom data training instructions

* Address PR comments

* Start with a classification data

* Release hazelnut toy dataset and refer to the link here.

* Update docs/source/markdown/guides/how_to/data/custom_data.md

Co-authored-by: Dick Ameln <amelndjd@gmail.com>

* Update docs/source/markdown/guides/how_to/data/custom_data.md

Co-authored-by: Dick Ameln <amelndjd@gmail.com>

* Update docs/source/markdown/guides/how_to/data/custom_data.md

Co-authored-by: Dick Ameln <amelndjd@gmail.com>

* Remove no-val-test section from tutorials

* Address PR comments

---------

Co-authored-by: Dick Ameln <amelndjd@gmail.com>
Co-authored-by: Samet Akcay <sakcay@Samets-MacBook-Pro.local>

* Add URL verification for downloading dataset (#1620)

* add url path verification for dataset downloading module

* specify node version to pre-commit-config

* fix import errors on the notebooks

* 🐞 v1 - Fix training with mps accelerator (#1618)

* Convert mask to float32 in AnomalibDataset

* Convert the tensor to cpu before converting to numpy

* replace np.float32 with np.single

* Update Engine docstrings (#1549)

Update docstrings

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>

* Fixed shape error, allowing arbitrary image sizes for EfficientAD

* Revert the previous commit

* Extend supported models in TimmFeatureExtractor (#1443)

Extend Timm feature extractor for v1.0

* 🔒 v1 - Address security issues (#1637)

* Address path traversal issues 1-3

* address traversal path 6

* Address traverse path 8

* modify the comment to make it more descriptive

* 🐞 Fix mps float64 tensor conversion issue (#1644)

Fix leftover

* 🐞 Fix metadata_path arg to metadata in OpenVINO inferencer (#1648)

Fix metadata_path arg to metadata in OpenVINO inferencer

* 🔒 Address path traversal issues (#1643)

* Address path traversal issues 1-3

* address traversal path 6

* Address traverse path 8

* modify the comment to make it more descriptive

* Update get_image_filename function to enhance the input security

* fix example

* Fix incorrect default value assignment

* Refactor project_path fixture to create temporary directory in the root directory of the project

* Update .gitignore file to include test-related files and directories

* Refactor get_image_filenames_from_dir to filter out non-image files

* Add test case for path outside base directory

* Add examples to get_filenames

* Address PR comments.

* Renamed the tmp dir

* CVS-129503

* add validate_path to anomalib.data.utils.path

* Validate depth path

* 🔒 Add `SECURITY.md` file (#1655)

* Add SECURITY.md file

* Add security item to the type of changes in the pull request template.

* Update pr template

* replace the security emoji

* Validate path in make_dataset functions

* Add a check whether path contains null byte

* fix tests

* Remove duplicated tests that causes the ci fail

* 🚀 Add zero-/few-shot model support and WinCLIP model implementation (#1575)

* add clip normalization

* initial commit for winclip

* add cosine similarity computation

* add multiscale score computation

* simplify mask generation

* add few-shot extension (unvalidated)

* refactor

* cleanup

* add todo

* formatting

* minor refactor

* add comment

* expose optimal F1 metric

* some cleanup

* add ln_after_pool logic

* remove final_ln_after_pool

* update module docstring and remove comments

* add typing and docstrings to torch model

* cleanup lightning model

* hardcode backbone

* n_shot -> k_shot

* add temperature as constant

* minor bugfix

* add typing and docstrings to utils

* set class name dynamically

* replace inf values in harmonic aggregation

* run validate before test

* set default class name to None

* formatting

* remove config

* comments

* minor bugfix

* Revert "expose optimal F1 metric"

This reverts commit e8e1ead9601d76c743af3678f26b1eb0e06d38fb.

* more descriptive assert message

* expose scales as configurable parameter and hardcode pretrained as constant

* add readme

* add images for readme

* update docstrings

* update license headers

* ruff

* add openclip as requirement

* skip model tests for winclip

* fix visualizer test

* add example in docstring

* fix typo in function name

* typing

* imports

* docstrings

* check if model has trainer attribute

* remove pylint ignore statement

* typing

* docstring

* improve tensor shape handling

* refactor and rename class_scores function

* add docstring example

* commenting

* Update src/anomalib/models/image/winclip/torch_model.py

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Update src/anomalib/models/image/winclip/torch_model.py

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Update src/anomalib/models/image/winclip/torch_model.py

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* formatting

* docstrings

* docstrings

* comment

* typing

* multiscale -> multi_scale

* add winclip to docs

* add few_shot_source parameter

* use PredictDataset in WinCLIP implementation

* add learning type to winclip lightning model

* add custom model checkpoint callback to save at validation end

* remove trainer arguments from winclip model

* prune state dict for smaller model file

* add learning type logic to engine.train

* pass full path to model checkpoint

* remove training step

* enable integration tests for winclip

* fix typo

* index masks at 0

* formatting

* simplify make_masks

* validate inputs in make_masks

* add unit tests for winclip utils

* add default class_name to prompt ensemble

* add unit tests for winclip prompt ensemble

* add base class for normalization callback

* add _should_run_validation check

* add engine.model property

* use custom modelcheckpoint in tests

* update name in todos

* fix predict tests

* skip export tests for winclip

* fix mistake in model retrieval from trainer

* add model checks

* fix checks

* simplify model check

* add todo for winclip export

* add todo

* add bufferlist mixin

* update bufferlist docstring

* add setup method and register buffers

* import torch model in root of winclip module

* add unit tests for bufferlist mixin

* add unit tests for torch model

* fix transform and update docstring

* disable strict loading in export

* initialize embeddings as tensors

* add test to check if errors are raised

* add todo

* enable winclip export test

* remove device references in torch model

* restore frozen weights in load_state_dict

* make embedding collection methods private

* move state dict handling to winclip from base

* fix typo

* make generate_masks private

* increase onnx opset version

* remove future import

* update docstring

* Update src/anomalib/callbacks/normalization/__init__.py

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Update src/anomalib/callbacks/normalization/cdf_normalization.py

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Update src/anomalib/callbacks/normalization/min_max_normalization.py

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Update src/anomalib/engine/engine.py

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* typing in docstrings

* Update src/anomalib/engine/engine.py

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Update src/anomalib/engine/engine.py

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Update src/anomalib/engine/engine.py

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Update src/anomalib/engine/engine.py

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* use exception instead of assert

* update license header

* docstrings

* bufferlist -> buffer_list

---------

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Check if the path is too long and contains non-printable chars

* WinCLIP attribution (#1662)

give credit to related works

* 🔀 Merge main to v1 (#1652)

* Address tiler issues (#1411)

* fix tiler

* deprecate random tile locations

* restore random tiling in tile method

* check tiling section in config

* disable tiling for ganomaly

* pad -> padding

* Refactor Reverse Distillation to match official code (#1389)

* Non-mandatory early stopping

* Added conv4 and bn4 to OCBE

* Loss as in the official code (flattened arrays)

* Added comment on how to use torchvision model as an encoder to reproduce results in the paper

* Remove early stop from config, change default anomaly_map_mode to add

* pre-commit fix

* Updated results

* Update src/anomalib/models/reverse_distillation/README.md

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Update src/anomalib/models/reverse_distillation/README.md

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Update src/anomalib/models/reverse_distillation/README.md

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Remove early_stopping

* Update src/anomalib/models/reverse_distillation/lightning_model.py

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Easier to read code

---------

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Patch for the WinError183 on the OpenVino export mode (#1386)

* Fix WinError183 (Windows Error)

* Add commentary of the change

---------

Co-authored-by: Youho99 <gaylord.giret@viacesi.fr>

* Add DSR model (#1142)

* added license, init.py and draft readme

* added draft DSR files

* minor comment update

* Implemented dsr model + comments

* added dsr discrete model

* added defect generation in torch model + dsr to list of existing methods in init.py

* fixed torch model, started implementing lightning model, implemented anomaly generator

* added loss file for DSR

* Added loss, improved lightning module

* Finished up global implementation of DSR second phase

* minor fixes

* Bugfixes

* Fixed DSR loss calculation

* on_training_start -> on_train_start

* pre-commit run

* updated DSR documentation

* reset config file

* added automatic pretraining weight download

* testing pretrained weights. fixed embedding size in upsampling module and image recon module, to be fixed in original branch

* successful testing on pretrained dsr weights

* checked test quality with pretrained weights, fixed anomaly score calculation

* training is functional

* Fixed training procedure

* test still working

* working upsampling module training and testing

* fixed minor bugs

* updated documentation

* added tests and doc

* adapted learning schedule to steps

* Update src/anomalib/models/dsr/anomaly_generator.py

Co-authored-by: Ashwin Vaidya <ashwinnitinvaidya@gmail.com>

* Apply suggestions from code review

Co-authored-by: Samet Akcay <samet.akcay@intel.com>
Co-authored-by: Ashwin Vaidya <ashwinnitinvaidya@gmail.com>

* refactored outputs into dicts

* remove super() args

* changed downloading weights from anomalib releases + minor fixes

* pre commit hooks + minor fixes

* removed configurable ckpt path refs + default iteration nb from paper

* cleaned up dsr.rst and turned exceptions into RuntimeErrors

* Added upsampling ratio parameter to set third training phase epochs

* Added batched evaluation + minor code simplification

* pre commit hooks

* squeeze output image score tensor

* re-added new path check in EfficientAD

* fixed double step count with manual optimization

* fixed trailing whitespace

* Fix black issues

* Apply suggestions from code review

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* review suggestions

* updated architecture image links

* Address mypy

* changed output types for dsr model

* re-added dict outputs, adapted to TorchInferencer

* fixed error in output dict

* removed default imagenet norm

---------

Co-authored-by: Samet Akcay <samet.akcay@intel.com>
Co-authored-by: Ashwin Vaidya <ashwinnitinvaidya@gmail.com>

* Fix unexpected key pixel_metrics.AUPRO.fpr_limit (#1055)

* fix unexpected key pixel_metrics.AUPRO.fpr_limit

Signed-off-by: FanJiangIntel <fan.jiang@intel.com>

* load AUPRO before create_metric_collection

Signed-off-by: FanJiangIntel <fan.jiang@intel.com>

* code refine

Signed-off-by: FanJiangIntel <fan.jiang@intel.com>

* fix comment

Signed-off-by: FanJiangIntel <fan.jiang@intel.com>

* fix

Signed-off-by: FanJiangIntel <fan.jiang@intel.com>

* Support test

Signed-off-by: Kang Wenjing <wenjing.kang@intel.com>

* Update test

Signed-off-by: Kang Wenjing <wenjing.kang@intel.com>

* Update test

Signed-off-by: Kang Wenjing <wenjing.kang@intel.com>

---------

Signed-off-by: FanJiangIntel <fan.jiang@intel.com>
Signed-off-by: Kang Wenjing <wenjing.kang@intel.com>
Co-authored-by: FanJiangIntel <fan.jiang@intel.com>
Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Improved speed and memory usage of mean+std calculation (#1457)

* preexisting OpenCV version check added to `setup.py`, ran formatting pre-commit hooks on previous contribution. (#1424)

* testing upstream switch

* picked up on stale OpenCV `setup.py` issue #1041

* 🐞 Hotfix: Limit Gradio Version (#1458)

* Fix metadata path

* Ignore hidden directories in folder dataset

* Add check for mask_dir for segmentation tasks in Folder dataset

* Limit the gradio version to <4

* Fix/efficient ad normalize before every validation (#1441)

* Normalize anomaly maps before every validation

* Remove print statement

---------

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Fix DRAEM (#1431)

* Fix beta in augmenter

* Add scheduler

* Change normalization to none

* Replace two lr schedulers with MultiStepLR

* Revert change to beta

* Disable early stopping default

* Format config

* Add opacity parameter beta to config

* Adding U-Flow method (#1415)

* Added uflow model

* Added documentation (README) for uflow model

* Added uflow to the list of available models, and main README updated

* Added missing images for the documentation

* Update src/anomalib/models/uflow/anomaly_map.py

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Update src/anomalib/models/uflow/anomaly_map.py

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Update src/anomalib/models/uflow/feature_extraction.py

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Update src/anomalib/models/uflow/torch_model.py

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Added uflow to the reference guide in docs

* Added uflow to the pre-merge tests

* removed the _step function, and merged the code with training_step

* added as a comment the values used in the paper

* refactored feature extractors to use the TimmFeatureExtractor class

* added annotations for some functions, where the flow graph is created

* updated readme to fix images loading

* Added link in the README to the original code for reproducing the results

* Removed unused kwargs

* Added docstrings with args explanations to UFlow classes

* Added models in a github release, and linked here

* Passing all pre-commit checks

* Replaced FrEIA's AllInOneBlock with Anomalib's version, and converted the subnet constructor to a class so that it is picklable, which is needed to export the model to torch

---------

Co-authored-by: Samet Akcay <samet.akcay@intel.com>

* Update README.md

* 📘 Announce anomalib v1 on the main `README.md` (#1542)

* Fix metadata path

* Ignore hidden directories in folder dataset

* Add check for mask_dir for segmentation tasks in Folder dataset

* Limit the gradio version to <4

* Announce anomalib v1 on readme

* Add the installation instructions and update the documentation link

* Fixed DSR (#1486)

* fixed DSR squeeze bug

* added comment

* Refactor/extensions custom dataset (#1562)

* Explanation how to use extension names in the config file

* Added information about extensions to the error message and control of the user input

* Easier to read code

* Replacing assert with raise

* 📚 Modify the PR template (#1611)

Update pull_request_template.md

* Fix result image URLs (#1510)

* Fix tests

* refactor path + fix issues + fix linting issues

* Migrate docs

* fix typing

* fix failing model tests

* Fix tests

* Address PR comments

* Fixed shape error, allowing arbitrary image sizes for EfficientAD (#1537)

* Fixed shape error, allowing arbitrary image sizes. Replaced integer parsing by floor operation

* Replaced calculation by ceil operation. Solution of shape error is to round up and not down for the last upsample layer

* Add comment for ceil operation

* Formatting with pre-commit hook

* Clean up badge

---------

Signed-off-by: FanJiangIntel <fan.jiang@intel.com>
Signed-off-by: Kang Wenjing <wenjing.kang@intel.com>
Co-authored-by: Dick Ameln <dick.ameln@intel.com>
Co-authored-by: abc-125 <63813435+abc-125@users.noreply.github.com>
Co-authored-by: Samet Akcay <samet.akcay@intel.com>
Co-authored-by: ggiret-thinkdeep <146845847+ggiret-thinkdeep@users.noreply.github.com>
Co-authored-by: Youho99 <gaylord.giret@viacesi.fr>
Co-authored-by: Philippe Carvalho <31983398+phcarval@users.noreply.github.com>
Co-authored-by: Wenjing Kang <wenjing.kang@intel.com>
Co-authored-by: FanJiangIntel <fan.jiang@intel.com>
Co-authored-by: belfner <belfner@belfner.com>
Co-authored-by: Abdulla Al Blooshi <76493346+abdullamatar@users.noreply.github.com>
Co-authored-by: Blaž Rolih <61357777+blaz-r@users.noreply.github.com>
Co-authored-by: Matías Tailanian <895687+mtailanian@users.noreply.github.com>
Co-authored-by: Jan Schlüter <github@jan-schlueter.de>
Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>
Co-authored-by: Christopher <48522299+holzweber@users.noreply.github.com>

* Update license headers

* Check symlinks

---------

Signed-off-by: FanJiangIntel <fan.jiang@intel.com>
Signed-off-by: Kang Wenjing <wenjing.kang@intel.com>
Signed-off-by: Samet Akcay <samet.akcay@intel.com>
Co-authored-by: Ashwin Vaidya <ashwin.vaidya@intel.com>
Co-authored-by: Weilin Xu <mzweilin@gmail.com>
Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>
Co-authored-by: Dick Ameln <dick.ameln@intel.com>
Co-authored-by: Ashwin Vaidya <ashwinnitinvaidya@gmail.com>
Co-authored-by: abc-125 <63813435+abc-125@users.noreply.github.com>
Co-authored-by: Harim Kang <harim.kang@intel.com>
Co-authored-by: Dick Ameln <amelndjd@gmail.com>
Co-authored-by: Blaž Rolih <61357777+blaz-r@users.noreply.github.com>
Co-authored-by: Blaz Rolih <blaz.rolih@gmail.com>
Co-authored-by: Adrian Boguszewski <adekboguszewski@gmail.com>
Co-authored-by: Willy Fitra Hendria <willyfitrahendria@gmail.com>
Co-authored-by: Samet Akcay <sakcay@Samets-MacBook-Pro.local>
Co-authored-by: Yunchu Lee <yunchu.lee@intel.com>
Co-authored-by: ggiret-thinkdeep <146845847+ggiret-thinkdeep@users.noreply.github.com>
Co-authored-by: Youho99 <gaylord.giret@viacesi.fr>
Co-authored-by: Philippe Carvalho <31983398+phcarval@users.noreply.github.com>
Co-authored-by: Wenjing Kang <wenjing.kang@intel.com>
Co-authored-by: FanJiangIntel <fan.jiang@intel.com>
Co-authored-by: belfner <belfner@belfner.com>
Co-authored-by: Abdulla Al Blooshi <76493346+abdullamatar@users.noreply.github.com>
Co-authored-by: Matías Tailanian <895687+mtailanian@users.noreply.github.com>
Co-authored-by: Jan Schlüter <github@jan-schlueter.de>
Co-authored-by: Christopher <48522299+holzweber@users.noreply.github.com>
25 people committed Jan 25, 2024
1 parent 31a83c6 commit b5a29d7
Showing 14 changed files with 279 additions and 34 deletions.
22 changes: 11 additions & 11 deletions src/anomalib/data/depth/folder_3d.py
@@ -23,7 +23,7 @@
ValSplitMode,
get_transforms,
)
from anomalib.data.utils.path import _prepare_files_labels, _resolve_path
from anomalib.data.utils.path import _prepare_files_labels, validate_and_resolve_path


def make_folder3d_dataset(
@@ -68,13 +68,13 @@ def make_folder3d_dataset(
Returns:
DataFrame: an output dataframe containing samples for the requested split (i.e., train or test)
"""
normal_dir = _resolve_path(normal_dir, root)
abnormal_dir = _resolve_path(abnormal_dir, root) if abnormal_dir is not None else None
normal_test_dir = _resolve_path(normal_test_dir, root) if normal_test_dir is not None else None
mask_dir = _resolve_path(mask_dir, root) if mask_dir is not None else None
normal_depth_dir = _resolve_path(normal_depth_dir, root) if normal_depth_dir is not None else None
abnormal_depth_dir = _resolve_path(abnormal_depth_dir, root) if abnormal_depth_dir is not None else None
normal_test_depth_dir = _resolve_path(normal_test_depth_dir, root) if normal_test_depth_dir is not None else None
normal_dir = validate_and_resolve_path(normal_dir, root)
abnormal_dir = validate_and_resolve_path(abnormal_dir, root) if abnormal_dir else None
normal_test_dir = validate_and_resolve_path(normal_test_dir, root) if normal_test_dir else None
mask_dir = validate_and_resolve_path(mask_dir, root) if mask_dir else None
normal_depth_dir = validate_and_resolve_path(normal_depth_dir, root) if normal_depth_dir else None
abnormal_depth_dir = validate_and_resolve_path(abnormal_depth_dir, root) if abnormal_depth_dir else None
normal_test_depth_dir = validate_and_resolve_path(normal_test_depth_dir, root) if normal_test_depth_dir else None

assert normal_dir.is_dir(), "A folder location must be provided in normal_dir."

@@ -117,15 +117,15 @@ def make_folder3d_dataset(
samples.label_index = samples.label_index.astype("Int64")

# If a path to mask is provided, add it to the sample dataframe.
if normal_depth_dir is not None:
if normal_depth_dir:
samples.loc[samples.label == DirType.NORMAL, "depth_path"] = samples.loc[
samples.label == DirType.NORMAL_DEPTH
].image_path.to_numpy()
samples.loc[samples.label == DirType.ABNORMAL, "depth_path"] = samples.loc[
samples.label == DirType.ABNORMAL_DEPTH
].image_path.to_numpy()

if normal_test_dir is not None:
if normal_test_dir:
samples.loc[samples.label == DirType.NORMAL_TEST, "depth_path"] = samples.loc[
samples.label == DirType.NORMAL_TEST_DEPTH
].image_path.to_numpy()
@@ -146,7 +146,7 @@ def make_folder3d_dataset(
samples = samples.astype({"depth_path": "str"})

# If a path to mask is provided, add it to the sample dataframe.
if mask_dir is not None and abnormal_dir is not None:
if mask_dir and abnormal_dir:
samples.loc[samples.label == DirType.ABNORMAL, "mask_path"] = samples.loc[
samples.label == DirType.MASK
].image_path.to_numpy()
3 changes: 2 additions & 1 deletion src/anomalib/data/depth/mvtec_3d.py
@@ -38,6 +38,7 @@
ValSplitMode,
download_and_extract,
get_transforms,
validate_path,
)

logger = logging.getLogger(__name__)
@@ -102,7 +103,7 @@ def make_mvtec_3d_dataset(
if extensions is None:
extensions = IMG_EXTENSIONS

root = Path(root)
root = validate_path(root)
samples_list = [(str(root),) + f.parts[-4:] for f in root.glob(r"**/*") if f.suffix in extensions]
if not samples_list:
msg = f"Found 0 images in {root}"
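The same pattern of validating the dataset root before globbing it recurs in the btech, kolektor, mvtec, avenue and shanghaitech changes below. A minimal sketch of the idea, with an illustrative dataset path that is assumed to exist under the user's home directory:

```python
from anomalib.data.utils import validate_path

# Resolve and validate the root first: the glob only runs on a path that exists,
# is readable, and lies inside the allowed base directory (home by default).
root = validate_path("./datasets/MVTec3D/bagel")  # illustrative path
samples = [f for f in root.glob("**/*") if f.suffix == ".png"]
```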
3 changes: 3 additions & 0 deletions src/anomalib/data/image/btech.py
@@ -30,6 +30,7 @@
ValSplitMode,
download_and_extract,
get_transforms,
validate_path,
)

logger = logging.getLogger(__name__)
@@ -79,6 +80,8 @@ def make_btech_dataset(path: Path, split: str | Split | None = None) -> DataFram
Returns:
DataFrame: an output dataframe containing samples for the requested split (i.e., train or test)
"""
path = validate_path(path)

samples_list = [
(str(path),) + filename.parts[-3:] for filename in path.glob("**/*") if filename.suffix in (".bmp", ".png")
]
6 changes: 3 additions & 3 deletions src/anomalib/data/image/folder.py
@@ -24,7 +24,7 @@
ValSplitMode,
get_transforms,
)
from anomalib.data.utils.path import _prepare_files_labels, _resolve_path
from anomalib.data.utils.path import _prepare_files_labels, validate_and_resolve_path


def make_folder_dataset(
@@ -96,8 +96,8 @@ def _resolve_path_and_convert_to_list(path: str | Path | Sequence[str | Path] |
list[Path]: The result of path replaced by Sequence[str | Path].
"""
if isinstance(path, Sequence) and not isinstance(path, str):
return [_resolve_path(dir_path, root) for dir_path in path]
return [_resolve_path(path, root)] if path is not None else []
return [validate_and_resolve_path(dir_path, root) for dir_path in path]
return [validate_and_resolve_path(path, root)] if path is not None else []

# All paths are changed to the List[Path] type and used.
normal_dir = _resolve_path_and_convert_to_list(normal_dir)
3 changes: 2 additions & 1 deletion src/anomalib/data/image/kolektor.py
@@ -36,6 +36,7 @@
ValSplitMode,
download_and_extract,
get_transforms,
validate_path,
)

__all__ = ["Kolektor", "KolektorDataset", "make_kolektor_dataset"]
@@ -117,7 +118,7 @@ def make_kolektor_dataset(
3 KolektorSDD kos01 test Good KolektorSDD/kos01/Part3.jpg KolektorSDD/kos01/Part3_label.bmp 0
4 KolektorSDD kos01 train Good KolektorSDD/kos01/Part4.jpg KolektorSDD/kos01/Part4_label.bmp 0
"""
root = Path(root)
root = validate_path(root)

# Get list of images and masks
samples_list = [(str(root),) + f.parts[-2:] for f in root.glob(r"**/*") if f.suffix == ".jpg"]
3 changes: 2 additions & 1 deletion src/anomalib/data/image/mvtec.py
@@ -44,6 +44,7 @@
ValSplitMode,
download_and_extract,
get_transforms,
validate_path,
)

logger = logging.getLogger(__name__)
@@ -127,7 +128,7 @@ def make_mvtec_dataset(
if extensions is None:
extensions = IMG_EXTENSIONS

root = Path(root)
root = validate_path(root)
samples_list = [(str(root),) + f.parts[-3:] for f in root.glob(r"**/*") if f.suffix in extensions]
if not samples_list:
msg = f"Found 0 images in {root}"
13 changes: 11 additions & 2 deletions src/anomalib/data/utils/__init__.py
@@ -15,7 +15,14 @@
read_image,
)
from .label import LabelName
from .path import DirType, _check_and_convert_path, _prepare_files_labels, _resolve_path
from .path import (
DirType,
_check_and_convert_path,
_prepare_files_labels,
resolve_path,
validate_and_resolve_path,
validate_path,
)
from .split import Split, TestSplitMode, ValSplitMode, concatenate_datasets, random_split, split_by_label
from .transforms import InputNormalizationMethod, get_transforms

@@ -44,5 +51,7 @@
"DownloadInfo",
"_check_and_convert_path",
"_prepare_files_labels",
"_resolve_path",
"resolve_path",
"validate_path",
"validate_and_resolve_path",
]
13 changes: 3 additions & 10 deletions src/anomalib/data/utils/image.py
@@ -16,6 +16,8 @@
from torch.nn import functional as F # noqa: N812
from torchvision.datasets.folder import IMG_EXTENSIONS

from anomalib.data.utils.path import validate_path

logger = logging.getLogger(__name__)


@@ -179,16 +181,7 @@ def get_image_filenames(path: str | Path, base_dir: str | Path | None = None) ->
File "<string>", line 18, in get_image_filenames
ValueError: Access denied: Path is outside the allowed directory.
"""
path = Path(path).expanduser().resolve()
base_dir = Path(base_dir).expanduser().resolve() if base_dir else Path.home()

# Ensure the resolved path is within the base directory
# This is for security reasons to avoid accessing files outside the base directory.
if not path.is_relative_to(base_dir):
msg = "Access denied: Path is outside the allowed directory"
raise ValueError(msg)

# Get image filenames from file or directory.
path = validate_path(path, base_dir)
image_filenames: list[Path] = []

if path.is_file():
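After this refactor the containment check lives in validate_path, so get_image_filenames only walks a path that has already been validated. A small usage sketch; the dataset path is illustrative and is assumed to exist under base_dir:

```python
from anomalib.data.utils.image import get_image_filenames

# Listing images is restricted to base_dir; a path that resolves outside of it
# now fails inside validate_path with "Access denied: ...".
filenames = get_image_filenames(
    "./datasets/MVTec/bottle/train/good",  # illustrative path
    base_dir="./datasets/MVTec",
)
```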
148 changes: 147 additions & 1 deletion src/anomalib/data/utils/path.py
@@ -4,6 +4,8 @@
# SPDX-License-Identifier: Apache-2.0


import os
import re
from enum import Enum
from pathlib import Path

@@ -77,7 +79,7 @@ def _prepare_files_labels(
return filenames, labels


def _resolve_path(folder: str | Path, root: str | Path | None = None) -> Path:
def resolve_path(folder: str | Path, root: str | Path | None = None) -> Path:
"""Combine root and folder and returns the absolute path.
This allows users to pass either a root directory and relative paths, or absolute paths to each of the
@@ -98,3 +100,147 @@ def _resolve_path(folder: str | Path, root: str | Path | None = None) -> Path:
# root provided; prepend root and return absolute path
path = (Path(root) / folder).resolve()
return path


def is_path_too_long(path: str | Path, max_length: int = 512) -> bool:
r"""Check if the path contains too long input.
Args:
path (str | Path): Path to check.
max_length (int): Maximum length a path can be before it is considered too long.
Defaults to ``512``.
Returns:
bool: True if the path contains too long input, False otherwise.
Examples:
>>> contains_too_long_input("./datasets/MVTec/bottle/train/good/000.png")
False
>>> contains_too_long_input("./datasets/MVTec/bottle/train/good/000.png" + "a" * 4096)
True
"""
return len(str(path)) > max_length


def contains_non_printable_characters(path: str | Path) -> bool:
r"""Check if the path contains non-printable characters.
Args:
path (str | Path): Path to check.
Returns:
bool: True if the path contains non-printable characters, False otherwise.
Examples:
>>> contains_non_printable_characters("./datasets/MVTec/bottle/train/good/000.png")
False
>>> contains_non_printable_characters("./datasets/MVTec/bottle/train/good/000.png\0")
True
"""
printable_pattern = re.compile(r"^[\x20-\x7E]+$")
return not printable_pattern.match(str(path))


def validate_path(path: str | Path, base_dir: str | Path | None = None) -> Path:
"""Validate the path.
Args:
path (str | Path): Path to validate.
base_dir (str | Path): Base directory to restrict file access.
Returns:
Path: Validated path.
Examples:
>>> validate_path("./datasets/MVTec/bottle/train/good/000.png")
PosixPath('/abs/path/to/anomalib/datasets/MVTec/bottle/train/good/000.png')
>>> validate_path("./datasets/MVTec/bottle/train/good/000.png", base_dir="./datasets/MVTec")
PosixPath('/abs/path/to/anomalib/datasets/MVTec/bottle/train/good/000.png')
Path to an outside file/directory should raise ValueError:
>>> validate_path("/usr/local/lib")
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "<string>", line 18, in validate_path
ValueError: Access denied: Path is outside the allowed directory
Path to a non-existing file should raise FileNotFoundError:
>>> validate_path("/path/to/unexisting/file")
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "<string>", line 18, in validate_path
FileNotFoundError: Path does not exist: /path/to/unexisting/file
Accessing a file without read permission should raise PermissionError:
.. note::
Note that ``/bin/bash`` is used as an example here. If this path does not
exist on your system, this will raise ``FileNotFoundError`` instead of
``PermissionError``. You can substitute any path for which you do not have
read or execute permission.
>>> validate_path("/bin/bash", base_dir="/bin/")
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "<string>", line 18, in validate_path
PermissionError: Read or execute permissions denied for the path: /bin/bash
"""
# Check if the path is of an appropriate type
if not isinstance(path, str | Path):
raise TypeError("Expected str, bytes or os.PathLike object, not " + type(path).__name__)

# Check if the path is too long
if is_path_too_long(path):
msg = f"Path is too long: {path}"
raise ValueError(msg)

# Check if the path contains non-printable characters
if contains_non_printable_characters(path):
msg = f"Path contains non-printable characters: {path}"
raise ValueError(msg)

# Sanitize paths
path = Path(path).resolve()
base_dir = Path(base_dir).resolve() if base_dir else Path.home()

# Check if the resolved path is within the base directory
if not path.is_relative_to(base_dir):
msg = "Access denied: Path is outside the allowed directory"
raise ValueError(msg)

# Check if the path exists
if not path.exists():
msg = f"Path does not exist: {path}"
raise FileNotFoundError(msg)

# Check the read and execute permissions
if not (os.access(path, os.R_OK) or os.access(path, os.X_OK)):
msg = f"Read or execute permissions denied for the path: {path}"
raise PermissionError(msg)

return path


def validate_and_resolve_path(
folder: str | Path,
root: str | Path | None = None,
base_dir: str | Path | None = None,
) -> Path:
"""Validate and resolve the path.
Args:
folder (str | Path): Folder location containing image or mask data.
root (str | Path | None): Root directory for the dataset.
base_dir (str | Path | None): Base directory to restrict file access.
Returns:
Path: Validated and resolved path.
"""
return validate_path(resolve_path(folder, root), base_dir)
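
Taken together, the new helpers split path handling into a pure resolution step and a validation step. A minimal usage sketch, assuming the example dataset directories exist under the user's home directory:

```python
from anomalib.data.utils import resolve_path, validate_and_resolve_path, validate_path

# Pure resolution: combine an optional root with a relative folder (no checks performed).
path = resolve_path("bottle/train/good", root="./datasets/MVTec")

# Validation only: length, printable characters, containment, existence and permissions.
checked = validate_path(path)

# Both steps in one call, which is what the dataset modules above now use.
normal_dir = validate_and_resolve_path("bottle/train/good", root="./datasets/MVTec")
```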
5 changes: 4 additions & 1 deletion src/anomalib/data/video/avenue.py
@@ -38,6 +38,7 @@
ValSplitMode,
download_and_extract,
get_transforms,
validate_path,
)
from anomalib.data.utils.video import ClipsIndexer

@@ -86,7 +87,9 @@ def make_avenue_dataset(root: Path, gt_dir: Path, split: Split | str | None = No
Returns:
DataFrame: an output dataframe containing samples for the requested split (i.e., train or test)
"""
samples_list = [(str(root),) + filename.parts[-2:] for filename in Path(root).glob("**/*.avi")]
root = validate_path(root)

samples_list = [(str(root),) + filename.parts[-2:] for filename in root.glob("**/*.avi")]
samples = DataFrame(samples_list, columns=["root", "folder", "image_path"])

samples.loc[samples.folder == "testing_videos", "mask_path"] = (
4 changes: 3 additions & 1 deletion src/anomalib/data/video/shanghaitech.py
@@ -36,6 +36,7 @@
download_and_extract,
get_transforms,
read_image,
validate_path,
)
from anomalib.data.utils.video import ClipsIndexer, convert_video

@@ -78,7 +79,8 @@ def make_shanghaitech_dataset(root: Path, scene: int, split: Split | str | None
scene_prefix = str(scene).zfill(2)

# get paths to training videos
train_root = Path(root) / "training/converted_videos"
root = validate_path(root)
train_root = root / "training/converted_videos"
train_list = [(str(train_root),) + filename.parts[-2:] for filename in train_root.glob(f"{scene_prefix}_*.avi")]
train_samples = DataFrame(train_list, columns=["root", "folder", "image_path"])
train_samples["split"] = "train"