
Optional Dependencies #1043

Merged
merged 11 commits into ContinualAI:master on Jun 3, 2022

Conversation

@ggraffieti (Member) commented Jun 2, 2022

  • Management of optional dependencies
    • Only the core dependencies are installed when the library is installed
    • Optional dependencies are divided into non-exclusive groups (a dependency can belong to more than one group). Each group defines the particular set of dependencies required to extend Avalanche with certain functionalities.
      • So far we have 3 groups: detection, extra, rl
    • The optional dependencies are listed in a txt file, where one or more groups are indicated for each of them. This also allows having an additional group, "all", which installs all the optional dependencies at once (see the packaging sketch after this list).
    • The optional dependencies can be installed with pip install avalanche-lib[<group>], e.g. pip install avalanche-lib[detection]
      • This also works if Avalanche is already installed: it simply installs the required optional dependencies
  • Library works without any additional dependency installed
    • Avalanche now works even without any additional dependency installed. All the top-level imports work and no errors are thrown.
    • The classes/methods that use optional dependencies are not included in the top-level imports but should be imported explicitly.
    • If a class that requires an additional dependency is imported and that dependency is not installed, a descriptive error is thrown, guiding the user to install the required optional dependency group.
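
Below is a minimal sketch of how a group-annotated dependency file could be wired into setuptools extras. The file name, its one-package-per-line format, and the parse_optional_dependencies helper are illustrative assumptions, not necessarily the exact implementation in this PR:

```python
# setup.py (sketch)
# Assumed format of extra_dependencies.txt, one entry per line:
#   <package> <group1>[,<group2>,...]
# e.g.
#   gym rl
#   lvis detection
from collections import defaultdict
from setuptools import setup

def parse_optional_dependencies(path="extra_dependencies.txt"):
    """Build an extras_require dict; every package also joins the 'all' group."""
    extras = defaultdict(list)
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            package, groups = line.split()
            for group in groups.split(","):
                extras[group].append(package)
            extras["all"].append(package)
    return dict(extras)

setup(
    name="avalanche-lib",
    install_requires=["torch", "torchvision"],  # core deps only (illustrative)
    extras_require=parse_optional_dependencies(),
)
```

With something like this in place, pip install avalanche-lib[rl] pulls in only the rl group, while pip install avalanche-lib[all] installs every optional dependency.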

@ggraffieti (Member, Author) commented Jun 2, 2022

@AntonioCarta @NickLucche
In the rl_scenario (`avalanche.benchmarks.scenarios.rl_scenario`) the `gym` import is already enclosed inside a try-catch, but instead of throwing an error if gym is not installed, it defines the mock classes `Env` and `Wrapper`. I don't know if this is needed or if we can substitute it with an error that tells the user to install the rl dependencies, like in all the other cases.
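
For context, the current guard looks roughly like this (a sketch, not the verbatim module):

```python
# avalanche/benchmarks/scenarios/rl_scenario.py (sketch of the existing pattern)
try:
    from gym import Env, Wrapper
except ImportError:
    # Instead of raising, the module currently defines empty stand-ins,
    # so importing rl_scenario never fails even without gym installed.
    class Env:
        pass

    class Wrapper:
        pass
```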

@coveralls commented Jun 2, 2022

Pull Request Test Coverage Report for Build 2433873446

  • 14 of 21 (66.67%) changed or added relevant lines in 7 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage increased (+1.4%) to 71.67%

| Changes Missing Coverage | Covered Lines | Changed/Added Lines | % |
| --- | --- | --- | --- |
| avalanche/benchmarks/scenarios/rl_scenario.py | 0 | 1 | 0.0% |
| avalanche/benchmarks/classic/ctrl.py | 2 | 4 | 50.0% |
| avalanche/benchmarks/datasets/lvis_dataset/lvis_dataset.py | 2 | 4 | 50.0% |
| tests/test_ctrl.py | 3 | 5 | 60.0% |

| Totals | Coverage Status |
| --- | --- |
| Change from base Build 2428756377: | 1.4% |
| Covered Lines: | 11971 |
| Relevant Lines: | 16703 |

💛 - Coveralls

@AntonioCarta (Collaborator)

Thanks, this looks perfect! The gym class stub was used just to avoid the error. Now that we have optional dependencies, we can remove it and emit the same error as the other packages when gym is not available.
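
Assuming the other packages use a guarded import along these lines (the exact wording of the message is illustrative), the gym stub could be replaced with the same kind of descriptive error:

```python
try:
    from gym import Env, Wrapper
except ImportError:
    raise ModuleNotFoundError(
        "gym is not installed. To use the RL scenarios, install the "
        "optional rl dependencies with: pip install avalanche-lib[rl]"
    )
```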

@NickLucche (Member)

Yep, good work! We can run tests assuming all dependencies are installed, right? Currently I'm checking for the gym installation before skipping tests when testing rl_scenario; is there a better way to do it, taking these changes into account?

@ggraffieti (Member, Author)

> Yep, good work! We can run tests assuming all dependencies are installed, right? Currently I'm checking for the gym installation before skipping tests when testing rl_scenario; is there a better way to do it, taking these changes into account?

I don't think we have a better way than trying to import the dependency, catching the error, and skipping the tests if the dependency is not present. I'll take a look at the tests, but I think your solution is already the best one.
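
For instance, with unittest the check-and-skip can be expressed like this (a generic sketch; the class and test names are hypothetical, not taken from the existing suite):

```python
import unittest

try:
    import gym  # noqa: F401
    HAS_GYM = True
except ImportError:
    HAS_GYM = False

@unittest.skipUnless(HAS_GYM, "gym not installed (pip install avalanche-lib[rl])")
class RLScenarioTests(unittest.TestCase):
    def test_rl_scenario_import(self):
        # Only runs when gym (and thus the rl group) is available.
        from avalanche.benchmarks.scenarios import rl_scenario  # noqa: F401
```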

@ContinualAI-bot (Collaborator)

Oh no! It seems there are some PEP8 errors! 😕
Don't worry, you can fix them! 💪
Here's a report about the errors and where you can find them:

tests/benchmarks/scenarios/test_rl_scenario.py:5:81: E501 line too long (83 > 80 characters)
1       E501 line too long (83 > 80 characters)

@AntonioCarta merged commit 9dbe293 into ContinualAI:master on Jun 3, 2022