
[refactor] Tests/update fixtures #1046

Merged: 36 commits from tests/update-fixtures into main on Jul 15, 2021

Conversation

verbose-void (Contributor) commented Jul 13, 2021

This PR is not responsible for using the hub cloud dev environment; that will come in a separate PR so as not to make this one even larger.

It removes some outdated fixtures and replaces them with newer, more robust ones. It also cleans up conftest.py by splitting it into smaller files (a sketch of one way such a split can be wired is shown below).

CONTRIBUTING.md was updated with much more concise examples/explanations.

The pytest benchmarking code is also removed, because we are moving benchmarking to another repository.
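One way to read "splitting it into smaller files": the top-level conftest.py can simply register the split-out fixture modules as pytest plugins. This is a minimal sketch; the dotted module paths match files that appear in the Codecov report below, but whether the PR wires them up exactly this way is an assumption:

```python
# conftest.py -- sketch of registering split-out fixture modules.
# Module paths taken from the Codecov report; the wiring itself is assumed.
pytest_plugins = [
    "hub.tests.path_fixtures",    # path/url fixtures (local, s3, hub cloud, ...)
    "hub.tests.client_fixtures",  # hub-cloud client/credential fixtures
]
```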

verbose-void changed the title from "Tests/update fixtures" to "[refactor] Tests/update fixtures" on Jul 13, 2021
```python
HUB_CLOUD = "hub_cloud"


def _get_path_composition_configs(request):
```
verbose-void (author): No need to review this file; it's mostly copy-pasted with some minor changes.

```yaml
- when:
    condition: << parameters.unix-like >>
    steps:
      - run:
          name: "Running tests - Unix"
          command: |
            export GOOGLE_APPLICATION_CREDENTIALS=$HOME/.secrets/gcs.json
            python3 -m pytest --cov-report=xml --cov=./ --local --s3 --cache-chains
            python3 -m pytest --cov-report=xml --cov=./ --local --s3 --hub-cloud
```
verbose-void (author): Removed the --cache-chains param; it's effectively always on now, because we aren't doing benchmarking with just cache chains anymore.

verbose-void (author): The test failure should be trivial.


```python
@pytest.mark.xfail(raises=ReadOnlyModeError, strict=True)
@parametrize_all_storages_and_caches
def test_readonly_ds_create_tensor(storage):
```
verbose-void (author): Moved into another test file.
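For context on the decorator above: with strict=True, pytest treats an unexpected pass (XPASS) as a failure, so the test only succeeds when the named exception is actually raised. A self-contained sketch with stand-in types (the real ReadOnlyModeError and storage fixtures live in hub's code, so everything below is illustrative):

```python
import pytest


class ReadOnlyModeError(Exception):
    """Stand-in for hub's real exception of the same name."""


class FakeReadOnlyDataset:
    def create_tensor(self, name):
        raise ReadOnlyModeError("dataset was opened in read-only mode")


@pytest.fixture
def read_only_ds():
    # Hypothetical fixture; the real test parametrizes storages/caches.
    return FakeReadOnlyDataset()


@pytest.mark.xfail(raises=ReadOnlyModeError, strict=True)
def test_readonly_ds_create_tensor(read_only_ds):
    # strict=True: if no ReadOnlyModeError is raised, pytest reports
    # the unexpected pass as a failure instead of silently XPASS-ing.
    read_only_ds.create_tensor("image")
```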


```python
ds.delete()
```


Reviewer (contributor): Just noticed that test_array_interface compares arr1 to arr1 instead of arr2. Changing it to arr2 makes the test fail, which is weird.

verbose-void (author): I don't understand; should I change something? Is there a broken test?

Reviewer (contributor): See line 454:

```python
assert arr1.__array_interface__["data"][0] == arr1.__array_interface__["data"][0]
```

verbose-void (author): Why do we even check the array_interface?

verbose-void (author): Ohh, you know what, it's probably because @farizrahman4u had this test in here when he was caching the arrays; since it compares arr1 with arr1, it never failed.

Reviewer (contributor): Yup. You can add a TODO and a backlog task if the test fails and you think it's out of scope.
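For readers unfamiliar with the check being discussed: `__array_interface__["data"][0]` is the raw buffer address of a numpy array, so comparing addresses tells you whether two reads returned the same cached buffer or genuinely separate data. A runnable illustration in plain numpy (not hub's test code):

```python
import numpy as np

arr1 = np.arange(10)
view = arr1[:]       # a basic slice is a view that shares arr1's buffer
arr2 = arr1.copy()   # a copy gets a fresh buffer

# Same buffer address -> the data was not copied (e.g. served from a cache).
assert arr1.__array_interface__["data"][0] == view.__array_interface__["data"][0]

# Different buffer address -> a genuinely separate array.
assert arr1.__array_interface__["data"][0] != arr2.__array_interface__["data"][0]
```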

```python
# test will fail if credentials are not provided
memory_storage["key"] = b"1234"  # this data will only be stored in s3
```

```python
@enabled_datasets
```
Reviewer (contributor): Is there now a good way to test only a subset of all datasets? For example, I might want to test just local and s3 datasets (and not memory datasets) for transforms.

verbose-void (author): Yeah, you just need to write a parametrization; see the enabled_datasets definition. (A sketch follows below.)
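A minimal sketch of what such a subset parametrization could look like, modeled on the enabled_datasets pattern. The fixture names local_ds and s3_ds are assumptions, and the stand-in fixtures exist only to make the example self-contained:

```python
import pytest

# Parametrize over only the dataset fixtures you care about.
enabled_non_memory_datasets = pytest.mark.parametrize(
    "ds",
    ["local_ds", "s3_ds"],  # note: "memory_ds" deliberately omitted
    indirect=True,
)


@pytest.fixture
def ds(request):
    # Resolve the parametrized fixture name into the actual fixture value.
    return request.getfixturevalue(request.param)


# Stand-in fixtures so this sketch runs on its own; hub's real fixtures
# would build actual datasets against each storage backend.
@pytest.fixture
def local_ds():
    return "local-dataset-stand-in"


@pytest.fixture
def s3_ds():
    return "s3-dataset-stand-in"


@enabled_non_memory_datasets
def test_transform(ds):
    assert ds is not None
```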

action="store_true",
help="Tests using the `memory_storage` fixture will be skipped. Tests using the `storage` fixture will be "
"skipped if called with `MemoryProvider`.",
MEMORY_OPT, action="store_true", help="Memory tests will be SKIPPED if enabled."
Reviewer (contributor): store_true won't really mean anything for memory datasets, I guess. Maybe we can mention that somewhere?

verbose-void (author): Not sure what you mean; --memory-skip is the MEMORY_OPT, and when you call the tests with --memory-skip, all memory tests are skipped.

Reviewer (contributor): I meant that --keep-storage doesn't work for memory datasets; can we mention that somewhere?

verbose-void (author): Yep, did this in CONTRIBUTING.md and in the conftest. (See the sketch below for how such a flag is typically wired.)
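For orientation, here is how a flag like --memory-skip can be wired in pytest: an option registered in pytest_addoption plus a skip inside the fixture. This is a generic sketch assuming the option and fixture names discussed above, not hub's exact conftest:

```python
# conftest.py -- sketch of a "--memory-skip" style flag.
import pytest

MEMORY_OPT = "--memory-skip"


def pytest_addoption(parser):
    parser.addoption(
        MEMORY_OPT,
        action="store_true",
        help="Tests using the `memory_storage` fixture will be skipped.",
    )


@pytest.fixture
def memory_storage(request):
    # Skip any test that requests this fixture when the flag is set.
    if request.config.getoption(MEMORY_OPT):
        pytest.skip(f"{MEMORY_OPT} was passed")
    return {}  # stand-in for hub's MemoryProvider
```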

A further review thread on hub/api/dataset.py was marked outdated and resolved.
codecov bot commented Jul 15, 2021

Codecov Report

Merging #1046 (65071f8) into main (07fb8d2) will increase coverage by 0.22%. The diff coverage is 94.57%.
```
@@            Coverage Diff             @@
##             main    #1046      +/-   ##
==========================================
+ Coverage   89.40%   89.62%   +0.22%
==========================================
  Files          91       93       +2
  Lines        4170     4095      -75
==========================================
- Hits         3728     3670      -58
+ Misses        442      425      -17
```
| Impacted Files | Coverage Δ |
| --- | --- |
| hub/client/utils.py | 61.81% <ø> (-1.98%) ⬇️ |
| hub/integrations/tests/test_tensorflow.py | 100.00% <ø> (ø) |
| hub/client/client.py | 92.20% <80.00%> (-1.13%) ⬇️ |
| hub/util/exceptions.py | 76.55% <80.00%> (+0.67%) ⬆️ |
| hub/tests/path_fixtures.py | 87.32% <87.32%> (ø) |
| hub/api/tests/test_api.py | 99.09% <90.62%> (-0.07%) ⬇️ |
| hub/tests/client_fixtures.py | 94.44% <94.44%> (ø) |
| conftest.py | 100.00% <100.00%> (+8.75%) ⬆️ |
| hub/api/dataset.py | 92.14% <100.00%> (+0.90%) ⬆️ |
| hub/api/tests/test_api_with_compression.py | 100.00% <100.00%> (ø) |
| ... and 22 more | |

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Last update 07fb8d2...65071f8.

verbose-void merged commit dac569c into main on Jul 15, 2021.
verbose-void deleted the tests/update-fixtures branch on July 15, 2021 at 21:42.