[refactor] Tests/update fixtures #1046
Conversation
HUB_CLOUD = "hub_cloud"

def _get_path_composition_configs(request):
no need to review this file, it's mostly copy-pasted with some minor changes
- when:
    condition: << parameters.unix-like >>
    steps:
      - run:
          name: "Running tests - Unix"
          command: |
            export GOOGLE_APPLICATION_CREDENTIALS=$HOME/.secrets/gcs.json
            python3 -m pytest --cov-report=xml --cov=./ --local --s3 --cache-chains
            python3 -m pytest --cov-report=xml --cov=./ --local --s3 --hub-cloud
removed the --cache-chains param, it's always on now because we aren't doing benchmarking with just cache chains anymore.
test fail should be trivial
@pytest.mark.xfail(raises=ReadOnlyModeError, strict=True)
@parametrize_all_storages_and_caches
def test_readonly_ds_create_tensor(storage):
moved into another test file
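For context on the xfail marker in the diff above: with strict=True, pytest treats an unexpected pass as a test failure, so the marker doubles as an assertion that the body must raise. A minimal self-contained sketch (ReadOnlyModeError here is a local stand-in, not hub's actual exception class):

```python
import pytest

class ReadOnlyModeError(Exception):
    """Local stand-in for the library's read-only error."""

@pytest.mark.xfail(raises=ReadOnlyModeError, strict=True)
def test_readonly_create_tensor_sketch():
    # Creating a tensor on a read-only dataset should raise; if it ever
    # stops raising, strict=True turns the unexpected pass into a failure.
    raise ReadOnlyModeError("cannot create tensors in read-only mode")
```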
ds.delete()
just noticed, the test_array interface check is comparing arr1 to arr1 instead of arr2. changing it to arr2 makes the test fail, which is weird.
i don't understand, should i change something? is there a broken test?
See Line 454
assert arr1.__array_interface__["data"][0] == arr1.__array_interface__["data"][0]
why do we even check the array_interface?
ohh you know what, it's probably because @farizrahman4u had this test in here when he was caching the arrays, but since it is arr1 comparing with arr1 it didn't fail
yup, you can add a TODO and a task in the backlog if the test fails and you think it's out of scope.
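The tautology is easy to demonstrate: `__array_interface__["data"][0]` is the base address of the array's underlying buffer, so an array always matches itself, while a copy with equal contents lives at a different address, which is presumably what the caching-era test actually meant to check:

```python
import numpy as np

arr1 = np.arange(10)
arr2 = arr1.copy()  # equal contents, independently allocated buffer

# The original assertion compares arr1's buffer address to itself,
# so it can never fail:
assert arr1.__array_interface__["data"][0] == arr1.__array_interface__["data"][0]

# Two live arrays with separate buffers have different base addresses,
# which is why switching the comparison to arr2 makes the test fail:
assert arr1.__array_interface__["data"][0] != arr2.__array_interface__["data"][0]
```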
# test will fail if credentials are not provided
memory_storage["key"] = b"1234"  # this data will only be stored in s3

@enabled_datasets
is there a good way now for testing a subset of all datasets? For example, I might want to just test local and s3 datasets (and not memory datasets) for transforms.
yeah, you just need to write a parametrization, see the enabled_datasets definition
action="store_true",
help="Tests using the `memory_storage` fixture will be skipped. Tests using the `storage` fixture will be "
"skipped if called with `MemoryProvider`.",
MEMORY_OPT, action="store_true", help="Memory tests will be SKIPPED if enabled."
store true won't really mean anything for memory datasets I guess. Maybe we can mention that somewhere?
not sure what you mean, --memory-skip is the MEMORY_OPT, and when you call the tests with --memory-skip, all memory tests are skipped.
I meant --keep-storage doesn't work for memory datasets, can we mention that somewhere?
yep did this in CONTRIBUTING.md and in the conftest
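A rough sketch of how the --memory-skip wiring discussed in this thread could look in a conftest, including a help-text note that --keep-storage is a no-op for memory datasets (the option name is taken from the thread; the skip logic itself is an assumption, not the repo's exact code):

```python
import pytest

MEMORY_OPT = "--memory-skip"

def pytest_addoption(parser):
    parser.addoption(
        MEMORY_OPT,
        action="store_true",
        help="Tests using the `memory_storage` fixture will be skipped. "
        "Note that `--keep-storage` has no effect on memory datasets: "
        "their buffers vanish when the test process exits.",
    )

def pytest_collection_modifyitems(config, items):
    # Assumed skip logic: mark every test that requests `memory_storage`.
    if not config.getoption(MEMORY_OPT):
        return
    skip = pytest.mark.skip(reason=f"{MEMORY_OPT} was passed")
    for item in items:
        if "memory_storage" in getattr(item, "fixturenames", ()):
            item.add_marker(skip)
```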
Codecov Report
@@            Coverage Diff             @@
##             main    #1046      +/-   ##
==========================================
+ Coverage   89.40%   89.62%   +0.22%
==========================================
  Files          91       93       +2
  Lines        4170     4095      -75
==========================================
+ Hits         3728     3670      -58
+ Misses        442      425      -17
Continue to review full report at Codecov.
this PR is not responsible for using the hub cloud dev environment; that will be in a separate PR so as not to make this one even larger.

- removes some outdated fixtures and replaces them with newer, more robust ones
- cleans up conftest.py and splits it into smaller files
- updates CONTRIBUTING.md with much more concise examples/explanations
- removes the pytest benchmarking code, since benchmarking is moving to another repository