Testing Suite Updates #2499
Comments
Just leaving some notes on how to adjust the configuration so this can be tested without interfering with the existing test code. This may change as we continue through development. We want to maintain the existing framework until the new suite is ready to replace it, but do not want to cause any issues between them.

To run with the Prototype Test Suite:

To revert before pushing so the existing test suite runs:

Additional Notes
@pierce314159, @hokiegeek2 - Wanted to get your input on how we should handle problem size, because I think there are a few ways we can go about it. First, I don't think the problem size needs to apply to every test case, but for IO, for example, we definitely need to be able to adjust it. I have laid out some options below.

Option 1
Option 2
Option 3
Option 4

Personally, I am leaning towards Option 2 because it gives us the most flexibility, and the parameters are recorded in the JSON output automatically, so we will not need extra code to add that metadata to the JSON output. I wanted to get input before putting the initial architecture PR up so I can adjust a few tests to reflect the decision here.
@Ethan-DeBandi99 I also vote option 2
I set up the tests that currently use the problem size from the configuration to use Option 2, and verified that it has the desired outcome. I do want to note that we will need to update how the benchmarks use the problem size when we integrate them with the updated tests, so that both use the same configuration.
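The following is an illustrative sketch of the "Option 2" idea, not Arkouda's actual code: `run_benchmark` and the record layout are hypothetical. The point is that when the problem size is passed to the run as a parameter, it lands in the JSON output on its own, with no extra metadata bookkeeping.

```python
import json

# Hypothetical sketch (not the Arkouda implementation): the problem
# size travels with the run as a parameter, so it is recorded in the
# JSON output automatically.

def run_benchmark(name, prob_size, fn):
    """Run fn at the given problem size and return a record that
    already carries the parameters that produced it."""
    return {"test": name, "prob_size": prob_size, "result": fn(prob_size)}

record = run_benchmark("sum_range", 10, lambda n: sum(range(n)))
print(json.dumps(record))  # prob_size appears in the output for free
```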
Just noting when I run

If I update line 9 in cdd15d3

This is pretty weird because it needs to be a different value on my laptop than on Ethan's. It's not pressing since we can all run the tests, but we should definitely figure this out at some point.
@pierce314159 I am starting to see the same issue. Let's keep an eye on it and if it continues, we can update. |
This is a future work idea, but I wanted to capture it so I don't forget when I come back. I think a nice way to do at least edge-case testing eventually is to have a big dictionary that uses a str of the objtype/dtype as the key and the edge-case arkouda object as the value. This would live somewhere accessible by all the test files. This is an oversimplification, but I think it could work pretty well with some tweaking. If we want to grab a dictionary of edge cases with only certain dtypes, we could pass in those column names:

edge_case_dict = global_edge_case_dict[dtypes]

and if we want only arrays of each edge case, we could do something like:

edge_case_arr = global_edge_case_dict[dtype]
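A minimal sketch of the global edge-case dictionary described above. The names `global_edge_case_dict` and `edge_cases` come from the comment or are hypothetical, and plain Python lists stand in for the arkouda objects it describes.

```python
# Hypothetical sketch of the shared edge-case dictionary idea.
# Plain Python lists stand in for arkouda objects here; the real
# version would hold pdarrays, Strings, Categoricals, etc.

global_edge_case_dict = {
    "int64": [0, -1, 2**63 - 1, -(2**63)],
    "float64": [0.0, -0.0, float("inf"), float("-inf"), float("nan")],
    "str": ["", " ", "a" * 1000],
}

def edge_cases(*dtypes):
    """Mirror `edge_case_dict = global_edge_case_dict[dtypes]`:
    return only the requested dtype columns."""
    return {dt: global_edge_case_dict[dt] for dt in dtypes}

# Grab only the int64/float64 edge cases, as in the comment's example
sub = edge_cases("int64", "float64")
```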
Replaced by the milestone: https://github.com/Bears-R-Us/arkouda/milestone/9 |
The ideas conveyed here were previously explored, but the updates were not completed. Arkouda needs more robust, scalable testing that interfaces with our benchmarking for simplified maintenance. Key components are listed below:
Steps to Complete
Issues will be added for these at a later date.
Remove any of the prototype code from the previous research into this so that we have a clean slate to start. (Remove Old Testing Prototype Code #2500)
Configure the architecture and configuration files. This will include pytest.ini and conftest.py. We will only want to maintain one set of configuration for testing and benchmarking. Benchmarking will be turned off by default and will be manually enabled when needed. (New Testing Architecture #2504)
Configure tests to scale and use the same/similar parameters for problem size and configuration as the benchmarks (Configuration included as part of New Testing Architecture #2504. This will require ongoing updates to handle different cases).
Ensure the Benchmark Correctness checks from old benchmarking system are configured and runnable.
Convert all tests to new format
Simple Conversions
- alignment_test.py (Conversion for new test framework #2536)
- array_view_test.py (conversion for new test framework #2537)
- bigint_agg_test.py (Conversion for new test framework #2570)
- bitops_test.py (Conversion for new test framework #2572)
- client_dtypes_test.py (conversion for new test framework #2583)
- dtypes_test.py (Conversion for new test framework #2605)
- index_test.py (Conversion for new test framework #2616)
- indexing_test.py (Conversion for new test framework #2620)
- join_tests.py (Conversion for new test framework #2626)
- logger_test.py (Conversion for new test framework #2648)
- pdarray_creation_test.py (Conversion for new test framework #2573)
- security_test.py (for new test framework #2640)
- segarray_test.py (to new test framework #2642)
- series_test.py (Conversion for new test framework #2651)
- sort_test.py (Conversion for new test framework #2654)
- stats_test.py (Conversion for new test framework #2656)
- string_test.py (conversion for new test framework #2700)
- where_test.py (refactor for new testing framework #2705)
- io_util_test.py (reformat for new test framework #2668)

Complex Conversions
- categorical_test.py (conversion for new test framework #2686)
- client_test.py (conversion for new test framework #2538)
- coargsort_test.py (conversion for new test framework #2625)
- datetime_test.py (conversion to new framework #2697)
- extrema_test.py (conversion for new test framework #2694)
- groupby_test.py (Conversion to new test framework #2607)
- io_test.py / parquet_test.py / import_export_test.py (conversion for new test framework #2539)
- message_test.py (Conversion for new test framework #2542)
- nan_test.py (conversion to new test framework #2610)
- operator_test.py (conversion to new framework #2688)
- regex_test.py (conversion to new test framework #2659)
- setops_test.py (Conversion to new test framework #2547)
- symbol_table_test.py (refactor to new test framework #2684)

check.py should be reviewed to determine if any testing needs to be moved to another file. Otherwise, it will be removed.
Rename proto-tests and make commands #3669
Configure benchmarks so that testing and benchmarks use same configuration files
Verify that all test files pass flake8. Correct any that do not. #2768
Ensure CI is functional. Add flake8 check to CI for tests.
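The shared-configuration step above (one pytest.ini/conftest.py pair serving both tests and benchmarks, with benchmarking off by default) could be sketched as a conftest.py fragment like the following. This is a hypothetical sketch, not Arkouda's actual configuration; the option names `--benchmark` and `--size` are assumptions.

```python
# Hypothetical conftest.py sketch (not Arkouda's actual file): one
# configuration shared by tests and benchmarks, with benchmarking
# off by default and a single configurable problem size.

def pytest_addoption(parser):
    # Benchmarks are opt-in: a plain `pytest` invocation runs only tests.
    parser.addoption("--benchmark", action="store_true", default=False,
                     help="also run benchmark measurements")
    # One problem-size knob shared by tests and benchmarks.
    parser.addoption("--size", action="store", type=int, default=10**4,
                     help="problem size for scaling tests")

# In a real conftest.py a @pytest.fixture would expose the value via
# request.config.getoption("--size") to the test files.
```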