Test that TORCH_FEATURE_VERSION guards are used where needed #6
Conversation
User description
Splits each torch library registration in the 2.10 folder into its own file. (I had a script that parsed kernel.cpp to do this automatically, but I felt that putting this responsibility on the user might be less error-prone.)
Compiles each file targeting 2.9 and asserts that compilation fails. (There are two 2.9 kernels we use as negative tests, where compilation is expected to succeed.)
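The compile-and-expect-failure check described above can be sketched roughly as follows. The compiler invocation, the `-DTORCH_TARGET_VERSION` macro, and the version value are illustrative assumptions; the PR's actual harness may pin the target version differently:

```python
import subprocess

def build_compile_cmd(src_path, target_version):
    # Hypothetical flag/macro names: the real harness may pin the
    # target version another way (e.g. through the build system).
    return [
        "g++", "-c", str(src_path), "-o", "/dev/null",
        f"-DTORCH_TARGET_VERSION={target_version}",
    ]

def expect_compile_failure(src_path, target_version="0x0209000000000000"):
    """Assert that a 2.10-only kernel fails to compile when targeting 2.9."""
    result = subprocess.run(
        build_compile_cmd(src_path, target_version),
        capture_output=True, text=True,
    )
    assert result.returncode != 0, (
        f"{src_path} unexpectedly compiled against {target_version}; "
        "it may be missing a TORCH_FEATURE_VERSION guard"
    )
```

A kernel that uses a 2.10-only API without a guard should trip the assertion, while the negative-test kernels should compile and fail it only if the infrastructure itself is broken.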
Stack from ghstack (oldest at bottom):
PR Type
Tests, Enhancement
Description
Split monolithic kernel.cpp into individual function files for better modularity
Created comprehensive test suite verifying TORCH_FEATURE_VERSION guards for 2.10+ APIs
Added negative tests (mv_tensor_accessor files) to validate test infrastructure
Dynamically generates test methods for each .cpp and .cu file in csrc directory
File Walkthrough
3 files:
- New test suite for version compatibility verification
- Added negative test file for CPU tensor accessor
- Added negative test file for CUDA tensor accessor

16 files:
- Removed monolithic kernel file, split into separate files
- Extracted foreach_mul function into dedicated file
- Extracted foreach_mul_ function into dedicated file
- Extracted make_tensor_clones_and_call_foreach function into dedicated file
- Extracted my_empty function into dedicated file
- Extracted my_reshape function into dedicated file
- Extracted my_view function into dedicated file
- Extracted test_tensor_device function into dedicated file
- Extracted test_device_constructor function into dedicated file
- Extracted test_device_equality function into dedicated file
- Extracted test_device_set_index function into dedicated file
- Extracted test_device_index function into dedicated file
- Extracted test_device_is_cuda function into dedicated file
- Extracted test_device_is_cpu function into dedicated file
- Extracted test_parallel_for function into dedicated file
- Extracted test_get_num_threads function into dedicated file

1 file:
- Added shared tensor accessor kernel header file