[FBcode->GH] Fix pytorch/vision/test:torchvision_models test_maskrcnn_resnet50_fpn_cuda #3675
Conversation
Reviewed By: datumbox Differential Revision: D25371946 fbshipit-source-id: cbf748a1e82638f936a21d12bcfea91b4779c9a5
Summary:
* Encapsulate and standardize deform_conv2d (pytorch#3074)
  * Rename files.
  * Standardize method names.
  * Add anonymous namespaces.
  * Apply C++ naming rules and align variable names across headers and cpp files.
  * Sync names across implementations.
  * Rename deform_conv2d.h to deform_conv2d.cpp
  * Use header files:
    - Create header files for the kernel implementations and remove definitions from vision_*.h files.
    - Eliminate unnecessary headers and ensure all cpp files include their headers.
  * Change the naming convention for kernel implementations.
  * Remove the _param postfix from the variables and standardize names.
  * Expose public forward/backward methods to the C++ API and move methods around to minimize git blame changes.
* Encapsulate and standardize nms (pytorch#3081)
  * Sync, where possible, the names of functions across devices.
  * Add all internal functions in anonymous namespaces.
  * Rename C++/CUDA kernel files and move operator code from header to cpp file.
  * Create for each cpp file a separate header file with "public" functions.
  * Remove unnecessary repeated includes.
  * Update CMakeLists.txt to include all headers.
* Encapsulate and standardize ps_roi_align (pytorch#3082)
  * Rename C++ files & methods according to recommended naming conventions and align them with Python's API. Sync, where possible, the names of functions across devices.
  * Add all internal functions in anonymous namespaces.
  * Rename C++/CUDA kernel files and move operator code from header to cpp file.
  * Create for each cpp file a separate header file with "public" functions.
  * Remove unnecessary repeated includes.
* Encapsulate and standardize ps_roi_pool (pytorch#3084)
  * Rename C++ files & methods according to recommended naming conventions and align them with Python's API.
  * Add all internal functions in anonymous namespaces.
  * Rename C++/CUDA kernel files and move operator code from header to cpp file.
  * Create for each cpp file a separate header file with "public" functions.
  * Remove unnecessary repeated includes.
* Encapsulate and standardize roi_align (pytorch#3085)
  * Rename C++ files & methods according to recommended naming conventions and align them with Python's API.
  * Add all internal functions in anonymous namespaces.
  * Rename C++/CUDA kernel files and move operator code from header to cpp file.
  * Create for each cpp file a separate header file with "public" functions.
  * Remove unnecessary repeated includes.
* Encapsulate and standardize roi_pool (pytorch#3088)
  * Rename C++ files & methods according to recommended naming conventions and align them with Python's API.
  * Add all internal functions in anonymous namespaces.
  * Sync variable names between the cpp files and their header files.
  * Rename C++/CUDA kernel files and move operator code from header to cpp file.
  * Create for each cpp file a separate header file with "public" functions.
  * Remove unnecessary repeated includes.
* Encapsulate and standardize new_empty_tensor_op (pytorch#3089)
  * Rename C++ files & methods according to recommended naming conventions and align them with Python's API.
  * Create for each cpp file a separate header file with "public" functions.
  * Add all internal functions in anonymous namespaces.
  * Convert all possible parameters to const refs.
  * Remove unnecessary repeated includes.
* Encapsulate and standardize C++ Ops - Clean up (pytorch#3094)
  * Remove unnecessary repeated includes.
  * Remove unnecessary vision_cpu.h, vision_cuda.h, autocast.h.
  * Fix naming conventions and correct method names in macros.
  * Turn on the clang formatter for cu files and fix broken styles.
  * Replace "#ifndef ... #define ... #endif" with "#pragma once" in header files.
* Adding operator methods in vision::ops namespace. (pytorch#3096)
  * Add operator methods in the vision::ops namespace.
  * Replace general.h with macros.h
  * Add vision.h to the necessary cpp files.

Pull Request resolved: pytorch#3133 Reviewed By: datumbox Differential Revision: D25369675 Pulled By: fmassa fbshipit-source-id: 36b57ed735394a799f744ce161f2667c5f6c95c2
Summary: This image was moved to `test/assets/encode_jpeg` in pytorch#2988 but was not removed in this branch for some reason Pull Request resolved: pytorch#3139 Reviewed By: datumbox Differential Revision: D25395596 Pulled By: fmassa fbshipit-source-id: a0afdec2d1da41e6743d7d723e71ffde442cf3a7
Summary: This was already deleted in pytorch#2671 but for some reason didn't get picked in the merge Pull Request resolved: pytorch#3140 Reviewed By: datumbox Differential Revision: D25395647 Pulled By: fmassa fbshipit-source-id: c8eb3692266deaccf5f00b314e84ec102bea60cb
Summary:
* Fill color support for tensor affine transforms
* PEP fix
* Docstring changes and float support
* Docstring update for transforms and float type cast
* Cast only for Tensor
* Temporary patch for lack of Union type support, plus an extra unit test
* More plausible bilinear filling for tensors
* Keep things simple & new docstrings
* Fix lint and other issues after merge
* Make it one line
* Docstring and some code modifications
* More tests and corresponding changes for transforms and docstring changes
* Simplify test configs
* Update test_functional_tensor.py
* Move assertions

Reviewed By: datumbox Differential Revision: D25396712 fbshipit-source-id: 7eb32024c91b67ffa154a481aa592c6e57b3c480 Co-authored-by: vfdev <vfdev.5@gmail.com>
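The "fill color" support above amounts to: when an affine transform maps an output pixel back outside the source image, that pixel takes the fill value instead of sampling. A minimal pure-Python sketch with nearest-neighbor sampling (this is illustrative only, not torchvision's tensor implementation, which uses grid_sample and supports bilinear filling):

```python
import math

def affine_nearest(img, angle_deg, fill=0):
    """Rotate a 2D grid (list of lists) about its center via inverse
    mapping with nearest-neighbor sampling. Output pixels whose source
    coordinates fall outside the image receive `fill`."""
    h, w = len(img), len(img[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    theta = math.radians(angle_deg)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Inverse rotation: where did this output pixel come from?
            sx = cos_t * (x - cx) + sin_t * (y - cy) + cx
            sy = -sin_t * (x - cx) + cos_t * (y - cy) + cy
            ix, iy = round(sx), round(sy)
            if 0 <= ix < w and 0 <= iy < h:
                out[y][x] = img[iy][ix]
    return out
```

A zero-degree rotation is the identity; for a non-square input, rotation pushes some source coordinates out of bounds and the fill value shows through.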
…rch#3113) Reviewed By: datumbox Differential Revision: D25396711 fbshipit-source-id: 0430fb1c431634d3bbaef45adf1d9ebfbcc4515e
…ytorch#3131) Reviewed By: datumbox Differential Revision: D25396710 fbshipit-source-id: da74eca9d755d887f0a983432bf526e00d32cfcc
…ch#3136) Reviewed By: datumbox Differential Revision: D25396708 fbshipit-source-id: 7aa27cd76f135c0ccca6b8412fec9388b3df0919
Summary:
* Remove TravisCI. Add hub tests to CircleCI; coverage and ONNX are still missing.
* Install torchvision dependencies on CI

Pull Request resolved: pytorch#3141 Reviewed By: fmassa Differential Revision: D25397167 Pulled By: fmassa fbshipit-source-id: 57e3de333176f0156afebc1a2be5e37a2bbca45c
Summary:
* Moving deform_conv2d op registration.
* Moving nms op registration.
* Moving new_empty_tensor op registration.
* Moving ps_roi_align op registration.
* Moving ps_roi_pool op registration.
* Moving roi_align op registration.
* Moving roi_pool op registration.
* Restoring headers for forward/backward and fixing styles.
* Restoring the test hack on Windows.
* Stricter header inclusion.

Reviewed By: fmassa Differential Revision: D25460675 fbshipit-source-id: 8b7c65320d41c9c0f43815d156957c1e40aef106
Reviewed By: fmassa Differential Revision: D25460678 fbshipit-source-id: 708f1a57091bed84381895184ae77a866eb1762b
Summary: * [WIP] Added CONTRIBUTING.md * Updated CONTRIBUTING.md * Update Reviewed By: fmassa Differential Revision: D25460676 fbshipit-source-id: b28d9f5f2530363aa371b6ef7d09fa0cdb96bba9
Reviewed By: fmassa Differential Revision: D25460677 fbshipit-source-id: 3d0324539ca5fc5f3cf7d4de0b63659cf469c0f2
Summary: * Reduce unnecessary header inclusions in models and io. * Move autocast to separate folder and hide autograd implementation in an anonymous namespace. * Moving files in subfolders. Reviewed By: fmassa Differential Revision: D25461523 fbshipit-source-id: 756eeb6848aacaa474de4825ed4c1045d17e2cea
Reviewed By: datumbox Differential Revision: D25531036 fbshipit-source-id: 69380127ff97efe6172c1774985a180a8f7aa506
Summary: * Enable ONNX test in circle CI Reviewed By: datumbox Differential Revision: D25531035 fbshipit-source-id: 022b8613ac5bd8cb7165e9cbaa9daf99a9e1694e
Summary: Co-authored-by: Francisco Massa <fvsmassa@gmail.com> Reviewed By: datumbox Differential Revision: D25531033 fbshipit-source-id: 735bdd211edfb49bc97fae8558049214d6a4b169
Reviewed By: datumbox Differential Revision: D25531038 fbshipit-source-id: 481434d15d0709417b3c36ff2b13e10a7994dff2
Summary: * Adapt to new torch export API for dictionary Reviewed By: datumbox Differential Revision: D25531037 fbshipit-source-id: 123002753d63aee9feaafcafb077c80af1652331
Summary:
* Fix an issue where the ShuffleNetV2 model was exported to a wrong ONNX file if the dynamic_axes field was provided.
* Add a unit test for the bug fix.
* Fix flake8 issue.
* Don't access each element in x.shape; use x.size() instead.

Reviewed By: datumbox Differential Revision: D25531034 fbshipit-source-id: 9a7ea77ba6ac5a5b80cb15e7f6cba1a8a47f9289 Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>
…rch#3163) Summary:
* Removing VISION_API from backward() methods and adding an ops.h
* Fixing clang format.

Reviewed By: datumbox Differential Revision: D25531032 fbshipit-source-id: a9241ac53196ca14647002a29c476909b50ee064
Summary:
* Invert Transform (pytorch#3104)
  * Adding invert operator.
  * Make use of the _assert_channels().
  * Update upper bound value.
* Remove private doc from invert; create or reuse generic testing methods to avoid code duplication in the tests. (pytorch#3106)
* Create posterize transformation and refactor common methods to assist reuse. (pytorch#3108)
* Implement the solarize transform. (pytorch#3112)
* Implement the adjust_sharpness transform (pytorch#3114)
  * Adding functional operator for sharpness.
  * Adding transforms for sharpness.
  * Handling tiny images and adding a test.
* Implement the autocontrast transform. (pytorch#3117)
* Implement the equalize transform (pytorch#3119)
  * Implement the equalize transform.
  * Turn off deterministic for histogram.
* Fixing test. (pytorch#3126)
* Force ratio to be float to avoid numeric overflows on blend. (pytorch#3127)
* Separate the tests of Adjust Sharpness from ColorJitter. (pytorch#3128)
* Add AutoAugment Policies and main Transform (pytorch#3142)
  * Separate the tests of Adjust Sharpness from ColorJitter.
  * Initial implementation, not yet jit-able.
  * AutoAugment passing JIT.
  * Adding tests/docs, changing formatting.
  * Update test.
  * Fix formats.
  * Fix documentation and imports.
  * Apply changes from code review:
    - Move the transformations outside of AutoAugment to a separate method.
    - Rename the degenerate method for sharpness for better clarity.
  * Update torchvision/transforms/functional.py
  * Apply more changes from code review:
    - Add InterpolationMode parameter.
    - Move all declarations away from the AutoAugment constructor and into the private method.
  * Update documentation.
  * Apply suggestions from code review.
  * Apply changes from code review:
    - Refactor code to eliminate as many to() and clamp() calls as possible.
    - Reuse methods where possible.
    - Apply speed-ups.
  * Replacing pad.
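The per-channel semantics behind several of the ops added above (invert, posterize, solarize, and the blend that "force ratio to be float" guards) follow the classic PIL definitions for 8-bit values. A pure-Python sketch of those semantics, not torchvision's batched tensor implementation:

```python
def invert(pixel):
    """Flip an 8-bit channel value: 0 <-> 255."""
    return 255 - pixel

def posterize(pixel, bits):
    """Keep only the `bits` most significant bits of an 8-bit value."""
    mask = 0xFF & ~(2 ** (8 - bits) - 1)
    return pixel & mask

def solarize(pixel, threshold):
    """Invert channel values at or above `threshold`."""
    return 255 - pixel if pixel >= threshold else pixel

def blend(a, b, ratio):
    """Interpolate two channel values. `ratio` is forced to float so the
    intermediate product cannot wrap around in fixed-width integer math,
    and the result is clamped back into [0, 255]."""
    ratio = float(ratio)
    return int(max(0.0, min(255.0, ratio * a + (1.0 - ratio) * b)))
```

Sharpness, contrast, and color adjustments are all expressible as `blend(enhanced, original, factor)`, which is why the float ratio and the clamp matter for every one of them.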
Reviewed By: fmassa Differential Revision: D25679210 fbshipit-source-id: f7b4a086dc9479e44f93e508d6070280cbc9bdac Co-authored-by: vfdev <vfdev.5@gmail.com> Co-authored-by: Francisco Massa <fvsmassa@gmail.com>
Summary: * Replacing all torch.jit.annotations with typing * Replacing remaining typing Reviewed By: fmassa Differential Revision: D25679213 fbshipit-source-id: 297d52d7ed1322d350619e298a9c2bbaa771d2a2
Summary:
* Added the helper method for dimension checks.
* Unit tests for the dimension check function in functional_tensor.
* Code formatting and typing.
* Moved torch image check after tensor check.
* Unit test cases for test_assert_image_tensor added and refactored.
* Separate unit test case file deleted.
* assert_image_tensor added to the newly created 6 methods.
* Test cases added for the new 6 methods.
* Removed wrongly pasted posterize method and added solarize method for testing.

Reviewed By: fmassa Differential Revision: D25679214 fbshipit-source-id: 60ca5c1e6a653195a3dd07755b7ac7fa6d4eaf4b Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>
Summary: * Fixing incorrect doc example in MNASNet. * Fixing incorrect output. Reviewed By: fmassa Differential Revision: D25679208 fbshipit-source-id: 2fc9db36df66104cdf3a29bc514ded93e3131d0f
Summary: * Moving mobilenet.py to mobilenetv2.py * Adding mobilenet.py for BC. * Extending ConvBNReLU for reuse. * Reduce import scope on mobilenet to only the public and versioned classes and methods. Reviewed By: fmassa Differential Revision: D25679211 fbshipit-source-id: 72d8eadeef42a93879bbe4a61b6611023db29669
Summary:
* Update ImageReadMode error messages; add a newline at the end of image_read_mode.h; replace define with const in image_read_mode.h; add documentation to the ImageReadMode enum
* Update readpng_cpu and readjpeg_cpu error messages
* Update image.py documentation

Reviewed By: fmassa Differential Revision: D25679209 fbshipit-source-id: 4376dbb0e005ae4a09b908daada8a5a5cfd9b2a8
Reviewed By: fmassa Differential Revision: D25679212 fbshipit-source-id: ab567b28e454bb5e8ac741e6d7798786dcbb6b3d
Summary: Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com> Reviewed By: mthrok Differential Revision: D25680403 fbshipit-source-id: 0a9e91c7f9af034b581487dbef352b20ac4f3936
Summary:
* Initial doc clean-up
* Remove all private docs
* Rename files
* Highlight backend inconsistencies
* Sequence and number
* [Need checking] AutoAugment-related doc change
* Revert name changes

Reviewed By: datumbox Differential Revision: D25954563 fbshipit-source-id: 3b73d924ec4e23d58416a8d38b554b4e16e64059
Summary: Reviewed By: NicolasHug Differential Revision: D27706956 fbshipit-source-id: c5e3f4030b9df7081d72ea9a2e307cadb9f0a676 Co-authored-by: Nicolas Hug <contact@nicolas-hug.com> Co-authored-by: Nicolas Hug <nicolashug@fb.com>
Summary: * Added defusedxml to parse untrusted XML data * Added typecheck disable for defusedxml Reviewed By: NicolasHug Differential Revision: D27706948 fbshipit-source-id: 4334745d939c83e763ea5508b6284275c5c7bc32 Co-authored-by: Nicolas Hug <nicolashug@fb.com>
Summary: * Fixed return docstrings * Added some refs and corrected some parts * more refs, and a note about dtypes Reviewed By: NicolasHug Differential Revision: D27706952 fbshipit-source-id: 8d6a7cc7fa72f446a163a102db5bc53f1465dd8d Co-authored-by: Francisco Massa <fvsmassa@gmail.com>
Summary:
* WIP
* clang
* docs
* Extracted out common utils
* Use a better quantization function and pass tensors as parameters
* Proper dequantization
* Some tests
* Dequantization optimization; seems to gain a few ms
* clang-format
* again
* More correct test. Had to remove the optimization although it almost works
* Also test aligned=True
* Remove useless part
* More docs and comments
* Put back optimization with a more robust test
* Added check for index upper bound
* Avoid possible overflow
* Move common function into common.h
* oops
* scale=1, zero_point=0 makes more sense
* Force batch size of 1 to prevent any indexing bug
* format
* format again
* Updated docstring
* Put back description comment for pre_calc_bilinear_interpolate
* Revert most changes to docstring as it's taken care of in another PR

Reviewed By: NicolasHug Differential Revision: D27706946 fbshipit-source-id: 2ae1614c214ea676b4f7705dc0716efd9f34330e
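The quantize/dequantize round trip this work exercises follows the standard affine quantization scheme. A pure-Python sketch of that arithmetic (not torchvision's kernel code): quantization maps a float to a clamped integer, dequantization maps it back, and the round trip loses at most half a scale step. With scale=1 and zero_point=0, as the commit notes, integers survive the round trip exactly, which makes test expectations simpler.

```python
def quantize(x, scale, zero_point, qmin=0, qmax=255):
    """Affine quantization: q = clamp(round(x / scale) + zero_point)."""
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    """Inverse map back to float: x = scale * (q - zero_point)."""
    return scale * (q - zero_point)
```

The round-trip error bound (half a scale step, for values inside the representable range) is what the tolerance in the quantized roi_align tests has to absorb.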
Reviewed By: NicolasHug Differential Revision: D27706953 fbshipit-source-id: c3810183d8acb7feec6868e28705ee25806acc27
Summary: * Make two methods as similar as possible. * Introducing conditional fake casting. * Change the casting mechanism. Reviewed By: NicolasHug Differential Revision: D27706950 fbshipit-source-id: ef7503817cd64ffc8723fec89f1cd94647490eaf
Summary:
* Added KITTI dataset
* Addressed review comments
* Changed type of target to List[Dict] and corrected the data types of the returned values.
* Updated unit test to rely on ImageDatasetTestCase
* Added KITTI to dataset documentation
* Cleaned up test and some minor changes
* Made data_url a string instead of a list
* Removed unnecessary try and print

Reviewed By: NicolasHug Differential Revision: D27706941 fbshipit-source-id: aa646f17e7ad5a0858320274cc2ec226fa8f4790 Co-authored-by: Francisco Massa <fvsmassa@gmail.com>
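The "target as List[Dict]" change means each image's annotations become a list of per-object dicts parsed from KITTI label files. A sketch of parsing one label line per the KITTI devkit field layout (the dict keys here are illustrative; torchvision's dataset may use different key names):

```python
def parse_kitti_label_line(line):
    """Parse one KITTI object label line into a dict. Devkit field order:
    type, truncated, occluded, alpha, bbox (x1 y1 x2 y2),
    dimensions (h w l), location (x y z), rotation_y."""
    f = line.split()
    return {
        "type": f[0],
        "truncated": float(f[1]),
        "occluded": int(f[2]),
        "alpha": float(f[3]),
        "bbox": [float(v) for v in f[4:8]],
        "dimensions": [float(v) for v in f[8:11]],
        "location": [float(v) for v in f[11:14]],
        "rotation_y": float(f[14]),
    }
```

Converting fields to float/int at parse time is exactly the "corrected the data types of the returned values" item above: downstream code should never see bare strings.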
Summary: * packaging: Remove pin for jpeg, numpy These may no longer be necessary due to the default anaconda channel having the necessary packages now. Signed-off-by: Eli Uriegas <eliuriegas@fb.com> * Update packaging/torchvision/meta.yaml Reviewed By: NicolasHug Differential Revision: D27706942 fbshipit-source-id: 64476f429ad8fd5ea110df3bd62b816157bdae11 Co-authored-by: Nicolas Hug <contact@nicolas-hug.com> Co-authored-by: Francisco Massa <fvsmassa@gmail.com>
Reviewed By: NicolasHug Differential Revision: D27706944 fbshipit-source-id: 9ee5e90200b2f7f79eccdaa4681e9caa67afb503
Summary:
* Remove pandas dependency for the CelebA dataset
* Address PR comments
* Apply suggestions from code review

Reviewed By: NicolasHug Differential Revision: D27706937 fbshipit-source-id: 4beb11a0706c598735b65590afa0260f29dfa3a8 Co-authored-by: Philip Meier <github.pmeier@posteo.de> Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>
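Dropping pandas is feasible because the CelebA annotation files are simple whitespace-separated tables. A minimal sketch of the kind of parsing that replaces `pandas.read_csv` (structure assumed from the CelebA attribute-file layout: a count line, a header line, then one row per image; not torchvision's exact code):

```python
def parse_celeba_attrs(text):
    """Parse a CelebA-style attribute file with plain string splitting.
    Line 1: number of rows; line 2: attribute names; remaining lines:
    filename followed by -1/1 flags for each attribute."""
    lines = text.strip().splitlines()
    headers = lines[1].split()
    rows = {}
    for line in lines[2:]:
        parts = line.split()
        rows[parts[0]] = [int(v) for v in parts[1:]]
    return headers, rows
```

For files this regular, `str.split()` handles the variable-width whitespace that would otherwise be the main reason to reach for pandas.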
…_cuda Summary: This test is consistently failing or being skipped / omitted: https://www.internalfb.com/intern/test/562949978742689?ref_report_id=0 Some models are known to be flaky with autocast so we just ignore the check, as with other models Reviewed By: fmassa Differential Revision: D27791576 fbshipit-source-id: 1d3f7254a1031edf6a9393b9afe0aba7921d8042
This pull request was exported from Phabricator. Differential Revision: D27791576
…_cuda (pytorch#3675) Summary: Pull Request resolved: pytorch#3675 This test is consistently failing or being skipped / omitted: https://www.internalfb.com/intern/test/562949978742689?ref_report_id=0 Some models are known to be flaky with autocast so we just ignore the check, as with other models Reviewed By: fmassa Differential Revision: D27791576 fbshipit-source-id: b7c85e4d67143bcc3cf4b5da0150a6dd6fd12298
Summary: Pull Request resolved: pytorch#3676 The test is constantly failing: https://www.internalfb.com/intern/test/562949982577806?ref_report_id=0 The fix just adjusts `atol` from 1e-8 to 1e-7. The equality test was likely failing on exact zeros Reviewed By: fmassa Differential Revision: D27790959 fbshipit-source-id: 58d06250df5905e39e197ee946ee2d875a5bab76
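Why an `atol` bump fixes a comparison against exact zeros: the standard closeness test used by torch/numpy combines a relative and an absolute term, and the relative term vanishes when the expected value is 0. A pure-Python sketch of that test (scalar form of the real tensor comparison):

```python
def allclose_scalar(a, b, rtol=1e-5, atol=1e-8):
    """The closeness predicate torch.allclose/numpy.allclose use:
    |a - b| <= atol + rtol * |b|. When b is exactly 0 the rtol term
    contributes nothing, so atol alone decides the outcome."""
    return abs(a - b) <= atol + rtol * abs(b)
```

A value like 5e-8 against an expected exact 0 fails at atol=1e-8 but passes at 1e-7, which is precisely the adjustment this fix makes.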
Summary: Pull Request resolved: pytorch#3677 This test is broken: https://www.internalfb.com/intern/test/281475006043433?ref_report_id=0 This diff fixes the test on CUDA devices by adjusting the tolerance, as was previously done for this same test Reviewed By: fmassa Differential Revision: D27792082 fbshipit-source-id: b336fb68fb72a5a80136efd5c2d3c9d0e1d4f604
Thanks @NicolasHug for the PR.
@NicolasHug Closing because it is not possible to merge manually into the fbsync branch. Also, the changes you are trying to bring in are already available on fbsync via e1b9ae2. If your intention for this PR was to merge into master, please reopen and change the target branch.
@NicolasHug It's worth closing this PR and opening a new one by starting from master and cherry-picking the specific commit hash from fbsync (check the wiki guide for details). This PR seems bodged, as it tries to modify ~500 files.
Summary:
This test is consistently failing or being skipped / omitted: https://www.internalfb.com/intern/test/562949978742689?ref_report_id=0
Some models are known to be flaky with autocast so we just ignore the check, as with other models
Reviewed By: fmassa
Differential Revision: D27791576