🔴 Make `center_crop` fast equivalent to slow #40856
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Thanks.
If it is breaking, add a 🔴.
If it is a bug fix (because slow is quite standard), then still add the red, because it's quite important, no?
Lastly, I only see 1 test being unskipped. I'm wondering if some perceiver tests were failing and are now fixed, or were they just skipped?
The 1 test unskipped is a common test, so it was also skipped for perceiver! In total it's unskipped for around 10 processors, which were all using `center_crop` by default.
[For maintainers] Suggested jobs to run (before merge): run-slow: chinese_clip, perceiver
make center_crop fast equivalent to slow
What does this PR do?
Use a custom `center_crop` function to be equivalent to the one used in slow processors. The only difference from the torchvision one is that instead of using `int(round(...))` to define `crop_top` and `crop_left`, which rounds half-way values towards the even integer, we just use `int(...)` to always round down.

Thanks @rootonchair for adding this first to the bridgetower image processor!
Slightly breaking, as it will change current results from fast image processors using `center_crop`.