ToDtype CV-CUDA Backend #9278
Conversation
Dr. CI (automated): See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/vision/9278. Note: links to docs will display an error until the docs builds have been completed.
meta-cla bot: Hi @justincdavis! Thank you for your pull request and welcome to our community.

Action Required: In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process: In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA. Once the CLA is signed, our tooling will perform checks and validations, and the pull request will be tagged.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!
AntoineSimoulin left a comment:
Hey @justincdavis, thanks a lot for the PR. I left some comments and questions as a first review. Let me know what you think!
@justincdavis could you complete the missing Contributor License Agreement (cf. the earlier comment from the meta-cla bot)?
AntoineSimoulin left a comment:
Hey @justincdavis, thanks for addressing my first round of comments. I took another pass. Would it be possible to have another iteration on the PR based on my new comments? Thanks a lot for your time here!
zy1git left a comment:
Hi, this is just a light review pass. Let me know what you think.
(Force-pushed from 204f698 to 4259d7f)
Hi @zy1git, thanks for the first pass! I have updated this PR to reflect the conventions of the flip PR, LMK what you think!
(Force-pushed from 7e17ce4 to 41af724)
NicolasHug left a comment:
Thanks a lot for the PR @justincdavis, I left a first pass.
```diff
-                make_image_cvcuda,
-                marks=pytest.mark.skipif(not CVCUDA_AVAILABLE, reason="CVCUDA is not available"),
-            ),
+            pytest.param(make_image_cvcuda, marks=CV_CUDA_TEST),
```
Just a note that you should be able to remove these changes once #9305 lands.
```python
def test_functional_signature(self, kernel, input_type):
    if kernel is F._misc._to_dtype_image_cvcuda:
        input_type = _import_cvcuda().Tensor
    check_functional_kernel_signature_match(F.to_dtype, kernel=kernel, input_type=input_type)
```
Thanks for adding this test!
In test/test_transforms_v2.py (outdated):
```python
if is_uint16_to_uint8:
    atol = 255
elif is_uint8_to_uint16 and not scale:
    atol = 255
```
IIUC, this 255 tol is needed because in torch, when scale is False, we're doing a brutal .to(dtype) which is going to cause a lot of overflows, whereas in CVCUDA you either cap the result or always scale?
I'm hoping we can simplify this a bit, potentially by dropping support for uint8 <-> uint16 conversions when scale is False on CV-CUDA. I feel like that's not a really valid conversion to support anyway. The general idea is that for all transforms, we'll want the CVCUDA backend to have very close results to the existing tensor backend. A difference of 255 is too large.
BTW, we should be able to set atol to 0 or 1 when is_uint16_to_uint8 and scale is True?
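To make the failure mode being discussed concrete, here is a small plain-Python illustration (an assumption about the behavior described above, not code from the PR): a truncating cast keeps only the low byte, so adjacent uint16 values can land 255 apart, while a range-preserving scaled conversion (dividing by 257, since 65535 // 257 == 255) stays monotonic.

```python
# Plain-Python illustration of the overflow discussed above (not PR code).
# A truncating uint16 -> uint8 cast keeps only the low byte, like a raw
# .to(torch.uint8), so neighbouring inputs 511 and 512 end up 255 apart.
def truncating_cast(u16: int) -> int:
    return u16 & 0xFF  # modular wraparound, the "brutal" cast

# A range-preserving scaled conversion divides by 257 (257 * 255 == 65535),
# so neighbouring inputs stay adjacent after conversion.
def scaled_conversion(u16: int) -> int:
    return u16 // 257

print([truncating_cast(v) for v in (511, 512)])    # [255, 0]
print([scaled_conversion(v) for v in (511, 512)])  # [1, 1]
```

This is why an unscaled backend-to-backend comparison of uint16 <-> uint8 conversions can disagree by as much as the full 255 range.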
I am looking into whether we can reduce this further; if not, I will update the PR to drop support for uint16<->uint8 through CV-CUDA.
@NicolasHug I made some changes to the atol calculations and dropped uint16->uint8 with scale=False from the CV-CUDA version. All atol values are <=1 now for all supported use cases. LMK if you want to see more changes/verification for this.
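As a sanity check on why atol=1 is a reasonable floor here (a generic numeric argument, not the PR's actual kernels): two common range-preserving uint16 -> uint8 conventions, bit-shifting by 8 versus rounded division by 257, never disagree by more than 1 anywhere in the input range, so two backends that pick different conventions can still match within atol=1.

```python
# Illustration (assumption: generic conversion conventions, not necessarily
# the ones torchvision or CV-CUDA use). Two range-preserving uint16 -> uint8
# mappings:
shift = lambda u: u >> 8             # floor division by 256
rounded = lambda u: round(u / 257)   # rounded division by 257

# Exhaustively compare them over the full uint16 range.
max_diff = max(abs(shift(u) - rounded(u)) for u in range(65536))
print(max_diff)  # 1: the two conventions always agree within atol = 1
```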
Summary
Add the backend kernel for the ToDtype transform using CV-CUDA.
How to use
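The "How to use" section is cut off in this capture; the following is only a hypothetical sketch, assuming that `cvcuda.as_tensor` can wrap a CUDA torch tensor and that `v2.ToDtype` dispatches to the CV-CUDA kernel when handed a `cvcuda.Tensor` (both are assumptions; the real entry points may differ). The imports are guarded so the snippet degrades gracefully where the GPU stack is unavailable.

```python
# Hypothetical usage sketch -- NOT taken from the PR. Assumes torchvision's
# v2.ToDtype dispatches to the new CV-CUDA kernel for cvcuda.Tensor inputs,
# and that cvcuda.as_tensor can wrap a CUDA torch tensor with a layout string.
try:
    import torch
    import cvcuda  # NVIDIA CV-CUDA, an optional GPU dependency
    from torchvision.transforms import v2

    transform = v2.ToDtype(torch.float32, scale=True)  # standard v2 transform API
    image = cvcuda.as_tensor(
        torch.zeros(3, 224, 224, dtype=torch.uint8, device="cuda"), "CHW"
    )
    out = transform(image)  # would run the CV-CUDA ToDtype kernel on the GPU
    status = "ran CV-CUDA backend"
except (ImportError, RuntimeError):
    out, status = None, "torch/cvcuda/torchvision or a GPU is unavailable; sketch only"
print(status)
```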