
split and where translations #6

Merged
merged 2 commits into jcmgray:master on Feb 18, 2021

Conversation

RikVoorhaar (Contributor)

As mentioned in #5:

  • Translating np.diff didn't seem worthwhile. There is tf.experimental.numpy.diff, but it's only in the newest version of TensorFlow, and it will probably become just tf.diff in a later release.
  • The torch and TensorFlow wrappers for split could probably be unified into one function, but that would mean wrapping a translation around the wrapper I made for torch, since the syntax differs slightly. Both typically take a list as input when using sections to split the array, so I don't think my wrapper adds much overhead (a sketch of the translation follows this list).
  • For torch I didn't use tensor_split, since it's not yet in the stable release. One could perhaps do different things in autoray depending on the torch version number, but that doesn't seem very elegant.
  • I couldn't run any of the CuPy tests on this machine, since it doesn't have a GPU.
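
To make this concrete, here is a minimal sketch of the kind of wrapper in question (hypothetical name, not the exact code from this PR): numpy's split takes a list of indices along the axis, while torch.split (and tf.split) take the sizes of each chunk, so the translation converts one into the other.

import torch

# Hypothetical sketch of a numpy.split -> torch.split translation.
# numpy's list argument holds split *indices*; torch.split wants the
# *sizes* of each chunk, so convert indices into consecutive sizes.
def torch_split_wrap(ary, indices, axis=0):
    idx = list(indices)
    sizes = [b - a for a, b in zip([0] + idx, idx + [ary.shape[axis]])]
    return torch.split(ary, sizes, dim=axis)

# e.g. splitting a length-20 axis at indices [2, 4, 14] produces chunks
# of sizes [2, 2, 10, 6], matching numpy.split's output shapes.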

codecov bot commented Feb 17, 2021

Codecov Report

Merging #6 (adf2221) into master (5fdcbdf) will increase coverage by 0.14%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##           master       #6      +/-   ##
==========================================
+ Coverage   97.30%   97.45%   +0.14%     
==========================================
  Files           1        1              
  Lines         482      510      +28     
==========================================
+ Hits          469      497      +28     
  Misses         13       13              
Impacted Files       Coverage Δ
autoray/autoray.py   97.45% <100.00%> (+0.14%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

jcmgray (Owner) left a comment


LGTM.

Yeah, TensorFlow seems to be implementing a much more numpy-compatible interface, which should make life easier. And torch similarly seems to be planning to add more numpy-API-like functions to torch.linalg eventually. No worries about CuPy, as it very explicitly mimics the numpy interface.

if backend == "dask":
    pytest.xfail("dask doesn't support split yet")
A = ar.do("ones", (10, 20, 10), like=backend)
sections = [2, 4, 14]
jcmgray (Owner)

Is it possible to parametrize sections with an int case as well? Then we'll have full test coverage for the translations.
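
Something like this, perhaps (a hedged sketch of the suggested parametrization; the test name, backend list, and assertion are illustrative, not the exact test code):

import pytest
import autoray as ar

@pytest.mark.parametrize("backend", ["numpy", "torch", "dask"])
@pytest.mark.parametrize("sections", [2, [2, 4, 14]])
def test_split(backend, sections):
    if backend == "dask":
        pytest.xfail("dask doesn't support split yet")
    A = ar.do("ones", (10, 20, 10), like=backend)
    splits = ar.do("split", A, sections, axis=1)
    # numpy semantics: an int means that many equal sections, a list
    # gives the indices along the axis at which to split
    expected = sections if isinstance(sections, int) else len(sections) + 1
    assert len(splits) == expected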

RikVoorhaar (Contributor, Author)

Good thing you noticed: the implementation of the int case for torch was wrong (torch takes the split size as opposed to the number of splits); it should be fixed now.
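
For reference, the semantic difference in question (a minimal illustration):

import numpy as np
import torch

a = np.arange(10)
t = torch.arange(10)

np.split(a, 2)     # int = NUMBER of equal sections -> 2 arrays of length 5
torch.split(t, 2)  # int = SIZE of each chunk       -> 5 tensors of length 2

# hence the fix: translate the numpy-style section count into a chunk size
torch.split(t, t.shape[0] // 2)  # -> 2 tensors of length 5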


jcmgray commented Feb 18, 2021

Nice, thanks for the extra test coverage and PR!

jcmgray merged commit 6f14268 into jcmgray:master on Feb 18, 2021

jcmgray commented Oct 26, 2022

Note I've updated split to tensor_split in 5a80323, since autoray now has a mechanism for falling back to an alternative implementation, e.g. if torch is old.
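
Roughly the idea (an illustrative sketch, not autoray's actual registration mechanism):

import torch

# Prefer the numpy-compatible tensor_split where available (newer torch),
# otherwise fall back to a translation around torch.split.
if hasattr(torch, "tensor_split"):
    def numpy_like_split(ary, indices_or_sections, axis=0):
        return torch.tensor_split(ary, indices_or_sections, dim=axis)
else:
    def numpy_like_split(ary, indices_or_sections, axis=0):
        if isinstance(indices_or_sections, int):
            # numpy counts equal sections; torch.split wants a chunk size
            arg = ary.shape[axis] // indices_or_sections
        else:
            # numpy takes split indices; torch.split wants chunk sizes
            idx = list(indices_or_sections)
            arg = [b - a for a, b in zip([0] + idx, idx + [ary.shape[axis]])]
        return torch.split(ary, arg, dim=axis)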
