
Add out and where args for ht.div #945

Merged

Conversation

@neosunhan (Collaborator) commented Mar 29, 2022

Description

Implementation of the out and where functionality for ht.divide is fairly straightforward. However, using both arguments at the same time causes complications, because PyTorch lacks support for an in-place where.

Currently, I work around this by replacing the underlying torch.Tensor of the out DNDarray. Possible alternatives include updating indexing.where to allow in-place modification, or implementing a function that modifies a tensor in place through a mask (similar to numpy.copyto).
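For illustration, here is a minimal usage sketch of the proposed kwargs (values and variable names are made up; the exact behaviour at positions where the mask is False is settled in the review discussion below):

```python
import heat as ht

a = ht.array([2.0, 4.0, 6.0, 8.0])
b = ht.array([1.0, 2.0, 0.0, 4.0])
out = ht.ones_like(a)
mask = ht.array([True, True, False, True])  # skip the division by zero

# write a / b into `out` only where `mask` is True
ht.divide(a, b, out=out, where=mask)
```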

Issue/s resolved: #870

Changes proposed:

  • Use the out functionality of true_divide in the PyTorch backend
  • Use indexing.where

Type of change

New feature (non-breaking change which adds functionality)

Due Diligence

  • All split configurations tested
  • Multiple dtypes tested in relevant functions
  • Documentation updated (if needed)
  • Updated changelog.md under the title "Pending Additions"

Does this change modify the behaviour of other functions? If so, which?

no

@mtar (Collaborator) commented Mar 29, 2022

GPU cluster tests are currently disabled on this Pull Request.


codecov bot commented Mar 30, 2022

Codecov Report

Merging #945 (9d2926b) into main (aaafea0) will increase coverage by 0.00%.
The diff coverage is 100.00%.

@@           Coverage Diff           @@
##             main     #945   +/-   ##
=======================================
  Coverage   95.39%   95.39%           
=======================================
  Files          65       65           
  Lines        9965     9976   +11     
=======================================
+ Hits         9506     9517   +11     
  Misses        459      459           
Flag Coverage Δ
gpu 94.63% <100.00%> (+<0.01%) ⬆️
unit 91.02% <100.00%> (+<0.01%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
heat/core/_operations.py 96.04% <100.00%> (+0.26%) ⬆️
heat/core/arithmetics.py 99.06% <100.00%> (ø)

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update aaafea0...9d2926b.

@neosunhan (Collaborator, Author) commented:

Found a cleaner way to handle indexing of the where argument.

@ClaudiaComito (Contributor) left a review

@neosunhan thank you so much for taking this on!

I have a few comments; the main thing is that where should probably be addressed within _operations.__binary_op(). This way, it would be available to all binary operations, and it would be easier to satisfy the condition that ht.divide(t1, t2) returns out, not t1, where where is False (which also holds for all numpy binary operations).

Please let me know if you need help with __binary_op!
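For reference, a small NumPy sketch (illustrative values) of the behaviour referred to above: at positions where where is False, the corresponding entries of out are left untouched rather than taken from t1.

```python
import numpy as np

t1 = np.array([10.0, 20.0, 30.0])
t2 = np.array([2.0, 4.0, 5.0])
out = np.full(3, -1.0)

np.divide(t1, t2, out=out, where=np.array([True, False, True]))
print(out)  # [ 5. -1.  6.] -> the masked entry keeps out's previous value
```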

@@ -438,6 +444,10 @@ def div(t1: Union[DNDarray, float], t2: Union[DNDarray, float]) -> DNDarray:
The first operand whose values are divided
t2: DNDarray or scalar
The second operand by whose values is divided
out: DNDarray, optional
The output array. It must have a shape that the inputs broadcast to
@ClaudiaComito (Contributor) commented:

shape and split dimension

out: DNDarray, optional
The output array. It must have a shape that the inputs broadcast to
where: DNDarray, optional
Condition of interest, where true yield divided value else yield original value in t1
@ClaudiaComito (Contributor) commented:

We should follow numpy.divide, so ht.divide should actually yield out where where is False (and uninitialized values when out=None)

    if where is not None:
        t2 = indexing.where(where, t2, 1)

    return _operations.__binary_op(torch.true_divide, t1, t2, out)
@ClaudiaComito (Contributor) commented:

We should return out instead of t1 where where is False. And in fact this applies to all binary operations, as I admittedly had not realized when I created this issue.

So the way to go here would be to modify _operations.__binary_op to accommodate the where kwarg once and for all. Do you need help with that?

heat/core/tests/test_arithmetics.py (resolved review comment)
@neosunhan (Collaborator, Author) commented Apr 4, 2022

@ClaudiaComito Thanks for the feedback! I have taken a look at _operations.__binary_op and here is what I have come up with so far.

The main idea is to create a new DNDarray with uninitialized values (using torch.empty) in the event that out=None. We can then copy over the values from the result tensor (using torch.where to index if a condition is provided).
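A simplified, single-process sketch of this idea using plain torch tensors (the helper name is made up, and the real __binary_op additionally handles dtype promotion, broadcasting, and the split/distribution logic):

```python
import torch

def binary_op_sketch(operation, t1, t2, out=None, where=None):
    result = operation(t1, t2)
    if out is None:
        # uninitialized buffer: entries where `where` is False stay undefined
        out = torch.empty_like(result)
    if where is None:
        out[:] = result
    else:
        # keep out's existing (or uninitialized) values where the mask is False
        out[:] = torch.where(where, result, out)
    return out

# e.g. binary_op_sketch(torch.true_divide, torch.tensor([4.0, 9.0]),
#                       torch.tensor([2.0, 3.0]), where=torch.tensor([True, False]))
```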

@ClaudiaComito (Contributor) left a review

Great job, @neosunhan! 👏 I have left some comments in the code.

        out.larray[:] = operation(
            t1.larray.type(promoted_type), t2.larray.type(promoted_type), **fn_kwargs
        )
    else:
        out_tensor = torch.empty(output_shape, dtype=promoted_type)
@ClaudiaComito (Contributor) commented:

2 comments here:

  • output_shape is the global (memory-distributed) shape of the output DNDarray; here you're initializing a potentially huge torch tensor. In this case you should call
factories.empty(output_shape, dtype=..., split=..., device=...)

and that will take care of only initializing slices of the global array on each process (see the sketch after this list).

(I think this is also why the tests fail, by the way.)

  • if I understand the numpy docs correctly, this empty out DNDarray only needs to be initialized if where is not None.
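To illustrate the first point (shapes and keyword values below are illustrative, using the public ht.empty wrapper around factories.empty):

```python
import heat as ht

output_shape = (10000, 10000)

# allocates only the local slice of the global array on each MPI process
out = ht.empty(output_shape, dtype=ht.float32, split=0)
print(out.lshape)  # per-process local shape, e.g. (5000, 10000) on 2 processes

# by contrast, torch.empty(output_shape) would allocate the full global
# tensor on every process
```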

@neosunhan (Collaborator, Author) commented:

@ClaudiaComito I have modified the code accordingly and added a few more tests.

@ClaudiaComito (Contributor) commented:

run tests

@ClaudiaComito (Contributor) left a review

@neosunhan thanks a lot. This looks good, as far as I can tell. Can you update the CHANGELOG? When you're done, we can run the GPU tests.

@neosunhan (Collaborator, Author) commented:

Updated CHANGELOG for the new div kwargs. I'm not sure which section is appropriate to document the new where kwarg for __binary_op (or if it should even be included in the changelog).

Also, I believe there was a typo in the CHANGELOG where the "Feature Additions" heading was repeated three times, which I have fixed.

@ClaudiaComito (Contributor) commented:

run tests

@ClaudiaComito (Contributor) left a review

@neosunhan please bear with me. While we're trying to figure out why the GPU tests fail, it occurred to me that we are not checking for where's distribution scheme and whether it fits the way out is distributed.

heat/core/_operations.py (resolved review comments)
@@ -43,6 +44,8 @@ def __binary_op(
The second operand involved in the operation,
out: DNDarray, optional
Output buffer in which the result is placed
where: DNDarray, optional
Condition of interest, where true yield the result of the operation else yield original value in out (uninitialized when out=None)
@ClaudiaComito (Contributor) commented:

We can use numpy's docs for where; I think they are a bit clearer. But we must expand on them a bit, e.g. whether where is supposed/expected to be distributed, and how.

heat/core/tests/test_arithmetics.py (resolved review comment)
@neosunhan (Collaborator, Author) commented Apr 7, 2022

@ClaudiaComito I've added several test cases with different split configurations of out and where, but they all seem to pass. Not sure if I'm missing something here.

@ClaudiaComito (Contributor) commented:

> @ClaudiaComito I've added several test cases with different split configurations of out and where, but they all seem to pass. Not sure if I'm missing something here.

@neosunhan If you look at the list of checks below, you will see that the continuous-integration/jenkins/pr-merge has failed. Click on details, and you can see that the tests passed on 1 process, but failed on 2. Do you need help setting up your environment for multi-process? If you have installed openmpi, you should be able to run the tests on 2 processes with mpirun -n 2 python -m unittest.

@neosunhan changed the title to "Add out and where args for ht.div" on Apr 11, 2022
@neosunhan (Collaborator, Author) commented:

@ClaudiaComito Thanks for the help! I have added the check for where's distribution scheme and the corresponding test cases.

@ClaudiaComito (Contributor) commented:

run tests

@ClaudiaComito (Contributor) left a review

I'm ready to approve this; great job, @neosunhan! Just a small documentation update is needed.

will be set to the result of the operation. Elsewhere, the `out` array will retain its original
value. If an uninitialized `out` array is created via the default `out=None`, locations within
it where the condition is False will remain uninitialized. If distributed, must be distributed
along the same dimension as the `out` array.
@ClaudiaComito (Contributor) commented:

This is only correct if where and out have the same shape.

For example if out is (100, 10000) and distributed along the columns (out.split is 1), where is (10000,), their shapes are broadcastable but where must be distributed along 0.
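A small sketch of this example (illustrative shapes only):

```python
import heat as ht

out = ht.zeros((100, 10000), split=1)           # distributed along the columns
where = ht.ones(10000, dtype=ht.bool, split=0)  # shape (10000,) broadcasts over out's
                                                # rows, so its split axis 0 corresponds
                                                # to out.split == 1
```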

@neosunhan (Collaborator, Author) commented Apr 20, 2022:

If distributed, the split axis (after broadcasting if required) must match that of the out array.

Would this phrasing be accurate?

@ClaudiaComito (Contributor) replied:

Yes, sounds accurate.

@ClaudiaComito (Contributor) commented:

run tests

@mtar (Collaborator) left a review

One check can be saved

heat/core/_operations.py (outdated; resolved review comment)
@ClaudiaComito (Contributor) commented:

run tests

@ClaudiaComito (Contributor) commented:

run tests

@ClaudiaComito (Contributor) left a review

Thanks a lot, @neosunhan!

@ClaudiaComito merged commit 860626b into helmholtz-analytics:main on Apr 22, 2022
@neosunhan deleted the features/870-divide-kwargs branch on Apr 23, 2022 at 01:51
@mtar removed the "PR talk" label on Sep 11, 2023

Successfully merging this pull request may close these issues.

Implement missing functionalities in ht.divide (out, where)
3 participants