
Rework ops.random.CoinFlip #2577

Merged
merged 32 commits into NVIDIA:master on Jan 21, 2021

Conversation

jantonguirao (Contributor)

(#2531 should be merged first)

Why do we need this PR?

  • Refactoring to unify the functionality of random number generators

What happened in this PR?

  • What solution was applied:
    Implemented ops.random.CoinFlip in terms of RNGBase
    Moved ops.CoinFlip to ops.random.CoinFlip
  • Affected modules and functionalities:
    ops.random.CoinFlip
  • Key points relevant for the review:
    CoinFlip implementation
  • Validation and testing:
    Tests extended
  • Documentation (including examples):
    Existing documentation

JIRA TASK: [DALI-1197]
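
For context, a minimal usage sketch (not part of the PR itself) of the operator at its new location. It assumes a DALI version where the relocated operator is exposed through the fn.random.coin_flip wrapper and where the pipeline_def decorator is available:

# Hypothetical usage sketch; assumes fn.random.coin_flip is available at the
# new location introduced by this PR.
from nvidia.dali import pipeline_def, fn

@pipeline_def(batch_size=8, num_threads=2, device_id=0)
def coin_flip_pipeline():
    # Each sample is 1 with probability 0.7 and 0 otherwise.
    return fn.random.coin_flip(probability=0.7)

pipe = coin_flip_pipeline()
pipe.build()
(out,) = pipe.run()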

jantonguirao and others added 25 commits starting December 7, 2020 (signed off by Joaquin Anton <janton@nvidia.com> and Janusz Lisiecki <jlisiecki@nvidia.com>)
@review-notebook-app
Check out this pull request on ReviewNB to see visual diffs and provide feedback on the Jupyter notebooks.

Signed-off-by: Joaquin Anton <janton@nvidia.com>
@banasraf (Collaborator) left a comment:

LGTM

dali/operators/random/coin_flip_cpu.cc (comment thread resolved)
    : probability_(probability) {}

  __device__ inline bool operator()(curandState *state) const {
    return curand_uniform(state) <= probability_ ? true : false;
Member:

How about

Suggested change:
-  return curand_uniform(state) <= probability_ ? true : false;
+  return !(curand_uniform(state) > probability_);

;) Up to you

Contributor Author:

I could do return curand_uniform(state) <= probability_.
The negation just complicates things, I believe.

Member:

Yeah, that's true ;)
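
For reference, a small host-side sketch (not from the PR) of the sampling rule under discussion: drawing a uniform value and comparing it against the probability yields True with roughly that probability. Note that numpy's generator draws from [0, 1) whereas curand_uniform draws from (0, 1], which only matters at the 0.0 and 1.0 endpoints.

import numpy as np

rng = np.random.default_rng(0)

def coin_flip(probability, n):
    # Draw n uniform values in [0, 1) and threshold them; each comparison is
    # True with probability `probability` (endpoint behaviour aside).
    return rng.random(n) <= probability

flips = coin_flip(0.7, 1_000_000)
print(flips.mean())  # expected to be close to 0.7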

Comment on lines +49 to +51
  DALIDataType DefaultDataType() const {
    return DALI_INT32;
  }
Member:

Is it used anywhere?

Contributor Author:

See my comment above


def test_coin_flip():
    batch_size = 8
    shape = [100000]
Member:

AFAIU, there's also a single-input version of this op, right? Maybe we should test it also?

Contributor Author:

OK

Signed-off-by: Joaquin Anton <janton@nvidia.com>
Signed-off-by: Joaquin Anton <janton@nvidia.com>
Comment on lines +72 to +79
def test_coin_flip():
    batch_size = 8
    for device in ['cpu', 'gpu']:
        for max_shape, use_shape_like_in in [([100000], False),
                                             ([100000], True),
                                             (None, False)]:
            for probability in [None, 0.7, 0.5, 0.0, 1.0]:
                yield check_coin_flip, device, batch_size, max_shape, probability, use_shape_like_in
Member:

It would probably be cleaner if you went with two check_... functions instead of the use_shape_like_in flag:

def test_coin_flip():
    batch_size = 8
    for device in ['cpu', 'gpu']:
        for max_shape in [100000, None]:
            for probability in [None, 0.7, 0.5, 0.0, 1.0]:
                yield check_coin_flip_input, device, batch_size, max_shape, probability
                yield check_coin_flip, device, batch_size, max_shape, probability

But that's up to you

Contributor Author:

I'd have to duplicate a lot of boilerplate, so I prefer it as is
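
For illustration, a rough sketch of what such a check could look like. This is not the PR's actual check_coin_flip; it assumes fn.random.coin_flip accepts shape and device arguments, and that comparing the empirical mean of the outputs against the requested probability is an adequate sanity check:

import numpy as np
from nvidia.dali import pipeline_def, fn

def check_coin_flip(device, batch_size, shape, probability):
    @pipeline_def(batch_size=batch_size, num_threads=2, device_id=0)
    def pipe_def():
        return fn.random.coin_flip(device=device, probability=probability, shape=shape)

    pipe = pipe_def()
    pipe.build()
    (out,) = pipe.run()
    if device == 'gpu':
        out = out.as_cpu()
    data = np.concatenate([np.array(out[i]).ravel() for i in range(batch_size)])
    # With shape=[100000] and batch_size=8 there are 800k draws, so the
    # empirical frequency of ones should be very close to `probability`.
    assert abs(data.mean() - probability) < 0.01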

Signed-off-by: Joaquin Anton <janton@nvidia.com>
@jantonguirao (Contributor Author):

!build

@dali-automaton (Collaborator):

CI MESSAGE: [1993407]: BUILD STARTED

@dali-automaton (Collaborator):

CI MESSAGE: [1993407]: BUILD FAILED

Signed-off-by: Joaquin Anton <janton@nvidia.com>
@jantonguirao (Contributor Author):

!build

@dali-automaton (Collaborator):

CI MESSAGE: [1997031]: BUILD STARTED

@dali-automaton (Collaborator):

CI MESSAGE: [1997031]: BUILD FAILED

Signed-off-by: Joaquin Anton <janton@nvidia.com>
Signed-off-by: Joaquin Anton <janton@nvidia.com>
@jantonguirao (Contributor Author):

!build

@dali-automaton (Collaborator):

CI MESSAGE: [1998822]: BUILD STARTED

@dali-automaton (Collaborator):

CI MESSAGE: [1998822]: BUILD PASSED

@jantonguirao merged commit f25cb53 into NVIDIA:master on Jan 21, 2021
@JanuszL mentioned this pull request on Oct 26, 2021