
[SobolEngine] Update direction numbers to 21201 dims #49710

Closed
wants to merge 1 commit into master from export-D25657233

Conversation

@Balandat (Contributor) commented Dec 21, 2020

Performs the update that was suggested in #41489

Adjusts the functionality to largely match that of the scipy companion PR scipy/scipy#10844, including

  • a new draw_base2 method
  • inclusion of zero as the first point in the (unscrambled) Sobol sequence

The scipy PR is also quite opinionated about the draw method being called with a power-of-2 number of points (for which the resulting sequence has nice properties; see the scipy PR for a comprehensive discussion of this).

Note that this update is a breaking change in the sense that sequences generated with the same parameters after this change will not be identical to those generated before it! They will have the same (arguably better) distributional properties, but calling the engine with the same seed will result in different numbers in the sequence.

Test Plan:

```
from torch.quasirandom import SobolEngine

sobol = SobolEngine(3)
sobol.draw(4)

sobol = SobolEngine(4, scramble=True)
sobol.draw(5)

sobol = SobolEngine(4, scramble=True)
sobol.draw_base2(2)
```
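
As a rough sanity check on the new behavior, something along the following lines should hold; this is only a sketch (the exact values of later points depend on the new direction numbers, so only the first-point and shape properties are asserted):

```python
import torch
from torch.quasirandom import SobolEngine

# The unscrambled sequence now includes the origin as its first point.
x = SobolEngine(3).draw(4)
assert torch.all(x[0] == 0)

# draw_base2(m) draws 2**m points, the sample sizes with the nicest
# balance properties.
y = SobolEngine(3).draw_base2(2)
assert y.shape == (4, 3)
```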

Differential Revision: D25657233

@facebook-github-bot (Contributor) commented Dec 21, 2020

💊 CI failures summary and remediations

As of commit 36b8cb2 (more details on the Dr. CI page):


  • 1/1 failures possibly* introduced in this PR
    • 1/1 non-CircleCI failure(s)

This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D25657233

@malfet (Contributor) left a comment


All Sobol-related tests are failing; can you please look at those (and either update the tests or fix the underlying failures)?

@Balandat (Contributor, Author) commented Jan 4, 2021

I didn't touch the tests yet because I wanted to get a read on how reasonable it is to put the constants into the humongous constant C array. If that seems reasonable, I'm happy to fix up the unit tests.

@Balandat (Contributor, Author) commented Jan 7, 2021

@malfet I updated the PR so that it implements the same behavior as the Sobol sampler in the scipy PR - the implementation and statistical properties were discussed at length with experts in the field there (443 comments on the PR and counting), so I am very confident about this being the correct thing to do.

I adapted the tests from the scipy PR. I can add more of the old tests back in, but I will have to update target numbers b/c of the breaking change to the direction numbers and the inclusion of zero as the first point.
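
For illustration, one way to make such tests robust to future direction-number updates is to check distributional properties rather than exact sample values. A minimal sketch along those lines (not part of this PR; the dimension, sample count, and tolerance are arbitrary choices):

```python
import torch
from torch.quasirandom import SobolEngine

def test_scrambled_sobol_mean():
    # Scrambled Sobol points are uniformly distributed on [0, 1)^d, so the
    # sample mean should be close to 0.5 in every dimension, regardless of
    # which direction numbers the engine uses internally.
    engine = SobolEngine(5, scramble=True, seed=12345)
    samples = engine.draw(1024)
    expected = torch.full((5,), 0.5)
    assert torch.allclose(samples.mean(dim=0), expected, atol=0.02)
```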

Comment on lines +133 to +135
for (int64_t m = 0; m < MAXBIT; ++m) {
ss_a[0][m] = 1;
}

Nit

Suggested change
for (int64_t m = 0; m < MAXBIT; ++m) {
ss_a[0][m] = 1;
}
std::fill_n(ss_a[0], MAXBIT, 1);

Comment on lines 143 to 145
for (int64_t i = 0; i < m; ++i) {
ss_a[d][i] = initsobolstate[d][i];
}

Nit

Suggested change
for (int64_t i = 0; i < m; ++i) {
ss_a[d][i] = initsobolstate[d][i];
}
std::copy(ss_a[d], ss_a[d] + m, initsobolstate[d]);

@Balandat (Contributor, Author)

I have very little understanding of the C++ internals of torch, but this fails (your suggestion above as well). Note that initsobolstate is a C array, not a tensor; not sure if that's the issue here. I'll leave this and the fill_n above, which also fails, as is for now.
...

```
stderr: caffe2/aten/src/ATen/native/SobolEngineOps.cpp:141:32: error: invalid operands to binary expression ('TensorAccessor<long, 2UL - 1, DefaultPtrTraits, long>' and 'int64_t' (aka 'long'))
  std::copy(ss_a[d], ss_a[d] + m, initsobolstate[d]);
caffe2/aten/src/ATen/TensorOperators.h:110:22: note: candidate function not viable: no known conversion from 'TensorAccessor<long, 2UL - 1, DefaultPtrTraits, long>' to 'const at::Tensor' for 1st argument
  AT_FORALL_BINARY_OPS(DEFINE_OPERATOR)
caffe2/aten/src/ATen/TensorOperators.h:82:33: note: expanded from macro 'AT_FORALL_BINARY_OPS'
  #define AT_FORALL_BINARY_OPS(_)
caffe2/aten/src/ATen/TensorOperators.h:102:22: note: expanded from macro 'DEFINE_OPERATOR'
  static inline Tensor operator op(const Tensor & x, Scalar y) {
caffe2/aten/src/ATen/TensorOperators.h:110:22: note: candidate function not viable: no known conversion from 'TensorAccessor<long, 2UL - 1, DefaultPtrTraits, long>' to 'const at::Tensor' for 1st argument
caffe2/aten/src/ATen/TensorOperators.h:82:33: note: expanded from macro 'AT_FORALL_BINARY_OPS'
  #define AT_FORALL_BINARY_OPS(_)
caffe2/aten/src/ATen/TensorOperators.h:99:22: note: expanded from macro 'DEFINE_OPERATOR'
  static inline Tensor operator op(const Tensor & x, const Tensor & y) {
caffe2/aten/src/ATen/TensorOperators.h:110:22: note: candidate function not viable: no known conversion from 'TensorAccessor<long, 2UL - 1, DefaultPtrTraits, long>' to 'c10::Scalar' for 1st argument
caffe2/aten/src/ATen/TensorOperators.h:82:33: note: expanded from macro 'AT_FORALL_BINARY_OPS'
  #define AT_FORALL_BINARY_OPS(_)
caffe2/aten/src/ATen/TensorOperators.h:105:22: note: expanded from macro 'DEFINE_OPERATOR'
  static inline Tensor operator op(Scalar x, const Tensor & y) {
In module 'third_party_libgcc_std' imported from caffe2/c10/core/Allocator.h:4:
third-party-buck/platform007/build/libgcc/include/c++/7.3.0/bits/stl_algobase.h:753:29: error: cannot increment value of type 'at::TensorAccessor<long, 1, DefaultPtrTraits, long>'
  __niter > 0; --__niter, ++__first)
third-party-buck/platform007/build/libgcc/include/c++/7.3.0/bits/stl_algobase.h:789:23: note: in instantiation of function template specialization 'std::__fill_n_a<at::TensorAccessor<long, 1, DefaultPtrTraits, long>, long, int>' requested here
  return _OI(std::__fill_n_a(std::__niter_base(__first), __n, __value));
caffe2/aten/src/ATen/native/SobolEngineOps.cpp:133:8: note: in instantiation of function template specialization 'std::fill_n<at::TensorAccessor<long, 1, DefaultPtrTraits, long>, long, int>' requested here
  std::fill_n(ss_a[0], MAXBIT, 1);
2 errors generated.
When running <c++ preprocess_and_compile>.
When building rule //caffe2/aten:ATen-cpu#compile-pic-SobolEngineOps.cpp.o6f3daec3,platform007-clang.
```

Review comment on test/test_torch.py (outdated, resolved)
@facebook-github-bot (Contributor) left a comment

@Balandat has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@codecov (bot) commented Jan 26, 2021
Codecov Report

Merging #49710 (36b8cb2) into master (e26fccc) will decrease coverage by 0.00%.
The diff coverage is 96.96%.

```
@@            Coverage Diff             @@
##           master   #49710      +/-   ##
==========================================
- Coverage   80.85%   80.85%   -0.01%
==========================================
  Files        1938     1938
  Lines      211215   211233      +18
==========================================
+ Hits       170782   170789       +7
- Misses      40433    40444      +11
```

Summary:
Performs the update that was suggested in pytorch#41489

Adjusts the functionality to largely match that of the scipy companion PR scipy/scipy#10844, including
- a new `draw_base2` method
- inclusion of zero as the first point in the (unscrambled) Sobol sequence

The scipy PR is also quite opinionated about the `draw` method being called with a power-of-2 number of points (for which the resulting sequence has nice properties; see the scipy PR for a comprehensive discussion of this).

Note that this update is a **breaking change** in the sense that sequences generated with the same parameters after this change will not be identical to those generated before it! They will have the same (arguably better) distributional properties, but calling the engine with the same seed will result in different numbers in the sequence.

Pull Request resolved: pytorch#49710

Test Plan:
```
from torch.quasirandom import SobolEngine

sobol = SobolEngine(3)
sobol.draw(4)

sobol = SobolEngine(4, scramble=True)
sobol.draw(5)

sobol = SobolEngine(4, scramble=True)
sobol.draw_base2(2)
```

Reviewed By: malfet

Differential Revision: D25657233

Pulled By: Balandat

fbshipit-source-id: ce8ccafe90fc968ed811634b5f37c2eb208af985

@Balandat deleted the export-D25657233 branch on February 1, 2021 at 16:54
@albanD added the "module: bc-breaking" label (Related to a BC-breaking change) on Feb 1, 2021
@facebook-github-bot (Contributor)

@Balandat merged this pull request in 449098c.

facebook-github-bot pushed a commit that referenced this pull request Feb 2, 2021
Summary:
Pull Request resolved: #51578

#49710 introduced an edge case in which drawing a single sample resulted in the `dtype` arg to `draw` being ignored. This fixes the issue and adds a unit test to cover this behavior.

Test Plan: Unit tests

Reviewed By: danielrjiang

Differential Revision: D26204393

fbshipit-source-id: 441a44dc035002e7bbe6b662bf6d1af0e2cd88f4
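
For context, a minimal sketch of the fixed edge case (not the actual unit test from that commit; the dimension and seed are arbitrary choices):

```python
import torch
from torch.quasirandom import SobolEngine

engine = SobolEngine(3, scramble=True, seed=0)
# Drawing a single sample should honor the requested dtype,
# just like drawing a batch of samples does.
single = engine.draw(1, dtype=torch.float64)
batch = engine.draw(4, dtype=torch.float64)
assert single.dtype == torch.float64
assert batch.dtype == torch.float64
```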
Balandat added a commit to Balandat/botorch that referenced this pull request Feb 3, 2021
Summary:
Updates tests to fix failures caused by pytorch/pytorch#49710. Removes the comparison against specific sample values to be more robust in the future (but retains the distributional tests).

Reviewed By: danielrjiang

Differential Revision: D26207258

fbshipit-source-id: 3838339d6674518385ee6cf37faa7f52dc3de9dc
facebook-github-bot pushed a commit to pytorch/botorch that referenced this pull request Feb 3, 2021
Summary:
Pull Request resolved: #674

Updates tests to fix failures caused by pytorch/pytorch#49710. Removes the comparison against specific sample values to be more robust in the future (but retains the distributional tests).

Reviewed By: danielrjiang

Differential Revision: D26207258

fbshipit-source-id: 445ccec67bf11c5e2b6502654a39a47ffaa3ed83
Balandat added a commit to Balandat/Ax that referenced this pull request Feb 4, 2021
Summary:
These tests are failing due to pytorch/pytorch#49710, which changed the behavior of the pytorch SobolEngine. The distributional properties are arguably better due to the new direction numbers, but there is a breaking change to the realized samples.

This changes the tests to not rely on the specific values generated by the upstream sampler, making the tests less brittle and work with both the latest pytorch release and the current master.

Reviewed By: ldworkin

Differential Revision: D26207619

fbshipit-source-id: 22645ce9d74d52fdebe06f0a9c09101b35d4c7c4
Balandat added a commit to Balandat/Ax that referenced this pull request Feb 4, 2021
Summary:
Pull Request resolved: facebook#485

These tests are failing due to pytorch/pytorch#49710, which changed the behavior of the pytorch SobolEngine. The distributional properties are arguably better due to the new direction numbers, but there is a breaking change to the realized samples.

This changes the tests to not rely on the specific values generated by the upstream sampler, making the tests less brittle and work with both the latest pytorch release and the current master.

Reviewed By: ldworkin

Differential Revision: D26207619

fbshipit-source-id: adc7480f9a47bee3b64404b811975953481068ea
facebook-github-bot pushed a commit to facebook/Ax that referenced this pull request Feb 4, 2021
Summary:
Pull Request resolved: #485

These tests are failing due to pytorch/pytorch#49710, which changed the behavior of the pytorch SobolEngine. The distributional properties are arguably better due to the new direction numbers, but there is a breaking change to the realized samples.

This changes the tests to not rely on the specific values generated by the upstream sampler, making the tests less brittle and work with both the latest pytorch release and the current master.

Reviewed By: ldworkin

Differential Revision: D26207619

fbshipit-source-id: a79ddd040c749c00ab47b48e482e0d99bf442304