
UniformIntDistribution is exclusive in its upper bound #56675

Open
joelberkeley opened this issue Jul 4, 2022 · 1 comment
Labels
comp:xla XLA stat:awaiting tensorflower Status - Awaiting response from tensorflower TF 2.8 type:support Support issues

Comments

@joelberkeley

joelberkeley commented Jul 4, 2022


Issue Type

Bug

Source

source (I actually use this binary, but that isn't compiled by Google)

Tensorflow Version

2.8

Custom Code

Yes

OS Platform and Distribution

Ubuntu 20.04

Mobile device

n/a

Python version

n/a

Bazel version

2.4.1

GCC/Compiler version

Unknown, between 7.5 and 9.3

CUDA/cuDNN version

CUDA Version: 11.6

GPU model and memory

NVIDIA GeForce GTX 1070

Current Behaviour?

UniformIntDistribution appears to be inclusive in its lower bound but exclusive in its upper bound. I think this is wrong for a number of reasons:

  • There is no simple way to get uniform samples that include the maximum U64 (one possible workaround is sketched after the example samples below).
  • Setting both bounds to the same value produces seemingly undefined behaviour.*
  • Its sister function UniformFloatingPointDistribution appears to be inclusive in its upper bound: sampling it with equal bounds returns the common bound rather than, say, NaN (a comparison sketch follows the reproduction code below). The two functions thus have incongruous behaviour.

*Example samples for equal bounds of 0 and 0

[
      11077253088097075545
    , 13897614985444391724
    , 18164676955841373932
    , 376028057765135569
    , 12082511028297777851
    , 1029834464974463124
    , 12380146588138085609
    , 8561165853724746330
    , 9319215302267380863
    , 16134235052671214906
]
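
For reference, a minimal sketch of one possible workaround for the first point, assuming the raw ThreeFryBitGenerator output is itself uniform over the full U64 range; the helper name FullRangeU64Samples is hypothetical, and the headers are the same as in the reproduction below.

// Hypothetical helper: draw raw ThreeFry bits directly instead of going
// through UniformIntDistribution, so that every U64 value, including the
// maximum, is a possible sample.
xla::XlaOp FullRangeU64Samples(xla::XlaBuilder* builder) {
    auto key = xla::ConstantR0<uint64_t>(builder, 0);
    auto state = xla::ConstantR0<uint64_t>(builder, 0);
    auto shape = xla::ShapeUtil::MakeShape(xla::U64, {1000});
    // RngOutput.value holds the generated bits; RngOutput.state is the
    // updated generator state.
    return xla::ThreeFryBitGenerator(key, state, shape).value;
}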

Standalone code to reproduce the issue

#include <cstdint>
#include <iostream>

// Headers assumed from the TF 2.8 source tree layout:
#include "tensorflow/compiler/xla/client/client_library.h"
#include "tensorflow/compiler/xla/client/lib/arithmetic.h"
#include "tensorflow/compiler/xla/client/lib/prng.h"
#include "tensorflow/compiler/xla/client/xla_builder.h"
#include "tensorflow/compiler/xla/shape_util.h"

void Test() {
    xla::XlaBuilder builder("");
    // Bounds: minval = 0, maxval = 1, broadcast to 1000 elements.
    auto zero = xla::ConstantR0<uint64_t>(&builder, 0);
    zero = xla::BroadcastInDim(zero, {1000}, {});
    auto one = xla::ConstantR0<uint64_t>(&builder, 1);
    one = xla::BroadcastInDim(one, {1000}, {});
    auto key = xla::ConstantR0<uint64_t>(&builder, 0);
    auto state = xla::ConstantR0<uint64_t>(&builder, 0);
    auto shape = xla::ShapeUtil::MakeShape(xla::U64, {1000});

    // Draw 1000 samples; if the upper bound were inclusive, at least some of
    // them should equal one.
    auto sample = xla::UniformIntDistribution(
        key, state, xla::ThreeFryBitGenerator, zero, one, shape
    ).value;
    auto anyEqOne = xla::Any(xla::Eq(sample, one));

    auto computation = builder.Build(anyEqOne).ConsumeValueOrDie();
    auto res =
        xla::ClientLibrary::LocalClientOrDie()
        ->ExecuteAndTransfer(computation, {})
        .ConsumeValueOrDie()
        .ToString();

    std::cout << res << std::endl;
}

Relevant log output

pred[] false
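
For comparison, a minimal sketch of the analogous check against UniformFloatingPointDistribution, assuming the same headers and client setup as the reproduction above (the bound value 1.5 and the function name TestFloat are arbitrary). Since sampling with equal bounds appears to return the common bound, this check would be expected to print pred[] true.

// Companion sketch: the same any-equal check, but against
// UniformFloatingPointDistribution with equal lower and upper bounds.
void TestFloat() {
    xla::XlaBuilder builder("");
    auto bound = xla::ConstantR0<double>(&builder, 1.5);  // arbitrary common bound
    bound = xla::BroadcastInDim(bound, {1000}, {});
    auto key = xla::ConstantR0<uint64_t>(&builder, 0);
    auto state = xla::ConstantR0<uint64_t>(&builder, 0);
    auto shape = xla::ShapeUtil::MakeShape(xla::F64, {1000});

    auto sample = xla::UniformFloatingPointDistribution(
        key, state, xla::ThreeFryBitGenerator, bound, bound, shape
    ).value;
    auto anyEqBound = xla::Any(xla::Eq(sample, bound));

    auto computation = builder.Build(anyEqBound).ConsumeValueOrDie();
    auto res =
        xla::ClientLibrary::LocalClientOrDie()
        ->ExecuteAndTransfer(computation, {})
        .ConsumeValueOrDie()
        .ToString();

    std::cout << res << std::endl;  // expected: pred[] true
}
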
@google-ml-butler google-ml-butler bot added the type:bug Bug label Jul 4, 2022
@sushreebarsa sushreebarsa added type:support Support issues comp:runtime c++ runtime, performance issues (cpu) TF 2.8 and removed type:bug Bug labels Jul 5, 2022
@joelberkeley
Author

joelberkeley commented Jul 5, 2022

Hi, I wanted to ask about the tags. This issue has type:support and comp:runtime, while this very similar ticket I raised has type:bug and comp:xla, which seems more appropriate to me.

@sushreebarsa sushreebarsa added comp:xla XLA and removed comp:runtime c++ runtime, performance issues (cpu) labels Jul 6, 2022
@sachinprasadhs sachinprasadhs added the stat:awaiting tensorflower Status - Awaiting response from tensorflower label Jul 11, 2022