
UniformFloatingPointDistribution incorrect behaviour at infinite bounds #56663

Closed
joelberkeley opened this issue Jul 3, 2022 · 1 comment
Labels: comp:xla, stat:awaiting tensorflower, TF 2.8, type:bug

Comments


joelberkeley commented Jul 3, 2022


Issue Type

Bug

Source

source (I actually use this binary, but that isn't compiled by Google)

Tensorflow Version

2.8

Custom Code

Yes

OS Platform and Distribution

Ubuntu 20.04

Mobile device

n/a

Python version

n/a

Bazel version

2.4.1

GCC/Compiler version

Unknown, between 7.5 and 9.3

CUDA/cuDNN version

CUDA Version: 11.6

GPU model and memory

NVIDIA GeForce GTX 1070

Current Behaviour?

`UniformFloatingPointDistribution` produces the following samples for the following bounds:

1) (-inf, 0)    -> nan
2) (0, +inf)    -> inf
3) (-inf, -inf) -> nan
4) (+inf, +inf) -> nan

I believe 1) is incorrect and inconsistent with 2), which is correct. I believe 3) and 4) should produce -inf and +inf respectively, since any sample between one +inf and another +inf can only be +inf. There is no way to specify two _different_ +infs (so that the bounds are distinct), so I think it makes sense to treat bounds of +inf and +inf as distinct, and likewise for -inf and -inf.

Standalone code to reproduce the issue

```cpp
#include <iostream>

#include "tensorflow/compiler/xla/client/xla_builder.h"
#include "tensorflow/compiler/xla/client/lib/constants.h"
#include "tensorflow/compiler/xla/client/lib/prng.h"
#include "tensorflow/compiler/xla/shape.h"
#include "tensorflow/compiler/xla/client/local_client.h"
#include "tensorflow/compiler/xla/client/client_library.h"
#include "tensorflow/core/common_runtime/gpu/gpu_init.h"

void Test() {
    xla::XlaBuilder builder("");
    // For floating-point types, MaxValue/MinValue are +inf/-inf.
    auto posinf = xla::MaxValue(&builder, xla::F64);
    auto neginf = xla::MinValue(&builder, xla::F64);
    auto zero = xla::ConstantR0<double>(&builder, 0.0);
    auto key = xla::ConstantR0<uint64_t>(&builder, 0);
    auto state = xla::ConstantR0<uint64_t>(&builder, 0);
    auto shape = xla::ShapeUtil::MakeShape(xla::F64, {});

    auto sample = xla::UniformFloatingPointDistribution(
        key, state, xla::ThreeFryBitGenerator, neginf, zero, shape  // replace bounds as appropriate
    ).value;

    auto computation = builder.Build(sample).ConsumeValueOrDie();
    auto res =
        xla::ClientLibrary::GetOrCreateLocalClient(tensorflow::GPUMachineManager())  // I'm also seeing this on CPU
            .ConsumeValueOrDie()
            ->ExecuteAndTransfer(computation, {})
            .ConsumeValueOrDie()
            .ToString();

    std::cout << res << std::endl;
}
```

Relevant log output

```
f64[] -nan
f64[] inf
f64[] -nan
f64[] -nan
```

for each of test cases 1) - 4)
@google-ml-butler google-ml-butler bot added the type:bug Bug label Jul 3, 2022
@joelberkeley joelberkeley changed the title UniformFloatingPointDistribution incorrect behaviour at infinite bounds. UniformFloatingPointDistribution incorrect behaviour at infinite bounds Jul 3, 2022
@tilakrayal tilakrayal assigned chunduriv and unassigned tilakrayal Jul 4, 2022
@chunduriv chunduriv added the stat:awaiting tensorflower Status - Awaiting response from tensorflower label Jul 7, 2022
@joelberkeley joelberkeley closed this as not planned Mar 30, 2023
@google-ml-butler

Are you satisfied with the resolution of your issue?
