
Strange finding: when the global seed and the @tf.function decorator are used, the randomly sampled values in two adjacent periods are equal #68215

Open
tonyherschel opened this issue May 18, 2024 · 3 comments
Assignees
Labels
comp:apis Highlevel API related issues TF 2.16 type:bug Bug

Comments

@tonyherschel

Issue type

Bug

Have you reproduced the bug with TensorFlow Nightly?

Yes

Source

source

TensorFlow version

tf 2.16.1

Custom code

Yes

OS platform and distribution

Windows 10

Mobile device

No response

Python version

No response

Bazel version

No response

GCC/compiler version

No response

CUDA/cuDNN version

No response

GPU model and memory

No response

Current behavior?

When I set the global seed, applied the @tf.function decorator, and performed a gradient update every other period, I observed an unexpected phenomenon: the randomly sampled error values in two consecutive periods were identical. I expected the sampled error values to differ in every period rather than repeat across adjacent periods. Furthermore, I noticed that removing the @tf.function decorator from the model function makes the identical values in consecutive periods go away. What could be causing this, and how should one handle this situation when using @tf.function?

Standalone code to reproduce the issue

import os
import random
import numpy as np
import tensorflow as tf

# set seeds
SEED = 0
random.seed(SEED)
np.random.seed(SEED)
tf.random.set_seed(SEED)
os.environ['PYTHONHASHSEED'] = str(SEED)
tf.keras.utils.set_random_seed(SEED)
tf.config.experimental.enable_op_determinism()

@tf.function
def model(x):
    err = tf.random.uniform(shape=(1,))
    loss = x + err
    return err, loss

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
training_periods = 10

# initialize x
x = tf.Variable(tf.random.uniform(shape=(1,)), trainable=True)

for tt in range(training_periods):
    if tt % 2 == 0:
        with tf.GradientTape() as tape:
            err, loss = model(x)
        
        gradients = tape.gradient(loss, [x])  # x is the variable being optimized
        optimizer.apply_gradients(zip(gradients, [x]))  # update x
        
        print(f"Period: {tt}, err (trained): {err.numpy()}")
        
    else:
        err, loss = model(x)
        
        print(f"Period: {tt}, err (not trained): {err.numpy()}")

# outcomes
Period: 0, err (trained): [0.01975703]
Period: 1, err (not trained): [0.01975703]
Period: 2, err (trained): [0.5400312]
Period: 3, err (not trained): [0.5400312]
Period: 4, err (trained): [0.51667833]
Period: 5, err (not trained): [0.51667833]
Period: 6, err (trained): [0.4683528]
Period: 7, err (not trained): [0.4683528]
Period: 8, err (trained): [0.14856052]
Period: 9, err (not trained): [0.14856052]
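For context (not part of the original report): the TensorFlow documentation recommends an explicit tf.random.Generator when you need reproducible randomness that still advances on every call inside @tf.function. A minimal sketch of that pattern, as an assumed workaround rather than a confirmed fix for this issue:

```python
import tensorflow as tf

# Sketch of the tf.random.Generator pattern (an assumption, not code
# from this issue). The generator's state lives in a tf.Variable that
# is read and advanced on every call, even inside @tf.function.
gen = tf.random.Generator.from_seed(0)

@tf.function
def model(x):
    err = gen.uniform(shape=(1,))  # advances gen's state each call
    loss = x + err
    return err, loss

x = tf.Variable(tf.random.uniform(shape=(1,)), trainable=True)
e0, _ = model(x)
e1, _ = model(x)
# e0 and e1 are distinct samples; recreating the generator from the
# same seed reproduces the same sequence across runs.
```

Because the RNG state is an explicit variable captured by the traced function, rather than a seed baked in at tracing time, consecutive calls draw fresh values while remaining reproducible.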

Relevant log output

No response

@google-ml-butler google-ml-butler bot added the type:bug Bug label May 18, 2024
@sushreebarsa sushreebarsa added comp:apis Highlevel API related issues TF 2.16 labels May 20, 2024
@sushreebarsa
Contributor

sushreebarsa commented May 20, 2024

@tonyherschel Could you try explicitly clearing the function's cache every period?
You could also try controlling the compilation behavior by setting the jit_compile option of @tf.function to False, as follows:

@tf.function(jit_compile=False)
def model(x):
    ...

Thank you!

@sushreebarsa sushreebarsa added the stat:awaiting response Status - Awaiting response from author label May 20, 2024
@tonyherschel
Author

Hi, @sushreebarsa, thanks for your reply! I tried setting jit_compile to False in the @tf.function, but the results do not seem to have changed.

import os
import random
import numpy as np
import tensorflow as tf

# set seeds
SEED = 0
random.seed(SEED)
np.random.seed(SEED)
tf.random.set_seed(SEED)
os.environ['PYTHONHASHSEED'] = str(SEED)
tf.keras.utils.set_random_seed(SEED)
tf.config.experimental.enable_op_determinism()

@tf.function(jit_compile=False)
def model(x):
    err = tf.random.uniform(shape=(1,))
    loss = x + err
    return err, loss

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
training_periods = 10

# initialize x
x = tf.Variable(tf.random.uniform(shape=(1,)), trainable=True)

for tt in range(training_periods):
    if tt % 2 == 0:
        with tf.GradientTape() as tape:
            err, loss = model(x)
        
        gradients = tape.gradient(loss, [x])  # x is the variable being optimized
        optimizer.apply_gradients(zip(gradients, [x]))  # update x
        
        print(f"Period: {tt}, err (trained): {err.numpy()}")
        
    else:
        err, loss = model(x)
        
        print(f"Period: {tt}, err (not trained): {err.numpy()}")

Log outputs

Period: 0, err (trained): [0.01975703]
Period: 1, err (not trained): [0.01975703]
Period: 2, err (trained): [0.5400312]
Period: 3, err (not trained): [0.5400312]
Period: 4, err (trained): [0.51667833]
Period: 5, err (not trained): [0.51667833]
Period: 6, err (trained): [0.4683528]
Period: 7, err (not trained): [0.4683528]
Period: 8, err (trained): [0.14856052]
Period: 9, err (not trained): [0.14856052]
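An alternative sketch (again an assumption on my part, not a fix confirmed in this thread) is to make the randomness an explicit input via tf.random.stateless_uniform, which is a pure function of its seed argument, so a distinct per-period seed guarantees a fresh yet reproducible draw:

```python
import tensorflow as tf

@tf.function
def model(x, seed):
    # Stateless RNG: the sample depends only on `seed`, so passing a
    # distinct per-period seed yields a fresh, reproducible draw
    # regardless of how the function was traced.
    err = tf.random.stateless_uniform(shape=(1,), seed=seed)
    loss = x + err
    return err, loss

x = tf.Variable(tf.random.uniform(shape=(1,)), trainable=True)
for tt in range(4):
    err, loss = model(x, tf.constant([0, tt]))
    print(f"Period: {tt}, err: {err.numpy()}")
```

Since the seed tensor has the same shape and dtype every period, the function is traced once and no retracing occurs.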

@google-ml-butler google-ml-butler bot removed the stat:awaiting response Status - Awaiting response from author label May 21, 2024
@sushreebarsa
Contributor

@SuryanarayanaY I was able to replicate the issue reported here. Thank you!
