Tensorflow optimizers approach NaN #37

Closed
phgilde opened this issue Aug 6, 2020 · 5 comments

phgilde commented Aug 6, 2020

The loss in this model/training loop becomes NaN after a couple of iterations:

import tensorflow as tf
import numpy as np
from tensorflow import keras
import matplotlib.pyplot as plt
import time
from datetime import timedelta

# Target signal: sin(x) sampled at 200 points over [0, 50].
def fn(x):
    return tf.sin(x)

seq_length = 200
x = tf.linspace(tf.constant(0, dtype=tf.float32), 50, seq_length)
y = fn(x)

# Single LSTM layer trained to reproduce the target from an all-zero input sequence.
n_outputs = 50
model = keras.layers.LSTM(n_outputs, return_sequences=True)
optimizer = keras.optimizers.Adam(learning_rate=1e-3)
loss_fn = keras.losses.MSE

loss_history = []
epochs = 2_000
out_epochs = 10  # print progress every 10 epochs
start = time.time()
for epoch in range(epochs):
    with tf.GradientTape() as tape:
        y_pred = model(tf.zeros(shape=(1, seq_length, 1)))
        y_pred_data = y_pred[0, :, 0]  # first output unit of the single batch element
        loss = loss_fn(y, y_pred_data)
    loss_history.append(loss.numpy())
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    if epoch % out_epochs == 0:
        print(f"Epoch {epoch}: Loss = {loss} ({timedelta(seconds=time.time()-start)})")

After a couple of training iterations, the loss becomes NaN instead of a finite float.
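
A quick way to pinpoint when this happens is to log the largest gradient norm each step and stop at the first non-finite loss. This is only a minimal diagnostic sketch, not part of the original report; it assumes the same model, optimizer, loss_fn, y, seq_length, epochs, and out_epochs defined above:

for epoch in range(epochs):
    with tf.GradientTape() as tape:
        y_pred = model(tf.zeros(shape=(1, seq_length, 1)))
        loss = loss_fn(y, y_pred[0, :, 0])
    if not tf.math.is_finite(loss):
        # Stop as soon as the loss is no longer a finite number.
        print(f"Loss became non-finite at epoch {epoch}: {loss}")
        break
    grads = tape.gradient(loss, model.trainable_variables)
    # Largest per-variable gradient norm this step; a sudden spike here
    # usually precedes the NaN.
    max_grad = max(tf.norm(g).numpy() for g in grads)
    if epoch % out_epochs == 0:
        print(f"Epoch {epoch}: loss={loss.numpy():.6f}, max grad norm={max_grad:.3f}")
    optimizer.apply_gradients(zip(grads, model.trainable_variables))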

System: Intel i5-7200U with Intel HD Graphics 620

@PatriceVignola (Contributor)

Thank you for reporting this, @phgilde. We'll look into it.

ereish64 commented Aug 16, 2020

Same issue here.

Running on an AMD Vega 64 and a Ryzen 3700X.

@PatriceVignola (Contributor)

It looks like the issue has already been addressed internally. I can repro the NaN convergence on the latest PyPI build, but not with our latest internal build.

I'll let you know once we release a new wheel so you can try it out!

@ereish64

Is there any ETA for the new wheel?

Thank you very much.

@PatriceVignola (Contributor)

We just released tensorflow-directml 1.15.3.dev200911, which should contain the fixes for the NaN errors that you were seeing. You can try it out and tell us how it goes!
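
For reference, a quick way to confirm that the new build is the one actually being imported. The version string is the one quoted above; the exact install command is an assumption for a standard pip setup:

# Assumed install step for a standard pip environment:
#   pip install --upgrade tensorflow-directml==1.15.3.dev200911
import tensorflow as tf
print(tf.__version__)  # should report a 1.15.x build after upgrading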

Also, since we have now open-sourced our fork, new tensorflow-directml issues should be opened over here.

jstoecker transferred this issue from microsoft/DirectML on Sep 17, 2020