
Doesn't the loss account for padded sequences? #6

Closed
nonva opened this issue Nov 19, 2019 · 2 comments

Comments


nonva commented Nov 19, 2019

one_hot_labels = tf.one_hot(labels, depth=num_labels, dtype=tf.float32)   # [batch, seq, num_labels]
per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1)    # [batch, seq]
loss = tf.reduce_sum(per_example_loss)   # sums over padded positions as well
probabilities = tf.nn.softmax(logits, axis=-1)

log_probs has shape [batch_size, max_seq, label_num], and max_seq includes padding positions. Is it right to just reduce_sum over everything as the loss?

@xuanzebi
Owner

You could mask out the padding positions here when computing the loss.

That said, when I wrote this, the padding positions have label idx 0, so the effect on the loss didn't seem significant, and I didn't add masking.
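
For reference, a minimal sketch of that masking fix (not code from this repo; `input_mask` is assumed to be the usual 1-for-real-token / 0-for-padding tensor from a BERT-style input pipeline, and the name `masked_sequence_loss` is illustrative):

```python
import tensorflow as tf

def masked_sequence_loss(logits, labels, input_mask, num_labels):
    """Token-level cross-entropy that ignores padded positions."""
    log_probs = tf.nn.log_softmax(logits, axis=-1)                        # [batch, seq, num_labels]
    one_hot_labels = tf.one_hot(labels, depth=num_labels, dtype=tf.float32)
    per_token_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1)  # [batch, seq]
    mask = tf.cast(input_mask, tf.float32)  # 1.0 at real tokens, 0.0 at padding
    # Zero out padded positions, then average over real tokens only.
    return tf.reduce_sum(per_token_loss * mask) / (tf.reduce_sum(mask) + 1e-12)
```

Dividing by the mask sum averages over real tokens only, so padded positions contribute neither to the loss value nor to its gradient.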


zdgithub commented Mar 1, 2022

> You could mask out the padding positions here when computing the loss.
>
> That said, when I wrote this, the padding positions have label idx 0, so the effect on the loss didn't seem significant, and I didn't add masking.
@xuanzebi Hello,
Why would the padding positions having label id 0 have little effect on the loss? In that case the 0th dimension of the one-hot label vector is 1, isn't it?
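
An illustrative check of what this question points at (assumed toy values, not code from the repo): at a padded position labeled 0, the one-hot vector is [1, 0, ...], so an unmasked reduce_sum still picks up -log p(class 0) there:

```python
import tensorflow as tf

logits = tf.constant([[[2.0, -1.0, 0.5],    # a real token
                       [0.0,  0.0, 0.0]]])  # a padded token with label id 0
labels = tf.constant([[1, 0]])
one_hot_labels = tf.one_hot(labels, depth=3, dtype=tf.float32)
log_probs = tf.nn.log_softmax(logits, axis=-1)
per_token_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1)
print(per_token_loss.numpy())  # second entry ≈ 1.0986 (-log 1/3): the padding slot is not zero
```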
