Code question 1
In model.py, `unmasked_attr_loss` on line 125 has shape `[batch_size,]`, while `attr_mask` on line 126 has shape `[batch_size, 1]`. After `tf.multiply`, broadcasting makes the resulting `attr_loss` have shape `[batch_size, batch_size]`. Shouldn't `attr_mask` first be reshaped to `[batch_size,]`?
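A minimal sketch of the shape issue described above (using NumPy stand-ins for the tensors; NumPy follows the same broadcasting rules as `tf.multiply`, and the variable values here are placeholders, not the repository's actual data):

```python
import numpy as np

batch_size = 4
# Stand-ins for the tensors described above; only the shapes matter here.
unmasked_attr_loss = np.ones(batch_size)   # shape [batch_size,]
attr_mask = np.ones((batch_size, 1))       # shape [batch_size, 1]

# Element-wise multiply broadcasts [batch_size,] against [batch_size, 1],
# producing a [batch_size, batch_size] matrix instead of a per-example loss.
attr_loss_buggy = unmasked_attr_loss * attr_mask
print(attr_loss_buggy.shape)   # (4, 4)

# Reshaping the mask to [batch_size,] restores the intended per-example shape.
attr_loss_fixed = unmasked_attr_loss * attr_mask.reshape(-1)
print(attr_loss_fixed.shape)   # (4,)
```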
Code question 2
In model.py, should line 71 use `tf.reduce_mean` rather than `tf.reduce_sum`?
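Why this distinction matters, sketched with NumPy (`np.sum`/`np.mean` behave like `tf.reduce_sum`/`tf.reduce_mean` here; the loss values are made up for illustration): summing per-example losses makes the total loss grow with batch size, effectively coupling the learning rate to the batch size, while averaging keeps it invariant.

```python
import numpy as np

# Identical per-example losses at two different batch sizes.
losses_small = np.array([0.5, 0.5])
losses_large = np.array([0.5] * 8)

# reduce_sum: total loss scales with batch size.
print(np.sum(losses_small), np.sum(losses_large))    # 1.0 4.0

# reduce_mean: loss is independent of batch size.
print(np.mean(losses_small), np.mean(losses_large))  # 0.5 0.5
```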
Reproduction
Here are my reproduced results: Acc: 93.4; MP: 56.9; MR: 57.7; F1: 55.6. The F1 score does not reach the 64.9 reported in the paper. What might be causing this gap? Thanks!
We refactored our code to improve readability, and that refactoring introduced these issues. Thank you for pointing them out; we will push the corrected code shortly.
Hi, we have uploaded the corrected code. Due to some inherent randomness, the F1 score may fall between 62 and 64.9.
We have fixed the bugs in the previous code; the current version reaches the results reported in the paper.