Non-deterministic NLU training on GPU #6040
Labels:
- area:rasa-oss 🎡 (Anything related to the open source Rasa framework)
- stale
- type:bug 🐛 (Inconsistencies or issues which will cause a problem for users or implementors.)
Rasa version: 1.9.4
Python version: 3.6.9
Operating system (windows, osx, ...): linux
Issue:
NLU training on GPU is not reproducible: each training run with the same pipeline and the same training set produces a model that performs differently. I understand that GPU training is inherently non-deterministic (although I think some progress has been made on that front), but the variations we see here are significant and make it really hard to compare the performance impact of pipeline configuration/parameter tuning.
We don't have this problem when training on CPU.
Note: we have 2701 intent examples (5 distinct intents)
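As a workaround sketch (not an official Rasa fix), reproducibility on GPU is usually approached by fixing every random seed and requesting deterministic GPU kernels from TensorFlow before it is imported. The environment variable `TF_DETERMINISTIC_OPS` exists in TensorFlow 2.1 (the version Rasa 1.9.x depends on), though not every op has a deterministic GPU implementation, so this reduces rather than guarantees run-to-run variation. The seed value below is arbitrary:

```python
import os
import random

# Must be set BEFORE TensorFlow is imported anywhere in the process.
# Requests deterministic cuDNN/GPU kernels where TF 2.1+ supports them.
os.environ["TF_DETERMINISTIC_OPS"] = "1"

# Fix Python's hash randomization for any child processes.
os.environ["PYTHONHASHSEED"] = "0"

SEED = 42  # arbitrary choice; any fixed value works

# Seed the Python-level RNG; a real training run would also seed
# NumPy and TensorFlow, e.g.:
#   numpy.random.seed(SEED)
#   tensorflow.random.set_seed(SEED)
random.seed(SEED)

# Demonstration: re-seeding reproduces the same draw.
random.seed(SEED)
first = random.random()
random.seed(SEED)
second = random.random()
print(first == second)
```

In addition, some Rasa NLU components accept a `random_seed` option in `config.yml` (for example the embedding-based intent classifiers); fixing that seed controls the Python/NumPy/TF side, but GPU kernel non-determinism can still remain, which may be what this issue observes.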
Command or request that led to the issue:
Content of configuration file (config.yml):