weird max_training_processes parameter behaviour #1442
Labels
type:bug 🐛
Inconsistencies or issues which will cause an issue or problem for users or implementors.
Comments
Thanks for raising this issue, @ricwo will get back to you about it soon.

@dcalvom FYI since you were likely the last one to touch this code.
ricwo added a commit that referenced this issue on Oct 5, 2018
@frascuchon Thanks for pointing out this issue - it is indeed a bug. I have just pushed a fix with #1449

Cool @ricwo! Do you know in which release this fix will be included?

@frascuchon It'll be in the next release, so
ricwo added a commit that referenced this issue on Oct 5, 2018
znat referenced this issue in botfront/rasa-for-botfront on Oct 16, 2018:

* 0-13-7: (40 commits)
  - preparing next version #92
  - set one value #1442
  - travis did not report status -> rebuild #1442
  - make max_training_processes apply globally
  - removed livechat.html
  - removed livechat
  - added custom language example
  - prepared next release
  - update pushing tags command in readme #1437
  - remove rogue newlines #1437
  - use multiprocessing start method spawn
  - changelog #1437
  - check py2 first #1437
  - run tf training in separate thread on py3
  - update on language support
  - language support
  - updated docs and community links
  - prepare next release
  - implementing #1425
  - annotation too long
  - svm supports gamma parameter
  - ...
Rasa NLU version: 0.13.5
Operating system (windows, osx, ...): Ubuntu Server 16.04

Issue:
Setting the argument

max_training_processes: 1

makes the server deny training more than one model for a project, with an error. But training a new project is still possible.

I really don't know whether this is an issue or expected behaviour. If max_training_processes relates to a single project, the error seems to indicate that you cannot train at all. If it's a global parameter, there is a bug here. In that case, I've located the bug in the source code and can prepare a small pull request for it.

Thanks in advance
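To make the two interpretations concrete, here is a minimal sketch of a *global* concurrency cap, which is what the fix in #1449 went for ("make max_training_processes apply globally"). This is illustrative only, not Rasa's actual implementation; the `TrainingLimiter` class and its method names are hypothetical:

```python
import threading


class TrainingLimiter:
    """Hypothetical global cap on concurrent training jobs.

    With a global limit, the count is over ALL projects: a second
    training is rejected even if it targets a different project,
    which matches the behaviour reported in this issue.
    """

    def __init__(self, max_training_processes=1):
        self.max = max_training_processes
        self.active = 0
        self.lock = threading.Lock()

    def try_acquire(self):
        # Called when a training request comes in; returns False when
        # the server should answer "max training processes reached".
        with self.lock:
            if self.active >= self.max:
                return False
            self.active += 1
            return True

    def release(self):
        # Called when a training job finishes.
        with self.lock:
            self.active -= 1
```

Usage: with `max_training_processes: 1`, a second concurrent request fails regardless of project (`limiter.try_acquire()` returns `False` until the first job calls `release()`). A per-project reading would instead keep one such counter per project name.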