some times get { "error": "The server can't train more models right now!" } while training new models #1323
See PR #1081, which added support for training multiple models in parallel for a single project. You can pass this as an argument when starting the server; see this page in the docs: http://rasa.com/docs/nlu/config/
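As a rough sketch of what "pass this as an argument" means here (the exact flag name should be checked against the linked config docs for your NLU version; `--max_training_processes` and the paths below are assumptions for illustration):

```shell
# Start the Rasa NLU server, allowing up to 2 models to train
# concurrently instead of the default single training process.
# --path points at the directory where trained projects are stored
# (both values here are illustrative, not from the thread).
python -m rasa_nlu.server \
  --path projects \
  --max_training_processes 2
```

With the default of one training process, a second training request for the same project while one is still running is rejected with the "can't train more models right now" error, which matches the behaviour reported in this issue.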
@amn41 thanks for this. Yes, I had guessed as much; I will add it to my Docker container startup build on the Rasa base image 👍
I have this too on the latest NLU. I train over HTTP, got a successful response, then retrained and got the error. The server was started with default parameters. The status endpoint says the project is still training.
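For context, the "train over HTTP, then check the status endpoint" flow described above looks roughly like this (a sketch assuming the server runs on `localhost:5000` and a project named `my_project`; the endpoint shapes should be verified against the HTTP API docs for your NLU version):

```shell
# Kick off training for a project over HTTP; the request body is the
# training data plus pipeline config (file name is illustrative).
curl -X POST "http://localhost:5000/train?project=my_project" \
  -H "Content-Type: application/x-yml" \
  --data-binary @training_data.yml

# Ask the server which projects exist and whether any are still
# training. If the previous training never finished (the bug fixed
# in #1343), a retrain attempt is rejected with the error from the
# issue title.
curl "http://localhost:5000/status"
```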
Are you using the TensorFlow backend? Could you please check whether #1343 fixes your issue?
@amn41 it does! |
@amn41 Is 0.13.2 available in the latest Docker image? Using 0.13.2-full and latest still shows the version as 0.13.1.
Rasa NLU version: 0.13.0-full
****:
Content of model configuration file:
Issue:
I sometimes get `{ "error": "The server can't train more models right now!" }`. I am currently using the latest Docker version of Rasa. I am not sure whether this is an issue, but is there a limit on the number of models we can train? If so, how can we scale to accommodate more models, or should each container be limited to a specific number of models?