Why are NeuroVault uploads failing? #565
Comments
It seems that Redis actually just lost track of some of these, even though the collection was created. This adds more impetus for giving Celery db access, as that would keep us from storing important data in our Redis queue. See: https://www.caktusgroup.com/blog/2016/10/18/dont-keep-important-data-your-celery-queue/ With db access, Celery could set the status of the task itself, without relying on the Redis channel to send that back. That way "important" data, such as the collection id, would not be lost, and the changes would be "pushed" rather than having to be pulled from Redis. This would apply to compilation and reports as well.
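A minimal sketch of the "push status to the db" idea, using an in-memory sqlite3 table as a stand-in for the project's real database; the table layout, task function, and ids here are hypothetical, not the actual NeuroVault/Celery code:

```python
import sqlite3

# Stand-in for the app database; in practice this would be the Flask app's db.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE task_status (task_id TEXT PRIMARY KEY, collection_id INTEGER, status TEXT)"
)

def upload_collection(task_id, collection_id):
    """Hypothetical upload task: records its own status in the db,
    so the collection id survives even if Redis loses the task."""
    conn.execute(
        "INSERT INTO task_status VALUES (?, ?, 'PENDING')", (task_id, collection_id)
    )
    # ... perform the actual NeuroVault upload here ...
    conn.execute("UPDATE task_status SET status='OK' WHERE task_id=?", (task_id,))
    conn.commit()

upload_collection("abc123", 42)
row = conn.execute("SELECT collection_id, status FROM task_status").fetchone()
print(row)  # (42, 'OK')
```

The status is written by the worker itself ("pushed"), so nothing needs to be fetched back through the Redis result channel.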
Other than possibly slight inelegance (which IMO is a small cost), is there any major downside to just using the same container for both the app and the worker? This is how Neurosynth is set up, and it seems to work fine; they just have different entry points.
I think that having separate containers is not the issue (the other volume can be mounted). The real issue is the circular imports in the Flask project, which make it really hard to import models from outside Flask itself.
Closed with #575. However, we need to investigate whether the file uploads start failing again, and will add a retry option to the CLI.
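A rough sketch of what the proposed CLI retry option could look like: re-attempt a failed upload a few times with exponential backoff. `do_upload`, the exception type, and the parameters are assumptions for illustration, not the project's actual API:

```python
import time

def upload_with_retry(do_upload, attempts=3, base_delay=1.0):
    """Call do_upload(), retrying on failure with exponential backoff."""
    for i in range(attempts):
        try:
            return do_upload()
        except RuntimeError:
            if i == attempts - 1:
                raise  # out of attempts: surface the error to the CLI user
            time.sleep(base_delay * 2 ** i)

# Simulated flaky upload: fails twice, then succeeds.
calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("upload failed")
    return "collection 42"

result = upload_with_retry(flaky, base_delay=0.01)
print(result, len(calls))  # collection 42 3
```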
A lot of recent uploads seem to be failing. Why? Check on this. Maybe we are running out of cores on the backend (with the analyses running as well).
Perhaps we also need to add some more robustness here, or a cleanup of old tasks in Celery/Redis.
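One possible shape for the cleanup idea: drop queued task records older than some cutoff so stale entries can't accumulate. The `tasks` dict and the one-day threshold below are placeholders for whatever key/timestamp structure actually lives in Redis:

```python
import time

MAX_AGE = 24 * 3600  # assumed threshold: one day

def prune_stale(tasks, now=None):
    """Return only the task records younger than MAX_AGE.
    `tasks` maps task id -> enqueue timestamp (a stand-in for Redis keys)."""
    now = now or time.time()
    return {tid: ts for tid, ts in tasks.items() if now - ts <= MAX_AGE}

now = time.time()
tasks = {"fresh": now - 60, "stale": now - 3 * 24 * 3600}
print(sorted(prune_stale(tasks, now)))  # ['fresh']
```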