Conversation
@mattupstate ping :)
I don't like this implementation because it does not provide the correct application context. It just makes a dummy app with the same configuration values, when an application could be configured in many other ways. For example:

```python
from app_package import app

@job
def long_process():
    with app.app_context():
        # your job code...
```
This is true, but it is not possible to build the correct context (what is the correct context, by the way?). It is the same situation with all other background task schedulers (GAE background tasks work differently). I think this is OK. This implementation allows copying the configuration.
I beg to differ. Please refer to the pseudocode I provided above. Additionally, refer to this example of how one might implement this with Flask and Celery.
The example you provided isn't the only possible one.
I still don't believe this is the correct approach, as it only solves your specific problem. I find it odd that you don't configure your application in the factory method. I would suggest having one function that returns a properly configured application instance. That makes it easy to get an app instance in whatever context (delayed job, management script, etc.) you want.
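A minimal sketch of the factory approach being suggested here (the function name and config keys are illustrative, not from the project):

```python
import os
from flask import Flask

def create_app():
    """Application factory: returns a fully configured Flask app.

    Configuration is read from the environment in this sketch, but any
    source works -- the point is that every context (web server,
    delayed job, management script) calls this one function to get a
    correctly configured application.
    """
    app = Flask(__name__)
    app.config["DEBUG"] = os.environ.get("FLASK_DEBUG", "0") == "1"
    app.config["DATABASE_URL"] = os.environ.get("DATABASE_URL", "sqlite:///dev.db")
    return app

def long_process():
    # A delayed job builds its own context on demand from the factory.
    app = create_app()
    with app.app_context():
        pass  # job code that needs app.config, extensions, etc.
```

With this layout there is no need to copy configuration values into a dummy app; the worker simply calls the factory.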
I have reasons for making it this way. Configuration depends on the deployment: for example, on Heroku you read environment variables, while a test or beta server may use different databases, etc. I want to store configuration outside of the code. You probably have a different setup and this fits your needs, but it is not mandatory. It is the only solution I found.
You could encapsulate these lines into a separate function to achieve what I'm talking about. |
I found it useful to be able to access the Flask configuration inside a delayed job, so I implemented an additional method that allows running a delayed job inside a Flask context. It is already useful to me.
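For clarity, the config-copying approach described in this thread might look like the sketch below (the decorator name and signature are assumptions for illustration, not the project's actual API):

```python
from functools import wraps
from flask import Flask

def job_in_app_context(config):
    """Hypothetical decorator: rebuilds a dummy app from copied
    configuration values and pushes its application context around
    the job body. This mirrors the 'dummy app with the same
    configuration values' approach debated above."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            app = Flask(__name__)
            app.config.update(config)  # copy of the original app's config
            with app.app_context():
                return func(*args, **kwargs)
        return wrapper
    return decorator

# Usage: the job can read current_app.config inside the worker.
# @job_in_app_context({"DATABASE_URL": "sqlite:///dev.db"})
# def long_process():
#     ...
```

This gives the job access to configuration values, though, as noted above, it is not the same as running inside the real application's context (extensions and other factory-time setup are absent).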