Docker on digital ocean app platform #522
Comments
I am evaluating django-q with the same docker-compose + Digital Ocean settings; I suppose you also used cookiecutter-django. I am just starting and do not have a solution; I am replying only to get notified of any answer. I suggest you rename the title, since people might not open a thread that reads as nothing but compliments. |
I am hoping someone has done this before and will share their setup with the community. If not, I could give it a try myself, but my time for these things is limited. |
Even better :) |
I am working with some engineers at Digital Ocean; hopefully we will figure this out. The App Platform on Digital Ocean is awesome. It is pretty new, and they are making it better all the time. |
You will probably have to create a Dockerfile similar to the Django one and then set the CMD to run `python manage.py qcluster`. |
cookiecutter-django has a docker-compose setup for Celery. There are scripts for celery, flower, etc. in its Dockerfile. I have tried this setup on DigitalOcean and can confirm that it works, so we could just adapt it for django-q. |
The following is my last reply to Digital Ocean tech support: I have solved the problem! I created an executable script in the project root called DOfile (like a Procfile for Heroku) and set the run command to `bash DOfile`. This works! I also opened an issue on GitHub in the django-q project, which is the component in my project that executes the qcluster command. It seems other people are having the same problem on the Digital Ocean App Platform. I will share this solution with them. -John |
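Editorial note: the actual DOfile contents were not captured in this thread. Based on the process listing posted later (qcluster and uwsgi both children of `bash DOfile`), it presumably looked something like the sketch below; the uwsgi config path comes from another user's setup, so substitute your own.

```shell
#!/bin/bash
# Hypothetical reconstruction of the DOfile described above.
# Start the django-q cluster in the background...
python manage.py qcluster &
# ...then run the web server in the foreground so the container stays alive.
uwsgi saleor/wsgi/uwsgi.ini
```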
@jyoost I have not done it yet, but I think you can use Honcho (or a similar tool) to start the Django web server and django-q in one container. Related blog: https://www.cloudbees.com/blog/using-honcho-create-multi-process-docker-container/ |
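Editorial note: a sketch of the Honcho approach, with assumed file names and commands (none of these are confirmed by the thread). honcho reads a Procfile and starts every listed process, so the container can have a single start command:

```
# Procfile -- process names and commands are illustrative
web: gunicorn config.wsgi --bind 0.0.0.0:8000
worker: python manage.py qcluster
```

In the Dockerfile, `CMD ["honcho", "start"]` would then launch both processes in one container.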
@guettli my solution was to package django-q with the Django project and deploy it with the least amount of system configuration possible, keeping it as simple as possible. My DOfile solution on Digital Ocean does this, with no requirements outside the Django packages. |
@jyoost what is a DOfile? I googled for it, but found no useful answer. |
Must be something Digital ocean (DO) specific. |
If you read back in this thread, I made it up 😁. I describe what is in it; I did not want it to be confused with Heroku's Procfile. |
Hey there, this is what I get in the logs:
Is it possible to deploy django and django-q separately on App Platform? On Heroku I was able to do this by specifying the worker line in the Procfile. Thanks! |
If you read the above thread I give detailed instructions on exactly how to
do this.
On Wed, Apr 28, 2021, 7:48 AM stackbomb wrote:
Hey there,
I am trying to deploy two components on DigitalOcean App Platform.
The first one is a simple Django app, and I was able to deploy it with no
issues.
The second one is Django Q deployed as a worker in App Platform. The run
command is: python manage.py qcluster
This is what I get in the logs:
```
worker | 2021-04-28 16:38:02 Traceback (most recent call last):
worker | 2021-04-28 16:38:02 File "manage.py", line 21, in <module>
worker | 2021-04-28 16:38:02 main()
worker | 2021-04-28 16:38:02 File "manage.py", line 17, in main
worker | 2021-04-28 16:38:02 execute_from_command_line(sys.argv)
worker | 2021-04-28 16:38:02 File "/workspace/.heroku/python/lib/python3.7/site-packages/django/core/management/__init__.py", line 401, in execute_from_command_line
worker | 2021-04-28 16:38:02 utility.execute()
worker | 2021-04-28 16:38:02 File "/workspace/.heroku/python/lib/python3.7/site-packages/django/core/management/__init__.py", line 395, in execute
worker | 2021-04-28 16:38:02 self.fetch_command(subcommand).run_from_argv(self.argv)
worker | 2021-04-28 16:38:02 File "/workspace/.heroku/python/lib/python3.7/site-packages/django/core/management/base.py", line 328, in run_from_argv
worker | 2021-04-28 16:38:02 self.execute(*args, **cmd_options)
worker | 2021-04-28 16:38:02 File "/workspace/.heroku/python/lib/python3.7/site-packages/django/core/management/base.py", line 369, in execute
worker | 2021-04-28 16:38:02 output = self.handle(*args, **options)
worker | 2021-04-28 16:38:02 File "/workspace/.heroku/python/lib/python3.7/site-packages/django_q/management/commands/qcluster.py", line 22, in handle
worker | 2021-04-28 16:38:02 q.start()
worker | 2021-04-28 16:38:02 File "/workspace/.heroku/python/lib/python3.7/site-packages/django_q/cluster.py", line 53, in start
worker | 2021-04-28 16:38:02 self.stop_event = Event()
worker | 2021-04-28 16:38:02 File "/workspace/.heroku/python/lib/python3.7/multiprocessing/context.py", line 92, in Event
worker | 2021-04-28 16:38:02 return Event(ctx=self.get_context())
worker | 2021-04-28 16:38:02 File "/workspace/.heroku/python/lib/python3.7/multiprocessing/synchronize.py", line 324, in __init__
worker | 2021-04-28 16:38:02 self._cond = ctx.Condition(ctx.Lock())
worker | 2021-04-28 16:38:02 File "/workspace/.heroku/python/lib/python3.7/multiprocessing/context.py", line 67, in Lock
worker | 2021-04-28 16:38:02 return Lock(ctx=self.get_context())
worker | 2021-04-28 16:38:02 File "/workspace/.heroku/python/lib/python3.7/multiprocessing/synchronize.py", line 162, in __init__
worker | 2021-04-28 16:38:02 SemLock.__init__(self, SEMAPHORE, 1, 1, ctx=ctx)
worker | 2021-04-28 16:38:02 File "/workspace/.heroku/python/lib/python3.7/multiprocessing/synchronize.py", line 59, in __init__
worker | 2021-04-28 16:38:02 unlink_now)
worker | 2021-04-28 16:38:02 OSError: [Errno 38] Function not implemented
worker | 2021-04-28 16:38:02 Sentry is attempting to send 0 pending error messages
worker | 2021-04-28 16:38:02 Waiting up to 2 seconds
worker | 2021-04-28 16:38:02 Press Ctrl-C to quit
```
Is it possible to deploy separately django and django q on app platform?
On Heroku I was able to do this by specifying in Procfile the worker line.
If I understand correctly, @jyoost <https://github.com/jyoost> is
deploying both django and the workers in the same component. Am I right?
Thanks!
|
Make sure your command in the dashboard is `bash DOfile` and that the deploy completes and does not roll back to the previous version. |
Hey @jyoost, I tried to deploy a single component with the DOfile as you have described, but when I go to the console to check if the worker is live by running `python manage.py qinfo` or `python manage.py qmonitor`, I cannot see the clusters. I have also tried to upgrade the machine to a higher CPU count, but still no luck. |
```
***@***.***:/app# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 Apr27 ? 00:00:00 bash DOfile
root 2 1 0 Apr27 ? 00:00:14 python manage.py qcluster
root 29 2 0 Apr27 ? 00:04:16 python manage.py qcluster
root 3 1 0 Apr27 ? 00:00:30 uwsgi saleor/wsgi/uwsgi.ini
root 30 29 0 Apr27 ? 00:00:00 python manage.py qcluster
root 31 29 0 Apr27 ? 00:00:00 python manage.py qcluster
root 32 29 0 Apr27 ? 00:00:00 python manage.py qcluster
root 33 29 0 Apr27 ? 00:00:00 python manage.py qcluster
root 34 29 0 Apr27 ? 00:00:00 python manage.py qcluster
root 35 29 0 Apr27 ? 00:00:43 python manage.py qcluster
root 64 3 0 05:34 ? 00:00:04 uwsgi saleor/wsgi/uwsgi.ini
root 66 3 0 05:35 ? 00:00:04 uwsgi saleor/wsgi/uwsgi.ini
root 68 3 0 05:35 ? 00:00:04 uwsgi saleor/wsgi/uwsgi.ini
root 71 3 0 10:10 ? 00:00:04 uwsgi saleor/wsgi/uwsgi.ini
root 76 0 0 17:46 ? 00:00:00 bash
root 77 76 0 17:46 ? 00:00:00 ps -ef
```
```
***@***.***:/app# python manage.py qinfo
OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k
-- Django Q 1.3.5 on Redis 6.2.1 --
Clusters    1
Workers     4
Restarts    0
Queued      0
Successes   0
Failures    0
Schedules   0
Tasks/day   0.00
Avg time    0.0000
***@***.***:/app#
```
On Wed, Apr 28, 2021 at 10:29 AM stackbomb wrote:
Hey @jyoost <https://github.com/jyoost> , tried to deploy a single
component with the DOfile as you have described but when I go to the
console to check if the worker is live doing python manage.py qinfo Or python
manage.py qmonitor I cannot see the clusters.
I have also tried to upgrade the machine to higher CPU number but still no
luck.
Could you confirm you are seeing the clusters by running the commands I
have posted? Thank you!
|
Would you mind sharing your qcluster configuration? I am using these settings, but no luck:
Thanks for your help! |
What version of Django are you using? When I looked at the above errors on a big screen (not my phone), they look like errors I got when trying to integrate django_q into an older version of Django. Requirements
|
Here are my requirements (in part):
I am trying to upgrade the django-q lib; will let you know! |
Q_CLUSTER = { |
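Editorial note: the poster's actual `Q_CLUSTER` settings were lost when this thread was archived. For reference, a django-q Redis configuration has roughly this shape; every value below is illustrative, not taken from the thread:

```python
# Illustrative django-q settings (would live in Django's settings.py).
# Names follow the django-q configuration schema; values are hypothetical.
Q_CLUSTER = {
    "name": "myproject",   # hypothetical cluster name
    "workers": 4,          # number of worker processes
    "timeout": 60,         # seconds a task may run before being killed
    "retry": 120,          # seconds before a task is retried; should exceed timeout
    "redis": {
        "host": "127.0.0.1",
        "port": 6379,
        "db": 0,
    },
}
```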
django-picklefield? |
Trying to use your same settings |
Still getting these errors, very strange:
|
I'm having the exact same issue, @stackbomb. I am not, however, using Docker. Manually trying to run qcluster after a successful deployment results in the same error. |
@TSolo315 neither the OP nor I am using Docker; the containerization is done automatically by App Platform (@jyoost, please change the title to something like "Django Q on Digital Ocean App Platform"). What I've tried so far:
Still no luck here, but I'll keep updating this issue as I look for a way to deploy in a standard fashion. I have also created a repo: https://github.com/stackbomb/django-q-do-app-platform |
Ah, I was confused. The DO App Platform does support using a Procfile; I am not sure if that has changed since the OP posted this. I've been (frustratingly) working with support over the past several days about this. Everything points to it being a permissions issue: https://stackoverflow.com/questions/6033599/oserror-38-errno-38-with-multiprocessing They told me they would consult the engineering team and get back to me, but I'm not particularly optimistic. Not sure why it worked for the OP, though. Which datacenter region did you choose, @stackbomb? I wonder if that's the difference (San Francisco 3 here). |
Yeah, really frustrating. I think App Platform does not support a Procfile. This is a reply I got yesterday from DO support:
BTW, I also think it is an error with permissions, and I linked them the exact comment you sent lol. @TSolo315 The only person who could help us understand better is the OP.
Thank you! |
Procfile is definitely supported (I tested it myself while trying to get this working). You have to remove the run command in the web GUI, though, or that will take precedence. It didn't fix the problem: the worker process set to run qcluster fails silently. |
Update from DO support:
|
I don't understand what the thread about fstab has to do with this; can someone please enlighten me? When running commands inside a Docker container on App Platform, you are running as root, so there are no permission issues. Docker containers have a transitory file system that disappears with each deploy or restart: anything in the container at creation stays, and any changes made after creation eventually go away. django_q runs without modifying, or needing to modify, the file system. I'm going on 8 days of a Django project with django_q running a scheduled task every hour on the Digital Ocean App Platform. |
It seems related to the multiprocessing error both @TSolo315 and I are getting. Would you mind sharing what datacenter and what machine you are using? We are trying to replicate your setup to understand what is wrong with ours. The problem is that both the deployment and a manual run of `python manage.py qcluster` result in the same error. |
It probably means that the OS image used by DO doesn't have a functioning shared-semaphore implementation. You should ask them whether they support Python multiprocessing on their platform, or whether they could. |
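Editorial note: as a quick diagnostic for the `OSError: [Errno 38] Function not implemented` seen above, a small standalone script (not from the thread) can check whether the platform supports the multiprocessing primitives django-q needs:

```python
import multiprocessing


def shared_semaphores_supported() -> bool:
    """Return True if the OS can create POSIX shared semaphores.

    django-q's cluster startup calls multiprocessing.Event(), which
    allocates a SemLock under the hood; on platforms without a working
    sem_open() this raises OSError: [Errno 38] Function not implemented.
    """
    try:
        multiprocessing.Event()
        return True
    except OSError:
        return False


if __name__ == "__main__":
    print("shared semaphores supported:", shared_semaphores_supported())
```

Running this in the App Platform console (or as a pre-deploy check) would show immediately whether qcluster has any chance of starting there.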
I will post my Dockerfile in the next post. It must be doing something that modifies the OS image DO puts in the container. All I know is I have never seen an error like this. I have 2 apps on the DO App Platform: 1 in the NYC NOC and 1 in the Singapore NOC. |
|
After several days of them asking me to try different deployment methods, and after I asked about permissions and shared semaphores, the support team finally got back to me with:
Did you figure out a workaround with a dockerfile @stackbomb? |
Did you tell them I have 5 instances running on app platform? 🤓 |
No, but they did link me here to try your "DOfile" method, which had led them to assume multiprocessing was inherently supported. Their support team doesn't really know the limitations of the platform and relies on googling solutions, which ends with them spending six hours only to suggest something I had tried days before I even contacted them. Rinse and repeat. My app uses gunicorn instead of uwsgi; other than your Dockerfile idea, that's my only theory for the discrepancy. |
😂 I've been on that merry-go-round with them before. Try the Dockerfile; I believe it modifies the OS or Python. I'll post my requirements.txt, since it may install a library at the OS level. Since Docker runs as root, it could modify anything. |
```
aniso8601==7.0.0
django-picklefield==3.0.1
idna==2.10
```
|
Well, deploying via Dockerfile did not fix the problem (but it was educational). I'll try adding a few of the packages we didn't have in common, but I'm on the verge of giving up. |
@TSolo315 that is too bad. Digital Ocean needs to provide better support for the App Platform. |
Don't know if it helps, but I have a Django setup with:

```yaml
version: "3.8"
services:
  app:
    image: dockerregistry.azurecr.io/app:latest
    command: gunicorn --workers 4 --threads 4 -b 0.0.0.0:8000 --worker-tmp-dir /dev/shm --access-logfile '-' --error-logfile '-' -k uvicorn.workers.UvicornWorker config.asgi
    restart: on-failure
    ports:
      - "8000:8000"
  qcluster:
    image: dockerregistry.azurecr.io/app:latest
    command: python manage.py qcluster
    restart: on-failure
    depends_on:
      - app
```

As you can see, both services use the same Docker image. |
I'm also on the DigitalOcean App Platform. I just added a worker and it's not working for me. Run command: `python manage.py qcluster`. Output: Do you know what I can do? I found on Stack Overflow that the DigitalOcean App Platform doesn't support it. I tried making a bash file and running it from there, but it's not working. |
It has to do with the OS version and Python version in your Docker image. Post your Dockerfile. |
It should be in the root folder of your project.
On Sun, Jan 16, 2022, 9:33 AM Emilianocm23 wrote:
Thanks for your reply, hmm how do i get the dockerfile?
|
Not an update, just a heads-up for anyone who stumbles upon this looking for a solution to the error above: there have been no changes on Digital Ocean's side, so you still can't get Django Q to work on their App Platform. You might have better luck trying a droplet instead, but I haven't tried it. |
1/9/2023 Update: The problem still persists. Django apps requiring multiprocessing do not work on the App Platform. For those integrating with Amazon services: in my experience the Boto3 library uses multiprocessing enough to make the DO App Platform functionally impossible to use. I think "No ETA as of now" means "don't hold your breath." |
My app using Django-Q ran on App Platform for a year without ever being restarted. Django itself was running 4 processes with uwsgi. How I did it is in this thread. |
@jyoost, I'm sorry to be a downer but I believe your solution doesn't really work. Using |
Believe? Don't use this solution then. Don't doubt it without proof or a better solution. You are a downer. |
Bash script that creates an obviously failing background task, then runs something else:
Proof that running it exits with 0, meaning the shell detects no problem:
Are you sure your background jobs are actually being executed? |
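Editorial note: the script and output from this exchange were not captured. A minimal reconstruction of the behavior being described (a background job fails, yet the script exits with status 0) might look like:

```shell
#!/bin/sh
# Hypothetical reconstruction -- the original script was not captured.
# A background task that obviously fails (the command does not exist):
this-command-does-not-exist &
# The script then runs something else in the foreground:
echo "foreground work continues"
# The script's exit status is that of the last foreground command,
# so the failed background task goes unnoticed by the shell.
```

Running this and checking `$?` afterwards yields 0, which is the point being made: a zero exit status does not prove the background qcluster process actually started.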
Yes, they are running. They ran every hour for a year, with no reboots. Are you sure you read my instructions? |
Assuming we talk about this:
Then yes, I did. |
Any news on this? |
First, this is a great product.
I have used django-celery-beat; this is a more straightforward approach.
My question is:
I am running this on the new App Platform on Digital Ocean. They have no Procfile like Heroku.
It is deployed as a Docker container, but you must enter the run command manually.
I cannot get both uwsgi and qcluster to run at deploy. This is such a nice, inclusive package that it would be great to start everything at once.
Any suggestions?