Airflow worker and scheduler are not picking up jobs #94
Comments
Me too! Any ideas? |
Same for me, and it seems that it might also cause #44. |
I left several comments in #44 about this, since both might be related. To recap: I have the same issue with 1.8.1, but in my case it seems like a consequence of #94. I tried with previous versions of Docker: 1.12.6 (14937) and 17.03.1 (16048); the problem is still the same. DAGs are found and launched in the webserver, but the scheduler and worker don't actually run them:
|
I solved the issue for me: the problem was a recent Docker update, so I had to delete all Docker files, install a previous version, and reboot. Now it works with Version 17.03.0-ce-mac2 (15654) |
@kush99993s Do you use the example DAGs or your own DAGs? |
Just trying out Airflow for the first time this week, I was struggling with this issue of no jobs running on both the default test and local executor setups. Thanks to a post from @puckel I tried disabling the example DAGs, and now everything seems to be working well - tested with an example DAG from the Airflow repo, loaded into the container using docker-compose:
Tasks seem to run fine now - strange that the example DAGs flag led to this behaviour.
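For reference, a minimal sketch of the compose change, assuming the puckel/docker-airflow image (its entrypoint reads the LOAD_EX variable; service name and paths here are illustrative):
```yaml
# docker-compose.yml (excerpt): LOAD_EX=n tells the image's entrypoint
# not to load Airflow's bundled example DAGs
services:
  webserver:
    image: puckel/docker-airflow
    environment:
      - LOAD_EX=n          # disable the example DAGs
      - EXECUTOR=Local
    volumes:
      - ./dags:/usr/local/airflow/dags   # mount your own DAGs instead
```
|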
I'm running Airflow on Kubernetes based on this Dockerfile setting plus some adjustments, and I'm facing a similar issue. When I manually run a DAG, some tasks will run, but after that all remaining tasks get stuck in a queued state. I use CeleryExecutor with Redis.
I also see this log on a web container, but I'm not sure if it's related. The web server cannot retrieve a log from a worker directly, but it can eventually be seen via S3 when a task is complete, so I thought it's not a critical problem. Is the log retrieval related to this issue? So far, every time I see this issue I manually "clear" the tasks that get stuck, and then they run. I really have no clue what the root cause of this problem is🙁
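In case it helps, clearing a stuck task instance can also be done from the CLI; a sketch using the Airflow 1.x syntax (the DAG/task names and dates are placeholders):
```sh
# resets the matching task instances so the scheduler schedules them again
airflow clear -t stuck_task -s 2017-10-01 -e 2017-10-02 my_dag
```
|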
That's strange. I'm currently only using LocalExecutor - which works well - however when I try Celery I will see if I hit any trouble |
@rdtr I noticed the same issue. |
Hi guys! I'm seeing this repeated over and over:
```
webserver_1 | [2017-10-16 15:58:48,813] {jobs.py:1443} INFO - Heartbeating the executor
webserver_1 | [2017-10-16 15:58:49,816] {jobs.py:1407} INFO - Heartbeating the process manager
webserver_1 | [2017-10-16 15:58:49,817] {dag_processing.py:559} INFO - Processor for /usr/local/airflow/dags/tuto.py finished
webserver_1 | [2017-10-16 15:58:49,825] {dag_processing.py:627} INFO - Started a process (PID: 151) to generate tasks for /usr/local/airflow/dags/tuto.py - logging into /usr/local/airflow/logs/scheduler/2017-10-16/tuto.py.log
```
... over and over again. I have tried downgrading Docker, as mentioned earlier, from 17.09.0-ce to 17.03.0-ce, but I got the same problem. Could someone give me some help? |
We are running this Docker image in ECS. One or more tasks in the same DAG get queued but don't start; the workers and scheduler seem to have low CPU utilization. All tasks are Databricks operators, which sleep 60 seconds after checking job status. |
Any update on this from the Airflow team? This seems like a critical issue - tasks are not getting run. I am seeing this issue intermittently as well, with Airflow running on Amazon ECS |
Hello, I had a similar problem with a dockerized Airflow scheduler: the reason scheduling stopped was that the scheduler logs filled the disk.
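A workaround sketch, assuming the log path used by this image (the one visible in the logs above); the retention window is arbitrary:
```sh
# prune scheduler logs older than 7 days so they cannot fill the disk
find /usr/local/airflow/logs/scheduler -type f -mtime +7 -delete
```
|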
Hi, I am not using Docker to run Airflow but am facing the same problem.
DAGs are not running manually or even being picked up by the scheduler. I am using packaged DAGs.
If I place all the files in the dags folder instead of packaging them, the Airflow scheduler is able to schedule the DAGs.
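For context, a sketch of the layout Airflow's packaged-DAG support expects (file and package names here are illustrative):
```sh
# the DAG module must sit at the root of the zip; extra modules can ride along
zip -r my_dags.zip my_dag.py my_package/
mv my_dags.zip $AIRFLOW_HOME/dags/
```
|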
Same here. I'm using Airflow as a standalone app (not in Docker or Kubernetes). When I start the DAG, it shows its state as running, but none of the tasks ever show as queued or started, and it stays that way for a long time. I don't have any other DAG running or anybody else using this Airflow. |
Same problem as @mukeshnayak1
Do you have an idea how to deal with it? Running tasks works though... |
Same problem as @OleksiiDuzhyi |
Same issue as reported by @bhrd @OleksiiDuzhyi @mukeshnayak1, while running in standalone mode: Airflow 1.10.2, SequentialExecutor |
This has been the case for me too. It's driving me absolutely crazy. |
I had this problem for several hours today with several different containerized airflow options. This one from Bitnami is working for me, hope it helps someone else. |
I got this problem recently too. I did not fix the root cause, but making sure that I recreate the containers rather than just stopping and restarting them seems to avoid it. So simply stopping the containers might cause something to go wrong when restarting them? I'm just speculating |
Seeing this issue quite often as well. 50-100 DAGs in Running state, with only 3-4 "running" tasks. CPU/memory/disk space is all fine. We're using CeleryExecutor. Restarting containers sometimes helps, but a lot of the time we need to mark DAGs as successful to clear things out. |
I'm having the same issue, but only when using CeleryExecutor. LocalExecutor seems to work. |
To the people whose tasks never actually run: it sounds like an issue with the scheduler not being run at all (I wondered the same thing for half a day). If it is a
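In any case, the scheduler is a separate process that has to be started explicitly; a quick check that one is actually running:
```sh
# if no scheduler process is running, queued DAGs will never start
airflow scheduler
```
|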
Has anyone been able to solve this issue? I'm running into the exact same error, with the DAG in a running state but not actually picked up by the scheduler. [INFO] Handling signal: ttou - I read in a Stack Overflow post that handling ttou/ttin is gunicorn behaviour to refresh workers and is expected, but that doesn't quite explain why the scheduler is not picking up anything. |
Solution 1: try to run
Solution 2: run the airflow worker as a non-root user
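On solution 2: Celery itself commonly refuses to consume tasks when started as root (unless C_FORCE_ROOT is set), which would match the "worker never picks anything up" symptom. A sketch (the user name is illustrative):
```sh
# create a dedicated user and start the worker as that user
useradd -ms /bin/bash airflow
su airflow -c "airflow worker"
```
|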
I am having the same issue with running in ECS and using Databricks operators. I have 4 main DAGs, two with 2 tasks and two with 1 task. The scheduling seems to work fine for a while, then it stalls, with the DAG still "running", but tasks completed. It stays this way indefinitely, and no new tasks trigger. Restarting the service allows it to continue. As an added complication for debugging, I'm running in Fargate and it's not possible (or not easy) to see internals of the container. |
I ran into this one too and may have found a potential solution. In the doc for CeleryExecutor (https://airflow.apache.org/howto/executor/use-celery.html):
From this recommendation I configured my local setup to have a valid [...]. It's working okay so far, but the slowness of the tasks is pretty noticeable; there seems to be a 15-20 second gap between each task. Will give it a shot with the Docker-Compose one later this week.
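The settings in question are presumably the Celery broker and result backend; a sketch of a valid setup via environment variables (1.10-style config keys; the Redis/Postgres URLs are placeholders):
```sh
export AIRFLOW__CORE__EXECUTOR=CeleryExecutor
export AIRFLOW__CELERY__BROKER_URL=redis://redis:6379/0
export AIRFLOW__CELERY__RESULT_BACKEND=db+postgresql://airflow:airflow@postgres/airflow
```
Every worker also needs the same airflow install, config, and DAG files as the scheduler.
|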
"Solution 2: run the airflow worker as a non-root user" - oh, this worked for me on Airflow in a container, CeleryExecutor, RabbitMQ (another container) with a MySQL (localhost) backend. |
Having this with Celery as well as Dask; local is not working for me. EDIT: I have an AF scheduler, a Dask scheduler (3 Dask workers) and an AF webserver. Which should I run as root or non-root? |
Hi, just thought I'd post back here. We solved our issue by increasing the memory on the machine. My guess is the scheduler was dumping lots of errors, but we're currently unable to see the logs due to AWS Fargate limitations. We have a task queued up to expose the logs in S3, which will also allow us to see historical logs. They currently get wiped on deployment of a new container. |
|
@chris-aeviator how did you allow Dask workers to run Airflow tasks? I get a "no such file" error. Any setup script you can share? |
@tooptoop4
The trick is to create the same environment on the Airflow side and on the Dask side and then use the Dask executor (https://airflow.apache.org/howto/executor/use-dask.html). When you pip install airflow on the Dask workers (!!) they will magically pick up Airflow tasks (and run them with the sequential executor, as far as I remember).
Also make sure you have your DAG files cloned to the Airflow scheduler, worker AND the Dask worker.
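A sketch of the wiring, assuming 1.10-era config keys and a placeholder scheduler address:
```sh
# on the Airflow side: point the executor at the running Dask scheduler
export AIRFLOW__CORE__EXECUTOR=DaskExecutor
export AIRFLOW__DASK__CLUSTER_ADDRESS=dask-scheduler:8786

# on every Dask worker: same python environment + same DAG files
pip install apache-airflow
```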
|
I have the same issue. Are there any solutions for this one? |
@vasinata did you check the steps described in my last comment? Both the worker and the scheduler need the same Python packages and the same DAGs installed (at least a DAG file with the same filename on the scheduler; the content of the file does not matter much) for the worker to correctly pick up jobs.
I implemented this by making sure both containers are always in sync, with a Kubernetes sidecar doing a git clone + pip install on both.
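Roughly, the sidecar idea looks like this (a hypothetical pod-spec excerpt; image, repo URL and paths are placeholders):
```yaml
containers:
  - name: dag-sync            # runs next to the scheduler and the worker
    image: alpine/git
    command: ["sh", "-c",
              "while true; do git -C /dags pull || git clone $REPO_URL /dags; sleep 60; done"]
    volumeMounts:
      - name: dags
        mountPath: /dags      # shared with the airflow container
```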
In order to help you further, please describe your setup, the steps you run, and the issues you encounter.
|
@chris-aeviator Yes, I did check the steps you recommended. I have a Kubernetes cluster; every time it creates the pod, it pulls the image from the ECR repository (same as the scheduler pod) and mounts the EFS dags to it. I have even been trying to run test DAGs and I see the same issue: the task gets queued, the pod gets created on Kubernetes, the pod runs and successfully finishes, but the task never gets to a running or finished state. Am I missing anything? |
Have you checked end_date? The scheduler never picks up jobs after the DAG's end_date, even if you trigger it in the web UI.
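A quick illustration of where that sits in the DAG definition (names and dates are illustrative):
```python
from datetime import datetime
from airflow import DAG

dag = DAG(
    dag_id="my_dag",
    start_date=datetime(2020, 1, 1),
    end_date=None,  # once this date passes, no new runs are scheduled
    schedule_interval="@daily",
)
```
|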
I am having the same issue after I migrated to 2.0 using docker. [INFO] Handling signal: ttin |
Hello everyone,
I am trying to run docker-compose-CeleryExecutor.yml; however, the worker is not picking up jobs.
I am seeing the following message.
After that, I am not seeing the scheduler and worker pick up jobs and execute them.