
tasks not being queued at the correct pool #10443

Closed
dinigo opened this issue Aug 21, 2020 · 3 comments
Labels
area:core invalid kind:bug This is clearly a bug

Comments

@dinigo
Contributor

dinigo commented Aug 21, 2020

Apache Airflow version: Docker image apache/airflow:master-python3.8

What happened: tasks are not being queued in the correct pool.

What you expected to happen: if a task's pool is set to subdags_pool, the task should be queued in that pool.

How to reproduce it:

Subdag tasks take all the running slots in the current pool, so I created a separate pool for the sub-dag tasks.
[screenshot of the new subdags_pool in the Pools UI]
And then I assigned this pool to the subdags:

# brand_id and brand_subdag are defined elsewhere in the DAG file
from airflow.operators.subdag_operator import SubDagOperator  # airflow.operators.subdag in 2.0+

brand_subdag_operator = SubDagOperator(
    task_id=brand_id,
    subdag=brand_subdag,
    pool='subdags_pool',  # pool for the SubDagOperator task itself
)

But these tasks keep being queued in default_pool instead of the new subdags_pool.
[screenshot of the task instances queued in default_pool]

@dinigo dinigo added the kind:bug label Aug 21, 2020
@maierru

maierru commented Jan 13, 2021

Same behaviour on version 1.10.12.

@potiuk potiuk added the invalid label Jan 13, 2021
@potiuk
Member

potiuk commented Jan 13, 2021

In 1.10, all subdag tasks are scheduled in the same workers/pools as the parent DAG, by design. This changed in Airflow 2.0, where subdags are processed differently and can run in different workers/pools.

@potiuk potiuk closed this as completed Jan 13, 2021
@dinigo
Contributor Author

dinigo commented Jan 14, 2021

@potiuk I cannot check right now whether this is still happening, sorry. I was using 2.0 from the master branch. I want to mention that I solved this issue by setting the SubDagOperator (which is a sensor under the hood) to mode = 'reschedule' so it doesn't hold a whole slot; see the sketch below.
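
A minimal sketch of that workaround, assuming Airflow 2.0's sensor-based SubDagOperator (mode is a BaseSensorOperator argument); brand_id and brand_subdag stand in for values defined elsewhere in the DAG file:

from airflow.operators.subdag_operator import SubDagOperator  # airflow.operators.subdag in 2.0+

brand_subdag_operator = SubDagOperator(
    task_id=brand_id,        # defined elsewhere in the DAG file
    subdag=brand_subdag,     # defined elsewhere in the DAG file
    pool='subdags_pool',
    mode='reschedule',       # release the pool slot between checks instead of holding it
)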

Is there a reason why 'reschedule' mode is not the default? I have enough sensors to fill and block the pool, which is actually the source of this issue.

Anyway, I'm using fewer sensors and building bigger DAGs now that we have TaskGroups; a rough sketch follows.
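
For context, a rough sketch of the TaskGroup approach mentioned above, assuming Airflow 2.0; the DAG name and task ids are made up for illustration:

from datetime import datetime

from airflow import DAG
from airflow.operators.dummy import DummyOperator
from airflow.utils.task_group import TaskGroup

# TaskGroup is a UI-level grouping, so unlike SubDagOperator there is no
# extra sensor task occupying a pool slot; tasks keep their own pool setting.
with DAG("brands", start_date=datetime(2021, 1, 1), schedule_interval=None) as dag:
    with TaskGroup(group_id="brand_tasks") as brand_tasks:
        extract = DummyOperator(task_id="extract", pool="subdags_pool")
        load = DummyOperator(task_id="load", pool="subdags_pool")
        extract >> load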
