In the example code you submitted, you create a DagRun with state Running. However, this is done directly against the database, so it bypasses Airflow's normal flow and could be considered a manual overwrite.
The max_active_runs limit is enforced by the scheduler component of Airflow. The scheduler creates DagRuns based on the schedule and puts them in the queued state first; it only marks them as running if doing so will not exceed any thresholds such as max_active_runs.
By skipping those scheduler steps entirely, Airflow has no way to enforce the max_active_runs limit. Hence, I propose you change the code you showed to create the DagRun in the Queued state instead of Running. That should fix your issue :)
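The gating logic described above can be sketched with a small toy model (plain Python, not Airflow code; the class and method names here are illustrative only). Runs enter as queued, and the "scheduler" promotes them to running only while the running count stays below max_active_runs. Inserting a run directly as running, as in the reported setup, never passes through that check:

```python
class ToyScheduler:
    """Toy model of the scheduler's queued -> running promotion step."""

    def __init__(self, max_active_runs):
        self.max_active_runs = max_active_runs
        self.queued = []
        self.running = []

    def create_dagrun(self, run_id, state="queued"):
        # A direct DB write can set any state; only "queued" runs
        # go through the scheduler's gating step below.
        if state == "running":
            self.running.append(run_id)  # limit is never checked!
        else:
            self.queued.append(run_id)

    def schedule(self):
        # Promote queued runs only while under the limit.
        while self.queued and len(self.running) < self.max_active_runs:
            self.running.append(self.queued.pop(0))


s = ToyScheduler(max_active_runs=1)
s.create_dagrun("run1")
s.create_dagrun("run2")
s.schedule()
print(len(s.running))  # 1 -> limit enforced, run2 stays queued

s.create_dagrun("run3", state="running")  # direct "running" insert
print(len(s.running))  # 2 -> limit bypassed
```

Creating the run as queued instead of running keeps the second run waiting until the first finishes, which is exactly the behaviour the reporter expected.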
Also @denysivanov, upgrade to 2.1.4; there is a bug in max_active_runs that has been fixed. It is not related to your issue, though. @Jorricks answered you correctly.
Apache Airflow version
2.1.3
Operating System
Linux
Versions of Apache Airflow Providers
apache-airflow-providers-microsoft-azure==3.1.1
apache-airflow-providers-microsoft-mssql==2.0.1
apache-airflow-providers-odbc==2.0.1
apache-airflow-providers-slack==4.0.1
Deployment
Other 3rd-party Helm chart
Deployment details
We are using this helm chart
https://github.com/airflow-helm/charts/tree/main/charts/airflow
What happened
I set max_active_runs=1 and still see multiple instances of the same DAG running.
At the end of the current DAG, I trigger another instance of the same DAG.
What you expected to happen
There should be only one active running instance.
The new instance of the DAG should start in the queued state.
According to #16401, this problem should be fixed in v2.1.3.
How to reproduce
We start an instance of the DAG from inside the current DAG.
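A sketch of such a self-triggering DAG (my reconstruction, not the reporter's code; the dag_id, dates, and task ids are placeholders). Note that per the discussion above, the reporter's actual code created the DagRun directly in the database with state Running, whereas this version uses TriggerDagRunOperator:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.dummy import DummyOperator
from airflow.operators.trigger_dagrun import TriggerDagRunOperator

with DAG(
    dag_id="self_triggering_dag",
    start_date=datetime(2021, 1, 1),
    schedule_interval=None,
    max_active_runs=1,  # expected to keep only one instance running
) as dag:
    work = DummyOperator(task_id="work")
    # Last task re-triggers this same DAG.
    retrigger = TriggerDagRunOperator(
        task_id="retrigger",
        trigger_dag_id="self_triggering_dag",
    )
    work >> retrigger
```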
Anything else
No response
Are you willing to submit PR?
Code of Conduct