-
Apache Airflow version: Other Airflow 2 version (please specify below)
If "Other Airflow 2 version" selected, which one?: 2.6.3
What happened?: We used a deferred operator waiting for something in several instances during high MySQL database server load.
What you think should happen instead?: MySQL's recommendation is to re-trigger the operation, as is already implemented in Airflow, for example in the fix for #41428.
How to reproduce: I created code that triggers the function the same way as the code that raised the issue, but even with this example I was not able to reproduce it.
Operating System: Red Hat Enterprise Linux 8.10 (Ootpa)
Versions of Apache Airflow Providers: No response
Deployment: Other
Deployment details: LocalExecutor; Airflow is installed on a RHEL machine.
Anything else?: We saw the issue twice during one day of testing.
Are you willing to submit PR?
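For context, the re-trigger behaviour asked for above is the standard advice for MySQL error 1213 ("Deadlock found when trying to get lock; try restarting transaction"). Below is a minimal sketch of what such a retry wrapper around a SQLAlchemy transaction could look like; the function names and retry parameters are illustrative only, not Airflow's actual implementation (Airflow has its own internal DB-retry helpers).

```python
# Hedged sketch (not Airflow code): re-run a metadata-DB transaction when
# MySQL reports error 1213, the deadlock error quoted in this report.
import time

from sqlalchemy.exc import OperationalError

MYSQL_DEADLOCK_CODE = 1213


def run_with_deadlock_retry(session_factory, transaction, max_attempts=3, backoff_seconds=0.5):
    """`session_factory` returns a SQLAlchemy session; `transaction(session)` does the work."""
    for attempt in range(1, max_attempts + 1):
        session = session_factory()
        try:
            transaction(session)
            session.commit()
            return
        except OperationalError as err:
            session.rollback()
            # For pymysql, err.orig.args[0] carries the MySQL error code.
            code = getattr(err.orig, "args", (None,))[0]
            if code == MYSQL_DEADLOCK_CODE and attempt < max_attempts:
                time.sleep(backoff_seconds * attempt)
                continue
            raise
        finally:
            session.close()
```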
-
Seems like AI, closing.
-
I don't think so @bugraoz93 :). This one looks pretty legitimate (and sorry @nesp159de - we have recently been flooded with almost-good-looking issues generated by AI, so we are pretty "sensitive" now). But I am not sure we can do anything about it regardless. There are a number of places where MySQL can deadlock where it is completely unnecessary, and we know it; the deadlocks happen far less often on Postgres, so our recommendation - for now - would be to switch to Postgres. There are a few other things you can do:
Converting it into a discussion if more discussion is needed.
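For anyone following that recommendation: switching to Postgres means pointing `[database] sql_alchemy_conn` (or the `AIRFLOW__DATABASE__SQL_ALCHEMY_CONN` environment variable) at a Postgres DSN and initialising/migrating the metadata database there. The snippet below is only an illustrative connectivity sanity check with a hypothetical DSN, not a full migration guide.

```python
# Illustrative only: verify that the Postgres DSN you plan to use for
# sql_alchemy_conn is reachable before repointing Airflow at it.
# Host and credentials below are placeholders.
from sqlalchemy import create_engine, text

dsn = "postgresql+psycopg2://airflow:airflow@db-host:5432/airflow"

engine = create_engine(dsn)
with engine.connect() as conn:
    print(conn.execute(text("SELECT version()")).scalar())
```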
-
My bad! Thanks @potiuk! :) With the latest events it is hard to differentiate whether an account or a post is AI. The accounts I saw generally have zero interaction, meaning either they had not pushed anything or had random
Sorry, @nesp159de! Indeed, we are a bit sensitive at the moment. What makes it more sensitive, and sadder/angrier, is that everyone here is trying to help each other and serve the community while being part of it. That's why - please don't take it personally. I am glad to see this didn't fade away along with the other generated ones.
-
Thank you for the fast response. Regarding your proposals:
-
The call stack from the latest observation of the issue:
-
2025-01-27T16:19:38.995943+01:00 de875-xv9 AirflowCoreCE-1_041_wd default.INFO watchdogD.py 552 Daemon:0
Traceback (most recent call last):
pymysql.err.OperationalError: (1213, 'Deadlock found when trying to get lock; try restarting transaction')

The above exception was probably the direct cause of the following exception:

Traceback (most recent call last):
pymysql.err.OperationalError: (1213, 'Deadlock found when trying to get lock; try restarting transaction')
-
We tried switching "schedule_after_task_execution" to False; the proposed solution did not help.
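In case it helps others repeat the same experiment: `schedule_after_task_execution` lives in the `[scheduler]` section in Airflow 2.6, and a quick way to confirm the running process actually picked up the change is to read the value back through Airflow's config API. A small sketch, assuming Airflow 2.6.x and the section/key named in this thread:

```python
# Illustrative check that the config change was actually picked up.
from airflow.configuration import conf

print(conf.getboolean("scheduler", "schedule_after_task_execution"))
```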
-
We finally fixed the issue by removing the deferred infrastructure from our code entirely, because the issue was still there otherwise.
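For readers who end up with the same workaround, one hedged sketch of what "removing the deferred infrastructure" can look like for a simple time-based wait is below: use the classic sensor in reschedule mode instead of its deferrable/Async counterpart, so the task never enters the DEFERRED path. This assumes the deferred wait was a plain time delta; it is not the reporter's actual code.

```python
# Hedged sketch: a non-deferrable wait using the classic TimeDeltaSensor in
# "reschedule" mode instead of TimeDeltaSensorAsync.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.sensors.time_delta import TimeDeltaSensor

with DAG(
    dag_id="wait_without_deferral",
    start_date=datetime(2025, 1, 1),
    schedule=None,
    catchup=False,
) as dag:
    wait = TimeDeltaSensor(
        task_id="wait_30_min",
        delta=timedelta(minutes=30),
        mode="reschedule",  # frees the worker slot between pokes without deferring
    )
```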
I am attaching the SHOW ENGINE INNODB STATUS results. By the way, on 28.1 I was able to reproduce the issue even with the provided sample (but only after two days of runs).
deadlock.txt
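For the next occurrence, a dump like the one attached above can also be captured programmatically right after a deadlock is logged. A small illustrative helper (placeholder host and credentials) is below.

```python
# Illustrative helper: capture the InnoDB status text the moment a deadlock is reported.
import pymysql

conn = pymysql.connect(host="localhost", user="airflow", password="airflow", database="airflow")
try:
    with conn.cursor() as cur:
        cur.execute("SHOW ENGINE INNODB STATUS")
        # The result is a single (Type, Name, Status) row; the status text is the third column.
        _, _, status_text = cur.fetchone()
    with open("deadlock_status.txt", "w") as fh:
        fh.write(status_text)
finally:
    conn.close()
```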