Fix AWS DataSync tests failing (#10985) #11020
Conversation
My GitHub Actions were timing out on my repo, so I decided to try creating a PR upstream to see if the CI checks work correctly here.
tests were skipped, updated the version 🤞
So the build is failing, and I get the same failure pattern and errors locally if I do pip install moto==1.3.14. moto==1.3.16 is working a lot better on my side.
Think it may have still been pinned, something dodgy with my force push. Retrying.
Looks like it is still pinned to moto 1.3.14 instead of 1.3.16. @kaxil any suggestions? :)
The tests are failing now https://github.com/apache/airflow/pull/11020/checks?check_run_id=1137259948#step:6:4657
@kaxil I see the same failures locally if I install moto 1.3.14. I think 1.3.14 is still pinned somewhere, somehow. With moto 1.3.16 most of the tests pass when I run locally. (There is still a little work I have to do on the one test.) @feluelle Would love to get your take on this... :)
Hmm. @potiuk might be able to help.
FYI the latest push fixes the last of the failed tests (I ran test_datasync.py locally using pytest with moto 1.3.16 installed). Still, per the comments above, we need to figure out why/where moto==1.3.14 is being pinned by Airflow.
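For reference, a minimal sketch of the local reproduction described above; the test path is illustrative and may differ from the actual repo layout:

```bash
# Reproduce the CI failures locally with the pinned moto, then verify the
# fix with the newer release. The test path below is illustrative.
pip install moto==1.3.14   # same failure pattern as CI
pytest tests/providers/amazon/aws/hooks/test_datasync.py

pip install moto==1.3.16   # tests pass with the upgraded moto
pytest tests/providers/amazon/aws/hooks/test_datasync.py
```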
@kaxil Maybe this could be because of constraints?
This is precisely why it happened @baolsen @kaxil. This is all expected. This is the raw log of the "Build Image" workflow corresponding to your build. The line from the log:

This is perfectly fine. If your change opens up for a newer version, it might not succeed. The constraints will only be updated after merge, when all tests succeed for the upgraded version. This is to protect PR authors from accidentally failing on seemingly unrelated upgrades in requirements. Imagine you add a new dependency (say presto) and you want to make a PR with it, but in the meantime the latest 'moto' version starts breaking the build. If we did not use constraint files, those things could happen easily and you would have to fix unrelated problems in your change.

The workflow works such that only AFTER your change is merged will the master build attempt to upgrade all deps eagerly, run all the tests, and modify the constraints only if all the tests succeed. Then all builds after that constraints push will start using the upgraded moto. So in your case, if you locally tested that your change works with the latest moto, all is fine. After merge and successful tests in master, everyone will be greeted with the updated moto :)
And if you really want to make a change where you bump moto and want to (for example) introduce a lower bound like >1.3.15, that will also work. setup.py has precedence over constraints, so if setup.py has >1.3.15 and the constraints say ==1.3.14, the 1.3.16 version will be installed.
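As a rough sketch of the constraint mechanism being discussed (versions and file names are made up for illustration; note that a later comment in this thread finds the setup.py-precedence claim does not hold in practice):

```bash
# Illustrative only: how a constraints file pins a dependency during install.
echo "moto==1.3.14" > constraints.txt

# The constraint caps moto at 1.3.14 even if a newer release exists.
pip install moto --constraint constraints.txt
```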
I see now that we do want to get >=1.3.16 :). Good. :)
Hey @potiuk, thanks for the feedback. The test errors are: Your link expired, but I tried to use "Search the logs" on the build and can't find anything for "moto".
Yeah. I see it - seems that setup.py DOES NOT give precedence over constraints. I have to figure something out then. I have some ideas for that. Sorry for the trouble :(
LGTM 👍 (…once we have fixed the issue with setup.py)
Hey @potiuk. Trust all is well :) Is there anything more I can do to assist?
Awesome. I ran the tests again now, rebased and pushed.
#11363 should fix the issue with testing this kind of thing within the PR.
Co-authored-by: Felix Uellendall <feluelle@users.noreply.github.com>
I have reverted the changes made to the test_batch_waiters hook and rebased onto master. If the failure is still present, what I intend to do is disable the problematic test so that at least the remaining work on this PR can be merged. I'm not sure what the process is to get the test fixed thereafter, perhaps logging a new issue. But I feel that resolving this timeout/deadlock issue is not in the scope of this PR. We've already included the moto version upgrade, fixed DataSync, and a bunch of other tests.
…_batch_job_waiting
I've skipped the problematic test. Should be good to merge now.
I think a better way to do it would be to mark it as …
Ya, I agree 👍
Neat, I didn't know about that. Done & pushed.
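For illustration only (the exact marker adopted in this exchange is not shown above), pytest offers declarative ways to flag a known-problematic test instead of deleting it; a minimal sketch with a hypothetical test name:

```python
import pytest

# Hypothetical test name, illustrating the pattern of marking rather than
# removing a flaky test. The actual marker used in this PR is not shown above.
@pytest.mark.skip(reason="Intermittent timeout/deadlock; tracked separately")
def test_problematic_waiter():
    ...
```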
@baolsen CI is sad. Can you look at it?
Co-authored-by: Kaxil Naik <kaxilnaik@gmail.com>
Simple fix for the failing DataSync Operator tests.
The moto library was not returning a response containing a 'Result' key when calling describe_task_execution, which the real DataSync API normally returns.
This Result was used only for logging within the DataSync operator, so it can simply be skipped during testing if it is not found in the response.
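A minimal sketch of the defensive pattern described above (identifiers are illustrative, not the exact Airflow operator code):

```python
import logging

log = logging.getLogger(__name__)

def log_task_execution_result(client, task_execution_arn: str) -> None:
    """Log the DataSync task execution result, tolerating a missing key.

    Sketch only: the real DataSync API returns a "Result" key from
    describe_task_execution, but moto (used in tests) may omit it.
    """
    response = client.describe_task_execution(TaskExecutionArn=task_execution_arn)
    result = response.get("Result")  # avoid a KeyError when the key is absent
    if result is not None:
        log.info("Task execution result: %s", result)
```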