chore: upload test artifacts to gcs #1660
base: master
Conversation
@@ -227,6 +256,10 @@ jobs:
      MYSQL_DATABASE: syncstorage
    resource_class: large
    steps:
+     - gcp-cli/setup:
note: There were existing GOOGLE_PROJECT_ID and GOOGLE_PROJECT_NUMBER variables in the CircleCI settings. So, to avoid a collision, and to make sure we're running with the expected credentials (the orb uses environment variables as its defaults), I created custom variables and explicitly assigned them here.
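As a rough sketch, the explicit assignment described above might look like the following. The custom variable names and the orb parameter names here are illustrative assumptions, not the PR's actual values; check them against the `circleci/gcp-cli` orb documentation:

```yaml
# Illustrative only: custom env var names and parameter names are assumptions.
- gcp-cli/setup:
    # Point the orb at custom variables rather than its defaults
    # (GOOGLE_PROJECT_ID etc.), avoiding the pre-existing values
    # already set in the CircleCI project settings.
    google_project_id: CUSTOM_GOOGLE_PROJECT_ID
    gcloud_service_key: CUSTOM_GCLOUD_SERVICE_KEY
```

Naming the variables explicitly means the orb can no longer silently pick up the colliding defaults from the project settings.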
Could you also document these 3 new env vars at the top of this file? We've also attempted this for Autopush here
tools/tokenserver/pytest.ini
Outdated
note: This is from a Slack thread. My prior PR to migrate the tokenserver tests to pytest meant we're running more tests than before, and those tests appear to be flaky. So, instead of just using `.skip`, which could artificially inflate the skip metric, we use custom pytest markers to tag and omit specific tests. Essentially this just puts the pytest implementation on par with the prior implementation.
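The marker approach described above can be sketched as follows. Only the marker name comes from the diff; the description text is an assumption about how it might be registered:

```ini
# pytest.ini sketch (illustrative; only the marker name is taken from the diff)
[pytest]
markers =
    process_account_events_spanner: flaky when run within the full suite
```

Tagged tests can then be deselected at invocation time with `pytest -m "not process_account_events_spanner"`; deselected tests are excluded from the run entirely rather than counted as skipped, which is what keeps the skip metric honest.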
@@ -300,6 +302,7 @@ def test_update_with_no_keys_changed_at2(self):
        self.assertEqual(user["generation"], 42)

+   @pytest.mark.process_account_events_spanner
Though this test wasn't originally invoked by the now-removed `run_tests`, I didn't see it failing on your last PR's run. Is it flaky elsewhere? If not, let's not skip it.
So, I looked at this one a bit more, and here's what I found.
If we run it in isolation, it can run repeatedly without issue. I ran it 100 times back to back, and it was fine.
But if it's run as part of the larger suite, it's flaky, failing about 3 out of 10 times. The grain of 🧂 here, though, is that I'm not doing any kind of cleanup of the db between tests, whereas in CI it's always a fresh instance/setup. What do you think?
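The state-leak hypothesis above can be illustrated with a small self-contained sketch: a check that holds against a fresh database but breaks when an earlier test leaves rows behind, plus a cleanup step that restores the fresh-instance behaviour CI gets for free. The table and column names are hypothetical, not the tokenserver schema:

```python
# Illustrative sketch of db state leaking between tests; schema is hypothetical.
import sqlite3


def make_db():
    """Stand-in for the fresh instance CI spins up for every run."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (uid INTEGER, generation INTEGER)")
    return conn


def cleanup(conn):
    """Per-test cleanup: approximates resetting to a fresh instance."""
    conn.execute("DELETE FROM users")


def count_users(conn):
    return conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]


conn = make_db()
conn.execute("INSERT INTO users VALUES (1, 42)")  # an earlier test leaves state
assert count_users(conn) == 1                     # later test sees leaked rows
cleanup(conn)                                     # run cleanup between tests
assert count_users(conn) == 0                     # later test now sees a fresh db
```

In a pytest suite this cleanup would typically live in an autouse fixture, so repeated local runs behave like CI's fresh instance and the flakiness disappears either way.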
Description
Describe these changes.
This adds a workflow step to send unit and integration test artifacts (JUnit results and JSON coverage reports) to the appropriate GCS bucket. I also added pytest markers to filter out some tests that are now included as of my last PR but are flaky and probably shouldn't be running.
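A hedged sketch of what such an upload step might look like in the CircleCI config; the bucket name, paths, and step wording below are placeholders, not the PR's actual values:

```yaml
# Illustrative only: bucket and artifact paths are placeholders.
- run:
    name: Upload test artifacts to GCS
    command: |
      gsutil cp workspace/test-results/junit.xml "gs://<artifact-bucket>/junit.xml"
      gsutil cp workspace/coverage.json "gs://<artifact-bucket>/coverage.json"
```

This would run after the `gcp-cli/setup` step shown earlier, which provides the authenticated `gsutil` environment.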
Testing
How should reviewers test?
Issue(s)
Closes SYNC-4584