e2e: fix out of sync configuration #9199

Merged
merged 3 commits into tendermint:main on Aug 9, 2022

Conversation

tychoish (Contributor) commented Aug 9, 2022

The v0.34.x tests have been failing (or rather, reporting failures; I don't
believe this is a real failure) because the CI configuration has been out of
sync with itself, likely due to a mistake while backporting configs from the
`master` branch.

The entire 0.34.x e2e test suite takes 26 minutes to run, plus about 7
minutes to build the docker image. Each split has to build the same docker
image, which caps the amount of parallelism we can get at the moment. Adding
more groups just means we'd be burning money rebuilding the docker image with
no really meaningful difference in throughput. For a nightly test that people
don't really wait on, the current latency (time-to-completion) of roughly 19
minutes isn't causing any friction.
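
For a sense of where a split configuration can drift out of sync, here is a minimal sketch of a nightly e2e workflow in GitHub Actions. The workflow name, paths, flags, and commands are assumptions for illustration, not the actual tendermint config; the point is only that the matrix group list and the group count passed to the testnet generator have to agree.

```yaml
# Hypothetical nightly e2e workflow, sketched to show the split invariant:
# the number of entries in `matrix.group` must match the `-g` value given
# to the generator, or some generated testnet manifests never get run and
# the job reports failures even though nothing is actually broken.
name: e2e-nightly-sketch
on:
  schedule:
    - cron: '0 2 * * *'

jobs:
  e2e-nightly-test:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        group: ['00', '01']   # two splits; each split rebuilds the docker image (~7 min)
    steps:
      - uses: actions/checkout@v3
      - name: Build e2e docker image
        working-directory: test/e2e
        run: make docker
      - name: Generate testnets
        working-directory: test/e2e
        # -g must equal the number of groups in the matrix above
        run: ./build/generator -g 2 -d networks/nightly
      - name: Run testnets in group ${{ matrix.group }}
        working-directory: test/e2e
        run: ./run-multiple.sh networks/nightly/*-group${{ matrix.group }}-*.toml
```

Under a layout like this, fixing the drift means making those two numbers agree again, rather than adding more groups, which matches the docker-build cost argument above.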

tychoish requested a review from ebuchman as a code owner on August 9, 2022 11:26
tychoish requested a review from a team on August 9, 2022 11:26
tychoish added the S:automerge (Automatically merge PR when requirements pass) label on Aug 9, 2022
mergify bot merged commit d5ec276 into tendermint:main on Aug 9, 2022
samricotta pushed a commit that referenced this pull request Aug 10, 2022
samricotta pushed a commit that referenced this pull request Aug 11, 2022
samricotta pushed a commit that referenced this pull request Aug 11, 2022
samricotta pushed a commit that referenced this pull request Aug 12, 2022
samricotta pushed a commit that referenced this pull request Aug 16, 2022