yaml tests seem to be consistently timing out #2540
Comments
I added the example pipelinerun-with-parallel-tasks-using-pvc.yaml in #2521 a few days ago. Things look worse after that, but I don't really understand what is causing it. Maybe the volumes take time and there are some timeouts? I find it hard to see which test is causing trouble.
In tektoncd#2540 we are seeing that some yaml tests are timing out, but it's hard to see what yaml tests are failing. This commit moves the logic out of bash and into individual go tests - now we will run an individual go test for each yaml example, completing all v1alpha1 before all v1beta1 and cleaning up in between. The output will still be challenging to read since it will be interleaved, however the failures should at least be associated with a specific yaml file. This also makes it easier to run all tests locally, though if you interrupt the tests you end up with your cluster in a bad state and it might be good to update these to execute each example in a separate namespace (in which case we could run all of v1alpha1 and v1beta1 at the same time as well!)
Yeah, that's my guess 😓
It's going to do 60 loops of 10s each to check the status of the PipelineRun (or TaskRun), meaning it times out after 10 minutes.
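For reference, the loop follows a common polling pattern. Here is a minimal sketch of it, not the literal contents of test/e2e-common.sh; the wait_until_done helper name and the exact kubectl query are assumptions for illustration:

```bash
# Sketch only: polls a run's Succeeded condition every 10s, up to 60 times,
# which is where the 10-minute timeout comes from.
wait_until_done() {
  local kind="$1" name="$2"    # e.g. "pipelinerun" "my-example-run"
  for i in {1..60}; do
    status=$(kubectl get "$kind" "$name" \
      -o jsonpath='{.status.conditions[?(@.type=="Succeeded")].status}')
    case "$status" in
      True)  return 0 ;;                                  # run succeeded
      False) echo "$kind/$name failed"; return 1 ;;       # run failed
      *)     sleep 10 ;;                                  # still running, wait and retry
    esac
  done
  echo "$kind/$name did not finish within 10 minutes"
  return 1
}
```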
/kind bug
There are a few ways to fix this:
#2541 does the latter.
@vdemeester did it work better? If it is a regional cluster and the PVCs are zonal, the two parallel tasks may be executing in different zones, and the third task that mounts both PVCs is then deadlocked, since a pod cannot mount two zonal PVCs from different zones. I propose that I remove the example, since it depends so much on what kind of storage and cluster is used. The intention was to document PVC access modes, and it is not strictly necessary to have an example for that.
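To make the failure mode concrete, here is a hedged sketch of the shape of pipeline that can deadlock; it is not the removed example verbatim, and the task, workspace, and taskRef names are hypothetical:

```yaml
# Two parallel tasks each use their own PVC-backed workspace, and a third
# task mounts both. If the two PVCs are provisioned in different zones of a
# regional cluster, no single node can satisfy the final pod's mounts.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: parallel-then-join
spec:
  workspaces:
    - name: ws-a
    - name: ws-b
  tasks:
    - name: write-a            # runs in parallel with write-b
      taskRef:
        name: writer
      workspaces:
        - name: output
          workspace: ws-a
    - name: write-b
      taskRef:
        name: writer
      workspaces:
        - name: output
          workspace: ws-b
    - name: join               # must mount both PVCs in one pod
      runAfter: ["write-a", "write-b"]
      taskRef:
        name: reader
      workspaces:
        - name: in-a
          workspace: ws-a
        - name: in-b
          workspace: ws-b
```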
Not entirely sure. There are fewer failures, but I still see some.
Yeah, having it in a …
@sbwsg thanks. It was exactly that task I was worried about. That example does not provide much value, and it needs to be adapted to each environment, so I think it is best to remove it. But a similar problem may occur for other pipelines that use the same PVC in more than one task; we could move those to the …

I apologize for the flaky tests the last few days.
Yeah, this might be a good area we could add docs around at some point. I wonder how much of it is platform-specific and how much Tekton can describe in a cross-platform way.
No worries, thanks for making the PR to resolve this, and for all the contributions around Workspaces! We were bound to hit this issue eventually.
I am curious whether we can use some kind of pod affinity to get tasks co-located on the same node, possibly co-locating all pods belonging to a single PipelineRun so that they can safely use the same PVC as a workspace and still execute in parallel (this is essentially what any single-node CI/CD system does). We would still be a distributed system, with different PipelineRuns possibly scheduled to different nodes. Using different PVCs is "easier" for fan-out, but not for fan-in (e.g. git-clone followed by parallel tasks using the same files). A sketch of the idea is below.
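For illustration only, this is what such an affinity term could look like on the pods of one run; it is a pod-spec fragment, and the tekton.dev/pipelineRun label key and run name used here are assumptions, not something the source confirms Tekton sets up this way:

```yaml
# Sketch: ask the scheduler to place all of a run's pods on the same node,
# so a single ReadWriteOnce PVC can be shared by parallel tasks.
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            tekton.dev/pipelineRun: my-pipeline-run   # hypothetical run name
        topologyKey: kubernetes.io/hostname           # co-locate on one node
```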
I don't think we've seen any evidence of this since @jlpettersson's fixes, closing!
Expected Behavior
"yaml tests" should only fail if something is actually wrong
Actual Behavior
All of the runs for #2531 have failed:
https://tekton-releases.appspot.com/builds/tekton-prow/pr-logs/pull/tektoncd_pipeline/2531/pull-tekton-pipeline-integration-tests/
And in recent runs across other PRs, it seems most are failing too:
https://tekton-releases.appspot.com/builds/tekton-prow/pr-logs/directory/pull-tekton-pipeline-integration-tests
Steps to Reproduce the Problem
Not sure what's going on yet
Additional Info
I can't decipher 1..60 sleep 10 for the life of me:
pipeline/test/e2e-common.sh
Lines 65 to 77 in a4065de