
[FLINK-16921][e2e] Describe all resources and show pods logs before cleanup when failed #11630

Merged
merged 1 commit into from Apr 4, 2020

Conversation

wangyang0918
Contributor

@wangyang0918 wangyang0918 commented Apr 4, 2020

What is the purpose of the change

The pods may be pending because of insufficient resources, disk pressure, or other problems, and then wait_rest_endpoint_up will time out. Describing all resources will help debug these problems.

We still have some failed instances and cannot reproduce them in a local environment (Mac/Linux). This PR is opened to run the e2e tests more times to find the root cause.

Brief change log

  • Describe all resources so that we can find more information about why the K8s e2e tests failed
  • The debug log sometimes does not show up, so move debug_and_show_logs before cleanup
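The second bullet boils down to call ordering: debug output has to be collected while the Kubernetes resources still exist. A minimal sketch of that ordering, with stand-in function bodies (the real helpers live in the e2e scripts and call kubectl and print pod logs):

```shell
#!/usr/bin/env bash
# Sketch of the reordering: debug output is collected inside cleanup,
# *before* the resources it describes are deleted. ORDER just records
# the call sequence for illustration.

ORDER=""

debug_and_show_logs() {
    # the real helper runs `kubectl get all`, `kubectl describe all`
    # and prints the jobmanager/taskmanager pod logs
    ORDER+="debug;"
}

cleanup() {
    debug_and_show_logs   # describe resources while they still exist
    ORDER+="delete;"      # then tear the deployment down
}

cleanup
echo "$ORDER"
```

If the order were reversed, `kubectl describe all` would run against an already-emptied namespace and show nothing useful.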

Verifying this change

  • Run the e2e tests multiple times; the K8s-related tests should pass

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): (yes / no)
  • The public API, i.e., is any changed class annotated with @Public(Evolving): (yes / no)
  • The serializers: (yes / no / don't know)
  • The runtime per-record code paths (performance sensitive): (yes / no / don't know)
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't know)
  • The S3 file system connector: (yes / no / don't know)

Documentation

  • Does this pull request introduce a new feature? (yes / no)
  • If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)

…leanup when failed

The pods may be pending because of insufficient resources, disk pressure, or other problems, and then wait_rest_endpoint_up will time out. Describing all resources will help debug these problems.
@flinkbot
Collaborator

flinkbot commented Apr 4, 2020

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
to review your pull request. We will use this comment to track the progress of the review.

Automated Checks

Last check on commit 9a9073f (Sat Apr 04 04:38:33 UTC 2020)

Warnings:

  • No documentation files were touched! Remember to keep the Flink docs up to date!

Mention the bot in a comment to re-run the automated checks.

Review Progress

  • ❓ 1. The [description] looks good.
  • ❓ 2. There is [consensus] that the contribution should go into Flink.
  • ❓ 3. Needs [attention] from.
  • ❓ 4. The change fits into the overall [architecture].
  • ❓ 5. Overall code [quality] is good.

Please see the Pull Request Review Guide for a full explanation of the review process.


The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands
The @flinkbot bot supports the following commands:

  • @flinkbot approve description to approve one or more aspects (aspects: description, consensus, architecture and quality)
  • @flinkbot approve all to approve all aspects
  • @flinkbot approve-until architecture to approve everything until architecture
  • @flinkbot attention @username1 [@username2 ..] to require somebody's attention
  • @flinkbot disapprove architecture to remove an approval you gave earlier

@flinkbot
Collaborator

flinkbot commented Apr 4, 2020

CI report:

Bot commands

The @flinkbot bot supports the following commands:
  • @flinkbot run travis re-run the last Travis build
  • @flinkbot run azure re-run the last Azure build

Contributor

@rmetzger rmetzger left a comment


Thanks a lot for debugging this issue.
I'll merge this PR (Azure has passed).

@rmetzger rmetzger merged commit d2d91b6 into apache:master Apr 4, 2020
echo "Debugging failed Kubernetes test:"
echo "Currently existing Kubernetes resources"
kubectl get all
kubectl describe all
Contributor


If this also doesn't help, I used these commands for debugging the k8s setup the last time it was unstable:

kubectl get pods -o json -n kube-system
kubectl get pods -o json
kubectl get events -o json
kubectl get deployments -o json
kubectl describe pods
kubectl describe nodes
kubectl get nodes -o json

if [ $? != 0 ];then
debug_copy_and_show_logs
fi
SUCCEEDED=$?
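One detail worth noting in the snippet above: `$?` must be captured into a variable immediately after the command it checks, because every subsequent command (even an `echo` or a `[ ... ]` test) overwrites it. A small illustration:

```shell
#!/usr/bin/env bash
# Why SUCCEEDED=$? has to come immediately after the guarded command:
# every command overwrites $?.

false                  # exits with status 1
SUCCEEDED=$?           # captured right away: 1

false
echo "something else" >/dev/null   # this echo succeeds, resetting $? to 0
LATE=$?                # too late: the failure is lost

echo "immediate=$SUCCEEDED late=$LATE"
```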
Contributor


I recently got to know bash traps, which are basically signal handlers.
You can do trap debug_and_show_logs EXIT, which will run debug_and_show_logs whenever the script exits.
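For reference, the suggested mechanism looks roughly like this (the command substitution creates a subshell here only so the trap's output can be captured and inspected; a real script would set the trap at top level):

```shell
#!/usr/bin/env bash
# trap <handler> EXIT installs a hook that runs when the (sub)shell
# exits, whether it finishes normally or ends after a failure.

output=$(
    debug_and_show_logs() { echo "collecting debug output"; }
    trap debug_and_show_logs EXIT
    echo "running test"
    # the EXIT trap fires here, as the subshell ends
)
echo "$output"
```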

Contributor Author


We already use the on_exit cleanup in common_kubernetes.sh. So I put debug_and_show_logs in cleanup; it will always be called before the script exits.
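The on_exit helper mentioned here is a thin registry on top of the same EXIT trap. A simplified sketch of the pattern (Flink's actual helper in common.sh may differ in details such as callback ordering; again a capturing subshell is used only to make the output observable):

```shell
#!/usr/bin/env bash
# Simplified on_exit pattern: callbacks are queued in an array and a
# single EXIT trap runs them all, so several cleanup steps can share
# one trap.

output=$(
    _on_exit_commands=()
    on_exit() { _on_exit_commands+=("$1"); }
    _run_on_exit() {
        local cmd
        for cmd in "${_on_exit_commands[@]}"; do eval "$cmd"; done
    }
    trap _run_on_exit EXIT

    on_exit 'echo debug_and_show_logs'   # registered first, runs first here
    on_exit 'echo cleanup'
    echo "test body"
)
echo "$output"
```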

Contributor


True. You could register on_exit debug_and_show_logs before sourcing common_kubernetes.sh.

But I'm fine with your solution. I just wanted to mention it so that you know it exists :)
