METRON-1971: Short timeout value in Cypress may cause build failures #1323
Conversation
Typically for these sorts of intermittent failures, we tend to do a few runs to try to make sure the issue is taken care of (e.g., close and reopen the PR a few times to force Travis reruns). Have you done something similar while testing this change?
Anything based on a timeout, when executed in Travis in the Apache queue, is going to be unreliable at some point, isn't it?
@ottobackwards I agree, anything based on a timer will eventually be unreliable. In this case, I am overriding a value that acts as a safety net, because Cypress will re-run assertions over and over until they pass or reach the specified timeout. It would be better if Cypress had a non-time-based way to decide when to abandon an assertion, but I'm guessing they chose this design because of the nature of e2e testing in a browser. For example, if the UI builds fine but contains an error that prevents part of the page from rendering, Cypress will re-run an assertion on a non-existent element over and over; without a timeout value, it would do so indefinitely. Protractor has similar timeout configurations: https://github.com/angular/protractor/blob/master/docs/timeouts.md
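To make the retry-until-timeout behavior concrete, here is a minimal sketch of a Cypress assertion; the spec name, selector, and 10000ms value are hypothetical, not taken from Metron's actual test suite:

```typescript
// Hypothetical spec illustrating Cypress' retry-until-timeout semantics.
describe('alerts list', () => {
  it('waits for rows to render on a slow page', () => {
    cy.visit('/alerts-list');
    // cy.get() re-queries the DOM until the element exists or the timeout
    // elapses; the { timeout } option overrides the 4000ms default per call.
    cy.get('table tbody tr', { timeout: 10000 })
      .should('have.length.gte', 1);
  });
});
```

If the element never appears (e.g., a runtime error prevents part of the page from rendering), the command fails only after the full timeout elapses, which is the safety valve described above.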
@ottobackwards for me it seems like timers and timeouts are affected by the load on the Travis executor.
Right, we have seen issues with things in the Apache queue before. I guess we can just do our best and handle it as it goes.
@justinleet Thanks for cluing me in on standard practice with these sorts of issues. I closed and reopened a handful of times and the tests continually pass. Hopefully that gives us some sort of peace of mind. I guess that's the fun of dealing with intermittent failures in Travis.
I'm +1 for this. Thanks! @ottobackwards Do you have any concerns beyond the earlier discussion that we want to address right now?
I don't think tests that can be variable (where we can never be sure whether they are failing or passing because of the test environment) should be tied to the main Travis build and able to fail the whole build. This leads to not trusting Travis. I don't have a way to resolve that, however. +0
Contributor Comments
Link to original Jira ticket: https://jira.apache.org/jira/browse/METRON-1971
We currently use the default timeout in Cypress, which is only 4000ms. I believe this short timeout can cause failures like this in Metron: https://travis-ci.org/apache/metron/jobs/483945575
In this Pull Request, I increased Cypress' default timeout and request timeout settings (the request timeout was only 5000ms).
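For context, the two settings in question correspond to Cypress' `defaultCommandTimeout` (4000ms by default) and `requestTimeout` (5000ms by default) configuration options. A minimal sketch of this kind of override, assuming the values are set from a support file rather than `cypress.json`, and with illustrative numbers rather than the ones chosen in this PR:

```typescript
// cypress/support/index.ts -- illustrative only; this PR may set these
// values in cypress.json instead, and 10000 is an assumed value.
Cypress.config('defaultCommandTimeout', 10000); // default: 4000ms
Cypress.config('requestTimeout', 10000);        // default: 5000ms
```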
Testing
From `metron/metron-interface/metron-alerts`:

```
npm run build
npm run start:ci
npm run cypress:open
```

To confirm the changes work, clone this repo and run the steps above. The tests should pass despite the slow time to render.
Pull Request Checklist
Thank you for submitting a contribution to Apache Metron.
Please refer to our Development Guidelines for the complete guide to follow for contributions.
Please refer also to our Build Verification Guidelines for complete smoke testing guides.
In order to streamline the review of the contribution, we ask that you follow these guidelines and double-check the following:
For all changes:
For code changes:
- [ ] Have you written or updated unit tests and/or integration tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?

For documentation related changes:
- [ ] Have you ensured that the format looks appropriate for the output in which it is rendered by building and verifying the site-book? If not, then run the following commands and verify the changes via `site-book/target/site/index.html`:
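The exact commands were not preserved in this excerpt; a typical invocation, assumed here from Metron's standard Maven-based site-book build, is:

```
cd site-book
mvn site
```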
Note:
Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.
It is also recommended that travis-ci is set up for your personal repository such that your branches are built there before submitting a pull request.