Ember Try scenario failing on Travis with a Segmentation fault #360
Comments
Can you try running with
FYI - @scalvert has been digging into this same issue (over in the ember-app-scheduler repo). We're not yet sure what is going on, and we can't reproduce it locally yet.
Correct. I've got the same issue on both. Steps I've taken to try to isolate it (ember-app-scheduler/ember-app-scheduler#312):
As mentioned, I've been engaged with Travis support to try to investigate.
I can try this when I'm SSHed into the box.
It's also possible to try that by updating the Travis config on a branch. Worth noting: the latest passing ember-cli-mirage build was on the latest ember-try.
Yep, upgrading ember-try to latest had no effect on the occurrence of segfaults.
Any update on this?
Getting closer. A pesky weekend got in the way of further debugging efforts. I plan to focus on this today.
Here's the top of the stack from the core dump from
The top of the stack is
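For anyone else trying to get a backtrace like the one above: you generally need to enable core dumps before the crashing run, then open the resulting dump. A minimal sketch, assuming a Linux host with gdb available (the core file's location depends on the host's `core_pattern`, and the gdb step is shown as comments because paths vary):

```shell
# Raise the soft core-file size limit so the kernel writes a dump on SIGSEGV.
ulimit -c unlimited

# Confirm the limit took effect; prints "unlimited" on success.
ulimit -c

# Where the kernel will write core files on this host (pattern varies by distro).
cat /proc/sys/kernel/core_pattern

# After the segfaulting run, load the dump into gdb and print the stack,
# e.g. (illustrative paths; adjust the binary and core-file locations):
#   gdb "$(command -v node)" ./core
#   (gdb) bt
```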
Did you ever try running with
Yes, it didn't provide any useful information, unfortunately.
I was mostly wondering what the last thing that happened before the segfault was.
Well, the tests complete, and the process seems to 'hang' afterwards. I stood up an Ubuntu image in Azure to attempt to replicate it there, mainly because Travis' debug session has a timeout configured, which means the session spontaneously ends mid-debugging. I was unable to reproduce the segfault on my server, though the process does hang for a significant amount of time after the tests complete successfully.
I was wondering because after all the tests run there is a step where ember-try cleans up and reinstalls node_modules; it's possible the segfault is happening during that.
Ah, gotcha. @rwjblue, @krisselden and I are chatting about it right now to see if we can determine the issue. It's now happening in @ember/test-helpers too :/
I tried running with
We've identified the issue. It stems from the

```
azureuser@travis:~/ember-lifeline/ember-lifeline$ yarn why sharp
yarn why v1.16.0
[1/4] Why do we have the module "sharp"...?
[2/4] Initialising dependency graph...
[3/4] Finding dependency...
[4/4] Calculating file sizes...
=> Found "sharp@0.22.1"
info Reasons this module exists
   - "ember-cli-favicon#broccoli-favicon#favicons" depends on it
   - Hoisted from "ember-cli-favicon#broccoli-favicon#favicons#sharp"
info Disk size without dependencies: "31.6MB"
info Disk size with unique dependencies: "32.67MB"
info Disk size with transitive dependencies: "35.05MB"
info Number of shared dependencies: 46
Done in 2.34s.
```

It's the

Workaround to unblock:

```json
"resolutions": {
  "favicons": "5.3.0"
}
```

We're trying to figure out the best place to report this issue.
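For anyone else hitting this: Yarn's `resolutions` field lives at the top level of `package.json` and pins a transitive dependency regardless of what intermediate packages request (npm ignores this field, so the workaround only applies to Yarn installs). A sketch with illustrative package names and version ranges:

```json
{
  "name": "my-addon",
  "devDependencies": {
    "ember-cli-favicon": "^2.0.0"
  },
  "resolutions": {
    "favicons": "5.3.0"
  }
}
```

After editing, re-run `yarn install` and use `yarn why favicons` to confirm that only the pinned version is installed.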
Wow! So many levels... (I am continually amazed anything ever works)
Full description of issue: ember-cli/ember-try#360 (comment)
My guess is that it's related to this queue: https://github.com/lovell/sharp/blob/aa9b328778ef00971e883365ebedd480799394a2/src/common.cc#L420 and is likely an issue with libc.so on the Linux image Travis uses.
I could be way off base; with a local reproduction it would likely not be too hard to figure out. If such a repro exists, I would recommend:

What version of glibc is on those Linux boxes?
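To answer the glibc question, here are two stock ways to print the C library version on a Linux box (output format varies by distro; `getconf GNU_LIBC_VERSION` only works on glibc-based systems, not musl):

```shell
# glibc's ldd helper reports the library version on its first output line,
# e.g. "ldd (Ubuntu GLIBC 2.27-3ubuntu1) 2.27".
ldd --version | head -n 1

# POSIX getconf also exposes the C library version on glibc systems,
# e.g. "glibc 2.27".
getconf GNU_LIBC_VERSION
```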
Going to close for now, happy to reopen if folks think this is still an issue. |
Hi! I'm investigating a Travis failure in one of my Ember Try scenarios and am looking for any guidance. I have no idea if this has anything to do with Ember Try but figured I'd start here.
Here's the failure. If you look back at the full build, you'll see the same failure for all Versioned tests.
When I tried one of them locally by running
it passed with no problem.
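For context on what a "Versioned" scenario is: each one is an entry in `config/ember-try.js` that swaps dependency versions before the test suite runs. A stripped-down illustrative config (the scenario name and versions below are made up, not taken from this repo):

```javascript
// config/ember-try.js (illustrative, not this repo's actual config)
module.exports = {
  scenarios: [
    {
      name: 'ember-lts-3.8',
      npm: {
        devDependencies: {
          'ember-source': '~3.8.0',
        },
      },
    },
  ],
};
```

A single scenario can be run locally with ember-try's CLI, e.g. `ember try:one ember-lts-3.8`, which is a useful way to narrow a failure down to one dependency set.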
My next guess was that the segmentation fault had something to do with Travis' cache. I thought maybe it was due to all the PRs Dependabot was opening. I went back to Travis, deleted all caches, and re-ran master. No change; the fault still happened.
I then thought it might have been due to a code change on my end, so I went back to the last-passing build and re-ran it. I saw the same failures on the Versioned tests.
Any idea what could be going on? Is there possibly a memory leak that I'm not seeing locally but that is causing Travis to blow up?
Any help much appreciated!