
Failing CI check for devfile-web #1464

Closed
michael-valdron opened this issue Mar 1, 2024 · 3 comments · Fixed by devfile/devfile-web#121
Assignees
Jdubrick
Labels
area/ci · area/landing-page (Issues with the Landing Page) · area/registry-viewer · kind/bug (Something isn't working)

Comments

@michael-valdron
Member

Which area is this feature related to?

/kind bug

Which area is this bug related to?

/area ci
/area registry-viewer
/area landing-page

What versions of software are you using?

Node.js project

- Operating System and version:
- Node.js version: 18
- Yarn version: 1.22.19
- Project.json:

Web browser

- Operating System and version: N/A
- Browser name and version: N/A

Bug Summary

Describe the bug:

The CI / Main Job PR check is failing on both the registry-viewer-e2e:e2e:production and landing-page-e2e:e2e:production tasks of the E2E step due to timeouts. The cause of the timeouts is unclear; however, a new warning about insufficient disk space suggests the production builds are not completing because the GitHub Actions runner is running out of disk space.

To Reproduce:

Expected behavior

Any logs, error output, screenshots, etc.? Provide the devfile that exhibits this bug, if applicable.

Full Log: https://github.com/devfile/devfile-web/actions/runs/8088608076

Warnings

You are running out of disk space. The runner will stop working when the machine runs out of disk space. Free space left: 26 MB
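
To help correlate the timeouts with free space, a diagnostic step along these lines could be added to the job. This is an illustrative sketch, not part of the actual devfile-web workflow; the step name and artifact paths are assumptions:

```yaml
# Hypothetical diagnostic step; not from the devfile-web workflow.
- name: Report disk usage
  if: always()   # run even when an earlier step fails
  run: |
    df -h                                          # free space per filesystem
    du -sh node_modules dist 2>/dev/null || true   # size of the heaviest build outputs, if present
```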

Error Message in E2E

Timed out waiting for the browser to connect. Retrying...

Timed out waiting for the browser to connect. Retrying again...

The browser never connected. Something is wrong. The tests cannot run. Aborting...

The browser never connected. Something is wrong. The tests cannot run. Aborting...
Warning: We failed processing this video.

This error will not alter the exit code.

TimeoutError: operation timed out
    at afterTimeout (/home/runner/.cache/Cypress/12.6.0/Cypress/resources/app/node_modules/bluebird/js/release/timers.js:46:19)
    at Timeout.timeoutTimeout [as _onTimeout] (/home/runner/.cache/Cypress/12.6.0/Cypress/resources/app/node_modules/bluebird/js/release/timers.js:76:13)
    at listOnTimeout (node:internal/timers:559:17)
    at process.processTimers (node:internal/timers:502:7)

  (Results)

  ┌────────────────────────────────────────────────────────────────────────────────────────────────┐
  │ Tests:        0                                                                                │
  │ Passing:      0                                                                                │
  │ Failing:      1                                                                                │
  │ Pending:      0                                                                                │
  │ Skipped:      0                                                                                │
  │ Screenshots:  0                                                                                │
  │ Video:        false                                                                            │
  │ Duration:     0 seconds                                                                        │
  │ Spec Ran:     app.cy.ts                                                                        │
  └────────────────────────────────────────────────────────────────────────────────────────────────┘

Additional context

Any workaround?

One alternative that would keep this as a single check is to move it to an environment that provides more disk space, e.g. an OpenShift CI Prow job.
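
A different mitigation, not raised in this thread but commonly used on GitHub-hosted runners, is to reclaim disk space up front by deleting preinstalled toolchains a Node.js build never uses. A minimal sketch, assuming an ubuntu-latest runner (the paths below are the well-known preinstall locations and can free tens of gigabytes):

```yaml
# Hypothetical cleanup step for a GitHub-hosted ubuntu runner; not from this repo.
- name: Free disk space
  run: |
    sudo rm -rf /usr/share/dotnet         # preinstalled .NET SDKs
    sudo rm -rf /usr/local/lib/android    # preinstalled Android SDK
    sudo rm -rf /opt/ghc                  # preinstalled Haskell toolchain
    sudo docker image prune --all --force # cached Docker images
    df -h /                               # confirm the reclaimed space
```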

Suggestion on how to fix the bug

The CI / Main Job check is large, so splitting it into separate jobs could give each devfile-web build its own runner and therefore more disk space. A sketch of that split follows.
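
A minimal sketch of the suggested split, assuming the E2E tasks are Nx targets invoked as shown and using hypothetical job names; the real workflow file will differ. Each job gets its own runner, so the two production builds no longer compete for one machine's disk:

```yaml
# Illustrative split of the single Main Job into two jobs; names are assumptions.
jobs:
  landing-page-e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18          # matches the version reported in this issue
      - run: yarn install --frozen-lockfile
      - run: npx nx run landing-page-e2e:e2e:production

  registry-viewer-e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: yarn install --frozen-lockfile
      - run: npx nx run registry-viewer-e2e:e2e:production
```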

openshift-ci bot added the kind/bug, area/ci, area/registry-viewer, and area/landing-page labels on Mar 1, 2024
michael-valdron added and then removed the severity/blocker label on Mar 1, 2024
Jdubrick self-assigned this on Mar 19, 2024
@Jdubrick
Contributor

Jdubrick commented Apr 2, 2024

I don't have permissions to rerun the job for the PR where this issue started occurring here, but running it on my fork with no changes, it seems to pass here. I'm wondering if there was a lot going on with the runner that day; as a sanity test, if either of you has permissions to rerun, would you be able to? I just want to confirm whether my fork's actions will be accurate for testing any changes I make to the jobs to fix the issue (or whether we should use a self-hosted runner with more storage).

cc @thepetk @michael-valdron

@thepetk
Contributor

thepetk commented Apr 3, 2024

> I don't have permissions to rerun the job for the PR where this issue started occurring here, but running it on my fork with no changes, it seems to pass here. I'm wondering if there was a lot going on with the runner that day; as a sanity test, if either of you has permissions to rerun, would you be able to? I just want to confirm whether my fork's actions will be accurate for testing any changes I make to the jobs to fix the issue (or whether we should use a self-hosted runner with more storage).
>
> cc @thepetk @michael-valdron

It also needed to be rebased, so I've rebased it; fingers crossed it passes now!

@thepetk
Contributor

thepetk commented Apr 3, 2024

> > I don't have permissions to rerun the job for the PR where this issue started occurring here, but running it on my fork with no changes, it seems to pass here. I'm wondering if there was a lot going on with the runner that day; as a sanity test, if either of you has permissions to rerun, would you be able to? I just want to confirm whether my fork's actions will be accurate for testing any changes I make to the jobs to fix the issue (or whether we should use a self-hosted runner with more storage).
> >
> > cc @thepetk @michael-valdron
>
> It also needed to be rebased, so I've rebased it; fingers crossed it passes now!

Yes, I think it might have been a bad day for the runner, or a backported fix from a dependency (I'd say maybe Electron). Now it passes!
