
downloaded go modules are not being picked up by the go interpreter when bom generate runs #202

Closed

sandipanpanda opened this issue Nov 29, 2022 · 8 comments

Labels: kind/bug · sig/release · lifecycle/rotten

@sandipanpanda (Member) commented Nov 29, 2022

What happened:

bom does not leverage the local Go module cache to look for dependency data while generating an SBOM in the Cilium image build actions.

Generating an SBOM describing the source in the Cilium repository using bom takes, on average, 10 minutes. As a result, CI build time increases by 30 minutes if we generate an SBOM describing the source for all three CI images in Image CI Build, and the CI ultimately fails with an error that no space is left on the runner.

In theory, if you run "bom generate" in the same environment where you are building (especially after building), all modules should already be downloaded there and bom can reuse them. But this does not happen. One thing bom will not do is download anything into your Go directory: if a module is missing, bom downloads it to /tmp/spdx/gomod-scanner/, inspects it there, and then removes it. Even after performing a "go mod download" before running "bom generate", the downloaded modules are not picked up by the Go tooling when bom runs.

The downloaded modules not being picked up when bom runs can be seen in the job log at https://github.com/cilium/cilium/actions/runs/3490449396/jobs/5841895937#step:23:1755 for this workflow file.

What you expected to happen:

If bom generate is run in the same environment where you are building (especially after building), all modules should already be downloaded there and bom should be able to reuse them.
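
As a point of reference for the expected behavior, here is a minimal, hypothetical Go sketch (not bom's actual code) of the kind of lookup that would let bom reuse modules fetched by "go mod download": resolve the module cache path via "go env GOMODCACHE" and only fall back to a temporary download when the module is missing there. The helper name cachedModuleDir and the example module are illustrative only.

```go
// Hypothetical sketch (not bom's actual implementation): before downloading a
// module to a temporary directory, check whether it is already present in the
// local Go module cache reported by `go env GOMODCACHE`.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"

	"golang.org/x/mod/module"
)

// cachedModuleDir (hypothetical helper) returns the extracted source directory
// for mod inside the local module cache, or "" if it has not been downloaded.
func cachedModuleDir(mod module.Version) (string, error) {
	out, err := exec.Command("go", "env", "GOMODCACHE").Output()
	if err != nil {
		return "", fmt.Errorf("querying GOMODCACHE: %w", err)
	}
	cache := strings.TrimSpace(string(out))

	// The cache escapes upper-case letters in module paths, e.g.
	// github.com/!burnt!sushi/toml@v1.2.0 for github.com/BurntSushi/toml.
	escaped, err := module.EscapePath(mod.Path)
	if err != nil {
		return "", err
	}
	dir := filepath.Join(cache, escaped+"@"+mod.Version)
	if _, err := os.Stat(dir); err != nil {
		return "", nil // not cached; a scanner would fall back to a temp download
	}
	return dir, nil
}

func main() {
	dir, err := cachedModuleDir(module.Version{Path: "github.com/BurntSushi/toml", Version: "v1.2.0"})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if dir == "" {
		fmt.Println("module not in GOMODCACHE; would download to a temp dir")
		return
	}
	fmt.Println("reusing cached module at", dir)
}
```

If a lookup like this succeeded in CI after "go mod download", the temporary copies under /tmp/spdx/gomod-scanner/ (and the resulting disk-space failures) would be avoided.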

Anything else we need to know?:

Discussion on this in the Kubernetes Slack is linked here.

cc @puerco @aanm @nbusseneau

@sandipanpanda added the kind/bug and sig/release labels Nov 29, 2022
@nbusseneau commented Nov 29, 2022

Attaching the log archive for the workflow run, as it will eventually expire and become unavailable from GHA: logs_877851.zip

aanm added a commit to cilium/cilium that referenced this issue Jan 19, 2023
Generating SBOM from source takes a long time and due to a bug [1] it
fills out the GH runner disk. Thus we will not be generating the SBOM
from source until the bug is fixed.

[1] kubernetes-sigs/bom#202

Fixes: b11a065 ("build: Generate SBOM during image release")
Signed-off-by: André Martins <andre@cilium.io>
aanm added a commit to cilium/cilium that referenced this issue Jan 22, 2023
Generating SBOM from source takes a long time and due to a bug [1] it
fills out the GH runner disk. Thus we will not be generating the SBOM
from source until the bug is fixed.

[1] kubernetes-sigs/bom#202

Fixes: b11a065 ("build: Generate SBOM during image release")
Signed-off-by: André Martins <andre@cilium.io>
aanm added a commit to cilium/cilium that referenced this issue Jan 23, 2023
[ upstream commit cee3e46 ]

Generating SBOM from source takes a long time and due to a bug [1] it
fills out the GH runner disk. Thus we will not be generating the SBOM
from source until the bug is fixed.

[1] kubernetes-sigs/bom#202

Fixes: b11a065 ("build: Generate SBOM during image release")
Signed-off-by: André Martins <andre@cilium.io>
Signed-off-by: André Martins <andre@cilium.io>
aanm added a commit to cilium/cilium that referenced this issue Jan 23, 2023
[ upstream commit cee3e46 ]

Generating SBOM from source takes a long time and due to a bug [1] it
fills out the GH runner disk. Thus we will not be generating the SBOM
from source until the bug is fixed.

[1] kubernetes-sigs/bom#202

Fixes: b11a065 ("build: Generate SBOM during image release")
Signed-off-by: André Martins <andre@cilium.io>
Signed-off-by: André Martins <andre@cilium.io>
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Feb 27, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Mar 29, 2023
@puerco (Member) commented Mar 29, 2023

/remove-lifecycle rotten

@k8s-ci-robot removed the lifecycle/rotten label Mar 29, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Jun 27, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Jan 19, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closing this issue, marking it as "Not Planned", in response to the /close not-planned command above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot closed this as not planned Feb 18, 2024