error occurred in "./build-setup.sh riscv-tools" for the 1.9.0 #1441

Open · zqj2333 opened this issue Apr 13, 2023 · 22 comments · Label: bug


zqj2333 commented Apr 13, 2023

Background Work

Chipyard Version and Hash

Release: 1.9.0
the stable version

OS Setup

CentOS
conda 23.3.1

Other Setup

No response

Current Behavior

Some errors occurred during the build; see the attached screenshot.

Expected Behavior

The build completes successfully.

Other Information

I also noticed a timeout in the log (see the attached screenshot).

zqj2333 added the bug label Apr 13, 2023
zqj2333 changed the title from error occurred in "./build-setup.sh riscv-tools" to error occurred in "./build-setup.sh riscv-tools" for the 1.9.0 Apr 13, 2023

xiongdl commented Apr 23, 2023

Hi, did you solve the error? I built chipyard-1.9.0 on Google Colab and hit the same error: utils/fireperf/FlameGraph fails to check out, and br-base.json fails to build in Step 9 of build-setup.sh.


zqj2333 commented Apr 23, 2023

> Hi, did you solve the error? I built chipyard-1.9.0 on Google Colab and hit the same error: utils/fireperf/FlameGraph fails to check out, and br-base.json fails to build in Step 9 of build-setup.sh.

Hi, unfortunately not.


xiongdl commented Apr 23, 2023

Hi zqj2333, the timeout is expected: the script checks whether you are on EC2, and if you are not, the check times out. By the way, your problem may be caused by a small ulimit -Hn; there is a warning about it in your log.
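
To check whether the hard open-file limit is the problem, a quick sketch (the 16384 value is illustrative; raising the hard limit beyond its current value generally requires root or an /etc/security/limits.conf entry):

ulimit -Hn          # show the current hard limit on open file descriptors
ulimit -Hn 16384    # raise it for the current shell, if permitted
# To make the change persistent, add a line like the following to
# /etc/security/limits.conf (illustrative; replace <username> and the value):
#   <username>  hard  nofile  16384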


zqj2333 commented Apr 23, 2023

> Hi zqj2333, the timeout is expected: the script checks whether you are on EC2, and if you are not, the check times out. By the way, your problem may be caused by a small ulimit -Hn; there is a warning about it in your log.

Hi, thanks for your reply. I was wondering whether your log also has this warning, or whether you have built it successfully.


xiongdl commented Apr 23, 2023

On Google Colab there is no such warning in the log, but Step 9 of the build still fails, and I do not know why.


zqj2333 commented Apr 23, 2023

> On Google Colab there is no such warning in the log, but Step 9 of the build still fails, and I do not know why.

Okay, I will keep trying and will post an update if I learn anything new.


seceng-jan commented Apr 25, 2023

I have the same issue on Ubuntu 20.04. My logfile ends with:

2023-04-25 10:43:32,472 [run         ] [DEBUG]  OBJCOPY   platform/generic/firmware/fw_payload.bin
2023-04-25 10:43:32,504 [print_deps  ] [DEBUG]  Running task /home/jan/chipyard/software/firemarshal/images/firechip/br-base/br-base.img because one of its targets does not exist anymore: /home/jan/chipyard/software/firemarshal/images/firechip/br-base/br-base.img
2023-04-25 10:43:32,791 [makeImage   ] [DEBUG]  Applying overlay: /home/jan/chipyard/software/firemarshal/boards/firechip/base-workloads/br-base/overlay
2023-04-25 10:43:32,791 [run         ] [DEBUG]  Running: "guestmount --pid-file guestmount.pid -a /home/jan/chipyard/software/firemarshal/images/firechip/br-base/br-base.img -m /dev/sda /home/jan/chipyard/software/firemarshal/disk-mount" in /home/jan/chipyard/software/firemarshal
2023-04-25 10:43:33,508 [run         ] [DEBUG]  libguestfs: error: /usr/bin/supermin exited with error status 1.
2023-04-25 10:43:33,508 [run         ] [DEBUG]  To see full error messages you may need to enable debugging.
2023-04-25 10:43:33,508 [run         ] [DEBUG]  Do:
2023-04-25 10:43:33,508 [run         ] [DEBUG]  export LIBGUESTFS_DEBUG=1 LIBGUESTFS_TRACE=1
2023-04-25 10:43:33,508 [run         ] [DEBUG]  and run the command again.  For further information, read:
2023-04-25 10:43:33,508 [run         ] [DEBUG]  http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs
2023-04-25 10:43:33,508 [run         ] [DEBUG]  You can also run 'libguestfs-test-tool' and post the *complete* output
2023-04-25 10:43:33,508 [run         ] [DEBUG]  into a bug report or message to the libguestfs mailing list.
2023-04-25 10:43:33,543 [main        ] [ERROR]  Failed to build workload br-base.json
2023-04-25 10:43:33,543 [main        ] [INFO ]  Log available at: /home/jan/chipyard/software/firemarshal/logs/br-base-build-2023-04-25--10-31-00-NH7MUAV3HG8VQBQD.log
2023-04-25 10:43:33,544 [main        ] [ERROR]  FAILURE: 1 builds failed

Did you solve this issue?

[Edit: Downgrading to 1.8.1 does help.]
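
For reference, two things one can try based on the log above (hedged suggestions, not confirmed fixes): re-run with libguestfs debugging enabled, exactly as the log itself suggests, and, per the edit above, start over from the 1.8.1 release (this assumes the release is tagged 1.8.1 in the ucb-bar/chipyard repository):

# Enable libguestfs debugging before re-running the failing step, as the log suggests
export LIBGUESTFS_DEBUG=1 LIBGUESTFS_TRACE=1
libguestfs-test-tool    # prints a full diagnostic of the host's libguestfs setup

# Workaround from the edit above: rebuild from the 1.8.1 release
git clone https://github.com/ucb-bar/chipyard.git chipyard-1.8.1
cd chipyard-1.8.1
git checkout 1.8.1
./build-setup.sh riscv-tools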


QuqqU commented May 1, 2023

I have the same issue.

WARNING:conda_lock.conda_lock:WARNING: installation of pip dependencies is only supported by the 'conda-lock install' command. Other tools may silently ignore them. For portability, we recommend using the newer unified lockfile format (i.e. removing the --kind=explicit argument.
INFO:root:Downloading and Extracting Packages
INFO:root:firtool-1.30.0
INFO:root:riscv-tools-1.0.3
ERROR:root:                                        
ERROR:root:ChecksumMismatchError: Conda detected a mismatch between the expected content and downloaded content
ERROR:root:for url 'https://conda.anaconda.org/ucb-bar/linux-64/riscv-tools-1.0.3-0_h1234567_ga1b1b14.conda'.
ERROR:root:  download saved to: /home/quqqu/miniconda3/pkgs/riscv-tools-1.0.3-0_h1234567_ga1b1b14.conda
ERROR:root:  expected md5: 85c9a0d9dd5311aaa2c5064f2c87b496
ERROR:root:  actual md5: 36b6e97775473002590a94c14eb46284
ERROR:root:
ERROR:root:ChecksumMismatchError: Conda detected a mismatch between the expected content and downloaded content
ERROR:root:for url 'https://conda.anaconda.org/ucb-bar/linux-64/firtool-1.30.0-0_h1234567_gdb40efbcd.conda'.
ERROR:root:  download saved to: /home/quqqu/miniconda3/pkgs/firtool-1.30.0-0_h1234567_gdb40efbcd.conda
ERROR:root:  expected md5: 46a56cfe00f36b35e2d321bfabebf873
ERROR:root:  actual md5: f095f34c5ac1a2c7d71ec6f771236ac8
ERROR:root:
ERROR:root:
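
For the checksum mismatches specifically, a common remedy (a sketch using the cache paths from the log above, not an official fix) is to delete the corrupted downloads from the conda package cache and re-run the setup:

# Remove the corrupted downloads named in the error messages
rm -f /home/quqqu/miniconda3/pkgs/riscv-tools-1.0.3-0_h1234567_ga1b1b14.conda
rm -f /home/quqqu/miniconda3/pkgs/firtool-1.30.0-0_h1234567_gdb40efbcd.conda

# Or clear the whole package cache, then retry
conda clean --packages --tarballs --index-cache -y
./build-setup.sh riscv-tools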


zqj2333 commented May 1, 2023

> I have the same issue on Ubuntu 20.04. My logfile ends with: [...]
>
> Did you solve this issue?
>
> [Edit: Downgrading to 1.8.1 does help.]

Hello, unfortunately I still haven't solved it.


zweiwang commented May 5, 2023

I have the same issue.


JL102 commented Jun 20, 2023

Is there a way to skip the step that builds FireMarshal? I keep getting this issue, which is completely blocking me from using Chipyard.

alfonrod commented:

I found the same issue in release 1.9.1, but managed to solve it and build everything else by running the following (which seems to effectively disable FireSim and FireMarshal in the setup process), as specified in the usage message of that script:

./build-setup.sh riscv-tools -s 6 -s 7 -s 8 -s 9


xinyu199 commented Jul 3, 2023

Hello, I have an issue in "./build-setup.sh riscv-tools":
/root/mambaforge/lib/python3.10/site-packages/pydantic/_internal/_config.py:257: UserWarning: Valid config keys have changed in V2:
* 'json_encoders' has been removed
  warnings.warn(message, UserWarning)
/root/mambaforge/lib/python3.10/site-packages/pydantic/_internal/_config.py:257: UserWarning: Valid config keys have changed in V2:
* 'allow_mutation' has been removed
  warnings.warn(message, UserWarning)
Traceback (most recent call last):
  File "/root/mambaforge/bin/conda-lock", line 10, in <module>
    sys.exit(main())
  File "/root/mambaforge/lib/python3.10/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/root/mambaforge/lib/python3.10/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/root/mambaforge/lib/python3.10/site-packages/click/core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/root/mambaforge/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/root/mambaforge/lib/python3.10/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/root/mambaforge/lib/python3.10/site-packages/click/decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/root/mambaforge/lib/python3.10/site-packages/conda_lock/conda_lock.py", line 1422, in install
    with _render_lockfile_for_install(
  File "/root/mambaforge/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/root/mambaforge/lib/python3.10/site-packages/conda_lock/conda_lock.py", line 934, in _render_lockfile_for_install
    lock_content = parse_conda_lock_file(pathlib.Path(filename))
  File "/root/mambaforge/lib/python3.10/site-packages/conda_lock/lockfile/__init__.py", line 137, in parse_conda_lock_file
    return lockfile_v1_to_v2(LockfileV1.parse_obj(content))
  File "/root/mambaforge/lib/python3.10/site-packages/typing_extensions.py", line 2562, in wrapper
    return __arg(*args, **kwargs)
  File "/root/mambaforge/lib/python3.10/site-packages/pydantic/main.py", line 935, in parse_obj
    return cls.model_validate(obj)
  File "/root/mambaforge/lib/python3.10/site-packages/pydantic/main.py", line 480, in model_validate
    return cls.__pydantic_validator__.validate_python(
pydantic_core._pydantic_core.ValidationError: 1 validation error for Lockfile
package.384.optional
  Field required [type=missing, input_value={'dependencies': {}, 'has....whl', 'version': '6.0'}, input_type=dict]

martonbognar commented:

> I found the same issue in release 1.9.1, but managed to solve it [...] by running:
>
> ./build-setup.sh riscv-tools -s 6 -s 7 -s 8 -s 9

Thanks, I also had the same issue, but this is a good workaround!


JL102 commented Oct 4, 2023

Hi @jerryz123, responding to #1609

To reproduce my issue, I cloned Chipyard's main branch into ~/chipyard_main and then ran the setup script.

The first time I ran it, I got a different error than the one I was used to:

Cloning into '/home/drak/chipyard_main/software/firemarshal/wlutil/busybox'...
remote: Enumerating objects: 18140, done.
remote: Counting objects: 100% (15/15), done.
remote: Compressing objects: 100% (14/14), done.
remote: Total 18140 (delta 1), reused 1 (delta 1), pack-reused 18125
Receiving objects: 100% (18140/18140), 4.29 MiB | 4.35 MiB/s, done.
Resolving deltas: 100% (1340/1340), done.
Submodule path 'boards/default/distros/br/buildroot': checked out 'd48a8beb39275a479185ab9b3232cd15dcfb87ab'
Submodule path 'boards/default/firmware/opensbi': checked out '5ccebf0a7ec79d0bbef36d6dcdc2717f25d40767'
error: RPC failed; curl 56 OpenSSL SSL_read: OpenSSL/3.1.2: error:0A000119:SSL routines::decryption failed or bad record mac, errno 0
error: 8186 bytes of body are still expected
fetch-pack: unexpected disconnect while reading sideband packet
fatal: early EOF
fatal: fetch-pack: invalid index-pack output
fatal: could not fetch b62836419ea3e2d19de323071132fa26135c2f10 from promisor remote
fatal: Unable to checkout '71bece669db27e8c5bf1b6e25780bab194d23103' in submodule path 'boards/default/linux'

After this, I just ran the setup script a second time.

+ git submodule update --progress --filter=tree:0 --init boards/default/linux boards/default/firmware/opensbi wlutil/busybox boards/default/distros/br/buildroot boards/firechip/drivers/iceblk-driver boards/firechip/drivers/icenet-driver
Submodule path 'boards/default/linux': checked out '71bece669db27e8c5bf1b6e25780bab194d23103'
Submodule path 'boards/firechip/drivers/iceblk-driver': checked out '4e6f183337b27aa5be99dc4873ea507572aceb9f'
Submodule path 'wlutil/busybox': checked out '70f77e4617e06077231b8b63c3fb3406d7f8865d'
To check on progress, either call marshal with '-v' or see the live output at:
/home/drak/chipyard_main/software/firemarshal/logs/br-base-build-2023-10-04--18-32-32-34U5DIC95HPJQN82.log
.  /home/drak/chipyard_main/software/firemarshal/boards/firechip/base-workloads/br-base/host-init.sh
.  /home/drak/chipyard_main/software/firemarshal/images/firechip/br.8fff/br.8fff.img
Attempting to download cached image: https://raw.githubusercontent.com/firesim/firemarshal-public-br-images/main/images/firechip/br.8fff/br.8fff.img.zip
Unzipping cached image: /home/drak/chipyard_main/software/firemarshal/boards/firechip/distros/br/br.8fff.img.zip
Skipping full buildroot build. Using cached image /home/drak/chipyard_main/software/firemarshal/images/firechip/br.8fff/br.8fff.img from /home/drak/chipyard_main/software/firemarshal/boards/firechip/distros/br/br.8fff.img.zip
.  build_busybox
.  /home/drak/chipyard_main/software/firemarshal/images/firechip/br-base/br-base-bin
TaskError - taskid:/home/drak/chipyard_main/software/firemarshal/images/firechip/br-base/br-base-bin
PythonAction Error
Traceback (most recent call last):
  File "/home/drak/chipyard_main/.conda-env/lib/python3.10/site-packages/doit/action.py", line 461, in execute
    returned_value = self.py_callable(*self.args, **kwargs)
  File "/home/drak/chipyard_main/software/firemarshal/wlutil/build.py", line 544, in makeBin
    makeModules(config)
  File "/home/drak/chipyard_main/software/firemarshal/wlutil/build.py", line 462, in makeModules
    wlutil.run(makeCmd + " clean", cwd=driverDir, shell=True)
  File "/home/drak/chipyard_main/software/firemarshal/wlutil/wlutil.py", line 527, in run
    raise sp.CalledProcessError(p.returncode, prettyCmd)
subprocess.CalledProcessError: Command 'make LINUXSRC=/home/drak/chipyard_main/software/firemarshal/boards/firechip/base-workloads/br-base/../../linux clean' returned non-zero exit status 2.

ERROR: Failed to build workload br-base.json
Log available at: /home/drak/chipyard_main/software/firemarshal/logs/br-base-build-2023-10-04--18-32-32-34U5DIC95HPJQN82.log
ERROR: FAILURE: 1 builds failed

I see that both times, it failed at approximately the same step, but with a different specific error.

Here's the full log that's mentioned in the second error: https://pastebin.com/w8eVnEY2

jerryz123 commented:

I suspect the first error (error: RPC failed; curl 56 OpenSSL SSL_read) was caused by network instability. This left the repo in a bad state, with things partially cloned, which caused the next error when you reran the setup command.
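
One way to put the submodule tree back into a clean state after an interrupted clone (a sketch, run from the Chipyard root; a fresh clone is the safer option if this fails):

# Force every submodule back to a clean, fully checked-out state
git submodule sync --recursive
git submodule update --init --recursive --force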


JL102 commented Oct 5, 2023

> I suspect the first error (error: RPC failed; curl 56 OpenSSL SSL_read) was caused by network instability. This left the repo in a bad state, with things partially cloned, which caused the next error when you reran the setup command.

Hmmm. Network instability is definitely a possibility for the computer I tested it on. I'll try wiping, re-cloning and re-running it again to see if the error occurs in the exact same place (indicating a different error) or seems semi-random (indicating a network error).


zzulb commented Oct 6, 2023

I also encountered the same problem!

2023-10-06 17:11:16,889 [run ] [DEBUG] Running: "make LINUXSRC=/chipyard/software/firemarshal/boards/firechip/base-workloads/br-base/../../linux clean" in /chipyard/software/firemarshal/boards/firechip/base-workloads/br-base/../../drivers/icenet-driver
2023-10-06 17:11:16,893 [run ] [DEBUG] make: *** No rule to make target 'clean'.  Stop.
2023-10-06 17:11:16,899 [main ] [ERROR] Failed to build workload br-base.json
2023-10-06 17:11:16,899 [main ] [INFO ] Log available at: /chipyard/software/firemarshal/logs/br-base-build-2023-10-06--08-34-33-6N5DJJAYT18BY7DW.log
2023-10-06 17:11:16,899 [main ] [ERROR] FAILURE: 1 builds failed
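
The No rule to make target 'clean' message usually means the icenet-driver directory is empty (no Makefile), i.e. the driver submodules were never checked out. A quick check, assuming the FireMarshal path from the log above:

cd /chipyard/software/firemarshal
ls boards/firechip/drivers/icenet-driver    # an empty directory means the submodule was never checked out

# Re-fetch the missing driver sources
git submodule update --init boards/firechip/drivers/icenet-driver boards/firechip/drivers/iceblk-driver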


JL102 commented Oct 7, 2023

Now that #1614 is in a working state (yes, that whole PR was just an ADHD tangent of mine to help out with this issue), the error this time around is related to a guestmount command.

For people to reproduce:

git clone git@github.com:JL102/chipyard.git chipyard_test
cd chipyard_test
git checkout script-trycatch
./build-setup.sh

Last bit of the log:

 ========== BEGINNING STEP 9: Pre-compiling FireMarshal buildroot sources ==========
To check on progress, either call marshal with '-v' or see the live output at:
/home/drak/chipyard_test/software/firemarshal/logs/br-base-build-2023-10-07--00-11-12-SJSYZ8EGR4PWBZF5.log
.  /home/drak/chipyard_test/software/firemarshal/boards/firechip/base-workloads/br-base/host-init.sh
.  /home/drak/chipyard_test/software/firemarshal/images/firechip/br.8fff/br.8fff.img
Attempting to download cached image: https://raw.githubusercontent.com/firesim/firemarshal-public-br-images/main/images/firechip/br.8fff/br.8fff.img.zip
Unzipping cached image: /home/drak/chipyard_test/software/firemarshal/boards/firechip/distros/br/br.8fff.img.zip
Skipping full buildroot build. Using cached image /home/drak/chipyard_test/software/firemarshal/images/firechip/br.8fff/br.8fff.img from /home/drak/chipyard_test/software/firemarshal/boards/firechip/distros/br/br.8fff.img.zip
.  build_busybox
.  /home/drak/chipyard_test/software/firemarshal/images/firechip/br-base/br-base-bin
.  calc_br-base_dep
.  /home/drak/chipyard_test/software/firemarshal/images/firechip/br-base/br-base.img
TaskError - taskid:/home/drak/chipyard_test/software/firemarshal/images/firechip/br-base/br-base.img
PythonAction Error
Traceback (most recent call last):
  File "/home/drak/chipyard_test/.conda-env/lib/python3.10/site-packages/doit/action.py", line 461, in execute
    returned_value = self.py_callable(*self.args, **kwargs)
  File "/home/drak/chipyard_test/software/firemarshal/wlutil/build.py", line 602, in makeImage
    wlutil.applyOverlay(config['img'], config['overlay'])
  File "/home/drak/chipyard_test/software/firemarshal/wlutil/wlutil.py", line 671, in applyOverlay
    copyImgFiles(img, flist, 'in')
  File "/home/drak/chipyard_test/software/firemarshal/wlutil/wlutil.py", line 652, in copyImgFiles
    with mountImg(img, getOpt('mnt-dir')):
  File "/home/drak/chipyard_test/.conda-env/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/home/drak/chipyard_test/software/firemarshal/wlutil/wlutil.py", line 589, in mountImg
    run(['guestmount', '--pid-file', 'guestmount.pid', '-a', imgPath, '-m', '/dev/sda', mntPath])
  File "/home/drak/chipyard_test/software/firemarshal/wlutil/wlutil.py", line 527, in run
    raise sp.CalledProcessError(p.returncode, prettyCmd)
subprocess.CalledProcessError: Command 'guestmount --pid-file guestmount.pid -a /home/drak/chipyard_test/software/firemarshal/images/firechip/br-base/br-base.img -m /dev/sda /home/drak/chipyard_test/software/firemarshal/disk-mount' returned non-zero exit status 1.

ERROR: Failed to build workload br-base.json
Log available at: /home/drak/chipyard_test/software/firemarshal/logs/br-base-build-2023-10-07--00-11-12-SJSYZ8EGR4PWBZF5.log
ERROR: FAILURE: 1 builds failed
build-setup.sh: Build script failed with exit code 1 at step 9: Pre-compiling FireMarshal buildroot sources

here's the trace:

libguestfs: trace: set_verbose true
libguestfs: trace: set_verbose = 0
libguestfs: create: flags = 0, handle = 0x55f4735c3480, program = guestmount
libguestfs: trace: set_recovery_proc false
libguestfs: trace: set_recovery_proc = 0
libguestfs: trace: add_drive "/home/drak/chipyard_test/software/firemarshal/images/firechip/br-base/br-base.img"
libguestfs: trace: add_drive = 0
libguestfs: trace: launch
libguestfs: trace: max_disks
libguestfs: trace: max_disks = 255
libguestfs: trace: get_tmpdir
libguestfs: trace: get_tmpdir = "/tmp"
libguestfs: trace: version
libguestfs: trace: version = <struct guestfs_version = major: 1, minor: 40, release: 2, extra: , >
libguestfs: trace: get_backend
libguestfs: trace: get_backend = "direct"
libguestfs: launch: program=guestmount
libguestfs: launch: version=1.40.2
libguestfs: launch: backend registered: unix
libguestfs: launch: backend registered: uml
libguestfs: launch: backend registered: libvirt
libguestfs: launch: backend registered: direct
libguestfs: launch: backend=direct
libguestfs: launch: tmpdir=/tmp/libguestfsRUcLIp
libguestfs: launch: umask=0022
libguestfs: launch: euid=1000
libguestfs: trace: get_cachedir
libguestfs: trace: get_cachedir = "/var/tmp"
libguestfs: begin building supermin appliance
libguestfs: run supermin
libguestfs: command: run: /usr/bin/supermin
libguestfs: command: run: \ --build
libguestfs: command: run: \ --verbose
libguestfs: command: run: \ --if-newer
libguestfs: command: run: \ --lock /var/tmp/.guestfs-1000/lock
libguestfs: command: run: \ --copy-kernel
libguestfs: command: run: \ -f ext2
libguestfs: command: run: \ --host-cpu x86_64
libguestfs: command: run: \ /usr/lib/x86_64-linux-gnu/guestfs/supermin.d
libguestfs: command: run: \ -o /var/tmp/.guestfs-1000/appliance.d
supermin: version: 5.1.20
supermin: package handler: debian/dpkg
supermin: acquiring lock on /var/tmp/.guestfs-1000/lock
supermin: build: /usr/lib/x86_64-linux-gnu/guestfs/supermin.d
supermin: reading the supermin appliance
supermin: build: visiting /usr/lib/x86_64-linux-gnu/guestfs/supermin.d/base.tar.gz type gzip base image (tar)
supermin: build: visiting /usr/lib/x86_64-linux-gnu/guestfs/supermin.d/daemon.tar.gz type gzip base image (tar)
supermin: build: visiting /usr/lib/x86_64-linux-gnu/guestfs/supermin.d/excludefiles type uncompressed excludefiles
supermin: build: visiting /usr/lib/x86_64-linux-gnu/guestfs/supermin.d/hostfiles type uncompressed hostfiles
supermin: build: visiting /usr/lib/x86_64-linux-gnu/guestfs/supermin.d/init.tar.gz type gzip base image (tar)
supermin: build: visiting /usr/lib/x86_64-linux-gnu/guestfs/supermin.d/packages type uncompressed packages
supermin: build: visiting /usr/lib/x86_64-linux-gnu/guestfs/supermin.d/packages-hfsplus type uncompressed packages
supermin: build: visiting /usr/lib/x86_64-linux-gnu/guestfs/supermin.d/packages-reiserfs type uncompressed packages
supermin: build: visiting /usr/lib/x86_64-linux-gnu/guestfs/supermin.d/packages-xfs type uncompressed packages
supermin: build: visiting /usr/lib/x86_64-linux-gnu/guestfs/supermin.d/udev-rules.tar.gz type gzip base image (tar)
supermin: mapping package names to installed packages
supermin: resolving full list of package dependencies
supermin: build: 238 packages, including dependencies
supermin: build: 12322 files
supermin: build: 8885 files, after matching excludefiles
supermin: build: 8888 files, after adding hostfiles
supermin: build: 8885 files, after removing unreadable files
supermin: build: 8893 files, after munging
supermin: kernel: looking for kernel using environment variables ...
supermin: kernel: looking for kernels in /lib/modules/*/vmlinuz ...
supermin: kernel: looking for kernels in /boot ...
supermin: failed to find a suitable kernel (host_cpu=x86_64).

I looked for kernels in /boot and modules in /lib/modules.

If this is a Xen guest, and you only have Xen domU kernels
installed, try installing a fullvirt kernel (only for
supermin use, you shouldn't boot the Xen guest with it).
libguestfs: error: /usr/bin/supermin exited with error status 1, see debug messages above
libguestfs: trace: launch = -1 (error)
libguestfs: trace: close
libguestfs: closing guestfs handle 0x55f4735c3480 (state 0)
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfsRUcLIp

I see the "failed to find a suitable kernel" message; could it be related to some RISC-V dependencies not being loaded?
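
For what it's worth, supermin is looking for the host x86_64 kernel (to build the libguestfs appliance), not a RISC-V one. On Debian/Ubuntu hosts, /boot/vmlinuz-* is often readable only by root, which makes supermin fail exactly like this; a commonly suggested workaround (an assumption about the host setup, not a confirmed fix for this thread):

ls -l /boot/vmlinuz-*           # supermin runs as your user and needs read access here
sudo chmod +r /boot/vmlinuz-*   # if the images are mode 0600, make them readable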


JL102 commented Oct 20, 2023

Hmm. I'm now encountering the error described in the previous comment, but in an already-set-up instance of FireMarshal. After looking closer at the logs, I see it's attempting to load a file from the system /boot folder. Why is it doing that?

zhao-denghui commented:

> Now that #1614 is in a working state, the error this time around is related to a guestmount command. [...]
>
> I see the "failed to find a suitable kernel" message; could it be related to some RISC-V dependencies not being loaded?

Hello JL102, I have the same error now. Have you solved it?

jerryz123 commented:

You can just skip the FireMarshal step in the build-setup script (run ./build-setup.sh -h to see the flags).
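
For example (step numbers taken from the 1.9.x workaround earlier in this thread; check the help output first, since step numbering can change between releases):

./build-setup.sh -h                                 # list the steps and their numbers
./build-setup.sh riscv-tools -s 6 -s 7 -s 8 -s 9    # skip the FireSim/FireMarshal steps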
