
Add sifive_x280 configuration #737

Merged
merged 11 commits into flame:master on Nov 3, 2023

Conversation

Aaron-Hutchinson
Contributor

This PR adds a new configuration to BLIS, called sifive_x280. This configuration is built for the RISC-V instruction set architecture and is optimized for SiFive's X280 processor. Included are implementations for most level 1, 1f, and 3 kernels, with the level 3 gemm and gemmtrsm kernels receiving the most attention.

Since this configuration targets RISC-V, compiling it and running tests on typical machines is challenging. For convenience, we've written a simple script that makes testing this configuration easier. The script can be found here; it has the following flow (a rough sketch of the final step follows the list):

  • downloads and builds the RISC-V GNU Linux toolchain (just for the C runtime)
  • downloads and builds the RISC-V LLVM Linux toolchain, integrating the C runtime from GNU
  • downloads QEMU and builds the riscv64 Linux usermode emulator
  • downloads BLIS, configures it for sifive_x280, builds it, and runs make check.
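
As a rough illustration, that final step amounts to a cross-compile-and-emulate cycle like the sketch below. The toolchain path and QEMU flags are illustrative assumptions modeled on the CI settings discussed later in this thread, not the script's exact contents.

# Rough sketch only; $TOOLCHAIN is a placeholder for the cross-toolchain install prefix.
export CC=$TOOLCHAIN/bin/clang
export TESTSUITE_WRAPPER="$TOOLCHAIN/bin/qemu-riscv64 -cpu rv64,vext_spec=v1.0,v=true,vlen=512 -L $TOOLCHAIN/sysroot"
./configure --disable-shared CC=$CC sifive_x280
make -j
make -j check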

Developers for the sifive_x280 implementation (in alphabetical order):

Special thanks to @fgvanzee for their assistance in debugging various issues and helping our team understand the BLIS framework.

We look forward to your feedback and are very excited to join the BLIS community.

@devinamatthews
Member

@Aaron-Hutchinson @nick-knight @myeh01 awesome work, much appreciated! Regarding steps 1-3 of the testing process, can these products be pre-built? This would really help CI build times... @angsch and @leekillough have been putting similar things here.

@nick-knight

@devinamatthews Yes, absolutely. The GNU toolchain build, in particular, is substantial. But your comment touches on a larger shortcoming of our PR: we have not addressed CI. (We meant to add a comment about this when we submitted the PR.) We are hoping for some guidance from the community on the best way to go about this, since we have little experience with setting up CI, and none with BLIS CI in particular.

@devinamatthews
Member

The PR can be merged without it. Once we get at least one RISC-V configuration running reliably in Travis, adding more shouldn't be too difficult.

@angsch
Collaborator

angsch commented Mar 30, 2023

The PR can be merged without it. Once we get at least one RISC-V configuration running reliably in Travis, adding more shouldn't be too difficult.

I think that we can extend the CI infrastructure that @leekillough and I set up. I am happy to help here. Further, before merging the PR, it would be good to check how the x280 target interacts with the auto configure and ISA detection work that we added.

@leekillough
Collaborator

Is there a C macro which is always defined when an X280 compiler is being used?

There is a mechanism which auto-detects the RISC-V architecture based on __riscv* macros. It is used when ./configure auto is invoked.

I want to improve it so that it can also detect X280, because with our PR, it will detect X280 as rv64iv.

If X280 is detected when configure auto is used, do you want it to choose the sifive_x280 configuration?

@Aaron-Hutchinson
Contributor Author

Regarding steps 1-3 of the testing process, can these products be pre-built? This would really help CI build times... @angsch and @leekillough have been putting similar things here.

I'd be happy to upload a tarball of the prebuilt toolchain and QEMU for CI purposes. It looks like there's already a QEMU tarball in the link in your post, so I can try replacing the QEMU portion of our automation script with just downloading and unpacking that tarball. I can also do something similar with the prebuilt toolchain once it's uploaded.

I think then translating the script over to CI would be much smoother.
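
Concretely, the QEMU portion of the script would then reduce to roughly the following (the tarball name is the one referenced later in this thread; treat this as an illustrative sketch):

TARBALL_QEMU=qemu-riscv-2023.02.25-ubuntu-20.04.tar.gz
wget https://github.com/flame/ci-utils/raw/master/riscv/${TARBALL_QEMU}
tar -xf $TARBALL_QEMU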

Is there a C macro which is always defined when an X280 compiler is being used?

Our automation script uses the upstream toolchain, so I'm not sure if there would be a way to differentiate it from rv64iv through C preprocessor macros. @nick-knight would be able to say more, but is out-of-office through next week.

@devinamatthews
Member

Is there anything like cpuid on RISC-V?

@leekillough
Collaborator

@devinamatthews: There is no need for a runtime cpuid on RISC-V, because there are predefined macros in the RISC-V C API. Since BLIS is typically cross-compiled for RISC-V and executed on a different host architecture, requiring a simulator inside configure would be awkward. Fortunately, the RISC-V C API provides preprocessor macros for architecture detection, so $(CC) -E can be used to autodetect the RISC-V architecture.

@devinamatthews, @Aaron-Hutchinson @nick-knight :

There are two RISC-V autodetection header files in PR693:

bli_riscv_cpuid.h, which returns one of rv32i, rv32iv, rv64i, rv64iv, or generic, depending on which of the four major RISC-V architectures is detected (XLEN=32 or XLEN=64, with or without the V vector extension). In the configure script, if this autodetection header returns generic, configure falls back on the existing BLIS autodetection mechanism.

bli_riscv_detect_arch.h, which returns the full detected RISC-V architecture string, such as rv64imafdcv. The result of this header is used to form the -march= option. On some versions of Clang and GCC, -march=...v must be specified explicitly to enable the V vector extension; the BLIS rv32iv and rv64iv configurations force this by passing -DFORCE_RISCV_VECTOR when preprocessing bli_riscv_detect_arch.h, because preprocessing the header with default compiler options would not enable V.
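
As a rough illustration of the approach (not the actual contents of either header), a configure-time probe can run the cross-compiler's preprocessor over a small header and read back the matching configuration name:

# Illustrative sketch only; bli_riscv_cpuid.h in PR693 is more elaborate than this.
cat > probe.h <<'EOF'
#if defined(__riscv) && __riscv_xlen == 64 && defined(__riscv_v)
rv64iv
#elif defined(__riscv) && __riscv_xlen == 64
rv64i
#elif defined(__riscv) && __riscv_xlen == 32 && defined(__riscv_v)
rv32iv
#elif defined(__riscv) && __riscv_xlen == 32
rv32i
#else
generic
#endif
EOF
$CC -E -P probe.h   # prints the matching configuration name; __riscv_v is only defined when V is enabled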

@devinamatthews
Member

But if two companies make rv64iv chips, how do you tell them apart?

@leekillough
Collaborator

But if two companies make rv64iv chips, how do you tell them apart?

Hence my question in #737 (comment).

@angsch and I have created a foundational RISC-V BLIS port which should be adaptable to all RISC-V variants. But we understand that there may be specific BLIS implementations for specific RISC-V implementations.

The BLIS RISC-V autodetection mechanism can identify base features of the RISC-V implementation, such as whether the A, M, and V extensions are available. But unless there is a C macro to identify the X280 or other specific implementations, the BLIS user will need to specify ./configure sifive_x280 instead of ./configure auto to get the most out of a particular RISC-V implementation.
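
Concretely, until such a macro exists, users targeting the X280 select the configuration explicitly (with CC pointing at a RISC-V cross-compiler, as discussed elsewhere in this thread):

# Select the SiFive X280 configuration explicitly instead of relying on ./configure auto:
./configure sifive_x280
make -j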

@Aaron-Hutchinson
Contributor Author

Regarding prebuilding the toolchain for CI, I'm not sure how portable the toolchain that our script creates is. It appears to hardcode some file paths, and I fear this may cause issues if I were to create a tarball of my local build and upload it (I have limited knowledge in this area, so correct me if I'm wrong).

Would it be possible to have one of the CI machines build the toolchain itself and save the result for future runs?

@angsch
Collaborator

angsch commented Apr 4, 2023

Regarding prebuilding the toolchain for CI, I'm not sure how portable the toolchain that our script creates is. It appears to hardcode some file paths, and I fear this may cause issues if I were to create a tarball of my local build and upload it (I have limited knowledge in this area, so correct me if I'm wrong).

That concern is justified. I encountered incompatibilities when I first packaged qemu; to package it, I had to replicate the build environment of the CI machine. Further, the toolchain build was sensitive to the execution environment. I think the incompatibilities are solely due to mismatched versions of linked libraries such as glibc.

I suggest that you use the tarball of qemu and the toolchain that Lee and I use in our PR. That runs successfully on the CI machine.

@angsch
Collaborator

angsch commented Apr 4, 2023

Would it be possible to have one of the CI machines build the toolchain itself and save the result for future runs?

I tried this and it is not possible. The Travis runs will hit a timeout.

@Aaron-Hutchinson
Contributor Author

I tried this and it is not possible. The Travis runs will hit a timeout.

Can the timeout be increased for the steps that build the toolchain/QEMU?

@angsch
Collaborator

angsch commented Apr 4, 2023

Can the timeout be increased for the steps that build the toolchain/QEMU?

We were recommended to aim for a runtime below 10 minutes for our rv[32,64]iv targets. Note that make -j does not do the trick. Further, since your CI target will also be triggered when something unrelated is pushed (e.g., a non-RISC-V target), building the toolchain will burn CPU hours.

@Aaron-Hutchinson
Contributor Author

We were recommended to aim for a runtime below 10 minutes for our rv[32,64]iv targets. Note that make -j does not do the trick. Further, since your CI target will also be triggered when something unrelated is pushed (e.g., a non-RISC-V target), building the toolchain will burn CPU hours.

Again, please forgive my limited experience in this area. I would think there would be a way to save the toolchain and QEMU builds across CI invocations and rebuild them only when they don't already exist on the machine or have become out of date. This way they're built once on the CI machine, and nearly all CI runs skip the toolchain and QEMU build steps.
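
In shell terms, the idea amounts to a guard like the sketch below; the cache directory and helper script names are hypothetical placeholders, not part of this PR.

# Hypothetical guard: rebuild the cross toolchain and QEMU only when cached copies are missing.
TOOLCHAIN_DIR=$HOME/cache/riscv-toolchain
if [ ! -x "$TOOLCHAIN_DIR/bin/clang" ]; then
  ./build-toolchain.sh --prefix="$TOOLCHAIN_DIR"   # hypothetical helper
fi
if [ ! -x "$TOOLCHAIN_DIR/bin/qemu-riscv64" ]; then
  ./build-qemu.sh --prefix="$TOOLCHAIN_DIR"        # hypothetical helper
fi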

@devinamatthews
Member

I think Travis also has Docker images of the CI environment which you can run locally.

@leekillough
Collaborator

@Aaron-Hutchinson:

GitHub has a 100 MB limit on tracked files before it requires paid service.

Instead of storing files in the repository, we would need to use release binaries, which have a 2 GB limit; that is the same limit as Git Large File Storage on GitHub.

Travis has quotas on how much CPU, memory, and disk space can be used. Once the credits for a billing period run out, more must be purchased, or you must wait until the next billing period. See this also.

According to @angsch, the dependency on linked libraries makes it necessary to build the toolchain in an environment that is compatible with the CI machines. So you need to build on a fresh Ubuntu Focal machine / Docker container.

@devinamatthews
Member

devinamatthews commented Apr 4, 2023 via email

@Aaron-Hutchinson
Contributor Author

@Aaron-Hutchinson:

GitHub has a 100 MB limit on tracked files before it requires paid service.

Instead of storing files in the repository, we would need to use release binaries, which have a 2 GB limit; that is the same limit as Git Large File Storage on GitHub.

Travis has quotas on how much CPU, memory, and disk space can be used. Once the credits for a billing period run out, more must be purchased, or you must wait until the next billing period. See this also.

According to @angsch, the dependency on linked libraries makes it necessary to build the toolchain in an environment that is compatible with the CI machines. So you need to build on a fresh Ubuntu Focal machine / Docker container.

I'm proposing that we do not track any toolchain/QEMU-related files on GitHub, and just use build caching for them. It looks like Travis has built-in functionality for exactly this kind of purpose. See here and here. This line from the first link is particularly relevant:

Caches lets Travis CI store directories between builds, which is useful for storing dependencies that take longer to compile or download.

@fgvanzee
Member

@Aaron-Hutchinson Caching sounds fine to me. I read the links you provided, but I'm still not 100% certain how we would employ caching in this context. (Travis could use a few more examples in their documentation!)

@Aaron-Hutchinson
Contributor Author

@Aaron-Hutchinson Caching sounds fine to me. I read the links you provided, but I'm still not 100% certain how we would employ caching in this context. (Travis could use a few more examples in their documentation!)

I agree that Travis' documentation is not very thorough. I've read a little bit about this feature and it's something I'd like to try pursuing.

Does anyone know if there is a local version of Travis CI I can use on my own machine to test the results of changes to the .travis.yml file? The answers I've found from searching around are greatly out of date.

@devinamatthews
Member

I believe there is a local version using Docker. At least there was a few years ago.

@Aaron-Hutchinson
Contributor Author

I haven't been able to find any official documentation on a local version, and unofficial discussions I've come across are a few years old and don't appear to work any more. It looks like they may have made this an Enterprise feature.

@leekillough
Collaborator

Caching is not recommended for built toolchains (unless that document is outdated). Caching also used not to be performed for Docker images, but it seems to be now. See this and this too.

CPPROCFLAGS :=
CMISCFLAGS := $(CMISCFLAGS_SIFIVE) -fdata-sections -ffunction-sections \
-fdiagnostics-color=always -fno-rtti -fno-exceptions \
-std=gnu++17
Collaborator

Should this read -std=gnu17? I think that gnu++17 is a C++-only option.

Member

-std=gnu++17 should be removed completely since BLIS already adds -std=c99.


Thanks. We just copied this from the generic make_defs.mk without really understanding what was required by the project. IIRC, a bunch of the warning flags are also redundant (generated somewhere else in the build system).

Collaborator

@Aaron-Hutchinson I think you forgot to update CMISCFLAGS when you rebased

Contributor Author

Thanks for the reminder! I did indeed forget. This will be fixed in the upcoming commit.

@leekillough
Collaborator

Since this configuration targets RISC-V, compiling it and running tests on typical machines is challenging. For convenience, we've written a simple script that aims to make testing this configuration easier. The script can be found here, which has the following flow:

* downloads and builds the RISC-V GNU Linux toolchain (just for the C runtime)

* downloads and builds the RISC-V LLVM Linux toolchain, integrating the C runtime from GNU

* downloads QEMU and builds the riscv64 Linux usermode emulator

* downloads BLIS, configures it for `sifive_x280`, builds it, and runs `make check`.

RISC-V General Toolchain Builder

The following script is used in-house at @tactcomplabs:

build-riscv.txt (rename to build-riscv.sh).

  • It supports any valid RISC-V ARCH / ABI / VLEN combination (e.g., rv64imafdcv/lp64d, rv32imaf/ilp32f).
  • It supports GCC and Clang/LLVM (thanks to @cmuellner).
  • It supports QEMU and Spike/PK.
  • It supports specifying the branch/tag/commit of the riscv-gnu-toolchain to use (e.g., master, rvv-next, latest).
  • It clones from the repositories as needed, starting with a clean build each time.
  • It autodetects missing package dependencies on Debian-based platforms.
  • It creates a riscv.sh script which, when sourced, sets all of the environment variables needed to cross-compile and run a software package under a simulator.
  • It unsets any environment variables set beforehand which could affect builds.
  • It uses color highlighting and interactive prompts if it's run on a terminal.
  • It profiles the time spent in each Bash function.

To use it, edit the variables at the top of the file, e.g.,

# Variables defining the RISC-V toolchain

# Build parameters
RISCV_ARCH=rv64imafdv
RISCV_ABI=lp64d
RISCV_VLEN=128

# gnu or llvm
COMPILER=gnu

# latest: The most recent tagged release of the RISC-V toolchain
# rvv-next: An experimental RISC-V toolchain branch (stale?)
# master: The latest development branch
# <commitID>
RISCV_GNU_TAG=rvv-next

# qemu or spike
RISCV_SIM=qemu

and then run ./build-riscv.sh or bash ./build-riscv.txt.

To Build BLIS

After the toolchain is built, cd blis and type, e.g.,

source ~/riscv/rv64imafdv_lp64d_vlen128/riscv.sh
./configure rv64iv
make -j
make -j checkblis-fast

Build issues encountered with this PR

(The C++ options have been removed, and merge conflicts eliminated, in sifive#3.)

Your script sets:

TESTSUITE_WRAPPER="$QEMU_PATH -cpu $QEMU_CPU -L $CLANG_CROSS_INSTALL_DIR/sysroot"

while also using:

BLIS_OPTIONS="--prefix=sifive_x280 --disable-shared"

... which seems to exclude shared libraries, while also specifying options to use them.

When using QEMU, our script sets:

export QEMU_LD_PREFIX=$RISCV/sysroot

... which allows QEMU to work with BLIS shared libraries.
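
For example, assuming the cross-compiled BLIS testsuite driver test_libblis.x is at hand, a dynamically linked run under the user-mode emulator looks roughly like:

export QEMU_LD_PREFIX=$RISCV/sysroot
qemu-riscv64 ./test_libblis.x   # the dynamic loader and shared libraries are resolved under $RISCV/sysroot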

When I build my toolchain with tag rvv-next and then attempt to build BLIS with sifive_x280, I get the following error:

Compiling obj/sifive_x280/kernels/sifive_x280/1/bli_addv_sifive_x280_intr/bli_addv_sifive_x280_intr.o ('sifive_x280' CFLAGS for kernels)
In file included from kernels/sifive_x280/1/bli_addv_sifive_x280_intr/bli_addv_sifive_x280_intr.c:40:
kernels/sifive_x280/1/bli_addv_sifive_x280_intr/./bli_addv_sifive_x280_intr_real.c: In function 'bli_saddv_sifive_x280_intr':
kernels/sifive_x280/1/bli_addv_sifive_x280_intr/../../riscv_overloaded_intrinsics.h:38:34: warning: implicit declaration of function '__riscv_vsetvl_e32m8' [-Wimplicit-function-declaration]
   38 | #define VSETVL_(PRECISION, LMUL) __riscv_vsetvl_e##PRECISION##LMUL
      |                                  ^~~~~~~~~~~~~~~~
kernels/sifive_x280/1/bli_addv_sifive_x280_intr/../../riscv_overloaded_intrinsics.h:39:33: note: in expansion of macro 'VSETVL_'
   39 | #define VSETVL(PRECISION, LMUL) VSETVL_(PRECISION, LMUL)
      |                                 ^~~~~~~
kernels/sifive_x280/1/bli_addv_sifive_x280_intr/./bli_addv_sifive_x280_intr_real.c:52:21: note: in expansion of macro 'VSETVL'
   52 |         size_t vl = VSETVL(PREC, LMUL)(avl);
      |                     ^~~~~~
kernels/sifive_x280/1/bli_addv_sifive_x280_intr/../../riscv_overloaded_intrinsics.h:43:37: warning: implicit declaration of function '__riscv_vle32_v_f32m8' [-Wimplicit-function-declaration]
   43 | #define VLE_V_F_(PRECISION, LMUL)   __riscv_vle##PRECISION##_v_f##PRECISION##LMUL
      |                                     ^~~~~~~~~~~
kernels/sifive_x280/1/bli_addv_sifive_x280_intr/../../riscv_overloaded_intrinsics.h:44:36: note: in expansion of macro 'VLE_V_F_'
   44 | #define VLE_V_F(PRECISION, LMUL)   VLE_V_F_(PRECISION, LMUL)
      |                                    ^~~~~~~~
kernels/sifive_x280/1/bli_addv_sifive_x280_intr/./bli_addv_sifive_x280_intr_real.c:56:20: note: in expansion of macro 'VLE_V_F'
   56 |             xvec = VLE_V_F(PREC, LMUL) (x, vl);
      |                    ^~~~~~~
kernels/sifive_x280/1/bli_addv_sifive_x280_intr/../../riscv_overloaded_intrinsics.h:43:37: error: incompatible types when assigning to type 'vfloat32m8_t' from type 'int'
   43 | #define VLE_V_F_(PRECISION, LMUL)   __riscv_vle##PRECISION##_v_f##PRECISION##LMUL
      |                                     ^~~~~~~~~~~
kernels/sifive_x280/1/bli_addv_sifive_x280_intr/../../riscv_overloaded_intrinsics.h:44:36: note: in expansion of macro 'VLE_V_F_'
   44 | #define VLE_V_F(PRECISION, LMUL)   VLE_V_F_(PRECISION, LMUL)
      |                                    ^~~~~~~~
kernels/sifive_x280/1/bli_addv_sifive_x280_intr/./bli_addv_sifive_x280_intr_real.c:56:20: note: in expansion of macro 'VLE_V_F'
   56 |             xvec = VLE_V_F(PREC, LMUL) (x, vl);
      |                    ^~~~~~~
compilation terminated due to -Wfatal-errors.
make: *** [Makefile:696: obj/sifive_x280/kernels/sifive_x280/1/bli_addv_sifive_x280_intr/bli_addv_sifive_x280_intr.o] Error 1

Is there an rvv-next riscv-gnu-toolchain configure option that needs to be specified in order to enable the vector intrinsics?

@angsch @nick-knight @Aaron-Hutchinson @devinamatthews @fgvanzee @ct-clmsn

@angsch
Collaborator

angsch commented Apr 19, 2023

@Aaron-Hutchinson In order to avoid duplication, I tested the QEMU tarball that Lee and I use. I faced the compilation problem with the vector intrinsics too, so my test experimentally enabled all extensions that the x280 has. Based on these tests, I am confident that you can use the same QEMU tarball for your CI. The tarball lives in a sibling repo: https://github.com/flame/ci-utils/blob/master/riscv/qemu-riscv-2023.02.25-ubuntu-20.04.tar.gz.

@nick-knight

nick-knight commented Apr 19, 2023

Thanks for all the feedback, sorry we're slow to respond.

Regarding the RISC-V vector intrinsics issue, this name-mangling was introduced recently at the behest of the RISC-V Toolchains SIG, in riscv-non-isa/riscv-c-api-doc#31. It made its way into the vector intrinsics API, version 0.11 (multiple PRs, I won't try to list them all). That API change, in turn, appeared in LLVM 16.0.0. Unfortunately, I don't know the status with GCC. Historically, GCC has lagged LLVM w.r.t. chasing unratified/churning RISC-V specs, so I'm not surprised that LLVM works but GCC does not.

On that last point, in case it isn't clear, the RISC-V vector intrinsics API is a community project, sponsored by RISC-V International:

We are working towards v1.0 of the API but have not frozen yet. And it looks like we'll miss the GCC 13 window. The task group meets monthly; we'd love your company. If you have questions on GCC support for the latest intrinsics API changes, this is the right community to bring it up with.
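
A quick way to check whether a given toolchain already provides the new __riscv_-prefixed intrinsics is a small probe like the sketch below (the -march string is an assumption; adjust it to the target):

cat > probe.c <<'EOF'
#include <stddef.h>
#include <riscv_vector.h>
size_t probe(size_t avl) { return __riscv_vsetvl_e32m8(avl); }
EOF
$CC -march=rv64gcv -O2 -c probe.c && echo "prefixed RVV intrinsics available"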

@nick-knight

nick-knight commented Jun 29, 2023

Ah yes, the RVV intrinsics API is still not frozen, we should be prepared for churn.

Regarding the API question, we can populate a tuple using the "insertion" functions and extract tuple elements (vectors) using the "extraction" functions.

You raise a good question regarding the inline asm syntax for tuple types; I raised an issue about this: riscv-non-isa/riscv-c-api-doc#43

We're in the midst of cache-tile tuning, and optimizing the packing microkernels. We'll fix these issues as part of a follow-up commit. (FYI: @myeh01)

@leekillough
Collaborator

Ah yes, the RVV intrinsics API is still not frozen, we should be prepared for churn.

I am willing to do it for this PR, since I have been keeping it up to date locally.

@Aaron-Hutchinson
Contributor Author

Our team would like to get this PR merged soon. We have some updates coming in shortly with minor changes, such as resolving the merge conflicts and updating the RISC-V intrinsics.

What is the best way forward regarding the CI issue? From what I can tell from the comments above this is still unresolved.

@angsch
Collaborator

angsch commented Oct 12, 2023

What is the best way forward regarding the CI issue? From what I can tell from the comments above this is still unresolved.

When you have updated the PR, I am happy to test locally whether you can reuse the binaries that are used in the current CI pipeline. I am optimistic that the CI suggestions from above still work.

@Aaron-Hutchinson
Contributor Author

All of the developmental changes we planned to make are now merged into add_sifive_x280, and the RISC-V intrinsic updates and merge conflicts have been addressed. I believe our team is happy with the state of the branch.

@angsch If you're able and willing to run the CI tests locally, I think the branch should be in a stable place to do so now. Thank you!

@angsch
Collaborator

angsch commented Oct 19, 2023

The following should work. I think it makes sense to use the same compiler version for all RISC-V targets, so the compiler version is bumped below for the already existing targets.

diff --git a/.travis.yml b/.travis.yml
index 848cb184..bdfafb6b 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -86,6 +86,11 @@ matrix:
     env: OOT=0 TEST=FAST SDE=0 THR="none" BLD="--disable-shared" CONF="rv32iv" \
       CC=riscv32-unknown-linux-gnu-gcc \
       LDFLAGS=-static
+  - os: linux
+    compiler: clang
+    env: OOT=0 TEST=FAST SDE=0 THR="none" BLD="--disable-shared" CONF="sifive_x280" \
+      CC=clang \
+      LDFLAGS=-static
 install:
 - if [ "$CC" = "gcc"  ] && [ "$TRAVIS_OS_NAME" = "linux" ]; then export CC="gcc-9"; fi
 - if [ -n "$PACKAGES" ] && [ "$TRAVIS_OS_NAME" = "linux" ]; then sudo apt-get install -y $PACKAGES; fi
@@ -106,6 +111,12 @@ script:
     export CXX=$DIST_PATH/../toolchain/riscv/bin/riscv32-unknown-linux-gnu-g++;
     export TESTSUITE_WRAPPER="$DIST_PATH/../toolchain/qemu-riscv32 -cpu rv32,vext_spec=v1.0,v=true,vlen=128 -B 0x100000";
   fi
+- if [ "$CONF" = "sifive_x280" ]; then
+    $DIST_PATH/travis/do_riscv.sh "$CONF";
+    export CC=$DIST_PATH/../toolchain/riscv/bin/clang;
+    export CXX=$DIST_PATH/../toolchain/riscv/bin/clang++;
+    export TESTSUITE_WRAPPER="$DIST_PATH/../toolchain/qemu-riscv64 -cpu rv64,vext_spec=v1.0,v=true,vlen=512 -B 0x100000";
+  fi
 - $DIST_PATH/configure -p `pwd`/../install -t $THR $BLD CC=$CC $CONF
 - pwd
 - ls -l
diff --git a/travis/do_riscv.sh b/travis/do_riscv.sh
index a51d3306..9a114b0e 100755
--- a/travis/do_riscv.sh
+++ b/travis/do_riscv.sh
@@ -3,18 +3,21 @@
 set -e
 set -x
 
-TAG=2023.02.25
+TAG=2023.10.18
 
 # The prebuilt toolchains only support hardfloat, so we only
 # test these for now.
 case $1 in
 	"rv32iv")
-	TARBALL=riscv32-glibc-ubuntu-20.04-nightly-${TAG}-nightly.tar.gz
+	TARBALL=riscv32-glibc-ubuntu-20.04-gcc-nightly-${TAG}-nightly.tar.gz
 	;;
 	"rv64iv")
-	TARBALL=riscv64-glibc-ubuntu-20.04-nightly-${TAG}-nightly.tar.gz
+	TARBALL=riscv64-glibc-ubuntu-20.04-gcc-nightly-${TAG}-nightly.tar.gz
 	;;
+	"sifive_x280")
+	TARBALL=riscv64-glibc-ubuntu-20.04-llvm-nightly-${TAG}-nightly.tar.gz
 	*)
+	;;
 	exit 1
 	;;
 esac

I zipped the patch due to GitHub's constraints on what can be attached.
0001-Add-sifive_x280-to-CI.zip

@Aaron-Hutchinson
Contributor Author

Thanks @angsch. I've opened a PR here to apply the CI patch and update the make_defs.mk.

@Aaron-Hutchinson
Contributor Author

@angsch Looks like CI has failed after applying the patch due to not being able to find the compiler:

configure: user specified a C compiler via CC (./../toolchain/riscv/bin/riscv64-unknown-linux-gnu-gcc).
configure: *** Could not find the C compiler specified via CC ('./../toolchain/riscv/bin/riscv64-unknown-linux-gnu-gcc').
configure: *** A working C compiler is required. Please set CC
configure: *** to a C compiler that exists (or unset CC).
The command "$DIST_PATH/configure -p `pwd`/../install -t $THR $BLD CC=$CC $CONF" exited with 1.

Any idea what went wrong?

;;
"sifive_x280")
TARBALL=riscv64-glibc-ubuntu-20.04-llvm-nightly-${TAG}-nightly.tar.gz


We already have a QEMU in this tarball file. Is it necessary to get another one using the following commands?

# Once CI upgrades to jammy, the next three lines can be removed.
# The qemu version installed via packages (qemu-user qemu-user-binfmt)
# is sufficient.
TARBALL_QEMU=qemu-riscv-2023.02.25-ubuntu-20.04.tar.gz
wget https://github.com/flame/ci-utils/raw/master/riscv/${TARBALL_QEMU}
tar -xf $TARBALL_QEMU


We just need to update TARBALL to riscv64-glibc-ubuntu-{JAMMY_VER}-gcc-nightly-${TAG}-nightly.tar.gz if the CI is upgraded.

Collaborator

Good point; I didn't notice that both the LLVM and the GNU toolchains now include qemu.

@alexsifivetw

@angsch Looks like CI has failed after applying the patch due to not being able to find the compiler:

configure: user specified a C compiler via CC (./../toolchain/riscv/bin/riscv64-unknown-linux-gnu-gcc).
configure: *** Could not find the C compiler specified via CC ('./../toolchain/riscv/bin/riscv64-unknown-linux-gnu-gcc').
configure: *** A working C compiler is required. Please set CC
configure: *** to a C compiler that exists (or unset CC).
The command "$DIST_PATH/configure -p `pwd`/../install -t $THR $BLD CC=$CC $CONF" exited with 1.

Any idea what went wrong?

Does a soft link work?

ln -s -f /your/path/to/clang /usr/bin/clang
CC=clang

Could you try using the CC environment variable with an absolute path?

@angsch
Collaborator

angsch commented Nov 1, 2023

I think a syntax error earlier in the patch introduced the problem. My mistake, sorry.
Can we try do_riscv.sh with

+	"sifive_x280")
+	TARBALL=riscv64-glibc-ubuntu-20.04-llvm-nightly-${TAG}-nightly.tar.gz
+	;;
 	 *)
 	exit 1
 	;;
 esac

(In the original patch, the ;; and the *) before exit were flipped.)
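
Put together, the corrected portion of do_riscv.sh would read as follows (a sketch assembled from the diff above):

case $1 in
	"rv32iv")
	TARBALL=riscv32-glibc-ubuntu-20.04-gcc-nightly-${TAG}-nightly.tar.gz
	;;
	"rv64iv")
	TARBALL=riscv64-glibc-ubuntu-20.04-gcc-nightly-${TAG}-nightly.tar.gz
	;;
	"sifive_x280")
	TARBALL=riscv64-glibc-ubuntu-20.04-llvm-nightly-${TAG}-nightly.tar.gz
	;;
	*)
	exit 1
	;;
esac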

In the meantime, I will try the qemu builds shipped with the toolchain.

@Aaron-Hutchinson
Contributor Author

Thanks for the correction, @angsch. It looks like the PR has passed CI with that fix.

@fgvanzee
Member

fgvanzee commented Nov 2, 2023

Thank you everyone for your contributions and engagement on this PR!

Does anyone else have any comments before I merge? 🚀

@fgvanzee merged commit 05388dd into flame:master on Nov 3, 2023
1 of 2 checks passed