LWJGL3 ARM? #206
Comments
Not yet. Support for ARM is scheduled for the 3.0.2 release. We're currently working on #100 for the 3.0.1 release. |
Thanks, I think we can add a schedule to the website. |
It's hard to tell without knowing more specifics about it. But yes, projects like Kotlin Native, Scala Native and JEP 295 are very interesting. |
I tried 3.1.3 on an Nvidia Jetson TX2, which has
The project is here: https://github.com/idsc-frazzoli/owly3d. Please let me know if I am making an obvious mistake. Is |
Support for Android & ARM is a (very slow) work-in-progress. It currently lives in the |
@Spasi thank you for the reply! For now, I only want to read out a joystick, so I will probably look for a simple, quick alternative. |
Where can I check when lwjgl will be supported on ARM? I'm not very good at all this, but from what I understood the code is currently compiled for different OSes; apart from compiling it for an ARM device, are there other things to do? (Would it be possible to do this work myself? I read that several years ago some people did so, but they did not provide a download link for it.) |
No.
You can find build instructions in android-test. Assuming you have Android Studio installed and you're experienced with building Android programs, it should be straightforward. Note that this produces binaries for the core library and native libraries whose code is included in the LWJGL repository (stb, nuklear, nanovg, etc). Libraries built separately (jemalloc, OpenAL Soft, etc.) are currently not supported. This is the biggest piece of the puzzle missing atm. Also note that this produces a build that is Android-specific. It won't work on a generic ARM device. But most of the work done for Android will be useful for generic ARM builds.
Several months at best. Reasons:
|
Thank you for the quick and detailed answer! I'm not working on Android actually, I'm working on a Raspberry Pi. I don't know about Android, but I think lwjgl is still very interesting for Raspberry Pis. |
Yes. Any device that can run a Linux ARM JDK (e.g. Oracle JDK, Zulu Embedded).
Indeed. And with a Hotspot JVM it should run great as is. |
After spending some time trying to figure out what would be the best solution for me, I found this tutorial: http://rogerallen.github.io/jetson/2014/07/31/minecraft-on-jetson-tk1/ The second point explains how to build lwjgl for ARM (Raspberry), but it is 3 years old. I'll try this as soon as I get my Raspberry back, but until then could you tell me if it seems to be a proper way to do it? (I guess things have changed in 3 years; maybe it's not a good idea to do it anymore, if it ever actually worked.) |
That article is for lwjgl2, so not applicable to lwjgl3. Building LWJGL for ARM locally should be simple. The existing scripts should work out-of-the-box, or may require minimal changes. If you try it and encounter problems, please open a new issue and they will be addressed. In order to have official support though, the build needs to be practical. For LWJGL, this means the ARM builds must run on Travis CI. The script that builds the Linux x64 binaries is here. We need a script that installs a cross-compiling toolchain for ARM and then builds LWJGL using it. Then we need the same for (some of) LWJGL's dependencies. If anyone wants to try that, the process is:
You're done when |
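As a rough illustration of the toolchain part, a Travis-style install step could look like the sketch below. The package names are Ubuntu's stock cross-compiler packages; this is not the actual LWJGL-CI script, just an assumed starting point.
#!/bin/bash
# Install a cross toolchain targeting 32-bit hard-float ARM on the Ubuntu image Travis provides.
sudo apt-get update -qq
sudo apt-get install -y gcc-arm-linux-gnueabihf g++-arm-linux-gnueabihf

# Sanity check: the compiler should report an ARM target triplet.
arm-linux-gnueabihf-gcc -dumpmachine   # expected output: arm-linux-gnueabihf

# Point the build at the cross tools instead of the host gcc/g++.
export CC=arm-linux-gnueabihf-gcc
export CXX=arm-linux-gnueabihf-g++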
Oh sorry, I forgot it was for LWJGL3. So you're saying if I follow the steps described on this link: https://www.lwjgl.org/guide#build-instructions |
I'm saying it's a good starting point. The master branch doesn't know anything about ARM atm, so it'll think it's doing an x86 or x64 build. This will likely be problematic, but it shouldn't take many changes to make it work. Better build instructions:
The last one will probably fail with an ARM toolchain. You'll have to modify |
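For orientation, a local build roughly follows the steps below. The target names are taken from the LWJGL build guide and may differ between versions; treat this as a sketch, and note that the native step is the one that needs ARM changes.
git clone https://github.com/LWJGL/lwjgl3.git
cd lwjgl3
ant compile-templates   # generate the binding sources
ant compile             # compile the Java side
ant compile-native      # build the native libraries - the step that needs the ARM toolchain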
I finally found some time to try this! Apparently in order to increase this space and avoid the issue I have to add |
Hmm, yes, doing the Kotlin compilation on a Raspberry is a waste of time. It's very slow, even on a high-end workstation, and there's no support for incremental compilation via the CLI. It also needs around 1 GB of memory; not sure if the Raspberry has enough.
It should. Also copy any
Then copy the I also recommend disabling most bindings in |
Here I am again. I guess it's better for me to wait until an arm version is released, even if it takes a long time. |
I got this to complete the build on my Pi3, albeit with one error. Increase the swapfile size:
Change CONF_SWAPSIZE=100 to CONF_SWAPSIZE=1024. Then set an Ant environment variable to allow Java more memory:
Use ‘free -h’ in a separate terminal window to see how much swap space gets used. |
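Concretely, on Raspbian those steps look something like the following. The swap config path and ANT_OPTS are the standard Raspbian/Ant names; the 1 GB heap is just an assumed value.
# Raspbian keeps its swap configuration in /etc/dphys-swapfile.
sudo sed -i 's/^CONF_SWAPSIZE=100$/CONF_SWAPSIZE=1024/' /etc/dphys-swapfile
sudo /etc/init.d/dphys-swapfile restart   # re-create the swap file with the new size

# Give the Ant-spawned JVM more heap.
export ANT_OPTS=-Xmx1024m

# In a second terminal, watch how much swap the build actually uses.
free -h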
The result of building on the Pi:
|
The Raspberry Pi 3 is a 64-bit system and you probably do not have a cross-compile toolchain installed, but you also don't need one. Just leave the "-m32" out. It should then produce 64-bit binaries. |
Although the Pi is 64-bit, I'm using Raspbian, which is only 32-bit. I've removed the unrecognised flags but still get errors reported, though with no detail. The -mfpmath flags didn't work. Any way to get more detail?
|
build.xml:182, it is checking for |
I got the master branch to compile on
diff --git a/config/linux/build.xml b/config/linux/build.xml
index 41f340d..8d8e6ce 100644
--- a/config/linux/build.xml
+++ b/config/linux/build.xml
@@ -30,7 +30,7 @@
<apply dir="@{dest}" executable="gcc" dest="@{dest}" skipemptyfilesets="true" failonerror="true" parallel="true" taskname="Compiler">
<arg line="-c -std=c11" unless:set="cpp"/>
<arg line="-c -std=c++11" if:set="cpp"/>
- <arg line="-m64" if:true="${build.arch.x64}"/>
+ <!-- <arg line="-m64" if:true="${build.arch.x64}"/> -->
<arg line="-m32 -mfpmath=sse -msse -msse2" unless:true="${build.arch.x64}"/>
<arg line="-O3 @{lto} -fPIC @{flags} -pthread -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0 -D_GNU_SOURCE -DNDEBUG -DLWJGL_LINUX -DLWJGL_${build.arch}"/>
@@ -51,7 +51,7 @@
<attribute name="name"/>
<attribute name="dest"/>
<attribute name="lang" default="c"/>
- <attribute name="flags" default="-Werror -Wfatal-errors -Wall -Wextra -pedantic -Wno-extended-offsetof"/>
+ <attribute name="flags" default="-Wall -Wextra -pedantic -Wno-extended-offsetof"/>
<element name="beforeCompile" optional="true"/>
<element name="source"/>
<element name="beforeLink" optional="true"/>
@@ -77,7 +77,7 @@
<apply executable="gcc" failonerror="true" parallel="true" taskname="Linker" unless:set="lib-uptodate">
<srcfile/>
<arg value="-shared"/>
- <arg value="-m64" if:true="${build.arch.x64}"/>
+ <!-- <arg value="-m64" if:true="${build.arch.x64}"/> -->
<arg value="-m32" unless:true="${build.arch.x64}"/>
<arg line="-z noexecstack"/>
@@ -256,13 +256,13 @@
</build>
<!-- SSE -->
- <build name="lwjgl_sse" dest="${bin.native}/sse" if:true="${binding.sse}">
+ <!-- <build name="lwjgl_sse" dest="${bin.native}/sse" if:true="${binding.sse}">
<source>
<arg value="-msse3"/>
<arg value="-I${src.native.rel}/util"/>
<fileset dir="." includes="${src.generated.native}/util/simd/*.c"/>
</source>
- </build>
+ </build> -->
<!-- stb -->
<build name="lwjgl_stb" dest="${bin.native}/stb" if:true="${binding.stb}">
diff --git a/modules/core/src/main/c/system/linux/wrap_memcpy.c b/modules/core/src/main/c/system/linux/wrap_memcpy.c
index 239c958..a71effb 100644
--- a/modules/core/src/main/c/system/linux/wrap_memcpy.c
+++ b/modules/core/src/main/c/system/linux/wrap_memcpy.c
@@ -2,7 +2,7 @@
void *old_memcpy(void *, const void *, size_t);
-__asm__(".symver old_memcpy,memcpy@GLIBC_2.2.5");
+//__asm__(".symver old_memcpy,memcpy@GLIBC_2.2.5");
void *__wrap_memcpy(void *dest, const void *src, size_t n) {
return old_memcpy(dest, src, n); |
Has there been any progress on running lwjgl 3 on a Raspberry Pi 3 without putting too much effort into configuring etc. (although I would do it anyway with effort)? If so, can someone give me a tutorial on how to make it work? If not, could someone point me to an alternative I can use as an OpenGL wrapper until lwjgl supports ARM? |
Using Pi3 Raspbian Stretch (32-bit). After commenting out the following in the build.xml: line 36: line 88: line 266:
I get this far :
|
How far along is the progress with ARM support? I'd very much like to use LWJGL in a project with a Raspberry Pi 3B. I'm only using a few of the modules; Assimp, OpenGL & OpenGL ES are what is necessary for me. |
I updated a few of the dependent libraries to use a cross-compiler within Travis CI, but ran into an issue where the cross-compiler doesn't recognize an assembler command (memcpy is wrapped and points to a specific version of glibc). I tried a simple file on the Pi itself and it also doesn't recognize this compiler command. For a 32-bit build this might not be necessary, but that is where I left it. You can look at the forks in my repo (look for the repos forked from LWJGL-CI and then look at the arm branches). |
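To see which versioned memcpy symbol the target glibc actually exports (and therefore what the .symver directive in wrap_memcpy.c would need to reference), something like the following works; the libc path below assumes 32-bit Raspbian.
# List the dynamic symbol table entries for memcpy, including their GLIBC version tags.
objdump -T /lib/arm-linux-gnueabihf/libc.so.6 | grep ' memcpy'
# x86-64 glibc exports memcpy@GLIBC_2.2.5, which is what the source hard-codes;
# ARM glibc uses different baselines (e.g. GLIBC_2.4 on armhf, GLIBC_2.17 on aarch64),
# so the x86-64 version string fails to resolve when cross-compiling.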
Trying to build this on the Pi again (Pi3B+, Raspbian 32-bit OS). Why would I get missing header files when they do exist?
File exists here:
|
I'm having some trouble compiling because of header files not being found. Are there some dependencies to install beforehand?
[Compiler] lwjgl3/modules/lwjgl/tootle/src/main/c/RayTracer/JRT/JRTH2KDTreeBuilder.cpp:6:23: fatal error: TootlePCH.h: No such file or directory
But...
find . -name TootlePCH.h
./modules/lwjgl/tootle/src/main/c/TootlePCH.h
Any idea? |
Is anyone able to explain this missing header files issue? Are we missing a library path or some dependency? |
Seem to have more success setting relative="false" in every occurrence in config/linux/build.xml |
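If you want to apply that workaround in bulk rather than editing by hand, a one-liner along these lines does it (this assumes the occurrences currently read relative="true"; adjust if not).
# Flip every relative="true" to relative="false" in the Linux build script.
sed -i 's/relative="true"/relative="false"/g' config/linux/build.xml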
These patches seem to solve the missing header files issue:
|
http://fxzjshm.github.io/blog/Build-LWJGL-On-Raspberry-Pi/ These instructions are good enough to build them for the RPi 3. |
That's for lwjgl2, not lwjgl3. |
Yeah... I need to build the armhf lwjgl-3.1.6.jar for the RPi 3. How do I get that one? I downloaded the source of the 3.1.6 that I need and I am failing to build it. Please help me with this nightmare. |
I went further, but will have to try it on another machine:
|
If you're building on a Pi, you can increase the swap file size: #206 (comment) |
Zram? |
Thanks. I tried on another machine and got:
|
So, has anyone successfully built lwjgl3 for the Pi 3 or other armhf and arm64 devices? If so, please give us the precompiled libs or sources. Sorry if I am being repetitive. |
Hello, I managed to get a cross-compile to aarch64 working on a Linux x64 host. I have no idea how to integrate this nicely with the build system so my changes play well with others, but I did hack up the ant build files to make the cross-compilation work in my case. If that's of any use to anyone trying to get ARM binaries, I'll post my diffs here. Or maybe this could serve as an inspiration for how to integrate an aarch64 target into the build system properly. Who knows, but I hope it helps someone.
diff --git a/config/linux/build.xml b/config/linux/build.xml
index 77eeac633..a8968eb76 100644
--- a/config/linux/build.xml
+++ b/config/linux/build.xml
@@ -13,6 +13,15 @@
<equals arg1="${build.arch}" arg2="x64"/>
</condition>
+ <!-- add cross-compilation toolchain -->
+ <!-- TODO: how to configure? -->
+ <property name="toolchain" value="/path/to/your/cross/compiling/toolchain"/>
+ <property name="toolchain.prefix" value="${toolchain}/bin/aarch64-linux-"/>
+ <property name="toolchain.sysroot" value="${toolchain}/aarch64-cortexa53-linux-gnu/sysroot"/>
+ <property name="toolchain.include" value="${toolchain.sysroot}/usr/include"/>
+ <property name="toolchain.lib" value="${toolchain.sysroot}/usr/lib"/>
+ <property name="system.include" value="/usr/include"/>
+
<condition property="gcc.suffix" value="-${gcc.version}" else="">
<isset property="gcc.version"/>
</condition>
@@ -22,8 +31,8 @@
<macrodef name="compile">
<attribute name="dest" default="${dest}"/>
<attribute name="lang" default="c"/>
- <attribute name="gcc.exec" default="gcc${gcc.suffix}"/>
- <attribute name="gpp.exec" default="g++${gcc.suffix}"/>
+ <attribute name="gcc.exec" default="${toolchain.prefix}gcc${gcc.suffix}"/>
+ <attribute name="gpp.exec" default="${toolchain.prefix}g++${gcc.suffix}"/>
<attribute name="lto" default="-flto"/>
<attribute name="flags" default=""/>
<attribute name="simple" default="false"/>
@@ -39,8 +48,10 @@
<apply dir="@{dest}" executable="${gcc}" dest="@{dest}" skipemptyfilesets="true" failonerror="true" parallel="true" taskname="Compiler">
<arg line="-c -std=c11" unless:set="cpp"/>
<arg line="-c -std=c++11" if:set="cpp"/>
+ <!-- TODO: -m64 -m32 not accepted by aarch64-linux-gcc, how to configure?
<arg line="-m64" if:true="${build.arch.x64}"/>
<arg line="-m32 -mfpmath=sse -msse -msse2" unless:true="${build.arch.x64}"/>
+ -->
<arg line="-O3 @{lto} -fPIC @{flags} -pthread -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0 -D_GNU_SOURCE -DNDEBUG -DLWJGL_LINUX -DLWJGL_${build.arch}"/>
<arg value="-I${jni.headers}"/>
@@ -51,6 +62,11 @@
<arg value="-I${src.main.rel}" if:true="@{simple}"/>
+ <!-- include toolchain headers BEFORE system headers -->
+ <!-- TODO: how to configure? -->
+ <arg line="-isystem ${toolchain.include}"/>
+ <arg line="-isystem ${system.include}"/>
+
<source/>
<fileset dir="." includes="${src.generated}/*" if:true="@{simple}"/>
@@ -63,8 +79,8 @@
<attribute name="module"/>
<attribute name="linker" default="gcc"/>
<attribute name="lang" default="c"/>
- <attribute name="gcc.exec" default="gcc${gcc.suffix}"/>
- <attribute name="gpp.exec" default="g++${gcc.suffix}"/>
+ <attribute name="gcc.exec" default="${toolchain.prefix}gcc${gcc.suffix}"/>
+ <attribute name="gpp.exec" default="${toolchain.prefix}g++${gcc.suffix}"/>
<attribute name="flags" default="-Werror -Wfatal-errors"/>
<attribute name="simple" default="false"/>
<element name="beforeCompile" optional="true"/>
@@ -122,8 +138,10 @@
<apply executable="${gcc}" failonerror="true" parallel="true" taskname="Linker" unless:set="lib-uptodate">
<srcfile/>
<arg value="-shared"/>
+ <!-- TODO: -m64 -m32 not accepted by aarch64-linux-gcc, how to configure?
<arg value="-m64" if:true="${build.arch.x64}"/>
<arg value="-m32" unless:true="${build.arch.x64}"/>
+ -->
<arg line="-z noexecstack"/>
<arg line="-O3 -flto -fPIC -pthread -o ${lib}/lib${name}${LIB_POSTFIX}.so"/>
@@ -136,7 +154,7 @@
<link/>
</apply>
- <apply executable="strip" failonerror="true" taskname="Symbol strip" unless:set="lib-uptodate">
+ <apply executable="${toolchain.prefix}strip" failonerror="true" taskname="Symbol strip" unless:set="lib-uptodate">
<filelist dir="${lib}" files="lib${name}${LIB_POSTFIX}.so"/>
</apply>
<delete file="${lib}/touch_${platform}.txt" quiet="true" unless:set="lib-uptodate"/>
@@ -145,8 +163,8 @@
<macrodef name="build_simple">
<attribute name="module"/>
- <attribute name="gcc.exec" default="gcc${gcc.suffix}"/>
- <attribute name="gpp.exec" default="g++${gcc.suffix}"/>
+ <attribute name="gcc.exec" default="${toolchain.prefix}gcc${gcc.suffix}"/>
+ <attribute name="gpp.exec" default="${toolchain.prefix}g++${gcc.suffix}"/>
<sequential>
<build module="@{module}" gcc.exec="@{gcc.exec}" gpp.exec="@{gpp.exec}" simple="true" if:true="${binding.@{module}}"/>
</sequential> diff --git a/modules/lwjgl/core/src/main/c/linux/wrap_memcpy.c b/modules/lwjgl/core/src/main/c/linux/wrap_memcpy.c
index 239c95817..e1abc29db 100644
--- a/modules/lwjgl/core/src/main/c/linux/wrap_memcpy.c
+++ b/modules/lwjgl/core/src/main/c/linux/wrap_memcpy.c
@@ -2,7 +2,9 @@
void *old_memcpy(void *, const void *, size_t);
-__asm__(".symver old_memcpy,memcpy@GLIBC_2.2.5");
+// aarch64 toolchain seems to have a different symbol version
+//__asm__(".symver old_memcpy,memcpy@GLIBC_2.2.5");
+__asm__(".symver old_memcpy,memcpy@GLIBC_2.17");
void *__wrap_memcpy(void *dest, const void *src, size_t n) {
return old_memcpy(dest, src, n);
I only built and tested the core
Oh, and I had to build dyncall for aarch64 too. Here's a helper script I used to do that:
#!/bin/sh
AARCH64=/path/to/your/cross/compiling/toolchain
AARCH64_SYSROOT=${AARCH64}/aarch64-cortexa53-linux-gnu/sysroot
AARCH64_PREFIX=/bin/aarch64-linux-
\
AS="${AARCH64}${AARCH64_PREFIX}gcc"\
CC="${AARCH64}${AARCH64_PREFIX}gcc"\
CXX="${AARCH64}${AARCH64_PREFIX}g++"\
LD="${AARCH64}${AARCH64_PREFIX}ld"\
ASFLAGS="-isysroot ${AARCH64_SYSROOT}"\
CFLAGS="-isysroot ${AARCH64_SYSROOT}"\
CXXFLAGS="-isysroot ${AARCH64_SYSROOT}"\
LDFLAGS="-Wl,-syslibroot ${AARCH64_SYSROOT}"\
make all
Run this script after cloning the dyncall repo and configuring the build. I ran this as an out-of-source build in a
Oh, and here's the aarch64 toolchain I used. It's supplied by my dev board mfr and there's not much in the way of documentation, but it seems to work just fine. https://github.com/friendlyarm/prebuilts
I used the
Hope that helps! |
You probably want to use debian's built in toolchains and multiarch support.
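For reference, on Debian/Ubuntu that roughly means something like the sketch below. The package names are the stock Debian ones; on Ubuntu the armhf/arm64 binary packages live on ports.ubuntu.com, so extra apt sources may be needed, and the dev package shown is only an illustrative example.
# Ready-made cross compilers targeting 32-bit (armhf) and 64-bit (aarch64) ARM.
sudo apt-get install crossbuild-essential-armhf crossbuild-essential-arm64

# Multiarch lets you install ARM versions of library dependencies instead of building them.
sudo dpkg --add-architecture armhf
sudo apt-get update
sudo apt-get install libgl1-mesa-dev:armhf   # example of a dev package pulled in as armhf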
I've made some extremely quick and dirty build.xml modifications which will build some natives in offline mode: a941cf3. It still needs more work in installing/managing dependencies and fixing a few module-specific flags (see meow), but it's fairly simple. Just
For other reference/limitations see libgdx/libgdx#5556 |
Initial PRs for building most LWJGL-CI modules created. Uses Ubuntu's armhf/aarch64 toolchains to produce arm32/arm64 builds. There's a lot of copy/paste, so there may still be silly mistakes.
|
OpenVR for ARM is not necessary or useful. There is no ARM OpenVR runtime and there probably never will be (that would then be handled with OpenXR, if it ever happens). |
@Spasi good!!! Many thanks to @PokeMMO. Could you give me some info on how to compile master offline (does it need to be offline now?) with ant (or any other way)? OK, so will it only compile when cross-compiling? Because I replaced the Linux build.xml with the @PokeMMO one, set arm32 to true and the rest to false, and it starts but crashes after some time.
Sorry for the ignorance; maybe you could give me some tip on what happens here. |
The build.xml changes are still incomplete. The |
Closed with 9bd6e60. Starting with 3.2.3 build 3, LWJGL now supports ARMv8/AArch64 and ARMv7/armhf builds! The build customizer on the website has also been updated; you can download the ARM binaries directly or via Maven/Gradle. Please open new issues to report any problems you encounter. Progress on Android builds will also be tracked separately. |
I've downloaded the 3.2.3 zip file containing the jars; should there be a liblwjgl.so file? |
Shared libraries are inside jar files. E.g. the shared library for LWJGL core is in |
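One quick way to confirm that is to list the contents of the natives jar; the jar name below assumes the 3.2.3 Linux arm64 natives artifact naming, so adjust it for your platform.
# Show the shared libraries packed inside the natives jar.
jar tf lwjgl-3.2.3-natives-linux-arm64.jar | grep '\.so'
# Or, without a JDK on the PATH:
unzip -l lwjgl-3.2.3-natives-linux-arm64.jar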
Does LWJGL3 still support the ARM platform?
I can't build it successfully on ARM.