
LWJGL3 ARM? #206

Open · SuperWangCC opened this issue Jul 14, 2016 · 77 comments

@SuperWangCC commented Jul 14, 2016

Does LWJGL3 still support the ARM platform?
I can't build it successfully on ARM.

@Spasi (Member) commented Jul 14, 2016

Not yet. Support for ARM is scheduled for the 3.0.2 release. We're currently working on #100 for the 3.0.1 release.

@SuperWangCC (Author) commented Jul 15, 2016

Thanks. I think you could add a schedule to the website.

@RUSshy commented Mar 17, 2017

Maybe with the release of Kotlin Native it'll be easier? Do you plan to support Kotlin Native, by the way?

@Spasi (Member) commented Mar 18, 2017

It's hard to tell without knowing more specifics about it. But yes, projects like Kotlin Native, Scala Native and JEP 295 are very interesting.

@RUSshy commented Mar 22, 2017

In Slack they showed a video of Kotlin Native running on iOS and on a Raspberry Pi (a Tetris game using SDL2). Exciting news!

https://files.slack.com/files-tmb/T09229ZC6-F4LKMA03U-237bd9b959/attachment-1_360.gif

https://files.slack.com/files-tmb/T09229ZC6-F4M751GP3-cb8b9c18e8/videotogif_2017.03.22_12.27.53_360.gif

Slack -> #kotlin-native

@RUSshy commented Apr 4, 2017

@datahaki commented Aug 24, 2017

I tried 3.1.3 on an Nvidia Jetson TX2, which is aarch64, but the right binary doesn't seem to be available:

java.lang.UnsatisfiedLinkError: /tmp/lwjglidsc/3.1.3-SNAPSHOT/liblwjgl.so: /tmp/lwjglidsc/3.1.3-SNAPSHOT/liblwjgl.so: cannot open shared object file: No such file or directory (Possible cause: architecture word width mismatch)
	at java.lang.ClassLoader$NativeLibrary.load(Native Method)
	at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)

The project is here

https://github.com/idsc-frazzoli/owly3d

Please let me know if I am making an obvious mistake. Is ARM supported by LWJGL?

@Spasi (Member) commented Aug 24, 2017

Support for Android & ARM is a (very slow) work-in-progress. It currently lives in the android branch. See the android-test repository for build instructions and demos.

@datahaki commented Aug 24, 2017

@Spasi thank you for the reply!

For now I only want to read out a joystick, so I'll probably look for a simple, quick alternative.

@Sliveer commented Sep 2, 2017

Where can I check when lwjgl will be supported on arm?
When you say "(very slow) work-in-progress", do you have any idea how slow it will be? (several months, several years, unlikely to ever exist?)
Is there a version of lwjgl2 that works on arm?

I'm not very good at all this, but from what I understood the code is currently compiled for different OSes. Apart from compiling it for a device using ARM, are there other things to do? (would it be possible to do this work myself? I read that several years ago some people did so, but they did not provide a download link for it)

@Spasi (Member) commented Sep 2, 2017

Is there a version of lwjgl2 that works on arm?

No.

are there other things to do? (would it be possible to do this work myself?)

You can find build instructions in android-test. Assuming you have Android Studio installed and you're experienced with building Android programs, it should be straightforward.

Note that this produces binaries for the core library and native libraries whose code is included in the LWJGL repository (stb, nuklear, nanovg, etc). Libraries built separately (jemalloc, OpenAL Soft, etc.) are currently not supported. This is the biggest piece of the puzzle missing atm.

Also note that this produces a build that is Android-specific. It won't work on a generic ARM device. But most of the work done for Android will be useful for generic ARM builds.

do you have any idea how slow it will be? (several months, several years, unlikely to ever exist?)

Several months at best. Reasons:

  • Currently there isn't anyone else contributing to this effort. I've repeatedly asked for help with porting the LWJGL-CI projects to Android/ARM.
  • I've been very busy the past few months and this will continue for quite a while.
  • This is necessarily a side-project for me; LWJGL needs to progress regardless of what happens with the ARM builds.
  • I've been demoralized by how bad Android is at optimizing Java code. The NIO implementation is a joke and we cannot even depend on the most basic of optimizations. Getting decent performance may require a significant rewrite of LWJGL internals and even API changes. I really don't want to do that.
  • Recent advances like Scala Native and Kotlin Native may make this entire effort obsolete; why bother with JNI anymore? On Android for example, you can easily write most of your application in Java/Kotlin and offload performance-sensitive and native-interop code (e.g. OpenGL/Vulkan) to Kotlin Native. Great development experience + zero overhead.

@Sliveer commented Sep 2, 2017

Thank you for the quick and detailed answer!

I'm not actually working on Android; I'm working on a Raspberry Pi.
So when you say "It won't work on a generic ARM device.", I guess raspberries are included in "generic ARM device"?

I don't know about Android, but I think lwjgl is still very interesting for raspberries.

@Spasi (Member) commented Sep 2, 2017

I guess raspberries are included in "generic ARM device"?

Yes. Any device that can run a Linux ARM JDK (e.g. Oracle JDK, Zulu Embedded).

I think lwjgl is still very interesting for raspberries.

Indeed. And with a Hotspot JVM it should run great as is.

@Sliveer commented Sep 7, 2017

After spending some time trying to figure out what would be the best solution for me, I found this tutorial: http://rogerallen.github.io/jetson/2014/07/31/minecraft-on-jetson-tk1/

The second point explains how to build lwjgl for ARM (Raspberry), but it is 3 years old. I'll try this as soon as I get my Raspberry back, but until then could you tell me whether it seems to be a proper way to do it? (I guess things have changed in 3 years; maybe it's not a good idea to do it anymore, if it ever actually worked.)

@Spasi (Member) commented Sep 7, 2017

That article is for lwjgl2, so not applicable to lwjgl3.

Building LWJGL for ARM locally should be simple. The existing scripts should work out-of-the-box, or may require minimal changes. If you try it and encounter problems, please open a new issue and they will be addressed.

In order to have official support though, the build needs to be practical. For LWJGL, this means the ARM builds must run on Travis CI. The script that builds the Linux x64 binaries is here. We need a script that installs a cross-compiling toolchain for ARM and then builds LWJGL using it. Then we need the same for (some of) LWJGL's dependencies.

If anyone wants to try that, the process is:

  1. Fork lwjgl3.
  2. Register your forked repository with travis-ci.
  3. Push a .travis.yml file with the build script you wrote.
  4. ...repeat 3 until it works.

You're done when ant compile-native succeeds. Ignore ant upload-native (you don't need awscli and the secure variables either).
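
For anyone attempting this, here's a rough sketch of the kind of build step such a script might run, assuming Ubuntu's stock cross-toolchain packages (the Ant wiring to actually invoke arm-linux-gnueabihf-gcc instead of the host gcc is not shown and would still be needed):

# sketch only: install an ARM hard-float cross toolchain on the CI image
sudo apt-get update
sudo apt-get install -y gcc-arm-linux-gnueabihf g++-arm-linux-gnueabihf

# then run the native build with the cross toolchain
ant compile-native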

@Sliveer commented Sep 7, 2017

Oh sorry, I forgot it was for LWJGL3.

So you're saying that if I follow the steps described at this link: https://www.lwjgl.org/guide#build-instructions
on my Raspberry, it should actually compile the natives correctly and I'll have a working LWJGL on my Raspberry? (I just want to make sure I'm following the right instructions; I'm really not familiar with all this yet.)

@Spasi (Member) commented Sep 7, 2017

it should actually compile the natives correctly and I'll have a working LWJGL on my Raspberry?

I'm saying it's a good starting point. The master branch doesn't know anything about ARM atm, so it'll think it's doing an x86 or x64 build. This will likely be problematic, but it shouldn't take many changes to make it work. Better build instructions:

  • ant compile-templates (this compiles the Kotlin code, it will take a while)
  • ant compile (generates the Java/C code, compiles the Java code)
  • ant compile-native

The last one will probably fail with an ARM toolchain. You'll have to modify config/build-definitions.xml and config/linux/build.xml to make it work. You can use the android branch as a reference and see what changes were required there. Note that the Android build has its own config/android/build.xml but you won't need to do that for a Raspberry build.

@Sliveer commented Sep 14, 2017

I finally found some time to try this!
The first build takes approximately 2 hours, but does not reach the end because of a java.lang.OutOfMemoryError: Java heap space.

Apparently, in order to increase this space and avoid the issue, I have to add -Xmx<space> to the java command lines. So I added <jvmarg value="-Xmx800m"/> everywhere I could in the build.xml file (probably not the best way to solve the issue, but it could've worked, I think), yet I ended up with the same issue.
As 800MB should be enough for the build to succeed, I guess I did not add the arguments at the necessary places, but I don't have any idea where to put them.
And then I thought "hey, this is not compiling the C code, this doesn't have to be done on the raspberry!". Am I right? If I compile the Kotlin on my PC, then transfer the library to my Raspberry to proceed to the next two commands, is it going to be OK? (Sadly it's hard for me to test this, because the last steps will generate errors, and I need to be sure the errors are not a consequence of a first step that I might've done the wrong way.)

@Spasi (Member) commented Sep 15, 2017

Hmm, yes, doing the Kotlin compilation on a Raspberry is a waste of time. It's very slow, even on a high-end workstation, and there's no support for incremental compilation via the CLI. It also needs around 1GB of memory; I'm not sure the Raspberry has enough.

If I compile the Kotlin on my PC, then transfer the library to my Raspberry to proceed to the next two commands, is it going to be OK?

It should. Also copy any touch.txt files and make sure the last modified timestamps are maintained. You should be able to run the following targets on the PC:

  • ant compile-templates (Kotlin compilation)
  • ant generate (Java/C code generation)
  • ant compile (Java compilation)

Then copy the modules/core/src/generated/ and bin/ folders to the Raspberry.
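
For example, with rsync (the -a flag preserves the last-modified timestamps; the host name and paths below are placeholders):

rsync -a modules/core/src/generated/ pi@raspberrypi:~/lwjgl3/modules/core/src/generated/
rsync -a bin/ pi@raspberrypi:~/lwjgl3/bin/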

I also recommend disabling most bindings in config/build-bindings.xml (set the corresponding properties to false) until you have the core build working. It should significantly speed-up the build process.
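
Since Ant properties set on the command line take precedence over those set in build files, individual bindings can also be switched off per invocation, along these lines (the binding.<module> property names are assumed from how they're referenced in config/linux/build.xml):

ant -Dbinding.nuklear=false -Dbinding.nanovg=false compile-native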

@Sliveer commented Sep 20, 2017

Here I am again.
I tried several things; nothing worked... As I mentioned before, I really don't have any skill in packaging and building projects, so I kind of try things without actually knowing what I'm doing...

I guess it's better for me to wait until an ARM version is released, even if it takes a long time.
By the way, if anyone has an ARM-compiled version of lwjgl3 that they can share (or even just the right config files to build it), don't hesitate to share it!

@mikehooper commented Nov 19, 2017

I got the build to complete on my Pi 3, albeit with one error.

Increase the swapfile size:

sudo nano /etc/dphys-swapfile

Change CONF_SWAPSIZE=100 to CONF_SWAPSIZE=1024
Reboot

Set an Ant environment variable to allow Java more memory:

export ANT_OPTS="-Xmx1g"
ant

Use ‘free -h’ in a separate terminal window to see how much swap space gets used.

@mikehooper commented Nov 19, 2017

The result of building on the Pi:

compile-native-platform:
[Compiler] gcc: error: unrecognized command line option ‘-m32’
[Compiler] gcc: error: unrecognized command line option ‘-mfpmath=sse’
[Compiler] gcc: error: unrecognized command line option ‘-msse’; did you mean ‘-fdse’?
[Compiler] gcc: error: unrecognized command line option ‘-msse2’

BUILD FAILED
/home/pi/lwjgl3/build.xml:388: The following error occurred while executing this line:
/home/pi/lwjgl3/config/linux/build.xml:101: The following error occurred while executing this line:
/home/pi/lwjgl3/config/linux/build.xml:30: apply returned: 1

Total time: 3 seconds

@httpdigest (Member) commented Nov 19, 2017

The Raspberry Pi 3 is a 64-bit system and you probably do not have a cross-compile toolchain installed, but you also don't need one. Just leave the "-m32" out; it should then produce 64-bit binaries.
The Pi also has an ARM CPU, not x86, so it has no SSE. Try -mfpmath=neon or -mfpmath=vfp. Also remove the -msse and -msse2.

@mikehooper commented Nov 19, 2017

Although the Pi is 64-bit, I'm using Raspbian, which is only 32-bit. I've removed the unrecognised flags but still get errors reported, though with no detail. The -mfpmath flags didn't work. Any way to get more detail?

Buildfile: /home/pi/lwjgl3/build.xml

init:

check-dependencies:

bindings:

generate:

-init-compile:

compile:

compile-native:

compile-native-platform:

BUILD FAILED
/home/pi/lwjgl3/build.xml:388: The following error occurred while executing this line:
/home/pi/lwjgl3/config/linux/build.xml:178: The following error occurred while executing this line:
/home/pi/lwjgl3/config/linux/build.xml:182: exec returned: 1

Total time: 3 seconds

@zhiyb commented Nov 19, 2017

At build.xml:182 it is checking for gtk-3.0.
You can try installing the libgtk-3-dev package.
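
On Raspbian/Debian, assuming apt is available:

sudo apt-get install -y libgtk-3-dev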

@l3eta commented Feb 4, 2018

What folder is it in? I'm not seeing any .a's being generated; I've got a bunch of .o's.

Edit: I just checked my echo command from the build, and my stuff is getting placed in bin.

Edit 2: I think I fixed it. I had to place the built .a's into bin/libs/linux/x86 for it to link them; about to replace my .so and see if it works now.

Edit 3: Got it to work and make a window; just gotta fix something that seems to return null now.

@zhiyb commented Feb 4, 2018

I could be using an outdated version, but for my build, they are directly in repo/libs/linux/x64:

$ ls
bin                gradlew.bat
build.gradle       libs
build.xml          LICENSE.md
config             modules
doc                README.md
gradle             settings.gradle
gradle.properties  update-dependencies.xml
gradlew
$ ls libs/linux/x64/
libdyncallback_s.a  libdyncall_s.a  libdynload_s.a

@zhiyb commented Feb 4, 2018

They are not generated; the binaries are downloaded directly from a server somewhere.

@l3eta commented Feb 4, 2018

Yeah, mine never seemed to download, hence why it wasn't loading. I got it to work now; however, I had to inject myself into the library class. At the moment I'm fixing the libGL.so.1 path resolution, and then I'll know whether it works to render my 3D playground.

@Spasi (Member) commented Feb 5, 2018

Not even sure how to turn online mode off

The LWJGL repository doesn't contain the source code for all libraries it supports. Some are too big or have complex build systems. These libraries are built externally on Travis CI and AppVeyor. The Ant script downloads the prebuilt binaries as necessary.

Any attempt to do an ARM build should be done in offline mode, simply because there is no CI infrastructure for ARM yet and no binaries available to download. It's also useful when you're working with a custom build of a supported library and want to be sure it won't be overwritten.

You run export LWJGL_BUILD_OFFLINE=true and then the Ant build will not attempt to download anything.

but for my build, they are directly in repo/libs/linux/x64

Sorry for the confusion, but there was a major change to the project's directory layout two weeks ago (00e1f52 and 06e044e). Previously libs was a top-level directory (relative to the repository root), but it now lives under bin/. This is cleaner and makes it easier to get the repo to a clean state (simply delete the bin folder).

  • bin/libs/ is where all (Java & native) dependencies are downloaded and where linked shared libraries are copied to.
  • bin/${platform}/${build.arch} is where compiled object files are stored.

The Mali drivers came with libEGL.so only, so there is no libGL. Do I need to stop using GL11.*, GL12.*, etc. calls in my code, or what am I missing?

There is no simple answer, it depends on the driver used and what you're trying to achieve. Without knowing more details, here are some things you can try:

  • First of all, for OpenGL ES, use the classes from the opengles module: org.lwjgl.opengles.GLES, GLESCapabilities, GLES20, etc.
  • To make sure the correct driver is being loaded, enable loader debugging with -Dorg.lwjgl.util.Debug=true and -Dorg.lwjgl.util.DebugLoader=true. This will print out which shared libraries are being loaded at startup.
  • To override the library to load, see the org.lwjgl.system.Configuration class. It supports both system properties and programmatic access. Useful options in your case: EGL_LIBRARY_NAME and OPENGL(ES)_LIBRARY_NAME (these can be relative/absolute paths to shared libraries, which makes things easier; see the sketch below).
  • It's technically possible to use EGL with desktop GL, see #359 for example. (Again, this depends on the available driver.)
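
For illustration, an invocation along these lines; the driver paths are placeholders and the exact system-property keys for the library-name options should be verified against the Configuration class:

java -Dorg.lwjgl.util.Debug=true \
     -Dorg.lwjgl.util.DebugLoader=true \
     -Dorg.lwjgl.egl.libname=/usr/lib/mali/libEGL.so \
     -Dorg.lwjgl.opengles.libname=/usr/lib/mali/libGLESv2.so \
     -cp app.jar MyApp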

@l3eta commented Feb 5, 2018

Thanks for the detailed steps and whatnot. Currently I'm trying to get accelerated graphics going so that I'm not stuck at 5 fps. LWJGL3 built and runs great.

What do you guys have left before ARM support is 100% out?

@mjansson commented Mar 4, 2018

@mikehooper @Spasi Regarding rpmalloc on ARM: it should work fine, but if the target processor doesn't have the instructions I used for atomic fences, you could try using C11 atomics instead.

  • Which processor are you targeting? Could you post the compiler flags so I can incorporate this into the rpmalloc build tests?

  • Try replacing lines 106 & 107 in rpmalloc

#    define atomic_thread_fence_acquire() __asm volatile("dmb ish" ::: "memory")
#    define atomic_thread_fence_release() __asm volatile("dmb ishst" ::: "memory")

with the following C11 atomics instead

#    include <stdatomic.h>
#    define atomic_thread_fence_acquire() __atomic_thread_fence(memory_order_acquire)
#    define atomic_thread_fence_release() __atomic_thread_fence(memory_order_release)

@tristeng commented Aug 31, 2018

@Spasi I have followed your directions and am attempting to get this project and its dependencies building on ARM through Travis CI, so I'm looking for your guidance on a few things.

Here is where I am at: I have successfully used an ARM cross-compiler (provided by Raspberry Pi) to get this project building. I also had to get Dyncall (CI repo) built with the cross-compiler; code here, and it's also building on Travis CI.

I didn't have to build any of the other 3rd-party projects you mentioned for an LWJGL build to succeed, but it shouldn't be too difficult to get them building in a similar fashion (I'm happy to take that on).

In the LWJGL project, I've created a new platform (arm) and created various sub-directories based on the linux platform to get it working; changes here. I haven't tested the built native libraries on a Pi yet, but that is my next step. I'm guessing I might have to tweak some compiler options etc., but at least the build succeeds.

It looks like the various branches in the CI projects are named in a specific fashion. I just went with "arm" and "arm64" to start, but if you have suggestions on naming etc., let me know and I can clean it all up and make pull requests on the dependent projects first.

Any other guidance or tips are appreciated.

@Spasi (Member) commented Aug 31, 2018

Ideally, the ARM builds would reuse the current Linux setup (build.xml, version.script, custom headers, etc). Duplicating all that and maintaining it long-term would be a pain. Then there'd be some kind of flag that enables the cross-platform compilation for the target architecture (it would override build.arch, at least).

Android will probably require a separate setup though. So, long-term, we should end up with the following platform/architecture matrix:

Platform    x86   x64   ARMv7/AArch32   ARMv8/AArch64
Android                 X               X
Linux             X     X               X
macOS             X
Windows     X     X

The preferred naming for branches is master-<platform>-<abi>. I don't think we'll support more than one 32-bit ARM ABI (i.e. v7 hard-float), so the generic aarch32/aarch64 can be used for the ARM branches.

Btw, there's a branch per platform/ABI because I hate the .travis.yml syntax for running matrix builds. Not sure if it has been improved lately, but if you can write a script that produces both 32 & 64-bit binaries with good readability, go for it and use a single master-linux-arm branch.
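
For what it's worth, a single-branch script could just loop over both ABIs, something like this (the build.arch values and how they map to the right cross-gcc are assumptions):

# sketch: build both ARM ABIs from one branch
for arch in arm32 arm64; do
    ant -Dbuild.arch=${arch} compile-native
done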

@tristeng commented Aug 31, 2018

Thanks for the quick reply; that all makes sense. I'll start with the Dyncall CI project first to make sure I'm on the right track.

@Zamundaaa commented Oct 25, 2018

How far along is ARM support? I'd very much like to use LWJGL in a project with a Raspberry Pi 3B. I'm only using a few of the modules; Assimp, OpenGL & OpenGL ES are what's necessary for me.
And I would like to offer my help, if I can (though I have pretty much no experience with this at all).

@tristeng commented Oct 26, 2018

I updated a few of the dependent libraries to use a cross-compiler within Travis CI, but ran into an issue where the cross-compiler doesn't recognize an assembler command (memcpy is wrapped and pointed at a specific version of glibc). I tried a simple file on the Pi itself and it also doesn't recognize this compiler command. For a 32-bit build this might not be necessary, but that is where I left it. You can look at the forks in my repo (look for the repos forked from LWJGL-CI, and then look at the arm branches).

@mikehooper commented Nov 10, 2018

Trying to build this on the Pi again (Pi 3B+, Raspbian 32-bit OS).

Why would I get missing header files when they do exist?

pi@raspberrypi:~/lwjgl3 $ ant compile-native
Buildfile: /home/pi/lwjgl3/build.xml

bindings:

init:
 [override] Build offline: true

check-dependencies:

generate:

compile:

compile-native:

compile-native-platform:
 [Compiler] /home/pi/lwjgl3/modules/lwjgl/nanovg/src/generated/c/org_lwjgl_nanovg_NanoSVG.c:13:21: fatal error: nanosvg.h: No such file or directory
 [Compiler]  #include "nanosvg.h"

File exists here:

pi@raspberrypi:~/lwjgl3 $ find . -name nanosvg.h
./modules/lwjgl/nanovg/src/main/c/nanosvg.h
pi@raspberrypi:~/lwjgl3 $ 

@gounthar commented Nov 12, 2018

I'm having some trouble compiling because of header files not being found. Are there some dependencies to install beforehand?

 [Compiler] lwjgl3/modules/lwjgl/tootle/src/main/c/RayTracer/JRT/JRTH2KDTreeBuilder.cpp:6:23: fatal error: TootlePCH.h: No such file or directory

But...

 find . -name TootlePCH.h
./modules/lwjgl/tootle/src/main/c/TootlePCH.h

Any idea?
Thanks.

@mikehooper commented Jan 9, 2019

Is anyone able to explain this missing header files issue? Are we missing a library path or some dependency?

@mikehooper commented Jan 10, 2019

I seem to have more success setting relative="false" in every occurrence in config/linux/build.xml.

@mikehooper commented Feb 4, 2019

These patches seem to solve the missing header files issue:

#!/bin/bash

file=~/lwjgl3/config/linux/build.xml

# set relative to false
sed -i -e 's+<attribute name="relative" default="true"/>+<attribute name="relative" default="false"/>+g' $file
sed -i -e 's+relative="true"+relative="false"+g' $file

# set threads to 1
sed -i -e 's+threadsPerProcessor="2"+threadsPerProcessor="1"+g' $file
sed -i -e 's+threadsPerProcessor="4"+threadsPerProcessor="1"+g' $file

# set path
sed -i -e 's+<property name="module.lwjgl.rel" value="../../../../${module.lwjgl}"/>+<property name="module.lwjgl.rel" value="/home/pi/lwjgl3/${module.lwjgl}"/>+g' $file

# comment out -m32
sed -i -e 's+<arg line="-m32 -mfpmath=sse -msse -msse2" unless:true="${build.arch.x64}"/>+<!-- <arg line="-m32 -mfpmath=sse -msse -msse2" unless:true="${build.arch.x64}"/> -->+g' $file
sed -i -e 's+<arg value="-m32" unless:true="${build.arch.x64}"/>+<!-- <arg value="-m32" unless:true="${build.arch.x64}"/> -->+g' $file

@Askmewho commented Feb 26, 2019

These instructions are good enough to build it for the RPi 3: http://fxzjshm.github.io/blog/Build-LWJGL-On-Raspberry-Pi/

@mikehooper commented Feb 26, 2019

These instructions are good enough to build it for the RPi 3: http://fxzjshm.github.io/blog/Build-LWJGL-On-Raspberry-Pi/

That's for lwjgl2, not lwjgl3.

@Askmewho commented Feb 26, 2019

Yeah... I need to build an armhf lwjgl-3.1.6.jar for the RPi 3. How do I get that one? I downloaded the source of the 3.1.6 that I need and I am failing to build it. Please help me with this nightmare.

@gounthar commented Feb 26, 2019

I went further, but will have to try it on another machine:

 [kotlinc] info: kotlinc-jvm 1.3.21 (JRE 1.8.0_181-8u181-b13-2~deb9u1-b13)
  [kotlinc] exception: java.lang.OutOfMemoryError: Java heap space

@mikehooper commented Feb 26, 2019

If you're building on a Pi, you can increase the swap file size; see #206 (comment).

@Askmewho commented Feb 26, 2019

Zram?

@gounthar commented Feb 26, 2019

Thanks. I tried on another machine and got:


compile-native:
    [mkdir] Created dir: /root/lwjgl3/bin/linux/x64

compile-native-platform:
 [Compiler] gcc: error: unrecognized command line option ‘-m64’

BUILD FAILED
/root/lwjgl3/build.xml:406: The following error occurred while executing this line:
/root/lwjgl3/config/linux/build.xml:149: The following error occurred while executing this line:
/root/lwjgl3/config/linux/build.xml:39: apply returned: 1

Total time: 8 minutes 10 seconds
 gcc --version
gcc (Ubuntu/Linaro 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
 uname -a
Linux machine 4.15.0-34-generic #37~16.04.1-Ubuntu SMP Tue Aug 28 10:44:16 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux

@Askmewho commented Feb 26, 2019

Delete that option? 32-bit ARM libs should work anyway on a 64-bit ARM OS. Just for testing purposes, delete it. The -m64 is an x86_64 flag; I don't know the equivalent for arm64, but anyway, if you delete that one it should pass.

@Askmewho commented Feb 27, 2019

So, has anyone successfully built lwjgl3 for the Pi 3 or other armhf and arm64 devices? If so, please give us precompiled libs or sources. Sorry if I'm being repetitive.

@cuchaz commented Apr 8, 2019

Hello,

I managed to get a cross-compile to aarch64 working on a Linux x64 host. I have no idea how to integrate this nicely with the build system so my changes play well with others, but I did hack up the Ant build files to make the cross-compilation work in my case. If that's of any use to anyone trying to get ARM binaries, I'll post my diffs here. Or maybe this could serve as an inspiration for how to integrate an aarch64 target into the build system properly. Who knows, but I hope it helps someone.

diff --git a/config/linux/build.xml b/config/linux/build.xml
index 77eeac633..a8968eb76 100644
--- a/config/linux/build.xml
+++ b/config/linux/build.xml
@@ -13,6 +13,15 @@
         <equals arg1="${build.arch}" arg2="x64"/>
     </condition>
 
+    <!-- add cross-compilation toolchain -->
+    <!-- TODO: how to configure? -->
+    <property name="toolchain" value="/path/to/your/cross/compiling/toolchain"/>
+    <property name="toolchain.prefix" value="${toolchain}/bin/aarch64-linux-"/>
+    <property name="toolchain.sysroot" value="${toolchain}/aarch64-cortexa53-linux-gnu/sysroot"/>
+    <property name="toolchain.include" value="${toolchain.sysroot}/usr/include"/>
+    <property name="toolchain.lib" value="${toolchain.sysroot}/usr/lib"/>
+    <property name="system.include" value="/usr/include"/>
+
     <condition property="gcc.suffix" value="-${gcc.version}" else="">
         <isset property="gcc.version"/>
     </condition>
@@ -22,8 +31,8 @@
     <macrodef name="compile">
         <attribute name="dest" default="${dest}"/>
         <attribute name="lang" default="c"/>
-        <attribute name="gcc.exec" default="gcc${gcc.suffix}"/>
-        <attribute name="gpp.exec" default="g++${gcc.suffix}"/>
+        <attribute name="gcc.exec" default="${toolchain.prefix}gcc${gcc.suffix}"/>
+        <attribute name="gpp.exec" default="${toolchain.prefix}g++${gcc.suffix}"/>
         <attribute name="lto" default="-flto"/>
         <attribute name="flags" default=""/>
         <attribute name="simple" default="false"/>
@@ -39,8 +48,10 @@
             <apply dir="@{dest}" executable="${gcc}" dest="@{dest}" skipemptyfilesets="true" failonerror="true" parallel="true" taskname="Compiler">
                 <arg line="-c -std=c11" unless:set="cpp"/>
                 <arg line="-c -std=c++11" if:set="cpp"/>
+                <!-- TODO: -m64 -m32 not accepted by aarch64-linux-gcc, how to configure?
                 <arg line="-m64" if:true="${build.arch.x64}"/>
                 <arg line="-m32 -mfpmath=sse -msse -msse2" unless:true="${build.arch.x64}"/>
+                -->
                 <arg line="-O3 @{lto} -fPIC @{flags} -pthread -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0 -D_GNU_SOURCE -DNDEBUG -DLWJGL_LINUX -DLWJGL_${build.arch}"/>
 
                 <arg value="-I${jni.headers}"/>
@@ -51,6 +62,11 @@
 
                 <arg value="-I${src.main.rel}" if:true="@{simple}"/>
 
+                <!-- include toolchain headers BEFORE system headers -->
+                <!-- TODO: how to configure? -->
+                <arg line="-isystem ${toolchain.include}"/>
+                <arg line="-isystem ${system.include}"/>
+
                 <source/>
                 <fileset dir="." includes="${src.generated}/*" if:true="@{simple}"/>
 
@@ -63,8 +79,8 @@
         <attribute name="module"/>
         <attribute name="linker" default="gcc"/>
         <attribute name="lang" default="c"/>
-        <attribute name="gcc.exec" default="gcc${gcc.suffix}"/>
-        <attribute name="gpp.exec" default="g++${gcc.suffix}"/>
+        <attribute name="gcc.exec" default="${toolchain.prefix}gcc${gcc.suffix}"/>
+        <attribute name="gpp.exec" default="${toolchain.prefix}g++${gcc.suffix}"/>
         <attribute name="flags" default="-Werror -Wfatal-errors"/>
         <attribute name="simple" default="false"/>
         <element name="beforeCompile" optional="true"/>
@@ -122,8 +138,10 @@
             <apply executable="${gcc}" failonerror="true" parallel="true" taskname="Linker" unless:set="lib-uptodate">
                 <srcfile/>
                 <arg value="-shared"/>
+                <!-- TODO: -m64 -m32 not accepted by aarch64-linux-gcc, how to configure?
                 <arg value="-m64" if:true="${build.arch.x64}"/>
                 <arg value="-m32" unless:true="${build.arch.x64}"/>
+                -->
 
                 <arg line="-z noexecstack"/>
                 <arg line="-O3 -flto -fPIC -pthread -o ${lib}/lib${name}${LIB_POSTFIX}.so"/>
@@ -136,7 +154,7 @@
                 <link/>
             </apply>
 
-            <apply executable="strip" failonerror="true" taskname="Symbol strip" unless:set="lib-uptodate">
+            <apply executable="${toolchain.prefix}strip" failonerror="true" taskname="Symbol strip" unless:set="lib-uptodate">
                 <filelist dir="${lib}" files="lib${name}${LIB_POSTFIX}.so"/>
             </apply>
             <delete file="${lib}/touch_${platform}.txt" quiet="true" unless:set="lib-uptodate"/>
@@ -145,8 +163,8 @@
 
     <macrodef name="build_simple">
         <attribute name="module"/>
-        <attribute name="gcc.exec" default="gcc${gcc.suffix}"/>
-        <attribute name="gpp.exec" default="g++${gcc.suffix}"/>
+        <attribute name="gcc.exec" default="${toolchain.prefix}gcc${gcc.suffix}"/>
+        <attribute name="gpp.exec" default="${toolchain.prefix}g++${gcc.suffix}"/>
         <sequential>
             <build module="@{module}" gcc.exec="@{gcc.exec}" gpp.exec="@{gpp.exec}" simple="true" if:true="${binding.@{module}}"/>
         </sequential>
diff --git a/modules/lwjgl/core/src/main/c/linux/wrap_memcpy.c b/modules/lwjgl/core/src/main/c/linux/wrap_memcpy.c
index 239c95817..e1abc29db 100644
--- a/modules/lwjgl/core/src/main/c/linux/wrap_memcpy.c
+++ b/modules/lwjgl/core/src/main/c/linux/wrap_memcpy.c
@@ -2,7 +2,9 @@
 
 void *old_memcpy(void *, const void *, size_t);
 
-__asm__(".symver old_memcpy,memcpy@GLIBC_2.2.5");
+// aarch64 toolchain seems to have a different symbol version
+//__asm__(".symver old_memcpy,memcpy@GLIBC_2.2.5");
+__asm__(".symver old_memcpy,memcpy@GLIBC_2.17");
 
 void *__wrap_memcpy(void *dest, const void *src, size_t n) {
     return old_memcpy(dest, src, n);

I only built and tested the core liblwjgl.so on my ARM board, but I bet lots of the other modules would work this way too.

Oh, and I had to build dyncall for aarch64 too. Here's a helper script I used to do that:

#!/bin/sh

AARCH64=/path/to/your/cross/compiling/toolchain
AARCH64_SYSROOT=${AARCH64}/aarch64-cortexa53-linux-gnu/sysroot
AARCH64_PREFIX=/bin/aarch64-linux-

\
  AS="${AARCH64}${AARCH64_PREFIX}gcc"\
  CC="${AARCH64}${AARCH64_PREFIX}gcc"\
  CXX="${AARCH64}${AARCH64_PREFIX}g++"\
  LD="${AARCH64}${AARCH64_PREFIX}ld"\
  ASFLAGS="-isysroot ${AARCH64_SYSROOT}"\
  CFLAGS="-isysroot ${AARCH64_SYSROOT}"\
  CXXFLAGS="-isysroot ${AARCH64_SYSROOT}"\
  LDFLAGS="-Wl,-syslibroot ${AARCH64_SYSROOT}"\
  make all

Run this script after cloning the dyncall repo and configuring the build. I ran this as an out-of-source build in a build-aarch64 subfolder, but you could run it in the project root too. Then copy the static libs (*.a) to <lwjgl>/bin/libs/linux/x64. I think putting the lwjgl build in offline mode helps ensure your custom dyncall build doesn't get overwritten, but it looks like lwjgl's build doesn't overwrite the custom dyncall libs unless you explicitly clean them. So just copying them to the right folder seemed to work for me.
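
Roughly, with placeholder paths (the build still looks in the x64 folder with these hacks):

export LWJGL_BUILD_OFFLINE=true
cp libdyncall_s.a libdyncallback_s.a libdynload_s.a /path/to/lwjgl3/bin/libs/linux/x64/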

Oh, and here's the aarch64 toolchain I used. It's supplied by my dev board's manufacturer and there's not much in the way of documentation, but it seems to work just fine.

https://github.com/friendlyarm/prebuilts

I used the gcc-x64/aarch64-cortexa53-linux-gnu-6.4.tar.xz toolchain specifically.

Hope that helps!
