
seeing double and ticks have nothing in common #204

Merged: 1 commit, Jun 9, 2022

Conversation

mathieu-aubin
Contributor

As usual with my stuff: take, edit, or leave out as desired, if any.

I just guessed a typo for "ticks", as my brain can't seem to see anything other than that in the content.

cargo-c... also... APKBUILD? Static for all arches? It wouldn't hurt to at least try to solve this headache by contributing to abuild/aports. Just a thought, no pressure.
@wader
Owner

wader commented Jun 9, 2022

LGTM

By cargo-c, do you mean the gcc_s thing? I've filed an issue about it, and there is now an upstream Rust issue about the same problem: rust-lang/rust#82521

@wader wader merged commit 42a6ae3 into wader:master Jun 9, 2022
@mathieu-aubin
Contributor Author

I was thinking of binaries... we could create the packages in Alpine... maybe-ish?

@mathieu-aubin mathieu-aubin deleted the patch-3 branch June 9, 2022 21:49
@wader
Owner

wader commented Jun 11, 2022

Sorry for the slow response, I'm on vacation. Do you mean a static-ffmpeg Alpine package of some kind?

@mathieu-aubin
Contributor Author

mathieu-aubin commented Jun 17, 2022

Have you seen or tried multiarch/qemu-user-static? I was able to compile for most common archs on x86_64 with no problems. It also seems compatible with GitHub runners.

What I meant originally was having pre-compiled cargo-c, which would reduce time spent building. And/or ready-to-use containers that we could just copy from. Just suggestions...

multiarch is great

https://www.kernel.org/doc/html/v4.14/admin-guide/binfmt-misc.html
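The kernel doc above describes the registration mechanism; a rough, non-runnable sketch of how multiarch/qemu-user-static ties into it (requires Docker, and the verification image here is just an example):

```
# Illustrative only. The privileged container writes entries of the form
#   :name:type:offset:magic:mask:interpreter:flags
# into /proc/sys/fs/binfmt_misc/register, one per foreign architecture,
# each pointing at the matching qemu-*-static interpreter binary:
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

# Afterwards a foreign-arch container should run transparently:
docker run --rm arm64v8/alpine uname -m
```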

@wader
Owner

wader commented Jun 18, 2022

That is interesting! So all the emulated-build issues we had seem to have been solved? The vorbis one I think got fixed while fixing arm64 things, libvpx had a hard-float/soft-float problem, and I think there was something more?

How slow is it to build? If I remember correctly, rav1e (which uses Rust) was the one that took nearly half of the build time when I tried. If we want to use normal GitHub runners we might have to do some tricks to get around the 6h job limit (use the Docker cache and restart the build across multiple jobs?).

BTW, do you know what the difference is between multiarch/qemu-user-static and docker buildx with QEMU? I think the latter uses the userland QEMU thingy as well.

@wader
Owner

wader commented Jun 18, 2022

Hi again, I made an attempt with FROM arm32v7/alpine:3.16.0 AS builder and this as a GitHub workflow:

name: test

on: workflow_dispatch

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: docker/setup-qemu-action@v2
      - run: docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
      - run: docker build -t test .

But it fails building aom :(

...
In file included from /aom/av1/common/arm/av1_txfm_neon.c:12:
/aom/av1/common/arm/av1_txfm_neon.c: In function 'av1_round_shift_array_neon':
/usr/lib/gcc/armv7-alpine-linux-musleabihf/11.2.1/include/arm_neon.h:6737:1: error: inlining failed in call to 'always_inline' 'vdupq_n_s32': target specific option mismatch

I think I hit and fixed some similar issues the last time I tried an emulated build. If I remember correctly, at least one reason this happens is that some build systems, gcc, etc. poke around in /proc/cpuinfo to probe things about the target, but with emulated builds /proc is still the host kernel's procfs.

$ docker run --rm arm32v7/alpine sh -c 'uname -a; cat /proc/cpuinfo | grep model'
WARNING: The requested image's platform (linux/arm/v7) does not match the detected host platform (linux/amd64) and no specific platform was requested
Linux 89a682be640b 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 armv7l Linux
model		: 142
model name	: Intel(R) Core(TM) i5-7267U CPU @ 3.10GHz
model		: 142
model name	: Intel(R) Core(TM) i5-7267U CPU @ 3.10GHz
model		: 142
model name	: Intel(R) Core(TM) i5-7267U CPU @ 3.10GHz
model		: 142
model name	: Intel(R) Core(TM) i5-7267U CPU @ 3.10GHz

How did you build? On an Intel Linux host? Could you try arm32 if you didn't?

@mathieu-aubin
Contributor Author

mathieu-aubin commented Jun 19, 2022

I ran the register locally, then used a similar FROM line in the Dockerfile, only with arm64v8 instead.

NEON not available for arm32, maybe? No idea. Oh, I also set ARCH in the Dockerfile, also as ARG ARCH.

@mathieu-aubin
Contributor Author

Built on (output of 'cat /proc/cpuinfo | grep model'):

model name      : Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz
model           : 79
model name      : Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz
model           : 79
model name      : Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz
model           : 79
......

40 CPUs, so the output is shortened, haha.

Running the latest Linux kernel (can't show the uname output, too much sensitive info).

@wader
Owner

wader commented Jun 19, 2022

> Built on (output of 'cat /proc/cpuinfo | grep model'):
>
> model name      : Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz
> model           : 79
> model name      : Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz
> model           : 79
> model name      : Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz
> model           : 79
> ......
>
> 40 CPUs, so the output is shortened, haha.

Now I feel I should get a machine with more CPUs :)

> Running the latest Linux kernel (can't show the uname output, too much sensitive info).

No problem; I mostly wanted to show that uname says arm but /proc/cpuinfo says x86. If I remember correctly, I ran into issues where the build system looked at uname to find the target CPU, but then gcc looked at cpuinfo for ARM-specific things and found x86 things instead (no NEON feature, etc.). So for some package builds we might need to force the target in various ways.

I dug up an old bug report I filed about it: docker/for-mac#5245 (the title says Mac-specific but it isn't), if you're interested.
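As a hypothetical sketch of what "forcing the target" could look like (these are ffmpeg's own configure flags; other packages would need their own equivalents, and this exact flag set is an untested assumption):

```
# Illustrative only. Tell ffmpeg's configure the target explicitly
# instead of letting it probe the (host-leaking) /proc/cpuinfo:
./configure --enable-cross-compile --arch=arm --cpu=armv7-a --target-os=linux
```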

@wader
Owner

wader commented Jun 19, 2022

Just so I don't forget: if we do want to use emulated builds with Docker like this, I think we should make sure that host details leaking into the container don't end up affecting the build, e.g. CPU features getting disabled etc.

BTW I've started a multiarch arm64 build as a GitHub action; it seems to have gotten past aom at least. 1h40m in, just started building kvazaar.

@wader
Owner

wader commented Jun 19, 2022

The arm64 build failed after ~2h when starting to build rav1e, with exit code 137, which means it got signal 9 (137 - 128), SIGKILL... I guess it was OOM-killed. So I'm a bit skeptical that we can get the current static-ffmpeg with all current dependencies building on standard GitHub Actions runners. So either strip out some dependencies that use a lot of resources, or somehow run our own hosts to build on, or something else? None of the options seems very attractive to me.

I guess we could maybe try emulated multiarch builds for non-amd64/arm64 using the current AWS spot-instance build setup. Not sure how much time and motivation I have to do it right now, I have some other things I want to finish. But if someone else would like to give it a shot, let me know.
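The 137 arithmetic can be checked in any POSIX shell, since a child killed by a signal exits with status 128 plus the signal number:

```shell
# A process killed by SIGKILL (signal 9) exits with status 128 + 9 = 137,
# the code reported for the OOM-killed rav1e build above.
sh -c 'kill -9 $$'    # the subshell SIGKILLs itself
echo "exit status: $?"    # prints: exit status: 137
```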

@wader
Owner

wader commented Jun 19, 2022

@pldin601 do you think it would be possible to do an emulated armv7 build etc. on an AWS spot instance? Enough CPU time?

@pyldin601
Contributor

I can only guess that it would be possible by running the build on a 32-bit Linux kernel on an ARM CPU. But I don't see any 32-bit Linux machines in the AWS EC2 AMI store.

@wader
Owner

wader commented Jun 19, 2022

Sorry, I meant running an amd64 Linux host but then running a multiarch Docker image or docker buildx on that host, emulating armv7 etc. So the same as above but with no time restrictions and more memory.
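A hypothetical, non-runnable sketch of that setup (requires Docker with buildx; the image tag is just a placeholder):

```
# Illustrative only: with QEMU binfmt handlers registered (as in the
# workflow above), buildx can emulate armv7 on an amd64 host.
docker buildx create --use
docker buildx build --platform linux/arm/v7 -t static-ffmpeg:armv7-test .
```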

@mathieu-aubin
Contributor Author

What about... separate build containers/images from which we pull the relevant pieces in order to build whatever, ffmpeg static in this case?

It would/could probably make things a lot easier and drive the community into action, having multiple angles of attack. Maintaining multiple containers/Dockerfiles would make the whole thing much simpler when it comes to multiarch builds: selecting the appropriate building materials for the context.

It would for sure be a task at first, thinking through and coding the piece selector (if I can say so) to build the project, but... Docker and Alpine make this task easier, I believe.

Also... I was tinkering with Alpine latest and its package builder (abuild) and noticed that many libraries we are building already have static APK packages in the system... from memory, I think:

  • modplug
  • theora
  • and a new library to add, libcaca

There are more, I think, I just can't remember now. I already built successfully with the native packages.

I will do a PR with libcaca enabled soon-ish.

@wader
Owner

wader commented Jun 22, 2022

Yep, something more modular/configurable would be nice. I've done some thinking and prototyping around it but it always ended up feeling messy. So any suggestions on how to do it are welcome, maybe with pros/cons for each alternative? Thinking shell scripts? Makefile? Use the Alpine package infra somehow?

Will write more later about some ideas I've had. I'm away visiting my parents, so I will be slow at answering.

@wader
Copy link
Owner

wader commented Jun 22, 2022

Maybe we should start a new issue about it?

@wader
Owner

wader commented Jun 28, 2022

Dumped some ideas for more modular builds here: #216

About using Alpine packages or not: I don't really have any clear rules for when I've done it or not, but it usually boils down to:

  • Important enough for security/features to use the latest version before Alpine stable gets updated
  • The Alpine package lacks -static or *.a files (I usually file an issue about this; some get fixed directly, some not, and it also takes time to end up in stable)
  • The Alpine package depends on things we don't want
  • A bit more reproducible, as we "tag" the version in the Dockerfile

I did look into whether abuild could be used for modular builds, but I didn't find anything for handling conditional options etc.
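A quick way to check the second point, whether an Alpine package actually ships *.a files, is something like this (not runnable outside an Alpine container with apk; the package name is just an example):

```
# Illustrative only: install a -static subpackage and look for
# static archives among its files.
apk add --no-cache libmodplug-static
apk info -L libmodplug-static | grep '\.a$'
```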
