With the release of 3.0, much of Singularity was rewritten in Go. There is still some C, but perhaps you could try the new release and let us know how things work out? Please let me know if you have further questions!
Version of Singularity:
2.4.2
Expected behavior
Singularity should build successfully with the -pg flag.
Actual behavior
...
gcc -DHAVE_CONFIG_H -I. -DBINDIR="/cm/shared/apps/singularity/dev/bin" -DSYSCONFDIR="/cm/shared/apps/singularity/dev/etc" -DLOCALSTATEDIR="/cm/shared/apps/singularity/dev/var" -DLIBEXECDIR="/cm/shared/apps/singularity/dev/libexec" -DNS_CLONE_NEWPID -DNS_CLONE_FS -DNS_CLONE_NEWNS -DNS_CLONE_NEWUSER -DNS_CLONE_NEWIPC -DNS_CLONE_NEWNET -DSINGULARITY_NO_NEW_PRIVS -DSINGULARITY_MS_SLAVE -Wall -fpie -fPIC -g -pg -O2 -MT util/action-mount.o -MD -MP -MF util/.deps/action-mount.Tpo -c -o util/action-mount.o `test -f 'util/mount.c' || echo './'`util/mount.c
mv -f util/.deps/action-mount.Tpo util/.deps/action-mount.Po
/bin/sh ../libtool --tag=CC --mode=link gcc -Wall -fpie -fPIC -g -pg -O2 -pie -pg -Wl,-rpath -Wl,/cm/shared/apps/singularity/dev/lib -o action action-action.o util/action-util.o util/action-file.o util/action-registry.o util/action-privilege.o util/action-sessiondir.o util/action-suid.o util/action-cleanupd.o util/action-daemon.o util/action-mount.o lib/image/libsingularity-image.la lib/runtime/libsingularity-runtime.la action-lib/libinternal.la
libtool: link: gcc -Wall -fpie -fPIC -g -pg -O2 -pie -pg -Wl,-rpath -Wl,/cm/shared/apps/singularity/dev/lib -o .libs/action action-action.o util/action-util.o util/action-file.o util/action-registry.o util/action-privilege.o util/action-sessiondir.o util/action-suid.o util/action-cleanupd.o util/action-daemon.o util/action-mount.o lib/image/.libs/libsingularity-image.so lib/runtime/.libs/libsingularity-runtime.so action-lib/.libs/libinternal.a -Wl,-rpath -Wl,/cm/shared/apps/singularity/dev/lib/singularity
/usr/bin/ld: /usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../lib64/gcrt1.o: relocation R_X86_64_32S against `__libc_csu_fini' can not be used when making a shared object; recompile with -fPIC
/usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../lib64/gcrt1.o: could not read symbols: Bad value
collect2: error: ld returned 1 exit status
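This failure is a known interaction rather than anything specific to Singularity's sources: -pg makes gcc link the profiling startup object gcrt1.o, and on this toolchain that object is not position-independent, so it cannot participate in the PIE link that the existing -pie/-fpie flags request (hence the "recompile with -fPIC" hint). As an illustrative, hypothetical x86-64 sketch of the same error class (made-up file names, not Singularity's build), any object that embeds a symbol's absolute address carries an R_X86_64_32S relocation:

```shell
# Hypothetical standalone demo of the R_X86_64_32S error class
# (scratch files under /tmp; this is not Singularity's build).
cat > /tmp/abs_reloc.s <<'EOF'
    .data
sym:    .quad 0
    .text
    .globl f
f:
    movq $sym, %rax     # absolute 32-bit address of sym -> R_X86_64_32S
    ret
EOF
gcc -c /tmp/abs_reloc.s -o /tmp/abs_reloc.o
readelf -r /tmp/abs_reloc.o     # relocation table shows the R_X86_64_32S entry
```

Linking such an object with `gcc -shared` (or into a PIE) fails with the same ld diagnostic seen above, which is why a common workaround for a profiled build is to drop the -pie/-fpie flags alongside adding -pg.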
Steps to reproduce behavior
$ cd singularity-2.4.2
$ ./autogen.sh
$ ./configure --prefix=/cm/shared/apps/singularity/dev
Change the following two lines in Makefile, src/Makefile, and src/lib/Makefile:
CFLAGS = -g -O2
LDFLAGS = -Wl,-rpath -Wl,$(libdir)
to
CFLAGS = -g -pg -O2
LDFLAGS = -pg -Wl,-rpath -Wl,$(libdir)
$ make
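The manual Makefile edits above can also be scripted. A small sketch, run here against a scratch copy so it is self-contained (in the real tree you would pass Makefile, src/Makefile, and src/lib/Makefile to sed):

```shell
# Demo: apply the two flag edits from the steps above with sed,
# using a temporary file standing in for the generated Makefiles.
tmp=$(mktemp)
printf 'CFLAGS = -g -O2\nLDFLAGS = -Wl,-rpath -Wl,$(libdir)\n' > "$tmp"
sed -i -e 's/^CFLAGS = -g -O2$/CFLAGS = -g -pg -O2/' \
       -e 's/^LDFLAGS = /LDFLAGS = -pg /' "$tmp"
cat "$tmp"
# CFLAGS = -g -pg -O2
# LDFLAGS = -pg -Wl,-rpath -Wl,$(libdir)
```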
Background:
My MPI program, which was built in and run with a Singularity container, has very poor performance compared to the bare-metal one (~2x slower). I ran the MPI program on 8 nodes to make the difference more obvious. My host OS is RHEL 7.3 and the container OS is CentOS 7.4. I have two copies of the MPI program binary: one compiled on the host (bare metal), and one compiled in the container. The one compiled in the container performs very poorly when run with the container, but its performance is normal when run on the host, so I think the problem is with the container.
Now I want to profile Singularity to see which function causes the problem. My MPI program is also a GPU program, and NVIDIA's profiler nvprof cannot profile the CPU part of an MPI program. So I have to profile Singularity with gprof, which requires building Singularity with the -pg flag.
Regards,
Rengan