CI build platforms #1

Closed

rvagg opened this issue Aug 8, 2014 · 36 comments

@rvagg
Member

rvagg commented Aug 8, 2014

With both Node and libuv being very widely adopted across disparate platforms, it's time for a CI system to match that spread. We should be able to define a list of primary targets that are essential parts of the mix and secondary targets that add additional value but are not a main focus of the core team.

Current Node.js core and libuv Jenkins build bot list: http://jenkins.nodejs.org/computer/

Let's try and limit this discussion to CI as much as possible and leave release build platforms for another discussion.

Likely using Jenkins with a very distributed collection of build bots. I've been in contact with DigitalOcean, IBM and @mmalecki so far on hardware provisioning, and I'm looking forward to hearing from Rackspace and any others that want to step up. NodeSource is happy to cop the maintenance burden and likely some of the cost, and to do the bidding of the core team(s).

Here's my straw-man, to start discussion off:

Primary

  • Linux (64-bit with at least one 32-bit, maybe CentOS)
    • Ubuntu LTS versions still being supported
    • Ubuntu latest stable
    • EL last three versions (CentOS 5, 6 & 7 in lieu of RHEL 5, 6 & 7)
    • Debian stable
    • Something for ARMv6 (rpi) and ARMv7
  • Windows (64-bit only)
    • Windows Server 2008 R2 (NT 6.1, same as Windows 7)
    • Windows Server 2012 (same as Windows 8)
    • Need variations for VS 2012 and VS 2013
  • OSX (64-bit only)
    • 10.8 "Mountain Lion"
    • 10.9 "Mavericks"
  • Solaris (64-bit only)
    • SmartOS 13.4.2
    • SmartOS 14.2.0

Secondary

  • Linux
    • Debian unstable & testing
    • EL next (CentOS 7 beta)
  • Windows
    • Windows 7 32-bit
    • MinGW
    • VS 2010 on something
  • FreeBSD
  • POWER

Looking for input from anyone, but particularly the core team, who need to be the ones deciding which primary platforms they actually care about; we're considering both Node and libuv here. I'm happy to do a bunch of the legwork for you, but I'll need your guidance because the choice of build targets is not mine to make.

@tjfontaine @bnoordhuis @piscisaureus @trevnorris @TooTallNate @saghul @indutny

Others who have shown an interest in this discussion (or I just feel like pulling in!):

@ingsings @pquerna @voodootikigod @mmalecki @andrewlow @guille @othiym23 @dshaw @wblankenship @wolfeidau

Please subscribe to https://github.com/node-forward/build for further notifications from other issues if you're interested so we don't have to go and pull everyone in each time.

@bnoordhuis
Member

Debian unstable & testing

debian/testing seems fine (I think that's what most Debian desktop users run - I do, at least) but sid is frequently broken. It might cause a lot of false negatives.

SmartOS 13.4.2 / SmartOS 14.2.0

Is anyone outside of Joyent running SmartOS? The number of (non-SmartOS) Solaris users is a rounding error, I know both of them personally.

Need variations for VS 2012 and VS 2013

There was a post on the v8-users mailing list this morning that VS 2012 support is being phased out. Node.js is a few V8 releases behind, of course.

@andrewlow

Please seriously consider supporting 32bit versions of the Node binaries. The build infrastructure can run 64bit versions of the OS, but only offering 64bit binaries means we're going to run V8 in 64bit mode.

For most Node applications this will be a waste of memory since they don't need to access more than a 4G address space, and V8 today doesn't do anything beyond a little bit of header compression in the 64bit environment.

Also, for what it's worth - Node can probably be built once on a 64bit Linux machine, then packaged as a .deb, .rpm or .tgz for the various distros. I know from experience that building on RHEL 6.5 produces a binary that runs just fine on Ubuntu 10 and 12. Let's make it a goal to have as few unique binary builds as we can (while allowing for installer packaging wrappers to support the various OSes).
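
A rough way to sanity-check that kind of portability is to look at the highest glibc symbol version a binary links against; a minimal sketch, assuming binutils is installed and the binary landed at the usual out/Release/node:

$ objdump -T out/Release/node | grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -1

Any distro shipping at least that glibc version should be able to run the binary.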

@rmg

rmg commented Aug 8, 2014

From a never-looked-at-v8-internals POV, I would expect the extra registers from 64-bit mode to be quite useful for a VM. @andrewlow does the memory bloat outweigh the benefit of doubling the number of registers available?

@andrewlow

Intel benefits from having more registers in 64bit mode (and more instructions), but it's not enough to overcome the object size bloat. If you do the trick Java did with compressed references (http://lowtek.ca/roo/2008/java-performance-in-64bit-land/) then you do get an overall win, but if you've done that you're really using 32bit objects.

PowerPC gets wider registers, but no new instructions or additional registers. It's more of a challenge on that platform.

I'm not sure where ARM and MIPS sit with 64bit; I suspect they don't get as big a win as Intel does.

In the end, memory is so slow relative to CPU speed that having to move more of it is bad for performance.

@rmg

rmg commented Aug 8, 2014

@andrewlow thanks for the explanation :-)

@rvagg
Member Author

rvagg commented Aug 10, 2014

/cc @chrislea

@chrislea

It's an interesting discussion to me. Before I continue, I will state for the record that I strongly suspect I have considerably less knowledge about the register state / pointer size / performance issues than @andrewlow does.

What I've seen from experience is that convenience and familiarity tend to trump, essentially, everything else. In this case specifically, the end user being able to type apt-get install nodejs or yum install nodejs and have the tooling of their Linux distro of choice install Node for them would, I'm guessing, do more for overall adoption rates than putting effort into shipping 32bit builds onto 64bit machines for some extra bit of runtime speed. I guess this because I also assume that if people were that concerned about runtime speed they'd be developing in Java (or something else that's not Node). I would also guess that most of the apps people are writing in Node are very I/O bound, so runtime speed isn't the bottleneck.

This is sort of tossing $0.02 into the air, I know, and I could be dead wrong if there are enterprise customers who really care about benchmarks. But without additional info, I'd continue to ship arch-specific builds simply because it's easier to make everything "just work" in familiar ways for the end user when they're installing and updating.

@saghul
Member

saghul commented Aug 10, 2014

MinGW

I would very much like to see MinGW (its 3 variants) as a primary. Lots of hours have gone into making them actually work, and many people rely on them. Proof of that is that every time we break some MinGW build I get a pull request with a fix, which I need to try manually.

Supporting these doesn't require new gear; the 3 toolchains can be installed on a Windows 7 or 8 machine.

@saghul
Member

saghul commented Aug 10, 2014

Obviously my previous comment was regarding libuv :-)

@rvagg
Member Author

rvagg commented Aug 10, 2014

@saghul from a previous thread you listed:

  • MinGW32
  • MinGW-w64 (32 bit)
  • MinGW-w64 (64bit)

Which I'm guessing are the ones you are concerned about. Are the build instructions for these included with libuv somewhere or does gyp do a neat job of sorting things out?

@saghul
Member

saghul commented Aug 10, 2014

@rvagg yes, those are the ones.

As for how to build them: I found that autoconf is the easiest way. GYP can also work, but it's not officially supported.

The MinGW32 installer comes with a graphical package manager that allows you to install some packages, so once you have autoconf, automake and libtool you can just build and run with ./autogen.sh && ./configure && make && make check.

For MinGW-w64, I'd suggest using MSYS2. There is an installer to get things started. Then in the MSYS2 shell you can install the 2 MinGW-w64 toolchains using pacman (they ported the Arch Linux package manager) and also autoconf, automake and libtool. Building is done the same way, running the same commands in the appropriate shell: ./autogen.sh && ./configure && make && make check.
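
Roughly, that flow looks like this (the pacman group names are from memory and may differ between MSYS2 releases):

# in the MSYS2 shell: install both MinGW-w64 toolchains plus the autotools
$ pacman -S mingw-w64-i686-toolchain mingw-w64-x86_64-toolchain autoconf automake libtool make
# then, from the matching MinGW-w64 shell (32 or 64 bit):
$ ./autogen.sh && ./configure && make && make check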

I don't have much free time, but I can try to help if needed.

@rvagg
Member Author

rvagg commented Aug 10, 2014

FYI I've been messing with Buildbot and am noticing that CentOS 5 is going to require some work to make the tests a little less brittle across both Node and libuv. Mostly things work, but there are a couple of tests in both that persistently fail and a few that occasionally fail.

I know this is a pain, but I've run into two major US companies already that are still deploying Node on RHEL5 in production. Ultimately it's up to the TC to decide the minimum supported operating systems, but I'd like to encourage them to seriously consider supporting EL 5 within a reasonable timeframe (perhaps until the end of the year, when 7 starts to gain acceptance, which means enterprise is more likely to be running on 6!).

I don't want to share a URL in here, but if anyone's interested in taking a look at the failures then email me at rvagg@nodesource.com and I'll hook you up; I can even give Node & libuv core members access to a box to play around with if needed.

@rvagg
Member Author

rvagg commented Aug 11, 2014

Next question: llvm vs gcc - is there anything worth testing there, or are we assured that compiling with both leads to a similar enough result? I've seen many opting for llvm compiles because of lldb goodness and, I think, quicker compiles.

@chrislea

I've talked to a few people including TJ about this.

The answer is "use gcc when on Linux", even though you can build Node with clang.

The reason is that C++ doesn't have a stable ABI (which I always manage to forget because it's 2014 and really !!??!?), and so if people are building binary modules and their system defaults to using gcc, which it will, it's possible that things just won't work unless you use the same compiler.

So, use gcc, and more specifically use the gcc that your target distro ships with.

Annoying, but no way around it. So there you go.
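
In practice, pinning the compiler is just the usual CC/CXX environment variables at configure time (which, as far as I know, the gyp-based build respects):

$ CC=gcc CXX=g++ ./configure
$ make -j8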

@bnoordhuis
Member

Next question: llvm vs gcc, is there anything worth testing there or are we assured that compiling with both leads to a similar enough result?

@rvagg clang/llvm has, in my experience, a greater proclivity for exploiting undefined behavior and that has exposed bugs in the past. On the other hand, I've hit a greater number of compiler bugs with clang. It's not all roses and unicorns.

If it's trivial to set up a two compiler matrix, then by all means please do. If it's a lot of work, then I think it's okay if we stick with gcc for now.

On a related subject, do people have opinions on what version (or versions) of gcc to support? The current baseline is 4.2, but that's mostly because of OS X and FreeBSD, and both have moved to clang now.

V8 is switching to C++11 soon, so in the medium to long term a modern gcc will be mandatory. That sucks for people on RHEL5 systems, but back-ports of g++ 4.7 and 4.8 are available.
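
For the EL crowd, the back-port route is roughly this (assuming the devtoolset software collection is available for your release; the package names are my guess):

# install the devtoolset back-port of gcc/g++ 4.8
$ yum install devtoolset-2-gcc devtoolset-2-gcc-c++
# open a shell with the newer toolchain first on PATH, then build as usual
$ scl enable devtoolset-2 bash
$ g++ --version && ./configure && make -j8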

I've seen many opting for llvm compiles because of lldb goodness and quicker compiles I think.

The quicker compile times are something of a perpetuated myth from llvm/clang's early days. It was faster back then because it emitted lousy code. It's mostly a wash these days. I don't see an appreciable difference on the projects I work on, at least.

The reason is that C++ doesn't have a stable ABI (which I always manage to forget because it's 2014 and really !!??!?), and so if people are building binary modules and their system defaults to using gcc, which it will, it's possible that things just won't work unless you use the same compiler.

@chrislea You may be thinking of the situation on Windows where msvc and mingw produce incompatible code (clang's msvc ABI support is improving, though) but clang++ and g++ on Linux (and, I think, most Unices) use a common ABI:

$ for CXX in clang++ g++ ; do $CXX -dM -x c++ -E - </dev/null | grep __GXX_ABI_VERSION ; done
#define __GXX_ABI_VERSION 1002
#define __GXX_ABI_VERSION 1002

Where __GXX_ABI_VERSION - 1000 = 2 is the actual ABI version. Version 2 was introduced with the release of g++ 3.4. There are newer versions but they're mainly bug fixes and you have to opt in with -fabi-version=n.

@chrislea

Well then, I stand corrected, as @bnoordhuis clearly knows much more about this than I do!

The only reason I'd heard of for not using clang was the possibility of ABI compatibility issues. If those aren't a real problem, then so be it. I've made builds with clang before and can easily do so again for the next release into the NodeSource repo if people like; I just have to uncomment one line in the build scripts.

Having done it before, I can tell you the compile times are certainly faster with clang. I don't know about absolute performance numbers, or lldb goodness.

Anyway, just let me know if we want to do this going forward for the Ubuntu / Debian builds.

@trevnorris

Because I'm a dork and wanted to see the difference, here are clean builds using gcc 4.8.2 and clang 3.5:

gcc
$ ./configure; /usr/bin/time make -j8
564.04user 36.31system 1:24.62elapsed 709%CPU (0avgtext+0avgdata 355800maxresident)k
26976inputs+367288outputs (162major+16529028minor)pagefaults 0swaps

clang
$ ./configure; /usr/bin/time make -j8
406.75user 26.37system 1:03.04elapsed 687%CPU (0avgtext+0avgdata 128232maxresident)k
0inputs+196184outputs (0major+14644183minor)pagefaults 0swaps

So it saved me ~20 seconds :P, but the difference in memory usage is noticeable (not that I care, since I have 16GB RAM).

@saghul
Member

saghul commented Aug 13, 2014

Should we also test some libc other than glibc on Linux? Sabotage Linux is based on musl, which makes it a good candidate for a buildbot if we want to go down that path. Thoughts @bnoordhuis?

@bnoordhuis
Member

I don't know. I suspect that the number of Linux systems that don't run glibc/eglibc and are not Android is a really tiny fraction. But I'm not the one maintaining the build matrix, I just provide input. @rvagg et al. are more qualified to comment on the cost/benefit ratio.

@kkoopa

kkoopa commented Oct 14, 2014

What about X32? That gives more registers while still using small pointers and longs.

@rvagg
Member Author

rvagg commented Oct 14, 2014

I'd like to know if anybody is actually using it. Ultimately this is up to the libuv and Node core teams to dictate and they may have interactions with people compiling x32 .. or not.

@kkoopa

kkoopa commented Oct 14, 2014

Probably nobody is using it yet. V8 only got x32 support in June this year, with the release of 3.28:
https://code.google.com/p/v8/source/detail?r=21955

However, Node has now integrated V8 3.28 (and might move on to 3.29), so x32 should be very much on topic.
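
For anyone who wants to poke at it, checking whether a toolchain can even target x32 is a one-liner (assuming a multilib gcc with the x32 runtime libraries installed):

# if this links and 'file' reports a 32-bit ELF for the x86-64 machine type, x32 works
$ echo 'int main(){return 0;}' | gcc -mx32 -xc - -o x32test && file x32test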

@bnoordhuis
Member

I've started work on an x32 port, see libuv/libuv#1. Patches for node-forward to follow once the necessary changes land in libuv.

x32 is still a rather niche arch. I'm perfectly fine with leaving it unsupported for now.

@ForbesLindesay

I don't think this has been mentioned anywhere yet: it would be really good to have tests on Windows run both on local disks and on a network share. It's really surprising how many things break when you have Windows but with Unix-style path separators. This applies to Azure Web Sites.

@rvagg
Member Author

rvagg commented Nov 11, 2014

Ugh, any suggestions on how to implement this on Rackspace or similar Windows cloud provider?

For now, just getting the Windows builds green is enough of a job!

@ForbesLindesay

I'm not sure. Running them on Azure Web Sites would essentially give you that by default, and I would have thought Microsoft would be willing to donate the compute. I would guess you could share a local folder over the network and then just cd into it via its network path rather than its local path. That would probably work.
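
Something like this on a single box, in cmd syntax (the share name and paths are made up):

> rem share the build directory, then re-enter it via its UNC path
> net share citest=C:\workspace\node
> pushd \\localhost\citest
> python tools\test.py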

@pquerna

pquerna commented Nov 12, 2014

@rvagg: On Rackspace you could do this by running two instances, probably in an isolated network (aka "Cloud Networks" in Rackspace terms, which is an isolated L2 domain) for security, one acting as a server, the other mounting it.

@qbit

qbit commented Dec 4, 2014

What can I do to get an OpenBSD build env going?

@bnoordhuis
Member

@rvagg Getting some BSD test coverage besides FreeBSD would be good for both libuv and io.js but I defer to you on how much effort it is to set up integration and whether that's worth it.

I would restrict it to the latest release, currently 5.6, and maybe just amd64? I honestly don't know how many people run i386 openbsd these days.

@qbit Can you commit to anything when it comes to support? Failures on openbsd wouldn't block a release but it would be nice if we could ping you to help investigate (and get a reply within a certain time frame; that's the key point, really.)

@qbit

qbit commented Dec 4, 2014

I can commit. IMO the target for OpenBSD should be snapshots, as that is what every developer is running. That said... release is better than nothing at all :)

@rvagg
Member Author

rvagg commented Dec 4, 2014

Find me a cloud provider or corporate sponsor that can offer OpenBSD machines that we can control, and we can do this for sure.

@qbit

qbit commented Dec 5, 2014

@rvagg I will poke around and see if any are willing to donate / sponsor - there are a handful that allow for it currently - ArpNetworks and Vultr for sure.

@qbit

qbit commented Dec 5, 2014

@rvagg is there any info I can pass along to the prospective sponsors regarding logo placement / publicity and whatnot?

@chrislea

chrislea commented Dec 5, 2014

Just jumping in a little late here with a +1. I'm happy to work on builds for BSD if I can get a build box.

@rvagg
Member Author

rvagg commented Dec 23, 2014

@qbit currently we're just putting hardware sponsor names in the build repo README, and the build bot server identifiers also carry the provider names, which get referenced and seen quite a lot. I think we're happy to talk with anyone who has suggestions for how they'd like to be acknowledged, although I don't imagine us bending over backwards for anyone.

@jbergstroem
Member

This issue is pretty outdated. We've travelled a long way since. Great job, everyone! (Closing.)

Regarding x32: I've actually attempted getting this in, but it's just too exotic at the moment. Let's revisit later.
