
Support execution of compiled binaries #429

Closed
mattgodbolt opened this issue May 16, 2017 · 44 comments

Comments

mattgodbolt (Collaborator) commented May 16, 2017

I really want to be able to support execution, and have been working on it. I hit a flaw, and ended up writing this essay to cover options. I put it here for reference, review and comment.

Execution

Goals

  • Be able to safely execute user-supplied programs.
  • (optionally) Execute the compilers in a safer manner.

Threat model

  1. Curious internet folk seeing if they can hack the site/read files they shouldn't
    (e.g. /etc/passwd and the like).
  2. Malicious users attempting to DoS Compiler Explorer; either taking it down or attempting
    to delete shared resources (e.g. /opt/compiler-explorer compilers).
  3. Malicious users seeking to gain control of the site to mine bitcoin/send spam/etc.
  4. As above but to attempt to steal other users' data.
  5. As above but to attempt to access AWS to spin up further instances/access other services.

Considerations

The current setup runs each AWS instance with limited privileges. Each AWS instance runs multiple
docker containers (one for each sub-site). The compilers are run inside these docker images, and each is run with an LD_PRELOAD wrapper that attempts to minimise the number of files the compilers can access. Additionally, some command-line flags are banned (e.g. -fplugin-style functionality).

The docker containers isolate both the node.js server and the compilers it runs from the host AWS
environment: most of the security comes from this. Over the years the LD_PRELOAD approach has become less effective: most compilers now need to read most files in /etc and /proc, so many
files one might otherwise want to restrict have to be whitelisted. Additionally, some compilers
are statically linked and so cannot be LD_PRELOADed in this way.

A full breach via the compiler would expose just the one docker instance, and modifications would
be ephemeral. If the site was taken down, AWS healthchecking would kill the instance and spawn a
new one (with a fresh clean docker image), so no lasting damage would be done.

However, there's a weakness even here: the /opt drive is a read-write mounted EFS drive (Amazon's
network file system). Changes made here would be lasting, and would be seen by the other running
nodes immediately.

The /opt drive is mostly a convenience: it stores all the compilers so that each AWS node doesn't
need >20GB of compilers in its image. This allows new AWS nodes to boot quickly when load requires,
and means building a new Compiler Explorer image is also pretty quick.

All data in /opt can be recovered from the S3 source.

Execution of Windows compilers is achieved via wine. This can have a long start-up time, and requires a daemon process (wineserver). In order to minimise startup time, in the gcc.godbolt.org docker image the wineserver is run as a long-lived background process during boot. Calls to wine then execute quickly by attaching to this wineserver instance.

Options

All these options rely on a restricted docker container to run the user executable in. The image
has very little in it; just enough to run the user program. It runs as an unprivileged user and with
(if possible) user namespaces to prevent root in that container being root on the host. It has no
setuid programs installed, and will be run without network bridges, and with restricted cgroups.
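As an illustration only (the image name and the specific limits are invented here, not CE's real configuration), a container along those lines might be launched like so:

```shell
# Hypothetical launch of the execution image: unprivileged user, no network,
# read-only rootfs, no privilege escalation, capped pids/memory/cpu.
docker run --rm \
    --user 9999:9999 \
    --network none \
    --read-only \
    --security-opt no-new-privileges \
    --pids-limit 64 \
    --memory 128m \
    --cpus 0.5 \
    ce-exec-image /home/ce-user/a.out
```

User namespaces (so root inside the container is not root on the host) are configured on the docker daemon itself via userns-remap, rather than per run.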

Docker-in-docker

Compiler Explorer continues to run in docker, and uses a docker-in-docker approach. This was the plan until the issues below were found.

By mounting /var/run/docker.sock from the host into the CE docker image, that docker image (with
some userid/groupid gymnastics) can launch other docker images. Thus it can in principle launch
the execution docker images.

The following issues have been found:

  • It is not possible for the execution image (EI) to mount directories from the compiler image (CI).
    The non-docker implementation of the CI compiles to a random directory in /tmp, then runs the EI with that temporary directory mounted as /home/ce-user. This doesn't work under docker-in-docker. One solution would be to mount a well-known host directory as the CI's /tmp, and then to know how to translate paths within it into real host paths usable as mount targets for the EI. Care must be taken to mount only the one /tmp directory containing the user's code into the EI, not the whole /tmp, else information leakage between executions is possible.
  • Even when running as a restricted user, the CI must be able to talk to the docker socket. As the docker daemon runs as root on the host, this effectively gives the CI host root access, weakening its security considerably. The enlarged attack surface would make targeting the compilation step to gain root on the AWS instance feasible.
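A sketch of the first suggested workaround (every path and name here is hypothetical): the host directory /ce-scratch is mounted as /tmp inside the CI, so the CI can translate its own /tmp/<job> into a host path that the host docker daemon can mount into the EI:

```shell
# CI sees this directory as /tmp/abc123; the host daemon needs the host path.
JOB=abc123
docker run --rm \
    -v "/ce-scratch/$JOB:/home/ce-user:ro" \
    ce-exec-image /home/ce-user/a.out
# Mount only the one job directory, never the whole /ce-scratch, to avoid
# information leakage between executions.
```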

Pros:

  • Pretty much written

Cons:

  • Has the above flaws and issues

Drop the CI docker

Compiler Explorer would run on the AWS host directly. It would then use the same EI isolation techniques to run the compiler as it does to run user binaries. That way the fact it's running directly on the AWS host is "hidden". In the EI the /opt/compiler-explorer would probably have to be mounted (read-only), to gain access to libraries etc there (gcc's libs for example). If the compiler is run in the EI it'll need a "non-free" mount too to cover the licenses etc.

Pros:

  • Solves the compilation security holes in the same way as the execution ones.

Cons:

  • Running CE on the bare AWS host opens the door to more security issues there (mitigated by the compilers being run in a safe way).

Remote execution

CE/compilation as before, but the executable is transmitted to a separate app to be run. That app would run on AWS directly, but would then use an EI as described above.

Pros:

  • Most isolation from rest of activities of all techniques
  • Could run on separate AWS nodes with even fewer permissions
  • Could be scaled independently of the rest of the Compiler Explorer infrastructure

Cons:

  • A new server type to administrate
  • Handling serialisation of the executable and its dependencies could be tricky
  • Not sure how it fits into the "compilation environment" ideas mooted with Christopher Di Bella. Though
    it might form the foundation of the "make an environment" process too.

Other notes

  • Split /opt/compiler-explorer into sensitive and non-sensitive areas so the licenses etc can be
    specifically unmapped.
  • wine startup time is common to all options: we either have to accept it or have some kind of persistent wineserver that could potentially leak information. Maybe wineserver can run in its own container and somehow be shared with all wine processes? It looks like wine talks via a Unix domain socket in /tmp/.wine-$UID. It might also be that the pause is long only due to the particular wine setup used: a container with a "pre-warmed" wine setup might be fast enough.
ilAYAli commented Jun 22, 2017

It would be nice if this could be added as an option for a locally running instance until the security concerns are solved.
That could let other developers help address some of the above-mentioned concerns.

mattgodbolt (Collaborator, Author) commented Jun 22, 2017

Hi @badeip - the supportsExecute option in the properties file turns it on. At the moment it relies on the presence of a docker exec image. If you don't want to hassle with that, there's a bit of code you can manually hack: in lib/base-compiler.js in the function execBinary replace the exec.sandbox(... call with exec.execute(...

ilAYAli commented Jun 22, 2017

Hi @mattgodbolt!
I tried your latter suggestion, but I don't get any program stdout, just Compiler exited with result code 0

mattgodbolt (Collaborator, Author) commented Jun 22, 2017

@badeip you'll still need to have the supportsExecute=true property. Create a file called etc/config/compiler-explorer.local.properties and put supportsExecute=true in it too.

ilAYAli commented Jun 25, 2017

I only got <No output> in the disassembly view unless 110010 ("compile to binary and disassemble the output") was enabled, so a GNU-compatible objdump had to be installed and used (brew install binutils).
The following enables local execution on macOS:

From 211e666af0f80593ea9139a693ac6741f4548b46 Mon Sep 17 00:00:00 2001
From: Petter Wahlman <petter@wahlman.no>
Date: Sun, 25 Jun 2017 17:57:24 +0200
Subject: [PATCH] local exec on macOS

---
 etc/config/c++.defaults.properties            | 2 +-
 etc/config/compiler-explorer.local.properties | 1 +
 lib/base-compiler.js                          | 2 +-
 3 files changed, 3 insertions(+), 2 deletions(-)
 create mode 100644 etc/config/compiler-explorer.local.properties

diff --git a/etc/config/c++.defaults.properties b/etc/config/c++.defaults.properties
index 50fd7ea..ae1ad2e 100644
--- a/etc/config/c++.defaults.properties
+++ b/etc/config/c++.defaults.properties
@@ -4,7 +4,7 @@ defaultCompiler=/usr/bin/g++
 compileFilename=example.cpp
 postProcess=
 demangler=c++filt
-objdumper=objdump
+objdumper=gobjdump
 #androidNdk=/opt/google/android-ndk-r9c
 options=
 supportsBinary=true
diff --git a/etc/config/compiler-explorer.local.properties b/etc/config/compiler-explorer.local.properties
new file mode 100644
index 0000000..ca121ad
--- /dev/null
+++ b/etc/config/compiler-explorer.local.properties
@@ -0,0 +1 @@
+supportsExecute=true
diff --git a/lib/base-compiler.js b/lib/base-compiler.js
index 0249a5f..1690ad4 100644
--- a/lib/base-compiler.js
+++ b/lib/base-compiler.js
@@ -115,7 +115,7 @@ Compile.prototype.objdump = function (outputFilename, result, maxSize, intelAsm)
 };

 Compile.prototype.execBinary = function (executable, result, maxSize) {
-    return exec.sandbox(executable, [], {
+    return exec.execute(executable, [], {
         maxOutput: maxSize,
         timeoutMs: 2000
     })  // TODO make config
--
2.10.2

rene-aguirre commented Jun 28, 2017

@badeip did you get any "error: Error Error: spawn EACCES" failures?

How did you fix it?

ilAYAli commented Jun 29, 2017

@rene-aguirre yes, the EACCES failure should be solved by a combination of installing GNU binutils and applying the patch above.

mattgodbolt (Collaborator, Author) commented Jul 5, 2017

Thanks for that! I am so Linux-centric I had no idea it was possible to have a working compiler et al without binutils :)

rene-aguirre commented Jul 7, 2017

@badeip I can't get past the EACCES error. I'm using latest master; it looks like Compiler Explorer is trying to execute a .s assembly file.

I'm using homebrew's binutils and your patch.

Is there anything that might be missing from your changes or instructions? Would it be possible for you to share a fork with your full changes so I can take a look?

ilAYAli commented Jul 7, 2017

@mattgodbolt macOS has binutils, but it is not argument-compatible with the Linux/GNU binutils.
I have used Linux for the last couple of decades, but I need my development tools of choice to work with all my development environments :)

@rene-aguirre git clone https://github.com/badeip/compiler-explorer.git

ilAYAli commented Jul 8, 2017

@rene-aguirre also, make sure that you have highlighted 11010 and a.out on the webpage.

rene-aguirre commented Jul 8, 2017

Thanks @badeip - actually the error is shown when a.out is highlighted and 11010 is not (execute but no binary).

I had tried this too, but there is only a binary listing when both binary and execution filters are selected. The log shows no failures though.

I even tried the patches (supportsExecute and changing .sandbox to .execute) in a docker container (to isolate any OS X issue) and got the same results: I can't see the expected "printf" output from my snippets.

Am I missing something here? I thought execution implies capturing any stdout from my snippet.

colejohnson66 commented Aug 22, 2017

Just an idea, but would it be possible to write wrappers around, say, glibc that the executed program would run instead? Then a custom fopen wrapper could be written that checks if the requested file is allowed, and if not, returns an error, but if it is, call the original fopen?

mattgodbolt (Collaborator, Author) commented Aug 22, 2017

@colejohnson66 It absolutely is, and that's what we do for the compilers (see the LD_PRELOAD code in the c-preload directory). But that doesn't work all that well for security as it stands:

  • It breaks for statically linked executables (this already bites us for some compilers)
  • It only works for apps that go through fopen or other libc functions. It's trivial to work around if you can compile your own code, by using SYSCALL inline assembly to talk directly to the OS in a non-trappable way.

My plans to support execution rely on virtualisation techniques (e.g. the mostly lightweight stuff that Docker does), instead of user-mode trickery.

mattgodbolt (Collaborator, Author) commented Aug 29, 2017

I've just committed a change which enables execution by default with no sandboxing. This will allow anyone running a local server to execute their code with the minimum of fuss.

mattgodbolt (Collaborator, Author) commented Oct 7, 2017

New notes on sandboxing, post-CppCon:

hermanzdosilovic commented Oct 18, 2017

Hi @mattgodbolt,

Saw your talk at CppCon and just wanted to let you know about what might be an option for code execution on Compiler Explorer.

I have made an API that allows you to run untrusted source code in an isolated environment. The whole project is available on GitHub. I have also made a simple code editor that uses this API in the background (link).

Currently this API doesn't support as many GCC versions as Compiler Explorer, but the architecture is such that adding a new compiler is really easy. I also don't currently support compiling with custom flags.

Besides GCC, I have 43 compilers in total for various languages.

Take a look and let me know what you think about this idea. I will help you on my end to make this happen if you think you should go in that direction. 😃

BR,
Herman

mattgodbolt (Collaborator, Author) commented Oct 18, 2017

Thanks @hermanzdosilovic ! I'm starting to look into this more and more. I'm currently evaluating firejail but I'll be sure to check out your site and isolation technique too.

It looks like if I were to use your API I'd need your site to compile the code. That's not ideal, as I of course support special builds of compilers etc., as you know. Trying to match up the compiler a user selected in my UI with yours might be tricky, and seems a little unnecessary as I'll have already compiled their code with the right compiler :)

I took a very quick look through your code and it seems that isolate_job.rb is the spot where the isolation happens, and that it relies on an isolate command. I can't find where this executable comes from; can you help me understand? Looks like a handy utility if I wish to run my executables locally.

hermanzdosilovic commented Oct 18, 2017

@mattgodbolt I am using isolate. That's the command you spotted. My API is basically a smart wrapper around this tool.

Yes, I understand that I would also need to compile already-compiled code 😄. If you want, I could make a feature that enables you to run any binary in my isolated environment. You could compile user code (for x86_64) on your end and send me the binary for me to run. This is just an idea. I am not really sure if that would be safe on my end 😄 but it seems doable. I could make this feature in no time flat.

mattgodbolt (Collaborator, Author) commented Oct 18, 2017

Thanks for the link. I think probably given I'm already running things locally on my instances, and have already compiled them, and have the libraries etc locally, it would be easiest to run them locally using either firejail or isolate (like you do). Thanks for the links and the kind offer of a remote service though!

@mattgodbolt mattgodbolt added this to To do in Execution support Apr 30, 2018
mattgodbolt (Collaborator, Author) commented Apr 30, 2018

I'm starting to work on this feature again. Will update here and the new "Execution support" project as I discover things and make any decisions.

germandiagogomez commented Apr 2, 2019

I am not sure what you are running, but what I did before was:

  • Docker + Ubuntu.
  • Use apparmor and confine.
  • Limit execution time.

I did not read the other comments; maybe I am being too naive 😅

mattgodbolt (Collaborator, Author) commented Apr 2, 2019

Thanks! Docker + Ubuntu are great; we're already running the processes under Docker. One tricky point is running Docker-inside-docker which is generally frowned upon and is a source of security vulnerabilities.

apparmor is great, but needs kernel support (I think!). We'd need to rebuild our images to support this feature (the default Amazon ones don't have it enabled, unless I'm missing something!)

confine sounds like a good thing to look up. I'm looking at firejail currently for this. ("currently" is a bit of a lie; I have looked at it but haven't had time for a long while.)

Limiting execution time is definitely needed too =)

Thanks so much for taking the time to make suggestions!

janwilmans commented Apr 11, 2019

What about running the program in an emulator inside the user's browser? Something along the lines of https://bellard.org/jslinux/

janwilmans commented Apr 11, 2019

You could also think about running Compiler Explorer completely in JsLinux on the client side, and maybe enable using it offline as well?

mattgodbolt (Collaborator, Author) commented Apr 11, 2019

Good idea, but JSLinux might be too slow: it means booting up a whole operating system, then sending down many megabytes of dependent binaries (post compilation, I need to send both the binary and any library it uses, e.g. libgcc, libstdc++.so.7 etc).

Running CE in JsLinux might be possible. Currently our base docker image, which contains all the dependencies needed to fire up the site, is pretty large (we have to have a full node installation, plus all the node libraries we use). And then the compilers we have available are just slightly more than 250GB, which might be a limiting factor.

janwilmans commented Apr 12, 2019

I agree, 250 GB might be rather large to fit inside a browser.

However, running the compiled code at the client in JsLinux, targeting just that exact target, might be a good start. You would be able to have one target for running, and other targets for inspection.

Ideally, you would not boot it after every compilation, but just update the single binary.
Supporting execution only on this target prevents you from having to send all the dependencies after each compilation; we would just send them once and keep them around.

SoniEx2 commented Apr 27, 2019

this may be a bit of a stretch, but... https://en.wikipedia.org/wiki/Bochs ?

mattgodbolt (Collaborator, Author) commented May 6, 2019
@mattgodbolt mattgodbolt commented May 6, 2019

An update on my state of mind now. Mostly just my own notes, in the hope of breaking the stalemate I currently have.

I'm convinced we need to use the same system for running compilers as running user programs. I hate the LD_PRELOAD stuff, and don't want to support two different mechanisms for isolation.

Isolation approaches

  • Docker-in-docker: this is a no-go for me. I don't want to propagate docker any more than I need to (if at all)
  • firejail still seems to be the best approach: it can be locked down pretty well. It can't be run within docker itself though, which means either we un-docker the main image, or else custom execution instances run natively.
  • Various emulation technologies (bochs, qemu) are appealing and are worth investigating but I worry about start-up times.

Work distribution approaches

There are multiple ways to spread the (presumably greater) execution work around:

  • Run inline (locally). If we undocker the main instance, we can run isolating technologies directly on the current web nodes. Pros: simple (other than de-dockering). Cons: we get no instance-level protection, relying totally on the isolation technique to prevent network access etc; and we can't smart-load-balance.
  • SQS - prototype of this exists (thanks @partouf ). We post data to S3, then post a request to SQS (a queue). An EI picks up the work, executes it, and posts results back. Pros: Allows for awesome isolation and load balancing over multiple workers. Cons: uncertain latencies in both SQS send->SQS rx, and reading and writing to S3.
  • REDIS - using REDIS as a queue. Pretty much as above, but requests can live inline, and latencies should be better. Pros: as above. Cons: requires another moving part and another instance to be the master (and/or cost of running managed instances).
  • MQ - similar to above but using managed MQ.
  • REST requests to a separate pool. Execution requests are POSTed via a load balancer to EIs. Pros: simple, cons: careful config of the LB to prevent external traffic, no smart queueing can be done, one node might be swamped with lots of CPU-intensive jobs while another runs idle.
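The SQS flow above might look roughly like this with the aws CLI (bucket and queue names are invented; this is a sketch, not the prototype's actual code):

```shell
JOB=$(uuidgen)

# Web node: stage the binary in S3, then enqueue a request.
aws s3 cp ./a.out "s3://ce-exec-scratch/$JOB/a.out"
aws sqs send-message --queue-url "$QUEUE_URL" \
    --message-body "{\"job\": \"$JOB\"}"

# Execution instance: pull a request when idle, run it, post results back.
MSG=$(aws sqs receive-message --queue-url "$QUEUE_URL" --wait-time-seconds 20)
aws s3 cp "s3://ce-exec-scratch/$JOB/a.out" ./a.out
# ... run ./a.out under the isolation layer, then:
aws s3 cp ./result.json "s3://ce-exec-scratch/$JOB/result.json"
```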

All queue-based strategies have the benefit of distributing load more evenly across instances (as instances only "pull" a request when idle). The LB approach is dumber, posting all requests to an instance (which maintains its own queue in memory); if that instance falls behind, the requests in its queue will just be slower.

Queue-based approaches have the drawback of filling up: if we got swamped completely, there's a chance we could have thousands of items in the queue to chew through before we "caught up".

Queue-based autoscaling is desirable: the number of items in the queue, or the staleness of items, can be used as a metric to autoscale on.

Other

WINE is a big problem. All of the isolation techniques mostly rely on a short-lived, hermetic environment; WINE breaks that. Perhaps we'll need to accept that, and either drop WINE, accept slower WINE builds, or accept a non-hermetic environment for WINE builds.

mattgodbolt (Collaborator, Author) commented May 6, 2019

In order to make forward progress, I'm going to push on with a firejail-based isolation execution service. This seems the best bet to me, and I have prototyped it a number of times. Whatever mechanism is used to send requests will need this as a fundamental lower layer anyway.

SoniEx2 commented May 6, 2019

If it's not too big of a deal, one idea would be to preload some environments and keep them live, and as execution is requested, spawn more of them to handle the load.

so e.g. you have 5 live idle environments:

[ ][ ][ ][ ][ ]

someone comes along and requests a task, so you spawn another environment. It has some start-up time, so meanwhile you just put the task in an idle environment:

[!][ ][ ][ ][ ][.]

(where [!] is busy and [.] is booting)

You can build a Linux kernel down to 7MB or less. In something like bochs you could get each of those live environments to use very little RAM (uh, 64MB?). You don't need to run the compiler in them if you don't want to, which also helps the RAM usage. Might have issues with the WINE environments, tho.

mattgodbolt (Collaborator, Author) commented May 6, 2019

@SoniEx2 Great ideas! Will definitely consider that as a potential solution to the WINE issue.

mattgodbolt added a commit that referenced this issue May 7, 2019
mattgodbolt added a commit that referenced this issue May 13, 2019
… firejail. See #429
mattgodbolt added a commit that referenced this issue May 16, 2019
mattgodbolt (Collaborator, Author) commented May 21, 2019

Status update: We've moved to mostly using firejail, though I've also been pointed at minijail and nsjail with the comment "They've received a lot more attention and review". One to strongly consider!

mattgodbolt (Collaborator, Author) commented May 21, 2019

gvisor is also being suggested.

mattgodbolt (Collaborator, Author) commented May 21, 2019

Notes from experimentation on the beta site with the firejail solution enabled:

Updated to reflect status as of May 27

  • icc 18 doesn't work: any binaries it builds crash out early in startup. Suspect a glibc incompatibility. - Filed as #1400
  • fork() bombs are pretty effective at taking the site down. - Fixed
  • Timeouts didn't seem to work; the fork bomb wasn't killed by it and the client sees a "gateway timeout" before it sees an execution timeout. - Fixed
  • Crashing executables just show "Program exited with code 255". @apmorton has a patch for firejail to solve this (thanks!) (still pending a newer firejail)
  • It's not obvious that libraries won't link. "minerva" on slack tried to use fmt and got undefined references to fmt::v5::vprint(...). See #1401
  • Some experimental clangs complain during execution with /output.s: error while loading shared libraries: libc++.so.1: cannot open shared object file: No such file or directory (ldd woes, maybe LD_LIBRARY_PATH needed) - see #1399 and #1398
  • ellcc doesn't run either (same as experimental clangs)

shepmaster commented May 21, 2019

  • fork() bombs are pretty effective at taking the site down.

The Rust playground uses Docker's --pids-limit option to mitigate these. Perhaps your tools have a similar configuration?

apmorton (Member) commented May 21, 2019

Summary of conversations on Slack:

  • rlimit-nproc $n in sandbox.profile is indeed per jail, and will effectively mitigate fork bombs
  • nice 19 in sandbox.profile is probably not a bad idea; it should limit the impact of CPU-intensive user code on the rest of Compiler Explorer
  • --timeout=hh:mm:ss on the command line, set to a longer timeout than Compiler Explorer's internal execution timeout, acts as a failsafe: firejail kills a jail more reliably than we can. When firejail times out a jail, it exits silently with code 1. We should retain the existing timeout handling for well-behaved user code, since it makes it clear your program was terminated due to execution time limits.
  • env GLIBC_TUNABLES=glibc.malloc.check=1 in sandbox.profile may be useful, since it provides at least some basic error output when you do silly things with malloc/free on a compiler using glibc
    • should look into other runtime libraries' equivalent debug features as well
  • apmorton/firejail@b040316 has been tagged as 0.9.58.2-ce-patch.2, and should provide "Segmentation fault (core dumped)"-like messages when user code crashes
  • firejail patches are being upstreamed
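Pulling the settings above together, a sketch (values illustrative, not the deployed configuration):

```shell
cat > sandbox.profile <<'EOF'
rlimit-nproc 128
nice 19
env GLIBC_TUNABLES=glibc.malloc.check=1
EOF
# The outer --timeout is the failsafe, set longer than CE's internal timeout.
firejail --profile=sandbox.profile --timeout=00:00:15 ./a.out
```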

mattgodbolt added a commit that referenced this issue May 24, 2019
…box limits. Should fix #1393 and helps #429
mattgodbolt (Collaborator, Author) commented May 24, 2019

More execution issues:

(now filed as #1399)

mattgodbolt (Collaborator, Author) commented May 27, 2019

We're getting close folks :)

mattgodbolt (Collaborator, Author) commented May 27, 2019

Latest file bans in /run checked in. Hopefully that's "it". We'll let it bake today and then hope to deploy tomorrow...

janwilmans commented May 27, 2019

@RubenRBS RubenRBS moved this from To do to In progress in Execution support May 28, 2019
mattgodbolt (Collaborator, Author) commented May 28, 2019

Ok! This is about to go live :) Thanks, everyone, for your invaluable help getting this done!

Execution support automation moved this from In progress to Done May 28, 2019
@mattgodbolt mattgodbolt unpinned this issue May 28, 2019