oc rsync frequently doesn't return #9726

Closed
fbricon opened this issue Jul 7, 2016 · 20 comments

Comments

@fbricon

fbricon commented Jul 7, 2016

The JBoss Tools OpenShift tooling delegates workspace-to-pod synchronization to oc rsync. More often than not, the oc rsync command doesn't return, leaving the process hanging while waiting for a return code, at least on OS X 10.10.5 (Yosemite).

I managed to reproduce the problem by executing the exact same command from a terminal. In this particular example, the pod is synced to the local directory:

/Users/fbricon/Dev/openshift/client/1.2.0/oc rsync --user=openshift-dev --token=q35nNhR3I9KLl2ic0EdBMRgv06cQIgdwgkHfsLagEz0 --server=https://10.1.2.2:8443 --insecure-skip-tls-verify=true --exclude='.git' nodejs-example-4-d5e15:/opt/app-root/src/ -n demo /Users/fbricon/Dev/workspaces/runtime-New_configuration/.metadata/.plugins/org.jboss.ide.eclipse.as.core/demo@nodejs-example/deploy/

Version

oc v1.2.0
kubernetes v1.2.0-36-g4a3f9c5
rsync version 2.6.9 protocol version 29

Steps To Reproduce
  1. Have OpenShift 3.2 running from the CDK 2.1
  2. Create an application from the nodejs-example template
  3. Execute an oc rsync command repeatedly
  4. Wait for the command to return

Current Result

The rsync command doesn't always return.

Expected Result

The rsync command should return 100% of the time.

Additional Information

In http://screencast.com/t/pTPUEVppqp4, you can see the first command getting stuck. After killing it, subsequent commands work, until one gets stuck again at the end.

This issue was reported under https://issues.jboss.org/browse/JBIDE-22677

Sometimes, the rsync command gets stuck when rsyncing from local to the pod.

@fbricon
Author

fbricon commented Jul 7, 2016

I have the same problem with oc v1.3.0-alpha.2. @bdshadow has no issues on Fedora.

@bdshadow

bdshadow commented Jul 7, 2016

Yes. My versions are:
oc v1.3.0-alpha.1
kubernetes v1.3.0-alpha.1-331-g0522e63
rsync version 3.1.1  protocol version 31

@fabianofranz fabianofranz assigned csrwng and unassigned fabianofranz Jul 7, 2016
@csrwng
Contributor

csrwng commented Jul 7, 2016

@fbricon To help me better understand where the hang is happening, can you:

  1. Run 'oc rsync' with the --loglevel=5 argument (include the log of an instance when it gets stuck).
  2. While it's stuck, switch to a different terminal and signal the oc process with SIGABRT:
     kill -6 [pid of oc rsync]. This should dump the thread stacks for the 'oc rsync' process (include that output here).
  3. Also, while stuck, rsh into the container and run ps ax to see if the rsync process is running there.

Thank you very much.
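
Roughly, the three steps above map to these commands (a sketch only; the pod name is taken from the report above, and the PID and local directory are placeholders):

# 1. re-run the sync with verbose client logging
oc rsync --loglevel=5 nodejs-example-4-d5e15:/opt/app-root/src/ <local dir>/

# 2. from another terminal, once it hangs, send SIGABRT to the stuck oc process to dump its Go stacks
kill -6 <pid of oc rsync>

# 3. check whether rsync is still running inside the container
oc rsh nodejs-example-4-d5e15 ps ax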

@fbricon
Author

fbricon commented Jul 9, 2016

Log from an oc rsync run at --loglevel=5, killed with SIGABRT after it got stuck:

/Users/fbricon/Dev/openshift/client/1.2.0.rc2/oc rsync --user=openshift-dev --token=U7gRx7htjSdjTpuikrMgviwSCsV06Bx_W0btTc97MVo --server=https://10.1.2.2:8443 --insecure-skip-tls-verify=true --exclude='.git' nodejs-example-6-bmkxs:/opt/app-root/src/ -n demo /Users/fbricon/Dev/workspaces/runtime-New_configuration/.metadata/.plugins/org.jboss.ide.eclipse.as.core/demo@nodejs-example/deploy/ --loglevel=5
I0709 10:33:54.398747   64585 util.go:49] Found parent command: oc
I0709 10:33:54.398820   64585 util.go:55] Setting root command to: /Users/fbricon/Dev/openshift/client/1.2.0.rc2/oc
I0709 10:33:54.398827   64585 util.go:60] The sibling command is: /Users/fbricon/Dev/openshift/client/1.2.0.rc2/oc rsh
I0709 10:33:54.398854   64585 copyrsync.go:45] Rsh command: /Users/fbricon/Dev/openshift/client/1.2.0.rc2/oc rsh --insecure-skip-tls-verify=true --loglevel=5 --namespace=demo --server=https://10.1.2.2:8443 --token=U7gRx7htjSdjTpuikrMgviwSCsV06Bx_W0btTc97MVo --user=openshift-dev
I0709 10:33:54.487294   64585 copyrsync.go:75] Copying files with rsync
I0709 10:33:54.487326   64585 execlocal.go:19] Local executor running command: rsync -a --blocking-io --omit-dir-times --numeric-ids -v --exclude=.git -e /Users/fbricon/Dev/openshift/client/1.2.0.rc2/oc rsh --insecure-skip-tls-verify=true --loglevel=5 --namespace=demo --server=https://10.1.2.2:8443 --token=U7gRx7htjSdjTpuikrMgviwSCsV06Bx_W0btTc97MVo --user=openshift-dev nodejs-example-6-bmkxs:/opt/app-root/src/ /Users/fbricon/Dev/workspaces/runtime-New_configuration/.metadata/.plugins/org.jboss.ide.eclipse.as.core/demo@nodejs-example/deploy/
receiving file list ... done

sent 30 bytes  received 45666 bytes  91392.00 bytes/sec
total size is 11761506  speedup is 257.39
^[[ASIGABRT: abort
PC=0x8c128 m=0

goroutine 44 [syscall, 3 minutes]:
syscall.Syscall(0x3, 0x7, 0x823123257, 0x5a9, 0x57, 0x0, 0x112b9)
    /usr/local/go/src/syscall/asm_darwin_amd64.s:16 +0x5 fp=0x822ed5420 sp=0x822ed5418
syscall.read(0x7, 0x823123257, 0x5a9, 0x5a9, 0x57, 0x0, 0x0)
    /usr/local/go/src/syscall/zsyscall_darwin_amd64.go:972 +0x5f fp=0x822ed5460 sp=0x822ed5420
syscall.Read(0x7, 0x823123257, 0x5a9, 0x5a9, 0x0, 0x0, 0x0)
    /usr/local/go/src/syscall/syscall_unix.go:161 +0x4d fp=0x822ed54a0 sp=0x822ed5460
os.(*File).read(0x822c6e1d8, 0x823123257, 0x5a9, 0x5a9, 0x823123200, 0x0, 0x0)
    /usr/local/go/src/os/file_unix.go:228 +0x75 fp=0x822ed54e0 sp=0x822ed54a0
os.(*File).Read(0x822c6e1d8, 0x823123257, 0x5a9, 0x5a9, 0x57, 0x0, 0x0)
    /usr/local/go/src/os/file.go:95 +0x8a fp=0x822ed5538 sp=0x822ed54e0
bytes.(*Buffer).ReadFrom(0x823026d90, 0x8822c3b930, 0x822c6e1d8, 0x57, 0x0, 0x0)
    /usr/local/go/src/bytes/buffer.go:176 +0x23f fp=0x822ed55e8 sp=0x822ed5538
io.copyBuffer(0x8822c3b958, 0x823026d90, 0x8822c3b930, 0x822c6e1d8, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
    /usr/local/go/src/io/io.go:374 +0x180 fp=0x822ed56a8 sp=0x822ed55e8
io.Copy(0x8822c3b958, 0x823026d90, 0x8822c3b930, 0x822c6e1d8, 0x0, 0x0, 0x0)
    /usr/local/go/src/io/io.go:350 +0x64 fp=0x822ed5700 sp=0x822ed56a8
os/exec.(*Cmd).writerDescriptor.func1(0x0, 0x0)
    /usr/local/go/src/os/exec/exec.go:236 +0x8b fp=0x822ed5780 sp=0x822ed5700
os/exec.(*Cmd).Start.func1(0x822ee43c0, 0x822ea1fc0)
    /usr/local/go/src/os/exec/exec.go:344 +0x1d fp=0x822ed57b0 sp=0x822ed5780
runtime.goexit()
    /usr/local/go/src/runtime/asm_amd64.s:1998 +0x1 fp=0x822ed57b8 sp=0x822ed57b0
created by os/exec.(*Cmd).Start
    /usr/local/go/src/os/exec/exec.go:345 +0x967

goroutine 1 [syscall, 3 minutes]:
syscall.Syscall6(0x7, 0xfc4a, 0x8231bb54c, 0x0, 0x82311d170, 0x0, 0x0, 0x8231bb528, 0x54570, 0x8231bb520)
    /usr/local/go/src/syscall/asm_darwin_amd64.s:41 +0x5
syscall.wait4(0xfc4a, 0x8231bb54c, 0x0, 0x82311d170, 0x90, 0x0, 0x0)
    /usr/local/go/src/syscall/zsyscall_darwin_amd64.go:34 +0x7f
syscall.Wait4(0xfc4a, 0x8231bb594, 0x0, 0x82311d170, 0x822c6e1e0, 0x0, 0x0)
    /usr/local/go/src/syscall/syscall_bsd.go:129 +0x55
os.(*Process).wait(0x822d46d00, 0x66, 0x0, 0x0)
    /usr/local/go/src/os/exec_unix.go:22 +0x105
os.(*Process).Wait(0x822d46d00, 0x0, 0x0, 0x0)
    /usr/local/go/src/os/doc.go:49 +0x2d
os/exec.(*Cmd).Wait(0x822ee43c0, 0x0, 0x0)
    /usr/local/go/src/os/exec/exec.go:396 +0x211
os/exec.(*Cmd).Run(0x822ee43c0, 0x0, 0x0)
    /usr/local/go/src/os/exec/exec.go:262 +0x64
github.com/openshift/origin/pkg/cmd/cli/cmd/rsync.(*localExecutor).Execute(0x2819e48, 0x822cf2fc0, 0xb, 0xe, 0x0, 0x0, 0x8822c7c198, 0x822c6e008, 0x8822c3b958, 0x823026d90, ...)
    /go/src/github.com/openshift/origin/pkg/cmd/cli/cmd/rsync/execlocal.go:24 +0x27d
github.com/openshift/origin/pkg/cmd/cli/cmd/rsync.(*rsyncStrategy).Copy(0x822c88e40, 0x822e01d40, 0x822e01d80, 0x8822c7c198, 0x822c6e008, 0x8822c3b958, 0x823026cb0, 0x0, 0x0)
    /go/src/github.com/openshift/origin/pkg/cmd/cli/cmd/rsync/copyrsync.go:79 +0x494
github.com/openshift/origin/pkg/cmd/cli/cmd/rsync.copyStrategies.Copy(0x822ea1dc0, 0x2, 0x2, 0x822e01d40, 0x822e01d80, 0x8822c7c198, 0x822c6e008, 0x8822c7c198, 0x822c6e010, 0x0, ...)
    /go/src/github.com/openshift/origin/pkg/cmd/cli/cmd/rsync/copymulti.go:26 +0x19b
github.com/openshift/origin/pkg/cmd/cli/cmd/rsync.(*copyStrategies).Copy(0x822ea1de0, 0x822e01d40, 0x822e01d80, 0x8822c7c198, 0x822c6e008, 0x8822c7c198, 0x822c6e010, 0x0, 0x0)
    <autogenerated>:5 +0xf8
github.com/openshift/origin/pkg/cmd/cli/cmd/rsync.(*RsyncOptions).RunRsync(0x822fb3720, 0x0, 0x0)
    /go/src/github.com/openshift/origin/pkg/cmd/cli/cmd/rsync/rsync.go:232 +0x90
github.com/openshift/origin/pkg/cmd/cli/cmd/rsync.NewCmdRsync.func1(0x822f60a00, 0x8230cd2c0, 0x2, 0xa)
    /go/src/github.com/openshift/origin/pkg/cmd/cli/cmd/rsync/rsync.go:97 +0xba
github.com/spf13/cobra.(*Command).execute(0x822f60a00, 0x8230cd180, 0xa, 0xa, 0x0, 0x0)
    /go/src/github.com/openshift/origin/Godeps/_workspace/src/github.com/spf13/cobra/command.go:572 +0x85a
github.com/spf13/cobra.(*Command).ExecuteC(0x822dce200, 0x822f60a00, 0x0, 0x0)
    /go/src/github.com/openshift/origin/Godeps/_workspace/src/github.com/spf13/cobra/command.go:662 +0x53f
github.com/spf13/cobra.(*Command).Execute(0x822dce200, 0x0, 0x0)
    /go/src/github.com/openshift/origin/Godeps/_workspace/src/github.com/spf13/cobra/command.go:618 +0x2d
main.main()
    /go/src/github.com/openshift/origin/cmd/oc/oc.go:27 +0x180

goroutine 19 [chan receive]:
github.com/golang/glog.(*loggingT).flushDaemon(0x27f5fa0)
    /go/src/github.com/openshift/origin/Godeps/_workspace/src/github.com/golang/glog/glog.go:879 +0x67
created by github.com/golang/glog.init.1
    /go/src/github.com/openshift/origin/Godeps/_workspace/src/github.com/golang/glog/glog.go:410 +0x297

goroutine 38 [syscall, 3 minutes]:
os/signal.signal_recv(0x822d7a840)
    /usr/local/go/src/runtime/sigqueue.go:116 +0x132
os/signal.loop()
    /usr/local/go/src/os/signal/signal_unix.go:22 +0x18
created by os/signal.init.1
    /usr/local/go/src/os/signal/signal_unix.go:28 +0x37

goroutine 49 [IO wait, 3 minutes]:
net.runtime_pollWait(0x8822c3dfa0, 0x72, 0x823085000)
    /usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x822e067d0, 0x72, 0x0, 0x0)
    /usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0x822e067d0, 0x0, 0x0)
    /usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).Read(0x822e06770, 0x823085000, 0x800, 0x800, 0x0, 0x8822c78028, 0x822c5a080)
    /usr/local/go/src/net/fd_unix.go:250 +0x23a
net.(*conn).Read(0x822d6c598, 0x823085000, 0x800, 0x800, 0x0, 0x0, 0x0)
    /usr/local/go/src/net/net.go:172 +0xe4
crypto/tls.(*block).readFromUntil(0x82308e720, 0x8822c3e0b0, 0x822d6c598, 0x5, 0x0, 0x0)
    /usr/local/go/src/crypto/tls/conn.go:460 +0xcc
crypto/tls.(*Conn).readRecord(0x822ca0900, 0x1f38117, 0x0, 0x0)
    /usr/local/go/src/crypto/tls/conn.go:562 +0x2d1
crypto/tls.(*Conn).Read(0x822ca0900, 0x82304b000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
    /usr/local/go/src/crypto/tls/conn.go:939 +0x167
net/http.noteEOFReader.Read(0x8822c88860, 0x822ca0900, 0x82308b588, 0x82304b000, 0x1000, 0x1000, 0x4e63, 0x0, 0x0)
    /usr/local/go/src/net/http/transport.go:1683 +0x67
net/http.(*noteEOFReader).Read(0x823090460, 0x82304b000, 0x1000, 0x1000, 0x822c51d0d, 0x0, 0x0)
    <autogenerated>:284 +0xd0
bufio.(*Reader).fill(0x82307c5a0)
    /usr/local/go/src/bufio/bufio.go:97 +0x1e9
bufio.(*Reader).Peek(0x82307c5a0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0)
    /usr/local/go/src/bufio/bufio.go:132 +0xcc
net/http.(*persistConn).readLoop(0x82308b520)
    /usr/local/go/src/net/http/transport.go:1069 +0x177
created by net/http.(*Transport).dialConn
    /usr/local/go/src/net/http/transport.go:853 +0x10a6

goroutine 50 [select, 3 minutes]:
net/http.(*persistConn).writeLoop(0x82308b520)
    /usr/local/go/src/net/http/transport.go:1273 +0x472
created by net/http.(*Transport).dialConn
    /usr/local/go/src/net/http/transport.go:854 +0x10cb

rax    0x2000003
rbx    0x822c41380
rcx    0x822ed5418
rdx    0x5a9
rdi    0x7
rsi    0x823123257
rbp    0x0
rsp    0x822ed5418
r8     0x0
r9     0x0
r10    0x0
r11    0x206
r12    0x553d56c8
r13    0xa
r14    0x1
r15    0x8
rip    0x8c128
rflags 0x206
cs     0x7
fs     0x0
gs     0x0

ps ax inside the container:

sh-4.2$ ps ax                                                                   
  PID TTY      STAT   TIME COMMAND                                              
    1 ?        Ssl    0:01 node /opt/rh/nodejs010/root/usr/bin/nodemon --debug=8
   23 ?        Sl     0:00 node --debug=8787 server.js                          
  102 ?        Ss     0:00 /bin/sh -i                                           
  112 ?        R+     0:00 ps ax    

@rhcarvalho
Contributor

Looking at the logs, it seems to me that what hung was the rsync process running on the host.
oc hung on the external call to rsync via the os/exec package.
There's no rsync process running in the container, but there should be one running/hung on the host.
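
To confirm that, while an oc rsync is stuck one could check for the hung processes on the host (a sketch):

# look for the local rsync process and the 'oc rsh' transport it spawned
# (the brackets keep grep from matching itself)
ps ax | grep -E '[r]sync|[o]c rsh'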

I did find some people reporting problems with rsync hangs.

@csrwng
Contributor

csrwng commented Jul 13, 2016

The hang could also be happening in the second 'oc' process that is initiated by rsync, since we're using 'oc rsh' as transport. Unfortunately, I haven't been able to reproduce this locally.

@fbricon One possible solution would be to expose a --timeout flag that can be passed to the rsync invocation. Would this be an acceptable solution?

@csrwng
Contributor

csrwng commented Jul 20, 2016

@fbricon bump

@fbricon
Author

fbricon commented Jul 21, 2016

Sorry @csrwng, I thought I replied already, but apparently forgot to hit the green button.

So, a timeout flag could be useful in any case, but it's going to be hard to determine a proper value for it.
Since this seems to be a common problem on OS X, I don't see a better solution, though.

@csrwng
Contributor

csrwng commented Jul 21, 2016

Thanks @fbricon, I'll put together a PR so you can try it out and see if it works for you. The --timeout argument will apply only to time during which there's no I/O, so I'm thinking it shouldn't be that hard to come up with a reasonable default.
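
For context, rsync itself has a --timeout option with that behavior: it aborts only after the given number of seconds with no data transferred, not after a fixed total duration. The flag would presumably be forwarded roughly like this (a sketch; the pod name is from the logs above, and the local path and oc rsh flags are abbreviated):

rsync -a --blocking-io --timeout=30 -e "oc rsh <flags>" nodejs-example-6-bmkxs:/opt/app-root/src/ <local dir>/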

@fbricon
Author

fbricon commented Jul 21, 2016

Yeah, that's what I was thinking too. Still, if rsyncing a large file takes too long, it's not going to play well.

@csrwng
Contributor

csrwng commented Jul 22, 2016

Thanks @fbricon for trying it out. The hang is still happening, since rsync seems to ignore the --timeout when delegating to the transport call, which is ultimately what's stuck.

Thread dump from oc rsh (invoked by rsync -e 'oc rsh'):
https://paste.fedoraproject.org/394045/16791146/

@ncdc can you please take a look at that thread dump and see if there's something that you recognize that could be going on?

@ncdc
Contributor

ncdc commented Jul 22, 2016

The thread dump shows that the remote command execution has finished (goroutine 38 [chan send, 4 minutes]:), but for some reason there is still one io.Copy goroutine holding the sync.WaitGroup open before the exec invocation can complete (goroutine 42 [select, 4 minutes]:). Note that the other io.Copy, from goroutine 39 [syscall, 4 minutes]:, is not part of the wait group.

I don't know offhand why the node would close the error stream but keep the stderr stream open (which is what's happening here).

I would recommend running oc rsync with the env var DEBUG=1 set, and doing the same for the openshift node process. If you can reproduce this, please pastebin logs from both the rsync run and the node. Another thread dump from rsync, as well as one from the node, would also be useful. (To get the node thread dump, run it with the env var OPENSHIFT_PROFILE=web, then on the node itself curl http://localhost:6060/debug/pprof/goroutine?debug=2.)
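
A rough sketch of that setup; the pod name, local path, and node start command are placeholders and depend on the installation:

# client side: verbose oc rsync with the DEBUG env var set
DEBUG=1 oc rsync --loglevel=5 <pod>:/opt/app-root/src/ <local dir>/

# node side: restart the node process with profiling enabled (exact command depends on the install)
OPENSHIFT_PROFILE=web openshift start node --config=<node-config.yaml>

# while the rsync hangs, grab a goroutine dump on the node itself
curl http://localhost:6060/debug/pprof/goroutine?debug=2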

@jeffmaury
Member

I was able to reproduce the problem without MacOS.
Here are the steps:

  1. Start the CDK (vagrant up)
  2. SSH into the CDK (vagrant ssh)
  3. Create a folder: mkdir rsync
  4. cd into that dir: cd rsync
  5. Log into OpenShift: oc login
  6. Create the nodejs-example app: oc new-app nodejs-example
  7. Wait for the pod to be available: oc get pods
  8. Start a first rsync: oc rsync podid:/opt/app-root/src/ .
  9. Remove the transferred files: rm -rf *
  10. Go back to step 8 until the command blocks (a loop sketch follows below)
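
A minimal loop for steps 8 to 10, run from inside the rsync directory (the pod id below is a placeholder for whatever oc get pods reports):

POD=nodejs-example-1-abcde   # placeholder pod id
while true; do
  oc rsync "$POD":/opt/app-root/src/ .
  rm -rf ./*
done
# when the bug reproduces, the loop stops making progress inside the oc rsync call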

@csrwng
Contributor

csrwng commented Aug 2, 2016

@fbricon or @jeffmaury are you able to reproduce with the debug information that @ncdc requested? Any help in tracking this down is greatly appreciated.

@csrwng
Contributor

csrwng commented Aug 11, 2016

@fbricon I'm still not able to reproduce this bug on my end. I will close it for now. If you get a chance to run with the debug env var on and reproduce it, please reopen it so we can further investigate.

@csrwng csrwng closed this as completed Aug 11, 2016
@csrwng csrwng reopened this Aug 22, 2016
@fbricon
Author

fbricon commented Aug 22, 2016

@ncdc Sorry for the late reply, but this is the debugging info I got with help from @csrwng:

https://gist.github.com/fbricon/f387246d6e070a99466ab30486d540e3

@csrwng
Contributor

csrwng commented Aug 24, 2016

@ncdc with the server-side stack dump, I can see the API server copying from the client to the kubelet. However, I don't see any evidence of a connection open on the kubelet side:

goroutine 7512 [IO wait]:
net.(*pollDesc).Wait(0xc20cafdd40, 0x72, 0x0, 0x0)
    /usr/lib/golang/src/net/fd_poll_runtime.go:84 +0x47
net.(*pollDesc).WaitRead(0xc20cafdd40, 0x0, 0x0)
    /usr/lib/golang/src/net/fd_poll_runtime.go:89 +0x43
net.(*netFD).Read(0xc20cafdce0, 0xc210bb3000, 0x800, 0x800, 0x0, 0x7fc897f8eda8, 0xc20ea80598)
    /usr/lib/golang/src/net/fd_unix.go:242 +0x40f
net.(*conn).Read(0xc20ef371e8, 0xc210bb3000, 0x800, 0x800, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/net/net.go:121 +0xdc
crypto/tls.(*block).readFromUntil(0xc210fa62a0, 0x7fc894583690, 0xc20ef371e8, 0x5, 0x0, 0x0)
    /usr/lib/golang/src/crypto/tls/conn.go:454 +0xe6
crypto/tls.(*Conn).readRecord(0xc20f989600, 0x17, 0x0, 0x0)
    /usr/lib/golang/src/crypto/tls/conn.go:539 +0x2da
crypto/tls.(*Conn).Read(0xc20f989600, 0xc2127be000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/crypto/tls/conn.go:904 +0x166
io.Copy(0x7fc894583748, 0xc20f9898c0, 0x7fc894583770, 0xc20f989600, 0x1c53, 0x0, 0x0)
    /usr/lib/golang/src/io/io.go:362 +0x1f6
k8s.io/kubernetes/pkg/registry/generic/rest.func·001()
    /builddir/build/BUILD/atomic-openshift-git-0.a4463d9/_thirdpartyhacks/src/k8s.io/kubernetes/pkg/registry/generic/rest/proxy.go:170 +0x15e
created by k8s.io/kubernetes/pkg/registry/generic/rest.(*UpgradeAwareProxyHandler).tryUpgrade
    /builddir/build/BUILD/atomic-openshift-git-0.a4463d9/_thirdpartyhacks/src/k8s.io/kubernetes/pkg/registry/generic/rest/proxy.go:175 +0x695

@fbricon what version of the server are you running?

@csrwng csrwng removed their assignment Aug 24, 2016
@fbricon
Author

fbricon commented Aug 25, 2016

For the CLI, I use oc v1.3.0-Alpha3.

And then the web console tells me:
OpenShift Master: v3.2.0.44
Kubernetes Master: v1.2.0-36-g4a3f9c5

That's the version of OpenShift shipped with the official CDK 2.1 release.

@csrwng
Contributor

csrwng commented Aug 25, 2016

@fbricon I'm wondering if it's an issue with the back-end. Would you be willing to try a different back-end with the latest origin build? I will e-mail you the details.

@ncdc
Contributor

ncdc commented May 22, 2017

This hasn't been updated in a long time, so I'm going to close it. Feel free to reopen if necessary.

@ncdc ncdc closed this as completed May 22, 2017