A lot of files in /var/lib/bblfshd/tmp/ #168

Open · EgorBu opened this issue Jun 6, 2018 · 8 comments

EgorBu commented Jun 6, 2018

Hi,

I noticed that the directory /var/lib/bblfshd/tmp/ has too many files:

egor@science-3 ~ $ sudo du -csh /var/lib/bblfshd/*
4.5M    /var/lib/bblfshd/containers
983M    /var/lib/bblfshd/images
3.5G    /var/lib/bblfshd/tmp
egor@science-3 ~ $ ls /var/lib/bblfshd/tmp/ | wc -l
57976

Do you know what causes this? Should these files be removed automatically?

juanjux (Contributor) commented Jun 6, 2018

As @dennwc said in Slack, these files (gRPC sockets) should probably be created in /tmp, since there is no need to persist them.
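
For illustration, a minimal sketch of that idea (the directory prefix, socket name, and helper below are assumptions, not bblfshd's actual code): create each driver's gRPC socket in a per-driver directory under the system temp dir, and remove it when the driver stops.

package main

import (
	"io/ioutil"
	"net"
	"os"
	"path/filepath"
)

// newDriverSocket is a hypothetical helper: it creates a per-driver
// directory under the system temp dir and listens on a Unix socket there.
// It returns the listener and a cleanup function that removes the socket.
func newDriverSocket() (net.Listener, func(), error) {
	dir, err := ioutil.TempDir("", "bblfshd-driver-")
	if err != nil {
		return nil, nil, err
	}
	sock := filepath.Join(dir, "rpc.sock")
	l, err := net.Listen("unix", sock)
	if err != nil {
		os.RemoveAll(dir)
		return nil, nil, err
	}
	// cleanup closes the listener and removes the per-driver directory.
	cleanup := func() {
		l.Close()
		os.RemoveAll(dir)
	}
	return l, cleanup, nil
}

func main() {
	l, cleanup, err := newDriverSocket()
	if err != nil {
		panic(err)
	}
	defer cleanup()
	println("listening on", l.Addr().String())
}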

smola (Member) commented Jun 18, 2018

On the other hand, why would we have 57976 grpc sockets without cleanup? Aren't these removed when they are closed?

juanjux (Contributor) commented Jun 18, 2018

I think last time I checked they were, but I'm not 100% sure. I also don't know whether bblfshd/libcontainer has any chance to clean things up when it restarts or kills a container.
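
For illustration, a hedged sketch of what such a cleanup could look like (the tmp path and the per-driver rpc.sock layout are assumptions based on the paths mentioned in this thread): on startup, sweep the state directory and remove any socket files left behind by a previous run.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// sweepStaleSockets walks root and removes leftover driver socket files.
// Matching on the file name "rpc.sock" is an assumption about the layout.
func sweepStaleSockets(root string) error {
	return filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if !info.IsDir() && info.Name() == "rpc.sock" {
			fmt.Println("removing stale socket:", path)
			return os.Remove(path)
		}
		return nil
	})
}

func main() {
	if err := sweepStaleSockets("/var/lib/bblfshd/tmp"); err != nil {
		panic(err)
	}
}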

creachadair (Contributor) commented Dec 12, 2018

Closing a Unix-domain socket does not necessarily remove it, cf. https://gist.github.com/creachadair/214e46db4a0b891d3e9b52ff8bd861d5

Also, regarding /tmp vs. other locations, generally /tmp doesn't get cleaned up except when the system reboots, so probably we can't rely on that for a long-running container.
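
A standalone sketch of that behavior (not taken from the linked gist): binding a Unix-domain socket creates a filesystem entry, and closing the descriptor leaves that entry behind; it only goes away when it is unlinked explicitly.

package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	path := "/tmp/demo-rpc.sock" // illustrative path, not bblfshd's

	os.Remove(path) // make sure a leftover from a previous run doesn't break Bind

	// Create and bind a Unix-domain socket at the given path.
	fd, err := syscall.Socket(syscall.AF_UNIX, syscall.SOCK_STREAM, 0)
	if err != nil {
		panic(err)
	}
	if err := syscall.Bind(fd, &syscall.SockaddrUnix{Name: path}); err != nil {
		panic(err)
	}

	// Closing the descriptor does NOT remove the socket file.
	syscall.Close(fd)
	if _, err := os.Stat(path); err == nil {
		fmt.Println("socket file still exists after close:", path)
	}

	// The file only disappears once it is unlinked explicitly.
	os.Remove(path)
}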

juanjux (Contributor) commented Dec 13, 2018

@EgorBu can you confirm that this is still happening? I remember making some changes to how sockets were closed some time ago, and my tmp is pretty clean after running several drivers.

smola (Member) commented Dec 13, 2018

I can reproduce this with the bblfsh/bblfshd:v2.11.0-drivers image:

  1. Start bblfshd: docker run --privileged -d -p 9432:9432 --name burn-bblfsh bblfsh/bblfshd:v2.11.0-drivers
  2. Run N threads with a loop doing parse requests. Use N = NumCPU. You might not be able to reproduce this issue if N <= 2 or N > NumCPU. See source code below.
  3. Checking sockets with docker exec -it burn-bblfsh find /var/lib/bblfshd/tmp/ -name 'rpc.sock'|wc -l, I observe an ever-growing number of dangling socket files (currently 214 sockets after 1 million requests).

Note that by running bblfshctl instances I can confirm that #194 can be reproduced with the same steps. They are two different issues, but their combination aggravates both of them.

Code:

package main

import (
	"fmt"
	"runtime"
	"sync"
	"sync/atomic"

	bblfsh "gopkg.in/bblfsh/client-go.v2"
)

func main() {
	client, err := bblfsh.NewClient("0.0.0.0:9432")
	if err != nil {
		panic(err)
	}

	wg := sync.WaitGroup{}
	var count int32
	// Spawn one worker per CPU, each issuing parse requests in a tight loop.
	for i := 0; i < runtime.NumCPU(); i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for {
				code := `package main`
				// Use a goroutine-local err to avoid a data race on the
				// err variable declared in main.
				_, _, err := client.
					NewParseRequest().
					Language("go").
					Content(code).
					UAST()
				if err != nil {
					panic(err)
				}

				newCount := atomic.AddInt32(&count, 1)
				if newCount%100 == 0 {
					fmt.Printf("Requests: %d\n", newCount)
				}
			}
		}()
	}
	wg.Wait()
}

smola (Member) commented Dec 13, 2018

You can reproduce it with N > NumCPU too; I think that just reduces the driver instance churn rate a little bit.

juanjux (Contributor) commented Dec 13, 2018

Thanks @smola, this will be very useful.
