
memory leakage #554

Closed
1a1a11a opened this issue Mar 9, 2019 · 13 comments
1a1a11a commented Mar 9, 2019

Hi there,
Thank you for this great project! It is amazing. However, I am running into a memory leak. I use PipelinedClient: I acquire a request, use it to send a request to the server, copy the body over, then release both the request and the response. Am I missing anything, and how would you suggest tracking down the leak? My program runs OOM on a 64 GB server if I keep sending requests.

@erikdubbelboer
Collaborator

Can you share some code?

Getting a memory leak in Go is almost impossible thanks to the garbage collector. It's even more unlikely with fasthttp, since it tries hard to avoid heap allocations.

@1a1a11a
Author

1a1a11a commented Mar 11, 2019

It is hard to simplify the codebase. Right now I am forcing a GC before the handler returns (for debugging); it no longer OOMs quickly, but memory still grows slowly.
Do you have any suggestions for locating the source of the potential memory leak (if there is one)? I have tried pprof, but it did not give me any useful information (or I did not figure out how to use it for this).

@1a1a11a
Author

1a1a11a commented Mar 13, 2019

I am still experiencing the memory leak. Is it possible that some data is kept alive after the handler returns?

@1a1a11a
Author

1a1a11a commented Mar 13, 2019

Here is a simplified case where fasthttp uses a huge amount of memory: requesting a 2 GB file uses more than 10 GB of server memory.

```go
func handler(ctx *fasthttp.RequestCtx) {
	// Parse the object size out of the request path (second path
	// segment, with the size after an underscore).
	s := bytes.Split(ctx.Path(), []byte{'/'})
	reqContent := s[2]
	s2 := bytes.Split(reqContent, []byte{'_'})
	objSize, _ := strconv.Atoi(string(s2[1]))

	index := 0
	req := fasthttp.AcquireRequest()
	resp := fasthttp.AcquireResponse()
	defer fasthttp.ReleaseRequest(req)
	defer fasthttp.ReleaseResponse(resp)
	url := fmt.Sprintf("http://anotherServer%s/%s", hosts[index], string(reqContent))
	req.SetRequestURI(url)

	// Fetch the object from the backend with the pipelined client.
	if err := pipelineClients[index].Do(req, resp); err != nil {
		SugarLogger.Fatal("Error when sending request ", index, " ", hosts[index], ", : ", err)
	}
	if resp.StatusCode() != fasthttp.StatusOK {
		SugarLogger.Fatalf("unexpected status code: %d. Expecting %d url %s hostInd %d httpRequester addr %s",
			resp.StatusCode(), fasthttp.StatusOK, url, index, pipelineClients[index].Addr)
	}

	var respContent []byte
	//respContent = make([]byte, objSize)
	//copy(respContent, resp.Body())
	respContent = resp.Body()

	fmt.Println(objSize, len(respContent))

	ctx.SetBody(respContent)

	runtime.GC()
}
```

@erikdubbelboer
Collaborator

fasthttp isn't built for such huge requests, I'm afraid. fasthttp gets its speed from reusing buffers everywhere. If your response is 2 GB, that means we'll keep the 2 GB buffer in memory to reuse for another request/response in the future.

fasthttp also always loads full bodies into memory before doing anything with them. With such huge responses it would be much better to stream the response back to the client. net/http supports this, so I suggest you use that.

@1a1a11a
Author

1a1a11a commented Mar 14, 2019

Hi Erik, thank you for your suggestions! I have migrated the HTTP client to net/http. But do you think fasthttp will be fine on the server side if I use net/http to fetch the content (with streaming) from another server and then stream it to the client?

My use case is the following: I use fasthttp on the server side; when a request comes in, the handler requests some other content from other servers depending on the request (not simple routing) and then serves it to the client.
In terms of request size, I don't always have such large files; 2 GB was only used to expose the memory problem. The average response size is around 200 KB, but occasionally there are responses of a few hundred MB.

@erikdubbelboer
Collaborator

Yes, you should use RequestCtx.SetBodyStream to stream the body to the client.

If the files aren't that big, there usually shouldn't be a problem. Memory usage might grow at first as buffers are allocated, but once you have a stable pool of buffers that can be reused, memory usage should stay the same.

@1a1a11a
Author

1a1a11a commented Mar 14, 2019 via email

@erikdubbelboer
Collaborator

It also depends on how many requests per second you handle. But with those sizes this should only take a couple of hundred MB at most. This sounds to me like there is some other issue in your code, where you perhaps keep references to old data somewhere?

@1a1a11a
Author

1a1a11a commented Mar 14, 2019 via email

@1a1a11a
Author

1a1a11a commented Mar 14, 2019

A new problem: I use a pipe to stream from fasthttp to the client, but writing to the pipe blocks. Any idea why?

```go
piper, pipew := io.Pipe()
ctx.SetBodyStream(piper, -1)
// do something and write to pipew, but writing to pipew blocks
```

@erikdubbelboer
Collaborator

Yes, you have to write to the pipe in another goroutine. fasthttp won't start reading from the pipe until the request handler returns; only then does it begin writing the response headers and body to the client.

@1a1a11a
Author

1a1a11a commented Mar 15, 2019

Ah, right, you mentioned it in the doc. Thank you!
I think it would be better to mention it again in the SetBodyStream documentation. :)
