Possible memory leak in Go agent #906

Open
betabandido opened this issue Apr 29, 2024 · 9 comments

@betabandido

Description

We have several Go applications that use the New Relic Go agent, and they work well. One of them, however, appears to be suffering from a memory leak. The agent setup is very similar across these applications, which might suggest the problem lies elsewhere, but we have run Go's pprof tool and everything points to an issue with the NR Go agent. We are trying to create a reproducible example; in the meantime we wanted to open this issue and add as much information to it as possible, in the hope of getting the wheels turning toward a solution.
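
For reference, the agent initialization in these applications follows the usual pattern. Below is a minimal sketch; the app name, the license-key environment variable, and the log-forwarding flag are illustrative rather than our exact configuration:

package example

import (
    "os"
    "time"

    "github.com/newrelic/go-agent/v3/newrelic"
)

// newApp initializes the agent the way our services typically do: app name,
// license key from the environment, and in-agent log forwarding enabled.
func newApp() (*newrelic.Application, error) {
    app, err := newrelic.NewApplication(
        newrelic.ConfigAppName("example-service"),
        newrelic.ConfigLicense(os.Getenv("NEW_RELIC_LICENSE_KEY")),
        newrelic.ConfigAppLogForwardingEnabled(true),
    )
    if err != nil {
        return nil, err
    }
    // Block until the agent has connected, so early traffic is not dropped.
    if err := app.WaitForConnection(10 * time.Second); err != nil {
        return nil, err
    }
    return app, nil
}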

Steps to Reproduce

We cannot reproduce the issue outside of this particular application (yet).

Expected Behavior

Memory should not be increasing over time.

NR Diag results

The following chart shows the memory usage of the application running on our Kubernetes cluster.

[image: memory usage chart for the application]

As the image shows, memory keeps increasing until either we deploy a new release or the Kubernetes cluster kills the app for consuming too much memory.

Your Environment

A Go application compiled with Go 1.21, running on a Kubernetes cluster (EKS 1.27).

New Relic Go agent version 3.32.0.

Reproduction case

We are working on trying to reproduce the issue.

Additional context

Here you can see two outputs from pprof. The first one is from last Friday (Apr 26th), and the second one is from today (Apr 29th). As you can see, the heap memory in use within the StoreLog method has doubled (from ~40 MB to ~80 MB). A generic sketch of how such profiles can be exposed and compared follows the list.

  1. Apr 26th
    [image: pprof heap profile from Apr 26th (profile001)]

  2. Apr 29th
    [image: pprof heap profile from Apr 29th (profile002)]
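
For completeness, this is one common way to expose heap profiles from a running pod and compare two snapshots. It is a generic sketch (the port and file names are arbitrary), not necessarily the exact wiring in our service:

package main

import (
    "log"
    "net/http"
    _ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
)

func main() {
    // Fetch a heap snapshot with:
    //   go tool pprof http://localhost:6060/debug/pprof/heap
    // and compare two saved snapshots with:
    //   go tool pprof -base heap-apr26.pb.gz heap-apr29.pb.gz
    log.Println(http.ListenAndServe("localhost:6060", nil))
}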

@iamemilio
Contributor

iamemilio commented Apr 29, 2024

Hi @betabandido. Before we flag this as a memory leak, we need to point out that, as a feature of the Go agent, logs are stored on their transaction until that transaction ends. Transactions that run for a long time will accumulate all sorts of memory, and logs happen to be particularly large. We do it this way because of sampling in the agent: the weight of a sampled transaction is not calculated until it finishes, so rather than risk dropping critical data, we hold onto it. All logs associated with a transaction are kept in memory until that transaction ends. This is currently a design requirement for all agents.
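
As an illustration of that behavior, here is a minimal sketch using the public Transaction.RecordLog API; the function, loop, and log contents are made up and not taken from the reported application:

package example

import (
    "time"

    "github.com/newrelic/go-agent/v3/newrelic"
)

// Every log recorded against a transaction is buffered by the agent until
// txn.End() runs, because the transaction's sampling weight is only known
// at that point. A long-running transaction therefore holds all of its
// logs in memory for its entire lifetime.
func slowWork(app *newrelic.Application) {
    txn := app.StartTransaction("slow-work")
    defer txn.End() // logs recorded below are only released after this

    for i := 0; i < 1000; i++ {
        txn.RecordLog(newrelic.LogData{
            Severity: "info",
            Message:  "still working", // held in memory while the transaction is open
        })
        time.Sleep(10 * time.Millisecond)
    }
}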

@betabandido
Author

betabandido commented Apr 29, 2024

Thanks @iamemilio! That makes sense. Yet it seems we have neither of the following:

  • long-lasting transactions
  • a continuously increasing number of transactions

Therefore, I would expect StoreLog to consume a certain amount of memory, but not to increase its memory consumption over time.

I collected some data on how long the transactions for this particular app are with:

SELECT average(duration * 1000), percentile(duration * 1000, 99), max(duration * 1000)
FROM Transaction 
WHERE appName = '<our app name>'
SINCE 7 days ago

This is what I got:

  • average: 11.7 ms
  • 99th percentile: 78 ms
  • max: 3360 ms

No really long-lasting transactions here.

For the number of transactions, I used:

SELECT count(*)
FROM Transaction 
WHERE appName = '<our app name>'
SINCE 7 days ago
TIMESERIES 1 hour

and I got the following chart:

[image: transaction count per hour over the past 7 days]

There is a cyclic pattern, but I cannot see an ever-growing number of transactions between April 26th and 29th.

I acknowledge this might be a difficult issue to debug without a reproducible example. But, while we try to come up with one, please let us know how else we can help you gather more data to pinpoint the root cause.

@betabandido
Author

@iamemilio This morning I just got some more data with pprof. Memory consumption in StoreLog went from ~80 MB to ~100 MB (see diagram below). Eventually the pods will get killed, and the process will start over again.

[image: pprof heap profile (profile003)]
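
In case it helps the investigation, we could automate the snapshots along these lines (a rough sketch; the file names and the one-hour interval are arbitrary), and then diff successive files with go tool pprof -base:

package example

import (
    "fmt"
    "os"
    "runtime"
    "runtime/pprof"
    "time"
)

// dumpHeapProfiles writes a heap profile every hour so that growth over
// several days can later be compared, e.g.:
//   go tool pprof -base heap-0000.pb.gz heap-0048.pb.gz
func dumpHeapProfiles() {
    for i := 0; ; i++ {
        time.Sleep(time.Hour)
        runtime.GC() // run a GC so the profile reflects current live objects
        f, err := os.Create(fmt.Sprintf("heap-%04d.pb.gz", i))
        if err != nil {
            continue
        }
        _ = pprof.WriteHeapProfile(f)
        f.Close()
    }
}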

@mirackara
Contributor

Hi @betabandido. Thanks for the profiling data and for bringing this to our attention (especially the last couple of updates!). We will continue investigating this, but it may take a while to identify. If a reproducible example is possible, that would be greatly appreciated and would help us pinpoint the issue faster. We'll keep monitoring this thread in the meantime.

Thanks!

@betabandido
Author

Hi @iamemilio @mirackara. We have come up with a reproducible example that we can share. What would be the preferred way to do so?

@mirackara
Contributor

Hi @betabandido. A public repository would be great if possible.

@iamemilio
Contributor

iamemilio commented May 13, 2024

If you would prefer not to share code publicly, please get in touch with New Relic support and request assistance with the Go agent. A support engineer will handle your case, and we will be able to communicate privately from there. That said, the issue is already public, so as long as the reproducer is not sensitive to you, we don't mind it being posted somewhere public, or even inline in this thread.

@betabandido
Author

@mirackara @iamemilio I have created this repo: https://github.com/betabandido/nrml

I've also added some instructions that hopefully will help. Please let me know if you have any doubts or questions.

@mirackara
Contributor

Hi @betabandido

First off, huge kudos to you + team. This reproducer is very detailed and will be a massive help for us. We don't have an ETA on this yet, but we are tracking this internally and will give the investigation the time it deserves. If we have any updates/questions we'll post them in this thread.

Thank You!
