
Memory leakage #316

Closed
Akado2009 opened this issue Oct 7, 2020 · 11 comments

@Akado2009

Hi,

So I have a program that reads data from Kafka and pushes it to ClickHouse. Here is the function that establishes a connection.

func NewConnection(ch models.CHStatBase) (sdp *sqlx.DB, err error) {
	sdpStr := fmt.Sprintf("tcp://%s:%d?username=%s&password=%s&database=%s&debug=false&compress=true&pool_size=20", ch.Server, ch.Port, os.Getenv("CLICKHOUSE_USER"), os.Getenv("CLICKHOUSE_PASSWORD"), ch.Database)
	sdp, err = sqlx.Connect("clickhouse", sdpStr)
	return sdp, err
}

Here is a function to insert the data.

	tx, err := chDB.Begin()
	if err != nil {
		return err
	}

	stmt, err := tx.Prepare(fmt.Sprintf(`INSERT INTO %s
		(EventDate,	EventDateTime)
		VALUES (?, ?)`, table))
	if err != nil {
		return err
	}
	defer stmt.Close()

	for _, evd := range evds {
		if _, err = stmt.Exec(evd.LastVisit, evd.LastVisit); err != nil {
			return err
		}
	}

	err = tx.Commit()

	return err

So there is a memory leak problem.
Here is the in-use heap memory from pprof:

File: kafka-consumer
Build ID: 1760c52b6ea4f03c44eddf2dd30c1a553e513f14
Type: inuse_space
Time: Oct 7, 2020 at 6:54am (MSK)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top
Showing nodes accounting for 5347.94MB, 99.15% of 5393.71MB total
Dropped 121 nodes (cum <= 26.97MB)
Showing top 10 nodes out of 53
      flat  flat%   sum%        cum   cum%
 2668.40MB 49.47% 49.47%  2668.40MB 49.47%  github.com/ClickHouse/clickhouse-go/lib/binary.NewCompressWriter (inline)
 2242.72MB 41.58% 91.05%  2242.72MB 41.58%  github.com/ClickHouse/clickhouse-go/lib/binary.NewCompressReader (inline)
  186.01MB  3.45% 94.50%   186.01MB  3.45%  encoding/json.(*decodeState).literalStore
  102.04MB  1.89% 96.39%  1471.23MB 27.28%  main.runOldEVHandler.func1
   77.52MB  1.44% 97.83%   317.95MB  5.89%  main.runOldPVHandler.func1
   61.39MB  1.14% 98.97%    61.39MB  1.14%  github.com/ClickHouse/clickhouse-go/lib/leakypool.GetBytes
    5.52MB   0.1% 99.07%   240.43MB  4.46%  main.runOldPVHandler.func1.1
    1.84MB 0.034% 99.11%  1369.19MB 25.39%  main.runOldEVHandler.func1.1
    1.50MB 0.028% 99.13%   474.59MB  8.80%  github.com/ClickHouse/clickhouse-go.(*stmt).ExecContext
       1MB 0.019% 99.15%  1367.35MB 25.35%  git.wildberries.ru/statistics/kafka-consumer/service.InsertEvents

And it keeps growing. Any ideas on how to fix it?

@Akado2009
Author

@kshvakov

@kshvakov
Collaborator

Hi, the driver has no memory leak. The driver uses a buffer pool for each connection. You can run into problems if you didn't set the maximum open connections and maximum idle connections. Also check your driver settings: https://github.com/ClickHouse/clickhouse-go#dsn
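
For reference, those pool limits are set on the *sql.DB handle through the standard database/sql API. A minimal sketch, assuming the clickhouse driver is registered by the blank import; the DSN and the concrete limits are illustrative, not values recommended in this thread:

package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/ClickHouse/clickhouse-go"
)

func main() {
	db, err := sql.Open("clickhouse", "tcp://127.0.0.1:9000?compress=true")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Bound the pool so idle connections (and their per-connection
	// compression buffers) do not accumulate.
	db.SetMaxOpenConns(10)
	db.SetMaxIdleConns(5)
	db.SetConnMaxLifetime(time.Hour)
}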

@nickiv

nickiv commented Jan 28, 2021

Hi, @Akado2009 @kshvakov. I suppose the issue here is that connect.Close() is missing from the first example in this repo's readme. So if you copied that example into your code, the connection is never released back to the pool and memory keeps leaking. Just add defer connect.Close() after sql.Open and the leak goes away.
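
The fix nickiv describes, sketched against the readme-style snippet (a fragment only; the DSN is a placeholder, and this only matters if sql.Open is called repeatedly, as the following comments clarify):

	connect, err := sql.Open("clickhouse", "tcp://127.0.0.1:9000?debug=false")
	if err != nil {
		return err
	}
	// Without this, each call leaks a handle and its compression buffers.
	defer connect.Close()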

@kshvakov
Collaborator

@nickiv you must use sql.Open only once in your app

@nickiv

nickiv commented Jan 28, 2021

@kshvakov you are right, and the documentation says so. I mistakenly called it many times, and the symptoms were exactly what @Akado2009 described in this issue (the same top memory consumers, NewCompressWriter and NewCompressReader). connect.Close() eliminates the leak, but it is still better to rewrite the code so sql.Open is called only once!

Thanks!

@jinxing3114

@nickiv Hi, I also ran into a similar problem. The heap keeps growing; I close the connection, but the heap memory never shrinks. How can I solve this?

@nickiv

nickiv commented May 18, 2021

@jinxing3114 how many times is sql.Open called in your code? If you call sql.Open more than once (e.g. in a loop or per event), then your leak could be related to this issue. Rewrite the code so it is called once, on program initialization. If not, then it must be something else; please use pprof to investigate what is consuming the memory. A sketch of the "open once" pattern follows.
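
A minimal sketch of the "open once" pattern, assuming a package-level handle (the names chDB and initDB are illustrative, not code from this thread):

var chDB *sql.DB // opened once at startup, shared by all handlers; safe for concurrent use

func initDB() error {
	var err error
	chDB, err = sql.Open("clickhouse", "tcp://127.0.0.1:9000")
	if err != nil {
		return err
	}
	// sql.Open does not dial; Ping verifies the DSN and connectivity.
	return chDB.Ping()
}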

@Akado2009
Author

@jinxing3114 Hi! I suggest you check sql.Open first. Secondly, as for my issue: I was clearing a slice with sl = sl[:0] and then filling it again. But that doesn't actually release the elements; the memory they occupy is still held by the backing array. Over time the memory used by this slice grew from 1 KB to 2 GB. Good luck!
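
A small illustration of the slice behaviour described above (event, batch and flush are made-up names, not code from this issue):

type event struct{ payload [4096]byte }

var batch []event

func flush() error {
	// ... insert batch into ClickHouse ...

	// Re-slicing keeps the backing array (sized for the largest batch so
	// far) reachable, so the heap stays at its high-water mark:
	batch = batch[:0]
	// Setting batch = nil instead would let the GC reclaim the array, at
	// the cost of re-allocating on the next batch.
	return nil
}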

@jinxing3114

jinxing3114 commented May 19, 2021

@nickiv sql.Open is executed only once. The results I see in pprof are almost the same: NewCompressWriter and NewCompressReader occupy most of the memory, and the heap keeps growing. My code is similar to #360 (comment). After initialization it continuously performs insert queries and other operations, and I only ever see the heap grow.

conn, err = sql.Open("clickhouse", "tcp://127.0.0.1:9000?debug=false")
if err != nil {
	return err
}
conn.SetMaxOpenConns(5)
conn.SetMaxIdleConns(1)
conn.SetConnMaxLifetime(time.Minute)

@FingerLiu

Hi @jinxing3114, any update on this?

@kshvakov
Collaborator

Can't reproduce in v2: stress.go
