Memory leak in ExecuteNonQuery #1650
In the case of massive calls to ExecuteNonQuery, the amount of memory needed by the app keeps increasing.
For the attached code, the memory usage is as follows:
Two SQL tables were created (CTS added): table1 and table2, and fed with data so that the IDs run from 1 to at least 5K (not all IDs are present).
If your query takes longer than ~5 ms, then you might have an ever-growing number of threads blocked on the lock. That would explain why you see memory grow faster with more database activity.
You can investigate this using a thread-safe counter around the lock:
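For example (a sketch only; `GuardedInsert`, `Waiting`, and the other names here are my own illustrations, not taken from the attached code):

```csharp
using System;
using System.Threading;

static class LockWaitCounter
{
    static readonly object DbLock = new object();
    static int waiting;   // threads currently at (or inside) the lock

    public static int Waiting => waiting;

    // Hypothetical wrapper around the locked database call: bump the
    // counter before competing for the lock, drop it when done.
    public static void GuardedInsert(Action insert)
    {
        int atLock = Interlocked.Increment(ref waiting);
        if (atLock > 1)
            Console.WriteLine($"contention: {atLock} threads at the lock");
        try
        {
            lock (DbLock) insert();
        }
        finally
        {
            Interlocked.Decrement(ref waiting);
        }
    }
}
```

If the printed count keeps climbing, the blocked threads (each with its stack and pending work item) are where the "leaked" memory is going.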
If the update rate is so fast that the application is at risk of not keeping up, then you need to choose how your app degrades under contention, e.g.
You could queue tasks with a ConcurrentQueue, but that still risks unlimited memory growth if the database is slower than the request rate.
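One way to make the degradation explicit (a sketch, not code from this issue; the capacity of 1000 is an arbitrary example value) is a bounded BlockingCollection: when the database falls behind, TryAdd fails immediately and the caller can drop or coalesce the update instead of queuing it forever.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

static class BoundedWriter
{
    // Bounded capacity puts a hard cap on queued work.
    static readonly BlockingCollection<Action> Pending =
        new BlockingCollection<Action>(boundedCapacity: 1000);

    // Returns false when the queue is full, so the caller decides
    // how to degrade (drop, coalesce, log) instead of leaking memory.
    public static bool TryEnqueue(Action dbWork) => Pending.TryAdd(dbWork);

    public static Task StartConsumer() => Task.Run(() =>
    {
        // Single consumer serializes all database access.
        foreach (var work in Pending.GetConsumingEnumerable())
            work();
    });
}
```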
From the results I can guess that no threads are leaking. For this test I used Win XP on VirtualBox, but during the last week I got similar results using Windows 10 on a real computer, and MS Server 2012 on VMware.
GuyDafny - take into account that you are using a table without an index on the id column. From the PostgreSQL doc on the serial data type: "In most cases you would also want to attach a UNIQUE or PRIMARY KEY constraint to prevent duplicate values from being inserted by accident, but this is not automatic." Without a key/index, PostgreSQL has to read all records from the table each time you make an update; that is a CPU-intensive operation (your small table is buffered in memory).
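Following that advice, the constraint could be added once per test table with something like the sketch below (the connection string is a placeholder, and the table names are taken from the test description):

```csharp
using Npgsql;

static class AddPrimaryKeys
{
    static void Main()
    {
        // Placeholder connection string; adjust for your test database.
        using (var conn = new NpgsqlConnection("Host=localhost;Database=test"))
        {
            conn.Open();
            foreach (var table in new[] { "table1", "table2" })
            {
                using (var cmd = new NpgsqlCommand(
                    $"ALTER TABLE {table} ADD PRIMARY KEY (id);", conn))
                {
                    // With the key in place, UPDATE ... WHERE id = ...
                    // becomes an index lookup instead of a full table scan.
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }
}
```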
In the main method you have:
Each timer ticks every 10 ms and calls InsertToDB, which has a for loop with 80 EXPENSIVE database updates.
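That pattern, reconstructed from the discussion (the exact names and the timer count of 10 are assumptions based on the comments, not the attached code), looks roughly like this:

```csharp
using System;
using System.Threading;

static class ReproSketch
{
    static readonly Timer[] Timers = new Timer[10];

    static void Main()
    {
        for (int i = 0; i < Timers.Length; i++)
        {
            // Every 10 ms each timer queues InsertToDB on the ThreadPool,
            // regardless of whether the previous call has finished.
            Timers[i] = new Timer(_ => InsertToDB(), null, 0, 10);
        }
        Thread.Sleep(Timeout.Infinite);
    }

    static void InsertToDB()
    {
        for (int i = 0; i < 80; i++)
        {
            // 80 UPDATEs per call; without an index each one scans the
            // table, so one call easily exceeds the 10 ms timer period
            // and work items pile up in the ThreadPool queue.
        }
    }
}
```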
Unfortunately, I know nothing about garbage collector internals, but maybe your CPU is overloaded and the background process used to free memory doesn't have enough time to do its work. From the MS doc:
Andrzej-W - First of all, thank you for your help. The code I uploaded was written to reproduce the problem as fast as possible and isn't the production code. The real code handles UDP packages originally sent by some hundreds of embedded controllers; they are added to the DB using C#, while some Python code reads this data, processes it, and updates it. It's too complicated to use as an example, which is the reason for the "while (true)" loop and the lazy job I did while creating the DB.
I altered the loop like this:
The CPU usage went down from 89% to 20%, but the memory is still leaking.
The production machine is MS Server 2012 with 16 GB RAM and two processors (I do not know the full configuration). The test machine (set up after the customer said the system crashes every week or so) is a virtual machine with Windows XP, 2 GB of emulated RAM, and a single dual-core processor.
If I'm the only one having this problem, it might be a waste of your time to fix it.
Anyhow, when I change the following function (comment out the line "cmd.ExecuteNonQuery();"), there is no memory leak; uncomment it and the problem returns.
The point isn't how much RAM I have on this computer, but the fact that the memory usage increases more and more (so in the end there is no free memory; it simply takes more time if you have more RAM).
and here we can read:
So, your "memory leak" is probably a growing queue in the ThreadPool. There is no public interface to check the queue length, but you can check GetAvailableThreads, and if it returns 0 there is a big chance that the internal queue is growing. Disable your timers for a while, and enable them when there is a reasonable number of free threads in the pool. Alternatively, add a primary key on the id column in all test tables and make only a single update in InsertToDB. This function has to execute in less than 10 ms.
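A minimal version of that check might look like this (a sketch; the helper name is mine):

```csharp
using System.Threading;

static class PoolMonitor
{
    // True while the ThreadPool still has spare worker threads.
    // If this hits zero for long stretches, new work items are piling
    // up in the pool's internal queue, which looks like a slow leak.
    public static bool HasSpareWorkers()
    {
        ThreadPool.GetAvailableThreads(
            out int workerThreads, out int completionPortThreads);
        return workerThreads > 0;
    }
}
```

The timers could poll this and skip a tick (or stay disabled) whenever it returns false.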
It's important to note that the "lock counter" you implemented earlier is not directly related to the queue in the ThreadPool. However, we can see that you have about 20-30 threads inside the InsertToDB function and only 10 timers. This means that a single call to InsertToDB takes longer than 10 ms, and, as I said earlier, the internal queue in the ThreadPool probably grows indefinitely.
Agree with @Andrzej-W: any leak here seems to be more a result of the test scenario than of any issue with Npgsql itself. No leaks have been reported and Npgsql is in wide production use. I'll reopen if a simpler demonstration of the problem is submitted; there's really no need for a massively concurrent system to demonstrate a leak, a simple loop should suffice.
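A simple loop of the kind being asked for might look like the sketch below (the connection string, table, and `val` column are placeholders, not from this issue). If memory stays flat here, the growth in the original repro is backlog in the test harness, not a leak in Npgsql:

```csharp
using System;
using Npgsql;

static class SimpleLoopRepro
{
    static void Main()
    {
        // Placeholder connection; the table is assumed to have
        // integer columns id (1..5000) and val.
        using (var conn = new NpgsqlConnection("Host=localhost;Database=test"))
        {
            conn.Open();
            for (int i = 0; i < 1_000_000; i++)
            {
                using (var cmd = new NpgsqlCommand(
                    "UPDATE table1 SET val = @v WHERE id = @id", conn))
                {
                    cmd.Parameters.AddWithValue("v", i);
                    cmd.Parameters.AddWithValue("id", 1 + i % 5000);
                    cmd.ExecuteNonQuery();
                }
                // Periodically report managed-heap size; a leak would
                // show up as a steady climb across reports.
                if (i % 100_000 == 0)
                    Console.WriteLine($"{i}: {GC.GetTotalMemory(false)} bytes");
            }
        }
    }
}
```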