Date: 2015-09-01 13:52:20 +0200
From: anthonin.bonnefoy
To: SQL devs <>
Version: 11.21.5 (Jul2015)
CC: guillaume.savary, @njnes, @yzchang
Last updated: 2015-11-03 10:18:42 +0100
Comment 21245
Date: 2015-09-01 13:52:20 +0200
From: anthonin.bonnefoy
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:41.0) Gecko/20100101 Firefox/41.0 Iceweasel/41.0
Build Identifier:
When a table is created and filled in the same commit, a logical reference continues to point to the BATs after the table has been dropped, preventing the BATs from being unlinked.
On restart, the BATs are correctly unlinked.
Reproducible: Always
Steps to Reproduce:
1. Create a CSV file with enough lines to trigger a flush:
seq 1000000 > /tmp/test.csv
2. Create tables with this script:
-- First table, with commit create and copy inside the same commit
CREATE TABLE "test" ("test" CHARACTER LARGE OBJECT);
COPY INTO "test" FROM '/tmp/test.csv';
COMMIT;
-- Second table, with commit after create
CREATE TABLE "test2" ("test" CHARACTER LARGE OBJECT);
COMMIT;
COPY INTO "test2" FROM '/tmp/test.csv';
COMMIT;
SELECT location FROM storage WHERE "table" LIKE 'test%';
DROP TABLE "test";
DROP TABLE "test2";
COMMIT;
-- Third table, just for flushing
CREATE TABLE "test3" ("test" CHARACTER LARGE OBJECT);
COMMIT;
COPY INTO "test3" FROM '/tmp/test.csv';
COMMIT;
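The steps above can be driven from a single script. Below is a minimal sketch in Python; the `mclient` invocation and the database name `demo` are assumptions for illustration, not taken from this report:

```python
# Sketch of the reproduction steps above. Assumptions (not from the
# report): mclient is on PATH and a database named "demo" is running.
import shutil
import subprocess

CSV = "/tmp/test.csv"

# Step 1: a CSV with enough lines to trigger a flush (like seq 1000000).
with open(CSV, "w") as f:
    f.writelines(f"{i}\n" for i in range(1, 1000001))

# Step 2: the SQL script from the report, verbatim.
SQL = f"""
CREATE TABLE "test" ("test" CHARACTER LARGE OBJECT);
COPY INTO "test" FROM '{CSV}';
COMMIT;
CREATE TABLE "test2" ("test" CHARACTER LARGE OBJECT);
COMMIT;
COPY INTO "test2" FROM '{CSV}';
COMMIT;
SELECT location FROM storage WHERE "table" LIKE 'test%';
DROP TABLE "test";
DROP TABLE "test2";
COMMIT;
CREATE TABLE "test3" ("test" CHARACTER LARGE OBJECT);
COMMIT;
COPY INTO "test3" FROM '{CSV}';
COMMIT;
"""

# Feed the script to the server, if mclient is available.
if shutil.which("mclient"):
    subprocess.run(["mclient", "-d", "demo"], input=SQL,
                   text=True, check=True)
```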
Actual Results:
After waiting 30 seconds for the flush to occur, the BATs from the "test" table are still present.
The storage query returns the following locations:
+----------+
| location |
+==========+
| 2 |
| 50 |
+----------+
Running "ls bat/" gives me this:
01 02 03 04 05 06 07 10 11 12 13 14 15 16 17 20 23 2.tail 2.theap 3.tail 3.theap 4.tail 4.theap 5.tail 6.tail BACKUP LEFTOVERS
Looking at the BBP stats with "mdb.start(); bbp.get();", I have:
[2] tmp_2 count=1000000 lrefs=1 refs=0 loaded tmp
Expected Results:
The BAT of the dropped table should be unlinked.
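The "lrefs=1 refs=0" line is the telltale: the BAT's physical reference count has reached zero, but a logical reference from the dropped table still pins it. The following toy model illustrates that bookkeeping; all names and structure are invented for illustration, not MonetDB's actual GDK code:

```python
# Toy model: a BAT is unlinked only when both its physical (refs) and
# logical (lrefs) counts reach zero. Illustrative only; the class and
# method names are made up, not GDK's.
class BatPool:
    def __init__(self):
        self.bats = {}          # bat id -> [refs, lrefs]
        self.unlinked = set()

    def create(self, bat_id):
        self.bats[bat_id] = [0, 1]   # born with one logical ref

    def release_lref(self, bat_id):
        counts = self.bats[bat_id]
        counts[1] -= 1
        if counts == [0, 0]:         # both counts zero: safe to unlink
            self.unlinked.add(bat_id)
            del self.bats[bat_id]

pool = BatPool()

pool.create(2)
# Healthy drop: the table releases its logical ref, the BAT is unlinked.
pool.release_lref(2)

pool.create(50)
# Buggy path (CREATE + COPY in one commit): the drop never releases the
# logical ref, so the BAT stays pinned with lrefs=1 refs=0 -- the leak
# reported above.
```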
Comment 21336
Date: 2015-10-14 18:46:01 +0200
From: MonetDB Mercurial Repository <>
Changeset 773e781c6a1b made by Niels Nes niels@cwi.nl in the MonetDB repo, refers to this bug.
For complete details, see http://dev.monetdb.org/hg/MonetDB?cmd=changeset;node=773e781c6a1b
Changeset description:
Comment 21393
Date: 2015-10-23 23:08:10 +0200
From: @yzchang
Reopening this bug, as it seems that the problem still exists in long-running tests.
Comment 21395
Date: 2015-10-23 23:26:00 +0200
From: @yzchang
Just adding a cross-reference: see also bug report https://www.monetdb.org/bugzilla/show_bug.cgi?id=3835
Comment 21405
Date: 2015-10-25 09:37:19 +0100
From: @njnes
The scenario described in this bug report is fixed. Please report new leaks in a new bug report, including a scenario that can be used to reproduce the leak consistently.
Comment 21452
Date: 2015-11-03 10:18:42 +0100
From: @sjoerdmullender
Jul2015 SP1 has been released.