
mclient terminates connection during insertion of a large number of records from a file #2960

monetdb-team opened this issue Nov 30, 2020 · 0 comments



@monetdb-team commented Nov 30, 2020

Date: 2011-12-28 01:31:40 +0100
From: Babis <>
To: SQL devs <>
Version: 11.9.1 (Apr2012) [obsolete]
CC: @njnes, @drstmane

Last updated: 2012-05-25 12:58:47 +0200

Comment 16702

Date: 2011-12-28 01:31:40 +0100
From: Babis <>

User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_2) AppleWebKit/534.52.7 (KHTML, like Gecko) Version/5.1.2 Safari/534.52.7
Build Identifier:

mclient terminates the connection to mserver when executing some "CREATE TABLE" commands followed by a large INSERT command. Sometimes the database crashes (reported as crashed by the monetdb status command).

During execution of SQL commands mclient reports the following:
operation successful
MAPI = (monetdb) /tmp/.s.monetdb.50000
ACTION= read_line
ERROR = !Connection terminated

Reproducible: Always

Steps to Reproduce:

  1. mclient -d test -lsql
  2. sql> < <attached_file_1_here>

What's more, I observed somewhat similar misbehavior with another large INSERT statement: it does not close the connection and reports successful execution, yet no records end up stored in the table. To see this, repeat the steps above with the second attached file.
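For reference, a statement of the problematic shape can be generated with a short script. This is a sketch only: the table name and columns below are hypothetical, not taken from the actual attachments.

```python
def make_big_insert(n_rows):
    """Build one INSERT statement carrying n_rows VALUES tuples,
    mimicking the shape of the attached input (table name and
    columns here are hypothetical, not from the attachment)."""
    header = "INSERT INTO t VALUES\n"
    rows = ",\n".join("  (%d, 'row%d')" % (i, i) for i in range(n_rows))
    return header + rows + ";\n"

# Roughly 130,000 tuples matches the scale discussed below.
sql = make_big_insert(130000)
print(len(sql.splitlines()))  # → 130001 (header line + one line per tuple)
```

Feeding such a file to mclient as shown in the steps above exercises the same single-statement, many-tuples code path.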

Comment 16703

Date: 2011-12-28 01:36:22 +0100
From: Babis <>

Created attachment 91
first input

Attached file: (application/zip, 341614 bytes)
Description: first input

Comment 16704

Date: 2011-12-28 01:37:07 +0100
From: Babis <>

Created attachment 92
second input

Attached file: (application/zip, 24652 bytes)
Description: second input

Comment 16705

Date: 2011-12-28 09:16:58 +0100
From: @grobian

OK, the first statement looks like this:

[... x100K]

In other words, one very large statement. The server likely crashes due to over-allocation or something similar; Mapi could also go wrong here at the same time.

The second is similar, but has much less values (~8K).

Comment 16710

Date: 2012-01-02 10:04:51 +0100
From: @sjoerdmullender

The server crashes when running the first input due to extremely deep recursion in the function rel_properties in the file rel_optimizer.c (line 264 in the Dec2011 branch). I'm thinking it recurses into that function for each tuple being inserted, and there are some 130000.
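The failure mode described here can be illustrated with a small sketch (illustrative only, not MonetDB code): if every inserted tuple adds one level of nesting to the relational tree, a recursive walk such as rel_properties needs stack depth proportional to the tuple count.

```python
# Build a chain where each tuple adds one nesting level,
# analogous to a per-tuple recursive union (illustrative only).
def build_chain(n):
    rel = None
    for i in range(n):
        rel = ("union", rel, i)   # one nesting level per tuple
    return rel

# A recursive walk needs one stack frame per nesting level.
def walk(rel):
    if rel is None:
        return 0
    return 1 + walk(rel[1])       # recursion depth == chain length

print(walk(build_chain(500)))     # → 500
# walk(build_chain(130000))       # would blow the interpreter's stack,
                                  # analogous to the server crash
```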

Comment 16972

Date: 2012-02-22 23:35:29 +0100
From: @drstmane

Most probably, this one won't be fixed soon (if at all ...).

Instead of one large INSERT statement for very many tuples, use one INSERT statement per tuple (slow), or better (and strongly recommended) a COPY INTO statement (from a file or stdin), which is fast.

E.g., try

cat bug_large-insert1.sql | awk '/^INSERT INTO .+ VALUES *$/ {n=0; s=""; t=$3;} /^ *(.+)[,;] *$/ {n++; s=s gensub("^ *\((.+)\)[,;] *$","\1",1,$0) "\n";} /^ *(.+); *$/ {printf "COPY %d RECORDS INTO %s FROM stdin USING DELIMITERS ''',''','''\n''','''\'''';\n%s",n,t,s;} !/^(INSERT INTO .+ VALUES| *(.+)[,;]) *$/ {print $0;}' > bug_large-insert1x.sql
mclient bug_large-insert1x.sql

cat bug_large-insert2.sql | awk '/^INSERT INTO .+ VALUES *$/ {n=0; s=""; t=$3;} /^ *(.+)[,;] *$/ {n++; s=s gensub("^ *\((.+)\)[,;] *$","\1",1,$0) "\n";} /^ *(.+); *$/ {printf "COPY %d RECORDS INTO %s FROM stdin USING DELIMITERS ''', ''','''\n''','''\'''';\n%s",n,t,s;} !/^(INSERT INTO .+ VALUES| *(.+)[,;]) *$/ {print $0;}' > bug_large-insert2x.sql
mclient bug_large-insert2x.sql

(If necessary, adjust the record delimiters to match the syntax of your original INSERT statements.)
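The same conversion can be sketched in Python for readers who find the awk one-liner hard to adapt. This is a rough sketch under strong assumptions: it only handles simple tuples with no embedded commas, parentheses, or delimiter characters inside quoted values.

```python
import re

def insert_to_copy(sql):
    """Rewrite a multi-row "INSERT INTO t VALUES (..),(..);" into a
    COPY ... RECORDS INTO ... FROM stdin statement plus its records.
    Assumes simple tuples without embedded commas or parentheses."""
    m = re.match(r"INSERT INTO (\S+) VALUES\s*(.*);\s*$", sql, re.S)
    if not m:
        return sql
    table, body = m.groups()
    tuples = re.findall(r"\(([^()]*)\)", body)
    records = "\n".join(t.replace(", ", ",") for t in tuples)
    head = ("COPY %d RECORDS INTO %s FROM stdin "
            "USING DELIMITERS ',','\\n';\n" % (len(tuples), table))
    return head + records + "\n"

print(insert_to_copy("INSERT INTO t VALUES (1, 'a'), (2, 'b');"))
```

As with the awk version, pick delimiters that cannot occur inside your values, or the loaded records will be misparsed.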

Comment 17241

Date: 2012-05-09 16:21:41 +0200
From: @njnes

Fixed the recursion problem: instead of a recursive union, we now use a list of values.
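Schematically, this trades per-tuple nesting for one flat node, so traversal depth no longer depends on how many tuples are inserted (an illustrative sketch, not the actual MonetDB data structures):

```python
# Old shape: each tuple nests one level deeper, so traversal depth
# grows with the number of tuples (illustrative only).
def build_nested(tuples):
    rel = None
    for t in tuples:
        rel = ("union", rel, t)
    return rel

# New shape: one flat "values" node holding a plain list, so
# traversal depth is constant regardless of tuple count.
def build_flat(tuples):
    return ("values", list(tuples))

def depth(rel):
    if rel is None:
        return 0
    if rel[0] == "union":
        return 1 + depth(rel[1])
    return 1

print(depth(build_nested(range(500))), depth(build_flat(range(500))))  # → 500 1
```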

Comment 17243

Date: 2012-05-09 16:56:55 +0200
From: @njnes

Changeset 5a8a62a9f0b5 made by Niels Nes in the MonetDB repo, refers to this bug.

For complete details, see http://dev.monetdb.org/hg/MonetDB?cmd=changeset;node=5a8a62a9f0b5

Changeset description:

fixed problem with deep recursion, when loading/inserting lots of values
using values (),() etc. This fixes bug #2960