"COPY INTO ..." doesn't work when executed from 2 processes concurrently. #3758

monetdb-team opened this issue Nov 30, 2020 · 0 comments


@monetdb-team commented Nov 30, 2020

Date: 2015-07-09 13:50:00 +0200
From: Dieter <>
To: SQL devs <>
Version: 11.19.15 (Oct2014-SP4)
CC: @njnes, @drstmane, @yzchang

Last updated: 2016-03-23 15:38:28 +0100

Comment 20976

Date: 2015-07-09 13:50:00 +0200
From: Dieter <>

User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.130 Safari/537.36
Build Identifier:

"COPY INTO ..." doesn't work when executing from 2 processes in parallel.

ODBC Driver reports
"General Error"
"CREATE TABLE: name 'table01' already in use"

The concurrent (parallel) import of data should work correctly.

The same applies to parallel create table/drop table statements and other commands that change the database catalog information.

Reproducible: Always

Steps to Reproduce:

1. Put data.csv in your home directory on the Fedora operating system.
2. Check the monetbulk.cmd source code for whether 32-bit or 64-bit is used.
3. Check the monetBulk.vbs source code and verify the user/password/ODBC DSN for accessing the database voc.
4. Execute "monetbulk.cmd" to reproduce the error situation. It may take a few seconds until the error occurs.
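Since the attachments are not inlined in this issue, the following is a hypothetical sketch of the statement sequence each of the two concurrent processes runs; the table name is taken from the error message, but the column list and delimiter options are assumptions.

```sql
-- Hypothetical per-process sequence (column definitions assumed):
CREATE TABLE table01 (col1 INT, col2 VARCHAR(100));
-- One of the two processes fails here with:
--   "CREATE TABLE: name 'table01' already in use"
COPY INTO table01 FROM '/home/user/data.csv' USING DELIMITERS ',';
DROP TABLE table01;
```

Running this sequence in a loop from two processes at once triggers the catalog conflict described below.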

Actual Results:

ODBC Driver reports
"General Error"
"CREATE TABLE: name 'table01' already in use"

Expected Results:

All data of data.csv should be imported into the 2 tables without error.

In our test script monetBulk.vbs we loop to import data.csv 10 times.
We do this in the test script because it makes the error situation much more likely to occur.

Comment 20977

Date: 2015-07-09 13:51:30 +0200
From: Dieter <>

Created attachment 338
Description and comments for the scenario

Attached file: description.txt (text/plain, 484 bytes)
Description: Description and comments for the scenario

Comment 20978

Date: 2015-07-09 13:56:19 +0200
From: Dieter <>

Created attachment 339
Data.csv for import

Attached file: data.csv (application/, 1000894 bytes)
Description: Data.csv for import

Comment 20979

Date: 2015-07-09 13:57:21 +0200
From: Dieter <>

Created attachment 340
Windows .cmd-File for executing 2 VBScripts in parallel

Attached file: monetbulk.cmd (application/octet-stream, 539 bytes)
Description: Windows .cmd-File for executing 2 VBScripts in parallel

Comment 20980

Date: 2015-07-09 13:58:37 +0200
From: Dieter <>

Created attachment 341
VBScript for import of data.csv

Attached file: monetBulk.vbs (application/octet-stream, 2131 bytes)
Description: VBScript for import of data.csv

Comment 21055

Date: 2015-07-23 11:56:25 +0200
From: Dieter <>

Created attachment 345
.cmd File for testing drop/create in parallel

This cmd file reproduces the error situation without importing via COPY INTO, by only executing drop table and create table in concurrent processes.

Attached file: monetDropCreate.cmd (text/plain, 444 bytes)
Description: .cmd File for testing drop/create in parallel

Comment 21056

Date: 2015-07-23 11:58:41 +0200
From: Dieter <>

Created attachment 346
.vbs file reproducing error only with drop/create table

This .vbs file only executes drop table and create table
without COPY INTO.

Attached file: monetDropCreate.vbs (text/plain, 1864 bytes)
Description: .vbs file reproducing error only with drop/create table

Comment 21057

Date: 2015-07-23 12:01:51 +0200
From: Dieter <>

I have done further analysis of the error situation.
The problem occurs not only when importing data with a COPY INTO statement.
It also happens if I only execute

    drop table / create table

in parallel for 2 different tables.

Comment 21058

Date: 2015-07-23 12:34:14 +0200
From: @drstmane

Most probably it is not the concurrent COPY INTO into different tables,
but rather the concurrent creation of (different) tables.

Schema statements are (conflicting) transactions on the SQL catalogue.
With MonetDB's optimistic concurrency control approach,
all but one of the conflicting transactions will be aborted,
and it's up to the user / application to handle the abort as desired
(e.g., retry until successful).
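The retry-until-successful handling described above can be sketched as a small helper. This is a hypothetical sketch, not MonetDB API: `TransactionAborted` stands in for whatever conflict error the actual driver raises (e.g., a `pyodbc.Error` whose message indicates an aborted transaction), and `retry_on_conflict` is an illustrative name.

```python
import time

class TransactionAborted(Exception):
    """Stand-in for the driver error raised when optimistic
    concurrency control aborts a conflicting transaction."""

def retry_on_conflict(operation, attempts=5, delay=0.05):
    """Run operation(); if it is aborted by a catalog conflict,
    back off and retry, up to `attempts` tries."""
    for attempt in range(attempts):
        try:
            return operation()
        except TransactionAborted:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            time.sleep(delay * (2 ** attempt))  # exponential backoff
```

With a real connection, `operation` would be a closure executing the CREATE TABLE (or other DDL) statement, so a conflict abort is transparently retried instead of surfacing to the user.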

I'm admittedly not a VBS expert,
but I cannot find where/how in your script you check whether the create table
statement succeeded or failed, and what you do in case it fails ...

Comment 21062

Date: 2015-07-27 21:15:40 +0200
From: Dieter <>

Yes, you are right that the problem occurs with conflicting schema-changing operations.
But there are some open questions:

  1. Why don't you lock these transactions on the MonetDB server? Every database I know (Oracle, MySQL, MS SQL Server, DB2, ...) handles concurrent transactions itself (queuing the requests in arrival order) without leaving the problem to the application.
    Describing this behavior as an "optimistic concurrency control approach" is a friendly way of saying "it's your problem", isn't it?

  2. If this is the only way to handle this behaviour, which unique ODBC error code do I have to check for to handle exactly this conflict situation?
    In my test scenario I got the message "General Error", which does not seem to be an error code specific to this situation.

  3. You say all but one catalogue-changing transaction will be aborted.
    But why then do I get more than one record for a certain table in a specific schema from
    SELECT name, schema_id FROM sys.tables WHERE name='table01' AND schema_id=3146 ?
    (3146 is an example; I've got about 1800 identical records!)
    As I understand it, this is corrupt database catalogue information, which I think should never occur.
    BTW: DROP TABLE voc.table01 reduces the number of records in the result set by one.

Your help would be appreciated.

Comment 21092

Date: 2015-08-02 12:12:00 +0200
From: @njnes

  1. Locking isn't always friendly either: you could start a transaction, execute one DDL statement, and never come back.

  2. Getting the DDL statements out of the concurrent code is the obvious solution. If this isn't possible, a transaction failure should occur. Not sure what ODBC reports in that case.

  3. This indicates a bug, which I'm convinced is fixed in the Jul2015 and default branches.
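Point 2 above, moving the DDL out of the concurrent code, can be sketched as a two-phase loader. This is an illustrative sketch, not code from the attachments: `execute` stands in for whatever statement-execution call the driver provides (e.g., an ODBC or pymonetdb cursor), and the table names and elided column list are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def load_all(execute, tables, csv_path, runs=10):
    """Phase 1: run all DDL serially, so no two catalog-changing
    transactions can conflict. Phase 2: run only the COPY INTO
    data loads concurrently, one worker per table."""
    for t in tables:
        execute(f"CREATE TABLE {t} (col1 INT)")  # columns assumed

    def load(t):
        for _ in range(runs):
            execute(f"COPY INTO {t} FROM '{csv_path}'")

    with ThreadPoolExecutor(max_workers=len(tables)) as pool:
        list(pool.map(load, tables))
```

Because the CREATE TABLE statements finish before any worker starts, the catalog conflict from the original scenario cannot arise; only the (non-conflicting) bulk loads run in parallel.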

Comment 21227

Date: 2015-08-28 13:43:00 +0200
From: @sjoerdmullender

Jul2015 has been released.

Comment 21348

Date: 2015-10-17 10:47:18 +0200
From: Dieter <>

Corrupted schema data are still present with the Jul2015 release of MonetDB.
The problem is not fixed yet.

When doing concurrent schema operations on the same table, after a while I find no records in sys.columns for my table, but in sys.tables there are several records with the same table name! --> corrupted schema

The problem occurs with simple concurrent create table/drop table operations, without importing data.

Mostly the MonetDB server threads abort with segmentation faults. In some cases I observed that the whole dbfarm was unusable after an error.

Even concurrent schema operations on different tables result in error situations and server crashes.

Comment 21947

Date: 2016-03-23 15:38:28 +0100
From: @njnes

Many fixes were done around concurrent schema changes, i.e. we expect this problem to be fixed (in the to-be-released SP3).
