HOPSWORKS-2565
ndb_import uses the values from CSV files to set auto increment columns.
It does not, however, ensure that the auto increment value stored in
RonDB is correct after the ndb_import process completes.

To solve this we introduce a new option to ndb_import called
--use-auto-increment. When it is set, ndb_import ignores the values in
the CSV file and instead generates the auto increment values itself,
which ensures that INSERTs into the table keep working correctly after
importing data with ndb_import.
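
A minimal invocation sketch is shown below; the connect string, database
name, and CSV path are placeholders rather than values from this commit,
and ndb_import derives the target table name from the CSV file's
basename. --use-auto-increment defaults to on and is spelled out here
only for clarity:

ndb_import --ndb-connectstring=mgmd_host:1186 --use-auto-increment=1 \
  my_database /path/to/my_table.csv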

If this option isn't set, it is necessary to set a new starting value
for the auto increment manually. This can be done, for example, with
the following transaction:

BEGIN;
INSERT INTO table (column list) VALUES (auto_increment_value, other values);
ROLLBACK;

Even though the transaction is rolled back, this sets the new auto
increment value in the table.
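
As a concrete sketch of that workaround, assume a hypothetical table
t1 (id INT AUTO_INCREMENT PRIMARY KEY, val INT) whose highest imported
id is 1000:

BEGIN;
INSERT INTO t1 (id, val) VALUES (1001, 0);
ROLLBACK;

Even though the row is rolled back, the table's auto increment value
has been advanced (as described above), so later INSERTs that omit id
continue above the imported data.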
mikaelronstrom committed May 20, 2021
1 parent 18746fc commit 4b7631f
Showing 8 changed files with 223 additions and 27 deletions.
89 changes: 70 additions & 19 deletions man/ndb_import.1
@@ -2,12 +2,12 @@
.\" Title: \fBndb_import\fR
.\" Author: [FIXME: author] [see http://docbook.sf.net/el/author]
.\" Generator: DocBook XSL Stylesheets v1.79.1 <http://docbook.sf.net/>
.\" Date: 11/26/2020
.\" Manual: MySQL Database System
.\" Source: MySQL 8.0
.\" Date: 05/20/2021
.\" Manual: RonDB Database System
.\" Source: RonDB 21.04
.\" Language: English
.\"
.TH "\FBNDB_IMPORT\FR" "1" "11/26/2020" "MySQL 8\&.0" "MySQL Database System"
.TH "\FBNDB_IMPORT\FR" "1" "05/20/2021" "RonDB 21\&.04" "RonDB Database System"
.\" -----------------------------------------------------------------
.\" * Define some portability stuff
.\" -----------------------------------------------------------------
@@ -28,7 +28,7 @@
.\" * MAIN CONTENT STARTS HERE *
.\" -----------------------------------------------------------------
.SH "NAME"
ndb_import \- Import CSV data into NDB
ndb_import \- Import CSV data into RonDB
.SH "SYNOPSIS"
.HP \w'\fBndb_import\ \fR\fB\fIoptions\fR\fR\ 'u
\fBndb_import \fR\fB\fIoptions\fR\fR
@@ -38,10 +38,10 @@ ndb_import \- Import CSV data into NDB
imports CSV\-formatted data, such as that produced by
\fBmysqldump\fR
\fB\-\-tab\fR, directly into
NDB
RonDB
using the NDB API\&.
\fBndb_import\fR
requires a connection to an NDB management server (\fBndb_mgmd\fR) to function; it does not require a connection to a MySQL Server\&.
requires a connection to a RonDB management server (\fBndb_mgmd\fR) to function; it does not require a connection to a MySQL Server\&.
Usage
.sp
.if n \{\
@@ -63,15 +63,15 @@ is the name of the CSV file from which to read the data; this must include the p
\fBndb_import\fR
include those for specifying field separators, escapes, and line terminators, and are described later in this section\&.
\fBndb_import\fR
must be able to connect to an NDB Cluster management server; for this reason, there must be an unused
must be able to connect to a RonDB Cluster management server; for this reason, there must be an unused
[api]
slot in the cluster
config\&.ini
file\&.
.PP
To duplicate an existing table that uses a different storage engine, such as
InnoDB, as an
NDB
RonDB
table, use the
\fBmysql\fR
client to perform a
@@ -83,15 +83,15 @@ ALTER TABLE \&.\&.\&. ENGINE=NDB
on the new table; after this, from the system shell, invoke
\fBndb_import\fR
to load the data into the new
NDB
RonDB
table\&. For example, an existing
InnoDB
table named
myinnodb_table
in a database named
myinnodb
can be exported into an
NDB
RonDB
table named
myndb_table
in a database named
@@ -251,6 +251,16 @@ T}:T{
T}
T{
.PP
\fB \fR\fB--use-auto-increment=#\fR\fB \fR
T}:T{
For table with auto increment, ignore value in CSV file and generate
normal auto increment value
T}:T{
.PP
(Supported in all RonDB 21.04.1 based releases)
T}
T{
.PP
\fB \fR\fB--ai-increment=#\fR\fB \fR
T}:T{
For table with hidden PK, specify autoincrement increment. See mysqld
@@ -472,8 +482,8 @@ T{
.PP
\fB \fR\fB--opbatch=#\fR\fB \fR
T}:T{
A db execution batch is a set of transactions and operations sent to NDB
kernel. This option limits NDB operations (including blob
A db execution batch is a set of transactions and operations sent to RonDB
kernel. This option limits RonDB operations (including blob
operations) in a db execution batch. Therefore it also
limits number of asynch transactions. Value 0 is not valid
T}:T{
@@ -674,6 +684,43 @@ Dump core on any fatal error; used for debugging only\&.
.sp -1
.IP \(bu 2.3
.\}
\fB\-\-use\-auto\-increment\fR=\fI#\fR
.TS
allbox tab(:);
lB l
lB l
lB l
lB l
lB l.
T{
Command-Line Format
T}:T{
--use-auto-increment=#
T}
T{
Type
T}:T{
Boolean
T}
T{
Default Value
T}:T{
TRUE
T}
.TE
.sp 1
For a table with auto increment, ignore the value in the CSV file and instead
generate a normal auto increment value\&.
.RE
.sp
.RS 4
.ie n \{\
\h'-04'\(bu\h'+03'\c
.\}
.el \{\
.sp -1
.IP \(bu 2.3
.\}
\fB\-\-ai\-increment\fR=\fI#\fR
.TS
allbox tab(:);
@@ -1107,7 +1154,7 @@ T}
T{
Default Value
T}:T{
\
Backslash
T}
.TE
.sp 1
@@ -1184,7 +1231,7 @@ T}
T{
Default Value
T}:T{
\t
TAB
T}
.TE
.sp 1
@@ -1496,7 +1543,7 @@ T}
T{
Default Value
T}:T{
\n
Newline
T}
.TE
.sp 1
Expand Down Expand Up @@ -1552,7 +1599,7 @@ T}
.sp 1
Performs internal logging at the given level\&. This option is intended primarily for internal and development use\&.
.sp
In debug builds of NDB only, the logging level can be set using this option to a maximum of 4\&.
In debug builds of RonDB only, the logging level can be set using this option to a maximum of 4\&.
.RE
.sp
.RS 4
@@ -2488,6 +2535,8 @@ was added in NDB 7\&.6\&.2\&.
.br
.PP
Copyright \(co 1997, 2020, Oracle and/or its affiliates.
.br
Copyright \(co 2021, 2021, Logical Clocks AB and/or its affiliates.
.PP
This documentation is free software; you can redistribute it and/or modify it only under the terms of the GNU General Public License as published by the Free Software Foundation; version 2 of the License.
.PP
@@ -2496,8 +2545,10 @@ This documentation is distributed in the hope that it will be useful, but WITHOU
You should have received a copy of the GNU General Public License along with the program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA or see http://www.gnu.org/licenses/.
.sp
.SH "SEE ALSO"
For more information, please refer to the MySQL Reference Manual,
For more information, please refer to the MySQL Reference Manual and the RonDB Documentation,
which may already be installed locally and which is also available
online at http://dev.mysql.com/doc/.
online at http://dev.mysql.com/doc/ and at http://docs.rondb.com.
.SH AUTHOR
Oracle Corporation (http://dev.mysql.com/).
.br
Logical Clocks AB (http://logicalclocks.com/).
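
The duplication workflow described in the man page above (SELECT ...
INTO OUTFILE, CREATE TABLE ... LIKE, ALTER TABLE ... ENGINE=NDB, then
ndb_import) as a hedged end-to-end sketch; the output path, target
database name, and connect string are placeholders, since that part of
the diff is collapsed here:

mysql> SELECT * INTO OUTFILE '/tmp/myndb_table.csv'
    ->   FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n'
    ->   FROM myinnodb.myinnodb_table;
mysql> CREATE TABLE target_db.myndb_table LIKE myinnodb.myinnodb_table;
mysql> ALTER TABLE target_db.myndb_table ENGINE=NDB;
$> ndb_import --ndb-connectstring=mgmd_host:1186 target_db /tmp/myndb_table.csv
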
84 changes: 84 additions & 0 deletions mysql-test/suite/ndb/t/ndb_import0.test
@@ -648,6 +648,86 @@ exec $NDB_IMPORT --state-dir=$MYSQLTEST_VARDIR/tmp
--csvopt=cqn --pagesize=256 --pagebuffer=512
test $MYSQLTEST_VARDIR/tmp/tpersons.csv >> $NDB_TOOLS_OUTPUT 2>&1;

perl;
use strict;
use Symbol;
my $lt = !$ENV{IS_WINDOWS} ? "\n" : "\r\n";
my $vardir = $ENV{MYSQLTEST_VARDIR}
or die "need MYSQLTEST_VARDIR";
my $file = "$vardir/tmp/tauto_inc1.csv";
my $fh = gensym();
open($fh, ">:raw", $file)
or die "$file: open for write failed: $!";
for my $i (1..5) {
print $fh $i, "\t", $i, $lt;
}
close($fh)
or die "$file: close after write failed: $!";
exit(0)
EOF

perl;
use strict;
use Symbol;
my $lt = !$ENV{IS_WINDOWS} ? "\n" : "\r\n";
my $vardir = $ENV{MYSQLTEST_VARDIR}
or die "need MYSQLTEST_VARDIR";
my $file = "$vardir/tmp/tauto_inc2.csv";
my $fh = gensym();
open($fh, ">:raw", $file)
or die "$file: open for write failed: $!";
for my $i (1..5) {
print $fh $i+4, "\t", $i, $lt;
}
close($fh)
or die "$file: close after write failed: $!";
exit(0)
EOF

perl;
use strict;
use Symbol;
my $lt = !$ENV{IS_WINDOWS} ? "\n" : "\r\n";
my $vardir = $ENV{MYSQLTEST_VARDIR}
or die "need MYSQLTEST_VARDIR";
my $file = "$vardir/tmp/tauto_inc3.csv";
my $fh = gensym();
open($fh, ">:raw", $file)
or die "$file: open for write failed: $!";
for my $i (1..5) {
print $fh $i+4, "\t", $i, $lt;
}
close($fh)
or die "$file: close after write failed: $!";
exit(0)
EOF

CREATE TABLE tauto_inc1 (
id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
val INT NOT NULL
) engine=NDB;

CREATE TABLE tauto_inc2 (
id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
val INT NOT NULL
) engine=NDB;

CREATE TABLE tauto_inc3 (
id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
val INT NOT NULL
) engine=NDB;

exec $NDB_IMPORT --state-dir=$MYSQLTEST_VARDIR/tmp --ai-prefetch-sz=1
test $MYSQLTEST_VARDIR/tmp/tauto_inc1.csv >> $NDB_TOOLS_OUTPUT 2>&1;
exec $NDB_IMPORT --state-dir=$MYSQLTEST_VARDIR/tmp --ai-prefetch-sz=1
test $MYSQLTEST_VARDIR/tmp/tauto_inc2.csv >> $NDB_TOOLS_OUTPUT 2>&1;
exec $NDB_IMPORT --state-dir=$MYSQLTEST_VARDIR/tmp --ai-prefetch-sz=1
test $MYSQLTEST_VARDIR/tmp/tauto_inc3.csv >> $NDB_TOOLS_OUTPUT 2>&1;

select id from tauto_inc1 order by id;
select id from tauto_inc2 order by id;
select id from tauto_inc3 order by id;

# Look for the specific error message from ndb_import
--let $assert_file=$NDB_TOOLS_OUTPUT
--let $assert_only_after=import test.tpersons
@@ -656,6 +736,10 @@ exec $NDB_IMPORT --state-dir=$MYSQLTEST_VARDIR/tmp
--let $assert_count=1
--source include/assert_grep.inc

drop table tauto_inc1;
drop table tauto_inc2;
drop table tauto_inc3;

drop table tpersons;

drop table t1, t1ver, t2, t2ver, t3, t3ver, t4,
1 change: 1 addition & 0 deletions storage/ndb/tools/NdbImport.cpp
@@ -91,6 +91,7 @@ NdbImport::Opt::Opt()
m_monitor = 2;
m_ai_prefetch_sz = 1024;
m_ai_increment = 1;
m_use_auto_increment = true;
m_ai_offset = 1;
m_no_asynch = false;
m_no_hint = false;
1 change: 1 addition & 0 deletions storage/ndb/tools/NdbImport.hpp
@@ -83,6 +83,7 @@ class NdbImport {
uint m_ai_prefetch_sz;
uint m_ai_increment;
uint m_ai_offset;
bool m_use_auto_increment;
bool m_no_asynch;
bool m_no_hint;
uint m_pagesize;
39 changes: 35 additions & 4 deletions storage/ndb/tools/NdbImportImpl.cpp
@@ -3265,12 +3265,11 @@ NdbImportImpl::ExecOpWorkerAsynch::state_define()
Row* row = op->m_row;
require(row != 0);
const Table& table = m_util.get_table(row->m_tabid);
if (table.m_has_hidden_pk)
if (table.m_has_hidden_pk ||
table.m_has_auto_increment)
{
const Attrs& attrs = table.m_attrs;
const uint attrcnt = attrs.size();
const Attr& attr = attrs[attrcnt - 1];
require(attr.m_type == NdbDictionary::Column::Bigunsigned);
Uint64 val;
if (m_ndb->getAutoIncrementValue(table.m_tab, val,
opt.m_ai_prefetch_sz,
@@ -3304,7 +3303,39 @@ NdbImportImpl::ExecOpWorkerAsynch::state_define()
break;
}
}
attr.set_value(row, &val, 8);
if (table.m_has_hidden_pk)
{
const Attr& attr = attrs[attrcnt - 1];
require(attr.m_type == NdbDictionary::Column::Bigunsigned);
attr.set_value(row, &val, 8);
}
else
{
const Attr& attr = attrs[table.m_auto_increment_col];
if (attr.m_type == NdbDictionary::Column::Bigunsigned)
{
attr.set_value(row, &val, 8);
}
else if (attr.m_type == NdbDictionary::Column::Bigint)
{
Int64 val64 = Int64(val);
attr.set_value(row, &val64, 8);
}
else if (attr.m_type == NdbDictionary::Column::Int)
{
Int32 val32 = Uint32(val);
attr.set_value(row, &val32, 4);
}
else if (attr.m_type == NdbDictionary::Column::Unsigned)
{
Uint32 val32 = Uint32(val);
attr.set_value(row, &val32, 4);
}
else
{
require(false);
}
}
}
const bool no_hint = opt.m_no_hint;
Tx* tx = 0;
