select result is not correct while concurrently updating the same table #9407

Closed

zisedeqing opened this issue Jan 13, 2020 · 1 comment

@zisedeqing commented Jan 13, 2020

Greenplum version or build

master, 6X_STABLE

OS version and uname -a

Linux h16c01235.na62 3.10.0-327.ali2010.alios7.x86_64 #1 SMP Mon Mar 20 19:58:02 CST 2017 x86_64 x86_64 x86_64 GNU/Linux

autoconf options used (config.status --config)

./configure --prefix=/home/gpdb/gpdb-install --with-openssl --with-ldap --with-libxml --with-gssapi --enable-debug --without-zstd --disable-orca --with-blocksize=8 --with-wal-blocksize=8 --with-wal-segsize=16 CFLAGS="-O2 -DIMPLEMENT_ASYNC_COMMIT" --no-create --no-recursion

Expected behavior

The SELECT always returns 0 rows.

Actual behavior

Sometimes the SELECT returns rows, for example:
result: 12 | (5,74) | 39361 | 41268 | 940 | 430709.16
12 | (5,11) | 41406 | 41486 | 940 | 440973.41
12 | (1,88) | 40800 | 41149 | 989 | 413173.29
12 | (1,82) | 41276 | 41678 | 989 | 419824.15
1 | (0,66) | 40787 | 41162 | 883 | 403844.59
1 | (0,64) | 41422 | 41422 | 883 | 409497.41
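
The six columns are not labeled in the report; they are consistent with a diagnostic query along the following lines (my reconstruction, not part of the original issue): segment id, tuple ctid, xmin, xmax, w_id, and w_ytd. Two physical tuples for the same w_id on the same segment indicate a duplicated row.

-- hypothetical reconstruction of the query behind the output above
select gp_segment_id, ctid, xmin, xmax, w_id, w_ytd
from bmsql_warehouse
where w_id in (940, 989, 883);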

Steps to reproduce the behavior

  1. Create a gpdb cluster
  • my gpdb cluster: 1 master + 16 primary segments, no mirrors
  • gpdb configuration:
    log_statement = none
    gp_enable_global_deadlock_detector = on
    shared_buffers = 8GB
  2. Use benchmarksql 5.0 to run the update statement
    benchmarksql download link: https://sourceforge.net/projects/benchmarksql/files/
  • create a benchmarksql properties file for gpdb; its content is:
db=postgres
driver=org.postgresql.Driver
conn=jdbc:postgresql://127.0.0.1:25000/test
user=benchmarksql
password=123456

warehouses=1000
loadWorkers=32
fileLocation=/data/tpcc_1000w/

terminals=150
//To run specified transactions per terminal- runMins must equal zero
runTxnsPerTerminal=0
//To run for specified minutes- runTxnsPerTerminal must equal zero
runMins=120
//Number of total transactions per minute
limitTxnsPerMin=300000000

//Set to true to run in 4.x compatible mode. Set to false to use the
//entire configured database evenly.
terminalWarehouseFixed=false

//The following five values must add up to 100
//The default percentages of 45, 43, 4, 4 & 4 match the TPC-C spec
// Only do payment transaction for testing
newOrderWeight=0
paymentWeight=100
orderStatusWeight=0
deliveryWeight=0
stockLevelWeight=0

// Directory name to create for collecting detailed result data.
// Comment this out to suppress.
resultDirectory=my_result_%tY-%tm-%td_%tH%tM%tS
osCollectorScript=./misc/os_collector_linux.py
osCollectorInterval=1
//osCollectorSSHAddr=user@dbhost
osCollectorDevices=net_eth0 blk_sda
  • apply the following patch to benchmarksql:
    the patch reduces the payment transaction to only the warehouse update (see the reconstructed statement after the diff).
diff --git a/src/client/jTPCCTData.java b/src/client/jTPCCTData.java
index 890d9fb..ca8e36b 100644
--- a/src/client/jTPCCTData.java
+++ b/src/client/jTPCCTData.java
@@ -758,32 +758,32 @@ public class jTPCCTData

 	try
 	{
-	    // Update the DISTRICT.
-	    stmt = db.stmtPaymentUpdateDistrict;
-	    stmt.setDouble(1, payment.h_amount);
-	    stmt.setInt(2, payment.w_id);
-	    stmt.setInt(3, payment.d_id);
-	    stmt.executeUpdate();
-
-	    // Select the DISTRICT.
-	    stmt = db.stmtPaymentSelectDistrict;
-	    stmt.setInt(1, payment.w_id);
-	    stmt.setInt(2, payment.d_id);
-	    rs = stmt.executeQuery();
-	    if (!rs.next())
-	    {
-		rs.close();
-		throw new Exception("District for" +
-			" W_ID=" + payment.w_id +
-			" D_ID=" + payment.d_id + " not found");
-	    }
-	    payment.d_name = rs.getString("d_name");
-	    payment.d_street_1 = rs.getString("d_street_1");
-	    payment.d_street_2 = rs.getString("d_street_2");
-	    payment.d_city = rs.getString("d_city");
-	    payment.d_state = rs.getString("d_state");
-	    payment.d_zip = rs.getString("d_zip");
-	    rs.close();
+	    //// Update the DISTRICT.
+	    //stmt = db.stmtPaymentUpdateDistrict;
+	    //stmt.setDouble(1, payment.h_amount);
+	    //stmt.setInt(2, payment.w_id);
+	    //stmt.setInt(3, payment.d_id);
+	    //stmt.executeUpdate();
+
+	    //// Select the DISTRICT.
+	    //stmt = db.stmtPaymentSelectDistrict;
+	    //stmt.setInt(1, payment.w_id);
+	    //stmt.setInt(2, payment.d_id);
+	    //rs = stmt.executeQuery();
+	    //if (!rs.next())
+	    //{
+	    //    rs.close();
+	    //    throw new Exception("District for" +
+	    //    	" W_ID=" + payment.w_id +
+	    //    	" D_ID=" + payment.d_id + " not found");
+	    //}
+	    //payment.d_name = rs.getString("d_name");
+	    //payment.d_street_1 = rs.getString("d_street_1");
+	    //payment.d_street_2 = rs.getString("d_street_2");
+	    //payment.d_city = rs.getString("d_city");
+	    //payment.d_state = rs.getString("d_state");
+	    //payment.d_zip = rs.getString("d_zip");
+	    //rs.close();

 	    // Update the WAREHOUSE.
 	    stmt = db.stmtPaymentUpdateWarehouse;
@@ -791,144 +791,144 @@ public class jTPCCTData
 	    stmt.setInt(2, payment.w_id);
 	    stmt.executeUpdate();

-	    // Select the WAREHOUSE.
-	    stmt = db.stmtPaymentSelectWarehouse;
-	    stmt.setInt(1, payment.w_id);
-	    rs = stmt.executeQuery();
-	    if (!rs.next())
-	    {
-		rs.close();
-		throw new Exception("Warehouse for" +
-			" W_ID=" + payment.w_id + " not found");
-	    }
-	    payment.w_name = rs.getString("w_name");
-	    payment.w_street_1 = rs.getString("w_street_1");
-	    payment.w_street_2 = rs.getString("w_street_2");
-	    payment.w_city = rs.getString("w_city");
-	    payment.w_state = rs.getString("w_state");
-	    payment.w_zip = rs.getString("w_zip");
-	    rs.close();
-
-	    // If C_LAST is given instead of C_ID (60%), determine the C_ID.
-	    if (payment.c_last != null)
-	    {
-		stmt = db.stmtPaymentSelectCustomerListByLast;
-		stmt.setInt(1, payment.c_w_id);
-		stmt.setInt(2, payment.c_d_id);
-		stmt.setString(3, payment.c_last);
-		rs = stmt.executeQuery();
-		while (rs.next())
-		    c_id_list.add(rs.getInt("c_id"));
-		rs.close();
-
-		if (c_id_list.size() == 0)
-		{
-		    throw new Exception("Customer(s) for" +
-				" C_W_ID=" + payment.c_w_id +
-				" C_D_ID=" + payment.c_d_id +
-				" C_LAST=" + payment.c_last + " not found");
-		}
-
-		payment.c_id = c_id_list.get((c_id_list.size() + 1) / 2 - 1);
-	    }
-
-	    // Select the CUSTOMER.
-	    stmt = db.stmtPaymentSelectCustomer;
-	    stmt.setInt(1, payment.c_w_id);
-	    stmt.setInt(2, payment.c_d_id);
-	    stmt.setInt(3, payment.c_id);
-	    rs = stmt.executeQuery();
-	    if (!rs.next())
-	    {
-		throw new Exception("Customer for" +
-			" C_W_ID=" + payment.c_w_id +
-			" C_D_ID=" + payment.c_d_id +
-			" C_ID=" + payment.c_id + " not found");
-	    }
-	    payment.c_first = rs.getString("c_first");
-	    payment.c_middle = rs.getString("c_middle");
-	    if (payment.c_last == null)
-		payment.c_last = rs.getString("c_last");
-	    payment.c_street_1 = rs.getString("c_street_1");
-	    payment.c_street_2 = rs.getString("c_street_2");
-	    payment.c_city = rs.getString("c_city");
-	    payment.c_state = rs.getString("c_state");
-	    payment.c_zip = rs.getString("c_zip");
-	    payment.c_phone = rs.getString("c_phone");
-	    payment.c_since = rs.getTimestamp("c_since").toString();
-	    payment.c_credit = rs.getString("c_credit");
-	    payment.c_credit_lim = rs.getDouble("c_credit_lim");
-	    payment.c_discount = rs.getDouble("c_discount");
-	    payment.c_balance = rs.getDouble("c_balance");
-	    payment.c_data = new String("");
-	    rs.close();
-
-	    // Update the CUSTOMER.
-	    payment.c_balance -= payment.h_amount;
-	    if (payment.c_credit.equals("GC"))
-	    {
-		// Customer with good credit, don't update C_DATA.
-		stmt = db.stmtPaymentUpdateCustomer;
-		stmt.setDouble(1, payment.h_amount);
-		stmt.setDouble(2, payment.h_amount);
-		stmt.setInt(3, payment.c_w_id);
-		stmt.setInt(4, payment.c_d_id);
-		stmt.setInt(5, payment.c_id);
-		stmt.executeUpdate();
-	    }
-	    else
-	    {
-		// Customer with bad credit, need to do the C_DATA work.
-		stmt = db.stmtPaymentSelectCustomerData;
-		stmt.setInt(1, payment.c_w_id);
-		stmt.setInt(2, payment.c_d_id);
-		stmt.setInt(3, payment.c_id);
-		rs = stmt.executeQuery();
-		if (!rs.next())
-		{
-		    throw new Exception("Customer.c_data for" +
-			" C_W_ID=" + payment.c_w_id +
-			" C_D_ID=" + payment.c_d_id +
-			" C_ID=" + payment.c_id + " not found");
-		}
-		payment.c_data = rs.getString("c_data");
-		rs.close();
-
-		stmt = db.stmtPaymentUpdateCustomerWithData;
-		stmt.setDouble(1, payment.h_amount);
-		stmt.setDouble(2, payment.h_amount);
-
-		StringBuffer sbData = new StringBuffer();
-		Formatter fmtData = new Formatter(sbData);
-		fmtData.format("C_ID=%d C_D_ID=%d C_W_ID=%d " +
-			       "D_ID=%d W_ID=%d H_AMOUNT=%.2f   ",
-			       payment.c_id, payment.c_d_id, payment.c_w_id,
-			       payment.d_id, payment.w_id, payment.h_amount);
-		sbData.append(payment.c_data);
-		if (sbData.length() > 500)
-		    sbData.setLength(500);
-		payment.c_data = sbData.toString();
-		stmt.setString(3, payment.c_data);
-
-		stmt.setInt(4, payment.c_w_id);
-		stmt.setInt(5, payment.c_d_id);
-		stmt.setInt(6, payment.c_id);
-		stmt.executeUpdate();
-	    }
-
-	    // Insert the HISORY row.
-	    stmt = db.stmtPaymentInsertHistory;
-	    stmt.setInt(1, payment.c_id);
-	    stmt.setInt(2, payment.c_d_id);
-	    stmt.setInt(3, payment.c_w_id);
-	    stmt.setInt(4, payment.d_id);
-	    stmt.setInt(5, payment.w_id);
-	    stmt.setTimestamp(6, new java.sql.Timestamp(h_date));
-	    stmt.setDouble(7, payment.h_amount);
-	    stmt.setString(8, payment.w_name + "    " + payment.d_name);
-	    stmt.executeUpdate();
-
-	    payment.h_date = new java.sql.Timestamp(h_date).toString();
+	    //// Select the WAREHOUSE.
+	    //stmt = db.stmtPaymentSelectWarehouse;
+	    //stmt.setInt(1, payment.w_id);
+	    //rs = stmt.executeQuery();
+	    //if (!rs.next())
+	    //{
+	    //    rs.close();
+	    //    throw new Exception("Warehouse for" +
+	    //    	" W_ID=" + payment.w_id + " not found");
+	    //}
+	    //payment.w_name = rs.getString("w_name");
+	    //payment.w_street_1 = rs.getString("w_street_1");
+	    //payment.w_street_2 = rs.getString("w_street_2");
+	    //payment.w_city = rs.getString("w_city");
+	    //payment.w_state = rs.getString("w_state");
+	    //payment.w_zip = rs.getString("w_zip");
+	    //rs.close();
+
+	    //// If C_LAST is given instead of C_ID (60%), determine the C_ID.
+	    //if (payment.c_last != null)
+	    //{
+	    //    stmt = db.stmtPaymentSelectCustomerListByLast;
+	    //    stmt.setInt(1, payment.c_w_id);
+	    //    stmt.setInt(2, payment.c_d_id);
+	    //    stmt.setString(3, payment.c_last);
+	    //    rs = stmt.executeQuery();
+	    //    while (rs.next())
+	    //        c_id_list.add(rs.getInt("c_id"));
+	    //    rs.close();
+
+	    //    if (c_id_list.size() == 0)
+	    //    {
+	    //        throw new Exception("Customer(s) for" +
+	    //    		" C_W_ID=" + payment.c_w_id +
+	    //    		" C_D_ID=" + payment.c_d_id +
+	    //    		" C_LAST=" + payment.c_last + " not found");
+	    //    }
+
+	    //    payment.c_id = c_id_list.get((c_id_list.size() + 1) / 2 - 1);
+	    //}
+
+	    //// Select the CUSTOMER.
+	    //stmt = db.stmtPaymentSelectCustomer;
+	    //stmt.setInt(1, payment.c_w_id);
+	    //stmt.setInt(2, payment.c_d_id);
+	    //stmt.setInt(3, payment.c_id);
+	    //rs = stmt.executeQuery();
+	    //if (!rs.next())
+	    //{
+	    //    throw new Exception("Customer for" +
+	    //    	" C_W_ID=" + payment.c_w_id +
+	    //    	" C_D_ID=" + payment.c_d_id +
+	    //    	" C_ID=" + payment.c_id + " not found");
+	    //}
+	    //payment.c_first = rs.getString("c_first");
+	    //payment.c_middle = rs.getString("c_middle");
+	    //if (payment.c_last == null)
+	    //    payment.c_last = rs.getString("c_last");
+	    //payment.c_street_1 = rs.getString("c_street_1");
+	    //payment.c_street_2 = rs.getString("c_street_2");
+	    //payment.c_city = rs.getString("c_city");
+	    //payment.c_state = rs.getString("c_state");
+	    //payment.c_zip = rs.getString("c_zip");
+	    //payment.c_phone = rs.getString("c_phone");
+	    //payment.c_since = rs.getTimestamp("c_since").toString();
+	    //payment.c_credit = rs.getString("c_credit");
+	    //payment.c_credit_lim = rs.getDouble("c_credit_lim");
+	    //payment.c_discount = rs.getDouble("c_discount");
+	    //payment.c_balance = rs.getDouble("c_balance");
+	    //payment.c_data = new String("");
+	    //rs.close();
+
+	    //// Update the CUSTOMER.
+	    //payment.c_balance -= payment.h_amount;
+	    //if (payment.c_credit.equals("GC"))
+	    //{
+	    //    // Customer with good credit, don't update C_DATA.
+	    //    stmt = db.stmtPaymentUpdateCustomer;
+	    //    stmt.setDouble(1, payment.h_amount);
+	    //    stmt.setDouble(2, payment.h_amount);
+	    //    stmt.setInt(3, payment.c_w_id);
+	    //    stmt.setInt(4, payment.c_d_id);
+	    //    stmt.setInt(5, payment.c_id);
+	    //    stmt.executeUpdate();
+	    //}
+	    //else
+	    //{
+	    //    // Customer with bad credit, need to do the C_DATA work.
+	    //    stmt = db.stmtPaymentSelectCustomerData;
+	    //    stmt.setInt(1, payment.c_w_id);
+	    //    stmt.setInt(2, payment.c_d_id);
+	    //    stmt.setInt(3, payment.c_id);
+	    //    rs = stmt.executeQuery();
+	    //    if (!rs.next())
+	    //    {
+	    //        throw new Exception("Customer.c_data for" +
+	    //    	" C_W_ID=" + payment.c_w_id +
+	    //    	" C_D_ID=" + payment.c_d_id +
+	    //    	" C_ID=" + payment.c_id + " not found");
+	    //    }
+	    //    payment.c_data = rs.getString("c_data");
+	    //    rs.close();
+
+	    //    stmt = db.stmtPaymentUpdateCustomerWithData;
+	    //    stmt.setDouble(1, payment.h_amount);
+	    //    stmt.setDouble(2, payment.h_amount);
+
+	    //    StringBuffer sbData = new StringBuffer();
+	    //    Formatter fmtData = new Formatter(sbData);
+	    //    fmtData.format("C_ID=%d C_D_ID=%d C_W_ID=%d " +
+	    //    	       "D_ID=%d W_ID=%d H_AMOUNT=%.2f   ",
+	    //    	       payment.c_id, payment.c_d_id, payment.c_w_id,
+	    //    	       payment.d_id, payment.w_id, payment.h_amount);
+	    //    sbData.append(payment.c_data);
+	    //    if (sbData.length() > 500)
+	    //        sbData.setLength(500);
+	    //    payment.c_data = sbData.toString();
+	    //    stmt.setString(3, payment.c_data);
+
+	    //    stmt.setInt(4, payment.c_w_id);
+	    //    stmt.setInt(5, payment.c_d_id);
+	    //    stmt.setInt(6, payment.c_id);
+	    //    stmt.executeUpdate();
+	    //}
+
+	    //// Insert the HISORY row.
+	    //stmt = db.stmtPaymentInsertHistory;
+	    //stmt.setInt(1, payment.c_id);
+	    //stmt.setInt(2, payment.c_d_id);
+	    //stmt.setInt(3, payment.c_w_id);
+	    //stmt.setInt(4, payment.d_id);
+	    //stmt.setInt(5, payment.w_id);
+	    //stmt.setTimestamp(6, new java.sql.Timestamp(h_date));
+	    //stmt.setDouble(7, payment.h_amount);
+	    //stmt.setString(8, payment.w_name + "    " + payment.d_name);
+	    //stmt.executeUpdate();
+
+	    //payment.h_date = new java.sql.Timestamp(h_date).toString();

 	    db.commit();
 	}
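
With this patch applied, each payment transaction effectively executes a single statement: the warehouse update. Judging from the two parameters bound above (h_amount first, then w_id) and the stock benchmarksql schema, the surviving statement is presumably:

-- presumed text of stmtPaymentUpdateWarehouse after the patch
update bmsql_warehouse
   set w_ytd = w_ytd + ?
 where w_id = ?;

Running only this statement at high concurrency maximizes the chance that two transactions update the same warehouse row at the same time.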
  3. Create tables and load data
  • cd ./benchmarksql-5.0/run
  • run the following command to generate the source data in /data/tpcc_1000w/:
    ./runLoader.sh gpdb.properties
  • create the test tables:
create table bmsql_config (
  cfg_name    varchar(30) primary key,
  cfg_value   varchar(50)
);
create table bmsql_warehouse (
  w_id        integer   not null primary key,
  w_ytd       decimal(12,2),
  w_tax       decimal(4,4),
  w_name      varchar(10),
  w_street_1  varchar(20),
  w_street_2  varchar(20),
  w_city      varchar(20),
  w_state     char(2),
  w_zip       char(9)
);
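
A note on distribution (my addition, not part of the original steps): with no DISTRIBUTED BY clause, Greenplum uses the declared primary key as the distribution key, so bmsql_warehouse is hash-distributed by w_id. That is why both versions of a duplicated warehouse row appear on the same segment in the output above. The equivalent explicit DDL would be:

-- assumption: the default distribution key is the declared primary key
create table bmsql_warehouse_explicit (like bmsql_warehouse) distributed by (w_id);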
  • copy the data into the test tables
psql -d test -U benchmarksql  -c "copy bmsql_config from '/data/tpcc_1000w/bmsql_config.csv' with delimiter ',' null '';" 
psql -d test -U benchmarksql  -c "copy bmsql_warehouse from '/data/tpcc_1000w/bmsql_warehouse.csv' with delimiter ',' null '';" 
  • run analyze
    psql -d test -U benchmarksql -c "analyze"
  4. Run benchmarksql with the following command:
    ./runBenchmark.sh gpdb.properties
  5. While benchmarksql is running, run the following check shell script:
#!/bin/bash
# Repeatedly verify that no w_id appears more than once in bmsql_warehouse.
sql="select * from (select w_id, count(1) as c from bmsql_warehouse group by w_id) t where c > 1;"
while true
do
	res=`psql -d test -c "$sql" -t`
	if [ ! -z "$res" ]; then
		echo "result: $res"
		echo "check failed!!!!"
		exit 1
	else
		echo "check success"
	fi
	sleep 0.01
done

The primary key of bmsql_warehouse is w_id, so the query "select * from (select w_id, count(1) as c from bmsql_warehouse group by w_id) t where c > 1" must return 0 rows. In practice it sometimes does not; my test output was:

check success
check success
check success
result: 334 | 2
check failed!!!!

@xiong-gang (Member) commented Jan 17, 2020

Thanks @zisedeqing for reporting the issue.

This issue happens when GDD is enabled and the UPDATE statements run in parallel on the segments; the scenario is as follows:

tx1: update tuple t1 on segment 0
tx2: update tuple t1 and wait for tx1 to commit (wait on segment 0)
tx1: commit, the `commit prepared` command on segment 0 will wake up tx2
tx2: continue executing and commit on both segment and master
tx3: take a snapshot
tx1: finish commit on master

Transaction tx3 takes an inconsistent distributed snapshot on the master: it sees tx1 as in-progress but tx2 as committed.
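
In terms of the reproduction above, the interleaving looks roughly like this (a hedged sketch; the timing window is narrow and the values are illustrative):

-- session 1 (tx1):
begin;
update bmsql_warehouse set w_ytd = w_ytd + 10 where w_id = 334;  -- row-locks the tuple on its segment
-- session 2 (tx2):
begin;
update bmsql_warehouse set w_ytd = w_ytd + 20 where w_id = 334;  -- blocks on the segment, not on the QD
-- session 1: commit; the 'commit prepared' on the segment wakes tx2
-- session 2: commit; tx2 finishes on both segment and master first
-- session 3 (tx3), before tx1's commit completes on the master:
select w_id, count(*) from bmsql_warehouse where w_id = 334 group by w_id;
-- tx3's distributed snapshot treats tx1 as in-progress but tx2 as committed,
-- so it sees both the pre-tx1 tuple version and tx2's new version: count = 2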

We came up with several solutions:

  1. Use the ingenious trick that Heikki used to detect distributed deadlocks: send a ‘notice’ to the QD when a QE is trying to acquire a lock on a transaction id, and let the QD acquire a lock on the corresponding distributed transaction id. In this way, we migrate the wait to the QD.

  2. Let the transaction remember all the gxids it has waited on and send them back to the QD as the result of ‘commit prepared’. The QD then waits for those gxids before clearing its own gxid.

  3. The transaction doesn’t actually wake up other transactions during ‘commit prepared’; instead, it sends a flag back to the QD so the QD knows that other transactions are waiting. The QD then clears its gxid and performs a third commit phase to wake up the other transactions.

  4. Don’t change anything on the transaction and snapshot side; instead, use the distributed snapshot and the local snapshot together to determine a tuple’s visibility.

Solution 1 has some defects when there are multiple waits, and sending a ‘notice’ on every lock acquire and release is a considerable cost.
Solution 3 has to handle crash recovery on the QD between phase 2 and phase 3, and it also adds communication cost.
Solution 4 seems doable but needs proving; it also adds overhead to every tuple scan of every query, which is not good.

We are going to move forward with solution 2.
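
Applied to the timeline above, solution 2 would behave roughly like this (my paraphrase of the description, not the actual patch):

tx1: update tuple t1 on segment 0
tx2: update tuple t1, wait for tx1 on segment 0, and remember gxid(tx1)
tx1: commit, the `commit prepared` command on segment 0 wakes up tx2
tx2: the `commit prepared` result returns the remembered gxid(tx1) to the QD
tx2: the QD waits until gxid(tx1) is no longer in-progress before clearing gxid(tx2)
tx3: any snapshot that sees tx2 as committed therefore also sees tx1 as committed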

xiong-gang added a series of commits referencing this issue between Jan 19 and Mar 15, 2020 (landing in the main repository on Mar 6 and Mar 15, 2020), and weinan003 referenced it on Mar 15, 2020, all with the same commit message:

After enabling the global deadlock detector, we can support concurrent updates.
When updating one tuple at the same time, the conflict and wait are moved from
QD to QE. We need to make sure the implicated transaction order on the segments
is also considered on the master when taking the distributed snapshots.

This is reported: #9407
@gaos1 closed this Mar 18, 2020