ADBDEV-36 Fixup incorrect PACKAGE_VERSION #12

Merged
acmnu merged 1 commit into arenadata:adb-5.x from acmnu:ADBDEV-36-1
Jul 25, 2018

Conversation


@acmnu acmnu commented Jul 25, 2018

This commit also renames the project to Arenadata Database.

@acmnu acmnu requested a review from kapustor July 25, 2018 09:26
@acmnu acmnu merged commit 92749b9 into arenadata:adb-5.x Jul 25, 2018
@acmnu acmnu deleted the ADBDEV-36-1 branch July 25, 2018 09:39
leskin-in pushed a commit that referenced this pull request Dec 14, 2018
The pg_stat_last_operation table had two indexes:

* pg_statlastop_classid_objid_index on (classid oid_ops, objid oid_ops), and
* pg_statlastop_classid_objid_staactionname_index on (classid oid_ops, objid
  oid_ops, staactionname name_ops)

The first one is completely redundant with the second one. Remove it.

Fixes assertion failure https://github.com/greenplum-db/gpdb/issues/6362.
The assertion was added in PostgreSQL 9.1, commit d2f60a3. The failure
happened on "VACUUM FULL pg_stat_last_operation", if the VACUUM FULL
itself added a new row to the table. The insertion also inserted entries
in the indexes, which tripped the assertion that checks that you don't try
to insert entries into an index that's currently being reindexed, or
pending reindexing:

> (gdb) bt
> #0  0x00007f02f5189783 in __select_nocancel () from /lib64/libc.so.6
> #1  0x0000000000be76ef in pg_usleep (microsec=30000000) at pgsleep.c:53
> #2  0x0000000000ad75aa in elog_debug_linger (edata=0x11bf760 <errordata>) at elog.c:5293
> #3  0x0000000000acdba4 in errfinish (dummy=0) at elog.c:675
> #4  0x0000000000acc3bf in ExceptionalCondition (conditionName=0xc15798 "!(!ReindexIsProcessingIndex(((indexRelation)->rd_id)))", errorType=0xc156ef "FailedAssertion",
>     fileName=0xc156d0 "indexam.c", lineNumber=215) at assert.c:46
> #5  0x00000000004fded5 in index_insert (indexRelation=0x7f02f6b6daa0, values=0x7ffdb43915e0, isnull=0x7ffdb43915c0 "", heap_t_ctid=0x240bd64, heapRelation=0x24efa78,
>     checkUnique=UNIQUE_CHECK_YES) at indexam.c:215
> #6  0x00000000005bda59 in CatalogIndexInsert (indstate=0x240e5d0, heapTuple=0x240bd60) at indexing.c:136
> #7  0x00000000005bdaaa in CatalogUpdateIndexes (heapRel=0x24efa78, heapTuple=0x240bd60) at indexing.c:162
> #8  0x00000000005b2203 in MetaTrackAddUpdInternal (classid=1259, objoid=6053, relowner=10, actionname=0xc51543 "VACUUM", subtype=0xc5153b "REINDEX", rel=0x24efa78,
>     old_tuple=0x0) at heap.c:744
> #9  0x00000000005b229d in MetaTrackAddObject (classid=1259, objoid=6053, relowner=10, actionname=0xc51543 "VACUUM", subtype=0xc5153b "REINDEX") at heap.c:773
> #10 0x00000000005b2553 in MetaTrackUpdObject (classid=1259, objoid=6053, relowner=10, actionname=0xc51543 "VACUUM", subtype=0xc5153b "REINDEX") at heap.c:856
> #11 0x00000000005bd271 in reindex_index (indexId=6053, skip_constraint_checks=1 '\001') at index.c:3741
> #12 0x00000000005bd418 in reindex_relation (relid=6052, flags=2) at index.c:3870
> #13 0x000000000067ba71 in finish_heap_swap (OIDOldHeap=6052, OIDNewHeap=16687, is_system_catalog=1 '\001', swap_toast_by_content=0 '\000', swap_stats=1 '\001',
>     check_constraints=0 '\000', is_internal=1 '\001', frozenXid=821, cutoffMulti=1) at cluster.c:1667
> #14 0x0000000000679ed5 in rebuild_relation (OldHeap=0x7f02f6b7a6f0, indexOid=0, verbose=0 '\000') at cluster.c:648
> #15 0x0000000000679913 in cluster_rel (tableOid=6052, indexOid=0, recheck=0 '\000', verbose=0 '\000', printError=1 '\001') at cluster.c:461
> #16 0x0000000000717580 in vacuum_rel (onerel=0x0, relid=6052, vacstmt=0x2533c38, lmode=8, for_wraparound=0 '\000') at vacuum.c:2315
> #17 0x0000000000714ce7 in vacuumStatement_Relation (vacstmt=0x2533c38, relid=6052, relations=0x24c12f8, bstrategy=0x24c1220, do_toast=1 '\001', for_wraparound=0 '\000',
>     isTopLevel=1 '\001') at vacuum.c:787
> #18 0x0000000000714303 in vacuum (vacstmt=0x2403260, relid=0, do_toast=1 '\001', bstrategy=0x24c1220, for_wraparound=0 '\000', isTopLevel=1 '\001') at vacuum.c:337
> #19 0x0000000000969cd2 in standard_ProcessUtility (parsetree=0x2403260, queryString=0x24027e0 "vacuum full;", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, dest=0x2403648,
>     completionTag=0x7ffdb4392550 "") at utility.c:804
> #20 0x00000000009691be in ProcessUtility (parsetree=0x2403260, queryString=0x24027e0 "vacuum full;", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, dest=0x2403648,
>     completionTag=0x7ffdb4392550 "") at utility.c:373

In this scenario, we had just reindexed one of the indexes of
pg_stat_last_operation, and the metatrack update of that tried to insert a
row into the same table. But the second index in the table was pending
reindexing, which triggered the assertion.

After removing the redundant index, pg_stat_last_operation has only one
index, and that scenario no longer happens. This fix is a bit fragile,
because the problem will reappear as soon as a second index is added to
the table. But we have no plans to do that, and I believe no harm would
be done in production builds with assertions disabled, anyway. So this
will do for now.
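The failure mode above can be modeled in a few lines. This is plain Python, not GPDB code; the class and function names are illustrative, but the control flow mirrors the description: while one index's REINDEX is being recorded by metatrack, the other index of the same table is still pending reindex, so inserting the metatrack row trips the assertion.

```python
# Illustrative model (not GPDB code) of why the assertion only trips when
# pg_stat_last_operation has two or more indexes.
class ReindexState:
    def __init__(self, indexes):
        self.pending = list(indexes)   # indexes still awaiting REINDEX
        self.current = None

    def index_insert(self, index):
        # Mirrors the assertion in indexam.c: never insert into an index
        # that is being reindexed or is pending reindex.
        assert index != self.current and index not in self.pending, \
            "FailedAssertion: insert into index pending reindex"

    def reindex_relation(self, metatrack_insert):
        while self.pending:
            self.current = self.pending.pop(0)
            # reindex_index() finishes, then metatrack records the action,
            # which inserts a row into pg_stat_last_operation itself --
            # and therefore into *all* of its indexes.
            self.current = None
            metatrack_insert(self)

def metatrack(state, all_indexes):
    for idx in all_indexes:
        state.index_insert(idx)

# Two indexes: while the first REINDEX is being recorded, the second
# index is still pending, so the metatrack insert trips the assertion.
two = ["classid_objid", "classid_objid_staactionname"]
try:
    ReindexState(two).reindex_relation(lambda s: metatrack(s, two))
    tripped = False
except AssertionError:
    tripped = True
print(tripped)  # True

# One index: nothing is pending when metatrack inserts, so it succeeds.
one = ["classid_objid_staactionname"]
ReindexState(one).reindex_relation(lambda s: metatrack(s, one))
print("ok")
```

With a single index the metatrack insert always happens when the pending list is empty, which is exactly why dropping the redundant index makes the scenario unreachable.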

Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
Reviewed-by: Shaoqi Bai <sbai@pivotal.io>
Reviewed-by: Jamie McAtamney <jmcatamney@pivotal.io>
leskin-in pushed a commit that referenced this pull request May 7, 2019
We found that if gp_segment_configuration is locked, the FTS process
fails when it is triggered. We got the stack trace below:
	#2  0x0000000000a6bb29 in ExceptionalCondition at assert.c:66
	#3  0x0000000000aac19a in enable_timeout timeout.c:143
	#4  0x0000000000aacb6c in enable_timeout_after timeout.c:473
	#5  0x00000000008e86ef in ProcSleep at proc.c:1300
	#6  0x00000000008deb70 in WaitOnLock at lock.c:1894
	#7  0x00000000008e019e in LockAcquireExtended at lock.c:1205
	#8  0x00000000008dd2d8 in LockRelationOid at lmgr.c:102
	#9  0x000000000051c928 in heap_open at heapam.c:1083
	#10 0x0000000000b7feaf in getCdbComponentInfo at cdbutil.c:173
	#11 0x0000000000b81365 in cdbcomponent_getCdbComponents at cdbutil.c:606
	#12 0x00000000007603e1 in ftsMain at fts.c:351
	#13 0x0000000000760715 in ftsprobe_start at fts.c:121
	#14 0x00000000004cc7b0 in ServerLoop ()
	#15 0x00000000008769bf in PostmasterMain at postmaster.c:1531
	#16 0x000000000079098b in main ()
So the problem is that FTS had not initialized the timeout subsystem. Any
process that wants to use timeouts must call the initialization routine
first. This is the root cause of the gpexpand job failures on the master
pipeline in builds 71 and 79. We added this initialization to FTS and GDD.
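The invariant can be sketched as a toy model. This is plain Python, not PostgreSQL code; the method names only loosely mirror InitializeTimeouts()/enable_timeout_after(), and the assertion stands in for the one in timeout.c:

```python
# Minimal model (not PostgreSQL code) of the invariant the assertion in
# timeout.c enforces: a process must initialize the timeout subsystem
# before enabling any timeout.
class TimeoutSubsystem:
    def __init__(self):
        self.initialized = False

    def initialize(self):               # analogous to InitializeTimeouts()
        self.initialized = True

    def enable_timeout_after(self, delay_ms):
        # analogous to the check behind enable_timeout_after()
        assert self.initialized, "FailedAssertion in enable_timeout"

fts = TimeoutSubsystem()
try:
    fts.enable_timeout_after(1000)      # FTS before the fix: no init call
    crashed = False
except AssertionError:
    crashed = True
print(crashed)  # True

fts.initialize()                        # the fix: initialize in FTS (and GDD)
fts.enable_timeout_after(1000)          # now fine
```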
deart2k pushed a commit that referenced this pull request May 17, 2019
darthunix pushed a commit that referenced this pull request Aug 24, 2021
…CREATE/ALTER resource group.

In some scenarios, the AccessExclusiveLock on table pg_resgroupcapability can leave database startup/recovery pending. Below is why we need to change the AccessExclusiveLock to ExclusiveLock.

The lock on table pg_resgroupcapability serializes concurrent updates to the table when a "CREATE/ALTER RESOURCE GROUP" statement runs. There is a CPU limit: after modifying one resource group, we have to check that the total CPU usage of all resource groups does not exceed 100%.

Before this fix, AccessExclusiveLock was used. Suppose a user runs an "ALTER RESOURCE GROUP" statement. The QD dispatches the statement to all QEs, so it is a two-phase commit (2PC) transaction. When the QD dispatches the statement, each QE acquires AccessExclusiveLock on table pg_resgroupcapability, and a QE cannot release that lock until the 2PC distributed transaction commits.

In the second phase, the QD calls doNotifyingCommitPrepared to broadcast the "commit prepared" command to all QEs. The QEs have already finished preparing, so this is a prepared transaction. Suppose that at this point a primary segment goes down and a mirror is promoted to primary.

The mirror gets the "promoted" message from the coordinator and recovers from the primary's xlog. To recover the prepared transaction, it reads the prepared-transaction log entry and acquires AccessExclusiveLock on table pg_resgroupcapability. The call stack is:
#0 lock_twophase_recover (xid=, info=, recdata=, len=) at lock.c:4697
#1 ProcessRecords (callbacks=, xid=2933, bufptr=0x1d575a8 "") at twophase.c:1757
#2 RecoverPreparedTransactions () at twophase.c:2214
#3 StartupXLOG () at xlog.c:8013
#4 StartupProcessMain () at startup.c:231
#5 AuxiliaryProcessMain (argc=argc@entry=2, argv=argv@entry=0x7fff84b94a70) at bootstrap.c:459
#6 StartChildProcess (type=StartupProcess) at postmaster.c:5917
#7 PostmasterMain (argc=argc@entry=7, argv=argv@entry=0x1d555b0) at postmaster.c:1581
#8 main (argc=7, argv=0x1d555b0) at main.c:240

After that, the database instance starts up and all related initialization functions are called. One of them, InitResGroups, acquires AccessShareLock on table pg_resgroupcapability and does some initialization work. The call stack is:
#6 WaitOnLock (locallock=locallock@entry=0x1c7f248, owner=owner@entry=0x1ca0a40) at lock.c:1999
#7 LockAcquireExtended (locktag=locktag@entry=0x7ffd15d18d90, lockmode=lockmode@entry=1, sessionLock=sessionLock@entry=false, dontWait=dontWait@entry=false, reportMemoryError=reportMemoryError@entry=true, locallockp=locallockp@entry=0x7ffd15d18d88) at lock.c:1192
#8 LockRelationOid (relid=6439, lockmode=1) at lmgr.c:126
#9 relation_open (relationId=relationId@entry=6439, lockmode=lockmode@entry=1) at relation.c:56
#10 table_open (relationId=relationId@entry=6439, lockmode=lockmode@entry=1) at table.c:47
#11 InitResGroups () at resgroup.c:581
#12 InitResManager () at resource_manager.c:83
#13 initPostgres (in_dbname=, dboid=dboid@entry=0, username=username@entry=0x1c5b730 "linw", useroid=useroid@entry=0, out_dbname=out_dbname@entry=0x0, override_allow_connections=override_allow_connections@entry=false) at postinit.c:1284
#14 PostgresMain (argc=1, argv=argv@entry=0x1c8af78, dbname=0x1c89e70 "postgres", username=0x1c5b730 "linw") at postgres.c:4812
#15 BackendRun (port=, port=) at postmaster.c:4922
#16 BackendStartup (port=0x1c835d0) at postmaster.c:4607
#17 ServerLoop () at postmaster.c:1963
#18 PostmasterMain (argc=argc@entry=7, argv=argv@entry=0x1c595b0) at postmaster.c:1589
#19 in main (argc=7, argv=0x1c595b0) at main.c:240

The AccessExclusiveLock has not been released, and it is not compatible with any other lock mode, so the startup process blocks on this lock and the mirror cannot become primary.

Even if users run "gprecoverseg" to recover the primary segment, the result is similar. The primary segment recovers from xlog: it recovers the prepared transactions and acquires AccessExclusiveLock on table pg_resgroupcapability, so the startup process again blocks on this lock. Only if users switch the resource manager type to "queue" is InitResGroups not called; then nothing blocks, and the primary segment can start up normally.

After this fix, ExclusiveLock is acquired when altering a resource group. In the case above, the startup process acquires AccessShareLock; ExclusiveLock and AccessShareLock are compatible, so the startup process can proceed. After startup, the QE gets the RECOVERY_COMMIT_PREPARED command from the QD, finishes the second phase of the distributed transaction, and releases the ExclusiveLock on table pg_resgroupcapability. The call stack is:
#0 lock_twophase_postcommit (xid=, info=, recdata=0x3303458, len=) at lock.c:4758
#1 ProcessRecords (callbacks=, xid=, bufptr=0x3303458 "") at twophase.c:1757
#2 FinishPreparedTransaction (gid=gid@entry=0x323caf5 "25", isCommit=isCommit@entry=true, raiseErrorIfNotFound=raiseErrorIfNotFound@entry=false) at twophase.c:1704
#3 in performDtxProtocolCommitPrepared (gid=gid@entry=0x323caf5 "25", raiseErrorIfNotFound=raiseErrorIfNotFound@entry=false) at cdbtm.c:2107
#4 performDtxProtocolCommand (dtxProtocolCommand=dtxProtocolCommand@entry=DTX_PROTOCOL_COMMAND_RECOVERY_COMMIT_PREPARED, gid=gid@entry=0x323caf5 "25", contextInfo=contextInfo@entry=0x10e1820 ) at cdbtm.c:2279
#5 exec_mpp_dtx_protocol_command (contextInfo=0x10e1820 , gid=0x323caf5 "25", loggingStr=0x323cad8 "Recovery Commit Prepared", dtxProtocolCommand=DTX_PROTOCOL_COMMAND_RECOVERY_COMMIT_PREPARED) at postgres.c:1570
#6 PostgresMain (argc=, argv=argv@entry=0x3268f98, dbname=0x3267e90 "postgres", username=) at postgres.c:5482

The test case in this commit reproduces this bug.
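The lock-compatibility reasoning above can be checked against PostgreSQL's documented table-lock conflict matrix. A small sketch (plain Python; the conflict sets are reproduced from the PostgreSQL documentation, reduced to the modes involved here):

```python
# PostgreSQL's table-lock conflict matrix (per the PostgreSQL docs).
# conflicts[m] lists the modes that mode m is NOT compatible with.
conflicts = {
    "AccessShareLock":     {"AccessExclusiveLock"},
    "ExclusiveLock":       {"RowShareLock", "RowExclusiveLock",
                            "ShareUpdateExclusiveLock", "ShareLock",
                            "ShareRowExclusiveLock", "ExclusiveLock",
                            "AccessExclusiveLock"},
    "AccessExclusiveLock": {"AccessShareLock", "RowShareLock",
                            "RowExclusiveLock", "ShareUpdateExclusiveLock",
                            "ShareLock", "ShareRowExclusiveLock",
                            "ExclusiveLock", "AccessExclusiveLock"},
}

def compatible(held, wanted):
    return wanted not in conflicts[held]

# Before the fix: the prepared transaction holds AccessExclusiveLock, so
# the startup process's AccessShareLock in InitResGroups blocks forever.
print(compatible("AccessExclusiveLock", "AccessShareLock"))  # False

# After the fix: ALTER RESOURCE GROUP takes ExclusiveLock, which still
# serializes concurrent writers but lets AccessShareLock readers through.
print(compatible("ExclusiveLock", "AccessShareLock"))        # True
print(compatible("ExclusiveLock", "ExclusiveLock"))          # False
```

ExclusiveLock is the weakest mode that still excludes every other writer, which is why it is a safe replacement here: concurrent "CREATE/ALTER RESOURCE GROUP" statements remain serialized while read-only openers of the catalog are no longer blocked.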
BenderArenadata pushed a commit that referenced this pull request Jan 24, 2024
## Problem
An error occurred in the Python library when a plpython function was
executed. Our analysis showed that in the user's cluster a plpython UDF
ran over an unstable network and got a timeout error:
`failed to acquire resources on one or more segments`.
Another plpython UDF was then run in the same session, and it
failed with a GC error.

Here is the core dump:
```
2023-11-24 10:15:18.945507 CST,,,p2705198,th2081832064,,,,0,,,seg-1,,,,,"LOG","00000","3rd party error log:
    #0 0x7f7c68b6d55b in frame_dealloc /home/cc/repo/cpython/Objects/frameobject.c:509:5
    #1 0x7f7c68b5109d in gen_send_ex /home/cc/repo/cpython/Objects/genobject.c:108:9
    #2 0x7f7c68af9ddd in PyIter_Next /home/cc/repo/cpython/Objects/abstract.c:3118:14
    #3 0x7f7c78caa5c0 in PLy_exec_function /home/cc/repo/gpdb6/src/pl/plpython/plpy_exec.c:134:11
    #4 0x7f7c78cb5ffb in plpython_call_handler /home/cc/repo/gpdb6/src/pl/plpython/plpy_main.c:387:13
    #5 0x562f5e008bb5 in ExecMakeTableFunctionResult /home/cc/repo/gpdb6/src/backend/executor/execQual.c:2395:13
    #6 0x562f5e0dddec in FunctionNext_guts /home/cc/repo/gpdb6/src/backend/executor/nodeFunctionscan.c:142:5
    #7 0x562f5e0da094 in FunctionNext /home/cc/repo/gpdb6/src/backend/executor/nodeFunctionscan.c:350:11
    #8 0x562f5e03d4b0 in ExecScanFetch /home/cc/repo/gpdb6/src/backend/executor/execScan.c:84:9
    #9 0x562f5e03cd8f in ExecScan /home/cc/repo/gpdb6/src/backend/executor/execScan.c:154:10
    #10 0x562f5e0da072 in ExecFunctionScan /home/cc/repo/gpdb6/src/backend/executor/nodeFunctionscan.c:380:9
    #11 0x562f5e001a1c in ExecProcNode /home/cc/repo/gpdb6/src/backend/executor/execProcnode.c:1071:13
    #12 0x562f5dfe6377 in ExecutePlan /home/cc/repo/gpdb6/src/backend/executor/execMain.c:3202:10
    #13 0x562f5dfe5bf4 in standard_ExecutorRun /home/cc/repo/gpdb6/src/backend/executor/execMain.c:1171:5
    #14 0x562f5dfe4877 in ExecutorRun /home/cc/repo/gpdb6/src/backend/executor/execMain.c:992:4
    #15 0x562f5e857e69 in PortalRunSelect /home/cc/repo/gpdb6/src/backend/tcop/pquery.c:1164:4
    #16 0x562f5e856d3f in PortalRun /home/cc/repo/gpdb6/src/backend/tcop/pquery.c:1005:18
    #17 0x562f5e84607a in exec_simple_query /home/cc/repo/gpdb6/src/backend/tcop/postgres.c:1848:10
```

## Reproduce
We can use a simple procedure to reproduce the above problem:
- set timeout GUC: `gpconfig -c gp_segment_connect_timeout -v 5` and `gpstop -ari`
- prepare function:
```
CREATE EXTENSION plpythonu;
CREATE OR REPLACE FUNCTION test_func() RETURNS SETOF int AS
$$
plpy.execute("select pg_backend_pid()")

for i in range(0, 5):
    yield (i)

$$ LANGUAGE plpythonu;
```
- exit the current psql session.
- stop a segment's postmaster by attaching gdb: `gdb -p "the pid of segment postmaster"`
- start a new psql session.
- call `SELECT test_func();` and get an error:
```
gpadmin=# select test_func();
ERROR:  function "test_func" error fetching next item from iterator (plpy_elog.c:121)
DETAIL:  Exception: failed to acquire resources on one or more segments
CONTEXT:  Traceback (most recent call last):
PL/Python function "test_func"
```
- quit gdb so the postmaster can run again.
- call `SELECT test_func();` again and get a PANIC:
```
gpadmin=# SELECT test_func();
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!> 
```

## Analysis
- There is an SPI call in test_func(): `plpy.execute()`.
- The coordinator then starts a subtransaction via PLy_spi_subtransaction_begin().
- If a segment cannot receive the instruction from the coordinator,
  the subtransaction-begin procedure fails with an error.
- BUT the Python interpreter does not know that an error happened and
  does not clean up its state.
- The next plpython UDF in the same session then fails because of the
  corrupted Python environment.

## Solution
- Use a try/catch block to catch the error raised by PLy_spi_subtransaction_begin().
- Set the Python error indicator via PLy_spi_exception_set().
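The control flow of the fix can be sketched as follows. This is plain Python, not the actual C fix (which wraps PLy_spi_subtransaction_begin in the backend's error-handling machinery); the `Session`/`SPIError` names are illustrative. The point is that the backend error is converted into a Python-visible error instead of escaping silently:

```python
# Sketch (illustrative names, not GPDB code) of the fix's control flow:
# an error during subtransaction begin must be surfaced to the Python
# layer instead of leaving the interpreter state inconsistent.
class SPIError(Exception):
    pass

class Session:
    def __init__(self, segment_up=True):
        self.segment_up = segment_up
        self.error_indicator = None   # stands in for Python's error state

    def subtransaction_begin(self):
        if not self.segment_up:
            raise RuntimeError(
                "failed to acquire resources on one or more segments")

    def plpy_execute(self):
        # Before the fix the backend error escaped here without telling
        # the Python layer; after the fix it is caught and recorded.
        try:
            self.subtransaction_begin()
        except RuntimeError as e:
            self.error_indicator = e  # analogous to PLy_spi_exception_set()
            raise SPIError(str(e))

sess = Session(segment_up=False)
try:
    sess.plpy_execute()
except SPIError:
    pass
# The error is now visible to the Python layer, so the next UDF in the
# same session starts from a clean state instead of crashing.
print(sess.error_indicator is not None)  # True
```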

backport from #16856


Co-authored-by: Chen Mulong <chenmulong@gmail.com>
BenderArenadata pushed a commit that referenced this pull request Jan 24, 2024
Stolb27 pushed a commit that referenced this pull request Feb 15, 2024
…586)

My previous commit 8915cd0 caused a core dump in some pipeline jobs.
Example stack:
```
Core was generated by `postgres:  7000, ic proxy process
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x0000000000b46ec3 in pg_atomic_read_u32_impl (ptr=0x7f05a8c51104) at ../../../../src/include/port/atomics/generic.h:48
(gdb) bt
#0  0x0000000000b46ec3 in pg_atomic_read_u32_impl (ptr=0x7f05a8c51104) at ../../../../src/include/port/atomics/generic.h:48
#1  pg_atomic_read_u32 (ptr=0x7f05a8c51104) at ../../../../src/include/port/atomics.h:247
#2  LWLockAttemptLock (mode=LW_EXCLUSIVE, lock=0x7f05a8c51100) at lwlock.c:751
#3  LWLockAcquire (lock=0x7f05a8c51100, mode=mode@entry=LW_EXCLUSIVE) at lwlock.c:1188
#4  0x0000000000b32fff in ShmemInitStruct (name=name@entry=0x130e160 "", size=size@entry=4, foundPtr=foundPtr@entry=0x7ffcf94513bf) at shmem.c:412
#5  0x0000000000d6d18e in ic_proxy_server_main () at ic_proxy_main.c:545
#6  0x0000000000d6c219 in ICProxyMain (main_arg=<optimized out>) at ic_proxy_bgworker.c:36
#7  0x0000000000aa9caa in StartBackgroundWorker () at bgworker.c:955
#8  0x0000000000ab9407 in do_start_bgworker (rw=<optimized out>) at postmaster.c:6450
#9  maybe_start_bgworkers () at postmaster.c:6706
#10 0x0000000000abbc59 in ServerLoop () at postmaster.c:2095
#11 0x0000000000abd777 in PostmasterMain (argc=argc@entry=5, argv=argv@entry=0x36e3650) at postmaster.c:1633
#12 0x00000000006e4764 in main (argc=5, argv=0x36e3650) at main.c:240
(gdb) p *ptr
Cannot access memory at address 0x7f05a8c51104
```

The root cause is that I forgot to initialize the SHM structure in
CreateSharedMemoryAndSemaphores(). This commit fixes that.
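The failure can be pictured with a toy model. This is plain Python, not GPDB code; the dict stands in for the shared-memory registry, and the function names only echo CreateSharedMemoryAndSemaphores()/ShmemInitStruct():

```python
# Toy model (not GPDB code): shared-memory structures must be created at
# CreateSharedMemoryAndSemaphores() time; looking one up later without
# that step yields memory nobody mapped, hence the SIGSEGV seen in
# pg_atomic_read_u32_impl.
shmem = {}

def create_shared_memory_and_semaphores(register_ic_proxy=True):
    shmem.clear()
    if register_ic_proxy:
        shmem["ic_proxy"] = {"lock": 0}   # stands in for the SHM struct

def shmem_init_struct(name):
    # The real ShmemInitStruct() would hand back an unmapped address here.
    if name not in shmem:
        raise MemoryError("Cannot access memory")
    return shmem[name]

create_shared_memory_and_semaphores(register_ic_proxy=False)  # the bug
try:
    shmem_init_struct("ic_proxy")
    crashed = False
except MemoryError:
    crashed = True
print(crashed)  # True

create_shared_memory_and_semaphores(register_ic_proxy=True)   # the fix
print(shmem_init_struct("ic_proxy") is not None)              # True
```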
Stolb27 pushed a commit that referenced this pull request Feb 15, 2024