Merged
* add --port-range option to reference page
* Remove COPY query limitation for ON SEGMENT clause
* small edit to port range
* adding whitespace for consistency
* start --dest-table additions
* Change --port-range to --data-port-range
* Additional info about --dest-table
* Some initial work on query support
* clean up port variables
* Revert "Some initial work on query support" (reverts commit fd33bab)
* Edits from Chuck
* Removing limitation wording regarding renaming conflict resolution
* Updating --data-port-range based on review feedback
* Feedback from Lisa
* --include-table-file support for query, new JSON-format file
* Change minimum data-port range to 1024; add requirement to cover --jobs; add example port range to --jobs example
* Split JSON info into new command argument
* Add notice that query isn't supported with multiple table selections
* Add JSON limitations
* Indent --dest-table for better visibility in synopsis
* Regrouping related parameters in the synopsis
* Reorganizing options according to function; adding sections for each option category
* Removing SQL query info from --include-table-file
* Edits to clarify connection options
* --dest-table edits
* remove --validate limitation; add fix from Jerome
* Revert "Remove COPY query limitation for ON SEGMENT clause" (reverts commit 5a3119b)
* Revert "Revert "Remove COPY query limitation for ON SEGMENT clause"" (reverts commit 32b3afc)
* Remove limitation of --validate --append with data in destination table
* Remove semicolon warning, per Ming Li
* Revert "Remove limitation of --validate --append with data in destination table" (reverts commit 9304673)
* docs - add resgroup best practices section addressing low swap
* edits requested by david

* docs - add info about config'ing pxf OOM actions
* misc edits requested by david
* clarify dump file renaming per comment from oliver
* address comments from francisco

* docs - pxf max tomcat threads now user-configurable
* misc edits

* docs - add info about per-greenplum-user pxf configuration
* misc easy edits requested by david
* remove file inadvertently added to the last commit :^(
* rename topic title and some section titles, reorg
* more topic renames, add xrefs to connector cfg topics
* change remote back to external; address comments from francisco

* docs - add info about configuring hive access via jdbc
* add About to title
* remove some nesting, misc edits from review
* better table column names? clearer Authenticated User values
* edits requested by alex, adjust heading levels

* docs - resgroup MEMORY_LIMIT=0, MEMORY_SPILL_RATIO fallback
* primary segments PER HOST
Improve PciIntervalFromScalarBoolOr() to use a faster algorithm

- Only allow const expr evaluation for (const cmp const), ignoring casts
- Remove FDisjointLeft check in CRange::PrngExtend()

Co-authored-by: Chris Hajas <chajas@pivotal.io>
Co-authored-by: Shreedhar Hardikar <shardikar@pivotal.io>
* Docs: add note that gpcopy inherits encryption from client connection
* Change note to clarify that SSL encryption is not supported
* typo fix
ORCA commit: Skip group expression containing circular references

Authored-by: Chris Hajas <chajas@pivotal.io>
* docs - pxf now bundled with postgresql 42.2.5 jar
* remove postgresql jar registration from the example
This commit handles a missed case in the previous commit: "Fix algebrization of subqueries in queries with complex GROUP BYs". The logic inside RunExtractAggregatesMutator's Var case was intended to fix top-level Vars inside subqueries in the targetlist, but also incorrectly fixed top-level Vars in subqueries inside of aggregates.
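The distinction above can be sketched with a small tree-walking mutator. This is an illustrative Python stand-in, not the actual GPDB C code: the `Var` and `Aggref` classes here are simplified mock-ups of PostgreSQL's node types, and `fix_vars` only models the relevant control flow (track whether the walk is inside an aggregate, and rewrite only top-level Vars).

```python
# Sketch of the bug class: a mutator that rewrites top-level Vars in a
# subquery targetlist must not also rewrite Vars nested inside aggregates.
# Var and Aggref are simplified stand-ins for PostgreSQL node types.

class Var:
    def __init__(self, name):
        self.name = name

class Aggref:
    def __init__(self, arg):
        self.arg = arg

def fix_vars(node, inside_aggregate=False):
    """Return a rewritten node; only top-level Vars get the 'fixed' marker."""
    if isinstance(node, Var):
        # The missed case: without the inside_aggregate check, Vars nested
        # inside aggregates were (incorrectly) rewritten too.
        if not inside_aggregate:
            return Var(node.name + "_fixed")
        return node
    if isinstance(node, Aggref):
        # Descend, but remember that everything below is inside an aggregate.
        return Aggref(fix_vars(node.arg, inside_aggregate=True))
    return node
```

Under this sketch, a top-level `Var("a")` in the targetlist is rewritten, while `Var("b")` inside an `Aggref` is left untouched, which is the behavior the fix restores.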
A Motion node may have a subplan whose targetlist needs to be expanded to ensure that all entries of the subplan's hashExpr appear in its targetlist. In that case the Motion node should also update its own targetlist; otherwise the targetlist mismatch causes the Motion node to parse tuples (especially MemTuples) with the wrong tuple description, producing incorrect results or even a crash.

For a subplan, if we can obtain an Expr for a distkey column from the targetlist, that Expr is either in the targetlist itself or references Vars in the targetlist. The old logic was: if the Expr is valid, an Expr that merely references Vars was also added to the targetlist. For example, given a subplan with targetlist (c1) and a locus whose distkey is (c1::float8), the returned Expr references Vars already in the targetlist, yet the old code extended the targetlist to (c1, c1::float8). That extension is unnecessary: the Expr can be evaluated from the Vars, and the extra column only forces the Motion to transfer more data per tuple.

Backported from master a565622.

Co-authored-by: Adam Lee <ali@pivotal.io>
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
Reviewed-by: Richard Guo <rguo@pivotal.io>
Reviewed-by: Daniel Gustafsson <dgustafsson@pivotal.io>
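The targetlist decision described above can be sketched as follows. This is an illustrative Python model, not the planner's C code: `referenced_vars` is a toy parser that only understands a bare column name or a single cast like `c1::float8`, and `plan_targetlist` models just the "skip the extension when the expression is evaluable from existing Vars" rule.

```python
# Sketch: deciding whether a Motion subplan's targetlist must be extended
# for a distribution-key expression. If the expression only references
# Vars already present, it can be evaluated from them and need not be
# appended (the unnecessary extension the commit message describes).

def referenced_vars(expr):
    """Return the set of column names an expression reads.
    Toy parser: an expression is a bare column ('c1') or a cast ('c1::float8')."""
    return {expr.split("::")[0]}

def plan_targetlist(targetlist, distkey_exprs):
    """Old behavior appended exprs like 'c1::float8' even when 'c1' was
    already in the targetlist; this models the fixed behavior."""
    result = list(targetlist)
    for expr in distkey_exprs:
        if referenced_vars(expr) <= set(targetlist):
            continue  # evaluable from existing Vars; do not extend
        result.append(expr)
    return result
```

With targetlist `["c1"]` and distkey `["c1::float8"]`, the fixed rule leaves the targetlist as `["c1"]`, whereas a genuinely new column would still be appended.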
darthunix approved these changes on Jul 1, 2019.
RekGRpth pushed a commit that referenced this pull request on Dec 3, 2025.
When a coordinator backend encounters an OOM error during a distributed query, it triggers a cleanup sequence that can execute two or more times, causing a use-after-free crash. The sequence of events is as follows: during normal execution cleanup in `standard_ExecutorEnd()`, the system calls `cdbdisp_destroyDispatcherState()` to clean up the dispatcher state. This function frees the results array, including the `segdbDesc` pointers that reference executor backend connections. However, an OOM error may occur inside this already-running cleanup, because the cleanup path itself still tries to allocate memory. That error leads to `AbortTransaction()`, with `cdbcomponent_recycleIdleQE()` called later. The abort calls `cdbdisp_cleanupDispatcherHandle()` to perform the cleanup operations a second time. This second cleanup attempt, in `checkDispatchResult()`, tries to read data from executor backends through the `segdbDesc` pointers that were already freed during the first cleanup in order to cancel the query, resulting in a segmentation fault when dereferencing invalid memory. Additionally, the `numActiveQEs` counter can become negative during reentrant calls to `cdbcomponent_recycleIdleQE()`, causing assertion failures.

When a backend encounters an OOM error, it throws an error and aborts the transaction. If the backend happens to be a coordinator, it also first attempts to cancel the distributed query by reading results from executor backends. This is futile: the gang is discarded anyway whenever `ERRCODE_GP_MEMPROT_KILL` is encountered, which is a sign of OOM.

Fixes:
- Move the `numActiveQEs` decrement after a `goto`.
- Prevent dispatcher handle cleanup from cancelling the dispatch in case of OOM. Cancellation is instead done by `DisconnectAndDestroyAllGangs()`, based on the new global variable.
- Avoid the `ABORT` dispatch for distributed transactions when the new flag is active. Without a response from the coordinator, the segments will eventually trigger `ABORT` themselves.
- Whether a gang is destroyed depends on the error code. Specify the error code for runaway cleaner cancellations, so they are treated equivalently to OOM conditions.
- Unit tests are modified to avoid mocking `in_oom_error_trouble()` for every `palloc()` call.
- The test makes sure there are no `palloc()` calls at all while the server is handling an OOM condition.

Ticket: ADBDEV-7916

Co-authored-by: Vladimir Sarmin <v.sarmin@arenadata.io>
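The guard described in the fixes above can be modeled with a small sketch. This is illustrative Python, not the actual C patch: `Conn` stands in for a `segdbDesc` connection descriptor, `DispatcherState` for the dispatcher state, and the `in_oom_cleanup` flag for the new global variable that makes a reentrant cleanup skip the cancel step that would dereference freed connections.

```python
# Sketch of the reentrancy guard: the second (abort-path) cleanup must not
# touch connection descriptors freed by the first cleanup, and in the OOM
# case the cancel step is skipped entirely.

class Conn:
    """Stand-in for an executor backend connection (segdbDesc)."""
    def __init__(self):
        self.cancelled = False

    def cancel(self):
        self.cancelled = True

class DispatcherState:
    def __init__(self, connections):
        self.connections = connections  # freed (set to None) on destroy
        self.in_oom_cleanup = False     # models the new global flag

    def destroy(self, oom=False):
        if oom:
            self.in_oom_cleanup = True
        if self.connections is None:
            return  # already destroyed; a reentrant call is a no-op
        if not self.in_oom_cleanup:
            # Normal path: cancel outstanding work before freeing.
            for conn in self.connections:
                conn.cancel()
        # Free the results array; the descriptors are invalid after this,
        # so any later cleanup must take the early return above.
        self.connections = None
```

Calling `destroy(oom=True)` twice is safe in this model: the first call frees the connections without the futile cancel, and the second call returns immediately instead of dereferencing freed state.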