Merged
4 changes: 2 additions & 2 deletions config/orca.m4
@@ -40,10 +40,10 @@ AC_RUN_IFELSE([AC_LANG_PROGRAM([[
#include <string.h>
]],
[
-return strncmp("3.50.", GPORCA_VERSION_STRING, 5);
+return strncmp("3.52.", GPORCA_VERSION_STRING, 5);
])],
[AC_MSG_RESULT([[ok]])],
-[AC_MSG_ERROR([Your ORCA version is expected to be 3.50.XXX])]
+[AC_MSG_ERROR([Your ORCA version is expected to be 3.52.XXX])]
)
AC_LANG_POP([C++])
])# PGAC_CHECK_ORCA_VERSION
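The m4 probe above succeeds only when the first five characters of `GPORCA_VERSION_STRING` match the expected `"3.52."` prefix, so any 3.52.x patch release is accepted while the patch component is ignored. A minimal Python sketch of the same prefix check (the version strings below are hypothetical examples, not values read from an actual ORCA build):

```python
def orca_version_ok(version_string: str, expected_prefix: str = "3.52.") -> bool:
    # Mirrors strncmp("3.52.", GPORCA_VERSION_STRING, 5) == 0: only the
    # first five characters (major.minor plus the trailing dot) are
    # compared, so the patch component never affects the result.
    return version_string.startswith(expected_prefix)

print(orca_version_ok("3.52.2"))  # → True: any 3.52.x passes the check
print(orca_version_ok("3.50.0"))  # → False: configure would abort here
```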
8 changes: 4 additions & 4 deletions configure
@@ -582,8 +582,8 @@ MAKEFLAGS=
# Identity of this package.
PACKAGE_NAME='Greenplum Database'
PACKAGE_TARNAME='greenplum-database'
-PACKAGE_VERSION='5.20.0'
-PACKAGE_STRING='Greenplum Database 5.20.0'
+PACKAGE_VERSION='5.20.1'
+PACKAGE_STRING='Greenplum Database 5.20.1'
PACKAGE_BUGREPORT='support@greenplum.org'
PACKAGE_URL=''

@@ -12606,7 +12606,7 @@ int
main ()
{

-return strncmp("3.50.", GPORCA_VERSION_STRING, 5);
+return strncmp("3.52.", GPORCA_VERSION_STRING, 5);

;
return 0;
@@ -12616,7 +12616,7 @@ if ac_fn_cxx_try_run "$LINENO"; then :
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: ok" >&5
$as_echo "ok" >&6; }
else
-as_fn_error $? "Your ORCA version is expected to be 3.50.XXX" "$LINENO" 5
+as_fn_error $? "Your ORCA version is expected to be 3.52.XXX" "$LINENO" 5

fi
rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \
2 changes: 1 addition & 1 deletion depends/conanfile_orca.txt
@@ -1,5 +1,5 @@
[requires]
-orca/v3.50.0@gpdb/stable
+orca/v3.52.0@gpdb/stable

[imports]
include, * -> build/include
2 changes: 1 addition & 1 deletion gpAux/releng/releng.mk
@@ -120,7 +120,7 @@ sync_tools: opt_write_test /opt/releng/apache-ant
-Divyrepo.user=$(IVYREPO_USER) -Divyrepo.passwd="$(IVYREPO_PASSWD)" -quiet resolve);

ifeq "$(findstring aix,$(BLD_ARCH))" ""
-LD_LIBRARY_PATH='' wget --no-check-certificate -q -O - https://github.com/greenplum-db/gporca/releases/download/v3.50.0/bin_orca_centos5_release.tar.gz | tar zxf - -C $(BLD_TOP)/ext/$(BLD_ARCH)
+LD_LIBRARY_PATH='' wget --no-check-certificate -q -O - https://github.com/greenplum-db/gporca/releases/download/v3.52.0/bin_orca_centos5_release.tar.gz | tar zxf - -C $(BLD_TOP)/ext/$(BLD_ARCH)
endif

clean_tools: opt_write_test
@@ -22,7 +22,7 @@
<li><a href="/5200/pxf/about_pxf_dir.html" format="markdown">About the PXF Installation and Configuration Directories</a></li>
<li><a href="/5200/pxf/install_java.html" format="markdown">Installing Java for PXF</a></li>
<li><a href="/5200/pxf/init_pxf.html" format="markdown">Initializing PXF</a></li>
-<li><a href="/5200/pxf/cfg_server.html" format="markdown">About PXF Server Configuration</a></li>
+<li><a href="/5200/pxf/cfg_server.html" format="markdown">Configuring PXF Servers</a></li>
<li class="has_submenu">
<a href="/5200/pxf/client_instcfg.html" format="markdown">Configuring PXF Hadoop Connectors (Optional)</a>
<ul>
5 changes: 3 additions & 2 deletions gpdb-doc/dita/admin_guide/managing/gpcopy-migrate.xml
@@ -54,8 +54,9 @@
cluster to a Greenplum Database 5.9 or later cluster. However Greenplum Database 4.3.26 and
later do not include the actual <codeph>gpcopy</codeph> utility. You must manually copy the
<codeph>gpcopy</codeph> utility from your version 5.9 or later cluster into the older
-version cluster to migrate data.. For
-example:<codeblock>$ cp /usr/local/greenplum-db-5.8.0/bin/gpcopy /usr/local/greenplum-db-4.3.26.0/bin/</codeblock></p>
+version cluster to migrate data. For
+example:<codeblock>$ cp /usr/local/greenplum-db-5.9.0/bin/gpcopy /usr/local/greenplum-db-4.3.26.0/bin/</codeblock></p>
+<p><codeph>gpcopy</codeph> does not currently support SSL encryption for its connections.</p>
</body>
</topic>
<topic id="topic_qwl_2rp_zdb">
2 changes: 1 addition & 1 deletion gpdb-doc/dita/admin_guide/wlmgmt_intro.xml
@@ -81,7 +81,7 @@

<sectiondiv>
<p><b>Configuring vm.overcommit_ratio when Resource Group-Based Resource Management is Active</b></p>
-<p>When resource group-based resource management is active, the operating system <codeph>vm.overcommit_ratio</codeph> default value is a good starting point. Tune as necessary. If your memory utilization is too low, increase the value; if your memory or swap usage is too high, decrease the setting.</p>
+<p>When resource group-based resource management is active, tune the operating system <codeph>vm.overcommit_ratio</codeph> as necessary. If your memory utilization is too low, increase the value; if your memory or swap usage is too high, decrease the setting.</p>
</sectiondiv>
<sectiondiv>
<p><b>Configuring vm.overcommit_ratio when Resource Queue-Based Resource Management is Active</b></p>
84 changes: 63 additions & 21 deletions gpdb-doc/dita/admin_guide/workload_mgmt_resgroups.xml
@@ -150,13 +150,13 @@
</row>
<row>
<entry colname="col1">MEMORY_LIMIT</entry>
-<entry colname="col2">The percentage of memory resources available to this resource
-group.</entry>
+<entry colname="col2">The percentage of reserved memory resources available to
+this resource group.</entry>
</row>
<row>
<entry colname="col1">MEMORY_SHARED_QUOTA</entry>
-<entry colname="col2">The percentage of memory to share across transactions submitted
-in this resource group.</entry>
+<entry colname="col2">The percentage of reserved memory to share across transactions
+submitted in this resource group.</entry>
</row>
<row>
<entry colname="col1">MEMORY_SPILL_RATIO</entry>
@@ -380,17 +380,22 @@
<codeblock>
rg_perseg_mem = ((RAM * (vm.overcommit_ratio / 100) + SWAP) * gp_resource_group_memory_limit) / num_active_primary_segments</codeblock>
</p>
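To make the `rg_perseg_mem` formula above concrete, here is an illustrative calculation using hypothetical host values (64 GB RAM, 16 GB swap, `vm.overcommit_ratio` of 95, `gp_resource_group_memory_limit` of 0.7, and 8 active primary segments); these numbers are examples for the arithmetic only, not recommended or default settings:

```python
# Hypothetical host configuration; substitute your own values.
ram_gb = 64.0
swap_gb = 16.0
vm_overcommit_ratio = 95.0
gp_resource_group_memory_limit = 0.7
num_active_primary_segments = 8

# rg_perseg_mem = ((RAM * (vm.overcommit_ratio / 100) + SWAP)
#                  * gp_resource_group_memory_limit) / num_active_primary_segments
rg_perseg_mem_gb = ((ram_gb * (vm_overcommit_ratio / 100.0) + swap_gb)
                    * gp_resource_group_memory_limit) / num_active_primary_segments

print(f"{rg_perseg_mem_gb:.2f} GB per segment")  # → 6.72 GB per segment
```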
-<p>Each resource group reserves a percentage of the segment memory for resource management.
+<p>Each resource group may reserve a percentage of the segment memory for resource management.
You identify this percentage via the <codeph>MEMORY_LIMIT</codeph> value that you specify
when you create the resource group. The minimum <codeph>MEMORY_LIMIT</codeph> percentage you
-can specify for a resource group is 1, the maximum is 100.</p>
+can specify for a resource group is 0, the maximum is 100. When <codeph>MEMORY_LIMIT</codeph>
+is 0, Greenplum Database reserves no memory for the resource group, but uses resource
+group global shared memory to fulfill all memory requests in the group. Refer to
+<xref href="#topic833glob" type="topic" format="dita"/>
+for more information about resource group global shared memory.</p>
<p>The sum of <codeph>MEMORY_LIMIT</codeph>s specified for all resource groups that you define
in your Greenplum Database cluster must not exceed 100.</p>
</body>
<topic id="mem_roles" xml:lang="en">
<title>Additional Memory Limits for Role-based Resource Groups</title>
<body>
-<p>The memory reserved by a resource group for roles is further divided into fixed and
+<p>If resource group memory is reserved for roles (non-zero
+<codeph>MEMORY_LIMIT</codeph>), the memory is further divided into fixed and
shared components. The <codeph>MEMORY_SHARED_QUOTA</codeph> value that you specify when
you create the resource group identifies the percentage of reserved resource group memory
that may be shared among the currently running transactions. This memory is allotted on a
@@ -399,9 +404,10 @@ rg_perseg_mem = ((RAM * (vm.overcommit_ratio / 100) + SWAP) * gp_resource_group_
<p>The minimum <codeph>MEMORY_SHARED_QUOTA</codeph> that you can specify is 0, the maximum
is 100. The default <codeph>MEMORY_SHARED_QUOTA</codeph> is 20.</p>
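Putting <codeph>MEMORY_LIMIT</codeph>, <codeph>MEMORY_SHARED_QUOTA</codeph>, and <codeph>CONCURRENCY</codeph> together, the following sketch shows how a group's reserved memory divides into a shared pool and fixed per-slot allotments. All values are hypothetical (6.72 GB of resource-group memory per segment, MEMORY_LIMIT=10, MEMORY_SHARED_QUOTA=20, CONCURRENCY=4), chosen only to illustrate the arithmetic:

```python
# Hypothetical values; substitute your own cluster's numbers.
rg_perseg_mem_gb = 6.72   # resource-group-managed memory on one segment
memory_limit = 10         # MEMORY_LIMIT: percent of segment memory this group reserves
memory_shared_quota = 20  # MEMORY_SHARED_QUOTA: percent of group memory shared
concurrency = 4           # CONCURRENCY: number of transaction slots

group_mem_gb = rg_perseg_mem_gb * memory_limit / 100  # memory reserved by the group
shared_gb = group_mem_gb * memory_shared_quota / 100  # shared across transactions
fixed_gb = group_mem_gb - shared_gb                   # fixed component
per_slot_gb = fixed_gb / concurrency                  # guaranteed to each transaction slot

print(f"group={group_mem_gb:.3f} GB, shared={shared_gb:.4f} GB, "
      f"per-slot={per_slot_gb:.4f} GB")
```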
<p>As mentioned previously, <codeph>CONCURRENCY</codeph> identifies the maximum number of
-concurrently running transactions permitted in a resource group for roles. The fixed
-memory reserved by a resource group is divided into <codeph>CONCURRENCY</codeph> number of
-transaction slots. Each slot is allocated a fixed, equal amount of resource group memory.
+concurrently running transactions permitted in a resource group for roles. If fixed
+memory is reserved by a resource group (non-zero <codeph>MEMORY_LIMIT</codeph>),
+it is divided into <codeph>CONCURRENCY</codeph> number of
+transaction slots. Each slot is allocated a fixed, equal amount of the resource group memory.
Greenplum Database guarantees this fixed memory to each transaction. <fig
id="fig_py5_1sl_wlrg">
<title>Resource Group Memory Allotments</title>
@@ -425,7 +431,7 @@ rg_perseg_mem = ((RAM * (vm.overcommit_ratio / 100) + SWAP) * gp_resource_group_
<p>Resource group global shared memory is available only to resource groups that you
configure with the <codeph>vmtracker</codeph> memory auditor.</p>
<p>When available, Greenplum Database allocates global shared memory to a transaction
-after first allocating slot and resource group shared memory. Greenplum Database
+after first allocating slot and resource group shared memory (if applicable). Greenplum Database
allocates resource group global shared memory to transactions on a first-come
first-served basis.</p>
<note>Greenplum Database tracks, but does not actively monitor, transaction memory usage
@@ -466,24 +472,56 @@ rg_perseg_mem = ((RAM * (vm.overcommit_ratio / 100) + SWAP) * gp_resource_group_
transaction spills to disk. Greenplum Database uses the
<codeph>MEMORY_SPILL_RATIO</codeph> to determine the initial memory to allocate to a
transaction.</p>
-<p> The minimum <codeph>MEMORY_SPILL_RATIO</codeph> percentage that you can specify for a
-resource group is 0. The maximum is 100. The default <codeph>MEMORY_SPILL_RATIO</codeph>
-is 20.</p>
-<p>You define the <codeph>MEMORY_SPILL_RATIO</codeph> when you create a resource group for
-roles. You can selectively set this limit on a per-query basis at the session level with
+<p>You can specify an integer percentage value from 0 to 100 inclusive for
+<codeph>MEMORY_SPILL_RATIO</codeph>. The default <codeph>MEMORY_SPILL_RATIO</codeph>
+is 20.</p>
+<p>When <codeph>MEMORY_SPILL_RATIO</codeph> is 0, Greenplum Database uses the
+<xref href="../ref_guide/config_params/guc-list.xml#statement_mem" format="dita"><codeph>statement_mem</codeph></xref>
+server configuration parameter value to control initial query operator memory.</p>
+<note>When you set <codeph>MEMORY_LIMIT</codeph> to 0,
+<codeph>MEMORY_SPILL_RATIO</codeph> must also be set to 0.</note>
+<p>You can selectively set the <codeph>MEMORY_SPILL_RATIO</codeph> on a per-query
+basis at the session level with
the <codeph><xref href="../ref_guide/config_params/guc-list.xml#memory_spill_ratio"
type="topic"/></codeph> server configuration parameter.</p>
<section id="topic833low" xml:lang="en">
<title>memory_spill_ratio and Low Memory Queries </title>
-<p>A low <codeph>memory_spill_ratio</codeph> setting (for example, in the 0-2% range)
+<p>A low <codeph>statement_mem</codeph> setting (for example, in the 10MB range)
 has been shown to increase the performance of queries with low memory requirements.
-Use the <codeph>memory_spill_ratio</codeph> server configuration parameter to override
-the setting on a per-query basis. For example:
-<codeblock>SET memory_spill_ratio=0;</codeblock></p>
+Use the <codeph>memory_spill_ratio</codeph> and <codeph>statement_mem</codeph>
+server configuration parameters to override the setting on a per-query basis.
+For example:
+<codeblock>SET memory_spill_ratio=0;
+SET statement_mem='10 MB';</codeblock></p>
</section>
</body>
</topic>
</topic>
+<topic id="topic833fvs" xml:lang="en">
+<title>About Using Reserved Resource Group Memory vs. Using Resource Group Global Shared Memory</title>
+<body>
+<p>When you do not reserve memory for a resource group (<codeph>MEMORY_LIMIT</codeph>
+and <codeph>MEMORY_SPILL_RATIO</codeph> are set to 0):</p><ul>
+<li>It increases the size of the resource group global shared memory pool.</li>
+<li>The resource group functions similarly to a resource queue, using the
+<xref href="../ref_guide/config_params/guc-list.xml#statement_mem" format="dita"><codeph>statement_mem</codeph></xref>
+server configuration parameter value to control initial query operator memory.</li>
+<li>Any query submitted in the resource group competes for resource group global
+shared memory on a first-come, first-served basis with queries running in other
+groups.</li>
+<li>There is no guarantee that Greenplum Database will be able to allocate memory
+for a query running in the resource group. The risk of a query in the group
+encountering an out of memory (OOM) condition increases when there are many
+concurrent queries consuming memory from the resource group global shared
+memory pool at the same time.</li>
+</ul>
+<p>To reduce the risk of OOM for a query running in an important resource group,
+consider reserving some fixed memory for the group. While reserving fixed memory
+for a group reduces the size of the resource group global shared memory pool,
+this may be a fair tradeoff to reduce the risk of encountering an OOM condition
+in a query running in a critical resource group.</p>
+</body>
+</topic>
<topic id="topic833cons" xml:lang="en">
<title>Other Memory Considerations</title>
<body>
@@ -756,7 +794,11 @@ gpstart
<p id="iz152723">When you create a resource group for a role, you must provide
<codeph>CPU_RATE_LIMIT</codeph> or <codeph>CPUSET</codeph> and
<codeph>MEMORY_LIMIT</codeph> limit values. These limits identify the percentage of
-Greenplum Database resources to allocate to this resource group. For example, to create a
+Greenplum Database resources to allocate to this resource group. You specify a
+<codeph>MEMORY_LIMIT</codeph> to reserve a fixed amount of memory for the resource
+group. If you specify a <codeph>MEMORY_LIMIT</codeph> of 0, Greenplum Database
+uses global shared memory to fulfill all memory requirements for the resource group.</p>
+<p>For example, to create a
resource group named <i>rgroup1</i> with a CPU limit of 20 and a memory limit of 25:</p>
<p>
<codeblock>=# CREATE RESOURCE GROUP <i>rgroup1</i> WITH (CPU_RATE_LIMIT=20, MEMORY_LIMIT=25);