```
-2. Choose a name for the JDBC server. You will provide the name to Greenplum users that you choose to allow to reference tables in the external SQL database as the configured user.
+2. Choose a name for the JDBC server. You will provide this name to the Cloudberry users that you allow to reference tables in the external SQL database as the configured user.
**Note**: The server name `default` is reserved.
@@ -401,7 +399,7 @@ In this procedure, you name and add a PXF JDBC server configuration for a Postgr
```
6. Save your changes and exit the editor.
-7. Use the `pxf cluster sync` command to copy the new server configuration to the Greenplum Database cluster:
+7. Use the `pxf cluster sync` command to copy the new server configuration to the Apache Cloudberry cluster:
``` shell
gpadmin@coordinator$ pxf cluster sync
diff --git a/docs/content/jdbc_pxf.html.md.erb b/docs/content/jdbc_pxf.html.md.erb
index e562a642..42898d2c 100644
--- a/docs/content/jdbc_pxf.html.md.erb
+++ b/docs/content/jdbc_pxf.html.md.erb
@@ -32,9 +32,9 @@ This section describes how to use the PXF JDBC connector to access data in an ex
Before you access an external SQL database using the PXF JDBC connector, ensure that:
- You can identify the PXF runtime configuration directory (`$PXF_BASE`).
-- You have configured PXF, and PXF is running on each Greenplum Database host. See [Configuring PXF](instcfg_pxf.html) for additional information.
-- Connectivity exists between all Greenplum Database hosts and the external SQL database.
-- You have configured your external SQL database for user access from all Greenplum Database hosts.
+- You have configured PXF, and PXF is running on each Apache Cloudberry host. See [Configuring PXF](instcfg_pxf.html) for additional information.
+- Connectivity exists between all Apache Cloudberry hosts and the external SQL database.
+- You have configured your external SQL database for user access from all Apache Cloudberry hosts.
- You have registered any JDBC driver JAR dependencies.
- (Recommended) You have created one or more named PXF JDBC connector server configurations as described in [Configuring the PXF JDBC Connector](jdbc_cfg.html).
@@ -68,11 +68,25 @@ PXF includes version 1.1.0 of the Hive JDBC driver. This version does **not** su
| BYTEA | N/A | N/A | Read, Write |
## Accessing an External SQL Database
-The PXF JDBC connector supports a single profile named `jdbc`. You can both read data from and write data to an external SQL database table with this profile. You can also use the connector to run a static, named query in external SQL database and read the results.
+The PXF JDBC connector supports a single profile named `jdbc` for external tables, and a foreign data wrapper named `jdbc_pxf_fdw` for foreign tables. You can both read data from and write data to an external SQL database table with this profile. You can also use the connector to run a static, named query in the external SQL database and read the results.
-To access data in a remote SQL database, you create a readable or writable Greenplum Database external table that references the remote database table. The Greenplum Database external table and the remote database table or query result tuple must have the same definition; the column names and types must match.
+To access data in a remote SQL database, you create a readable or writable Apache Cloudberry external table that references the remote database table. The Apache Cloudberry external table and the remote database table or query result tuple must have the same definition; the column names and types must match.
-Use the following syntax to create a Greenplum Database external table that references a remote SQL database table or a query result from the remote database:
+Use the following syntax to create an Apache Cloudberry foreign table that references a remote SQL database table or a query result from the remote database:
+
+``` sql
+CREATE SERVER "my_server" FOREIGN DATA WRAPPER jdbc_pxf_fdw;
+CREATE USER MAPPING FOR CURRENT_USER SERVER "my_server";
+CREATE FOREIGN TABLE <table_name>
+    ( <column_name> <data_type> [, ...] | LIKE <other_table> )
+    SERVER "my_server"
+    OPTIONS (
+      resource '<external-table-name>|query:<query_name>'
+      [, <custom-option> '<value>' [, ...]]
+    );
+```
+
+Or create an Apache Cloudberry external table:
CREATE [READABLE | WRITABLE] EXTERNAL TABLE <table_name>
@@ -82,7 +96,7 @@ FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import'|'pxfwritable_export');
-The specific keywords and values used in the Greenplum Database [CREATE EXTERNAL TABLE](https://docs.vmware.com/en/VMware-Greenplum/6/greenplum-database/ref_guide-sql_commands-CREATE_EXTERNAL_TABLE.html) command are described in the table below.
+The specific keywords and values used in the Apache Cloudberry [CREATE FOREIGN TABLE](https://cloudberry.apache.org/docs/sql-stmts/create-foreign-table) and [CREATE EXTERNAL TABLE](https://cloudberry.apache.org/docs/sql-stmts/create-external-table/) commands are described in the table below.
| Keyword | Value |
|-------|-------------------------------------|
@@ -97,7 +111,7 @@ The specific keywords and values used in the Greenplum Database [CREATE EXTERNAL
### JDBC Custom Options
-You include JDBC connector custom options in the `LOCATION` URI, prefacing each option with an ampersand `&`. `CREATE EXTERNAL TABLE` \s supported by the `jdbc` profile include:
+You include JDBC connector custom options in the foreign table `OPTIONS` clause, or in the external table `LOCATION` URI (prefacing each option with an ampersand `&`). The `CREATE FOREIGN TABLE` / `CREATE EXTERNAL TABLE` custom options supported by the `jdbc` profile include:
| Option Name | Operation | Description
|---------------|------------|--------|
@@ -111,7 +125,6 @@ You include JDBC connector custom options in the `LOCATION` URI, prefacing each
| INTERVAL | Read | Required when `PARTITION_BY` is specified and of the `int`, `bigint`, or `date` type. The interval, `<interval-value>[:<interval-unit>]`, of one fragment. Used with `RANGE` as a hint to aid the creation of partitions. Specify the size of the fragment in `<interval-value>`. If the partition column is a `date` type, use the `<interval-unit>` to specify `year`, `month`, or `day`. PXF ignores `INTERVAL` when the `PARTITION_BY` column is of the `enum` type. |
| QUOTE_COLUMNS | Read | Controls whether PXF should quote column names when constructing an SQL query to the external database. Specify `true` to force PXF to quote all column names; PXF does not quote column names if any other value is provided. If `QUOTE_COLUMNS` is not specified (the default), PXF automatically quotes *all* column names in the query when *any* column name:
- includes special characters, or
- is mixed case and the external database does not support unquoted mixed case identifiers. |
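As a sketch of `QUOTE_COLUMNS` in use (the table, server, and column names here are illustrative, not from the examples in this guide), forcing PXF to quote all column names when the remote table uses mixed-case identifiers:

``` sql
CREATE EXTERNAL TABLE pxf_mixed_case (id int, "CustomerName" text)
  LOCATION ('pxf://public.mixed_tbl?PROFILE=jdbc&SERVER=pgserver&QUOTE_COLUMNS=true')
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
```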
-
#### Batching Insert Operations (Write)
*When the JDBC driver of the external SQL database supports it*, batching of `INSERT` operations may significantly increase performance.
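For example (illustrative table and server names; `BATCH_SIZE` is the custom option that maps to the `jdbc.statement.batchSize` property), a writable external table that batches 10,000 rows per `INSERT`:

``` sql
CREATE WRITABLE EXTERNAL TABLE pxf_writeto_pg (id int, amount numeric)
  LOCATION ('pxf://public.sales?PROFILE=jdbc&SERVER=pgserver&BATCH_SIZE=10000')
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_export');
```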
@@ -152,9 +165,9 @@ To deactivate or activate a thread pool and set the pool size, create the PXF ex
#### Partitioning (Read)
-The PXF JDBC connector supports simultaneous read access from PXF instances running on multiple Greenplum Database hosts to an external SQL table. This feature is referred to as partitioning. Read partitioning is not activated by default. To activate read partitioning, set the `PARTITION_BY`, `RANGE`, and `INTERVAL` custom options when you create the PXF external table.
+The PXF JDBC connector supports simultaneous read access from PXF instances running on multiple Apache Cloudberry hosts to an external SQL table. This feature is referred to as partitioning. Read partitioning is not activated by default. To activate read partitioning, set the `PARTITION_BY`, `RANGE`, and `INTERVAL` custom options when you create the PXF external table.
-PXF uses the `RANGE` and `INTERVAL` values and the `PARTITON_BY` column that you specify to assign specific data rows in the external table to PXF instances running on the Greenplum Database segment hosts. This column selection is specific to PXF processing, and has no relationship to a partition column that you may have specified for the table in the external SQL database.
+PXF uses the `RANGE` and `INTERVAL` values and the `PARTITION_BY` column that you specify to assign specific data rows in the external table to PXF instances running on the Apache Cloudberry segment hosts. This column selection is specific to PXF processing, and has no relationship to a partition column that you may have specified for the table in the external SQL database.
Example JDBC `<custom-option>` substrings that identify partitioning parameters:
@@ -176,9 +189,9 @@ For example, when a user queries a PXF external table created with a `LOCATION`
- Fragment 4: WHERE (id >= 5) - implicitly-generated fragment for RANGE end-bounded interval
- Fragment 5: WHERE (id IS NULL) - implicitly-generated fragment
-PXF distributes the fragments among Greenplum Database segments. A PXF instance running on a segment host spawns a thread for each segment on that host that services a fragment. If the number of fragments is less than or equal to the number of Greenplum segments configured on a segment host, a single PXF instance may service all of the fragments. Each PXF instance sends its results back to Greenplum Database, where they are collected and returned to the user.
+PXF distributes the fragments among Apache Cloudberry segments. A PXF instance running on a segment host spawns a thread for each segment on that host that services a fragment. If the number of fragments is less than or equal to the number of Cloudberry segments configured on a segment host, a single PXF instance may service all of the fragments. Each PXF instance sends its results back to Apache Cloudberry, where they are collected and returned to the user.
-When you specify the `PARTITION_BY` option, tune the `INTERVAL` value and unit based upon the optimal number of JDBC connections to the target database and the optimal distribution of external data across Greenplum Database segments. The `INTERVAL` low boundary is driven by the number of Greenplum Database segments while the high boundary is driven by the acceptable number of JDBC connections to the target database. The `INTERVAL` setting influences the number of fragments, and should ideally not be set too high nor too low. Testing with multiple values may help you select the optimal settings.
+When you specify the `PARTITION_BY` option, tune the `INTERVAL` value and unit based upon the optimal number of JDBC connections to the target database and the optimal distribution of external data across Apache Cloudberry segments. The `INTERVAL` low boundary is driven by the number of Apache Cloudberry segments while the high boundary is driven by the acceptable number of JDBC connections to the target database. The `INTERVAL` setting influences the number of fragments, and should ideally not be set too high nor too low. Testing with multiple values may help you select the optimal settings.
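A minimal sketch (table and server names are illustrative): partitioning a read of a remote table on an integer `id` column into fragments of 1,000,000 rows each:

``` sql
CREATE EXTERNAL TABLE pxf_sales_part (id bigint, amount numeric)
  LOCATION ('pxf://public.sales?PROFILE=jdbc&SERVER=pgserver&PARTITION_BY=id:int&RANGE=1:10000000&INTERVAL=1000000')
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
```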
## Examples
@@ -197,12 +210,12 @@ The PXF JDBC Connector allows you to specify a statically-defined query to run a
- You need to join several tables that all reside in the same external database.
- You want to perform complex aggregation closer to the data source.
- You would use, but are not allowed to create, a `VIEW` in the external database.
-- You would rather consume computational resources in the external system to minimize utilization of Greenplum Database resources.
+- You would rather consume computational resources in the external system to minimize utilization of Apache Cloudberry resources.
- You want to run a HIVE query and control resource utilization via YARN.
-The Greenplum Database administrator defines a query and provides you with the query name to use when you create the external table. Instead of a table name, you specify `query:` in the `CREATE EXTERNAL TABLE` `LOCATION` clause to instruct the PXF JDBC connector to run the static query named `` in the remote SQL database.
+The Apache Cloudberry administrator defines a query and provides you with the query name to use when you create the external table. Instead of a table name, you specify `query:<query_name>` in the `CREATE EXTERNAL TABLE` `LOCATION` clause to instruct the PXF JDBC connector to run the static query named `<query_name>` in the remote SQL database.
-PXF supports named queries only with readable external tables. You must create a unique Greenplum Database readable external table for each query that you want to run.
+PXF supports named queries only with readable external tables. You must create a unique Apache Cloudberry readable external table for each query that you want to run.
The names and types of the external table columns must exactly match the names, types, and order of the columns returned by the query result. If the query returns the results of an aggregation or other function, be sure to use the `AS` qualifier to specify a specific column name.
@@ -222,8 +235,14 @@ SELECT c.name, sum(o.amount) AS total, o.month
GROUP BY c.name, o.month
```
-This query returns tuples of type `(name text, total int, month int)`. If the `order_rpt` query is defined for the PXF JDBC server named `pgserver`, you could create a Greenplum Database external table to read these query results as follows:
+This query returns tuples of type `(name text, total int, month int)`. If the `order_rpt` query is defined for the PXF JDBC server named `pgserver`, you could create an Apache Cloudberry external or foreign table to read these query results as follows:
+``` sql
+CREATE FOREIGN TABLE orderrpt_frompg(name text, total int, month int)
+ SERVER pgserver
+  OPTIONS ( resource 'query:order_rpt', PARTITION_BY 'month:int', RANGE '1:13', INTERVAL '3' );
+```
+Or, as an external table:
``` sql
CREATE EXTERNAL TABLE orderrpt_frompg(name text, total int, month int)
LOCATION ('pxf://query:order_rpt?PROFILE=jdbc&SERVER=pgserver&PARTITION_BY=month:int&RANGE=1:13&INTERVAL=3')
@@ -238,7 +257,7 @@ The PXF JDBC connector automatically applies column projection and filter pushdo
## Overriding the JDBC Server Configuration with DDL
-You can override certain properties in a JDBC server configuration for a specific external database table by directly specifying the custom option in the `CREATE EXTERNAL TABLE` `LOCATION` clause:
+You can override certain properties in a JDBC server configuration for a specific external database table by directly specifying the custom option in the `CREATE SERVER` `OPTIONS` clause or in the `CREATE EXTERNAL TABLE` `LOCATION` clause:
| Custom Option Name | jdbc-site.xml Property Name |
|----------------------|-----------------------------|
@@ -251,18 +270,28 @@ You can override certain properties in a JDBC server configuration for a specifi
| QUERY_TIMEOUT | jdbc.statement.queryTimeout |
| DATE_WIDE_RANGE | jdbc.date.wideRange |
-Example JDBC connection strings specified via custom options:
-
-``` pre
-&JDBC_DRIVER=org.postgresql.Driver&DB_URL=jdbc:postgresql://pgserverhost:5432/pgtestdb&USER=pguser1&PASS=changeme
-&JDBC_DRIVER=com.mysql.jdbc.Driver&DB_URL=jdbc:mysql://mysqlhost:3306/testdb&USER=user1&PASS=changeme
+For foreign tables:
+```sql
+CREATE SERVER "pgserver" FOREIGN DATA WRAPPER jdbc_pxf_fdw
+ OPTIONS (
+ jdbc_driver 'org.postgresql.Driver',
+ db_url 'jdbc:postgresql://pgserverhost:5432/pgtestdb',
+ user 'pxfuser1',
+ pass 'changeme'
+ );
+CREATE USER MAPPING FOR CURRENT_USER SERVER "pgserver";
+CREATE FOREIGN TABLE pxf_pgtbl(name varchar, age int)
+ SERVER "pgserver"
+ OPTIONS (resource 'public.forpxf_table1');
```
-For example:
-CREATE EXTERNAL TABLE pxf_pgtbl(name text, orders int)
- LOCATION ('pxf://public.forpxf_table1?PROFILE=jdbc&JDBC_DRIVER=org.postgresql.Driver&DB_URL=jdbc:postgresql://pgserverhost:5432/pgtestdb&USER=pxfuser1&PASS=changeme')
-FORMAT 'CUSTOM' (FORMATTER='pxfwritable_export');
+For external tables:
+```sql
+CREATE EXTERNAL TABLE pxf_pgtbl(name text, orders int)
+ LOCATION ('pxf://public.forpxf_table1?PROFILE=jdbc&JDBC_DRIVER=org.postgresql.Driver&DB_URL=jdbc:postgresql://pgserverhost:5432/pgtestdb&USER=pxfuser1&PASS=changeme')
+FORMAT 'CUSTOM' (FORMATTER='pxfwritable_export');
+```
Warning: Credentials that you provide in this manner are visible as part of the external table definition. Do not use this method of passing credentials in a production environment.
-Refer to [Configuration Property Precedence](cfg_server.html#override) for detailed information about the precedence rules that PXF uses to obtain configuration property settings for a Greenplum Database user.
+Refer to [Configuration Property Precedence](cfg_server.html#override) for detailed information about the precedence rules that PXF uses to obtain configuration property settings for an Apache Cloudberry user.
diff --git a/docs/content/jdbc_pxf_mysql.html.md.erb b/docs/content/jdbc_pxf_mysql.html.md.erb
index 017be284..2c0bc593 100644
--- a/docs/content/jdbc_pxf_mysql.html.md.erb
+++ b/docs/content/jdbc_pxf_mysql.html.md.erb
@@ -77,27 +77,27 @@ Perform the following steps to create a MySQL table named `names` in a database
You must create a JDBC server configuration for MySQL, download the MySQL driver JAR file to your system, copy the JAR file to the PXF user configuration directory, synchronize the PXF configuration, and then restart PXF.
-This procedure will typically be performed by the Greenplum Database administrator.
+This procedure will typically be performed by the Apache Cloudberry administrator.
-1. Log in to the Greenplum Database coordinator host:
+1. Log in to the Apache Cloudberry coordinator host:
``` shell
$ ssh gpadmin@<coordinator>
```
1. Download the MySQL JDBC driver and place it under `$PXF_BASE/lib`. If you [relocated $PXF_BASE](about_pxf_dir.html#movebase), make sure you use the updated location. You can download a MySQL JDBC driver from your preferred download location. The following example downloads the driver from Maven Central and places it under `$PXF_BASE/lib`:
- 1. If you did not relocate `$PXF_BASE`, run the following from the Greenplum coordinator:
+ 1. If you did not relocate `$PXF_BASE`, run the following from the Cloudberry coordinator:
```shell
- gpadmin@gcoord$ cd /usr/local/pxf-gp/lib
- gpadmin@coordinator$ wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.21/mysql-connector-java-8.0.21.jar
+ gpadmin@coordinator$ cd /usr/local/pxf-gp/lib
+ gpadmin@coordinator$ wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.26/mysql-connector-java-8.0.26.jar
```
- 2. If you relocated `$PXF_BASE`, run the following from the Greenplum coordinator:
+ 2. If you relocated `$PXF_BASE`, run the following from the Cloudberry coordinator:
```shell
gpadmin@coordinator$ cd $PXF_BASE/lib
- gpadmin@coordinator$ wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.21/mysql-connector-java-8.0.21.jar
+ gpadmin@coordinator$ wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.26/mysql-connector-java-8.0.26.jar
```
1. Synchronize the PXF configuration, and then restart PXF:
@@ -135,7 +135,7 @@ This procedure will typically be performed by the Greenplum Database administrat
```
-3. Synchronize the PXF server configuration to the Greenplum Database cluster:
+3. Synchronize the PXF server configuration to the Apache Cloudberry cluster:
``` shell
gpadmin@coordinator$ pxf cluster sync
@@ -153,6 +153,15 @@ Perform the following procedure to create a PXF external table that references t
FORMAT 'CUSTOM' (formatter='pxfwritable_import');
```
+    Or create a foreign table:
+ ``` sql
+ gpadmin=# CREATE SERVER "mysql" FOREIGN DATA WRAPPER jdbc_pxf_fdw;
+ gpadmin=# CREATE USER MAPPING FOR CURRENT_USER SERVER "mysql";
+ gpadmin=# CREATE FOREIGN TABLE names_in_mysql (id int, name text, last text)
+ SERVER "mysql"
+ OPTIONS ( resource 'names' );
+ ```
+
2. Display all rows of the `names_in_mysql` table:
``` sql
@@ -175,6 +184,7 @@ Perform the following procedure to insert some data into the `names` MySQL table
LOCATION('pxf://names?PROFILE=jdbc&SERVER=mysql')
FORMAT 'CUSTOM' (formatter='pxfwritable_export');
```
+    Or reuse the foreign table from the previous steps.
4. Insert some data into the `names_in_mysql_w` table. For example:
diff --git a/docs/content/jdbc_pxf_named.html.md.erb b/docs/content/jdbc_pxf_named.html.md.erb
index 3ee10b40..7804450b 100644
--- a/docs/content/jdbc_pxf_named.html.md.erb
+++ b/docs/content/jdbc_pxf_named.html.md.erb
@@ -82,11 +82,11 @@ Perform the following procedure to create PostgreSQL tables named `customers` an
## Configure the Named Query
-In this procedure you create a named query text file, add it to the `pgsrvcfg` JDBC server configuration, and synchronize the PXF configuration to the Greenplum Database cluster.
+In this procedure you create a named query text file, add it to the `pgsrvcfg` JDBC server configuration, and synchronize the PXF configuration to the Apache Cloudberry cluster.
-This procedure will typically be performed by the Greenplum Database administrator.
+This procedure will typically be performed by the Apache Cloudberry administrator.
-1. Log in to the Greenplum Database coordinator host:
+1. Log in to the Apache Cloudberry coordinator host:
``` shell
$ ssh gpadmin@<coordinator>
@@ -109,7 +109,7 @@ This procedure will typically be performed by the Greenplum Database administrat
4. Save the file and exit the editor.
-5. Synchronize these changes to the PXF configuration to the Greenplum Database cluster:
+5. Synchronize these changes to the PXF configuration to the Apache Cloudberry cluster:
``` shell
gpadmin@coordinator$ pxf cluster sync
@@ -117,7 +117,7 @@ This procedure will typically be performed by the Greenplum Database administrat
## Read the Query Results
-Perform the following procedure on your Greenplum Database cluster to create a PXF external table that references the query file that you created in the previous section, and then reads the query result data:
+Perform the following procedure on your Apache Cloudberry cluster to create a PXF external table that references the query file that you created in the previous section, and then reads the query result data:
1. Create the PXF external table specifying the `jdbc` profile. For example:
@@ -127,7 +127,21 @@ Perform the following procedure on your Greenplum Database cluster to create a P
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
```
- With this partitioning scheme, PXF will issue 4 queries to the remote SQL database, one query per quarter. Each query will return customer names and the total amount of all of their orders in a given month, aggregated per customer, per month, for each month of the target quarter. Greenplum Database will then combine the data into a single result set for you when you query the external table.
+    Or create a foreign table:
+ ``` sql
+ CREATE SERVER "pgsrvcfg" FOREIGN DATA WRAPPER jdbc_pxf_fdw;
+ CREATE USER MAPPING FOR CURRENT_USER SERVER "pgsrvcfg";
+ CREATE FOREIGN TABLE pxf_queryres_frompg (name text, city text, total int, month int)
+ SERVER "pgsrvcfg"
+ OPTIONS (
+ resource 'query:pg_order_report',
+      PARTITION_BY 'month:int',
+      RANGE '1:13',
+ INTERVAL '3'
+ );
+ ```
+
+ With this partitioning scheme, PXF will issue 4 queries to the remote SQL database, one query per quarter. Each query will return customer names and the total amount of all of their orders in a given month, aggregated per customer, per month, for each month of the target quarter. Apache Cloudberry will then combine the data into a single result set for you when you query the external table.
2. Display all rows of the query result:
@@ -154,7 +168,7 @@ Perform the following procedure on your Greenplum Database cluster to create a P
(2 rows)
```
- When you run this query, PXF requests and retrieves query results for only the `city` and `total` columns, reducing the amount of data sent back to Greenplum Database.
+ When you run this query, PXF requests and retrieves query results for only the `city` and `total` columns, reducing the amount of data sent back to Apache Cloudberry.
4. Provide additional filters and aggregations to filter the `total` in PostgreSQL:
@@ -170,5 +184,5 @@ Perform the following procedure on your Greenplum Database cluster to create a P
(2 rows)
```
- In this example, PXF will add the `WHERE` filter to the subquery. This filter is pushed to and run on the remote database system, reducing the amount of data that PXF sends back to Greenplum Database. The `GROUP BY` aggregation, however, is not pushed to the remote and is performed by Greenplum.
+ In this example, PXF will add the `WHERE` filter to the subquery. This filter is pushed to and run on the remote database system, reducing the amount of data that PXF sends back to Apache Cloudberry. The `GROUP BY` aggregation, however, is not pushed to the remote and is performed by Cloudberry.
diff --git a/docs/content/jdbc_pxf_postgresql.html.md.erb b/docs/content/jdbc_pxf_postgresql.html.md.erb
index 11520a69..d813adf4 100644
--- a/docs/content/jdbc_pxf_postgresql.html.md.erb
+++ b/docs/content/jdbc_pxf_postgresql.html.md.erb
@@ -75,16 +75,16 @@ Perform the following steps to create a PostgreSQL table named `forpxf_table1` i
With these privileges, `pxfuser1` can read from and write to the `forpxf_table1` table.
-7. Update the PostgreSQL configuration to allow user `pxfuser1` to access `pgtestdb` from each Greenplum Database host. This configuration is specific to your PostgreSQL environment. You will update the `/var/lib/pgsql/pg_hba.conf` file and then restart the PostgreSQL server.
+7. Update the PostgreSQL configuration to allow user `pxfuser1` to access `pgtestdb` from each Apache Cloudberry host. This configuration is specific to your PostgreSQL environment. You will update the `/var/lib/pgsql/pg_hba.conf` file and then restart the PostgreSQL server.
## Configure the JDBC Connector
You must create a JDBC server configuration for PostgreSQL and synchronize the PXF configuration. The PostgreSQL JAR file is bundled with PXF, so there is no need to manually download it.
-This procedure will typically be performed by the Greenplum Database administrator.
+This procedure will typically be performed by the Apache Cloudberry administrator.
-1. Log in to the Greenplum Database coordinator host:
+1. Log in to the Apache Cloudberry coordinator host:
``` shell
$ ssh gpadmin@<coordinator>
@@ -114,7 +114,7 @@ This procedure will typically be performed by the Greenplum Database administrat
```
-3. Synchronize the PXF server configuration to the Greenplum Database cluster:
+3. Synchronize the PXF server configuration to the Apache Cloudberry cluster:
``` shell
gpadmin@coordinator$ pxf cluster sync
@@ -132,6 +132,15 @@ Perform the following procedure to create a PXF external table that references t
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
```
+    Or create a foreign table:
+ ``` sql
+ gpadmin=# CREATE SERVER "pgsrvcfg" FOREIGN DATA WRAPPER jdbc_pxf_fdw;
+ gpadmin=# CREATE USER MAPPING FOR CURRENT_USER SERVER "pgsrvcfg";
+ gpadmin=# CREATE FOREIGN TABLE pxf_tblfrompg (name text, city text, total int, month int)
+ SERVER "pgsrvcfg"
+ OPTIONS ( resource 'public.forpxf_table1' );
+ ```
+
2. Display all rows of the `pxf_tblfrompg` table:
``` sql
@@ -155,6 +164,7 @@ Perform the following procedure to insert some data into the `forpxf_table1` Pos
LOCATION ('pxf://public.forpxf_table1?PROFILE=jdbc&SERVER=pgsrvcfg')
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_export');
```
+    Or reuse the foreign table from the previous steps.
4. Insert some data into the `pxf_writeto_postgres` table. For example:
diff --git a/docs/content/jdbc_pxf_trino.html.md.erb b/docs/content/jdbc_pxf_trino.html.md.erb
index ab30e01e..601651c9 100644
--- a/docs/content/jdbc_pxf_trino.html.md.erb
+++ b/docs/content/jdbc_pxf_trino.html.md.erb
@@ -30,9 +30,9 @@ Create a Trino table named `names` and insert some data into this table:
You must create a JDBC server configuration for Trino, download the Trino driver JAR file to your system, copy the JAR file to the PXF user configuration directory, synchronize the PXF configuration, and then restart PXF.
-This procedure will typically be performed by the Greenplum Database administrator.
+This procedure will typically be performed by the Apache Cloudberry administrator.
-1. Log in to the Greenplum Database coordinator host:
+1. Log in to the Apache Cloudberry coordinator host:
```shell
$ ssh gpadmin@<coordinator>
@@ -43,14 +43,14 @@ This procedure will typically be performed by the Greenplum Database administrat
See [Trino Documentation - JDBC Driver](https://trino.io/docs/current/client/jdbc.html#installing) for instructions on downloading the Trino JDBC driver.
The following example downloads the driver and places it under `$PXF_BASE/lib`:
- 1. If you did not relocate `$PXF_BASE`, run the following from the Greenplum coordinator:
+ 1. If you did not relocate `$PXF_BASE`, run the following from the Cloudberry coordinator:
```shell
gpadmin@coordinator$ cd /usr/local/pxf-gp/lib
gpadmin@coordinator$ wget
```
- 2. If you relocated `$PXF_BASE`, run the following from the Greenplum coordinator:
+ 2. If you relocated `$PXF_BASE`, run the following from the Cloudberry coordinator:
```shell
gpadmin@coordinator$ cd $PXF_BASE/lib
@@ -131,7 +131,7 @@ This procedure will typically be performed by the Greenplum Database administrat
```
-1. Synchronize the PXF server configuration to the Greenplum Database cluster:
+1. Synchronize the PXF server configuration to the Apache Cloudberry cluster:
```shell
gpadmin@coordinator$ pxf cluster sync
@@ -149,6 +149,14 @@ Perform the following procedure to create a PXF external table that references t
LOCATION('pxf://memory.default.names?PROFILE=jdbc&SERVER=trino')
FORMAT 'CUSTOM' (formatter='pxfwritable_import');
```
+    Or create a foreign table:
+ ``` sql
+ CREATE SERVER "trino" FOREIGN DATA WRAPPER jdbc_pxf_fdw;
+ CREATE USER MAPPING FOR CURRENT_USER SERVER "trino";
+ CREATE FOREIGN TABLE pxf_trino_memory_names (id int, name text, last text)
+ SERVER "trino"
+ OPTIONS ( resource 'memory.default.names' );
+ ```
1. Display all rows of the `pxf_trino_memory_names` table:
@@ -173,6 +181,7 @@ You must create a new external table for the write operation.
LOCATION('pxf://memory.default.names?PROFILE=jdbc&SERVER=trino')
FORMAT 'CUSTOM' (formatter='pxfwritable_export');
```
+    Or reuse the foreign table from the previous steps.
1. Insert some data into the `pxf_trino_memory_names_w` table. For example: