uppercase for SQL keywords
lisakowen committed Nov 1, 2016
1 parent 015cf58 commit c40bcad1d8de923ad7c872e6a534bd2b73ea2594
Showing 15 changed files with 50 additions and 50 deletions.
@@ -191,21 +191,21 @@ This example of using `gpfdist` backs up and restores a 1TB `tpch` database. To
master_host$ psql tpch
```
```sql
tpch=# create writable external table wext_orders (like orders)
tpch-# location('gpfdist://sdw1:8080/orders1.csv', 'gpfdist://sdw1:8081/orders2.csv') format 'CSV';
tpch=# create writable external table wext_lineitem (like lineitem)
tpch-# location('gpfdist://sdw1:8080/lineitem1.csv', 'gpfdist://sdw1:8081/lineitem2.csv') format 'CSV';
tpch=# CREATE WRITABLE EXTERNAL TABLE wext_orders (LIKE orders)
tpch-# LOCATION('gpfdist://sdw1:8080/orders1.csv', 'gpfdist://sdw1:8081/orders2.csv') FORMAT 'CSV';
tpch=# CREATE WRITABLE EXTERNAL TABLE wext_lineitem (LIKE lineitem)
tpch-# LOCATION('gpfdist://sdw1:8080/lineitem1.csv', 'gpfdist://sdw1:8081/lineitem2.csv') FORMAT 'CSV';
```

The example shows the two tables in the `tpch` database, `orders` and `lineitem`, and the two corresponding writable external tables created for them. Specify a location for each `gpfdist` instance in the `LOCATION` clause. This example uses the CSV text format, but you can also choose other delimited text formats. For more information, see the `CREATE EXTERNAL TABLE` SQL command.
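For instance, a minimal sketch of the same definition using the `TEXT` format with a pipe delimiter (the `.txt` file names and the `wext_orders_txt` table name here are hypothetical, not part of the original example):

```sql
-- hypothetical variant: pipe-delimited TEXT files instead of CSV
tpch=# CREATE WRITABLE EXTERNAL TABLE wext_orders_txt (LIKE orders)
tpch-# LOCATION('gpfdist://sdw1:8080/orders1.txt', 'gpfdist://sdw1:8081/orders2.txt')
tpch-# FORMAT 'TEXT' (DELIMITER '|');
```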

4. Unload data to the external tables:

```sql
tpch=# begin;
tpch=# insert into wext_orders select * from orders;
tpch=# insert into wext_lineitem select * from lineitem;
tpch=# commit;
tpch=# BEGIN;
tpch=# INSERT INTO wext_orders SELECT * FROM orders;
tpch=# INSERT INTO wext_lineitem SELECT * FROM lineitem;
tpch=# COMMIT;
```

5. **\(Optional\)** Stop `gpfdist` servers to free ports for other processes:
@@ -242,17 +242,17 @@ This example of using `gpfdist` backs up and restores a 1TB `tpch` database. To
```

```sql
tpch2=# create external table rext_orders (like orders) location('gpfdist://sdw1:8080/orders1.csv', 'gpfdist://sdw1:8081/orders2.csv') format 'CSV';
tpch2=# create external table rext_lineitem (like lineitem) location('gpfdist://sdw1:8080/lineitem1.csv', 'gpfdist://sdw1:8081/lineitem2.csv') format 'CSV';
tpch2=# CREATE EXTERNAL TABLE rext_orders (LIKE orders) LOCATION('gpfdist://sdw1:8080/orders1.csv', 'gpfdist://sdw1:8081/orders2.csv') FORMAT 'CSV';
tpch2=# CREATE EXTERNAL TABLE rext_lineitem (LIKE lineitem) LOCATION('gpfdist://sdw1:8080/lineitem1.csv', 'gpfdist://sdw1:8081/lineitem2.csv') FORMAT 'CSV';
```

**Note:** The `LOCATION` clause is the same as that of the writable external tables above.

4. Load data back from external tables:

```sql
tpch2=# insert into orders select * from rext_orders;
tpch2=# insert into lineitem select * from rext_lineitem;
tpch2=# INSERT INTO orders SELECT * FROM rext_orders;
tpch2=# INSERT INTO lineitem SELECT * FROM rext_lineitem;
```

5. Run the `ANALYZE` command after data loading:
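A minimal sketch, assuming you analyze each restored table individually:

```sql
-- collect statistics for the query planner after the load
tpch2=# ANALYZE orders;
tpch2=# ANALYZE lineitem;
```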
@@ -97,7 +97,7 @@ For example purposes in this procedure, we are adding a new node named `sdw4`.
```

```sql
postgres=# select * from gp_segment_configuration;
postgres=# SELECT * FROM gp_segment_configuration;
```

```
@@ -164,7 +164,7 @@ For example purposes in this procedure, we are adding a new node named `sdw4`.
```

```sql
postgres=# select * from gp_segment_configuration ;
postgres=# SELECT * FROM gp_segment_configuration;
```

```
@@ -203,7 +203,7 @@ For example purposes in this procedure, we are adding a new node named `sdw4`.
```

```sql
postgres=# select gp_metadata_cache_clear();
postgres=# SELECT gp_metadata_cache_clear();
```

16. After expansion, if the new size of your cluster is greater than or equal to 4 nodes \(#nodes >= 4\), change the value of the `output.replace-datanode-on-failure` HDFS parameter in `hdfs-client.xml` to `false`.
@@ -34,9 +34,9 @@ To move the filespace location to a HA-enabled HDFS location, you must move the
SELECT
fsname, fsedbid, fselocation
FROM
pg_filespace as sp, pg_filespace_entry as entry, pg_filesystem as fs
pg_filespace AS sp, pg_filespace_entry AS entry, pg_filesystem AS fs
WHERE
sp.fsfsys = fs.oid and fs.fsysname = 'hdfs' and sp.oid = entry.fsefsoid
sp.fsfsys = fs.oid AND fs.fsysname = 'hdfs' AND sp.oid = entry.fsefsoid
ORDER BY
entry.fsedbid;
```
@@ -91,15 +91,15 @@ When you enable HA HDFS, you are changing the HAWQ catalog and persistent table
1. Disconnect all workload connections. Check for active connections with:

```shell
$ psql -p ${PGPORT} -c "select * from pg_catalog.pg_stat_activity" -d template1
$ psql -p ${PGPORT} -c "SELECT * FROM pg_catalog.pg_stat_activity" -d template1
```
where `${PGPORT}` corresponds to the port number you optionally customized for the HAWQ master.

2. Issue a checkpoint: 

```shell
$ psql -p ${PGPORT} -c "checkpoint" -d template1
$ psql -p ${PGPORT} -c "CHECKPOINT" -d template1
```

3. Shut down the HAWQ cluster: 
@@ -57,7 +57,7 @@ The *hawq\_toolkit* administrative schema contains several views for checking th
```sql
=> SELECT relname AS name, sotdsize AS size, sotdtoastsize
AS toast, sotdadditionalsize AS other
FROM hawq_size_of_table_disk as sotd, pg_class
FROM hawq_size_of_table_disk AS sotd, pg_class
WHERE sotd.sotdoid=pg_class.oid ORDER BY relname;
```

@@ -66,7 +66,7 @@ WHERE sotd.sotdoid=pg_class.oid ORDER BY relname;
The *hawq\_toolkit* administrative schema contains a number of views for checking index sizes. To see the total size of all index\(es\) on a table, use the *hawq\_size\_of\_all\_table\_indexes* view. To see the size of a particular index, use the *hawq\_size\_of\_index* view. The index sizing views list tables and indexes by object ID \(not by name\). To check the size of an index by name, you must look up the relation name \(`relname`\) in the *pg\_class* table. For example:

```sql
=> SELECT soisize, relname as indexname
=> SELECT soisize, relname AS indexname
FROM pg_class, hawq_size_of_index
WHERE pg_class.oid=hawq_size_of_index.soioid
AND pg_class.relkind='i';
@@ -81,9 +81,9 @@ HAWQ tracks various metadata information in its system catalogs about the object
You can use the system views *pg\_stat\_operations* and *pg\_stat\_partition\_operations* to look up actions performed on an object, such as a table. For example, to see the actions performed on a table, such as when it was created and when it was last analyzed:

```sql
=> SELECT schemaname as schema, objname as table,
usename as role, actionname as action,
subtype as type, statime as time
=> SELECT schemaname AS schema, objname AS table,
usename AS role, actionname AS action,
subtype AS type, statime AS time
FROM pg_stat_operations
WHERE objname='cust';
```
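Similarly, a minimal sketch of querying *pg\_stat\_partition\_operations* for actions performed on a partitioned table (reusing the `cust` table name from the example above):

```sql
-- illustrative; shows all recorded actions for the table's partitions
=> SELECT * FROM pg_stat_partition_operations
   WHERE objname='cust';
```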
@@ -212,7 +212,7 @@ After you have set up Kerberos on the HAWQ master, you can configure HAWQ to use
1. Create a HAWQ administrator role in the database `template1` for the Kerberos principal that is used as the database administrator. The following example uses `gpadmin/kerberos-gpdb`.

``` bash
$ psql template1 -c 'create role "gpadmin/kerberos-gpdb" login superuser;'
$ psql template1 -c 'CREATE ROLE "gpadmin/kerberos-gpdb" LOGIN SUPERUSER;'
```

@@ -203,21 +203,21 @@ To set the `password_hash_algorithm` server parameter for an individual session:
2. Set the `password_hash_algorithm` to `SHA-256` \(or `SHA-256-FIPS` to use the FIPS-compliant libraries for SHA-256\):

``` sql
# set password_hash_algorithm = 'SHA-256'
# SET password_hash_algorithm = 'SHA-256'
SET
```

or:

``` sql
# set password_hash_algorithm = 'SHA-256-FIPS'
# SET password_hash_algorithm = 'SHA-256-FIPS'
SET
```

3. Verify the setting:

``` sql
# show password_hash_algorithm;
# SHOW password_hash_algorithm;
password_hash_algorithm
```

@@ -240,7 +240,7 @@ To set the `password_hash_algorithm` server parameter for an individual session:
4. Log in as a superuser and verify the password hash algorithm setting:

``` sql
# show password_hash_algorithm
# SHOW password_hash_algorithm
password_hash_algorithm
-------------------------------
SHA-256-FIPS
@@ -249,7 +249,7 @@ To set the `password_hash_algorithm` server parameter for an individual session:
5. Create a new role with a password that has login privileges.

``` sql
create role testdb with password 'testdb12345#' LOGIN;
CREATE ROLE testdb WITH PASSWORD 'testdb12345#' LOGIN;
```

6. Change the client authentication method to allow for storage of SHA-256 encrypted passwords:
@@ -276,7 +276,7 @@ To set the `password_hash_algorithm` server parameter for an individual session:
2. Execute the following:

``` sql
# select rolpassword from pg_authid where rolname = 'testdb';
# SELECT rolpassword FROM pg_authid WHERE rolname = 'testdb';
Rolpassword
-----------
sha256<64 hexadecimal characters>
@@ -45,7 +45,7 @@ By default, a new database is created by cloning the standard system database te
If you are working in the `psql` client program, you can use the `\l` meta-command to show the list of databases and templates in your HAWQ system. If using another client program and you are a superuser, you can query the list of databases from the `pg_database` system catalog table. For example:

``` sql
=> SELECT datname from pg_database;
=> SELECT datname FROM pg_database;
```

## <a id="topic7"></a>Altering a Database
@@ -134,8 +134,8 @@ These tablespaces use the system default filespace, `pg_system`, the data direct
To see filespace information, look in the *pg\_filespace* and *pg\_filespace\_entry* catalog tables. You can join these tables with *pg\_tablespace* to see the full definition of a tablespace. For example:

``` sql
=# SELECT spcname as tblspc, fsname as filespc,
fsedbid as seg_dbid, fselocation as datadir
=# SELECT spcname AS tblspc, fsname AS filespc,
fsedbid AS seg_dbid, fselocation AS datadir
FROM pg_tablespace pgts, pg_filespace pgfs,
pg_filespace_entry pgfse
WHERE pgts.spcfsoid=pgfse.fsefsoid
@@ -22,7 +22,7 @@ gpadmin=# CREATE FUNCTION count_orders() RETURNS bigint AS $$
SELECT count(*) FROM orders;
$$ LANGUAGE SQL;
CREATE FUNCTION
gpadmin=# select count_orders();
gpadmin=# SELECT count_orders();
my_count
----------
830513
@@ -136,7 +136,7 @@ Perform the following steps as the `gpadmin` user:
To affect only the *current* database session, set the `pljava_classpath` configuration parameter at the `psql` prompt:

``` sql
psql> set pljava_classpath='myclasses.jar';
psql> SET pljava_classpath='myclasses.jar';
```

To affect *all* sessions, set the `pljava_classpath` server configuration parameter and restart the HAWQ cluster:
@@ -635,7 +635,7 @@ $ hawq restart cluster
From the `psql` command line, run the following command to show the installed JAR files.

```shell
psql# show pljava_classpath
psql# SHOW pljava_classpath
```

The following SQL commands create a table and define a Java function to test the method in the JAR file:
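A minimal sketch of what such a test might look like, assuming the JAR provides an `Example.substring` method (the table, function, and class names here are hypothetical, not the original listing):

```sql
-- hypothetical test: all names below are illustrative
psql# CREATE TABLE temptest (a varchar);
psql# INSERT INTO temptest VALUES ('my string');
psql# CREATE FUNCTION java_substring(varchar, int, int)
      RETURNS varchar AS 'Example.substring'
      LANGUAGE java;
psql# SELECT java_substring(a, 1, 5) FROM temptest;
```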
@@ -144,7 +144,7 @@ In terms of performance, importing a Python module is an expensive operation and

```sql
psql=#
CREATE FUNCTION pytest() returns text as $$
CREATE FUNCTION pytest() RETURNS text AS $$
if 'mymodule' not in GD:
import mymodule
GD['mymodule'] = mymodule
@@ -434,8 +434,8 @@ This PL/Python UDF imports the NumPy module. The function returns SUCCESS if the

```sql
CREATE OR REPLACE FUNCTION plpy_test(x int)
returns text
as $$
RETURNS text
AS $$
try:
from numpy import *
return 'SUCCESS'
@@ -28,7 +28,7 @@ The following `CREATE TABLE` command uses the `r_norm` function to populate the

```sql
CREATE TABLE test_norm_var
AS SELECT id, r_norm(10,0,1) as x
AS SELECT id, r_norm(10,0,1) AS x
FROM (SELECT generate_series(1,30::bigint) AS ID) foo
DISTRIBUTED BY (id);
```
@@ -89,13 +89,13 @@ However, the changed resource quota for the virtual segment cannot exceed the re
In the following example, when the next query statement executes, the HAWQ resource manager attempts to allocate 10 virtual segments, each with a 256MB memory quota.

``` sql
postgres=# set hawq_rm_stmt_vseg_memory='256mb';
postgres=# SET hawq_rm_stmt_vseg_memory='256mb';
SET
postgres=# set hawq_rm_stmt_nvseg=10;
postgres=# SET hawq_rm_stmt_nvseg=10;
SET
postgres=# create table t(i integer);
postgres=# CREATE TABLE t(i integer);
CREATE TABLE
postgres=# insert into t values(1);
postgres=# INSERT INTO t VALUES(1);
INSERT 0 1
```
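To return the session to resource-queue-based allocation, set `hawq_rm_stmt_nvseg` back to `0` (assuming the default behavior, this disables the statement-level virtual segment override):

```sql
-- revert to the resource queue's own allocation policy
postgres=# SET hawq_rm_stmt_nvseg=0;
SET
```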

@@ -12,7 +12,7 @@ Any query execution requiring resource allocation from HAWQ resource manager has
The following is an example query to obtain connection track status:

``` sql
postgres=# select * from dump_resource_manager_status(1);
postgres=# SELECT * FROM dump_resource_manager_status(1);
```

``` pre
@@ -59,7 +59,7 @@ Besides the information provided in pg\_resqueue\_status, you can also get YARN
The following is a query to obtain resource queue status:

``` sql
postgres=# select * from dump_resource_manager_status(2);
postgres=# SELECT * FROM dump_resource_manager_status(2);
```

``` pre
@@ -104,7 +104,7 @@ QUEUSE(alloc=(0 MB,0.000000 CORE):request=(0 MB,0.000000 CORE):inuse=(0 MB,0.000
Use the following query to obtain the status of a HAWQ segment.

``` sql
postgres=# select * from dump_resource_manager_status(3);
postgres=# SELECT * FROM dump_resource_manager_status(3);
```

``` pre
@@ -54,7 +54,7 @@ The minimum value that can be configured is 3, and the maximum is 1024.
To check the currently configured limit, you can execute the following command:

``` sql
postgres=# show hawq_rm_nresqueue_limit;
postgres=# SHOW hawq_rm_nresqueue_limit;
```

``` pre
@@ -164,7 +164,7 @@ The query displays all the attributes and their values of the selected resource
You can also check the runtime status of existing resource queues by querying the `pg_resqueue_status` view:

``` sql
postgres=# select * from pg_resqueue_status;
postgres=# SELECT * FROM pg_resqueue_status;
```
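To check a single queue, a minimal sketch filtering on the queue name (assuming the built-in `pg_default` queue):

```sql
-- illustrative filter on one resource queue
postgres=# SELECT * FROM pg_resqueue_status WHERE rsqname='pg_default';
```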

