HAWQ-1261 - add discussion of HAWQ administrative log files (closes #88)
lisakowen authored and dyozie committed Jan 21, 2017
1 parent c8cb302 commit 2351d2880ababe170ea273c1c1cfb6d440eb467e
Showing 6 changed files with 277 additions and 92 deletions.
@@ -120,6 +120,9 @@
<li>
<a href="/docs/userguide/2.1.0.0-incubating/admin/monitor.html">Monitoring a HAWQ System</a>
</li>
<li>
<a href="/docs/userguide/2.1.0.0-incubating/admin/logfiles.html">HAWQ Administrative Log Files</a>
</li>
</ul>
</li>
<li class="has_submenu">


@@ -8,24 +8,8 @@ To keep a HAWQ system running efficiently, the database must be regularly cleare

HAWQ requires that certain tasks be performed regularly to achieve optimal performance. The tasks discussed here are required, but database administrators can automate them using standard UNIX tools such as `cron` scripts. An administrator sets up the appropriate scripts and checks that they execute successfully. See [Recommended Monitoring and Maintenance Tasks](RecommendedMonitoringTasks.html) for additional suggested maintenance activities you can implement to keep your HAWQ system running optimally.

## <a id="topic10"></a>Database Server Log Files
## <a id="topic10"></a>Log File Maintenance

HAWQ log output tends to be voluminous, especially at higher debug levels, and you do not need to save it indefinitely. Administrators rotate the log files periodically so new log files are started and old ones are removed.
Every database instance in HAWQ \(master and segments\) runs a PostgreSQL database server with its own server log file. For information about managing these log files, refer to [HAWQ Database Server Log Files](logfiles.html#topic28).

HAWQ has log file rotation enabled on the master and all segment instances. Daily log files are created in the `pg_log` subdirectory of the master and each segment data directory using the following naming convention: <code>hawq-<i>YYYY-MM-DD\_hhmmss</i>.csv</code>. Although log files are rolled over daily, they are not automatically truncated or deleted. Administrators need to implement scripts or programs to periodically clean up old log files in the `pg_log` directory of the master and of every segment instance.
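One way to implement such cleanup is a small script scheduled with `cron`. The following is a minimal sketch, not part of HAWQ itself; the `pg_log` paths and the 30-day retention window are assumptions you should adjust for your site:

``` shell
#!/bin/bash
# clean_pg_log.sh -- remove rotated HAWQ server log files older than 30 days.
# The data directory paths below are hypothetical; substitute the pg_log
# directories of your master and segment instances.
find /data/hawq/master/pg_log -name 'hawq-*.csv' -mtime +30 -delete
find /data/hawq/segment/pg_log -name 'hawq-*.csv' -mtime +30 -delete

# Example crontab entry to run the cleanup nightly at 2:00 AM:
#   0 2 * * * /home/gpadmin/clean_pg_log.sh
```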

For information about viewing the database server log files, see [Viewing the Database Server Log Files](monitor.html).

## <a id="topic11"></a>Management Utility Log Files

Log files for the HAWQ management utilities are written to `~/hawqAdminLogs` by default. The naming convention for management log files is:

<pre><code><i>script_name_date</i>.log
</code></pre>

The log entry format is:

<pre><code><i>timestamp:utility:host:user</i>:[INFO|WARN|FATAL]:<i>message</i>
</code></pre>
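
For example, a `hawq start` invocation on January 18, 2017 might append an entry of this general shape to `hawq_start_20170118.log` (a hypothetical illustration of the documented fields — host `mdw`, user `gpadmin`, and the message text are placeholders, not verbatim utility output):

<pre><code>20170118:14:00:05:001234 hawq_start:mdw:gpadmin-[INFO]:-Prepare to do 'hawq start'
</code></pre>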

The log entries for a particular utility execution are appended to its daily log file each time that utility is run.
Log files are also generated when you invoke HAWQ management utilities such as `hawq start` and `gpfdist`. [Management Utility Log Files](logfiles.html#mgmtutil_log) provides information and maintenance strategies for these log files.
@@ -145,65 +145,6 @@ Views in the *hawq\_toolkit* schema include:
- *hawq\_workfile\_usage\_per\_segment* - one row per segment, displaying the total amount of disk space currently in use for workfiles on that segment
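
Because these are regular views, you can query them from any SQL client. For example, a quick check of per-segment workfile usage from the shell (this assumes `psql` can reach your HAWQ master and that you connect to a database named `postgres`):

``` shell
$ psql -d postgres -c "SELECT * FROM hawq_toolkit.hawq_workfile_usage_per_segment;"
```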


## <a id="topic28"></a>Viewing the Database Server Log Files

Every database instance in HAWQ \(master and segments\) runs a PostgreSQL database server with its own server log file. Daily log files are created in the `pg_log` directory of the master and each segment data directory.

### <a id="topic29"></a>Log File Format

The server log files are written in comma-separated values \(CSV\) format. Log entries may not include values for all log fields. For example, only log entries associated with a query worker process will have the `slice_id` populated. You can identify related log entries of a particular query by the query's session identifier \(`gp_session_id`\) and command identifier \(`gp_command_count`\).

Log entries may include the following fields:

<table>
<tr><th>#</th><th>Field Name</th><th>Data Type</th><th>Description</th></tr>
<tr><td>1</td><td>event_time</td><td>timestamp with time zone</td><td>Time that the log entry was written to the log</td></tr>
<tr><td>2</td><td>user_name</td><td>varchar(100)</td><td>The database user name</td></tr>
<tr><td>3</td><td>database_name</td><td>varchar(100)</td><td>The database name</td></tr>
<tr><td>4</td><td>process_id</td><td>varchar(10)</td><td>The system process ID (prefixed with "p")</td></tr>
<tr><td>5</td><td>thread_id</td><td>varchar(50)</td><td>The thread count (prefixed with "th")</td></tr>
<tr><td>6</td><td>remote_host</td><td>varchar(100)</td><td>On the master, the hostname/address of the client machine. On the segment, the hostname/address of the master.</td></tr>
<tr><td>7</td><td>remote_port</td><td>varchar(10)</td><td>The segment or master port number</td></tr>
<tr><td>8</td><td>session_start_time</td><td>timestamp with time zone</td><td>Time session connection was opened</td></tr>
<tr><td>9</td><td>transaction_id</td><td>int</td><td>Top-level transaction ID on the master. This ID is the parent of any subtransactions.</td></tr>
<tr><td>10</td><td>gp_session_id</td><td>text</td><td>Session identifier number (prefixed with "con")</td></tr>
<tr><td>11</td><td>gp_command_count</td><td>text</td><td>The command number within a session (prefixed with "cmd")</td></tr>
<tr><td>12</td><td>gp_segment</td><td>text</td><td>The segment content identifier. The master always has a content ID of -1.</td></tr>
<tr><td>13</td><td>slice_id</td><td>text</td><td>The slice ID (portion of the query plan being executed)</td></tr>
<tr><td>14</td><td>distr_tranx_id</td><td>text</td><td>Distributed transaction ID</td></tr>
<tr><td>15</td><td>local_tranx_id</td><td>text</td><td>Local transaction ID</td></tr>
<tr><td>16</td><td>sub_tranx_id</td><td>text</td><td>Subtransaction ID</td></tr>
<tr><td>17</td><td>event_severity</td><td>varchar(10)</td><td>Values include: LOG, ERROR, FATAL, PANIC, DEBUG1, DEBUG2</td></tr>
<tr><td>18</td><td>sql_state_code</td><td>varchar(10)</td><td>SQL state code associated with the log message</td></tr>
<tr><td>19</td><td>event_message</td><td>text</td><td>Log or error message text</td></tr>
<tr><td>20</td><td>event_detail</td><td>text</td><td>Detail message text associated with an error or warning message</td></tr>
<tr><td>21</td><td>event_hint</td><td>text</td><td>Hint message text associated with an error or warning message</td></tr>
<tr><td>22</td><td>internal_query</td><td>text</td><td>The internally-generated query text</td></tr>
<tr><td>23</td><td>internal_query_pos</td><td>int</td><td>The cursor index into the internally-generated query text</td></tr>
<tr><td>24</td><td>event_context</td><td>text</td><td>The context in which this message gets generated</td></tr>
<tr><td>25</td><td>debug_query_string</td><td>text</td><td>User-supplied query string with full detail for debugging. This string can be modified for internal use.</td></tr>
<tr><td>26</td><td>error_cursor_pos</td><td>int</td><td>The cursor index into the query string</td></tr>
<tr><td>27</td><td>func_name</td><td>text</td><td>The function in which this message is generated</td></tr>
<tr><td>28</td><td>file_name</td><td>text</td><td>The internal code file where the message originated</td></tr>
<tr><td>29</td><td>file_line</td><td>int</td><td>The line of the code file where the message originated</td></tr>
<tr><td>30</td><td>stack_trace</td><td>text</td><td>Stack trace text associated with this message</td></tr>
</table>
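
Because the server log is plain CSV, you can also pull out fields with standard text tools when `gplogfilter` is not at hand. The sketch below prints the time, severity, and message (fields 1, 17, and 19) for one session's entries, keyed on `gp_session_id` (field 10). Note that naive comma-splitting breaks on messages containing embedded commas or quoted fields, so treat this as a rough filter only; the session ID and file name shown are hypothetical:

``` shell
# Show event_time, event_severity, and event_message for session con42.
# Caution: assumes no embedded commas in the leading fields.
awk -F, '$10 == "con42" { print $1, $17, $19 }' hawq-2017-01-18_000000.csv
```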
### <a id="topic30"></a>Searching the HAWQ Server Log Files

You can use the `gplogfilter` HAWQ utility to search through a HAWQ log file for entries matching specific criteria. By default, this utility searches through the HAWQ master log file in the default logging location. For example, to display all master log file entries logged after 2:00 PM on January 18, 2016:

``` shell
$ gplogfilter -b '2016-01-18 14:00'
```

To search through all segment log files simultaneously, run `gplogfilter` through the `hawq ssh` utility. For example, specify a \<seg\_hosts\> file that includes all segment hosts of interest, then invoke `gplogfilter` to display the last three lines of each segment log file on each segment host. Enter the commands at the `=>` prompt; do not include the `=>` itself:

``` shell
$ hawq ssh -f <seg_hosts>
=> source /usr/local/hawq/greenplum_path.sh
=> gplogfilter -n 3 /data/hawq/segment/pg_log/hawq*.csv
```

## <a id="topic_jx2_rqg_kp"></a>HAWQ Error Codes

The following section describes SQL error codes for certain database events.
@@ -2593,19 +2593,7 @@ For information about the legacy query optimizer and GPORCA, see [About GPORCA](

## <a name="optimizer_minidump"></a>optimizer\_minidump

GPORCA generates minidump files to describe the optimization context for a given query. Use the minidump files to analyze HAWQ issues. The minidump file is located under the master data directory and uses the following naming format:

`Minidump_date_time.mdp`

The minidump file contains the following query-related information:

- Catalog objects including data types, tables, operators, and statistics required by GPORCA
- An internal representation (DXL) of the query
- An internal representation (DXL) of the plan produced by GPORCA
- System configuration information passed to GPORCA such as server configuration parameters, cost and statistics configuration, and number of segments
- A stack trace of errors generated while optimizing the query

Setting this parameter to `ALWAYS` generates a minidump for all queries.
GPORCA generates minidump files to describe the optimization context for a given query. Set this parameter to `ALWAYS` to generate a minidump for all queries.

**Note:** Set this parameter to `ONERROR` in production environments to minimize total optimization time.
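
As a sketch of how you might capture a minidump for a single problematic query without changing the cluster-wide setting, set the parameter at the session level, run the query, and then look for the file under the master data directory. This assumes GPORCA is enabled (`optimizer=on`), that the parameter is session-settable in your build, and that `MASTER_DATA_DIRECTORY` points at your master data directory; the sample query is a placeholder:

``` shell
$ psql -c "SET optimizer_minidump = ALWAYS; SELECT count(*) FROM pg_class;"
$ find $MASTER_DATA_DIRECTORY -name 'Minidump_*.mdp'
```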

@@ -66,4 +66,4 @@ Resource manager adjusts segment localhost original resource capacity from (8192
Resource manager adjusts segment localhost original global resource manager resource capacity from (8192 MB, 5 CORE) to (5120 MB, 5 CORE)
```

See [Viewing the Database Server Log Files](../admin/monitor.html#topic28) for more information on working with HAWQ log files.
See [HAWQ Database Server Log Files](../admin/logfiles.html#topic28) for more information on working with HAWQ database server log files.
