
Commit 9e8c7ea

Fix typos
Parent: fbd7009

11 files changed: 13 additions and 13 deletions.

0002_how_to_troubleshoot_and_speedup_postgres_restarts.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -38,7 +38,7 @@ The second reason – a lot of dirty buffers in the buffer pool – is less triv
 - checkpoint tuning was performed in favor of fewer overheads of bulk random writes and fewer full-page writes (usually meaning that `max_wal_size` and `checkpoint_timeout` are increased)
 - the latest checkpoint happened quite long ago (can be seen in PG logs in `log_checkpoint = on`, which is recommended to have in most cases).

-The amount of dirty buffers is quite easy to observe, using extension pg_buffercache (standard contrib modele) and this query (may take significant time; see [the docs](https://postgresql.org/docs/current/pgbuffercache.html)):
+The amount of dirty buffers is quite easy to observe, using extension pg_buffercache (standard contrib module) and this query (may take significant time; see [the docs](https://postgresql.org/docs/current/pgbuffercache.html)):
 ```sql
 select count(*), pg_size_pretty(count(*) * 8 * 1024)
 from pg_buffercache
@@ -82,4 +82,4 @@ Interestingly, slow/failing `archive_command` can cause longer downtime during s

 That's it. Note that we didn't cover various timeouts (e.g., pg_ctl's option `--timeout` and wait behavior `-w`, `-W`, see [Postgres docs](https://postgresql.org/docs/current/app-pg-ctl.html)) here and just discussed what can cause delays in shutdown/restart attempts.

-Hope it was helpful - as usual, [subscribe](https://twitter.com/samokhvalov/), like, share, and comment! 💙
+Hope it was helpful - as usual, [subscribe](https://twitter.com/samokhvalov/), like, share, and comment! 💙
````
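
For context, here is what the full version of that dirty-buffer check might look like (a minimal sketch; it assumes the `pg_buffercache` extension is installed and uses its standard `isdirty` column):

```sql
-- count dirty buffers and estimate their size (8 KiB per buffer)
select count(*) as dirty_buffers,
       pg_size_pretty(count(*) * 8 * 1024) as dirty_size
from pg_buffercache
where isdirty;
```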

0003_how_to_troubleshoot_long_startup.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -159,7 +159,7 @@ Bonus: how to simulate long startup / REDO time:
 1. Increase the distance between checkpoints raising `max_wal_size` and `checkpoint_timeout` (say, `'100GB'` and `'60min'`)
 2. Create a large table `t1` (say, 10-100M rows): `create table t1 as select i, random() from generate_series(1, 100000000) i;`
 3. Execute a long transaction to data from `t1` (not necessary to finish it): `begin; delete from t1;`
-4. Observe the amount of dirity buffers with extension `pg_buffercache`:
+4. Observe the amount of dirty buffers with extension `pg_buffercache`:
    - create extension `pg_buffercache`;
    - `select isdirty, count(*), pg_size_pretty(count(*) * 8 * 1024) from pg_buffercache group by 1 \watch`
 5. When the total size of dirty buffers reaches a few GiB, intentionally crash your server, sending `kill -9 <pid>` using PID of any Postgres backend process.
````
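
Assembled into a single session, the simulation steps above might look like this (a sketch; the sizes are illustrative, and the `\watch` query runs in a second session):

```sql
-- widen the distance between checkpoints (both settings apply on reload)
alter system set max_wal_size = '100GB';
alter system set checkpoint_timeout = '60min';
select pg_reload_conf();

-- ~100M rows to generate a lot of dirty buffers
create table t1 as select i, random() from generate_series(1, 100000000) i;

begin;
delete from t1;  -- leave this transaction open

-- in another session, watch dirty buffers grow:
--   select isdirty, count(*), pg_size_pretty(count(*) * 8 * 1024)
--   from pg_buffercache group by 1 \watch
```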

0004_tuple_sparsenes.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -136,7 +136,7 @@ Thus, the Postgres executor must handle 88 KiB to return 317 bytes – this is f
 - Index maintenance: bloat control as well + regular reindexing, because index health declines over time even if autovacuum is well-tuned (btree health degradation rates improved in PG14, but those optimization does not eliminate the need to reindex on regular basis in heavily loaded systems).
 - Partitioning: one of benefits of partitioning is improved data locality.

-**Option 2.** Use index-only scans instead of index scans. This can be achieved by using mutli-column indexes or covering indexes, to include all the columns needed for our query. For our example:
+**Option 2.** Use index-only scans instead of index scans. This can be achieved by using multi-column indexes or covering indexes, to include all the columns needed for our query. For our example:
 ```
 nik=# create index on t1(user_id) include (id);
 CREATE INDEX
````
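
To verify that such a covering index enables index-only scans, one might check the plan like this (a sketch; `t1`, `user_id`, and the literal value follow the example quoted above):

```sql
-- should show "Index Only Scan" once the index covers all referenced columns
explain (analyze, buffers)
select id from t1 where user_id = 1;  -- the value 1 is illustrative
```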

0010_flamegraphs_for_postgres.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -180,11 +180,11 @@ where i between 1000 and 2000;
 (5 rows)
 ```

-In this case, the planning time is really low, sub-millisecond – but I encountered with cases, when planning happened to be extremely slow, many seconds or even dozens of seconds. And it turned out (thanks to flamegraphs!) that analysing the Merge Join paths was the reason, so with "set enable_mergejoin = off" the planning time dropped to very low, sane values. But this is another story.
+In this case, the planning time is really low, sub-millisecond – but I encountered with cases, when planning happened to be extremely slow, many seconds or even dozens of seconds. And it turned out (thanks to flamegraphs!) that analyzing the Merge Join paths was the reason, so with "set enable_mergejoin = off" the planning time dropped to very low, sane values. But this is another story.

 ## Some good materials
 - Brendan Gregg's books: "Systems Performance" and "BPF Performance Tools"
-- Brendant Gregg's talks – for example, ["eBPF: Fueling New Flame Graphs & more • Brendan Gregg"](https://youtube.com/watch?v=HKQR7wVapgk) (video, 67 min)
+- Brendan Gregg's talks – for example, ["eBPF: Fueling New Flame Graphs & more • Brendan Gregg"](https://youtube.com/watch?v=HKQR7wVapgk) (video, 67 min)
 - [Profiling with perf](https://wiki.postgresql.org/wiki/Profiling_with_perf) (Postgres wiki)

 ---
````
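
The `enable_mergejoin` experiment mentioned in that paragraph can be reproduced roughly as follows (a sketch; the table and filter mirror the example in that file):

```sql
explain (summary on)
select * from t1 where i between 1000 and 2000;  -- note "Planning Time"

set enable_mergejoin = off;  -- session-local; planner stops considering Merge Join

explain (summary on)
select * from t1 where i between 1000 and 2000;  -- compare planning time

reset enable_mergejoin;
```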

0012_from_pgss_to_explain__how_to_find_query_examples.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -99,7 +99,7 @@ TBD:
 - tricks for versions <16

 ## Summary
-- In PG14+, use `compute_query_id` to have quer`y_id values both in Postgres logs and `pg_stat_activity`
+- In PG14+, use `compute_query_id` to have query_id values both in Postgres logs and `pg_stat_activity`
 - Increase `track_activity_query_size` (requires restart) to be able to track larger queries in `pg_stat_activity`
 - Organize workflow to combine records from `pg_stat_statements` and query examples from logs and `pg_stat_activity`, so when it comes to query optimization, you have good examples ready to be used with `EXPLAIN (ANALYZE, BUFFERS)`.

````
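
For illustration, the summary's settings could be applied like this (a sketch; 32768 bytes is an arbitrary example value, and `track_activity_query_size` takes effect only after a restart):

```sql
alter system set compute_query_id = on;           -- PG14+; applied on reload
alter system set track_activity_query_size = 32768;  -- bytes; requires restart
select pg_reload_conf();

-- afterwards, query_id appears alongside the query text:
select pid, query_id, left(query, 60)
from pg_stat_activity
where state = 'active';
```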

0016_how_to_get_into_trouble_using_some_postgres_features.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -48,7 +48,7 @@ A couple of tips – how to make your code NULL-safe:
 - For comparison, instead of `=` or `<>`: `IS [NOT] DISTINCT FROM` (check out the `EXPLAIN` plan though).
 - Instead of concatenation, use: `format('%s %s', var1, var2)`.
 - Don't use `WHERE NOT IN (SELECT ...)` – use `NOT EXISTS` instead (
-see thia [JOOQ blog post](https://jooq.org/doc/latest/manual/reference/dont-do-this/dont-do-this-sql-not-in/)).
+see this [JOOQ blog post](https://jooq.org/doc/latest/manual/reference/dont-do-this/dont-do-this-sql-not-in/)).
 - Just be careful. `NULL`s are treacherous.

 ## Subtransactions under heavy loads
````
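
The `NOT IN` pitfall referenced in this hunk is easy to demonstrate with a self-contained sketch:

```sql
-- NOT IN yields no rows as soon as the subquery returns a NULL:
select 1 where 10 not in (select unnest(array[1, 2, null]));  -- 0 rows

-- NOT EXISTS handles the same data as expected:
select 1
where not exists (
  select from unnest(array[1, 2, null]) as v where v = 10
);  -- 1 row
```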

0019_how_to_import_csv_to_postgres.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -123,7 +123,7 @@ nik=# select clock_timestamp, pid, query from slow_tx_from_csv_1 order by clock_
 ## Method 2: Query CSV data live via file_fdw

 This method should be used when we need to query the CSV file data via SQL "live" without loading a snapshot. To achieve
-this, we'll be using [file_fwd](https://postgresql.org/docs/current/file-fdw.html).
+this, we'll be using [file_fdw](https://postgresql.org/docs/current/file-fdw.html).

 Having a great advantage (live data!), this method has its obvious disadvantages:

````
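
A minimal `file_fdw` setup might look like this (a sketch; the server name, path, and columns are placeholders for your CSV):

```sql
create extension if not exists file_fdw;
create server csv_files foreign data wrapper file_fdw;

-- map the CSV to a foreign table; columns must match the file's layout
create foreign table my_csv_live (
  id   int,
  name text
) server csv_files
options (filename '/path/to/data.csv', format 'csv', header 'true');

select * from my_csv_live limit 10;  -- reads the file at query time
```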

0023_how_to_use_openai_apis_in_postgres.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -176,7 +176,7 @@ Here's a straightforward example of using semantic search to find the most relev
 The concept here is straightforward:

 1. First, we call OpenAI API to "vectorize" the text of our request.
-2. Then, we use `pgvector`'s similarity search to find K nearest neighbours.
+2. Then, we use `pgvector`'s similarity search to find K nearest neighbors.

 We will use the `HNSW` index, considered one of the best approaches today (although originally described in
 [2016](https://arxiv.org/abs/1603.09320)); added in by many DBMSes. In `pgvector`, it was added in version 0.5.0.
````
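
The `pgvector` half of that recipe might look like this (a sketch; the `docs` table is hypothetical, and vectors are 3-dimensional for brevity, while real embeddings are much wider):

```sql
create extension if not exists vector;  -- pgvector 0.5.0+ provides HNSW

create table docs (id bigint primary key, body text, embedding vector(3));

-- HNSW index for cosine distance
create index on docs using hnsw (embedding vector_cosine_ops);

-- K nearest neighbors to a query embedding
select id, body
from docs
order by embedding <=> '[0.1, 0.2, 0.3]'
limit 5;
```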

0040_how_to_break_a_database_part_2_simulate_xid_wraparound.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -140,7 +140,7 @@ Everyone should not only monitor (with alerting) for traditional XID wraparound
 risks. And it should be included in the snippets showing how much of the "capacity" is used.

 > 🎯 **TODO:** snippet to show both XID w-d and MultiXID w-d, at both DB and table level
-> Obviously, multixid wraparrounds are encountered less often in the wild – I don't see people have `datminmxid` and
+> Obviously, multixid wraparounds are encountered less often in the wild – I don't see people have `datminmxid` and
 > `relminmxid` used in the snippets.
 > Basic version:
 >
````
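
A basic version of the snippet the TODO asks for, at the database level, might be (a sketch; `mxid_age()` exists since PostgreSQL 9.5):

```sql
-- XID and MultiXact ID wraparound "capacity" used, per database
select datname,
       age(datfrozenxid)    as xid_age,
       mxid_age(datminmxid) as multixact_age,
       -- rough share of the ~2^31 wraparound horizon
       round(100 * age(datfrozenxid)::numeric / 2147483647, 2) as xid_pct_of_limit
from pg_database
order by xid_age desc;
```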

0059_psql_tuning.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -51,7 +51,7 @@ After installation:

 To install:

-- on macOS/homebrew: `brew install pspg`
+- on macOS/Homebrew: `brew install pspg`
 - Ubuntu/Debian: `sudo apt update && sudo apt install -y pspg`

 Then:
````
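
The step that follows typically points psql at pspg; one common variant (a sketch for `~/.psqlrc`; available flags differ across pspg versions):

```
-- ~/.psqlrc: use pspg as the pager for query results
\setenv PAGER pspg
\pset border 2
\pset linestyle unicode
```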

0065_uuid_v7_and_partitioning_timescaledb.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -228,7 +228,7 @@ test=# explain select * from my_table

 ## Postscript

-Also read the following comment by [@jamessewell](https://twitter.com/jamessewell), originaly posted
+Also read the following comment by [@jamessewell](https://twitter.com/jamessewell), originally posted
 [here](https://twitter.com/jamessewell/status/1730125437903450129):

 > If update your `create_hypertable` call with:
````
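
The quoted comment is truncated at this point. For orientation only, a generic `create_hypertable` call has this shape (a sketch, not the commenter's actual suggestion, which is cut off above):

```sql
-- generic TimescaleDB form; the tweet's specific extra options are not shown here
select create_hypertable('my_table', 'created_at',
                         chunk_time_interval => interval '1 day');
```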
