
Commit

Fix some typos
striezel authored and mildbyte committed Apr 4, 2023
1 parent 3dff101 commit d30fb81
Showing 6 changed files with 6 additions and 6 deletions.
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -194,7 +194,7 @@ Full set of changes: [`v0.2.10...v0.2.11`](https://github.com/splitgraph/sgr/com

* Fix CSV schema inference not supporting BIGINT data types (https://github.com/splitgraph/sgr/pull/407)
* Fix Splitfiles only expecting tags to contain alphanumeric characters (https://github.com/splitgraph/sgr/pull/407)
- * Speedups for the Snowflake / SQLAlchemy data soure (https://github.com/splitgraph/sgr/pull/405)
+ * Speedups for the Snowflake / SQLAlchemy data source (https://github.com/splitgraph/sgr/pull/405)

Full set of changes: [`v0.2.9...v0.2.10`](https://github.com/splitgraph/sgr/compare/v0.2.9...v0.2.10)

2 changes: 1 addition & 1 deletion examples/benchmarking/benchmarking.ipynb
@@ -876,7 +876,7 @@
" * Writing the original table also takes into account the time to write and parse the INSERT statements. Splitgraph's commits and checkouts defer to Postgres for moving data around and don't actually inspect the data (apart from object hashing), avoiding this step.\n",
" * Postgres' WAL and durability guarantees were enabled when the data was originally written. At commit time, writes to Splitgraph's `cstore_fdw` object files don't get reflected in the WAL (since `cstore_fdw` is implemented as a foreign table). Since the data is already in a PostgreSQL table, this is an acceptable tradeoff -- if the commit gets abruptly terminated, Splitgraph will restart it from scratch. Splitgraph itself doesn't commit its own metadata writes to PostgreSQL until after the image has been created.\n",
"\n",
"Inserting data to a table with change tracking enabled incurs a slighly smaller than 1x overhead. This is because besides the actual target table, all writes have to be recorded to the pending changes table by the audit trigger."
"Inserting data to a table with change tracking enabled incurs a slightly smaller than 1x overhead. This is because besides the actual target table, all writes have to be recorded to the pending changes table by the audit trigger."
]
},
{
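For illustration, here is a minimal Python timing sketch of the write/commit/checkout cycle the notebook text above describes. It is not part of this commit: the `bench/demo` repository and its table are made up, and `Repository.run_sql` is assumed to behave as in Splitgraph's Python API examples.

```python
# Hedged sketch: time an ordinary write against a Splitgraph commit/checkout.
import time

from splitgraph.core.repository import Repository

repo = Repository("bench", "demo")   # hypothetical repository
repo.init()

t0 = time.monotonic()
# Ordinary INSERTs are fully WAL-logged; on change-tracked tables the audit
# trigger also records every write into the pending-changes table, which is
# the "slightly smaller than 1x" overhead discussed above.
repo.run_sql("CREATE TABLE readings (id INTEGER PRIMARY KEY, value DOUBLE PRECISION)")
repo.run_sql("INSERT INTO readings SELECT i, random() FROM generate_series(1, 100000) i")
print("insert:", time.monotonic() - t0)

t0 = time.monotonic()
# Committing packs the data into cstore_fdw object files; those writes bypass
# the WAL, which is why this step is comparatively cheap.
image = repo.commit()
print("commit:", time.monotonic() - t0)

t0 = time.monotonic()
image.checkout()   # checkout defers to Postgres to move data, no row parsing
print("checkout:", time.monotonic() - t0)
```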
2 changes: 1 addition & 1 deletion splitgraph/config/keys.py
@@ -147,7 +147,7 @@
"SG_ENGINE_POSTGRES_DB_NAME": "Name of the default database that the superuser connects to to initialize Splitgraph.",
"SG_ENGINE_OBJECT_PATH": "Path on the engine's filesystem where Splitgraph physical object files are stored.",
"SG_LQ_TUNING": "Postgres query planner configuration for Splitfile execution and table imports. This is run before a layered query is executed and allows to tune query planning in case of LQ performance issues. For possible values, see the [PostgreSQL documentation](https://www.postgresql.org/docs/12/runtime-config-query.html).",
"SG_COMMIT_CHUNK_SIZE": "Default chunk size when `sgr commit` is run. Can be overriden in the command line client by passing `--chunk-size`",
"SG_COMMIT_CHUNK_SIZE": "Default chunk size when `sgr commit` is run. Can be overridden in the command line client by passing `--chunk-size`",
"SG_ENGINE_POOL": "Size of the connection pool used to download/upload objects. Note that in the case of layered querying with joins on multiple tables, each table will use this many parallel threads to download objects, which can overwhelm the engine. Decrease this value in that case.",
"SG_CONFIG_FILE": "Location of the Splitgraph configuration file. By default, Splitgraph looks for the configuration in `~/.splitgraph/.sgconfig` and then the current directory.",
"SG_META_SCHEMA": "Name of the metadata schema. Note that whilst this can be changed, it hasn't been tested and won't be taken into account by engines connecting to this one.",
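As a quick reference for the keys documented in this file, a minimal sketch (not part of this commit) of reading the merged configuration from Python; `splitgraph.config.CONFIG` combines the built-in defaults, the file named by `SG_CONFIG_FILE` and any environment-variable overrides.

```python
# Hedged sketch: look up two of the keys whose documentation strings appear above.
from splitgraph.config import CONFIG

print(CONFIG.get("SG_COMMIT_CHUNK_SIZE"))  # default chunk size for `sgr commit`
print(CONFIG.get("SG_ENGINE_POOL"))        # size of the object download/upload pool
```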
2 changes: 1 addition & 1 deletion splitgraph/core/image.py
@@ -46,7 +46,7 @@

class Image(NamedTuple):
"""
- Represents a Splitgraph image. Should't be created directly, use Image-loading methods in the
+ Represents a Splitgraph image. Shouldn't be created directly, use Image-loading methods in the
:class:`splitgraph.core.repository.Repository` class instead.
"""

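To illustrate what the corrected docstring recommends, a short sketch (not from this commit) of loading an `Image` through `Repository` rather than constructing the NamedTuple directly; the repository name and the `latest` tag are placeholders.

```python
# Hedged sketch: obtain Image objects via Repository's image-loading helpers.
from splitgraph.core.repository import Repository

repo = Repository("my-namespace", "my-repo")  # hypothetical repository

latest = repo.images["latest"]   # resolve a tag (or image hash) to an Image
head = repo.head                 # the currently checked-out Image, if any

latest.checkout()                # materialize that image's tables
```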
2 changes: 1 addition & 1 deletion splitgraph/hooks/external_objects.py
@@ -55,7 +55,7 @@ def download_objects(
) -> Sequence[str]:
"""Download objects from the external location into the Splitgraph cache.
- :param objects: List of tuples `(object_id, object_url)` that this handler had previosly
+ :param objects: List of tuples `(object_id, object_url)` that this handler had previously
uploaded the objects to.
:param remote_engine: An instance of Engine class that the objects will be registered on
:return: A list of object IDs that have been successfully downloaded.
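For context on the docstring being fixed, a rough sketch of a custom handler implementing the documented `download_objects` contract. This is illustrative only and not part of this commit: the `LocalDirHandler` name, the target directory and the idea that each object is a single plain file are assumptions, not how Splitgraph's built-in handlers work.

```python
# Hedged sketch: a toy external object handler that "downloads" objects by
# copying files from the URLs it previously returned at upload time.
import shutil
from typing import Any, Sequence, Tuple

from splitgraph.hooks.external_objects import ExternalObjectHandler


class LocalDirHandler(ExternalObjectHandler):
    """Toy handler: object_url is a local filesystem path (assumption)."""

    TARGET_DIR = "/var/lib/splitgraph/objects"  # assumed object directory

    def download_objects(
        self, objects: Sequence[Tuple[str, str]], remote_engine: Any
    ) -> Sequence[str]:
        downloaded = []
        for object_id, object_url in objects:
            # object_url is whatever this handler returned when it uploaded
            # the object; here it is treated as a plain file path.
            shutil.copy(object_url, f"{self.TARGET_DIR}/{object_id}")
            downloaded.append(object_id)
        return downloaded
```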
2 changes: 1 addition & 1 deletion test/splitgraph/conftest.py
@@ -521,7 +521,7 @@ def remote_engine(test_remote_engine):
def unprivileged_remote_engine(remote_engine_registry):
remote_engine_registry.commit()
remote_engine_registry.close()
- # Assuption: unprivileged_remote_engine is the same server as remote_engine_registry but with an
+ # Assumption: unprivileged_remote_engine is the same server as remote_engine_registry but with an
# unprivileged user.
engine = get_engine("unprivileged_remote_engine")
engine.close()
