Fix typo (#7193)
* Fix typo

* fix another typo

* fix another typo
lmossman committed Oct 20, 2021
1 parent d38ba5d commit a7ddd16
Showing 2 changed files with 3 additions and 3 deletions.
4 changes: 2 additions & 2 deletions docs/understanding-airbyte/airbyte-specification.md
@@ -72,7 +72,7 @@ read(Config, AirbyteCatalog, State) -> Stream<AirbyteMessage>
* The `connectionSpecification` of the `ConnectorSpecification` must be valid JsonSchema. It describes what inputs are needed in order for the source to interact with the underlying data source.
* e.g. If using a Postgres source, the `ConnectorSpecification` would specify that a `hostname`, `port`, and `password` are required in order for the connector to function.
* The UI reads the JsonSchema in this field in order to render the input fields for a user to fill in.
- * This JsonSchema is also used to validate that the provided inputs are valid. e.g. If `port` is one of the fields and the JsonSchema in the `connectorSpecification` specifies that this filed should be a number, if a user inputs "airbyte", they will receive an error. Airbyte adheres to JsonSchema validation rules.
+ * This JsonSchema is also used to validate that the provided inputs are valid. e.g. If `port` is one of the fields and the JsonSchema in the `connectorSpecification` specifies that this field should be a number, if a user inputs "airbyte", they will receive an error. Airbyte adheres to JsonSchema validation rules.
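
For context (not part of this commit's diff): a minimal sketch of the validation behavior described above, using Python's `jsonschema` package. The schema below is illustrative, not the actual Postgres connector spec.

```python
from jsonschema import validate, ValidationError

# Illustrative only: a simplified connectionSpecification-style JsonSchema,
# not the real Postgres connector spec.
connection_specification = {
    "type": "object",
    "required": ["hostname", "port", "password"],
    "properties": {
        "hostname": {"type": "string"},
        "port": {"type": "integer"},
        "password": {"type": "string"},
    },
}

# A user-supplied config with a non-numeric port.
config = {"hostname": "localhost", "port": "airbyte", "password": "secret"}

try:
    validate(instance=config, schema=connection_specification)
except ValidationError as err:
    # 'airbyte' is not of type 'integer' -- the UI would surface an error like this.
    print(f"Invalid config: {err.message}")
```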

#### Check

@@ -168,7 +168,7 @@ read(Config, AirbyteCatalog, State) -> Stream<AirbyteMessage>

* Input:
1. `config` - A configuration JSON object that has been validated using the `ConnectorSpecification`.
- 2. `catalog` - An `ConfiguredAirbyteCatalog`. This `catalog` should be constructed from the `catalog` returned by the `discover` command. To convert an `AirbyteStream` to a `ConfiguredAirbyteStream` copy the `AirbyteStream` into the stream field of the `ConfiguredAirbyteStream`. Any additional configurations can be specified in the `ConfiguredAirbyteStream`. More details on how this is configured in the [catalog documentation](catalog.md). This catalog will be used in the `read` command to both select what data is transferred and how it is replicated.
+ 2. `catalog` - A `ConfiguredAirbyteCatalog`. This `catalog` should be constructed from the `catalog` returned by the `discover` command. To convert an `AirbyteStream` to a `ConfiguredAirbyteStream` copy the `AirbyteStream` into the stream field of the `ConfiguredAirbyteStream`. Any additional configurations can be specified in the `ConfiguredAirbyteStream`. More details on how this is configured in the [catalog documentation](catalog.md). This catalog will be used in the `read` command to both select what data is transferred and how it is replicated.
3. `state` - A JSON object. This object is only ever written or read by the source, so it is a JSON blob with whatever information is necessary to keep track of how much of the data source has already been read. This is important whenever we need to replicate data with Incremental sync modes such as [Incremental Append](connections/incremental-append.md) or [Incremental Deduped History](connections/incremental-deduped-history.md). Note that this is not currently based on the state of data existing on the destination side.
* Output:
1. `message stream` - A stream of `AirbyteRecordMessage`s and `AirbyteStateMessage`s piped to stdout.
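
For context (not part of this commit's diff): a rough sketch of how a discovered `AirbyteStream` might be wrapped into a `ConfiguredAirbyteCatalog` for `read`, per the description above. Copying the stream into the `stream` field comes straight from the spec text; the `sync_mode` and `destination_sync_mode` fields are shown for illustration, and the catalog documentation is the authoritative reference for the exact shape.

```python
# Illustrative sketch: turning a discovered AirbyteStream into a
# ConfiguredAirbyteCatalog for the read command. Field names beyond
# `streams`/`stream` are assumptions; see catalog.md for the real schema.
discovered_stream = {
    "name": "users",
    "json_schema": {"type": "object", "properties": {"id": {"type": "integer"}}},
    "supported_sync_modes": ["full_refresh", "incremental"],
}

configured_catalog = {
    "streams": [
        {
            # Copy the AirbyteStream into the `stream` field, as described above.
            "stream": discovered_stream,
            "sync_mode": "incremental",
            "destination_sync_mode": "append",
        }
    ]
}
```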
2 changes: 1 addition & 1 deletion docs/understanding-airbyte/jobs.md
@@ -21,7 +21,7 @@ For more information on the schema of the messages that are passed, refer to [Ai

## Worker Lifecycle

- This section will depict the lifecycle of a worker. It will only show the 2 connector version. The since connector version is the same with one side removed.
+ This section will depict the lifecycle of a worker. It will only show the 2 connector version. The single connector version is the same with one side removed.

Note: When a source has passed all of its messages, the docker process should automatically exit. After a destination has received all records, it should automatically shutdown. The worker gives each a grace period to shutdown on their own. If that grace period expires, then the worker will force shutdown.
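
For context (not part of this commit's diff): the grace-period behavior described in the note could look roughly like the sketch below. This is a simplified Python illustration, not the actual worker implementation, and the timeout value is made up.

```python
import subprocess

GRACE_PERIOD_SECONDS = 60  # illustrative value, not Airbyte's actual setting


def await_exit_or_force_shutdown(proc: subprocess.Popen) -> None:
    """Give a connector process a grace period to exit, then force shutdown."""
    try:
        # Wait for the connector to exit on its own within the grace period.
        proc.wait(timeout=GRACE_PERIOD_SECONDS)
    except subprocess.TimeoutExpired:
        # Grace period expired: force shutdown.
        proc.kill()
        proc.wait()
```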

