Fix typos (double words and it's/its) (#33623)
eumiro committed Aug 23, 2023
1 parent 513c1d2 commit a54c242
Showing 78 changed files with 90 additions and 90 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/build-images.yml
@@ -99,7 +99,7 @@ jobs:
}' --jq '.data.node.labels.nodes[]' | jq --slurp -c '[.[].name]' >> ${GITHUB_OUTPUT}
if: github.event_name == 'pull_request_target'
# Retrieve it to be able to determine which files has changed in the incoming commit of the PR
-# we checkout the target commit and it's parent to be able to compare them
+# we checkout the target commit and its parent to be able to compare them
- name: Cleanup repo
run: docker run -v "${GITHUB_WORKSPACE}:/workspace" -u 0:0 bash -c "rm -rf /workspace/*"
- uses: actions/checkout@v3
4 changes: 2 additions & 2 deletions CONTRIBUTORS_QUICK_START.rst
@@ -104,7 +104,7 @@ Colima
------
If you use Colima as your container runtimes engine, please follow the next steps:

-1. `Install buildx manually <https://github.com/docker/buildx#manual-download>`_ and follow it's instructions
+1. `Install buildx manually <https://github.com/docker/buildx#manual-download>`_ and follow its instructions

2. Link the Colima socket to the default socket path. Note that this may break other Docker servers.

@@ -252,7 +252,7 @@ Typical development tasks
#########################

For many of the development tasks you will need ``Breeze`` to be configured. ``Breeze`` is a development
-environment which uses docker and docker-compose and it's main purpose is to provide a consistent
+environment which uses docker and docker-compose and its main purpose is to provide a consistent
and repeatable environment for all the contributors and CI. When using ``Breeze`` you avoid the "works for me"
syndrome - because not only others can reproduce easily what you do, but also the CI of Airflow uses
the same environment to run all tests - so you should be able to easily reproduce the same failures you
2 changes: 1 addition & 1 deletion Dockerfile
@@ -1336,7 +1336,7 @@ RUN if [[ -f /docker-context-files/requirements.txt ]]; then \

##############################################################################################
# This is the actual Airflow image - much smaller than the build one. We copy
-# installed Airflow and all it's dependencies from the build image to make it smaller.
+# installed Airflow and all its dependencies from the build image to make it smaller.
##############################################################################################
FROM ${PYTHON_BASE_IMAGE} as main

2 changes: 1 addition & 1 deletion RELEASE_NOTES.rst
@@ -5439,7 +5439,7 @@ It has been removed.
``airflow.settings.CONTEXT_MANAGER_DAG``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-CONTEXT_MANAGER_DAG was removed from settings. It's role has been taken by ``DagContext`` in
+CONTEXT_MANAGER_DAG was removed from settings. Its role has been taken by ``DagContext`` in
'airflow.models.dag'. One of the reasons was that settings should be rather static than store
dynamic context from the DAG, but the main one is that moving the context out of settings allowed to
untangle cyclic imports between DAG, BaseOperator, SerializedDAG, SerializedBaseOperator which was
2 changes: 1 addition & 1 deletion airflow/cli/cli_config.py
@@ -593,7 +593,7 @@ def string_lower_type(val):
ARG_DEPENDS_ON_PAST = Arg(
("-d", "--depends-on-past"),
help="Determine how Airflow should deal with past dependencies. The default action is `check`, Airflow "
"will check if the the past dependencies are met for the tasks having `depends_on_past=True` before run "
"will check if the past dependencies are met for the tasks having `depends_on_past=True` before run "
"them, if `ignore` is provided, the past dependencies will be ignored, if `wait` is provided and "
"`depends_on_past=True`, Airflow will wait the past dependencies until they are met before running or "
"skipping the task",
2 changes: 1 addition & 1 deletion airflow/cli/commands/dag_command.py
@@ -240,7 +240,7 @@ def dag_dependencies_show(args) -> None:

@providers_configuration_loaded
def dag_show(args) -> None:
"""Display DAG or saves it's graphic representation to the file."""
"""Display DAG or saves its graphic representation to the file."""
dag = get_dag(args.subdir, args.dag_id)
dot = render_dag(dag)
filename = args.save
2 changes: 1 addition & 1 deletion airflow/example_dags/example_setup_teardown_taskflow.py
@@ -50,7 +50,7 @@ def my_third_task():

# The method `as_teardown` will mark task_3 as teardown, task_1 as setup, and
# arrow task_1 >> task_3.
-# Now if you clear task_2, then it's setup task, task_1, will be cleared in
+# Now if you clear task_2, then its setup task, task_1, will be cleared in
# addition to its teardown task, task_3

# it's also possible to use a decorator to mark a task as setup or
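For context on the hunk above, a minimal sketch of the setup/teardown wiring those comments describe, assuming Airflow 2.7+ with the TaskFlow API; the task names and bodies are placeholders, not the full example DAG:

from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule=None, start_date=datetime(2023, 8, 1), catchup=False)
def setup_teardown_sketch():
    @task
    def task_1():  # will be marked as setup
        print("set up resources")

    @task
    def task_2():  # the "work" task
        print("do the work")

    @task
    def task_3():  # will be marked as teardown
        print("tear down resources")

    t1, t2, t3 = task_1(), task_2(), task_3()
    # as_teardown marks t3 as teardown, t1 as its setup, and adds the arrow
    # t1 >> t3, so clearing t2 also clears its setup (t1) and teardown (t3).
    t1 >> t2 >> t3.as_teardown(setups=t1)


setup_teardown_sketch()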
2 changes: 1 addition & 1 deletion airflow/jobs/local_task_job_runner.py
@@ -43,7 +43,7 @@
an attempt by a program/library to write or read outside its allocated memory.
In Python environment usually this signal refers to libraries which use low level C API.
-Make sure that you use use right libraries/Docker Images
+Make sure that you use right libraries/Docker Images
for your architecture (Intel/ARM) and/or Operational System (Linux/macOS).
Suggested way to debug
2 changes: 1 addition & 1 deletion airflow/jobs/scheduler_job_runner.py
@@ -1752,7 +1752,7 @@ def _cleanup_stale_dags(self, session: Session = NEW_SESSION) -> None:
Find all dags that were not updated by Dag Processor recently and mark them as inactive.
In case one of DagProcessors is stopped (in case there are multiple of them
-for different dag folders), it's dags are never marked as inactive.
+for different dag folders), its dags are never marked as inactive.
Also remove dags from SerializedDag table.
Executed on schedule only if [scheduler]standalone_dag_processor is True.
"""
2 changes: 1 addition & 1 deletion airflow/jobs/triggerer_job_runner.py
@@ -508,7 +508,7 @@ async def cancel_triggers(self):
"""
Drain the to_cancel queue and ensure all triggers that are not in the DB are cancelled.
-This allows the the cleanup job to delete them.
+This allows the cleanup job to delete them.
"""
while self.to_cancel:
trigger_id = self.to_cancel.popleft()
2 changes: 1 addition & 1 deletion airflow/models/dagrun.py
@@ -1320,7 +1320,7 @@ def schedule_tis(
"""
Set the given task instances in to the scheduled state.
-Each element of ``schedulable_tis`` should have it's ``task`` attribute already set.
+Each element of ``schedulable_tis`` should have its ``task`` attribute already set.
Any EmptyOperator without callbacks or outlets is instead set straight to the success state.
2 changes: 1 addition & 1 deletion airflow/providers/amazon/aws/hooks/quicksight.py
@@ -152,7 +152,7 @@ def wait_for_state(
:param target_state: Describes the QuickSight Job's Target State
:param check_interval: the time interval in seconds which the operator
will check the status of QuickSight Ingestion
-:return: response of describe_ingestion call after Ingestion is is done
+:return: response of describe_ingestion call after Ingestion is done
"""
while True:
status = self.get_status(aws_account_id, data_set_id, ingestion_id)
2 changes: 1 addition & 1 deletion airflow/providers/amazon/aws/operators/eks.py
@@ -987,7 +987,7 @@ class EksPodOperator(KubernetesPodOperator):
empty, then the default boto3 configuration would be used (and must be
maintained on each worker node).
:param on_finish_action: What to do when the pod reaches its final state, or the execution is interrupted.
If "delete_pod", the pod will be deleted regardless it's state; if "delete_succeeded_pod",
If "delete_pod", the pod will be deleted regardless its state; if "delete_succeeded_pod",
only succeeded pod will be deleted. You can set to "keep_pod" to keep the pod.
Current default is `keep_pod`, but this will be changed in the next major release of this provider.
:param is_delete_operator_pod: What to do when the pod reaches its final
2 changes: 1 addition & 1 deletion airflow/providers/amazon/aws/triggers/README.md
@@ -38,7 +38,7 @@ The first step to making an existing operator deferrable is to add `deferrable`
The next step is to determine where the operator should be deferred. This will be dependent on what the operator does, and how it is written. Although every operator is different, there are a few guidelines to determine the best place to defer an operator.

1. If the operator has a `wait_for_completion` parameter, the `self.defer` method should be called right before the check for wait_for_completion .
-2. If there is no `wait_for_completion` , look for the "main" task that the operator does. Often, operators will make various describe calls to to the boto3 API to verify certain conditions, or look up some information before performing its "main" task. Often, right after the "main" call to the boto3 API is made is a good place to call `self.defer`.
+2. If there is no `wait_for_completion` , look for the "main" task that the operator does. Often, operators will make various describe calls to the boto3 API to verify certain conditions, or look up some information before performing its "main" task. Often, right after the "main" call to the boto3 API is made is a good place to call `self.defer`.


Once the location to defer is decided in the operator, call the `self.defer` method if the `deferrable` flag is `True`. The `self.defer` method takes in several parameters, listed below:
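The parameter list referenced above is truncated in this view. As a hedged illustration of the pattern only (the operator, trigger, and job names below are invented stand-ins, not Amazon provider APIs), deferral placed right after the "main" call can look like this:

from __future__ import annotations

import asyncio
from datetime import timedelta
from typing import Any, AsyncIterator

from airflow.models.baseoperator import BaseOperator
from airflow.triggers.base import BaseTrigger, TriggerEvent


class SketchJobTrigger(BaseTrigger):
    """Illustrative trigger that simply waits and then fires a success event."""

    def __init__(self, job_id: str, poll_interval: float = 30):
        super().__init__()
        self.job_id = job_id
        self.poll_interval = poll_interval

    def serialize(self) -> tuple[str, dict[str, Any]]:
        # Classpath + kwargs so the triggerer process can re-create the trigger;
        # "sketch_module" is a placeholder for wherever this class actually lives.
        return (
            "sketch_module.SketchJobTrigger",
            {"job_id": self.job_id, "poll_interval": self.poll_interval},
        )

    async def run(self) -> AsyncIterator[TriggerEvent]:
        # A real trigger would poll the service asynchronously until the job finishes.
        await asyncio.sleep(self.poll_interval)
        yield TriggerEvent({"status": "success", "job_id": self.job_id})


class SketchStartJobOperator(BaseOperator):
    def __init__(self, *, job_name: str, deferrable: bool = False, **kwargs):
        super().__init__(**kwargs)
        self.job_name = job_name
        self.deferrable = deferrable

    def execute(self, context):
        job_id = f"{self.job_name}-0001"  # stands in for the "main" API call
        if self.deferrable:
            # Hand the wait over to the triggerer instead of blocking a worker slot;
            # execute_complete is invoked with the TriggerEvent payload when it fires.
            self.defer(
                trigger=SketchJobTrigger(job_id=job_id),
                method_name="execute_complete",
                timeout=timedelta(hours=1),
            )
        return job_id

    def execute_complete(self, context, event=None):
        return event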
2 changes: 1 addition & 1 deletion airflow/providers/apache/hdfs/CHANGELOG.rst
@@ -75,7 +75,7 @@ you can use 3.* version of the provider, but the recommendation is to switch to
Protobuf 3 required by the snakebite-py3 library has ended its life in June 2023 and Airflow and it's
providers stopped supporting it. If you would like to continue using HDFS hooks and sensors
based on snakebite-py3 library when you have protobuf library 4.+ you can install the 3.* version
-of the provider but due to Protobuf incompatibility, you need to do one of the the two things:
+of the provider but due to Protobuf incompatibility, you need to do one of the two things:

* set ``PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python`` variable in your environment.
* downgrade protobuf to latest 3.* version (3.20.3 at this time)
2 changes: 1 addition & 1 deletion airflow/providers/apache/kafka/triggers/await_message.py
@@ -49,7 +49,7 @@ class AwaitMessageTrigger(BaseTrigger):
defaults to None
:param poll_timeout: How long the Kafka client should wait before returning from a poll request to
Kafka (seconds), defaults to 1
-:param poll_interval: How long the the trigger should sleep after reaching the end of the Kafka log
+:param poll_interval: How long the trigger should sleep after reaching the end of the Kafka log
(seconds), defaults to 5
"""
2 changes: 1 addition & 1 deletion airflow/providers/cncf/kubernetes/operators/pod.py
@@ -232,7 +232,7 @@ class KubernetesPodOperator(BaseOperator):
:param poll_interval: Polling period in seconds to check for the status. Used only in deferrable mode.
:param log_pod_spec_on_failure: Log the pod's specification if a failure occurs
:param on_finish_action: What to do when the pod reaches its final state, or the execution is interrupted.
If "delete_pod", the pod will be deleted regardless it's state; if "delete_succeeded_pod",
If "delete_pod", the pod will be deleted regardless its state; if "delete_succeeded_pod",
only succeeded pod will be deleted. You can set to "keep_pod" to keep the pod.
:param is_delete_operator_pod: What to do when the pod reaches its final
state, or the execution is interrupted. If True (default), delete the
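A hedged usage sketch of the on_finish_action parameter documented above; the DAG id, image, and commands are illustrative values, and it assumes a cncf.kubernetes provider release that accepts on_finish_action as a string:

from datetime import datetime

from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator

with DAG(
    dag_id="pod_on_finish_action_sketch",
    start_date=datetime(2023, 8, 1),
    schedule=None,
    catchup=False,
):
    KubernetesPodOperator(
        task_id="probe",
        name="probe",
        namespace="default",
        image="alpine:3.18",
        cmds=["sh", "-c", "echo hello"],
        # "delete_pod" removes the pod whatever its final state,
        # "delete_succeeded_pod" keeps failed pods around for debugging,
        # "keep_pod" never deletes it.
        on_finish_action="delete_succeeded_pod",
    )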
2 changes: 1 addition & 1 deletion airflow/providers/cncf/kubernetes/triggers/pod.py
@@ -61,7 +61,7 @@ class KubernetesPodTrigger(BaseTrigger):
:param get_logs: get the stdout of the container as logs of the tasks.
:param startup_timeout: timeout in seconds to start up the pod.
:param on_finish_action: What to do when the pod reaches its final state, or the execution is interrupted.
If "delete_pod", the pod will be deleted regardless it's state; if "delete_succeeded_pod",
If "delete_pod", the pod will be deleted regardless its state; if "delete_succeeded_pod",
only succeeded pod will be deleted. You can set to "keep_pod" to keep the pod.
:param should_delete_pod: What to do when the pod reaches its final
state, or the execution is interrupted. If True (default), delete the
2 changes: 1 addition & 1 deletion airflow/providers/common/sql/hooks/sql.pyi
@@ -18,7 +18,7 @@
# This is automatically generated stub for the `common.sql` provider
#
# This file is generated automatically by the `update-common-sql-api stubs` pre-commit
-# and the .pyi file represents part of the the "public" API that the
+# and the .pyi file represents part of the "public" API that the
# `common.sql` provider exposes to other providers.
#
# Any, potentially breaking change in the stubs will require deliberate manual action from the contributor
2 changes: 1 addition & 1 deletion airflow/providers/common/sql/operators/sql.pyi
@@ -18,7 +18,7 @@
# This is automatically generated stub for the `common.sql` provider
#
# This file is generated automatically by the `update-common-sql-api stubs` pre-commit
-# and the .pyi file represents part of the the "public" API that the
+# and the .pyi file represents part of the "public" API that the
# `common.sql` provider exposes to other providers.
#
# Any, potentially breaking change in the stubs will require deliberate manual action from the contributor
2 changes: 1 addition & 1 deletion airflow/providers/common/sql/sensors/sql.pyi
@@ -18,7 +18,7 @@
# This is automatically generated stub for the `common.sql` provider
#
# This file is generated automatically by the `update-common-sql-api stubs` pre-commit
-# and the .pyi file represents part of the the "public" API that the
+# and the .pyi file represents part of the "public" API that the
# `common.sql` provider exposes to other providers.
#
# Any, potentially breaking change in the stubs will require deliberate manual action from the contributor
@@ -429,7 +429,7 @@ class GKEStartPodOperator(KubernetesPodOperator):
:param regional: The location param is region name.
:param deferrable: Run operator in the deferrable mode.
:param on_finish_action: What to do when the pod reaches its final state, or the execution is interrupted.
If "delete_pod", the pod will be deleted regardless it's state; if "delete_succeeded_pod",
If "delete_pod", the pod will be deleted regardless its state; if "delete_succeeded_pod",
only succeeded pod will be deleted. You can set to "keep_pod" to keep the pod.
Current default is `keep_pod`, but this will be changed in the next major release of this provider.
:param is_delete_operator_pod: What to do when the pod reaches its final
2 changes: 1 addition & 1 deletion airflow/providers/google/cloud/operators/pubsub.py
@@ -709,7 +709,7 @@ class PubSubPullOperator(GoogleCloudBaseOperator):
:param gcp_conn_id: The connection ID to use connecting to
Google Cloud.
:param messages_callback: (Optional) Callback to process received messages.
-It's return value will be saved to XCom.
+Its return value will be saved to XCom.
If you are pulling large messages, you probably want to provide a custom callback.
If not provided, the default implementation will convert `ReceivedMessage` objects
into JSON-serializable dicts using `google.protobuf.json_format.MessageToDict` function.
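A hedged sketch of a custom messages_callback for the operator documented above, assuming the callback receives the pulled messages plus the task context, as the default MessageToDict-based implementation described above does; the project and subscription values are placeholders:

from airflow.providers.google.cloud.operators.pubsub import PubSubPullOperator


def ids_only_callback(pulled_messages, context):
    # Whatever this returns is what the task pushes to XCom, in place of the
    # default MessageToDict conversion of each ReceivedMessage.
    return [received.message.message_id for received in pulled_messages]


pull_task = PubSubPullOperator(
    task_id="pull_messages",
    project_id="my-project",          # placeholder values
    subscription="my-subscription",
    max_messages=50,
    ack_messages=True,
    messages_callback=ids_only_callback,
)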
2 changes: 1 addition & 1 deletion airflow/providers/google/cloud/sensors/pubsub.py
@@ -73,7 +73,7 @@ class PubSubPullSensor(BaseSensorOperator):
:param gcp_conn_id: The connection ID to use connecting to
Google Cloud.
:param messages_callback: (Optional) Callback to process received messages.
-It's return value will be saved to XCom.
+Its return value will be saved to XCom.
If you are pulling large messages, you probably want to provide a custom callback.
If not provided, the default implementation will convert `ReceivedMessage` objects
into JSON-serializable dicts using `google.protobuf.json_format.MessageToDict` function.
2 changes: 1 addition & 1 deletion airflow/providers/google/cloud/triggers/bigquery_dts.py
@@ -95,7 +95,7 @@ async def run(self) -> AsyncIterator[TriggerEvent]:
self.log.info("Current state is %s", state)

if state == TransferState.SUCCEEDED:
self.log.info("Job has completed it's work.")
self.log.info("Job has completed its work.")
yield TriggerEvent(
{
"status": "success",
@@ -55,7 +55,7 @@ class GKEStartPodTrigger(KubernetesPodTrigger):
will consult the class variable BASE_CONTAINER_NAME (which defaults to "base") for the base
container name to use.
:param on_finish_action: What to do when the pod reaches its final state, or the execution is interrupted.
If "delete_pod", the pod will be deleted regardless it's state; if "delete_succeeded_pod",
If "delete_pod", the pod will be deleted regardless its state; if "delete_succeeded_pod",
only succeeded pod will be deleted. You can set to "keep_pod" to keep the pod.
:param should_delete_pod: What to do when the pod reaches its final
state, or the execution is interrupted. If True (default), delete the
2 changes: 1 addition & 1 deletion airflow/providers/google/cloud/triggers/pubsub.py
@@ -41,7 +41,7 @@ class PubsubPullTrigger(BaseTrigger):
immediately rather than by any downstream tasks
:param gcp_conn_id: Reference to google cloud connection id
:param messages_callback: (Optional) Callback to process received messages.
-It's return value will be saved to XCom.
+Its return value will be saved to XCom.
If you are pulling large messages, you probably want to provide a custom callback.
If not provided, the default implementation will convert `ReceivedMessage` objects
into JSON-serializable dicts using `google.protobuf.json_format.MessageToDict` function.
2 changes: 1 addition & 1 deletion airflow/providers/hashicorp/hooks/vault.py
@@ -49,7 +49,7 @@ class VaultHook(BaseHook):
The mount point should be placed as a path in the URL - similarly to Vault's URL schema:
This indicates the "path" the secret engine is mounted on. Default id not specified is "secret".
Note that this ``mount_point`` is not used for authentication if authentication is done via a
-different engines. Each engine uses it's own engine-specific authentication mount_point.
+different engines. Each engine uses its own engine-specific authentication mount_point.
The extras in the connection are named the same as the parameters ('kv_engine_version', 'auth_type', ...).
4 changes: 2 additions & 2 deletions airflow/providers/microsoft/azure/CHANGELOG.rst
@@ -718,11 +718,11 @@ Breaking changes

This change removes ``azure_container_instance_default`` connection type and replaces it with the
``azure_default``. The problem was that AzureContainerInstance was not needed as it was exactly the
same as the plain "azure" connection, however it's presence caused duplication in the field names
same as the plain "azure" connection, however its presence caused duplication in the field names
used in the UI editor for connections and unnecessary warnings generated. This version uses
plain Azure Hook and connection also for Azure Container Instance. If you already have
``azure_container_instance_default`` connection created in your DB, it will continue to work, but
-the first time you edit it with the UI you will have to change it's type to ``azure_default``.
+the first time you edit it with the UI you will have to change its type to ``azure_default``.

Features
~~~~~~~~
2 changes: 1 addition & 1 deletion airflow/providers/microsoft/azure/hooks/data_lake.py
@@ -241,7 +241,7 @@ class AzureDataLakeStorageV2Hook(BaseHook):
accounts that have a hierarchical namespace. Using Adls_v2 connection
details create DataLakeServiceClient object.
-Due to Wasb is marked as legacy and and retirement of the (ADLS1), it would
+Due to Wasb is marked as legacy and retirement of the (ADLS1), it would
be nice to implement ADLS gen2 hook for interacting with the storage account.
.. seealso::
@@ -92,7 +92,7 @@ class AzureDataFactoryRunPipelineOperator(BaseOperator):
``AzureDataFactoryHook`` will attempt to use the resource group name provided in the corresponding
connection.
:param factory_name: The data factory name. If a value is not passed in to the operator, the
-``AzureDataFactoryHook`` will attempt to use the factory name name provided in the corresponding
+``AzureDataFactoryHook`` will attempt to use the factory name provided in the corresponding
connection.
:param reference_pipeline_run_id: The pipeline run identifier. If this run ID is specified the parameters
of the specified run will be used to create a new run.
2 changes: 1 addition & 1 deletion airflow/ti_deps/deps/mapped_task_expanded.py
@@ -21,7 +21,7 @@


class MappedTaskIsExpanded(BaseTIDep):
"""Checks that a mapped task has been expanded before it's TaskInstance can run."""
"""Checks that a mapped task has been expanded before its TaskInstance can run."""

NAME = "Task has been mapped"
IGNORABLE = False
