docs: Fix typos for spelling grammar
slowy07 committed Aug 4, 2021
1 parent bbf4910 commit d1fe549
Showing 47 changed files with 83 additions and 83 deletions.
16 changes: 8 additions & 8 deletions CHANGELOG.md
@@ -71,12 +71,12 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Source for parsing `.ini` file formats
- Tests for noasync high level API.
- Tests for load and save functions in high level API.
-- `Operation` inputs and ouputs default to empty `dict` if not given.
+- `Operation` inputs and outputs default to empty `dict` if not given.
- Ability to export any object with `dffml service dev export`
- Complete example for dataflow run cli command
- Tests for default configs instantiation.
- Example ffmpeg operation.
-- Operations to deploy docker container on receving github webhook.
+- Operations to deploy docker container on receiving github webhook.
- New use case `Redeploying dataflow on webhook` in docs.
- Documentation for creating Source for new File types taking `.ini` as an example.
- New input modes, output modes for HTTP API dataflow registration.
@@ -151,7 +151,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- CSV source overwriting configloaded data to every row
- Race condition in `MemoryRedundancyChecker` when more than 4 possible
parameter sets for an operation.
-- Typing of config vlaues for numpy parsed docstrings where type should be tuple
+- Typing of config values for numpy parsed docstrings where type should be tuple
or list
- Model predict methods now use `SourcesContext.with_features`
### Removed
@@ -240,7 +240,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Create a fresh archive of the git repo for release instead of cleaning
existing repo with `git clean` for development service release command.
- Simplified SLR tests for scratch model
-- Test tensorflow DNNClassifier documentation exaples in CI
+- Test tensorflow DNNClassifier documentation examples in CI
- config directories and files associated with ConfigLoaders have been renamed
to configloader.
- Model config directory parameters are now `pathlib.Path` objects
@@ -257,7 +257,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Directions on how to read the CI under the Git and GitHub page of the
contributing documentation.
- HTTP API
-Static file serving from a dirctory with `-static`
+Static file serving from a directory with `-static`
- `api.js` file serving with the `-js` flag
- Docs page for JavaScript example
- shouldi got an operation to run golangci-lint on Golang code
@@ -295,7 +295,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
documentation for the master branch (on GitHub pages).
- Virtual environment, GitPod, and Docker development environment setup notes to
the CONTRIBUTING.md file.
-- Changelog now included in documenation website.
+- Changelog now included in documentation website.
- Database abstraction `dffml.db`
- SQLite connector
- MySQL connector
@@ -307,7 +307,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- File listing endpoint to HTTP service.
- When an operation throws an exception the name of the instance and the
parameters it was executed with will be thrown via an `OperationException`.
-- Network utilities to preformed cached downloads with hash validation.
+- Network utilities to performed cached downloads with hash validation.
- Development service got a new command, which can retrieve an argument passed
to setuptools `setup` function within a `setup.py` file.
### Changed
@@ -419,7 +419,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Repo.feature method to select a single piece of feature data within a repo.
- Dev service to help with hacking on DFFML and to create models from templates
in the skel/ directory.
-- Classification type parameter to DNNClassifierModelConfig to specifiy data
+- Classification type parameter to DNNClassifierModelConfig to specify data
type of given classification options.
- util.cli CMD classes have their argparse description set to their docstring.
- util.cli CMD classes can specify the formatter class used in
2 changes: 1 addition & 1 deletion dffml/accuracy/accuracy.py
@@ -50,7 +50,7 @@ async def score(
class AccuracyScorer(BaseDataFlowFacilitatorObject):
"""
Abstract base class which should be derived from
-and implmented using various accuracy scorer.
+and implemented using various accuracy scorer.
"""

CONFIG = AccuracyConfig
8 changes: 4 additions & 4 deletions dffml/df/base.py
@@ -151,9 +151,9 @@ def add_label(cls, *above):
@classmethod
def _imp(cls, loaded):
"""
-Returns the operation implemention from a loaded entrypoint object, or
-None if its not an operation implemention or doesn't have the imp
-parameter which is an operation implemention.
+Returns the operation implementation from a loaded entrypoint object, or
+None if its not an operation implementation or doesn't have the imp
+parameter which is an operation implementation.
"""
for obj in [getattr(loaded, "imp", None), loaded]:
if inspect.isclass(obj) and issubclass(obj, cls):
@@ -973,7 +973,7 @@ async def dispatch(

# TODO We should be able to specify multiple operation implementation networks.
# This would enable operations to live in different place, accessed via the
-# orchestrator transparently. This will probably invlove
+# orchestrator transparently. This will probably involve
# dffml.util.asynchelper.AsyncContextManagerList
@base_entry_point("dffml.operation.implementation.network", "opimp", "network")
class BaseOperationImplementationNetwork(BaseDataFlowObject):
16 changes: 8 additions & 8 deletions dffml/df/memory.py
@@ -344,7 +344,7 @@ async def add(self, input_set: BaseInputSet):
in self.ctxhd[handle_string].definitions
):
self.ctxhd[handle_string].definitions[item.definition] = []
-# Add input to by defintion set
+# Add input to by definition set
self.ctxhd[handle_string].definitions[item.definition].append(
item
)
@@ -609,11 +609,11 @@ async def gather_inputs(
# Generate parameters from inputs
for item in by_origin[origin]:
# TODO(p2) We favored comparing names to
-# defintions because sometimes we create
-# defintions which have specs which create new
+# definitions because sometimes we create
+# definitions which have specs which create new
# types which will not equal each other. We
# maybe want to consider switching to comparing
-# exported Defintions
+# exported Definitions
if alternate_definitions:
if (
item.definition.name
@@ -738,7 +738,7 @@ async def gather_inputs(
)
break
# If there is no default value, we don't have a complete
-# paremeter set, so we bail out
+# parameter set, so we bail out
else:
return
# Generate all possible permutations of applicable inputs
@@ -1110,7 +1110,7 @@ async def run_no_retry(
"""
# Check that our network contains the operation
await self.ensure_contains(operation)
-# Create an opimp context and run the opertion
+# Create an opimp context and run the operation
async with self.operations[operation.instance_name](
ctx, octx
) as opctx:
@@ -1357,7 +1357,7 @@ async def __aenter__(self) -> "BaseOrchestratorContext":
self.logger.debug("Reusing %s: %s", name, ctx)
del enter[name]
setattr(self, name, ctx)
-# Creat the exit stack and enter all the contexts we won't be reusing
+# Create the exit stack and enter all the contexts we won't be reusing
self._stack = AsyncExitStack()
self._stack = await aenter_stack(self, enter)
# Ensure that we can run the dataflow
@@ -1387,7 +1387,7 @@ async def initialize_dataflow(self, dataflow: DataFlow) -> None:
# We may have been provided with the implemenation. Attempt to
# look it up from within the dataflow.
opimp = dataflow.implementations.get(operation.name, None)
-# There is a possiblity the operation implemenation network will
+# There is a possibility the operation implementation network will
# be able to instantiate from the given operation implementation
# if present. But we can't count on it.
if not await self.nctx.instantiable(operation, opimp=opimp):
2 changes: 1 addition & 1 deletion dffml/operation/output.py
@@ -364,7 +364,7 @@ class AssociateDefinition(OperationImplementationContext):
"""

async def run(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
-# Look up the definiton for each definition name given
+# Look up the definition for each definition name given
try:
spec = {
await self.octx.ictx.definition(
8 changes: 4 additions & 4 deletions dffml/service/dev.py
@@ -329,7 +329,7 @@ class InstallConfig:
default_factory=lambda: [],
)
nocheck: bool = field(
"Do not preform pre-install dependency checks", default=False
"Do not perform pre-install dependency checks", default=False
)
user: bool = field(
"Perform user install", default=False, action="store_true"
@@ -388,7 +388,7 @@ async def run(self):
main_package = is_develop("dffml")
if not main_package:
raise NotImplementedError(
"Currenty you need to have at least the main package already installed in development mode."
"Currently you need to have at least the main package already installed in development mode."
)
# Check if plugins not in skip list have unmet dependencies
if not self.nocheck:
@@ -511,7 +511,7 @@ class SetupPy(CMD):

class RepoDirtyError(Exception):
"""
-Raised when a release was attempted but there are uncommited changes
+Raised when a release was attempted but there are uncommitted changes
"""


@@ -542,7 +542,7 @@ async def run(self):
if stderr or proc.returncode != 0:
raise RuntimeError(stderr.decode())
if stdout:
raise RepoDirtyError("Uncommited changes")
raise RepoDirtyError("Uncommitted changes")
# cd to directory
with chdir(str(self.package)):
# Get name
4 changes: 2 additions & 2 deletions dffml/util/asynchelper.py
@@ -127,8 +127,8 @@ async def concurrently(
if not task.done() and (nocancel is None or task not in nocancel):
task.cancel()
else:
-# For tasks which are done but have expections which we didn't
-# raise, collect their execptions
+# For tasks which are done but have exceptions which we didn't
+# raise, collect their exceptions
task.exception()


2 changes: 1 addition & 1 deletion dffml/util/testing/consoletest/README.md
@@ -311,7 +311,7 @@ directives.

- Do not fail the whole test if the command exits with a non-zero exit code.

-- `compare-output-imports`: Comma seperated list of Python modules
+- `compare-output-imports`: Comma separated list of Python modules

- Python modules to import before running `compare-output`

2 changes: 1 addition & 1 deletion docs/_ext/literalinclude_diff.py
@@ -30,7 +30,7 @@ def wrapper(self) -> List[Node]:
else:
new_path = old_path[0]
old_path = old_path[0]
-# Preform the replacement
+# Perform the replacement
lines[0] = f"--- {old_path}\n"
lines[1] = f"+++ {new_path}\n"

2 changes: 1 addition & 1 deletion docs/_ext/literalinclude_relative.py
@@ -45,7 +45,7 @@ class LiteralIncludeReader(sphinx.directives.code.LiteralIncludeReader):
def __init__(self, filename: str, options: Dict, config: Config) -> None:
super().__init__(filename, options, config)
# HACK Reach into caller's (LiteralInclude.run()) local variables and
-# access the self variable, which will have the arguements to
+# access the self variable, which will have the arguments to
# literalinclude, which is the unresolved relative path. Caller is the
# first index in the list of FrameInfo objects returned by
# inspect.stack()
4 changes: 2 additions & 2 deletions docs/cli.rst
@@ -119,13 +119,13 @@ Create, modify, run, and visualize DataFlows.
Create
~~~~~~

-Ouput the dataflow description to standard output using the specified
+Output the dataflow description to standard output using the specified
configloader format.

In the following example we create a DataFlow consisting of 2 operations,
``dffml.mapping.create``, and ``print_output``. We use ``-flow`` to edit the
DataFlow and have the input of the ``print_output`` operation come from the
-ouput of the ``dffml.mapping.create`` operation. If you want to see the
+output of the ``dffml.mapping.create`` operation. If you want to see the
difference create a diagram of the DataFlow with and without using the ``-flow``
flag during generation.

2 changes: 1 addition & 1 deletion docs/concepts/dataflow.rst
@@ -217,7 +217,7 @@ running a DataFlow. The following sequence of events take place.
.. image:: /images/dataflow_diagram.svg
:alt: Flow chart showing how DataFlow Orchestrator works

-Benifits of DataFlows
+Benefits of DataFlows
---------------------

- Modularity
2 changes: 1 addition & 1 deletion docs/contributing/codebase.rst
@@ -18,7 +18,7 @@ A *Official* plugin is any plugin maintained within the main Git repo.

This means users only have to install what they need. TensorFlow is several
hundred megabytes, not everyone wants that, or needs that to get machine
-learning models that preform accurately on their problem.
+learning models that perform accurately on their problem.

All plugins have their base class that they derive from in the main package,
which is located in the ``dffml`` directory at the root of the git repo.
2 changes: 1 addition & 1 deletion docs/contributing/git.rst
@@ -134,7 +134,7 @@ is okay.
+--------------+---------------------------------------------------------------+
| whitespace | https://softwareengineering.stackexchange.com/q/121555 |
+--------------+---------------------------------------------------------------+
-| style | You need to run the ``black`` formater |
+| style | You need to run the ``black`` formatter |
+--------------+---------------------------------------------------------------+
| docs | There was an issue when running the ./scripts/docs.sh script |
+--------------+---------------------------------------------------------------+
2 changes: 1 addition & 1 deletion docs/contributing/notebooks.rst
@@ -27,7 +27,7 @@ present in the same directory to the documentation. To overcome this,
``nbsphinx-link`` is used, which allows you to create a symblic link to
notebooks in other directories.

-You can create the link to a notebook in a file with extention ``.nblink`` as follows
+You can create the link to a notebook in a file with extension ``.nblink`` as follows

.. literalinclude:: /examples/notebooks/moving_between_models.nblink

2 changes: 1 addition & 1 deletion docs/contributing/style.rst
@@ -23,7 +23,7 @@ Run the `black <https://github.com/psf/black>`_ formatter on all Python files.
$ black .
-In VSCode open command pallete by Ctrl+Shift+p, open Settings(JSON) and add the properties.
+In VSCode open command pallette by Ctrl+Shift+p, open Settings(JSON) and add the properties.

.. code-block:: json
2 changes: 1 addition & 1 deletion docs/examples/dataflows.rst
@@ -70,7 +70,7 @@ Then change directory into the ``shouldi`` source code you would have written in
the :doc:`/examples/shouldi` example.

.. We have to install dffml-feature-git with the shouldi install command or else
-it will downlaod the latest production release from PyPi
+it will download the latest production release from PyPi
.. code-block:: console
:test:
2 changes: 1 addition & 1 deletion docs/examples/icecream_sales.rst
@@ -174,7 +174,7 @@ Prediction
For the prediction we will using the `test_dataset.csv` this data
was not present in the training dataset.

-Here insted of creating and intermediary file we are directly
+Here instead of creating and intermediary file we are directly
providing the output of the dataflow (temperature and population)
for the prediction of sales.

4 changes: 2 additions & 2 deletions docs/examples/mnist.rst
@@ -38,7 +38,7 @@ and the seed to have the input of the associate_definition operation come from t

.. literalinclude:: /../examples/MNIST/create_dataflow.sh

-.. TODO genreate this automaticlly
+.. TODO genereate this automaticlly
graph TD
0fbe41b549bb236aabadebd7924379fd[multiply]
24e79f7035a289834b34967054b338f5(seed.image)
@@ -78,7 +78,7 @@ and to have the input of the associate_definition operation come from the output

.. literalinclude:: /../examples/MNIST/create_dataflow_1.sh

-.. TODO genreate this automaticlly
+.. TODO genreate this automatically
graph TD
4b0696d0a16f6124e7f9edd38a1c1505[flatten]
0fbe41b549bb236aabadebd7924379fd[multiply]
2 changes: 1 addition & 1 deletion docs/examples/webhook/webhook.rst
@@ -1,6 +1,6 @@
.. _usage_webhook:

-Redeploying on receving GitHub webhook
+Redeploying on receiving GitHub webhook
======================================

We'll move ``ffmpeg`` to a GitHub repo, and set up a webhook DataFlow such that whenever
2 changes: 1 addition & 1 deletion docs/installation.rst
@@ -154,7 +154,7 @@ and replace ``-`` with ``/``.
There's an online IDE based on Theia (similar to VS Code) called GitPod that
gives you a setup development environment to get started working with/on DFFML
right away. However, it comes with the master branch installed, you'll need to
-run the above commands to get the lastest released version.
+run the above commands to get the latest released version.

.. image:: https://gitpod.io/button/open-in-gitpod.svg
:target: https://gitpod.io/#https://github.com/intel/dffml
8 changes: 4 additions & 4 deletions docs/news/0_4_0_alpha_release.rst
@@ -54,9 +54,9 @@ for the full details. We'll cover some highlights here.
added. We now have over 115 models! Check out the model plugins page to see
them all.

-- :doc:`Documentation Testing with Sphinx consoletest extention </contributing/consoletest>`
+- :doc:`Documentation Testing with Sphinx consoletest extension </contributing/consoletest>`

-- We developed a Sphinx extention which has allowed us to test the
+- We developed a Sphinx extension which has allowed us to test the
``code-block:: console`` directives and others in our documentation. This
serves as integration testing / documentation validation. The
:doc:`/tutorials/models/docs` tutorial was written to explain how this can
@@ -79,7 +79,7 @@ We have several major things on deck that we want to get before we declare beta.

- We now have a lot of models to choose from and are at the stage where we
need models to help us choose our models! We're going to have AutoML in the
-Beta release. This will pick the best model with the best hyper paramters
+Beta release. This will pick the best model with the best hyper parameters
for the job.

- Accuracy Scorers
@@ -119,7 +119,7 @@ We have several major things on deck that we want to get before we declare beta.
- Config files in place of command line parameters

- To stop users from having to copy paste so many command line parameters
-across command invocations, we'll be implementating support for config
+across command invocations, we'll be implementing support for config
files. YAML, JSON, etc. will all be able to be used to store what could also
be command line arguments.

2 changes: 1 addition & 1 deletion docs/quickstart/model.rst
@@ -204,7 +204,7 @@ If we wanted to do everything within Python our file might look like this

.. literalinclude:: /../examples/quickstart.py

-The ouput should be as follows
+The output should be as follows

.. code-block::
