docs: fix grammar issues #9959

Status: Open. Wants to merge 2 commits into base branch `master`.
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -44,7 +44,7 @@ their own contributing guides:

3. [Console (JavaScript)](frontend/docs/generic-info.md#contributing-to-hasura-console)

-All of the three components have a single version, denoted by either the git tag or a combination of branch name and git
+All the three components have a single version, denoted by either the git tag or a combination of branch name and git
commit SHA.

For all contributions, a CLA (Contributor License Agreement) needs to be signed
2 changes: 1 addition & 1 deletion dc-agents/CONTRIBUTING.md
@@ -18,7 +18,7 @@ Once node is installed, run `npm ci` to restore all npm packages.
To restore the npm modules used by all the projects, ensure you run `npm ci` in the `/dc-agents` directory (ie. this directory).

### Deriving lockfiles
-Because `sqlite` and `reference` are linked into the root workspace, they don't normally get their own lockfiles (ie. `package-lock.json`), as the lockfile is managed at the root workspace level. However, we want to be able to take these projects and build them outside of the workspace setup we have here, where the root `package-lock.json` does not exist.
+Because `sqlite` and `reference` are linked into the root workspace, they don't normally get their own lockfiles (ie. `package-lock.json`), as the lockfile is managed at the root workspace level. However, we want to be able to take these projects and build them outside the workspace setup we have here, where the root `package-lock.json` does not exist.

In order to achieve this, we have a tool that will derive individual `package-lock.json` files for the `reference` and `sqlite` packages from the root `package-lock.json` file. These derived `package-lock.json` files are committed to the repository so that they can be used by the package-specific Dockerfiles (eg `reference/Dockerfile`).

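The lockfile derivation described in this file's hunk can be illustrated with a small sketch. Everything below is hypothetical (the repository's actual tool, and the lockfile fields it handles, may differ); it only shows the core idea of re-rooting one package's entries from the root workspace `package-lock.json` into a standalone lockfile:

```python
import json

def derive_package_lock(root_lock: dict, package_dir: str) -> dict:
    """Derive a standalone package-lock.json for one workspace package
    from the root workspace lockfile (illustrative sketch only)."""
    prefix = f"{package_dir}/node_modules/"
    # The package's own entry becomes the lockfile root ("" key).
    derived_packages = {"": root_lock["packages"].get(package_dir, {})}
    for path, entry in root_lock["packages"].items():
        # Keep dependencies installed under the package's own
        # node_modules tree, re-rooting their paths.
        if path.startswith(prefix):
            derived_packages["node_modules/" + path[len(prefix):]] = entry
        # Hoisted root-level dependencies would also need resolving
        # here; omitted to keep the sketch short.
    return {
        "name": derived_packages[""].get("name", package_dir),
        "lockfileVersion": root_lock.get("lockfileVersion", 3),
        "packages": derived_packages,
    }

# Toy root workspace lockfile, invented for this example.
root_lock = {
    "lockfileVersion": 3,
    "packages": {
        "sqlite": {"name": "sqlite-agent", "version": "1.0.0"},
        "sqlite/node_modules/left-pad": {"version": "1.3.0"},
        "reference": {"name": "reference-agent", "version": "1.0.0"},
    },
}
derived = derive_package_lock(root_lock, "sqlite")
print(json.dumps(derived, indent=2))
```

The derived lockfile contains only the `sqlite` entry and its own `node_modules` subtree, which is what lets a package-specific Dockerfile run `npm ci` without the root workspace present.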
2 changes: 1 addition & 1 deletion dc-agents/HUB.md
@@ -40,7 +40,7 @@ The reference agent is a good starting point if you want to build your own conne

## API's

-- Github (Coming soon...)
+- GitHub (Coming soon...)
- Prometheus (Coming soon...)
- Salesforce (Coming soon...)
- Zendesk (Coming soon...)
2 changes: 1 addition & 1 deletion docs/CONTRIBUTING.md
@@ -25,5 +25,5 @@ touch with the maintainers in the `GraphQL Engine`->`contrib` channel in the com
## Notes

- Docs are currently deployed manually. Changes will not reflect immediately after a PR gets merged.
-- The search is powered by [Algolia](https://www.algolia.com/) and is updated everyday. Your local changes will not be
+- The search is powered by [Algolia](https://www.algolia.com/) and is updated every day. Your local changes will not be
reflected in search results.
2 changes: 1 addition & 1 deletion rfcs/apollo-federation.md
@@ -542,7 +542,7 @@ query, for the 1st selection set (`TowPks`), we will have the argument
3. Next we would have to use the parsers for the fields (`TwoPksByPk` and
`UsersDataByPk`) to evaluate the `Field` (constructed using the selection set
and arguments in above steps).
-4. Finally we would concatenate the results in a list.
+4. Finally, we would concatenate the results in a list.

Note: The above method of evaluating the query may do multiple fetches from the
database, which might be something that can be optimised in further iterations.
4 changes: 2 additions & 2 deletions rfcs/catalog-migration-db-stats-logging.md
@@ -55,15 +55,15 @@ The output of the above query looks like the following:
]
```

-Doing this will let the user know that some of the queries that are running on the database are in a locked state.
+Doing this will let the user know that some queries that are running on the database are in a locked state.

## Implementation details

1. We'll create a new function `logPGSourceCatalogStats` which will infinitely query in fixed intervals of five seconds, the database using the above mentioned SQL query. The type signature of the function will look like:

`logPGSourceCatalogStats :: forall pgKind m . (MonadIO m, MonadTx m) => Logger Hasura -> SourceConfig '(Postgres pgKind) -> m Void`

-2. The `logPGSourceCatalogStats` function will be run in a separate thread so as to not block the migrations. This can be done by calling `logPGSourceCatalogStats` using the `forkManagedT` function.
+2. The `logPGSourceCatalogStats` function will be run in a separate thread to not block the migrations. This can be done by calling `logPGSourceCatalogStats` using the `forkManagedT` function.

3. To avoid implementing this feature for metadata DB catalog migrations, we need to make the 40_to_41 migration a no-op and move the logic to
be a source catalog migration. Which means to make the `40_to_41.sql` a noop migration and move the migration to `0_to_1.sql` in `pg_source_migrations`. The `41_to_40.sql` migration should change accordingly, it has to check first whether the `locked` column is a `timestamp with time zone` and iff alter it to the boolean type otherwise do nothing.
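The pattern in point 2 of this RFC's hunk, polling on a fixed interval from a thread that never blocks the main work, can be sketched in Python (purely illustrative; the server implements this in Haskell via `forkManagedT`, and the callable names below are invented for the example):

```python
import threading
import time

def log_source_catalog_stats(fetch_stats, log, interval, stop_event):
    # Poll the stats query on a fixed interval until asked to stop.
    # Event.wait doubles as the sleep, so shutdown is immediate.
    while not stop_event.wait(interval):
        log(fetch_stats())

records = []
stop = threading.Event()
worker = threading.Thread(
    target=log_source_catalog_stats,
    # Stand-in for the SQL query that inspects locked queries.
    args=(lambda: {"locked_queries": 2}, records.append, 0.01, stop),
    daemon=True,
)
worker.start()   # stats logging runs off the main thread,
                 # so the "migration" below is never blocked
time.sleep(0.1)  # pretend the migration takes a while
stop.set()
worker.join()
print(f"collected {len(records)} stat snapshots")
```

The key design point mirrored here is that the logger owns no shutdown logic of its own; the parent (like `forkManagedT`'s managed scope) signals it to stop when the migration finishes.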
2 changes: 1 addition & 1 deletion rfcs/column-mutability.md
@@ -92,7 +92,7 @@ with less effort is an important enabler of this.

Our ability to achieve this (and thus the quality of the finished product)
is affected by our choices of the abstractions that we use to express our solution.
-Therefore it's important we pick those that enable us rather than hamper us.
+Therefore, it's important we pick those that enable us rather than hamper us.

### Success

2 changes: 1 addition & 1 deletion rfcs/computed-fields-filters-perms-orderby.md
@@ -68,7 +68,7 @@ AS $function$
$function$
```

-I should able to fetch an author whose `full_name` is 'Bob Morley'
+I should be able to fetch an author whose `full_name` is 'Bob Morley'

```graphql
query {
6 changes: 3 additions & 3 deletions rfcs/disable-query-and-subscription-root-fields.md
@@ -2,10 +2,10 @@ original issue: https://github.com/hasura/graphql-engine/pull/4110

## Allow disabling query root fields

-Currently when a select permission is defined for a role on a table, we
+Currently, when a select permission is defined for a role on a table, we
automatically generate 3 fields for the table (`<table>`, `<table_by_pk>`,
`<table_aggregate>`) in `query_root` and likewise in `subscription_root`. This
-should be customisable to allow some of the patterns as discussed below.
+should be customisable to allow some patterns as discussed below.

### Motivation

@@ -60,7 +60,7 @@ the permission on the table, only `<table>_by_pk` should be exposed in

## Allow disabling subscription fields

-Currently we do not provide a fine grained control on subscriptions that are exposed - if a select permission is defined on a table, the live queries on that table are exposed through `subscription_root`. (Note: the discussion of `query_root` customisability also applies to `subscription_root`).
+Currently, we do not provide a fine-grained control on subscriptions that are exposed - if a select permission is defined on a table, the live queries on that table are exposed through `subscription_root`. (Note: the discussion of `query_root` customisability also applies to `subscription_root`).

### Proposed solution

8 changes: 4 additions & 4 deletions rfcs/function-permissions.md
@@ -11,7 +11,7 @@ modify the database and the data returned by the function is filtered using the
permissions that are specified precisely for that data.

Now consider mutable/volatile functions, we can't automatically infer whether
-or not these functions should be exposed for the sole reason that they *can*
+these functions should be exposed for the sole reason that they *can*
modify the database. This necessitates a permission system for functions.

## Permissions on functions
@@ -30,7 +30,7 @@ args:
definition: {}
```

-Inside the metadata, it be as follows:
+Inside the metadata, it is as follows:

```yaml
version: 3
@@ -44,12 +44,12 @@ sources:
definition: {}
```

-`definition` is empty for now but we'll have more options as we extend
+`definition` is empty for now, but we'll have more options as we extend
the feature set.

## Backwards compatibility

-Currently stable/immutable functions are exposed automatically, so we need to
+Currently, stable/immutable functions are exposed automatically, so we need to
preserve this behaviour. So, we can introduce a new flag
`--infer-function-permissions`, the presence of which is an explicit indication
from the user that they want stable/immutable functions which return a table
8 changes: 4 additions & 4 deletions rfcs/hspec-test-suite.md
@@ -144,7 +144,7 @@ The x-axis represents the various backends. Each backend should implement the fo

- The ability to list out supported all test groups

-- The ability to set up the test database schema, which should cover all of the supported test groups
+- The ability to set up the test database schema, which should cover all the supported test groups

- A blank metadata structure which provides the test database as a data source with a standard name

@@ -182,7 +182,7 @@ Given these backends and test groups, the basic test plan looks like this:

- Replace the metadata on the server with this generated metadata

-- Run all of the tests cases in group G
+- Run all the tests cases in group G

Related work
------------
@@ -196,8 +196,8 @@ Related work
Within that PR, [Vamshi described](https://github.com/hasura/graphql-engine-mono/pull/2403#issuecomment-933630333) the expected DB-to-DB relationship behaviour and a proposal for designing new DB-to-DB joins tests.

*Feedback from DB-to-DB joins testing effort*
-1. the test monad - we're currently using a mix of `Test.Hspec.Wai`'s `WaiSession` and a reader called `TestM` to carry around some postgres config. this limits us to request/response testing, and doesn't give us much access to the guts of the running server. if it were me, i might go full `YesodExample` style and add a new type with an `Example` instance and expose helpers to encapsulate both request/response testing and examining the server state. this also gives you better type inferences and error messages
-2. integrating with `scripts/dev.sh` - these tests are currently in a separate module tree from the other haskell tests (i believe intentionally?), so dev.sh doesn't appear to know about them. probably want to rectify that
+1. the test monad - we're currently using a mix of `Test.Hspec.Wai`'s `WaiSession` and a reader called `TestM` to carry around some postgres config. this limits us to request/response testing, and doesn't give us much access to the guts of the running server. if it were me, I might go full `YesodExample` style and add a new type with an `Example` instance and expose helpers to encapsulate both request/response testing and examining the server state. this also gives you better type inferences and error messages
+2. integrating with `scripts/dev.sh` - these tests are currently in a separate module tree from the other haskell tests (I believe intentionally?), so dev.sh doesn't appear to know about them. probably want to rectify that
3. conventions for backend-specific setup - Phil's composable `testCaseFamily :: SourceMetadata backend -> m (SourceMetadata backend)` looks reasonable, but we're currently doing:
```haskell
withMetadata
2 changes: 1 addition & 1 deletion rfcs/identity-columns.md
@@ -115,7 +115,7 @@ In a sentence:

* Syntax closer to SQL standard: `column GENERATED BY DEFAULT AS IDENTITY`, `column GENERATED ALWAYS AS IDENTITY`.
* Implemented on top of `series`.
-* Columns `GENERATED BY DEFAULT` may be both `INSERT`ed and and `UPDATE`d.
+* Columns `GENERATED BY DEFAULT` may be both `INSERT`ed and `UPDATE`d.
* Columns `GENERATED ALWAYS` may be `INSERT`ed (guarded by an `OVERRIDE SYSTEM VALUE` keyword), but never `UPDATE`d.


2 changes: 1 addition & 1 deletion rfcs/inherited-roles-improvements.md
@@ -102,7 +102,7 @@ but it's not a solution because other permissions exposed by the inherited role

1. How to explicitly set permission to "no permission" when there is a conflict while deriving
permissions for an inherited role? See "Conflicts while inheriting permissions" - product team
-2. What happens when an inherited role is dropped? We'll need to track all the dependent roles of a role, the problem is that roles are an implicit part of the metadata and tracking dependencies for explicit things are easier. For example
+2. What happens when an inherited role is dropped? We'll need to track all the dependent roles of a role, the problem is that roles are an implicit part of the metadata and tracking dependencies for explicit things are easier. For
example: a remote relationship is dependent on its remote schema, so we know that when the remote schema is dropped, if the remote relationship exists then we need to make the metadata inconsistent, in this case the dropping of
the remote schema being the trigger to check its dependencies but it's not the case with roles because a role will be deleted only when no permission (table/remote/function/action) uses the role.
3. Currently, the select permission of an inherited role cannot be expressed in the current select permission metadata syntax because it doesn't account for the column(s) being conditionally present depending on the row filter. TODO (future work) - product team
2 changes: 1 addition & 1 deletion server/COMPILING-ON-MACOS.md
@@ -91,7 +91,7 @@ If you are re-running this command to update your Mac, you may need to run
ln -s cabal/dev-sh.project.local cabal.project.local
```

-(Copying and pasting allows you to add local projects overrides, which may be needed if you are are planning to make changes to the graphql-engine code, but is not required for simply compiling the code as-is).
+(Copying and pasting allows you to add local projects overrides, which may be needed if you are planning to make changes to the graphql-engine code, but is not required for simply compiling the code as-is).

6. Write the version number of the graphql-server that you are intending to build to the file `server/CURRENT_VERSION`.
For example if you are building `v2.13.0` then you can run the following command:
4 changes: 2 additions & 2 deletions server/CONTRIBUTING.md
@@ -45,7 +45,7 @@ This project contains scripts for installing project dependencies automatically

## Development workflow

-You should fork the repo on github and then `git clone https://github.com/<your-username>/graphql-engine`.
+You should fork the repo on GitHub and then `git clone https://github.com/<your-username>/graphql-engine`.
After making your changes

### Compile
@@ -70,7 +70,7 @@ To set up the project configuration to coincide with the testing scripts below,

#### Compiling on MacOS

-If you are on MacOS, or experiencing any errors related to missing dependencies on MacOS, please try [this alternative setup guide](COMPILING-ON-MACOS.md), or try Nix (as above).
+If you are on macOS, or experiencing any errors related to missing dependencies on macOS, please try [this alternative setup guide](COMPILING-ON-MACOS.md), or try Nix (as above).

### IDE Support
