From 09d3a04262718d34fb9db02e8a8dda939bc2aecd Mon Sep 17 00:00:00 2001
From: Takayuki Matsubara
Date: Wed, 5 Dec 2018 09:30:29 +0900
Subject: [PATCH 1/2] fix typo

---
 content/documentation/master/data-fetching.md | 2 +-
 content/documentation/v10/data-fetching.md | 2 +-
 content/documentation/v11/data-fetching.md | 2 +-
 content/documentation/v9/data-fetching.md | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/content/documentation/master/data-fetching.md b/content/documentation/master/data-fetching.md
index 6ddcbfc0..50f7f720 100644
--- a/content/documentation/master/data-fetching.md
+++ b/content/documentation/master/data-fetching.md
@@ -223,6 +223,6 @@ The sub fields here of the ``products`` field represent the selection set of tha
 so the data fetcher can optimise the data access queries.
 For example an SQL backed system may be able to use the field sub selection to only retrieve the columns that have been asked for.
 
-In the example above we have asked for ``selectionLocations`` information and hence we may be able to make an more efficient data access query where
+In the example above we have asked for ``sellingLocations`` information and hence we may be able to make an more efficient data access query where
 we ask for product information and selling location information at the same time.
 
diff --git a/content/documentation/v10/data-fetching.md b/content/documentation/v10/data-fetching.md
index 35db0abc..04c91005 100644
--- a/content/documentation/v10/data-fetching.md
+++ b/content/documentation/v10/data-fetching.md
@@ -223,6 +223,6 @@ The sub fields here of the ``products`` field represent the selection set of tha
 so the data fetcher can optimise the data access queries.
 For example an SQL backed system may be able to use the field sub selection to only retrieve the columns that have been asked for.
 
-In the example above we have asked for ``selectionLocations`` information and hence we may be able to make an more efficient data access query where
+In the example above we have asked for ``sellingLocations`` information and hence we may be able to make an more efficient data access query where
 we ask for product information and selling location information at the same time.
 
diff --git a/content/documentation/v11/data-fetching.md b/content/documentation/v11/data-fetching.md
index 6ddcbfc0..50f7f720 100644
--- a/content/documentation/v11/data-fetching.md
+++ b/content/documentation/v11/data-fetching.md
@@ -223,6 +223,6 @@ The sub fields here of the ``products`` field represent the selection set of tha
 so the data fetcher can optimise the data access queries.
 For example an SQL backed system may be able to use the field sub selection to only retrieve the columns that have been asked for.
 
-In the example above we have asked for ``selectionLocations`` information and hence we may be able to make an more efficient data access query where
+In the example above we have asked for ``sellingLocations`` information and hence we may be able to make an more efficient data access query where
 we ask for product information and selling location information at the same time.
 
diff --git a/content/documentation/v9/data-fetching.md b/content/documentation/v9/data-fetching.md
index 35db0abc..04c91005 100644
--- a/content/documentation/v9/data-fetching.md
+++ b/content/documentation/v9/data-fetching.md
@@ -223,6 +223,6 @@ The sub fields here of the ``products`` field represent the selection set of tha
 so the data fetcher can optimise the data access queries.
 For example an SQL backed system may be able to use the field sub selection to only retrieve the columns that have been asked for.
 
-In the example above we have asked for ``selectionLocations`` information and hence we may be able to make an more efficient data access query where
+In the example above we have asked for ``sellingLocations`` information and hence we may be able to make an more efficient data access query where
 we ask for product information and selling location information at the same time.
 

From 195bbbd886bd967342e717cfe372ea8985f1f7f5 Mon Sep 17 00:00:00 2001
From: Takayuki Matsubara
Date: Wed, 5 Dec 2018 09:36:27 +0900
Subject: [PATCH 2/2] fix mismatched quotes pairs

---
 content/documentation/master/data-fetching.md | 2 +-
 content/documentation/master/exceptions.md | 4 ++--
 content/documentation/master/execution.md | 2 +-
 content/documentation/master/scalars.md | 4 ++--
 content/documentation/v10/data-fetching.md | 2 +-
 content/documentation/v10/exceptions.md | 4 ++--
 content/documentation/v10/execution.md | 2 +-
 content/documentation/v10/scalars.md | 4 ++--
 content/documentation/v11/data-fetching.md | 2 +-
 content/documentation/v11/exceptions.md | 4 ++--
 content/documentation/v11/execution.md | 2 +-
 content/documentation/v11/scalars.md | 4 ++--
 content/documentation/v9/data-fetching.md | 2 +-
 content/documentation/v9/exceptions.md | 4 ++--
 content/documentation/v9/execution.md | 2 +-
 content/documentation/v9/scalars.md | 4 ++--
 16 files changed, 24 insertions(+), 24 deletions(-)

diff --git a/content/documentation/master/data-fetching.md b/content/documentation/master/data-fetching.md
index 50f7f720..9e0d0b27 100644
--- a/content/documentation/master/data-fetching.md
+++ b/content/documentation/master/data-fetching.md
@@ -183,7 +183,7 @@ the query is executed. The following section explains more on this.
 
 * ``DataFetchingFieldSelectionSet getSelectionSet()`` - the selection set represents the child fields that have been "selected" under neath the currently executing field. This can be useful to help look ahead to see what sub field information a client wants. The following section explains more on this.
 
-* ```ExecutionId getExecutionId()`` - each query execution is given a unique id. You can use this perhaps on logs to tag each individual
+* ``ExecutionId getExecutionId()`` - each query execution is given a unique id. You can use this perhaps on logs to tag each individual
   query.
 
diff --git a/content/documentation/master/exceptions.md b/content/documentation/master/exceptions.md
index 69da5b1e..604740cb 100644
--- a/content/documentation/master/exceptions.md
+++ b/content/documentation/master/exceptions.md
@@ -28,7 +28,7 @@ These are not graphql errors in execution but rather totally unacceptable condit
 
 - `graphql.execution.UnresolvedTypeException`
 
-  is thrown if a graphql.schema.TypeResolver` fails to provide a concrete
+  is thrown if a `graphql.schema.TypeResolver` fails to provide a concrete
   object type given a interface or union type.
 
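To make the selection-set look-ahead described in the data-fetching diffs above more concrete, a data fetcher can inspect the ``DataFetchingFieldSelectionSet`` before it queries the backing store. The sketch below is illustrative only: the ``match`` argument, the ``queryProducts`` helper and the map-based result shape are assumptions, not part of the documentation being patched.

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;

import graphql.schema.DataFetcher;
import graphql.schema.DataFetchingEnvironment;
import graphql.schema.DataFetchingFieldSelectionSet;

// Illustrative sketch: a products data fetcher that looks ahead at the selected
// sub fields so the backing data access query only fetches what was asked for.
public class ProductsDataFetcher implements DataFetcher<List<Map<String, Object>>> {

    @Override
    public List<Map<String, Object>> get(DataFetchingEnvironment environment) {
        DataFetchingFieldSelectionSet selectionSet = environment.getSelectionSet();

        // If sellingLocations is in the selection set, pull product and selling
        // location information in the same query rather than in a second round trip.
        boolean wantsSellingLocations = selectionSet.contains("sellingLocations");
        String match = environment.getArgument("match"); // argument name is assumed

        return queryProducts(match, wantsSellingLocations);
    }

    // Hypothetical data access method - a real one would adjust its column list
    // and joins based on withSellingLocations.
    private List<Map<String, Object>> queryProducts(String match, boolean withSellingLocations) {
        return Collections.emptyList();
    }
}
```

A real implementation would translate the look-ahead result into the column list or join of the underlying SQL statement, which is the optimisation the patched docs describe.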
@@ -47,7 +47,7 @@ These are not graphql errors in execution but rather totally unacceptable condit
 
 - `graphql.schema.validation.InvalidSchemaException`
 
   is thrown if the schema is not valid when built via
-  graphql.schema.GraphQLSchema.Builder#build()`
+  `graphql.schema.GraphQLSchema.Builder#build()`
 
 - `graphql.execution.UnknownOperationException`
 
diff --git a/content/documentation/master/execution.md b/content/documentation/master/execution.md
index e5aea061..741f4f40 100644
--- a/content/documentation/master/execution.md
+++ b/content/documentation/master/execution.md
@@ -419,7 +419,7 @@ By default the "query" execution strategy is ``graphql.execution.AsyncExecutionS
 each field as ``CompleteableFuture`` objects and not care which ones complete first.
 This strategy allows for the most performant execution.
 
-The data fetchers invoked can themselves return `CompletionStage`` values and this will create
+The data fetchers invoked can themselves return ``CompletionStage`` values and this will create
 fully asynchronous behaviour.
 
 So imagine a query as follows
diff --git a/content/documentation/master/scalars.md b/content/documentation/master/scalars.md
index 49318030..8e03815b 100644
--- a/content/documentation/master/scalars.md
+++ b/content/documentation/master/scalars.md
@@ -69,12 +69,12 @@ We would create a singleton ``graphql.schema.GraphQLScalarType`` instance for th
 The real work in any custom scalar implementation is the ``graphql.schema.Coercing`` implementation. This is responsible for 3 functions
 
 * ``parseValue`` - takes a variable input object and converts into the Java runtime representation
-* ``parseLiteral`` - takes an AST literal ``graphql.language.Value` as input and converts into the Java runtime representation
+* ``parseLiteral`` - takes an AST literal ``graphql.language.Value`` as input and converts into the Java runtime representation
 * ``serialize`` - takes a Java object and converts it into the output shape for that scalar
 
 So your custom scalar code has to handle 2 forms of input (parseValue / parseLiteral) and 1 form of output (serialize).
 
-Imagine this query, which uses variables, AST literals and outputs our scalar type ```email``.
+Imagine this query, which uses variables, AST literals and outputs our scalar type ``email``.
 
 {{< highlight graphql "linenos=table" >}}
 mutation Contact($mainContact: Email!) {
diff --git a/content/documentation/v10/data-fetching.md b/content/documentation/v10/data-fetching.md
index 04c91005..0a7f09a5 100644
--- a/content/documentation/v10/data-fetching.md
+++ b/content/documentation/v10/data-fetching.md
@@ -183,7 +183,7 @@ the query is executed. The following section explains more on this.
 
 * ``DataFetchingFieldSelectionSet getSelectionSet()`` - the selection set represents the child fields that have been "selected" under neath the currently executing field. This can be useful to help look ahead to see what sub field information a client wants. The following section explains more on this.
 
-* ```ExecutionId getExecutionId()`` - each query execution is given a unique id. You can use this perhaps on logs to tag each individual
+* ``ExecutionId getExecutionId()`` - each query execution is given a unique id. You can use this perhaps on logs to tag each individual
   query.
 
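For the ``Coercing`` contract quoted in the scalars diff above, a minimal sketch of the three functions for such an ``email`` scalar might look like the following. The class name and the single-``@`` validation rule are invented for illustration; only the ``Coercing`` method signatures and exception types come from graphql-java.

```java
import graphql.language.StringValue;
import graphql.schema.Coercing;
import graphql.schema.CoercingParseLiteralException;
import graphql.schema.CoercingParseValueException;
import graphql.schema.CoercingSerializeException;

// Illustrative sketch of the three Coercing functions for an email scalar.
// The single '@' check stands in for real validation.
public class EmailCoercing implements Coercing<String, String> {

    private String requireEmail(Object value, RuntimeException onInvalid) {
        String email = String.valueOf(value);
        if (value == null || !email.contains("@")) {
            throw onInvalid;
        }
        return email;
    }

    @Override
    public String serialize(Object dataFetcherResult) {
        // output path: Java runtime object -> scalar output shape
        return requireEmail(dataFetcherResult,
                new CoercingSerializeException("Not a valid email: " + dataFetcherResult));
    }

    @Override
    public String parseValue(Object input) {
        // variable input path: variable value -> Java runtime representation
        return requireEmail(input,
                new CoercingParseValueException("Not a valid email: " + input));
    }

    @Override
    public String parseLiteral(Object input) {
        // AST literal path: graphql.language.Value -> Java runtime representation
        if (!(input instanceof StringValue)) {
            throw new CoercingParseLiteralException("Expected an AST string literal");
        }
        return requireEmail(((StringValue) input).getValue(),
                new CoercingParseLiteralException("Not a valid email literal"));
    }
}
```

Such an instance would then back the singleton ``graphql.schema.GraphQLScalarType`` that the surrounding docs describe.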
diff --git a/content/documentation/v10/exceptions.md b/content/documentation/v10/exceptions.md
index 69da5b1e..604740cb 100644
--- a/content/documentation/v10/exceptions.md
+++ b/content/documentation/v10/exceptions.md
@@ -28,7 +28,7 @@ These are not graphql errors in execution but rather totally unacceptable condit
 
 - `graphql.execution.UnresolvedTypeException`
 
-  is thrown if a graphql.schema.TypeResolver` fails to provide a concrete
+  is thrown if a `graphql.schema.TypeResolver` fails to provide a concrete
   object type given a interface or union type.
 
@@ -47,7 +47,7 @@ These are not graphql errors in execution but rather totally unacceptable condit
 
 - `graphql.schema.validation.InvalidSchemaException`
 
   is thrown if the schema is not valid when built via
-  graphql.schema.GraphQLSchema.Builder#build()`
+  `graphql.schema.GraphQLSchema.Builder#build()`
 
 - `graphql.execution.UnknownOperationException`
 
diff --git a/content/documentation/v10/execution.md b/content/documentation/v10/execution.md
index 2e72bbec..9be82c0f 100644
--- a/content/documentation/v10/execution.md
+++ b/content/documentation/v10/execution.md
@@ -419,7 +419,7 @@ By default the "query" execution strategy is ``graphql.execution.AsyncExecutionS
 each field as ``CompleteableFuture`` objects and not care which ones complete first.
 This strategy allows for the most performant execution.
 
-The data fetchers invoked can themselves return `CompletionStage`` values and this will create
+The data fetchers invoked can themselves return ``CompletionStage`` values and this will create
 fully asynchronous behaviour.
 
 So imagine a query as follows
diff --git a/content/documentation/v10/scalars.md b/content/documentation/v10/scalars.md
index 49318030..8e03815b 100644
--- a/content/documentation/v10/scalars.md
+++ b/content/documentation/v10/scalars.md
@@ -69,12 +69,12 @@ We would create a singleton ``graphql.schema.GraphQLScalarType`` instance for th
 The real work in any custom scalar implementation is the ``graphql.schema.Coercing`` implementation. This is responsible for 3 functions
 
 * ``parseValue`` - takes a variable input object and converts into the Java runtime representation
-* ``parseLiteral`` - takes an AST literal ``graphql.language.Value` as input and converts into the Java runtime representation
+* ``parseLiteral`` - takes an AST literal ``graphql.language.Value`` as input and converts into the Java runtime representation
 * ``serialize`` - takes a Java object and converts it into the output shape for that scalar
 
 So your custom scalar code has to handle 2 forms of input (parseValue / parseLiteral) and 1 form of output (serialize).
 
-Imagine this query, which uses variables, AST literals and outputs our scalar type ```email``.
+Imagine this query, which uses variables, AST literals and outputs our scalar type ``email``.
 
 {{< highlight graphql "linenos=table" >}}
 mutation Contact($mainContact: Email!) {
diff --git a/content/documentation/v11/data-fetching.md b/content/documentation/v11/data-fetching.md
index 50f7f720..9e0d0b27 100644
--- a/content/documentation/v11/data-fetching.md
+++ b/content/documentation/v11/data-fetching.md
@@ -183,7 +183,7 @@ the query is executed. The following section explains more on this.
 
 * ``DataFetchingFieldSelectionSet getSelectionSet()`` - the selection set represents the child fields that have been "selected" under neath the currently executing field. This can be useful to help look ahead to see what sub field information a client wants. The following section explains more on this.
 
-* ```ExecutionId getExecutionId()`` - each query execution is given a unique id. You can use this perhaps on logs to tag each individual
+* ``ExecutionId getExecutionId()`` - each query execution is given a unique id. You can use this perhaps on logs to tag each individual
   query.
 
diff --git a/content/documentation/v11/exceptions.md b/content/documentation/v11/exceptions.md
index 69da5b1e..604740cb 100644
--- a/content/documentation/v11/exceptions.md
+++ b/content/documentation/v11/exceptions.md
@@ -28,7 +28,7 @@ These are not graphql errors in execution but rather totally unacceptable condit
 
 - `graphql.execution.UnresolvedTypeException`
 
-  is thrown if a graphql.schema.TypeResolver` fails to provide a concrete
+  is thrown if a `graphql.schema.TypeResolver` fails to provide a concrete
   object type given a interface or union type.
 
@@ -47,7 +47,7 @@ These are not graphql errors in execution but rather totally unacceptable condit
 
 - `graphql.schema.validation.InvalidSchemaException`
 
   is thrown if the schema is not valid when built via
-  graphql.schema.GraphQLSchema.Builder#build()`
+  `graphql.schema.GraphQLSchema.Builder#build()`
 
 - `graphql.execution.UnknownOperationException`
 
diff --git a/content/documentation/v11/execution.md b/content/documentation/v11/execution.md
index e5aea061..741f4f40 100644
--- a/content/documentation/v11/execution.md
+++ b/content/documentation/v11/execution.md
@@ -419,7 +419,7 @@ By default the "query" execution strategy is ``graphql.execution.AsyncExecutionS
 each field as ``CompleteableFuture`` objects and not care which ones complete first.
 This strategy allows for the most performant execution.
 
-The data fetchers invoked can themselves return `CompletionStage`` values and this will create
+The data fetchers invoked can themselves return ``CompletionStage`` values and this will create
 fully asynchronous behaviour.
 
 So imagine a query as follows
diff --git a/content/documentation/v11/scalars.md b/content/documentation/v11/scalars.md
index 49318030..8e03815b 100644
--- a/content/documentation/v11/scalars.md
+++ b/content/documentation/v11/scalars.md
@@ -69,12 +69,12 @@ We would create a singleton ``graphql.schema.GraphQLScalarType`` instance for th
 The real work in any custom scalar implementation is the ``graphql.schema.Coercing`` implementation. This is responsible for 3 functions
 
 * ``parseValue`` - takes a variable input object and converts into the Java runtime representation
-* ``parseLiteral`` - takes an AST literal ``graphql.language.Value` as input and converts into the Java runtime representation
+* ``parseLiteral`` - takes an AST literal ``graphql.language.Value`` as input and converts into the Java runtime representation
 * ``serialize`` - takes a Java object and converts it into the output shape for that scalar
 
 So your custom scalar code has to handle 2 forms of input (parseValue / parseLiteral) and 1 form of output (serialize).
 
-Imagine this query, which uses variables, AST literals and outputs our scalar type ```email``.
+Imagine this query, which uses variables, AST literals and outputs our scalar type ``email``.
 
 {{< highlight graphql "linenos=table" >}}
 mutation Contact($mainContact: Email!) {
diff --git a/content/documentation/v9/data-fetching.md b/content/documentation/v9/data-fetching.md
index 04c91005..0a7f09a5 100644
--- a/content/documentation/v9/data-fetching.md
+++ b/content/documentation/v9/data-fetching.md
@@ -183,7 +183,7 @@ the query is executed. The following section explains more on this.
 
 * ``DataFetchingFieldSelectionSet getSelectionSet()`` - the selection set represents the child fields that have been "selected" under neath the currently executing field. This can be useful to help look ahead to see what sub field information a client wants. The following section explains more on this.
 
-* ```ExecutionId getExecutionId()`` - each query execution is given a unique id. You can use this perhaps on logs to tag each individual
+* ``ExecutionId getExecutionId()`` - each query execution is given a unique id. You can use this perhaps on logs to tag each individual
   query.
 
diff --git a/content/documentation/v9/exceptions.md b/content/documentation/v9/exceptions.md
index 69da5b1e..604740cb 100644
--- a/content/documentation/v9/exceptions.md
+++ b/content/documentation/v9/exceptions.md
@@ -28,7 +28,7 @@ These are not graphql errors in execution but rather totally unacceptable condit
 
 - `graphql.execution.UnresolvedTypeException`
 
-  is thrown if a graphql.schema.TypeResolver` fails to provide a concrete
+  is thrown if a `graphql.schema.TypeResolver` fails to provide a concrete
   object type given a interface or union type.
 
@@ -47,7 +47,7 @@ These are not graphql errors in execution but rather totally unacceptable condit
 
 - `graphql.schema.validation.InvalidSchemaException`
 
   is thrown if the schema is not valid when built via
-  graphql.schema.GraphQLSchema.Builder#build()`
+  `graphql.schema.GraphQLSchema.Builder#build()`
 
 - `graphql.execution.UnknownOperationException`
 
diff --git a/content/documentation/v9/execution.md b/content/documentation/v9/execution.md
index e5aea061..741f4f40 100644
--- a/content/documentation/v9/execution.md
+++ b/content/documentation/v9/execution.md
@@ -419,7 +419,7 @@ By default the "query" execution strategy is ``graphql.execution.AsyncExecutionS
 each field as ``CompleteableFuture`` objects and not care which ones complete first.
 This strategy allows for the most performant execution.
 
-The data fetchers invoked can themselves return `CompletionStage`` values and this will create
+The data fetchers invoked can themselves return ``CompletionStage`` values and this will create
 fully asynchronous behaviour.
 
 So imagine a query as follows
diff --git a/content/documentation/v9/scalars.md b/content/documentation/v9/scalars.md
index 49318030..8e03815b 100644
--- a/content/documentation/v9/scalars.md
+++ b/content/documentation/v9/scalars.md
@@ -69,12 +69,12 @@ We would create a singleton ``graphql.schema.GraphQLScalarType`` instance for th
 The real work in any custom scalar implementation is the ``graphql.schema.Coercing`` implementation. This is responsible for 3 functions
 
 * ``parseValue`` - takes a variable input object and converts into the Java runtime representation
-* ``parseLiteral`` - takes an AST literal ``graphql.language.Value` as input and converts into the Java runtime representation
+* ``parseLiteral`` - takes an AST literal ``graphql.language.Value`` as input and converts into the Java runtime representation
 * ``serialize`` - takes a Java object and converts it into the output shape for that scalar
 
 So your custom scalar code has to handle 2 forms of input (parseValue / parseLiteral) and 1 form of output (serialize).
 
-Imagine this query, which uses variables, AST literals and outputs our scalar type ```email``.
+Imagine this query, which uses variables, AST literals and outputs our scalar type ``email``.
 
 {{< highlight graphql "linenos=table" >}}
 mutation Contact($mainContact: Email!) {