
## Specifying the Joins

Apongo needs to know which fields are joins, and how to join them.

A custom GraphQL directive, `@apongo`, is used to specify this information directly in the type declarations. Here's an example:

```
type User {
  ...
  company: Company @apongo(lookup: { collection: "companies", localField: "companyId", foreignField: "_id" })
}

type Query {
  ...
  users: [User!]!
}
```

## Writing the Resolvers

```
const users = (_, { limit = 20 }, context, resolveInfo) => {
  const pipeline = [
    // Filter the results as necessary
    { $match: { type: 'client' } },

    // Include all the pipeline stages generated by Apongo to do the joins.
    // We pass `null` since the `users` query is mapped directly to the result
    // of an aggregation on the Users collection.
    ...createPipeline(null, resolveInfo, context),

    // Filter, sort or limit the result
    { $limit: limit },
  ];

  // How you call Mongo will depend on your code base. You'll need to pass your pipeline to Mongo's aggregate.
  return UsersCollection.aggregate(pipeline);
};
```

## API

### createPipeline

`createPipeline` is called with three parameters:

* _mainFieldName_: The name of the field containing the result of the aggregation, or `null`. See below.
* _resolveInfo_: The `resolveInfo` passed to your resolver.
* _context_: The context passed to your resolver.

This function analyses the query and constructs the aggregation pipeline stages that perform the joins. In the example above, as we aggregate the __Users__ collection it sees the request for the `company` field and adds a join to the pipeline:

```
[
  {
    '$lookup': {
      from: 'companies',
      localField: 'companyId', // companyId comes from the Users document
      foreignField: '_id',
      as: 'company'
    }
  },
  { '$unwind': { path: '$company', preserveNullAndEmptyArrays: true } }
]
```

By default `createPipeline` assumes that the fields in the current GraphQL request map directly to the collection that you're aggregating. However, this may not be the case. Take this example:

```
type PaginatedUsers {
  users: [User!]!
  count: Int
}

type Query {
  paginatedUsers: PaginatedUsers!
}
```

Here, calling `createPipeline` within the `paginatedUsers` resolver with `null` as the `mainFieldName` will result in a slight problem:

```
[
  {
    '$lookup': {
      from: 'companies',
      localField: 'users.companyId', // Error - this should be 'companyId'
      foreignField: '_id',
      as: 'users.company'
    }
  },
  {
    '$unwind': { path: '$users.company', preserveNullAndEmptyArrays: true }
  },
]
```

Apongo, having recursed into the `users` field of the request, now tries to look up `companyId`
at `users.companyId` within the current pipeline, even though the documents being aggregated have `companyId` at their root.

When we wish to create a pipeline for a specific field within the response, we need to pass in
the name of that field:


```
// Pass 'users' as the field returning data from the Users collection...
const pipeline = createPipeline('users', resolveInfo, context);
// ...then aggregate over the Users collection
return UsersCollection.aggregate(pipeline);
```

See below for more information about handling pagination.

## The @apongo directive

Wherever you need to access a field using `$` you should include the token `@path`. Apongo will replace occurrences of
`@path` with the field's current path, allowing for previous joins.
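For illustration, here's a minimal sketch. It's hedged: `fullName` is a hypothetical field, and it assumes the directive's `expr` argument, which accepts a stringified Mongo expression:

```
type User {
  # Hypothetical computed field. Apongo replaces '@path' with the field's current
  # path in the pipeline, so the expression still resolves correctly when User is
  # reached via a previous join (e.g. '$user.firstName' rather than '$firstName').
  fullName: String @apongo(expr: "{ $concat: ['$@path.firstName', ' ', '$@path.lastName'] }")
}
```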


## Development Considerations

1. Remember that the directives are only used by resolvers that call `createPipeline` to create an
aggregation pipeline. They are ignored by all other resolvers.

2. It's very important to understand that resolvers are __always__ called, even for fields which have already
been fetched by `createPipeline`. In our example above, if we provide a `company` resolver for the `User` type
then it will be called for each fetched user, even though the company has already been fetched by the aggregation.

It would be very costly to allow the server to refetch all of these fields unnecessarily, so the resolvers
need to be written to only fetch the field if it doesn't already exist in the root.

Our User resolver might look like this:

```
const User = {
  // We only fetch fields that haven't been fetched by createPipeline.
  // companyId comes from the database document; company is the result fetched via the pipeline.
  company: ({ companyId, company }) => company || CompaniesCollection.findOne(companyId),
  // ...
};
```

In the above example we simply test whether `company` has already been fetched into the root object
(via the `$lookup` stage created by Apongo), and if it hasn't we perform the lookup in the traditional way.

There's a slight performance limitation when the `$lookup` returns a null value. The resolver then receives
null for the joined field and can't know that an attempt was already made to do the join, so we end up making
an __unnecessary__ call to the database (which will again return `null`). Such is life.
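To make that concrete, a small sketch using the same hypothetical `company` resolver as above:

```
// Suppose user.companyId references a company that no longer exists. The $lookup
// matches nothing, and after the $unwind stage (preserveNullAndEmptyArrays: true)
// the root object carries no company value. The resolver can't distinguish
// "join attempted, nothing found" from "join never attempted", so the fallback runs:
company: ({ companyId, company }) => company || CompaniesCollection.findOne(companyId),
// ...and CompaniesCollection.findOne(companyId) also returns null.
```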

## Recipes

### Pagination
Displaying a table of paginated data across multiple collections is likely to be a common requirement.
Typically when displaying paginated data we need to supply the Apollo client with both the data to display
and the total number of results, so that the number of pages can be shown in the UI.

By enhancing the aggregation pipeline we can do this quite easily. The types might look like this:

```
type PaginatedUsers {
  users: [User!]!
  count: Int
}

type Query {
  paginatedUsers: PaginatedUsers!
}
```

And the resolver:

```
const paginatedUsers = (_, { limit = 20, offset = 0 }, context, resolveInfo) => {
  const pipeline = [
    // Filter the results as necessary
    { $match: { type: 'client' } },

    // Include all the pipeline stages generated by Apongo to do the joins.
    // Note that we *must* specify the field for which we're creating the pipeline.
    ...createPipeline('users', resolveInfo, context),
  ];

  // Split the aggregation into two facets: one returning the requested page of
  // users, the other returning the total count.
  pipeline.push(
    {
      $facet: {
        users: [
          { $skip: offset },
          { $limit: limit },
        ],
        count: [{ $count: 'count' }],
      },
    },
  );

  return UsersCollection.aggregate(pipeline).exec().then(([{ users, count }]) => {
    return { users, count: count.length === 0 ? 0 : count[0].count };
  });
};
```