
Remove pagination limits #259

Closed
marktani opened this issue Jun 18, 2017 · 6 comments
@marktani (Contributor) commented Jun 18, 2017

The limit of 1000 nodes per pagination query can be quite a hindrance when running scripts or migrations.

@oori commented Jul 3, 2017

> when running scripts or migrations.

Not only then. Sometimes you want the client to grab a whole table when it is small and functional, for example tag key/ID mappings, so that search, auto-complete, or NLP can run purely on the client side.

@ejoebstl (Contributor) commented Jul 23, 2017

Pagination can be very limiting for backend processes - especially when combined with

  • The lack of aggregation
  • The execution time limit of AWS Lambda

The lack of aggregation forces us to fetch all the data to perform simple groupBy/count aggregations.

In the second case, the problem is that even running separate queries, or multiple queries in a single request, can exceed Lambda's execution time limit.

We might get around this by structuring our data differently, though, or by moving away from AWS Lambda.
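The client-side aggregation described above can be sketched roughly like this; the `status` field and the sample nodes are hypothetical, standing in for whatever field you would otherwise group on server-side:

```javascript
// Hedged sketch: once every page of nodes has been fetched, perform the
// groupBy/count aggregation on the client, since the API offers none.
function groupCount(nodes, key) {
  const counts = {}
  for (const node of nodes) {
    const k = node[key]
    counts[k] = (counts[k] || 0) + 1
  }
  return counts
}

// Hypothetical example: counting orders per status after fetching all pages.
const orders = [
  { id: 1, status: 'open' },
  { id: 2, status: 'closed' },
  { id: 3, status: 'open' },
]
console.log(groupCount(orders, 'status')) // { open: 2, closed: 1 }
```

The cost is that all nodes must cross the network just to produce a handful of counts, which is exactly the pain point when combined with the 1000-node page cap.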

@oori commented Jul 25, 2017

Right now, I just hack my way around it, which is ugly and a lose-lose for both of us.
For example, grab a lean list of 3200 tags:

```graphql
query allTags {
  a1: allTags(first: 1000) {
    ...TagFragment
  }
  a2: allTags(first: 1000, skip: 1000) {
    ...TagFragment
  }
  a3: allTags(first: 1000, skip: 2000) {
    ...TagFragment
  }
  a4: allTags(first: 1000, skip: 3000) {
    ...TagFragment
  }
  _allTagsMeta {
    count
  }
}
```

Then I merge them all after the response comes back. Silly, right?
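The post-response merge step can be sketched as follows; the alias names match the `a1`–`a4` pattern in the query above, and the sample response data is hypothetical:

```javascript
// Hedged sketch: stitch the aliased page results (a1..aN) back into one list.
function mergePages(data, prefix, pages) {
  let merged = []
  for (let i = 1; i <= pages; i++) {
    merged = merged.concat(data[`${prefix}${i}`] || [])
  }
  return merged
}

// Hypothetical response shape for the aliased allTags query.
const response = {
  a1: [{ id: 't1' }, { id: 't2' }],
  a2: [{ id: 't3' }],
  a3: [],
  a4: [],
  _allTagsMeta: { count: 3 },
}
const allTags = mergePages(response, 'a', 4)
console.log(allTags.length) // 3
```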

The only alternative I found is to bundle all tags into static JSON at build time and fetch only the diff (records updated since the build timestamp). I opted against it, as it just feels like another hack, doesn't save many bytes on the network, and adds another (slow) step to the build.

@mysport12 commented Nov 5, 2017

I did something similar to @oori. Probably not the best way to go about it, but it gave me the results I was looking for.

```javascript
// Assuming graphql-request, which matches the request(endpoint, query, variables)
// signature used below.
const { request } = require('graphql-request')

const allClubsQuery = `
  query allClubs($skipNum: Int!) {
    allClubs(first: 1000, skip: $skipNum) {
      id
      clubRef
    }
  }
`

async function fetchAllClubs(endpoint) {
  let clubIdMap = []
  // 15 pages of 1000 nodes covers the table; each request is awaited in turn,
  // so no trailing Promise.all is needed (clubIdMap holds plain objects).
  for (let i = 0; i < 15; i++) {
    const skipNum = 1000 * i
    const clubData = await request(endpoint, allClubsQuery, { skipNum })
    clubIdMap = clubIdMap.concat(clubData.allClubs)
  }
  return clubIdMap
}
```
@marktani removed the area/extra label Nov 8, 2017
@marktani (Contributor, Author) commented Dec 19, 2017

Further being discussed in #748.

@marktani closed this Dec 19, 2017
@boid-com commented Jan 9, 2018

The Documentation states: "a maximum of 1000 nodes can be returned per pagination field."

However, it should also state that this 1000-node limitation applies to ALL queries.

I assumed that I could grab all nodes so long as I was not specifying pagination.
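Since the cap applies whether or not pagination arguments are given, a generic workaround is to page until a short page comes back. A minimal sketch, assuming a hypothetical `fetchPage(skip)` helper that returns up to 1000 nodes per call:

```javascript
// Hedged sketch: page through a capped connection until a page comes back
// with fewer than PAGE_SIZE nodes, which signals the end of the data.
const PAGE_SIZE = 1000

async function fetchAll(fetchPage) {
  const all = []
  let skip = 0
  while (true) {
    const page = await fetchPage(skip)
    all.push(...page)
    if (page.length < PAGE_SIZE) break // short page: no more nodes
    skip += PAGE_SIZE
  }
  return all
}
```

Unlike a hard-coded page count, this keeps working as the table grows, at the cost of one extra (short or empty) request at the end.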
