
get pods list API for large datasets, pagination in plans? #2349

Closed
darkgaro opened this issue Nov 13, 2014 · 12 comments

Comments

darkgaro (Contributor) commented Nov 13, 2014

Are there plans to add Pagination to those api endpoint results that can end up possibly being large, like listing all of the pods.

smarterclayton (Contributor) commented Nov 16, 2014

We've been discussing it in a few places. Etcd 0.6 has proposed a pageable internal data structure which would potentially allow us to do that efficiently. It's probably a ways out.

bgrant0607 (Member) commented Nov 17, 2015

We should try to use the same approach for paging for metrics, logs, events, lists, etc.
cc @vishh @timstclair @dchen1107

smarterclayton (Contributor) commented Nov 17, 2015

I'd prefer to go down the (prefix for start, next page token, limit) route rather than pure numeric paging. Numeric paging doesn't really help clients much and is a pain to optimize at the limit.
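The token-based scheme can be sketched in a few lines of Python. This is a minimal illustration of (start-after token, limit) paging over a sorted key space, not Kubernetes code; `list_page` and the in-memory `store` dict are hypothetical stand-ins for the API server and its backing store:

```python
def list_page(store, limit, continue_key=None):
    """Return (items, next_token) for one page of a sorted key space.

    store: dict mapping keys (e.g. pod names) to objects.
    continue_key: opaque token from the previous page, or None for page 1.
    """
    keys = sorted(store)
    if continue_key is not None:
        # Resume strictly after the last key of the previous page, so the
        # server only needs a range scan, never an offset count.
        keys = [k for k in keys if k > continue_key]
    page = keys[:limit]
    # Only hand back a token when more items remain.
    next_token = page[-1] if len(page) == limit and len(keys) > limit else None
    return [store[k] for k in page], next_token
```

A client drains the list by looping until the returned token is empty. The key property, compared to numeric offsets, is that each request resumes from a key, which a sorted store can seek to directly.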


bgrant0607 (Member) commented Dec 2, 2015

@smarterclayton I agree. Best practice for Google APIs is to allow list requests to specify a max size and a page token (empty for the first page), with each page returning an opaque next page token. To ensure opacity, page tokens are typically base64 encoded, and often contain structured data.
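The opaque-token convention described above can be sketched as follows. The cursor fields (`last_key`, `rv`) are hypothetical examples of structured state a server might embed; the point is only that the client sees a single base64 string it cannot (and should not) interpret:

```python
import base64
import json

def encode_token(cursor: dict) -> str:
    # Serialize the structured cursor state, then base64-encode it so
    # clients treat the result as an opaque string.
    raw = json.dumps(cursor, sort_keys=True).encode()
    return base64.urlsafe_b64encode(raw).decode()

def decode_token(token: str) -> dict:
    # Only the server decodes tokens; a malformed token is a client error.
    return json.loads(base64.urlsafe_b64decode(token.encode()))
```

Because the token is opaque, the server is free to change its internal cursor format later without breaking clients.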

lavalamp (Member) commented Dec 3, 2015

Yes, I think the internal rules around this are good and we should use them.

lavalamp (Member) commented Feb 2, 2016

@bgrant0607 @smarterclayton I am getting a bit worried that someone is going to run a huge cluster and this will turn into an emergency overnight, but it's a fair amount of work, so I think we want to have it in before it becomes an emergency. Does it make sense to target this for 1.3? @krousey is interested in adding this to API server.

smarterclayton (Contributor) commented Feb 2, 2016

I wanted to wait for etcd3, because I suspect that with protobuf + etcd we will push back the urgency of this by a few months.


bgrant0607 (Member) commented Feb 3, 2016

@smarterclayton How much work is there left to do on protobuf?

smarterclayton (Contributor) commented Feb 3, 2016

  • Merge the serializer implementation
  • Add dev tooling (protoc in an image to run hack/update-generated-protobuf)
  • Finish the watch implementation refactor (to support multiple types of
    streaming protocol), which is mostly complete
  • Determine a versioning policy on protobuf schema (internal only for 1.3,
    experimental, etc)
  • Review the generated protobuf schema by an expert and the details of the
    streaming protocol
  • Give clients command line flags to ask for protobuf (kubelet, controllers)
  • Stress test the system with protobuf on
  • Fix bugs

A few weeks of effort altogether, plus the necessary soak time.


smarterclayton self-assigned this Jul 20, 2017
k8s-github-robot pushed a commit to kubernetes/community that referenced this issue Aug 29, 2017
Automatic merge from submit-queue

Design for consistent API chunking in Kubernetes

In order to reduce the memory allocation impact of very large LIST operations against the Kubernetes apiserver, it should be possible to receive a single large resource list as many individual page requests to the server.

Part of kubernetes/enhancements#365. Taken from kubernetes/kubernetes#49338. Original discussion in kubernetes/kubernetes#2349
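A client of the chunked LIST API drains a large collection by repeating the request with the `limit` and `continue` parameters, following the continue token returned in each response's list metadata. Below is a minimal sketch of that loop; `api_get` is a hypothetical transport callable (e.g. a thin wrapper over an HTTP client), not part of any real client library:

```python
def list_all(api_get, path, chunk_size=500):
    """Fetch a full resource list as successive pages.

    api_get(path, params) is assumed to return a decoded list response
    with "items" and, while more pages remain, a "metadata.continue" token.
    """
    items, token = [], None
    while True:
        params = {"limit": chunk_size}
        if token:
            # Resume from where the previous page left off.
            params["continue"] = token
        resp = api_get(path, params)
        items.extend(resp["items"])
        token = resp.get("metadata", {}).get("continue")
        if not token:
            # An empty/absent token marks the final page.
            return items
```

The server never has to materialize the whole list for one response, which is exactly the memory-allocation problem the design above targets.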
liggitt (Member) commented Jan 7, 2018

Chunking is now supported via the API.

liggitt closed this Jan 7, 2018