
update min minikube spec to allow zk/kafka #704

Merged: 1 commit into kudobuilder:master on Aug 8, 2019

Conversation

tbaums
Contributor

@tbaums tbaums commented Aug 6, 2019

Based on simplistic testing here, the minimum minikube spec needed to deploy ZooKeeper and Kafka is 4 CPUs and 10 GB of memory.
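
For reference, a minimal sketch of starting minikube with the spec proposed here (flags as in the minikube CLI; the exact invocation is an assumption, not taken from the PR diff):

```sh
# Start minikube with the 4 CPU / 10 GB spec proposed in this PR.
# --memory is interpreted in megabytes, so 10 GB = 10240.
minikube start --cpus=4 --memory=10240
```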

@tbaums tbaums requested a review from gerred as a code owner August 6, 2019 21:09
@kudo-ci

kudo-ci commented Aug 6, 2019

Welcome @tbaums! It looks like this is your first PR to kudobuilder/kudo 🎉

@kudo-ci

kudo-ci commented Aug 6, 2019

Hi @tbaums. Thanks for your PR.

I'm waiting for a kudobuilder member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@kudo-ci kudo-ci added the size/XS label Aug 6, 2019
@jbarrick-mesosphere
Member

The Kafka default requirements are here: https://github.com/kudobuilder/operators/blob/master/repository/kafka/operator/params.yaml#L16

We have three replicas and set a memory request of 2048MB each, so that's six gigabytes before even accounting for ZooKeeper and any other running services.
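
One way to sanity-check this on a running cluster is to list the brokers' memory requests; a hedged sketch (the `app=kafka` label selector is an assumption, adjust it to the operator's actual labels):

```sh
# Print the memory request of each Kafka broker pod, one per line.
kubectl get pods -l app=kafka \
  -o jsonpath='{range .items[*]}{.spec.containers[0].resources.requests.memory}{"\n"}{end}'
```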

A baseline of 6 gigabytes of RAM makes even a basic Kafka cluster expensive to run, whether in dev or in production, and raises the barrier to entry for new contributors.

Additionally, we'll have to account for this in CI and it could limit the number of tests we can run at once.

Maybe instead of recommending bigger clusters, we could re-evaluate our limit? Do we know how it was chosen?

cc @zmalik

@jbarrick-mesosphere
Member

/approve

@kudo-ci

kudo-ci commented Aug 6, 2019

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: jbarrick-mesosphere, tbaums

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [jbarrick-mesosphere]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@alenkacz
Contributor

alenkacz commented Aug 7, 2019

@justinbarrick I think those defaults are good for production use, just not necessary for running locally in minikube 🤔 Should we just update all the examples to install Kafka locally with those memory requirements overridden?

@zmalik
Member

zmalik commented Aug 7, 2019

@justinbarrick The Kafka defaults are already at the lower end of the spectrum for production-ready settings.
Confluent recommends a minimum of 8GB per broker for production use, and in most cases the recommendation ranges from 6GB to 28GB depending on the workload. I wouldn't change those defaults, as we didn't have any issues with them on DC/OS either.

That said, as @alenkacz says, we can start brokers with as little as 512MB of memory by overriding parameters in our tests or local dev installations.
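
A hedged sketch of what such an override might look like with the KUDO CLI (the `BROKER_MEM` parameter name and `512m` value format are assumptions; check the operator's params.yaml, linked above, for the real name):

```sh
# Hypothetical: install Kafka locally with broker memory lowered to 512 MB.
# BROKER_MEM is an assumed parameter name; see params.yaml for the actual one.
kubectl kudo install kafka -p BROKER_MEM=512m
```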

@jbarrick-mesosphere
Member

/lgtm

Sounds good to me.

@jbarrick-mesosphere
Member

/kind documentation

@jbarrick-mesosphere
Member

/ok-to-test

@kudo-ci kudo-ci merged commit de4f770 into kudobuilder:master Aug 8, 2019