
Document basic resource requirements #173

Closed
jdonenine opened this issue Jan 8, 2021 · 11 comments
Assignees
Labels
complexity:high documentation Improvements or additions to documentation enhancement New feature or request sprint:1 sprint:2

@jdonenine
Contributor

Is your feature request related to a problem? Please describe.
It's not clear what system resource requirements there are to run k8ssandra.

Describe the solution you'd like
Document the resource requirements for some basic development and production scenarios.

@jdonenine jdonenine added documentation Improvements or additions to documentation enhancement New feature or request priority: p1 labels Jan 8, 2021
@jdonenine jdonenine added this to the 1.0.0 milestone Jan 8, 2021
@jdonenine jdonenine added this to To do in K8ssandra via automation Jan 8, 2021
@jdonenine jdonenine moved this from To do to To do (P1) in K8ssandra Jan 8, 2021
@jdonenine jdonenine self-assigned this Jan 8, 2021
@bradfordcp
Member

There are a few items to document here. This ticket describes resources for development / production scenarios, which reads as k8s cluster requirements (and potentially specs for VMs in environments where resources may be limited, e.g. Docker for Mac).

  1. Document K8s cluster requirements - Node count, CPU, memory, and storage
  2. Document K8ssandra component resource requirements / limits and how to override within values.yaml
  3. What are the different values for a single worker deployment (dev environment) vs multi-worker (prod)?
  4. Should we call out how binpacking multiple C* nodes per worker changes the specs?
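As a sketch of item 2, a values.yaml override might look like the following. This is a hypothetical fragment: the `resources` requests/limits keys and their placement under `cassandra` are assumptions, and the actual key names should be verified against the chart's documented values.yaml schema.

```yaml
# Hypothetical values.yaml overrides -- key names are assumptions
# and should be checked against the chart's actual values schema.
cassandra:
  resources:
    requests:
      cpu: "1"
      memory: 2Gi
    limits:
      cpu: "2"
      memory: 4Gi

stargate:
  enabled: true
  replicas: 1
```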

@jsanda
Contributor

jsanda commented Feb 4, 2021

I know we expose resources for some components, but I'm not sure we do for all of them. We should create additional tickets for any components for which resources are not configurable.

@jdonenine jdonenine added this to To do in K8ssandra Feb 5, 2021
@jdonenine jdonenine moved this from To do to Documentation To Do in K8ssandra Feb 12, 2021
@jdonenine jdonenine moved this from Documentation To Do to Engineering To do in K8ssandra Feb 16, 2021
@jdonenine
Contributor Author

Yesterday when we talked about this, we discussed trying to establish a resource estimate based on a few different user profiles. @bradfordcp @jsanda, do you have any suggestions on what those profiles should be, beyond a "developer"?

We also need to figure out what we're measuring: bare minimum resources to deploy? Resources with some simulated load applied? Load applied directly, or via Stargate? There's a lot to consider here 😄

@jdonenine
Contributor Author

@adejanovski is going to run some performance tests with NoSQLBench to evaluate some developer focused settings.

For 1.0 our goal will be to create a measured set of resource defaults (memory, cpu, disk) that we can provide for a local developer environment and be able to document what types of tests were executed against that environment.

@jdonenine jdonenine moved this from Engineering To do to In progress in K8ssandra Feb 24, 2021
@adejanovski
Contributor

Based on the benchmarks and the blog post I'm writing, we have the following min requirements for Docker resources to run the whole stack (including 3 Cassandra nodes and 1 Stargate node):

  • 4 cores
  • 8GB RAM
  • Not sure about disks; I didn't really check that part. 40GB of free space to pull the various images and put some data in seems like a decent assumption.

Reducing the number of Cassandra nodes to a single one lets you go as low as 4GB of RAM instead of 8GB.

The above specs allow for a 100 ops/s workload with decent latencies.

@adejanovski
Contributor

@johnwfrancis, is that enough information for you to document this?
I can point you to the draft blog post as well for more details.

@johnwfrancis
Contributor

@adejanovski actually it's not. I've not been able to get 3 nodes running reliably with 6 cores and 8 gigs of RAM. I need the Helm chart settings and any K8s config info. For instance, do you need 3 k8s workers? What should the heap/RAM settings be for the k8ssandra Helm chart? Pretend I'm an idiot, which, in this case, I sort of am. Once I get that, I'll run through the configuration on my machine to make sure everything is working, repeatable, and reliable. Thanks!

@adejanovski
Contributor

Good point @johnwfrancis!
I need to give you the correct settings to get the stack running with 8GB RAM:

cassandra:
  datacenters:
  - name: dc1
    size: 3
  heap:
    size: 500M
    newGenSize: 250M

stargate:
  enabled: true
  replicas: 1
  heapMB: 300

With 4GB RAM, that would be:

cassandra:
  datacenters:
  - name: dc1
    size: 1
  heap:
    size: 500M
    newGenSize: 250M

stargate:
  enabled: true
  replicas: 1
  heapMB: 300

The number of workers doesn't matter performance-wise, but if you have a single worker node (as with minikube), you'll need to set allowMultipleNodesPerWorker to true (it defaults to false). That gives the following values:

cassandra:
  allowMultipleNodesPerWorker: true
  datacenters:
  - name: dc1
    size: 3
  heap:
    size: 500M
    newGenSize: 250M

stargate:
  enabled: true
  replicas: 1
  heapMB: 300
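Assuming one of the values blocks above is saved to a file such as dev-values.yaml, it can be applied with a standard Helm install. The repo URL and chart name below follow the k8ssandra 1.x documentation; verify them against the current docs before use.

```shell
# Add the k8ssandra Helm repo (URL per the k8ssandra 1.x docs) and
# install the chart with the custom values file from this thread.
helm repo add k8ssandra https://helm.k8ssandra.io/stable
helm repo update
helm install k8ssandra k8ssandra/k8ssandra -f dev-values.yaml
```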

@johnwfrancis
Contributor

Thanks!! So with only 4 gigs (allocated to Docker in the case of the standalone solutions like Kind/K3d/Minikube) you're limited to a single C* node, right?

@adejanovski
Contributor

Correct 👍

@jdonenine jdonenine moved this from In progress to Done in K8ssandra Mar 19, 2021
@jdonenine jdonenine moved this from Done to Reviewer approved in K8ssandra Mar 23, 2021
@jdonenine jdonenine moved this from Reviewer approved to Done in K8ssandra Mar 23, 2021
@johnsmartco
Contributor

K8ssandra Documentation automation moved this from In progress to Done Apr 1, 2021