
Feature request: support for storage pools #311

Closed
sebastiansirch opened this issue Oct 17, 2018 · 10 comments
Labels: area/ui (UI related like UI or CLI), component/longhorn-manager (Longhorn manager, control plane), kind/feature (Feature request, new feature)

Comments

sebastiansirch commented Oct 17, 2018

Currently, the disks of all nodes are added to a single storage pool. This assumes that all disks have the same IO performance; otherwise, how fast a provisioned volume actually is becomes a matter of chance.

In a setting with heterogeneous nodes (e.g. some have SSDs, some HDDs), the following features would be very helpful:

  • assign disks to storage pools (e.g. "hdd-pool", "ssd-pool")
  • create storage classes (e.g. "fast", "medium", "slow") referencing these storage pools, so that PVCs can be provisioned automatically from the given storage pool. This would allow developers to make a specific and reliable choice about the IO performance they request (and actually get)
@yasker yasker added kind/feature, area/ui, component/longhorn-manager labels Oct 17, 2018
@yasker yasker added this to the v0.4.0 milestone Oct 17, 2018
@yasker yasker modified the milestones: v0.4.0, v0.5.0 Jan 13, 2019
@yasker yasker modified the milestones: v0.5.0, v0.6.0 Apr 3, 2019
mmriis commented May 21, 2019

+1, would like to see this, as we use bare-metal servers that have both HDD and SSD storage.

yasker (Member) commented Jun 3, 2019

We can also add node-level tags. The disks in each node will inherit that node's tags, so users can control which group of nodes can be used for storage. @junkiebev

mmriis commented Jun 4, 2019

Would this mean that some nodes should only have fast storage and some only slow? So no mixing of storage classes within a node?

yasker (Member) commented Jun 4, 2019

@mmriis No. The node tags do not override the disk tags. You can still mix storage classes within a node.

For example, the node tag can be main for nodes 1-5 and backup for nodes 6-10. The tag for disk A on node 1 can be fast, disk B on node 1 can be slow, and disk C on node 6 can be fast. Internally, we will treat disk A on node 1 as both fast and main, disk B on node 1 as slow and main, and disk C on node 6 as fast and backup.

If a volume asks for fast, disks A and C may be used. If a volume asks for main, disks A and B may be used. The selector is always an AND, so if a volume asks for fast and main, only disk A may be used (given there is no other disk tagged both fast and main).
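
To make that concrete, here is a minimal sketch of the AND tag matching, under the assumption stated above that a disk effectively carries both its own tags and its node's tags (the Disk class and eligible_disks helper are hypothetical illustrations, not Longhorn's actual scheduler code):

```python
# Sketch of the AND tag-selector described above (illustrative only; the
# Disk class and eligible_disks helper are hypothetical, not Longhorn code).
from dataclasses import dataclass

@dataclass(frozen=True)
class Disk:
    name: str
    node: str
    node_tags: frozenset  # tags on the node that owns this disk
    disk_tags: frozenset  # tags on the disk itself

    @property
    def effective_tags(self):
        # A disk inherits its node's tags in addition to its own tags.
        return self.node_tags | self.disk_tags

def eligible_disks(disks, selector):
    """Return disks whose effective tags contain every requested tag (AND)."""
    wanted = frozenset(selector)
    return [d for d in disks if wanted <= d.effective_tags]

disks = [
    Disk("disk A", "node 1", frozenset({"main"}), frozenset({"fast"})),
    Disk("disk B", "node 1", frozenset({"main"}), frozenset({"slow"})),
    Disk("disk C", "node 6", frozenset({"backup"}), frozenset({"fast"})),
]

print([d.name for d in eligible_disks(disks, {"fast"})])          # ['disk A', 'disk C']
print([d.name for d in eligible_disks(disks, {"main"})])          # ['disk A', 'disk B']
print([d.name for d in eligible_disks(disks, {"fast", "main"})])  # ['disk A']
```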

@yasker yasker assigned ttpcodes and unassigned ttpcodes Jun 10, 2019
fmax commented Jun 14, 2019

+1, it would be great to have an SSD pool and an HDD pool :)

yasker (Member) commented Jun 21, 2019

@smallteeths The backend is merged. We can start UI development.

yasker (Member) commented Jun 25, 2019

@ttpcodes Can you write the API reference for the feature? @smallteeths needs it for the UI.

smallteeths (Contributor) commented

@ttpcodes There is no connection between node tags and disk tags. When I create a volume, I select one of the node tags and one of the disk tags; however, the selected nodes may not contain any disk with that disk tag, so the volume creation fails. It would be nice to add a parameter to the 'disktags' API so it returns only the disk tags that are compatible with the selected node tag.

meldafrawi (Contributor) commented

Steps to test:

  • Create a 4-node cluster, with each node having a main disk A and an additional disk B
  • Deploy Longhorn and create the longhorn StorageClass.
  • For each node, make sure the extra disk is added and scheduling is enabled on it.
  • For nodes 1 & 2: create a node tag main
  • For nodes 3 & 4: create a node tag backup
  • Create a disk tag ssd-pool for
    • Node 1 Disk A
    • Node 2 Disk A
    • Node 3 Disk A
    • Node 4 Disk A
  • Create a disk tag hdd-pool for
    • Node 1 Disk B
    • Node 2 Disk B
    • Node 3 Disk B
    • Node 4 Disk B
  1. Create a volume-1, and only set Node Tag to main
    Expected result: volume replicas should be scheduled on nodes 1 & 2

  2. Create a volume-2, and only set Node Tag to backup
    Expected result: volume replicas should be scheduled on nodes 3 & 4

  3. Create a volume-3, and only set Disk Tag to ssd-pool
    Expected result: volume replicas should be scheduled on all nodes on A Disks

  4. Create a volume-4, and only set Disk Tag to hdd-pool
    Expected result: volume replicas should be scheduled on all nodes on B Disks

  5. Create a volume-5, set Node Tag to main and Disk Tag to ssd-pool
    Expected result: volume replicas should be scheduled on nodes 1 & 2 on A Disks

  6. Create a volume-6, set Node Tag to main and Disk Tag to hdd-pool
    Expected result: volume replicas should be scheduled on nodes 1 & 2 on B Disks

  7. Create a volume-7, set Node Tag to backup and Disk Tag to ssd-pool
    Expected result: volume replicas should be scheduled on nodes 3 & 4 on A Disks

  8. Create a volume-8, set Node Tag to backup and Disk Tag to hdd-pool
    Expected result: volume replicas should be scheduled on nodes 3 & 4 on B Disks

  • Stop scheduling on Node 1
  • Create a new node, Node 5, with 2 disks A and B; make sure the disks are added and scheduling is enabled
  • Create Node Tag main on Node 5
  • Create Disk Tag ssd-pool on Node 5, Disk A
  • Create Disk Tag hdd-pool on Node 5, Disk B
  1. Delete some/all volume-1 replicas on Node 1
    Expected result: replicas should start rebuilding on Node 5, on both Disks A & B.

  2. Delete some/all volume-5 replicas on Node 1
    Expected result: replicas should start rebuilding on Node 5, on Disk A

  3. Delete some/all volume-6 replicas on Node 1
    Expected result: replicas should start rebuilding on Node 5, on Disk B
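
As a sanity check of the expected placements above, here is a small, hypothetical sketch that applies the same AND tag-matching rule to this test topology (illustrative only; Longhorn's real scheduler also considers factors such as capacity and replica anti-affinity):

```python
# Hypothetical check of the expected placements above, using the same AND
# tag-matching rule (illustrative only, not Longhorn's actual scheduler code).
node_tags = {"Node 1": {"main"}, "Node 2": {"main"},
             "Node 3": {"backup"}, "Node 4": {"backup"}}
disk_tags = {"Disk A": {"ssd-pool"}, "Disk B": {"hdd-pool"}}

def placements(selector):
    """Return (node, disk) pairs whose combined tags contain every selector tag."""
    return [(node, disk)
            for node, ntags in node_tags.items()
            for disk, dtags in disk_tags.items()
            if set(selector) <= ntags | dtags]

print(placements({"main"}))                # volume-1: Nodes 1 & 2, Disks A and B
print(placements({"ssd-pool"}))            # volume-3: all nodes, Disk A
print(placements({"main", "ssd-pool"}))    # volume-5: Nodes 1 & 2, Disk A
print(placements({"backup", "hdd-pool"}))  # volume-8: Nodes 3 & 4, Disk B
```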

iomari commented Jan 28, 2022

Greetings,
Any recent update on storage pools? It has been over 2 years.
