Local storage 2/4: Add the new disk type #9409
base: main
Changes from all commits: b7bcce3, e141a46, f7c8bd7, 135f6a9, 950ba73, e970dfe, 0fe00c8, 658be0b, 355efd8, f3259d5, b67ddc4, 3c268e2
```rust
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this
// file, You can obtain one at https://mozilla.org/MPL/2.0/.

use crate::ByteCount;
use crate::typed_uuid::DbTypedUuid;
use nexus_db_schema::schema::disk_type_local_storage;
use omicron_common::api::external;
use omicron_uuid_kinds::DatasetKind;
use omicron_uuid_kinds::DatasetUuid;
use serde::{Deserialize, Serialize};
use uuid::Uuid;

/// A Disk can be backed using a zvol slice from the local storage dataset
/// present on each zpool of a sled.
#[derive(
    Queryable, Insertable, Clone, Debug, Selectable, Serialize, Deserialize,
)]
#[diesel(table_name = disk_type_local_storage)]
pub struct DiskTypeLocalStorage {
    disk_id: Uuid,

    /// For zvols inside a parent dataset, there's an overhead that must be
    /// accounted for when setting a quota and reservation on that parent
    /// dataset. Record at model creation time how much overhead is required
    /// for the parent `local_storage` dataset slice in order to fit the child
    /// volume.
    required_dataset_overhead: ByteCount,

    local_storage_dataset_allocation_id: Option<DbTypedUuid<DatasetKind>>,
}

impl DiskTypeLocalStorage {
    /// Creates a new `DiskTypeLocalStorage`. Returns Err if the computed
    /// required dataset overhead does not fit in a `ByteCount`.
    pub fn new(
        disk_id: Uuid,
        size: external::ByteCount,
    ) -> Result<DiskTypeLocalStorage, external::ByteCountRangeError> {
        // For zvols, there's an overhead that must be accounted for, and it
        // empirically seems to be about 65M per 1G for volblocksize=4096.
        // Multiply the disk size by something a little over this value.

        let one_gb = external::ByteCount::from_gibibytes_u32(1).to_bytes();
        let gbs = size.to_bytes() / one_gb;
        let overhead: u64 =
            external::ByteCount::from_mebibytes_u32(70).to_bytes() * gbs;

        // Don't unwrap this - the size of this disk is a parameter set by an
        // API call, and we don't want to panic on out of range input.
        let required_dataset_overhead =
            external::ByteCount::try_from(overhead)?;
```
**Comment on lines +49 to +52**

**Member:** i think we should probably add some kind of context to this error in the case that it bubbles up to the user. the

**Contributor (Author):** I think so too, but there's currently no way that this would get out to a user from a saga (without querying for specific node's outputs, which would be brittle to changes in the saga itself). The way we've structured things is that (after some validation of the parameters) the disk create saga takes in the I put an additional step in
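The thread above asks for context on this error when it surfaces. A minimal, hypothetical sketch of one way to wrap an out-of-range error with context before it propagates; here `i64::try_from` and a `String` error stand in for `external::ByteCount::try_from` and omicron's actual error types, which are assumptions for illustration:

```rust
// Hypothetical sketch: attach context to an out-of-range error before it
// bubbles up. i64::try_from stands in for external::ByteCount::try_from,
// and the String error is an illustrative assumption, not omicron's API.
fn checked_overhead(overhead: u64) -> Result<i64, String> {
    i64::try_from(overhead).map_err(|e| {
        format!("computed dataset overhead {overhead} does not fit in a ByteCount: {e}")
    })
}

fn main() {
    // A value that fits passes through; one that doesn't carries context.
    assert_eq!(checked_overhead(100), Ok(100));
    assert!(checked_overhead(u64::MAX).is_err());
}
```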
```rust
        Ok(DiskTypeLocalStorage {
            disk_id,
            required_dataset_overhead: required_dataset_overhead.into(),
            local_storage_dataset_allocation_id: None,
        })
    }

    pub fn required_dataset_overhead(&self) -> external::ByteCount {
        self.required_dataset_overhead.into()
    }

    pub fn local_storage_dataset_allocation_id(&self) -> Option<DatasetUuid> {
        self.local_storage_dataset_allocation_id.map(Into::into)
    }
}
```
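The overhead computation in `new` can be sketched in isolation. In this sketch plain `u64` byte counts stand in for the `ByteCount` types (an assumption for illustration, not the real signatures):

```rust
// Standalone sketch of the overhead arithmetic: roughly 70 MiB of zvol
// overhead reserved per whole GiB of disk size. Plain u64s stand in for
// the ByteCount types used by the real code.
fn required_overhead_bytes(disk_size_bytes: u64) -> u64 {
    const GIB: u64 = 1 << 30;
    const OVERHEAD_PER_GIB: u64 = 70 << 20; // 70 MiB

    // Integer division: a disk smaller than 1 GiB contributes zero whole
    // GiB, and therefore zero computed overhead.
    let gbs = disk_size_bytes / GIB;
    OVERHEAD_PER_GIB * gbs
}

fn main() {
    // A 4 GiB disk reserves 4 * 70 MiB = 280 MiB of overhead.
    assert_eq!(required_overhead_bytes(4 << 30), 280 << 20);
    // A 512 MiB disk rounds down to zero whole GiB, hence zero overhead.
    assert_eq!(required_overhead_bytes(512 << 20), 0);
}
```

Note that the integer division means any size below one whole GiB computes zero overhead.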
**Comment:** in the cases where the instance doesn't exist, or the sled doesn't exist, that's probably an invalid state. but since OMDB is a debugging tool, i think it would be nicer if we didn't bail out here and just printed that we couldn't find the instance/sled and kept going, so that we can still get partial output. OMDB ought to be usable in cases where we have put nonsensical state in the database, if possible.
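The suggestion above (report a missing row and continue, rather than bailing out) can be sketched generically. All names here are hypothetical stand-ins, not OMDB's actual code:

```rust
// Hypothetical sketch of the reviewer's pattern: when a lookup fails in a
// debugging tool, emit a placeholder and keep going so partial output is
// still produced. lookup_instance is a stand-in, not OMDB's API.
fn lookup_instance(id: u32) -> Option<String> {
    // Stand-in lookup: pretend only instance 1 exists in the database.
    (id == 1).then(|| "instance-one".to_string())
}

fn describe_instance(id: u32) -> String {
    match lookup_instance(id) {
        Some(name) => format!("instance {id}: {name}"),
        // Don't bail: note the gap and continue with partial output.
        None => format!("instance {id}: <not found in database>"),
    }
}

fn main() {
    println!("{}", describe_instance(1));
    println!("{}", describe_instance(2));
}
```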