Add support for HDD Storage Type
gkao123 committed Oct 14, 2020
1 parent abc7875 commit a7fe242
Showing 9 changed files with 303 additions and 21 deletions.
8 changes: 6 additions & 2 deletions examples/kubernetes/dynamic_provisioning/README.md
@@ -13,15 +13,19 @@ parameters:
subnetId: subnet-056da83524edbe641
securityGroupIds: sg-086f61ea73388fb6b
deploymentType: PERSISTENT_1
storageType: HDD
```
* subnetId - the subnet ID that the FSx for Lustre filesystem should be created inside.
* securityGroupIds - a comma-separated list of security group IDs that should be attached to the filesystem.
* deploymentType (Optional) - FSx for Lustre supports three deployment types: SCRATCH_1, SCRATCH_2, and PERSISTENT_1. Default: SCRATCH_1.
* kmsKeyId (Optional) - for deployment type PERSISTENT_1, the customer can specify a KMS key to use.
* perUnitStorageThroughput (Optional) - for deployment type PERSISTENT_1, the customer can specify the storage throughput. Default: "200". Note that the value must be passed as a string, e.g. "200" or "100".
* storageType (Optional) - for deployment type PERSISTENT_1, the customer can specify the storage type, either SSD or HDD. Default: "SSD".
* driveCacheType (Required if storageType is "HDD") - for HDD PERSISTENT_1, specify the type of drive cache, either NONE or READ.
* automaticBackupRetentionDays (Optional) - the number of days to retain automatic backups. The default is to retain backups for 7 days. Setting this value to 0 disables the creation of automatic backups. The maximum retention period for backups is 35 days.
* dailyAutomaticBackupStartTime (Optional) - The preferred time to take daily automatic backups, formatted HH:MM in the UTC time zone.
* copyTagsToBackups (Optional) - A boolean flag indicating whether tags for the file system should be copied to backups. This value defaults to false. If it's set to true, all tags for the file system are copied to all automatic and user-initiated backups where the user doesn't specify tags. If this value is true, and you specify one or more tags, only the specified tags are copied to backups. If you specify one or more tags when creating a user-initiated backup, no tags are copied from the file system, regardless of this value.

### Edit [Persistent Volume Claim Spec](./specs/claim.yaml)
```
apiVersion: v1
@@ -34,9 +38,9 @@ spec:
storageClassName: fsx-sc
resources:
requests:
storage: 1200Gi
storage: 6000Gi
```
Update `spec.resource.requests.storage` with the storage capacity to request. The storage capacity value will be rounded up to 1200 GiB, 2400 GiB, or a multiple of 3600 GiB.
Update `spec.resource.requests.storage` with the storage capacity to request. For SSD, the storage capacity value will be rounded up to 1200 GiB, 2400 GiB, or a multiple of 3600 GiB. If storageType is HDD, the capacity is rounded up to a multiple of 6000 GiB when perUnitStorageThroughput is 12, or to a multiple of 1800 GiB when perUnitStorageThroughput is 40 (for example, the 6000Gi request above is already a multiple of 6000 GiB at throughput 12, but would round up to 7200 GiB at throughput 40).

### Deploy the Application
Create PVC, storageclass and the pod that consumes the PV:
4 changes: 3 additions & 1 deletion examples/kubernetes/dynamic_provisioning_s3/README.md
@@ -26,6 +26,8 @@ parameters:
* deploymentType (Optional) - FSx for Lustre supports three deployment types: SCRATCH_1, SCRATCH_2, and PERSISTENT_1. Default: SCRATCH_1.
* kmsKeyId (Optional) - for deployment type PERSISTENT_1, the customer can specify a KMS key to use.
* perUnitStorageThroughput (Optional) - for deployment type PERSISTENT_1, the customer can specify the storage throughput. Default: "200". Note that the value must be passed as a string, e.g. "200" or "100".
* storageType (Optional) - for deployment type PERSISTENT_1, the customer can specify the storage type, either SSD or HDD. Default: "SSD".
* driveCacheType (Required if storageType is "HDD") - for HDD PERSISTENT_1, specify the type of drive cache, either NONE or READ.

Note:
- The S3 bucket in s3ImportPath and s3ExportPath must be the same; otherwise the driver cannot create the FSx for Lustre filesystem successfully.
@@ -47,7 +49,7 @@ spec:
requests:
storage: 1200Gi
```
Update `spec.resource.requests.storage` with the storage capacity to request. The storage capacity value will be rounded up to 1200 GiB, 2400 GiB, or a multiple of 3600 GiB.
Update `spec.resource.requests.storage` with the storage capacity to request. For SSD, the storage capacity value will be rounded up to 1200 GiB, 2400 GiB, or a multiple of 3600 GiB. If storageType is HDD, the capacity is rounded up to a multiple of 6000 GiB when perUnitStorageThroughput is 12, or to a multiple of 1800 GiB when perUnitStorageThroughput is 40.

### Deploy the Application
Create PVC, storageclass and the pod that consumes the PV:
6 changes: 6 additions & 0 deletions go.sum
@@ -51,6 +51,8 @@ github.com/auth0/go-jwt-middleware v0.0.0-20170425171159-5493cabe49f7/go.mod h1:
github.com/aws/aws-sdk-go v1.16.26/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
github.com/aws/aws-sdk-go v1.29.9 h1:PHq9ddjfZYfCOXyqHKiCZ1CHRAk7nXhV7WTqj5l+bmQ=
github.com/aws/aws-sdk-go v1.29.9/go.mod h1:1KvfttTE3SPKMpo8g2c6jL3ZKfXtFvKscTgahTma5Xg=
github.com/aws/aws-sdk-go v1.35.7 h1:FHMhVhyc/9jljgFAcGkQDYjpC9btM0B8VfkLBfctdNE=
github.com/aws/aws-sdk-go v1.35.7/go.mod h1:tlPOdRjfxPBpNIwqDj61rmsnA85v9jc0Ps9+muhnW+k=
github.com/bazelbuild/bazel-gazelle v0.18.2/go.mod h1:D0ehMSbS+vesFsLGiD6JXu3mVEzOlfUl8wNnq+x/9p0=
github.com/bazelbuild/bazel-gazelle v0.19.1-0.20191105222053-70208cbdc798/go.mod h1:rPwzNHUqEzngx1iVBfO/2X2npKaT3tqPqqHW6rVsn/A=
github.com/bazelbuild/buildtools v0.0.0-20190731111112-f720930ceb60/go.mod h1:5JP0TXzWDHXv8qvxRC4InIazwdyDseBDbzESUMKk1yU=
@@ -319,6 +321,9 @@ github.com/jellevandenhooff/dkim v0.0.0-20150330215556-f50fe3d243e1/go.mod h1:E0
github.com/jimstudt/http-authentication v0.0.0-20140401203705-3eca13d6893a/go.mod h1:wK6yTYYcgjHE1Z1QtXACPDjcFJyBskHEdagmnq3vsP8=
github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af h1:pmfjZENx5imkbgOkpRUYLnmbU7UEFbjtDA2hxJ1ichM=
github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
github.com/jonboulle/clockwork v0.1.0 h1:VKV+ZcuP6l3yW9doeqz6ziZGgcynBVQO+obU0+0hcPo=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/json-iterator/go v0.0.0-20180612202835-f2b4162afba3/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
@@ -748,6 +753,7 @@ gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4 h1:/eiJrUcujPVeJ3xlSWaiNi3uSVmDGBK1pDHUHAnao1I=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gotest.tools v2.1.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw=
gotest.tools v2.2.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw=
gotest.tools/gotestsum v0.3.5/go.mod h1:Mnf3e5FUzXbkCfynWBGOwLssY7gTQgCHObK9tMpAriY=
10 changes: 9 additions & 1 deletion pkg/cloud/cloud.go
@@ -75,6 +75,8 @@ type FileSystemOptions struct {
DeploymentType string
KmsKeyId string
PerUnitStorageThroughput int64
StorageType string
DriveCacheType string
DailyAutomaticBackupStartTime string
AutomaticBackupRetentionDays int64
CopyTagsToBackups bool
@@ -135,6 +137,10 @@ func (c *cloud) CreateFileSystem(ctx context.Context, volumeName string, fileSys
lustreConfiguration.SetDeploymentType(fileSystemOptions.DeploymentType)
}

if fileSystemOptions.DriveCacheType != "" {
lustreConfiguration.SetDriveCacheType(fileSystemOptions.DriveCacheType)
}

if fileSystemOptions.PerUnitStorageThroughput != 0 {
lustreConfiguration.SetPerUnitStorageThroughput(fileSystemOptions.PerUnitStorageThroughput)
}
@@ -154,7 +160,6 @@ func (c *cloud) CreateFileSystem(ctx context.Context, volumeName string, fileSys
ClientRequestToken: aws.String(volumeName),
FileSystemType: aws.String("LUSTRE"),
LustreConfiguration: lustreConfiguration,
StorageCapacity: aws.Int64(fileSystemOptions.CapacityGiB),
SubnetIds: []*string{aws.String(fileSystemOptions.SubnetId)},
SecurityGroupIds: aws.StringSlice(fileSystemOptions.SecurityGroupIds),
Tags: []*fsx.Tag{
@@ -165,6 +170,9 @@ func (c *cloud) CreateFileSystem(ctx context.Context, volumeName string, fileSys
},
}

if fileSystemOptions.StorageType != "" {
input.StorageType = aws.String(fileSystemOptions.StorageType)
}
if fileSystemOptions.KmsKeyId != "" {
input.KmsKeyId = aws.String(fileSystemOptions.KmsKeyId)
}
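For orientation, here is a hedged sketch (not taken from this commit) of how a caller might populate the new fields when creating an HDD-backed PERSISTENT_1 file system. The IDs are placeholders, `c` and `ctx` stand in for a `*cloud` and a `context.Context` as in the tests below, and the string values correspond to the fsx.LustreDeploymentTypePersistent1, fsx.StorageTypeHdd, and fsx.DriveCacheTypeRead constants:

```
// Sketch only — illustrates the new StorageType and DriveCacheType fields.
opts := &FileSystemOptions{
	CapacityGiB:              6000,
	SubnetId:                 "subnet-056da83524edbe641",
	SecurityGroupIds:         []string{"sg-086f61ea73388fb6b"},
	DeploymentType:           "PERSISTENT_1",
	StorageType:              "HDD",
	DriveCacheType:           "READ",
	PerUnitStorageThroughput: 12,
}
fs, err := c.CreateFileSystem(ctx, "fsx-volume-name", opts)
if err != nil {
	// handle the CreateFileSystem error
}
_ = fs
```

With these fields set, the resulting CreateFileSystem request carries StorageType on the input and DriveCacheType on the Lustre configuration, matching the two conditionals added above.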
143 changes: 143 additions & 0 deletions pkg/cloud/cloud_test.go
@@ -161,6 +161,149 @@ func TestCreateFileSystem(t *testing.T) {
mockCtl.Finish()
},
},
{
name: "success: normal with deploymentType and storageTypeSsd",
testFunc: func(t *testing.T) {
mockCtl := gomock.NewController(t)
mockFSx := mocks.NewMockFSx(mockCtl)
c := &cloud{
fsx: mockFSx,
}

req := &FileSystemOptions{
CapacityGiB: volumeSizeGiB,
SubnetId: subnetId,
SecurityGroupIds: securityGroupIds,
DeploymentType: deploymentType,
StorageType: fsx.StorageTypeSsd,
}

output := &fsx.CreateFileSystemOutput{
FileSystem: &fsx.FileSystem{
FileSystemId: aws.String(fileSystemId),
StorageCapacity: aws.Int64(volumeSizeGiB),
DNSName: aws.String(dnsname),
LustreConfiguration: &fsx.LustreFileSystemConfiguration{
MountName: aws.String(mountName),
},
},
}
ctx := context.Background()
mockFSx.EXPECT().CreateFileSystemWithContext(gomock.Eq(ctx), gomock.Any()).Return(output, nil)
resp, err := c.CreateFileSystem(ctx, volumeName, req)
if err != nil {
t.Fatalf("CreateFileSystem is failed: %v", err)
}

if resp == nil {
t.Fatal("resp is nil")
}

if resp.FileSystemId != fileSystemId {
t.Fatalf("FileSystemId mismatches. actual: %v expected: %v", resp.FileSystemId, fileSystemId)
}

if resp.CapacityGiB != volumeSizeGiB {
t.Fatalf("CapacityGiB mismatches. actual: %v expected: %v", resp.CapacityGiB, volumeSizeGiB)
}

if resp.DnsName != dnsname {
t.Fatalf("DnsName mismatches. actual: %v expected: %v", resp.DnsName, dnsname)
}

if resp.MountName != mountName {
t.Fatalf("MountName mismatches. actual: %v expected: %v", resp.MountName, mountName)
}

mockCtl.Finish()
},
},
{
name: "success: normal with deploymentType and storageTypeHdd",
testFunc: func(t *testing.T) {
mockCtl := gomock.NewController(t)
mockFSx := mocks.NewMockFSx(mockCtl)
c := &cloud{
fsx: mockFSx,
}

req := &FileSystemOptions{
CapacityGiB: volumeSizeGiB,
SubnetId: subnetId,
SecurityGroupIds: securityGroupIds,
DeploymentType: fsx.LustreDeploymentTypePersistent1,
StorageType: fsx.StorageTypeHdd,
DriveCacheType: fsx.DriveCacheTypeNone,
}

output := &fsx.CreateFileSystemOutput{
FileSystem: &fsx.FileSystem{
FileSystemId: aws.String(fileSystemId),
StorageCapacity: aws.Int64(volumeSizeGiB),
DNSName: aws.String(dnsname),
LustreConfiguration: &fsx.LustreFileSystemConfiguration{
MountName: aws.String(mountName),
DeploymentType: aws.String(fsx.LustreDeploymentTypePersistent1),
DriveCacheType: aws.String(fsx.DriveCacheTypeNone),
},
},
}
ctx := context.Background()
mockFSx.EXPECT().CreateFileSystemWithContext(gomock.Eq(ctx), gomock.Any()).Return(output, nil)
resp, err := c.CreateFileSystem(ctx, volumeName, req)
if err != nil {
t.Fatalf("CreateFileSystem is failed: %v", err)
}

if resp == nil {
t.Fatal("resp is nil")
}

if resp.FileSystemId != fileSystemId {
t.Fatalf("FileSystemId mismatches. actual: %v expected: %v", resp.FileSystemId, fileSystemId)
}

if resp.CapacityGiB != volumeSizeGiB {
t.Fatalf("CapacityGiB mismatches. actual: %v expected: %v", resp.CapacityGiB, volumeSizeGiB)
}

if resp.DnsName != dnsname {
t.Fatalf("DnsName mismatches. actual: %v expected: %v", resp.DnsName, dnsname)
}

if resp.MountName != mountName {
t.Fatalf("MountName mismatches. actual: %v expected: %v", resp.MountName, mountName)
}

mockCtl.Finish()
},
},
{
name: "failure: incompatible deploymentType and storageTypeHdd",
testFunc: func(t *testing.T) {
mockCtl := gomock.NewController(t)
mockFSx := mocks.NewMockFSx(mockCtl)
c := &cloud{
fsx: mockFSx,
}

req := &FileSystemOptions{
CapacityGiB: volumeSizeGiB,
SubnetId: subnetId,
SecurityGroupIds: securityGroupIds,
DeploymentType: deploymentType,
StorageType: fsx.StorageTypeHdd,
}
ctx := context.Background()
mockFSx.EXPECT().CreateFileSystemWithContext(gomock.Eq(ctx), gomock.Any()).Return(nil, errors.New("CreateFileSystemWithContext failed"))
_, err := c.CreateFileSystem(ctx, volumeName, req)
if err == nil {
t.Fatalf("CreateFileSystem is not failed")
}

mockCtl.Finish()
},
},
{
name: "success: S3 data repository",
testFunc: func(t *testing.T) {
12 changes: 11 additions & 1 deletion pkg/driver/controller.go
@@ -47,6 +47,8 @@ const (
volumeParamsDeploymentType = "deploymentType"
volumeParamsKmsKeyId = "kmsKeyId"
volumeParamsPerUnitStorageThroughput = "perUnitStorageThroughput"
volumeParamsStorageType = "storageType"
volumeParamsDriveCacheType = "driveCacheType"
volumeParamsAutomaticBackupRetentionDays = "automaticBackupRetentionDays"
volumeParamsDailyAutomaticBackupStartTime = "dailyAutomaticBackupStartTime"
volumeParamsCopyTagsToBackups = "copyTagsToBackups"
@@ -118,6 +120,14 @@ func (d *Driver) CreateVolume(ctx context.Context, req *csi.CreateVolumeRequest)
fsOptions.CopyTagsToBackups = b
}

if val, ok := volumeParams[volumeParamsStorageType]; ok {
fsOptions.StorageType = val
}

if val, ok := volumeParams[volumeParamsDriveCacheType]; ok {
fsOptions.DriveCacheType = val
}

if val, ok := volumeParams[volumeParamsPerUnitStorageThroughput]; ok {
n, err := strconv.ParseInt(val, 10, 64)
if err != nil {
@@ -130,7 +140,7 @@ func (d *Driver) CreateVolume(ctx context.Context, req *csi.CreateVolumeRequest)
if capRange == nil {
fsOptions.CapacityGiB = cloud.DefaultVolumeSize
} else {
fsOptions.CapacityGiB = util.RoundUpVolumeSize(capRange.GetRequiredBytes(), fsOptions.DeploymentType)
fsOptions.CapacityGiB = util.RoundUpVolumeSize(capRange.GetRequiredBytes(), fsOptions.DeploymentType, fsOptions.StorageType, fsOptions.PerUnitStorageThroughput)
}

fs, err := d.cloud.CreateFileSystem(ctx, volName, fsOptions)
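Putting the new parameters together, a hypothetical StorageClass `parameters` map handled by CreateVolume might look like the snippet below. The values are illustrative only, the keys mirror the volumeParams* constants above, and per the README driveCacheType is required whenever storageType is HDD:

```
// Hypothetical StorageClass parameters — not part of this commit.
volumeParams := map[string]string{
	"subnetId":                 "subnet-056da83524edbe641",
	"securityGroupIds":         "sg-086f61ea73388fb6b",
	"deploymentType":           "PERSISTENT_1",
	"storageType":              "HDD",
	"driveCacheType":           "READ",
	"perUnitStorageThroughput": "12", // passed as a string, parsed via strconv.ParseInt
}
```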
2 changes: 2 additions & 0 deletions pkg/driver/controller_test.go
@@ -142,6 +142,7 @@ func TestCreateVolume(t *testing.T) {
volumeParamsSubnetId: subnetId,
volumeParamsSecurityGroupIds: securityGroupIds,
volumeParamsDeploymentType: fsx.LustreDeploymentTypeScratch2,
volumeParamsStorageType: fsx.StorageTypeSsd,
},
}

@@ -215,6 +216,7 @@ func TestCreateVolume(t *testing.T) {
volumeParamsDeploymentType: fsx.LustreDeploymentTypePersistent1,
volumeParamsKmsKeyId: "arn:aws:kms:us-east-1:215474938041:key/48313a27-7d88-4b51-98a4-fdf5bc80dbbe",
volumeParamsPerUnitStorageThroughput: "200",
volumeParamsStorageType: fsx.StorageTypeSsd,
volumeParamsAutomaticBackupRetentionDays: "1",
volumeParamsDailyAutomaticBackupStartTime: "00:00",
volumeParamsCopyTagsToBackups: "true",
37 changes: 24 additions & 13 deletions pkg/util/util.go
@@ -30,23 +30,34 @@ const (
GiB = 1024 * 1024 * 1024
)

// RoundUpVolumeSize rounds up the volume size in bytes upto
// 1200 GiB, 2400 GiB, or multiplications of 3600 GiB in the
// unit of GiB for DeploymentType SCRATCH_1, or multiplications
// of 2400 GiB for other DeploymentType
func RoundUpVolumeSize(volumeSizeBytes int64, deploymentType string) int64 {
if deploymentType == fsx.LustreDeploymentTypeScratch1 ||
deploymentType == "" {
if volumeSizeBytes < 3600*GiB {
return roundUpSize(volumeSizeBytes, 1200*GiB) * 1200
// RoundUpVolumeSize rounds the volume size in bytes up to
// 1200 GiB, 2400 GiB, or a multiple of 3600 GiB for DeploymentType SCRATCH_1;
// to 1200 GiB or a multiple of 2400 GiB for DeploymentType SCRATCH_2, or for
// DeploymentType PERSISTENT_1 with StorageType SSD; to a multiple of 6000 GiB for
// DeploymentType PERSISTENT_1, StorageType HDD, and PerUnitStorageThroughput 12;
// and to a multiple of 1800 GiB for DeploymentType PERSISTENT_1, StorageType HDD,
// and PerUnitStorageThroughput 40.
func RoundUpVolumeSize(volumeSizeBytes int64, deploymentType string, storageType string, perUnitStorageThroughput int64) int64 {
if storageType == fsx.StorageTypeHdd {
if perUnitStorageThroughput == 12 {
return roundUpSize(volumeSizeBytes, 6000*GiB) * 6000
} else {
return roundUpSize(volumeSizeBytes, 3600*GiB) * 3600
return roundUpSize(volumeSizeBytes, 1800*GiB) * 1800
}
} else {
if volumeSizeBytes < 2400*GiB {
return roundUpSize(volumeSizeBytes, 1200*GiB) * 1200
if deploymentType == fsx.LustreDeploymentTypeScratch1 ||
deploymentType == "" {
if volumeSizeBytes < 3600*GiB {
return roundUpSize(volumeSizeBytes, 1200*GiB) * 1200
} else {
return roundUpSize(volumeSizeBytes, 3600*GiB) * 3600
}
} else {
return roundUpSize(volumeSizeBytes, 2400*GiB) * 2400
if volumeSizeBytes < 2400*GiB {
return roundUpSize(volumeSizeBytes, 1200*GiB) * 1200
} else {
return roundUpSize(volumeSizeBytes, 2400*GiB) * 2400
}
}
}
}
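A minimal sketch of the new rounding rules in action, assuming the module import path github.com/kubernetes-sigs/aws-fsx-csi-driver (adjust if the module name differs) and using plain strings in place of the fsx.* constants:

```
package main

import (
	"fmt"

	"github.com/kubernetes-sigs/aws-fsx-csi-driver/pkg/util"
)

func main() {
	const GiB = int64(1024 * 1024 * 1024)

	// HDD with PerUnitStorageThroughput 12 rounds up to a multiple of 6000 GiB.
	fmt.Println(util.RoundUpVolumeSize(5000*GiB, "PERSISTENT_1", "HDD", 12)) // 6000

	// HDD with PerUnitStorageThroughput 40 rounds up to a multiple of 1800 GiB.
	fmt.Println(util.RoundUpVolumeSize(5000*GiB, "PERSISTENT_1", "HDD", 40)) // 5400

	// SSD (or an empty storageType) keeps the existing SCRATCH_1/SCRATCH_2 rules.
	fmt.Println(util.RoundUpVolumeSize(1000*GiB, "SCRATCH_1", "SSD", 0)) // 1200
}
```

The last case shows that the pre-existing SSD behaviour is unchanged; only the HDD paths branch on perUnitStorageThroughput.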
