forked from openshift/console
ceph-storage-plugin.json
738 lines (738 loc) · 64.1 KB
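This file is an i18next-style English locale catalog: each key is the source string itself, `{{name, format}}` placeholders are interpolated at lookup time, and a sibling key with a `_plural` suffix supplies the plural form. The sketch below illustrates that convention with a tiny hand-rolled resolver — it is an illustration of how such a catalog is consumed, not the real i18next implementation.

```javascript
// Minimal sketch of i18next-style key resolution (JSON format v3 plural
// suffixes). The catalog entries are copied from this file; the t() helper
// is a hypothetical stand-in for the real library.
const catalog = {
  "{{nodes, number}} node": "{{nodes, number}} node",
  "{{nodes, number}} node_plural": "{{nodes, number}} nodes",
};

function t(key, params) {
  // i18next selects the "_plural"-suffixed key when params.count !== 1.
  const resolved = params.count === 1 ? key : `${key}_plural`;
  const template = catalog[resolved] ?? catalog[key] ?? key;
  // Substitute {{name}} / {{name, format}} placeholders with parameter values.
  return template.replace(/\{\{(\w+)(?:,\s*\w+)?\}\}/g, (_, name) => String(params[name]));
}

console.log(t("{{nodes, number}} node", { nodes: 1, count: 1 })); // "1 node"
console.log(t("{{nodes, number}} node", { nodes: 3, count: 3 })); // "3 nodes"
```

The numbered tags seen in some values (e.g. `<1><0>Install</0></1>`) are react-i18next `Trans` placeholders that map back to child React elements, which is why they must be preserved verbatim in translations.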
{
"Troubleshoot": "Troubleshoot",
"Add Capacity": "Add Capacity",
"Object Buckets": "Object Buckets",
"Object Bucket Claims": "Object Bucket Claims",
"Overview": "Overview",
"Edit BlockPool": "Edit BlockPool",
"Edit Bucket Class Resources": "Edit Bucket Class Resources",
"Use existing claim": "Use existing claim",
"Select claim": "Select claim",
"Create new claim": "Create new claim",
"Create": "Create",
"Cancel": "Cancel",
"StorageSystems": "StorageSystems",
"StorageSystem details": "StorageSystem details",
"Enabled": "Enabled",
"Disabled": "Disabled",
"Last synced": "Last synced",
"BlockPool List": "BlockPool List",
"Delete BlockPool": "Delete BlockPool",
"{{replica}} Replication": "{{replica}} Replication",
"Pool name": "Pool name",
"my-block-pool": "my-block-pool",
"pool-name-help": "pool-name-help",
"Data protection policy": "Data protection policy",
"Select replication": "Select replication",
"Volume type": "Volume type",
"Select volume type": "Select volume type",
"Compression": "Compression",
"Enable compression": "Enable compression",
"Enabling compression may result in little or no space savings for encrypted or random data. Also, enabling compression may have an impact on I/O performance.": "Enabling compression may result in little or no space savings for encrypted or random data. Also, enabling compression may have an impact on I/O performance.",
"OpenShift Data Foundation's StorageCluster is not available. Try again after the StorageCluster is ready to use.": "OpenShift Data Foundation's StorageCluster is not available. Try again after the StorageCluster is ready to use.",
"Create BlockPool": "Create BlockPool",
"Close": "Close",
"Pool creation is not supported for OpenShift Data Foundation's external RHCS StorageSystem.": "Pool creation is not supported for OpenShift Data Foundation's external RHCS StorageSystem.",
"A BlockPool is a logical entity providing elastic capacity to applications and workloads. Pools provide a means of supporting policies for access data resilience and storage efficiency.": "A BlockPool is a logical entity providing elastic capacity to applications and workloads. Pools provide a means of supporting policies for data access, data resilience, and storage efficiency.",
"BlockPool Creation Form": "BlockPool Creation Form",
"Name": "Name",
"Bucket Name": "Bucket Name",
"Type": "Type",
"Region": "Region",
"BackingStore Table": "BackingStore Table",
"Each BackingStore can be used for one tier at a time. Selecting a BackingStore in one tier will remove the resource from the second tier option and vice versa.": "Each BackingStore can be used for one tier at a time. Selecting a BackingStore in one tier will remove the resource from the second tier option and vice versa.",
"Bucket created for OpenShift Data Foundation's Service": "Bucket created for OpenShift Data Foundation's Service",
"Tier 1 - BackingStores": "Tier 1 - BackingStores",
"Create BackingStore ": "Create BackingStore ",
"Tier-1-Table": "Tier-1-Table",
"{{bs, number}} BackingStore": "{{bs, number}} BackingStore",
"{{bs, number}} BackingStore_plural": "{{bs, number}} BackingStores",
"selected": "selected",
"Tier 2 - BackingStores": "Tier 2 - BackingStores",
"Tier-2-Table": "Tier-2-Table",
"{{bs, number}} BackingStore ": "{{bs, number}} BackingStore ",
"{{bs, number}} BackingStore _plural": "{{bs, number}} BackingStores",
"General": "General",
"Placement Policy": "Placement Policy",
"Resources": "Resources",
"Review": "Review",
"Create BucketClass": "Create BucketClass",
"Create new BucketClass": "Create new BucketClass",
"BucketClass is a CRD representing a class for buckets that defines tiering policies and data placements for an OBC.": "BucketClass is a CRD representing a class for buckets that defines tiering policies and data placements for an OBC.",
"Next": "Next",
"Back": "Back",
"Edit BucketClass Resource": "Edit BucketClass Resource",
"{{storeType}} represents a storage target to be used as the underlying storage for the data in Multicloud Object Gateway buckets.": "{{storeType}} represents a storage target to be used as the underlying storage for the data in Multicloud Object Gateway buckets.",
"Cancel ": "Cancel ",
"Save": "Save",
"What is a BackingStore?": "What is a BackingStore?",
"BackingStore represents a storage target to be used as the underlying storage for the data in Multicloud Object Gateway buckets.": "BackingStore represents a storage target to be used as the underlying storage for the data in Multicloud Object Gateway buckets.",
"Multiple types of BackingStores are supported: asws-s3 s3-compatible google-cloud-storage azure-blob obc PVC.": "Multiple types of BackingStores are supported: aws-s3, s3-compatible, google-cloud-storage, azure-blob, obc, PVC.",
"Learn More": "Learn More",
"What is a BucketClass?": "What is a BucketClass?",
"A set of policies which would apply to all buckets (OBCs) created with the specific bucket class. These policies include placement, namespace and caching": "A set of policies which apply to all buckets (OBCs) created with the specific bucket class. These policies include placement, namespace, and caching.",
"BucketClass type": "BucketClass type",
"3-63 chars": "3-63 chars",
"Starts and ends with lowercase number or letter": "Starts and ends with lowercase number or letter",
"Only lowercase letters, numbers, non-consecutive periods or hyphens": "Only lowercase letters, numbers, non-consecutive periods or hyphens",
"Avoid using the form of an IP address": "Avoid using the form of an IP address",
"Globally unique name": "Globally unique name",
"BucketClass name": "BucketClass name",
"A unique name for the bucket class within the project.": "A unique name for the bucket class within the project.",
"my-multi-cloud-mirror": "my-multi-cloud-mirror",
"BucketClass Name": "BucketClass Name",
"Description (Optional)": "Description (Optional)",
"Description of bucket class": "Description of bucket class",
"What is a Namespace Policy?": "What is a Namespace Policy?",
"Namespace policy can be set to one single read and write source, multi read sources or cached policy.": "Namespace policy can be set to a single read and write source, multiple read sources, or a cached policy.",
"Namespace Policy Type": "Namespace Policy Type",
"What is Caching?": "What is Caching?",
"Caching is a policy that creates local copies of the data. It saves the copies locally to improve performance for frequently accessed data. Each cached copy has a TTL and is verified against the hub. Each non-read operation (upload, overwrite, delete) is performed on the hub": "Caching is a policy that creates local copies of the data. It saves the copies locally to improve performance for frequently accessed data. Each cached copy has a TTL and is verified against the hub. Each non-read operation (upload, overwrite, delete) is performed on the hub.",
"Hub namespace store ": "Hub namespace store ",
"A single NamespaceStore that defines the read and write target of the namespace bucket.": "A single NamespaceStore that defines the read and write target of the namespace bucket.",
"NamespaceStore": "NamespaceStore",
"Cache data settings": "Cache data settings",
"The data will be temporarily copied on a backing store in order to later access it much more quickly.": "The data will be temporarily copied to a backing store in order to access it much more quickly later.",
"Backing store": "Backing store",
"a local backing store is recommended for better performance": "a local backing store is recommended for better performance",
"Time to live": "Time to live",
"Time to live is the time that an object is stored in a caching system before it is deleted or refreshed. Default: 1 hr, Min: 15 mins, Max: 24 hrs": "Time to live is the time that an object is stored in a caching system before it is deleted or refreshed. Default: 1 hr, Min: 15 mins, Max: 24 hrs",
"Read NamespaceStores": "Read NamespaceStores",
"Select a list of NamespaceStores that defines the read targets of the namespace bucket.": "Select a list of NamespaceStores that defines the read targets of the namespace bucket.",
"Create NamespaceStore": "Create NamespaceStore",
"{{nns, number}} namespace store ": "{{nns, number}} namespace store",
"{{nns, number}} namespace store _plural": "{{nns, number}} namespace stores",
" selected": " selected",
"Write NamespaceStore": "Write NamespaceStore",
"Select a single NamespaceStore that defines the write targets of the namespace bucket.": "Select a single NamespaceStore that defines the write targets of the namespace bucket.",
"Read and Write NamespaceStore ": "Read and Write NamespaceStore ",
"Select one NamespaceStore which defines the read and write targets of the namespace bucket.": "Select one NamespaceStore which defines the read and write targets of the namespace bucket.",
"What is a Placement Policy?": "What is a Placement Policy?",
"Data placement capabilities are built as a multi-layer structure here are the layers bottom-up:": "Data placement capabilities are built as a multi-layer structure. Here are the layers, bottom-up:",
"Spread Tier - list of BackingStores aggregates the storage of multiple stores.": "Spread Tier - list of BackingStores aggregates the storage of multiple stores.",
"Mirroring Tier - list of spread-layers async-mirroring to all mirrors with locality optimization (will allocate on the closest region to the source endpoint). Mirroring requires at least two BackingStores.": "Mirroring Tier - list of spread-layers async-mirroring to all mirrors with locality optimization (will allocate on the closest region to the source endpoint). Mirroring requires at least two BackingStores.",
"The number of replicas can be configured via the NooBaa management console.": "The number of replicas can be configured via the NooBaa management console.",
"Tier 1 - Policy Type": "Tier 1 - Policy Type",
"Spread": "Spread",
"Spreading the data across the chosen resources. By default a replica of one copy is used and does not include failure tolerance in case of resource failure.": "Spreading the data across the chosen resources. By default a replica of one copy is used and does not include failure tolerance in case of resource failure.",
"Mirror": "Mirror",
"Full duplication of the data in each chosen resource. By default a replica of one copy per location is used. Includes failure tolerance in case of resource failure.": "Full duplication of the data in each chosen resource. By default a replica of one copy per location is used. Includes failure tolerance in case of resource failure.",
"Add Tier": "Add Tier",
"Tier 2 - Policy type": "Tier 2 - Policy type",
"Remove Tier": "Remove Tier",
"Spreading the data across the chosen resources does not include failure tolerance in case of resource failure.": "Spreading the data across the chosen resources does not include failure tolerance in case of resource failure.",
"Full duplication of the data in each chosen resource includes failure tolerance in cause of resource failure.": "Full duplication of the data in each chosen resource includes failure tolerance in case of resource failure.",
"Namespace Policy: ": "Namespace Policy: ",
"Read and write NamespaceStore : ": "Read and write NamespaceStore : ",
"Hub namespace store: ": "Hub namespace store: ",
"Cache backing store: ": "Cache backing store: ",
"Time to live: ": "Time to live: ",
"Resources ": "Resources ",
"Selected read namespace stores: ": "Selected read namespace stores: ",
"Selected write namespace store: ": "Selected write namespace store: ",
"Placement policy details ": "Placement policy details ",
"Tier 1: ": "Tier 1: ",
"Selected BackingStores": "Selected BackingStores",
"Tier 2: ": "Tier 2: ",
"Review BucketClass": "Review BucketClass",
"BucketClass type: ": "BucketClass type: ",
"BucketClass name: ": "BucketClass name: ",
"Description: ": "Description: ",
"Provider {{provider}}": "Provider {{provider}}",
"Create new BackingStore ": "Create new BackingStore ",
"An error has occured while fetching backing stores": "An error has occurred while fetching backing stores",
"Select a backing store": "Select a backing store",
"Storage targets that are used to store chunks of data on Multicloud Object Gateway buckets.": "Storage targets that are used to store chunks of data on Multicloud Object Gateway buckets.",
"A BackingStore represents a storage target to be used as the underlying storage layer in Multicloud Object Gateway buckets.": "A BackingStore represents a storage target to be used as the underlying storage layer in Multicloud Object Gateway buckets.",
"Multiple types of BackingStores are supported: AWS S3 S3 Compatible Google Cloud Storage Azure Blob PVC.": "Multiple types of BackingStores are supported: AWS S3, S3 Compatible, Google Cloud Storage, Azure Blob, PVC.",
"BackingStore Name": "BackingStore Name",
"A unique name for the BackingStore within the project": "A unique name for the BackingStore within the project",
"Name can contain a max of 43 characters": "Name can contain a max of 43 characters",
"Provider": "Provider",
"Create BackingStore": "Create BackingStore",
"This is an Advanced subscription feature. It requires Advanced Edition subscription. Please contact the account team for more information.": "This is an Advanced subscription feature. It requires an Advanced Edition subscription. Please contact the account team for more information.",
"Advanced Subscription": "Advanced Subscription",
"Advanced": "Advanced",
"Deployment type": "Deployment type",
"A default StorageClass is needed for deployment.": "A default StorageClass is needed for deployment.",
"Storage platform": "Storage platform",
"Select a storage platform you wish to connect": "Select a storage platform you wish to connect",
"Select external system from list": "Select external system from list",
"Use an existing StorageClass": "Use an existing StorageClass",
"OpenShift Data Foundation will use an existing StorageClass available on your hosting platform.": "OpenShift Data Foundation will use an existing StorageClass available on your hosting platform.",
"Create a new StorageClass using local storage devices": "Create a new StorageClass using local storage devices",
"OpenShift Data Foundation will use a StorageClass provided by the Local Storage Operator (LSO) on top of your attached drives. This option is available on any platform with devices attached to nodes.": "OpenShift Data Foundation will use a StorageClass provided by the Local Storage Operator (LSO) on top of your attached drives. This option is available on any platform with devices attached to nodes.",
"Connect an external storage platform": "Connect an external storage platform",
"OpenShift Data Foundation will create a dedicated StorageClass.": "OpenShift Data Foundation will create a dedicated StorageClass.",
"Select capacity": "Select capacity",
"Requested capacity": "Requested capacity",
"Select nodes": "Select nodes",
"Select at least 3 nodes preferably in 3 different zones. It is recommended to start with at least 14 CPUs and 34 GiB per node.": "Select at least 3 nodes, preferably in 3 different zones. It is recommended to start with at least 14 CPUs and 34 GiB per node.",
"PersistentVolumes are being provisioned on the selected nodes.": "PersistentVolumes are being provisioned on the selected nodes.",
"Error while loading PersistentVolumes.": "Error while loading PersistentVolumes.",
"Selected capacity": "Selected capacity",
"Available raw capacity": "Available raw capacity",
"The available capacity is based on all attached disks associated with the selected StorageClass <2>{{storageClassName}}</2>": "The available capacity is based on all attached disks associated with the selected StorageClass <2>{{storageClassName}}</2>",
"Selected nodes": "Selected nodes",
"Role": "Role",
"CPU": "CPU",
"Memory": "Memory",
"Zone": "Zone",
"Selected nodes table": "Selected nodes table",
"To support high availability when two data centers can be used, enable arbiter to get a valid quorum between the two data centers.": "To support high availability when two data centers can be used, enable arbiter to get a valid quorum between the two data centers.",
"Arbiter minimum requirements": "Arbiter minimum requirements",
"Stretch Cluster": "Stretch Cluster",
"Enable arbiter": "Enable arbiter",
"Arbiter zone": "Arbiter zone",
"An arbiter node will be automatically selected from this zone": "An arbiter node will be automatically selected from this zone",
"Select an arbiter zone": "Select an arbiter zone",
"Arbiter zone selection": "Arbiter zone selection",
"Connection details": "Connection details",
"Disks on all nodes": "Disks on all nodes",
"{{nodes, number}} node": "{{nodes, number}} node",
"{{nodes, number}} node_plural": "{{nodes, number}} nodes",
"Please enter a positive Integer": "Please enter a positive integer",
"LocalVolumeSet name": "LocalVolumeSet name",
"A LocalVolumeSet will be created to allow you to filter a set of disks, group them and create a dedicated StorageClass to consume storage from them.": "A LocalVolumeSet will be created to allow you to filter a set of disks, group them and create a dedicated StorageClass to consume storage from them.",
"StorageClass name": "StorageClass name",
"Filter disks by": "Filter disks by",
"Uses the available disks that match the selected filters on all nodes.": "Uses the available disks that match the selected filters on all nodes.",
"Disks on selected nodes": "Disks on selected nodes",
"Uses the available disks that match the selected filters only on selected nodes.": "Uses the available disks that match the selected filters only on selected nodes.",
"Disk type": "Disk type",
"Volume mode": "Volume mode",
"Device type": "Device type",
"Select disk types": "Select disk types",
"Disk size": "Disk size",
"Minimum": "Minimum",
"Please enter a value less than or equal to max disk size": "Please enter a value less than or equal to max disk size",
"Maximum": "Maximum",
"Please enter a value greater than or equal to min disk size": "Please enter a value greater than or equal to min disk size",
"Units": "Units",
"Maximum disks limit": "Maximum disks limit",
"Disks limit will set the maximum number of PVs to create on a node. If the field is empty we will create PVs for all available disks on the matching nodes.": "Disks limit will set the maximum number of PVs to create on a node. If the field is empty we will create PVs for all available disks on the matching nodes.",
"All": "All",
"Local Storage Operator not installed": "Local Storage Operator not installed",
"Before we can create a StorageSystem, the Local Storage Operator needs to be installed. When installation is finished come back to OpenShift Data Foundation to create a StorageSystem.<1><0>Install</0></1>": "Before we can create a StorageSystem, the Local Storage Operator needs to be installed. When installation is finished, come back to OpenShift Data Foundation to create a StorageSystem.<1><0>Install</0></1>",
"Checking Local Storage Operator installation": "Checking Local Storage Operator installation",
"Discovering disks on all hosts. This may take a few minutes.": "Discovering disks on all hosts. This may take a few minutes.",
"Minimum Node Requirement": "Minimum Node Requirement",
"A minimum of 3 nodes are required for the initial deployment. Only {{nodes}} node match to the selected filters. Please adjust the filters to include more nodes.": "A minimum of 3 nodes is required for the initial deployment. Only {{nodes}} nodes match the selected filters. Please adjust the filters to include more nodes.",
"After the LocalVolumeSet is created you won't be able to edit it.": "After the LocalVolumeSet is created you won't be able to edit it.",
"Note:": "Note:",
"Create LocalVolumeSet": "Create LocalVolumeSet",
"Yes": "Yes",
"Are you sure you want to continue?": "Are you sure you want to continue?",
"Node": "Node",
"Model": "Model",
"Capacity": "Capacity",
"Selected Disks": "Selected Disks",
"Disk List": "Disk List",
"{{nodes, number}} Node": "{{nodes, number}} Node",
"{{nodes, number}} Node_plural": "{{nodes, number}} Nodes",
"{{disks, number}} Disk": "{{disks, number}} Disk",
"{{disks, number}} Disk_plural": "{{disks, number}} Disks",
"Selected versus Available Capacity": "Selected versus Available Capacity",
"Out of {{capacity}}": "Out of {{capacity}}",
"{{displayName}} connection details": "{{displayName}} connection details",
"Not connected": "Not connected",
"Backing storage": "Backing storage",
"StorageClass: {{name}}": "StorageClass: {{name}}",
"Deployment type: {{deployment}}": "Deployment type: {{deployment}}",
"External storage platform: {{storagePlatform}}": "External storage platform: {{storagePlatform}}",
"Capacity and nodes": "Capacity and nodes",
"Cluster capacity: {{capacity}}": "Cluster capacity: {{capacity}}",
"Selected nodes: {{nodeCount, number}} node": "Selected nodes: {{nodeCount, number}} node",
"Selected nodes: {{nodeCount, number}} node_plural": "Selected nodes: {{nodeCount, number}} nodes",
"CPU and memory: {{cpu, number}} CPU and {{memory}} memory": "CPU and memory: {{cpu, number}} CPU and {{memory}} memory",
"Zone: {{zoneCount, number}} zone": "Zone: {{zoneCount, number}} zone",
"Zone: {{zoneCount, number}} zone_plural": "Zone: {{zoneCount, number}} zones",
"Arbiter zone: {{zone}}": "Arbiter zone: {{zone}}",
"Security": "Security",
"Encryption: Enabled": "Encryption: Enabled",
"External key management service: {{kmsStatus}}": "External key management service: {{kmsStatus}}",
"Security and network": "Security and network",
"Encryption: {{encryptionStatus}}": "Encryption: {{encryptionStatus}}",
"Network: {{networkType}}": "Network: {{networkType}}",
"Encryption level": "Encryption level",
"The StorageCluster encryption level can be set to include all components under the cluster (including StorageClass and PVs) or to include only StorageClass encryption. PV encryption can use an auth token that will be used with the KMS configuration to allow multi-tenancy.": "The StorageCluster encryption level can be set to include all components under the cluster (including StorageClass and PVs) or to include only StorageClass encryption. PV encryption can use an auth token that will be used with the KMS configuration to allow multi-tenancy.",
"Cluster-wide encryption": "Cluster-wide encryption",
"Encryption for the entire cluster (block and file)": "Encryption for the entire cluster (block and file)",
"StorageClass encryption": "StorageClass encryption",
"An encryption key will be generated for each persistent volume (block) created using an encryption enabled StorageClass.": "An encryption key will be generated for each persistent volume (block) created using an encryption enabled StorageClass.",
"Connection settings": "Connection settings",
"Connect to an external key management service": "Connect to an external key management service",
"Data encryption for block and file storage. MultiCloud Object Gateway is always encrypted.": "Data encryption for block and file storage. MultiCloud Object Gateway is always encrypted.",
"MultiCloud Object Gateway is always encrypted.": "MultiCloud Object Gateway is always encrypted.",
"Enable data encryption for block and file storage": "Enable data encryption for block and file storage",
"Enable encryption": "Enable encryption",
"Encryption": "Encryption",
"An error has occurred: {{error}}": "An error has occurred: {{error}}",
"IP address": "IP address",
"Rest API IP address of IBM FlashSystem.": "REST API IP address of IBM FlashSystem.",
"The endpoint is not a valid IP address": "The endpoint is not a valid IP address",
"Username": "Username",
"Password": "Password",
"Hide password": "Hide password",
"Reveal password": "Reveal password",
"The uploaded file is not a valid JSON file": "The uploaded file is not a valid JSON file",
"External storage system metadata": "External storage system metadata",
"Download <1>{{SCRIPT_NAME}}</1> script and run on the RHCS cluster, then upload the results (JSON) in the External storage system metadata field.": "Download <1>{{SCRIPT_NAME}}</1> script and run on the RHCS cluster, then upload the results (JSON) in the External storage system metadata field.",
"Download script": "Download script",
"Browse": "Browse",
"Clear": "Clear",
"Upload helper script": "Upload helper script",
"An error has occurred": "An error has occurred",
"Create StorageSystem": "Create StorageSystem",
"Create a StorageSystem to represent your OpenShift Data Foundation system and all its required storage and computing resources.": "Create a StorageSystem to represent your OpenShift Data Foundation system and all its required storage and computing resources.",
"{{nodeCount, number}} node": "{{nodeCount, number}} node",
"{{nodeCount, number}} node_plural": "{{nodeCount, number}} nodes",
"selected ({{cpu}} CPU and {{memory}} on ": "selected ({{cpu}} CPU and {{memory}} on ",
"{{zoneCount, number}} zone": "{{zoneCount, number}} zone",
"{{zoneCount, number}} zone_plural": "{{zoneCount, number}} zones",
"Search by node name...": "Search by node name...",
"Search by node label...": "Search by node label...",
"Not found": "Not found",
"Compression eligibility": "Compression eligibility",
"Compression eligibility indicates the percentage of incoming data that is compressible": "Compression eligibility indicates the percentage of incoming data that is compressible",
"Compression savings": "Compression savings",
"Compression savings indicates the total savings gained from compression for this pool, including replicas": "Compression savings indicates the total savings gained from compression for this pool, including replicas",
"Compression ratio": "Compression ratio",
"Compression ratio indicates the achieved compression on eligible data for this pool": "Compression ratio indicates the achieved compression on eligible data for this pool",
"Compression status": "Compression status",
"Storage efficiency": "Storage efficiency",
"Details": "Details",
"Replicas": "Replicas",
"Inventory": "Inventory",
"Not available": "Not available",
"Image states info": "Image states info",
"What does each state mean?": "What does each state mean?",
" <1>Starting replay:</1> Initiating image (PV) replication process. ": " <1>Starting replay:</1> Initiating image (PV) replication process. ",
" <1>Replaying:</1> Image (PV) replication is ongoing or idle between clusters. ": " <1>Replaying:</1> Image (PV) replication is ongoing or idle between clusters. ",
" <1>Stopping replay:</1> Image (PV) replication process is shutting down. ": " <1>Stopping replay:</1> Image (PV) replication process is shutting down. ",
" <1>Stopped:</1> Image (PV) replication process has shut down. ": " <1>Stopped:</1> Image (PV) replication process has shut down. ",
" <1>Error:</1> Image (PV) replication process stopped due to an error. ": " <1>Error:</1> Image (PV) replication process stopped due to an error. ",
" <1>Unknown:</1> Unable to determine image (PV) state due to an error. Check your network connection and remote cluster mirroring daemon. ": " <1>Unknown:</1> Unable to determine image (PV) state due to an error. Check your network connection and remote cluster mirroring daemon. ",
"image states info": "image states info",
"Image States": "Image States",
"Mirroring": "Mirroring",
"Mirroring status": "Mirroring status",
"Overall image health": "Overall image health",
"Show image states": "Show image states",
"Last checked": "Last checked",
"Raw Capacity shows the total physical capacity from all storage media within the storage subsystem": "Raw Capacity shows the total physical capacity from all storage media within the storage subsystem",
"Start replay": "Start replay",
"Stop reply": "Stop replay",
"Replaying": "Replaying",
"Stopped": "Stopped",
"Error": "Error",
"Syncing": "Syncing",
"Unknown": "Unknown",
"Status": "Status",
"Performance": "Performance",
"IOPS": "IOPS",
"Throughput": "Throughput",
"Not enough usage data": "Not enough usage data",
"used": "used",
"available": "available",
"Other": "Other",
"All other capacity usage that are not a part of the top 5 consumers.": "All other capacity usage that is not a part of the top 5 consumers.",
"Available": "Available",
"Breakdown Chart": "Breakdown Chart",
"Warning": "Warning",
"Raw capacity": "Raw capacity",
"Used": "Used",
"Available versus Used Capacity": "Available versus Used Capacity",
"Used of {{capacity}}": "Used of {{capacity}}",
"Not Available": "Not Available",
"Rebuilding data resiliency": "Rebuilding data resiliency",
"{{formattedProgress, number}}%": "{{formattedProgress, number}}%",
"Activity": "Activity",
"Estimating {{formattedEta}} to completion": "Estimating {{formattedEta}} to completion",
"Object": "Object",
"Object_plural": "Objects",
"Buckets": "Buckets",
"Buckets card represents the number of S3 buckets managed on Multicloud Object Gateway and the number of ObjectBucketClaims and the ObjectBuckets managed on both Multicloud Object Gateway and RGW (if deployed).": "Buckets card represents the number of S3 buckets managed on Multicloud Object Gateway and the number of ObjectBucketClaims and the ObjectBuckets managed on both Multicloud Object Gateway and RGW (if deployed).",
"NooBaa Bucket": "NooBaa Bucket",
"Break by": "Break by",
"Total": "Total",
"Projects": "Projects",
"BucketClasses": "BucketClasses",
"Service type": "Service type",
"Cluster-wide": "Cluster-wide",
"Any NON Object bucket claims that were created via an S3 client or via the NooBaa UI system.": "Any NON Object bucket claims that were created via an S3 client or via the NooBaa UI system.",
"Capacity breakdown": "Capacity breakdown",
"This card shows used capacity for different resources. The available capacity is based on cloud services therefore it cannot be shown.": "This card shows used capacity for different resources. The available capacity is based on cloud services therefore it cannot be shown.",
"Type: {{serviceType}}": "Type: {{serviceType}}",
"Service Type Dropdown": "Service Type Dropdown",
"Service Type Dropdown Toggle": "Service Type Dropdown Toggle",
"By: {{serviceType}}": "By: {{serviceType}}",
"Break By Dropdown": "Break By Dropdown",
"Providers": "Providers",
"Accounts": "Accounts",
"Metric": "Metric",
"I/O Operations": "I/O Operations",
"Logial Used Capacity": "Logial Used Capacity",
"Physical vs. Logical used capacity": "Physical vs. Logical used capacity",
"Egress": "Egress",
"Latency": "Latency",
"Bandwidth": "Bandwidth",
"Service Type": "Service Type",
"Type: {{selectedService}}": "Type: {{selectedService}}",
"{{selectedMetric}} by {{selectedBreakdown}}": "{{selectedMetric}} by {{selectedBreakdown}}",
"thousands": "thousands",
"millions": "millions",
"billions": "billions",
"Total Reads {{totalRead}}": "Total Reads {{totalRead}}",
"Total Writes {{totalWrite}}": "Total Writes {{totalWrite}}",
"Total Logical Used Capacity {{logicalCapacity}}": "Total Logical Used Capacity {{logicalCapacity}}",
"Total Physical Used Capacity {{physicalcapacity}}": "Total Physical Used Capacity {{physicalcapacity}}",
"Shows an overview of the data consumption per provider or account collected from the day of the entity creation.": "Shows an overview of the data consumption per provider or account collected from the day of the entity creation.",
"(in {{suffixLabel}})": "(in {{suffixLabel}})",
"Data Consumption Graph": "Data Consumption Graph",
"GET {{GETLatestValue}}": "GET {{GETLatestValue}}",
"PUT {{PUTLatestValue}}": "PUT {{PUTLatestValue}}",
"OpenShift Data Foundation": "OpenShift Data Foundation",
"OpenShift Container Storage": "OpenShift Container Storage",
"Service Name": "Service Name",
"System Name": "System Name",
"Multicloud Object Gateway": "Multicloud Object Gateway",
"RADOS Object Gateway": "RADOS Object Gateway",
"Version": "Version",
"Resource Providers": "Resource Providers",
"A list of all Multicloud Object Gateway resources that are currently in use. Those resources are used to store data according to the buckets' policies and can be a cloud-based resource or a bare metal resource.": "A list of all Multicloud Object Gateway resources that are currently in use. Those resources are used to store data according to the buckets' policies and can be a cloud-based resource or a bare metal resource.",
"Object Service": "Object Service",
"Data Resiliency": "Data Resiliency",
"Object Service Status": "Object Service Status",
"The object service includes 2 services.": "The object service includes 2 services.",
"The data resiliency includes 2 services": "The data resiliency includes 2 services",
"Services": "Services",
"Object Gateway (RGW)": "Object Gateway (RGW)",
"All resources are unhealthy": "All resources are unhealthy",
"Object Bucket has an issue": "Object Bucket has an issue",
"Many buckets have issues": "Many buckets have issues",
"Some buckets have issues": "Some buckets have issues",
"{{capacityRatio, number}}:1": "{{capacityRatio, number}}:1",
"OpenShift Data Foundation can be configured to use compression. The efficiency rate reflects the actual compression ratio when using such a configuration.": "OpenShift Data Foundation can be configured to use compression. The efficiency rate reflects the actual compression ratio when using such a configuration.",
"Savings": "Savings",
"Savings shows the uncompressed and non-deduped data that would have been stored without those techniques.": "Savings shows the uncompressed and non-deduped data that would have been stored without those techniques.",
"Storage Efficiency": "Storage Efficiency",
"OpenShift Container Storage Overview": "OpenShift Container Storage Overview",
"Block and File": "Block and File",
"BlockPools": "BlockPools",
"Storage Classes": "Storage Classes",
"Pods": "Pods",
"{{metricType}}": "{{metricType}}",
"Break by dropdown": "Break by dropdown",
"Cluster Name": "Cluster Name",
"Mode": "Mode",
"Storage Cluster": "Storage Cluster",
"Utilization": "Utilization",
"Used Capacity": "Used Capacity",
"Expanding StorageCluster": "Expanding StorageCluster",
"Upgrading OpenShift Data Foundation's Operator": "Upgrading OpenShift Data Foundation's Operator",
"Used Capacity Breakdown": "Used Capacity Breakdown",
"This card shows the used capacity for different Kubernetes resources. The figures shown represent the Usable storage, meaning that data replication is not taken into consideration.": "This card shows the used capacity for different Kubernetes resources. The figures shown represent the Usable storage, meaning that data replication is not taken into consideration.",
"Service name": "Service name",
"Cluster name": "Cluster name",
"Internal": "Internal",
"Raw capacity is the absolute total disk space available to the array subsystem.": "Raw capacity is the absolute total disk space available to the array subsystem.",
"Active health checks": "Active health checks",
"Progressing": "Progressing",
"The Compression Ratio represents the compressible data effectiveness metric inclusive of all compression-enabled pools.": "The Compression Ratio represents the compressible data effectiveness metric inclusive of all compression-enabled pools.",
"The Savings metric represents the actual disk capacity saved inclusive of all compression-enabled pools and associated replicas.": "The Savings metric represents the actual disk capacity saved inclusive of all compression-enabled pools and associated replicas.",
"Performance metrics over time showing IOPS, Latency and more. Each metric is a link to a detailed view of this metric.": "Performance metrics over time showing IOPS, Latency and more. Each metric is a link to a detailed view of this metric.",
"Recovery": "Recovery",
"Disk State": "Disk State",
"OpenShift Data Foundation status": "OpenShift Data Foundation status",
"Filesystem": "Filesystem",
"Disks List": "Disks List",
"Start Disk Replacement": "Start Disk Replacement",
"<0>{{diskName}}</0> can be replaced with a disk of same type.": "<0>{{diskName}}</0> can be replaced with a disk of same type.",
"Troubleshoot disk <1>{{diskName}}</1>": "Troubleshoot disk <1>{{diskName}}</1>",
"here": "here",
"Online": "Online",
"Offline": "Offline",
"NotResponding": "NotResponding",
"PreparingToReplace": "PreparingToReplace",
"ReplacementFailed": "ReplacementFailed",
"ReplacementReady": "ReplacementReady",
"This is a required field": "This is a required field",
"Please enter a URL": "Please enter a URL",
"Please enter a valid port": "Please enter a valid port",
"Connect to a Key Management Service": "Connect to a Key Management Service",
"Key management service provider": "Key management service provider",
"kms-provider-name": "kms-provider-name",
"A unique name for the key management service within the project.": "A unique name for the key management service within the project.",
"Address": "Address",
"Port": "Port",
"Token": "Token",
"Hide token": "Hide token",
"Reveal token": "Reveal token",
"Advanced Settings": "Advanced Settings",
"Raw Capacity": "Raw Capacity",
"x {{ replica, number }} replicas =": "x {{ replica, number }} replicas =",
"No StorageClass selected": "No StorageClass selected",
"The Arbiter stretch cluster requires a minimum of 4 nodes (2 different zones, 2 nodes per zone). Please choose a different StorageClass or create a new LocalVolumeSet that matches the minimum node requirement.": "The Arbiter stretch cluster requires a minimum of 4 nodes (2 different zones, 2 nodes per zone). Please choose a different StorageClass or create a new LocalVolumeSet that matches the minimum node requirement.",
"The StorageCluster requires a minimum of 3 nodes. Please choose a different StorageClass or create a new LocalVolumeSet that matches the minimum node requirement.": "The StorageCluster requires a minimum of 3 nodes. Please choose a different StorageClass or create a new LocalVolumeSet that matches the minimum node requirement.",
"Adding capacity for <1>{{name}}</1>, may increase your expenses.": "Adding capacity for <1>{{name}}</1>, may increase your expenses.",
"StorageClass": "StorageClass",
"Currently Used:": "Currently Used:",
"Add": "Add",
"Vault enterprise namespaces are isolated environments that functionally exist as Vaults within a Vault. They have separate login paths and support creating and managing data isolated to their namespace.": "Vault enterprise namespaces are isolated environments that functionally exist as Vaults within a Vault. They have separate login paths and support creating and managing data isolated to their namespace.",
"Maximum file size exceeded. File limit is 4MB.": "Maximum file size exceeded. File limit is 4MB.",
"A PEM-encoded CA certificate file used to verify the Vault server's SSL certificate.": "A PEM-encoded CA certificate file used to verify the Vault server's SSL certificate.",
"A PEM-encoded client certificate. This certificate is used for TLS communication with the Vault server.": "A PEM-encoded client certificate. This certificate is used for TLS communication with the Vault server.",
"An unencrypted, PEM-encoded private key which corresponds to the matching client certificate provided with VAULT_CLIENT_CERT.": "An unencrypted, PEM-encoded private key which corresponds to the matching client certificate provided with VAULT_CLIENT_CERT.",
"The name to use as the SNI host when OpenShift Data Foundation connecting via TLS to the Vault server": "The name to use as the SNI host when OpenShift Data Foundation connecting via TLS to the Vault server",
"Key Management Service Advanced Settings": "Key Management Service Advanced Settings",
"Backend Path": "Backend Path",
"path/": "path/",
"TLS Server Name": "TLS Server Name",
"Vault Enterprise Namespace": "Vault Enterprise Namespace",
"The name must be accurate and must match the service namespace": "The name must be accurate and must match the service namespace",
"CA Certificate": "CA Certificate",
"Upload a .PEM file here...": "Upload a .PEM file here...",
"Client Certificate": "Client Certificate",
"Client Private Key": "Client Private Key",
"Attach OBC to a Deployment": "Attach OBC to a Deployment",
"Deployment Name": "Deployment Name",
"Attach": "Attach",
"<0><0>{{poolName}}</0> cannot be deleted. When a pool is bounded to PVC it cannot be deleted. Please detach all the resources from StorageClass(es):</0>": "<0><0>{{poolName}}</0> cannot be deleted. When a pool is bounded to PVC it cannot be deleted. Please detach all the resources from StorageClass(es):</0>",
"<0>Deleting <1>{{poolName}}</1> will remove all the saved data of this pool. Are you sure want to delete?</0>": "<0>Deleting <1>{{poolName}}</1> will remove all the saved data of this pool. Are you sure want to delete?</0>",
"BlockPool Delete Modal": "BlockPool Delete Modal",
"Try Again": "Try Again",
"Finish": "Finish",
"Go To Pvc List": "Go To Pvc List",
"BlockPool Update Form": "BlockPool Update Form",
"replacement disallowed: disk {{diskName}} is {{replacingDiskStatus}}": "replacement disallowed: disk {{diskName}} is {{replacingDiskStatus}}",
"replacement disallowed: disk {{diskName}} is {{replacementStatus}}": "replacement disallowed: disk {{diskName}} is {{replacementStatus}}",
"Disk Replacement": "Disk Replacement",
"This action will start preparing the disk for replacement.": "This action will start preparing the disk for replacement.",
"Data rebalancing is in progress": "Data rebalancing is in progress",
"See data resiliency status": "See data resiliency status",
"Are you sure you want to replace <1>{{diskName}}</1>?": "Are you sure you want to replace <1>{{diskName}}</1>?",
"Replace": "Replace",
"Create NamespaceStore ": "Create NamespaceStore ",
"Represents an underlying storage to be used as read or write target for the data in the namespace buckets.": "Represents an underlying storage to be used as read or write target for the data in the namespace buckets.",
"Provider {{provider}} | Region: {{region}}": "Provider {{provider}} | Region: {{region}}",
"Create new NamespaceStore ": "Create new NamespaceStore ",
"An error has occurred while fetching namespace stores": "An error has occurred while fetching namespace stores",
"Select a namespace store": "Select a namespace store",
"Namespace store name": "Namespace store name",
"A unique name for the namespace store within the project": "A unique name for the namespace store within the project",
"Namespace Store Table": "Namespace Store Table",
"Where can I find Google Cloud credentials?": "Where can I find Google Cloud credentials?",
"Service account keys are needed for Google Cloud Storage authentication. The keys can be found in the service accounts page in the GCP console.": "Service account keys are needed for Google Cloud Storage authentication. The keys can be found in the service accounts page in the GCP console.",
"Learn more": "Learn more",
"Upload a .json file with the service account keys provided by Google Cloud Storage.": "Upload a .json file with the service account keys provided by Google Cloud Storage.",
"Secret Key": "Secret Key",
"Upload JSON": "Upload JSON",
"Uploaded File Name": "Uploaded File Name",
"Upload File": "Upload File",
"Switch to Secret": "Switch to Secret",
"Select Secret": "Select Secret",
"Switch to upload JSON": "Switch to upload JSON",
"Cluster Metadata": "Cluster Metadata",
"Target Bucket": "Target Bucket",
"Number of Volumes": "Number of Volumes",
"Volume Size": "Volume Size",
"Target blob container": "Target blob container",
"Target bucket": "Target bucket",
"Account name": "Account name",
"Access key": "Access key",
"Account key": "Account key",
"Secret key": "Secret key",
"Region Dropdown": "Region Dropdown",
"Endpoint": "Endpoint",
"Endpoint Address": "Endpoint Address",
"Secret": "Secret",
"Switch to Credentials": "Switch to Credentials",
"Access Key Field": "Access Key Field",
"Secret Key Field": "Secret Key Field",
"ObjectBucketClaim Name": "ObjectBucketClaim Name",
"my-object-bucket": "my-object-bucket",
"If not provided a generic name will be generated.": "If not provided a generic name will be generated.",
"Defines the object-store service and the bucket provisioner.": "Defines the object-store service and the bucket provisioner.",
"BucketClass": "BucketClass",
"Select BucketClass": "Select BucketClass",
"Create ObjectBucketClaim": "Create ObjectBucketClaim",
"Edit YAML": "Edit YAML",
"Attach to Deployment": "Attach to Deployment",
"Object Bucket Claim Details": "Object Bucket Claim Details",
"Object Bucket": "Object Bucket",
"Namespace": "Namespace",
"OBCTableHeader": "OBCTableHeader",
"Object Bucket Claim Data": "Object Bucket Claim Data",
"Hide Values": "Hide Values",
"Reveal Values": "Reveal Values",
"Data": "Data",
"Create Object Bucket": "Create Object Bucket",
"Object Bucket Name": "Object Bucket Name",
"ob-name-help": "ob-name-help",
"Object Bucket Details": "Object Bucket Details",
"Object Bucket Claim": "Object Bucket Claim",
"OBTableHeader": "OBTableHeader",
"Uses the available disks that match the selected filters on all nodes selected in the previous step.": "Uses the available disks that match the selected filters on all nodes selected in the previous step.",
"A LocalVolumeSet allows you to filter a set of disks, group them and create a dedicated StorageClass to consume storage from them.": "A LocalVolumeSet allows you to filter a set of disks, group them and create a dedicated StorageClass to consume storage from them.",
"OpenShift Container Storage's StorageCluster requires a minimum of 3 nodes for the initial deployment. Only {{nodes}} node match to the selected filters. Please adjust the filters to include more nodes.": "OpenShift Container Storage's StorageCluster requires a minimum of 3 nodes for the initial deployment. Only {{nodes}} nodes match the selected filters. Please adjust the filters to include more nodes.",
"After the LocalVolumeSet and StorageClass are created you won't be able to go back to this step.": "After the LocalVolumeSet and StorageClass are created you won't be able to go back to this step.",
"Create StorageClass": "Create StorageClass",
"Selected Capacity": "Selected Capacity",
"Selected Nodes": "Selected Nodes",
"Review StorageCluster": "Review StorageCluster",
"Storage and nodes": "Storage and nodes",
"Arbiter zone:": "Arbiter zone:",
"None": "None",
"selected based on the created StorageClass:": "selected based on the created StorageClass:",
"Total CPU and memory of {{cpu, number}} CPU and {{memory}}": "Total CPU and memory of {{cpu, number}} CPU and {{memory}}",
"Configure": "Configure",
"Enable Encryption": "Enable Encryption",
"Connect to external key management service: {{name}}": "Connect to external key management service: {{name}}",
"Encryption Level: {{level}}": "Encryption Level: {{level}}",
"Using {{networkLabel}}": "Using {{networkLabel}}",
"Discover disks": "Discover disks",
"Review and create": "Review and create",
"Info Alert": "Info Alert",
"Internal - Attached devices": "Internal - Attached devices",
"Can be used on any platform where there are attached devices to the nodes, using the Local Storage Operator. The infrastructure StorageClass is provided by Local Storage Operator, on top of the attached drives.": "Can be used on any platform where there are attached devices to the nodes, using the Local Storage Operator. The infrastructure StorageClass is provided by Local Storage Operator, on top of the attached drives.",
"Before we can create a StorageCluster, the Local Storage operator needs to be installed. When installation is finished come back to OpenShift Container Storage to create a StorageCluster.<1><0>Install</0></1>": "Before we can create a StorageCluster, the Local Storage operator needs to be installed. When installation is finished come back to OpenShift Container Storage to create a StorageCluster.<1><0>Install</0></1>",
"Node Table": "Node Table",
"StorageCluster exists": "StorageCluster exists",
"Back to operator page": "Back to operator page",
"Go to cluster page": "Go to cluster page",
"<0>A StorageCluster <1>{{clusterName}}</1> already exists.<3>You cannot create another StorageCluster.</3></0>": "<0>A StorageCluster <1>{{clusterName}}</1> already exists.<3>You cannot create another StorageCluster.</3></0>",
"Connect to external cluster": "Connect to external cluster",
"Download <1>{{SCRIPT_NAME}}</1> script and run on the RHCS cluster, then upload the results (JSON) in the External cluster metadata field.": "Download <1>{{SCRIPT_NAME}}</1> script and run on the RHCS cluster, then upload the results (JSON) in the External cluster metadata field.",
"Download Script": "Download Script",
"A bucket will be created to provide the OpenShift Data Foundation's Service.": "A bucket will be created to provide the OpenShift Data Foundation's Service.",
"Bucket created for OpenShift Container Storage's Service": "Bucket created for OpenShift Container Storage's Service",
"Create External StorageCluster": "Create External StorageCluster",
"External cluster metadata": "External cluster metadata",
"Upload JSON File": "Upload JSON File",
"Upload Credentials file": "Upload Credentials file",
"JSON data": "JSON data",
"Create Button": "Create Button",
"Create StorageCluster": "Create StorageCluster",
"OpenShift Container Storage runs as a cloud-native service for optimal integration with applications in need of storage and handles the scenes such as provisioning and management.": "OpenShift Container Storage runs as a cloud-native service for optimal integration with applications in need of storage and handles the scenes such as provisioning and management.",
"Select mode:": "Select mode:",
"If not labeled, the selected nodes are labeled <1>{{label}}</1> to make them target hosts for OpenShift Data Foundation's components.": "If not labeled, the selected nodes are labeled <1>{{label}}</1> to make them target hosts for OpenShift Data Foundation's components.",
"Mark nodes as dedicated": "Mark nodes as dedicated",
"This will taint the nodes with the<1>key: node.ocs.openshift.io/storage</1>, <3>value: true</3>, and <6>effect: NoSchedule</6>": "This will taint the nodes with the<1>key: node.ocs.openshift.io/storage</1>, <3>value: true</3>, and <6>effect: NoSchedule</6>",
"Selected nodes will be dedicated to OpenShift Container Storage use only": "Selected nodes will be dedicated to OpenShift Container Storage use only",
"OpenShift Container Storage deployment in two data centers, with an arbiter node to settle quorum decisions.": "OpenShift Container Storage deployment in two data centers, with an arbiter node to settle quorum decisions.",
"To support high availability when two data centers can be used, enable arbiter to get the valid quorum between two data centers.": "To support high availability when two data centers can be used, enable arbiter to get the valid quorum between two data centers.",
"Select arbiter zone": "Select arbiter zone",
"Network": "Network",
"The default SDN networking uses a single network for all data operations such read/write and also for control plane, such as data replication. Multus allows a network separation between the data operations and the control plane operations.": "The default SDN networking uses a single network for all data operations such read/write and also for control plane, such as data replication. Multus allows a network separation between the data operations and the control plane operations.",
"Default (SDN)": "Default (SDN)",
"Custom (Multus)": "Custom (Multus)",
"Public Network Interface": "Public Network Interface",
"Select a network": "Select a network",
"Cluster Network Interface": "Cluster Network Interface",
"Requested Cluster Capacity:": "Requested Cluster Capacity:",
"StorageClass:": "StorageClass:",
"Select Capacity": "Select Capacity",
"Requested Capacity": "Requested Capacity",
"Select Nodes": "Select Nodes",
"create internal mode StorageCluster wizard": "create internal mode StorageCluster wizard",
"Can be used on any platform, except bare metal. It means that OpenShift Container Storage uses an infrastructure StorageClass, provided by the hosting platform. For example, gp2 on AWS, thin on VMWare, etc.": "Can be used on any platform, except bare metal. It means that OpenShift Container Storage uses an infrastructure StorageClass, provided by the hosting platform. For example, gp2 on AWS, thin on VMWare, etc.",
"{{title}} steps": "{{title}} steps",
"{{title}} content": "{{title}} content",
"{{availableCapacity}} / {{replica}} replicas": "{{availableCapacity}} / {{replica}} replicas",
"Available capacity:": "Available capacity:",
"Filesystem name": "Filesystem name",
"Enter filesystem name": "Enter filesystem name",
"CephFS filesystem name into which the volume shall be created": "CephFS filesystem name into which the volume shall be created",
"no compression": "no compression",
"with compression": "with compression",
"Replica {{poolSize}} {{compressionText}}": "Replica {{poolSize}} {{compressionText}}",
"Create New Pool": "Create New Pool",
"Storage Pool": "Storage Pool",
"Select a Pool": "Select a Pool",
"Storage pool into which volume data shall be stored": "Storage pool into which volume data shall be stored",
"Error retrieving Parameters": "Error retrieving Parameters",
"my-storage-pool": "my-storage-pool",
"An encryption key will be generated for each PersistentVolume created using this StorageClass.": "An encryption key will be generated for each PersistentVolume created using this StorageClass.",
"Select an existing connection": "Select an existing connection",
"KMS service {{value}} already exist": "KMS service {{value}} already exist",
"Choose existing KMS connection": "Choose existing KMS connection",
"Create new KMS connection": "Create new KMS connection",
"PV expansion operation is not supported for encrypted PVs.": "PV expansion operation is not supported for encrypted PVs.",
"Enable Thick Provisioning": "Enable Thick Provisioning",
"By enabling thick-provisioning, volumes will allocate the requested capacity upon volume creation. Volume creation will be slower when thick-provisioning is enabled.": "By enabling thick-provisioning, volumes will allocate the requested capacity upon volume creation. Volume creation will be slower when thick-provisioning is enabled.",
"{{resource}} details": "{{resource}} details",
"Kind": "Kind",
"Labels": "Labels",
"Last updated": "Last updated",
"Storage Systems": "Storage Systems",
"Used capacity": "Used capacity",
"Storage status represents the health status of {{operatorName}}'s StorageCluster.": "Storage status represents the health status of {{operatorName}}'s StorageCluster.",
"Health": "Health",
"Standard": "Standard",
"Data will be consumed by a Multi-cloud object gateway, deduped, compressed, and encrypted. The encrypted chunks would be saved on the selected BackingStores. Best used when the applications would always use the OpenShift Data Foundation endpoints to access the data.": "Data will be consumed by a Multi-cloud object gateway, deduped, compressed, and encrypted. The encrypted chunks would be saved on the selected BackingStores. Best used when the applications would always use the OpenShift Data Foundation endpoints to access the data.",
"Data is stored on the NamespaceStores without performing de-duplication, compression, or encryption. BucketClasses of namespace type allow connecting to existing data and serving from them. These are best used for existing data or when other applications (and cloud-native services) need to access the data from outside OpenShift Data Foundation.": "Data is stored on the NamespaceStores without performing de-duplication, compression, or encryption. BucketClasses of namespace type allow connecting to existing data and serving from them. These are best used for existing data or when other applications (and cloud-native services) need to access the data from outside OpenShift Data Foundation.",
"Single NamespaceStore": "Single NamespaceStore",
"Multi NamespaceStores": "Multi NamespaceStores",
"The namespace bucket will serve reads from several selected backing stores, creating a virtual namespace on top of them and will write to one of those as its chosen write target": "The namespace bucket will serve reads from several selected backing stores, creating a virtual namespace on top of them and will write to one of those as its chosen write target",
"Cache NamespaceStore": "Cache NamespaceStore",
"The caching bucket will serve data from a large raw data out of a local caching tiering.": "The caching bucket will serve data from a large raw data out of a local caching tiering.",
"Create storage class": "Create storage class",
"Create local volume set": "Create local volume set",
"Logical used capacity per account": "Logical used capacity per account",
"Egress Per Provider": "Egress Per Provider",
"I/O Operations count": "I/O Operations count",
"The StorageClass used by OpenShift Data Foundation to write its data and metadata.": "The StorageClass used by OpenShift Data Foundation to write its data and metadata.",
"Infrastructure StorageClass created by Local Storage Operator and used by OpenShift Container Storage to write its data and metadata.": "Infrastructure StorageClass created by Local Storage Operator and used by OpenShift Container Storage to write its data and metadata.",
"The amount of capacity that would be dynamically allocated on the selected StorageClass.": "The amount of capacity that would be dynamically allocated on the selected StorageClass.",
"If you wish to use the Arbiter stretch cluster, a minimum of 4 nodes (2 different zones, 2 nodes per zone) and 1 additional zone with 1 node is required. All nodes must be pre-labeled with zones in order to be validated on cluster creation.": "If you wish to use the Arbiter stretch cluster, a minimum of 4 nodes (2 different zones, 2 nodes per zone) and 1 additional zone with 1 node is required. All nodes must be pre-labeled with zones in order to be validated on cluster creation.",
"Selected nodes are based on the StorageClass <1>{{scName}}</1> and with a recommended requirement of 14 CPU and 34 GiB RAM per node.": "Selected nodes are based on the StorageClass <1>{{scName}}</1> and with a recommended requirement of 14 CPU and 34 GiB RAM per node.",
"Selected nodes are based on the StorageClass <1>{{scName}}</1> and fulfill the stretch cluster requirements with a recommended requirement of 14 CPU and 34 GiB RAM per node.": "Selected nodes are based on the StorageClass <1>{{scName}}</1> and fulfill the stretch cluster requirements with a recommended requirement of 14 CPU and 34 GiB RAM per node.",
"Storage": "Storage",
"Disks": "Disks",
"Backing Store": "Backing Store",
"Bucket Class": "Bucket Class",
"Namespace Store": "Namespace Store",
"Loading...": "Loading...",
"Pool {{name}} creation in progress": "Pool {{name}} creation in progress",
"Pool {{name}} was successfully created": "Pool {{name}} was successfully created",
"An error occurred. Pool {{name}} was not created": "An error occurred. Pool {{name}} was not created",
"Pool {{name}} creation timed out. Please check if odf operator and rook operator are running": "Pool {{name}} creation timed out. Please check if odf operator and rook operator are running",
"The creation of a StorageCluster is still in progress or has failed. Try again after the StorageCuster is ready to use.": "The creation of a StorageCluster is still in progress or has failed. Try again after the StorageCuster is ready to use.",
"Pool management tasks are not supported for default pool and OpenShift Container Storage's external mode.": "Pool management tasks are not supported for default pool and OpenShift Container Storage's external mode.",
"Pool {{name}} was created with errors.": "Pool {{name}} was created with errors.",
"Delete": "Delete",
"StorageClasses": "StorageClasses",
"hr": "hr",
"min": "min",
"A minimal cluster deployment will be performed.": "A minimal cluster deployment will be performed.",
"The selected nodes do not match OpenShift Data Foundation's StorageCluster requirement of an aggregated 30 CPUs and 72 GiB of RAM. If the selection cannot be modified a minimal cluster will be deployed.": "The selected nodes do not match OpenShift Data Foundation's StorageCluster requirement of an aggregated 30 CPUs and 72 GiB of RAM. If the selection cannot be modified a minimal cluster will be deployed.",
"Back to nodes selection": "Back to nodes selection",
"Select a StorageClass to continue": "Select a StorageClass to continue",
"This is a required field. The StorageClass will be used to request storage from the underlying infrastructure to create the backing PersistentVolumes that will be used to provide the OpenShift Data Foundation service.": "This is a required field. The StorageClass will be used to request storage from the underlying infrastructure to create the backing PersistentVolumes that will be used to provide the OpenShift Data Foundation service.",
"Create new StorageClass": "Create new StorageClass",
"This is a required field. The StorageClass will be used to request storage from the underlying infrastructure to create the backing persistent volumes that will be used to provide the OpenShift Data Foundation service.": "This is a required field. The StorageClass will be used to request storage from the underlying infrastructure to create the backing persistent volumes that will be used to provide the OpenShift Data Foundation service.",
"All required fields are not set": "Not all required fields are set",
"In order to create the StorageCluster you must set the StorageClass, select at least 3 nodes (preferably in 3 different zones) and meet the minimum or recommended requirement": "To create the StorageCluster, you must set the StorageClass, select at least 3 nodes (preferably in 3 different zones), and meet the minimum or recommended requirements",
"The StorageCluster requires a minimum of 3 nodes for the initial deployment. Please choose a different StorageClass or go to create a new LocalVolumeSet that matches the minimum node requirement.": "The StorageCluster requires a minimum of 3 nodes for the initial deployment. Choose a different StorageClass or create a new LocalVolumeSet that matches the minimum node requirement.",
"Create new volume set instance": "Create new volume set instance",
"Select at least 1 encryption level or disable encryption.": "Select at least 1 encryption level or disable encryption.",
"Fill out the details in order to connect to key management system": "Fill out the details in order to connect to the key management system",
"This is a required field.": "This is a required field.",
"Both public and cluster network attachment definition cannot be empty": "The public and cluster network attachment definitions cannot both be empty",
"A public or cluster network attachment definition must be selected to use Multus.": "A public or cluster network attachment definition must be selected to use Multus.",
"The number of selected zones is less than the minimum requirement of 3. If not modified a host-based failure domain deployment will be enforced.": "The number of selected zones is less than the minimum requirement of 3. If not modified, a host-based failure domain deployment will be enforced.",
"When the nodes in the selected StorageClass are spread across fewer than 3 availability zones, the StorageCluster will be deployed with the host based failure domain.": "When the nodes in the selected StorageClass are spread across fewer than 3 availability zones, the StorageCluster will be deployed with the host-based failure domain.",
"Cluster-Wide and StorageClass": "Cluster-Wide and StorageClass",
"Cluster-Wide": "Cluster-Wide",
"Select at least 2 Backing Store resources": "Select at least 2 Backing Store resources",
"Select at least 1 Backing Store resource": "Select at least 1 Backing Store resource",
"x {{replica}} replicas = {{osdSize, number}} TiB": "x {{replica}} replicas = {{osdSize, number}} TiB",
"SmallScale": "SmallScale",
"0.5 TiB": "0.5 TiB",
"2 TiB": "2 TiB",
"LargeScale": "LargeScale",
"4 TiB": "4 TiB",
"{{osdSize, number}} TiB": "{{osdSize, number}} TiB",
"Help": "Help"
}