HDFS-15683. Allow configuring DISK/ARCHIVE capacity for individual volumes. #2625
Conversation
💔 -1 overall
This message was automatically generated.
🎊 +1 overall
This message was automatically generated.
...roject/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
if (!info.setCapacityRatio(
    target.getStorageType(), capacityRatio)) {
  throw new IOException(
      "Not enought capacity ratio left on mount: "
Typo: "enought" should be "enough". Also, we may want to make the exception message more detailed.
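One way to make the message more detailed, as suggested above, is to include the mount, the storage type, the requested ratio, and the remaining ratio. This is only an illustrative sketch; the class and method names here are hypothetical, not part of the actual patch:

```java
import java.io.IOException;

// Hypothetical helper showing a more descriptive capacity-ratio error.
// The mount path, storage type, and ratio values are stand-ins for the
// values available at the call site in the real code.
class CapacityRatioError {
    static String message(String mount, String storageType,
                          double requested, double leftover) {
        return String.format(
            "Not enough capacity ratio left on mount %s for storage type %s: "
                + "requested %.2f but only %.2f remains",
            mount, storageType, requested, leftover);
    }

    static IOException build(String mount, String storageType,
                             double requested, double leftover) {
        return new IOException(
            message(mount, storageType, requested, leftover));
    }
}
```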
@@ -35,10 +36,13 @@
 class MountVolumeInfo {
   private final ConcurrentMap<StorageType, FsVolumeImpl>
       storageTypeVolumeMap;
   private final ConcurrentMap<StorageType, Double>
Shall we use EnumMap for both storageTypeVolumeMap and capacityRatioMap?
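For context on this suggestion: `EnumMap` is compact and fast for enum keys, but unlike `ConcurrentHashMap` it is not thread-safe, so concurrent mutation would need external synchronization. A minimal sketch of the trade-off, using a stand-in enum in place of Hadoop's `org.apache.hadoop.fs.StorageType`:

```java
import java.util.Collections;
import java.util.EnumMap;
import java.util.Map;

// Sketch only: StorageType here is a stand-in for the Hadoop enum.
// EnumMap is backed by an array indexed by ordinal, so lookups are cheap,
// but it must be wrapped (e.g. Collections.synchronizedMap) if several
// threads may write to it concurrently.
class EnumMapSketch {
    enum StorageType { DISK, ARCHIVE, SSD, RAM_DISK }

    private final Map<StorageType, Double> capacityRatioMap =
        Collections.synchronizedMap(
            new EnumMap<StorageType, Double>(StorageType.class));

    void setRatio(StorageType t, double ratio) {
        capacityRatioMap.put(t, ratio);
    }

    // Default to 1.0 (whole volume) when no ratio is configured.
    double getRatio(StorageType t) {
        return capacityRatioMap.getOrDefault(t, 1.0);
    }
}
```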
if (leftover < capacityRatio) {
  return false;
}
capacityRatioMap.put(storageType, capacityRatio);
Do we allow overwriting an existing capacity ratio for a storage type?
I think it will be convenient later, when we add a feature to update the capacity ratio without restarting the datanode. It should be pretty harmless for now.
Is it possible that this setCapacityRatio call is triggered by a refreshVolumes op? In that case, if we do not reload the capacity ratio configuration during refreshVolumes, we can end up with an inconsistency here. So I think we need to make sure this new feature works well with refreshVolumes. What do you think?
Yes, I agree. This is a good point. We need to refresh the capacity ratio as well when calling refreshVolumes to make this a complete feature. Let me spend some time on it.
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 * Also, we will need to adjust new capacity ratio when
 * refreshVolume in the future.
 */
if (ioe.getMessage()
It may be better to check if there is conflict with the same-disk-tiering feature when we first load the refreshVolume configuration. I.e., we can do some verification on changedVolumes
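The verification suggested above could amount to checking, per affected mount, that the configured capacity ratios are valid and sum to at most 1.0 before the refreshVolumes change is applied. A minimal sketch of that check; the mount-keyed map layout and class name are illustrative assumptions, not the actual changedVolumes structure:

```java
import java.util.Map;

// Sketch of an up-front sanity check for capacity-ratio configuration:
// every ratio must lie in [0, 1], and the ratios on each mount must not
// sum to more than 1.0. Running this against the volumes affected by a
// refreshVolumes op would catch conflicts before any state is mutated.
class CapacityRatioValidator {
    static boolean validate(Map<String, Map<String, Double>> ratiosByMount) {
        for (Map.Entry<String, Map<String, Double>> mount
                : ratiosByMount.entrySet()) {
            double total = 0.0;
            for (double r : mount.getValue().values()) {
                if (r < 0.0 || r > 1.0) {
                    return false; // individual ratio out of range
                }
                total += r;
            }
            if (total > 1.0) {
                return false; // mount is over-committed
            }
        }
        return true;
    }
}
```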
The javac warning looks unrelated. It reported "generated 14 new + 580 unchanged - 14 fixed = 594 total (was 594)", and the warnings are not caused by this PR. But we can use this chance to fix them; please file a new jira for that, @LeonGao91. Other than that, the changes look good to me. +1
thx @Jing9 for the review! I will open a jira for the javac issues.
No description provided.