HDFS-17796. WebHDFS: map storage-capacity exceptions to HTTP 507 #8460
Open
1fanwang wants to merge 1 commit into apache:trunk from
Conversation
WebHDFS write failures caused by storage-capacity limits surface as a
generic HTTP 403 (or, when the channel times out before the handler
runs, 500), giving clients no way to distinguish "out of space" from
permission denial or transient server failure.
Add a typed mapping in both the NameNode-side
(o.a.h.hdfs.web.resources.ExceptionHandler) and DataNode-side
(o.a.h.hdfs.server.datanode.web.webhdfs.ExceptionHandler) handlers:
* ClusterStorageCapacityExceededException (and subclasses, including
DSQuotaExceededException and NSQuotaExceededException)
* DiskChecker.DiskOutOfSpaceException
both map to HTTP 507 Insufficient Storage (RFC 4918).
JAX-RS 2.1's Response.Status enum does not include 507, so the
NameNode-side handler defines a small Response.StatusType constant.
The DataNode-side handler uses Netty's
HttpResponseStatus.INSUFFICIENT_STORAGE.
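As a sketch only, this is roughly the shape the NameNode-side change describes, assuming the exception types live at org.apache.hadoop.fs.ClusterStorageCapacityExceededException and org.apache.hadoop.util.DiskChecker.DiskOutOfSpaceException; the real handler also serializes the exception into a JSON error body, which is reduced to a media type here:

```java
import java.io.FileNotFoundException;
import java.io.IOException;

import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

import org.apache.hadoop.fs.ClusterStorageCapacityExceededException;
import org.apache.hadoop.util.DiskChecker.DiskOutOfSpaceException;

public class ExceptionHandler {
  // Response.Status has no member for 507, so supply a StatusType by hand.
  static final Response.StatusType INSUFFICIENT_STORAGE = new Response.StatusType() {
    @Override public int getStatusCode() { return 507; }
    @Override public Response.Status.Family getFamily() {
      return Response.Status.Family.SERVER_ERROR;
    }
    @Override public String getReasonPhrase() { return "Insufficient Storage"; }
  };

  Response toResponse(Exception e) {
    final Response.StatusType status;
    if (e instanceof ClusterStorageCapacityExceededException
        || e instanceof DiskOutOfSpaceException) {
      // DSQuota/NSQuota subclass ClusterStorageCapacityExceededException, so a
      // single check covers the family. It must precede the IOException branch.
      status = INSUFFICIENT_STORAGE;
    } else if (e instanceof FileNotFoundException) {
      status = Response.Status.NOT_FOUND;
    } else if (e instanceof IllegalArgumentException) {
      status = Response.Status.BAD_REQUEST;
    } else if (e instanceof IOException) {
      status = Response.Status.FORBIDDEN;   // pre-existing generic mapping
    } else {
      status = Response.Status.INTERNAL_SERVER_ERROR;
    }
    return Response.status(status).type(MediaType.APPLICATION_JSON).build();
  }
}
```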
Tests: the TestExceptionHandler suites in both packages cover the new mappings
plus regression checks for IOException -> 403, FileNotFound -> 404,
and IllegalArgument -> 400.
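For flavor, a minimal sketch of the kind of assertions these suites make, written against the hypothetical toResponse shape from the sketch above rather than the committed code:

```java
import static org.junit.Assert.assertEquals;

import java.io.IOException;

import javax.ws.rs.core.Response;

import org.apache.hadoop.hdfs.protocol.DSQuotaExceededException;
import org.junit.Test;

public class TestExceptionHandler {
  private final ExceptionHandler handler = new ExceptionHandler();

  @Test
  public void quotaExceededMapsTo507() {
    // DSQuotaExceededException(quota, count): pretend a 4 KB quota is blown.
    Response r = handler.toResponse(new DSQuotaExceededException(4096, 8192));
    assertEquals(507, r.getStatus());
  }

  @Test
  public void genericIOExceptionStillMapsTo403() {
    assertEquals(403, handler.toResponse(new IOException("boom")).getStatus());
  }
}
```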
Description of PR
JIRA: HDFS-17796
WebHDFS write failures caused by storage-capacity limits today surface as a generic HTTP 403 (or, when the channel times out before the handler runs, 500). Clients have no way to distinguish "out of space" from permission denial or transient server failure.
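To see why the distinction matters, a hypothetical client-side check once 507 is in place (host, path, and class name are invented for illustration):

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class WebHdfsWriteProbe {
  public static void main(String[] args) throws Exception {
    // Hypothetical NameNode endpoint; 9870 is the default NN HTTP port in Hadoop 3.
    URL url = new URL("http://namenode:9870/webhdfs/v1/tmp/data?op=CREATE");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    switch (conn.getResponseCode()) {
      case 507:
        // Out of space or over quota: back off or surface a storage error.
        System.err.println("insufficient storage");
        break;
      case 403:
        // With the fix, 403 once again means what it says.
        System.err.println("permission denied");
        break;
      default:
        // Success path: expect a 307 redirect to a DataNode, then a second PUT.
        break;
    }
  }
}
```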
Both ExceptionHandlers, NameNode-side (o.a.h.hdfs.web.resources.ExceptionHandler) and DataNode-side (o.a.h.hdfs.server.datanode.web.webhdfs.ExceptionHandler), currently fall through to the generic IOException -> FORBIDDEN (403) mapping for the storage-capacity exception family.
Fix
Add a typed mapping ahead of the generic IOException branch:
* ClusterStorageCapacityExceededException (and subclasses, including DSQuotaExceededException / NSQuotaExceededException)
* DiskChecker.DiskOutOfSpaceException
both map to HTTP 507 Insufficient Storage (RFC 4918).
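For the DataNode side, a minimal sketch of the intended branch ordering; WebHdfsStatusMapping and statusFor are illustrative names rather than the real handler, which builds a full Netty response around the chosen status:

```java
import static io.netty.handler.codec.http.HttpResponseStatus.BAD_REQUEST;
import static io.netty.handler.codec.http.HttpResponseStatus.FORBIDDEN;
import static io.netty.handler.codec.http.HttpResponseStatus.INSUFFICIENT_STORAGE;
import static io.netty.handler.codec.http.HttpResponseStatus.INTERNAL_SERVER_ERROR;
import static io.netty.handler.codec.http.HttpResponseStatus.NOT_FOUND;

import java.io.FileNotFoundException;
import java.io.IOException;

import io.netty.handler.codec.http.HttpResponseStatus;
import org.apache.hadoop.fs.ClusterStorageCapacityExceededException;
import org.apache.hadoop.util.DiskChecker.DiskOutOfSpaceException;

final class WebHdfsStatusMapping {
  static HttpResponseStatus statusFor(Throwable cause) {
    if (cause instanceof ClusterStorageCapacityExceededException
        || cause instanceof DiskOutOfSpaceException) {
      return INSUFFICIENT_STORAGE;   // Netty ships a constant for 507
    } else if (cause instanceof FileNotFoundException) {
      return NOT_FOUND;
    } else if (cause instanceof IllegalArgumentException) {
      return BAD_REQUEST;
    } else if (cause instanceof IOException) {
      return FORBIDDEN;              // generic mapping stays last
    }
    return INTERNAL_SERVER_ERROR;
  }
}
```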
JAX-RS 2.1's Response.Status enum doesn't include 507, so the NN-side handler defines a small Response.StatusType constant. The DN-side handler uses Netty's built-in HttpResponseStatus.INSUFFICIENT_STORAGE.
How was this patch tested?
Two new unit tests, one per handler package:
* org.apache.hadoop.hdfs.web.resources.TestExceptionHandler (6 tests)
* org.apache.hadoop.hdfs.server.datanode.web.webhdfs.TestExceptionHandler (6 tests)
Each suite covers DSQuotaExceededException, NSQuotaExceededException, and DiskOutOfSpaceException, plus regression checks for the existing mappings (IOException -> 403, FileNotFound -> 404, IllegalArgument -> 400).
Out of scope
HDFS-17796 also asks for cleanup of incomplete files left behind by failed writes. That part is intentionally not included here: it requires plumbing the file path through HdfsWriter so the handler can attempt a best-effort delete on failure, and is best handled in a follow-up PR so the status-code fix can land independently. Happy to follow up.
For code changes: