
logger: avoid writing audit log response header twice #10642

Merged (1 commit, Oct 8, 2020)

Conversation

@aead (Member) commented on Oct 8, 2020

Description

This commit fixes a misuse of `http.ResponseWriter.WriteHeader`.
A caller should either call `WriteHeader` exactly once or
write to the response writer, which causes an implicit 200 OK.

Writing the response headers more than once causes an `http: superfluous
response.WriteHeader call` log message. This commit fixes that
by preventing a second `WriteHeader` call from being forwarded to the underlying
`ResponseWriter`.

Updates #10587
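
For illustration, a minimal sketch of the guard pattern this commit describes: wrap the `http.ResponseWriter` and drop any `WriteHeader` call after the first. The type and field names here (`headerGuardWriter`, `headerWritten`) are hypothetical, not MinIO's actual logger types.

```go
package main

import (
	"log"
	"net/http"
)

// headerGuardWriter wraps an http.ResponseWriter and ensures the status
// line is written at most once, avoiding the
// "http: superfluous response.WriteHeader call" log message.
type headerGuardWriter struct {
	http.ResponseWriter
	headerWritten bool
}

// WriteHeader forwards only the first call to the underlying writer;
// any subsequent call is silently dropped.
func (w *headerGuardWriter) WriteHeader(code int) {
	if w.headerWritten {
		return
	}
	w.headerWritten = true
	w.ResponseWriter.WriteHeader(code)
}

// Write records the implicit 200 OK that the first body write triggers,
// so a later explicit WriteHeader call is also treated as a duplicate.
func (w *headerGuardWriter) Write(p []byte) (int, error) {
	if !w.headerWritten {
		w.WriteHeader(http.StatusOK)
	}
	return w.ResponseWriter.Write(p)
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		gw := &headerGuardWriter{ResponseWriter: w}
		gw.WriteHeader(http.StatusOK)
		// Without the guard, this second call would log a
		// superfluous-WriteHeader warning; here it is dropped.
		gw.WriteHeader(http.StatusInternalServerError)
		gw.Write([]byte("ok\n"))
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```
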

Motivation and Context

#10587 - potential bugfix

How to test this PR?

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)

Checklist:

  • Fixes a regression (If yes, please add commit-id or PR # here)
  • Documentation needed
  • Unit tests needed

@minio-trusted (Contributor) commented:

Mint Automation

Test Result
mint-large-bucket.sh ✔️
mint-fs.sh ✔️
mint-gateway-s3.sh ✔️
mint-erasure.sh ✔️
mint-dist-erasure.sh ✔️
mint-zoned.sh ✔️
mint-gateway-nas.sh ✔️
mint-gateway-azure.sh ✖️ (log below)

10642-4db307a/mint-gateway-azure.sh.log:

Running with
SERVER_ENDPOINT:      minio-dev7.minio.io:31678
ACCESS_KEY:           minioazure
SECRET_KEY:           ***REDACTED***
ENABLE_HTTPS:         0
SERVER_REGION:        us-east-1
MINT_DATA_DIR:        /mint/data
MINT_MODE:            full
ENABLE_VIRTUAL_STYLE: 0

To get logs, run 'docker cp 34e7308e6442:/mint/log /tmp/mint-logs'

(1/15) Running aws-sdk-go tests ... done in 8 seconds
(2/15) Running aws-sdk-java tests ... done in 1 seconds
(3/15) Running aws-sdk-php tests ... done in 1 minutes and 22 seconds
(4/15) Running aws-sdk-ruby tests ... done in 23 seconds
(5/15) Running awscli tests ... done in 3 minutes and 10 seconds
(6/15) Running healthcheck tests ... done in 0 seconds
(7/15) Running mc tests ... done in 3 minutes and 52 seconds
(8/15) Running minio-dotnet tests ... done in 2 minutes and 41 seconds
(9/15) Running minio-go tests ... FAILED in 1 minutes and 42 seconds
{
  "args": {
    "bucketName": "minio-go-test-ymd0npa6gdj6ble6",
    "objectName": "test-object",
    "opts": "",
    "size": -1
  },
  "duration": 982,
  "function": "PutObject(bucketName, objectName, reader,size,opts)",
  "message": "Unexpected size",
  "name": "minio-go: testPutObjectStreaming",
  "status": "FAIL"
}
(9/15) Running minio-java tests ... FAILED in 9 minutes and 47 seconds
{
  "name": "minio-java",
  "function": "composeObject(ComposeObjectArgs args)",
  "args": "offset: 10, length: 6291436 bytes",
  "duration": 3238,
  "status": "FAIL",
  "error": "java.net.SocketException: Connection reset >>> [java.base/java.net.SocketInputStream.read(SocketInputStream.java:186), java.base/java.net.SocketInputStream.read(SocketInputStream.java:140), okio.Okio$2.read(Okio.java:140), okio.AsyncTimeout$2.read(AsyncTimeout.java:237), okio.RealBufferedSource.indexOf(RealBufferedSource.java:358), okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:230), okhttp3.internal.http1.Http1ExchangeCodec.readHeaderLine(Http1ExchangeCodec.java:242), okhttp3.internal.http1.Http1ExchangeCodec.readResponseHeaders(Http1ExchangeCodec.java:213), okhttp3.internal.connection.Exchange.readResponseHeaders(Exchange.java:115), okhttp3.internal.http.CallServerInterceptor.intercept(CallServerInterceptor.java:94), okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142), okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:43), okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142), okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117), okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:94), okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142), okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117), okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93), okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142), okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:88), okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142), okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117), okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:229), okhttp3.RealCall.execute(RealCall.java:81), io.minio.MinioClient.execute(MinioClient.java:1014), io.minio.MinioClient.uploadPart(MinioClient.java:7781), io.minio.MinioClient.putObject(MinioClient.java:4706), io.minio.MinioClient.putObject(MinioClient.java:4881), FunctionalTest.composeObject_test2(FunctionalTest.java:2404), FunctionalTest.runTests(FunctionalTest.java:3826), FunctionalTest.main(FunctionalTest.java:4058)]"
}
(9/15) Running minio-js tests ... FAILED in 45 seconds
{
  "name": "minio-js",
  "function": "\"after all\" hook in \"functional tests\"",
  "duration": 75,
  "status": "FAIL",
  "error": "S3Error: The bucket you tried to delete is not empty at Object.parseError (node_modules/minio/dist/main/xml-parsers.js:86:11) at /mint/run/core/minio-js/node_modules/minio/dist/main/transformers.js:156:22 at DestroyableTransform._flush (node_modules/minio/dist/main/transformers.js:80:10) at DestroyableTransform.prefinish (node_modules/readable-stream/lib/_stream_transform.js:129:10) at prefinish (node_modules/readable-stream/lib/_stream_writable.js:611:14) at finishMaybe (node_modules/readable-stream/lib/_stream_writable.js:620:5) at endWritable (node_modules/readable-stream/lib/_stream_writable.js:643:3) at DestroyableTransform.Writable.end (node_modules/readable-stream/lib/_stream_writable.js:571:22) at IncomingMessage.onend (_stream_readable.js:682:10) at endReadableNT (_stream_readable.js:1252:12) at processTicksAndRejections (internal/process/task_queues.js:80:21)"
}
(9/15) Running minio-py tests ... FAILED in 17 minutes and 52 seconds
{
  "name": "minio-py:test_thread_safe",
  "status": "FAIL",
  "args": {
    "bucket_name": "minio-py-test-78aa8a17-c4fc-4596-8d42-1e7881effdf5",
    "object_name": "687776ba-ee83-4064-98fd-7dfc75439df0"
  },
  "message": "Sha-sum mismatch on multi-threaded put and get objects",
  "error": "Traceback (most recent call last):\n  File \"/mint/run/core/minio-py/tests.py\", line 145, in _call_test\n    func(log_entry, *args, **kwargs)\n  File \"/mint/run/core/minio-py/tests.py\", line 1666, in test_thread_safe\n    raise exceptions[0]\n  File \"/mint/run/core/minio-py/tests.py\", line 1631, in get_object_and_check\n    'Sha-sum mismatch on multi-threaded put and '\nValueError: Sha-sum mismatch on multi-threaded put and get objects\n",
  "duration": 26196
}
(9/15) Running s3cmd tests ... done in 2 minutes and 15 seconds
(10/15) Running s3select tests ... done in 36 seconds
(11/15) Running security tests ... done in 0 seconds

Executed 11 out of 15 tests successfully.

Deleting image on docker hub
Deleting image locally
Error: No such image: minio/minio:10642-4db307a
