
Add connection close proactively for Walk() http/rpc #7645

Merged · 1 commit into minio:master on May 14, 2019

Conversation

harshavardhana (Member)

Description

Add connection close proactively for Walk() http/rpc

Motivation and Context

Limit the number of open connections when heavy listing is likely from multi-threaded Hadoop jobs such as

  • hive table inserts
  • hadoop distcp

Regression

No

How Has This Been Tested?

Tested on a Hadoop cluster running map-reduce jobs.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)

Checklist:

  • My change requires a change to the documentation.
  • I have updated the documentation accordingly.
  • I have added unit tests to cover my changes.
  • I have added/updated functional tests in mint. (If yes, add mint PR # here: )
  • All new and existing tests passed.

@kannappanr (Contributor) left a comment:

LGTM

@codecov bot commented May 14, 2019

Codecov Report

Merging #7645 into master will decrease coverage by 0.49%.
The diff coverage is 0%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master    #7645      +/-   ##
==========================================
- Coverage   47.26%   46.76%    -0.5%     
==========================================
  Files         295      283      -12     
  Lines       47474    36665   -10809     
==========================================
- Hits        22437    17146    -5291     
+ Misses      22945    17430    -5515     
+ Partials     2092     2089       -3
Impacted Files Coverage Δ
cmd/storage-rest-server.go 48.96% <0%> (+3.93%) ⬆️
cmd/gateway-startup-msg.go 52.38% <0%> (-11.73%) ⬇️
cmd/posix-errors.go 40% <0%> (-9.39%) ⬇️
pkg/quick/errorutil.go 76% <0%> (-7.34%) ⬇️
pkg/s3select/sql/parser.go 76.92% <0%> (-6.42%) ⬇️
cmd/web-router.go 82.14% <0%> (-5.86%) ⬇️
cmd/fs-v1-rwpool.go 66.66% <0%> (-5.75%) ⬇️
cmd/retry.go 81.81% <0%> (-5.69%) ⬇️
cmd/fs-v1-helpers.go 62.02% <0%> (-5.56%) ⬇️
cmd/erasure-utils.go 67.5% <0%> (-5.09%) ⬇️
... and 262 more

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 9b4a81e...562dcc5. Read the comment docs.

@minio-ops

Mint Automation

Test Result
mint-compression-xl.sh ✔️
mint-xl.sh ✔️
mint-compression-dist-xl.sh ✔️
mint-compression-fs.sh ✔️
mint-worm.sh ✔️
mint-fs.sh ✔️
mint-dist-xl.sh ✔️
mint-gateway-nas.sh ✔️
mint-large-bucket.sh more...

7645-562dcc5/mint-large-bucket.sh.log:

Running with
SERVER_ENDPOINT:      72.28.97.53:30365
ACCESS_KEY:           minio
SECRET_KEY:           ***REDACTED***
ENABLE_HTTPS:         0
SERVER_REGION:        us-east-1
MINT_DATA_DIR:        /mint/data
MINT_MODE:            full
ENABLE_VIRTUAL_STYLE: 0

To get logs, run 'docker cp d4605dcbb9cf:/mint/log /tmp/mint-logs'

(1/14) Running aws-sdk-go tests ... done in 2 seconds
(2/14) Running aws-sdk-java tests ... done in 2 seconds
(3/14) Running aws-sdk-php tests ... done in 5 minutes and 49 seconds
(4/14) Running aws-sdk-ruby tests ... done in 14 seconds
(5/14) Running awscli tests ... done in 1 minutes and 10 seconds
(6/14) Running healthcheck tests ... done in 0 seconds
(7/14) Running mc tests ... done in 33 seconds
(8/14) Running minio-dotnet tests ... done in 1 minutes and 7 seconds
(9/14) Running minio-go tests ... done in 1 minutes and 39 seconds
(10/14) Running minio-java tests ... FAILED in 7 minutes and 36 seconds
{
  "name": "minio-java",
  "function": "copyObject(String bucketName, String objectName, String destBucketName, CopyConditions copyConditions)",
  "duration": 300291,
  "status": "FAIL",
  "error": "error occurred\nErrorResponse(code=XMinioReadQuorum, message=Multiple disk failures, unable to reconstruct data., bucketName=minio-java-test-1oh0obs, objectName=minio-java-test-586093, resource=/minio-java-test-1oh0obs/minio-java-test-586093, requestId=159EAE77289993CE, hostId=cc0a0c7e-c42f-4697-be32-3ea8ea1f84c4)\nrequest={method=GET, url=http://72.28.97.53:30365/minio-java-test-1oh0obs/minio-java-test-586093, headers=Host: 72.28.97.53:30365\nUser-Agent: Minio (amd64; amd64) minio-java/dev\nx-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855\nx-amz-date: 20190514T225227Z\nAuthorization: AWS4-HMAC-SHA256 Credential=*REDACTED*/20190514/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=*REDACTED*\n}\nresponse={code=503, headers=Accept-Ranges: bytes\nContent-Length: 1024\nContent-Security-Policy: block-all-mixed-content\nContent-Type: application/xml\nETag: \"a29b1e795e2e95362dcde4f41703b91b-1\"\nLast-Modified: Tue, 14 May 2019 22:52:27 GMT\nRetry-After: 120\nServer: MinIO/DEVELOPMENT.2019-05-14T22-21-24Z\nVary: Origin\nX-Amz-Request-Id: 159EAE77289993CE\nX-Minio-Deployment-Id: cc0a0c7e-c42f-4697-be32-3ea8ea1f84c4\nX-Xss-Protection: 1; mode=block\nDate: Tue, 14 May 2019 22:52:27 GMT\n}\n >>> [io.minio.MinioClient.executeReq(MinioClient.java:1190), io.minio.MinioClient.execute(MinioClient.java:1056), io.minio.MinioClient.executeGet(MinioClient.java:1284), io.minio.MinioClient.getObject(MinioClient.java:1860), io.minio.MinioClient.getObject(MinioClient.java:1594), FunctionalTest.copyObject_test3(FunctionalTest.java:2101), FunctionalTest.runTests(FunctionalTest.java:3008), FunctionalTest.main(FunctionalTest.java:3121)]"
}

Executed 9 out of 14 tests successfully.

@kannappanr kannappanr merged commit 0022c9d into minio:master May 14, 2019
@harshavardhana harshavardhana deleted the network branch May 15, 2019 01:00
4 participants