Support variable server pools #11256

Merged: 1 commit merged into minio:master on Jan 16, 2021

Conversation

harshavardhana (Member)

Description

Support variable server pools

Motivation and Context

The current implementation requires all server pools to have
the same erasure stripe size, to guarantee the same SLA
and expectations.

This PR allows server pools to be variadic, i.e. they
do not have to share the same erasure stripe size; instead,
each pool must satisfy the SLA for the parity ratio.

If the new server pool cannot guarantee the parity ratio,
the deployment is rejected, i.e. server pool expansion
is not allowed.
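
To make the rule concrete, here is a minimal, hypothetical Go sketch of the parity-ratio check described above. It is not the PR's actual code: the names poolLayout and canExpand are illustrative, and the exact comparison rule is an assumption based on the description.

package main

import "fmt"

// poolLayout describes one server pool's erasure layout.
type poolLayout struct {
	stripeSize int // total drives per erasure stripe
	parity     int // parity drives per erasure stripe
}

// parityRatio returns the fraction of each stripe used for parity.
func (p poolLayout) parityRatio() float64 {
	return float64(p.parity) / float64(p.stripeSize)
}

// canExpand reports whether a proposed pool preserves the existing
// deployment's parity-ratio SLA, even with a different stripe size.
func canExpand(existing, proposed poolLayout) bool {
	return proposed.parityRatio() >= existing.parityRatio()
}

func main() {
	current := poolLayout{stripeSize: 4, parity: 2}  // 4-drive stripes, EC:2
	proposed := poolLayout{stripeSize: 6, parity: 3} // 6-drive stripes, EC:3

	if canExpand(current, proposed) {
		fmt.Println("pool accepted: parity-ratio SLA preserved")
	} else {
		fmt.Println("pool rejected: server pool expansion not allowed")
	}
}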

How to test this PR?

Test with the following docker-compose example:

version: '2'

# starts 10 docker containers running minio server instances. Each
# minio server's web interface will be accessible on the host at ports
# 9001 through 9010.
services:
 minio1:
  image: y4m4/minio:dev
  restart: always
  volumes:
   - /home/harsha/data1:/data
  ports:
   - "9001:9000"
  environment:
   MINIO_ACCESS_KEY: minio
   MINIO_SECRET_KEY: minio123
  command: server http://minio{1...4}/data http://minio{5...10}/data
 minio2:
  image: y4m4/minio:dev
  restart: always
  volumes:
   - /home/harsha/data2:/data
  ports:
   - "9002:9000"
  environment:
   MINIO_ACCESS_KEY: minio
   MINIO_SECRET_KEY: minio123
  command: server http://minio{1...4}/data http://minio{5...10}/data
 minio3:
  image: y4m4/minio:dev
  volumes:
   - /home/harsha/data3:/data
  ports:
   - "9003:9000"
  environment:
   MINIO_ACCESS_KEY: minio
   MINIO_SECRET_KEY: minio123
  command: server http://minio{1...4}/data http://minio{5...10}/data
  restart: on-failure
 minio4:
  image: y4m4/minio:dev
  volumes:
   - /home/harsha/data4:/data
  ports:
   - "9004:9000"
  environment:
   MINIO_ACCESS_KEY: minio
   MINIO_SECRET_KEY: minio123
  command: server http://minio{1...4}/data http://minio{5...10}/data
  restart: on-failure
 minio5:
  image: y4m4/minio:dev
  volumes:
   - /home/harsha/data5:/data
  ports:
   - "9005:9000"
  environment:
   MINIO_ACCESS_KEY: minio
   MINIO_SECRET_KEY: minio123
  command: server http://minio{1...4}/data http://minio{5...10}/data
  restart: on-failure
 minio6:
  image: y4m4/minio:dev
  volumes:
   - /home/harsha/data6:/data
  ports:
   - "9006:9000"
  environment:
   MINIO_ACCESS_KEY: minio
   MINIO_SECRET_KEY: minio123
  command: server http://minio{1...4}/data http://minio{5...10}/data
  restart: on-failure
 minio7:
  image: y4m4/minio:dev
  volumes:
   - /home/harsha/data7:/data
  ports:
   - "9007:9000"
  environment:
   MINIO_ACCESS_KEY: minio
   MINIO_SECRET_KEY: minio123
  command: server http://minio{1...4}/data http://minio{5...10}/data
  restart: on-failure
 minio8:
  image: y4m4/minio:dev
  volumes:
   - /home/harsha/data8:/data
  ports:
   - "9008:9000"
  environment:
   MINIO_ACCESS_KEY: minio
   MINIO_SECRET_KEY: minio123
  command: server http://minio{1...4}/data http://minio{5...10}/data
  restart: on-failure
 minio9:
  image: y4m4/minio:dev
  volumes:
   - /home/harsha/data9:/data
  ports:
   - "9009:9000"
  environment:
   MINIO_ACCESS_KEY: minio
   MINIO_SECRET_KEY: minio123
  command: server http://minio{1...4}/data http://minio{5...10}/data
  restart: on-failure
 minio10:
  image: y4m4/minio:dev
  volumes:
   - /home/harsha/data10:/data
  ports:
   - "9010:9000"
  environment:
   MINIO_ACCESS_KEY: minio
   MINIO_SECRET_KEY: minio123
  command: server http://minio{1...4}/data http://minio{5...10}/data
  restart: on-failure
$ docker-compose -f /tmp/docker-compose.yaml up
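
Once the containers are up, one way to verify that the deployment accepted both pools (the 4-node and the 6-node pool, with different stripe sizes) is to create a bucket and list it back through any node. A minimal sketch using the minio-go v7 SDK; the endpoint and credentials match the compose file above, and the bucket name is illustrative:

package main

import (
	"context"
	"log"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	// Endpoint and credentials match the compose file above
	// (minio1 is published on host port 9001).
	client, err := minio.New("localhost:9001", &minio.Options{
		Creds:  credentials.NewStaticV4("minio", "minio123", ""),
		Secure: false,
	})
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()

	// Illustrative bucket name; a successful create and list-back
	// confirms the variable-pool deployment is serving requests.
	if err := client.MakeBucket(ctx, "pool-test", minio.MakeBucketOptions{}); err != nil {
		log.Fatal(err)
	}

	buckets, err := client.ListBuckets(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, b := range buckets {
		log.Println("bucket:", b.Name)
	}
}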

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Optimization (provides speedup with no functional changes)
  • Breaking change (fix or feature that would cause existing functionality to change)

Checklist:

  • Fixes a regression (If yes, please add commit-id or PR # here)
  • Documentation updated
  • Unit tests added/updated


@klauspost (Contributor) left a comment

cmd/format-erasure.go (review thread resolved)
@harshavardhana force-pushed the variable-server-sets branch 5 times, most recently from 886cd8c to ab00f17, on January 15, 2021 06:58

@poornas (Contributor) left a comment

LGTM per testing

@minio-trusted (Contributor)

Mint Automation

Test Result
mint-fs.sh ✔️
mint-gateway-s3.sh ✔️
mint-erasure.sh ✔️
mint-dist-erasure.sh ✔️
mint-zoned.sh ✔️
mint-gateway-nas.sh ✔️
mint-large-bucket.sh (see log below)
mint-gateway-azure.sh (see log below)

11256-d7c9030/mint-gateway-azure.sh.log:

Running with
SERVER_ENDPOINT:      minio-dev7.minio.io:32210
ACCESS_KEY:           minioazure
SECRET_KEY:           ***REDACTED***
ENABLE_HTTPS:         0
SERVER_REGION:        us-east-1
MINT_DATA_DIR:        /mint/data
MINT_MODE:            full
ENABLE_VIRTUAL_STYLE: 0

To get logs, run 'docker cp 1dae08a09efd:/mint/log /tmp/mint-logs'

(1/15) Running aws-sdk-go tests ... done in 9 seconds
(2/15) Running aws-sdk-java tests ... done in 2 seconds
(3/15) Running aws-sdk-php tests ... done in 2 minutes and 28 seconds
(4/15) Running aws-sdk-ruby tests ... done in 20 seconds
(5/15) Running awscli tests ... done in 3 minutes and 4 seconds
(6/15) Running healthcheck tests ... done in 0 seconds
(7/15) Running mc tests ... done in 3 minutes and 51 seconds
(8/15) Running minio-dotnet tests ... done in 1 minutes and 39 seconds
(9/15) Running minio-go tests ... done in 6 minutes and 48 seconds
(10/15) Running minio-java tests ... FAILED in 8 minutes and 56 seconds
{
  "name": "minio-java",
  "function": "putObject()",
  "args": "[user metadata]",
  "duration": 162,
  "status": "FAIL",
  "error": "error occurred\nErrorResponse(code = AuthenticationFailed, message = -> github.com/Azure/azure-storage-blob-go/azblob.newStorageError, github.com/Azure/azure-storage-blob-go@v0.10.0/azblob/zc_storage_error.go:42\n===== RESPONSE ERROR (ServiceCode=AuthenticationFailed) =====\nDescription=Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.\nRequestId:9b4cb784-e01e-011b-063e-ec8529000000\nTime:2021-01-16T19:35:44.7906076Z, Details: \n   AuthenticationErrorDetail: The MAC signature found in the HTTP request 'UWUl+bSKESEQp4d8idVHg5eMEKEjNya2lcO72+Ha97E=' is not the same as any computed signature. Server used following string to sign: 'PUT\n\n\n128\n\napplication/xml\n\n\n\n\n\n\nx-ms-blob-cache-control:\nx-ms-blob-content-disposition:\nx-ms-blob-content-encoding:\nx-ms-blob-content-language:\nx-ms-blob-content-type:application/octet-stream\nx-ms-client-request-id:0f8bd29d-667c-4bda-6e0c-f5a045ba0d31\nx-ms-date:Sat, 16 Jan 2021 19:35:44 GMT\nx-ms-meta-my_header1:a   b   c\nx-ms-meta-my_header2:\"a   b   c\"\nx-ms-meta-my_project:Project One\nx-ms-meta-my_unicode_tag:商å“�\nx-ms-version:2019-02-02\n/minioazure/minio-java-test-1nsb0a/minio-java-test-8c58sk\ncomp:blocklist\ntimeout:1501'.\n   Code: AuthenticationFailed\n   PUT https://minioazure.blob.core.windows.net/minio-java-test-1nsb0a/minio-java-test-8c58sk?comp=blocklist&timeout=1501\n   Authorization: REDACTED\n   Content-Length: [128]\n   Content-Type: [application/xml]\n   User-Agent: [APN/1.0 MinIO/1.0 MinIO/2021-01-16T19:02:10Z]\n   X-Ms-Blob-Cache-Control: []\n   X-Ms-Blob-Content-Disposition: []\n   X-Ms-Blob-Content-Encoding: []\n   X-Ms-Blob-Content-Language: []\n   X-Ms-Blob-Content-Type: [application/octet-stream]\n   X-Ms-Client-Request-Id: [0f8bd29d-667c-4bda-6e0c-f5a045ba0d31]\n   X-Ms-Date: [Sat, 16 Jan 2021 19:35:44 GMT]\n   X-Ms-Meta-My_header1: [a   b   c]\n   X-Ms-Meta-My_header2: [\"a   b   c\"]\n   X-Ms-Meta-My_project: [Project One]\n   X-Ms-Meta-My_unicode_tag: [商品]\n   X-Ms-Version: [2019-02-02]\n   --------------------------------------------------------------------------------\n   RESPONSE Status: 403 Server failed to authenticate the request. 
Make sure the value of Authorization header is formed correctly including the signature.\n   Content-Length: [1090]\n   Content-Type: [application/xml]\n   Date: [Sat, 16 Jan 2021 19:35:43 GMT]\n   Server: [Microsoft-HTTPAPI/2.0]\n   X-Ms-Error-Code: [AuthenticationFailed]\n   X-Ms-Request-Id: [9b4cb784-e01e-011b-063e-ec8529000000]\n\n\n, bucketName = minio-java-test-1nsb0a, objectName = minio-java-test-8c58sk, resource = /minio-java-test-1nsb0a/minio-java-test-8c58sk, requestId = 165ACD7AF2DDFF63, hostId = 0ae5d776-2e9f-425a-90f6-503e97a45d3b)\nrequest={method=PUT, url=http://minio-dev7.minio.io:32210/minio-java-test-1nsb0a/minio-java-test-8c58sk, headers=x-amz-meta-My-Unicode-Tag: 商品\nx-amz-meta-My-Project: Project One\nx-amz-meta-My-header1: a   b   c\nx-amz-meta-My-Header2: \"a   b   c\"\nContent-Type: application/octet-stream\nHost: minio-dev7.minio.io:32210\nAccept-Encoding: identity\nUser-Agent: MinIO (Linux; amd64) minio-java/8.0.3\nContent-MD5: A9oFTxee7YVcJ9fWsgQeKg==\nx-amz-content-sha256: 1ff7959f86334ddc5c188a5083268f600146328b2b6c5185e75bf7d9387d6b74\nx-amz-date: 20210116T193544Z\nAuthorization: AWS4-HMAC-SHA256 Credential=*REDACTED*/20210116/us-east-1/s3/aws4_request, SignedHeaders=content-md5;host;x-amz-content-sha256;x-amz-date;x-amz-meta-my-header1;x-amz-meta-my-header2;x-amz-meta-my-project;x-amz-meta-my-unicode-tag, Signature=*REDACTED*\n}\nresponse={code=403, headers=Accept-Ranges: bytes\nContent-Length: 3078\nContent-Security-Policy: block-all-mixed-content\nContent-Type: application/xml\nServer: MinIO\nVary: Origin\nX-Amz-Request-Id: 165ACD7AF2DDFF63\nX-Xss-Protection: 1; mode=block\nDate: Sat, 16 Jan 2021 19:35:44 GMT\n}\n >>> [io.minio.MinioClient.execute(MinioClient.java:775), io.minio.MinioClient.putObject(MinioClient.java:4547), io.minio.MinioClient.putObject(MinioClient.java:2713), io.minio.MinioClient.putObject(MinioClient.java:2830), FunctionalTest.testPutObject(FunctionalTest.java:763), FunctionalTest.putObject(FunctionalTest.java:890), FunctionalTest.runObjectTests(FunctionalTest.java:3751), FunctionalTest.runTests(FunctionalTest.java:3783), FunctionalTest.main(FunctionalTest.java:3927)]"
}
(10/15) Running minio-js tests ... done in 2 minutes and 41 seconds
(11/15) Running minio-py tests ... done in 18 minutes and 26 seconds
(12/15) Running s3cmd tests ... done in 2 minutes and 19 seconds
(13/15) Running s3select tests ... done in 59 seconds
(14/15) Running security tests ... done in 0 seconds

Executed 14 out of 15 tests successfully.

11256-d7c9030/mint-large-bucket.sh.log:

Running with
SERVER_ENDPOINT:      minio-dev7.minio.io:30678
ACCESS_KEY:           minio
SECRET_KEY:           ***REDACTED***
ENABLE_HTTPS:         0
SERVER_REGION:        us-east-1
MINT_DATA_DIR:        /mint/data
MINT_MODE:            full
ENABLE_VIRTUAL_STYLE: 0

To get logs, run 'docker cp f56a6d3fd7f5:/mint/log /tmp/mint-logs'

(1/15) Running aws-sdk-go tests ... done in 2 seconds
(2/15) Running aws-sdk-java tests ... done in 1 seconds
(3/15) Running aws-sdk-php tests ... done in 44 seconds
(4/15) Running aws-sdk-ruby tests ... done in 4 seconds
(5/15) Running awscli tests ... done in 2 minutes and 19 seconds
(6/15) Running healthcheck tests ... done in 0 seconds
(7/15) Running mc tests ... done in 1 minutes and 11 seconds
(8/15) Running minio-dotnet tests ... done in 54 seconds
(9/15) Running minio-go tests ... done in 2 minutes and 25 seconds
(10/15) Running minio-java tests ... done in 1 minutes and 42 seconds
(11/15) Running minio-js tests ... FAILED in 15 seconds
{
  "name": "minio-js",
  "function": "\"after all\" hook in \"functional tests\"",
  "duration": 19,
  "status": "FAIL",
  "error": "S3Error: The bucket you tried to delete is not empty at Object.parseError (node_modules/minio/dist/main/xml-parsers.js:79:11) at /mint/run/core/minio-js/node_modules/minio/dist/main/transformers.js:156:22 at DestroyableTransform._flush (node_modules/minio/dist/main/transformers.js:80:10) at DestroyableTransform.prefinish (node_modules/readable-stream/lib/_stream_transform.js:129:10) at prefinish (node_modules/readable-stream/lib/_stream_writable.js:611:14) at finishMaybe (node_modules/readable-stream/lib/_stream_writable.js:620:5) at endWritable (node_modules/readable-stream/lib/_stream_writable.js:643:3) at DestroyableTransform.Writable.end (node_modules/readable-stream/lib/_stream_writable.js:571:22) at IncomingMessage.onend (internal/streams/readable.js:684:10) at endReadableNT (internal/streams/readable.js:1327:12) at processTicksAndRejections (internal/process/task_queues.js:80:21)"
}
(11/15) Running minio-py tests ... done in 3 minutes and 0 seconds
(12/15) Running s3cmd tests ... done in 20 seconds
(13/15) Running s3select tests ... done in 7 seconds
(14/15) Running security tests ... done in 0 seconds

Executed 14 out of 15 tests successfully.

Deleting image on docker hub
Deleting image locally

@harshavardhana merged commit f903cae into minio:master on Jan 16, 2021
@harshavardhana deleted the variable-server-sets branch on January 16, 2021 20:08