
heal: Preserve deployment ID from reference format.json #7126

Merged

Conversation

vadmeste
Member

Description

Deployment ID is not copied into the new formats written while healing format.json. This is not
critical, since a new deployment ID would be generated and set on the next cluster restart, but it
is better not to change a cluster's deployment ID, for easier tracking.
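
The gist of the change: when healing writes fresh format entries for replaced disks, it should carry the deployment ID over from the healthy reference format rather than leaving it blank. Below is a minimal sketch of that idea; the struct and function names are illustrative stand-ins, not MinIO's actual format types.

```go
package main

import "fmt"

// formatInfo is a simplified stand-in for the metadata kept in format.json;
// the real MinIO structs differ, this only illustrates the healing step.
type formatInfo struct {
	Version string
	Format  string
	ID      string // deployment ID of the cluster
	DiskID  string // this disk's own identity
}

// healFormats builds fresh format entries for disks whose format.json is
// missing, copying the deployment ID from the healthy reference format so
// the cluster keeps the same ID after healing.
func healFormats(reference formatInfo, missingDisks []string) []formatInfo {
	healed := make([]formatInfo, 0, len(missingDisks))
	for _, disk := range missingDisks {
		healed = append(healed, formatInfo{
			Version: reference.Version,
			Format:  reference.Format,
			ID:      reference.ID, // preserve the deployment ID instead of leaving it empty
			DiskID:  disk,
		})
	}
	return healed
}

func main() {
	ref := formatInfo{Version: "1", Format: "xl", ID: "6faeded5-5cf3-4133-8a37-07c5d500207c", DiskID: "disk-1"}
	for _, f := range healFormats(ref, []string{"disk-3"}) {
		fmt.Printf("healed %s with deployment ID %s\n", f.DiskID, f.ID)
	}
}
```

Sourcing the ID from the reference format means a heal never changes the cluster's identity, so anything that tracks deployments by ID keeps working across disk replacements.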

Motivation and Context

Fixing a small bug

Regression

No

How Has This Been Tested?

  1. Run a distributed setup (say, 4 disks)
  2. Check the deployment ID in format.json (see the sketch after this list)
  3. Remove the contents of one disk
  4. mc admin heal -r alias/
  5. Check the deployment ID again; it should be unchanged
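
For steps 2 and 5, the deployment ID can be read from each disk's format.json. Here is a small sketch of such a check in Go; the path layout (<disk>/.minio.sys/format.json) and the top-level "id" field are assumptions about the on-disk format, adjust them to your setup.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

func main() {
	// Path to one disk's format.json; adjust to your deployment
	// (assumed layout: <disk>/.minio.sys/format.json).
	path := "/data1/.minio.sys/format.json"
	if len(os.Args) > 1 {
		path = os.Args[1]
	}

	raw, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}

	// Only the top-level "id" field (the deployment ID) is of interest here.
	var format struct {
		ID string `json:"id"`
	}
	if err := json.Unmarshal(raw, &format); err != nil {
		log.Fatal(err)
	}
	fmt.Println("deployment ID:", format.ID)
}
```

Running it against the same disk before and after mc admin heal -r alias/ should print the same ID once this fix is in place.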

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)

Checklist:

  • My code follows the code style of this project.
  • My change requires a change to the documentation.
  • I have updated the documentation accordingly.
  • I have added unit tests to cover my changes.
  • I have added/updated functional tests in mint. (If yes, add mint PR # here: )
  • All new and existing tests passed.

Member

@harshavardhana left a comment


Thanks LGTM @vadmeste

@minio-ops

Mint Automation

Test Result
mint-tls.sh ✔️
mint-compression-xl.sh ✔️
mint-xl.sh ✔️
mint-gateway-nas.sh ✔️
mint-compression-fs.sh ✔️
mint-worm.sh ✔️
mint-fs.sh ✔️
mint-dist-xl.sh ✔️
mint-large-bucket.sh ❌ (log below)

7126-71746ef/mint-large-bucket.sh.log:

Running with
SERVER_ENDPOINT:      minikube:30507
ACCESS_KEY:           minio
SECRET_KEY:           ***REDACTED***
ENABLE_HTTPS:         0
SERVER_REGION:        us-east-1
MINT_DATA_DIR:        /mint/data
MINT_MODE:            full
ENABLE_VIRTUAL_STYLE: 0

To get logs, run 'docker cp 0f042dab74cd:/mint/log /tmp/mint-logs'

(1/13) Running awscli tests ... done in 3 minutes and 42 seconds
(2/13) Running aws-sdk-go tests ... done in 3 seconds
(3/13) Running aws-sdk-java tests ... done in 8 seconds
(4/13) Running aws-sdk-php tests ... done in 1 minutes and 1 seconds
(5/13) Running aws-sdk-ruby tests ... done in 28 seconds
(6/13) Running mc tests ... done in 2 minutes and 6 seconds
(7/13) Running minio-dotnet tests ... FAILED in 5 minutes and 1 seconds

Unhandled Exception: System.AggregateException: One or more errors occurred. (Minio API responded with message=Multiple disks failures, unable to write data.) [same message repeated for each failed request] ---> System.AggregateException: One or more errors occurred. (Minio API responded with message=Multiple disks failures, unable to write data.) [repeated] ---> Minio.Exceptions.MinioException: Minio API responded with message=Multiple disks failures, unable to write data.
   at Minio.MinioClient.ParseError(IRestResponse response) in /q/.q/sources/minio-dotnet/Minio/MinioClient.cs:line 371
   at Minio.MinioClient.HandleIfErrorResponse(IRestResponse response, IEnumerable`1 handlers, DateTime startTime) in /q/.q/sources/minio-dotnet/Minio/MinioClient.cs:line 513
   at Minio.MinioClient.ExecuteTaskAsync(IEnumerable`1 errorHandlers, IRestRequest request, CancellationToken cancellationToken) in /q/.q/sources/minio-dotnet/Minio/MinioClient.cs:line 359
   at Minio.MinioClient.PutObjectAsync(String bucketName, String objectName, String uploadId, Int32 partNumber, Byte[] data, Dictionary`2 metaData, Dictionary`2 sseHeaders, CancellationToken cancellationToken) in /q/.q/sources/minio-dotnet/Minio/ApiEndpoints/ObjectOperations.cs:line 492
   at Minio.MinioClient.PutObjectAsync(String bucketName, String objectName, Stream data, Int64 size, String contentType, Dictionary`2 metaData, ServerSideEncryption sse, CancellationToken cancellationToken) in /q/.q/sources/minio-dotnet/Minio/ApiEndpoints/ObjectOperations.cs:line 254
   at Minio.Functional.Tests.FunctionalTest.PutObject_Task(MinioClient minio, String bucketName, String objectName, String fileName, String contentType, Int64 size, Dictionary`2 metaData, MemoryStream mstream) in /mint/run/core/minio-dotnet/FunctionalTest.cs:line 923
   --- End of inner exception stack trace ---
   at System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken)
   at System.Threading.Tasks.Task.Wait()
   at Minio.Functional.Tests.FunctionalTest.RemoveObjects_Test2(MinioClient minio) in /mint/run/core/minio-dotnet/FunctionalTest.cs:line 2134
   --- End of inner exception stack trace ---
   at System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken)
   at System.Threading.Tasks.Task.Wait()
   at Minio.Functional.Tests.FunctionalTest.Main(String[] args) in /mint/run/core/minio-dotnet/FunctionalTest.cs:line 221

Executed 6 out of 13 tests successfully.

@codecov

codecov bot commented Jan 21, 2019

Codecov Report

Merging #7126 into master will decrease coverage by 0.02%.
The diff coverage is 100%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master    #7126      +/-   ##
==========================================
- Coverage   52.37%   52.34%   -0.03%     
==========================================
  Files         275      275              
  Lines       42970    42971       +1     
==========================================
- Hits        22504    22494      -10     
- Misses      18382    18389       +7     
- Partials     2084     2088       +4
Impacted Files Coverage Δ
cmd/format-xl.go 64.89% <100%> (+0.06%) ⬆️
cmd/posix-list-dir_windows.go 60.29% <0%> (-4.42%) ⬇️
cmd/fs-v1-helpers.go 68.19% <0%> (-0.62%) ⬇️
cmd/fs-v1.go 63.88% <0%> (-0.34%) ⬇️
cmd/posix.go 65.19% <0%> (-0.33%) ⬇️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 5353edc...71746ef.

Member

@harshavardhana left a comment


LGTM tested

@kannappanr kannappanr merged commit dc2348d into minio:master Jan 23, 2019
5 participants