
"mc admin heal" fails with "internal error" starting with RELEASE.2023-12-09. #18783

Closed
JeffByers-SF opened this issue Jan 12, 2024 · 3 comments · Fixed by #18784

Comments

@JeffByers-SF

NOTE

If this case is urgent, please subscribe to Subnet so that our 24/7 support team may help you faster.

The "mc admin heal" command fails with "internal error" starting with
minio RELEASE.2023-12-09T18-17-51Z. It worked before then. "mc admin
heal --recursive" still works.

Expected Behavior

There should be no internal error; the heal command should work.

Current Behavior

You get an internal error and the command fails.

Possible Solution

Steps to Reproduce (for bugs)

minio --version

minio version RELEASE.2024-01-11T07-46-16Z (commit-id=099e88516dd6450ff210606abb06f38051d3bb6a)
Runtime: go1.21.5 linux/amd64

minio server --address :8399 http://10.1.1.231/minio/disk1 http://10.1.1.231/minio/disk2 http://10.1.1.231/minio/disk3 http://10.1.1.231/minio/disk4 &

export MC_HOST_s3_8399="http://minioadmin:minioadmin@10.1.1.231:8399"

mc --no-color stat s3_8399/

mc --no-color admin heal s3_8399/

API: BackgroundHealStatus()
Time: 19:25:37 UTC 01/12/2024
DeploymentID: 60808010-c04e-4953-9a34-d4d4b13d13ac
RequestID: 17A9B014BBD10B44
RemoteHost: 10.1.1.231
Host: 10.1.1.231:8399
UserAgent: MinIO (linux; amd64) madmin-go/2.0.0 mc-RELEASE.2024-01-11T05-49-32Z/RELEASE.2024-01-11T05-49-32Z
Error: all remote servers failed to report heal status, cluster is unhealthy (*errors.errorString)
6: internal/logger/logger.go:259:logger.LogIf()
5: cmd/api-errors.go:2370:cmd.toAPIErrorCode()
4: cmd/admin-handler-utils.go:232:cmd.toAdminAPIErrCode()
3: cmd/admin-handler-utils.go:220:cmd.toAdminAPIErr()
2: cmd/admin-handlers.go:1121:cmd.adminAPIHandlers.BackgroundHealStatusHandler()
1: net/http/server.go:2136:http.HandlerFunc.ServeHTTP()
mc-RELEASE.2024-01-11T05-49-32Z: Unable to get background heal status. We encountered an internal error, please try again. (all remote servers failed to report heal status, cluster is unhealthy).

mc --no-color admin info s3_8399/

●  10.1.1.231:8399
Uptime: 6 minutes
Version: 2024-01-11T07:46:16Z
Network: 1/1 OK
Drives: 4/4 OK
Pool: 1

Pools:
1st, Erasure sets: 1, Drives per erasure set: 4

4 drives online, 0 drives offline

mc --no-color admin heal --recursive s3_8399/

◉ [bucket-metadata].minio.sys/config/iam/format.json
0/0 objects; 0 B in 1s
┌────────┬───┬─────────────────────┐
│ Green  │ 2 │ 100.0% ████████████ │
│ Yellow │ 0 │   0.0%              │
│ Red    │ 0 │   0.0%              │
│ Grey   │ 0 │   0.0%              │
└────────┴───┴─────────────────────┘

Context

Trying to check the healing status, if any.

Regression

Yes, starting with RELEASE.2023-12-09T18-17-51Z.

Your Environment

  • Version used (minio --version):
    RELEASE.2024-01-11T07-46-16Z
  • Server setup and configuration:
    Simple reproducer test env.
  • Operating System and version (uname -a):
    CentOS Linux.
@harshavardhana
Member

@JeffByers-SF how did you start the server?

@JeffByers-SF
Author

JeffByers-SF commented Jan 12, 2024

# minio server --address :8399 http://10.1.1.231/minio/disk1 http://10.1.1.231/minio/disk2 http://10.1.1.231/minio/disk3 http://10.1.1.231/minio/disk4

@harshavardhana
Member

All servers are local, so what is the point of running a distributed setup like this? Just run it:

minio server --address :8399 /minio/disk{1...4}

Then mc admin heal will work fine. You are trying to set up a distributed cluster where there are no new nodes.
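
A minimal sketch of the suggested single-node layout, assuming the same drive paths, port, and credentials as the reproducer above; the heal check at the end is the same command that previously failed:

# Single-node erasure setup; the {1...4} expansion is handled by minio itself, not the shell.
minio server --address :8399 /minio/disk{1...4} &

export MC_HOST_s3_8399="http://minioadmin:minioadmin@10.1.1.231:8399"

# With no remote peers to query for heal status, the background heal status call should succeed.
mc --no-color admin heal s3_8399/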
