[Bug]: when I try to run connections.connect, it raised grpc.FutureTimeoutError #28697

Closed · 1 task done
zhaowenZhou opened this issue Nov 23, 2023 · 16 comments
Labels: help wanted (Extra attention is needed), stale (indicates no updates for 30 days)

Comments

@zhaowenZhou

Is there an existing issue for this?

  • I have searched the existing issues

Environment

- Milvus version: 2.3.3
- Deployment mode(standalone or cluster): standalone
- MQ type(rocksmq, pulsar or kafka):    
- SDK version(e.g. pymilvus v2.0.0rc2): pymilvus 2.3.3
- OS(Ubuntu or CentOS): Ubuntu
- CPU/Memory: 
- GPU: 
- Others:

Current Behavior

MinIO does not show up in sudo docker compose ps after running sudo docker-compose up -d.
As the title says, when I try to run
connections.connect("default", host="localhost", port="19530")
it raises grpc.FutureTimeoutError.
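
For reference, these are the steps behind the report above, assuming the default Milvus standalone docker-compose.yaml is used from its own directory:

# Bring up the standalone stack and list the running containers.
# Expected: etcd, minio and the standalone service all up; here the minio
# container is missing from the output.
sudo docker-compose up -d
sudo docker compose ps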

Expected Behavior

No response

Steps To Reproduce

No response

Milvus Log

No response

Anything else?

No response

@zhaowenZhou zhaowenZhou added kind/bug Issues or changes related to a bug needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Nov 23, 2023
@zhaowenZhou
Author

I do have the Milvus log, but it's too long to paste here in full. There are lots of errors; I picked one at random and pasted it here:
milvus-standalone | [2023/11/23 10:57:18.260 +00:00] [ERROR] [grpcclient/client.go:429] ["retry func failed"] ["retry time"=4] [error="empty grpc client: find no available datacoord, check datacoord state"] [stack="github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).call\n\t/go/src/github.com/milvus-io/milvus/internal/util/grpcclient/client.go:429\ngithub.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).Call\n\t/go/src/github.com/milvus-io/milvus/internal/util/grpcclient/client.go:513\ngithub.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).ReCall\n\t/go/src/github.com/milvus-io/milvus/internal/util/grpcclient/client.go:529\ngithub.com/milvus-io/milvus/internal/distributed/datacoord/client.wrapGrpcCall[...]\n\t/go/src/github.com/milvus-io/milvus/internal/distributed/datacoord/client/client.go:102\ngithub.com/milvus-io/milvus/internal/distributed/datacoord/client.(*Client).GetMetrics\n\t/go/src/github.com/milvus-io/milvus/internal/distributed/datacoord/client/client.go:397\ngithub.com/milvus-io/milvus/internal/rootcoord.(*QuotaCenter).syncMetrics.func2\n\t/go/src/github.com/milvus-io/milvus/internal/rootcoord/quota_center.go:216\ngolang.org/x/sync/errgroup.(*Group).Go.func1\n\t/go/pkg/mod/golang.org/x/sync@v0.1.0/errgroup/errgroup.go:75"]

@zhaowenZhou
Author

This one is the latest:
milvus-standalone | [2023/11/23 10:57:50.859 +00:00] [ERROR] [components/query_node.go:54] ["QueryNode starts error"] [error="attempt #0: Get "http://minio:9000/a-bucket/?location=": dial tcp: lookup minio on 127.0.0.11:53: server misbehaving: attempt #1: Get "http://minio:9000/a-bucket/?location=": dial tcp: lookup minio on 127.0.0.11:53: server misbehaving: attempt #2: Get "http://minio:9000/a-bucket/?location=": dial tcp: lookup minio on 127.0.0.11:53: server misbehaving: attempt #3: Get "http://minio:9000/a-bucket/?location=": dial tcp: lookup minio on 127.0.0.11:53: server misbehaving: attempt #4: Get "http://minio:9000/a-bucket/?location=": dial tcp: lookup minio on 127.0.0.11:53: server misbehaving: attempt #5: Get "http://minio:9000/a-bucket/?location=": dial tcp: lookup minio on 127.0.0.11:53: server misbehaving: attempt #6: Get "http://minio:9000/a-bucket/?location=": dial tcp: lookup minio on 127.0.0.11:53: server misbehaving: attempt #7: Get "http://minio:9000/a-bucket/?location=": dial tcp: lookup minio on 127.0.0.11:53: server misbehaving: attempt #8: Get "http://minio:9000/a-bucket/?location=": dial tcp: lookup minio on 127.0.0.11:53: server misbehaving: attempt #9: Get "http://minio:9000/a-bucket/?location=": dial tcp: lookup minio on 127.0.0.11:53: server misbehaving: attempt #10: Get "http://minio:9000/a-bucket/?location=": dial tcp: lookup minio on 127.0.0.11:53: server misbehaving: attempt #11: Get "http://minio:9000/a-bucket/?location=": dial tcp: lookup minio on 127.0.0.11:53: server misbehaving: attempt #12: Get "http://minio:9000/a-bucket/?location=": dial tcp: lookup minio on 127.0.0.11:53: server misbehaving: attempt #13: Get "http://minio:9000/a-bucket/?location=": dial tcp: lookup minio on 127.0.0.11:53: server misbehaving: attempt #14: Get "http://minio:9000/a-bucket/?location=": dial tcp: lookup minio on 127.0.0.11:53: server misbehaving: attempt #15: Get "http://minio:9000/a-bucket/?location=": dial tcp: lookup minio on 127.0.0.11:53: server misbehaving: attempt #16: Get "http://minio:9000/a-bucket/?location=": dial tcp: lookup minio on 127.0.0.11:53: server misbehaving: attempt #17: Get "http://minio:9000/a-bucket/?location=": dial tcp: lookup minio on 127.0.0.11:53: server misbehaving: attempt #18: Get "http://minio:9000/a-bucket/?location=": dial tcp: lookup minio on 127.0.0.11:53: server misbehaving: attempt #19: Get "http://minio:9000/a-bucket/?location=": dial tcp: lookup minio on 127.0.0.11:53: server misbehaving"] [stack="github.com/milvus-io/milvus/cmd/components.(*QueryNode).Run\n\t/go/src/github.com/milvus-io/milvus/cmd/components/query_node.go:54\ngithub.com/milvus-io/milvus/cmd/roles.runComponent[...].func1\n\t/go/src/github.com/milvus-io/milvus/cmd/roles/roles.go:113"]

@yanliang567
Contributor

It seems that your MinIO service is not working; please double-check it. You can also use docker-compose logs > milvus.log to export the logs.
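
In concrete terms, something like the following, run from the directory containing docker-compose.yaml (the service name minio matches the default compose file):

# Inspect the MinIO container on its own, then export all service logs so
# they can be attached to the issue.
sudo docker compose logs minio
sudo docker compose logs > milvus.log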

@yanliang567
Contributor

/assign @zhaowenZhou

@yanliang567 yanliang567 added triage/needs-information Indicates an issue needs more information in order to work on it. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Nov 23, 2023
@zhaowenZhou
Author

It seems that your MinIO service is not working; please double-check it. You can also use docker-compose logs > milvus.log to export the logs.

How do I make it work? I followed the instructions provided by the tutorial: sudo docker-compose up -d

@zhaowenZhou
Author

I saved all the logs to a txt file:
log.txt

@zhaowenZhou
Author

@yanliang567

@yanliang567
Contributor

Have you deployed Milvus before, and did you clean up the Milvus volumes? It seems that the current Milvus is trying to connect to etcd that still holds some existing metadata.

milvus-standalone  | [2023/11/23 10:54:58.000 +00:00] [WARN] [server/rocksmq_impl.go:398] ["rocksmq topic already exists "] [topic=by-dev-rootcoord-dml_14]
milvus-standalone  | [2023/11/23 10:54:58.000 +00:00] [INFO] [msgstream/common_mq_factory.go:31] ["Msg Stream state"] [can_produce=true]
milvus-standalone  | [2023/11/23 10:54:58.000 +00:00] [WARN] [server/rocksmq_impl.go:398] ["rocksmq topic already exists "] [topic=by-dev-rootcoord-dml_15]
milvus-standalone  | [2023/11/23 10:54:58.000 +00:00] [INFO] [rootcoord/dml_channels.go:215] ["init dml channels"] [prefix=by-dev-rootcoord-dml] [num=16]
milvus-standalone  | [2023/11/23 10:54:58.000 +00:00] [INFO] [rootcoord/timeticksync.go:126] ["recover physical channels"] [collectionID=444979618623520769] ["physical channels"="[by-dev-rootcoord-dml_0,by-dev-rootcoord-dml_1]"]
milvus-standalone  | [2023/11/23 10:54:58.000 +00:00] [INFO] [rootcoord/timeticksync.go:126] ["recover physical channels"] [collectionID=445536272938372417] ["physical channels"="[by-dev-rootcoord-dml_0,by-dev-rootcoord-dml_1]"]
milvus-standalone  | [2023/11/23 10:54:58.000 +00:00] [INFO] [rootcoord/timeticksync.go:126] ["recover physical channels"] [collectionID=443058036732329985] ["physical channels"="[by-dev-rootcoord-dml_10,by-dev-rootcoord-dml_11]"]
milvus-standalone  | [2023/11/23 10:54:58.000 +00:00] [INFO] [rootcoord/timeticksync.go:126] ["recover physical channels"] [collectionID=444429165594673153] ["physical channels"="[by-dev-rootcoord-dml_0,by-dev-rootcoord-dml_1]"]

@zhaowenZhou
Author

Have you deployed Milvus before, and did you clean up the Milvus volumes? It seems that the current Milvus is trying to connect to etcd that still holds some existing metadata.

milvus-standalone  | [2023/11/23 10:54:58.000 +00:00] [WARN] [server/rocksmq_impl.go:398] ["rocksmq topic already exists "] [topic=by-dev-rootcoord-dml_14]
milvus-standalone  | [2023/11/23 10:54:58.000 +00:00] [INFO] [msgstream/common_mq_factory.go:31] ["Msg Stream state"] [can_produce=true]
milvus-standalone  | [2023/11/23 10:54:58.000 +00:00] [WARN] [server/rocksmq_impl.go:398] ["rocksmq topic already exists "] [topic=by-dev-rootcoord-dml_15]
milvus-standalone  | [2023/11/23 10:54:58.000 +00:00] [INFO] [rootcoord/dml_channels.go:215] ["init dml channels"] [prefix=by-dev-rootcoord-dml] [num=16]
milvus-standalone  | [2023/11/23 10:54:58.000 +00:00] [INFO] [rootcoord/timeticksync.go:126] ["recover physical channels"] [collectionID=444979618623520769] ["physical channels"="[by-dev-rootcoord-dml_0,by-dev-rootcoord-dml_1]"]
milvus-standalone  | [2023/11/23 10:54:58.000 +00:00] [INFO] [rootcoord/timeticksync.go:126] ["recover physical channels"] [collectionID=445536272938372417] ["physical channels"="[by-dev-rootcoord-dml_0,by-dev-rootcoord-dml_1]"]
milvus-standalone  | [2023/11/23 10:54:58.000 +00:00] [INFO] [rootcoord/timeticksync.go:126] ["recover physical channels"] [collectionID=443058036732329985] ["physical channels"="[by-dev-rootcoord-dml_10,by-dev-rootcoord-dml_11]"]
milvus-standalone  | [2023/11/23 10:54:58.000 +00:00] [INFO] [rootcoord/timeticksync.go:126] ["recover physical channels"] [collectionID=444429165594673153] ["physical channels"="[by-dev-rootcoord-dml_0,by-dev-rootcoord-dml_1]"]

You are right, I deployed it before. Could you show me how to clean up the volumes? And after I've done that, what's the next step? Thank you.

@yanliang567
Contributor

Just delete the volumes/ folder, which is located in the same directory as docker-compose.yaml,
or create a new folder and copy the yaml into it.
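
Concretely, a minimal sketch of that clean-up, assuming the data directory is the volumes/ folder next to docker-compose.yaml; note that this deletes all previously stored data:

# Stop the running containers, remove the old data directory, then start fresh.
sudo docker compose down
sudo rm -rf ./volumes
sudo docker compose up -d
sudo docker compose ps   # etcd, minio and the standalone container should all be up now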

@zhaowenZhou
Author

Just delete the volumes/ folder, which is located in the same directory as docker-compose.yaml, or create a new folder and copy the yaml into it.

A big thank you for your great help. BTW, could you tell me the logic behind it? Why did I lose the connection to the db yesterday, and what happened after I deleted the volumes folder?

@yanliang567
Contributor

Milvus was not running healthily yesterday, so you were not able to connect to it.

@yanliang567 yanliang567 removed their assignment Nov 24, 2023
@yanliang567 yanliang567 added help wanted Extra attention is needed and removed kind/bug Issues or changes related to a bug triage/needs-information Indicates an issue needs more information in order to work on it. labels Nov 24, 2023
@zhaowenZhou
Author

zhaowenZhou commented Nov 27, 2023 via email

@zhaowenZhou
Author

Milvus was not running healthily yesterday, so you were not able to connect to it.

Sorry for asking again.
What does deleting the volumes folder mean? Does it remove the existing data? If so, that is quite dangerous.
Is there any way I can keep the previous data and reconnect to Milvus?

@yanliang567
Contributor

Then you can create a new folder and run docker compose in the newly created folder.
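
For example (the directory name milvus-fresh is just a placeholder); this leaves the old volumes/ folder and its data untouched:

# Start a second deployment from an empty directory so Milvus begins with no
# pre-existing etcd metadata or MinIO data.
mkdir milvus-fresh
cp docker-compose.yaml milvus-fresh/
cd milvus-fresh
sudo docker compose up -d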


stale bot commented Dec 28, 2023

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen.

@stale stale bot added the stale indicates no updates for 30 days label Dec 28, 2023
@stale stale bot closed this as completed Jan 5, 2024