
[Bug]: The created collection cannot be loaded and cannot be written to. #33366

Closed
waitwindy opened this issue May 24, 2024 · 11 comments
Assignees
Labels
kind/bug Issues or changes related to a bug stale indicates no updates for 30 days triage/needs-information Indicates an issue needs more information in order to work on it.

Comments

@waitwindy

Is there an existing issue for this?

  • I have searched the existing issues

Environment

- Milvus version: 2.3.5
- Deployment mode (standalone or cluster): cluster
- MQ type (rocksmq, pulsar or kafka): kafka
- SDK version (e.g. pymilvus v2.0.0rc2): pymilvus 2.4
- OS (Ubuntu or CentOS): CentOS
- CPU/Memory:
- GPU:
- Others:

Current Behavior

When I created a Milvus collection in the attu tool, I could not perform the load operation, and inserting the sample data failed with the error "deny to write the message to mq". Checking etcd shows that some historical, already-deleted collections are still stored in the meta.
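For reference, this is roughly how the leftover metadata can be inspected — a minimal sketch assuming the python-etcd3 package and the default `by-dev` etcd rootPath (both are assumptions; adjust host, port, and prefix to your deployment):

```python
# Minimal sketch: list Milvus metadata keys still present in etcd.
# Assumes the python-etcd3 package and the default "by-dev" rootPath.
import etcd3

client = etcd3.client(host="127.0.0.1", port=2379)

# Every key under the Milvus meta prefix; collections that were dropped but
# not yet garbage-collected will still show up here.
for value, meta in client.get_prefix("by-dev/meta/"):
    print(meta.key.decode())
```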

Expected Behavior

No response

Steps To Reproduce

1. attu creates a new collection
2. attu writes the sample data (a minimal pymilvus sketch of the same flow is shown below)
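A minimal pymilvus sketch of the same flow outside attu (the collection name, schema, and connection parameters are illustrative assumptions); on the affected cluster the insert fails with the mq error shown in the log below:

```python
# Minimal sketch reproducing the attu flow with pymilvus.
# Names and schema are illustrative; adjust the connection parameters.
from pymilvus import (
    connections, Collection, CollectionSchema, FieldSchema, DataType,
)

connections.connect(host="127.0.0.1", port="19530")

schema = CollectionSchema([
    FieldSchema("id", DataType.INT64, is_primary=True, auto_id=True),
    FieldSchema("vector", DataType.FLOAT_VECTOR, dim=8),
])
coll = Collection("repro_collection", schema)  # step 1: create the collection

# step 2: insert sample data -- on the affected cluster this is where
# "deny to write the message to mq" is returned
coll.insert([[[0.1] * 8 for _ in range(10)]])

coll.create_index("vector", {"index_type": "IVF_FLAT",
                             "metric_type": "L2",
                             "params": {"nlist": 128}})
coll.load()  # the load also never completes on the affected cluster
```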

Milvus Log

[2024/05/23 10:00:01.916 +00:00] [ERROR] [rootcoord/dml_channels.go:282] ["Broadcast failed"] [error="deny to write the message to mq"] [chanName=kfk-topic-2-rootcoord-dml_5] [stack="github.com/milvus-io/milvus/internal/rootcoord.(*dmlChannels).broadcast\n\t/go/src/github.com/milvus-io/milvus/internal/rootcoord/dml_channels.go:282\ngithub.com/milvus-io/milvus/internal/rootcoord.(*timetickSync).broadcastDmlChannels\n\t/go/src/github.com/milvus-io/milvus/internal/rootcoord/timeticksync.go:392\ngithub.com/milvus-io/milvus/internal/rootcoord.(*bgGarbageCollector).notifyCollectionGc\n\t/go/src/github.com/milvus-io/milvus/internal/rootcoord/garbage_collector.go:190\ngithub.com/milvus-io/milvus/internal/rootcoord.(*bgGarbageCollector).GcCollectionData\n\t/go/src/github.com/milvus-io/milvus/internal/rootcoord/garbage_collector.go:236\ngithub.com/milvus-io/milvus/internal/rootcoord.(*deleteCollectionDataStep).Execute\n\t/go/src/github.com/milvus-io/milvus/internal/rootcoord/step.go:197\ngithub.com/milvus-io/milvus/internal/rootcoord.(*stepStack).Execute\n\t/go/src/github.com/milvus-io/milvus/internal/rootcoord/step_executor.go:59\ngithub.com/milvus-io/milvus/internal/rootcoord.(*bgStepExecutor).process.func1\n\t/go/src/github.com/milvus-io/milvus/internal/rootcoord/step_executor.go:201"]

[2024/05/23 10:02:00.675 +00:00] [WARN] [datacoord/index_service.go:264] ["there are multiple indexes, please specify the index_name"] [traceID=afdcb4804255e2e0fd1f9ee7b48c14bc] [collectionID=448145066652305196] [indexName=]

[2024/05/23 09:59:00.376 +00:00] [WARN] [kafka/kafka_consumer.go:138] ["consume msg failed"] [topic=kfk-topic-2-rootcoord-dml_6] [groupID=datanode-147-kfk-topic-2-rootcoord-dml_6_448145066652305348v0-true] [error="Local: Timed out"]

error.log

Anything else?

No response

@waitwindy waitwindy added kind/bug Issues or changes related to a bug needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels May 24, 2024
@yhmo
Contributor

yhmo commented May 24, 2024

[2024/05/23 09:58:36.142 +00:00] [WARN] [timerecord/time_recorder.go:134] ["RootCoord haven't synchronized the time tick for 2.000000 minutes"]

This warning indicates that etcd or the message queue isn't working.
Double-check the etcd state (logs) and the disk space.
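For the etcd part of that check, a minimal sketch assuming the python-etcd3 package (host and port are placeholders):

```python
# Minimal sketch: confirm etcd answers and report its database size,
# assuming the python-etcd3 package. A full disk usually shows up as
# etcd write failures or alarms rather than read failures.
import etcd3

client = etcd3.client(host="127.0.0.1", port=2379)
status = client.status()
print("etcd version:", status.version)
print("db size (bytes):", status.db_size)
```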

@yanliang567
Contributor

/assign @waitwindy
/unassign

@yanliang567 yanliang567 added triage/needs-information Indicates an issue needs more information in order to work on it. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels May 24, 2024
@waitwindy
Author

But now etcd is readable by the client

@xiaofan-luan
Collaborator

it seems to be a kafka error, not an etcd issue

@waitwindy
Author

But there is another milvus cluster that works fine with this Kafka cluster.

@xiaofan-luan
Collaborator

Are they using different topic names?

@xiaofan-luan
Collaborator

From the error log, Milvus failed to consume the Kafka message.
If two Milvus clusters share one Kafka cluster, they need to use different prefixes.
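To compare the prefixes from the broker side, a minimal sketch assuming the kafka-python package (the bootstrap address is a placeholder):

```python
# Minimal sketch: group the existing Kafka topics by their Milvus channel
# prefix to see whether two clusters are writing to the same topics.
# Assumes the kafka-python package; replace the bootstrap server address.
from collections import defaultdict
from kafka import KafkaConsumer

consumer = KafkaConsumer(bootstrap_servers="kafka:9092")
by_prefix = defaultdict(list)
for topic in sorted(consumer.topics()):
    # Milvus DML topics look like "<cluster-prefix>-rootcoord-dml_<n>"
    if "-rootcoord-dml_" in topic:
        by_prefix[topic.split("-rootcoord-dml_")[0]].append(topic)

for prefix, topics in by_prefix.items():
    print(prefix, len(topics), "dml topics")
```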

@waitwindy
Author

Yes, their prefixes are different
(screenshot attached)

@xiaofan-luan
Collaborator

> Yes, their prefixes are different

Could this be a config issue? You have to figure out why Kafka times out.

@waitwindy
Author

Maybe Kafka's consumer group can't be found. I found that the Kafka topic exists, but the consumer group doesn't show up in the existing tools.
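For reference, a minimal sketch of that consumer-group check, assuming the kafka-python package (the bootstrap address is a placeholder; the group-id substring comes from the "consume msg failed" warning above):

```python
# Minimal sketch: list consumer groups on the broker and look for the
# datanode group from the "consume msg failed" warning.
# Assumes the kafka-python package; replace the bootstrap server address.
from kafka.admin import KafkaAdminClient

admin = KafkaAdminClient(bootstrap_servers="kafka:9092")

# list_consumer_groups() returns (group_id, protocol_type) tuples
groups = [g for g, _ in admin.list_consumer_groups()]
wanted = [g for g in groups if "rootcoord-dml" in g]
print("matching groups:", wanted or "none found")
```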


stale bot commented Jun 26, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen.

@stale stale bot added the stale indicates no updates for 30 days label Jun 26, 2024
@stale stale bot closed this as completed Jul 8, 2024