[Bug]: milvus-proxy panic when data continues to be written while a collection is being deleted #24767
Comments
/assign @jiaoew1991
/assign @jaime0815
It seems we need to implement a reference-counting policy in channel_mgr.go.
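A minimal sketch of what such a policy could look like. Everything here is a hypothetical stand-in, not the real channel_mgr.go code: the `stream` interface stands in for msgstream.MsgStream, and `acquire`/`release` are made-up method names.

```go
package main

import (
	"fmt"
	"sync"
)

// stream stands in for msgstream.MsgStream; Close tears down the underlying
// Kafka producer, which is what currently races with in-flight inserts.
type stream interface{ Close() }

type fakeStream struct{ collID int64 }

func (f *fakeStream) Close() { fmt.Printf("stream for collection %d closed\n", f.collID) }

// refStream pairs a stream with the number of tasks currently using it.
type refStream struct {
	s      stream
	refCnt int
}

// channelsMgr is a hypothetical reference-counted channel manager: the drop
// path only closes a stream once the last in-flight task has released it.
type channelsMgr struct {
	mu      sync.Mutex
	streams map[int64]*refStream
}

func (m *channelsMgr) acquire(collID int64) stream {
	m.mu.Lock()
	defer m.mu.Unlock()
	rs, ok := m.streams[collID]
	if !ok {
		rs = &refStream{s: &fakeStream{collID: collID}}
		m.streams[collID] = rs
	}
	rs.refCnt++
	return rs.s
}

func (m *channelsMgr) release(collID int64) {
	m.mu.Lock()
	defer m.mu.Unlock()
	rs, ok := m.streams[collID]
	if !ok {
		return
	}
	rs.refCnt--
	if rs.refCnt == 0 {
		rs.s.Close() // safe: nobody can still Produce on this stream
		delete(m.streams, collID)
	}
}

func main() {
	mgr := &channelsMgr{streams: map[int64]*refStream{}}
	s1 := mgr.acquire(1) // insert task A takes a reference
	s2 := mgr.acquire(1) // insert task B shares the same stream
	_, _ = s1, s2
	mgr.release(1) // task A done: stream stays open
	mgr.release(1) // task B done: stream is closed exactly once
}
```

DropCollection could then stop handing out the stream immediately, while inserts that are already running finish against a still-open producer.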
@xiaofan-luan @jaime0815 Here is a test for this problem:
The results show that the first test does not panic: the call to Send returns an error, but the program keeps running. The second test does panic. In other words, after the Kafka producer is closed, calling Send returns normally and does not panic immediately, but the Kafka event loop panics after a period of time, which leaves Milvus unable to work.
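The test itself was not captured above; this is a standalone reconstruction of the second case (my sketch, assuming confluent-kafka-go v1.9.1 and a broker at localhost:9092):

```go
package main

import (
	"fmt"
	"time"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	p, err := kafka.NewProducer(&kafka.ConfigMap{"bootstrap.servers": "localhost:9092"})
	if err != nil {
		panic(err)
	}
	topic := "test-topic"
	deliveryChan := make(chan kafka.Event, 1)

	// First send: the delivery report arrives on deliveryChan as expected.
	_ = p.Produce(&kafka.Message{
		TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
		Value:          []byte("before close"),
	}, deliveryChan)
	fmt.Println(<-deliveryChan)

	// Simulate the proxy tearing down its producer wrapper on DropCollection:
	// the wrapper's delivery channel gets closed.
	close(deliveryChan)

	// Second send: Produce itself returns without panicking. The panic comes
	// later, from the library's background event loop ((*handle).eventPoll),
	// when it tries to send the delivery report on the closed channel.
	_ = p.Produce(&kafka.Message{
		TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
		Value:          []byte("after close"),
	}, deliveryChan)

	time.Sleep(5 * time.Second) // the delayed panic surfaces here
}
```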
/assign @NicoYuan1986
/close
The issue has been fixed.
/close |
@SimFG: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Is there an existing issue for this?
Environment
Current Behavior
When I drop a collection in Attu while another app is writing data to the collection being dropped, a pop-up prompts “unavailable connection”. Switching to the command line and running
kubectl get pod
shows that the milvus-proxy pod container is restarting.
Expected Behavior
The proxy should not restart.
Steps To Reproduce
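The steps were left blank, but the scenario from Current Behavior can be sketched with the Go SDK. This is a hypothetical reproduction, assuming milvus-sdk-go/v2, Kafka as the message queue, and a pre-existing collection `demo_coll` with an int64 `pk` field and a 128-dim `vec` field:

```go
package main

import (
	"context"
	"log"
	"math/rand"
	"time"

	"github.com/milvus-io/milvus-sdk-go/v2/client"
	"github.com/milvus-io/milvus-sdk-go/v2/entity"
)

func main() {
	ctx := context.Background()
	c, err := client.NewGrpcClient(ctx, "localhost:19530")
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	const coll, dim = "demo_coll", 128

	// Writer: continuously insert rows into the collection.
	go func() {
		for i := int64(0); ; i++ {
			vec := make([]float32, dim)
			for j := range vec {
				vec[j] = rand.Float32()
			}
			_, err := c.Insert(ctx, coll, "",
				entity.NewColumnInt64("pk", []int64{i}),
				entity.NewColumnFloatVector("vec", dim, [][]float32{vec}))
			if err != nil {
				log.Println("insert failed:", err) // expected once the drop lands
			}
		}
	}()

	// Give the writer a head start, then drop the collection underneath it.
	time.Sleep(2 * time.Second)
	if err := c.DropCollection(ctx, coll); err != nil {
		log.Fatal(err)
	}

	// With Kafka as the MQ, the proxy pod may panic and restart shortly after.
	time.Sleep(30 * time.Second)
}
```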
Milvus Log
The log is shown below:
[2023/06/07 08:29:33.943 +00:00] [ERROR] [kafka/kafka_producer.go:63] ["kafka produce message fail because of delivery chan is closed"] [topic=by-dev-rootcoord-dml_3] [stack="github.com/milvus-io/milvus/internal/mq/msgstream/mqwrapper/kafka.(*kafkaProducer).Send\n\t/go/src/github.com/milvus-io/milvus/internal/mq/msgstream/mqwrapper/kafka/kafka_producer.go:63\ngithub.com/milvus-io/milvus/internal/mq/msgstream.(*mqMsgStream).Produce\n\t/go/src/github.com/milvus-io/milvus/internal/mq/msgstream/mq_msgstream.go:281\ngithub.com/milvus-io/milvus/internal/proxy.(*insertTask).Execute\n\t/go/src/github.com/milvus-io/milvus/internal/proxy/task_insert.go:459\ngithub.com/milvus-io/milvus/internal/proxy.(*taskScheduler).processTask\n\t/go/src/github.com/milvus-io/milvus/internal/proxy/task_scheduler.go:470"]
[2023/06/07 08:29:33.943 +00:00] [WARN] [proxy/task_scheduler.go:473] ["Failed to execute task: "] [error="delivery chan of kafka producer is closed"] [traceID=4385263f0b043a8e]
[2023/06/07 08:29:33.943 +00:00] [WARN] [proxy/impl.go:2620] ["Failed to execute insert task in task scheduler: delivery chan of kafka producer is closed"] [traceID=4385263f0b043a8e]
panic:
goroutine 2618709 [running]:
github.com/confluentinc/confluent-kafka-go/kafka.(*handle).eventPoll(0xc00289d6e0, 0x0, 0x3e8?, 0xc01bd85a0)
	/go/pkg/mod/github.com/confluentinc/confluent-kafka-go@v1.9.1/kafka/event.go:251 +0xae5
github.com/confluentinc/confluent-kafka-go/kafka.poller(0xc0e100f600, 0x17faac6?)
	/go/pkg/mod/github.com/confluentinc/confluent-kafka-go@v1.9.1/kafka/producer.go:627 +0x52
github.com/confluentinc/confluent-kafka-go/kafka.NewProducer.func1()
	/go/pkg/mod/github.com/confluentinc/confluent-kafka-go@v1.9.1/kafka/producer.go +0x29
created by github.com/confluentinc/confluent-kafka-go/kafka.NewProducer
	/go/pkg/mod/github.com/confluentinc/confluent-kafka-go@v1.9.1/kafka/producer.go:532 +0x88b
Anything else?
No response