
[Bug]: milvus-proxy panic when data continues to be written while a collection is being deleted #24767

Closed
gateray opened this issue Jun 8, 2023 · 8 comments
Labels: kind/bug, triage/accepted
Milestone: 2.2.10

gateray commented Jun 8, 2023

Is there an existing issue for this?

  • I have searched the existing issues

Environment

- Milvus version: 2.2.8
- Deployment mode(standalone or cluster): cluster
- MQ type(rocksmq, pulsar or kafka): kafka
- SDK version(e.g. pymilvus v2.0.0rc2): 2.2.8
- OS(Ubuntu or CentOS): ubuntu
- CPU/Memory: 32core64GB
- GPU: no
- Others: deployed with Helm, Kafka as message storage

Current Behavior

When I drop a collection in Attu while another application is still writing data to that collection, Attu pops up an "unavailable connection" prompt. Running kubectl get pod on the command line shows that the milvus-proxy pod container is restarting.

Expected Behavior

The proxy should not restart.

Steps To Reproduce

1. Build the environment:
   k8s: 1.22
   OS: Ubuntu 20.04
   external Apache Kafka: 3.2
   external etcd: 3.5.7
   Milvus: 2.2.8, deployed with Helm
2. Start a script that continuously writes data to collection A.
3. Drop collection A in Attu.

Milvus Log

The log is shown below:
[2023/06/07 08:29:33.943 +00:00] [ERROR] [kafka/kafka_producer.go:63] ["kafka produce message fail because of delivery chan is closed"] [topic=by-dev-rootcoord-dml_3] [stack="github.com/milvus-io/milvus/internal/mq/msgstream/mqwrapper/kafka.(*kafkaProducer).Send\n\t/go/src/github.com/milvus-io/milvus/internal/mq/msgstream/mqwrapper/kafka/kafka_producer.go:63\ngithub.com/milvus-io/milvus/internal/mq/msgstream.(*mqMsgStream).Produce\n\t/go/src/github.com/milvus-io/milvus/internal/mq/msgstream/mq_msgstream.go:281\ngithub.com/milvus-io/milvus/internal/proxy.(*insertTask).Execute\n\t/go/src/github.com/milvus-io/milvus/internal/proxy/task_insert.go:459\ngithub.com/milvus-io/milvus/internal/proxy.(*taskScheduler).processTask\n\t/go/src/github.com/milvus-io/milvus/internal/proxy/task_scheduler.go:470"]
[2023/06/07 08:29:33.943 +00:00] [WARN] [proxy/task_scheduler.go:473] ["Failed to execute task: "] [error="delivery chan of kafka producer is closed"] [traceID=4385263f0b043a8e]
[2023/06/07 08:29:33.943 +00:00] [WARN] [proxy/impl.go:2620] ["Failed to execute insert task in task scheduler: delivery chan of kafka producer is closed"] [traceID=4385263f0b043a8e]

panic: ...

goroutine 2618709 [...]:
github.com/confluentinc/confluent-kafka-go/kafka.(*handle).eventPoll(..., 0xc00289d6e0, ...)
	/go/pkg/mod/github.com/confluentinc/confluent-kafka-go@v1.9.1/kafka/event.go:251 +0xae5
github.com/confluentinc/confluent-kafka-go/kafka.poller(0xc0e100f600, 0x17faac6?)
	/go/pkg/mod/github.com/confluentinc/confluent-kafka-go@v1.9.1/kafka/producer.go:627 +0x52
github.com/confluentinc/confluent-kafka-go/kafka.NewProducer.func...(0x0, 0x3e8?, 0xc01bd85a0)
	/go/pkg/mod/github.com/confluentinc/confluent-kafka-go@v1.9.1/kafka/producer.go:... +0x29
created by github.com/confluentinc/confluent-kafka-go/kafka.NewProducer
	/go/pkg/mod/github.com/confluentinc/confluent-kafka-go@v1.9.1/kafka/producer.go:532 +0x88b

Anything else?

No response

@gateray gateray added kind/bug Issues or changes related a bug needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Jun 8, 2023
yanliang567 (Contributor) commented:

/assign @jiaoew1991
/unassign

@yanliang567 yanliang567 added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Jun 9, 2023
@yanliang567 yanliang567 added this to the 2.2.10 milestone Jun 9, 2023
xiaofan-luan (Contributor) commented:

/assign @jaime0815

xiaofan-luan (Contributor) commented:

It seems we need to implement a reference-count policy in channel_mgr.go.
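
For illustration, a minimal sketch of what such a reference count over per-channel producers could look like (all names below are hypothetical; the real channel_mgr.go in the proxy is considerably more involved):

```go
package main

import (
	"fmt"
	"sync"
)

// producer stands in for a per-channel Kafka producer; the real proxy code
// wraps confluent-kafka-go. Everything in this file is a hypothetical sketch.
type producer struct{ topic string }

func (p *producer) Close() { fmt.Println("closing producer for", p.topic) }

// refCountedChannels hands out one shared producer per DML channel and only
// closes it when the last user releases it, so an in-flight insert cannot race
// with a drop-collection that tears the producer down.
type refCountedChannels struct {
	mu   sync.Mutex
	refs map[string]int
	prod map[string]*producer
}

func newRefCountedChannels() *refCountedChannels {
	return &refCountedChannels{refs: map[string]int{}, prod: map[string]*producer{}}
}

// Acquire returns the shared producer for a channel, creating it on first use.
func (c *refCountedChannels) Acquire(channel string) *producer {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.prod[channel] == nil {
		c.prod[channel] = &producer{topic: channel}
	}
	c.refs[channel]++
	return c.prod[channel]
}

// Release decrements the count and closes the producer only when nobody else
// still holds a reference.
func (c *refCountedChannels) Release(channel string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.refs[channel]--; c.refs[channel] <= 0 {
		if p := c.prod[channel]; p != nil {
			p.Close()
		}
		delete(c.prod, channel)
		delete(c.refs, channel)
	}
}

func main() {
	mgr := newRefCountedChannels()
	p1 := mgr.Acquire("by-dev-rootcoord-dml_3") // insert task
	_ = mgr.Acquire("by-dev-rootcoord-dml_3")   // drop-collection path
	mgr.Release("by-dev-rootcoord-dml_3")       // drop finishes, producer stays open
	_ = p1                                      // the insert can still send safely
	mgr.Release("by-dev-rootcoord-dml_3")       // last reference gone, producer closed
}
```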

SimFG (Contributor) commented Jun 25, 2023

@xiaofan-luan @jaime0815
The fix plan is to check whether the Kafka producer is closed before calling its Send method.
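
As a rough sketch of that guard (guardedProducer and errProducerClosed are made-up names, not the actual Milvus patch), the idea is to fail fast from Send once Close has been called:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// errProducerClosed mirrors the kind of error the proxy should return instead
// of using a closed producer; the name is hypothetical.
var errProducerClosed = errors.New("kafka producer is closed")

// guardedProducer wraps the underlying send behind a closed flag so that Send
// fails fast after Close instead of touching a closed delivery channel.
type guardedProducer struct {
	mu      sync.RWMutex
	closed  bool
	sendRaw func(msg []byte) error // stands in for the real Kafka Produce call
}

func (p *guardedProducer) Send(msg []byte) error {
	p.mu.RLock()
	defer p.mu.RUnlock()
	if p.closed {
		return errProducerClosed
	}
	return p.sendRaw(msg)
}

func (p *guardedProducer) Close() {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.closed = true
	// the real implementation would also close the underlying producer here
}

func main() {
	p := &guardedProducer{sendRaw: func(msg []byte) error { return nil }}
	fmt.Println(p.Send([]byte("row"))) // <nil>
	p.Close()
	fmt.Println(p.Send([]byte("row"))) // kafka producer is closed
}
```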

Here is a test for this problem:

  1. Close the Kafka producer, call Send, and assert on the returned error.
  2. Close the Kafka producer, call Send, let the program sleep for 30s, and assert on the returned error.

[Screenshots of the two test runs]

The results show that the first test does not panic: the call to Send returns an error and the program keeps running. The second test does panic. In other words, after the Kafka producer is closed, calling Send returns normally (with an error) and does not panic immediately, but the Kafka event loop panics after a period of time, which makes Milvus stop working properly.
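
For reference, a rough stand-alone version of that experiment at the raw confluent-kafka-go level might look like the sketch below (broker address, topic, and test names are placeholders; the actual test exercises the Milvus mqwrapper kafkaProducer rather than the library directly):

```go
package kafkaclose

import (
	"testing"
	"time"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

// produceAfterClose closes the producer first, then produces, then optionally
// waits, mirroring the two cases described above.
func produceAfterClose(t *testing.T, wait time.Duration) {
	p, err := kafka.NewProducer(&kafka.ConfigMap{"bootstrap.servers": "localhost:9092"})
	if err != nil {
		t.Fatalf("create producer: %v", err)
	}
	p.Close() // the drop-collection path has already torn the producer down

	topic := "by-dev-rootcoord-dml_3"
	err = p.Produce(&kafka.Message{
		TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
		Value:          []byte("row"),
	}, nil)
	// Case 1: per the results above, the call comes back with an error and
	// nothing panics at this point, so an error assertion here passes.
	t.Logf("produce after close returned: %v", err)

	// Case 2: with a long enough wait, the library's background event poller
	// eventually panics (see the goroutine stack in the issue description),
	// which is what took the whole proxy process down.
	time.Sleep(wait)
}

func TestSendAfterClose(t *testing.T)         { produceAfterClose(t, 0) }
func TestSendAfterCloseThenWait(t *testing.T) { produceAfterClose(t, 30*time.Second) }
```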

yanliang567 (Contributor) commented:

/assign @NicoYuan1986
Shall we add a test to the nightly CI for this scenario?

SimFG (Contributor) commented Jul 26, 2023

/close the issue has been fixed

SimFG (Contributor) commented Jul 26, 2023

/close

sre-ci-robot commented:

@SimFG: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
