[CI_Block]num_entities of partition is not the same with inserted quantities under multiple replicas #6436
Comments
In CI tests (no replicas), it also reproduces when inserting 100,000 entities:
It looks like flush has not completed when .num_entities is called. Does flush run synchronously or asynchronously by default?
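The race the comment describes can be sketched with a toy model. This is not Milvus code: ToyCollection and its methods are hypothetical, and only illustrate why reading an entity count before an asynchronous flush finishes can undercount.

```python
import threading
import time

class ToyCollection:
    """Toy model (not Milvus) of a collection whose flush runs in the background."""

    def __init__(self):
        self._persisted = 0   # entities visible to num_entities after a flush
        self._buffered = 0    # entities inserted but not yet flushed

    def insert(self, n):
        self._buffered += n

    def flush_async(self):
        # Simulate a flush that completes some time after it is requested.
        def _do_flush():
            time.sleep(0.2)
            self._persisted += self._buffered
            self._buffered = 0
        t = threading.Thread(target=_do_flush)
        t.start()
        return t

    @property
    def num_entities(self):
        return self._persisted

c = ToyCollection()
c.insert(100_000)
t = c.flush_async()
early = c.num_entities   # read before the flush completes: undercounts
t.join()                 # wait for the flush to finish
late = c.num_entities    # now matches the inserted quantity
print(early, late)       # prints "0 100000"
```

If Milvus flushes asynchronously by default, the fix on the test side is the `t.join()` step: only read the count after the flush is known to be complete.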
It seems to be getting worse: this time it reproduces with only 3,000 entities in CI tests (standalone pipeline only): stats = connect.get_collection_stats(collection_list[i])
@czs007 any updates, dudes?
Also reproduces with multiple collections.
I am working on it.
Please state your issue using the following template and, most importantly, in English.
Describe the bug
The num_entities of a partition does not match the inserted quantity under multiple replicas when the data volume is relatively large.
Steps/Code to reproduce behavior
1. Deploy a multi-replica environment:
proxy: 1, querynode: 2, indexnode: 2, datanode: 2
2. Run the test case: tests20/python_client/testcases/test_partition.py::TestPartitionOperations::test_partition_insert_maximum_size_data
Result:
assert partition_w.num_entities == max_size
E assert 77090 == 100000
E +77090
E -100000
Logs:
log.tar.zip
Expected behavior
num_entities of the partition equals the inserted quantity.
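If the undercount turns out to be flush lag rather than data loss, a test-side workaround is to poll until the count converges instead of asserting immediately. This is a sketch: wait_for_count is a hypothetical helper, not part of pymilvus, and the get_count callable would be something like lambda: partition_w.num_entities in the real test.

```python
import time

def wait_for_count(get_count, expected, timeout=30.0, interval=0.5):
    """Poll get_count() until it returns expected or timeout elapses.

    get_count: any zero-argument callable returning the current entity count.
    Returns the last observed count, so the caller can still assert on it.
    """
    deadline = time.monotonic() + timeout
    count = get_count()
    while count != expected and time.monotonic() < deadline:
        time.sleep(interval)
        count = get_count()
    return count

# Usage with a stand-in counter that "catches up" after a few polls,
# mimicking an entity count that lags behind the inserts:
observed = iter([77090, 77090, 100000])
final = wait_for_count(lambda: next(observed), 100000, timeout=5.0, interval=0.01)
print(final)  # prints 100000
```

A bounded timeout keeps the test from hanging forever if entities really are lost rather than merely late.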
Method of installation
Environment details
Hardware/Software conditions (OS, CPU, GPU, Memory)
Milvus version (master or released version)
milvus-master (a8e5fd2)
Name: pymilvus
Version: 2.0.0rc2.dev11
Name: pymilvus-orm
Version: 2.0.0rc2.dev29
Configuration file
Settings you made in server_config.yaml or milvus.yaml
paste-file-content-here
Screenshots
If applicable, add screenshots to help explain your problem.
Additional context
Add any other context about the problem here.