[Bug]: [ milvus-2.4.4 ] error="incomplete query result, missing id xxxxxxx, len(searchIDs) = 40, len(queryIDs) = 20 #34021

Open
laozhu1900 opened this issue Jun 20, 2024 · 13 comments
Assignees
Labels
kind/bug Issues or changes related to a bug triage/needs-information Indicates an issue needs more information in order to work on it.

Comments

@laozhu1900

Is there an existing issue for this?

  • I have searched the existing issues

Environment

- Milvus version: 2.4.4, compiled from source
- Deployment mode(standalone or cluster): standalone
- MQ type(rocksmq, pulsar or kafka):   default
- SDK version(e.g. pymilvus v2.0.0rc2):  java-sdk-2.4.1
- OS(Ubuntu or CentOS): centos
- CPU/Memory: 
- GPU: 
- Others:

Current Behavior

[2024/06/19 20:08:41.958 +08:00] [WARN] [proxy/task_search.go:652] ["failed to requery"] [traceID=b408f2c41120e8e42736ca341109633d] [nq=1] [error="incomplete query result, missing id ff050e79da8465590af76b, len(searchIDs) = 40, len(queryIDs) = 20, collection=450545176328401015: inconsistent requery result"] [errorVerbose="incomplete query result, missing id ff050e79da8465590af76b, len(searchIDs) = 40, len(queryIDs) = 20, collection=450545176328401015: inconsistent requery result\n(1) attached stack trace\n  -- stack trace:\n  | github.com/milvus-io/milvus/pkg/util/merr.WrapErrInconsistentRequery\n  | \t/root/code/milvus/pkg/util/merr/utils.go:1038\n  | github.com/milvus-io/milvus/internal/proxy.doRequery\n  | \t/root/code/milvus/internal/proxy/task_search.go:860\n  | github.com/milvus-io/milvus/internal/proxy.(*searchTask).Requery\n  | \t/root/code/milvus/internal/proxy/task_search.go:747\n  | github.com/milvus-io/milvus/internal/proxy.(*searchTask).PostExecute\n  | \t/root/code/milvus/internal/proxy/task_search.go:650\n  | github.com/milvus-io/milvus/internal/proxy.(*taskScheduler).processTask\n  | \t/root/code/milvus/internal/proxy/task_scheduler.go:474\n  | github.com/milvus-io/milvus/internal/proxy.(*taskScheduler).queryLoop.func1\n  | \t/root/code/milvus/internal/proxy/task_scheduler.go:545\n  | github.com/milvus-io/milvus/pkg/util/conc.(*Pool[...]).Submit.func1\n  | \t/root/code/milvus/pkg/util/conc/pool.go:81\n  | github.com/panjf2000/ants/v2.(*goWorker).run.func1\n  | \t/root/goSpace/pkg/mod/github.com/panjf2000/ants/v2@v2.7.2/worker.go:67\n  | runtime.goexit\n  | \t/root/packages/go/src/runtime/asm_amd64.s:1695\nWraps: (2) incomplete query result, missing id ff050e79da8465590af76b, len(searchIDs) = 40, len(queryIDs) = 20, collection=450545176328401015\nWraps: (3) inconsistent requery result\nError types: (1) *withstack.withStack (2) *errutil.withPrefix (3) merr.milvusError"]

Index type: IVF_SQ8, metric_type: IP.

The primary key type is VarChar, generated from a UUID.

The collection has 15 columns, including VarChar and int64 fields.

I upserted 1218 records into Milvus two days ago.

When I search, a topK of less than 592 works correctly, but a topK of more than 592 reports the error above.
If I execute flush(), a topK of more than 592 also searches correctly (sketched below).

Is this caused by an inconsistency between the index and the data?
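A rough pymilvus sketch of the behavior described above, for illustration only (the reporter uses the Java SDK; the collection name, vector field name, and search parameters here are assumptions):

import numpy as np
from pymilvus import connections, Collection

connections.connect("default", host="localhost", port="19530")
collection = Collection("my_collection")  # hypothetical collection name
collection.load()

query_vector = np.random.rand(256).tolist()  # dim of 256, per the schema described later in the thread
search_params = {"metric_type": "IP", "params": {"nprobe": 10}}

# Reported behavior: a topK below 592 succeeds, while a topK above 592 fails with
# "inconsistent requery result" while the upserted data is still unflushed.
result = collection.search([query_vector], "embeddings", search_params,
                           limit=600, output_fields=["*"])

# Reported workaround: after flush(), the same large-topK search succeeds.
collection.flush()
result = collection.search([query_vector], "embeddings", search_params,
                           limit=600, output_fields=["*"])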

Expected Behavior

No response

Steps To Reproduce

No response

Milvus Log

No response

Anything else?

No response

@laozhu1900 laozhu1900 added kind/bug Issues or changes related to a bug needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Jun 20, 2024
@bigsheeper
Contributor

bigsheeper commented Jun 20, 2024

@laozhu1900 Hello, could you please provide the collection schema?
And, which output fields are specified to search?

@laozhu1900
Author

The primary key is VarChar(218).
10 columns are VarChar(218),
one column is int64,
and two columns are int32.
The model (vector) dimension is 256.
When I search, the output fields are all columns.
Only the vector field has an index built.
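For reference, a minimal pymilvus sketch of a schema matching this description (field names are illustrative assumptions; the actual collection is created via the Java SDK):

from pymilvus import FieldSchema, CollectionSchema, DataType

# Hypothetical field names; only the types and sizes follow the description above.
str_length, dim = 218, 256
fields = [
    FieldSchema(name="pk", dtype=DataType.VARCHAR, is_primary=True, auto_id=False, max_length=str_length),
]
fields += [FieldSchema(name=f"s{i}", dtype=DataType.VARCHAR, max_length=str_length) for i in range(10)]
fields += [
    FieldSchema(name="c_int64", dtype=DataType.INT64),
    FieldSchema(name="c_int32_a", dtype=DataType.INT32),
    FieldSchema(name="c_int32_b", dtype=DataType.INT32),
    FieldSchema(name="embeddings", dtype=DataType.FLOAT_VECTOR, dim=dim),
]
schema = CollectionSchema(fields, "sketch of the schema described in this comment")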

@bigsheeper
Contributor

Hi, @laozhu1900, I am unable to reproduce this issue. Could you please help confirm if my script differs from yours? If there are no differences, could you try running it to see if the issue persists?

import time
import string
import random

import numpy as np
from pymilvus import (
    connections,
    utility,
    FieldSchema, CollectionSchema, DataType,
    Collection,
)

fmt = "\n=== {:30} ===\n"
search_latency_fmt = "search latency = {:.4f}s"
num_entities, dim = 1218, 256
str_length = 218

#################################################################################
# connect to Milvus
print(fmt.format("start connecting to Milvus"))
connections.connect("default", host="localhost", port="19530")

has = utility.has_collection("hello_milvus")
print(f"Does collection hello_milvus exist in Milvus: {has}")

###############################################################################
print(fmt.format("Drop collection `hello_milvus`"))
utility.drop_collection("hello_milvus")

#################################################################################
# create collection
fields = [
    FieldSchema(name="pk", dtype=DataType.VARCHAR, is_primary=True, auto_id=False, max_length=str_length),
    FieldSchema(name="s0", dtype=DataType.VARCHAR, max_length=str_length),
    FieldSchema(name="s1", dtype=DataType.VARCHAR, max_length=str_length),
    FieldSchema(name="s2", dtype=DataType.VARCHAR, max_length=str_length),
    FieldSchema(name="s3", dtype=DataType.VARCHAR, max_length=str_length),
    FieldSchema(name="s4", dtype=DataType.VARCHAR, max_length=str_length),
    FieldSchema(name="s5", dtype=DataType.VARCHAR, max_length=str_length),
    FieldSchema(name="s6", dtype=DataType.VARCHAR, max_length=str_length),
    FieldSchema(name="s7", dtype=DataType.VARCHAR, max_length=str_length),
    FieldSchema(name="s8", dtype=DataType.VARCHAR, max_length=str_length),
    FieldSchema(name="s9", dtype=DataType.VARCHAR, max_length=str_length),
    FieldSchema(name="random0", dtype=DataType.DOUBLE),
    FieldSchema(name="random1", dtype=DataType.FLOAT),
    FieldSchema(name="embeddings", dtype=DataType.FLOAT_VECTOR, dim=dim)
]

schema = CollectionSchema(fields, "hello_milvus is the simplest demo to introduce the APIs")

print(fmt.format("Create collection `hello_milvus`"))
hello_milvus = Collection("hello_milvus", schema, consistency_level="Strong")

################################################################################
# create index
print(fmt.format("Start Creating index IVF_FLAT"))
index = {
    "index_type": "IVF_SQ8",
    "metric_type": "L2",
    "params": {"nlist": 128},
}

hello_milvus.create_index("embeddings", index)

################################################################################
# load
print(fmt.format("Start loading"))
hello_milvus.load()

################################################################################
# insert data
def randomstr(length):
    letters = string.ascii_lowercase
    return ''.join(random.choice(letters) for i in range(length))

print(fmt.format("Start inserting entities"))
rng = np.random.default_rng(seed=19530)
entities = [
    # provide the pk field because `auto_id` is set to False
    [str(f'primary_key_{i}') for i in range(num_entities)],
    [randomstr(str_length) for i in range(num_entities)],
    [randomstr(str_length) for i in range(num_entities)],
    [randomstr(str_length) for i in range(num_entities)],
    [randomstr(str_length) for i in range(num_entities)],
    [randomstr(str_length) for i in range(num_entities)],
    [randomstr(str_length) for i in range(num_entities)],
    [randomstr(str_length) for i in range(num_entities)],
    [randomstr(str_length) for i in range(num_entities)],
    [randomstr(str_length) for i in range(num_entities)],
    [randomstr(str_length) for i in range(num_entities)],
    rng.random(num_entities).tolist(),  # field random0, only supports list
    rng.random(num_entities).tolist(),  # field random1, only supports list
    rng.random((num_entities, dim), np.float32),    # field embeddings, supports numpy.ndarray and list
]

insert_result = hello_milvus.upsert(entities)

# hello_milvus.flush()
# print(f"Number of entities in Milvus: {hello_milvus.num_entities}")  # check the num_entities

# -----------------------------------------------------------------------------
# search based on vector similarity
print(fmt.format("Start searching based on vector similarity"))
vectors_to_search = entities[-1][-1:]
search_params = {
    "metric_type": "L2",
    "params": {"nprobe": 10},
}

for i in range(10):
    start_time = time.time()
    result = hello_milvus.search(vectors_to_search, "embeddings", search_params, limit=1024, output_fields=["*"])
    end_time = time.time()
    print(search_latency_fmt.format(end_time - start_time))

@laozhu1900
Author

OK, I will try it.

@yanliang567
Contributor

/assign @laozhu1900
/unassign

@yanliang567 yanliang567 added triage/needs-information Indicates an issue needs more information in order to work on it. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Jun 21, 2024
@laozhu1900
Author

Hi, @laozhu1900, I am unable to reproduce this issue. Could you please help confirm if my script differs from yours? If there are no differences, could you try running it to see if the issue persists?

Using this demo, I can't reproduce it. In my demo, if the output fields include the embeddings field, it reports the error.
Is this error related to the data in the collection?
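For clarity, a sketch of the two search variants being compared here, reusing the collection created by the repro script above (which variant fails is the reporter's observation, not something verified here):

import numpy as np
from pymilvus import connections, Collection

connections.connect("default", host="localhost", port="19530")
collection = Collection("hello_milvus")  # collection created by the repro script above
collection.load()

query_vector = np.random.rand(256).tolist()
search_params = {"metric_type": "L2", "params": {"nprobe": 10}}

# Output fields limited to scalar columns: reportedly works.
ok = collection.search([query_vector], "embeddings", search_params,
                       limit=1024, output_fields=["pk", "s0", "s1"])

# Output fields include the vector field (["*"] returns all fields): reportedly
# triggers "inconsistent requery result" in the reporter's environment.
err = collection.search([query_vector], "embeddings", search_params,
                        limit=1024, output_fields=["*"])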

@yanliang567
Contributor

@laozhu1900 do you happen to have a reproducible code snippet to share? We can try to reproduce it in house.

@SimFG
Contributor

SimFG commented Jun 25, 2024

@laozhu1900 Have you manually set the timeout for the search request?
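For reference, pymilvus allows an explicit client-side timeout on the search call; the snippet below is illustrative only and reuses names from the repro script above (the equivalent Java SDK call may differ):

# Illustrative only: pass an explicit client-side timeout (in seconds) to search,
# reusing `hello_milvus`, `vectors_to_search`, and `search_params` from the repro script above.
result = hello_milvus.search(vectors_to_search, "embeddings", search_params,
                             limit=1024, output_fields=["*"], timeout=10.0)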

@bigsheeper
Contributor

@laozhu1900 Additionally, could you dump all the upsert data and provide it to us?


stale bot commented Jul 27, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen.

@stale stale bot added the stale indicates no updates for 30 days label Jul 27, 2024
@laozhu1900
Author

laozhu1900 commented Jul 27, 2024 via email

@stale stale bot removed the stale indicates no updates for 30 days label Jul 27, 2024

stale bot commented Aug 27, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen.

@stale stale bot added the stale indicates no updates for 30 days label Aug 27, 2024
@laozhu1900
Author

laozhu1900 commented Aug 27, 2024 via email

@stale stale bot removed the stale indicates no updates for 30 days label Aug 27, 2024