However I don't see anything like that for the pipeline Extractor. More concretely, is it possible to do something like this?

```python
# Create and run extractor instance
extractor = Extractor(embeddings, llm, output="reference", ...)

# <-- is there something like this 'search' method for the pipeline package?
results = extractor.search("I want a 15 inch laptop with 16gb and 1tb of storage")

for result in results:
    print("ANSWER:", result["answer"])
    print("REFERENCE:", embeddings.search("select id, text from txtai where id = :id", parameters={"id": result["reference"]}))
```
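In case it helps frame the request: the multiple-scored-answers behaviour could in principle be composed from any scored search plus a per-snippet extraction step. A minimal sketch of that shape, with stub callables standing in for `embeddings.search` and the extractor (`search_fn`, `extract_fn`, and the stubs are hypothetical placeholders, not txtai APIs):

```python
from typing import Callable, Dict, List


def topn_answers(
    search_fn: Callable[[str, int], List[Dict]],
    extract_fn: Callable[[str, str], str],
    query: str,
    limit: int = 3,
) -> List[Dict]:
    """Run a scored search, then extract an answer from each hit.

    search_fn(query, limit) must return dicts with "id", "text", "score".
    extract_fn(query, text) must return an answer string for one snippet.
    Each answer keeps the search score as its confidence.
    """
    answers = []
    for hit in search_fn(query, limit):
        answers.append({
            "reference": hit["id"],
            "answer": extract_fn(query, hit["text"]),
            "score": hit["score"],
        })
    return answers


# Stubs standing in for embeddings.search / the extractor LLM call
corpus = {
    0: "15 inch laptop, 16GB RAM, 1TB SSD",
    1: "13 inch laptop, 8GB RAM, 256GB SSD",
}

def fake_search(query, limit):
    return [{"id": uid, "text": text, "score": 1.0 - 0.1 * uid}
            for uid, text in list(corpus.items())[:limit]]

def fake_extract(query, text):
    return text  # a real extractor would run the LLM prompt here

results = topn_answers(fake_search, fake_extract, "15 inch laptop", limit=2)
```

With real txtai objects, `fake_search`/`fake_extract` would be replaced by calls into `embeddings` and the extractor; the point is just the shape of the output: one dict per candidate with `reference`, `answer`, and `score`.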
hizzbizz changed the title from *Extractor search; get multiple answers* to *Pipeline Extractor search; get multiple answers* on Jan 8, 2024.
Totally understand! I look forward to the future. And thank you so much for the work that's already here; it's brought a lot of joy to my AI/LLM experiments!
I searched Google, the docs, the API, the codebase, and thumbed through the issues; maybe my vocabulary is off, so apologies if this has already been answered.
First off, this is pretty amazing; I was able to use the pipeline package to get OpenAI to find stuff in my data. So cool, so easy.
However, I'd like to return multiple results, each with a confidence score. For my use case, just the top answer won't cut it.
I see there is something like that for the `Embeddings` type: https://github.com/neuml/txtai/blob/master/examples/13_Similarity_search_with_images.ipynb