Description
How can I make the RAG agent carry out multi-turn conversations autonomously? Thanks.
For example: PROMPT_TEMPLATE="""
My question or instruction:
{question}
Please answer my question or respond to my instruction based on the reference information below, following these improved guidelines:
. . .
Your reply should follow this improved structure:
. . .
Reference information:
{context}
"""
The template I am currently following is llamaindex_rag.ipynb from the RAG examples.
import logging
import sys
import os
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
from llama_index.core import (
    SimpleDirectoryReader,
    VectorStoreIndex,
    Settings,
)
from modelscope import snapshot_download

# Specify multiple document paths (use a raw string so backslashes in
# Windows paths are not treated as escape sequences)
document_paths = [
    r'D:\PPT&WORD\folder\knowledge',
    # Add more document paths as needed
]

# Load data from multiple documents
documents = []
for path in document_paths:
    documents.extend(SimpleDirectoryReader(path).load_data())

embedding_name = 'damo/nlp_gte_sentence-embedding_chinese-base'
local_embedding = snapshot_download(embedding_name)
Settings.embed_model = "local:" + local_embedding
index = VectorStoreIndex.from_documents(documents)
os.environ['ZHIPU_API_KEY'] = 'apikey'  # replace with your actual Zhipu API key
from modelscope_agent.agents import RolePlay
role_template = '知识库查询小助手,可以优先通过查询本地知识库来回答用户的问题'
llm_config = {
'model': 'GLM-4',
'model_server': 'zhipu'
}
function_list = []
bot = RolePlay(function_list=function_list, llm=llm_config, instruction=role_template)
index_ret = index.as_retriever(similarity_top_k=3)
query = "西安交通大学图书馆有几部分组成?"
result = index_ret.retrieve(query)
print(result)
ref_doc = ' '.join([doc.text for doc in result])
response = bot.run("西安交通大学图书馆有几部分组成?", remote=False, print_info=True, ref_doc=ref_doc)
text = ''
for chunk in response:
    text += chunk
print(text)
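One simple pattern for turning this into a multi-turn loop is to re-retrieve reference passages for each new user query and to fold a running transcript of earlier turns into the query text itself. The sketch below stubs out the retriever and the agent with plain functions standing in for index_ret.retrieve(...) and bot.run(...); passing history via the query text is an assumption here, since I have not confirmed a dedicated history parameter on RolePlay.run:

```python
# Sketch of a multi-turn RAG loop. retrieve() and run_agent() are stand-ins
# for index_ret.retrieve(...) and bot.run(...); the conversation history is
# carried as plain text prepended to each query (an assumed convention, not
# a confirmed RolePlay API).

def retrieve(query):
    # Stand-in for index_ret.retrieve(query): return reference text.
    return f"[passages retrieved for: {query}]"

def run_agent(query, ref_doc):
    # Stand-in for bot.run(query, ref_doc=ref_doc): return the reply text.
    return f"[answer to: {query.splitlines()[-1]}]"

history = []  # (user, assistant) pairs from earlier turns

def chat_turn(user_query):
    ref_doc = retrieve(user_query)  # fetch fresh context every turn
    transcript = "\n".join(f"User: {u}\nAssistant: {a}" for u, a in history)
    full_query = (transcript + "\n" if transcript else "") + user_query
    answer = run_agent(full_query, ref_doc)
    history.append((user_query, answer))
    return answer

print(chat_turn("How many parts does the library consist of?"))
print(chat_turn("Which part is the oldest?"))
```

In the real script, chat_turn would join the retrieved nodes into ref_doc exactly as above (' '.join(doc.text for doc in result)) and stream the bot.run chunks into the answer string.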