[<Agent component: framework|tool|llm|etc...>] #475

Open
kbzh2558 opened this issue Jun 7, 2024 · 1 comment

kbzh2558 commented Jun 7, 2024

Description

How can I get the RAG agent to carry out multi-turn conversations on its own? Thanks.

For example: PROMPT_TEMPLATE = """

My question or instruction:

{question}

Please answer my question or follow my instruction based on the reference information below, and follow these improved guidelines:
. . .

Your reply should follow this improved structure:
. . .

Reference information:

{context}
"""

The template I am currently following is llamaindex_rag.ipynb from the rag examples:

import logging
import sys
import os

logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))

from llama_index.core import (
    SimpleDirectoryReader,
    VectorStoreIndex,
    Settings
)
from modelscope import snapshot_download

# Specify multiple document paths
document_paths = [
    r'D:\PPT&WORD\folder\knowledge'
    # Add more document paths as needed
]

# Load data from multiple documents
documents = []
for path in document_paths:
    documents.extend(SimpleDirectoryReader(path).load_data())

# Download the GTE Chinese sentence-embedding model and use it as the local embedding model
embedding_name = 'damo/nlp_gte_sentence-embedding_chinese-base'
local_embedding = snapshot_download(embedding_name)
Settings.embed_model = "local:" + local_embedding

index = VectorStoreIndex.from_documents(documents)

os.environ['ZHIPU_API_KEY'] = 'apikey'

from modelscope_agent.agents import RolePlay

# Role instruction (Chinese): "knowledge-base assistant that answers by querying the local knowledge base first"
role_template = '知识库查询小助手,可以优先通过查询本地知识库来回答用户的问题'
llm_config = {
    'model': 'GLM-4',
    'model_server': 'zhipu'
}
function_list = []

bot = RolePlay(function_list=function_list, llm=llm_config, instruction=role_template)

# Retrieve the top-3 most similar chunks for the query
index_ret = index.as_retriever(similarity_top_k=3)
query = "西安交通大学图书馆有几部分组成?"  # "How many parts does the Xi'an Jiaotong University library consist of?"
result = index_ret.retrieve(query)
print(result)

# Concatenate the retrieved chunks and pass them to the agent as reference documents
ref_doc = ' '.join([doc.text for doc in result])
response = bot.run(query, remote=False, print_info=True, ref_doc=ref_doc)
text = ''
for chunk in response:
    text += chunk
print(text)

Link

No response

@suluyana
Collaborator

You can refer to the following code:

from modelscope_agent.memory import MemoryWithRag
from modelscope_agent.agents import RolePlay

# Role instruction (Chinese): "knowledge-base assistant that answers by querying the local knowledge base first"
role_template = '知识库查询小助手,可以优先通过查询本地知识库来回答用户的问题'
llm_config = {
    'model': 'GLM-4',
    'model_server': 'zhipu'
}
function_list = []
file_paths = ['./tests/samples/常见QA.pdf']
bot = RolePlay(function_list=function_list, llm=llm_config, instruction=role_template)

# Build a RAG memory over the given files; disable the knowledge cache so the files are re-indexed
memory = MemoryWithRag(urls=file_paths, use_knowledge_cache=False)
use_llm = True if len(function_list) else False

query = "高德天气API在哪申请"  # "Where do I apply for the Amap weather API?"
ref_doc = memory.run(query, use_llm=use_llm)

response = bot.run(query, remote=False, print_info=True, ref_doc=ref_doc)
text = ''
for chunk in response:
    text += chunk
print(text)
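
To keep the conversation going across turns, the retrieve-then-run step above can simply be wrapped in a loop. A minimal sketch building on the objects defined above; the history handling is an assumption here, carried as plain text prepended to each query rather than through any dedicated API:

history = ''
while True:
    query = input('User: ')
    if not query:
        break
    # Retrieve reference text for the current turn from the RAG memory
    ref_doc = memory.run(query, use_llm=use_llm)
    # Prepend earlier turns so the model sees the conversation so far (assumption: plain-text history)
    turn_input = (history + '\n' + query) if history else query
    response = bot.run(turn_input, remote=False, print_info=True, ref_doc=ref_doc)
    text = ''
    for chunk in response:
        text += chunk
    print(text)
    history += f'User: {query}\nAssistant: {text}\n'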
