Agent tool support #134
Conversation
@carsonwang , this is the initial PR to support tool calls.
Commits (force-pushed from 3a0e574 to ff0b577):
* initial version
* update path & support pdf uploader
* update
* update
* update
* update
* update
* update
* update memory
* upd after testing on spr
@carsonwang , please help review. Also, can you point me to someone who can check the right steps for adding CI functions?
Hello @carsonwang and @jiafuzha, I'll only add an HTTP-based query test in this PR, and will add my langchain/OpenAI CI in a separate PR covering both agent tool and MLLM tests.
@carsonwang , CI is added. This PR is ready for review.
@xuechendi , this is great! I will look into it.
@carsonwang , following up on this PR: any comments?
llm_on_ray/ui/start_ui.py (outdated)
```diff
@@ -1713,6 +1718,12 @@ def _init_ui(self):
         type=str,
         help="The ip:port of head node to connect when restart a worker node.",
     )
+    parser.add_argument(
+        "--ref_app_url",
+        default="http://127.0.0.1:8501",
```
How can other external users make this work by default? Can you please add documentation on how to launch it? If not, can you please remove it, or add an option so it only applies to your own setup for now?
OK, I removed this update.
```bash
TARGET=${{steps.target.outputs.target}}
if [[ ${{ matrix.model }} == "mistral-7b-v0.1" ]]; then
    docker exec "${TARGET}" bash -c "llm_on_ray-serve --models ${{ matrix.model }}"
    docker exec "${TARGET}" bash -c "python examples/inference/api_server_openai/query_http_requests_tool.py --model_name ${{ matrix.model }}"
```
This only runs the example. Can we also test and verify that the LLM response is correct JSON for calling the function?
UT verification added; it only checks the function name, using Llama for inference.
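For reference, a minimal sketch of what such a check could look like, assuming the response follows the OpenAI chat-completions tool-call shape (the helper name and values are illustrative, not the PR's actual test code):

```python
# Illustrative sketch only: assumes the OpenAI chat-completions tool-call
# response shape; names here are not taken from the PR's test code.
import json


def assert_tool_call(response_json: dict, expected_name: str) -> None:
    """Check that the first choice carries a tool call with the expected function name."""
    message = response_json["choices"][0]["message"]
    tool_calls = message.get("tool_calls") or []
    assert tool_calls, "expected at least one tool call in the response"
    function = tool_calls[0]["function"]
    assert function["name"] == expected_name, f"unexpected tool: {function['name']}"
    # In the OpenAI format, arguments arrive as a JSON-encoded string;
    # parsing it verifies the payload is valid JSON.
    json.loads(function["arguments"])
```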
Force-pushed from 70e801f to f9d1287.
Signed-off-by: Xue, Chendi <chendi.xue@intel.com>
Signed-off-by: jiafu zhang <jiafuzha@apache.org>
Co-authored-by: Carson Wang <carson.wang@intel.com>
CI passed. Thanks @xuechendi!
Support for OpenAI-format tools and tool_choice
Request:
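A representative request body in the OpenAI tools format (a sketch; the model name and tool schema are illustrative assumptions, not taken from the PR):

```json
{
  "model": "mistral-7b-v0.1",
  "messages": [
    {"role": "user", "content": "What is the weather like in Boston today?"}
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {"type": "string", "description": "City and state, e.g. Boston, MA"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
          },
          "required": ["location"]
        }
      }
    }
  ],
  "tool_choice": "auto"
}
```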
Reply:
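A representative reply when the model chooses to call the tool (a sketch; IDs and argument values are illustrative):

```json
{
  "object": "chat.completion",
  "model": "mistral-7b-v0.1",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": null,
        "tool_calls": [
          {
            "id": "call_abc123",
            "type": "function",
            "function": {
              "name": "get_current_weather",
              "arguments": "{\"location\": \"Boston, MA\", \"unit\": \"fahrenheit\"}"
            }
          }
        ]
      },
      "finish_reason": "tool_calls"
    }
  ]
}
```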
Example client code:
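A minimal sketch of an HTTP client in the spirit of examples/inference/api_server_openai/query_http_requests_tool.py; the serve address, model name, and tool schema are assumptions, not the PR's actual example code:

```python
# Minimal illustrative client; the URL, model name, and tool schema are
# assumptions, not taken from the PR.
import requests

# Tool schema matching the request example above.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string"},
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        },
    }
]

response = requests.post(
    "http://localhost:8000/v1/chat/completions",  # assumed serve address
    json={
        "model": "mistral-7b-v0.1",
        "messages": [
            {"role": "user", "content": "What is the weather like in Boston today?"}
        ],
        "tools": tools,
        "tool_choice": "auto",
    },
    timeout=60,
)
response.raise_for_status()
# Print the assistant message, which should contain a tool_calls entry.
print(response.json()["choices"][0]["message"])
```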