Merged
7 changes: 4 additions & 3 deletions README.md
@@ -22,14 +22,15 @@ of plugins in the form of functions to execute data analytics tasks in a stateful
 
 
 ## 🆕 News
+- 📅2024-01-30: TaskWeaver introduces a new plugin-only mode that securely generates calls to specified plugins without producing extraneous code.🪡
 - 📅2024-01-23: TaskWeaver can now be personalized by transforming your chat histories into enduring [experiences](https://microsoft.github.io/TaskWeaver/docs/experience) 🎉
 - 📅2024-01-17: TaskWeaver now has a plugin [vision_web_explorer](https://github.com/microsoft/TaskWeaver/blob/main/project/plugins/README.md#vision_web_explorer) that can open a web browser and explore websites.🌐
 - 📅2024-01-15: TaskWeaver now supports Streaming♒ in both UI and command line.✌️
 - 📅2024-01-01: Welcome to join the TaskWeaver [Discord](https://discord.gg/Z56MXmZgMb).
-- 📅2023-12-21: TaskWeaver now supports a number of LLMs, such as LiteLLM, Ollama, Gemini, and QWen🎈.
-- 📅2023-12-21: TaskWeaver Website is now [available](https://microsoft.github.io/TaskWeaver/) with more documentations.
+<!-- - 📅2023-12-21: TaskWeaver now supports a number of LLMs, such as LiteLLM, Ollama, Gemini, and QWen🎈.) -->
+<!-- - 📅2023-12-21: TaskWeaver Website is now [available]&#40;https://microsoft.github.io/TaskWeaver/&#41; with more documentations.) -->
 <!-- - 📅2023-12-12: A simple UI demo is available in playground/UI folder, try it [here](https://microsoft.github.io/TaskWeaver/docs/usage/webui)! -->
-<!-- - [2023-11-30] TaskWeaver is released on GitHub🎈. -->
+<!-- - 📅2023-11-30: TaskWeaver is released on GitHub🎈. -->
 
 
 ## 💥 Highlights
@@ -64,7 +64,7 @@ def reply(
         code: List[str] = []
         for i, f in enumerate(functions):
             function_name = f["name"]
-            function_args = json.loads(f["arguments"])
+            function_args = f["arguments"]
             function_call = (
                 f"r{self.return_index + i}={function_name}("
                 + ", ".join(
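The hunk above drops a `json.loads` call when reading a tool call's `arguments`. That lines up with the serialization changes to `openai.py` and `zhipuai.py` further down: arguments now arrive as an already-parsed dict instead of a JSON string, so decoding them a second time would be wrong. A minimal sketch of the difference, using illustrative data rather than the repository's actual objects:

```python
import json

# New flow (as this PR suggests): the LLM service stores tool-call
# arguments as a parsed dict, ready to use.
tool_call = {"name": "ascii_render", "arguments": {"text": "Hello"}}
new_args = tool_call["arguments"]

# Old flow: arguments arrived as a JSON string, so every consumer
# had to remember to decode them first.
legacy_payload = {"name": "ascii_render", "arguments": json.dumps({"text": "Hello"})}
legacy_args = json.loads(legacy_payload["arguments"])

# Both flows end up with the same dict; the new one just parses once, upstream.
assert legacy_args == new_args == {"text": "Hello"}
```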
32 changes: 10 additions & 22 deletions taskweaver/llm/google_genai.py
@@ -100,28 +100,16 @@ def chat_completion(
         stop: Optional[List[str]] = None,
         **kwargs: Any,
     ) -> Generator[ChatMessageType, None, None]:
-        try:
-            return self._chat_completion(
-                messages=messages,
-                use_backup_engine=use_backup_engine,
-                stream=stream,
-                temperature=temperature,
-                max_tokens=max_tokens,
-                top_p=top_p,
-                stop=stop,
-                **kwargs,
-            )
-        except Exception:
-            return self._completion(
-                messages=messages,
-                use_backup_engine=use_backup_engine,
-                stream=stream,
-                temperature=temperature,
-                max_tokens=max_tokens,
-                top_p=top_p,
-                stop=stop,
-                **kwargs,
-            )
+        return self._chat_completion(
+            messages=messages,
+            use_backup_engine=use_backup_engine,
+            stream=stream,
+            temperature=temperature,
+            max_tokens=max_tokens,
+            top_p=top_p,
+            stop=stop,
+            **kwargs,
+        )
 
     def _chat_completion(
         self,
16 changes: 10 additions & 6 deletions taskweaver/llm/openai.py
@@ -197,13 +197,17 @@ def chat_completion(
                     message=oai_response.content if oai_response.content is not None else "",
                 )
                 if oai_response.tool_calls is not None:
+                    import json
+
                     response["role"] = "function"
-                    response["content"] = (
-                        "["
-                        + ",".join(
-                            [t.function.model_dump_json() for t in oai_response.tool_calls],
-                        )
-                        + "]"
+                    response["content"] = json.dumps(
+                        [
+                            {
+                                "name": t.function.name,
+                                "arguments": json.loads(t.function.arguments),
+                            }
+                            for t in oai_response.tool_calls
+                        ],
                     )
                     yield response
 
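To see why the hunk above replaces the hand-rolled string concatenation with `json.dumps`, consider what each approach produces. The API returns each tool call's `arguments` as a JSON string, and pydantic's `model_dump_json` keeps it as a string field, so the old output left the arguments double-encoded; the new code decodes them once and serializes the whole list in one pass. A rough sketch under these assumptions, with a plain dict standing in for the pydantic tool-call object:

```python
import json

# Simulate one tool call as returned by the API: `arguments` is a JSON string.
function = {"name": "ascii_render", "arguments": '{"text": "Hello"}'}

# Old approach: concatenate per-call JSON dumps. The arguments field stays
# double-encoded, so readers get a string where they expect a dict.
old_content = "[" + ",".join([json.dumps(function)]) + "]"
old_args = json.loads(old_content)[0]["arguments"]
assert isinstance(old_args, str)  # still '{"text": "Hello"}'

# New approach: decode the arguments once, then dump the whole list.
new_content = json.dumps(
    [{"name": function["name"], "arguments": json.loads(function["arguments"])}],
)
new_args = json.loads(new_content)[0]["arguments"]
assert new_args == {"text": "Hello"}  # a real dict now
```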
47 changes: 26 additions & 21 deletions taskweaver/llm/zhipuai.py
@@ -1,6 +1,9 @@
 from typing import Any, Generator, List, Optional
 
 from injector import inject
+
+from taskweaver.llm.util import ChatMessageType, format_chat_message
+
 from .base import CompletionService, EmbeddingService, LLMServiceConfig
 
 DEFAULT_STOP_TOKEN: List[str] = ["</s>"]
@@ -56,34 +59,32 @@ class ZhipuAIService(CompletionService, EmbeddingService):
 
     @inject
     def __init__(self, config: ZhipuAIServiceConfig):
-
         if ZhipuAIService.zhipuai is None:
             try:
                 import zhipuai
 
                 ZhipuAIService.zhipuai = zhipuai
             except Exception:
                 raise Exception(
                     "Package zhipuai>=2.0.0 is required for using ZhipuAI API.",
                 )
 
         self.config = config
-        self.client = (
-            ZhipuAIService.zhipuai.ZhipuAI(
-                base_url=self.config.api_base,
-                api_key=self.config.api_key,
-            )
+        self.client = ZhipuAIService.zhipuai.ZhipuAI(
+            base_url=self.config.api_base,
+            api_key=self.config.api_key,
+        )
 
     def chat_completion(
-            self,
-            messages: List[ChatMessageType],
-            use_backup_engine: bool = False,
-            stream: bool = False,
-            temperature: Optional[float] = None,
-            max_tokens: Optional[int] = None,
-            top_p: Optional[float] = None,
-            stop: Optional[List[str]] = None,
-            **kwargs: Any,
+        self,
+        messages: List[ChatMessageType],
+        use_backup_engine: bool = False,
+        stream: bool = False,
+        temperature: Optional[float] = None,
+        max_tokens: Optional[int] = None,
+        top_p: Optional[float] = None,
+        stop: Optional[List[str]] = None,
+        **kwargs: Any,
     ) -> Generator[ChatMessageType, None, None]:
         engine = self.config.model
         backup_engine = self.config.backup_model
@@ -136,13 +137,17 @@ def chat_completion(
                     message=zhipuai_response.content if zhipuai_response.content is not None else "",
                 )
                 if zhipuai_response.tool_calls is not None:
+                    import json
+
                     response["role"] = "function"
-                    response["content"] = (
-                        "["
-                        + ",".join(
-                            [t.function.model_dump_json() for t in zhipuai_response.tool_calls],
-                        )
-                        + "]"
+                    response["content"] = json.dumps(
+                        [
+                            {
+                                "name": t.function.name,
+                                "arguments": json.loads(t.function.arguments),
+                            }
+                            for t in zhipuai_response.tool_calls
+                        ],
                     )
                     yield response
             except Exception as e:
71 changes: 71 additions & 0 deletions website/docs/customization/plugin/plugin_only.md
@@ -0,0 +1,71 @@
---
id: plugin_only
description: The Plugin-Only Mode
slug: /plugin/plugin_only
---

# The Plugin-Only Mode

## What is the plugin-only mode?
The plugin-only mode is a restricted mode of TaskWeaver that only allows you to use plugins.
Compared to the full mode, the plugin-only mode has the following restrictions:

1. The generated code contains only calls to the plugins.
For example, the following code calls only the `ascii_render` plugin and contains no "free-form" code.
```python
r1=ascii_render(text="Hello")
r1
```

2. Only the plugins with `plugin_only: true` in their YAML files will be loaded.
For example, the following plugin will be loaded in the plugin-only mode:
```yaml
name: ascii_render
code: ascii_render
plugin_only: true
...
```
If this field is not specified, it defaults to `false`.
For plugins in the plugin-only mode, the argument types can only be `str`, `int`, `boolean`, or `float`;
other types such as `DataFrame` are not allowed.
Essentially, we assume that these plugins produce only "text-like" output that the LLM can consume directly.

To enable the plugin-only mode, you can add the configuration `"session.plugin_only_mode": true`
in the project configuration file `taskweaver_config.json`.
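As a convenience, the configuration edit can also be scripted. The sketch below shows one way to toggle the setting programmatically; the file path is assumed to be the project's `taskweaver_config.json` in the current directory, and the rest of the file is left untouched:

```python
import json
from pathlib import Path

# Assumed location of the project configuration file.
config_path = Path("taskweaver_config.json")

# Load the existing configuration, or start from an empty one.
config = json.loads(config_path.read_text()) if config_path.exists() else {}

# Turn on the plugin-only mode for the session.
config["session.plugin_only_mode"] = True

config_path.write_text(json.dumps(config, indent=2))
```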

## Why do we need the plugin-only mode?

Although the plugin-only mode is restricted, it is still useful in some scenarios.
For example, you may want TaskWeaver to generate only the code that calls a certain plugin,
and to return the plugin's response directly, without generating any other code,
for safety reasons.

## How is the plugin-only mode implemented?

The plugin-only mode is implemented on top of the [function calling](https://platform.openai.com/docs/guides/function-calling) capability of LLMs.
In this mode, the LLM generates a JSON object that contains the function name and the arguments.
For example, the following JSON object is generated by the LLM:
```json
{
"function": "ascii_render",
"arguments": {
"text": "Hello"
}
}
```
With this JSON object, we assemble the code to call the plugin:
```python
r1=ascii_render(text="Hello")
r1
```
Then, we execute the code and return the response from the plugin.
Therefore, in the plugin-only mode the code is assembled from the LLM's structured output rather than generated directly by the LLM.
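The assembly step described above can be sketched as follows. The helper below is illustrative only, not TaskWeaver's actual implementation; it assumes the JSON object has the `function`/`arguments` shape shown earlier and that all argument values are the scalar types the plugin-only mode permits:

```python
import json

def assemble_call(spec_json: str, return_index: int = 1) -> str:
    """Turn an LLM function-call JSON object into a line of plugin-call code."""
    spec = json.loads(spec_json)
    # Render keyword arguments; repr() quotes strings and keeps numbers as-is.
    args = ", ".join(f"{k}={v!r}" for k, v in spec["arguments"].items())
    var = f"r{return_index}"
    # The result variable is emitted on its own line so its value is displayed.
    return f"{var}={spec['function']}({args})\n{var}"

code = assemble_call('{"function": "ascii_render", "arguments": {"text": "Hello"}}')
# code == "r1=ascii_render(text='Hello')\nr1"
```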

## Which models support the plugin-only mode?

Currently, the plugin-only mode is supported only by

- [OpenAI models](https://platform.openai.com/docs/guides/function-calling)
- [ZhipuAI models](https://open.bigmodel.cn/dev/api)

Other models that are compatible with the OpenAI API are likely to support the plugin-only mode as well.
2 changes: 1 addition & 1 deletion website/sidebars.js
@@ -60,7 +60,7 @@ const sidebars = {
       label: 'Plugin',
       collapsible: true,
       collapsed: true,
-      items: ['customization/plugin/plugin_intro', 'customization/plugin/plugin_selection', 'customization/plugin/embedding', 'customization/plugin/develop_plugin', 'customization/plugin/multi_yaml_single_impl'],
+      items: ['customization/plugin/plugin_intro', 'customization/plugin/plugin_selection', 'customization/plugin/embedding', 'customization/plugin/develop_plugin', 'customization/plugin/multi_yaml_single_impl', 'customization/plugin/plugin_only'],
     },
     {
       type: 'category',