2 changes: 2 additions & 0 deletions .gitignore
@@ -12,6 +12,7 @@ dist/
*.egg-info/

.streamlit/
.ga-switch/

.vscode/
.idea/
@@ -22,6 +23,7 @@ dist/
Thumbs.db

*.log
api_server.pid
.env
auth.json
model_responses.txt
45 changes: 45 additions & 0 deletions API_SERVER_README.md
@@ -0,0 +1,45 @@
# GA Switch API Server

## Quick Start

### Windows
```bash
start_api_server.bat
```

### Linux/macOS
```bash
python api_server.py
```

The server will start at `http://127.0.0.1:8765`.

## API Endpoints

- `GET /api/health` - Health check
- `GET /api/snapshot` - Fetch the full snapshot
- `GET /api/routes` - List routes
- `POST /api/routes` - Create a route
- `PUT /api/routes/{id}` - Update a route
- `DELETE /api/routes/{id}` - Delete a route
- `POST /api/routes/{id}/activate` - Activate a route
- `GET /api/providers` - List providers
- `POST /api/providers` - Create a provider
- `PUT /api/providers/{id}` - Update a provider
- `DELETE /api/providers/{id}` - Delete a provider
- `POST /api/providers/{id}/test` - Test a provider
- `GET /api/diagnostics` - Diagnostic events
- `POST /api/reload` - Soft reload
- `POST /api/import-legacy` - Import legacy config
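As an illustration, a route-creation request could be assembled as below. The payload field names (`name`, `provider_id`, `model`) are assumptions made for this sketch, not the server's documented schema:

```python
import json
import urllib.request

# Hypothetical payload; the actual field names depend on the server's schema.
payload = {"name": "my-route", "provider_id": 1, "model": "gpt-4o"}
req = urllib.request.Request(
    "http://127.0.0.1:8765/api/routes",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would send it once the server is running.
```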

## Installing Dependencies

```bash
pip install -r requirements-api.txt
```

## Testing

```bash
curl http://127.0.0.1:8765/api/health
```
8 changes: 7 additions & 1 deletion GETTING_STARTED.md
@@ -171,6 +171,12 @@ The Agent reads the code itself, figures out which packages it needs, and installs them all.
python3 launch.pyw
```

If `PySide6` is additionally installed, `launch.pyw` will prefer the Qt desktop frontend; otherwise it automatically falls back to the existing `Streamlit + pywebview` window.

```bash
pip install PySide6
```
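The frontend selection can be sketched roughly as follows. This is a minimal sketch of the described fallback behavior; the function name and return values are illustrative and not `launch.pyw`'s actual API:

```python
# Rough sketch of the described fallback: prefer Qt when PySide6 imports
# cleanly, otherwise fall back to the Streamlit + pywebview window.
def pick_frontend():
    try:
        import PySide6  # noqa: F401  # probe availability only
        return "qt"
    except ImportError:
        return "streamlit+pywebview"
```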

After launch, a floating desktop window appears; type task instructions directly into it.

### Optional: Things the Agent Can Do for You
@@ -266,4 +272,4 @@ GenericAgent has no preset skills; it **evolves through use**. Each new task it completes

> The Agent will automatically pull the latest code, interpret the commit log, and tell you what new capabilities were added.

> For more details, see [README.md](README.md) or the [detailed illustrated tutorial](https://my.feishu.cn/wiki/CGrDw0T76iNFuskmwxdcWrpinPb).
16 changes: 12 additions & 4 deletions README.md
@@ -80,6 +80,9 @@ cd GenericAgent
# 2. Install minimal dependencies
pip install streamlit pywebview

# 2.5 Optional: install the preferred Qt desktop shell
pip install PySide6

# 3. Configure API Key
cp mykey_template.py mykey.py
# Edit mykey.py and fill in your LLM API Key
@@ -88,6 +91,8 @@ cp mykey_template.py mykey.py
python launch.pyw
```

`launch.pyw` now prefers the Qt desktop frontend when `PySide6` is available, and falls back to the Streamlit + `pywebview` shell otherwise.

Full guide: [GETTING_STARTED.md](GETTING_STARTED.md)

---
@@ -108,10 +113,10 @@ python frontends/tgapp.py

### Alternative App Frontends

Besides the default Streamlit web UI, you can also try other frontend styles:
Besides the default launch flow, you can also try other frontend styles directly:

```bash
python frontends/qtapp.py # Qt-based desktop app
python frontends/qtapp.py # Qt desktop app with Route Center
streamlit run frontends/stapp2.py # Alternative Streamlit UI
```

@@ -271,6 +276,7 @@ cd GenericAgent

# 2. Install minimal dependencies
pip install streamlit pywebview
pip install PySide6 # Optional: enable the Qt desktop shell, preferred by default

# 3. Configure API Key
cp mykey_template.py mykey.py
@@ -280,6 +286,8 @@ cp mykey_template.py mykey.py
python launch.pyw
```

If `PySide6` is installed locally, `launch.pyw` prefers the Qt desktop frontend; otherwise it automatically falls back to `Streamlit + pywebview`.

Full onboarding guide: [GETTING_STARTED.md](GETTING_STARTED.md).

📖 Beginner's guide (illustrated): [Feishu doc](https://my.feishu.cn/wiki/CGrDw0T76iNFuskmwxdcWrpinPb)
@@ -370,10 +378,10 @@ dingtalk_allowed_users = ["your_staff_id"] # or ['*']

### Other App Frontends

Besides the default Streamlit web UI, you can also try other frontend styles
Besides the default launch flow, you can also directly try other frontend styles

```bash
python frontends/qtapp.py # Qt-based desktop app
python frontends/qtapp.py # Qt-based desktop app (with Route Center)
streamlit run frontends/stapp2.py # An alternative Streamlit-style UI
```

191 changes: 163 additions & 28 deletions agentmain.py
@@ -9,11 +9,12 @@
from llmcore import LLMSession, ToolClient, ClaudeSession, MixinSession, NativeToolClient, NativeClaudeSession, NativeOAISession
from agent_loop import agent_runner_loop
from ga import GenericAgentHandler, smart_format, get_global_memory, format_error, consume_file
from ga_switch import get_service

script_dir = os.path.dirname(os.path.abspath(__file__))
def load_tool_schema(suffix=''):
global TOOLS_SCHEMA
TS = open(os.path.join(script_dir, f'assets/tools_schema{suffix}.json'), 'r', encoding='utf-8').read()
with open(os.path.join(script_dir, f'assets/tools_schema{suffix}.json'), 'r', encoding='utf-8') as f: TS = f.read()
TOOLS_SCHEMA = json.loads(TS if os.name == 'nt' else TS.replace('powershell', 'bash'))
load_tool_schema()

@@ -43,47 +44,181 @@ class GeneraticAgent:
def __init__(self):
script_dir = os.path.dirname(os.path.abspath(__file__))
os.makedirs(os.path.join(script_dir, 'temp'), exist_ok=True)
self.lock = threading.Lock()
self.task_dir = None
self.history = []
self.task_queue = queue.Queue()
self.is_running = False; self.stop_sig = False
self.llm_no = 0; self.llmclient = None; self.llmclients = []
self.config_source = 'legacy'; self.config_meta = {}
self.ga_switch = get_service()
self.inc_out = False
self.handler = None; self.verbose = True
self._reload_clients(initial=True)

def _sync_tool_schema(self):
name = self.get_llm_name().lower()
if 'glm' in name or 'minimax' in name or 'kimi' in name: load_tool_schema('_cn')
else: load_tool_schema()

def _tag_client(self, client, *, route_name=None, route_kind='single', backend_kind=None, members=None):
client.ga_switch_route_id = getattr(client, 'ga_switch_route_id', None)
client.ga_switch_route_name = route_name or getattr(client, 'ga_switch_route_name', getattr(client.backend, 'name', ''))
client.ga_switch_route_kind = route_kind
client.ga_switch_backend_kind = backend_kind or getattr(client, 'ga_switch_backend_kind', None)
client.ga_switch_members = list(members or getattr(client, 'ga_switch_members', []))
return client

def build_llmclients_from_store(self):
return self.ga_switch.build_clients_from_store()

def build_llmclients_from_legacy_mykey(self):
from llmcore import mykeys
llm_sessions = []
for k, cfg in mykeys.items():
if not any(x in k for x in ['api', 'config', 'cookie']): continue
if not any(x in k for x in ['api', 'config', 'cookie']):
continue
try:
if 'native' in k and 'claude' in k: llm_sessions += [NativeToolClient(NativeClaudeSession(cfg=cfg))]
elif 'native' in k and 'oai' in k: llm_sessions += [NativeToolClient(NativeOAISession(cfg=cfg))]
elif 'claude' in k: llm_sessions += [ToolClient(ClaudeSession(cfg=cfg))]
elif 'oai' in k: llm_sessions += [ToolClient(LLMSession(cfg=cfg))]
elif 'mixin' in k: llm_sessions += [{'mixin_cfg': cfg}]
except: pass
if 'native' in k and 'claude' in k:
llm_sessions.append(self._tag_client(NativeToolClient(NativeClaudeSession(cfg=cfg)), route_name=cfg.get('name') or k, backend_kind='native_claude'))
elif 'native' in k and 'oai' in k:
llm_sessions.append(self._tag_client(NativeToolClient(NativeOAISession(cfg=cfg)), route_name=cfg.get('name') or k, backend_kind='native_oai'))
elif 'claude' in k:
llm_sessions.append(self._tag_client(ToolClient(ClaudeSession(cfg=cfg)), route_name=cfg.get('name') or k, backend_kind='claude_text'))
elif 'oai' in k:
llm_sessions.append(self._tag_client(ToolClient(LLMSession(cfg=cfg)), route_name=cfg.get('name') or k, backend_kind='oai_text'))
elif 'mixin' in k:
llm_sessions.append({'mixin_cfg': cfg, 'route_name': cfg.get('name') or k})
except Exception as e:
print(f'[WARN] Failed to init legacy session {k}: {e}')
for i, s in enumerate(llm_sessions):
if isinstance(s, dict) and 'mixin_cfg' in s:
try:
mixin = MixinSession(llm_sessions, s['mixin_cfg'])
if isinstance(mixin._sessions[0], (NativeClaudeSession, NativeOAISession)): llm_sessions[i] = NativeToolClient(mixin)
else: llm_sessions[i] = ToolClient(mixin)
except Exception as e: print(f'[WARN] Failed to init MixinSession with cfg {s["mixin_cfg"]}: {e}')
self.llmclients = llm_sessions
self.lock = threading.Lock()
self.task_dir = None
self.history = []
self.task_queue = queue.Queue()
self.is_running = False; self.stop_sig = False
self.llm_no = 0; self.inc_out = False
self.handler = None; self.verbose = True
client = NativeToolClient(mixin) if isinstance(mixin._sessions[0], (NativeClaudeSession, NativeOAISession)) else ToolClient(mixin)
llm_sessions[i] = self._tag_client(client, route_name=s['route_name'], route_kind='failover', backend_kind='mixin')
except Exception as e:
print(f'[WARN] Failed to init MixinSession with cfg {s["mixin_cfg"]}: {e}')
llm_sessions = [s for s in llm_sessions if not isinstance(s, dict)]
return llm_sessions, {'source': 'legacy', 'active_index': min(self.llm_no, max(len(llm_sessions) - 1, 0)), 'routes': []}

def _build_client_set(self):
if self.ga_switch.use_structured_config() and self.ga_switch.has_usable_routes():
try:
clients, meta = self.build_llmclients_from_store()
if clients:
meta = dict(meta or {}, source='store')
return clients, meta
except Exception as e:
print(f'[WARN] Structured config load failed, fallback to legacy: {e}')
return self.build_llmclients_from_legacy_mykey()

def _reload_clients(self, *, initial=False, preserve_history=True):
old_client = self.llmclient
old_history = getattr(old_client.backend, 'history', None) if old_client and preserve_history else None
old_route_id = getattr(old_client, 'ga_switch_route_id', None) if old_client else None
old_idx = self.llm_no
clients, meta = self._build_client_set()
self.llmclients = clients
self.config_source = meta.get('source', 'legacy')
self.config_meta = meta
if not self.llmclients:
self.llm_no = 0
self.llmclient = None
return []
target_idx = meta.get('active_index', 0)
if not initial and preserve_history:
if self.config_source == 'store' and old_route_id is not None:
matched_idx = next((i for i, client in enumerate(self.llmclients) if getattr(client, 'ga_switch_route_id', None) == old_route_id), None)
if matched_idx is not None:
target_idx = matched_idx
elif old_idx < len(self.llmclients):
target_idx = old_idx
self.llm_no = target_idx % len(self.llmclients)
self.llmclient = self.llmclients[self.llm_no]
if preserve_history and old_history is not None:
self.llmclient.backend.history = old_history
if self.config_source == 'store' and getattr(self.llmclient, 'ga_switch_route_id', None) is not None:
self.ga_switch.set_active_route(self.llmclient.ga_switch_route_id)
self._sync_tool_schema()
return self.llmclients
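The active-index selection inside `_reload_clients` can be illustrated standalone. This sketch mirrors the route-id-first, old-index-second preference shown above; the names are simplified and plain dicts stand in for client objects:

```python
# Standalone sketch of the reload preference above: after rebuilding the
# client list, first try to match the previously active route id, then fall
# back to the old index if it is still in range, else a default.
def pick_active_index(clients, old_route_id, old_idx, default_idx=0):
    matched = next(
        (i for i, c in enumerate(clients) if c.get("route_id") == old_route_id),
        None,
    )
    if matched is not None:
        return matched
    if old_idx < len(clients):
        return old_idx
    return default_idx

clients = [{"route_id": "a"}, {"route_id": "b"}, {"route_id": "c"}]
print(pick_active_index(clients, "b", 0))   # → 1  (route-id match wins)
print(pick_active_index(clients, "zz", 2))  # → 2  (falls back to old index)
```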

def next_llm(self, n=-1):
if not self.llmclients:
self.llmclient = None
return None
self.llm_no = ((self.llm_no + 1) if n < 0 else n) % len(self.llmclients)
lastc = self.llmclient
self.llmclient = self.llmclients[self.llm_no]
self.llmclient.backend.history = lastc.backend.history
self.llmclient.last_tools = ''
name = self.get_llm_name().lower()
if 'glm' in name or 'minimax' in name or 'kimi' in name: load_tool_schema('_cn')
else: load_tool_schema()
def list_llms(self): return [(i, self.get_llm_name(b), i == self.llm_no) for i, b in enumerate(self.llmclients)]
def get_llm_name(self, b=None):
b = self.llmclient if b is None else b
return f"{type(b.backend).__name__}/{b.backend.name}" if not isinstance(b, dict) else "BADCONFIG_MIXIN"
if lastc is not None:
self.llmclient.backend.history = lastc.backend.history
if hasattr(self.llmclient, 'last_tools'):
self.llmclient.last_tools = ''
if self.config_source == 'store' and getattr(self.llmclient, 'ga_switch_route_id', None) is not None:
self.ga_switch.set_active_route(self.llmclient.ga_switch_route_id)
self._sync_tool_schema()
return self.llmclient

def set_active_route(self, route_id_or_idx):
if self.config_source == 'store':
target_idx = next((i for i, client in enumerate(self.llmclients) if getattr(client, 'ga_switch_route_id', None) == route_id_or_idx), None)
if target_idx is None and isinstance(route_id_or_idx, int) and 0 <= route_id_or_idx < len(self.llmclients):
target_idx = route_id_or_idx
if target_idx is None:
raise ValueError(f'Unknown route id: {route_id_or_idx}')
self.next_llm(target_idx)
return self.describe_llms()[self.llm_no]
if not isinstance(route_id_or_idx, int):
raise ValueError(f'Legacy mode only supports index switching, got {route_id_or_idx!r}')
self.next_llm(route_id_or_idx)
return self.describe_llms()[self.llm_no]

def reload_llm_config(self, preserve_history=True):
if self.is_running:
raise RuntimeError('Cannot reload LLM config while agent is running.')
self._reload_clients(initial=False, preserve_history=preserve_history)
return self.describe_llms()

def describe_llms(self):
result = []
for idx, client in enumerate(self.llmclients):
backend = client.backend
diag = backend.describe_diagnostics() if hasattr(backend, 'describe_diagnostics') else {}
members = getattr(client, 'ga_switch_members', [])
route_name = getattr(client, 'ga_switch_route_name', getattr(backend, 'name', ''))
backend_class = type(backend).__name__
item = {
'idx': idx,
'active': idx == self.llm_no,
'source': self.config_source,
'route_id': getattr(client, 'ga_switch_route_id', None),
'name': route_name,
'display_name': f"{route_name} [{backend_class}/{backend.name}]",
'route_kind': getattr(client, 'ga_switch_route_kind', 'single'),
'backend_class': backend_class,
'backend_kind': getattr(client, 'ga_switch_backend_kind', getattr(backend, 'backend_kind', None)),
'provider_id': getattr(backend, 'provider_id', None),
'provider_name': getattr(backend, 'provider_name', getattr(backend, 'name', None)),
'model': getattr(backend, 'model', None),
'api_mode': getattr(backend, 'api_mode', None),
'native_tools': isinstance(client, NativeToolClient) or 'Native' in backend_class,
'member_names': [m.get('name', '') if isinstance(m, dict) else str(m) for m in members],
'active_member_name': getattr(backend, 'active_member_name', getattr(backend, 'name', None)),
'last_switch_reason': getattr(backend, 'last_switch_reason', ''),
'spring_back_seconds': getattr(backend, '_spring_sec', None),
}
item.update(diag)
result.append(item)
return result

def list_llms(self):
return [(item['idx'], item['display_name'], item['active']) for item in self.describe_llms()]

def get_llm_name(self):
if self.llmclient is None:
return 'No LLM'
item = self.describe_llms()[self.llm_no]
return item['display_name']

def abort(self):
if not self.is_running: return