feat(core/hooks): add hooks settings with Z.AI glm-4.7 provider support #500
am-will wants to merge 2 commits into just-every:main
Conversation
Add comprehensive hooks system with configurable model/provider settings. Supports Z.AI glm-4.7 as a hooks model provider with reasoning_content streaming and tool call enhancements.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: f6989a22b6
```rust
#[serde(default)]
pub prompt: Option<String>,
#[serde(alias = "run", deserialize_with = "deserialize_command_vec")]
pub command: Vec<String>,
```
Make prompt hooks configurable without a command
The new prompt hook type still has command as a required field with no #[serde(default)] or Option, so a type: prompt hook that omits command will fail to deserialize and the whole config load will error. That makes prompt-only hooks unusable unless users add a dummy command, which defeats the purpose of hook_type = prompt. Consider making command optional/defaulted and only validating it for Command hooks.
```rust
} else if model_provider_id.eq_ignore_ascii_case("zai") {
    model_provider = model_providers
        .get("openai")
        .ok_or_else(|| {
            std::io::Error::new(
```
Avoid forcing Z.AI models onto the OpenAI provider
This branch rewrites model_provider_id to openai whenever the provider is zai but the model is not exactly glm-4.7. If a user explicitly selects Z.AI for another GLM model (e.g., glm-4.6/4.5 are documented in zai docs), the request will be silently redirected to the OpenAI endpoint with an unsupported model name, leading to runtime failures. It would be safer to respect the configured zai provider for other GLM models or gate this override more narrowly.
I didn't mean to push this, sorry guys. I'm just testing a few things and wanted to fork it. I did get the Z.AI coding plan working no problem using the OpenAI endpoint, just not streaming.
Summary
Test plan
./build-fast.sh

🤖 Generated with Claude Code