feat(core/hooks): add hooks settings with Z.AI glm-4.7 provider support #500

Closed
am-will wants to merge 2 commits into just-every:main from am-will:hooks-zai-provider

Conversation

@am-will am-will commented Jan 6, 2026

Summary

  • Add comprehensive hooks system with configurable model/provider settings
  • Support Z.AI glm-4.7 as hooks model provider
  • Add reasoning_content streaming for GLM-4.7
  • Enhance tool call payload structure for hooks
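
As a concrete illustration of the "configurable model/provider settings" above, a hooks configuration might look like the following. This is a hypothetical sketch: every key name here is an assumption for illustration, not taken from the PR's actual schema.

```toml
# Hypothetical sketch; key names are illustrative, not from the PR.
[hooks]
model = "glm-4.7"
model_provider = "zai"

[[hooks.hook]]
hook_type = "command"
command = ["./notify.sh", "done"]

[[hooks.hook]]
hook_type = "prompt"
prompt = "Summarize the change before committing."
```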

Test plan

  • Verify hooks work with Z.AI provider
  • Test reasoning_content streaming
  • Validate hook payload structure
  • Run ./build-fast.sh

🤖 Generated with Claude Code

am-will and others added 2 commits January 5, 2026 19:22
Add comprehensive hooks system with configurable model/provider settings.
Supports Z.AI glm-4.7 as a hooks model provider with reasoning_content
streaming and tool call enhancements.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
@am-will am-will closed this Jan 6, 2026

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: f6989a22b6

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment on lines +1503 to 1506
```rust
#[serde(default)]
pub prompt: Option<String>,
#[serde(alias = "run", deserialize_with = "deserialize_command_vec")]
pub command: Vec<String>,
```

P2: Make prompt hooks configurable without a command

The new prompt hook type still has command as a required field with no #[serde(default)] or Option, so a type: prompt hook that omits command will fail to deserialize and the whole config load will error. That makes prompt-only hooks unusable unless users add a dummy command, which defeats the purpose of hook_type = prompt. Consider making command optional/defaulted and only validating it for Command hooks.

Useful? React with 👍 / 👎.
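
One way the suggested fix could look is sketched below. This is a hypothetical sketch, not the PR's actual code: the type and field names (`HookType`, `HookConfig`, `validate`) are assumptions, and serde is elided in favor of a plain validation step. The idea is to make `command` optional at the type level and enforce it per hook kind, so a `type: prompt` hook can omit `command` without failing the whole config load.

```rust
// Hypothetical sketch; names are assumptions, not the PR's actual types.
#[derive(Debug)]
enum HookType {
    Prompt,
    Command,
}

#[derive(Debug)]
struct HookConfig {
    hook_type: HookType,
    prompt: Option<String>,
    // Was a required `Vec<String>` with no default, which forced every
    // hook kind (including prompt-only hooks) to supply a command.
    command: Option<Vec<String>>,
}

// Validate per hook kind instead of making `command` structurally required.
fn validate(cfg: &HookConfig) -> Result<(), String> {
    match cfg.hook_type {
        HookType::Command if cfg.command.as_ref().map_or(true, |c| c.is_empty()) => {
            Err("command hooks require a non-empty `command`".into())
        }
        HookType::Prompt if cfg.prompt.is_none() => {
            Err("prompt hooks require a `prompt`".into())
        }
        _ => Ok(()),
    }
}

fn main() {
    let prompt_only = HookConfig {
        hook_type: HookType::Prompt,
        prompt: Some("review the diff".into()),
        command: None,
    };
    // Prompt-only hooks now pass validation without a dummy command.
    assert!(validate(&prompt_only).is_ok());
    println!("prompt-only hook accepted");
}
```

With serde, the same shape would follow from `command: Option<Vec<String>>` plus `#[serde(default)]`, with the command-presence check moved into post-deserialization validation.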

Comment on lines +1113 to +1117
```rust
} else if model_provider_id.eq_ignore_ascii_case("zai") {
    model_provider = model_providers
        .get("openai")
        .ok_or_else(|| {
            std::io::Error::new(
```

P2: Avoid forcing Z.AI models onto the OpenAI provider

This branch rewrites model_provider_id to openai whenever the provider is zai but the model is not exactly glm-4.7. If a user explicitly selects Z.AI for another GLM model (e.g., glm-4.6/4.5 are documented in zai docs), the request will be silently redirected to the OpenAI endpoint with an unsupported model name, leading to runtime failures. It would be safer to respect the configured zai provider for other GLM models or gate this override more narrowly.

Useful? React with 👍 / 👎.
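
A narrower gate along the lines the review suggests could look like this. This is a hypothetical sketch, not the PR's actual code: the function name and the string provider ids are assumptions. An explicitly configured `zai` provider is kept for the whole GLM family rather than being rewritten to `openai` for every model other than glm-4.7.

```rust
// Hypothetical sketch; function and provider ids are assumptions.
// Keep an explicit `zai` provider for any GLM model instead of silently
// redirecting glm-4.5/glm-4.6 requests to the OpenAI endpoint.
fn resolve_provider(model_provider_id: &str, model: &str) -> String {
    if model_provider_id.eq_ignore_ascii_case("zai")
        && !model.to_ascii_lowercase().starts_with("glm-")
    {
        // Only fall back when the model is clearly outside the GLM family.
        return "openai".to_string();
    }
    model_provider_id.to_string()
}

fn main() {
    // glm-4.6 stays on the configured Z.AI provider.
    assert_eq!(resolve_provider("zai", "glm-4.6"), "zai");
    // A non-GLM model configured under zai falls back to openai.
    assert_eq!(resolve_provider("zai", "gpt-4o"), "openai");
    // Other providers pass through untouched.
    assert_eq!(resolve_provider("openai", "gpt-4o"), "openai");
    println!("provider routing ok");
}
```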

@am-will
Author

am-will commented Jan 6, 2026

I didn't mean to push this, sorry guys. I'm just testing a few things and wanted to fork it. I did get the Z.AI coding plan working with no problems using the OpenAI endpoint, though. Just not streaming.
