
Fix axios SSRF vulnerability and wire task-based vision model routing#3

Merged
williamkasasa merged 3 commits into main from copilot/vscode-mns19058-2wze
Apr 10, 2026

Conversation


Copilot AI commented Apr 9, 2026

Bumps axios from ^1.12.2 to ^1.15.0 to patch a NO_PROXY hostname normalisation bypass that enables SSRF (fixed in 1.15.0). Also wires task-based model routing for the AG-Claw multimodal vision pipeline and adds HuggingFace hosted model support via LiteLLM.

Security

  • promptfoo/package.json: axios ^1.12.2 → ^1.15.0; lockfile regenerated, resolved to 1.15.0

Task-based model routing

  • promptfoo/providers/agclawVisionProvider.js: Adds defaultModelForRoute(routeKey): caption/hmi routes default to qwen2.5vl:7b, ocr to gemma3:4b, and all others to qwen2.5vl:3b. Per-route env overrides (AGCLAW_PROMPTFOO_VISION_MODEL_<ROUTE>) still take precedence.
function defaultModelForRoute(routeKey) {
  switch (routeKey) {
    case "caption":
    case "hmi":   return "qwen2.5vl:7b";
    case "ocr":   return "gemma3:4b";
    default:      return "qwen2.5vl:3b";
  }
}
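
As a sketch, the per-route override precedence described above could compose with these defaults roughly as follows. resolveVisionModel is an illustrative name, not necessarily what agclawVisionProvider.js actually exports; the default table is repeated so the snippet is self-contained.

```javascript
// Default model table from the provider (repeated here for self-containment).
function defaultModelForRoute(routeKey) {
  switch (routeKey) {
    case "caption":
    case "hmi":   return "qwen2.5vl:7b";
    case "ocr":   return "gemma3:4b";
    default:      return "qwen2.5vl:3b";
  }
}

// Hypothetical resolver: a per-route env var such as
// AGCLAW_PROMPTFOO_VISION_MODEL_OCR wins over the built-in default.
function resolveVisionModel(routeKey, env = process.env) {
  const override = env[`AGCLAW_PROMPTFOO_VISION_MODEL_${routeKey.toUpperCase()}`];
  return override || defaultModelForRoute(routeKey);
}
```

With this shape, unsetting the env var falls back to the route default, so local Ollama tags and hosted overrides can coexist per route.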

HuggingFace hosted models

  • litellm/agclaw-config.local.yaml: Adds vision-caption-hosted and vision-hmi-hosted (both huggingface/Qwen/Qwen2.5-VL-7B-Instruct) and vision-ocr-hosted (huggingface/google/gemma-3-4b-it), all keyed to os.environ/HF_TOKEN.
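
The hosted entries likely look something like the following LiteLLM model_list fragment. This is a sketch inferred from the bullet above, not the literal file contents; the model_name/litellm_params keys and the os.environ/ secret reference follow LiteLLM's config schema.

```yaml
# Sketch of agclaw-config.local.yaml entries (assumed, not verbatim).
model_list:
  - model_name: vision-caption-hosted
    litellm_params:
      model: huggingface/Qwen/Qwen2.5-VL-7B-Instruct
      api_key: os.environ/HF_TOKEN
  - model_name: vision-hmi-hosted
    litellm_params:
      model: huggingface/Qwen/Qwen2.5-VL-7B-Instruct
      api_key: os.environ/HF_TOKEN
  - model_name: vision-ocr-hosted
    litellm_params:
      model: huggingface/google/gemma-3-4b-it
      api_key: os.environ/HF_TOKEN
```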

LiteLLM launcher

  • litellm/start.sh: Bash launcher; warns on missing HF_TOKEN, accepts --config/--port flags and LITELLM_CONFIG/LITELLM_PORT env vars.
  • docker/docker-compose.litellm.yml: Standalone Compose service using ghcr.io/berriai/litellm:main-latest, mounts config read-only, forwards HF_TOKEN and LITELLM_MASTER_KEY.
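
The launcher's flag/env precedence could be sketched as below. This is an assumption about start.sh's shape, not its literal contents; the default port of 4000 and the final echo (standing in for the real exec of the litellm CLI) are illustrative.

```shell
#!/usr/bin/env bash
# Sketch of litellm/start.sh behavior: --config/--port flags win over
# LITELLM_CONFIG/LITELLM_PORT env vars, which win over built-in defaults.
set -u

CONFIG="${LITELLM_CONFIG:-litellm/agclaw-config.local.yaml}"
PORT="${LITELLM_PORT:-4000}"   # default port is an assumption

while [ $# -gt 0 ]; do
  case "$1" in
    --config) CONFIG="$2"; shift 2 ;;
    --port)   PORT="$2";   shift 2 ;;
    *) echo "unknown flag: $1" >&2; exit 1 ;;
  esac
done

# Warn (but do not abort) when HF_TOKEN is missing, as described above.
if [ -z "${HF_TOKEN:-}" ]; then
  echo "warning: HF_TOKEN not set; hosted HuggingFace routes will fail" >&2
fi

echo "litellm --config $CONFIG --port $PORT"
# The real launcher would instead do: exec litellm --config "$CONFIG" --port "$PORT"
```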
Original prompt

use models as well from huggingface yee bro ya know how to install rigjt?

To proceed with your tasks, here's a structured plan based on your requests:

  1. Wire Task-Based Routing:

    • Set up the routing so that the caption/HMI uses the qwen2.5vl:7b model and the OCR uses the gemma3:4b model. This will involve modifying the routing logic in your code to direct requests to the appropriate models based on the task.
  2. Hosted Credential Path:

    • Identify and provide a hosted credential path for accessing the Qwen 7B VL model. This will allow you to run the AG-Claw multimodal gates against a single hosted model.
  3. Model Installation:

    • Use Hugging Face models as needed. You can install models from Hugging Face using the transformers library. Here’s a quick guide on how to do that:
      pip install transformers
    • Then, you can load a model like this in your Python code:
      from transformers import AutoModel, AutoTokenizer
      
      model_name = "huggingface_model_name"  # Replace with the actual model name
      model = AutoModel.from_pretrained(model_name)
      tokenizer = AutoTokenizer.from_pretrained(model_name)
  4. Continue with Your Todos:

    • Build the promptfoo eval cases from the imported HF assets.
    • Add a LiteLLM starter config and launcher.
    • Conduct the real hosted/live vision validation pass.
    • Run and harden the broader E2E suite.
  5. Check Resources:

    • Review the links you provided for the latest models and AI research papers. This can help in selecting the best models for your tasks.
  6. Validation and Testing:

    • After wiring the routing and installing the models, run tests to ensure everything is functioning as expected. Validate the outputs from both the caption and OCR tasks.

If you need help with specific code snippets or further details on any of these steps, let me know!

Created from VS Code.

Copilot AI and others added 2 commits April 9, 2026 22:17
Copilot AI changed the title [WIP] Set up task-based routing for Hugging Face models Fix axios SSRF vulnerability and wire task-based vision model routing Apr 9, 2026
Copilot AI requested a review from williamkasasa April 9, 2026 22:22
Owner

@williamkasasa williamkasasa left a comment


I approve

@williamkasasa williamkasasa marked this pull request as ready for review April 10, 2026 22:13
@williamkasasa
Owner

@copilot please checkout and push please

@williamkasasa williamkasasa merged commit 30da015 into main Apr 10, 2026
Copilot stopped work on behalf of williamkasasa due to an error April 10, 2026 22:14
Copilot AI requested a review from williamkasasa April 10, 2026 22:14
@williamkasasa williamkasasa deleted the copilot/vscode-mns19058-2wze branch April 10, 2026 23:10
