Labels: enhancement (New feature or request)
Description
Problem
When a CLI provider script needs to call an LLM (e.g. a RAG pipeline that does search then generation), there is no way to delegate the LLM call to a named target defined in targets.yaml. The script must make the HTTP call itself, which means:
- LLM credentials and endpoint details must be hardcoded in the script (or injected via `env:` on the target)
- Swapping the LLM backend (e.g. from OpenAI to Azure OpenAI, Claude, or Copilot) requires changing the script, not just `targets.yaml`
- The RAG/search layer is tightly coupled to the LLM layer
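To make the coupling concrete, here is a minimal sketch (hypothetical, not an actual script from this repo) of what a CLI provider script has to do today: because the script assembles the LLM request itself, the endpoint, key, and model names (`OPENAI_*` here, assumed names) are baked into the script rather than living in targets.yaml.

```js
// Hypothetical sketch of the current coupling: the pipeline script must
// assemble the LLM request itself, so backend details leak into the script.
function buildLlmRequest(prompt, env) {
  // Swapping backends means editing these lines, not just config.
  return {
    url: `${env.OPENAI_ENDPOINT}/chat/completions`,
    headers: {
      Authorization: `Bearer ${env.OPENAI_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: env.OPENAI_MODEL,
      messages: [{ role: 'user', content: prompt }],
    }),
  };
}

const req = buildLlmRequest('hello', {
  OPENAI_ENDPOINT: 'https://api.openai.com/v1',
  OPENAI_API_KEY: 'sk-test',
  OPENAI_MODEL: 'gpt-4o',
});
console.log(req.url);
```

Every field above would need to change to target a different provider, which is exactly the coupling the proposal below removes.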
Proposed Solution
Add an `agentv invoke` sub-command that a CLI script can call to delegate the LLM step to a named target:

```
agentv invoke <target-name> --prompt-file <path> --output-file <path>
```

Example usage in a CLI script:
```js
const { spawnSync } = require('node:child_process');

// 1. Do search/retrieval, build prompt file
// 2. Delegate LLM call to the named target from targets.yaml
spawnSync('agentv', ['invoke', 'openai', '--prompt-file', builtPrompt, '--output-file', outputFile]);
```

targets.yaml:
```yaml
targets:
  - name: my-rag-pipeline
    provider: cli
    command: node .agentv/scripts/my-pipeline.cjs {PROMPT_FILE} {OUTPUT_FILE}
    grader_target: openai

  - name: openai
    provider: openai
    endpoint: $OPENAI_ENDPOINT
    api_key: $OPENAI_API_KEY
    model: $OPENAI_MODEL
```

Swapping to Claude is then just changing `openai` to `claude` in the command, with no script changes.
Benefits
- No credentials in scripts at all
- LLM backend is fully configurable from `targets.yaml`
- Enables multi-turn pipelines where the script drives retrieval and agentv drives generation
- Consistent with how agentv already owns provider config for LLM targets
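The "no credentials in scripts" benefit falls out of agentv resolving the `$VAR` references in targets.yaml itself at invoke time. A minimal sketch of such resolution (an assumption about how it could work, not agentv's actual implementation):

```js
// Hypothetical sketch: resolve `$NAME` values in a target definition from
// the environment, so secrets never appear in the provider script.
function resolveTarget(target, env) {
  const resolved = {};
  for (const [key, value] of Object.entries(target)) {
    resolved[key] =
      typeof value === 'string' && value.startsWith('$')
        ? env[value.slice(1)] ?? '' // look up $NAME in the environment
        : value;
  }
  return resolved;
}

const cfg = resolveTarget(
  {
    name: 'openai',
    provider: 'openai',
    endpoint: '$OPENAI_ENDPOINT',
    api_key: '$OPENAI_API_KEY',
    model: '$OPENAI_MODEL',
  },
  {
    OPENAI_ENDPOINT: 'https://api.openai.com/v1',
    OPENAI_API_KEY: 'sk-test',
    OPENAI_MODEL: 'gpt-4o',
  },
);
console.log(cfg.endpoint);
```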
Workaround
Currently using `env:` injection on the cli target to pass credentials explicitly, but the env var names are still hardcoded in the script.
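The workaround's residual coupling looks like this sketch (with `LLM_ENDPOINT` / `LLM_API_KEY` as assumed variable names): the target's `env:` block supplies the values, but the script still has to know the exact names, and a rename in targets.yaml breaks it at runtime.

```js
// Sketch of the current workaround: credentials arrive via `env:` injection,
// but the script still hardcodes the variable names it expects.
function readLlmConfig(env) {
  const endpoint = env.LLM_ENDPOINT;
  const apiKey = env.LLM_API_KEY;
  if (!endpoint || !apiKey) {
    // The coupling shows up as a runtime failure if the names drift.
    throw new Error('LLM_ENDPOINT and LLM_API_KEY must be injected via env:');
  }
  return { endpoint, apiKey };
}

const cfg = readLlmConfig({
  LLM_ENDPOINT: 'https://api.openai.com/v1',
  LLM_API_KEY: 'sk-test',
});
console.log(cfg.endpoint);
```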