Use a custom LLM provider when an agent should call your own OpenAI-compatible gateway instead of Tracecat's managed LiteLLM gateway.
## Routing model

Each agent uses its own model configuration to decide where its LLM requests go.

| Agent model configuration | Request route |
|---|---|
| `passthrough: true` | Direct to that agent's configured `base_url` |
| `passthrough: false` or unset | Tracecat's managed LiteLLM gateway |
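A rough sketch of this rule, assuming a hypothetical `resolve_route` helper and a placeholder managed-gateway URL (neither is part of Tracecat's actual API):

```python
# Minimal sketch of the routing rule in the table above. The constant and
# helper names are illustrative, not Tracecat internals.
MANAGED_LITELLM_URL = "https://litellm.internal.example/v1"  # assumed placeholder

def resolve_route(model_config: dict) -> str:
    """Return the base URL an agent's LLM requests should target."""
    if model_config.get("passthrough"):
        # passthrough: true -> go directly to the agent's configured base_url
        return model_config["base_url"]
    # passthrough: false or unset -> Tracecat's managed LiteLLM gateway
    return MANAGED_LITELLM_URL

print(resolve_route({"passthrough": True, "base_url": "https://gateway.example/v1"}))
# -> https://gateway.example/v1
print(resolve_route({"model": "gpt-4o"}))
# -> https://litellm.internal.example/v1 (managed LiteLLM)
```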
## Root agents

For a root agent, Tracecat keys direct passthrough routing by the model string the root agent sends. For example, requests that send `model: customer-alias` go directly to `https://customer-litellm.example`.
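A minimal sketch of that lookup, assuming a hypothetical alias-to-URL mapping (the table and function below are illustrative, not Tracecat internals):

```python
# Hypothetical lookup keyed by the model string the root agent sends.
ROOT_PASSTHROUGH_ROUTES = {
    "customer-alias": "https://customer-litellm.example",
}

def route_root_request(model: str) -> str | None:
    """Return the custom provider base URL, or None for managed LiteLLM."""
    return ROOT_PASSTHROUGH_ROUTES.get(model)

route_root_request("customer-alias")  # -> "https://customer-litellm.example"
route_root_request("gpt-4o")          # -> None (falls back to managed LiteLLM)
```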
## Subagents

For a subagent, Tracecat keys direct passthrough routing by the subagent's scoped model route. This lets Tracecat route each preset agent independently, even when several agents share the same sandbox process. For example, a subagent with passthrough enabled sends its requests directly to its configured provider, such as `https://child-litellm.example`, while requests for other subagents fall back to managed LiteLLM unless those subagents also enable passthrough.
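The same idea sketched for subagents, with the scoped route modeled as a subagent name plus model string (the names and tuples are illustrative):

```python
# Hypothetical lookup keyed by (subagent, model) so each preset agent routes
# independently, even when several share the same sandbox process.
SUBAGENT_PASSTHROUGH_ROUTES = {
    ("researcher", "child-alias"): "https://child-litellm.example",
}

def route_subagent_request(subagent: str, model: str) -> str | None:
    """Return the subagent's provider base URL, or None for managed LiteLLM."""
    return SUBAGENT_PASSTHROUGH_ROUTES.get((subagent, model))

route_subagent_request("researcher", "child-alias")
# -> "https://child-litellm.example"
route_subagent_request("summarizer", "child-alias")
# -> None: other subagents fall back to managed LiteLLM
```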
## Base URL format

Store custom provider base URLs in OpenAI-compatible form, such as `https://gateway.example/v1`.

Tracecat strips the trailing version segment before forwarding sandbox requests because SDK clients send paths such as `/v1/messages`. This prevents doubled paths such as `/v1/v1/messages`.
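A sketch of that normalization, assuming a plain suffix strip of `/v1` (the helper name is illustrative):

```python
def normalize_base_url(base_url: str) -> str:
    """Strip a trailing /v1 so SDK paths like /v1/messages aren't doubled."""
    return base_url.rstrip("/").removesuffix("/v1")

base = normalize_base_url("https://gateway.example/v1")
# base == "https://gateway.example"
# The SDK client then sends POST /v1/messages, so the forwarded URL is
# https://gateway.example/v1/messages rather than the doubled
# https://gateway.example/v1/v1/messages.
```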
## Credentials

Tracecat resolves passthrough credentials from the custom provider selected by the agent's model configuration. If a root agent and a subagent use different passthrough providers, each route uses its own provider credentials. Managed LiteLLM requests keep the sandbox's managed gateway token.
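A sketch of that resolution rule, with hypothetical provider records, a `provider` field, and token values invented for illustration:

```python
# Hypothetical provider registry; field names, keys, and values are illustrative.
CUSTOM_PROVIDERS = {
    "root-gateway": {"base_url": "https://customer-litellm.example", "api_key": "sk-root-example"},
    "child-gateway": {"base_url": "https://child-litellm.example", "api_key": "sk-child-example"},
}
SANDBOX_MANAGED_TOKEN = "managed-gateway-token"  # placeholder

def resolve_credentials(model_config: dict) -> str:
    """Use the selected provider's key for passthrough, else the managed token."""
    if model_config.get("passthrough"):
        return CUSTOM_PROVIDERS[model_config["provider"]]["api_key"]
    return SANDBOX_MANAGED_TOKEN
```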
## Related pages

- See AI agent for the `ai.agent` and `ai.preset_agent` action reference.
- See Secrets and variables for agent secret handling.