Prerequisites

  • Basic Tracecat deployment with Docker Compose
  • Minimum 10GB of disk space

Instructions

Tracecat supports self-hosted LLMs through Ollama.

Deploying self-hosted LLMs is resource intensive, with large downloads: the Ollama Docker image alone is over 1.5GB, and model weights vary greatly in size. Only models smaller than 5GB are currently supported.

1. Configure open source models

Specify the open source models you wish to use in Tracecat by setting the TRACECAT__PRELOAD_OSS_MODELS environment variable in the .env file.

For example, to preload the llama3.2 model, set the following:

TRACECAT__PRELOAD_OSS_MODELS=llama3.2
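Note that the Ollama service defined in the next step reads its image tag from OLLAMA__VERSION, so that variable must also be set in .env. A minimal sketch of the relevant entries (the latest tag here is an assumption; pin a specific Ollama release if you need reproducible deployments):

# .env (relevant entries only)
TRACECAT__PRELOAD_OSS_MODELS=llama3.2
OLLAMA__VERSION=latest  # assumed tag; pin a tested Ollama release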
2. Configure the Ollama service

Uncomment the ollama service and the ollama volume at the bottom of the docker-compose.yml file.

ollama:
  image: ollama/ollama:${OLLAMA__VERSION}
  ports:
    - 11434:11434          # Ollama REST API
  networks:
    - core
  volumes:
    - ollama:/root/.ollama # persists downloaded model weights

volumes:
  core-db:
  temporal-db:
  ollama:
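After editing the file, you can sanity-check that it still parses by rendering the resolved configuration with a standard Compose command:

# Print the fully resolved Compose configuration; the uncommented
# ollama service and volume should appear in the output.
docker compose config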
3. Deploy

Deploy Tracecat along with the Ollama service:

docker compose up -d
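On first boot, Ollama pulls the preloaded model weights, which can take a while. Assuming the default service name and the 11434 port mapping from the previous step, you can follow the download in the container logs and then list the locally available models via Ollama's standard /api/tags endpoint:

# Follow the Ollama container logs while model weights download
docker compose logs -f ollama

# Once the pull completes, list the models Ollama has available
curl http://localhost:11434/api/tags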
4. AI Action

You can now use Tracecat's AI action to call your preloaded open source LLMs, for example the llama3.2 model.
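The exact argument names depend on your Tracecat version, so treat the following as a hedged sketch rather than the definitive schema; it assumes the AI action accepts a model name and a prompt:

# Hypothetical AI action arguments; field names are assumptions,
# not the definitive Tracecat schema.
model: llama3.2
prompt: Summarize this alert in one sentence.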
