Discover available Ollama models and get started with Ollama before configuring it in Continue.

Configuration

config.yaml
name: My Config
version: 0.0.1
schema: v1

models:
  - name: <MODEL_NAME>
    provider: ollama
    model: <MODEL_ID>
    apiBase: http://<MY_ENDPOINT>:11434 # only needed when running a remote instance of Ollama
Check out a more advanced configuration here.
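As a concrete illustration, a fully local setup might look like the following (the model name `llama3.1:8b` is just an assumed example; substitute whatever you pulled with `ollama pull`):

```yaml
name: My Config
version: 0.0.1
schema: v1

models:
  - name: Llama 3.1 8B
    provider: ollama
    model: llama3.1:8b
    # apiBase is omitted here: Ollama on the local default port needs no endpoint
```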

How to Configure Model Capabilities in Ollama

Ollama models usually have their capabilities auto-detected correctly. However, if you’re using custom model names or experiencing issues with tools/images not working, you can explicitly set capabilities:
config.yaml
name: My Config
version: 0.0.1
schema: v1

models:
  - name: <CUSTOM_MODEL_NAME>
    provider: ollama
    model: <CUSTOM_MODEL_ID>
    capabilities:
      - tool_use      # Enable if your model supports function calling
      - image_input   # Enable for vision models
Many Ollama models support tool use by default, and vision models typically also support image input, so you only need to set capabilities explicitly when auto-detection gets them wrong.
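For instance, a vision-capable setup might look like this (llava is used here as an assumed example of a vision model; adjust to whatever model you actually run):

```yaml
models:
  - name: LLaVA
    provider: ollama
    model: llava
    capabilities:
      - image_input   # explicitly enable images; tool_use is left off in this sketch
```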

Troubleshooting

"Model requires more system memory"

Continue may set a higher default context length than other Ollama tools, which can trigger this error even when the same model runs fine elsewhere. Fix it by reducing contextLength:
config.yaml
models:
  - name: Qwen 3 8B
    provider: ollama
    model: qwen3:8b
    defaultCompletionOptions:
      contextLength: 2048
You can also try a smaller model variant.
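To see why lowering contextLength helps, here is a back-of-the-envelope sketch of KV-cache memory. The layer, head, and precision numbers below are assumptions for a Qwen3-8B-like model, not values from this document; check the actual model card:

```python
def kv_cache_bytes(context_length, n_layers=36, n_kv_heads=8,
                   head_dim=128, bytes_per_elem=2):
    """Rough fp16 KV-cache size: a K and a V tensor per layer and KV head.

    Defaults are assumed Qwen3-8B-like numbers (36 layers, 8 KV heads,
    head dim 128) and are for illustration only.
    """
    return 2 * n_layers * n_kv_heads * head_dim * context_length * bytes_per_elem

# Dropping the context from 32768 to 2048 tokens shrinks the cache 16x:
print(kv_cache_bytes(32768) / 2**30)  # 4.5 (GiB)
print(kv_cache_bytes(2048) / 2**30)   # 0.28125 (GiB)
```

The cache grows linearly with context length, which is why a smaller contextLength (or a smaller model variant) can fit where the default does not.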