Ollama + Cognito

Set up Ollama so the Cognito Chrome extension can use your local AI models — completely private, running on your machine.

Ollama runs a local server at http://127.0.0.1:11434. The guides below configure your system so the Cognito extension can talk to it.
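You can confirm the server is reachable before changing any settings. A quick check from a terminal, assuming Ollama is already running on the default port:

```shell
# Ask the local Ollama server for its version; a JSON reply means it's up.
curl http://127.0.0.1:11434/api/version

# List the models you've pulled so far.
curl http://127.0.0.1:11434/api/tags
```

If these commands fail, start Ollama first (open the app or run ollama serve) before continuing.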

Before you start

  1. Install Ollama from ollama.com. Open the app or run ollama serve, then pull a model (e.g. ollama pull llama3.2).
  2. Install the Cognito extension in Chrome (or any Chromium-based browser like Edge, Brave, or Arc).

Why is setup needed?

By default, browsers prevent extensions from connecting to services running on your computer unless the service explicitly allows their origin. This security mechanism is called CORS (cross-origin resource sharing).

Ollama has a setting called OLLAMA_ORIGINS that controls which apps are allowed to connect. Without it, the extension will show a "connection failed" or "CORS" error when trying to reach Ollama.
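As a preview of what the per-OS guides configure, here is a minimal sketch for a macOS or Linux shell. The chrome-extension://* pattern is a broad example value; the guide for your OS shows where to set it persistently and may recommend a narrower origin:

```shell
# Allow Chrome extensions to call the local Ollama server,
# then start it in the foreground (macOS/Linux; one-off run).
OLLAMA_ORIGINS="chrome-extension://*" ollama serve
```

Set this way, the variable only applies to that single run; the OS guides cover making it permanent.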

The setup takes under 2 minutes and you only need to do it once.

Choose your operating system

Follow the guide for your OS. Each guide walks you through setting OLLAMA_ORIGINS step by step.

  Platform   Guide               Time
  Windows    Setup on Windows    ~2 min
  macOS      Setup on macOS      ~2 min
  Linux      Setup on Linux      ~2 min

Ollama Cloud models

Want to use larger models that don't fit on your machine? Ollama Cloud models (e.g. gpt-oss:20b-cloud, gpt-oss:120b-cloud) run on Ollama's servers. You still use them through your local Ollama — just sign in at ollama.com and link your device.

  Scenario               Guide
  Ollama Cloud models    Setup Ollama Cloud
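As a rough sketch of the cloud flow, assuming a recent Ollama CLI (command names may vary by version; the guide above has the exact, current steps):

```shell
# Link this machine to your ollama.com account
# (assumption: the signin command in recent Ollama releases).
ollama signin

# Run a cloud-hosted model through your local Ollama.
ollama run gpt-oss:120b-cloud
```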

After setup

  1. Open the Cognito extension in Chrome.
  2. Choose Ollama as the AI provider.
  3. Pick a model from the dropdown (e.g. llama3.2).
  4. Start chatting.

You can add more models anytime from the extension's Ollama panel or by running ollama pull <model> in your terminal.
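For example, to add a model from the terminal and confirm it will appear in the extension:

```shell
# Download a model (substitute any name from the Ollama library).
ollama pull llama3.2

# List everything installed locally; these are the models
# the extension's dropdown offers.
ollama list
```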