
Ollama Setup (Local CapYap)

Use this guide if you want CapYap agents to run on local Ollama models for free.

1) Install Ollama

Download and install Ollama for your platform from ollama.com.

2) Start Ollama and pull at least one model

Run in Terminal (the first command starts the Ollama server, the second downloads a model):

ollama serve
ollama pull llama3.1

You can pull other models too (for example qwen2.5:7b).
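Once the server is running, you can confirm which models are available. A minimal sketch, assuming Ollama's standard REST endpoint `GET http://localhost:11434/api/tags`; the sample string below stands in for a live response body so the snippet runs without a server:

```python
import json

def installed_models(tags_body: str) -> list[str]:
    """Extract model names from the JSON body returned by GET /api/tags."""
    return [m["name"] for m in json.loads(tags_body).get("models", [])]

# Sample body, standing in for: curl -s http://localhost:11434/api/tags
sample = '{"models": [{"name": "llama3.1:latest"}, {"name": "qwen2.5:7b"}]}'
print(installed_models(sample))  # ['llama3.1:latest', 'qwen2.5:7b']
```

If the list is empty, re-run `ollama pull` for the model you want.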

3) Start CapYap locally

From the repo root:

conda activate capyap
capyap start

CapYap runs locally and the backend serves the app on localhost.

Desktop app note:

  • Packaged desktop builds try to auto-start the backend.
  • If the desktop app cannot connect, run this command manually and keep the terminal open:

capyap start --no-browser
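If you are unsure whether the backend is up before launching the desktop app, a quick TCP probe helps. A minimal sketch, assuming only that the backend listens on a local port; the default of 8000 below is a placeholder, not CapYap's documented port:

```python
import socket

def backend_reachable(host: str = "localhost", port: int = 8000,
                      timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    # NOTE: 8000 is a hypothetical default; use the port that
    # `capyap start` prints when the backend comes up.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A False result means the backend is not listening yet; start it manually as shown above.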

4) In CapYap, choose Ollama

  1. Load a transcript source.
  2. Click Start AI Analysis.
  3. Select ollama in the provider picker.
  4. Wait for the Ollama status check.
  5. Click Start Analyzing when status is ready.
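After the ollama provider is selected, analysis requests go to the local Ollama server rather than a cloud API. CapYap's internal wiring is not shown in this guide; the sketch below only illustrates the request body a local provider would send to Ollama's standard `POST /api/generate` endpoint, with an illustrative prompt:

```python
import json

def build_generate_request(model: str, prompt: str) -> str:
    """Build the JSON body for POST http://localhost:11434/api/generate."""
    # stream=False asks Ollama for one complete JSON response
    # instead of a stream of partial chunks.
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

body = build_generate_request("llama3.1", "Summarize this transcript in one sentence.")
print(body)
```

Any model you pulled in step 2 can be named in the "model" field.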

Notes:

  • No external cloud API key is required for Ollama.
  • If Ollama is not detected, use the modal's Install Ollama link and Re-check button.