
Squelch

Meeting transcription tool with live transcripts and AI-powered summaries.

Features

  • 🎤 Live audio capture from system audio (Windows WASAPI, Linux PipeWire)
  • 📝 Real-time transcription using faster-whisper
  • 🔄 Dual-pass transcription — fast model for low latency, better model for accuracy
  • 🤖 AI-powered Q&A during meetings (local via Ollama, or cloud via OpenAI/Claude/Gemini)
  • 📋 Automatic summary generation with key themes and action items
  • 📄 Markdown export with collapsible full transcript
  • ⚙️ Options menu — change settings without editing config files
  • 🎨 Theming — multiple built-in themes via command palette
  • 💾 Persistent config — settings saved automatically
  • 💻 Terminal UI using Textual
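
The dual-pass idea above can be sketched roughly as follows — a hypothetical merge step (illustrative only, not Squelch's actual code) in which a fast first-pass segment is later replaced by the slow model's refinement for the same time window:

```python
# Hypothetical sketch of the dual-pass idea (not Squelch's actual code):
# fast-model segments appear immediately; when the slower, more accurate
# model finishes the same time window, its text replaces the fast draft.

def merge_passes(fast_segments, slow_segments):
    """Replace fast drafts with slow refinements, keyed by start time."""
    refined = {start: text for start, text in slow_segments}
    return [(start, refined.get(start, text)) for start, text in fast_segments]

fast = [(0, "to build a textual app you"), (7, "the widgets module is")]
slow = [(0, "To build a Textual app, you")]  # slow pass still catching up
merged = merge_passes(fast, slow)
# the 0s segment is refined; the 7s segment keeps its fast draft for now
```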

Screenshots

┌─ Squelch ─────────────────────────────── Recording 🔴 ─┐
│ Transcript                              │ Event Log   │
│                                         │             │
│ [00:01] To build a textual app, you     │ 20:11:02    │
│ need to define a class that inherits... │ FAST 6.1s   │
│                                         │             │
│ [00:07] ✓ The Widgets module is where   │ 20:11:08    │
│ you find a rich set of widgets...       │ SLOW 60.0s  │
│                                         │             │
├─────────────────────────────────────────┤             │
│ 🤖 Response                             │             │
│ Q: What are the main topics?            │             │
│ A: The discussion covers building       │             │
│    Textual apps and widget modules...   │             │
├─────────────────────────────────────────┴─────────────┤
│ 💬 Ask about the transcript...                        │
├───────────────────────────────────────────────────────┤
│ f5 Start/Stop  f10 End & Generate  f2 Options  q Quit │
└───────────────────────────────────────────────────────┘

Platform Support

Platform   Status         Notes
Windows    ✅ Supported   WASAPI loopback
Linux      ✅ Supported   Requires PipeWire
macOS      ❌ Not yet     Contributions welcome!

Installation

Prerequisites

Python 3.11+ is required.

Windows β€” No additional setup needed.

Linux β€” Install PipeWire:

# Debian/Ubuntu
sudo apt install pipewire pipewire-pulse

# Fedora
sudo dnf install pipewire pipewire-pulseaudio

Install Squelch

# Clone the repository
git clone https://github.com/vlouf/squelch.git
cd squelch

# Create virtual environment
python -m venv venv
source venv/bin/activate  # Linux
venv\Scripts\activate     # Windows

# Install
pip install -e .

# Optional: install cloud LLM support
pip install -e ".[cloud]"

LLM Setup (Optional)

For AI-powered Q&A and summaries, you need an LLM provider:

Option A: Ollama (Local, Free)

# Install from https://ollama.ai, then:
ollama pull llama3.1:8b
ollama serve
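
Before launching Squelch, you can confirm the Ollama server is reachable via its REST API (it listens on localhost:11434 by default). A minimal stdlib-only check, assuming the standard /api/tags endpoint:

```python
import json
import urllib.error
import urllib.request

def ollama_models(base_url="http://localhost:11434"):
    """Return the list of pulled model names, or None if the server is down."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=2) as resp:
            return [m["name"] for m in json.load(resp).get("models", [])]
    except (urllib.error.URLError, OSError):
        return None

models = ollama_models()
# None  -> server not running (start it with `ollama serve`)
# []    -> server up, but no models pulled yet
```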

Option B: Cloud Providers

Install cloud support and set your API key:

pip install -e ".[cloud]"

# Then set ONE of these:
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
export GEMINI_API_KEY=...
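
Since only one of the three keys should be set, provider selection can be as simple as checking the environment in order. A hypothetical helper (illustrative only, not Squelch's actual logic):

```python
import os

# Hypothetical helper (not Squelch's actual logic): pick the first
# cloud provider whose API key is present in the environment.
PROVIDER_KEYS = [
    ("openai", "OPENAI_API_KEY"),
    ("anthropic", "ANTHROPIC_API_KEY"),
    ("gemini", "GEMINI_API_KEY"),
]

def detect_provider(env=None):
    env = os.environ if env is None else env
    for provider, key in PROVIDER_KEYS:
        if env.get(key):
            return provider
    return None  # no cloud key set; fall back to local Ollama
```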

Usage

Launch

squelch

Keybindings

Key      Action
F5       Start/Stop recording
F10      End meeting & generate summary
F3       Toggle response panel
F2       Options menu
F1       Help
Ctrl+P   Command palette
Q        Quit

Workflow

  1. F5 β€” Start recording (captures system audio)
  2. Watch the live transcript appear
  3. Type questions in the input box for AI responses
  4. F10 β€” End meeting, generate summary, export to Markdown
  5. Review the exported file (opens automatically)

Options (F2)

Configure without editing files:

  • Theme β€” Dark/light mode
  • Audio device β€” Select loopback source
  • Whisper models β€” Choose speed vs accuracy tradeoff
  • Language β€” Set transcription language
  • LLM provider β€” Ollama (local) or Cloud
  • Output directory β€” Where to save meeting notes

Settings persist between sessions.

Command Palette (Ctrl+P)

Quick access to themes and commands. Type to search:

  • theme β€” Browse all themes (nord, gruvbox, dracula, etc.)
  • toggle β€” Recording, response panel, dark mode
  • options β€” Open settings

Output

Meeting notes are saved as Markdown:

~/Documents/Squelch/2025-12-22_1430_meeting.md

Each file includes:

  • Duration and word count
  • AI-generated summary
  • Key themes and action items
  • Full transcript (collapsible)
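
The collapsible transcript relies on plain HTML inside Markdown: GitHub and most renderers honor <details> blocks. A rough sketch of how such a file could be assembled (illustrative only, not Squelch's actual exporter):

```python
def export_markdown(title, summary, transcript_lines):
    """Illustrative exporter: summary up top, full transcript folded away."""
    transcript = "\n".join(transcript_lines)
    return (
        f"# {title}\n\n"
        f"## Summary\n\n{summary}\n\n"
        "<details>\n<summary>Full transcript</summary>\n\n"
        f"{transcript}\n\n</details>\n"
    )

doc = export_markdown(
    "Weekly sync",
    "Discussed building Textual apps.",
    ["[00:01] To build a Textual app...", "[00:07] The Widgets module..."],
)
```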

Configuration

Settings are stored in:

  • Windows: %APPDATA%\Squelch\config.toml
  • Linux: ~/.config/squelch/config.toml

You can edit this file directly or use the Options menu (F2).
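
For orientation, a config.toml might look something like this — the section and key names here are illustrative guesses, so check your generated file for the actual schema:

```toml
# Illustrative only — the real schema may differ.
[audio]
device = "default"

[whisper]
fast_model = "base"
accurate_model = "small"
language = "en"

[llm]
provider = "ollama"

[output]
directory = "~/Documents/Squelch"
```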

GPU Acceleration (Optional)

For faster transcription, install CUDA:

  1. Install CUDA Toolkit
  2. Install cuDNN
  3. Squelch will automatically use the GPU when available
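
faster-whisper's WhisperModel takes a device argument ("cpu" or "cuda"), so auto-detection can be as simple as probing for an NVIDIA driver. A stdlib-only sketch (hypothetical helper, not Squelch's actual code):

```python
import shutil

def pick_device():
    """Hypothetical helper: prefer CUDA when nvidia-smi is on the PATH."""
    return "cuda" if shutil.which("nvidia-smi") else "cpu"

# The result would then be passed to faster-whisper, e.g.:
#   WhisperModel("small", device=pick_device(), compute_type="float16")
device = pick_device()
```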

Troubleshooting

No audio being captured?

  • Check Options (F2) β†’ Audio Device
  • Make sure audio is playing through the selected device

Ollama not detected?

  • Run ollama serve in a terminal
  • Check that a model is pulled: ollama list

Transcription is slow?

  • Use smaller Whisper models in Options
  • Enable GPU acceleration (see above)

Cloud LLM not working?

  • Verify API key is set: echo $OPENAI_API_KEY
  • Check the model name is correct

Contributing

See CONTRIBUTING.md for development setup and guidelines.

Acknowledgments

Built with faster-whisper, Textual, and Ollama.

Developed with the assistance of Claude (Anthropic).
