Squelch is a terminal-based meeting transcription tool with live transcripts and AI-powered summaries.
- 🎤 Live audio capture from system audio (Windows WASAPI, Linux PipeWire)
- 📝 Real-time transcription using faster-whisper
- 🔁 Dual-pass transcription: a fast model for low latency, then a better model for accuracy
- 🤖 AI-powered Q&A during meetings (local via Ollama, or cloud via OpenAI/Claude/Gemini)
- 📋 Automatic summary generation with key themes and action items
- 📄 Markdown export with collapsible full transcript
- ⚙️ Options menu: change settings without editing config files
- 🎨 Theming: multiple built-in themes via command palette
- 💾 Persistent config: settings saved automatically
- 💻 Terminal UI built with Textual
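The dual-pass idea can be sketched roughly as follows. This is a minimal illustration, not Squelch's actual internals: the `DualPassTranscriber` class, the segment bookkeeping, and the stub model callables are all assumptions standing in for the real faster-whisper pipeline.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Segment:
    """One chunk of audio and its best-known transcript."""
    audio_id: int
    text: str = ""
    refined: bool = False  # True once the slow, accurate pass has run

@dataclass
class DualPassTranscriber:
    """Fast pass shows text immediately; a slow pass later replaces it."""
    fast: Callable[[str], str]
    slow: Callable[[str], str]
    segments: dict[int, Segment] = field(default_factory=dict)

    def on_audio(self, audio_id: int, audio: str) -> None:
        # First pass: low latency, lower accuracy.
        self.segments[audio_id] = Segment(audio_id, self.fast(audio))

    def refine(self, audio_id: int, audio: str) -> None:
        # Second pass: overwrite with the accurate transcript.
        seg = self.segments[audio_id]
        seg.text = self.slow(audio)
        seg.refined = True

# Stub models standing in for a small/large faster-whisper pair.
fast_model = lambda audio: f"fast:{audio}"
slow_model = lambda audio: f"slow:{audio}"

t = DualPassTranscriber(fast_model, slow_model)
t.on_audio(0, "hello")
print(t.segments[0].text)   # → fast:hello (shown immediately)
t.refine(0, "hello")
print(t.segments[0].text)   # → slow:hello (replaced by the accurate pass)
```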
```
┌─ Squelch ─────────────────────────────── Recording 🔴 ─┐
│ Transcript                               │ Event Log   │
│                                          │             │
│ [00:01] To build a textual app, you      │ 20:11:02    │
│ need to define a class that inherits...  │ FAST 6.1s   │
│                                          │             │
│ [00:07] ✦ The Widgets module is where    │ 20:11:08    │
│ you find a rich set of widgets...        │ SLOW 60.0s  │
│                                          │             │
├──────────────────────────────────────────┤             │
│ 🤖 Response                              │             │
│ Q: What are the main topics?             │             │
│ A: The discussion covers building        │             │
│    Textual apps and widget modules...    │             │
├──────────────────────────────────────────┴─────────────┤
│ 💬 Ask about the transcript...                         │
├────────────────────────────────────────────────────────┤
│ f5 Start/Stop  f10 End & Generate  f2 Options  q Quit  │
└────────────────────────────────────────────────────────┘
```
| Platform | Status | Notes |
|---|---|---|
| Windows | ✅ Supported | WASAPI loopback |
| Linux | ✅ Supported | Requires PipeWire |
| macOS | ❌ Not yet | Contributions welcome! |
Python 3.11+ is required.
Windows: no additional setup needed.
Linux: install PipeWire:
```bash
# Debian/Ubuntu
sudo apt install pipewire pipewire-pulse

# Fedora
sudo dnf install pipewire pipewire-pulseaudio
```

```bash
# Clone the repository
git clone https://github.com/vlouf/squelch.git
cd squelch

# Create a virtual environment
python -m venv venv
source venv/bin/activate   # Linux
venv\Scripts\activate      # Windows

# Install
pip install -e .

# Optional: install cloud LLM support
pip install -e ".[cloud]"
```

For AI-powered Q&A and summaries, you need an LLM provider:
Option A: Ollama (Local, Free)

```bash
# Install from https://ollama.ai, then:
ollama pull llama3.1:8b
ollama serve
```

Option B: Cloud Providers
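For the curious, a local Q&A call boils down to one HTTP request against Ollama's documented `/api/generate` endpoint. The prompt wording below is a made-up placeholder, not Squelch's actual prompt:

```python
import json
import urllib.request

def build_request(question: str, context: str,
                  model: str = "llama3.1:8b") -> urllib.request.Request:
    """Build a non-streaming request for Ollama's /api/generate endpoint."""
    payload = {
        "model": model,
        "prompt": f"Transcript:\n{context}\n\nQuestion: {question}",
        "stream": False,  # get one complete JSON response
    }
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_request("What are the main topics?", "[00:01] ...")
# To actually send it (requires `ollama serve` running locally):
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```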
Install cloud support and set your API key:

```bash
pip install -e ".[cloud]"

# Then set ONE of these:
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
export GEMINI_API_KEY=...
```

Start the app:

```bash
squelch
```

| Key | Action |
|---|---|
| F5 | Start/Stop recording |
| F10 | End meeting & generate summary |
| F3 | Toggle response panel |
| F2 | Options menu |
| F1 | Help |
| Ctrl+P | Command palette |
| Q | Quit |
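Provider selection from those environment variables can be sketched as below; the function name and the precedence order are assumptions for illustration, not Squelch's actual logic:

```python
import os

def detect_provider(env=os.environ) -> str:
    """Pick an LLM provider from whichever API key is present."""
    if env.get("OPENAI_API_KEY"):
        return "openai"
    if env.get("ANTHROPIC_API_KEY"):
        return "anthropic"
    if env.get("GEMINI_API_KEY"):
        return "gemini"
    return "ollama"  # fall back to the local default

print(detect_provider({"ANTHROPIC_API_KEY": "sk-ant-..."}))  # → anthropic
```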
- F5: Start recording (captures system audio)
- Watch the live transcript appear
- Type questions in the input box for AI responses
- F10: End the meeting, generate a summary, and export to Markdown
- Review the exported file (it opens automatically)
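As a rough illustration of the F10 step, the summary request might be assembled from the transcript lines like this; the helper name and prompt wording are hypothetical:

```python
def build_summary_prompt(segments: list[str]) -> str:
    """Assemble a summarisation prompt from timestamped transcript lines."""
    transcript = "\n".join(segments)
    return (
        "Summarise this meeting transcript. Include key themes "
        "and action items as bullet lists.\n\n" + transcript
    )

prompt = build_summary_prompt(["[00:01] intro", "[00:07] widgets"])
print(prompt.splitlines()[-1])  # → [00:07] widgets
```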
Configure without editing files:
- Theme: dark/light mode
- Audio device: select the loopback source
- Whisper models: choose a speed vs. accuracy tradeoff
- Language: set the transcription language
- LLM provider: Ollama (local) or cloud
- Output directory: where to save meeting notes
Settings persist between sessions.
Quick access to themes and commands. Type to search:
- `theme`: Browse all themes (nord, gruvbox, dracula, etc.)
- `toggle`: Recording, response panel, dark mode
- `options`: Open settings
Meeting notes are saved as Markdown:
```
~/Documents/Squelch/2025-12-22_1430_meeting.md
```
Each file includes:
- Duration and word count
- AI-generated summary
- Key themes and action items
- Full transcript (collapsible)
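The collapsible transcript relies on the standard HTML `<details>` element, which most Markdown renderers pass through. A sketch of what an exporter might emit (the layout and field names are illustrative, not Squelch's exact output):

```python
def render_notes(summary: str, transcript: str,
                 duration: str, words: int) -> str:
    """Render meeting notes with the transcript folded into <details>."""
    return f"""# Meeting Notes

**Duration:** {duration} · **Words:** {words}

## Summary

{summary}

<details>
<summary>Full transcript</summary>

{transcript}

</details>
"""

doc = render_notes("Discussed Textual apps.", "[00:01] ...", "45 min", 6200)
print("<details>" in doc)  # → True
```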
Settings are stored in:
- Windows: `%APPDATA%\Squelch\config.toml`
- Linux: `~/.config/squelch/config.toml`
You can edit this file directly or use the Options menu (F2).
For faster transcription, install CUDA:
- Install CUDA Toolkit
- Install cuDNN
- Squelch will automatically use GPU when available
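faster-whisper's `WhisperModel` accepts `device` and `compute_type` arguments, so the GPU/CPU choice comes down to picking sensible values for each. A sketch of that fallback (the helper itself is illustrative, not Squelch's code):

```python
def whisper_settings(cuda_available: bool) -> dict:
    """Pick faster-whisper settings for the available hardware."""
    if cuda_available:
        # float16 is the usual GPU choice: fast with minimal accuracy loss.
        return {"device": "cuda", "compute_type": "float16"}
    # int8 quantisation keeps CPU inference responsive.
    return {"device": "cpu", "compute_type": "int8"}

# These kwargs feed straight into faster_whisper.WhisperModel, e.g.:
#   WhisperModel("small", **whisper_settings(cuda_available=True))
print(whisper_settings(False)["compute_type"])  # → int8
```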
No audio being captured?
- Check Options (F2) → Audio Device
- Make sure audio is playing through the selected device
Ollama not detected?
- Run `ollama serve` in a terminal
- Check that a model is pulled: `ollama list`
Transcription is slow?
- Use smaller Whisper models in Options
- Enable GPU acceleration (see above)
Cloud LLM not working?
- Verify the API key is set: `echo $OPENAI_API_KEY`
- Check that the model name is correct
See CONTRIBUTING.md for development setup and guidelines.
Built with faster-whisper, Textual, and Ollama.
Developed with the assistance of Claude (Anthropic).