A complete, production-ready setup for running Ollama and Open WebUI on Debian Linux with Docker, including firewall hardening and persistent configuration.
This repo provides a working Docker Compose stack for Open WebUI connected to a locally-running Ollama instance on Debian. It is designed for self-hosted, LAN-accessible AI inference with no cloud dependency.
The setup has been built and tested on the following hardware, and should work on any comparable x86-64 machine running Debian.
Test Hardware
- Machine: Beelink SER (Mini PC)
- CPU: AMD Ryzen 7 5800H (8 cores / 16 threads)
- RAM: 32GB DDR4
- Storage: 4TB SATA SSD
- OS: Debian 13 (Trixie) KDE
- Inference: CPU-only (no GPU)
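Since inference here is CPU-only, throughput depends heavily on SIMD support in the processor; the llama.cpp backend that Ollama uses benefits from AVX2 in particular. A quick sketch to check whether your CPU advertises it:

```shell
# Print whether the CPU advertises AVX2 (the Ryzen 7 5800H above does)
if grep -qw avx2 /proc/cpuinfo; then
  echo "AVX2: yes"
else
  echo "AVX2: no"
fi
```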
[Browser / LAN Client]
│
▼
[Open WebUI — Docker bridge — port 8080]
│
▼ http://172.18.0.1:11434
[Ollama — systemd service — host]
│
▼
[Local model files — /usr/share/ollama]
Open WebUI runs in a Docker bridge network. Ollama runs as a systemd service on the host. The two communicate via the bridge gateway IP.
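From inside the container, the host is simply the container's default-route gateway. A minimal sketch of how to read it from the routing table; the route line below is a hypothetical sample, and the real command would be `docker exec open-webui ip route`:

```shell
# Sample `ip route` output from inside the container (hypothetical values);
# the third field of the default route is the bridge gateway, i.e. the host
ROUTE='default via 172.18.0.1 dev eth0'
printf '%s\n' "$ROUTE" | awk '/^default/ {print $3}'
# → 172.18.0.1
```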
- Debian 12 or 13 (tested on Trixie)
- Docker and Docker Compose installed
- Ollama installed and running as a systemd service
- UFW installed (optional but recommended)
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
curl -fsSL https://ollama.com/install.sh | sh
git clone https://github.com/mcps976/open-webui-ollama-debian.git
cd open-webui-ollama-debian

By default Ollama binds to 127.0.0.1 only. Open WebUI runs in a Docker bridge network and cannot reach the host's localhost, so you need to tell Ollama to listen on all interfaces.
Create or edit the Ollama systemd override:
sudo mkdir -p /etc/systemd/system/ollama.service.d
sudo tee /etc/systemd/system/ollama.service.d/override.conf << 'EOF'
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
EOF
sudo systemctl daemon-reload
sudo systemctl restart ollama

Verify Ollama is now listening on all interfaces:
ss -tlnp | grep 11434
# Expected: LISTEN 0 4096 *:11434 *:*

GPU users: If you have a supported GPU, remove the `OLLAMA_NO_GPU=1` line from `compose.yaml` and Ollama will use it automatically.
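For reference, a minimal `compose.yaml` consistent with the steps in this guide might look like the sketch below. The image tag, volume mount path, and environment lines are assumptions, so adapt them to the file in your checkout:

```yaml
# Hypothetical compose.yaml sketch matching this guide
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    ports:
      - "YOUR_HOST_IP:8080:8080"
    environment:
      - OLLAMA_BASE_URL=http://YOUR_BRIDGE_GATEWAY:11434
      - OLLAMA_NO_GPU=1        # remove if you have a supported GPU
    volumes:
      - open-webui:/app/backend/data
    restart: unless-stopped

volumes:
  open-webui:
    external: true             # created with `docker volume create open-webui`
```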
docker volume create open-webui

# Your LAN IP
ip -4 addr show | grep inet | grep -v 127

Start the stack once to discover the bridge gateway:
docker compose up -d
docker network inspect open-webui_default --format='{{json .IPAM.Config}}'
# Returns something like: [{"Subnet":"172.18.0.0/16","Gateway":"172.18.0.1"}]

Note the Gateway value; you will need it in the next step.
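If you want to script this step, the gateway can be pulled out of the inspect output without extra tools. A sketch against the sample JSON above, assuming `python3` is installed on the host:

```shell
# Parse the Gateway field from the sample inspect output shown above
CONFIG='[{"Subnet":"172.18.0.0/16","Gateway":"172.18.0.1"}]'
GATEWAY=$(printf '%s' "$CONFIG" | python3 -c \
  'import json, sys; print(json.load(sys.stdin)[0]["Gateway"])')
echo "Bridge gateway: $GATEWAY"
# → Bridge gateway: 172.18.0.1
```

In practice you would feed it the live output of `docker network inspect open-webui_default --format='{{json .IPAM.Config}}'` instead of the hard-coded sample.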
Replace the two placeholder values with your own:
ports:
- "YOUR_HOST_IP:8080:8080"
environment:
- OLLAMA_BASE_URL=http://YOUR_BRIDGE_GATEWAY:11434

Then restart the stack:

docker compose down && docker compose up -d

Verify the container can reach Ollama:

docker exec open-webui curl -s --max-time 10 http://YOUR_BRIDGE_GATEWAY:11434/api/tags

You should see a JSON response listing your installed models.
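The `/api/tags` response is a JSON object with a `models` array. A quick sketch for counting installed models from a captured response; the sample below is hypothetical, and the real input would come from the curl verification command:

```shell
# Hypothetical /api/tags response with two installed models
RESPONSE='{"models":[{"name":"llama3.1:8b"},{"name":"phi4"}]}'
printf '%s' "$RESPONSE" | python3 -c \
  'import json, sys; print(len(json.load(sys.stdin)["models"]), "models installed")'
# → 2 models installed
```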
Open WebUI is now available at http://YOUR_HOST_IP:8080
This is the section most Open WebUI tutorials skip. If your machine is on a LAN and you want to restrict access, follow these steps.
Docker bypasses UFW by default for published ports. You need to handle both UFW and the Docker iptables chain separately.
Debian 13 uses nftables by default which causes UFW errors. Switch to legacy:
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy

Disable IPv6 in UFW if your machine has no IPv6 addresses:

sudo sed -i 's/IPV6=yes/IPV6=no/' /etc/default/ufw

Replace the IPs below with your own:
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow in on lo
sudo ufw allow from YOUR_ADMIN_IP to any # your management machine
sudo ufw allow from YOUR_TRUSTED_IP to any # NAS or other trusted host
sudo ufw allow in on docker0 # default Docker bridge
sudo ufw --force enable

Docker publishes ports directly via iptables, bypassing UFW. Restrict who can reach your container ports by adding rules to the DOCKER-USER chain:
# Allow your LAN subnet to reach Docker ports (adjust subnet and interface)
sudo iptables -I DOCKER-USER -s 192.168.1.0/24 -i eth0 -j ACCEPT
sudo iptables -A DOCKER-USER -i eth0 -j DROP

Allow the Open WebUI bridge subnet to reach Ollama on port 11434:

sudo ufw allow from 172.18.0.0/16 to any port 11434

Docker rebuilds its iptables chains on every restart, wiping custom rules. Create a systemd service to re-apply them automatically after Docker starts:
sudo tee /etc/systemd/system/docker-user-rules.service << 'EOF'
[Unit]
Description=Add DOCKER-USER iptables rules
After=docker.service
Requires=docker.service
[Service]
Type=oneshot
ExecStart=/bin/bash -c '\
iptables -F DOCKER-USER && \
iptables -I DOCKER-USER -s YOUR_LAN_SUBNET -i YOUR_INTERFACE -j ACCEPT && \
iptables -A DOCKER-USER -i YOUR_INTERFACE -j DROP'
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable docker-user-rules.service
sudo systemctl start docker-user-rules.service

Optionally, save the current rules so they can be restored later:

sudo mkdir -p /etc/iptables
sudo iptables-save | sudo tee /etc/iptables/rules.v4

The following models have been tested and confirmed working on CPU-only hardware with 32GB RAM:
| Model | Pull Command | Size | Notes |
|---|---|---|---|
| `phi4` | `ollama pull phi4` | 9.1 GB | Fast, good general-purpose model |
| `llama3.1:8b` | `ollama pull llama3.1:8b` | 4.9 GB | Excellent all-rounder, recommended starting point |
| `llava:13b` | `ollama pull llava:13b` | 8.0 GB | Multimodal; supports image input |
| `deepseek-r1:14b` | `ollama pull deepseek-r1:14b` | 9.0 GB | Strong reasoning and code tasks |
| `qwen3.5:4b` | `ollama pull qwen3.5:4b` | 3.4 GB | Lightweight, fastest on CPU |
RAM guidance: Models up to ~10GB work well with 32GB RAM on CPU. Larger models (32B+) are possible but will be significantly slower without a GPU.
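As a rough sanity check before pulling a large model, you can compare its size against the machine's total RAM. The 8 GB headroom figure below is an assumption for the OS, Docker, and model context, not something measured for this guide:

```shell
# Compare total RAM (from /proc/meminfo, reported in kB) against a model
# size plus an assumed ~8GB headroom for the OS, Docker, and context
MODEL_GB=9   # e.g. phi4
TOTAL_GB=$(awk '/MemTotal/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)
if [ "$TOTAL_GB" -ge $((MODEL_GB + 8)) ]; then
  echo "OK: ${TOTAL_GB}GB total RAM for a ${MODEL_GB}GB model"
else
  echo "Tight: ${TOTAL_GB}GB total RAM for a ${MODEL_GB}GB model"
fi
```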
cd /opt/open-webui
docker compose pull
docker compose down
docker compose up -d

Your data is stored in the named Docker volume and is preserved across updates.
Open WebUI can't reach Ollama
- Verify Ollama is listening on all interfaces: `ss -tlnp | grep 11434`
- Check the bridge gateway IP matches your `OLLAMA_BASE_URL` in `compose.yaml`
- Ensure the UFW rule allows `172.18.0.0/16` to port `11434`
Page won't load
- Check the port binding in `compose.yaml` uses your LAN IP, not `127.0.0.1`
- Verify the container is running: `docker ps | grep open-webui`
- Check firewall rules: `sudo ufw status numbered`
UFW ip6tables error on Debian 13
- Follow the iptables-legacy switch steps in the firewall section above
- Disable IPv6 in UFW if your machine has no IPv6 addresses
Models not appearing in the UI
- Confirm Ollama has models installed: `ollama list`
- Test connectivity from inside the container: `docker exec open-webui curl -s http://YOUR_BRIDGE_GATEWAY:11434/api/tags`
open-webui-ollama-debian/
├── compose.yaml # Docker Compose stack
└── README.md # This file
MIT — use freely, attribution appreciated.