
open-webui-ollama-debian

A complete, production-ready setup for running Ollama and Open WebUI on Debian Linux with Docker, including firewall hardening and persistent configuration.


Overview

This repo provides a working Docker Compose stack for Open WebUI connected to a locally running Ollama instance on Debian. It is designed for self-hosted, LAN-accessible AI inference with no cloud dependency.

The setup has been built and tested on the following hardware, but it should work on any comparable x86-64 machine running Debian.

Test Hardware

  • Machine: Beelink SER (Mini PC)
  • CPU: AMD Ryzen 7 5800H (8 cores / 16 threads)
  • RAM: 32GB DDR4
  • Storage: 4TB SATA SSD
  • OS: Debian 13 (Trixie) KDE
  • Inference: CPU-only (no GPU)

Architecture

[Browser / LAN Client]
        │
        ▼
[Open WebUI — Docker bridge — port 8080]
        │
        ▼ http://172.18.0.1:11434
[Ollama — systemd service — host]
        │
        ▼
[Local model files — /usr/share/ollama]

Open WebUI runs in a Docker bridge network. Ollama runs as a systemd service on the host. The two communicate via the bridge gateway IP.


Prerequisites

  • Debian 12 or 13 (tested on Trixie)
  • Docker and Docker Compose installed
  • Ollama installed and running as a systemd service
  • UFW installed (optional but recommended)

Install Docker

curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER

Install Ollama

curl -fsSL https://ollama.com/install.sh | sh

Installation

1. Clone this repo

git clone https://github.com/mcps976/open-webui-ollama-debian.git
cd open-webui-ollama-debian

2. Configure Ollama to listen on all interfaces

By default, Ollama binds to 127.0.0.1 only. Open WebUI runs in a Docker bridge network, so it cannot reach the host's loopback address. You need to tell Ollama to listen on all interfaces.

Create or edit the Ollama systemd override:

sudo mkdir -p /etc/systemd/system/ollama.service.d
sudo tee /etc/systemd/system/ollama.service.d/override.conf << 'EOF'
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
EOF

sudo systemctl daemon-reload
sudo systemctl restart ollama

Verify Ollama is now listening on all interfaces:

ss -tlnp | grep 11434
# Expected: LISTEN 0  4096  *:11434  *:*

GPU users: If you have a supported GPU, remove the OLLAMA_NO_GPU=1 line from compose.yaml and Ollama will use it automatically.

3. Create the Docker volume

docker volume create open-webui

4. Find your host IP and bridge gateway

# Your LAN IP
ip -4 addr show | grep inet | grep -v 127

Start the stack once to discover the bridge gateway:

docker compose up -d
docker network inspect open-webui_default --format='{{json .IPAM.Config}}'
# Returns something like: [{"Subnet":"172.18.0.0/16","Gateway":"172.18.0.1"}]

Note the Gateway value — you will need it in the next step.
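If you want to script this step, the gateway can be pulled out directly with Docker's Go-template formatting, or extracted from the raw JSON with sed. The sed approach below runs against the sample output shown above; the commented docker commands assume the network name from the previous step:

```shell
# Option 1 (needs a running Docker daemon and the network from the step above):
#   docker network inspect open-webui_default \
#     --format '{{(index .IPAM.Config 0).Gateway}}'

# Option 2: extract the Gateway field from the raw JSON
ipam='[{"Subnet":"172.18.0.0/16","Gateway":"172.18.0.1"}]'   # sample inspect output
gateway=$(printf '%s' "$ipam" | sed -n 's/.*"Gateway":"\([^"]*\)".*/\1/p')
echo "Bridge gateway: $gateway"
```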

5. Edit compose.yaml

Replace the two placeholder values with your own:

ports:
  - "YOUR_HOST_IP:8080:8080"

environment:
  - OLLAMA_BASE_URL=http://YOUR_BRIDGE_GATEWAY:11434

Then restart the stack:

docker compose down && docker compose up -d
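For reference, a minimal compose.yaml matching the steps above looks roughly like this (a hedged sketch; the repo's own compose.yaml is authoritative, and the image tag and data path shown are the upstream Open WebUI defaults):

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    ports:
      - "YOUR_HOST_IP:8080:8080"
    environment:
      - OLLAMA_BASE_URL=http://YOUR_BRIDGE_GATEWAY:11434
      - OLLAMA_NO_GPU=1          # remove if you have a supported GPU (see step 2)
    volumes:
      - open-webui:/app/backend/data
    restart: unless-stopped

volumes:
  open-webui:
    external: true               # created manually in step 3
```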

6. Verify Ollama is reachable from inside the container

docker exec open-webui curl -s --max-time 10 http://YOUR_BRIDGE_GATEWAY:11434/api/tags

You should see a JSON response listing your installed models.

Open WebUI is now available at http://YOUR_HOST_IP:8080


Firewall Hardening (UFW)

This is the section most Open WebUI tutorials skip. If your machine is on a LAN and you want to restrict access, follow these steps.

Docker bypasses UFW by default for published ports. You need to handle both UFW and the Docker iptables chain separately.

Switch to iptables-legacy backend (Debian 13)

Debian 13 uses the nftables backend by default, which can cause UFW errors. Switch to the legacy backend:

sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy

Disable IPv6 in UFW if your machine has no IPv6 addresses:

sudo sed -i 's/IPV6=yes/IPV6=no/' /etc/default/ufw

Set UFW rules

Replace the IPs below with your own:

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow in on lo
sudo ufw allow from YOUR_ADMIN_IP to any        # your management machine
sudo ufw allow from YOUR_TRUSTED_IP to any      # NAS or other trusted host
sudo ufw allow in on docker0                    # default Docker bridge
sudo ufw --force enable

Block Docker bypass for container ports

Docker publishes ports directly via iptables, bypassing UFW. Restrict who can reach your container ports by adding rules to the DOCKER-USER chain:

# Allow your LAN subnet to reach Docker ports (adjust subnet and interface)
sudo iptables -I DOCKER-USER -s 192.168.1.0/24 -i eth0 -j ACCEPT
sudo iptables -A DOCKER-USER -i eth0 -j DROP

Allow the Open WebUI bridge subnet to reach Ollama on port 11434:

sudo ufw allow from 172.18.0.0/16 to any port 11434

Make DOCKER-USER rules persistent

Docker rebuilds its iptables chains on every restart, wiping custom rules. Create a systemd service to re-apply them automatically after Docker starts:

sudo tee /etc/systemd/system/docker-user-rules.service << 'EOF'
[Unit]
Description=Add DOCKER-USER iptables rules
After=docker.service
Requires=docker.service

[Service]
Type=oneshot
ExecStart=/bin/bash -c '\
  iptables -F DOCKER-USER && \
  iptables -I DOCKER-USER -s YOUR_LAN_SUBNET -i YOUR_INTERFACE -j ACCEPT && \
  iptables -A DOCKER-USER -i YOUR_INTERFACE -j DROP'
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable docker-user-rules.service
sudo systemctl start docker-user-rules.service

Save iptables rules

sudo mkdir -p /etc/iptables
sudo iptables-save | sudo tee /etc/iptables/rules.v4

Tested Models

The following models have been tested and confirmed working on CPU-only hardware with 32GB RAM:

| Model | Pull Command | Size | Notes |
|---|---|---|---|
| phi4 | ollama pull phi4 | 9.1 GB | Fast, good general-purpose model |
| llama3.1:8b | ollama pull llama3.1:8b | 4.9 GB | Excellent all-rounder, recommended starting point |
| llava:13b | ollama pull llava:13b | 8.0 GB | Multimodal: supports image input |
| deepseek-r1:14b | ollama pull deepseek-r1:14b | 9.0 GB | Strong reasoning and code tasks |
| qwen3.5:4b | ollama pull qwen3.5:4b | 3.4 GB | Lightweight, fastest on CPU |
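If you plan to pull all five models, budget disk space accordingly; a simple sum of the sizes in the table:

```shell
# Sum the model sizes listed above (GB)
total=$(printf '9.1\n4.9\n8.0\n9.0\n3.4\n' | awk '{s += $1} END {printf "%.1f", s}')
echo "Total: ${total} GB"
```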

RAM guidance: Models up to ~10GB work well with 32GB RAM on CPU. Larger models (32B+) are possible but will be significantly slower without a GPU.
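As a back-of-envelope fit check (the 1.2x multiplier and ~2 GB runtime overhead below are rough assumptions, not measured values):

```shell
# Rough rule of thumb: quantized model file size * 1.2 plus ~2 GB overhead
model_gb=9            # e.g. phi4 at 9.1 GB, rounded down
need_gb=$(( model_gb * 12 / 10 + 2 ))
ram_gb=32
echo "Needs ~${need_gb} GB of ${ram_gb} GB"
[ "$need_gb" -le "$ram_gb" ] && echo "fits" || echo "too big"
```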


Updating Open WebUI

cd /opt/open-webui
docker compose pull
docker compose down
docker compose up -d

Your data is stored in the named Docker volume and is preserved across updates.
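The update steps can be wrapped in a small helper function for your shell profile (hypothetical; /opt/open-webui simply mirrors the cd above, so adjust it to wherever you cloned the repo):

```shell
# One-shot updater for the stack; run from anywhere
update_openwebui() {
  cd /opt/open-webui || return 1
  docker compose pull && \
  docker compose down && \
  docker compose up -d
}
```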


Troubleshooting

Open WebUI can't reach Ollama

  • Verify Ollama is listening on all interfaces: ss -tlnp | grep 11434
  • Check the bridge gateway IP matches your OLLAMA_BASE_URL in compose.yaml
  • Ensure the UFW rule allows 172.18.0.0/16 to port 11434

Page won't load

  • Check the port binding in compose.yaml uses your LAN IP, not 127.0.0.1
  • Verify the container is running: docker ps | grep open-webui
  • Check firewall rules: sudo ufw status numbered

UFW ip6tables error on Debian 13

  • Follow the iptables-legacy switch steps in the firewall section above
  • Disable IPv6 in UFW if your machine has no IPv6 addresses

Models not appearing in the UI

  • Confirm Ollama has models installed: ollama list
  • Test connectivity from inside the container: docker exec open-webui curl -s http://YOUR_BRIDGE_GATEWAY:11434/api/tags

File Structure

open-webui-ollama-debian/
├── compose.yaml     # Docker Compose stack
└── README.md        # This file

Licence

MIT — use freely, attribution appreciated.
