# Open WebUI
Self-hosted web interface for LLMs with multi-backend support, RAG, and multi-user authentication.
## Overview

Open WebUI provides:

- **Backend agnostic** - Connect to Ollama, OpenAI, or any compatible API
- **Multi-user** - Authentication and user management
- **RAG** - Upload documents for context
- **Model switching** - Change models mid-conversation
- **Customizable** - Themes, prompts, settings
## Quick Start

### With Ollama

```bash
docker run -d \
  -p 3000:8080 \
  -v /tank/ai/data/open-webui:/app/backend/data \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  --add-host=host.docker.internal:host-gateway \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

Access at http://localhost:3000.
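If the page doesn't come up, the container logs usually show why (port conflict, volume permissions):

```bash
docker logs -f open-webui
```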
### Standalone

Run Open WebUI without a backend and add API connections later from the admin settings:

```bash
docker run -d \
  -p 3000:8080 \
  -v /tank/ai/data/open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```
## Docker Compose

### With Ollama Stack

```yaml
version: '3.8'

services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    volumes:
      - /tank/ai/models/ollama:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    restart: unless-stopped

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    ports:
      - "3000:8080"
    volumes:
      - /tank/ai/data/open-webui:/app/backend/data
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama
    restart: unless-stopped

networks:
  default:
    name: ai-network
```
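To start the stack and pull a first model into the shared models volume (the model name is just an example):

```bash
docker compose up -d
docker compose exec ollama ollama pull llama3.3:70b
```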
### AMD ROCm Stack

```yaml
services:
  ollama:
    image: ollama/ollama:rocm
    devices:
      - /dev/kfd
      - /dev/dri
    group_add:
      - video
      - render
    volumes:
      - /tank/ai/models/ollama:/root/.ollama

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    volumes:
      - /tank/ai/data/open-webui:/app/backend/data
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama
```
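To confirm the ROCm GPU is actually being used, check which processor serves a loaded model:

```bash
# After sending a prompt, the PROCESSOR column should read "100% GPU" rather than "CPU"
docker exec ollama ollama ps
```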
## Configuration

### Environment Variables

| Variable | Description | Default |
|---|---|---|
| `OLLAMA_BASE_URL` | Ollama API URL | `http://localhost:11434` |
| `OPENAI_API_BASE_URL` | OpenAI-compatible API URL | - |
| `OPENAI_API_KEY` | API key for OpenAI | - |
| `WEBUI_AUTH` | Enable authentication | `True` |
| `WEBUI_SECRET_KEY` | Session encryption key | Random |
| `DEFAULT_MODELS` | Default model selection | - |
| `ENABLE_SIGNUP` | Allow user registration | `True` |
### Multiple Backends

```yaml
environment:
  # Ollama
  - OLLAMA_BASE_URL=http://ollama:11434
  # OpenAI-compatible (e.g., local llama.cpp)
  - OPENAI_API_BASE_URLS=http://llama-server:8080/v1
  # Cloud fallback
  - OPENAI_API_KEY=sk-xxx
```
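The plural variables accept semicolon-separated lists, so several OpenAI-compatible backends can be attached at once. A sketch (the `none` placeholder key is illustrative; keys pair positionally with URLs):

```yaml
environment:
  # Local llama.cpp server plus a cloud endpoint, in one list
  - OPENAI_API_BASE_URLS=http://llama-server:8080/v1;https://api.openai.com/v1
  - OPENAI_API_KEYS=none;sk-xxx
```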
### Production Configuration

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "127.0.0.1:3000:8080"  # Local only, use reverse proxy
    volumes:
      - /tank/ai/data/open-webui:/app/backend/data
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
      - WEBUI_AUTH=True
      - ENABLE_SIGNUP=False  # Disable public signup
      - WEBUI_SECRET_KEY=${WEBUI_SECRET}
      - DEFAULT_MODELS=llama3.3:70b
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 3
    restart: unless-stopped
```
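The `${WEBUI_SECRET}` reference assumes an `.env` file next to the compose file; one way to generate a strong value:

```bash
# Generate a random 32-byte hex key for session encryption
echo "WEBUI_SECRET=$(openssl rand -hex 32)" >> .env
```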
## User Management

### First User

The first user to sign up becomes the admin.

### Disable Signup

After creating the admin account, set the environment variable:
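```yaml
environment:
  - ENABLE_SIGNUP=False
```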
Or in Admin Settings → General → Disable Signup
### User Roles
| Role | Capabilities |
|---|---|
| Admin | Full access, user management |
| User | Chat, model access |
| Pending | Awaiting admin approval |
## RAG (Document Chat)

### Upload Documents

- Click + in the chat input
- Select Upload Files
- Upload PDF, TXT, MD, etc.
- Documents become part of the chat context
### Supported Formats

- Text files (.txt, .md)
- PDF documents
- Word documents
- Code files
- And more
### Collection Management

Create document collections:

- Workspace → Documents
- Create a collection
- Upload documents
- Reference the collection in chats with `#collection-name`
## Model Management

### Switch Models

- Select a model from the dropdown in the chat
- Models from all connected backends appear

### Model Settings

Per-model configuration:

- Workspace → Models
- Select a model
- Configure:
    - System prompt
    - Temperature
    - Max tokens
    - Context length

### Custom Models

Add model aliases:

```json
{
  "name": "coding-assistant",
  "base_model": "deepseek-coder-v2:16b",
  "system_prompt": "You are a senior software engineer..."
}
```
## Features

### Chat Features

- **Markdown rendering** - Code blocks, tables
- **Code execution** - Python code blocks (with extension)
- **Image understanding** - Vision models
- **Voice input** - Speech-to-text
- **Voice output** - Text-to-speech

### Keyboard Shortcuts

| Shortcut | Action |
|---|---|
| `Ctrl+Shift+O` | New chat |
| `Ctrl+Shift+S` | Toggle sidebar |
| `Ctrl+Shift+Backspace` | Delete chat |
| `Enter` | Send message |
| `Shift+Enter` | New line |

### Themes

- Settings → Interface
- Choose a theme (Light, Dark, System)
- Custom CSS is available
## Integration

### Tailscale Access

Expose Open WebUI via Tailscale:
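A sketch using `tailscale serve` on the Docker host (recent Tailscale CLI syntax), which proxies a local port onto the tailnet with automatic HTTPS:

```bash
# Serve http://localhost:3000 at https://<machine>.<tailnet>.ts.net, in the background
tailscale serve --bg 3000
```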
Access via https://server.tailnet.ts.net
### Reverse Proxy (Caddy)
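A minimal Caddyfile sketch, assuming the same illustrative `webui.example.com` hostname as the nginx example below; Caddy obtains TLS certificates automatically and forwards WebSocket upgrades without extra directives:

```caddy
webui.example.com {
    reverse_proxy localhost:3000
}
```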
### Reverse Proxy (nginx)

```nginx
server {
    listen 443 ssl;
    server_name webui.example.com;

    # Adjust to your certificate paths
    ssl_certificate     /etc/ssl/certs/webui.example.com.pem;
    ssl_certificate_key /etc/ssl/private/webui.example.com.key;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```
## Data Persistence

### Volume Structure

```text
/tank/ai/data/open-webui/
├── webui.db    # SQLite database
├── uploads/    # Uploaded files
├── cache/      # Temporary cache
└── docs/       # RAG documents
```
### Backup

```bash
# Snapshot the ZFS dataset
zfs snapshot tank/ai/data/open-webui@backup

# Or copy the files (stop the container first so the SQLite database is consistent)
docker stop open-webui
cp -r /tank/ai/data/open-webui /backup/
docker start open-webui
```
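To restore from the snapshot, roll the dataset back (this discards any changes made after the snapshot was taken):

```bash
docker stop open-webui
zfs rollback tank/ai/data/open-webui@backup
docker start open-webui
```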
## Troubleshooting

### Can't Connect to Ollama

```bash
# Check that Ollama is running
curl http://localhost:11434/

# In Docker, use the correct URL for your platform
OLLAMA_BASE_URL=http://host.docker.internal:11434  # macOS/Windows
OLLAMA_BASE_URL=http://172.17.0.1:11434            # Linux (Docker bridge)
```
### Login Loop

Clear cookies in the browser first. If the loop persists, reset the database:

```bash
# WARNING: this deletes all users, chats, and settings
docker exec open-webui rm -f /app/backend/data/webui.db
docker restart open-webui
```
### Slow Response

- Check the Ollama logs: `docker logs ollama`
- Verify the model is loaded: `ollama ps`
- Ensure the GPU is being used
### WebSocket Errors

Ensure the reverse proxy forwards WebSocket upgrades (see the nginx example above):

```nginx
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
```
## API Access

Open WebUI also provides an API:

```bash
# Get an API key from Settings → Account → API Keys
curl http://localhost:3000/api/chat/completions \
  -H "Authorization: Bearer sk-xxx" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.3:70b",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
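To see which model IDs are available to your key, Open WebUI also exposes a model listing endpoint (same illustrative `sk-xxx` key as above):

```bash
curl http://localhost:3000/api/models \
  -H "Authorization: Bearer sk-xxx"
```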
## See Also
- GUI Tools Index - Tool comparison
- Ollama Docker - Ollama setup
- Container Deployment - Docker configuration
- Remote Access - External access