Open WebUI Configuration

Open WebUI provides a powerful interface for interacting with AI models through Ollama. This guide explains how to configure and customize it for use with Obelisk.

Basic Configuration

Open WebUI is configured through environment variables in the docker-compose.yaml file:

open-webui:
  environment:
    - MODEL_DOWNLOAD_DIR=/models
    - OLLAMA_API_BASE_URL=http://ollama:11434
    - OLLAMA_API_URL=http://ollama:11434
    - LOG_LEVEL=debug

These settings establish the connection to the Ollama service and configure basic behavior.

User Interface Features

Open WebUI provides several key features:

Chat Interface

The main chat interface allows:

  • Conversational interactions with AI models
  • Code highlighting and formatting
  • File attachment and reference
  • Conversation history and management

Model Selection

Users can select from available models with options for:

  • Parameter adjustment (temperature, top_p, etc.)
  • Context length configuration
  • Model-specific presets

Prompt Templates

Create and manage prompt templates to:

  • Define consistent AI behavior
  • Create specialized assistants for different tasks
  • Share templates with your team

Advanced Configuration

Custom Branding

To customize the Open WebUI appearance for your Obelisk deployment:

  1. Mount a custom assets volume:

open-webui:
  volumes:
    - ./custom-webui-assets:/app/public/custom

  2. Create the following files:

     • custom-webui-assets/logo.png - Main logo
     • custom-webui-assets/logo-dark.png - Logo for dark mode
     • custom-webui-assets/favicon.png - Browser tab icon
     • custom-webui-assets/background.png - Login page background

Authentication

Enable authentication for multi-user setups:

open-webui:
  environment:
    - ENABLE_USER_AUTH=true
    - DEFAULT_USER_EMAIL=admin@example.com
    - DEFAULT_USER_PASSWORD=strongpassword

API Integration

Open WebUI can be integrated with other services via its API:

open-webui:
  environment:
    - ENABLE_API=true
    - API_KEY=your-secure-api-key

This allows programmatic access to model interactions.
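
For example, a chat completion can be requested with a few lines of Python. This is a minimal sketch, assuming the OpenAI-compatible /api/chat/completions route exposed by recent Open WebUI releases, a host port mapping of 3000 for the open-webui container, and a model named llama3 already pulled into Ollama; adjust the URL, key, and model name to match your deployment.

import requests

# Assumed values: adjust to match your deployment.
OPEN_WEBUI_URL = "http://localhost:3000"   # host port mapped to the open-webui container
API_KEY = "your-secure-api-key"            # matches the API_KEY environment variable above

def ask(prompt: str, model: str = "llama3") -> str:
    """Send a single chat message through Open WebUI's OpenAI-compatible endpoint."""
    response = requests.post(
        f"{OPEN_WEBUI_URL}/api/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=120,
    )
    response.raise_for_status()
    # The response is assumed to follow the OpenAI chat completion schema.
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Summarize what Obelisk does in two sentences."))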

Persistent Data

Open WebUI stores its data in Docker volumes:

  • data: Conversations, user settings, and app data
  • open-webui: Configuration files
  • models: Shared with Ollama for model storage

These volumes persist across container restarts and updates.

Customizing for Documentation Support

To optimize Open WebUI for documentation support:

  1. Create a specialized preset (a programmatic sketch of these settings follows this list):

     • Navigate to Settings > Presets
     • Create a new preset named "Documentation Helper"
     • Configure an appropriate temperature (0.3-0.5) and other parameters
     • Set the system prompt to documentation-specific instructions

  2. Create documentation-focused prompt templates:

     • "Explain this concept"
     • "How do I configure X"
     • "Troubleshoot this error"

  3. Enable RAG (Retrieval Augmented Generation):

     • Upload documentation files through the interface
     • Enable the "Knowledge Base" feature
     • Configure vector storage settings
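
The preset itself is created in the UI, but the same settings can be exercised programmatically for testing. The sketch below reuses the assumed /api/chat/completions endpoint and API key from the API Integration section and sends one of the templates above with a low temperature and a documentation-specific system prompt; the model name, port, and prompt wording are illustrative assumptions rather than part of Obelisk.

import requests

OPEN_WEBUI_URL = "http://localhost:3000"   # assumed host port for the open-webui container
API_KEY = "your-secure-api-key"

# Mirrors the "Documentation Helper" preset: low temperature plus a
# documentation-specific system prompt (both values are illustrative).
payload = {
    "model": "llama3",
    "temperature": 0.4,
    "messages": [
        {
            "role": "system",
            "content": "You are a documentation assistant for Obelisk. "
                       "Answer concisely and cite the relevant configuration keys.",
        },
        {"role": "user", "content": "Explain this concept: persistent Docker volumes."},
    ],
}

response = requests.post(
    f"{OPEN_WEBUI_URL}/api/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])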

Troubleshooting

Common issues and solutions:

  1. Connection errors:

     • Verify the network settings in docker-compose
     • Check that the Ollama service is running (see the connectivity check sketch after this list)

  2. Authentication problems:

     • Reset the password using the API
     • Check the environment variables for auth settings

  3. Performance issues:

     • Adjust interface settings for slower devices
     • Configure the page size and context window appropriately
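
For connection errors in particular, a quick script can confirm that Ollama is reachable and has models pulled. This is a minimal sketch that calls Ollama's standard /api/tags model-listing endpoint; it assumes port 11434 is published on localhost, so substitute http://ollama:11434 if you run it from inside the compose network.

import sys
import requests

# Assumed host-side URL; inside the compose network use http://ollama:11434 instead.
OLLAMA_URL = "http://localhost:11434"

def check_ollama(base_url: str) -> bool:
    """Return True if Ollama responds, and report which models it has pulled."""
    try:
        response = requests.get(f"{base_url}/api/tags", timeout=5)
        response.raise_for_status()
    except requests.RequestException as exc:
        print(f"Ollama is not reachable at {base_url}: {exc}")
        return False
    models = [m["name"] for m in response.json().get("models", [])]
    print(f"Ollama is up; available models: {models or 'none pulled yet'}")
    return True

if __name__ == "__main__":
    sys.exit(0 if check_ollama(OLLAMA_URL) else 1)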

Resources