Configuring Ollama + Open WebUI on Linux
Installation of Open WebUI with Ollama Support
There are several ways to install Open WebUI with Ollama support. This guide covers three options:
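All three options assume Docker is already installed on the host. If it is not, Docker's upstream convenience script is one way to install it (a quick sketch; review the script and your distribution's documentation before piping it into a shell):
# Download and run Docker's convenience installer (installs the Docker engine and CLI)
curl -fsSL https://get.docker.com | sh
# Confirm the daemon is up and reachable
sudo docker info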
Option 1: Use the Bundled Ollama-Enabled Open WebUI
This is the simplest and recommended installation method. It uses a single container image that bundles Open WebUI with Ollama, allowing a streamlined setup with just one command. Choose the appropriate command based on your hardware environment:
With Nvidia GPU and CUDA Support:
To leverage GPU resources, run the following command:
sudo docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
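The --gpus=all flag only works if the Nvidia driver and the NVIDIA Container Toolkit are installed on the host. A quick way to confirm GPU passthrough before starting Open WebUI (the CUDA image tag below is only an example; substitute any recent nvidia/cuda base tag compatible with your driver):
# Should print the same GPU table as running nvidia-smi directly on the host
sudo docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi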
With GPU Support (Without CUDA):
To utilize GPU resources, run the following command:
sudo docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
Only CPU (No GPU):
If you do not have a GPU, use the following command:
sudo docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
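Whichever variant you chose, you can follow the container's startup logs to confirm that both Ollama and the web server come up:
# Follow the container logs; press Ctrl+C to stop following
sudo docker logs -f open-webui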
Verify the Docker Container Is Running:
To check that the container is running, use the following command:
$ sudo docker ps
Expected output:
CONTAINER ID   IMAGE                                  COMMAND           CREATED             STATUS             PORTS                                       NAMES
984c0d7006ef   ghcr.io/open-webui/open-webui:ollama   "bash start.sh"   About an hour ago   Up About an hour   0.0.0.0:3000->8080/tcp, :::3000->8080/tcp   open-webui
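Because this image bundles Ollama inside the same container, models are pulled through that container. A sketch, assuming the ollama CLI is available inside the image and using llama3 purely as an example model name:
# Pull a model with the Ollama instance running inside the open-webui container
sudo docker exec -it open-webui ollama pull llama3
# List the models Ollama now has available
sudo docker exec -it open-webui ollama list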
Option 2: Install Open WebUI with a Separate Ollama Instance
With this option, you install Ollama directly on the host and then run Open WebUI in its own container, pointed at that separate Ollama instance.
Install Ollama:
Run the following command to install Ollama:
curl -fsSL https://ollama.com/install.sh | sh
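On systemd-based distributions, the install script sets Ollama up as a service listening on port 11434. Before wiring up Open WebUI you can verify it and pull a first model (llama3 is only an example model name):
# Check that the Ollama service is running
systemctl status ollama
# The API should answer on the default port with a JSON list of installed models
curl http://127.0.0.1:11434/api/tags
# Pull an example model
ollama pull llama3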
Start Open WebUI:
Once Ollama is installed, use the following command to start Open WebUI:
sudo docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
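The --add-host flag lets the container reach the host's Ollama through http://host.docker.internal:11434. If Open WebUI still cannot connect, a common cause is Ollama listening only on 127.0.0.1; one way to make it listen on all interfaces is a systemd override (a sketch, assuming the systemd service created by the install script):
# Add an override so the ollama service binds to all interfaces
sudo systemctl edit ollama
# In the editor, add:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
# Then restart the service to apply it
sudo systemctl restart ollama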
Connect to Ollama on a Remote Server:
To connect to an Ollama instance running on a remote server, set OLLAMA_BASE_URL to that server's URL:
sudo docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://your_domain -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
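Before pointing Open WebUI at a remote instance, it is worth confirming the endpoint is reachable from the Docker host (your_domain is a placeholder for your actual address):
# A reachable Ollama endpoint returns a JSON list of installed models
curl https://your_domain/api/tags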
Run Open WebUI with Nvidia GPU Support:
If you're using an Nvidia GPU, run the following command:
sudo docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
Verify the Docker Container Is Running:
To check that the container is running, use the following command:
$ sudo docker ps
Option 3: Install Using Docker Compose
This option involves cloning the Open WebUI GitHub repository and using Docker Compose for installation.
Clone the Repository:
First, clone the Open WebUI GitHub repository:
git clone https://github.com/open-webui/open-webui
cd open-webui
Select the Appropriate Docker Compose File:
Choose the appropriate Docker Compose file based on your hardware:
- docker-compose.amdgpu.yaml: For AMD GPUs
- docker-compose.api.yaml: API-only setup
- docker-compose.data.yaml: For data services
- docker-compose.gpu.yaml: For Nvidia GPUs
- docker-compose.yaml: Default configuration
Start the Docker Compose Environment:
Run the following command to start the Docker Compose environment:
sudo docker compose -f docker-compose.yaml up
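If you picked one of the hardware-specific files, it is typically layered on top of the default file rather than used on its own, and -d runs the stack in the background. A sketch for the Nvidia GPU case (assuming docker-compose.gpu.yaml is intended as an override of the base file):
# Layer the GPU override on the default configuration and run detached
sudo docker compose -f docker-compose.yaml -f docker-compose.gpu.yaml up -d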
Verify the Docker Container Is Running:
To check that the container is running, use the following command:
$ sudo docker ps
Expected output:
CONTAINER ID   IMAGE                                  COMMAND           CREATED             STATUS             PORTS                                       NAMES
984c0d7006ef   ghcr.io/open-webui/open-webui:ollama   "bash start.sh"   About an hour ago   Up About an hour   0.0.0.0:3000->8080/tcp, :::3000->8080/tcp   open-webui
Accessing the User Interface
Once the setup is complete, open your web browser and navigate to:
http://<your_server_ip>:3000
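If the page does not load, check from the server itself that the container answers on port 3000, and make sure your firewall allows the port (the ufw command is only an example for hosts that use ufw):
# The web UI should answer on port 3000 of the host
curl -I http://localhost:3000
# Open the port if a ufw firewall is active
sudo ufw allow 3000/tcp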
Info
- The first time you start Open WebUI, you may see a white screen for a few minutes because the interface is waiting on a request to the OpenAI API (openai.com). To fix this, disable the OpenAI API and enable the Ollama API in the settings.
- If the local Ollama model you pulled cannot be found after startup, go to the admin panel and enter the correct local Ollama address.
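- The Ollama address can also be passed when the container is first started, so it is correct from the first launch. A sketch for the Option 2 setup (http://host.docker.internal:11434 assumes the --add-host mapping shown earlier):
# Start Open WebUI with the Ollama address set explicitly via OLLAMA_BASE_URL
sudo docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -e OLLAMA_BASE_URL=http://host.docker.internal:11434 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main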
Reference: a CSDN blog by vvw&