28 Apr 2025

All your AIs in one place with OpenWebUI and LiteLLM

If you’ve been exploring different AI models, you might have noticed that each major provider (like OpenAI, Google, and Anthropic) uses its own incompatible API specification. This makes it tricky to have a single, organized place to chat with all these different models. I ran into this too, and after some experimentation, I found a pretty nifty setup that solves the problem.

By combining either Open Web UI (previously called Ollama WebUI) or Prompta with a small library called LiteLLM, you can set up one clean interface to talk to models from Google, Anthropic, and OpenAI, all using your own API keys. Open Web UI is a full-featured option but requires setting up a server and a Postgres database. If you want something quicker and simpler, Prompta is a lightweight, client-side app that’s much easier to get running.

This is especially useful right now because Google is offering free access to powerful models like Gemini 2.5 Pro through their API, and Anthropic lets you use models like Claude 3.7 Sonnet on a pay-as-you-go basis. Combined with OpenAI’s free token program (see this article for more info on that), you can easily access and manage conversations across multiple AIs without juggling different apps or interfaces.

I’ll cover how to get everything up and running at a high level, but I’ll assume you’re familiar with Docker and self-hosting. The TL;DR of the setup is that you need to spin up a copy of LiteLLM somewhere (fly.io works well for this).
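If you go the fly.io route, the rough shape of the deploy (assuming you have flyctl installed and the Dockerfile plus config file from the next section sitting in your current directory; the key names and values here are placeholders) looks something like this:

# Create the app and generate a fly.toml (set internal_port to 4000, LiteLLM's default)
fly launch --no-deploy

# Store provider keys as secrets so they show up as env vars in the container
fly secrets set ANTHROPIC_API_KEY=sk-ant-... GEMINI_API_KEY=...

# Build the image from the Dockerfile and deploy it
fly deploy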

To spin up LiteLLM in a Docker container, you can use the following Dockerfile:

FROM python:3.11-slim

# Install uv
RUN pip install uv

# Install dependencies using uv
RUN uv pip install --system "litellm[proxy]" prisma

# Copy the config file into the container
COPY litellm_cfg.yaml /app/litellm_cfg.yaml

WORKDIR /app

# Run litellm with the specified config
CMD ["litellm", "--config", "/app/litellm_cfg.yaml"]

Before building the container, put a LiteLLM config file like the one below in the same folder as the Dockerfile and save it as litellm_cfg.yaml (customize it for whatever models/providers you’re using):

litellm_settings:
  set_verbose: true
  drop_params: true  # tells litellm to drop extra params that a provider doesn't support (needed for ollama + cursor)

model_list:
  - model_name: claude-3-sonnet-3.7
    litellm_params:
      model: anthropic/claude-3-7-sonnet-latest
      api_key: os.environ/ANTHROPIC_API_KEY

  - model_name: claude-3-sonnet-3.5
    litellm_params:
      # model: anthropic/claude-3-5-sonnet-20240620
      model: anthropic/claude-3-5-sonnet-latest
      api_key: os.environ/ANTHROPIC_API_KEY

  - model_name: claude-3-opus
    litellm_params:
      # model: anthropic/claude-3-opus-20240229
      model: anthropic/claude-3-opus-latest
      api_key: os.environ/ANTHROPIC_API_KEY

  - model_name: gemini-2.5-pro-exp-03-25
    litellm_params:
      model: gemini/gemini-2.5-pro-exp-03-25
      api_key: os.environ/GEMINI_API_KEY

The LiteLLM config file is where you define the list of models you want to be able to route to. You’ll also need to make sure the env vars for your API keys are set inside the container.
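The compose file below reads an .env file for exactly this, so the simplest option is to drop something like the following next to the Dockerfile (the values here are placeholders; LITELLM_MASTER_KEY is the key you’ll use to authenticate against the proxy itself):

# .env -- replace with your real keys
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=...
LITELLM_MASTER_KEY=sk-1234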

Once the container is up and running, you can go to <your-litellm-host>:8000/ui, where you can see your models and add new ones through LiteLLM’s interface.
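To confirm the proxy itself is answering, you can also hit its OpenAI-compatible chat endpoint directly with curl. The host, model name, and key below are whatever you configured (here I’m reusing the Gemini entry from the config above and the placeholder master key from the compose file):

curl http://<your-litellm-host>:8000/v1/chat/completions \
  -H "Authorization: Bearer sk-1234" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemini-2.5-pro-exp-03-25",
    "messages": [{"role": "user", "content": "Hello from LiteLLM"}]
  }'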

Here’s the docker compose file that goes with it:

services:
  litellm:
    build: .
    ports: 
      - "8000:4000"
    env_file:
      - .env
    environment:
      # LITELLM_MASTER_KEY: "sk-1234"  # can set this here, but recommend setting it in a .env file
      DATABASE_URL: "postgres://postgres:postgres@postgres:5432/postgres" # use something more secure here
      PORT: 4000
      STORE_MODEL_IN_DB: "True"
    command: litellm --drop_params --config /app/litellm_cfg.yaml
    volumes:
      - ./litellm_cfg.yaml:/app/litellm_cfg.yaml
    restart: unless-stopped

  # postgres is only needed to let you use the /ui endpoint to manage litellm
  postgres:
    image: postgres
    environment:
      # make sure these match whatever user and password you set up in the DATABASE_URL above
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: postgres
    volumes:
      - ./pg_data:/var/lib/postgresql/data
    restart: unless-stopped

You’ll note that the compose file also spins up a Postgres database to go with LiteLLM. This isn’t strictly necessary (LiteLLM can run without it), but with the database in place you can edit the model list from the web UI without having to mess with YAML files.

Once you have both of those in place, run docker-compose up. You’ll then be able to access LiteLLM’s UI, and LiteLLM will serve an OpenAI-compatible endpoint for these models that you can point Open Web UI or Prompta at. If you need an example docker-compose file for spinning up a copy of Open Web UI, check here.
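As a rough sketch of what that hookup looks like (double-check the image tag and env var names against Open Web UI’s docs; the host, port, and key are whatever you set up for LiteLLM above), an Open Web UI service pointed at the LiteLLM proxy might look like:

services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    environment:
      # Point Open Web UI at LiteLLM's OpenAI-compatible endpoint
      OPENAI_API_BASE_URL: "http://<your-litellm-host>:8000/v1"
      # Use your LiteLLM master key (or a virtual key created in LiteLLM's UI)
      OPENAI_API_KEY: "sk-1234"
    volumes:
      - ./open-webui:/app/backend/data
    restart: unless-stopped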