How to Install AI Tools in Docker

Fooocus

docker pull ghcr.io/lllyasviel/fooocus:latest
docker run -d \
  --name fooocus \
  --gpus all \
  -p 7865:7865 \
  -v fooocus_data:/content/data \
  ghcr.io/lllyasviel/fooocus:latest
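
If the container starts, the web UI is served on the mapped port. A quick check, using the container name from the command above:

# Watch startup logs (the first run downloads models, so it can take a while)
docker logs -f fooocus

# Then open the UI
# -> http://localhost:7865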

VideoSOS

# 1. Clone the repo
git clone https://github.com/timoncool/videosos
cd videosos

# 2. Start VideoSOS in Docker
docker compose up -d

# 3. Open in browser
# -> http://localhost:3000

# 4. Stop when done
docker compose down
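
If the UI does not come up, two standard Compose checks (nothing VideoSOS-specific assumed):

# Confirm the containers are running
docker compose ps

# Tail the logs for errors
docker compose logs -f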

LocalAI

1️⃣ Prereqs

Make sure these work first:

docker --version
docker compose version

If you have a GPU (NVIDIA):

NVIDIA drivers installed

NVIDIA Container Toolkit installed
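
A quick way to confirm Docker can actually see the GPU (the CUDA image tag below is only an example; any CUDA base image works):

docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi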

2️⃣ Create a project folder
mkdir localai-webui
cd localai-webui

3️⃣ docker-compose.yml

Create this file:

version: "3.9"

services:
  localai:
    image: ghcr.io/go-skynet/localai:latest
    container_name: localai
    ports:
      - "8080:8080"
    volumes:
      - ./models:/models
    environment:
      - MODELS_PATH=/models
    command: >
      --models-path /models
      --context-size 4096
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: [gpu]
    restart: unless-stopped

  webui:
    image: ghcr.io/open-webui/open-webui:latest
    container_name: open-webui
    ports:
      - "3000:8080"
    environment:
      - OPENAI_API_BASE_URL=http://localai:8080/v1
      - OPENAI_API_KEY=localai
    depends_on:
      - localai
    volumes:
      - ./webui-data:/app/backend/data
    restart: unless-stopped
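
Before moving on, check that the file parses (this only validates the YAML, it does not start anything):

docker compose config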

4️⃣ Download a model (important)

LocalAI does not download models for you in this setup, so you have to provide one yourself.

Create folders:

mkdir -p models/llama-3

Example: download a GGUF model (recommended):

wget -O models/llama-3/llama-3-8b-instruct.Q4_K_M.gguf \
https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct-GGUF/resolve/main/llama-3-8b-instruct.Q4_K_M.gguf
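
A Q4_K_M quantization of an 8B model is several gigabytes, so confirm the file arrived at full size before continuing:

ls -lh models/llama-3/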

Create models/llama-3/model.yaml:

name: llama-3
backend: llama-cpp
parameters:
  model: llama-3-8b-instruct.Q4_K_M.gguf
context_size: 4096
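
The models directory should now look roughly like this:

models/
└── llama-3/
    ├── llama-3-8b-instruct.Q4_K_M.gguf
    └── model.yaml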

5️⃣ Start everything
docker compose up -d
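
The first start pulls both images and loads the model, so it can take a few minutes. Follow progress with:

docker compose logs -f localai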

6️⃣ Open the UI

Web UI: http://localhost:3000

LocalAI API: http://localhost:8080/v1/chat/completions
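
You can also hit the API directly before touching the UI. A minimal request against the OpenAI-compatible endpoint above, using the model name from step 4:

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama-3", "messages": [{"role": "user", "content": "Hello!"}]}'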

In Open WebUI:

Model → select llama-3

Start chatting 🎉

MusicGPT

docker pull gabotechs/musicgpt
docker run -it --gpus all -p 8642:8642 \
  -v ~/.musicgpt:/root/.local/share/musicgpt \
  gabotechs/musicgpt --gpu --ui-expose
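
Without an NVIDIA GPU the same image can run on the CPU; a sketch, simply dropping the GPU flags (generation will be much slower):

docker run -it -p 8642:8642 \
  -v ~/.musicgpt:/root/.local/share/musicgpt \
  gabotechs/musicgpt --ui-expose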