Fooocus
# docker pull ghcr.io/lllyasviel/fooocus:latest
# docker run -d \
    --name fooocus \
    --gpus all \
    -p 7865:7865 \
    -v fooocus_data:/content/data \
    ghcr.io/lllyasviel/fooocus:latest
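Once the container is up, a quick check is to tail the logs and watch the first-run model downloads finish (container name as set above), then open http://localhost:7865:
# docker logs -f fooocus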
VideoSOS
Clone the repo
# git clone https://github.com/timoncool/videosos
# cd videosos
Start VideoSOS in Docker
# docker compose up -d
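Before opening the browser, you can confirm the stack came up cleanly:
# docker compose ps
# docker compose logs -f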
Open in browser
http://localhost:3000
Stop when done
# docker compose down
LocalAI + Open WebUI
Prereqs
Make sure these work first:
# docker --version
# docker compose version
If you have a GPU (NVIDIA):
NVIDIA drivers installed
NVIDIA Container Toolkit installed
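A quick way to confirm the Container Toolkit is wired up is to run nvidia-smi inside a throwaway CUDA container (the image tag here is just an example; any recent nvidia/cuda base tag works):
# docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
If this prints your GPU table, the compose deploy block below will work.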
Create a project folder
# mkdir localai-webui
# cd localai-webui
docker-compose.yml
Create this file:
version: "3.9"

services:
  localai:
    image: ghcr.io/go-skynet/localai:latest
    container_name: localai
    ports:
      - "8080:8080"
    volumes:
      - ./models:/models
    environment:
      - MODELS_PATH=/models
    command: >
      --models-path /models
      --context-size 4096
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: [gpu]
    restart: unless-stopped

  webui:
    image: ghcr.io/open-webui/open-webui:latest
    container_name: open-webui
    ports:
      - "3000:8080"
    environment:
      - OPENAI_API_BASE_URL=http://localai:8080/v1
      - OPENAI_API_KEY=localai
    depends_on:
      - localai
    volumes:
      - ./webui-data:/app/backend/data
    restart: unless-stopped
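YAML indentation mistakes are easy to make here; Compose can validate the file and print the merged configuration before you start anything:
# docker compose config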
Download a model (important)
LocalAI does not auto-download models.
Create folders:
# mkdir -p models/llama-3
Example: download a GGUF model (recommended):
# wget -O models/llama-3/llama-3-8b-instruct.Q4_K_M.gguf \
    https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct-GGUF/resolve/main/llama-3-8b-instruct.Q4_K_M.gguf
Create models/llama-3/model.yaml:
name: llama-3
backend: llama-cpp
parameters:
  model: llama-3-8b-instruct.Q4_K_M.gguf
context_size: 4096
Start everything
# docker compose up -d
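LocalAI serves the OpenAI-compatible /v1/models endpoint, so once it has started you can confirm it picked up model.yaml (the name field above should appear in the list):
# curl http://localhost:8080/v1/models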
Open the UI
Web UI: http://localhost:3000
LocalAI API: http://localhost:8080/v1/chat/completions
In Open WebUI:
Model → select llama-3
Start chatting
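You can also test the API directly, bypassing Open WebUI, with a minimal OpenAI-style request against the endpoint above (the model name matches model.yaml):
# curl http://localhost:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
      "model": "llama-3",
      "messages": [{"role": "user", "content": "Hello!"}]
    }'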
MusicGPT
# docker pull gabotechs/musicgpt
# docker run -it --gpus all -p 8642:8642 \
    -v ~/.musicgpt:/root/.local/share/musicgpt \
    gabotechs/musicgpt --gpu --ui-expose
ARM Support
If you need ARM support for any of the above, the general approach is:
Build your own ARM image
Enable Docker Buildx
# docker buildx create --use
# docker buildx inspect --bootstrap
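To confirm the builder can target ARM, list the builders; linux/arm64 should appear among the supported platforms:
# docker buildx ls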
Clone the repo
# git clone https://github.com/<project>/<repo>.git
# cd <repo>
Build for ARM64
# docker buildx build \
    --platform linux/arm64 \
    -t my-arm64-fooocus:latest \
    .
Run it
# docker run -p 7865:7865 my-arm64-fooocus:latest
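To double-check that the image really is arm64 (and was not silently built for your host architecture), inspect its metadata:
# docker image inspect my-arm64-fooocus:latest --format '{{.Architecture}}'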
This only works if the Python/PyTorch packages and other dependencies ship ARM-compatible wheels; many recent PyTorch builds do on Apple Silicon (M1/M2) Macs and on some Linux ARM64 systems.