chore: add missing READMEs

Author: Sun-ZhenXing
Date: 2025-11-08 21:57:17 +08:00
parent a65a009640
commit febd1601a2
34 changed files with 1806 additions and 167 deletions


@@ -0,0 +1,8 @@
# Docker registry
DOCKER_REGISTRY=docker.io
# Build version
BUILD_VERSION=1.6.0
# Hugging Face endpoint; optional mirror, useful for users in China
# HF_ENDPOINT=https://hf-mirror.com

builds/io-paint/.gitignore

@@ -0,0 +1 @@
/models


@@ -0,0 +1,18 @@
FROM pytorch/pytorch:2.6.0-cuda12.4-cudnn9-runtime
ARG DEBIAN_FRONTEND=noninteractive
ARG VERSION=1.6.0
WORKDIR /workspace
# Install packages and clean up in the same layer, so the apt cache
# never persists into the image
RUN apt-get update && apt-get install -y --no-install-recommends \
    software-properties-common \
    libsm6 libxext6 ffmpeg libfontconfig1 libxrender1 libgl1-mesa-glx \
    curl python3-pip && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
RUN python3 -m pip install --upgrade pip
RUN pip3 install iopaint==${VERSION} && pip3 cache purge
EXPOSE 8080
CMD ["iopaint", "start", "--model=lama", "--device=cuda", "--port=8080", "--host=0.0.0.0"]
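The image above can also be built and smoke-tested outside Compose; a minimal sketch (the `local/lama-cleaner` tag mirrors the Compose file in this commit, the rest is an assumption):

```shell
# Build, pinning the IOPaint release baked into the image
docker build --build-arg VERSION=1.6.0 -t local/lama-cleaner:1.6.0 .

# Run on GPU 0, persisting the model cache in ./models as Compose does
docker run --rm --gpus '"device=0"' \
  -p 8080:8080 \
  -v "$(pwd)/models:/root/.cache" \
  local/lama-cleaner:1.6.0
```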

builds/io-paint/README.md

@@ -0,0 +1,60 @@
# IOPaint (Lama Cleaner)
[English](./README.md) | [中文](./README.zh.md)
IOPaint (formerly LaMa Cleaner) is a free, open-source inpainting and outpainting tool powered by state-of-the-art AI models.
## Prerequisites
- NVIDIA GPU with CUDA support
- Docker with NVIDIA runtime support
## Initialization
1. Copy the example environment file:
```bash
cp .env.example .env
```
2. Start the service:
```bash
docker compose up -d
```
3. Access the web interface at <http://localhost:8080>
## Services
- `lama-cleaner`: the IOPaint web service (named after the project's former name).
## Configuration
The service runs on port 8080 and uses CUDA device 0 by default.
| Variable | Description | Default |
| ----------------- | -------------------------------- | ----------- |
| `DOCKER_REGISTRY` | Docker registry to use | `docker.io` |
| `BUILD_VERSION`   | Build version                    | `1.6.0`     |
| `HF_ENDPOINT` | Hugging Face endpoint (optional) | - |
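The defaults above are standard Compose `${VAR:-default}` substitutions resolved from `.env`. As a rough illustration of the lookup rule (the `resolve` helper is hypothetical, not part of Compose):

```python
import re


def resolve(template: str, env: dict) -> str:
    """Expand ${VAR:-default} the way Docker Compose does: use the
    environment value when set and non-empty, else the default."""
    def sub(match: re.Match) -> str:
        var, default = match.group(1), match.group(2)
        value = env.get(var, "")
        return value if value else default
    return re.sub(r"\$\{(\w+):-([^}]*)\}", sub, template)


image = "${DOCKER_REGISTRY:-docker.io}/local/lama-cleaner:${BUILD_VERSION:-1.6.0}"
print(resolve(image, {}))  # docker.io/local/lama-cleaner:1.6.0
print(resolve(image, {"DOCKER_REGISTRY": "registry.example.com"}))
```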
## Models
Models are automatically downloaded and cached in the `./models` directory on first use.
## GPU Support
This configuration requires an NVIDIA GPU and uses CUDA device 0. Make sure you have:
- NVIDIA drivers installed
- Docker with NVIDIA runtime support
- NVIDIA Container Toolkit (formerly nvidia-docker2) installed
## Reference
- [Dockerfile](https://github.com/Sanster/IOPaint/blob/main/docker/GPUDockerfile)
## License
Please refer to the official IOPaint project for license information.


@@ -0,0 +1,54 @@
# IOPaint (Lama Cleaner)
[English](./README.md) | [中文](./README.zh.md)
IOPaint (formerly LaMa Cleaner) is a free, open-source inpainting and outpainting tool powered by state-of-the-art AI models.
## Prerequisites
- NVIDIA GPU with CUDA support
- Docker with NVIDIA runtime support
## Initialization
1. Copy the example environment file:
```bash
cp .env.example .env
```
2. Start the service:
```bash
docker compose up -d
```
3. Access the web interface at <http://localhost:8080>
## Services
- `lama-cleaner`: the IOPaint web service (named after the project's former name).
## Configuration
The service runs on port 8080 and uses CUDA device 0 by default.
| Variable | Description | Default |
| ----------------- | -------------------------------- | ----------- |
| `DOCKER_REGISTRY` | Docker registry to use | `docker.io` |
| `BUILD_VERSION` | Build version | `1.6.0` |
| `HF_ENDPOINT` | Hugging Face endpoint (optional) | - |
## Models
Models are automatically downloaded and cached in the `./models` directory on first use.
## GPU Support
This configuration requires an NVIDIA GPU and uses CUDA device 0. Make sure you have:
- NVIDIA drivers installed
- Docker with NVIDIA runtime support
- NVIDIA Container Toolkit (formerly nvidia-docker2) installed
## License
Please refer to the official IOPaint project for license information.


@@ -0,0 +1,47 @@
x-defaults: &defaults
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: "3"

services:
  lama-cleaner:
    <<: *defaults
    image: ${DOCKER_REGISTRY:-docker.io}/local/lama-cleaner:${BUILD_VERSION:-1.6.0}
    ports:
      - "8080:8080"
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      TZ: ${TZ:-UTC}
      HF_ENDPOINT: ${HF_ENDPOINT:-}
    volumes:
      - ./models:/root/.cache
    command:
      - iopaint
      - start
      - --model=lama
      - --device=cuda
      - --port=8080
      - --host=0.0.0.0
    deploy:
      resources:
        limits:
          cpus: "2.0"
          memory: 4G
        reservations:
          cpus: "1.0"
          memory: 2G
          devices:
            - driver: nvidia
              device_ids: ["0"]
              capabilities: [compute, utility]
    healthcheck:
      # curl is installed by the Dockerfile; wget is not present in the base image
      test: ["CMD", "curl", "-fsS", "http://localhost:8080/"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s
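The healthcheck only marks the container `unhealthy` after `retries` consecutive probe failures, and a single success resets the count. A toy Python model of that state machine (ignoring `interval` and `start_period` timing):

```python
def container_health(probe_results, retries=3):
    """Mimic Docker's healthcheck state machine: the container turns
    'unhealthy' only after `retries` consecutive failed probes, and
    any successful probe resets it to 'healthy'."""
    failures = 0
    state = "healthy"
    for ok in probe_results:
        if ok:
            failures = 0
            state = "healthy"
        else:
            failures += 1
            if failures >= retries:
                state = "unhealthy"
    return state


print(container_health([True, False, False, True]))  # healthy
print(container_health([False, False, False]))       # unhealthy
```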