feat: add more Agent services & easytier

Summer Shen
2026-04-19 12:26:54 +08:00
parent 0e948befac
commit 0b5ba69cb0
30 changed files with 1775 additions and 0 deletions
+22
@@ -0,0 +1,22 @@
# Global Registry Prefix (optional)
# GLOBAL_REGISTRY=
# AnythingLLM Image Version
# No stable semantic version tags exist; 'latest' tracks the current release.
ANYTHINGLLM_VERSION=latest
# Timezone
TZ=UTC
# Host port for the AnythingLLM web UI
ANYTHINGLLM_PORT_OVERRIDE=3001
# UID/GID for file ownership inside the container
ANYTHINGLLM_UID=1000
ANYTHINGLLM_GID=1000
# Resource Limits
ANYTHINGLLM_CPU_LIMIT=2
ANYTHINGLLM_MEMORY_LIMIT=2G
ANYTHINGLLM_CPU_RESERVATION=0.5
ANYTHINGLLM_MEMORY_RESERVATION=512M
+49
@@ -0,0 +1,49 @@
# AnythingLLM
[English](./README.md) | [中文](./README.zh.md)
Quick start: <https://docs.anythingllm.com>.
This service deploys AnythingLLM, an all-in-one AI application that lets you chat with documents, use multiple LLM providers, and build custom AI agents — with a full RAG pipeline built in.
## Services
- `anythingllm`: The AnythingLLM web application.
## Quick Start
```bash
docker compose up -d
```
Open `http://localhost:3001` and complete the setup wizard to connect your LLM provider.
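If the UI is not reachable right away, two standard Compose commands confirm the container state (run them from the same directory as the compose file):
```bash
# Check that the container is up and its health check is passing
docker compose ps anythingllm

# Follow the startup logs while the app initializes
docker compose logs -f anythingllm
```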
## Configuration
All LLM providers, vector databases, and agent settings are configured through the web UI after startup. No API keys are required in `.env` unless you want to pre-seed them via environment variables.
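For example, to pre-seed an OpenAI connection instead of using the wizard, you could add provider settings to the service's `environment:` block via an override file. This is a minimal sketch: the keys follow AnythingLLM's documented server settings, but treat the exact names as an assumption and verify them against <https://docs.anythingllm.com>.
```yaml
# docker-compose.override.yml (picked up automatically by Compose)
services:
  anythingllm:
    environment:
      # Assumed AnythingLLM settings keys; verify against the docs.
      # OPENAI_API_KEY is a hypothetical variable you would add to .env.
      - LLM_PROVIDER=openai
      - OPEN_AI_KEY=${OPENAI_API_KEY:?set OPENAI_API_KEY in .env}
      - OPEN_MODEL_PREF=gpt-4o
```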
| Variable | Description | Default |
| ----------------------------- | ----------------------------------------------- | -------- |
| `ANYTHINGLLM_VERSION` | Image version (`latest` — no stable tags exist) | `latest` |
| `TZ` | Container timezone | `UTC` |
| `ANYTHINGLLM_PORT_OVERRIDE` | Host port for the web UI | `3001` |
| `ANYTHINGLLM_UID` | UID for volume file ownership | `1000` |
| `ANYTHINGLLM_GID` | GID for volume file ownership | `1000` |
| `ANYTHINGLLM_CPU_LIMIT` | CPU limit | `2` |
| `ANYTHINGLLM_MEMORY_LIMIT` | Memory limit | `2G` |
| `ANYTHINGLLM_CPU_RESERVATION` | CPU reservation | `0.5` |
| `ANYTHINGLLM_MEMORY_RESERVATION` | Memory reservation | `512M` |
## Volumes
- `anythingllm_storage`: Persists all application data, uploaded documents, embeddings, and settings.
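Since all state lives in this one volume, a throwaway container can snapshot it to a tarball. A minimal sketch; note that Compose prefixes volume names with the project name, so check `docker volume ls` for the exact name first:
```bash
# Archive the volume contents to the current directory
docker run --rm \
  -v anythingllm_storage:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/anythingllm-storage.tar.gz -C /data .
```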
## Ports
- **3001**: Web UI
## Notes
- The `mintplexlabs/anythingllm` image does not publish stable semantic version tags; `latest` is the only reliable tag.
- Supports OpenAI, Anthropic, Ollama, LM Studio, and many other LLM backends — all configured from the UI.
- The health check uses the `/api/ping` endpoint.
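The same endpoint can be probed from the host to confirm liveness, assuming the default port mapping:
```bash
# Expect HTTP 200 when the app is ready; -f makes curl exit non-zero otherwise
curl -fsS http://localhost:3001/api/ping
```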
+49
@@ -0,0 +1,49 @@
# AnythingLLM
[English](./README.md) | [中文](./README.zh.md)
Quick start: <https://docs.anythingllm.com>.
This service deploys AnythingLLM, an all-in-one AI application that combines document chat, multiple LLM providers, and custom AI agents, with a full RAG pipeline built in.
## Services
- `anythingllm`: The AnythingLLM web application.
## Quick Start
```bash
docker compose up -d
```
Open `http://localhost:3001` and follow the setup wizard to connect your LLM provider.
## Configuration
All LLM providers, vector databases, and agent settings are configured through the web UI after startup. No API keys need to be set in `.env` unless you want to pre-seed them via environment variables.
| Variable | Description | Default |
| -------------------------------- | ------------------------------------------------ | -------- |
| `ANYTHINGLLM_VERSION` | Image version (no stable semantic tags; use `latest`) | `latest` |
| `TZ` | Container timezone | `UTC` |
| `ANYTHINGLLM_PORT_OVERRIDE` | Host port for the web UI | `3001` |
| `ANYTHINGLLM_UID` | UID for volume file ownership | `1000` |
| `ANYTHINGLLM_GID` | GID for volume file ownership | `1000` |
| `ANYTHINGLLM_CPU_LIMIT` | CPU limit | `2` |
| `ANYTHINGLLM_MEMORY_LIMIT` | Memory limit | `2G` |
| `ANYTHINGLLM_CPU_RESERVATION` | CPU reservation | `0.5` |
| `ANYTHINGLLM_MEMORY_RESERVATION` | Memory reservation | `512M` |
## Volumes
- `anythingllm_storage`: Persists all application data, uploaded documents, embeddings, and settings.
## Ports
- **3001**: Web UI
## Notes
- The `mintplexlabs/anythingllm` image does not publish stable semantic version tags; `latest` is the only reliable tag.
- Supports OpenAI, Anthropic, Ollama, LM Studio, and many other LLM backends, all configured from the UI.
- The health check uses the `/api/ping` endpoint.
+42
@@ -0,0 +1,42 @@
x-defaults: &defaults
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: '3'

services:
  anythingllm:
    <<: *defaults
    image: ${GLOBAL_REGISTRY:-}mintplexlabs/anythingllm:${ANYTHINGLLM_VERSION:-latest}
    ports:
      - '${ANYTHINGLLM_PORT_OVERRIDE:-3001}:3001'
    volumes:
      - anythingllm_storage:/app/server/storage
    environment:
      - TZ=${TZ:-UTC}
      - STORAGE_DIR=/app/server/storage
      - UID=${ANYTHINGLLM_UID:-1000}
      - GID=${ANYTHINGLLM_GID:-1000}
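    # In-process Node HTTP probe; avoids requiring curl or wget inside the image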
    healthcheck:
      test:
        - CMD
        - node
        - -e
        - "require('http').get('http://localhost:3001/api/ping',res=>process.exit(res.statusCode===200?0:1)).on('error',()=>process.exit(1))"
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 30s
    deploy:
      resources:
        limits:
          cpus: ${ANYTHINGLLM_CPU_LIMIT:-2}
          memory: ${ANYTHINGLLM_MEMORY_LIMIT:-2G}
        reservations:
          cpus: ${ANYTHINGLLM_CPU_RESERVATION:-0.5}
          memory: ${ANYTHINGLLM_MEMORY_RESERVATION:-512M}

volumes:
  anythingllm_storage:
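Before bringing the stack up, rendering the resolved configuration is a cheap way to catch interpolation mistakes in the variables above:
```bash
# Print the fully-interpolated compose file; bad defaults surface here first
docker compose config
```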