feat: add more Agent services & easytier
@@ -0,0 +1,22 @@
# Global Registry Prefix (optional)
# GLOBAL_REGISTRY=

# AnythingLLM Image Version
# No stable semantic version tags exist; 'latest' tracks the current release.
ANYTHINGLLM_VERSION=latest

# Timezone
TZ=UTC

# Host port for the AnythingLLM web UI
ANYTHINGLLM_PORT_OVERRIDE=3001

# UID/GID for file ownership inside the container
ANYTHINGLLM_UID=1000
ANYTHINGLLM_GID=1000

# Resource Limits
ANYTHINGLLM_CPU_LIMIT=2
ANYTHINGLLM_MEMORY_LIMIT=2G
ANYTHINGLLM_CPU_RESERVATION=0.5
ANYTHINGLLM_MEMORY_RESERVATION=512M
@@ -0,0 +1,49 @@
# AnythingLLM

[English](./README.md) | [中文](./README.zh.md)

Quick start: <https://docs.anythingllm.com>.

This service deploys AnythingLLM, an all-in-one AI application that lets you chat with documents, use multiple LLM providers, and build custom AI agents — with a full RAG pipeline built in.

## Services

- `anythingllm`: The AnythingLLM web application.

## Quick Start

```bash
docker compose up -d
```

Open `http://localhost:3001` and complete the setup wizard to connect your LLM provider.

## Configuration

All LLM providers, vector databases, and agent settings are configured through the web UI after startup. No API keys are required in `.env` unless you want to pre-seed them via environment variables.
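If you would rather pre-seed a provider than use the setup wizard, a compose override along these lines can work. This is a sketch: the `LLM_PROVIDER` and `OPEN_AI_KEY` variable names are taken from upstream AnythingLLM documentation, not from this stack, so verify them against your image version.

```yaml
# docker-compose.override.yml: optional, hypothetical pre-seed sketch
services:
  anythingllm:
    environment:
      # Variable names assumed from upstream AnythingLLM env docs
      - LLM_PROVIDER=openai
      - OPEN_AI_KEY=${OPENAI_API_KEY:-}
```

Compose merges `environment` entries by key, so the base service definition stays intact.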
| Variable                         | Description                                     | Default  |
| -------------------------------- | ----------------------------------------------- | -------- |
| `ANYTHINGLLM_VERSION`            | Image version (`latest` — no stable tags exist) | `latest` |
| `TZ`                             | Container timezone                              | `UTC`    |
| `ANYTHINGLLM_PORT_OVERRIDE`      | Host port for the web UI                        | `3001`   |
| `ANYTHINGLLM_UID`                | UID for volume file ownership                   | `1000`   |
| `ANYTHINGLLM_GID`                | GID for volume file ownership                   | `1000`   |
| `ANYTHINGLLM_CPU_LIMIT`          | CPU limit                                       | `2`      |
| `ANYTHINGLLM_MEMORY_LIMIT`       | Memory limit                                    | `2G`     |
| `ANYTHINGLLM_CPU_RESERVATION`    | CPU reservation                                 | `0.5`    |
| `ANYTHINGLLM_MEMORY_RESERVATION` | Memory reservation                              | `512M`   |

## Volumes

- `anythingllm_storage`: Persists all application data, uploaded documents, embeddings, and settings.

## Ports

- **3001**: Web UI

## Notes

- The `mintplexlabs/anythingllm` image does not publish stable semantic version tags; `latest` is the only reliable tag.
- Supports OpenAI, Anthropic, Ollama, LM Studio, and many other LLM backends — all configured from the UI.
- The health check uses the `/api/ping` endpoint.
@@ -0,0 +1,49 @@
# AnythingLLM

[English](./README.md) | [中文](./README.zh.md)

快速开始:<https://docs.anythingllm.com>。

此服务用于部署 AnythingLLM,一款集文档问答、多 LLM 提供商接入和自定义 AI Agent 于一体的全能 AI 应用,内置完整的 RAG 流水线。

## 服务

- `anythingllm`:AnythingLLM Web 应用。

## 快速开始

```bash
docker compose up -d
```

打开 `http://localhost:3001`,按照设置向导连接你的 LLM 提供商。

## 配置

所有 LLM 提供商、向量数据库和 Agent 设置均通过启动后的 Web UI 进行配置,无需在 `.env` 中预设 API Key(除非你希望通过环境变量预填充)。
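如确需在启动前通过环境变量预填充提供商,可参考如下 override 示意(仅为假设示例:`LLM_PROVIDER` 与 `OPEN_AI_KEY` 变量名来自 AnythingLLM 上游文档,并非本配置的一部分,请与你的镜像版本核对):

```yaml
# docker-compose.override.yml:可选的预填充示意
services:
  anythingllm:
    environment:
      # 变量名为假设,取自 AnythingLLM 上游环境变量文档
      - LLM_PROVIDER=openai
      - OPEN_AI_KEY=${OPENAI_API_KEY:-}
```

Compose 会按键名合并 `environment` 条目,基础服务定义保持不变。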
| 变量                             | 说明                                        | 默认值   |
| -------------------------------- | ------------------------------------------- | -------- |
| `ANYTHINGLLM_VERSION`            | 镜像版本(无语义化稳定标签,使用 `latest`) | `latest` |
| `TZ`                             | 容器时区                                    | `UTC`    |
| `ANYTHINGLLM_PORT_OVERRIDE`      | Web UI 的宿主机端口                         | `3001`   |
| `ANYTHINGLLM_UID`                | 数据卷文件所有者 UID                        | `1000`   |
| `ANYTHINGLLM_GID`                | 数据卷文件所有者 GID                        | `1000`   |
| `ANYTHINGLLM_CPU_LIMIT`          | CPU 限制                                    | `2`      |
| `ANYTHINGLLM_MEMORY_LIMIT`       | 内存限制                                    | `2G`     |
| `ANYTHINGLLM_CPU_RESERVATION`    | CPU 预留                                    | `0.5`    |
| `ANYTHINGLLM_MEMORY_RESERVATION` | 内存预留                                    | `512M`   |

## 数据卷

- `anythingllm_storage`:持久化所有应用数据、上传的文档、嵌入向量和配置。

## 端口

- **3001**:Web UI

## 说明

- `mintplexlabs/anythingllm` 镜像未发布语义化稳定标签,`latest` 是唯一可靠的标签。
- 支持 OpenAI、Anthropic、Ollama、LM Studio 等众多 LLM 后端,均可在 UI 中配置。
- 健康检查使用 `/api/ping` 端点。
@@ -0,0 +1,42 @@
x-defaults: &defaults
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: '3'

services:
  anythingllm:
    <<: *defaults
    image: ${GLOBAL_REGISTRY:-}mintplexlabs/anythingllm:${ANYTHINGLLM_VERSION:-latest}
    ports:
      - '${ANYTHINGLLM_PORT_OVERRIDE:-3001}:3001'
    volumes:
      - anythingllm_storage:/app/server/storage
    environment:
      - TZ=${TZ:-UTC}
      - STORAGE_DIR=/app/server/storage
      - UID=${ANYTHINGLLM_UID:-1000}
      - GID=${ANYTHINGLLM_GID:-1000}
    healthcheck:
      test:
        - CMD
        - node
        - -e
        - "require('http').get('http://localhost:3001/api/ping',res=>process.exit(res.statusCode===200?0:1)).on('error',()=>process.exit(1))"
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 30s
    deploy:
      resources:
        limits:
          cpus: ${ANYTHINGLLM_CPU_LIMIT:-2}
          memory: ${ANYTHINGLLM_MEMORY_LIMIT:-2G}
        reservations:
          cpus: ${ANYTHINGLLM_CPU_RESERVATION:-0.5}
          memory: ${ANYTHINGLLM_MEMORY_RESERVATION:-512M}

volumes:
  anythingllm_storage:
@@ -0,0 +1,33 @@
# Global Registry Prefix (optional)
# GLOBAL_REGISTRY=

# EasyTier image version
EASYTIER_VERSION=v2.6.0

# Timezone
TZ=UTC

# Virtual network name (shared with all peers in the same network)
EASYTIER_NETWORK_NAME=easytier

# Virtual network secret — REQUIRED, change before deploying
# Generate a strong secret: openssl rand -hex 16
EASYTIER_NETWORK_SECRET=

# Virtual IPv4 address of this server node within the EasyTier network
EASYTIER_IPV4=10.144.144.1

# Host port for peer TCP connections
EASYTIER_TCP_PORT_OVERRIDE=11010

# Host port for peer UDP connections
EASYTIER_UDP_PORT_OVERRIDE=11010

# Host port for the management RPC portal (bound to 127.0.0.1 by default)
EASYTIER_RPC_PORT_OVERRIDE=15888

# Resource limits
EASYTIER_CPU_LIMIT=0.50
EASYTIER_MEMORY_LIMIT=128M
EASYTIER_CPU_RESERVATION=0.10
EASYTIER_MEMORY_RESERVATION=32M
@@ -0,0 +1,88 @@
# EasyTier

[English](./README.md) | [中文](./README.zh.md)

[EasyTier](https://github.com/EasyTier/EasyTier) is a mesh VPN networking tool that lets you build a private, encrypted overlay network across hosts that are behind NAT or firewalls. This stack deploys EasyTier as a **public relay server** — a stable entry point that peers can use for discovery and traffic relay when direct connections are not possible.

## Services

- `easytier`: EasyTier core node running in relay-only mode (`--no-tun`), without creating a local TUN interface.

## Ports

| Port    | Protocol | Description                                             |
| ------- | -------- | ------------------------------------------------------- |
| `11010` | TCP      | Peer connection listener — must be publicly reachable   |
| `11010` | UDP      | Peer connection listener — must be publicly reachable   |
| `15888` | TCP      | Management RPC portal (bound to `127.0.0.1` by default) |

## Environment Variables

| Variable                     | Description                                   | Default        |
| ---------------------------- | --------------------------------------------- | -------------- |
| `EASYTIER_VERSION`           | EasyTier image version                        | `v2.6.0`       |
| `TZ`                         | Timezone                                      | `UTC`          |
| `EASYTIER_NETWORK_NAME`      | Virtual network name shared by all peers      | `easytier`     |
| `EASYTIER_NETWORK_SECRET`    | Network secret (password); **required**       | `""`           |
| `EASYTIER_IPV4`              | Virtual IPv4 of this server node              | `10.144.144.1` |
| `EASYTIER_TCP_PORT_OVERRIDE` | Host port for peer TCP listener               | `11010`        |
| `EASYTIER_UDP_PORT_OVERRIDE` | Host port for peer UDP listener               | `11010`        |
| `EASYTIER_RPC_PORT_OVERRIDE` | Host port for management RPC (localhost only) | `15888`        |
| `EASYTIER_CPU_LIMIT`         | CPU limit                                     | `0.50`         |
| `EASYTIER_MEMORY_LIMIT`      | Memory limit                                  | `128M`         |

## Quick Start

1. Copy `.env.example` and set a strong network secret:

   ```bash
   cp .env.example .env
   ```

   Edit `.env`:

   ```env
   EASYTIER_NETWORK_NAME=myvpn
   EASYTIER_NETWORK_SECRET=<your-strong-secret>
   ```

   Generate a secret with: `openssl rand -hex 16`

2. Start the server:

   ```bash
   docker compose up -d
   ```

3. Verify the node is healthy:

   ```bash
   docker compose exec easytier easytier-cli -p 127.0.0.1:15888 node info
   ```

4. On each peer machine, connect to this server:

   ```bash
   easytier-core \
     --network-name myvpn \
     --network-secret <your-strong-secret> \
     --peers tcp://<server-public-ip>:11010 \
     --ipv4 10.144.144.2
   ```
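When handing out virtual IPs for additional peers, it helps to plan them up front. A small sketch with Python's standard `ipaddress` module; the `10.144.144.0/24` range is just the example network used above, and any private range that all nodes agree on works:

```python
import ipaddress

# Example virtual network from this README; all peers must agree on it.
network = ipaddress.ip_network("10.144.144.0/24")

# .1 is the relay server (EASYTIER_IPV4); hand out the following
# host addresses to peers in join order.
hosts = list(network.hosts())
server_ip = hosts[0]
peer_ips = [str(ip) for ip in hosts[1:4]]

print(server_ip)   # 10.144.144.1
print(peer_ips)    # ['10.144.144.2', '10.144.144.3', '10.144.144.4']
```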
## Storage

This stack does not use persistent volumes. Configuration is provided entirely via command-line flags derived from environment variables.

## Security Notes

- **`EASYTIER_NETWORK_SECRET` is required.** An empty secret leaves the network open to any peer that knows the network name. Always set a strong random value before exposing this server to the internet.
- The management RPC port (`15888`) is bound to `127.0.0.1` by default. Do not expose it publicly unless you have separate authentication in place.
- Ports `11010/tcp` and `11010/udp` must be open in your firewall / cloud security group for peers to reach this server.
- This stack runs in `--no-tun` relay mode. No kernel TUN device is created, so no elevated capabilities (`NET_ADMIN`) are required and `cap_drop: ALL` is applied.
- If you need this server node to also participate as a VPN peer (with a local virtual interface), remove `--no-tun` from `command` and add `cap_add: [NET_ADMIN]` to the service.
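That last point can be sketched as a compose override. This is an illustration, not part of this stack; in particular, the `devices` mapping for `/dev/net/tun` is an assumption that containerized TUN setups commonly need:

```yaml
# docker-compose.override.yml: run this node as a full VPN peer (sketch)
services:
  easytier:
    cap_add:
      - NET_ADMIN                     # needed to create the TUN interface
    devices:
      - /dev/net/tun:/dev/net/tun     # assumed: expose the host TUN device
    command:                          # same flags as the base file, minus --no-tun
      - --network-name=${EASYTIER_NETWORK_NAME:-easytier}
      - --network-secret=${EASYTIER_NETWORK_SECRET:-}
      - --ipv4=${EASYTIER_IPV4:-10.144.144.1}
      - --listeners
      - tcp://0.0.0.0:11010
      - udp://0.0.0.0:11010
      - --rpc-portal=0.0.0.0:15888
```

`command` does not merge, so the full flag list must be repeated without `--no-tun`; `cap_add: [NET_ADMIN]` takes effect even alongside the base file's `cap_drop: ALL`.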
## Documentation

- [EasyTier GitHub](https://github.com/EasyTier/EasyTier)
- [EasyTier Documentation](https://www.easytier.top/guide/introduction.html)
@@ -0,0 +1,88 @@
# EasyTier

[English](./README.md) | [中文](./README.zh.md)

[EasyTier](https://github.com/EasyTier/EasyTier) 是一款网状 VPN 组网工具,可在 NAT 或防火墙后面的主机之间构建私有加密覆盖网络。本配置将 EasyTier 部署为**公共中继服务器**——作为稳定的入口节点,供各客户端节点在无法直连时进行发现和流量中转。

## 服务

- `easytier`:以中继模式(`--no-tun`)运行的 EasyTier 核心节点,不创建本地 TUN 网络接口。

## 端口

| 端口    | 协议 | 说明                                    |
| ------- | ---- | --------------------------------------- |
| `11010` | TCP  | 节点连接监听端口——需公网可达            |
| `11010` | UDP  | 节点连接监听端口——需公网可达            |
| `15888` | TCP  | 管理 RPC 端口(默认仅绑定 `127.0.0.1`) |

## 环境变量

| 变量名                       | 描述                            | 默认值         |
| ---------------------------- | ------------------------------- | -------------- |
| `EASYTIER_VERSION`           | EasyTier 镜像版本               | `v2.6.0`       |
| `TZ`                         | 时区                            | `UTC`          |
| `EASYTIER_NETWORK_NAME`      | 所有节点共享的虚拟网络名称      | `easytier`     |
| `EASYTIER_NETWORK_SECRET`    | 网络密钥(密码),**必须设置**  | `""`           |
| `EASYTIER_IPV4`              | 本服务器节点在虚拟网络中的 IPv4 | `10.144.144.1` |
| `EASYTIER_TCP_PORT_OVERRIDE` | 节点 TCP 监听端口(宿主机映射) | `11010`        |
| `EASYTIER_UDP_PORT_OVERRIDE` | 节点 UDP 监听端口(宿主机映射) | `11010`        |
| `EASYTIER_RPC_PORT_OVERRIDE` | 管理 RPC 端口(仅本机可访问)   | `15888`        |
| `EASYTIER_CPU_LIMIT`         | CPU 上限                        | `0.50`         |
| `EASYTIER_MEMORY_LIMIT`      | 内存上限                        | `128M`         |

## 快速开始

1. 复制 `.env.example` 并设置强网络密钥:

   ```bash
   cp .env.example .env
   ```

   编辑 `.env`:

   ```env
   EASYTIER_NETWORK_NAME=myvpn
   EASYTIER_NETWORK_SECRET=<你的强密钥>
   ```

   生成随机密钥:`openssl rand -hex 16`

2. 启动服务:

   ```bash
   docker compose up -d
   ```

3. 验证节点状态:

   ```bash
   docker compose exec easytier easytier-cli -p 127.0.0.1:15888 node info
   ```

4. 在各客户端机器上连接到此服务器:

   ```bash
   easytier-core \
     --network-name myvpn \
     --network-secret <你的强密钥> \
     --peers tcp://<服务器公网 IP>:11010 \
     --ipv4 10.144.144.2
   ```

## 数据卷

本配置不使用持久化卷,所有配置均通过环境变量转换为命令行参数传入。

## 安全说明

- **`EASYTIER_NETWORK_SECRET` 为必填项。** 若密钥为空,任何知道网络名称的节点均可接入,请务必在公网暴露前设置强密钥。
- 管理 RPC 端口(`15888`)默认仅绑定 `127.0.0.1`,请勿在无额外认证保护的情况下对外暴露。
- 防火墙及云安全组需放行 `11010/tcp` 和 `11010/udp`,客户端节点才能连接到本服务器。
- 本配置以 `--no-tun` 中继模式运行,无需创建 TUN 设备,因此无需提升内核权限(`NET_ADMIN`),已应用 `cap_drop: ALL`。
- 如需服务器节点同时作为 VPN 网络中的普通成员(拥有本地虚拟网卡),请移除 `command` 中的 `--no-tun` 参数,并在服务中添加 `cap_add: [NET_ADMIN]`。
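上一条可以用 compose override 示意。以下仅为示例,不属于本配置;其中 `/dev/net/tun` 的 `devices` 映射是容器内创建 TUN 设备常见做法,属于假设:

```yaml
# docker-compose.override.yml:将本节点作为完整 VPN 成员运行(示意)
services:
  easytier:
    cap_add:
      - NET_ADMIN                     # 创建 TUN 网卡所需
    devices:
      - /dev/net/tun:/dev/net/tun     # 假设:向容器暴露宿主机 TUN 设备
    command:                          # 与基础文件相同的参数,去掉 --no-tun
      - --network-name=${EASYTIER_NETWORK_NAME:-easytier}
      - --network-secret=${EASYTIER_NETWORK_SECRET:-}
      - --ipv4=${EASYTIER_IPV4:-10.144.144.1}
      - --listeners
      - tcp://0.0.0.0:11010
      - udp://0.0.0.0:11010
      - --rpc-portal=0.0.0.0:15888
```

`command` 不会合并,因此需要完整重写去掉 `--no-tun` 的参数列表;`cap_add: [NET_ADMIN]` 与基础文件中的 `cap_drop: ALL` 可以同时生效。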
## 文档

- [EasyTier GitHub](https://github.com/EasyTier/EasyTier)
- [EasyTier 官方文档](https://www.easytier.top/guide/introduction.html)
@@ -0,0 +1,46 @@
x-defaults: &defaults
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: '3'

services:
  easytier:
    <<: *defaults
    image: ${GLOBAL_REGISTRY:-}easytier/easytier:${EASYTIER_VERSION:-v2.6.0}
    command:
      - --network-name=${EASYTIER_NETWORK_NAME:-easytier}
      - --network-secret=${EASYTIER_NETWORK_SECRET:-}
      - --ipv4=${EASYTIER_IPV4:-10.144.144.1}
      - --listeners
      - tcp://0.0.0.0:11010
      - udp://0.0.0.0:11010
      - --rpc-portal=0.0.0.0:15888
      - --no-tun
    ports:
      # Peer listener ports — must be reachable from the public internet
      - '${EASYTIER_TCP_PORT_OVERRIDE:-11010}:11010/tcp'
      - '${EASYTIER_UDP_PORT_OVERRIDE:-11010}:11010/udp'
      # Management RPC — bind to localhost only by default for security
      - '127.0.0.1:${EASYTIER_RPC_PORT_OVERRIDE:-15888}:15888'
    environment:
      - TZ=${TZ:-UTC}
    # No TUN interface in server-relay mode; no special capabilities required
    cap_drop:
      - ALL
    deploy:
      resources:
        limits:
          cpus: ${EASYTIER_CPU_LIMIT:-0.50}
          memory: ${EASYTIER_MEMORY_LIMIT:-128M}
        reservations:
          cpus: ${EASYTIER_CPU_RESERVATION:-0.10}
          memory: ${EASYTIER_MEMORY_RESERVATION:-32M}
    healthcheck:
      test: ['CMD', 'easytier-cli', '-p', '127.0.0.1:15888', 'node', 'info']
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 15s
@@ -0,0 +1,23 @@
# Global Registry Prefix (optional)
# GLOBAL_REGISTRY=

# Letta Image Version
LETTA_VERSION=0.16.7

# Timezone
TZ=UTC

# Host port for the Letta REST API server
LETTA_PORT_OVERRIDE=8283

# LLM Provider API Keys (optional; at least one is required for agent functionality)
# OPENAI_API_KEY=sk-...
# ANTHROPIC_API_KEY=sk-ant-...
# GROQ_API_KEY=gsk_...
# OLLAMA_BASE_URL=http://host.docker.internal:11434

# Resource Limits
LETTA_CPU_LIMIT=1
LETTA_MEMORY_LIMIT=1G
LETTA_CPU_RESERVATION=0.25
LETTA_MEMORY_RESERVATION=256M
@@ -0,0 +1,50 @@
# Letta

[English](./README.md) | [中文](./README.zh.md)

Quick start: <https://docs.letta.com>.

This service deploys Letta (formerly MemGPT), a framework for building stateful AI agents with long-term memory, persistent state, and tool use. Letta exposes a REST API for creating and managing agents programmatically.

## Services

- `letta`: The Letta agent server.

## Quick Start

```bash
docker compose up -d
```

The Letta REST API will be available at `http://localhost:8283`. You can interact with it via the [Letta Python SDK](https://github.com/letta-ai/letta) or the [ADE web interface](https://app.letta.com).

To connect a local LLM (Ollama), set `OLLAMA_BASE_URL` in your `.env` file before starting.

## Configuration

| Variable                   | Description                                               | Default   |
| -------------------------- | --------------------------------------------------------- | --------- |
| `LETTA_VERSION`            | Image version                                             | `0.16.7`  |
| `TZ`                       | Container timezone                                        | `UTC`     |
| `LETTA_PORT_OVERRIDE`      | Host port for the REST API                                | `8283`    |
| `OPENAI_API_KEY`           | OpenAI API key (optional)                                 | *(empty)* |
| `ANTHROPIC_API_KEY`        | Anthropic API key (optional)                              | *(empty)* |
| `GROQ_API_KEY`             | Groq API key (optional)                                   | *(empty)* |
| `OLLAMA_BASE_URL`          | Ollama base URL, e.g. `http://host.docker.internal:11434` | *(empty)* |
| `LETTA_CPU_LIMIT`          | CPU limit                                                 | `1`       |
| `LETTA_MEMORY_LIMIT`       | Memory limit                                              | `1G`      |
| `LETTA_CPU_RESERVATION`    | CPU reservation                                           | `0.25`    |
| `LETTA_MEMORY_RESERVATION` | Memory reservation                                        | `256M`    |

## Volumes

- `letta_data`: Persists agent state, memory, and configuration at `/root/.letta`.

## Ports

- **8283**: REST API

## Notes

- At least one LLM provider API key (or `OLLAMA_BASE_URL`) is required to create functioning agents.
- The health check uses the `/health` endpoint.
@@ -0,0 +1,50 @@
# Letta

[English](./README.md) | [中文](./README.zh.md)

快速开始:<https://docs.letta.com>。

此服务用于部署 Letta(前身为 MemGPT),一个用于构建具备长期记忆、持久状态和工具调用能力的有状态 AI Agent 框架。Letta 提供 REST API,支持以编程方式创建和管理 Agent。

## 服务

- `letta`:Letta Agent 服务器。

## 快速开始

```bash
docker compose up -d
```

Letta REST API 将在 `http://localhost:8283` 可用。你可以通过 [Letta Python SDK](https://github.com/letta-ai/letta) 或 [ADE Web 界面](https://app.letta.com) 与其交互。

如需连接本地 LLM(Ollama),请在启动前在 `.env` 文件中设置 `OLLAMA_BASE_URL`。

## 配置

| 变量                       | 说明                                                      | 默认值   |
| -------------------------- | --------------------------------------------------------- | -------- |
| `LETTA_VERSION`            | 镜像版本                                                  | `0.16.7` |
| `TZ`                       | 容器时区                                                  | `UTC`    |
| `LETTA_PORT_OVERRIDE`      | REST API 的宿主机端口                                     | `8283`   |
| `OPENAI_API_KEY`           | OpenAI API Key(可选)                                    | *(空)*   |
| `ANTHROPIC_API_KEY`        | Anthropic API Key(可选)                                 | *(空)*   |
| `GROQ_API_KEY`             | Groq API Key(可选)                                      | *(空)*   |
| `OLLAMA_BASE_URL`          | Ollama 基础 URL,例如 `http://host.docker.internal:11434` | *(空)*   |
| `LETTA_CPU_LIMIT`          | CPU 限制                                                  | `1`      |
| `LETTA_MEMORY_LIMIT`       | 内存限制                                                  | `1G`     |
| `LETTA_CPU_RESERVATION`    | CPU 预留                                                  | `0.25`   |
| `LETTA_MEMORY_RESERVATION` | 内存预留                                                  | `256M`   |

## 数据卷

- `letta_data`:在 `/root/.letta` 持久化 Agent 状态、记忆和配置。

## 端口

- **8283**:REST API

## 说明

- 创建可用的 Agent 至少需要一个 LLM 提供商的 API Key(或 `OLLAMA_BASE_URL`)。
- 健康检查使用 `/health` 端点。
@@ -0,0 +1,43 @@
x-defaults: &defaults
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: '3'

services:
  letta:
    <<: *defaults
    image: ${GLOBAL_REGISTRY:-}letta/letta:${LETTA_VERSION:-0.16.7}
    ports:
      - '${LETTA_PORT_OVERRIDE:-8283}:8283'
    volumes:
      - letta_data:/root/.letta
    environment:
      - TZ=${TZ:-UTC}
      - OPENAI_API_KEY=${OPENAI_API_KEY:-}
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
      - GROQ_API_KEY=${GROQ_API_KEY:-}
      - OLLAMA_BASE_URL=${OLLAMA_BASE_URL:-}
    healthcheck:
      test:
        - CMD
        - python3
        - -c
        - "import urllib.request; urllib.request.urlopen('http://localhost:8283/health')"
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 20s
    deploy:
      resources:
        limits:
          cpus: ${LETTA_CPU_LIMIT:-1}
          memory: ${LETTA_MEMORY_LIMIT:-1G}
        reservations:
          cpus: ${LETTA_CPU_RESERVATION:-0.25}
          memory: ${LETTA_MEMORY_RESERVATION:-256M}

volumes:
  letta_data:
@@ -0,0 +1,26 @@
# Global Registry Prefix (optional)
# GLOBAL_REGISTRY=

# LobeChat Image Version
LOBE_CHAT_VERSION=1.143.3

# Timezone
TZ=UTC

# Host port for the LobeChat web UI
LOBE_CHAT_PORT_OVERRIDE=3210

# Optional access code to restrict access (leave empty to allow anonymous access)
# ACCESS_CODE=your-secret-code

# LLM Provider API Keys (at least one is required for chat to work)
# OPENAI_API_KEY=sk-...
# OPENAI_PROXY_URL=https://your-proxy/v1
# ANTHROPIC_API_KEY=sk-ant-...
# GOOGLE_API_KEY=AIza...

# Resource Limits
LOBE_CHAT_CPU_LIMIT=0.5
LOBE_CHAT_MEMORY_LIMIT=512M
LOBE_CHAT_CPU_RESERVATION=0.1
LOBE_CHAT_MEMORY_RESERVATION=128M
@@ -0,0 +1,45 @@
# LobeChat

[English](./README.md) | [中文](./README.zh.md)

Quick start: <https://lobehub.com/docs>.

This service deploys LobeChat in standalone (serverless) mode — a modern, high-performance AI chat interface that supports multiple LLM providers, vision models, and plugin extensibility. No database is required; all state is stored client-side.

## Services

- `lobe-chat`: The LobeChat web application.

## Quick Start

```bash
docker compose up -d
```

Open `http://localhost:3210`. Configure your LLM API keys in the settings panel (gear icon), or set them as environment variables before starting.

## Configuration

| Variable                  | Description                                         | Default   |
| ------------------------- | --------------------------------------------------- | --------- |
| `LOBE_CHAT_VERSION`       | Image version                                       | `1.143.3` |
| `TZ`                      | Container timezone                                  | `UTC`     |
| `LOBE_CHAT_PORT_OVERRIDE` | Host port for the web UI                            | `3210`    |
| `ACCESS_CODE`             | Optional password to restrict access (empty = open) | *(empty)* |
| `OPENAI_API_KEY`          | OpenAI API key                                      | *(empty)* |
| `OPENAI_PROXY_URL`        | Custom OpenAI-compatible API base URL               | *(empty)* |
| `ANTHROPIC_API_KEY`       | Anthropic API key                                   | *(empty)* |
| `GOOGLE_API_KEY`          | Google Gemini API key                               | *(empty)* |
| `LOBE_CHAT_CPU_LIMIT`     | CPU limit                                           | `0.5`     |
| `LOBE_CHAT_MEMORY_LIMIT`  | Memory limit                                        | `512M`    |

## Ports

- **3210**: Web UI

## Notes

- This is the **standalone** (client-side) mode. No PostgreSQL, S3, or auth server is needed.
- Conversation history is stored in the browser; clearing browser data loses history.
- For multi-user deployments with persistent server-side data, see the [LobeChat database mode docs](https://lobehub.com/docs/self-hosting/server-database).
- The health check uses the `/api/health` endpoint.
@@ -0,0 +1,45 @@
# LobeChat

[English](./README.md) | [中文](./README.zh.md)

快速开始:<https://lobehub.com/docs>。

此服务以独立(无服务器)模式部署 LobeChat,这是一款现代高性能的 AI 对话界面,支持多 LLM 提供商、视觉模型和插件扩展。无需数据库,所有状态均存储在客户端。

## 服务

- `lobe-chat`:LobeChat Web 应用。

## 快速开始

```bash
docker compose up -d
```

打开 `http://localhost:3210`。在设置面板(齿轮图标)中配置 LLM API Key,或在启动前通过环境变量设置。

## 配置

| 变量                      | 说明                            | 默认值    |
| ------------------------- | ------------------------------- | --------- |
| `LOBE_CHAT_VERSION`       | 镜像版本                        | `1.143.3` |
| `TZ`                      | 容器时区                        | `UTC`     |
| `LOBE_CHAT_PORT_OVERRIDE` | Web UI 的宿主机端口             | `3210`    |
| `ACCESS_CODE`             | 可选访问密码(空则开放访问)    | *(空)*    |
| `OPENAI_API_KEY`          | OpenAI API Key                  | *(空)*    |
| `OPENAI_PROXY_URL`        | 自定义 OpenAI 兼容 API 基础 URL | *(空)*    |
| `ANTHROPIC_API_KEY`       | Anthropic API Key               | *(空)*    |
| `GOOGLE_API_KEY`          | Google Gemini API Key           | *(空)*    |
| `LOBE_CHAT_CPU_LIMIT`     | CPU 限制                        | `0.5`     |
| `LOBE_CHAT_MEMORY_LIMIT`  | 内存限制                        | `512M`    |

## 端口

- **3210**:Web UI

## 说明

- 此为**独立**(客户端)模式,无需 PostgreSQL、S3 或认证服务器。
- 对话历史存储在浏览器中,清除浏览器数据将丢失历史记录。
- 如需多用户部署及服务端持久化数据,请参阅 [LobeChat 数据库模式文档](https://lobehub.com/docs/self-hosting/server-database)。
- 健康检查使用 `/api/health` 端点。
@@ -0,0 +1,39 @@
x-defaults: &defaults
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: '3'

services:
  lobe-chat:
    <<: *defaults
    image: ${GLOBAL_REGISTRY:-}lobehub/lobe-chat:${LOBE_CHAT_VERSION:-1.143.3}
    ports:
      - '${LOBE_CHAT_PORT_OVERRIDE:-3210}:3210'
    environment:
      - TZ=${TZ:-UTC}
      - ACCESS_CODE=${ACCESS_CODE:-}
      - OPENAI_API_KEY=${OPENAI_API_KEY:-}
      - OPENAI_PROXY_URL=${OPENAI_PROXY_URL:-}
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
      - GOOGLE_API_KEY=${GOOGLE_API_KEY:-}
    healthcheck:
      test:
        - CMD
        - node
        - -e
        - "require('http').get('http://localhost:3210/api/health',res=>process.exit(res.statusCode===200?0:1)).on('error',()=>process.exit(1))"
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 15s
    deploy:
      resources:
        limits:
          cpus: ${LOBE_CHAT_CPU_LIMIT:-0.5}
          memory: ${LOBE_CHAT_MEMORY_LIMIT:-512M}
        reservations:
          cpus: ${LOBE_CHAT_CPU_RESERVATION:-0.1}
          memory: ${LOBE_CHAT_MEMORY_RESERVATION:-128M}