feat: add OpenViking, DeerFlow, Mattermost, OpenFang, and Paperclip services

Sun-ZhenXing
2026-03-28 23:40:06 +08:00
parent fbd0c9b7f4
commit 441b8a74f5
23 changed files with 1356 additions and 4 deletions
+45
@@ -0,0 +1,45 @@
# Source build configuration
DEER_FLOW_VERSION=main
NGINX_VERSION=1.28-alpine
# Network configuration
DEER_FLOW_PORT_OVERRIDE=2026
DEER_FLOW_CORS_ORIGINS=http://localhost:2026
DEER_FLOW_BETTER_AUTH_SECRET=deer-flow-dev-secret-change-me
# Model configuration
DEER_FLOW_MODEL_NAME=openai-default
DEER_FLOW_MODEL_DISPLAY_NAME=OpenAI
DEER_FLOW_MODEL_ID=gpt-4.1-mini
OPENAI_API_KEY=
# Resources - Gateway
DEER_FLOW_GATEWAY_CPU_LIMIT=2.00
DEER_FLOW_GATEWAY_MEMORY_LIMIT=2G
DEER_FLOW_GATEWAY_CPU_RESERVATION=0.50
DEER_FLOW_GATEWAY_MEMORY_RESERVATION=512M
# Resources - LangGraph
DEER_FLOW_LANGGRAPH_CPU_LIMIT=2.00
DEER_FLOW_LANGGRAPH_MEMORY_LIMIT=2G
DEER_FLOW_LANGGRAPH_CPU_RESERVATION=0.50
DEER_FLOW_LANGGRAPH_MEMORY_RESERVATION=512M
# Resources - Frontend
DEER_FLOW_FRONTEND_CPU_LIMIT=1.00
DEER_FLOW_FRONTEND_MEMORY_LIMIT=1G
DEER_FLOW_FRONTEND_CPU_RESERVATION=0.25
DEER_FLOW_FRONTEND_MEMORY_RESERVATION=256M
# Resources - Nginx
DEER_FLOW_NGINX_CPU_LIMIT=0.50
DEER_FLOW_NGINX_MEMORY_LIMIT=256M
DEER_FLOW_NGINX_CPU_RESERVATION=0.10
DEER_FLOW_NGINX_MEMORY_RESERVATION=64M
# Logging
DEER_FLOW_LOG_MAX_SIZE=100m
DEER_FLOW_LOG_MAX_FILE=3
# Timezone
TZ=UTC
+60
@@ -0,0 +1,60 @@
# DeerFlow
[中文文档](README.zh.md)
DeerFlow is a full-stack AI agent application from ByteDance. This Compose setup builds the frontend and backend from source, starts Gateway, LangGraph, and Nginx, and exposes the unified entrypoint on port 2026.
## Quick Start
1. Copy the example environment file:
```bash
cp .env.example .env
```
2. Edit `.env` and set `OPENAI_API_KEY`.
3. Start the stack:
```bash
docker compose up -d
```
4. Open DeerFlow:
- <http://localhost:2026>
## Default Ports
| Service | Port | Description |
| ----------- | ---- | ---------------------- |
| Nginx | 2026 | Unified web entrypoint |
| Gateway API | 8001 | Internal only |
| LangGraph | 2024 | Internal only |
| Frontend | 3000 | Internal only |
## Important Environment Variables
| Variable | Description | Default |
| ------------------------------ | ------------------------------------------------------ | -------------------------------- |
| `DEER_FLOW_VERSION` | Git ref used for source builds | `main` |
| `DEER_FLOW_PORT_OVERRIDE` | Host port for the unified entrypoint | `2026` |
| `OPENAI_API_KEY` | OpenAI API key referenced from generated `config.yaml` | - |
| `DEER_FLOW_MODEL_NAME` | Internal model identifier | `openai-default` |
| `DEER_FLOW_MODEL_DISPLAY_NAME` | Display name shown in the app | `OpenAI` |
| `DEER_FLOW_MODEL_ID` | OpenAI model id | `gpt-4.1-mini` |
| `DEER_FLOW_CORS_ORIGINS` | Allowed CORS origins for the gateway | `http://localhost:2026` |
| `DEER_FLOW_BETTER_AUTH_SECRET` | Frontend auth secret | `deer-flow-dev-secret-change-me` |
| `TZ` | Container timezone | `UTC` |
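Compose resolves each of these with shell-style `${VAR:-default}` fallbacks in `docker-compose.yaml`; a minimal sketch of how that resolution behaves:

```shell
# Shell-style fallback resolution, the same pattern docker-compose.yaml
# uses for every variable in the table above.
unset DEER_FLOW_PORT_OVERRIDE
echo "port=${DEER_FLOW_PORT_OVERRIDE:-2026}"   # prints port=2026 (default)

DEER_FLOW_PORT_OVERRIDE=8080                   # as if set in .env
echo "port=${DEER_FLOW_PORT_OVERRIDE:-2026}"   # prints port=8080
```

Because the fallback uses `:-` rather than `-`, a variable that is set but empty also falls back to the default.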
## Notes
- This setup generates a minimal `config.yaml` and `extensions_config.json` inside the backend containers, so no extra config files are required.
- The default sandbox mode is local to avoid requiring Docker socket mounts or Kubernetes provisioner setup.
- DeerFlow upstream usually expects local image builds, so the first build can take several minutes.
- Only an OpenAI-compatible model is wired by default here. If you want Anthropic, Gemini, or a more advanced config, update the generated template logic in `docker-compose.yaml`.
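The generated config comes from a heredoc in `docker-compose.yaml`; a trimmed standalone sketch of that step (writing to the current directory instead of `/tmp` inside the container):

```shell
# Trimmed reproduction of the config.yaml heredoc from docker-compose.yaml;
# ${VAR:-default} picks up .env values, while \$OPENAI_API_KEY stays literal
# so DeerFlow resolves the key from the environment at runtime.
cat > config.yaml <<EOF
config_version: 1
models:
  - name: ${DEER_FLOW_MODEL_NAME:-openai-default}
    model: ${DEER_FLOW_MODEL_ID:-gpt-4.1-mini}
    api_key: \$OPENAI_API_KEY
EOF
```

In the real compose file the interpolation is done by Compose before the shell runs; the resulting file is the same either way.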
## References
- [DeerFlow Repository](https://github.com/bytedance/deer-flow)
- [Project README](https://github.com/bytedance/deer-flow/blob/main/README.md)
+60
@@ -0,0 +1,60 @@
# DeerFlow
[English](README.md)
DeerFlow is a full-stack AI agent application open-sourced by ByteDance. This Compose setup builds the frontend and backend images from source, starts Gateway, LangGraph, and Nginx, and exposes a unified entrypoint on port 2026.
## Quick Start
1. Copy the example environment file:
```bash
cp .env.example .env
```
2. Edit `.env` and set at least `OPENAI_API_KEY`.
3. Start the whole stack:
```bash
docker compose up -d
```
4. Open DeerFlow:
- <http://localhost:2026>
## Default Ports
| Service     | Port | Description            |
| ----------- | ---- | ---------------------- |
| Nginx       | 2026 | Unified web entrypoint |
| Gateway API | 8001 | Internal only          |
| LangGraph   | 2024 | Internal only          |
| Frontend    | 3000 | Internal only          |
## Important Environment Variables
| Variable                       | Description                                              | Default                          |
| ------------------------------ | -------------------------------------------------------- | -------------------------------- |
| `DEER_FLOW_VERSION`            | Git ref used for source builds                           | `main`                           |
| `DEER_FLOW_PORT_OVERRIDE`      | Host port for the unified entrypoint                     | `2026`                           |
| `OPENAI_API_KEY`               | OpenAI API key referenced by the generated `config.yaml` | -                                |
| `DEER_FLOW_MODEL_NAME`         | Internal model identifier                                | `openai-default`                 |
| `DEER_FLOW_MODEL_DISPLAY_NAME` | Display name shown in the app                            | `OpenAI`                         |
| `DEER_FLOW_MODEL_ID`           | OpenAI model ID                                          | `gpt-4.1-mini`                   |
| `DEER_FLOW_CORS_ORIGINS`       | Allowed CORS origins for the gateway                     | `http://localhost:2026`          |
| `DEER_FLOW_BETTER_AUTH_SECRET` | Frontend auth secret                                     | `deer-flow-dev-secret-change-me` |
| `TZ`                           | Container timezone                                       | `UTC`                            |
## Notes
- This setup generates a minimal working `config.yaml` and `extensions_config.json` inside the backend containers, so no extra hand-written config files are needed.
- The local sandbox mode is used by default, so no Docker socket mount or Kubernetes provisioner is required.
- DeerFlow upstream generally expects locally built images, so the first build can take several minutes.
- Only an OpenAI-compatible model is wired up by default. To switch to Anthropic, Gemini, or a more advanced configuration, adjust the config-file template logic in `docker-compose.yaml`.
## References
- [DeerFlow Repository](https://github.com/bytedance/deer-flow)
- [Project README](https://github.com/bytedance/deer-flow/blob/main/README_zh.md)
+171
@@ -0,0 +1,171 @@
x-defaults: &defaults
restart: unless-stopped
logging:
driver: json-file
options:
max-size: ${DEER_FLOW_LOG_MAX_SIZE:-100m}
max-file: '${DEER_FLOW_LOG_MAX_FILE:-3}'
services:
deerflow-gateway:
<<: *defaults
build:
context: https://github.com/bytedance/deer-flow.git#${DEER_FLOW_VERSION:-main}
dockerfile: backend/Dockerfile
image: ${GLOBAL_REGISTRY:-}alexsuntop/deer-flow-backend:${DEER_FLOW_VERSION:-main}
environment:
- TZ=${TZ:-UTC}
- OPENAI_API_KEY=${OPENAI_API_KEY:-}
env_file:
- .env
entrypoint:
- /bin/sh
- -ec
command: |
cat >/tmp/config.yaml <<EOF
config_version: 1
models:
- name: ${DEER_FLOW_MODEL_NAME:-openai-default}
display_name: ${DEER_FLOW_MODEL_DISPLAY_NAME:-OpenAI}
use: langchain_openai:ChatOpenAI
model: ${DEER_FLOW_MODEL_ID:-gpt-4.1-mini}
api_key: \$OPENAI_API_KEY
sandbox:
use: deerflow.sandbox.local:LocalSandboxProvider
EOF
cat >/tmp/extensions_config.json <<EOF
{"mcpServers":{},"skills":{}}
EOF
export DEER_FLOW_CONFIG_PATH=/tmp/config.yaml
export DEER_FLOW_EXTENSIONS_CONFIG_PATH=/tmp/extensions_config.json
export GATEWAY_HOST=0.0.0.0
export GATEWAY_PORT=8001
export CORS_ORIGINS=${DEER_FLOW_CORS_ORIGINS:-http://localhost:2026}
exec sh -c 'cd backend && PYTHONPATH=. uv run uvicorn app.gateway.app:app --host 0.0.0.0 --port 8001'
healthcheck:
test:
- CMD-SHELL
- python3 -c "import urllib.request; urllib.request.urlopen('http://127.0.0.1:8001/docs', timeout=5)"
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
deploy:
resources:
limits:
cpus: ${DEER_FLOW_GATEWAY_CPU_LIMIT:-2.00}
memory: ${DEER_FLOW_GATEWAY_MEMORY_LIMIT:-2G}
reservations:
cpus: ${DEER_FLOW_GATEWAY_CPU_RESERVATION:-0.50}
memory: ${DEER_FLOW_GATEWAY_MEMORY_RESERVATION:-512M}
deerflow-langgraph:
<<: *defaults
build:
context: https://github.com/bytedance/deer-flow.git#${DEER_FLOW_VERSION:-main}
dockerfile: backend/Dockerfile
image: ${GLOBAL_REGISTRY:-}alexsuntop/deer-flow-backend:${DEER_FLOW_VERSION:-main}
environment:
- TZ=${TZ:-UTC}
- OPENAI_API_KEY=${OPENAI_API_KEY:-}
env_file:
- .env
entrypoint:
- /bin/sh
- -ec
command: |
cat >/tmp/config.yaml <<EOF
config_version: 1
models:
- name: ${DEER_FLOW_MODEL_NAME:-openai-default}
display_name: ${DEER_FLOW_MODEL_DISPLAY_NAME:-OpenAI}
use: langchain_openai:ChatOpenAI
model: ${DEER_FLOW_MODEL_ID:-gpt-4.1-mini}
api_key: \$OPENAI_API_KEY
sandbox:
use: deerflow.sandbox.local:LocalSandboxProvider
EOF
cat >/tmp/extensions_config.json <<EOF
{"mcpServers":{},"skills":{}}
EOF
export DEER_FLOW_CONFIG_PATH=/tmp/config.yaml
export DEER_FLOW_EXTENSIONS_CONFIG_PATH=/tmp/extensions_config.json
exec sh -c 'cd backend && NO_COLOR=1 uv run langgraph dev --no-browser --allow-blocking --no-reload'
healthcheck:
test:
- CMD-SHELL
- python3 -c "import socket; s=socket.create_connection(('127.0.0.1', 2024), 5); s.close()"
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
deploy:
resources:
limits:
cpus: ${DEER_FLOW_LANGGRAPH_CPU_LIMIT:-2.00}
memory: ${DEER_FLOW_LANGGRAPH_MEMORY_LIMIT:-2G}
reservations:
cpus: ${DEER_FLOW_LANGGRAPH_CPU_RESERVATION:-0.50}
memory: ${DEER_FLOW_LANGGRAPH_MEMORY_RESERVATION:-512M}
deerflow-frontend:
<<: *defaults
build:
context: https://github.com/bytedance/deer-flow.git#${DEER_FLOW_VERSION:-main}
dockerfile: frontend/Dockerfile
target: prod
image: ${GLOBAL_REGISTRY:-}alexsuntop/deer-flow-frontend:${DEER_FLOW_VERSION:-main}
environment:
- TZ=${TZ:-UTC}
- BETTER_AUTH_SECRET=${DEER_FLOW_BETTER_AUTH_SECRET:-deer-flow-dev-secret-change-me}
- NEXT_PUBLIC_BACKEND_BASE_URL=
- NEXT_PUBLIC_LANGGRAPH_BASE_URL=/api/langgraph
env_file:
- .env
healthcheck:
test:
- CMD-SHELL
- node -e "fetch('http://127.0.0.1:3000').then((r)=>process.exit(r.ok?0:1)).catch(()=>process.exit(1))"
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
deploy:
resources:
limits:
cpus: ${DEER_FLOW_FRONTEND_CPU_LIMIT:-1.00}
memory: ${DEER_FLOW_FRONTEND_MEMORY_LIMIT:-1G}
reservations:
cpus: ${DEER_FLOW_FRONTEND_CPU_RESERVATION:-0.25}
memory: ${DEER_FLOW_FRONTEND_MEMORY_RESERVATION:-256M}
deerflow-nginx:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}nginx:${NGINX_VERSION:-1.28-alpine}
depends_on:
deerflow-gateway:
condition: service_healthy
deerflow-langgraph:
condition: service_healthy
deerflow-frontend:
condition: service_healthy
ports:
- '${DEER_FLOW_PORT_OVERRIDE:-2026}:2026'
volumes:
- ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
healthcheck:
test:
- CMD-SHELL
- wget --no-verbose --tries=1 --spider http://127.0.0.1:2026 >/dev/null || exit 1
interval: 30s
timeout: 10s
retries: 5
start_period: 10s
deploy:
resources:
limits:
cpus: ${DEER_FLOW_NGINX_CPU_LIMIT:-0.50}
memory: ${DEER_FLOW_NGINX_MEMORY_LIMIT:-256M}
reservations:
cpus: ${DEER_FLOW_NGINX_CPU_RESERVATION:-0.10}
memory: ${DEER_FLOW_NGINX_MEMORY_RESERVATION:-64M}
+40
@@ -0,0 +1,40 @@
server {
listen 2026;
server_name _;
client_max_body_size 50m;
location /api/langgraph/ {
proxy_pass http://deerflow-langgraph:2024/;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_read_timeout 3600s;
proxy_send_timeout 3600s;
}
location /api/ {
proxy_pass http://deerflow-gateway:8001/api/;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 3600s;
proxy_send_timeout 3600s;
}
location / {
proxy_pass http://deerflow-frontend:3000;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_read_timeout 3600s;
proxy_send_timeout 3600s;
}
}
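Note the trailing slash on the LangGraph upstream: with `proxy_pass http://deerflow-langgraph:2024/;`, nginx replaces the matched `/api/langgraph/` prefix with `/` before forwarding, while the `/api/` block (whose `proxy_pass` URI matches its location) forwards paths unchanged. A shell sketch of the rewrite the LangGraph block performs:

```shell
# Illustrates nginx's prefix replacement for the /api/langgraph/ location
# (nginx does this itself via proxy_pass trailing-slash semantics; this is
# only a model of the resulting path).
public_path="/api/langgraph/threads/search"
upstream_path="/${public_path#/api/langgraph/}"
echo "$upstream_path"   # prints /threads/search
```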
+33
@@ -0,0 +1,33 @@
# Source build configuration
OPENFANG_VERSION=0.1.0
# Network configuration
OPENFANG_PORT_OVERRIDE=4200
# OpenFang runtime configuration
OPENFANG_PROVIDER=anthropic
OPENFANG_MODEL=claude-sonnet-4-20250514
OPENFANG_API_KEY_ENV=ANTHROPIC_API_KEY
OPENFANG_API_KEY=
OPENFANG_LOG_LEVEL=info
OPENFANG_MEMORY_DECAY_RATE=0.05
OPENFANG_EXEC_MODE=allowlist
OPENFANG_EXEC_TIMEOUT_SECS=30
# Provider credentials
ANTHROPIC_API_KEY=
OPENAI_API_KEY=
GROQ_API_KEY=
# Resources
OPENFANG_CPU_LIMIT=2.00
OPENFANG_MEMORY_LIMIT=2G
OPENFANG_CPU_RESERVATION=0.50
OPENFANG_MEMORY_RESERVATION=512M
# Logging
OPENFANG_LOG_MAX_SIZE=100m
OPENFANG_LOG_MAX_FILE=3
# Timezone
TZ=UTC
+71
@@ -0,0 +1,71 @@
# OpenFang
[中文文档](README.zh.md)
OpenFang is an open-source agent operating system. This Compose setup builds the upstream Docker image from the `v0.1.0` source tag and writes a minimal `config.toml` into the persistent data volume on startup.
## Quick Start
1. Copy the example environment file:
```bash
cp .env.example .env
```
2. Set at least one provider API key in `.env`:
- `ANTHROPIC_API_KEY`
- `OPENAI_API_KEY`
- `GROQ_API_KEY`
3. Start OpenFang:
```bash
docker compose up -d
```
4. Open the dashboard:
- <http://localhost:4200>
5. Verify health if needed:
```bash
curl http://localhost:4200/api/health
```
## Default Ports
| Service | Port | Description |
| -------- | ---- | ---------------------- |
| OpenFang | 4200 | Dashboard and REST API |
## Important Environment Variables
| Variable | Description | Default |
| ------------------------ | ------------------------------------------------------------------ | -------------------------- |
| `OPENFANG_VERSION` | Git tag used for the source build | `0.1.0` |
| `OPENFANG_PORT_OVERRIDE` | Host port for OpenFang | `4200` |
| `OPENFANG_PROVIDER` | Default model provider | `anthropic` |
| `OPENFANG_MODEL` | Default model name | `claude-sonnet-4-20250514` |
| `OPENFANG_API_KEY_ENV` | Environment variable name that OpenFang reads for the provider key | `ANTHROPIC_API_KEY` |
| `OPENFANG_API_KEY` | Optional Bearer token to protect the API | - |
| `ANTHROPIC_API_KEY` | Anthropic API key | - |
| `OPENAI_API_KEY` | OpenAI API key | - |
| `GROQ_API_KEY` | Groq API key | - |
| `TZ` | Container timezone | `UTC` |
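`OPENFANG_API_KEY_ENV` is an indirection: the config stores a variable *name*, and the provider key is looked up from that variable at runtime. A POSIX sh sketch of the pattern (the key value is a hypothetical placeholder, and exactly how OpenFang resolves it internally is an assumption):

```shell
# Indirect lookup: OPENFANG_API_KEY_ENV names the variable that actually
# holds the provider key.
OPENFANG_API_KEY_ENV="ANTHROPIC_API_KEY"
ANTHROPIC_API_KEY="sk-ant-placeholder"   # hypothetical value, not a real key
eval "provider_key=\${${OPENFANG_API_KEY_ENV}}"
echo "$provider_key"   # prints sk-ant-placeholder
```

Switching providers therefore means changing both `OPENFANG_PROVIDER`/`OPENFANG_MODEL` and which key variable `OPENFANG_API_KEY_ENV` points at.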
## Volumes
- `openfang_data`: Persistent configuration and runtime data under `/data`.
## Notes
- The generated config binds to `0.0.0.0:4200` for container use.
- If `OPENFANG_API_KEY` is empty, the instance enforces no API authentication of its own; the only protection is whatever you place in front of it.
- This setup uses the upstream Dockerfile, so the first build can take several minutes.
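The `config.toml` written at startup comes from a heredoc in `docker-compose.yaml`; a trimmed standalone sketch (writing to the current directory instead of `/data`):

```shell
# Trimmed reproduction of the config.toml generation from docker-compose.yaml;
# defaults mirror .env.example.
cat > config.toml <<EOF
api_listen = "0.0.0.0:4200"
log_level = "${OPENFANG_LOG_LEVEL:-info}"

[default_model]
provider = "${OPENFANG_PROVIDER:-anthropic}"
model = "${OPENFANG_MODEL:-claude-sonnet-4-20250514}"
api_key_env = "${OPENFANG_API_KEY_ENV:-ANTHROPIC_API_KEY}"
EOF
```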
## References
- [OpenFang Repository](https://github.com/RightNow-AI/openfang)
- [Getting Started Guide](https://github.com/RightNow-AI/openfang/blob/main/docs/getting-started.md)
+71
@@ -0,0 +1,71 @@
# OpenFang
[English](README.md)
OpenFang is an open-source agent operating system. This Compose setup builds the image from the upstream `v0.1.0` source tag and writes a minimal working `config.toml` into the persistent data volume at startup.
## Quick Start
1. Copy the example environment file:
```bash
cp .env.example .env
```
2. Set at least one provider API key in `.env`:
- `ANTHROPIC_API_KEY`
- `OPENAI_API_KEY`
- `GROQ_API_KEY`
3. Start OpenFang:
```bash
docker compose up -d
```
4. Open the dashboard:
- <http://localhost:4200>
5. Check health if needed:
```bash
curl http://localhost:4200/api/health
```
## Default Ports
| Service  | Port | Description            |
| -------- | ---- | ---------------------- |
| OpenFang | 4200 | Dashboard and REST API |
## Important Environment Variables
| Variable                 | Description                                                          | Default                    |
| ------------------------ | -------------------------------------------------------------------- | -------------------------- |
| `OPENFANG_VERSION`       | Git tag used for the source build                                    | `0.1.0`                    |
| `OPENFANG_PORT_OVERRIDE` | Host port for OpenFang                                               | `4200`                     |
| `OPENFANG_PROVIDER`      | Default model provider                                               | `anthropic`                |
| `OPENFANG_MODEL`         | Default model name                                                   | `claude-sonnet-4-20250514` |
| `OPENFANG_API_KEY_ENV`   | Name of the environment variable OpenFang reads for the provider key | `ANTHROPIC_API_KEY`        |
| `OPENFANG_API_KEY`       | Optional API Bearer token                                            | -                          |
| `ANTHROPIC_API_KEY`      | Anthropic API key                                                    | -                          |
| `OPENAI_API_KEY`         | OpenAI API key                                                       | -                          |
| `GROQ_API_KEY`           | Groq API key                                                         | -                          |
| `TZ`                     | Container timezone                                                   | `UTC`                      |
## Volumes
- `openfang_data`: Persists configuration and runtime data under `/data`.
## Notes
- The generated configuration listens on `0.0.0.0:4200`, suitable for running inside a container.
- If `OPENFANG_API_KEY` is empty, the instance itself enforces no API authentication; whether to expose it publicly is up to you.
- This service builds from source with the upstream Dockerfile, so the first build usually takes several minutes.
## References
- [OpenFang Repository](https://github.com/RightNow-AI/openfang)
- [Getting Started Guide](https://github.com/RightNow-AI/openfang/blob/main/docs/getting-started.md)
+71
@@ -0,0 +1,71 @@
x-defaults: &defaults
restart: unless-stopped
logging:
driver: json-file
options:
max-size: ${OPENFANG_LOG_MAX_SIZE:-100m}
max-file: '${OPENFANG_LOG_MAX_FILE:-3}'
services:
openfang:
<<: *defaults
build:
context: https://github.com/RightNow-AI/openfang.git#${OPENFANG_VERSION:-0.1.0}
dockerfile: Dockerfile
image: ${GLOBAL_REGISTRY:-}alexsuntop/openfang:${OPENFANG_VERSION:-0.1.0}
ports:
- '${OPENFANG_PORT_OVERRIDE:-4200}:4200'
environment:
- TZ=${TZ:-UTC}
- OPENFANG_HOME=/data
- ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
- OPENAI_API_KEY=${OPENAI_API_KEY:-}
- GROQ_API_KEY=${GROQ_API_KEY:-}
env_file:
- .env
entrypoint:
- /bin/sh
- -ec
command: |
: > /data/config.toml
if [ -n "${OPENFANG_API_KEY:-}" ]; then
printf 'api_key = "%s"\n' "${OPENFANG_API_KEY}" >> /data/config.toml
fi
cat >> /data/config.toml <<EOF
api_listen = "0.0.0.0:4200"
log_level = "${OPENFANG_LOG_LEVEL:-info}"
[default_model]
provider = "${OPENFANG_PROVIDER:-anthropic}"
model = "${OPENFANG_MODEL:-claude-sonnet-4-20250514}"
api_key_env = "${OPENFANG_API_KEY_ENV:-ANTHROPIC_API_KEY}"
[memory]
decay_rate = ${OPENFANG_MEMORY_DECAY_RATE:-0.05}
[exec_policy]
mode = "${OPENFANG_EXEC_MODE:-allowlist}"
timeout_secs = ${OPENFANG_EXEC_TIMEOUT_SECS:-30}
EOF
exec openfang start
volumes:
- openfang_data:/data
healthcheck:
test:
- CMD-SHELL
- python3 -c "import urllib.request; urllib.request.urlopen('http://127.0.0.1:4200/api/health', timeout=5)"
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
deploy:
resources:
limits:
cpus: ${OPENFANG_CPU_LIMIT:-2.00}
memory: ${OPENFANG_MEMORY_LIMIT:-2G}
reservations:
cpus: ${OPENFANG_CPU_RESERVATION:-0.50}
memory: ${OPENFANG_MEMORY_RESERVATION:-512M}
volumes:
openfang_data:
+31
@@ -0,0 +1,31 @@
# Source build configuration
PAPERCLIP_GIT_REF=main
# Network configuration
PAPERCLIP_PORT_OVERRIDE=3100
PAPERCLIP_PUBLIC_URL=http://localhost:3100
PAPERCLIP_ALLOWED_HOSTNAMES=localhost
# Runtime mode
PAPERCLIP_DEPLOYMENT_MODE=authenticated
PAPERCLIP_DEPLOYMENT_EXPOSURE=private
# Optional external database
DATABASE_URL=
# LLM credentials for local adapters and workflows
OPENAI_API_KEY=
ANTHROPIC_API_KEY=
# Resources
PAPERCLIP_CPU_LIMIT=2.00
PAPERCLIP_MEMORY_LIMIT=4G
PAPERCLIP_CPU_RESERVATION=0.50
PAPERCLIP_MEMORY_RESERVATION=1G
# Logging
PAPERCLIP_LOG_MAX_SIZE=100m
PAPERCLIP_LOG_MAX_FILE=3
# Timezone
TZ=UTC
+67
@@ -0,0 +1,67 @@
# Paperclip
[中文文档](README.zh.md)
Paperclip is an open-source orchestration platform for running AI-native teams. This Compose setup builds the upstream Docker image from source, persists the full Paperclip home directory, and exposes the web UI on port 3100.
## Quick Start
1. Copy the example environment file:
```bash
cp .env.example .env
```
2. Optionally edit `.env`:
- Set `PAPERCLIP_PUBLIC_URL` if you are not using `http://localhost:3100`
   - Add `OPENAI_API_KEY` and/or `ANTHROPIC_API_KEY` for local adapters
- Set `DATABASE_URL` if you want to use an external PostgreSQL instance instead of the embedded database
3. Start the service:
```bash
docker compose up -d
```
4. Open the UI:
- <http://localhost:3100>
5. Follow the Paperclip onboarding flow in the browser.
## Default Ports
| Service | Port | Description |
| --------- | ---- | -------------- |
| Paperclip | 3100 | Web UI and API |
## Important Environment Variables
| Variable | Description | Default |
| ------------------------------- | ---------------------------------------- | ----------------------- |
| `PAPERCLIP_GIT_REF` | Git ref used for the source build | `main` |
| `PAPERCLIP_PORT_OVERRIDE` | Host port for Paperclip | `3100` |
| `PAPERCLIP_PUBLIC_URL` | Public URL for auth and invite flows | `http://localhost:3100` |
| `PAPERCLIP_ALLOWED_HOSTNAMES` | Extra allowed hostnames | `localhost` |
| `PAPERCLIP_DEPLOYMENT_MODE` | Deployment mode | `authenticated` |
| `PAPERCLIP_DEPLOYMENT_EXPOSURE` | Exposure mode | `private` |
| `DATABASE_URL` | Optional external PostgreSQL URL | - |
| `OPENAI_API_KEY` | OpenAI key for bundled local adapters | - |
| `ANTHROPIC_API_KEY` | Anthropic key for bundled local adapters | - |
| `TZ` | Container timezone | `UTC` |
## Volumes
- `paperclip_data`: Stores embedded PostgreSQL data, uploaded files, secrets, and runtime state.
## Notes
- If `DATABASE_URL` is not provided, Paperclip automatically uses embedded PostgreSQL.
- The upstream Docker image includes the UI and server in one container.
- The first source build can take several minutes.
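The database choice keys off whether `DATABASE_URL` is empty. A shell sketch of that selection rule (the actual decision happens inside Paperclip; this only models the behavior described above):

```shell
# Sketch of the database selection rule: an empty DATABASE_URL means the
# embedded PostgreSQL is used; a non-empty URL selects the external one.
DATABASE_URL=""                       # as shipped in .env.example
if [ -n "$DATABASE_URL" ]; then
  db_mode="external"
else
  db_mode="embedded"
fi
echo "$db_mode"   # prints embedded
```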
## References
- [Paperclip Repository](https://github.com/paperclipai/paperclip)
- [Docker Deployment Guide](https://github.com/paperclipai/paperclip/blob/main/docs/deploy/docker.md)
+67
@@ -0,0 +1,67 @@
# Paperclip
[English](README.md)
Paperclip is an open-source platform for orchestrating AI teams. This Compose setup builds the Docker image from upstream source, persists the entire Paperclip home directory, and exposes the web UI on port 3100.
## Quick Start
1. Copy the example environment file:
```bash
cp .env.example .env
```
2. Edit `.env` as needed:
- Change `PAPERCLIP_PUBLIC_URL` if you are not accessing via `http://localhost:3100`
- Fill in `OPENAI_API_KEY` and/or `ANTHROPIC_API_KEY` to enable local adapters
- Set `DATABASE_URL` to use an external PostgreSQL instead of the embedded database
3. Start the service:
```bash
docker compose up -d
```
4. Open the UI:
- <http://localhost:3100>
5. Complete the Paperclip onboarding flow in the browser.
## Default Ports
| Service   | Port | Description    |
| --------- | ---- | -------------- |
| Paperclip | 3100 | Web UI and API |
## Important Environment Variables
| Variable                        | Description                                    | Default                 |
| ------------------------------- | ---------------------------------------------- | ----------------------- |
| `PAPERCLIP_GIT_REF`             | Git ref used for the source build              | `main`                  |
| `PAPERCLIP_PORT_OVERRIDE`       | Host port for Paperclip                        | `3100`                  |
| `PAPERCLIP_PUBLIC_URL`          | Public URL used by auth and invite flows       | `http://localhost:3100` |
| `PAPERCLIP_ALLOWED_HOSTNAMES`   | Extra allowed hostnames                        | `localhost`             |
| `PAPERCLIP_DEPLOYMENT_MODE`     | Deployment mode                                | `authenticated`         |
| `PAPERCLIP_DEPLOYMENT_EXPOSURE` | Exposure mode                                  | `private`               |
| `DATABASE_URL`                  | Optional external PostgreSQL connection string | -                       |
| `OPENAI_API_KEY`                | OpenAI key                                     | -                       |
| `ANTHROPIC_API_KEY`             | Anthropic key                                  | -                       |
| `TZ`                            | Container timezone                             | `UTC`                   |
## Volumes
- `paperclip_data`: Stores the embedded PostgreSQL data, uploaded files, secrets, and runtime state.
## Notes
- If `DATABASE_URL` is not set, Paperclip automatically uses the embedded PostgreSQL.
- The upstream Docker image bundles the frontend and server, so no additional containers are needed.
- The first source build usually takes several minutes.
## References
- [Paperclip Repository](https://github.com/paperclipai/paperclip)
- [Docker Deployment Guide](https://github.com/paperclipai/paperclip/blob/main/docs/deploy/docker.md)
+53
@@ -0,0 +1,53 @@
x-defaults: &defaults
restart: unless-stopped
logging:
driver: json-file
options:
max-size: ${PAPERCLIP_LOG_MAX_SIZE:-100m}
max-file: '${PAPERCLIP_LOG_MAX_FILE:-3}'
services:
paperclip:
<<: *defaults
build:
context: https://github.com/paperclipai/paperclip.git#${PAPERCLIP_GIT_REF:-main}
dockerfile: Dockerfile
image: ${GLOBAL_REGISTRY:-}alexsuntop/paperclip:${PAPERCLIP_GIT_REF:-main}
ports:
- '${PAPERCLIP_PORT_OVERRIDE:-3100}:3100'
environment:
- TZ=${TZ:-UTC}
- HOST=0.0.0.0
- PORT=3100
- SERVE_UI=true
- PAPERCLIP_HOME=/paperclip
- PAPERCLIP_DEPLOYMENT_MODE=${PAPERCLIP_DEPLOYMENT_MODE:-authenticated}
- PAPERCLIP_DEPLOYMENT_EXPOSURE=${PAPERCLIP_DEPLOYMENT_EXPOSURE:-private}
- PAPERCLIP_PUBLIC_URL=${PAPERCLIP_PUBLIC_URL:-http://localhost:3100}
- PAPERCLIP_ALLOWED_HOSTNAMES=${PAPERCLIP_ALLOWED_HOSTNAMES:-localhost}
- OPENAI_API_KEY=${OPENAI_API_KEY:-}
- ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
- DATABASE_URL=${DATABASE_URL:-}
env_file:
- .env
volumes:
- paperclip_data:/paperclip
healthcheck:
test:
- CMD-SHELL
- curl -fsS http://127.0.0.1:3100/api/health >/dev/null || exit 1
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
deploy:
resources:
limits:
cpus: ${PAPERCLIP_CPU_LIMIT:-2.00}
memory: ${PAPERCLIP_MEMORY_LIMIT:-4G}
reservations:
cpus: ${PAPERCLIP_CPU_RESERVATION:-0.50}
memory: ${PAPERCLIP_MEMORY_RESERVATION:-1G}
volumes:
paperclip_data: