feat: add OpenViking, DeerFlow, Mattermost, OpenFang, and Paperclip services

Sun-ZhenXing
2026-03-28 23:40:06 +08:00
parent fbd0c9b7f4
commit 441b8a74f5
23 changed files with 1356 additions and 4 deletions
@@ -0,0 +1,45 @@
# Source build configuration
DEER_FLOW_VERSION=main
NGINX_VERSION=1.28-alpine
# Network configuration
DEER_FLOW_PORT_OVERRIDE=2026
DEER_FLOW_CORS_ORIGINS=http://localhost:2026
DEER_FLOW_BETTER_AUTH_SECRET=deer-flow-dev-secret-change-me
# Model configuration
DEER_FLOW_MODEL_NAME=openai-default
DEER_FLOW_MODEL_DISPLAY_NAME=OpenAI
DEER_FLOW_MODEL_ID=gpt-4.1-mini
OPENAI_API_KEY=
# Resources - Gateway
DEER_FLOW_GATEWAY_CPU_LIMIT=2.00
DEER_FLOW_GATEWAY_MEMORY_LIMIT=2G
DEER_FLOW_GATEWAY_CPU_RESERVATION=0.50
DEER_FLOW_GATEWAY_MEMORY_RESERVATION=512M
# Resources - LangGraph
DEER_FLOW_LANGGRAPH_CPU_LIMIT=2.00
DEER_FLOW_LANGGRAPH_MEMORY_LIMIT=2G
DEER_FLOW_LANGGRAPH_CPU_RESERVATION=0.50
DEER_FLOW_LANGGRAPH_MEMORY_RESERVATION=512M
# Resources - Frontend
DEER_FLOW_FRONTEND_CPU_LIMIT=1.00
DEER_FLOW_FRONTEND_MEMORY_LIMIT=1G
DEER_FLOW_FRONTEND_CPU_RESERVATION=0.25
DEER_FLOW_FRONTEND_MEMORY_RESERVATION=256M
# Resources - Nginx
DEER_FLOW_NGINX_CPU_LIMIT=0.50
DEER_FLOW_NGINX_MEMORY_LIMIT=256M
DEER_FLOW_NGINX_CPU_RESERVATION=0.10
DEER_FLOW_NGINX_MEMORY_RESERVATION=64M
# Logging
DEER_FLOW_LOG_MAX_SIZE=100m
DEER_FLOW_LOG_MAX_FILE=3
# Timezone
TZ=UTC
@@ -0,0 +1,60 @@
# DeerFlow

[中文文档](README.zh.md)

DeerFlow is a full-stack AI agent application from ByteDance. This Compose setup builds the frontend and backend from source, starts Gateway, LangGraph, and Nginx, and exposes the unified entrypoint on port 2026.

## Quick Start

1. Copy the example environment file:

   ```bash
   cp .env.example .env
   ```

2. Edit `.env` and set `OPENAI_API_KEY`.
3. Start the stack:

   ```bash
   docker compose up -d
   ```

4. Open DeerFlow:
   - <http://localhost:2026>
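After step 3, a quick reachability check can confirm that nginx is serving the unified entrypoint. This is a hedged sketch, not part of the upstream tooling; it assumes the default port from `.env.example` (override `DEER_FLOW_PORT_OVERRIDE` if you changed it):

```shell
#!/bin/sh
# Smoke check: is the unified entrypoint answering?
BASE="http://localhost:${DEER_FLOW_PORT_OVERRIDE:-2026}"
if curl -fsS -o /dev/null --max-time 5 "$BASE"; then
  echo "OK: $BASE is up"
else
  echo "FAIL: $BASE not reachable (is 'docker compose up -d' running?)"
fi
```

On a fresh deployment, expect a minute or two of `FAIL` while the health checks warm up, since nginx only starts once the other services are healthy.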
## Default Ports

| Service     | Port | Description            |
| ----------- | ---- | ---------------------- |
| Nginx       | 2026 | Unified web entrypoint |
| Gateway API | 8001 | Internal only          |
| LangGraph   | 2024 | Internal only          |
| Frontend    | 3000 | Internal only          |

## Important Environment Variables

| Variable                       | Description                                            | Default                          |
| ------------------------------ | ------------------------------------------------------ | -------------------------------- |
| `DEER_FLOW_VERSION`            | Git ref used for source builds                         | `main`                           |
| `DEER_FLOW_PORT_OVERRIDE`      | Host port for the unified entrypoint                   | `2026`                           |
| `OPENAI_API_KEY`               | OpenAI API key referenced from generated `config.yaml` | -                                |
| `DEER_FLOW_MODEL_NAME`         | Internal model identifier                              | `openai-default`                 |
| `DEER_FLOW_MODEL_DISPLAY_NAME` | Display name shown in the app                          | `OpenAI`                         |
| `DEER_FLOW_MODEL_ID`           | OpenAI model id                                        | `gpt-4.1-mini`                   |
| `DEER_FLOW_CORS_ORIGINS`       | Allowed CORS origins for the gateway                   | `http://localhost:2026`          |
| `DEER_FLOW_BETTER_AUTH_SECRET` | Frontend auth secret                                   | `deer-flow-dev-secret-change-me` |
| `TZ`                           | Container timezone                                     | `UTC`                            |
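Each variable above is read by `docker-compose.yaml` through Compose's `${VAR:-default}` interpolation, which mirrors POSIX shell parameter expansion. A minimal shell illustration of the fallback behavior (variable name from this setup; values are examples):

```shell
#!/bin/sh
# Compose-style ${VAR:-default} fallback, shown with plain POSIX shell expansion.
unset DEER_FLOW_PORT_OVERRIDE
echo "port: ${DEER_FLOW_PORT_OVERRIDE:-2026}"   # unset -> default: prints "port: 2026"
DEER_FLOW_PORT_OVERRIDE=8080
echo "port: ${DEER_FLOW_PORT_OVERRIDE:-2026}"   # set -> override: prints "port: 8080"
# ':-' also substitutes when the variable is set but EMPTY, which is why the
# blank OPENAI_API_KEY= line in .env.example still yields the default.
DEER_FLOW_PORT_OVERRIDE=
echo "port: ${DEER_FLOW_PORT_OVERRIDE:-2026}"   # empty -> default: prints "port: 2026"
```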
## Notes

- This setup generates a minimal `config.yaml` and `extensions_config.json` inside the backend containers, so no extra config files are required.
- The default sandbox mode is local to avoid requiring Docker socket mounts or Kubernetes provisioner setup.
- DeerFlow upstream usually expects local image builds, so the first build can take several minutes.
- Only an OpenAI-compatible model is wired by default here. If you want Anthropic, Gemini, or a more advanced config, update the generated template logic in `docker-compose.yaml`.

## References

- [DeerFlow Repository](https://github.com/bytedance/deer-flow)
- [Project README](https://github.com/bytedance/deer-flow/blob/main/README.md)
@@ -0,0 +1,60 @@
# DeerFlow

[English](README.md)

DeerFlow is ByteDance's open-source full-stack AI agent application. This Compose configuration builds the frontend and backend images from source, starts Gateway, LangGraph, and Nginx, and exposes a unified entrypoint on port 2026.

## Quick Start

1. Copy the example environment file:

   ```bash
   cp .env.example .env
   ```

2. Edit `.env` and set at least `OPENAI_API_KEY`.
3. Start the whole stack:

   ```bash
   docker compose up -d
   ```

4. Open DeerFlow:
   - <http://localhost:2026>

## Default Ports

| Service     | Port | Description            |
| ----------- | ---- | ---------------------- |
| Nginx       | 2026 | Unified web entrypoint |
| Gateway API | 8001 | Internal only          |
| LangGraph   | 2024 | Internal only          |
| Frontend    | 3000 | Internal only          |

## Key Environment Variables

| Variable                       | Description                                                | Default                          |
| ------------------------------ | ---------------------------------------------------------- | -------------------------------- |
| `DEER_FLOW_VERSION`            | Git ref used for source builds                             | `main`                           |
| `DEER_FLOW_PORT_OVERRIDE`      | Host port for the unified entrypoint                       | `2026`                           |
| `OPENAI_API_KEY`               | OpenAI API key referenced from the generated `config.yaml` | -                                |
| `DEER_FLOW_MODEL_NAME`         | Internal model identifier                                  | `openai-default`                 |
| `DEER_FLOW_MODEL_DISPLAY_NAME` | Display name shown in the UI                               | `OpenAI`                         |
| `DEER_FLOW_MODEL_ID`           | OpenAI model ID                                            | `gpt-4.1-mini`                   |
| `DEER_FLOW_CORS_ORIGINS`       | Allowed CORS origins for the gateway                       | `http://localhost:2026`          |
| `DEER_FLOW_BETTER_AUTH_SECRET` | Frontend auth secret                                       | `deer-flow-dev-secret-change-me` |
| `TZ`                           | Container timezone                                         | `UTC`                            |

## Notes

- This configuration generates a minimal working `config.yaml` and `extensions_config.json` inside the backend containers, so no extra config files need to be prepared by hand.
- The local sandbox mode is used by default, so no Docker socket mount or Kubernetes provisioner is required.
- DeerFlow upstream generally expects locally built images, so the first build may take a while.
- Only an OpenAI-compatible model is wired in by default. To switch to Anthropic, Gemini, or a more complex configuration, adjust the config-file-generating template in `docker-compose.yaml`.

## References

- [DeerFlow Repository](https://github.com/bytedance/deer-flow)
- [Project README](https://github.com/bytedance/deer-flow/blob/main/README_zh.md)
@@ -0,0 +1,171 @@
x-defaults: &defaults
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: ${DEER_FLOW_LOG_MAX_SIZE:-100m}
      max-file: '${DEER_FLOW_LOG_MAX_FILE:-3}'

services:
  deerflow-gateway:
    <<: *defaults
    build:
      context: https://github.com/bytedance/deer-flow.git#${DEER_FLOW_VERSION:-main}
      dockerfile: backend/Dockerfile
    image: ${GLOBAL_REGISTRY:-}alexsuntop/deer-flow-backend:${DEER_FLOW_VERSION:-main}
    environment:
      - TZ=${TZ:-UTC}
      - OPENAI_API_KEY=${OPENAI_API_KEY:-}
    env_file:
      - .env
    entrypoint:
      - /bin/sh
      - -ec
    # $$OPENAI_API_KEY is Compose's literal-dollar escape, so the heredoc
    # expands the variable at container runtime instead of at compose parse time.
    command: |
      cat >/tmp/config.yaml <<EOF
      config_version: 1
      models:
      - name: ${DEER_FLOW_MODEL_NAME:-openai-default}
        display_name: ${DEER_FLOW_MODEL_DISPLAY_NAME:-OpenAI}
        use: langchain_openai:ChatOpenAI
        model: ${DEER_FLOW_MODEL_ID:-gpt-4.1-mini}
        api_key: $$OPENAI_API_KEY
      sandbox:
        use: deerflow.sandbox.local:LocalSandboxProvider
      EOF
      cat >/tmp/extensions_config.json <<EOF
      {"mcpServers":{},"skills":{}}
      EOF
      export DEER_FLOW_CONFIG_PATH=/tmp/config.yaml
      export DEER_FLOW_EXTENSIONS_CONFIG_PATH=/tmp/extensions_config.json
      export GATEWAY_HOST=0.0.0.0
      export GATEWAY_PORT=8001
      export CORS_ORIGINS=${DEER_FLOW_CORS_ORIGINS:-http://localhost:2026}
      exec sh -c 'cd backend && PYTHONPATH=. uv run uvicorn app.gateway.app:app --host 0.0.0.0 --port 8001'
    healthcheck:
      test:
        - CMD-SHELL
        - python3 -c "import urllib.request; urllib.request.urlopen('http://127.0.0.1:8001/docs', timeout=5)"
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 60s
    deploy:
      resources:
        limits:
          cpus: ${DEER_FLOW_GATEWAY_CPU_LIMIT:-2.00}
          memory: ${DEER_FLOW_GATEWAY_MEMORY_LIMIT:-2G}
        reservations:
          cpus: ${DEER_FLOW_GATEWAY_CPU_RESERVATION:-0.50}
          memory: ${DEER_FLOW_GATEWAY_MEMORY_RESERVATION:-512M}

  deerflow-langgraph:
    <<: *defaults
    build:
      context: https://github.com/bytedance/deer-flow.git#${DEER_FLOW_VERSION:-main}
      dockerfile: backend/Dockerfile
    image: ${GLOBAL_REGISTRY:-}alexsuntop/deer-flow-backend:${DEER_FLOW_VERSION:-main}
    environment:
      - TZ=${TZ:-UTC}
      - OPENAI_API_KEY=${OPENAI_API_KEY:-}
    env_file:
      - .env
    entrypoint:
      - /bin/sh
      - -ec
    command: |
      cat >/tmp/config.yaml <<EOF
      config_version: 1
      models:
      - name: ${DEER_FLOW_MODEL_NAME:-openai-default}
        display_name: ${DEER_FLOW_MODEL_DISPLAY_NAME:-OpenAI}
        use: langchain_openai:ChatOpenAI
        model: ${DEER_FLOW_MODEL_ID:-gpt-4.1-mini}
        api_key: $$OPENAI_API_KEY
      sandbox:
        use: deerflow.sandbox.local:LocalSandboxProvider
      EOF
      cat >/tmp/extensions_config.json <<EOF
      {"mcpServers":{},"skills":{}}
      EOF
      export DEER_FLOW_CONFIG_PATH=/tmp/config.yaml
      export DEER_FLOW_EXTENSIONS_CONFIG_PATH=/tmp/extensions_config.json
      exec sh -c 'cd backend && NO_COLOR=1 uv run langgraph dev --no-browser --allow-blocking --no-reload'
    healthcheck:
      test:
        - CMD-SHELL
        - python3 -c "import socket; s=socket.create_connection(('127.0.0.1', 2024), 5); s.close()"
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 60s
    deploy:
      resources:
        limits:
          cpus: ${DEER_FLOW_LANGGRAPH_CPU_LIMIT:-2.00}
          memory: ${DEER_FLOW_LANGGRAPH_MEMORY_LIMIT:-2G}
        reservations:
          cpus: ${DEER_FLOW_LANGGRAPH_CPU_RESERVATION:-0.50}
          memory: ${DEER_FLOW_LANGGRAPH_MEMORY_RESERVATION:-512M}

  deerflow-frontend:
    <<: *defaults
    build:
      context: https://github.com/bytedance/deer-flow.git#${DEER_FLOW_VERSION:-main}
      dockerfile: frontend/Dockerfile
      target: prod
    image: ${GLOBAL_REGISTRY:-}alexsuntop/deer-flow-frontend:${DEER_FLOW_VERSION:-main}
    environment:
      - TZ=${TZ:-UTC}
      - BETTER_AUTH_SECRET=${DEER_FLOW_BETTER_AUTH_SECRET:-deer-flow-dev-secret-change-me}
      - NEXT_PUBLIC_BACKEND_BASE_URL=
      - NEXT_PUBLIC_LANGGRAPH_BASE_URL=/api/langgraph
    env_file:
      - .env
    healthcheck:
      test:
        - CMD-SHELL
        - node -e "fetch('http://127.0.0.1:3000').then((r)=>process.exit(r.ok?0:1)).catch(()=>process.exit(1))"
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 60s
    deploy:
      resources:
        limits:
          cpus: ${DEER_FLOW_FRONTEND_CPU_LIMIT:-1.00}
          memory: ${DEER_FLOW_FRONTEND_MEMORY_LIMIT:-1G}
        reservations:
          cpus: ${DEER_FLOW_FRONTEND_CPU_RESERVATION:-0.25}
          memory: ${DEER_FLOW_FRONTEND_MEMORY_RESERVATION:-256M}

  deerflow-nginx:
    <<: *defaults
    image: ${GLOBAL_REGISTRY:-}nginx:${NGINX_VERSION:-1.28-alpine}
    depends_on:
      deerflow-gateway:
        condition: service_healthy
      deerflow-langgraph:
        condition: service_healthy
      deerflow-frontend:
        condition: service_healthy
    ports:
      - '${DEER_FLOW_PORT_OVERRIDE:-2026}:2026'
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    healthcheck:
      test:
        - CMD-SHELL
        - wget --no-verbose --tries=1 --spider http://127.0.0.1:2026 >/dev/null || exit 1
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 10s
    deploy:
      resources:
        limits:
          cpus: ${DEER_FLOW_NGINX_CPU_LIMIT:-0.50}
          memory: ${DEER_FLOW_NGINX_MEMORY_LIMIT:-256M}
        reservations:
          cpus: ${DEER_FLOW_NGINX_CPU_RESERVATION:-0.10}
          memory: ${DEER_FLOW_NGINX_MEMORY_RESERVATION:-64M}
@@ -0,0 +1,40 @@
server {
    listen 2026;
    server_name _;
    client_max_body_size 50m;

    location /api/langgraph/ {
        proxy_pass http://deerflow-langgraph:2024/;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 3600s;
        proxy_send_timeout 3600s;
    }

    location /api/ {
        proxy_pass http://deerflow-gateway:8001/api/;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 3600s;
        proxy_send_timeout 3600s;
    }

    location / {
        proxy_pass http://deerflow-frontend:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 3600s;
        proxy_send_timeout 3600s;
    }
}