Compare commits

...

2 Commits

Author SHA1 Message Date
Sun-ZhenXing 54e549724d chore: update bifrost phoenix and ollama configs 2026-03-28 23:41:32 +08:00
Sun-ZhenXing 441b8a74f5 feat: add OpenViking DeerFlow Mattermost OpenFang and Paperclip services 2026-03-28 23:40:06 +08:00
33 changed files with 1369 additions and 21 deletions
+7 -2
@@ -11,10 +11,13 @@ These services require building custom Docker images from source.
| Service | Version |
| ------------------------------------------- | ------- |
| [Debian DinD](./builds/debian-dind) | 0.1.2 |
| [DeerFlow](./builds/deer-flow) | 2.0 |
| [goose](./builds/goose) | 1.18.0 |
| [IOPaint](./builds/io-paint) | 1.6.0 |
| [K3s inside DinD](./builds/k3s-inside-dind) | 0.2.2 |
| [MinerU vLLM](./builds/mineru) | 2.7.6 |
| [OpenFang](./builds/openfang) | 0.1.0 |
| [Paperclip](./builds/paperclip) | main |
## Supported Services
@@ -29,7 +32,7 @@ These services require building custom Docker images from source.
| [Apache Pulsar](./src/pulsar) | 4.0.7 |
| [Apache RocketMQ](./src/rocketmq) | 5.3.1 |
| [Agentgateway](./src/agentgateway) | 0.11.2 |
| [Bifrost Gateway](./src/bifrost-gateway) | v1.3.63 |
| [Bifrost Gateway](./src/bifrost-gateway) | v1.4.17 |
| [Bolt.diy](./apps/bolt-diy) | latest |
| [Budibase](./src/budibase) | 3.23.0 |
| [BuildingAI](./apps/buildingai) | latest |
@@ -83,6 +86,7 @@ These services require building custom Docker images from source.
| [LMDeploy](./src/lmdeploy) | v0.11.1 |
| [Logstash](./src/logstash) | 8.16.1 |
| [MariaDB Galera Cluster](./src/mariadb-galera) | 11.7.2 |
| [Mattermost](./apps/mattermost) | 11.3 |
| [Memos](./src/memos) | 0.25.3 |
| [Milvus Standalone Embed](./src/milvus-standalone-embed) | v2.6.7 |
| [Milvus Standalone](./src/milvus-standalone) | v2.6.7 |
@@ -107,7 +111,7 @@ These services require building custom Docker images from source.
| [Odoo](./src/odoo) | 19.0 |
| [Ollama](./src/ollama) | 0.14.3 |
| [Open WebUI](./src/open-webui) | main |
| [Phoenix (Arize)](./src/phoenix) | 13.3.0 |
| [Phoenix (Arize)](./src/phoenix) | 13.19.2 |
| [Pingora Proxy Manager](./src/pingora-proxy-manager) | v1.0.3 |
| [Open WebUI Rust](./src/open-webui-rust) | latest |
| [OpenCode](./src/opencode) | 1.1.27 |
@@ -120,6 +124,7 @@ These services require building custom Docker images from source.
| [OpenObserve](./apps/openobserve) | v0.50.0 |
| [OpenSearch](./src/opensearch) | 2.19.0 |
| [OpenTelemetry Collector](./src/otel-collector) | 0.115.1 |
| [OpenViking](./src/openviking) | 0.1.0 |
| [Overleaf](./src/overleaf) | 5.2.1 |
| [PocketBase](./src/pocketbase) | 0.30.0 |
| [Podman](./src/podman) | v5.7.1 |
+7 -2
@@ -11,10 +11,13 @@ Compose Anything provides a set of high-quality Docker Compose configuration files,
| Service | Version |
| ------------------------------------------- | ------ |
| [Debian DinD](./builds/debian-dind) | 0.1.2 |
| [DeerFlow](./builds/deer-flow) | 2.0 |
| [goose](./builds/goose) | 1.18.0 |
| [IOPaint](./builds/io-paint) | 1.6.0 |
| [K3s inside DinD](./builds/k3s-inside-dind) | 0.2.2 |
| [MinerU vLLM](./builds/mineru) | 2.7.6 |
| [OpenFang](./builds/openfang) | 0.1.0 |
| [Paperclip](./builds/paperclip) | main |
## Supported Services
@@ -29,7 +32,7 @@ Compose Anything provides a set of high-quality Docker Compose configuration files,
| [Apache Pulsar](./src/pulsar) | 4.0.7 |
| [Apache RocketMQ](./src/rocketmq) | 5.3.1 |
| [Agentgateway](./src/agentgateway) | 0.11.2 |
| [Bifrost Gateway](./src/bifrost-gateway) | v1.3.63 |
| [Bifrost Gateway](./src/bifrost-gateway) | v1.4.17 |
| [Bolt.diy](./apps/bolt-diy) | latest |
| [Budibase](./src/budibase) | 3.23.0 |
| [BuildingAI](./apps/buildingai) | latest |
@@ -83,6 +86,7 @@ Compose Anything provides a set of high-quality Docker Compose configuration files,
| [LMDeploy](./src/lmdeploy) | v0.11.1 |
| [Logstash](./src/logstash) | 8.16.1 |
| [MariaDB Galera Cluster](./src/mariadb-galera) | 11.7.2 |
| [Mattermost](./apps/mattermost) | 11.3 |
| [Memos](./src/memos) | 0.25.3 |
| [Milvus Standalone Embed](./src/milvus-standalone-embed) | v2.6.7 |
| [Milvus Standalone](./src/milvus-standalone) | v2.6.7 |
@@ -107,7 +111,7 @@ Compose Anything provides a set of high-quality Docker Compose configuration files,
| [Odoo](./src/odoo) | 19.0 |
| [Ollama](./src/ollama) | 0.14.3 |
| [Open WebUI](./src/open-webui) | main |
| [Phoenix (Arize)](./src/phoenix) | 13.3.0 |
| [Phoenix (Arize)](./src/phoenix) | 13.19.2 |
| [Pingora Proxy Manager](./src/pingora-proxy-manager) | v1.0.3 |
| [Open WebUI Rust](./src/open-webui-rust) | latest |
| [OpenCode](./src/opencode) | 1.1.27 |
@@ -120,6 +124,7 @@ Compose Anything provides a set of high-quality Docker Compose configuration files,
| [OpenObserve](./apps/openobserve) | v0.50.0 |
| [OpenSearch](./src/opensearch) | 2.19.0 |
| [OpenTelemetry Collector](./src/otel-collector) | 0.115.1 |
| [OpenViking](./src/openviking) | 0.1.0 |
| [Overleaf](./src/overleaf) | 5.2.1 |
| [PocketBase](./src/pocketbase) | 0.30.0 |
| [Podman](./src/podman) | v5.7.1 |
+34
@@ -0,0 +1,34 @@
# Image versions
MATTERMOST_VERSION=11.3
POSTGRES_VERSION=17-alpine
# Network configuration
MATTERMOST_PORT_OVERRIDE=8065
MATTERMOST_SITE_URL=http://localhost:8065
# PostgreSQL configuration
POSTGRES_DB=mattermost
POSTGRES_USER=mmuser
POSTGRES_PASSWORD=mmchangeit
# Mattermost runtime configuration
MATTERMOST_ENABLE_LOCAL_MODE=false
# Resources - Mattermost
MATTERMOST_CPU_LIMIT=2.00
MATTERMOST_MEMORY_LIMIT=2G
MATTERMOST_CPU_RESERVATION=0.50
MATTERMOST_MEMORY_RESERVATION=512M
# Resources - PostgreSQL
MATTERMOST_DB_CPU_LIMIT=1.00
MATTERMOST_DB_MEMORY_LIMIT=1G
MATTERMOST_DB_CPU_RESERVATION=0.25
MATTERMOST_DB_MEMORY_RESERVATION=256M
# Logging
MATTERMOST_LOG_MAX_SIZE=100m
MATTERMOST_LOG_MAX_FILE=3
# Timezone
TZ=UTC
+68
@@ -0,0 +1,68 @@
# Mattermost
[中文文档](README.zh.md)
Mattermost is an open-source team collaboration platform that provides chat, file sharing, channels, and integrations. This Compose stack includes Mattermost plus PostgreSQL and is designed to start with a single `docker compose up -d`.
## Quick Start
1. Copy the example environment file:
```bash
cp .env.example .env
```
2. Edit `.env` if you want to change the port, site URL, or database password.
3. Start the stack:
```bash
docker compose up -d
```
4. Open Mattermost:
- <http://localhost:8065>
5. Complete the first-run wizard to create the initial system admin account.
## Default Ports
| Service | Port | Description |
| ---------- | ---- | ---------------------- |
| Mattermost | 8065 | Web UI and API |
| PostgreSQL | 5432 | Internal database only |
## Important Environment Variables
| Variable | Description | Default |
| ------------------------------ | ---------------------------------------------- | ----------------------- |
| `MATTERMOST_VERSION` | Mattermost Team Edition image tag | `11.3` |
| `MATTERMOST_PORT_OVERRIDE` | Host port for Mattermost | `8065` |
| `MATTERMOST_SITE_URL` | Public URL used by Mattermost | `http://localhost:8065` |
| `POSTGRES_DB` | PostgreSQL database name | `mattermost` |
| `POSTGRES_USER` | PostgreSQL user | `mmuser` |
| `POSTGRES_PASSWORD` | PostgreSQL password | `mmchangeit` |
| `MATTERMOST_ENABLE_LOCAL_MODE` | Enables local mode for administrative commands | `false` |
| `TZ` | Container timezone | `UTC` |
## Volumes
- `mattermost_postgres_data`: PostgreSQL data.
- `mattermost_config`: Mattermost config directory.
- `mattermost_data`: Uploaded files and application data.
- `mattermost_logs`: Application logs.
- `mattermost_plugins`: Server-side plugins.
- `mattermost_client_plugins`: Webapp plugins.
- `mattermost_bleve_indexes`: Search indexes.
## Notes
- The application depends on PostgreSQL and waits until the database is healthy before booting.
- The default setup uses Team Edition.
- If you expose Mattermost behind a reverse proxy or different hostname, update `MATTERMOST_SITE_URL`.
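As a minimal sketch of that reverse-proxy case (everything here is an assumption: `chat.example.com` is a hypothetical hostname, and nginx is just one possible proxy), note that the WebSocket upgrade headers matter because Mattermost uses WebSockets for real-time messaging:

```nginx
server {
    listen 80;
    server_name chat.example.com;  # hypothetical hostname

    location / {
        proxy_pass http://127.0.0.1:8065;
        proxy_http_version 1.1;
        # Required for Mattermost's WebSocket connections
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

With such a proxy in front, `MATTERMOST_SITE_URL` would become `http://chat.example.com`.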
## References
- [Mattermost Repository](https://github.com/mattermost/mattermost)
- [Mattermost Team Edition Image](https://hub.docker.com/r/mattermost/mattermost-team-edition)
+68
@@ -0,0 +1,68 @@
# Mattermost
[English](README.md)
Mattermost is an open-source team collaboration platform that provides chat, channels, file sharing, and integrations. This Compose configuration bundles Mattermost with PostgreSQL and aims to start with a single `docker compose up -d`.
## Quick Start
1. Copy the example environment file:
```bash
cp .env.example .env
```
2. Edit `.env` as needed, e.g. to change the port, site URL, or database password.
3. Start the stack:
```bash
docker compose up -d
```
4. Open Mattermost:
- <http://localhost:8065>
5. Follow the first-run wizard to create the initial system admin account.
## Default Ports
| Service    | Port | Description            |
| ---------- | ---- | ---------------------- |
| Mattermost | 8065 | Web UI and API         |
| PostgreSQL | 5432 | Internal database only |
## Important Environment Variables
| Variable                       | Description                               | Default                 |
| ------------------------------ | ----------------------------------------- | ----------------------- |
| `MATTERMOST_VERSION`           | Mattermost Team Edition image tag         | `11.3`                  |
| `MATTERMOST_PORT_OVERRIDE`     | Host port for Mattermost                  | `8065`                  |
| `MATTERMOST_SITE_URL`          | Public URL used by Mattermost             | `http://localhost:8065` |
| `POSTGRES_DB`                  | PostgreSQL database name                  | `mattermost`            |
| `POSTGRES_USER`                | PostgreSQL user                           | `mmuser`                |
| `POSTGRES_PASSWORD`            | PostgreSQL password                       | `mmchangeit`            |
| `MATTERMOST_ENABLE_LOCAL_MODE` | Whether to enable local admin mode        | `false`                 |
| `TZ`                           | Container timezone                        | `UTC`                   |
## Volumes
- `mattermost_postgres_data`: PostgreSQL data.
- `mattermost_config`: Mattermost config directory.
- `mattermost_data`: Uploaded files and application data.
- `mattermost_logs`: Application logs.
- `mattermost_plugins`: Server-side plugins.
- `mattermost_client_plugins`: Webapp plugins.
- `mattermost_bleve_indexes`: Search indexes.
## Notes
- Mattermost depends on PostgreSQL and only continues booting once the database is healthy.
- Team Edition is used by default.
- If you access Mattermost through a reverse proxy or a custom domain, update `MATTERMOST_SITE_URL` accordingly.
## References
- [Mattermost Repository](https://github.com/mattermost/mattermost)
- [Mattermost Team Edition Image](https://hub.docker.com/r/mattermost/mattermost-team-edition)
+84
@@ -0,0 +1,84 @@
x-defaults: &defaults
restart: unless-stopped
logging:
driver: json-file
options:
max-size: ${MATTERMOST_LOG_MAX_SIZE:-100m}
max-file: '${MATTERMOST_LOG_MAX_FILE:-3}'
services:
mattermost-postgres:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}postgres:${POSTGRES_VERSION:-17-alpine}
environment:
- TZ=${TZ:-UTC}
- POSTGRES_DB=${POSTGRES_DB:-mattermost}
- POSTGRES_USER=${POSTGRES_USER:-mmuser}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-mmchangeit}
- PGDATA=/var/lib/postgresql/data/pgdata
volumes:
- mattermost_postgres_data:/var/lib/postgresql/data
healthcheck:
test: ['CMD-SHELL', 'pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB']
interval: 15s
timeout: 5s
retries: 10
start_period: 20s
deploy:
resources:
limits:
cpus: ${MATTERMOST_DB_CPU_LIMIT:-1.00}
memory: ${MATTERMOST_DB_MEMORY_LIMIT:-1G}
reservations:
cpus: ${MATTERMOST_DB_CPU_RESERVATION:-0.25}
memory: ${MATTERMOST_DB_MEMORY_RESERVATION:-256M}
mattermost:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}mattermost/mattermost-team-edition:${MATTERMOST_VERSION:-11.3}
depends_on:
mattermost-postgres:
condition: service_healthy
ports:
- '${MATTERMOST_PORT_OVERRIDE:-8065}:8065'
environment:
- TZ=${TZ:-UTC}
- MM_SQLSETTINGS_DRIVERNAME=postgres
- MM_SQLSETTINGS_DATASOURCE=postgres://${POSTGRES_USER:-mmuser}:${POSTGRES_PASSWORD:-mmchangeit}@mattermost-postgres:5432/${POSTGRES_DB:-mattermost}?sslmode=disable&connect_timeout=10
- MM_SERVICESETTINGS_SITEURL=${MATTERMOST_SITE_URL:-http://localhost:8065}
- MM_SERVICESETTINGS_ENABLELOCALMODE=${MATTERMOST_ENABLE_LOCAL_MODE:-false}
- MM_PLUGINSETTINGS_ENABLEUPLOADS=true
- MM_BLEVESETTINGS_INDEXDIR=/mattermost/bleve-indexes
- MM_FILESETTINGS_DIRECTORY=/mattermost/data
env_file:
- .env
volumes:
- mattermost_config:/mattermost/config
- mattermost_data:/mattermost/data
- mattermost_logs:/mattermost/logs
- mattermost_plugins:/mattermost/plugins
- mattermost_client_plugins:/mattermost/client/plugins
- mattermost_bleve_indexes:/mattermost/bleve-indexes
healthcheck:
test: [CMD, /mattermost/bin/mmctl, system, status, --local]
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
deploy:
resources:
limits:
cpus: ${MATTERMOST_CPU_LIMIT:-2.00}
memory: ${MATTERMOST_MEMORY_LIMIT:-2G}
reservations:
cpus: ${MATTERMOST_CPU_RESERVATION:-0.50}
memory: ${MATTERMOST_MEMORY_RESERVATION:-512M}
volumes:
mattermost_postgres_data:
mattermost_config:
mattermost_data:
mattermost_logs:
mattermost_plugins:
mattermost_client_plugins:
mattermost_bleve_indexes:
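Nearly every value in the file above goes through Compose's `${VAR:-default}` interpolation, which has the same semantics as the shell form: the default applies only when the variable is unset or empty. A quick shell sketch:

```bash
# ${VAR:-default}: fall back only when VAR is unset or empty
unset MATTERMOST_PORT_OVERRIDE
echo "${MATTERMOST_PORT_OVERRIDE:-8065}"   # -> 8065 (fallback)

MATTERMOST_PORT_OVERRIDE=9000
echo "${MATTERMOST_PORT_OVERRIDE:-8065}"   # -> 9000 (override wins)
```

The same rule explains why an empty `POSTGRES_PASSWORD=` line in `.env` still yields the default.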
+45
@@ -0,0 +1,45 @@
# Source build configuration
DEER_FLOW_VERSION=main
NGINX_VERSION=1.28-alpine
# Network configuration
DEER_FLOW_PORT_OVERRIDE=2026
DEER_FLOW_CORS_ORIGINS=http://localhost:2026
DEER_FLOW_BETTER_AUTH_SECRET=deer-flow-dev-secret-change-me
# Model configuration
DEER_FLOW_MODEL_NAME=openai-default
DEER_FLOW_MODEL_DISPLAY_NAME=OpenAI
DEER_FLOW_MODEL_ID=gpt-4.1-mini
OPENAI_API_KEY=
# Resources - Gateway
DEER_FLOW_GATEWAY_CPU_LIMIT=2.00
DEER_FLOW_GATEWAY_MEMORY_LIMIT=2G
DEER_FLOW_GATEWAY_CPU_RESERVATION=0.50
DEER_FLOW_GATEWAY_MEMORY_RESERVATION=512M
# Resources - LangGraph
DEER_FLOW_LANGGRAPH_CPU_LIMIT=2.00
DEER_FLOW_LANGGRAPH_MEMORY_LIMIT=2G
DEER_FLOW_LANGGRAPH_CPU_RESERVATION=0.50
DEER_FLOW_LANGGRAPH_MEMORY_RESERVATION=512M
# Resources - Frontend
DEER_FLOW_FRONTEND_CPU_LIMIT=1.00
DEER_FLOW_FRONTEND_MEMORY_LIMIT=1G
DEER_FLOW_FRONTEND_CPU_RESERVATION=0.25
DEER_FLOW_FRONTEND_MEMORY_RESERVATION=256M
# Resources - Nginx
DEER_FLOW_NGINX_CPU_LIMIT=0.50
DEER_FLOW_NGINX_MEMORY_LIMIT=256M
DEER_FLOW_NGINX_CPU_RESERVATION=0.10
DEER_FLOW_NGINX_MEMORY_RESERVATION=64M
# Logging
DEER_FLOW_LOG_MAX_SIZE=100m
DEER_FLOW_LOG_MAX_FILE=3
# Timezone
TZ=UTC
+60
@@ -0,0 +1,60 @@
# DeerFlow
[中文文档](README.zh.md)
DeerFlow is a full-stack AI agent application from ByteDance. This Compose setup builds the frontend and backend from source, starts Gateway, LangGraph, and Nginx, and exposes the unified entrypoint on port 2026.
## Quick Start
1. Copy the example environment file:
```bash
cp .env.example .env
```
2. Edit `.env` and set `OPENAI_API_KEY`.
3. Start the stack:
```bash
docker compose up -d
```
4. Open DeerFlow:
- <http://localhost:2026>
## Default Ports
| Service | Port | Description |
| ----------- | ---- | ---------------------- |
| Nginx | 2026 | Unified web entrypoint |
| Gateway API | 8001 | Internal only |
| LangGraph | 2024 | Internal only |
| Frontend | 3000 | Internal only |
## Important Environment Variables
| Variable | Description | Default |
| ------------------------------ | ------------------------------------------------------ | -------------------------------- |
| `DEER_FLOW_VERSION` | Git ref used for source builds | `main` |
| `DEER_FLOW_PORT_OVERRIDE` | Host port for the unified entrypoint | `2026` |
| `OPENAI_API_KEY` | OpenAI API key referenced from generated `config.yaml` | - |
| `DEER_FLOW_MODEL_NAME` | Internal model identifier | `openai-default` |
| `DEER_FLOW_MODEL_DISPLAY_NAME` | Display name shown in the app | `OpenAI` |
| `DEER_FLOW_MODEL_ID` | OpenAI model id | `gpt-4.1-mini` |
| `DEER_FLOW_CORS_ORIGINS` | Allowed CORS origins for the gateway | `http://localhost:2026` |
| `DEER_FLOW_BETTER_AUTH_SECRET` | Frontend auth secret | `deer-flow-dev-secret-change-me` |
| `TZ` | Container timezone | `UTC` |
## Notes
- This setup generates a minimal `config.yaml` and `extensions_config.json` inside the backend containers, so no extra config files are required.
- The default sandbox mode is local to avoid requiring Docker socket mounts or Kubernetes provisioner setup.
- DeerFlow upstream usually expects local image builds, so the first build can take several minutes.
- Only an OpenAI-compatible model is wired by default here. If you want Anthropic, Gemini, or a more advanced config, update the generated template logic in `docker-compose.yaml`.
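As one concrete example of staying on the OpenAI-compatible path, swapping the model only requires `.env` changes; the model id below is illustrative, not a tested default:

```bash
# .env — point the wired model at a different OpenAI model (illustrative id)
DEER_FLOW_MODEL_NAME=openai-default
DEER_FLOW_MODEL_DISPLAY_NAME=OpenAI
DEER_FLOW_MODEL_ID=gpt-4o-mini
OPENAI_API_KEY=sk-...
```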
## References
- [DeerFlow Repository](https://github.com/bytedance/deer-flow)
- [Project README](https://github.com/bytedance/deer-flow/blob/main/README.md)
+60
@@ -0,0 +1,60 @@
# DeerFlow
[English](README.md)
DeerFlow is a full-stack AI agent application open-sourced by ByteDance. This Compose configuration builds the frontend and backend images from source, starts Gateway, LangGraph, and Nginx, and exposes a unified entrypoint on port 2026.
## Quick Start
1. Copy the example environment file:
```bash
cp .env.example .env
```
2. Edit `.env` and set at least `OPENAI_API_KEY`.
3. Start the stack:
```bash
docker compose up -d
```
4. Open DeerFlow:
- <http://localhost:2026>
## Default Ports
| Service     | Port | Description            |
| ----------- | ---- | ---------------------- |
| Nginx       | 2026 | Unified web entrypoint |
| Gateway API | 8001 | Internal only          |
| LangGraph   | 2024 | Internal only          |
| Frontend    | 3000 | Internal only          |
## Important Environment Variables
| Variable                       | Description                                                 | Default                          |
| ------------------------------ | ----------------------------------------------------------- | -------------------------------- |
| `DEER_FLOW_VERSION`            | Git ref used for source builds                              | `main`                           |
| `DEER_FLOW_PORT_OVERRIDE`      | Host port for the unified entrypoint                        | `2026`                           |
| `OPENAI_API_KEY`               | OpenAI API key referenced by the generated `config.yaml`    | -                                |
| `DEER_FLOW_MODEL_NAME`         | Internal model identifier                                   | `openai-default`                 |
| `DEER_FLOW_MODEL_DISPLAY_NAME` | Display name shown in the app                               | `OpenAI`                         |
| `DEER_FLOW_MODEL_ID`           | OpenAI model id                                             | `gpt-4.1-mini`                   |
| `DEER_FLOW_CORS_ORIGINS`       | Allowed CORS origins for the gateway                        | `http://localhost:2026`          |
| `DEER_FLOW_BETTER_AUTH_SECRET` | Frontend auth secret                                        | `deer-flow-dev-secret-change-me` |
| `TZ`                           | Container timezone                                          | `UTC`                            |
## Notes
- This configuration generates a minimal working `config.yaml` and `extensions_config.json` inside the backend containers, so no extra config files need to be prepared by hand.
- Local sandbox mode is used by default, which avoids mounting the Docker socket and does not depend on a Kubernetes provisioner.
- DeerFlow upstream usually expects locally built images, so the first build can take a while.
- Only an OpenAI-compatible model is wired by default. To switch to Anthropic, Gemini, or a more complex configuration, adjust the config-file template generated in `docker-compose.yaml`.
## References
- [DeerFlow Repository](https://github.com/bytedance/deer-flow)
- [Project README](https://github.com/bytedance/deer-flow/blob/main/README_zh.md)
+171
@@ -0,0 +1,171 @@
x-defaults: &defaults
restart: unless-stopped
logging:
driver: json-file
options:
max-size: ${DEER_FLOW_LOG_MAX_SIZE:-100m}
max-file: '${DEER_FLOW_LOG_MAX_FILE:-3}'
services:
deerflow-gateway:
<<: *defaults
build:
context: https://github.com/bytedance/deer-flow.git#${DEER_FLOW_VERSION:-main}
dockerfile: backend/Dockerfile
image: ${GLOBAL_REGISTRY:-}alexsuntop/deer-flow-backend:${DEER_FLOW_VERSION:-main}
environment:
- TZ=${TZ:-UTC}
- OPENAI_API_KEY=${OPENAI_API_KEY:-}
env_file:
- .env
entrypoint:
- /bin/sh
- -ec
command: |
cat >/tmp/config.yaml <<EOF
config_version: 1
models:
- name: ${DEER_FLOW_MODEL_NAME:-openai-default}
display_name: ${DEER_FLOW_MODEL_DISPLAY_NAME:-OpenAI}
use: langchain_openai:ChatOpenAI
model: ${DEER_FLOW_MODEL_ID:-gpt-4.1-mini}
api_key: \$OPENAI_API_KEY
sandbox:
use: deerflow.sandbox.local:LocalSandboxProvider
EOF
cat >/tmp/extensions_config.json <<EOF
{"mcpServers":{},"skills":{}}
EOF
export DEER_FLOW_CONFIG_PATH=/tmp/config.yaml
export DEER_FLOW_EXTENSIONS_CONFIG_PATH=/tmp/extensions_config.json
export GATEWAY_HOST=0.0.0.0
export GATEWAY_PORT=8001
export CORS_ORIGINS=${DEER_FLOW_CORS_ORIGINS:-http://localhost:2026}
exec sh -c 'cd backend && PYTHONPATH=. uv run uvicorn app.gateway.app:app --host 0.0.0.0 --port 8001'
healthcheck:
test:
- CMD-SHELL
- python3 -c "import urllib.request; urllib.request.urlopen('http://127.0.0.1:8001/docs', timeout=5)"
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
deploy:
resources:
limits:
cpus: ${DEER_FLOW_GATEWAY_CPU_LIMIT:-2.00}
memory: ${DEER_FLOW_GATEWAY_MEMORY_LIMIT:-2G}
reservations:
cpus: ${DEER_FLOW_GATEWAY_CPU_RESERVATION:-0.50}
memory: ${DEER_FLOW_GATEWAY_MEMORY_RESERVATION:-512M}
deerflow-langgraph:
<<: *defaults
build:
context: https://github.com/bytedance/deer-flow.git#${DEER_FLOW_VERSION:-main}
dockerfile: backend/Dockerfile
image: ${GLOBAL_REGISTRY:-}alexsuntop/deer-flow-backend:${DEER_FLOW_VERSION:-main}
environment:
- TZ=${TZ:-UTC}
- OPENAI_API_KEY=${OPENAI_API_KEY:-}
env_file:
- .env
entrypoint:
- /bin/sh
- -ec
command: |
cat >/tmp/config.yaml <<EOF
config_version: 1
models:
- name: ${DEER_FLOW_MODEL_NAME:-openai-default}
display_name: ${DEER_FLOW_MODEL_DISPLAY_NAME:-OpenAI}
use: langchain_openai:ChatOpenAI
model: ${DEER_FLOW_MODEL_ID:-gpt-4.1-mini}
api_key: \$OPENAI_API_KEY
sandbox:
use: deerflow.sandbox.local:LocalSandboxProvider
EOF
cat >/tmp/extensions_config.json <<EOF
{"mcpServers":{},"skills":{}}
EOF
export DEER_FLOW_CONFIG_PATH=/tmp/config.yaml
export DEER_FLOW_EXTENSIONS_CONFIG_PATH=/tmp/extensions_config.json
exec sh -c 'cd backend && NO_COLOR=1 uv run langgraph dev --no-browser --allow-blocking --no-reload'
healthcheck:
test:
- CMD-SHELL
- python3 -c "import socket; s=socket.create_connection(('127.0.0.1', 2024), 5); s.close()"
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
deploy:
resources:
limits:
cpus: ${DEER_FLOW_LANGGRAPH_CPU_LIMIT:-2.00}
memory: ${DEER_FLOW_LANGGRAPH_MEMORY_LIMIT:-2G}
reservations:
cpus: ${DEER_FLOW_LANGGRAPH_CPU_RESERVATION:-0.50}
memory: ${DEER_FLOW_LANGGRAPH_MEMORY_RESERVATION:-512M}
deerflow-frontend:
<<: *defaults
build:
context: https://github.com/bytedance/deer-flow.git#${DEER_FLOW_VERSION:-main}
dockerfile: frontend/Dockerfile
target: prod
image: ${GLOBAL_REGISTRY:-}alexsuntop/deer-flow-frontend:${DEER_FLOW_VERSION:-main}
environment:
- TZ=${TZ:-UTC}
- BETTER_AUTH_SECRET=${DEER_FLOW_BETTER_AUTH_SECRET:-deer-flow-dev-secret-change-me}
- NEXT_PUBLIC_BACKEND_BASE_URL=
- NEXT_PUBLIC_LANGGRAPH_BASE_URL=/api/langgraph
env_file:
- .env
healthcheck:
test:
- CMD-SHELL
- node -e "fetch('http://127.0.0.1:3000').then((r)=>process.exit(r.ok?0:1)).catch(()=>process.exit(1))"
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
deploy:
resources:
limits:
cpus: ${DEER_FLOW_FRONTEND_CPU_LIMIT:-1.00}
memory: ${DEER_FLOW_FRONTEND_MEMORY_LIMIT:-1G}
reservations:
cpus: ${DEER_FLOW_FRONTEND_CPU_RESERVATION:-0.25}
memory: ${DEER_FLOW_FRONTEND_MEMORY_RESERVATION:-256M}
deerflow-nginx:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}nginx:${NGINX_VERSION:-1.28-alpine}
depends_on:
deerflow-gateway:
condition: service_healthy
deerflow-langgraph:
condition: service_healthy
deerflow-frontend:
condition: service_healthy
ports:
- '${DEER_FLOW_PORT_OVERRIDE:-2026}:2026'
volumes:
- ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
healthcheck:
test:
- CMD-SHELL
- wget --no-verbose --tries=1 --spider http://127.0.0.1:2026 >/dev/null || exit 1
interval: 30s
timeout: 10s
retries: 5
start_period: 10s
deploy:
resources:
limits:
cpus: ${DEER_FLOW_NGINX_CPU_LIMIT:-0.50}
memory: ${DEER_FLOW_NGINX_MEMORY_LIMIT:-256M}
reservations:
cpus: ${DEER_FLOW_NGINX_CPU_RESERVATION:-0.10}
memory: ${DEER_FLOW_NGINX_MEMORY_RESERVATION:-64M}
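Independent of Compose's own `${...}` interpolation layer, the heredocs in the `command` blocks follow standard shell quoting rules: an unescaped expansion is resolved when the file is written, while an escaped `\$` leaves a literal `$NAME` in the file for whatever reads it later. A self-contained sketch:

```bash
MODEL_ID=gpt-4.1-mini
cat > /tmp/demo-config.yaml <<EOF
model: ${MODEL_ID}
api_key: \$OPENAI_API_KEY
EOF
cat /tmp/demo-config.yaml
# model: gpt-4.1-mini
# api_key: $OPENAI_API_KEY   (written literally, resolved later by the consumer)
```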
+40
@@ -0,0 +1,40 @@
server {
listen 2026;
server_name _;
client_max_body_size 50m;
location /api/langgraph/ {
proxy_pass http://deerflow-langgraph:2024/;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_read_timeout 3600s;
proxy_send_timeout 3600s;
}
location /api/ {
proxy_pass http://deerflow-gateway:8001/api/;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 3600s;
proxy_send_timeout 3600s;
}
location / {
proxy_pass http://deerflow-frontend:3000;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_read_timeout 3600s;
proxy_send_timeout 3600s;
}
}
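One subtlety in the config above: because `proxy_pass http://deerflow-langgraph:2024/;` ends with a URI part (the trailing slash), nginx replaces the matched `/api/langgraph/` prefix before forwarding, so `/api/langgraph/threads` reaches LangGraph as `/threads`. Without the trailing slash, the original path would be forwarded unchanged:

```nginx
# With a URI part, the matched prefix is rewritten:
location /api/langgraph/ {
    proxy_pass http://deerflow-langgraph:2024/;  # /api/langgraph/threads -> /threads
}

# Without one, the path passes through as-is:
location /api/langgraph/ {
    proxy_pass http://deerflow-langgraph:2024;   # /api/langgraph/threads -> /api/langgraph/threads
}
```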
+33
@@ -0,0 +1,33 @@
# Source build configuration
OPENFANG_VERSION=0.1.0
# Network configuration
OPENFANG_PORT_OVERRIDE=4200
# OpenFang runtime configuration
OPENFANG_PROVIDER=anthropic
OPENFANG_MODEL=claude-sonnet-4-20250514
OPENFANG_API_KEY_ENV=ANTHROPIC_API_KEY
OPENFANG_API_KEY=
OPENFANG_LOG_LEVEL=info
OPENFANG_MEMORY_DECAY_RATE=0.05
OPENFANG_EXEC_MODE=allowlist
OPENFANG_EXEC_TIMEOUT_SECS=30
# Provider credentials
ANTHROPIC_API_KEY=
OPENAI_API_KEY=
GROQ_API_KEY=
# Resources
OPENFANG_CPU_LIMIT=2.00
OPENFANG_MEMORY_LIMIT=2G
OPENFANG_CPU_RESERVATION=0.50
OPENFANG_MEMORY_RESERVATION=512M
# Logging
OPENFANG_LOG_MAX_SIZE=100m
OPENFANG_LOG_MAX_FILE=3
# Timezone
TZ=UTC
+71
@@ -0,0 +1,71 @@
# OpenFang
[中文文档](README.zh.md)
OpenFang is an open-source agent operating system. This Compose setup builds the upstream Docker image from the `v0.1.0` source tag and writes a minimal `config.toml` into the persistent data volume on startup.
## Quick Start
1. Copy the example environment file:
```bash
cp .env.example .env
```
2. Set at least one provider API key in `.env`:
- `ANTHROPIC_API_KEY`
- `OPENAI_API_KEY`
- `GROQ_API_KEY`
3. Start OpenFang:
```bash
docker compose up -d
```
4. Open the dashboard:
- <http://localhost:4200>
5. Verify health if needed:
```bash
curl http://localhost:4200/api/health
```
## Default Ports
| Service | Port | Description |
| -------- | ---- | ---------------------- |
| OpenFang | 4200 | Dashboard and REST API |
## Important Environment Variables
| Variable | Description | Default |
| ------------------------ | ------------------------------------------------------------------ | -------------------------- |
| `OPENFANG_VERSION` | Git tag used for the source build | `0.1.0` |
| `OPENFANG_PORT_OVERRIDE` | Host port for OpenFang | `4200` |
| `OPENFANG_PROVIDER` | Default model provider | `anthropic` |
| `OPENFANG_MODEL` | Default model name | `claude-sonnet-4-20250514` |
| `OPENFANG_API_KEY_ENV` | Environment variable name that OpenFang reads for the provider key | `ANTHROPIC_API_KEY` |
| `OPENFANG_API_KEY` | Optional Bearer token to protect the API | - |
| `ANTHROPIC_API_KEY` | Anthropic API key | - |
| `OPENAI_API_KEY` | OpenAI API key | - |
| `GROQ_API_KEY` | Groq API key | - |
| `TZ` | Container timezone | `UTC` |
## Volumes
- `openfang_data`: Persistent configuration and runtime data under `/data`.
## Notes
- The generated config binds to `0.0.0.0:4200` for container use.
- If `OPENFANG_API_KEY` is empty, the instance runs without API authentication; any protection then depends on what you place in front of it.
- This setup uses the upstream Dockerfile, so the first build can take several minutes.
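For example, switching the default model from Anthropic to OpenAI means moving the three provider variables together. Whether upstream accepts `openai` as a provider name is an assumption here, so check the upstream docs before relying on it:

```bash
# .env — hypothetical OpenAI setup (provider/model values unverified upstream)
OPENFANG_PROVIDER=openai
OPENFANG_MODEL=gpt-4.1-mini
OPENFANG_API_KEY_ENV=OPENAI_API_KEY
OPENAI_API_KEY=sk-...
```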
## References
- [OpenFang Repository](https://github.com/RightNow-AI/openfang)
- [Getting Started Guide](https://github.com/RightNow-AI/openfang/blob/main/docs/getting-started.md)
+71
@@ -0,0 +1,71 @@
# OpenFang
[English](README.md)
OpenFang is an open-source agent operating system. This Compose configuration builds the image from the upstream `v0.1.0` source tag and writes a minimal working `config.toml` into the persistent data volume on startup.
## Quick Start
1. Copy the example environment file:
```bash
cp .env.example .env
```
2. Set at least one provider API key in `.env`:
- `ANTHROPIC_API_KEY`
- `OPENAI_API_KEY`
- `GROQ_API_KEY`
3. Start OpenFang:
```bash
docker compose up -d
```
4. Open the dashboard:
- <http://localhost:4200>
5. Check health if needed:
```bash
curl http://localhost:4200/api/health
```
## Default Ports
| Service  | Port | Description            |
| -------- | ---- | ---------------------- |
| OpenFang | 4200 | Dashboard and REST API |
## Important Environment Variables
| Variable                 | Description                                                   | Default                    |
| ------------------------ | ------------------------------------------------------------- | -------------------------- |
| `OPENFANG_VERSION`       | Git tag used for the source build                             | `0.1.0`                    |
| `OPENFANG_PORT_OVERRIDE` | Host port for OpenFang                                        | `4200`                     |
| `OPENFANG_PROVIDER`      | Default model provider                                        | `anthropic`                |
| `OPENFANG_MODEL`         | Default model name                                            | `claude-sonnet-4-20250514` |
| `OPENFANG_API_KEY_ENV`   | Environment variable name OpenFang reads for the provider key | `ANTHROPIC_API_KEY`        |
| `OPENFANG_API_KEY`       | Optional API Bearer token                                     | -                          |
| `ANTHROPIC_API_KEY`      | Anthropic API key                                             | -                          |
| `OPENAI_API_KEY`         | OpenAI API key                                                | -                          |
| `GROQ_API_KEY`           | Groq API key                                                  | -                          |
| `TZ`                     | Container timezone                                            | `UTC`                      |
## Volumes
- `openfang_data`: Persists configuration and runtime data under `/data`.
## Notes
- The generated config listens on `0.0.0.0:4200`, which suits running inside a container.
- If `OPENFANG_API_KEY` is empty, the instance itself enables no extra API authentication; whether to expose it publicly is up to you.
- This service builds from source with the upstream Dockerfile, so the first build usually takes several minutes.
## References
- [OpenFang Repository](https://github.com/RightNow-AI/openfang)
- [Getting Started Guide](https://github.com/RightNow-AI/openfang/blob/main/docs/getting-started.md)
+71
@@ -0,0 +1,71 @@
x-defaults: &defaults
restart: unless-stopped
logging:
driver: json-file
options:
max-size: ${OPENFANG_LOG_MAX_SIZE:-100m}
max-file: '${OPENFANG_LOG_MAX_FILE:-3}'
services:
openfang:
<<: *defaults
build:
context: https://github.com/RightNow-AI/openfang.git#${OPENFANG_VERSION:-0.1.0}
dockerfile: Dockerfile
image: ${GLOBAL_REGISTRY:-}alexsuntop/openfang:${OPENFANG_VERSION:-0.1.0}
ports:
- '${OPENFANG_PORT_OVERRIDE:-4200}:4200'
environment:
- TZ=${TZ:-UTC}
- OPENFANG_HOME=/data
- ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
- OPENAI_API_KEY=${OPENAI_API_KEY:-}
- GROQ_API_KEY=${GROQ_API_KEY:-}
env_file:
- .env
entrypoint:
- /bin/sh
- -ec
command: |
: > /data/config.toml
if [ -n "${OPENFANG_API_KEY:-}" ]; then
printf 'api_key = "%s"\n' "${OPENFANG_API_KEY}" >> /data/config.toml
fi
cat >> /data/config.toml <<EOF
api_listen = "0.0.0.0:4200"
log_level = "${OPENFANG_LOG_LEVEL:-info}"
[default_model]
provider = "${OPENFANG_PROVIDER:-anthropic}"
model = "${OPENFANG_MODEL:-claude-sonnet-4-20250514}"
api_key_env = "${OPENFANG_API_KEY_ENV:-ANTHROPIC_API_KEY}"
[memory]
decay_rate = ${OPENFANG_MEMORY_DECAY_RATE:-0.05}
[exec_policy]
mode = "${OPENFANG_EXEC_MODE:-allowlist}"
timeout_secs = ${OPENFANG_EXEC_TIMEOUT_SECS:-30}
EOF
exec openfang start
volumes:
- openfang_data:/data
healthcheck:
test:
- CMD-SHELL
- python3 -c "import urllib.request; urllib.request.urlopen('http://127.0.0.1:4200/api/health', timeout=5)"
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
deploy:
resources:
limits:
cpus: ${OPENFANG_CPU_LIMIT:-2.00}
memory: ${OPENFANG_MEMORY_LIMIT:-2G}
reservations:
cpus: ${OPENFANG_CPU_RESERVATION:-0.50}
memory: ${OPENFANG_MEMORY_RESERVATION:-512M}
volumes:
openfang_data:
+31
@@ -0,0 +1,31 @@
# Source build configuration
PAPERCLIP_GIT_REF=main
# Network configuration
PAPERCLIP_PORT_OVERRIDE=3100
PAPERCLIP_PUBLIC_URL=http://localhost:3100
PAPERCLIP_ALLOWED_HOSTNAMES=localhost
# Runtime mode
PAPERCLIP_DEPLOYMENT_MODE=authenticated
PAPERCLIP_DEPLOYMENT_EXPOSURE=private
# Optional external database
DATABASE_URL=
# LLM credentials for local adapters and workflows
OPENAI_API_KEY=
ANTHROPIC_API_KEY=
# Resources
PAPERCLIP_CPU_LIMIT=2.00
PAPERCLIP_MEMORY_LIMIT=4G
PAPERCLIP_CPU_RESERVATION=0.50
PAPERCLIP_MEMORY_RESERVATION=1G
# Logging
PAPERCLIP_LOG_MAX_SIZE=100m
PAPERCLIP_LOG_MAX_FILE=3
# Timezone
TZ=UTC
+67
@@ -0,0 +1,67 @@
# Paperclip
[中文文档](README.zh.md)
Paperclip is an open-source orchestration platform for running AI-native teams. This Compose setup builds the upstream Docker image from source, persists the full Paperclip home directory, and exposes the web UI on port 3100.
## Quick Start
1. Copy the example environment file:
```bash
cp .env.example .env
```
2. Optionally edit `.env`:
- Set `PAPERCLIP_PUBLIC_URL` if you are not using `http://localhost:3100`
- Add `OPENAI_API_KEY` and/or `ANTHROPIC_API_KEY` for local adapters
- Set `DATABASE_URL` if you want to use an external PostgreSQL instance instead of the embedded database
3. Start the service:
```bash
docker compose up -d
```
4. Open the UI:
- <http://localhost:3100>
5. Follow the Paperclip onboarding flow in the browser.
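For step 2, an external database URL follows the standard PostgreSQL connection-string form; the host, user, and password below are placeholders, not defaults:

```bash
# .env — hypothetical external database (placeholders)
DATABASE_URL=postgres://paperclip:changeme@db.internal:5432/paperclip
```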
## Default Ports
| Service | Port | Description |
| --------- | ---- | -------------- |
| Paperclip | 3100 | Web UI and API |
## Important Environment Variables
| Variable | Description | Default |
| ------------------------------- | ---------------------------------------- | ----------------------- |
| `PAPERCLIP_GIT_REF` | Git ref used for the source build | `main` |
| `PAPERCLIP_PORT_OVERRIDE` | Host port for Paperclip | `3100` |
| `PAPERCLIP_PUBLIC_URL` | Public URL for auth and invite flows | `http://localhost:3100` |
| `PAPERCLIP_ALLOWED_HOSTNAMES` | Extra allowed hostnames | `localhost` |
| `PAPERCLIP_DEPLOYMENT_MODE` | Deployment mode | `authenticated` |
| `PAPERCLIP_DEPLOYMENT_EXPOSURE` | Exposure mode | `private` |
| `DATABASE_URL` | Optional external PostgreSQL URL | - |
| `OPENAI_API_KEY` | OpenAI key for bundled local adapters | - |
| `ANTHROPIC_API_KEY` | Anthropic key for bundled local adapters | - |
| `TZ` | Container timezone | `UTC` |
## Volumes
- `paperclip_data`: Stores embedded PostgreSQL data, uploaded files, secrets, and runtime state.
## Notes
- If `DATABASE_URL` is not provided, Paperclip automatically uses embedded PostgreSQL.
- The upstream Docker image includes the UI and server in one container.
- The first source build can take several minutes.
## References
- [Paperclip Repository](https://github.com/paperclipai/paperclip)
- [Docker Deployment Guide](https://github.com/paperclipai/paperclip/blob/main/docs/deploy/docker.md)
+67
@@ -0,0 +1,67 @@
# Paperclip
[English](README.md)
Paperclip is an open-source platform for orchestrating AI teams. This Compose configuration builds the Docker image from upstream source, persists the entire Paperclip home directory, and exposes the web UI on port 3100.
## Quick Start
1. Copy the example environment file:
```bash
cp .env.example .env
```
2. Edit `.env` as needed:
- If you do not access it via `http://localhost:3100`, change `PAPERCLIP_PUBLIC_URL`
- To enable local adapters, set `OPENAI_API_KEY` and/or `ANTHROPIC_API_KEY`
- To use an external PostgreSQL instead of the embedded database, set `DATABASE_URL`
3. Start the service:
```bash
docker compose up -d
```
4. Open the UI:
- <http://localhost:3100>
5. Complete the Paperclip onboarding flow in the browser.
## Default Ports
| Service   | Port | Description    |
| --------- | ---- | -------------- |
| Paperclip | 3100 | Web UI and API |
## Important Environment Variables
| Variable                        | Description                              | Default                 |
| ------------------------------- | ---------------------------------------- | ----------------------- |
| `PAPERCLIP_GIT_REF`             | Git ref used for the source build        | `main`                  |
| `PAPERCLIP_PORT_OVERRIDE`       | Host port for Paperclip                  | `3100`                  |
| `PAPERCLIP_PUBLIC_URL`          | Public URL used by auth and invite flows | `http://localhost:3100` |
| `PAPERCLIP_ALLOWED_HOSTNAMES`   | Extra allowed hostnames                  | `localhost`             |
| `PAPERCLIP_DEPLOYMENT_MODE`     | Deployment mode                          | `authenticated`         |
| `PAPERCLIP_DEPLOYMENT_EXPOSURE` | Exposure mode                            | `private`               |
| `DATABASE_URL`                  | Optional external PostgreSQL URL         | -                       |
| `OPENAI_API_KEY`                | OpenAI key                               | -                       |
| `ANTHROPIC_API_KEY`             | Anthropic key                            | -                       |
| `TZ`                            | Container timezone                       | `UTC`                   |
## Volumes
- `paperclip_data`: Stores embedded PostgreSQL data, uploaded files, secrets, and runtime state.
## Notes
- If `DATABASE_URL` is not set, Paperclip automatically uses its embedded PostgreSQL.
- The upstream Docker image already bundles the frontend and server, so no extra containers are needed.
- The first source build usually takes several minutes.
## References
- [Paperclip Repository](https://github.com/paperclipai/paperclip)
- [Docker Deployment Guide](https://github.com/paperclipai/paperclip/blob/main/docs/deploy/docker.md)
+53
@@ -0,0 +1,53 @@
x-defaults: &defaults
restart: unless-stopped
logging:
driver: json-file
options:
max-size: ${PAPERCLIP_LOG_MAX_SIZE:-100m}
max-file: '${PAPERCLIP_LOG_MAX_FILE:-3}'
services:
paperclip:
<<: *defaults
build:
context: https://github.com/paperclipai/paperclip.git#${PAPERCLIP_GIT_REF:-main}
dockerfile: Dockerfile
image: ${GLOBAL_REGISTRY:-}alexsuntop/paperclip:${PAPERCLIP_GIT_REF:-main}
ports:
- '${PAPERCLIP_PORT_OVERRIDE:-3100}:3100'
environment:
- TZ=${TZ:-UTC}
- HOST=0.0.0.0
- PORT=3100
- SERVE_UI=true
- PAPERCLIP_HOME=/paperclip
- PAPERCLIP_DEPLOYMENT_MODE=${PAPERCLIP_DEPLOYMENT_MODE:-authenticated}
- PAPERCLIP_DEPLOYMENT_EXPOSURE=${PAPERCLIP_DEPLOYMENT_EXPOSURE:-private}
- PAPERCLIP_PUBLIC_URL=${PAPERCLIP_PUBLIC_URL:-http://localhost:3100}
- PAPERCLIP_ALLOWED_HOSTNAMES=${PAPERCLIP_ALLOWED_HOSTNAMES:-localhost}
- OPENAI_API_KEY=${OPENAI_API_KEY:-}
- ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
- DATABASE_URL=${DATABASE_URL:-}
env_file:
- .env
volumes:
- paperclip_data:/paperclip
healthcheck:
test:
- CMD-SHELL
- curl -fsS http://127.0.0.1:3100/api/health >/dev/null || exit 1
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
deploy:
resources:
limits:
cpus: ${PAPERCLIP_CPU_LIMIT:-2.00}
memory: ${PAPERCLIP_MEMORY_LIMIT:-4G}
reservations:
cpus: ${PAPERCLIP_CPU_RESERVATION:-0.50}
memory: ${PAPERCLIP_MEMORY_RESERVATION:-1G}
volumes:
paperclip_data:
+1 -1
@@ -1,5 +1,5 @@
# Bifrost Gateway Version
BIFROST_VERSION=v1.3.63
BIFROST_VERSION=v1.4.17
# Port to bind to on the host machine
BIFROST_PORT=28080
+1 -1
@@ -12,7 +12,7 @@ Bifrost is a lightweight, high-performance LLM gateway that supports multiple mo
## Configuration
- `BIFROST_VERSION`: The version of the Bifrost image, default is `v1.3.63`.
- `BIFROST_VERSION`: The version of the Bifrost image, default is `v1.4.17`.
- `BIFROST_PORT`: The port for the Bifrost service, default is `28080`.
### Telemetry
+1 -1
@@ -12,7 +12,7 @@ Bifrost is a lightweight, high-performance LLM gateway that supports multiple models and pro
## Configuration
- `BIFROST_VERSION`: The version of the Bifrost image, default is `v1.3.63`.
- `BIFROST_VERSION`: The version of the Bifrost image, default is `v1.4.17`.
- `BIFROST_PORT`: The port for the Bifrost service, default is `28080`.
### Telemetry
+1 -1
@@ -9,7 +9,7 @@ x-defaults: &defaults
services:
bifrost:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}maximhq/bifrost:${BIFROST_VERSION:-v1.3.63}
image: ${GLOBAL_REGISTRY:-}maximhq/bifrost:${BIFROST_VERSION:-v1.4.17}
volumes:
- bifrost_data:/app/data
ports:
+3 -3
@@ -15,7 +15,7 @@ healthCheckTimeout: 300
# Macro definitions: reusable command snippets for model configuration.
# Reference with $${macro-name} inside cmd fields.
macros:
"llama-server": >
llama-server: >
/app/llama-server
--port ${PORT}
@@ -25,14 +25,14 @@ models:
# The volume `llama_swap_models` is mounted to /root/.cache/llama.cpp inside
# the container. Place your .gguf files there and reference them with
# /root/.cache/llama.cpp/<filename>.gguf
"my-local-model":
my-local-model:
# ${PORT} is automatically assigned by llama-swap
cmd: >
$${llama-server}
--model /root/.cache/llama.cpp/model.gguf
--ctx-size 4096
--n-gpu-layers 0
proxy: "http://localhost:${PORT}"
proxy: 'http://localhost:${PORT}'
# Automatically unload the model after 15 minutes of inactivity
ttl: 900
+2 -6
@@ -20,12 +20,8 @@ services:
healthcheck:
test:
- CMD
- wget
- --no-verbose
- --tries=1
- --spider
- 'http://localhost:11434/'
- ollama
- list
interval: 30s
timeout: 10s
retries: 3
+33
@@ -0,0 +1,33 @@
# Image configuration
OPENVIKING_VERSION=main
# Network configuration
OPENVIKING_PORT_OVERRIDE=1933
OPENVIKING_ROOT_API_KEY=openviking-dev-root-key
# Embedding model configuration
OPENVIKING_EMBEDDING_PROVIDER=openai
OPENVIKING_EMBEDDING_API_BASE=https://api.openai.com/v1
OPENVIKING_EMBEDDING_API_KEY=
OPENVIKING_EMBEDDING_MODEL=text-embedding-3-small
OPENVIKING_EMBEDDING_DIMENSION=1536
# Vision / multimodal model configuration
OPENVIKING_VLM_PROVIDER=openai
OPENVIKING_VLM_API_BASE=https://api.openai.com/v1
OPENVIKING_VLM_API_KEY=
OPENVIKING_VLM_MODEL=gpt-4o-mini
# Logging
OPENVIKING_LOG_LEVEL=INFO
OPENVIKING_LOG_MAX_SIZE=100m
OPENVIKING_LOG_MAX_FILE=3
# Resources
OPENVIKING_CPU_LIMIT=2.00
OPENVIKING_MEMORY_LIMIT=2G
OPENVIKING_CPU_RESERVATION=0.50
OPENVIKING_MEMORY_RESERVATION=512M
# Timezone
TZ=UTC
+65
@@ -0,0 +1,65 @@
# OpenViking
[中文文档](README.zh.md)
OpenViking is an agent-native context database from Volcengine. This Compose setup runs the official container image and bootstraps a minimal `ov.conf` from environment variables so the service can start with a single command.
## Quick Start
1. Copy the example environment file:
```bash
cp .env.example .env
```
2. Edit `.env` and set at least:
- `OPENVIKING_ROOT_API_KEY`
- `OPENVIKING_EMBEDDING_API_KEY`
- `OPENVIKING_VLM_API_KEY`
3. Start the service:
```bash
docker compose up -d
```
4. Verify health:
```bash
curl http://localhost:1933/health
```
## Default Ports
| Service | Port | Description |
| ---------- | ---- | ---------------------------- |
| OpenViking | 1933 | HTTP API and health endpoint |
## Important Environment Variables
| Variable | Description | Default |
| ------------------------------ | ----------------------------------------------- | ------------------------- |
| `OPENVIKING_VERSION` | OpenViking image tag | `main` |
| `OPENVIKING_PORT_OVERRIDE` | Host port for the HTTP API | `1933` |
| `OPENVIKING_ROOT_API_KEY` | Root API key required when binding to `0.0.0.0` | `openviking-dev-root-key` |
| `OPENVIKING_EMBEDDING_API_KEY` | API key for the embedding model | - |
| `OPENVIKING_EMBEDDING_MODEL` | Embedding model name | `text-embedding-3-small` |
| `OPENVIKING_VLM_API_KEY` | API key for the VLM / multimodal model | - |
| `OPENVIKING_VLM_MODEL` | VLM model name | `gpt-4o-mini` |
| `TZ` | Container timezone | `UTC` |
## Volumes
- `openviking_data`: Persistent workspace and local storage data.
## Notes
- This setup generates `ov.conf` at container start from `.env`, so no extra config file is required.
- The service can start without model API keys, but indexing and multimodal features will not work until valid credentials are provided.
- `/health` is unauthenticated and is used by the healthcheck.
## References
- [OpenViking Repository](https://github.com/volcengine/OpenViking)
- [Deployment Guide](https://github.com/volcengine/OpenViking/blob/main/docs/en/guides/03-deployment.md)
+65
@@ -0,0 +1,65 @@
# OpenViking
[English](README.md)
OpenViking is an agent-native context database open-sourced by Volcengine. This Compose setup uses the official container image directly and generates a minimal working `ov.conf` from environment variables at container start, so the service can be launched with a single command.
## Quick Start
1. Copy the example environment file:
```bash
cp .env.example .env
```
2. Edit `.env` and set at least the following variables:
- `OPENVIKING_ROOT_API_KEY`
- `OPENVIKING_EMBEDDING_API_KEY`
- `OPENVIKING_VLM_API_KEY`
3. Start the service:
```bash
docker compose up -d
```
4. Check the health status:
```bash
curl http://localhost:1933/health
```
## Default Ports
| Service    | Port | Description                        |
| ---------- | ---- | ---------------------------------- |
| OpenViking | 1933 | HTTP API and health check endpoint |
## Important Environment Variables
| Variable | Description | Default |
| ------------------------------ | ------------------------------------------------------------ | ------------------------- |
| `OPENVIKING_VERSION` | OpenViking image tag | `main` |
| `OPENVIKING_PORT_OVERRIDE` | Host port for the HTTP API | `1933` |
| `OPENVIKING_ROOT_API_KEY` | Root API key required when the service listens on `0.0.0.0` | `openviking-dev-root-key` |
| `OPENVIKING_EMBEDDING_API_KEY` | API key for the embedding model | - |
| `OPENVIKING_EMBEDDING_MODEL` | Embedding model name | `text-embedding-3-small` |
| `OPENVIKING_VLM_API_KEY` | API key for the multimodal model | - |
| `OPENVIKING_VLM_MODEL` | Multimodal model name | `gpt-4o-mini` |
| `TZ` | Container timezone | `UTC` |
## Volumes
- `openviking_data`: Persists the workspace and local storage data.
## Notes
- This setup generates `ov.conf` from `.env` at container start, so no extra config file needs to be prepared.
- The service can usually start without model API keys, but indexing and multimodal features will not work properly until valid credentials are provided.
- The `/health` endpoint requires no authentication; the Compose healthcheck relies on it.
## References
- [OpenViking Repository](https://github.com/volcengine/OpenViking)
- [Deployment Guide](https://github.com/volcengine/OpenViking/blob/main/docs/zh/guides/03-deployment.md)
+85
@@ -0,0 +1,85 @@
x-defaults: &defaults
restart: unless-stopped
logging:
driver: json-file
options:
max-size: ${OPENVIKING_LOG_MAX_SIZE:-100m}
max-file: '${OPENVIKING_LOG_MAX_FILE:-3}'
services:
openviking:
<<: *defaults
image: ${GHCR_IO_REGISTRY:-ghcr.io}/volcengine/openviking:${OPENVIKING_VERSION:-main}
ports:
- '${OPENVIKING_PORT_OVERRIDE:-1933}:1933'
environment:
- TZ=${TZ:-UTC}
env_file:
- .env
entrypoint:
- /bin/sh
- -ec
command: |
cat >/app/ov.conf <<EOF
{
"server": {
"host": "0.0.0.0",
"port": 1933,
"auth_mode": "api_key",
"root_api_key": "${OPENVIKING_ROOT_API_KEY:-openviking-dev-root-key}",
"cors_origins": ["*"]
},
"storage": {
"workspace": "/app/data/workspace",
"vectordb": {
"name": "context",
"backend": "local"
},
"agfs": {
"backend": "local",
"log_level": "warn"
}
},
"log": {
"level": "${OPENVIKING_LOG_LEVEL:-INFO}",
"output": "stdout"
},
"embedding": {
"dense": {
"provider": "${OPENVIKING_EMBEDDING_PROVIDER:-openai}",
"api_base": "${OPENVIKING_EMBEDDING_API_BASE:-https://api.openai.com/v1}",
"api_key": "${OPENVIKING_EMBEDDING_API_KEY:-}",
"dimension": ${OPENVIKING_EMBEDDING_DIMENSION:-1536},
"model": "${OPENVIKING_EMBEDDING_MODEL:-text-embedding-3-small}"
}
},
"vlm": {
"provider": "${OPENVIKING_VLM_PROVIDER:-openai}",
"api_base": "${OPENVIKING_VLM_API_BASE:-https://api.openai.com/v1}",
"api_key": "${OPENVIKING_VLM_API_KEY:-}",
"model": "${OPENVIKING_VLM_MODEL:-gpt-4o-mini}"
}
}
EOF
exec openviking-server --config /app/ov.conf
volumes:
- openviking_data:/app/data
healthcheck:
test:
- CMD-SHELL
- curl -fsS http://127.0.0.1:1933/health >/dev/null || exit 1
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
deploy:
resources:
limits:
cpus: ${OPENVIKING_CPU_LIMIT:-2.00}
memory: ${OPENVIKING_MEMORY_LIMIT:-2G}
reservations:
cpus: ${OPENVIKING_CPU_RESERVATION:-0.50}
memory: ${OPENVIKING_MEMORY_RESERVATION:-512M}
volumes:
openviking_data:
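The entrypoint above writes `ov.conf` by expanding environment variables inside an unquoted heredoc; note that `dimension` is deliberately left unquoted so it lands as a JSON number. A local sketch of the same pattern, using `python3` only to confirm the result parses as valid JSON (the temp path and the `python3` check are illustrative, not part of the service):

```shell
# Reproduce the entrypoint's env-to-JSON substitution for one field.
OPENVIKING_EMBEDDING_DIMENSION=""
cat > /tmp/ov-check.json <<EOF
{"embedding": {"dense": {"dimension": ${OPENVIKING_EMBEDDING_DIMENSION:-1536}}}}
EOF
# Parse it back: the default should come out as the number 1536.
python3 -c 'import json; d = json.load(open("/tmp/ov-check.json")); print(d["embedding"]["dense"]["dimension"])'
```

One caveat this sketch makes visible: because the heredoc is plain text substitution, a value containing a double quote would break the generated JSON, so keep the keys free of quote characters.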
+1 -1
@@ -1,5 +1,5 @@
# Phoenix version
PHOENIX_VERSION=13.3.0
PHOENIX_VERSION=13.19.2
# Timezone
TZ=UTC
+1 -1
@@ -32,7 +32,7 @@ This project supports two modes of operation via Docker Compose profiles:
| Variable Name | Description | Default Value |
| -------------------------------- | ---------------------------------------- | ----------------- |
| COMPOSE_PROFILES | Active profiles (`sqlite` or `postgres`) | `sqlite` |
| PHOENIX_VERSION | Phoenix image version | `13.3.0` |
| PHOENIX_VERSION | Phoenix image version | `13.19.2` |
| PHOENIX_PORT_OVERRIDE | Host port for Phoenix UI and HTTP API | `6006` |
| PHOENIX_GRPC_PORT_OVERRIDE | Host port for OTLP gRPC collector | `4317` |
| PHOENIX_PROMETHEUS_PORT_OVERRIDE | Host port for Prometheus metrics | `9090` |
+1 -1
@@ -32,7 +32,7 @@ Arize Phoenix is an open-source AI observability platform designed for LLM applications
| Variable Name | Description | Default Value |
| -------------------------------- | ---------------------------------------- | ----------------- |
| COMPOSE_PROFILES | Active profiles (`sqlite` or `postgres`) | `sqlite` |
| PHOENIX_VERSION | Phoenix image version | `13.3.0` |
| PHOENIX_VERSION | Phoenix image version | `13.19.2` |
| PHOENIX_PORT_OVERRIDE | Host port for the Phoenix UI and HTTP API | `6006` |
| PHOENIX_GRPC_PORT_OVERRIDE | Host port for the OTLP gRPC collector | `4317` |
| PHOENIX_PROMETHEUS_PORT_OVERRIDE | Host port for Prometheus metrics | `9090` |
+1 -1
@@ -11,7 +11,7 @@ x-defaults: &defaults
x-phoenix-common: &phoenix-common
<<: *defaults
image: ${GLOBAL_REGISTRY:-}arizephoenix/phoenix:${PHOENIX_VERSION:-13.3.0}
image: ${GLOBAL_REGISTRY:-}arizephoenix/phoenix:${PHOENIX_VERSION:-13.19.2}
ports:
- '${PHOENIX_PORT_OVERRIDE:-6006}:6006' # UI and OTLP HTTP collector
- '${PHOENIX_GRPC_PORT_OVERRIDE:-4317}:4317' # OTLP gRPC collector