diff --git a/README.md b/README.md index bca2e16..db0e849 100644 --- a/README.md +++ b/README.md @@ -11,10 +11,13 @@ These services require building custom Docker images from source. | Service | Version | | ------------------------------------------- | ------- | | [Debian DinD](./builds/debian-dind) | 0.1.2 | +| [DeerFlow](./builds/deer-flow) | 2.0 | | [goose](./builds/goose) | 1.18.0 | | [IOPaint](./builds/io-paint) | 1.6.0 | | [K3s inside DinD](./builds/k3s-inside-dind) | 0.2.2 | | [MinerU vLLM](./builds/mineru) | 2.7.6 | +| [OpenFang](./builds/openfang) | 0.1.0 | +| [Paperclip](./builds/paperclip) | main | ## Supported Services @@ -29,7 +32,7 @@ These services require building custom Docker images from source. | [Apache Pulsar](./src/pulsar) | 4.0.7 | | [Apache RocketMQ](./src/rocketmq) | 5.3.1 | | [Agentgateway](./src/agentgateway) | 0.11.2 | -| [Bifrost Gateway](./src/bifrost-gateway) | v1.3.63 | +| [Bifrost Gateway](./src/bifrost-gateway) | v1.4.17 | | [Bolt.diy](./apps/bolt-diy) | latest | | [Budibase](./src/budibase) | 3.23.0 | | [BuildingAI](./apps/buildingai) | latest | @@ -83,6 +86,7 @@ These services require building custom Docker images from source. | [LMDeploy](./src/lmdeploy) | v0.11.1 | | [Logstash](./src/logstash) | 8.16.1 | | [MariaDB Galera Cluster](./src/mariadb-galera) | 11.7.2 | +| [Mattermost](./apps/mattermost) | 11.3 | | [Memos](./src/memos) | 0.25.3 | | [Milvus Standalone Embed](./src/milvus-standalone-embed) | v2.6.7 | | [Milvus Standalone](./src/milvus-standalone) | v2.6.7 | @@ -107,7 +111,7 @@ These services require building custom Docker images from source. 
| [Odoo](./src/odoo) | 19.0 | | [Ollama](./src/ollama) | 0.14.3 | | [Open WebUI](./src/open-webui) | main | -| [Phoenix (Arize)](./src/phoenix) | 13.3.0 | +| [Phoenix (Arize)](./src/phoenix) | 13.19.2 | | [Pingora Proxy Manager](./src/pingora-proxy-manager) | v1.0.3 | | [Open WebUI Rust](./src/open-webui-rust) | latest | | [OpenCode](./src/opencode) | 1.1.27 | @@ -120,6 +124,7 @@ These services require building custom Docker images from source. | [OpenObserve](./apps/openobserve) | v0.50.0 | | [OpenSearch](./src/opensearch) | 2.19.0 | | [OpenTelemetry Collector](./src/otel-collector) | 0.115.1 | +| [OpenViking](./src/openviking) | 0.1.0 | | [Overleaf](./src/overleaf) | 5.2.1 | | [PocketBase](./src/pocketbase) | 0.30.0 | | [Podman](./src/podman) | v5.7.1 | diff --git a/README.zh.md b/README.zh.md index 82d1d94..b595d76 100644 --- a/README.zh.md +++ b/README.zh.md @@ -11,10 +11,13 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件, | 服务 | 版本 | | ------------------------------------------- | ------ | | [Debian DinD](./builds/debian-dind) | 0.1.2 | +| [DeerFlow](./builds/deer-flow) | 2.0 | | [goose](./builds/goose) | 1.18.0 | | [IOPaint](./builds/io-paint) | 1.6.0 | | [K3s inside DinD](./builds/k3s-inside-dind) | 0.2.2 | | [MinerU vLLM](./builds/mineru) | 2.7.6 | +| [OpenFang](./builds/openfang) | 0.1.0 | +| [Paperclip](./builds/paperclip) | main | ## 已经支持的服务 @@ -29,7 +32,7 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件, | [Apache Pulsar](./src/pulsar) | 4.0.7 | | [Apache RocketMQ](./src/rocketmq) | 5.3.1 | | [Agentgateway](./src/agentgateway) | 0.11.2 | -| [Bifrost Gateway](./src/bifrost-gateway) | v1.3.63 | +| [Bifrost Gateway](./src/bifrost-gateway) | v1.4.17 | | [Bolt.diy](./apps/bolt-diy) | latest | | [Budibase](./src/budibase) | 3.23.0 | | [BuildingAI](./apps/buildingai) | latest | @@ -83,6 +86,7 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件, | [LMDeploy](./src/lmdeploy) | v0.11.1 | | [Logstash](./src/logstash) | 8.16.1 | | [MariaDB Galera 
Cluster](./src/mariadb-galera) | 11.7.2 | +| [Mattermost](./apps/mattermost) | 11.3 | | [Memos](./src/memos) | 0.25.3 | | [Milvus Standalone Embed](./src/milvus-standalone-embed) | v2.6.7 | | [Milvus Standalone](./src/milvus-standalone) | v2.6.7 | @@ -107,7 +111,7 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件, | [Odoo](./src/odoo) | 19.0 | | [Ollama](./src/ollama) | 0.14.3 | | [Open WebUI](./src/open-webui) | main | -| [Phoenix (Arize)](./src/phoenix) | 13.3.0 | +| [Phoenix (Arize)](./src/phoenix) | 13.19.2 | | [Pingora Proxy Manager](./src/pingora-proxy-manager) | v1.0.3 | | [Open WebUI Rust](./src/open-webui-rust) | latest | | [OpenCode](./src/opencode) | 1.1.27 | @@ -120,6 +124,7 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件, | [OpenObserve](./apps/openobserve) | v0.50.0 | | [OpenSearch](./src/opensearch) | 2.19.0 | | [OpenTelemetry Collector](./src/otel-collector) | 0.115.1 | +| [OpenViking](./src/openviking) | 0.1.0 | | [Overleaf](./src/overleaf) | 5.2.1 | | [PocketBase](./src/pocketbase) | 0.30.0 | | [Podman](./src/podman) | v5.7.1 | diff --git a/apps/mattermost/.env.example b/apps/mattermost/.env.example new file mode 100644 index 0000000..85a50de --- /dev/null +++ b/apps/mattermost/.env.example @@ -0,0 +1,34 @@ +# Image versions +MATTERMOST_VERSION=11.3 +POSTGRES_VERSION=17-alpine + +# Network configuration +MATTERMOST_PORT_OVERRIDE=8065 +MATTERMOST_SITE_URL=http://localhost:8065 + +# PostgreSQL configuration +POSTGRES_DB=mattermost +POSTGRES_USER=mmuser +POSTGRES_PASSWORD=mmchangeit + +# Mattermost runtime configuration +MATTERMOST_ENABLE_LOCAL_MODE=false + +# Resources - Mattermost +MATTERMOST_CPU_LIMIT=2.00 +MATTERMOST_MEMORY_LIMIT=2G +MATTERMOST_CPU_RESERVATION=0.50 +MATTERMOST_MEMORY_RESERVATION=512M + +# Resources - PostgreSQL +MATTERMOST_DB_CPU_LIMIT=1.00 +MATTERMOST_DB_MEMORY_LIMIT=1G +MATTERMOST_DB_CPU_RESERVATION=0.25 +MATTERMOST_DB_MEMORY_RESERVATION=256M + +# Logging +MATTERMOST_LOG_MAX_SIZE=100m +MATTERMOST_LOG_MAX_FILE=3 + +# 
Timezone +TZ=UTC diff --git a/apps/mattermost/README.md b/apps/mattermost/README.md new file mode 100644 index 0000000..77cd6f5 --- /dev/null +++ b/apps/mattermost/README.md @@ -0,0 +1,68 @@ +# Mattermost + +[中文文档](README.zh.md) + +Mattermost is an open-source team collaboration platform that provides chat, file sharing, channels, and integrations. This Compose stack includes Mattermost plus PostgreSQL and is designed to start with a single `docker compose up -d`. + +## Quick Start + +1. Copy the example environment file: + + ```bash + cp .env.example .env + ``` + +2. Edit `.env` if you want to change the port, site URL, or database password. + +3. Start the stack: + + ```bash + docker compose up -d + ``` + +4. Open Mattermost: + + - + +5. Complete the first-run wizard to create the initial system admin account. + +## Default Ports + +| Service | Port | Description | +| ---------- | ---- | ---------------------- | +| Mattermost | 8065 | Web UI and API | +| PostgreSQL | 5432 | Internal database only | + +## Important Environment Variables + +| Variable | Description | Default | +| ------------------------------ | ---------------------------------------------- | ----------------------- | +| `MATTERMOST_VERSION` | Mattermost Team Edition image tag | `11.3` | +| `MATTERMOST_PORT_OVERRIDE` | Host port for Mattermost | `8065` | +| `MATTERMOST_SITE_URL` | Public URL used by Mattermost | `http://localhost:8065` | +| `POSTGRES_DB` | PostgreSQL database name | `mattermost` | +| `POSTGRES_USER` | PostgreSQL user | `mmuser` | +| `POSTGRES_PASSWORD` | PostgreSQL password | `mmchangeit` | +| `MATTERMOST_ENABLE_LOCAL_MODE` | Enables local mode for administrative commands | `false` | +| `TZ` | Container timezone | `UTC` | + +## Volumes + +- `mattermost_postgres_data`: PostgreSQL data. +- `mattermost_config`: Mattermost config directory. +- `mattermost_data`: Uploaded files and application data. +- `mattermost_logs`: Application logs. +- `mattermost_plugins`: Server-side plugins. 
+- `mattermost_client_plugins`: Webapp plugins. +- `mattermost_bleve_indexes`: Search indexes. + +## Notes + +- The application depends on PostgreSQL and waits until the database is healthy before booting. +- The default setup uses Team Edition. +- If you expose Mattermost behind a reverse proxy or different hostname, update `MATTERMOST_SITE_URL`. + +## References + +- [Mattermost Repository](https://github.com/mattermost/mattermost) +- [Mattermost Team Edition Image](https://hub.docker.com/r/mattermost/mattermost-team-edition) diff --git a/apps/mattermost/README.zh.md b/apps/mattermost/README.zh.md new file mode 100644 index 0000000..f06941d --- /dev/null +++ b/apps/mattermost/README.zh.md @@ -0,0 +1,68 @@ +# Mattermost + +[English](README.md) + +Mattermost 是一个开源团队协作平台,提供聊天、频道、文件共享和集成能力。这个 Compose 配置包含 Mattermost 和 PostgreSQL,目标是用一条 `docker compose up -d` 完成启动。 + +## 快速开始 + +1. 复制环境变量示例文件: + + ```bash + cp .env.example .env + ``` + +2. 按需修改 `.env`,例如端口、站点 URL 或数据库密码。 + +3. 启动整个栈: + + ```bash + docker compose up -d + ``` + +4. 打开 Mattermost: + + - + +5. 
按照首次启动向导创建初始系统管理员账号。 + +## 默认端口 + +| 服务 | 端口 | 说明 | +| ---------- | ---- | -------------------- | +| Mattermost | 8065 | Web 界面与 API | +| PostgreSQL | 5432 | 仅供内部使用的数据库 | + +## 关键环境变量 + +| 变量 | 说明 | 默认值 | +| ------------------------------ | -------------------------------- | ----------------------- | +| `MATTERMOST_VERSION` | Mattermost Team Edition 镜像标签 | `11.3` | +| `MATTERMOST_PORT_OVERRIDE` | Mattermost 对外端口 | `8065` | +| `MATTERMOST_SITE_URL` | Mattermost 对外访问 URL | `http://localhost:8065` | +| `POSTGRES_DB` | PostgreSQL 数据库名 | `mattermost` | +| `POSTGRES_USER` | PostgreSQL 用户名 | `mmuser` | +| `POSTGRES_PASSWORD` | PostgreSQL 密码 | `mmchangeit` | +| `MATTERMOST_ENABLE_LOCAL_MODE` | 是否启用本地管理模式 | `false` | +| `TZ` | 容器时区 | `UTC` | + +## 数据卷 + +- `mattermost_postgres_data`:PostgreSQL 数据。 +- `mattermost_config`:Mattermost 配置目录。 +- `mattermost_data`:上传文件和业务数据。 +- `mattermost_logs`:应用日志。 +- `mattermost_plugins`:服务端插件。 +- `mattermost_client_plugins`:前端插件。 +- `mattermost_bleve_indexes`:搜索索引。 + +## 说明 + +- Mattermost 依赖 PostgreSQL,只有数据库健康后才会继续启动。 +- 这里默认使用 Team Edition。 +- 如果你通过反向代理或自定义域名访问 Mattermost,请同步修改 `MATTERMOST_SITE_URL`。 + +## 参考资料 + +- [Mattermost 仓库](https://github.com/mattermost/mattermost) +- [Mattermost Team Edition 镜像](https://hub.docker.com/r/mattermost/mattermost-team-edition) diff --git a/apps/mattermost/docker-compose.yaml b/apps/mattermost/docker-compose.yaml new file mode 100644 index 0000000..10f51c2 --- /dev/null +++ b/apps/mattermost/docker-compose.yaml @@ -0,0 +1,84 @@ +x-defaults: &defaults + restart: unless-stopped + logging: + driver: json-file + options: + max-size: ${MATTERMOST_LOG_MAX_SIZE:-100m} + max-file: '${MATTERMOST_LOG_MAX_FILE:-3}' + +services: + mattermost-postgres: + <<: *defaults + image: ${GLOBAL_REGISTRY:-}postgres:${POSTGRES_VERSION:-17-alpine} + environment: + - TZ=${TZ:-UTC} + - POSTGRES_DB=${POSTGRES_DB:-mattermost} + - POSTGRES_USER=${POSTGRES_USER:-mmuser} + - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-mmchangeit} + - 
PGDATA=/var/lib/postgresql/data/pgdata + volumes: + - mattermost_postgres_data:/var/lib/postgresql/data + healthcheck: + test: [CMD-SHELL, pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB] + interval: 15s + timeout: 5s + retries: 10 + start_period: 20s + deploy: + resources: + limits: + cpus: ${MATTERMOST_DB_CPU_LIMIT:-1.00} + memory: ${MATTERMOST_DB_MEMORY_LIMIT:-1G} + reservations: + cpus: ${MATTERMOST_DB_CPU_RESERVATION:-0.25} + memory: ${MATTERMOST_DB_MEMORY_RESERVATION:-256M} + + mattermost: + <<: *defaults + image: ${GLOBAL_REGISTRY:-}mattermost/mattermost-team-edition:${MATTERMOST_VERSION:-11.3} + depends_on: + mattermost-postgres: + condition: service_healthy + ports: + - '${MATTERMOST_PORT_OVERRIDE:-8065}:8065' + environment: + - TZ=${TZ:-UTC} + - MM_SQLSETTINGS_DRIVERNAME=postgres + - MM_SQLSETTINGS_DATASOURCE=postgres://${POSTGRES_USER:-mmuser}:${POSTGRES_PASSWORD:-mmchangeit}@mattermost-postgres:5432/${POSTGRES_DB:-mattermost}?sslmode=disable&connect_timeout=10 + - MM_SERVICESETTINGS_SITEURL=${MATTERMOST_SITE_URL:-http://localhost:8065} + - MM_SERVICESETTINGS_ENABLELOCALMODE=${MATTERMOST_ENABLE_LOCAL_MODE:-false} + - MM_PLUGINSETTINGS_ENABLEUPLOADS=true + - MM_BLEVESETTINGS_INDEXDIR=/mattermost/bleve-indexes + - MM_FILESETTINGS_DIRECTORY=/mattermost/data + env_file: + - .env + volumes: + - mattermost_config:/mattermost/config + - mattermost_data:/mattermost/data + - mattermost_logs:/mattermost/logs + - mattermost_plugins:/mattermost/plugins + - mattermost_client_plugins:/mattermost/client/plugins + - mattermost_bleve_indexes:/mattermost/bleve-indexes + healthcheck: + test: [CMD, /mattermost/bin/mmctl, system, status, --local] + interval: 30s + timeout: 10s + retries: 5 + start_period: 60s + deploy: + resources: + limits: + cpus: ${MATTERMOST_CPU_LIMIT:-2.00} + memory: ${MATTERMOST_MEMORY_LIMIT:-2G} + reservations: + cpus: ${MATTERMOST_CPU_RESERVATION:-0.50} + memory: ${MATTERMOST_MEMORY_RESERVATION:-512M} + +volumes: + mattermost_postgres_data: + 
mattermost_config: + mattermost_data: + mattermost_logs: + mattermost_plugins: + mattermost_client_plugins: + mattermost_bleve_indexes: diff --git a/builds/deer-flow/.env.example b/builds/deer-flow/.env.example new file mode 100644 index 0000000..323038f --- /dev/null +++ b/builds/deer-flow/.env.example @@ -0,0 +1,45 @@ +# Source build configuration +DEER_FLOW_VERSION=main +NGINX_VERSION=1.28-alpine + +# Network configuration +DEER_FLOW_PORT_OVERRIDE=2026 +DEER_FLOW_CORS_ORIGINS=http://localhost:2026 +DEER_FLOW_BETTER_AUTH_SECRET=deer-flow-dev-secret-change-me + +# Model configuration +DEER_FLOW_MODEL_NAME=openai-default +DEER_FLOW_MODEL_DISPLAY_NAME=OpenAI +DEER_FLOW_MODEL_ID=gpt-4.1-mini +OPENAI_API_KEY= + +# Resources - Gateway +DEER_FLOW_GATEWAY_CPU_LIMIT=2.00 +DEER_FLOW_GATEWAY_MEMORY_LIMIT=2G +DEER_FLOW_GATEWAY_CPU_RESERVATION=0.50 +DEER_FLOW_GATEWAY_MEMORY_RESERVATION=512M + +# Resources - LangGraph +DEER_FLOW_LANGGRAPH_CPU_LIMIT=2.00 +DEER_FLOW_LANGGRAPH_MEMORY_LIMIT=2G +DEER_FLOW_LANGGRAPH_CPU_RESERVATION=0.50 +DEER_FLOW_LANGGRAPH_MEMORY_RESERVATION=512M + +# Resources - Frontend +DEER_FLOW_FRONTEND_CPU_LIMIT=1.00 +DEER_FLOW_FRONTEND_MEMORY_LIMIT=1G +DEER_FLOW_FRONTEND_CPU_RESERVATION=0.25 +DEER_FLOW_FRONTEND_MEMORY_RESERVATION=256M + +# Resources - Nginx +DEER_FLOW_NGINX_CPU_LIMIT=0.50 +DEER_FLOW_NGINX_MEMORY_LIMIT=256M +DEER_FLOW_NGINX_CPU_RESERVATION=0.10 +DEER_FLOW_NGINX_MEMORY_RESERVATION=64M + +# Logging +DEER_FLOW_LOG_MAX_SIZE=100m +DEER_FLOW_LOG_MAX_FILE=3 + +# Timezone +TZ=UTC diff --git a/builds/deer-flow/README.md b/builds/deer-flow/README.md new file mode 100644 index 0000000..5be5634 --- /dev/null +++ b/builds/deer-flow/README.md @@ -0,0 +1,60 @@ +# DeerFlow + +[中文文档](README.zh.md) + +DeerFlow is a full-stack AI agent application from ByteDance. This Compose setup builds the frontend and backend from source, starts Gateway, LangGraph, and Nginx, and exposes the unified entrypoint on port 2026. + +## Quick Start + +1. 
Copy the example environment file: + + ```bash + cp .env.example .env + ``` + +2. Edit `.env` and set `OPENAI_API_KEY`. + +3. Start the stack: + + ```bash + docker compose up -d + ``` + +4. Open DeerFlow: + + - + +## Default Ports + +| Service | Port | Description | +| ----------- | ---- | ---------------------- | +| Nginx | 2026 | Unified web entrypoint | +| Gateway API | 8001 | Internal only | +| LangGraph | 2024 | Internal only | +| Frontend | 3000 | Internal only | + +## Important Environment Variables + +| Variable | Description | Default | +| ------------------------------ | ------------------------------------------------------ | -------------------------------- | +| `DEER_FLOW_VERSION` | Git ref used for source builds | `main` | +| `DEER_FLOW_PORT_OVERRIDE` | Host port for the unified entrypoint | `2026` | +| `OPENAI_API_KEY` | OpenAI API key referenced from generated `config.yaml` | - | +| `DEER_FLOW_MODEL_NAME` | Internal model identifier | `openai-default` | +| `DEER_FLOW_MODEL_DISPLAY_NAME` | Display name shown in the app | `OpenAI` | +| `DEER_FLOW_MODEL_ID` | OpenAI model id | `gpt-4.1-mini` | +| `DEER_FLOW_CORS_ORIGINS` | Allowed CORS origins for the gateway | `http://localhost:2026` | +| `DEER_FLOW_BETTER_AUTH_SECRET` | Frontend auth secret | `deer-flow-dev-secret-change-me` | +| `TZ` | Container timezone | `UTC` | + +## Notes + +- This setup generates a minimal `config.yaml` and `extensions_config.json` inside the backend containers, so no extra config files are required. +- The default sandbox mode is local to avoid requiring Docker socket mounts or Kubernetes provisioner setup. +- DeerFlow upstream usually expects local image builds, so the first build can take several minutes. +- Only an OpenAI-compatible model is wired by default here. If you want Anthropic, Gemini, or a more advanced config, update the generated template logic in `docker-compose.yaml`. 
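The model wiring above comes down to four `.env` values. A minimal sketch, using the defaults from `.env.example` in this directory (the API key shown is a placeholder, not a real credential):

```shell
# Write the minimal model configuration described above to a scratch file.
# "sk-your-key-here" is a placeholder -- substitute your real key.
cat > /tmp/deer-flow.env <<'EOF'
DEER_FLOW_MODEL_NAME=openai-default
DEER_FLOW_MODEL_DISPLAY_NAME=OpenAI
DEER_FLOW_MODEL_ID=gpt-4.1-mini
OPENAI_API_KEY=sk-your-key-here
EOF
grep -c '=' /tmp/deer-flow.env  # → 4
```

Apply the same edits to the real `.env` before `docker compose up -d`; the backend containers read these values when generating their `config.yaml` at startup.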
+ +## References + +- [DeerFlow Repository](https://github.com/bytedance/deer-flow) +- [Project README](https://github.com/bytedance/deer-flow/blob/main/README.md) diff --git a/builds/deer-flow/README.zh.md b/builds/deer-flow/README.zh.md new file mode 100644 index 0000000..ddfe9ba --- /dev/null +++ b/builds/deer-flow/README.zh.md @@ -0,0 +1,60 @@ +# DeerFlow + +[English](README.md) + +DeerFlow 是字节跳动开源的全栈 AI Agent 应用。这个 Compose 配置会从源码构建前后端镜像,启动 Gateway、LangGraph 和 Nginx,并通过 2026 端口暴露统一入口。 + +## 快速开始 + +1. 复制环境变量示例文件: + + ```bash + cp .env.example .env + ``` + +2. 编辑 `.env`,至少填写 `OPENAI_API_KEY`。 + +3. 启动整个栈: + + ```bash + docker compose up -d + ``` + +4. 打开 DeerFlow: + + - + +## 默认端口 + +| 服务 | 端口 | 说明 | +| ----------- | ---- | ------------- | +| Nginx | 2026 | 统一 Web 入口 | +| Gateway API | 8001 | 仅内部访问 | +| LangGraph | 2024 | 仅内部访问 | +| Frontend | 3000 | 仅内部访问 | + +## 关键环境变量 + +| 变量 | 说明 | 默认值 | +| ------------------------------ | -------------------------------------------- | -------------------------------- | +| `DEER_FLOW_VERSION` | 用于源码构建的 Git 引用 | `main` | +| `DEER_FLOW_PORT_OVERRIDE` | 统一入口对外端口 | `2026` | +| `OPENAI_API_KEY` | 生成的 `config.yaml` 中引用的 OpenAI API Key | - | +| `DEER_FLOW_MODEL_NAME` | 模型内部标识 | `openai-default` | +| `DEER_FLOW_MODEL_DISPLAY_NAME` | 界面展示名称 | `OpenAI` | +| `DEER_FLOW_MODEL_ID` | OpenAI 模型 ID | `gpt-4.1-mini` | +| `DEER_FLOW_CORS_ORIGINS` | Gateway 允许的跨域来源 | `http://localhost:2026` | +| `DEER_FLOW_BETTER_AUTH_SECRET` | 前端鉴权密钥 | `deer-flow-dev-secret-change-me` | +| `TZ` | 容器时区 | `UTC` | + +## 说明 + +- 这个配置会在后端容器内部生成最小可用的 `config.yaml` 和 `extensions_config.json`,因此不需要额外手工准备配置文件。 +- 默认使用本地 sandbox 模式,这样不需要挂载 Docker Socket,也不依赖 Kubernetes provisioner。 +- DeerFlow 上游通常要求本地构建镜像,因此首次构建耗时可能较长。 +- 当前默认只接入了 OpenAI 兼容模型。如果你要改成 Anthropic、Gemini 或更复杂的配置,需要调整 `docker-compose.yaml` 中生成配置文件的模板。 + +## 参考资料 + +- [DeerFlow 仓库](https://github.com/bytedance/deer-flow) +- [项目 README](https://github.com/bytedance/deer-flow/blob/main/README_zh.md) diff --git 
a/builds/deer-flow/docker-compose.yaml b/builds/deer-flow/docker-compose.yaml new file mode 100644 index 0000000..07c72b2 --- /dev/null +++ b/builds/deer-flow/docker-compose.yaml @@ -0,0 +1,171 @@ +x-defaults: &defaults + restart: unless-stopped + logging: + driver: json-file + options: + max-size: ${DEER_FLOW_LOG_MAX_SIZE:-100m} + max-file: '${DEER_FLOW_LOG_MAX_FILE:-3}' + +services: + deerflow-gateway: + <<: *defaults + build: + context: https://github.com/bytedance/deer-flow.git#${DEER_FLOW_VERSION:-main} + dockerfile: backend/Dockerfile + image: ${GLOBAL_REGISTRY:-}alexsuntop/deer-flow-backend:${DEER_FLOW_VERSION:-main} + environment: + - TZ=${TZ:-UTC} + - OPENAI_API_KEY=${OPENAI_API_KEY:-} + env_file: + - .env + entrypoint: + - /bin/sh + - -ec + command: | + cat >/tmp/config.yaml </tmp/extensions_config.json </tmp/config.yaml </tmp/extensions_config.json <process.exit(r.ok?0:1)).catch(()=>process.exit(1))" + interval: 30s + timeout: 10s + retries: 5 + start_period: 60s + deploy: + resources: + limits: + cpus: ${DEER_FLOW_FRONTEND_CPU_LIMIT:-1.00} + memory: ${DEER_FLOW_FRONTEND_MEMORY_LIMIT:-1G} + reservations: + cpus: ${DEER_FLOW_FRONTEND_CPU_RESERVATION:-0.25} + memory: ${DEER_FLOW_FRONTEND_MEMORY_RESERVATION:-256M} + + deerflow-nginx: + <<: *defaults + image: ${GLOBAL_REGISTRY:-}nginx:${NGINX_VERSION:-1.28-alpine} + depends_on: + deerflow-gateway: + condition: service_healthy + deerflow-langgraph: + condition: service_healthy + deerflow-frontend: + condition: service_healthy + ports: + - '${DEER_FLOW_PORT_OVERRIDE:-2026}:2026' + volumes: + - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro + healthcheck: + test: + - CMD-SHELL + - wget --no-verbose --tries=1 --spider http://127.0.0.1:2026 >/dev/null || exit 1 + interval: 30s + timeout: 10s + retries: 5 + start_period: 10s + deploy: + resources: + limits: + cpus: ${DEER_FLOW_NGINX_CPU_LIMIT:-0.50} + memory: ${DEER_FLOW_NGINX_MEMORY_LIMIT:-256M} + reservations: + cpus: ${DEER_FLOW_NGINX_CPU_RESERVATION:-0.10} + 
memory: ${DEER_FLOW_NGINX_MEMORY_RESERVATION:-64M} diff --git a/builds/deer-flow/nginx.conf b/builds/deer-flow/nginx.conf new file mode 100644 index 0000000..53fa1f6 --- /dev/null +++ b/builds/deer-flow/nginx.conf @@ -0,0 +1,40 @@ +server { + listen 2026; + server_name _; + + client_max_body_size 50m; + + location /api/langgraph/ { + proxy_pass http://deerflow-langgraph:2024/; + proxy_http_version 1.1; + proxy_set_header Host $host; + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + proxy_set_header X-Forwarded-Proto $scheme; + proxy_set_header Upgrade $http_upgrade; + proxy_set_header Connection "upgrade"; + proxy_read_timeout 3600s; + proxy_send_timeout 3600s; + } + + location /api/ { + proxy_pass http://deerflow-gateway:8001/api/; + proxy_http_version 1.1; + proxy_set_header Host $host; + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + proxy_set_header X-Forwarded-Proto $scheme; + proxy_read_timeout 3600s; + proxy_send_timeout 3600s; + } + + location / { + proxy_pass http://deerflow-frontend:3000; + proxy_http_version 1.1; + proxy_set_header Host $host; + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + proxy_set_header X-Forwarded-Proto $scheme; + proxy_set_header Upgrade $http_upgrade; + proxy_set_header Connection "upgrade"; + proxy_read_timeout 3600s; + proxy_send_timeout 3600s; + } +} diff --git a/builds/openfang/.env.example b/builds/openfang/.env.example new file mode 100644 index 0000000..6eba68b --- /dev/null +++ b/builds/openfang/.env.example @@ -0,0 +1,33 @@ +# Source build configuration +OPENFANG_VERSION=0.1.0 + +# Network configuration +OPENFANG_PORT_OVERRIDE=4200 + +# OpenFang runtime configuration +OPENFANG_PROVIDER=anthropic +OPENFANG_MODEL=claude-sonnet-4-20250514 +OPENFANG_API_KEY_ENV=ANTHROPIC_API_KEY +OPENFANG_API_KEY= +OPENFANG_LOG_LEVEL=info +OPENFANG_MEMORY_DECAY_RATE=0.05 +OPENFANG_EXEC_MODE=allowlist +OPENFANG_EXEC_TIMEOUT_SECS=30 + +# Provider credentials +ANTHROPIC_API_KEY= +OPENAI_API_KEY= 
+GROQ_API_KEY= + +# Resources +OPENFANG_CPU_LIMIT=2.00 +OPENFANG_MEMORY_LIMIT=2G +OPENFANG_CPU_RESERVATION=0.50 +OPENFANG_MEMORY_RESERVATION=512M + +# Logging +OPENFANG_LOG_MAX_SIZE=100m +OPENFANG_LOG_MAX_FILE=3 + +# Timezone +TZ=UTC diff --git a/builds/openfang/README.md b/builds/openfang/README.md new file mode 100644 index 0000000..56701dc --- /dev/null +++ b/builds/openfang/README.md @@ -0,0 +1,71 @@ +# OpenFang + +[中文文档](README.zh.md) + +OpenFang is an open-source agent operating system. This Compose setup builds the upstream Docker image from the `v0.1.0` source tag and writes a minimal `config.toml` into the persistent data volume on startup. + +## Quick Start + +1. Copy the example environment file: + + ```bash + cp .env.example .env + ``` + +2. Set at least one provider API key in `.env`: + + - `ANTHROPIC_API_KEY` + - `OPENAI_API_KEY` + - `GROQ_API_KEY` + +3. Start OpenFang: + + ```bash + docker compose up -d + ``` + +4. Open the dashboard: + + - + +5. Verify health if needed: + + ```bash + curl http://localhost:4200/api/health + ``` + +## Default Ports + +| Service | Port | Description | +| -------- | ---- | ---------------------- | +| OpenFang | 4200 | Dashboard and REST API | + +## Important Environment Variables + +| Variable | Description | Default | +| ------------------------ | ------------------------------------------------------------------ | -------------------------- | +| `OPENFANG_VERSION` | Git tag used for the source build | `0.1.0` | +| `OPENFANG_PORT_OVERRIDE` | Host port for OpenFang | `4200` | +| `OPENFANG_PROVIDER` | Default model provider | `anthropic` | +| `OPENFANG_MODEL` | Default model name | `claude-sonnet-4-20250514` | +| `OPENFANG_API_KEY_ENV` | Environment variable name that OpenFang reads for the provider key | `ANTHROPIC_API_KEY` | +| `OPENFANG_API_KEY` | Optional Bearer token to protect the API | - | +| `ANTHROPIC_API_KEY` | Anthropic API key | - | +| `OPENAI_API_KEY` | OpenAI API key | - | +| `GROQ_API_KEY` | Groq API key | 
- | +| `TZ` | Container timezone | `UTC` | + +## Volumes + +- `openfang_data`: Persistent configuration and runtime data under `/data`. + +## Notes + +- The generated config binds to `0.0.0.0:4200` for container use. +- If `OPENFANG_API_KEY` is empty, the instance runs without API authentication except for whatever protections you place in front of it. +- This setup uses the upstream Dockerfile, so the first build can take several minutes. + +## References + +- [OpenFang Repository](https://github.com/RightNow-AI/openfang) +- [Getting Started Guide](https://github.com/RightNow-AI/openfang/blob/main/docs/getting-started.md) diff --git a/builds/openfang/README.zh.md b/builds/openfang/README.zh.md new file mode 100644 index 0000000..6212acc --- /dev/null +++ b/builds/openfang/README.zh.md @@ -0,0 +1,71 @@ +# OpenFang + +[English](README.md) + +OpenFang 是一个开源的 Agent Operating System。这个 Compose 配置会基于上游 `v0.1.0` 源码标签构建镜像,并在启动时把最小可用的 `config.toml` 写入持久化数据卷。 + +## 快速开始 + +1. 复制环境变量示例文件: + + ```bash + cp .env.example .env + ``` + +2. 在 `.env` 中至少填写一个模型提供商的 API Key: + + - `ANTHROPIC_API_KEY` + - `OPENAI_API_KEY` + - `GROQ_API_KEY` + +3. 启动 OpenFang: + + ```bash + docker compose up -d + ``` + +4. 打开控制台: + + - + +5. 
如需检查健康状态: + + ```bash + curl http://localhost:4200/api/health + ``` + +## 默认端口 + +| 服务 | 端口 | 说明 | +| -------- | ---- | ----------------- | +| OpenFang | 4200 | 控制台与 REST API | + +## 关键环境变量 + +| 变量 | 说明 | 默认值 | +| ------------------------ | ----------------------------------------- | -------------------------- | +| `OPENFANG_VERSION` | 用于源码构建的 Git 标签 | `0.1.0` | +| `OPENFANG_PORT_OVERRIDE` | OpenFang 对外端口 | `4200` | +| `OPENFANG_PROVIDER` | 默认模型提供商 | `anthropic` | +| `OPENFANG_MODEL` | 默认模型名称 | `claude-sonnet-4-20250514` | +| `OPENFANG_API_KEY_ENV` | OpenFang 读取提供商密钥时使用的环境变量名 | `ANTHROPIC_API_KEY` | +| `OPENFANG_API_KEY` | 可选的 API Bearer Token | - | +| `ANTHROPIC_API_KEY` | Anthropic API Key | - | +| `OPENAI_API_KEY` | OpenAI API Key | - | +| `GROQ_API_KEY` | Groq API Key | - | +| `TZ` | 容器时区 | `UTC` | + +## 数据卷 + +- `openfang_data`:持久化 `/data` 下的配置与运行数据。 + +## 说明 + +- 生成的配置会监听 `0.0.0.0:4200`,适合容器内运行。 +- 如果 `OPENFANG_API_KEY` 为空,实例本身不会启用额外 API 认证,是否暴露到公网需要你自行把控。 +- 该服务使用上游 Dockerfile 从源码构建,首次构建通常需要几分钟。 + +## 参考资料 + +- [OpenFang 仓库](https://github.com/RightNow-AI/openfang) +- [入门文档](https://github.com/RightNow-AI/openfang/blob/main/docs/getting-started.md) diff --git a/builds/openfang/docker-compose.yaml b/builds/openfang/docker-compose.yaml new file mode 100644 index 0000000..702aaca --- /dev/null +++ b/builds/openfang/docker-compose.yaml @@ -0,0 +1,71 @@ +x-defaults: &defaults + restart: unless-stopped + logging: + driver: json-file + options: + max-size: ${OPENFANG_LOG_MAX_SIZE:-100m} + max-file: '${OPENFANG_LOG_MAX_FILE:-3}' + +services: + openfang: + <<: *defaults + build: + context: https://github.com/RightNow-AI/openfang.git#${OPENFANG_VERSION:-0.1.0} + dockerfile: Dockerfile + image: ${GLOBAL_REGISTRY:-}alexsuntop/openfang:${OPENFANG_VERSION:-0.1.0} + ports: + - '${OPENFANG_PORT_OVERRIDE:-4200}:4200' + environment: + - TZ=${TZ:-UTC} + - OPENFANG_HOME=/data + - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-} + - OPENAI_API_KEY=${OPENAI_API_KEY:-} + - 
GROQ_API_KEY=${GROQ_API_KEY:-} + env_file: + - .env + entrypoint: + - /bin/sh + - -ec + command: | + : > /data/config.toml + if [ -n "${OPENFANG_API_KEY:-}" ]; then + printf 'api_key = "%s"\n' "${OPENFANG_API_KEY}" >> /data/config.toml + fi + cat >> /data/config.toml < + +5. Follow the Paperclip onboarding flow in the browser. + +## Default Ports + +| Service | Port | Description | +| --------- | ---- | -------------- | +| Paperclip | 3100 | Web UI and API | + +## Important Environment Variables + +| Variable | Description | Default | +| ------------------------------- | ---------------------------------------- | ----------------------- | +| `PAPERCLIP_GIT_REF` | Git ref used for the source build | `main` | +| `PAPERCLIP_PORT_OVERRIDE` | Host port for Paperclip | `3100` | +| `PAPERCLIP_PUBLIC_URL` | Public URL for auth and invite flows | `http://localhost:3100` | +| `PAPERCLIP_ALLOWED_HOSTNAMES` | Extra allowed hostnames | `localhost` | +| `PAPERCLIP_DEPLOYMENT_MODE` | Deployment mode | `authenticated` | +| `PAPERCLIP_DEPLOYMENT_EXPOSURE` | Exposure mode | `private` | +| `DATABASE_URL` | Optional external PostgreSQL URL | - | +| `OPENAI_API_KEY` | OpenAI key for bundled local adapters | - | +| `ANTHROPIC_API_KEY` | Anthropic key for bundled local adapters | - | +| `TZ` | Container timezone | `UTC` | + +## Volumes + +- `paperclip_data`: Stores embedded PostgreSQL data, uploaded files, secrets, and runtime state. + +## Notes + +- If `DATABASE_URL` is not provided, Paperclip automatically uses embedded PostgreSQL. +- The upstream Docker image includes the UI and server in one container. +- The first source build can take several minutes. 
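The embedded-vs-external database choice in the first note can be sketched as a simple fallback; the real decision happens inside the Paperclip server, and the connection string below is a placeholder host, not a real endpoint:

```shell
# Mirrors the documented behavior: an empty DATABASE_URL means Paperclip
# falls back to its embedded PostgreSQL inside the paperclip_data volume.
pick_database() {
  if [ -n "$1" ]; then
    echo "external: $1"
  else
    echo "embedded PostgreSQL"
  fi
}

pick_database ""   # → embedded PostgreSQL
pick_database "postgres://user:pass@db.example.com:5432/paperclip"
```

In practice this means you only set `DATABASE_URL` in `.env` when pointing at an external PostgreSQL; leaving it unset requires no further configuration.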
+ +## References + +- [Paperclip Repository](https://github.com/paperclipai/paperclip) +- [Docker Deployment Guide](https://github.com/paperclipai/paperclip/blob/main/docs/deploy/docker.md) diff --git a/builds/paperclip/README.zh.md b/builds/paperclip/README.zh.md new file mode 100644 index 0000000..7b57cc9 --- /dev/null +++ b/builds/paperclip/README.zh.md @@ -0,0 +1,67 @@ +# Paperclip + +[English](README.md) + +Paperclip 是一个面向 AI 团队编排的开源平台。这个 Compose 配置会从上游源码构建 Docker 镜像,持久化整个 Paperclip Home 目录,并通过 3100 端口暴露 Web 界面。 + +## 快速开始 + +1. 复制环境变量示例文件: + + ```bash + cp .env.example .env + ``` + +2. 按需编辑 `.env`: + + - 如果你不通过 `http://localhost:3100` 访问,请修改 `PAPERCLIP_PUBLIC_URL` + - 如果要启用本地适配器,填写 `OPENAI_API_KEY` 和或 `ANTHROPIC_API_KEY` + - 如果要接入外部 PostgreSQL,而不是内置数据库,请设置 `DATABASE_URL` + +3. 启动服务: + + ```bash + docker compose up -d + ``` + +4. 打开界面: + + - + +5. 在浏览器中完成 Paperclip 的初始化流程。 + +## 默认端口 + +| 服务 | 端口 | 说明 | +| --------- | ---- | -------------- | +| Paperclip | 3100 | Web 界面与 API | + +## 关键环境变量 + +| 变量 | 说明 | 默认值 | +| ------------------------------- | ---------------------------- | ----------------------- | +| `PAPERCLIP_GIT_REF` | 用于源码构建的 Git 引用 | `main` | +| `PAPERCLIP_PORT_OVERRIDE` | Paperclip 对外端口 | `3100` | +| `PAPERCLIP_PUBLIC_URL` | 认证与邀请流程使用的公开 URL | `http://localhost:3100` | +| `PAPERCLIP_ALLOWED_HOSTNAMES` | 额外允许的主机名 | `localhost` | +| `PAPERCLIP_DEPLOYMENT_MODE` | 部署模式 | `authenticated` | +| `PAPERCLIP_DEPLOYMENT_EXPOSURE` | 暴露模式 | `private` | +| `DATABASE_URL` | 可选的外部 PostgreSQL 连接串 | - | +| `OPENAI_API_KEY` | OpenAI Key | - | +| `ANTHROPIC_API_KEY` | Anthropic Key | - | +| `TZ` | 容器时区 | `UTC` | + +## 数据卷 + +- `paperclip_data`:保存内置 PostgreSQL、上传文件、密钥和运行状态。 + +## 说明 + +- 如果没有设置 `DATABASE_URL`,Paperclip 会自动启用内置 PostgreSQL。 +- 上游 Docker 镜像已经包含前端和服务端,不需要再拆分多个容器。 +- 首次源码构建通常需要几分钟。 + +## 参考资料 + +- [Paperclip 仓库](https://github.com/paperclipai/paperclip) +- [Docker 部署文档](https://github.com/paperclipai/paperclip/blob/main/docs/deploy/docker.md) diff --git 
a/builds/paperclip/docker-compose.yaml b/builds/paperclip/docker-compose.yaml new file mode 100644 index 0000000..6fbc6cd --- /dev/null +++ b/builds/paperclip/docker-compose.yaml @@ -0,0 +1,53 @@ +x-defaults: &defaults + restart: unless-stopped + logging: + driver: json-file + options: + max-size: ${PAPERCLIP_LOG_MAX_SIZE:-100m} + max-file: '${PAPERCLIP_LOG_MAX_FILE:-3}' + +services: + paperclip: + <<: *defaults + build: + context: https://github.com/paperclipai/paperclip.git#${PAPERCLIP_GIT_REF:-main} + dockerfile: Dockerfile + image: ${GLOBAL_REGISTRY:-}alexsuntop/paperclip:${PAPERCLIP_GIT_REF:-main} + ports: + - '${PAPERCLIP_PORT_OVERRIDE:-3100}:3100' + environment: + - TZ=${TZ:-UTC} + - HOST=0.0.0.0 + - PORT=3100 + - SERVE_UI=true + - PAPERCLIP_HOME=/paperclip + - PAPERCLIP_DEPLOYMENT_MODE=${PAPERCLIP_DEPLOYMENT_MODE:-authenticated} + - PAPERCLIP_DEPLOYMENT_EXPOSURE=${PAPERCLIP_DEPLOYMENT_EXPOSURE:-private} + - PAPERCLIP_PUBLIC_URL=${PAPERCLIP_PUBLIC_URL:-http://localhost:3100} + - PAPERCLIP_ALLOWED_HOSTNAMES=${PAPERCLIP_ALLOWED_HOSTNAMES:-localhost} + - OPENAI_API_KEY=${OPENAI_API_KEY:-} + - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-} + - DATABASE_URL=${DATABASE_URL:-} + env_file: + - .env + volumes: + - paperclip_data:/paperclip + healthcheck: + test: + - CMD-SHELL + - curl -fsS http://127.0.0.1:3100/api/health >/dev/null || exit 1 + interval: 30s + timeout: 10s + retries: 5 + start_period: 60s + deploy: + resources: + limits: + cpus: ${PAPERCLIP_CPU_LIMIT:-2.00} + memory: ${PAPERCLIP_MEMORY_LIMIT:-4G} + reservations: + cpus: ${PAPERCLIP_CPU_RESERVATION:-0.50} + memory: ${PAPERCLIP_MEMORY_RESERVATION:-1G} + +volumes: + paperclip_data: diff --git a/src/openviking/.env.example b/src/openviking/.env.example new file mode 100644 index 0000000..ae8df99 --- /dev/null +++ b/src/openviking/.env.example @@ -0,0 +1,33 @@ +# Image configuration +OPENVIKING_VERSION=main + +# Network configuration +OPENVIKING_PORT_OVERRIDE=1933 
+OPENVIKING_ROOT_API_KEY=openviking-dev-root-key + +# Embedding model configuration +OPENVIKING_EMBEDDING_PROVIDER=openai +OPENVIKING_EMBEDDING_API_BASE=https://api.openai.com/v1 +OPENVIKING_EMBEDDING_API_KEY= +OPENVIKING_EMBEDDING_MODEL=text-embedding-3-small +OPENVIKING_EMBEDDING_DIMENSION=1536 + +# Vision / multimodal model configuration +OPENVIKING_VLM_PROVIDER=openai +OPENVIKING_VLM_API_BASE=https://api.openai.com/v1 +OPENVIKING_VLM_API_KEY= +OPENVIKING_VLM_MODEL=gpt-4o-mini + +# Logging +OPENVIKING_LOG_LEVEL=INFO +OPENVIKING_LOG_MAX_SIZE=100m +OPENVIKING_LOG_MAX_FILE=3 + +# Resources +OPENVIKING_CPU_LIMIT=2.00 +OPENVIKING_MEMORY_LIMIT=2G +OPENVIKING_CPU_RESERVATION=0.50 +OPENVIKING_MEMORY_RESERVATION=512M + +# Timezone +TZ=UTC diff --git a/src/openviking/README.md b/src/openviking/README.md new file mode 100644 index 0000000..ea7af9e --- /dev/null +++ b/src/openviking/README.md @@ -0,0 +1,65 @@ +# OpenViking + +[中文文档](README.zh.md) + +OpenViking is an agent-native context database from Volcengine. This Compose setup runs the official container image and bootstraps a minimal ov.conf from environment variables so the service can start with a single command. + +## Quick Start + +1. Copy the example environment file: + + ```bash + cp .env.example .env + ``` + +2. Edit `.env` and set at least: + + - `OPENVIKING_ROOT_API_KEY` + - `OPENVIKING_EMBEDDING_API_KEY` + - `OPENVIKING_VLM_API_KEY` + +3. Start the service: + + ```bash + docker compose up -d + ``` + +4. 
Verify health: + + ```bash + curl http://localhost:1933/health + ``` + +## Default Ports + +| Service | Port | Description | +| ---------- | ---- | ---------------------------- | +| OpenViking | 1933 | HTTP API and health endpoint | + +## Important Environment Variables + +| Variable | Description | Default | +| ------------------------------ | ----------------------------------------------- | ------------------------- | +| `OPENVIKING_VERSION` | OpenViking image tag | `main` | +| `OPENVIKING_PORT_OVERRIDE` | Host port for the HTTP API | `1933` | +| `OPENVIKING_ROOT_API_KEY` | Root API key required when binding to `0.0.0.0` | `openviking-dev-root-key` | +| `OPENVIKING_EMBEDDING_API_KEY` | API key for the embedding model | - | +| `OPENVIKING_EMBEDDING_MODEL` | Embedding model name | `text-embedding-3-small` | +| `OPENVIKING_VLM_API_KEY` | API key for the VLM / multimodal model | - | +| `OPENVIKING_VLM_MODEL` | VLM model name | `gpt-4o-mini` | +| `TZ` | Container timezone | `UTC` | + +## Volumes + +- `openviking_data`: Persistent workspace and local storage data. + +## Notes + +- This setup generates `ov.conf` at container start from `.env`, so no extra config file is required. +- The service can start without model API keys, but indexing and multimodal features will not work until valid credentials are provided. +- `/health` is unauthenticated and is used by the healthcheck. + +## References + +- [OpenViking Repository](https://github.com/volcengine/OpenViking) +- [Deployment Guide](https://github.com/volcengine/OpenViking/blob/main/docs/en/guides/03-deployment.md) diff --git a/src/openviking/README.zh.md b/src/openviking/README.zh.md new file mode 100644 index 0000000..3802e0c --- /dev/null +++ b/src/openviking/README.zh.md @@ -0,0 +1,65 @@ +# OpenViking + +[English](README.md) + +OpenViking 是火山引擎开源的 Agent 原生上下文数据库。这个 Compose 配置直接使用官方容器镜像,并在容器启动时根据环境变量生成最小可用的 ov.conf,因此可以用一条命令启动。 + +## 快速开始 + +1. 复制环境变量示例文件: + + ```bash + cp .env.example .env + ``` + +2. 
编辑 `.env`,至少设置以下变量: + + - `OPENVIKING_ROOT_API_KEY` + - `OPENVIKING_EMBEDDING_API_KEY` + - `OPENVIKING_VLM_API_KEY` + +3. 启动服务: + + ```bash + docker compose up -d + ``` + +4. 检查健康状态: + + ```bash + curl http://localhost:1933/health + ``` + +## 默认端口 + +| 服务 | 端口 | 说明 | +| ---------- | ---- | ----------------------- | +| OpenViking | 1933 | HTTP API 与健康检查接口 | + +## 关键环境变量 + +| 变量 | 说明 | 默认值 | +| ------------------------------ | ---------------------------------------- | ------------------------- | +| `OPENVIKING_VERSION` | OpenViking 镜像标签 | `main` | +| `OPENVIKING_PORT_OVERRIDE` | HTTP API 对外端口 | `1933` | +| `OPENVIKING_ROOT_API_KEY` | 服务监听 `0.0.0.0` 时需要的 Root API Key | `openviking-dev-root-key` | +| `OPENVIKING_EMBEDDING_API_KEY` | Embedding 模型的 API Key | - | +| `OPENVIKING_EMBEDDING_MODEL` | Embedding 模型名称 | `text-embedding-3-small` | +| `OPENVIKING_VLM_API_KEY` | 多模态模型 API Key | - | +| `OPENVIKING_VLM_MODEL` | 多模态模型名称 | `gpt-4o-mini` | +| `TZ` | 容器时区 | `UTC` | + +## 数据卷 + +- `openviking_data`:持久化工作区与本地存储数据。 + +## 说明 + +- 这个配置会在容器启动时根据 `.env` 生成 `ov.conf`,因此不需要额外准备配置文件。 +- 即使没有填写模型 API Key,服务通常也可以启动,但索引与多模态能力无法正常使用。 +- `/health` 端点不需要认证,Compose 健康检查会依赖它。 + +## 参考资料 + +- [OpenViking 仓库](https://github.com/volcengine/OpenViking) +- [部署文档](https://github.com/volcengine/OpenViking/blob/main/docs/zh/guides/03-deployment.md) diff --git a/src/openviking/docker-compose.yaml b/src/openviking/docker-compose.yaml new file mode 100644 index 0000000..a3294f8 --- /dev/null +++ b/src/openviking/docker-compose.yaml @@ -0,0 +1,85 @@ +x-defaults: &defaults + restart: unless-stopped + logging: + driver: json-file + options: + max-size: ${OPENVIKING_LOG_MAX_SIZE:-100m} + max-file: '${OPENVIKING_LOG_MAX_FILE:-3}' + +services: + openviking: + <<: *defaults + image: ${GHCR_IO_REGISTRY:-ghcr.io}/volcengine/openviking:${OPENVIKING_VERSION:-main} + ports: + - '${OPENVIKING_PORT_OVERRIDE:-1933}:1933' + environment: + - TZ=${TZ:-UTC} + env_file: + - .env + entrypoint: + - /bin/sh + - -ec + 
command: | + cat >/app/ov.conf <<'EOF' + EOF + # ov.conf body is rendered here from the variables in .env (contents elided) + healthcheck: + test: + - CMD-SHELL + - curl -fsS http://127.0.0.1:1933/health >/dev/null || exit 1 + interval: 30s + timeout: 10s + retries: 5 + start_period: 30s + deploy: + resources: + limits: + cpus: ${OPENVIKING_CPU_LIMIT:-2.00} + memory: ${OPENVIKING_MEMORY_LIMIT:-2G} + reservations: + cpus: ${OPENVIKING_CPU_RESERVATION:-0.50} + memory: ${OPENVIKING_MEMORY_RESERVATION:-512M} + +volumes: + openviking_data:
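The OpenViking README above notes that `ov.conf` is generated at container start from `.env`. The sketch below illustrates that general pattern in plain POSIX shell: render a config file from environment variables with a heredoc, using `${VAR:-default}` fallbacks just as the Compose file does. The `ov.conf` key names here (`embedding_model`, `log_level`) are illustrative assumptions, not OpenViking's actual configuration schema.

```shell
#!/bin/sh
# Sketch: render a config file from environment variables at startup.
# Key names are hypothetical; only the heredoc-from-env pattern is the point.
set -eu

# Pretend one variable was exported from .env while the other is unset,
# so the ${VAR:-default} fallback is exercised for it.
OPENVIKING_LOG_LEVEL=DEBUG
unset OPENVIKING_EMBEDDING_MODEL 2>/dev/null || true

conf="$(mktemp)"
cat >"$conf" <<EOF
embedding_model = ${OPENVIKING_EMBEDDING_MODEL:-text-embedding-3-small}
log_level = ${OPENVIKING_LOG_LEVEL:-INFO}
EOF

# Print the rendered file, then clean up.
cat "$conf"
rm -f "$conf"
```

Because the heredoc is unquoted (`<<EOF`, not `<<'EOF'`), the shell expands the `${VAR:-default}` expressions before the file is written, which is what lets a single image serve many configurations driven purely by `.env`.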