Compare commits

...

14 Commits

Author SHA1 Message Date
Sun-ZhenXing 3456de4586 feat: update GoModel 2026-05-08 11:48:15 +08:00
Sun-ZhenXing 5e8999c625 feat: add GoModel 2026-05-06 17:14:43 +08:00
Sun-ZhenXing c59a1e93f6 feat: add laminar 2026-05-06 09:55:21 +08:00
Sun-ZhenXing 5f8503df42 feat: add build turboocr 2026-04-29 11:54:59 +08:00
Sun-ZhenXing ce16588916 feat: add TurboOCR 2026-04-28 10:05:39 +08:00
Sun-ZhenXing 3483dd80f0 chore: update mineru 2026-04-19 14:12:11 +08:00
Summer Shen 0b5ba69cb0 feat: add more Agent services & easytier 2026-04-19 12:26:54 +08:00
Sun-ZhenXing 0e948befac refactor: signoz 2026-04-15 15:05:16 +08:00
Sun-ZhenXing ea1ca927c8 feat: add multica/ 2026-04-14 15:22:06 +08:00
Sun-ZhenXing 41c4e8fd4e feat: add Docker Compose repository guidelines and quick start instructions 2026-04-11 23:05:35 +08:00
Sun-ZhenXing 6ae63c5d86 feat: add shannon 2026-04-01 17:33:42 +08:00
Sun-ZhenXing b55fa9819b chore: update mineru 2026-03-30 14:17:32 +08:00
Sun-ZhenXing 54e549724d chore: update bifrost phoenix and ollama configs 2026-03-28 23:41:32 +08:00
Sun-ZhenXing 441b8a74f5 feat: add OpenViking DeerFlow Mattermost OpenFang and Paperclip services 2026-03-28 23:40:06 +08:00
150 changed files with 9870 additions and 570 deletions
@@ -1,69 +0,0 @@
---
applyTo: '**'
---
Compose Anything is a collection of high-quality, production-ready, and portable Docker Compose configuration files. The primary objective is to let users deploy services out of the box with minimal configuration while following industry best practices.
The architecture focuses on modularity, security, and orchestrator compatibility (e.g., easy migration to Kubernetes). The technical challenge lies in balancing simplicity (zero-config startup) with robustness (resource limits, health checks, multi-arch support, and security baselines).
## Constraints
1. Out-of-the-box
- Configurations should work out-of-the-box with no extra steps (at most, provide a `.env` file).
2. Simple commands
- Each project ships a single `docker-compose.yaml` file.
- Command complexity should not exceed `docker compose up -d`; if more is needed, provide a `Makefile`.
- For initialization, prefer `healthcheck` with `depends_on` using `condition: service_healthy` to orchestrate startup order.
3. Stable versions
- Pin to the latest stable version instead of `latest`.
- Expose image versions via environment variables (e.g., `FOO_VERSION`).
4. Configuration conventions
- Prefer environment variables over complex CLI flags;
- Pass secrets via env vars or mounted files, never hardcode;
- Provide sensible defaults to enable zero-config startup;
- A commented `.env.example` is required;
- Env var naming: UPPER_SNAKE_CASE with service prefix (e.g., `POSTGRES_*`); use `*_PORT_OVERRIDE` for host port overrides.
5. Profiles
- Use Profiles for optional components/dependencies;
- Recommended names: `gpu` (acceleration), `metrics` (observability/exporters), `dev` (dev-only features).
6. Cross-platform & architectures
- Where images support it, ensure Debian 12+/Ubuntu 22.04+, Windows 10+, macOS 12+ work;
- Support x86-64 and ARM64 as consistently as possible;
- Avoid Linux-only host paths like `/etc/localtime` and `/etc/timezone`; prefer `TZ` env var for time zone.
7. Volumes & mounts
- Prefer relative paths for configuration to improve portability;
- Prefer named volumes for data directories to avoid permission/compat issues of host paths;
- If host paths are necessary, provide a top-level directory variable (e.g., `DATA_DIR`).
8. Resources & logging
- Always limit CPU and memory to prevent resource exhaustion;
- For GPU services, enable a single GPU by default via `deploy.resources.reservations.devices` (maps to device requests) or `gpus` where applicable;
- Limit logs (`json-file` driver: `max-size`/`max-file`).
9. Healthchecks
- Every service should define a `healthcheck` with suitable `interval`, `timeout`, `retries`, and `start_period`;
- Use `depends_on.condition: service_healthy` for dependency chains.
10. Security baseline (apply when possible)
- Run as non-root (expose `PUID`/`PGID` or set `user: "1000:1000"`);
- Read-only root filesystem (`read_only: true`), use `tmpfs`/writable mounts for required paths;
- Least privilege: `cap_drop: ["ALL"]`, add back only what's needed via `cap_add`;
- Avoid `container_name` (hurts scaling and reusable network aliases);
- If exposing Docker socket or other high-risk mounts, clearly document risks and alternatives.
11. Documentation & Discoverability
- Provide clear docs and examples (include admin/initialization notes, and security/license notes when relevant);
- Keep docs LLM-friendly;
- List primary env vars and default ports in the README, and link to `README.md` / `README.zh.md`.
Reference template: [`compose-template.yaml`](../../.compose-template.yaml) in the repo root.
To find image tags, try fetching a URL like `https://hub.docker.com/v2/repositories/library/nginx/tags?page_size=1&ordering=last_updated`.
After updating any of the services, update `/README.md` and `/README.zh.md` to reflect the changes.
## Final Checklist
1. Is .env.example present and fully commented in English?
2. Are CPU/Memory limits applied?
3. Is container_name removed?
4. Are healthcheck and service_healthy conditions correctly implemented?
5. Do the Chinese docs use Chinese punctuation, with spaces between Chinese and English text?
6. Have the root repository README files been updated to include the new service?
**注意**:所有中文的文档都使用中文的标点符号,如 “,”、“()” 等,中文和英文之间要留有空格。对于 Docker Compose 文件和 `.env.example` 文件中的注释部分,请使用英语而不是中文。请为每个服务提供英文说明 README.md 和中文说明 `README.zh.md`
+85
View File
@@ -0,0 +1,85 @@
# Docker Compose Repository Guidelines
Compose Anything is a collection of production-ready, portable Docker Compose stacks. The default experience should remain simple: users should be able to enter a service directory and start it with `docker compose up -d`, while still getting sensible defaults for resource limits, health checks, security, and documentation.
## Primary Goals
1. Keep every stack easy to start and easy to understand.
2. Prefer portable Compose patterns that work across Windows, macOS, and Linux.
3. Default to production-aware settings instead of demo-only shortcuts.
4. Keep service documentation and root indexes accurate whenever a service changes.
## Required Workflow For Service Changes
1. Read the existing service folder before editing anything.
2. Use the repo root `.compose-template.yaml` as the structural reference when applicable.
3. Update these files together when a service changes: `docker-compose.yaml`, `.env.example`, `README.md`, and `README.zh.md`.
4. Update the root `README.md` and `README.zh.md` whenever a service is added, renamed, removed, or needs a new quick-start entry.
5. Keep the default startup path within `docker compose up -d`. If extra setup is unavoidable, document it clearly and prefer a `Makefile` over ad-hoc instructions.
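Where extra setup is genuinely unavoidable, a small `Makefile` keeps the command surface close to `docker compose up -d`. A minimal illustrative sketch (not a required template; target names are placeholders):

```makefile
# Recipe lines must be indented with tabs, as Make requires.
.PHONY: up down logs

up: .env        ## start the stack
	docker compose up -d

down:           ## stop and remove the stack
	docker compose down

logs:           ## follow container logs
	docker compose logs -f

.env:           ## create .env from the template on first use
	cp .env.example .env
```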
## Compose Standards
1. Out-of-the-box startup
- A stack should work with zero extra steps, except optionally creating a `.env` file from `.env.example`.
- Defaults must be usable for local evaluation without forcing users to edit configuration first.
2. Command simplicity
- Each project should ship a single `docker-compose.yaml` file.
   - Initialization order should use `healthcheck` plus `depends_on.condition: service_healthy` whenever a dependency chain exists (see the combined sketch after this list).
3. Version pinning
- Pin to a stable image version instead of `latest` whenever a stable tag exists.
- Expose image versions via environment variables such as `REDIS_VERSION` or `POSTGRES_VERSION`.
4. Configuration style
- Prefer environment variables over long CLI flags.
- Never hardcode secrets.
- Provide a fully commented `.env.example` in English.
- Use UPPER_SNAKE_CASE names with a service prefix.
- Use `*_PORT_OVERRIDE` for host port overrides.
5. Profiles
- Use Compose profiles for optional components only.
- Preferred profile names: `gpu`, `metrics`, `dev`.
6. Cross-platform support
- Favor patterns that work on Debian 12+, Ubuntu 22.04+, Windows 10+, and macOS 12+ when upstream images support them.
- Support both x86-64 and ARM64 as consistently as practical.
- Avoid Linux-only host paths such as `/etc/localtime`; prefer `TZ`.
7. Storage and mounts
- Prefer named volumes for application data.
- Prefer relative paths for repo-managed configuration files.
- If host paths are necessary, expose a top-level directory variable such as `DATA_DIR`.
8. Resources and logging
- Every service must define CPU and memory limits.
- GPU services should default to one GPU via `deploy.resources.reservations.devices` or `gpus`.
- Limit container logs with the `json-file` driver and `max-size` / `max-file`.
9. Health checks
- Every long-running service should define a meaningful `healthcheck`.
- Tune `interval`, `timeout`, `retries`, and `start_period` for the actual startup profile of the service.
10. Security baseline
- Run as non-root when practical.
- Use `read_only: true` plus writable mounts or `tmpfs` where feasible.
- Default to `cap_drop: ["ALL"]` and add back only what is required.
- Do not use `container_name`.
- If a stack requires the Docker socket or another high-risk mount, document the risk and safer alternatives.
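A minimal sketch showing how these standards combine. The service names, image, `/health` endpoint, and `APP_*` variables are illustrative placeholders (and the app image is assumed to ship `wget`), not taken from any stack in this repo:

```yaml
services:
  app:
    # Pinned version exposed via an env var (standard 3)
    image: example/app:${APP_VERSION:-1.2.3}
    ports:
      - '${APP_PORT_OVERRIDE:-8080}:8080'   # host port override (standard 4)
    environment:
      - TZ=${TZ:-UTC}
    depends_on:
      db:
        condition: service_healthy          # startup ordering (standard 2)
    healthcheck:
      test: [CMD-SHELL, 'wget -qO- http://localhost:8080/health || exit 1']
      interval: 30s
      timeout: 5s
      retries: 5
      start_period: 30s
    cap_drop: ["ALL"]                       # least privilege (standard 10)
    logging:
      driver: json-file
      options:
        max-size: 100m
        max-file: '3'
    deploy:
      resources:
        limits:                             # mandatory limits (standard 8)
          cpus: '1'
          memory: 512M
  db:
    image: postgres:17-alpine
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-changeme}
    volumes:
      - app_db_data:/var/lib/postgresql/data   # named volume (standard 7)
    healthcheck:
      test: [CMD-SHELL, 'pg_isready -U postgres']
      interval: 10s
      timeout: 5s
      retries: 10
      start_period: 20s

volumes:
  app_db_data:
```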
## Documentation Standards
1. Every service must provide both `README.md` and `README.zh.md`.
2. Service READMEs should at minimum cover: purpose, services, quick start, key environment variables, storage, and security notes when relevant.
3. The root `README.md` and `README.zh.md` should remain useful as entry points, not just service indexes. Include concise quick-start guidance and at least one concrete example when it helps discovery.
4. List the main environment variables and default ports in the service README.
5. Keep documentation LLM-friendly: predictable headings, short paragraphs, and concrete command examples.
Reference template: `/.compose-template.yaml`
If you need image tags, check the Docker Hub API, for example:
`https://hub.docker.com/v2/repositories/library/nginx/tags?page_size=1&ordering=last_updated`
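For instance, this one-liner (assuming `curl` and `jq` are available) prints the most recently updated `library/nginx` tag:

```bash
curl -s 'https://hub.docker.com/v2/repositories/library/nginx/tags?page_size=1&ordering=last_updated' \
  | jq -r '.results[0].name'
```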
## Final Checklist
1. Is `.env.example` present and fully commented in English?
2. Are CPU and memory limits defined?
3. Has `container_name` been avoided or removed?
4. Are `healthcheck` and `depends_on.condition: service_healthy` used correctly?
5. Are `README.md` and `README.zh.md` both updated for the service?
6. Are the root `README.md` and `README.zh.md` updated if discoverability changed?
7. Are Chinese docs using Chinese punctuation, with spaces between Chinese and English terms?
**注意**:所有中文文档都使用中文标点,如 “,”、“()” 等,中文与英文之间保留空格。Docker Compose 文件和 `.env.example` 文件中的注释必须使用英文。每个服务都必须提供英文 `README.md` 和中文 `README.zh.md`
+47 -5
View File
@@ -4,17 +4,45 @@
Compose Anything helps users quickly deploy various services by providing a set of high-quality Docker Compose configuration files. These configurations constrain resource usage, can be easily migrated to systems like K8S, and are easy to understand and modify.
## Quick Start
Choose a service directory, then start it with Docker Compose:
```bash
git clone https://github.com/Sun-ZhenXing/compose-anything.git
cd compose-anything/src/<service>
docker compose up -d
```
Most stacks are designed to run with the default settings. Use `.env.example` as a reference, and only create a `.env` file when you need to override ports, passwords, or image versions.
### Example: Start Redis
```bash
cd src/redis
docker compose up -d
docker compose exec redis redis-cli ping
```
If the stack is healthy, the final command returns `PONG`. By default, Redis is exposed on `localhost:6379`. For authentication, custom ports, or image changes, see [src/redis](./src/redis).
## Build Services
These services require building custom Docker images from source.
| Service | Version |
| ------------------------------------------- | ------- |
| [CubeSandbox](./builds/cube-sandbox) | 0.1.7 |
| [Debian DinD](./builds/debian-dind) | 0.1.2 |
| [DeerFlow](./builds/deer-flow) | 2.0 |
| [goose](./builds/goose) | 1.18.0 |
| [IOPaint](./builds/io-paint) | 1.6.0 |
| [K3s inside DinD](./builds/k3s-inside-dind) | 0.2.2 |
| [MinerU vLLM](./builds/mineru) | 2.7.6 |
| [MinerU vLLM](./builds/mineru) | 3.1.0 |
| [Multica](./builds/multica) | v0.1.32 |
| [OpenFang](./builds/openfang) | 0.1.0 |
| [Paperclip](./builds/paperclip) | main |
| [TurboOCR](./builds/turboocr) | v2.1.1 |
## Supported Services
@@ -29,7 +57,8 @@ These services require building custom Docker images from source.
| [Apache Pulsar](./src/pulsar) | 4.0.7 |
| [Apache RocketMQ](./src/rocketmq) | 5.3.1 |
| [Agentgateway](./src/agentgateway) | 0.11.2 |
| [Bifrost Gateway](./src/bifrost-gateway) | v1.3.63 |
| [AnythingLLM](./src/anythingllm) | latest |
| [Bifrost Gateway](./src/bifrost-gateway) | v1.4.17 |
| [Bolt.diy](./apps/bolt-diy) | latest |
| [Budibase](./src/budibase) | 3.23.0 |
| [BuildingAI](./apps/buildingai) | latest |
@@ -47,6 +76,7 @@ These services require building custom Docker images from source.
| [Doris](./src/doris) | 3.0.0 |
| [DuckDB](./src/duckdb) | v1.1.3 |
| [Easy Dataset](./apps/easy-dataset) | 1.5.1 |
| [EasyTier](./src/easytier) | v2.6.0 |
| [Elasticsearch](./src/elasticsearch) | 9.3.0 |
| [etcd](./src/etcd) | 3.6.0 |
| [FalkorDB](./src/falkordb) | v4.14.11 |
@@ -58,6 +88,7 @@ These services require building custom Docker images from source.
| [Gitea](./src/gitea) | 1.25.4-rootless |
| [GitLab Runner](./src/gitlab-runner) | 17.10.1 |
| [GitLab](./src/gitlab) | 18.8.3-ce.0 |
| [GoModel](./src/gomodel) | v0.1.27 |
| [GPUStack](./src/gpustack) | v0.5.3 |
| [Grafana](./src/grafana) | 12.3.2 |
| [Grafana Loki](./src/loki) | 3.3.2 |
@@ -74,22 +105,27 @@ These services require building custom Docker images from source.
| [Kodbox](./src/kodbox) | 1.62 |
| [Kong](./src/kong) | 3.8.0 |
| [Langflow](./apps/langflow) | latest |
| [Laminar](./src/laminar) | latest |
| [Langfuse](./apps/langfuse) | 3.115.0 |
| [Letta](./src/letta) | 0.16.7 |
| [LibreChat](./apps/librechat) | v0.8.4 |
| [LibreOffice](./src/libreoffice) | latest |
| [libSQL Server](./src/libsql) | latest |
| [LiteLLM](./src/litellm) | main-stable |
| [llama-swap](./src/llama-swap) | cpu |
| [llama.cpp](./src/llama.cpp) | server |
| [LMDeploy](./src/lmdeploy) | v0.11.1 |
| [LobeChat](./src/lobe-chat) | 1.143.3 |
| [Logstash](./src/logstash) | 8.16.1 |
| [MariaDB Galera Cluster](./src/mariadb-galera) | 11.7.2 |
| [Mattermost](./apps/mattermost) | 11.3 |
| [Memos](./src/memos) | 0.25.3 |
| [Milvus Standalone Embed](./src/milvus-standalone-embed) | v2.6.7 |
| [Milvus Standalone](./src/milvus-standalone) | v2.6.7 |
| [Minecraft Bedrock Server](./src/minecraft-bedrock-server) | latest |
| [MinIO](./src/minio) | 0.20260202 |
| [MLflow](./src/mlflow) | v2.20.2 |
| [MoltBot](./apps/moltbot) | main |
| [OpenClaw](./apps/openclaw) | 2026.2.3 |
| [MongoDB ReplicaSet Single](./src/mongodb-replicaset-single) | 8.2.3 |
| [MongoDB ReplicaSet](./src/mongodb-replicaset) | 8.2.3 |
| [MongoDB Standalone](./src/mongodb-standalone) | 8.2.3 |
@@ -107,7 +143,8 @@ These services require building custom Docker images from source.
| [Odoo](./src/odoo) | 19.0 |
| [Ollama](./src/ollama) | 0.14.3 |
| [Open WebUI](./src/open-webui) | main |
| [Phoenix (Arize)](./src/phoenix) | 13.3.0 |
| [Phoenix (Arize)](./src/phoenix) | 13.19.2 |
| [Pingap](./src/pingap) | 0.12.7-full |
| [Pingora Proxy Manager](./src/pingora-proxy-manager) | v1.0.3 |
| [Open WebUI Rust](./src/open-webui-rust) | latest |
| [OpenCode](./src/opencode) | 1.1.27 |
@@ -120,6 +157,7 @@ These services require building custom Docker images from source.
| [OpenObserve](./apps/openobserve) | v0.50.0 |
| [OpenSearch](./src/opensearch) | 2.19.0 |
| [OpenTelemetry Collector](./src/otel-collector) | 0.115.1 |
| [OpenViking](./src/openviking) | 0.1.0 |
| [Overleaf](./src/overleaf) | 5.2.1 |
| [PocketBase](./src/pocketbase) | 0.30.0 |
| [Podman](./src/podman) | v5.7.1 |
@@ -131,6 +169,7 @@ These services require building custom Docker images from source.
| [PyTorch](./src/pytorch) | 2.6.0 |
| [Qdrant](./src/qdrant) | 1.15.4 |
| [RabbitMQ](./src/rabbitmq) | 4.2.3 |
| [RAGFlow](./apps/ragflow) | v0.24.0 |
| [Ray](./src/ray) | 2.42.1 |
| [Redpanda](./src/redpanda) | v24.3.1 |
| [Redis Cluster](./src/redis-cluster) | 8.2.1 |
@@ -140,8 +179,10 @@ These services require building custom Docker images from source.
| [Restate](./src/restate) | 1.5.3 |
| [SearXNG](./src/searxng) | 2025.1.20-1ce14ef99 |
| [Selenium](./src/selenium) | 144.0-20260120 |
| [Shannon](./apps/shannon) | v0.3.1 |
| [SigNoz](./src/signoz) | 0.55.0 |
| [Sim](./apps/sim) | latest |
| [Skyvern](./apps/skyvern) | v1.0.31 |
| [Stable Diffusion WebUI](./apps/stable-diffusion-webui-docker) | latest |
| [Stirling-PDF](./apps/stirling-pdf) | latest |
| [Temporal](./src/temporal) | 1.24.2 |
@@ -149,6 +190,7 @@ These services require building custom Docker images from source.
| [TiKV](./src/tikv) | v8.5.0 |
| [Trigger.dev](./src/trigger-dev) | v4.2.0 |
| [TrailBase](./src/trailbase) | 0.22.4 |
| [TurboOCR](./src/turboocr) | v2.1.1 |
| [Valkey Cluster](./src/valkey-cluster) | 8.0 |
| [Valkey](./src/valkey) | 8.0 |
| [Verdaccio](./src/verdaccio) | 6.1.2 |
@@ -184,7 +226,7 @@ These services require building custom Docker images from source.
| [OpenWeather](./mcp-servers/openweather) | latest |
| [Paper Search](./mcp-servers/paper-search) | latest |
| [Playwright](./mcp-servers/playwright) | latest |
| [Redis MCP](./mcp-servers/redis-mcp) | latest |
| [Redis MCP](./mcp-servers/redis) | latest |
| [Rust Filesystem](./mcp-servers/rust-mcp-filesystem) | latest |
| [Sequential Thinking](./mcp-servers/sequentialthinking) | latest |
| [SQLite](./mcp-servers/sqlite) | latest |
+53 -11
View File
@@ -4,17 +4,45 @@
Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,帮助用户快速部署各种服务。这些配置约束了资源使用,可快速迁移到 K8S 等系统,并且易于理解和修改。
## 快速开始
先进入目标服务目录,再使用 Docker Compose 启动:
```bash
git clone https://github.com/Sun-ZhenXing/compose-anything.git
cd compose-anything/src/<service>
docker compose up -d
```
大多数配置都可以直接使用默认值启动。`.env.example` 用于说明可选配置项;只有在你需要覆盖端口、密码或镜像版本时,才需要额外创建 `.env` 文件。
### 示例:快速启动 Redis
```bash
cd src/redis
docker compose up -d
docker compose exec redis redis-cli ping
```
如果服务正常,最后一条命令会返回 `PONG`。默认情况下,Redis 会暴露在 `localhost:6379`。如果需要认证、自定义端口或调整镜像版本,请查看 [src/redis](./src/redis)。
## 构建服务
这些服务需要从源代码构建自定义 Docker 镜像。
| 服务 | 版本 |
| ------------------------------------------- | ------ |
| [Debian DinD](./builds/debian-dind) | 0.1.2 |
| [goose](./builds/goose) | 1.18.0 |
| [IOPaint](./builds/io-paint) | 1.6.0 |
| [K3s inside DinD](./builds/k3s-inside-dind) | 0.2.2 |
| [MinerU vLLM](./builds/mineru) | 2.7.6 |
| 服务 | 版本 |
| ------------------------------------------- | ------- |
| [CubeSandbox](./builds/cube-sandbox) | 0.1.7 |
| [Debian DinD](./builds/debian-dind) | 0.1.2 |
| [DeerFlow](./builds/deer-flow) | 2.0 |
| [goose](./builds/goose) | 1.18.0 |
| [IOPaint](./builds/io-paint) | 1.6.0 |
| [K3s inside DinD](./builds/k3s-inside-dind) | 0.2.2 |
| [MinerU vLLM](./builds/mineru) | 3.1.0 |
| [Multica](./builds/multica) | v0.1.32 |
| [OpenFang](./builds/openfang) | 0.1.0 |
| [Paperclip](./builds/paperclip) | main |
| [TurboOCR](./builds/turboocr) | v2.1.1 |
## 已经支持的服务
@@ -29,7 +57,8 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
| [Apache Pulsar](./src/pulsar) | 4.0.7 |
| [Apache RocketMQ](./src/rocketmq) | 5.3.1 |
| [Agentgateway](./src/agentgateway) | 0.11.2 |
| [Bifrost Gateway](./src/bifrost-gateway) | v1.3.63 |
| [AnythingLLM](./src/anythingllm) | latest |
| [Bifrost Gateway](./src/bifrost-gateway) | v1.4.17 |
| [Bolt.diy](./apps/bolt-diy) | latest |
| [Budibase](./src/budibase) | 3.23.0 |
| [BuildingAI](./apps/buildingai) | latest |
@@ -47,6 +76,7 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
| [Doris](./src/doris) | 3.0.0 |
| [DuckDB](./src/duckdb) | v1.1.3 |
| [Easy Dataset](./apps/easy-dataset) | 1.5.1 |
| [EasyTier](./src/easytier) | v2.6.0 |
| [Elasticsearch](./src/elasticsearch) | 9.3.0 |
| [etcd](./src/etcd) | 3.6.0 |
| [FalkorDB](./src/falkordb) | v4.14.11 |
@@ -58,6 +88,7 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
| [Gitea](./src/gitea) | 1.25.4-rootless |
| [GitLab Runner](./src/gitlab-runner) | 17.10.1 |
| [GitLab](./src/gitlab) | 18.8.3-ce.0 |
| [GoModel](./src/gomodel) | v0.1.27 |
| [GPUStack](./src/gpustack) | v0.5.3 |
| [Grafana](./src/grafana) | 12.3.2 |
| [Grafana Loki](./src/loki) | 3.3.2 |
@@ -74,22 +105,27 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
| [Kodbox](./src/kodbox) | 1.62 |
| [Kong](./src/kong) | 3.8.0 |
| [Langflow](./apps/langflow) | latest |
| [Laminar](./src/laminar) | latest |
| [Langfuse](./apps/langfuse) | 3.115.0 |
| [Letta](./src/letta) | 0.16.7 |
| [LibreChat](./apps/librechat) | v0.8.4 |
| [LibreOffice](./src/libreoffice) | latest |
| [libSQL Server](./src/libsql) | latest |
| [LiteLLM](./src/litellm) | main-stable |
| [llama-swap](./src/llama-swap) | cpu |
| [llama.cpp](./src/llama.cpp) | server |
| [LMDeploy](./src/lmdeploy) | v0.11.1 |
| [LobeChat](./src/lobe-chat) | 1.143.3 |
| [Logstash](./src/logstash) | 8.16.1 |
| [MariaDB Galera Cluster](./src/mariadb-galera) | 11.7.2 |
| [Mattermost](./apps/mattermost) | 11.3 |
| [Memos](./src/memos) | 0.25.3 |
| [Milvus Standalone Embed](./src/milvus-standalone-embed) | v2.6.7 |
| [Milvus Standalone](./src/milvus-standalone) | v2.6.7 |
| [Minecraft Bedrock Server](./src/minecraft-bedrock-server) | latest |
| [MinIO](./src/minio) | 0.20260202 |
| [MLflow](./src/mlflow) | v2.20.2 |
| [MoltBot](./apps/moltbot) | main |
| [OpenClaw](./apps/openclaw) | 2026.2.3 |
| [MongoDB ReplicaSet Single](./src/mongodb-replicaset-single) | 8.2.3 |
| [MongoDB ReplicaSet](./src/mongodb-replicaset) | 8.2.3 |
| [MongoDB Standalone](./src/mongodb-standalone) | 8.2.3 |
@@ -107,7 +143,8 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
| [Odoo](./src/odoo) | 19.0 |
| [Ollama](./src/ollama) | 0.14.3 |
| [Open WebUI](./src/open-webui) | main |
| [Phoenix (Arize)](./src/phoenix) | 13.3.0 |
| [Phoenix (Arize)](./src/phoenix) | 13.19.2 |
| [Pingap](./src/pingap) | 0.12.7-full |
| [Pingora Proxy Manager](./src/pingora-proxy-manager) | v1.0.3 |
| [Open WebUI Rust](./src/open-webui-rust) | latest |
| [OpenCode](./src/opencode) | 1.1.27 |
@@ -120,6 +157,7 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
| [OpenObserve](./apps/openobserve) | v0.50.0 |
| [OpenSearch](./src/opensearch) | 2.19.0 |
| [OpenTelemetry Collector](./src/otel-collector) | 0.115.1 |
| [OpenViking](./src/openviking) | 0.1.0 |
| [Overleaf](./src/overleaf) | 5.2.1 |
| [PocketBase](./src/pocketbase) | 0.30.0 |
| [Podman](./src/podman) | v5.7.1 |
@@ -131,6 +169,7 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
| [PyTorch](./src/pytorch) | 2.6.0 |
| [Qdrant](./src/qdrant) | 1.15.4 |
| [RabbitMQ](./src/rabbitmq) | 4.2.3 |
| [RAGFlow](./apps/ragflow) | v0.24.0 |
| [Ray](./src/ray) | 2.42.1 |
| [Redpanda](./src/redpanda) | v24.3.1 |
| [Redis Cluster](./src/redis-cluster) | 8.2.1 |
@@ -140,8 +179,10 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
| [Restate](./src/restate) | 1.5.3 |
| [SearXNG](./src/searxng) | 2025.1.20-1ce14ef99 |
| [Selenium](./src/selenium) | 144.0-20260120 |
| [Shannon](./apps/shannon) | v0.3.1 |
| [SigNoz](./src/signoz) | 0.55.0 |
| [Sim](./apps/sim) | latest |
| [Skyvern](./apps/skyvern) | v1.0.31 |
| [Stable Diffusion WebUI](./apps/stable-diffusion-webui-docker) | latest |
| [Stirling-PDF](./apps/stirling-pdf) | latest |
| [Temporal](./src/temporal) | 1.24.2 |
@@ -149,6 +190,7 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
| [TiKV](./src/tikv) | v8.5.0 |
| [Trigger.dev](./src/trigger-dev) | v4.2.0 |
| [TrailBase](./src/trailbase) | 0.22.4 |
| [TurboOCR](./src/turboocr) | v2.1.1 |
| [Valkey Cluster](./src/valkey-cluster) | 8.0 |
| [Valkey](./src/valkey) | 8.0 |
| [Verdaccio](./src/verdaccio) | 6.1.2 |
@@ -184,7 +226,7 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
| [OpenWeather](./mcp-servers/openweather) | latest |
| [Paper Search](./mcp-servers/paper-search) | latest |
| [Playwright](./mcp-servers/playwright) | latest |
| [Redis MCP](./mcp-servers/redis-mcp) | latest |
| [Redis MCP](./mcp-servers/redis) | latest |
| [Rust Filesystem](./mcp-servers/rust-mcp-filesystem) | latest |
| [Sequential Thinking](./mcp-servers/sequentialthinking) | latest |
| [SQLite](./mcp-servers/sqlite) | latest |
+50
View File
@@ -0,0 +1,50 @@
# Global Registry Prefix (optional)
# GLOBAL_REGISTRY=
# Service Versions
LIBRECHAT_VERSION=v0.8.4
MONGODB_VERSION=8.0
MEILISEARCH_VERSION=v1.12.8
# Timezone
TZ=UTC
# Host port for the LibreChat web UI
LIBRECHAT_PORT_OVERRIDE=3080
# Security Secrets (CHANGEME: generate with: openssl rand -hex 32)
JWT_SECRET=changeme_jwt_secret_please_change_CHANGEME
JWT_REFRESH_SECRET=changeme_jwt_refresh_secret_CHANGEME
MEILI_MASTER_KEY=changeme_meili_master_key_CHANGEME
# Encryption Keys
# CREDS_KEY must be exactly 32 characters
CREDS_KEY=changeme_creds_key_32_chars_only
# CREDS_IV must be exactly 16 characters
CREDS_IV=changeme_iv_16ch
# Registration
ALLOW_REGISTRATION=true
ALLOW_SOCIAL_LOGIN=false
# LLM Provider API Keys (optional; configure via UI or here)
# OPENAI_API_KEY=sk-...
# ANTHROPIC_API_KEY=sk-ant-...
# Resource Limits - LibreChat
LIBRECHAT_CPU_LIMIT=2
LIBRECHAT_MEMORY_LIMIT=2G
LIBRECHAT_CPU_RESERVATION=0.5
LIBRECHAT_MEMORY_RESERVATION=512M
# Resource Limits - MongoDB
MONGODB_CPU_LIMIT=1
MONGODB_MEMORY_LIMIT=1G
MONGODB_CPU_RESERVATION=0.25
MONGODB_MEMORY_RESERVATION=256M
# Resource Limits - Meilisearch
MEILISEARCH_CPU_LIMIT=0.5
MEILISEARCH_MEMORY_LIMIT=512M
MEILISEARCH_CPU_RESERVATION=0.1
MEILISEARCH_MEMORY_RESERVATION=128M
+82
View File
@@ -0,0 +1,82 @@
# LibreChat
[English](./README.md) | [中文](./README.zh.md)
Quick start: <https://docs.librechat.ai>.
This service deploys LibreChat, an open-source AI chat platform that supports OpenAI, Anthropic, Google, Ollama, and many other providers in a single unified interface with conversation history, file uploads, code execution, and multi-user support.
## Services
- **librechat**: The LibreChat web application (Node.js).
- **mongodb**: MongoDB database for conversation and user data.
- **meilisearch**: Full-text search engine for message indexing.
## Quick Start
1. Copy `.env.example` to `.env`:
```bash
cp .env.example .env
```
2. Update the secrets in `.env` (see the generation commands after this list; note the exact length requirements for `CREDS_KEY` and `CREDS_IV`):
```
JWT_SECRET, JWT_REFRESH_SECRET, MEILI_MASTER_KEY, CREDS_KEY, CREDS_IV
```
3. Start the services:
```bash
docker compose up -d
```
4. Open `http://localhost:3080` and register the first user account.
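One way to generate suitable values: `openssl rand -hex N` emits `2*N` characters, so the exact-length fields need smaller byte counts:

```bash
openssl rand -hex 32   # JWT_SECRET / JWT_REFRESH_SECRET / MEILI_MASTER_KEY
openssl rand -hex 16   # CREDS_KEY: exactly 32 characters
openssl rand -hex 8    # CREDS_IV: exactly 16 characters
```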
## Core Environment Variables
| Variable | Description | Default |
| --------------------- | -------------------------------------------------------- | ---------------------------- |
| `LIBRECHAT_VERSION` | Image version | `v0.8.4` |
| `LIBRECHAT_PORT_OVERRIDE` | Host port for the web UI | `3080` |
| `JWT_SECRET` | JWT signing secret (min 32 chars) — **CHANGEME** | placeholder |
| `JWT_REFRESH_SECRET` | JWT refresh signing secret — **CHANGEME** | placeholder |
| `MEILI_MASTER_KEY` | Meilisearch master key — **CHANGEME** | placeholder |
| `CREDS_KEY` | Encryption key for stored credentials (exactly 32 chars) | placeholder |
| `CREDS_IV` | Encryption IV (exactly 16 chars) | placeholder |
| `ALLOW_REGISTRATION` | Allow new user registration | `true` |
| `OPENAI_API_KEY` | OpenAI API key (optional; can also configure in UI) | *(empty)* |
| `ANTHROPIC_API_KEY` | Anthropic API key (optional) | *(empty)* |
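As an example, a hypothetical `.env` fragment that adds a provider key and locks down signups once the admin account exists:

```bash
OPENAI_API_KEY=sk-...      # replace with a real key
ALLOW_REGISTRATION=false   # after creating the admin account
```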
## Volumes
- `librechat_images`: User-uploaded images served by the web UI.
- `librechat_logs`: Application log files.
- `librechat_mongo_data`: MongoDB data persistence.
- `librechat_meilisearch_data`: Meilisearch index data.
## Ports
- **3080**: LibreChat web UI
## Security Notes
- Generate all secrets before any external exposure; see the Quick Start commands for the exact lengths required by `CREDS_KEY` and `CREDS_IV`.
- `CREDS_KEY` and `CREDS_IV` encrypt stored API keys — losing them makes stored credentials unrecoverable.
- Set `ALLOW_REGISTRATION=false` after creating admin accounts to lock down signups.
## Resource Requirements
| Service | CPU Limit | Memory Limit |
| ------------ | --------- | ------------ |
| librechat | 2 | 2 GB |
| mongodb | 1 | 1 GB |
| meilisearch | 0.5 | 512 MB |
Total recommended: **4+ GB RAM**.
## Documentation
- [LibreChat Docs](https://docs.librechat.ai)
- [GitHub](https://github.com/danny-avila/LibreChat)
+82
View File
@@ -0,0 +1,82 @@
# LibreChat
[English](./README.md) | [中文](./README.zh.md)
快速开始:<https://docs.librechat.ai>。
此服务用于部署 LibreChat,一个开源 AI 对话平台,在单一统一界面中支持 OpenAI、Anthropic、Google、Ollama 等众多提供商,具备对话历史、文件上传、代码执行和多用户支持。
## 服务
- **librechat**:LibreChat Web 应用(Node.js)。
- **mongodb**:用于存储对话和用户数据的 MongoDB 数据库。
- **meilisearch**:用于消息索引的全文搜索引擎。
## 快速开始
1. 将 `.env.example` 复制为 `.env`:
```bash
cp .env.example .env
```
2. 更新 `.env` 中的密钥(使用 `openssl rand -hex 32` 生成):
```
JWT_SECRET、JWT_REFRESH_SECRET、MEILI_MASTER_KEY、CREDS_KEY、CREDS_IV
```
3. 启动服务:
```bash
docker compose up -d
```
4. 打开 `http://localhost:3080`,注册第一个用户账号。
## 核心环境变量
| 变量 | 说明 | 默认值 |
| ------------------------ | ------------------------------------------------------- | -------- |
| `LIBRECHAT_VERSION` | 镜像版本 | `v0.8.4` |
| `LIBRECHAT_PORT_OVERRIDE` | Web UI 宿主机端口 | `3080` |
| `JWT_SECRET` | JWT 签名密钥(至少 32 字符)——**请修改** | 占位符 |
| `JWT_REFRESH_SECRET` | JWT 刷新签名密钥——**请修改** | 占位符 |
| `MEILI_MASTER_KEY` | Meilisearch 主密钥——**请修改** | 占位符 |
| `CREDS_KEY` | 存储凭证的加密密钥(恰好 32 字符) | 占位符 |
| `CREDS_IV` | 加密 IV(恰好 16 字符) | 占位符 |
| `ALLOW_REGISTRATION` | 允许新用户注册 | `true` |
| `OPENAI_API_KEY` | OpenAI API Key(可选;也可在 UI 中配置) | *(空)* |
| `ANTHROPIC_API_KEY` | Anthropic API Key(可选) | *(空)* |
## 数据卷
- `librechat_images`:用户上传的图片,由 Web UI 提供服务。
- `librechat_logs`:应用日志文件。
- `librechat_mongo_data`:MongoDB 数据持久化。
- `librechat_meilisearch_data`:Meilisearch 索引数据。
## 端口
- **3080**:LibreChat Web UI
## 安全说明
- 在对外暴露之前,请生成所有密钥:`openssl rand -hex 32`。
- `CREDS_KEY` 和 `CREDS_IV` 用于加密存储的 API Key——丢失后存储的凭证将无法恢复。
- 创建管理员账号后,将 `ALLOW_REGISTRATION` 设为 `false` 以禁止新用户注册。
## 资源需求
| 服务 | CPU 限制 | 内存限制 |
| ----------- | -------- | -------- |
| librechat | 2 | 2 GB |
| mongodb | 1 | 1 GB |
| meilisearch | 0.5 | 512 MB |
推荐总计:**4+ GB RAM**。
## 文档
- [LibreChat 文档](https://docs.librechat.ai)
- [GitHub](https://github.com/danny-avila/LibreChat)
+108
View File
@@ -0,0 +1,108 @@
# Make sure to change the secret placeholders before exposing this stack externally.
# Fields marked with CHANGEME must be updated for any non-local deployment.
x-defaults: &defaults
restart: unless-stopped
logging:
driver: json-file
options:
max-size: 100m
max-file: '3'
services:
librechat:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}librechat/librechat:${LIBRECHAT_VERSION:-v0.8.4}
depends_on:
mongodb:
condition: service_healthy
meilisearch:
condition: service_healthy
ports:
- '${LIBRECHAT_PORT_OVERRIDE:-3080}:3080'
volumes:
- librechat_images:/app/client/public/images
- librechat_logs:/app/api/logs
environment:
- TZ=${TZ:-UTC}
- MONGO_URI=mongodb://mongodb:27017/LibreChat
- MEILI_HOST=http://meilisearch:7700
- MEILI_MASTER_KEY=${MEILI_MASTER_KEY:-changeme_meili_master_key_CHANGEME}
- JWT_SECRET=${JWT_SECRET:-changeme_jwt_secret_please_change_CHANGEME}
- JWT_REFRESH_SECRET=${JWT_REFRESH_SECRET:-changeme_jwt_refresh_secret_CHANGEME}
- CREDS_KEY=${CREDS_KEY:-changeme_creds_key_32_chars_only}
- CREDS_IV=${CREDS_IV:-changeme_iv_16ch}
- ALLOW_REGISTRATION=${ALLOW_REGISTRATION:-true}
- ALLOW_SOCIAL_LOGIN=${ALLOW_SOCIAL_LOGIN:-false}
- OPENAI_API_KEY=${OPENAI_API_KEY:-}
- ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
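    # Node-based healthcheck avoids depending on curl/wget being present in the image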
healthcheck:
test:
- CMD
- node
- -e
- "require('http').get('http://localhost:3080/health',res=>process.exit(res.statusCode===200?0:1)).on('error',()=>process.exit(1))"
interval: 30s
timeout: 10s
retries: 5
start_period: 40s
deploy:
resources:
limits:
cpus: ${LIBRECHAT_CPU_LIMIT:-2}
memory: ${LIBRECHAT_MEMORY_LIMIT:-2G}
reservations:
cpus: ${LIBRECHAT_CPU_RESERVATION:-0.5}
memory: ${LIBRECHAT_MEMORY_RESERVATION:-512M}
mongodb:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}mongo:${MONGODB_VERSION:-8.0}
volumes:
- librechat_mongo_data:/data/db
environment:
- TZ=${TZ:-UTC}
healthcheck:
test: [CMD, mongosh, --eval, "db.adminCommand('ping')"]
interval: 10s
timeout: 10s
retries: 5
start_period: 20s
deploy:
resources:
limits:
cpus: ${MONGODB_CPU_LIMIT:-1}
memory: ${MONGODB_MEMORY_LIMIT:-1G}
reservations:
cpus: ${MONGODB_CPU_RESERVATION:-0.25}
memory: ${MONGODB_MEMORY_RESERVATION:-256M}
meilisearch:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}getmeili/meilisearch:${MEILISEARCH_VERSION:-v1.12.8}
volumes:
- librechat_meilisearch_data:/meili_data
environment:
- TZ=${TZ:-UTC}
- MEILI_MASTER_KEY=${MEILI_MASTER_KEY:-changeme_meili_master_key_CHANGEME}
- MEILI_NO_ANALYTICS=true
healthcheck:
test: [CMD-SHELL, "curl -sf http://localhost:7700/health || exit 1"]
interval: 10s
timeout: 10s
retries: 5
start_period: 10s
deploy:
resources:
limits:
cpus: ${MEILISEARCH_CPU_LIMIT:-0.5}
memory: ${MEILISEARCH_MEMORY_LIMIT:-512M}
reservations:
cpus: ${MEILISEARCH_CPU_RESERVATION:-0.1}
memory: ${MEILISEARCH_MEMORY_RESERVATION:-128M}
volumes:
librechat_images:
librechat_logs:
librechat_mongo_data:
librechat_meilisearch_data:
+34
View File
@@ -0,0 +1,34 @@
# Image versions
MATTERMOST_VERSION=11.3
POSTGRES_VERSION=17-alpine
# Network configuration
MATTERMOST_PORT_OVERRIDE=8065
MATTERMOST_SITE_URL=http://localhost:8065
# PostgreSQL configuration
POSTGRES_DB=mattermost
POSTGRES_USER=mmuser
POSTGRES_PASSWORD=mmchangeit
# Mattermost runtime configuration
MATTERMOST_ENABLE_LOCAL_MODE=false
# Resources - Mattermost
MATTERMOST_CPU_LIMIT=2.00
MATTERMOST_MEMORY_LIMIT=2G
MATTERMOST_CPU_RESERVATION=0.50
MATTERMOST_MEMORY_RESERVATION=512M
# Resources - PostgreSQL
MATTERMOST_DB_CPU_LIMIT=1.00
MATTERMOST_DB_MEMORY_LIMIT=1G
MATTERMOST_DB_CPU_RESERVATION=0.25
MATTERMOST_DB_MEMORY_RESERVATION=256M
# Logging
MATTERMOST_LOG_MAX_SIZE=100m
MATTERMOST_LOG_MAX_FILE=3
# Timezone
TZ=UTC
+68
View File
@@ -0,0 +1,68 @@
# Mattermost
[中文文档](README.zh.md)
Mattermost is an open-source team collaboration platform that provides chat, file sharing, channels, and integrations. This Compose stack includes Mattermost plus PostgreSQL and is designed to start with a single `docker compose up -d`.
## Quick Start
1. Copy the example environment file:
```bash
cp .env.example .env
```
2. Edit `.env` if you want to change the port, site URL, or database password.
3. Start the stack:
```bash
docker compose up -d
```
4. Open Mattermost:
- <http://localhost:8065>
5. Complete the first-run wizard to create the initial system admin account.
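To confirm the server is responding, you can query the system ping endpoint from the host (assuming `curl` is available):

```bash
curl -s http://localhost:8065/api/v4/system/ping
# Expect a JSON body containing "status":"OK"
```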
## Default Ports
| Service | Port | Description |
| ---------- | ---- | ---------------------- |
| Mattermost | 8065 | Web UI and API |
| PostgreSQL | 5432 | Internal database only |
## Important Environment Variables
| Variable | Description | Default |
| ------------------------------ | ---------------------------------------------- | ----------------------- |
| `MATTERMOST_VERSION` | Mattermost Team Edition image tag | `11.3` |
| `MATTERMOST_PORT_OVERRIDE` | Host port for Mattermost | `8065` |
| `MATTERMOST_SITE_URL` | Public URL used by Mattermost | `http://localhost:8065` |
| `POSTGRES_DB` | PostgreSQL database name | `mattermost` |
| `POSTGRES_USER` | PostgreSQL user | `mmuser` |
| `POSTGRES_PASSWORD` | PostgreSQL password | `mmchangeit` |
| `MATTERMOST_ENABLE_LOCAL_MODE` | Enables local mode for administrative commands | `false` |
| `TZ` | Container timezone | `UTC` |
## Volumes
- `mattermost_postgres_data`: PostgreSQL data.
- `mattermost_config`: Mattermost config directory.
- `mattermost_data`: Uploaded files and application data.
- `mattermost_logs`: Application logs.
- `mattermost_plugins`: Server-side plugins.
- `mattermost_client_plugins`: Webapp plugins.
- `mattermost_bleve_indexes`: Search indexes.
## Notes
- The application depends on PostgreSQL and waits until the database is healthy before booting.
- The default setup uses Team Edition.
- If you expose Mattermost behind a reverse proxy or different hostname, update `MATTERMOST_SITE_URL`.
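For example, a hypothetical `.env` fragment for a reverse-proxy deployment (hostname and password are illustrative):

```bash
MATTERMOST_SITE_URL=https://chat.example.com
POSTGRES_PASSWORD=use-a-strong-password-here
```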
## References
- [Mattermost Repository](https://github.com/mattermost/mattermost)
- [Mattermost Team Edition Image](https://hub.docker.com/r/mattermost/mattermost-team-edition)
+68
View File
@@ -0,0 +1,68 @@
# Mattermost
[English](README.md)
Mattermost 是一个开源团队协作平台,提供聊天、频道、文件共享和集成能力。这个 Compose 配置包含 Mattermost 和 PostgreSQL,目标是用一条 `docker compose up -d` 完成启动。
## 快速开始
1. 复制环境变量示例文件:
```bash
cp .env.example .env
```
2. 按需修改 `.env`,例如端口、站点 URL 或数据库密码。
3. 启动整个栈:
```bash
docker compose up -d
```
4. 打开 Mattermost:
- <http://localhost:8065>
5. 按照首次启动向导创建初始系统管理员账号。
## 默认端口
| 服务 | 端口 | 说明 |
| ---------- | ---- | -------------------- |
| Mattermost | 8065 | Web 界面与 API |
| PostgreSQL | 5432 | 仅供内部使用的数据库 |
## 关键环境变量
| 变量 | 说明 | 默认值 |
| ------------------------------ | -------------------------------- | ----------------------- |
| `MATTERMOST_VERSION` | Mattermost Team Edition 镜像标签 | `11.3` |
| `MATTERMOST_PORT_OVERRIDE` | Mattermost 对外端口 | `8065` |
| `MATTERMOST_SITE_URL` | Mattermost 对外访问 URL | `http://localhost:8065` |
| `POSTGRES_DB` | PostgreSQL 数据库名 | `mattermost` |
| `POSTGRES_USER` | PostgreSQL 用户名 | `mmuser` |
| `POSTGRES_PASSWORD` | PostgreSQL 密码 | `mmchangeit` |
| `MATTERMOST_ENABLE_LOCAL_MODE` | 是否启用本地管理模式 | `false` |
| `TZ` | 容器时区 | `UTC` |
## 数据卷
- `mattermost_postgres_data`:PostgreSQL 数据。
- `mattermost_config`:Mattermost 配置目录。
- `mattermost_data`:上传文件和业务数据。
- `mattermost_logs`:应用日志。
- `mattermost_plugins`:服务端插件。
- `mattermost_client_plugins`:前端插件。
- `mattermost_bleve_indexes`:搜索索引。
## 说明
- Mattermost 依赖 PostgreSQL,只有数据库健康后才会继续启动。
- 这里默认使用 Team Edition。
- 如果你通过反向代理或自定义域名访问 Mattermost,请同步修改 `MATTERMOST_SITE_URL`。
## 参考资料
- [Mattermost 仓库](https://github.com/mattermost/mattermost)
- [Mattermost Team Edition 镜像](https://hub.docker.com/r/mattermost/mattermost-team-edition)
+84
View File
@@ -0,0 +1,84 @@
x-defaults: &defaults
restart: unless-stopped
logging:
driver: json-file
options:
max-size: ${MATTERMOST_LOG_MAX_SIZE:-100m}
max-file: '${MATTERMOST_LOG_MAX_FILE:-3}'
services:
mattermost-postgres:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}postgres:${POSTGRES_VERSION:-17-alpine}
environment:
- TZ=${TZ:-UTC}
- POSTGRES_DB=${POSTGRES_DB:-mattermost}
- POSTGRES_USER=${POSTGRES_USER:-mmuser}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-mmchangeit}
- PGDATA=/var/lib/postgresql/data/pgdata
volumes:
- mattermost_postgres_data:/var/lib/postgresql/data
healthcheck:
test: [CMD-SHELL, pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB]
interval: 15s
timeout: 5s
retries: 10
start_period: 20s
deploy:
resources:
limits:
cpus: ${MATTERMOST_DB_CPU_LIMIT:-1.00}
memory: ${MATTERMOST_DB_MEMORY_LIMIT:-1G}
reservations:
cpus: ${MATTERMOST_DB_CPU_RESERVATION:-0.25}
memory: ${MATTERMOST_DB_MEMORY_RESERVATION:-256M}
mattermost:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}mattermost/mattermost-team-edition:${MATTERMOST_VERSION:-11.3}
depends_on:
mattermost-postgres:
condition: service_healthy
ports:
- '${MATTERMOST_PORT_OVERRIDE:-8065}:8065'
environment:
- TZ=${TZ:-UTC}
- MM_SQLSETTINGS_DRIVERNAME=postgres
- MM_SQLSETTINGS_DATASOURCE=postgres://${POSTGRES_USER:-mmuser}:${POSTGRES_PASSWORD:-mmchangeit}@mattermost-postgres:5432/${POSTGRES_DB:-mattermost}?sslmode=disable&connect_timeout=10
- MM_SERVICESETTINGS_SITEURL=${MATTERMOST_SITE_URL:-http://localhost:8065}
- MM_SERVICESETTINGS_ENABLELOCALMODE=${MATTERMOST_ENABLE_LOCAL_MODE:-false}
- MM_PLUGINSETTINGS_ENABLEUPLOADS=true
- MM_BLEVESETTINGS_INDEXDIR=/mattermost/bleve-indexes
- MM_FILESETTINGS_DIRECTORY=/mattermost/data
env_file:
- .env
volumes:
- mattermost_config:/mattermost/config
- mattermost_data:/mattermost/data
- mattermost_logs:/mattermost/logs
- mattermost_plugins:/mattermost/plugins
- mattermost_client_plugins:/mattermost/client/plugins
- mattermost_bleve_indexes:/mattermost/bleve-indexes
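    # Note: `mmctl --local` talks to the local socket, which Mattermost only opens when
    # MM_SERVICESETTINGS_ENABLELOCALMODE=true; enable local mode if this check never passes.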
healthcheck:
test: [CMD, /mattermost/bin/mmctl, system, status, --local]
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
deploy:
resources:
limits:
cpus: ${MATTERMOST_CPU_LIMIT:-2.00}
memory: ${MATTERMOST_MEMORY_LIMIT:-2G}
reservations:
cpus: ${MATTERMOST_CPU_RESERVATION:-0.50}
memory: ${MATTERMOST_MEMORY_RESERVATION:-512M}
volumes:
mattermost_postgres_data:
mattermost_config:
mattermost_data:
mattermost_logs:
mattermost_plugins:
mattermost_client_plugins:
mattermost_bleve_indexes:
+1 -1
View File
@@ -57,7 +57,7 @@ services:
- NANOBOT_GATEWAY__PORT=${GATEWAY_PORT:-18790}
command: ${NANOBOT_COMMAND:-gateway}
healthcheck:
test: [CMD, python, -c, import sys; sys.exit(0)]
test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:18790/')"]
interval: 30s
timeout: 10s
retries: 3
+5 -4
View File
@@ -13,7 +13,7 @@ x-defaults: &defaults
services:
openclaw-gateway:
<<: *defaults
image: ${GLOBAL_REGISTRY:-ghcr.io}/openclaw/openclaw:${OPENCLAW_VERSION:-2026.2.3}
image: ${GLOBAL_REGISTRY:-ghcr.io/}openclaw/openclaw:${OPENCLAW_VERSION:-2026.2.3}
environment:
- TZ=${TZ:-UTC}
- HOME=/home/node
@@ -60,7 +60,8 @@ services:
openclaw-cli:
<<: *defaults
image: ${GLOBAL_REGISTRY:-ghcr.io}/openclaw/openclaw:${OPENCLAW_VERSION:-2026.2.3}
restart: 'no'
image: ${GLOBAL_REGISTRY:-ghcr.io/}openclaw/openclaw:${OPENCLAW_VERSION:-2026.2.3}
environment:
- TZ=${TZ:-UTC}
- HOME=/home/node
@@ -70,8 +71,8 @@ services:
- CLAUDE_WEB_SESSION_KEY=${CLAUDE_WEB_SESSION_KEY:-}
- CLAUDE_WEB_COOKIE=${CLAUDE_WEB_COOKIE:-}
volumes:
- moltbot_config:/home/node/.clawdbot
- moltbot_workspace:/home/node/clawd
- openclaw_config:/home/node/.openclaw
- openclaw_workspace:/home/node/openclaw-workspace
stdin_open: true
tty: true
entrypoint: [node, dist/index.js]
+74
View File
@@ -0,0 +1,74 @@
<?xml version="1.0"?>
<clickhouse>
<logger>
<!-- Set console log level to warning to suppress informational noise -->
<level>warning</level>
<console>true</console>
</logger>
<!-- Configure trace_log table settings -->
<trace_log>
<!-- Only log critical trace events (level 6 and above - more restrictive) -->
<level>6</level>
<!-- Reduce the frequency of trace log flushing -->
<flush_interval_milliseconds>120000</flush_interval_milliseconds>
<!-- Set table TTL to reduce storage (7 days) -->
<table_ttl>604800</table_ttl>
</trace_log>
<!-- Configure text_log table settings (another high-volume system table) -->
<text_log>
<!-- Only log warning level and above -->
<level>warning</level>
<!-- Set TTL to 7 days -->
<table_ttl>604800</table_ttl>
<!-- Reduce flush frequency -->
<flush_interval_milliseconds>120000</flush_interval_milliseconds>
</text_log>
<!-- Reduce other system table logging -->
<query_log>
<!-- Only log slow queries (over 1 second) -->
<log_queries_min_query_duration_ms>1000</log_queries_min_query_duration_ms>
<!-- Reduce flush frequency -->
<flush_interval_milliseconds>60000</flush_interval_milliseconds>
<!-- Set TTL to 7 days -->
<table_ttl>604800</table_ttl>
</query_log>
<!-- Configure system log levels -->
<system_log>
<level>warning</level>
</system_log>
<!-- Reduce metric log verbosity -->
<metric_log>
<collect_interval_milliseconds>60000</collect_interval_milliseconds>
<flush_interval_milliseconds>120000</flush_interval_milliseconds>
<!-- Set TTL to 7 days -->
<table_ttl>604800</table_ttl>
</metric_log>
<!-- Configure asynchronous metric log (reduce storage) -->
<asynchronous_metric_log>
<collect_interval_milliseconds>60000</collect_interval_milliseconds>
<flush_interval_milliseconds>120000</flush_interval_milliseconds>
<!-- Set TTL to 7 days -->
<table_ttl>604800</table_ttl>
</asynchronous_metric_log>
<!-- Configure part log (reduce verbosity) -->
<part_log>
<level>warning</level>
<flush_interval_milliseconds>120000</flush_interval_milliseconds>
<!-- Set TTL to 7 days -->
<table_ttl>604800</table_ttl>
</part_log>
<!-- Configure latency log (reduce storage) -->
<latency_log>
<flush_interval_milliseconds>120000</flush_interval_milliseconds>
<!-- Set TTL to 7 days -->
<table_ttl>604800</table_ttl>
</latency_log>
</clickhouse>
+326
View File
@@ -0,0 +1,326 @@
#!/bin/bash
set -e
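# Create the target database and the ClickHouse tables expected by the OpenTelemetry
# Collector's clickhouse exporter (schemas follow the exporter's default DDL).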
echo "==================== ClickHouse Initialization ===================="
clickhouse-client --query "CREATE DATABASE IF NOT EXISTS ${CLICKHOUSE_DATABASE}"
echo "✅ Database $CLICKHOUSE_DATABASE created successfully"
echo ""
echo "Creating OTEL tables required by OpenTelemetry Collector..."
clickhouse-client --database="${CLICKHOUSE_DATABASE}" --query "
CREATE TABLE IF NOT EXISTS otel_traces
(
\`Timestamp\` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
\`TraceId\` String CODEC(ZSTD(1)),
\`SpanId\` String CODEC(ZSTD(1)),
\`ParentSpanId\` String CODEC(ZSTD(1)),
\`TraceState\` String CODEC(ZSTD(1)),
\`SpanName\` LowCardinality(String) CODEC(ZSTD(1)),
\`SpanKind\` LowCardinality(String) CODEC(ZSTD(1)),
\`ServiceName\` LowCardinality(String) CODEC(ZSTD(1)),
\`ResourceAttributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`ScopeName\` String CODEC(ZSTD(1)),
\`ScopeVersion\` String CODEC(ZSTD(1)),
\`SpanAttributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`Duration\` UInt64 CODEC(ZSTD(1)),
\`StatusCode\` LowCardinality(String) CODEC(ZSTD(1)),
\`StatusMessage\` String CODEC(ZSTD(1)),
\`Events.Timestamp\` Array(DateTime64(9)) CODEC(ZSTD(1)),
\`Events.Name\` Array(LowCardinality(String)) CODEC(ZSTD(1)),
\`Events.Attributes\` Array(Map(LowCardinality(String), String)) CODEC(ZSTD(1)),
\`Links.TraceId\` Array(String) CODEC(ZSTD(1)),
\`Links.SpanId\` Array(String) CODEC(ZSTD(1)),
\`Links.TraceState\` Array(String) CODEC(ZSTD(1)),
\`Links.Attributes\` Array(Map(LowCardinality(String), String)) CODEC(ZSTD(1)),
INDEX idx_trace_id TraceId TYPE bloom_filter(0.001) GRANULARITY 1,
INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_span_attr_key mapKeys(SpanAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_span_attr_value mapValues(SpanAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_duration Duration TYPE minmax GRANULARITY 1
)
ENGINE = MergeTree
PARTITION BY toDate(Timestamp)
ORDER BY (ServiceName, SpanName, toDateTime(Timestamp))
TTL toDateTime(Timestamp) + toIntervalHour(730)
SETTINGS index_granularity = 8192, ttl_only_drop_parts = 1
"
clickhouse-client --database="${CLICKHOUSE_DATABASE}" --query "
CREATE TABLE IF NOT EXISTS otel_logs
(
\`Timestamp\` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
\`TimestampTime\` DateTime DEFAULT toDateTime(Timestamp),
\`TraceId\` String CODEC(ZSTD(1)),
\`SpanId\` String CODEC(ZSTD(1)),
\`TraceFlags\` UInt8,
\`SeverityText\` LowCardinality(String) CODEC(ZSTD(1)),
\`SeverityNumber\` UInt8,
\`ServiceName\` LowCardinality(String) CODEC(ZSTD(1)),
\`Body\` String CODEC(ZSTD(1)),
\`ResourceSchemaUrl\` LowCardinality(String) CODEC(ZSTD(1)),
\`ResourceAttributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`ScopeSchemaUrl\` LowCardinality(String) CODEC(ZSTD(1)),
\`ScopeName\` String CODEC(ZSTD(1)),
\`ScopeVersion\` LowCardinality(String) CODEC(ZSTD(1)),
\`ScopeAttributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`LogAttributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
INDEX idx_trace_id TraceId TYPE bloom_filter(0.001) GRANULARITY 1,
INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_log_attr_key mapKeys(LogAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_log_attr_value mapValues(LogAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_body Body TYPE tokenbf_v1(32768, 3, 0) GRANULARITY 8
)
ENGINE = MergeTree
PARTITION BY toDate(TimestampTime)
PRIMARY KEY (ServiceName, TimestampTime)
ORDER BY (ServiceName, TimestampTime, Timestamp)
TTL TimestampTime + toIntervalHour(730)
SETTINGS index_granularity = 8192, ttl_only_drop_parts = 1
"
clickhouse-client --database="${CLICKHOUSE_DATABASE}" --query "
CREATE TABLE IF NOT EXISTS otel_metrics_gauge
(
\`ResourceAttributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`ResourceSchemaUrl\` String CODEC(ZSTD(1)),
\`ScopeName\` String CODEC(ZSTD(1)),
\`ScopeVersion\` String CODEC(ZSTD(1)),
\`ScopeAttributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`ScopeDroppedAttrCount\` UInt32 CODEC(ZSTD(1)),
\`ScopeSchemaUrl\` String CODEC(ZSTD(1)),
\`ServiceName\` LowCardinality(String) CODEC(ZSTD(1)),
\`MetricName\` String CODEC(ZSTD(1)),
\`MetricDescription\` String CODEC(ZSTD(1)),
\`MetricUnit\` String CODEC(ZSTD(1)),
\`Attributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`StartTimeUnix\` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
\`TimeUnix\` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
\`Value\` Float64 CODEC(ZSTD(1)),
\`Flags\` UInt32 CODEC(ZSTD(1)),
\`Exemplars.FilteredAttributes\` Array(Map(LowCardinality(String), String)) CODEC(ZSTD(1)),
\`Exemplars.TimeUnix\` Array(DateTime64(9)) CODEC(ZSTD(1)),
\`Exemplars.Value\` Array(Float64) CODEC(ZSTD(1)),
\`Exemplars.SpanId\` Array(String) CODEC(ZSTD(1)),
\`Exemplars.TraceId\` Array(String) CODEC(ZSTD(1)),
INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_key mapKeys(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_value mapValues(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1
)
ENGINE = MergeTree
PARTITION BY toDate(TimeUnix)
ORDER BY (ServiceName, MetricName, Attributes, toUnixTimestamp64Nano(TimeUnix))
TTL toDateTime(TimeUnix) + toIntervalHour(730)
SETTINGS index_granularity = 8192, ttl_only_drop_parts = 1
"
clickhouse-client --database="${CLICKHOUSE_DATABASE}" --query "
CREATE TABLE IF NOT EXISTS otel_metrics_sum
(
\`ResourceAttributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`ResourceSchemaUrl\` String CODEC(ZSTD(1)),
\`ScopeName\` String CODEC(ZSTD(1)),
\`ScopeVersion\` String CODEC(ZSTD(1)),
\`ScopeAttributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`ScopeDroppedAttrCount\` UInt32 CODEC(ZSTD(1)),
\`ScopeSchemaUrl\` String CODEC(ZSTD(1)),
\`ServiceName\` LowCardinality(String) CODEC(ZSTD(1)),
\`MetricName\` String CODEC(ZSTD(1)),
\`MetricDescription\` String CODEC(ZSTD(1)),
\`MetricUnit\` String CODEC(ZSTD(1)),
\`Attributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`StartTimeUnix\` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
\`TimeUnix\` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
\`Value\` Float64 CODEC(ZSTD(1)),
\`Flags\` UInt32 CODEC(ZSTD(1)),
\`Exemplars.FilteredAttributes\` Array(Map(LowCardinality(String), String)) CODEC(ZSTD(1)),
\`Exemplars.TimeUnix\` Array(DateTime64(9)) CODEC(ZSTD(1)),
\`Exemplars.Value\` Array(Float64) CODEC(ZSTD(1)),
\`Exemplars.SpanId\` Array(String) CODEC(ZSTD(1)),
\`Exemplars.TraceId\` Array(String) CODEC(ZSTD(1)),
\`AggregationTemporality\` Int32 CODEC(ZSTD(1)),
\`IsMonotonic\` Bool CODEC(Delta(1), ZSTD(1)),
INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_key mapKeys(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_value mapValues(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1
)
ENGINE = MergeTree
PARTITION BY toDate(TimeUnix)
ORDER BY (ServiceName, MetricName, Attributes, toUnixTimestamp64Nano(TimeUnix))
TTL toDateTime(TimeUnix) + toIntervalHour(730)
SETTINGS index_granularity = 8192, ttl_only_drop_parts = 1
"
clickhouse-client --database="${CLICKHOUSE_DATABASE}" --query "
CREATE TABLE IF NOT EXISTS otel_metrics_histogram
(
\`ResourceAttributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`ResourceSchemaUrl\` String CODEC(ZSTD(1)),
\`ScopeName\` String CODEC(ZSTD(1)),
\`ScopeVersion\` String CODEC(ZSTD(1)),
\`ScopeAttributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`ScopeDroppedAttrCount\` UInt32 CODEC(ZSTD(1)),
\`ScopeSchemaUrl\` String CODEC(ZSTD(1)),
\`ServiceName\` LowCardinality(String) CODEC(ZSTD(1)),
\`MetricName\` String CODEC(ZSTD(1)),
\`MetricDescription\` String CODEC(ZSTD(1)),
\`MetricUnit\` String CODEC(ZSTD(1)),
\`Attributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`StartTimeUnix\` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
\`TimeUnix\` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
\`Count\` UInt64 CODEC(Delta(8), ZSTD(1)),
\`Sum\` Float64 CODEC(ZSTD(1)),
\`BucketCounts\` Array(UInt64) CODEC(ZSTD(1)),
\`ExplicitBounds\` Array(Float64) CODEC(ZSTD(1)),
\`Exemplars.FilteredAttributes\` Array(Map(LowCardinality(String), String)) CODEC(ZSTD(1)),
\`Exemplars.TimeUnix\` Array(DateTime64(9)) CODEC(ZSTD(1)),
\`Exemplars.Value\` Array(Float64) CODEC(ZSTD(1)),
\`Exemplars.SpanId\` Array(String) CODEC(ZSTD(1)),
\`Exemplars.TraceId\` Array(String) CODEC(ZSTD(1)),
\`Flags\` UInt32 CODEC(ZSTD(1)),
\`Min\` Float64 CODEC(ZSTD(1)),
\`Max\` Float64 CODEC(ZSTD(1)),
\`AggregationTemporality\` Int32 CODEC(ZSTD(1)),
INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_key mapKeys(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_value mapValues(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1
)
ENGINE = MergeTree
PARTITION BY toDate(TimeUnix)
ORDER BY (ServiceName, MetricName, Attributes, toUnixTimestamp64Nano(TimeUnix))
TTL toDateTime(TimeUnix) + toIntervalHour(730)
SETTINGS index_granularity = 8192, ttl_only_drop_parts = 1
"
clickhouse-client --database="${CLICKHOUSE_DATABASE}" --query "
CREATE TABLE IF NOT EXISTS otel_metrics_summary
(
\`ResourceAttributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`ResourceSchemaUrl\` String CODEC(ZSTD(1)),
\`ScopeName\` String CODEC(ZSTD(1)),
\`ScopeVersion\` String CODEC(ZSTD(1)),
\`ScopeAttributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`ScopeDroppedAttrCount\` UInt32 CODEC(ZSTD(1)),
\`ScopeSchemaUrl\` String CODEC(ZSTD(1)),
\`ServiceName\` LowCardinality(String) CODEC(ZSTD(1)),
\`MetricName\` String CODEC(ZSTD(1)),
\`MetricDescription\` String CODEC(ZSTD(1)),
\`MetricUnit\` String CODEC(ZSTD(1)),
\`Attributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`StartTimeUnix\` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
\`TimeUnix\` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
\`Count\` UInt64 CODEC(Delta(8), ZSTD(1)),
\`Sum\` Float64 CODEC(ZSTD(1)),
\`ValueAtQuantiles.Quantile\` Array(Float64) CODEC(ZSTD(1)),
\`ValueAtQuantiles.Value\` Array(Float64) CODEC(ZSTD(1)),
\`Flags\` UInt32 CODEC(ZSTD(1)),
INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_key mapKeys(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_value mapValues(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1
)
ENGINE = MergeTree
PARTITION BY toDate(TimeUnix)
ORDER BY (ServiceName, MetricName, Attributes, toUnixTimestamp64Nano(TimeUnix))
TTL toDateTime(TimeUnix) + toIntervalHour(730)
SETTINGS index_granularity = 8192, ttl_only_drop_parts = 1
"
clickhouse-client --database="${CLICKHOUSE_DATABASE}" --query "
CREATE TABLE IF NOT EXISTS otel_metrics_exponential_histogram
(
\`ResourceAttributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`ResourceSchemaUrl\` String CODEC(ZSTD(1)),
\`ScopeName\` String CODEC(ZSTD(1)),
\`ScopeVersion\` String CODEC(ZSTD(1)),
\`ScopeAttributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`ScopeDroppedAttrCount\` UInt32 CODEC(ZSTD(1)),
\`ScopeSchemaUrl\` String CODEC(ZSTD(1)),
\`ServiceName\` LowCardinality(String) CODEC(ZSTD(1)),
\`MetricName\` String CODEC(ZSTD(1)),
\`MetricDescription\` String CODEC(ZSTD(1)),
\`MetricUnit\` String CODEC(ZSTD(1)),
\`Attributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`StartTimeUnix\` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
\`TimeUnix\` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
\`Count\` UInt64 CODEC(Delta(8), ZSTD(1)),
\`Sum\` Float64 CODEC(ZSTD(1)),
\`Scale\` Int32 CODEC(ZSTD(1)),
\`ZeroCount\` UInt64 CODEC(ZSTD(1)),
\`PositiveOffset\` Int32 CODEC(ZSTD(1)),
\`PositiveBucketCounts\` Array(UInt64) CODEC(ZSTD(1)),
\`NegativeOffset\` Int32 CODEC(ZSTD(1)),
\`NegativeBucketCounts\` Array(UInt64) CODEC(ZSTD(1)),
\`Exemplars.FilteredAttributes\` Array(Map(LowCardinality(String), String)) CODEC(ZSTD(1)),
\`Exemplars.TimeUnix\` Array(DateTime64(9)) CODEC(ZSTD(1)),
\`Exemplars.Value\` Array(Float64) CODEC(ZSTD(1)),
\`Exemplars.SpanId\` Array(String) CODEC(ZSTD(1)),
\`Exemplars.TraceId\` Array(String) CODEC(ZSTD(1)),
\`Flags\` UInt32 CODEC(ZSTD(1)),
\`Min\` Float64 CODEC(ZSTD(1)),
\`Max\` Float64 CODEC(ZSTD(1)),
\`AggregationTemporality\` Int32 CODEC(ZSTD(1)),
INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_key mapKeys(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_value mapValues(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1
)
ENGINE = MergeTree
PARTITION BY toDate(TimeUnix)
ORDER BY (ServiceName, MetricName, Attributes, toUnixTimestamp64Nano(TimeUnix))
TTL toDateTime(TimeUnix) + toIntervalHour(730)
SETTINGS index_granularity = 8192, ttl_only_drop_parts = 1
"
clickhouse-client --database="${CLICKHOUSE_DATABASE}" --query "
CREATE TABLE IF NOT EXISTS otel_traces_trace_id_ts
(
\`TraceId\` String CODEC(ZSTD(1)),
\`Start\` DateTime CODEC(Delta(4), ZSTD(1)),
\`End\` DateTime CODEC(Delta(4), ZSTD(1)),
INDEX idx_trace_id TraceId TYPE bloom_filter(0.01) GRANULARITY 1
)
ENGINE = MergeTree
PARTITION BY toDate(Start)
ORDER BY (TraceId, Start)
TTL toDateTime(Start) + toIntervalHour(730)
SETTINGS index_granularity = 8192, ttl_only_drop_parts = 1
"
clickhouse-client --database="${CLICKHOUSE_DATABASE}" --query "
CREATE MATERIALIZED VIEW IF NOT EXISTS otel_traces_trace_id_ts_mv TO otel_traces_trace_id_ts
(
\`TraceId\` String,
\`Start\` DateTime64(9),
\`End\` DateTime64(9)
)
AS SELECT
TraceId,
min(Timestamp) AS Start,
max(Timestamp) AS End
FROM otel_traces
WHERE TraceId != ''
GROUP BY TraceId
"
echo "✅ All 9 OTEL tables created successfully"
echo "===================================================================="
@@ -0,0 +1,52 @@
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
processors:
batch:
memory_limiter:
# 80% of maximum memory up to 2G
limit_mib: 1500
# 25% of limit up to 2G
spike_limit_mib: 512
check_interval: 5s
exporters:
clickhouse:
endpoint: tcp://${env:INIT_DB_HOST}:9000?dial_timeout=10s
database: ${env:INIT_DB_DATABASE}
username: ${env:INIT_DB_USERNAME}
password: ${env:INIT_DB_PASSWORD}
ttl: 730h
logs_table_name: otel_logs
traces_table_name: otel_traces
# Metrics use separate tables by type: otel_metrics_gauge, otel_metrics_sum,
# otel_metrics_histogram, otel_metrics_summary, otel_metrics_exponential_histogram
timeout: 5s
retry_on_failure:
enabled: true
initial_interval: 5s
max_interval: 30s
max_elapsed_time: 300s
service:
pipelines:
logs:
receivers: [otlp]
processors: [batch]
exporters: [clickhouse]
traces:
receivers: [otlp]
processors: [memory_limiter, batch]
exporters: [clickhouse]
metrics:
receivers: [otlp]
processors: [memory_limiter, batch]
exporters: [clickhouse]
# telemetry:
# metrics:
# address: localhost:8888
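To confirm the receiver-to-ClickHouse pipeline end to end, a single hand-rolled span can be posted to the OTLP/HTTP endpoint; a sketch assuming port 4318 is reachable from where you run it and GNU `date` is available (the IDs are illustrative, but must be valid 32- and 16-character hex strings):

```bash
# Post one test span to the OTLP/HTTP receiver
NOW=$(date +%s%N)
curl -sf http://localhost:4318/v1/traces \
  -H 'Content-Type: application/json' \
  -d '{"resourceSpans":[{"resource":{"attributes":[{"key":"service.name","value":{"stringValue":"smoke-test"}}]},"scopeSpans":[{"scope":{"name":"manual"},"spans":[{"traceId":"5b8efff798038103d269b633813fc60c","spanId":"eee19b7ec3c1b174","name":"ping","kind":1,"startTimeUnixNano":"'"$NOW"'","endTimeUnixNano":"'"$NOW"'"}]}]}]}'
```

The span should then appear in `otel_traces` and, via the materialized view above, in `otel_traces_trace_id_ts`.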
+3
@@ -23,6 +23,8 @@ services:
- CLICKHOUSE_ALWAYS_RUN_INITDB_SCRIPTS=true
volumes:
- clickhouse_data:/var/lib/clickhouse
- ./assets/clickhouse-config.xml:/etc/clickhouse-server/config.d/custom-config.xml:ro
- ./assets/clickhouse-init.sh:/docker-entrypoint-initdb.d/init.sh:ro
ports:
- '${CLICKHOUSE_HTTP_PORT_OVERRIDE:-8123}:8123'
- '${CLICKHOUSE_NATIVE_PORT_OVERRIDE:-9000}:9000'
@@ -77,6 +79,7 @@ services:
condition: service_healthy
volumes:
- openlit_data:/app/client/data
- ./assets/otel-collector-config.yaml:/etc/otel/otel-collector-config.yaml:ro
healthcheck:
test: [CMD, wget, --quiet, --tries=1, --spider, 'http://localhost:${OPENLIT_INTERNAL_PORT:-3000}/health']
interval: 30s
+1 -1
@@ -3,7 +3,7 @@ GLOBAL_REGISTRY=
TZ=UTC
# Opik Version
OPIK_VERSION=1.10.23
OPIK_VERSION=1.11.9
# Opik Frontend Port
OPIK_PORT_OVERRIDE=5173
+6 -5
@@ -201,7 +201,7 @@ services:
backend:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}ghcr.io/comet-ml/opik/opik-backend:${OPIK_VERSION:-1.10.23}
image: ${GLOBAL_REGISTRY:-}ghcr.io/comet-ml/opik/opik-backend:${OPIK_VERSION:-1.11.9}
command: [bash, -c, './run_db_migrations.sh && ./entrypoint.sh']
environment:
TZ: ${TZ:-UTC}
@@ -231,6 +231,7 @@ services:
TOGGLE_OPIK_AI_ENABLED: ${TOGGLE_OPIK_AI_ENABLED:-false}
TOGGLE_GUARDRAILS_ENABLED: ${TOGGLE_GUARDRAILS_ENABLED:-false}
TOGGLE_WELCOME_WIZARD_ENABLED: ${TOGGLE_WELCOME_WIZARD_ENABLED:-true}
TOGGLE_RUNNERS_ENABLED: ${TOGGLE_RUNNERS_ENABLED:-false}
CORS: ${CORS:-false}
ATTACHMENTS_STRIP_MIN_SIZE: ${ATTACHMENTS_STRIP_MIN_SIZE:-256000}
JACKSON_MAX_STRING_LENGTH: ${JACKSON_MAX_STRING_LENGTH:-104857600}
@@ -264,19 +265,19 @@ services:
python-backend:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}ghcr.io/comet-ml/opik/opik-python-backend:${OPIK_VERSION:-1.10.23}
image: ${GLOBAL_REGISTRY:-}ghcr.io/comet-ml/opik/opik-python-backend:${OPIK_VERSION:-1.11.9}
privileged: true
environment:
TZ: ${TZ:-UTC}
OPIK_OTEL_SDK_ENABLED: 'false'
PYTHON_CODE_EXECUTOR_IMAGE_TAG: ${OPIK_VERSION:-1.10.23}
PYTHON_CODE_EXECUTOR_IMAGE_TAG: ${OPIK_VERSION:-1.11.9}
PYTHON_CODE_EXECUTOR_STRATEGY: ${PYTHON_CODE_EXECUTOR_STRATEGY:-process}
PYTHON_CODE_EXECUTOR_CONTAINERS_NUM: ${PYTHON_CODE_EXECUTOR_CONTAINERS_NUM:-5}
PYTHON_CODE_EXECUTOR_EXEC_TIMEOUT_IN_SECS: ${PYTHON_CODE_EXECUTOR_EXEC_TIMEOUT_IN_SECS:-3}
PYTHON_CODE_EXECUTOR_ALLOW_NETWORK: ${PYTHON_CODE_EXECUTOR_ALLOW_NETWORK:-false}
PYTHON_CODE_EXECUTOR_CPU_SHARES: ${PYTHON_CODE_EXECUTOR_CPU_SHARES:-512}
PYTHON_CODE_EXECUTOR_MEM_LIMIT: ${PYTHON_CODE_EXECUTOR_MEM_LIMIT:-256m}
OPIK_VERSION: ${OPIK_VERSION:-1.10.23}
OPIK_VERSION: ${OPIK_VERSION:-1.11.9}
OPIK_REVERSE_PROXY_URL: http://frontend:5173/api
PYTHON_BACKEND_PORT: ${PYTHON_BACKEND_PORT:-8000}
OPENAI_API_KEY: ${OPENAI_API_KEY:-}
@@ -313,7 +314,7 @@ services:
frontend:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}ghcr.io/comet-ml/opik/opik-frontend:${OPIK_VERSION:-1.10.23}
image: ${GLOBAL_REGISTRY:-}ghcr.io/comet-ml/opik/opik-frontend:${OPIK_VERSION:-1.11.9}
ports:
- '${OPIK_PORT_OVERRIDE:-5173}:80'
environment:
+55
@@ -0,0 +1,55 @@
# Global Registry Prefix (optional)
# GLOBAL_REGISTRY=
# Service Versions
RAGFLOW_VERSION=v0.24.0
ELASTICSEARCH_VERSION=8.11.3
MYSQL_VERSION=8.0.39
REDIS_VERSION=7
MINIO_VERSION=RELEASE.2025-01-20T14-49-07Z
# Timezone
TZ=UTC
# Host port for the RAGFlow web UI (Nginx reverse proxy)
RAGFLOW_PORT_OVERRIDE=80
# MinIO web console port
MINIO_CONSOLE_PORT_OVERRIDE=9001
# Secrets (CHANGEME: use strong random values in production)
SECRET_KEY=changeme_secret_key_CHANGEME
MYSQL_PASSWORD=ragflow
REDIS_PASSWORD=redispassword
MINIO_USER=minioadmin
MINIO_PASSWORD=minioadmin
# Resource Limits - RAGFlow
RAGFLOW_CPU_LIMIT=4
RAGFLOW_MEMORY_LIMIT=4G
RAGFLOW_CPU_RESERVATION=1
RAGFLOW_MEMORY_RESERVATION=2G
# Resource Limits - Elasticsearch
ELASTICSEARCH_CPU_LIMIT=2
ELASTICSEARCH_MEMORY_LIMIT=2G
ELASTICSEARCH_CPU_RESERVATION=0.5
ELASTICSEARCH_MEMORY_RESERVATION=1G
# Resource Limits - MySQL
MYSQL_CPU_LIMIT=1
MYSQL_MEMORY_LIMIT=1G
MYSQL_CPU_RESERVATION=0.25
MYSQL_MEMORY_RESERVATION=256M
# Resource Limits - Redis
REDIS_CPU_LIMIT=0.5
REDIS_MEMORY_LIMIT=512M
REDIS_CPU_RESERVATION=0.1
REDIS_MEMORY_RESERVATION=128M
# Resource Limits - MinIO
MINIO_CPU_LIMIT=1
MINIO_MEMORY_LIMIT=1G
MINIO_CPU_RESERVATION=0.25
MINIO_MEMORY_RESERVATION=256M
+84
@@ -0,0 +1,84 @@
# RAGFlow
[English](./README.md) | [中文](./README.zh.md)
Quick start: <https://ragflow.io/docs>.
This service deploys RAGFlow, an open-source Retrieval-Augmented Generation engine based on deep document understanding. It provides intelligent question answering over complex documents (PDFs, Word, PowerPoint, etc.) with accurate citations and citation tracing.
> **Platform note**: This stack is **x86-64 (amd64) only**. ARM64 is not supported by the official image.
>
> **Resource note**: Elasticsearch alone requires ~2 GB RAM. Provision at least **8 GB RAM** total before starting.
## Services
- **ragflow**: The RAGFlow web application and API server (Nginx on port 80, API on port 9380).
- **es01**: Elasticsearch single-node cluster for vector and full-text search.
- **mysql**: MySQL 8 database for metadata and workflow state.
- **redis**: Redis for task queues and caching.
- **minio**: S3-compatible object storage for document and chunk storage.
## Quick Start
1. Copy `.env.example` to `.env`:
```bash
cp .env.example .env
```
2. Update the secrets in `.env` (one way to generate strong values is sketched after these steps):
```
SECRET_KEY, MYSQL_PASSWORD, REDIS_PASSWORD, MINIO_PASSWORD
```
3. Start the services (initial startup may take 2-5 minutes):
```bash
docker compose up -d
```
4. Open `http://localhost` and register the first admin account.
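For step 2, a minimal sketch for generating strong random values in place, assuming `openssl` and GNU `sed` are available (on macOS, `sed -i` needs an extra `''` argument):

```bash
# Replace each placeholder secret in .env with a random 48-hex-char value
for var in SECRET_KEY MYSQL_PASSWORD REDIS_PASSWORD MINIO_PASSWORD; do
  sed -i "s/^${var}=.*/${var}=$(openssl rand -hex 24)/" .env
done
```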
## Core Environment Variables
| Variable | Description | Default |
| ---------------------- | -------------------------------------------------------- | -------------------------------- |
| `RAGFLOW_VERSION` | RAGFlow image version | `v0.24.0` |
| `RAGFLOW_PORT_OVERRIDE`| Host port for the web UI | `80` |
| `SECRET_KEY` | Application secret key — **CHANGEME** | placeholder |
| `MYSQL_PASSWORD` | MySQL root password (also used by RAGFlow) | `ragflow` |
| `REDIS_PASSWORD` | Redis authentication password | `redispassword` |
| `MINIO_USER` | MinIO root user | `minioadmin` |
| `MINIO_PASSWORD` | MinIO root password | `minioadmin` |
| `MINIO_CONSOLE_PORT_OVERRIDE` | MinIO web console host port | `9001` |
## Volumes
- `ragflow_logs`: RAGFlow application logs.
- `ragflow_es_data`: Elasticsearch index data.
- `ragflow_mysql_data`: MySQL database files.
- `ragflow_redis_data`: Redis persistence.
- `ragflow_minio_data`: Object storage for documents and embeddings.
## Ports
- **80**: RAGFlow web UI and API (via Nginx)
- **9001**: MinIO web console
## Resource Requirements
| Service | CPU Limit | Memory Limit |
| ------------- | --------- | ------------ |
| ragflow | 4 | 4 GB |
| elasticsearch | 2 | 2 GB |
| mysql | 1 | 1 GB |
| redis | 0.5 | 512 MB |
| minio | 1 | 1 GB |
Total recommended: **8+ GB RAM**, **4+ CPU cores**.
## Documentation
- [RAGFlow Docs](https://ragflow.io/docs)
- [GitHub](https://github.com/infiniflow/ragflow)
+84
@@ -0,0 +1,84 @@
# RAGFlow
[English](./README.md) | [中文](./README.zh.md)
快速开始:<https://ragflow.io/docs>。
此服务用于部署 RAGFlow,一个基于深度文档理解的开源检索增强生成引擎。它能对复杂文档(PDF、Word、PowerPoint 等)进行智能问答,并提供精准的引用和引文追踪。
> **平台说明**:此 Stack 仅支持 **x86-64(amd64)**,官方镜像不支持 ARM64。
>
> **资源说明**:仅 Elasticsearch 就需要约 2 GB RAM,启动前请确保系统至少有 **8 GB RAM**。
## 服务
- **ragflow**:RAGFlow Web 应用和 API 服务器(Nginx 监听 80 端口,API 监听 9380 端口)。
- **es01**:单节点 Elasticsearch 集群,用于向量和全文检索。
- **mysql**:MySQL 8 数据库,用于元数据和工作流状态存储。
- **redis**:Redis,用于任务队列和缓存。
- **minio**:S3 兼容对象存储,用于文档和分块存储。
## 快速开始
1. 将 `.env.example` 复制为 `.env`:
```bash
cp .env.example .env
```
2. 更新 `.env` 中的密钥:
```
SECRET_KEY、MYSQL_PASSWORD、REDIS_PASSWORD、MINIO_PASSWORD
```
3. 启动服务(首次启动可能需要 2~5 分钟):
```bash
docker compose up -d
```
4. 打开 `http://localhost`,注册第一个管理员账号。
## 核心环境变量
| 变量 | 说明 | 默认值 |
| ----------------------------- | ------------------------------------------ | ------------- |
| `RAGFLOW_VERSION` | RAGFlow 镜像版本 | `v0.24.0` |
| `RAGFLOW_PORT_OVERRIDE` | Web UI 宿主机端口 | `80` |
| `SECRET_KEY` | 应用密钥——**请修改** | 占位符 |
| `MYSQL_PASSWORD` | MySQL root 密码(也供 RAGFlow 使用) | `ragflow` |
| `REDIS_PASSWORD` | Redis 认证密码 | `redispassword` |
| `MINIO_USER` | MinIO root 用户名 | `minioadmin` |
| `MINIO_PASSWORD` | MinIO root 密码 | `minioadmin` |
| `MINIO_CONSOLE_PORT_OVERRIDE` | MinIO Web 控制台宿主机端口 | `9001` |
## 数据卷
- `ragflow_logs`:RAGFlow 应用日志。
- `ragflow_es_data`:Elasticsearch 索引数据。
- `ragflow_mysql_data`:MySQL 数据库文件。
- `ragflow_redis_data`:Redis 持久化数据。
- `ragflow_minio_data`:文档和嵌入向量的对象存储。
## 端口
- **80**:RAGFlow Web UI 和 API(通过 Nginx)
- **9001**:MinIO Web 控制台
## 资源需求
| 服务 | CPU 限制 | 内存限制 |
| ------------- | -------- | -------- |
| ragflow | 4 | 4 GB |
| elasticsearch | 2 | 2 GB |
| mysql | 1 | 1 GB |
| redis | 0.5 | 512 MB |
| minio | 1 | 1 GB |
推荐总计:**8+ GB RAM**、**4+ CPU 核心**。
## 文档
- [RAGFlow 文档](https://ragflow.io/docs)
- [GitHub](https://github.com/infiniflow/ragflow)
+157
@@ -0,0 +1,157 @@
# RAGFlow requires substantial system resources.
# Elasticsearch alone needs ~2 GB RAM. Total recommended: 8+ GB RAM.
# This stack is x86-64 (amd64) only; ARM64 is not supported.
x-defaults: &defaults
restart: unless-stopped
logging:
driver: json-file
options:
max-size: 100m
max-file: '3'
services:
ragflow:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}infiniflow/ragflow:${RAGFLOW_VERSION:-v0.24.0}
depends_on:
mysql:
condition: service_healthy
redis:
condition: service_healthy
minio:
condition: service_healthy
es01:
condition: service_healthy
ports:
- '${RAGFLOW_PORT_OVERRIDE:-80}:80'
volumes:
- ragflow_logs:/ragflow/logs
environment:
- TZ=${TZ:-UTC}
- MYSQL_PASSWORD=${MYSQL_PASSWORD:-ragflow}
- MINIO_USER=${MINIO_USER:-minioadmin}
- MINIO_PASSWORD=${MINIO_PASSWORD:-minioadmin}
- REDIS_PASSWORD=${REDIS_PASSWORD:-redispassword}
- SECRET_KEY=${SECRET_KEY:-changeme_secret_key_CHANGEME}
healthcheck:
test: [CMD-SHELL, "curl -sf http://localhost/ > /dev/null 2>&1 || exit 1"]
interval: 30s
timeout: 15s
retries: 10
start_period: 120s
deploy:
resources:
limits:
cpus: ${RAGFLOW_CPU_LIMIT:-4}
memory: ${RAGFLOW_MEMORY_LIMIT:-4G}
reservations:
cpus: ${RAGFLOW_CPU_RESERVATION:-1}
memory: ${RAGFLOW_MEMORY_RESERVATION:-2G}
es01:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}elasticsearch:${ELASTICSEARCH_VERSION:-8.11.3}
environment:
- TZ=${TZ:-UTC}
- discovery.type=single-node
- xpack.security.enabled=false
- ES_JAVA_OPTS=-Xms512m -Xmx1g
volumes:
- ragflow_es_data:/usr/share/elasticsearch/data
healthcheck:
test: [CMD-SHELL, "curl -sf http://localhost:9200/_cluster/health | grep -qE '\"status\":\"(green|yellow)\"' || exit 1"]
interval: 15s
timeout: 10s
retries: 10
start_period: 60s
deploy:
resources:
limits:
cpus: ${ELASTICSEARCH_CPU_LIMIT:-2}
memory: ${ELASTICSEARCH_MEMORY_LIMIT:-2G}
reservations:
cpus: ${ELASTICSEARCH_CPU_RESERVATION:-0.5}
memory: ${ELASTICSEARCH_MEMORY_RESERVATION:-1G}
mysql:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}mysql:${MYSQL_VERSION:-8.0.39}
environment:
- TZ=${TZ:-UTC}
- MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD:-ragflow}
- MYSQL_DATABASE=rag_flow
volumes:
- ragflow_mysql_data:/var/lib/mysql
healthcheck:
test: [CMD, mysqladmin, ping, -h, localhost]
interval: 10s
timeout: 10s
retries: 5
start_period: 30s
deploy:
resources:
limits:
cpus: ${MYSQL_CPU_LIMIT:-1}
memory: ${MYSQL_MEMORY_LIMIT:-1G}
reservations:
cpus: ${MYSQL_CPU_RESERVATION:-0.25}
memory: ${MYSQL_MEMORY_RESERVATION:-256M}
redis:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}redis:${REDIS_VERSION:-7}
command: >
--requirepass ${REDIS_PASSWORD:-redispassword}
--maxmemory-policy noeviction
environment:
- REDIS_PASSWORD=${REDIS_PASSWORD:-redispassword}
volumes:
- ragflow_redis_data:/data
healthcheck:
test: [CMD-SHELL, "redis-cli -a $$REDIS_PASSWORD ping | grep -q PONG"]
interval: 5s
timeout: 10s
retries: 10
deploy:
resources:
limits:
cpus: ${REDIS_CPU_LIMIT:-0.5}
memory: ${REDIS_MEMORY_LIMIT:-512M}
reservations:
cpus: ${REDIS_CPU_RESERVATION:-0.1}
memory: ${REDIS_MEMORY_RESERVATION:-128M}
minio:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}minio/minio:${MINIO_VERSION:-RELEASE.2025-01-20T14-49-07Z}
command: server /data --console-address ':9001'
environment:
- TZ=${TZ:-UTC}
- MINIO_ROOT_USER=${MINIO_USER:-minioadmin}
- MINIO_ROOT_PASSWORD=${MINIO_PASSWORD:-minioadmin}
volumes:
- ragflow_minio_data:/data
ports:
- '${MINIO_CONSOLE_PORT_OVERRIDE:-9001}:9001'
healthcheck:
test: [CMD-SHELL, "curl -sf http://localhost:9000/minio/health/live || exit 1"]
interval: 10s
timeout: 5s
retries: 10
start_period: 10s
deploy:
resources:
limits:
cpus: ${MINIO_CPU_LIMIT:-1}
memory: ${MINIO_MEMORY_LIMIT:-1G}
reservations:
cpus: ${MINIO_CPU_RESERVATION:-0.25}
memory: ${MINIO_MEMORY_RESERVATION:-256M}
volumes:
ragflow_logs:
ragflow_es_data:
ragflow_mysql_data:
ragflow_redis_data:
ragflow_minio_data:
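With the stack up, the same probes the healthchecks use can be run by hand to see which dependency is still warming up; a sketch using the service names defined above:

```bash
docker compose exec es01 curl -s http://localhost:9200/_cluster/health   # expect green or yellow
docker compose exec mysql mysqladmin ping -h localhost                   # expect "mysqld is alive"
docker compose exec minio curl -sf http://localhost:9000/minio/health/live && echo "minio OK"
```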
+172
@@ -0,0 +1,172 @@
# Global Settings
GLOBAL_REGISTRY=
TZ=UTC
# Shannon Version (applies to gateway, orchestrator, llm-service, and agent-core)
SHANNON_VERSION=v0.3.1
# ============================================================
# LLM API Keys — at least one provider is required
# ============================================================
OPENAI_API_KEY=
ANTHROPIC_API_KEY=
GOOGLE_API_KEY=
XAI_API_KEY=
DEEPSEEK_API_KEY=
# Optional tool/search API keys
SERPAPI_API_KEY=
FIRECRAWL_API_KEY=
# ============================================================
# Security
# ============================================================
# IMPORTANT: Change this in production!
JWT_SECRET=development-only-secret-change-in-production
# Set to 0 to enable JWT authentication in production
GATEWAY_SKIP_AUTH=1
# ============================================================
# Service Versions
# ============================================================
POSTGRES_VERSION=pg16
REDIS_VERSION=7.2-alpine
QDRANT_VERSION=v1.17
TEMPORAL_VERSION=1.28.3
TEMPORAL_UI_VERSION=2.40.1
# ============================================================
# Ports (host-side overrides)
# ============================================================
GATEWAY_PORT_OVERRIDE=8080
TEMPORAL_UI_PORT_OVERRIDE=8088
# ============================================================
# Database Configuration
# ============================================================
POSTGRES_USER=shannon
POSTGRES_PASSWORD=shannon
POSTGRES_DB=shannon
POSTGRES_PORT=5432
POSTGRES_SSLMODE=disable
# ============================================================
# Redis Configuration
# ============================================================
REDIS_URL=redis://redis:6379
REDIS_ADDR=redis:6379
REDIS_TTL_SECONDS=3600
# ============================================================
# Qdrant Configuration
# ============================================================
QDRANT_HOST=qdrant
QDRANT_PORT=6333
# ============================================================
# Temporal Configuration
# ============================================================
TEMPORAL_NAMESPACE=default
# ============================================================
# LLM Service Configuration
# ============================================================
LLM_SERVICE_URL=http://llm-service:8001
DEFAULT_MODEL_TIER=small
MAX_TOKENS=2000
TEMPERATURE=0.7
MAX_TOKENS_PER_REQUEST=10000
MODELS_CONFIG_PATH=/app/config/models.yaml
# ============================================================
# Agent Core Configuration
# ============================================================
# WASI sandbox for secure code execution
SHANNON_USE_WASI_SANDBOX=1
WASI_MEMORY_LIMIT_MB=512
WASI_TIMEOUT_SECONDS=60
RUST_LOG=info
# ============================================================
# Orchestrator / Gateway Configuration
# ============================================================
ORCHESTRATOR_GRPC=orchestrator:50052
ADMIN_SERVER=http://orchestrator:8081
WORKFLOW_SYNTH_BYPASS_SINGLE=true
PROVIDER_RATE_CONTROL_ENABLED=false
# Worker pool sizes per priority queue
WORKER_ACT_CRITICAL=12
WORKER_WF_CRITICAL=12
WORKER_ACT_HIGH=10
WORKER_WF_HIGH=10
WORKER_ACT_NORMAL=8
WORKER_WF_NORMAL=8
WORKER_ACT_LOW=4
WORKER_WF_LOW=4
# ============================================================
# Observability
# ============================================================
OTEL_ENABLED=false
# OTEL_EXPORTER_OTLP_ENDPOINT=localhost:4317
DEBUG=false
ENVIRONMENT=production
# ============================================================
# Resource Limits
# ============================================================
# Gateway
GATEWAY_CPU_LIMIT=1.0
GATEWAY_MEMORY_LIMIT=512M
GATEWAY_CPU_RESERVATION=0.25
GATEWAY_MEMORY_RESERVATION=256M
# Orchestrator
ORCHESTRATOR_CPU_LIMIT=2.0
ORCHESTRATOR_MEMORY_LIMIT=2G
ORCHESTRATOR_CPU_RESERVATION=0.5
ORCHESTRATOR_MEMORY_RESERVATION=512M
# LLM Service
LLM_SERVICE_CPU_LIMIT=2.0
LLM_SERVICE_MEMORY_LIMIT=2G
LLM_SERVICE_CPU_RESERVATION=0.5
LLM_SERVICE_MEMORY_RESERVATION=512M
# Agent Core
AGENT_CORE_CPU_LIMIT=2.0
AGENT_CORE_MEMORY_LIMIT=2G
AGENT_CORE_CPU_RESERVATION=0.5
AGENT_CORE_MEMORY_RESERVATION=512M
# PostgreSQL
POSTGRES_CPU_LIMIT=1.0
POSTGRES_MEMORY_LIMIT=1G
POSTGRES_CPU_RESERVATION=0.25
POSTGRES_MEMORY_RESERVATION=256M
# Redis
REDIS_CPU_LIMIT=0.5
REDIS_MEMORY_LIMIT=512M
REDIS_CPU_RESERVATION=0.1
REDIS_MEMORY_RESERVATION=128M
# Qdrant
QDRANT_CPU_LIMIT=1.0
QDRANT_MEMORY_LIMIT=1G
QDRANT_CPU_RESERVATION=0.25
QDRANT_MEMORY_RESERVATION=256M
# Temporal
TEMPORAL_CPU_LIMIT=1.0
TEMPORAL_MEMORY_LIMIT=1G
TEMPORAL_CPU_RESERVATION=0.25
TEMPORAL_MEMORY_RESERVATION=256M
# Temporal UI (metrics profile)
TEMPORAL_UI_CPU_LIMIT=0.5
TEMPORAL_UI_MEMORY_LIMIT=256M
TEMPORAL_UI_CPU_RESERVATION=0.1
TEMPORAL_UI_MEMORY_RESERVATION=128M
+41
@@ -0,0 +1,41 @@
.PHONY: setup up down logs ps
# Download the required config files from the Shannon repository and prepare .env
setup:
@echo "Creating config directory..."
mkdir -p config
@echo "Downloading Shannon configuration files..."
curl -sSL https://raw.githubusercontent.com/Kocoro-lab/Shannon/main/config/models.yaml \
-o config/models.yaml
curl -sSL https://raw.githubusercontent.com/Kocoro-lab/Shannon/main/config/features.yaml \
-o config/features.yaml
@if [ ! -f .env ]; then \
cp .env.example .env; \
echo "Created .env from .env.example. Edit it to add your LLM API keys."; \
else \
echo ".env already exists, skipping copy."; \
fi
@echo ""
@echo "Setup complete! Next steps:"
@echo " 1. Edit .env and set at least one LLM API key (OPENAI_API_KEY or ANTHROPIC_API_KEY)"
@echo " 2. Run: make up"
# Start all services (the Temporal UI dashboard needs the metrics profile; see up-monitoring)
up:
docker compose up -d
# Start all services including Temporal UI monitoring dashboard
up-monitoring:
docker compose --profile metrics up -d
# Stop all services
down:
docker compose down
# View logs for all services
logs:
docker compose logs -f
# Show service status
ps:
docker compose ps
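`make setup` fetches the two configs from the `main` branch while the images are pinned to `SHANNON_VERSION`, so the two can drift. If that matters, the files can be fetched from the matching release tag instead; a sketch, assuming the `v0.3.1` tag exists upstream with the same file layout:

```bash
# Pin config downloads to the same ref as the images
SHANNON_TAG=v0.3.1
for f in models.yaml features.yaml; do
  curl -sSL "https://raw.githubusercontent.com/Kocoro-lab/Shannon/${SHANNON_TAG}/config/${f}" \
    -o "config/${f}"
done
```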
+125
@@ -0,0 +1,125 @@
# Shannon
[English](./README.md) | [中文](./README.zh.md)
This service deploys [Shannon](https://github.com/Kocoro-lab/Shannon), a production-oriented multi-agent orchestration framework. Shannon provides time-travel debugging via Temporal workflows, hard token budgets per task/agent, real-time observability dashboards, WASI sandbox for secure code execution, OPA policy governance, and multi-tenant isolation — all with native support for OpenAI, Anthropic, Google, DeepSeek, and local models.
> **Note:** The `agent-core` service is only built for `linux/amd64`. On Apple Silicon (ARM64), Docker Desktop uses Rosetta emulation automatically.
## Services
- **gateway**: HTTP API gateway — primary entry point for all client requests (port `8080`)
- **orchestrator**: Core workflow orchestration engine powered by Temporal
- **llm-service**: LLM provider abstraction with model routing, fallback, and budget control
- **agent-core**: Rust-based agent execution runtime with WASI sandbox support
- **postgres**: PostgreSQL with pgvector extension for state and vector storage
- **redis**: Redis for caching, job queues, and rate limiting
- **qdrant**: Qdrant vector database for semantic memory
- **temporal**: Temporal workflow engine for durable, fault-tolerant task execution
- **temporal-ui**: Temporal Web UI for workflow debugging (enabled via `metrics` profile)
## Quick Start
### Prerequisites
- Docker & Docker Compose v2
- `curl` (for the setup script)
- At least one LLM API key (OpenAI, Anthropic, Google, etc.)
### 1. Run Setup
```bash
make setup
```
This downloads the required `config/models.yaml` and `config/features.yaml` from the Shannon repository and creates a local `.env` file.
### 2. Add Your LLM API Key
Edit `.env` and set at least one LLM provider key:
```env
# Choose at least one:
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
```
Also update `JWT_SECRET` and set `GATEWAY_SKIP_AUTH=0` for production deployments.
### 3. Start Services
```bash
make up
```
Access the Shannon API at `http://localhost:8080`.
### 4. (Optional) Enable Temporal UI Dashboard
To also start the Temporal workflow debugging UI:
```bash
make up-monitoring
```
Access Temporal UI at `http://localhost:8088`.
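Before wiring up clients, the deployment can be verified from the host; the gateway path below is the same one its container healthcheck polls:

```bash
curl -fsS http://localhost:8080/health && echo "gateway OK"
# Only answers when the metrics profile is active
curl -fsS -o /dev/null http://localhost:8088 && echo "temporal-ui OK"
```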
## Core Environment Variables
| Variable | Description | Default |
| --------------------------- | ------------------------------------------ | ---------------------------------------------- |
| `SHANNON_VERSION` | Version for all Shannon service images | `v0.3.1` |
| `OPENAI_API_KEY` | OpenAI API key (at least one key required) | `` |
| `ANTHROPIC_API_KEY` | Anthropic API key | `` |
| `GOOGLE_API_KEY` | Google AI API key | `` |
| `JWT_SECRET` | Secret for JWT token signing | `development-only-secret-change-in-production` |
| `GATEWAY_SKIP_AUTH` | Skip auth (set to `0` to enable in prod) | `1` |
| `GATEWAY_PORT_OVERRIDE` | Host port for the API gateway | `8080` |
| `TEMPORAL_UI_PORT_OVERRIDE` | Host port for the Temporal UI | `8088` |
## Database Configuration
| Variable | Description | Default |
| ------------------- | ------------------------ | ------------ |
| `POSTGRES_VERSION` | pgvector image tag | `pg16` |
| `POSTGRES_USER` | PostgreSQL username | `shannon` |
| `POSTGRES_PASSWORD` | PostgreSQL password | `shannon` |
| `POSTGRES_DB` | PostgreSQL database name | `shannon` |
| `REDIS_VERSION` | Redis image tag | `7.2-alpine` |
| `QDRANT_VERSION` | Qdrant image tag | `v1.17` |
## Agent Configuration
| Variable | Description | Default |
| -------------------------- | -------------------------------------- | --------- |
| `DEFAULT_MODEL_TIER` | Default model complexity tier | `small` |
| `SHANNON_USE_WASI_SANDBOX` | Enable WASI sandbox for code execution | `1` |
| `WASI_MEMORY_LIMIT_MB` | Memory limit for WASI sandbox (MB) | `512` |
| `WASI_TIMEOUT_SECONDS` | Execution timeout for WASI sandbox | `60` |
| `TEMPORAL_NAMESPACE` | Temporal namespace for workflows | `default` |
## Observability (Optional)
| Variable | Description | Default |
| ----------------------------- | ---------------------------- | ------- |
| `OTEL_ENABLED` | Enable OpenTelemetry tracing | `false` |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | OTLP collector endpoint | `` |
## Security Notes
- By default, `GATEWAY_SKIP_AUTH=1` disables JWT authentication for easy local development.
- **For production**, set `GATEWAY_SKIP_AUTH=0` and use a strong `JWT_SECRET`.
- Passwords in `.env.example` are for local development only — always change them before deploying to a shared or public environment.
## Configuration Files
Shannon uses YAML configuration files under `./config/`:
- `config/models.yaml` — LLM providers, model tiers, pricing, and routing rules
- `config/features.yaml` — Feature flags, execution modes, and workflow settings
These are downloaded from the official Shannon repository by `make setup` and can be customized as needed.
## License
Shannon is licensed under the [Apache 2.0 License](https://github.com/Kocoro-lab/Shannon/blob/main/LICENSE).
+125
@@ -0,0 +1,125 @@
# Shannon
[English](./README.md) | [中文](./README.zh.md)
本服务部署 [Shannon](https://github.com/Kocoro-lab/Shannon),一个面向生产环境的多智能体编排框架。Shannon 通过 Temporal 工作流引擎提供时光回溯调试能力、按任务 / 智能体的硬性 Token 预算控制、实时可观测性仪表盘、WASI 沙箱安全代码执行、OPA 策略治理以及多租户隔离,并原生支持 OpenAI、Anthropic、Google、DeepSeek 及本地模型。
> **注意:** `agent-core` 服务仅构建了 `linux/amd64` 镜像。在 Apple Silicon(ARM64)上,Docker Desktop 会自动通过 Rosetta 进行仿真运行。
## 服务说明
- **gateway**:HTTP API 网关 —— 所有客户端请求的主入口(端口 `8080`)
- **orchestrator**:基于 Temporal 的核心工作流编排引擎
- **llm-service**:LLM 提供商抽象层,支持模型路由、故障转移和预算控制
- **agent-core**:基于 Rust 的智能体执行运行时,支持 WASI 沙箱
- **postgres**:带 pgvector 扩展的 PostgreSQL,用于状态和向量存储
- **redis**:Redis,用于缓存、任务队列和限流
- **qdrant**:Qdrant 向量数据库,用于语义记忆
- **temporal**:Temporal 工作流引擎,提供可持久、容错的任务执行
- **temporal-ui**:Temporal Web UI,用于工作流调试(通过 `metrics` profile 启用)
## 快速开始
### 前置条件
- Docker 及 Docker Compose v2
- `curl`(用于下载配置文件)
- 至少一个 LLM API 密钥(OpenAI、Anthropic、Google 等)
### 1. 运行初始化
```bash
make setup
```
该命令会从 Shannon 代码仓库下载所需的 `config/models.yaml` 和 `config/features.yaml` 配置文件,并创建本地 `.env` 文件。
### 2. 填写 LLM API 密钥
编辑 `.env` 文件,至少设置一个 LLM 提供商的密钥:
```env
# 至少选择一个:
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
```
在生产环境中,还需要更新 `JWT_SECRET` 并将 `GATEWAY_SKIP_AUTH` 设为 `0`
### 3. 启动服务
```bash
make up
```
通过 `http://localhost:8080` 访问 Shannon API。
### 4. (可选)启用 Temporal UI 仪表盘
若需同时启动 Temporal 工作流调试界面:
```bash
make up-monitoring
```
通过 `http://localhost:8088` 访问 Temporal UI。
## 核心环境变量
| 变量名 | 说明 | 默认值 |
| --------------------------- | ----------------------------------------- | ---------------------------------------------- |
| `SHANNON_VERSION` | 所有 Shannon 服务镜像的版本号 | `v0.3.1` |
| `OPENAI_API_KEY` | OpenAI API 密钥(至少需要一个提供商密钥) | `` |
| `ANTHROPIC_API_KEY` | Anthropic API 密钥 | `` |
| `GOOGLE_API_KEY` | Google AI API 密钥 | `` |
| `JWT_SECRET` | JWT Token 签名密钥 | `development-only-secret-change-in-production` |
| `GATEWAY_SKIP_AUTH` | 跳过身份验证(生产环境请设为 `0`) | `1` |
| `GATEWAY_PORT_OVERRIDE` | API 网关的宿主机端口 | `8080` |
| `TEMPORAL_UI_PORT_OVERRIDE` | Temporal UI 的宿主机端口 | `8088` |
## 数据库配置
| 变量名 | 说明 | 默认值 |
| ------------------- | ------------------- | ------------ |
| `POSTGRES_VERSION` | pgvector 镜像标签 | `pg16` |
| `POSTGRES_USER` | PostgreSQL 用户名 | `shannon` |
| `POSTGRES_PASSWORD` | PostgreSQL 密码 | `shannon` |
| `POSTGRES_DB` | PostgreSQL 数据库名 | `shannon` |
| `REDIS_VERSION` | Redis 镜像标签 | `7.2-alpine` |
| `QDRANT_VERSION` | Qdrant 镜像标签 | `v1.17` |
## 智能体配置
| 变量名 | 说明 | 默认值 |
| -------------------------- | --------------------------- | --------- |
| `DEFAULT_MODEL_TIER` | 默认模型复杂度层级 | `small` |
| `SHANNON_USE_WASI_SANDBOX` | 启用 WASI 沙箱执行代码 | `1` |
| `WASI_MEMORY_LIMIT_MB` | WASI 沙箱内存限制(MB) | `512` |
| `WASI_TIMEOUT_SECONDS` | WASI 沙箱执行超时时间(秒) | `60` |
| `TEMPORAL_NAMESPACE` | Temporal 工作流命名空间 | `default` |
## 可观测性(可选)
| 变量名 | 说明 | 默认值 |
| ----------------------------- | --------------------------- | ------- |
| `OTEL_ENABLED` | 启用 OpenTelemetry 链路追踪 | `false` |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | OTLP 采集器端点 | `` |
## 安全说明
- 默认情况下,`GATEWAY_SKIP_AUTH=1` 会禁用 JWT 身份验证,便于本地开发。
- **生产环境**请将 `GATEWAY_SKIP_AUTH` 设为 `0`,并使用强密钥替换 `JWT_SECRET`。
- `.env.example` 中的密码仅供本地开发使用,在部署到共享或公开环境前务必修改。
## 配置文件说明
Shannon 使用 `./config/` 目录下的 YAML 配置文件:
- `config/models.yaml` —— LLM 提供商、模型层级、定价及路由规则
- `config/features.yaml` —— 功能开关、执行模式及工作流设置
这些文件通过 `make setup` 从 Shannon 官方代码仓库下载,可根据需要自定义。
## 开源协议
Shannon 采用 [Apache 2.0 协议](https://github.com/Kocoro-lab/Shannon/blob/main/LICENSE) 开源。
+353
@@ -0,0 +1,353 @@
# Shannon - Production-Oriented Multi-Agent Orchestration Framework
# https://github.com/Kocoro-lab/Shannon
#
# NOTE: Run `make setup` before first launch to download required config files
# and create your .env file, then add at least one LLM API key.
x-defaults: &defaults
restart: unless-stopped
logging:
driver: json-file
options:
max-size: 100m
max-file: '3'
x-shannon-config: &shannon-config
volumes:
- ./config:/app/config:ro
services:
postgres:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}pgvector/pgvector:${POSTGRES_VERSION:-pg16}
environment:
TZ: ${TZ:-UTC}
POSTGRES_USER: ${POSTGRES_USER:-shannon}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-shannon}
POSTGRES_DB: ${POSTGRES_DB:-shannon}
volumes:
- postgres_data:/var/lib/postgresql/data
healthcheck:
test: [CMD-SHELL, 'pg_isready -U ${POSTGRES_USER:-shannon} -d ${POSTGRES_DB:-shannon}']
interval: 5s
timeout: 5s
retries: 20
start_period: 15s
deploy:
resources:
limits:
cpus: ${POSTGRES_CPU_LIMIT:-1.0}
memory: ${POSTGRES_MEMORY_LIMIT:-1G}
reservations:
cpus: ${POSTGRES_CPU_RESERVATION:-0.25}
memory: ${POSTGRES_MEMORY_RESERVATION:-256M}
redis:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}redis:${REDIS_VERSION:-7.2-alpine}
volumes:
- redis_data:/data
healthcheck:
test: [CMD, redis-cli, ping]
interval: 5s
timeout: 5s
retries: 10
start_period: 5s
deploy:
resources:
limits:
cpus: ${REDIS_CPU_LIMIT:-0.5}
memory: ${REDIS_MEMORY_LIMIT:-512M}
reservations:
cpus: ${REDIS_CPU_RESERVATION:-0.1}
memory: ${REDIS_MEMORY_RESERVATION:-128M}
qdrant:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}qdrant/qdrant:${QDRANT_VERSION:-v1.17}
environment:
TZ: ${TZ:-UTC}
volumes:
- qdrant_data:/qdrant/storage
healthcheck:
test: [CMD-SHELL, 'wget -qO- http://localhost:6333/health | grep -q ok || exit 1']
interval: 10s
timeout: 5s
retries: 10
start_period: 15s
deploy:
resources:
limits:
cpus: ${QDRANT_CPU_LIMIT:-1.0}
memory: ${QDRANT_MEMORY_LIMIT:-1G}
reservations:
cpus: ${QDRANT_CPU_RESERVATION:-0.25}
memory: ${QDRANT_MEMORY_RESERVATION:-256M}
temporal:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}temporalio/auto-setup:${TEMPORAL_VERSION:-1.28.3}
environment:
TZ: ${TZ:-UTC}
DB: postgres12
DB_PORT: ${POSTGRES_PORT:-5432}
POSTGRES_USER: ${POSTGRES_USER:-shannon}
POSTGRES_PWD: ${POSTGRES_PASSWORD:-shannon}
POSTGRES_SEEDS: postgres
depends_on:
postgres:
condition: service_healthy
healthcheck:
test: [CMD-SHELL, 'temporal operator cluster health --address localhost:7233 | grep -q SERVING || exit 1']
interval: 15s
timeout: 10s
retries: 10
start_period: 60s
deploy:
resources:
limits:
cpus: ${TEMPORAL_CPU_LIMIT:-1.0}
memory: ${TEMPORAL_MEMORY_LIMIT:-1G}
reservations:
cpus: ${TEMPORAL_CPU_RESERVATION:-0.25}
memory: ${TEMPORAL_MEMORY_RESERVATION:-256M}
temporal-ui:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}temporalio/ui:${TEMPORAL_UI_VERSION:-2.40.1}
environment:
TZ: ${TZ:-UTC}
TEMPORAL_ADDRESS: temporal:7233
ports:
- '${TEMPORAL_UI_PORT_OVERRIDE:-8088}:8080'
depends_on:
temporal:
condition: service_healthy
healthcheck:
test: [CMD-SHELL, 'wget -qO- http://localhost:8080 > /dev/null || exit 1']
interval: 15s
timeout: 5s
retries: 5
start_period: 20s
profiles:
- metrics
deploy:
resources:
limits:
cpus: ${TEMPORAL_UI_CPU_LIMIT:-0.5}
memory: ${TEMPORAL_UI_MEMORY_LIMIT:-256M}
reservations:
cpus: ${TEMPORAL_UI_CPU_RESERVATION:-0.1}
memory: ${TEMPORAL_UI_MEMORY_RESERVATION:-128M}
llm-service:
<<: [*defaults, *shannon-config]
image: ${GLOBAL_REGISTRY:-}waylandzhang/llm-service:${SHANNON_VERSION:-v0.3.1}
environment:
TZ: ${TZ:-UTC}
# LLM API Keys (at least one is required)
OPENAI_API_KEY: ${OPENAI_API_KEY:-}
ANTHROPIC_API_KEY: ${ANTHROPIC_API_KEY:-}
GOOGLE_API_KEY: ${GOOGLE_API_KEY:-}
XAI_API_KEY: ${XAI_API_KEY:-}
DEEPSEEK_API_KEY: ${DEEPSEEK_API_KEY:-}
# Optional search/tool API keys
SERPAPI_API_KEY: ${SERPAPI_API_KEY:-}
FIRECRAWL_API_KEY: ${FIRECRAWL_API_KEY:-}
# Internal service configuration
POSTGRES_HOST: postgres
POSTGRES_PORT: ${POSTGRES_PORT:-5432}
POSTGRES_USER: ${POSTGRES_USER:-shannon}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-shannon}
POSTGRES_DB: ${POSTGRES_DB:-shannon}
POSTGRES_SSLMODE: ${POSTGRES_SSLMODE:-disable}
REDIS_URL: ${REDIS_URL:-redis://redis:6379}
REDIS_ADDR: ${REDIS_ADDR:-redis:6379}
QDRANT_HOST: ${QDRANT_HOST:-qdrant}
QDRANT_PORT: ${QDRANT_PORT:-6333}
AGENT_CORE_ADDR: agent-core:50051
# Config paths
LLM_CONFIG_PATH: /app/config
MODELS_CONFIG_PATH: ${MODELS_CONFIG_PATH:-/app/config/models.yaml}
# Model selection
DEFAULT_MODEL_TIER: ${DEFAULT_MODEL_TIER:-small}
MAX_TOKENS: ${MAX_TOKENS:-2000}
TEMPERATURE: ${TEMPERATURE:-0.7}
MAX_TOKENS_PER_REQUEST: ${MAX_TOKENS_PER_REQUEST:-10000}
# Telemetry
OTEL_ENABLED: ${OTEL_ENABLED:-false}
DEBUG: ${DEBUG:-false}
ENVIRONMENT: ${ENVIRONMENT:-production}
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
qdrant:
condition: service_healthy
agent-core:
condition: service_started
healthcheck:
test: [CMD-SHELL, 'wget -qO- http://localhost:8001/health > /dev/null || exit 1']
interval: 15s
timeout: 5s
retries: 10
start_period: 30s
deploy:
resources:
limits:
cpus: ${LLM_SERVICE_CPU_LIMIT:-2.0}
memory: ${LLM_SERVICE_MEMORY_LIMIT:-2G}
reservations:
cpus: ${LLM_SERVICE_CPU_RESERVATION:-0.5}
memory: ${LLM_SERVICE_MEMORY_RESERVATION:-512M}
agent-core:
<<: [*defaults, *shannon-config]
# Note: agent-core is only built for linux/amd64.
# On Apple Silicon (ARM64), Docker Desktop uses Rosetta emulation automatically.
image: ${GLOBAL_REGISTRY:-}waylandzhang/agent-core:${SHANNON_VERSION:-v0.3.1}
platform: linux/amd64
environment:
TZ: ${TZ:-UTC}
RUST_LOG: ${RUST_LOG:-info}
CONFIG_PATH: /app/config/features.yaml
WASI_MEMORY_LIMIT_MB: ${WASI_MEMORY_LIMIT_MB:-512}
WASI_TIMEOUT_SECONDS: ${WASI_TIMEOUT_SECONDS:-60}
SHANNON_USE_WASI_SANDBOX: ${SHANNON_USE_WASI_SANDBOX:-1}
ENFORCE_TIMEOUT_SECONDS: ${ENFORCE_TIMEOUT_SECONDS:-300}
ENFORCE_MAX_TOKENS: ${ENFORCE_MAX_TOKENS:-32768}
OTEL_ENABLED: ${OTEL_ENABLED:-false}
volumes:
- ./config:/app/config:ro
- shannon_sessions:/app/sessions
healthcheck:
test: [CMD-SHELL, 'pgrep -x shannon-agent-core > /dev/null || exit 1']
interval: 15s
timeout: 5s
retries: 5
start_period: 20s
deploy:
resources:
limits:
cpus: ${AGENT_CORE_CPU_LIMIT:-2.0}
memory: ${AGENT_CORE_MEMORY_LIMIT:-2G}
reservations:
cpus: ${AGENT_CORE_CPU_RESERVATION:-0.5}
memory: ${AGENT_CORE_MEMORY_RESERVATION:-512M}
orchestrator:
<<: [*defaults, *shannon-config]
image: ${GLOBAL_REGISTRY:-}waylandzhang/orchestrator:${SHANNON_VERSION:-v0.3.1}
environment:
TZ: ${TZ:-UTC}
# Temporal workflow engine
TEMPORAL_HOST_PORT: temporal:7233
TEMPORAL_NAMESPACE: ${TEMPORAL_NAMESPACE:-default}
# Internal service URLs
LLM_SERVICE_URL: ${LLM_SERVICE_URL:-http://llm-service:8001}
QDRANT_HOST: ${QDRANT_HOST:-qdrant}
QDRANT_PORT: ${QDRANT_PORT:-6333}
# Database and cache
POSTGRES_HOST: postgres
POSTGRES_PORT: ${POSTGRES_PORT:-5432}
POSTGRES_USER: ${POSTGRES_USER:-shannon}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-shannon}
POSTGRES_DB: ${POSTGRES_DB:-shannon}
POSTGRES_SSLMODE: ${POSTGRES_SSLMODE:-disable}
REDIS_ADDR: ${REDIS_ADDR:-redis:6379}
REDIS_URL: ${REDIS_URL:-redis://redis:6379}
REDIS_TTL_SECONDS: ${REDIS_TTL_SECONDS:-3600}
# Worker pool sizing
WORKER_ACT_CRITICAL: ${WORKER_ACT_CRITICAL:-12}
WORKER_WF_CRITICAL: ${WORKER_WF_CRITICAL:-12}
WORKER_ACT_HIGH: ${WORKER_ACT_HIGH:-10}
WORKER_WF_HIGH: ${WORKER_WF_HIGH:-10}
WORKER_ACT_NORMAL: ${WORKER_ACT_NORMAL:-8}
WORKER_WF_NORMAL: ${WORKER_WF_NORMAL:-8}
WORKER_ACT_LOW: ${WORKER_ACT_LOW:-4}
WORKER_WF_LOW: ${WORKER_WF_LOW:-4}
# Workflow settings
WORKFLOW_SYNTH_BYPASS_SINGLE: ${WORKFLOW_SYNTH_BYPASS_SINGLE:-true}
PROVIDER_RATE_CONTROL_ENABLED: ${PROVIDER_RATE_CONTROL_ENABLED:-false}
# Security
JWT_SECRET: ${JWT_SECRET:-development-only-secret-change-in-production}
# Telemetry
OTEL_ENABLED: ${OTEL_ENABLED:-false}
DEBUG: ${DEBUG:-false}
ENVIRONMENT: ${ENVIRONMENT:-production}
depends_on:
temporal:
condition: service_healthy
redis:
condition: service_healthy
postgres:
condition: service_healthy
llm-service:
condition: service_healthy
agent-core:
condition: service_started
healthcheck:
test: [CMD-SHELL, 'wget -qO- http://localhost:8081/health > /dev/null || exit 1']
interval: 15s
timeout: 5s
retries: 10
start_period: 60s
deploy:
resources:
limits:
cpus: ${ORCHESTRATOR_CPU_LIMIT:-2.0}
memory: ${ORCHESTRATOR_MEMORY_LIMIT:-2G}
reservations:
cpus: ${ORCHESTRATOR_CPU_RESERVATION:-0.5}
memory: ${ORCHESTRATOR_MEMORY_RESERVATION:-512M}
gateway:
<<: [*defaults, *shannon-config]
image: ${GLOBAL_REGISTRY:-}waylandzhang/gateway:${SHANNON_VERSION:-v0.3.1}
environment:
TZ: ${TZ:-UTC}
PORT: ${GATEWAY_PORT:-8080}
ORCHESTRATOR_GRPC: ${ORCHESTRATOR_GRPC:-orchestrator:50052}
ADMIN_SERVER: ${ADMIN_SERVER:-http://orchestrator:8081}
# Database and cache
POSTGRES_HOST: postgres
POSTGRES_PORT: ${POSTGRES_PORT:-5432}
POSTGRES_USER: ${POSTGRES_USER:-shannon}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-shannon}
POSTGRES_DB: ${POSTGRES_DB:-shannon}
POSTGRES_SSLMODE: ${POSTGRES_SSLMODE:-disable}
REDIS_URL: ${REDIS_URL:-redis://redis:6379}
# Security
JWT_SECRET: ${JWT_SECRET:-development-only-secret-change-in-production}
# Set GATEWAY_SKIP_AUTH=0 to enable authentication in production
GATEWAY_SKIP_AUTH: ${GATEWAY_SKIP_AUTH:-1}
ports:
- '${GATEWAY_PORT_OVERRIDE:-8080}:8080'
depends_on:
orchestrator:
condition: service_healthy
redis:
condition: service_healthy
postgres:
condition: service_healthy
healthcheck:
test: [CMD-SHELL, 'wget -qO- http://localhost:8080/health > /dev/null || exit 1']
interval: 15s
timeout: 5s
retries: 10
start_period: 30s
deploy:
resources:
limits:
cpus: ${GATEWAY_CPU_LIMIT:-1.0}
memory: ${GATEWAY_MEMORY_LIMIT:-512M}
reservations:
cpus: ${GATEWAY_CPU_RESERVATION:-0.25}
memory: ${GATEWAY_MEMORY_RESERVATION:-256M}
volumes:
postgres_data:
redis_data:
qdrant_data:
shannon_sessions:
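Several services above merge two anchors at once via `<<: [*defaults, *shannon-config]`, which relies on YAML merge-key support in the Compose parser. A quick way to confirm your Compose version accepts the file as written:

```bash
# A clean exit means the anchors and two-alias merge keys all resolved
docker compose config --quiet && echo "compose file is valid"
```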
+48
@@ -0,0 +1,48 @@
# Global Registry Prefix (optional)
# GLOBAL_REGISTRY=
# Service Versions
SKYVERN_VERSION=v1.0.31
POSTGRES_VERSION=15
# Timezone
TZ=UTC
# Host ports
SKYVERN_PORT_OVERRIDE=8000
SKYVERN_UI_PORT_OVERRIDE=8080
# Skyvern API Key (CHANGEME: set a strong random key for the REST API)
SKYVERN_API_KEY=changeme_skyvern_api_key_CHANGEME
# Browser type: chromium-headless (default), chromium, or chrome
BROWSER_TYPE=chromium-headless
# LLM Provider API Keys (at least one is required for task automation)
# OPENAI_API_KEY=sk-...
# ANTHROPIC_API_KEY=sk-ant-...
# PostgreSQL password
POSTGRES_PASSWORD=skyvern
# UI → API connection (must be the address reachable from the user's browser)
VITE_API_BASE_URL=http://localhost:8000
VITE_WSS_BASE_URL=ws://localhost:8000
# Resource Limits - Skyvern backend (includes Playwright + Chromium)
SKYVERN_CPU_LIMIT=2
SKYVERN_MEMORY_LIMIT=4G
SKYVERN_CPU_RESERVATION=0.5
SKYVERN_MEMORY_RESERVATION=1G
# Resource Limits - Skyvern UI
SKYVERN_UI_CPU_LIMIT=0.5
SKYVERN_UI_MEMORY_LIMIT=256M
SKYVERN_UI_CPU_RESERVATION=0.1
SKYVERN_UI_MEMORY_RESERVATION=64M
# Resource Limits - PostgreSQL
POSTGRES_CPU_LIMIT=1
POSTGRES_MEMORY_LIMIT=1G
POSTGRES_CPU_RESERVATION=0.25
POSTGRES_MEMORY_RESERVATION=256M
+84
@@ -0,0 +1,84 @@
# Skyvern
[English](./README.md) | [中文](./README.zh.md)
Quick start: <https://docs.skyvern.com>.
This service deploys Skyvern, an AI-powered browser automation platform that uses LLMs and computer vision to execute tasks in web browsers. It can fill forms, navigate websites, and complete multi-step workflows without custom scripts.
## Services
- **skyvern**: The Skyvern API server with embedded Playwright + Chromium.
- **skyvern-ui**: React-based web UI for task management and browser session viewing.
- **postgres**: PostgreSQL database for task history and state.
## Quick Start
1. Copy `.env.example` to `.env`:
```bash
cp .env.example .env
```
2. Set your LLM API key and change the Skyvern API key in `.env`:
```
SKYVERN_API_KEY=your-strong-api-key
OPENAI_API_KEY=sk-...
```
3. Start the services:
```bash
docker compose up -d
```
4. Open `http://localhost:8080` for the web UI, or send tasks to the API at `http://localhost:8000`.
## Core Environment Variables
| Variable | Description | Default |
| ----------------------- | -------------------------------------------------------------------- | -------------------- |
| `SKYVERN_VERSION` | Image version (applies to both skyvern and skyvern-ui) | `v1.0.31` |
| `SKYVERN_PORT_OVERRIDE` | Host port for the API | `8000` |
| `SKYVERN_UI_PORT_OVERRIDE` | Host port for the web UI | `8080` |
| `SKYVERN_API_KEY` | API key for authenticating requests to the Skyvern server — **CHANGEME** | placeholder |
| `BROWSER_TYPE` | Browser type: `chromium-headless`, `chromium`, or `chrome` | `chromium-headless` |
| `OPENAI_API_KEY` | OpenAI API key (recommended for best results) | *(empty)* |
| `ANTHROPIC_API_KEY` | Anthropic API key (alternative to OpenAI) | *(empty)* |
| `POSTGRES_PASSWORD` | PostgreSQL password | `skyvern` |
| `VITE_API_BASE_URL` | Skyvern API URL as seen from the user's browser | `http://localhost:8000` |
| `VITE_WSS_BASE_URL` | WebSocket URL for live session streaming | `ws://localhost:8000` |
## Volumes
- `skyvern_artifacts`: Downloaded files and task artifacts.
- `skyvern_videos`: Browser session recordings.
- `skyvern_har`: HTTP Archive (HAR) files for debugging.
- `skyvern_postgres_data`: PostgreSQL data persistence.
## Ports
- **8000**: Skyvern REST API
- **8080**: Skyvern web UI
## Resource Requirements
| Service | CPU Limit | Memory Limit |
| ---------- | --------- | ------------ |
| skyvern | 2 | 4 GB |
| skyvern-ui | 0.5 | 256 MB |
| postgres | 1 | 1 GB |
The `skyvern` service includes Playwright and Chromium. Allocate **4+ GB RAM** and **2+ CPU cores** for reliable browser automation.
## Notes
- Database migrations run automatically on startup via Alembic.
- If deploying behind a reverse proxy, update `VITE_API_BASE_URL` and `VITE_WSS_BASE_URL` to your public domain.
- The `SKYVERN_API_KEY` must be included in API requests as the `x-api-key` header.
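A quick way to exercise both from the host; the heartbeat path is the one the container healthcheck polls, and the second call is only a placeholder for an authenticated request (check the Skyvern docs for the exact API surface):

```bash
# Liveness: no auth required
curl -fsS http://localhost:8000/api/v1/heartbeat && echo "skyvern OK"
# Authenticated calls attach the key header (path shown is illustrative)
curl -fsS -H "x-api-key: ${SKYVERN_API_KEY}" http://localhost:8000/api/v1/tasks
```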
## Documentation
- [Skyvern Docs](https://docs.skyvern.com)
- [GitHub](https://github.com/Skyvern-AI/skyvern)
+84
@@ -0,0 +1,84 @@
# Skyvern
[English](./README.md) | [中文](./README.zh.md)
快速开始:<https://docs.skyvern.com>。
此服务用于部署 Skyvern,一个由 AI 驱动的浏览器自动化平台,使用 LLM 和计算机视觉在 Web 浏览器中执行任务。无需编写自定义脚本,即可填写表单、导航网站和完成多步骤工作流。
## 服务
- **skyvern**:集成了 Playwright + Chromium 的 Skyvern API 服务器。
- **skyvern-ui**:用于任务管理和浏览器会话查看的 React Web UI。
- **postgres**:PostgreSQL 数据库,用于存储任务历史和状态。
## 快速开始
1. 将 `.env.example` 复制为 `.env`:
```bash
cp .env.example .env
```
2. 在 `.env` 中设置 LLM API Key 并更改 Skyvern API Key:
```
SKYVERN_API_KEY=your-strong-api-key
OPENAI_API_KEY=sk-...
```
3. 启动服务:
```bash
docker compose up -d
```
4. 打开 `http://localhost:8080` 访问 Web UI,或通过 `http://localhost:8000` 向 API 发送任务。
## 核心环境变量
| 变量 | 说明 | 默认值 |
| -------------------------- | ------------------------------------------------------- | ------------------------ |
| `SKYVERN_VERSION` | 镜像版本(同时适用于 skyvern 和 skyvern-ui) | `v1.0.31` |
| `SKYVERN_PORT_OVERRIDE` | API 宿主机端口 | `8000` |
| `SKYVERN_UI_PORT_OVERRIDE` | Web UI 宿主机端口 | `8080` |
| `SKYVERN_API_KEY` | 请求 Skyvern 服务器的认证 API Key——**请修改** | 占位符 |
| `BROWSER_TYPE` | 浏览器类型:`chromium-headless`、`chromium` 或 `chrome` | `chromium-headless` |
| `OPENAI_API_KEY` | OpenAI API Key(推荐,效果最佳) | *(空)* |
| `ANTHROPIC_API_KEY` | Anthropic API Key(OpenAI 的替代方案) | *(空)* |
| `POSTGRES_PASSWORD` | PostgreSQL 密码 | `skyvern` |
| `VITE_API_BASE_URL` | 从用户浏览器访问的 Skyvern API URL | `http://localhost:8000` |
| `VITE_WSS_BASE_URL` | 实时会话流的 WebSocket URL | `ws://localhost:8000` |
## 数据卷
- `skyvern_artifacts`:下载的文件和任务产物。
- `skyvern_videos`:浏览器会话录像。
- `skyvern_har`:用于调试的 HTTP 存档(HAR)文件。
- `skyvern_postgres_data`:PostgreSQL 数据持久化。
## 端口
- **8000**:Skyvern REST API
- **8080**:Skyvern Web UI
## 资源需求
| 服务 | CPU 限制 | 内存限制 |
| ---------- | -------- | -------- |
| skyvern | 2 | 4 GB |
| skyvern-ui | 0.5 | 256 MB |
| postgres | 1 | 1 GB |
`skyvern` 服务包含 Playwright 和 Chromium,需分配 **4+ GB RAM** 和 **2+ CPU 核心**以保证浏览器自动化的稳定运行。
## 说明
- 数据库迁移通过 Alembic 在启动时自动运行。
- 如果部署在反向代理后,请将 `VITE_API_BASE_URL` 和 `VITE_WSS_BASE_URL` 更新为你的公网域名。
- API 请求中必须在 `x-api-key` 请求头中包含 `SKYVERN_API_KEY`。
## 文档
- [Skyvern 文档](https://docs.skyvern.com)
- [GitHub](https://github.com/Skyvern-AI/skyvern)
+110
@@ -0,0 +1,110 @@
# Change SKYVERN_API_KEY before exposing this stack externally.
# Fields marked with CHANGEME must be updated for any non-local deployment.
x-defaults: &defaults
restart: unless-stopped
logging:
driver: json-file
options:
max-size: 100m
max-file: '3'
services:
skyvern:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}skyvern/skyvern:${SKYVERN_VERSION:-v1.0.31}
depends_on:
postgres:
condition: service_healthy
ports:
- '${SKYVERN_PORT_OVERRIDE:-8000}:8000'
volumes:
- skyvern_artifacts:/data/artifacts
- skyvern_videos:/data/videos
- skyvern_har:/data/har
environment:
- TZ=${TZ:-UTC}
- DATABASE_STRING=postgresql+psycopg2://skyvern:${POSTGRES_PASSWORD:-skyvern}@postgres:5432/skyvern
- SKYVERN_API_KEY=${SKYVERN_API_KEY:-changeme_skyvern_api_key_CHANGEME}
- BROWSER_TYPE=${BROWSER_TYPE:-chromium-headless}
- VIDEO_PATH=/data/videos
- HAR_PATH=/data/har
- ARTIFACT_STORAGE_PATH=/data/artifacts
- OPENAI_API_KEY=${OPENAI_API_KEY:-}
- ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
healthcheck:
test:
- CMD
- python3
- -c
- "import urllib.request; urllib.request.urlopen('http://localhost:8000/api/v1/heartbeat')"
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
deploy:
resources:
limits:
cpus: ${SKYVERN_CPU_LIMIT:-2}
memory: ${SKYVERN_MEMORY_LIMIT:-4G}
reservations:
cpus: ${SKYVERN_CPU_RESERVATION:-0.5}
memory: ${SKYVERN_MEMORY_RESERVATION:-1G}
skyvern-ui:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}skyvern/skyvern-ui:${SKYVERN_VERSION:-v1.0.31}
depends_on:
skyvern:
condition: service_healthy
ports:
- '${SKYVERN_UI_PORT_OVERRIDE:-8080}:8080'
environment:
- TZ=${TZ:-UTC}
- VITE_API_BASE_URL=${VITE_API_BASE_URL:-http://localhost:8000}
- VITE_WSS_BASE_URL=${VITE_WSS_BASE_URL:-ws://localhost:8000}
healthcheck:
test: [CMD-SHELL, "curl -sf http://localhost:8080/ > /dev/null 2>&1 || exit 1"]
interval: 30s
timeout: 10s
retries: 3
start_period: 15s
deploy:
resources:
limits:
cpus: ${SKYVERN_UI_CPU_LIMIT:-0.5}
memory: ${SKYVERN_UI_MEMORY_LIMIT:-256M}
reservations:
cpus: ${SKYVERN_UI_CPU_RESERVATION:-0.1}
memory: ${SKYVERN_UI_MEMORY_RESERVATION:-64M}
postgres:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}postgres:${POSTGRES_VERSION:-15}
environment:
- POSTGRES_USER=skyvern
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-skyvern}
- POSTGRES_DB=skyvern
- TZ=UTC
- PGTZ=UTC
volumes:
- skyvern_postgres_data:/var/lib/postgresql/data
healthcheck:
test: [CMD-SHELL, pg_isready -U skyvern]
interval: 5s
timeout: 5s
retries: 10
deploy:
resources:
limits:
cpus: ${POSTGRES_CPU_LIMIT:-1}
memory: ${POSTGRES_MEMORY_LIMIT:-1G}
reservations:
cpus: ${POSTGRES_CPU_RESERVATION:-0.25}
memory: ${POSTGRES_MEMORY_RESERVATION:-256M}
volumes:
skyvern_artifacts:
skyvern_videos:
skyvern_har:
skyvern_postgres_data:
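After the first start, one way to confirm the Alembic migrations ran is to list the tables they created, reusing the credentials defined above (a sketch; the exact table set depends on the Skyvern version):

```bash
docker compose exec postgres psql -U skyvern -d skyvern -c '\dt'
```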
+1 -1
@@ -2,7 +2,7 @@
STIRLING_VERSION="latest"
# Port override
PORT_OVERRIDE=8080
STIRLING_PORT_OVERRIDE=8080
# Security settings
ENABLE_SECURITY="false"
+1 -1
@@ -13,7 +13,7 @@ This service deploys Stirling-PDF, a locally hosted web-based PDF manipulation t
| Variable Name | Description | Default Value |
| -------------------- | ------------------------------------- | -------------- |
| STIRLING_VERSION | Stirling-PDF image version | `latest` |
| PORT_OVERRIDE | Host port mapping | `8080` |
| STIRLING_PORT_OVERRIDE | Host port mapping | `8080` |
| ENABLE_SECURITY | Enable security features | `false` |
| ENABLE_LOGIN | Enable login functionality | `false` |
| INITIAL_USERNAME | Initial admin username | `admin` |
+1 -1
@@ -13,7 +13,7 @@
| 变量名 | 说明 | 默认值 |
| -------------------- | ---------------------- | -------------- |
| STIRLING_VERSION | Stirling-PDF 镜像版本 | `latest` |
| PORT_OVERRIDE | 主机端口映射 | `8080` |
| STIRLING_PORT_OVERRIDE | 主机端口映射 | `8080` |
| ENABLE_SECURITY | 启用安全功能 | `false` |
| ENABLE_LOGIN | 启用登录功能 | `false` |
| INITIAL_USERNAME | 初始管理员用户名 | `admin` |
+1 -1
@@ -11,7 +11,7 @@ services:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}stirlingtools/stirling-pdf:${STIRLING_VERSION:-latest}
ports:
- '${PORT_OVERRIDE:-8080}:8080'
- '${STIRLING_PORT_OVERRIDE:-8080}:8080'
volumes:
- stirling_trainingData:/usr/share/tessdata
- stirling_configs:/configs
+36
@@ -0,0 +1,36 @@
# --- Image / build ---
# Registry prefix for pulling images through a private registry or mirror (e.g. registry.example.com/)
GLOBAL_REGISTRY=
# Tag of the locally built image
CUBE_SANDBOX_VERSION=0.1.7
# Base image for the wrapper container.
# Default works globally. In mainland China, override with a regional mirror:
# UBUNTU_IMAGE=docker.m.daocloud.io/library/ubuntu:22.04
# UBUNTU_IMAGE=ccr.ccs.tencentyun.com/library/ubuntu:22.04
UBUNTU_IMAGE=ubuntu:22.04
# --- Runtime ---
# Timezone inside the container
TZ=Asia/Shanghai
# Mirror used by the upstream installer:
# cn -> https://cnb.cool/CubeSandbox + Tencent Cloud container registry (recommended in China)
# gh -> https://github.com (slower in China but works elsewhere)
CUBE_MIRROR=cn
# Size of the XFS-formatted loop file mounted at /data/cubelet inside the
# container. install.sh hard-requires XFS; the file lives on the cube_data
# named volume so it persists across container restarts.
CUBE_XFS_SIZE=50G
# Set to 1 to force re-running install.sh on next start
CUBE_FORCE_REINSTALL=0
# --- Resources ---
# CubeSandbox runs MySQL + Redis + CubeProxy + CoreDNS + CubeMaster + CubeAPI +
# Cubelet + network-agent inside the wrapper container, then spawns MicroVMs.
# Give it enough headroom; 16 GiB / 8 vCPU is a comfortable single-node default.
CUBE_CPU_LIMIT=8
CUBE_MEMORY_LIMIT=16G
CUBE_CPU_RESERVATION=2
CUBE_MEMORY_RESERVATION=8G
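Since the first boot is long and everything here depends on KVM, a host-side pre-flight check saves a failed install; a minimal sketch:

```bash
# The device must exist on the host and be usable from inside a container
test -c /dev/kvm && echo "/dev/kvm present" || echo "missing /dev/kvm: enable nested virtualization"
docker run --rm --device /dev/kvm "${UBUNTU_IMAGE:-ubuntu:22.04}" ls -l /dev/kvm
```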
+134
@@ -0,0 +1,134 @@
# CubeSandbox in a privileged systemd+DinD container.
#
# CubeSandbox's official install.sh is designed for bare metal / VMs and
# requires a running systemd (it registers all services as systemd units).
# This image therefore runs systemd as PID 1 rather than tini.
#
# UBUNTU_IMAGE may be overridden to use a regional mirror, e.g.:
# docker.m.daocloud.io/library/ubuntu:22.04 (China DaoCloud mirror)
# ccr.ccs.tencentyun.com/library/ubuntu:22.04 (Tencent Cloud mirror)
ARG UBUNTU_IMAGE=ubuntu:22.04
FROM ${UBUNTU_IMAGE}
ENV DEBIAN_FRONTEND=noninteractive \
LANG=C.UTF-8 \
LC_ALL=C.UTF-8
# Core system deps + systemd as the container init system.
# deploy/one-click/install.sh requires: tar, rg (ripgrep), ss (iproute2),
# bash, curl, sed, pgrep (procps), date, docker, python3, ip (iproute2), awk (gawk).
# Plus DinD prerequisites: iptables, ca-certificates, gnupg.
# Plus xfsprogs for the XFS-backed /data/cubelet (install.sh hard requirement).
RUN apt-get update && apt-get install -y --no-install-recommends \
systemd \
systemd-sysv \
dbus \
ca-certificates \
curl \
gnupg \
lsb-release \
bash \
tar \
ripgrep \
iproute2 \
procps \
gawk \
sed \
python3 \
python3-pip \
iptables \
kmod \
xfsprogs \
e2fsprogs \
util-linux \
file \
less \
&& rm -rf /var/lib/apt/lists/*
# Mask systemd units that are irrelevant or will fail in a container context.
RUN for unit in \
getty@tty1.service \
apt-daily.service \
apt-daily-upgrade.service \
apt-daily.timer \
apt-daily-upgrade.timer \
motd-news.service \
motd-news.timer \
systemd-networkd.service \
systemd-networkd-wait-online.service \
systemd-udevd.service \
systemd-udevd-control.socket \
systemd-udevd-kernel.socket \
systemd-logind.service \
e2scrub_reap.service \
apparmor.service; do \
ln -sf /dev/null "/etc/systemd/system/${unit}"; \
done
# Install Docker CE + Compose plugin from the official Docker apt repository.
RUN install -m 0755 -d /etc/apt/keyrings \
&& curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
| gpg --dearmor -o /etc/apt/keyrings/docker.gpg \
&& chmod a+r /etc/apt/keyrings/docker.gpg \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" \
> /etc/apt/sources.list.d/docker.list \
&& apt-get update \
&& apt-get install -y --no-install-recommends \
docker-ce \
docker-ce-cli \
containerd.io \
docker-buildx-plugin \
docker-compose-plugin \
&& rm -rf /var/lib/apt/lists/*
# Configure Docker daemon defaults.
RUN mkdir -p /etc/docker && printf '%s\n' \
'{' \
' "log-driver": "json-file",' \
' "log-opts": { "max-size": "50m", "max-file": "3" },' \
' "storage-driver": "overlay2"' \
'}' > /etc/docker/daemon.json
# Install E2B Python SDK so smoke tests can run from inside the container
# without polluting the WSL2 host with pip packages.
RUN pip3 install --no-cache-dir --break-system-packages \
e2b-code-interpreter==1.0.* \
requests \
|| pip3 install --no-cache-dir \
e2b-code-interpreter==1.0.* \
requests
# Persistent locations the installer writes to.
VOLUME ["/var/lib/docker", "/data", "/usr/local/services/cubetoolbox"]
# Helper scripts for the bootstrap flow.
COPY cube-init.sh /usr/local/bin/cube-init.sh
COPY cube-xfs-setup.sh /usr/local/bin/cube-xfs-setup.sh
COPY cube-install.sh /usr/local/bin/cube-install.sh
RUN chmod +x \
/usr/local/bin/cube-init.sh \
/usr/local/bin/cube-xfs-setup.sh \
/usr/local/bin/cube-install.sh
# Systemd service units for the CubeSandbox bootstrap sequence.
COPY cube-xfs-mount.service /etc/systemd/system/cube-xfs-mount.service
COPY cube-install.service /etc/systemd/system/cube-install.service
# Enable services by creating the wanted-by symlinks that systemctl enable
# would create (systemctl cannot run during a Docker image build).
RUN mkdir -p /etc/systemd/system/multi-user.target.wants \
&& ln -sf /etc/systemd/system/cube-xfs-mount.service \
/etc/systemd/system/multi-user.target.wants/cube-xfs-mount.service \
&& ln -sf /etc/systemd/system/cube-install.service \
/etc/systemd/system/multi-user.target.wants/cube-install.service \
&& ln -sf /lib/systemd/system/docker.service \
/etc/systemd/system/multi-user.target.wants/docker.service \
&& ln -sf /lib/systemd/system/containerd.service \
/etc/systemd/system/multi-user.target.wants/containerd.service
# cube-init.sh captures CUBE_* and TZ env vars from the container runtime
# into /etc/cube-sandbox.env (readable by systemd EnvironmentFile=), then
# execs /lib/systemd/systemd as PID 1.
ENTRYPOINT ["/usr/local/bin/cube-init.sh"]
CMD ["/lib/systemd/systemd"]
@@ -0,0 +1,150 @@
# CubeSandbox
Run [TencentCloud CubeSandbox](https://github.com/TencentCloud/CubeSandbox) — a KVM-based MicroVM sandbox compatible with the E2B SDK — entirely inside a single privileged Docker container, without modifying the host system.
## Why this is unusual
CubeSandbox is **not** a containerized project upstream. Its core components (Cubelet, network-agent, cube-shim, cube-runtime, CubeAPI, CubeMaster) ship as host binaries and the official `install.sh` writes them to `/usr/local/services/cubetoolbox`, then starts them as native processes that talk to the host containerd.
This stack runs the **entire installer inside one privileged container** that:
1. Runs its own `dockerd` (Docker-in-Docker) for MySQL / Redis / CubeProxy / CoreDNS dependencies.
2. Creates an XFS-formatted loop volume at `/data/cubelet` (install.sh hard-requires XFS).
3. Executes the upstream [`online-install.sh`](https://github.com/TencentCloud/CubeSandbox/blob/master/deploy/one-click/online-install.sh) on first boot.
4. Tails logs to keep the container alive.
The result is essentially a **single-node CubeSandbox appliance container** suitable for evaluating CubeSandbox without changing your host.
## Features
- Built on Ubuntu 22.04 (the project's primary test environment)
- Self-contained: no host packages installed, no host paths mounted
- KVM passed through via `/dev/kvm`
- Persistent volumes for installed binaries, sandbox data, and DinD storage
- Health check covering CubeAPI, CubeMaster, and network-agent
- China-mainland mirror (`MIRROR=cn`) used by default
- Smoke-test script included (`smoke-test.sh`)
## Requirements
- Linux host (or WSL2 with KVM passthrough) with `/dev/kvm` available to Docker
- Nested virtualization enabled (Intel VT-x / AMD-V exposed)
- cgroup v2 (modern kernels — Debian 12+, Ubuntu 22.04+, kernel 5.10+)
- ≥ 16 GiB RAM and ≥ 8 vCPU recommended (8 GiB is the upstream minimum)
- ≥ 60 GiB free disk for the XFS loop file + Docker image layers
- Outbound internet access to download the install bundle (several hundred MB) and Docker images
> On WSL2: confirm `/dev/kvm` is present (`ls -l /dev/kvm`) and your user is in the `kvm` group on the host distro.
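A quick host pre-check (a sketch; the `CgroupVersion` field requires a reasonably recent Docker CLI):

```bash
ls -l /dev/kvm                              # KVM device exposed?
docker info --format '{{.CgroupVersion}}'   # should print "2"
stat -fc %T /sys/fs/cgroup                  # "cgroup2fs" confirms cgroup v2
```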
## Quick Start
1. Copy the example environment file (optional — defaults work):
```bash
cp .env.example .env
```
2. Build and start (the first run downloads the CubeSandbox bundle and several Docker images — expect 5-20 minutes):
```bash
docker compose up -d --build
```
3. Watch the bootstrap log:
```bash
docker compose logs -f cube-sandbox
```
Wait for the `==================== CubeSandbox is up ====================` banner.
4. Verify all services are healthy:
```bash
curl -fsS http://127.0.0.1:3000/health && echo # CubeAPI
curl -fsS http://127.0.0.1:8089/notify/health && echo # CubeMaster
curl -fsS http://127.0.0.1:19090/healthz && echo # network-agent
```
5. (Optional) Run the smoke test:
```bash
bash smoke-test.sh # Health checks only
SKIP_TEMPLATE_BUILD=1 bash smoke-test.sh # Skip the slow template build
```
## Endpoints
Because the container uses `network_mode: host`, all CubeSandbox HTTP endpoints are reachable directly on the host loopback:
| Service | URL |
| ------------- | ------------------------------------ |
| CubeAPI | `http://127.0.0.1:3000` |
| CubeMaster | `http://127.0.0.1:8089` |
| network-agent | `http://127.0.0.1:19090` |
The CubeAPI exposes the E2B-compatible REST surface; point the [`e2b` Python SDK](https://e2b.dev) at `http://127.0.0.1:3000` to create sandboxes.
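For example, a minimal end-to-end check (the same calls the bundled `e2b-test.py` performs; `api_key` is a dummy placeholder value for the local instance, as in that script):

```bash
# The SDK is preinstalled inside the container; elsewhere, `pip install e2b-code-interpreter` first.
python3 - <<'PY'
from e2b_code_interpreter import Sandbox

# debug=True points the SDK at http://localhost:3000 instead of the E2B cloud.
sb = Sandbox(debug=True, api_key="local-test", timeout=120)
print(sb.run_code('print("Hello from CubeSandbox!")').text)
sb.kill()
PY
```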
## Configuration
Key environment variables (see `.env.example` for the full list):
| Variable | Description | Default |
| -------------------------- | ------------------------------------------------------------ | ---------------- |
| `GLOBAL_REGISTRY` | Image registry prefix when pushing to a private registry | _(empty)_ |
| `CUBE_SANDBOX_VERSION` | Tag of the locally built wrapper image | `0.1.7` |
| `UBUNTU_IMAGE` | Base Ubuntu version | `22.04` |
| `TZ` | Container timezone | `Asia/Shanghai` |
| `CUBE_MIRROR` | Installer mirror — `cn` (China CDN) or `gh` (GitHub) | `cn` |
| `CUBE_XFS_SIZE` | Size of the XFS loop file backing `/data/cubelet` | `50G` |
| `CUBE_FORCE_REINSTALL` | Set to `1` to re-run `install.sh` on next start | `0` |
| `CUBE_CPU_LIMIT` | CPU limit | `8` |
| `CUBE_MEMORY_LIMIT` | Memory limit | `16G` |
| `CUBE_CPU_RESERVATION` | CPU reservation | `2` |
| `CUBE_MEMORY_RESERVATION` | Memory reservation | `8G` |
## Storage
Three named volumes hold persistent state — your installed CubeSandbox survives `docker compose down && up`:
| Volume | Path inside container | Purpose |
| --------------- | ----------------------------------- | -------------------------------------------------- |
| `cube_dind_data` | `/var/lib/docker` | DinD daemon images / containers / volumes |
| `cube_data` | `/data` | XFS loop image, `/data/cubelet`, sandbox disks, logs |
| `cube_toolbox` | `/usr/local/services/cubetoolbox` | Installed CubeSandbox binaries and scripts |
To wipe everything and reinstall from scratch:
```bash
docker compose down -v
docker compose up -d --build
```
## Security Considerations
⚠️ This stack is **highly privileged by design**. Only run it in trusted environments.
- `privileged: true` — required to mount the XFS loop volume, manage TAP interfaces, and run KVM
- `network_mode: host` — required so Cubelet can register the node IP and manage host TAP interfaces
- `cgroup: host` — required for the in-container `dockerd` to share the host's cgroup v2 hierarchy
- `/dev/kvm` and `/dev/net/tun` are passed through
These permissions are equivalent to what `online-install.sh` would request if it were run directly on your host. The advantage of the container wrapper is that all installer side-effects are confined to the three named volumes above, so removing the stack leaves no host residue.
## Troubleshooting
- **`/dev/kvm not found`** — the host does not expose KVM to Docker. On WSL2, confirm nested virtualization is enabled and the kernel exposes `/dev/kvm`. On bare metal, ensure VT-x / AMD-V is enabled in BIOS.
- **First boot hangs at "Running CubeSandbox one-click installer"** — the installer is downloading the bundle (several hundred MB) and pulling several Docker images. Check progress with `docker compose logs -f cube-sandbox`.
- **`quickcheck.sh reported issues`** — open a shell in the container and inspect logs:
```bash
docker compose exec cube-sandbox bash
ls /data/log/
tail -f /data/log/CubeAPI/*.log
```
- **Re-run the installer cleanly** — set `CUBE_FORCE_REINSTALL=1` in `.env` and `docker compose up -d --force-recreate`.
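- **Inspect the systemd units**: the bootstrap runs as ordinary systemd services inside the container, so `systemctl` and `journalctl` work as usual:
```bash
docker compose exec cube-sandbox systemctl status cube-install.service
docker compose exec cube-sandbox journalctl -u cube-install.service -n 100 --no-pager
```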
## Project Information
- Upstream: https://github.com/TencentCloud/CubeSandbox
- License: upstream project is Apache-2.0; this configuration is provided as-is for the Compose Anything project.
@@ -0,0 +1,151 @@
# CubeSandbox
在单个特权 Docker 容器内完整运行 [腾讯云 CubeSandbox](https://github.com/TencentCloud/CubeSandbox)——一个基于 KVM、兼容 E2B SDK 的 MicroVM 沙箱——无需修改宿主系统。
## 为什么这个栈与众不同
CubeSandbox 上游**并不是**一个容器化项目。它的核心组件(Cubelet、network-agent、cube-shim、cube-runtime、CubeAPI、CubeMaster)以宿主机二进制形式分发,官方 `install.sh` 会把它们写入 `/usr/local/services/cubetoolbox`,然后作为本机进程启动并与宿主 containerd 集成。
本栈把**整个安装器塞进一个特权容器**:
1. 容器内自起一个 `dockerd`(Docker-in-Docker),用于运行 MySQL / Redis / CubeProxy / CoreDNS 等依赖。
2.`/data/cubelet` 创建一个 XFS 格式的 loop 卷(install.sh 强制要求 XFS)。
3. 首次启动时执行上游的 [`online-install.sh`](https://github.com/TencentCloud/CubeSandbox/blob/master/deploy/one-click/online-install.sh)。
4. 通过 tail 日志保持容器存活。
最终得到一个**单节点 CubeSandbox 一体化容器**,方便在不改动宿主的前提下评估 CubeSandbox。
## 特性
- 基于 Ubuntu 22.04(项目主要测试环境)
- 自包含:不安装宿主机软件包,不挂载宿主路径
- 通过 `/dev/kvm` 透传 KVM
- 三个持久化命名卷分别保存安装产物、沙箱数据和 DinD 存储
- 健康检查覆盖 CubeAPI、CubeMaster、network-agent
- 默认使用国内镜像 (`MIRROR=cn`)
- 内置冒烟测试脚本(`smoke-test.sh`)
## 环境要求
- Linux 宿主(或开启 KVM 透传的 WSL2),`/dev/kvm` 对 Docker 可见
- 已开启嵌套虚拟化(暴露 Intel VT-x / AMD-V
- cgroup v2(现代内核——Debian 12+、Ubuntu 22.04+、kernel 5.10+
- 推荐 ≥ 16 GiB 内存、≥ 8 vCPU(上游最低 8 GiB
- 至少 60 GiB 空闲磁盘,用于 XFS loop 文件 + Docker 镜像层
- 可访问外网,用于下载安装包(数百 MB)和 Docker 镜像
> WSL2 用户:先确认 `/dev/kvm` 存在(`ls -l /dev/kvm`),并且当前用户在宿主发行版的 `kvm` 组中。
## 快速开始
1. 复制示例环境文件(可选,默认值即可使用):
```bash
cp .env.example .env
```
2. 构建并启动(首次运行会下载 CubeSandbox 安装包和若干 Docker 镜像,预计 5-20 分钟):
```bash
docker compose up -d --build
```
3. 观察启动日志:
```bash
docker compose logs -f cube-sandbox
```
等待出现 `==================== CubeSandbox is up ====================` 横幅。
4. 验证所有服务健康:
```bash
curl -fsS http://127.0.0.1:3000/health && echo # CubeAPI
curl -fsS http://127.0.0.1:8089/notify/health && echo # CubeMaster
curl -fsS http://127.0.0.1:19090/healthz && echo # network-agent
```
5. (可选)运行冒烟测试:
```bash
bash smoke-test.sh # 仅做健康检查
SKIP_TEMPLATE_BUILD=1 bash smoke-test.sh # 跳过较慢的模板构建步骤
```
## 服务端点
由于容器使用 `network_mode: host`CubeSandbox 的所有 HTTP 端点都直接暴露在宿主回环地址上:
| 服务 | URL |
| ------------- | ------------------------------------ |
| CubeAPI | `http://127.0.0.1:3000` |
| CubeMaster | `http://127.0.0.1:8089` |
| network-agent | `http://127.0.0.1:19090` |
CubeAPI 暴露兼容 E2B 的 REST 接口;将 [`e2b` Python SDK](https://e2b.dev) 指向 `http://127.0.0.1:3000` 即可创建沙箱。
## 配置项
主要环境变量(完整列表见 `.env.example`):
| 变量 | 描述 | 默认值 |
| -------------------------- | --------------------------------------------------- | --------------- |
| `GLOBAL_REGISTRY` | 推送到私有仓库时使用的镜像前缀 | _(空)_ |
| `CUBE_SANDBOX_VERSION` | 本地构建的封装镜像 tag | `0.1.7` |
| `UBUNTU_IMAGE` | 基础 Ubuntu 版本 | `22.04` |
| `TZ` | 容器时区 | `Asia/Shanghai` |
| `CUBE_MIRROR` | 安装器镜像源——`cn`(国内 CDN)或 `gh`(GitHub) | `cn` |
| `CUBE_XFS_SIZE` | `/data/cubelet` 背后 XFS loop 文件大小 | `50G` |
| `CUBE_FORCE_REINSTALL` | 设为 `1` 时下次启动会重跑 `install.sh` | `0` |
| `CUBE_CPU_LIMIT` | CPU 上限 | `8` |
| `CUBE_MEMORY_LIMIT` | 内存上限 | `16G` |
| `CUBE_CPU_RESERVATION` | CPU 预留 | `2` |
| `CUBE_MEMORY_RESERVATION` | 内存预留 | `8G` |
## 存储
三个命名卷保存所有持久化状态——`docker compose down && up` 不会丢失安装:
| 卷 | 容器内路径 | 用途 |
| ---------------- | ----------------------------------- | --------------------------------------------------- |
| `cube_dind_data` | `/var/lib/docker` | DinD 守护进程的镜像 / 容器 / 卷 |
| `cube_data` | `/data` | XFS loop 文件、`/data/cubelet`、沙箱磁盘、日志 |
| `cube_toolbox` | `/usr/local/services/cubetoolbox` | 已安装的 CubeSandbox 二进制和脚本 |
完全清空并从头重装:
```bash
docker compose down -v
docker compose up -d --build
```
## 安全说明
⚠️ 本栈**按设计是高特权的**,仅在受信环境中使用。
- `privileged: true`——挂载 XFS loop 卷、管理 TAP 接口、运行 KVM 所必需
- `network_mode: host`——Cubelet 注册节点 IP、管理宿主 TAP 接口所必需
- `cgroup: host`——容器内的 `dockerd` 共享宿主 cgroup v2 层级所必需
- 透传 `/dev/kvm` 和 `/dev/net/tun`
这些权限等同于直接在宿主上运行 `online-install.sh` 所需的权限。容器封装的好处在于:所有安装副作用都被限制在上述三个命名卷内,删除本栈不会在宿主上留下任何残留。
## 故障排查
- **`/dev/kvm not found`**:宿主未对 Docker 暴露 KVM。WSL2 用户请确认嵌套虚拟化已启用且内核暴露 `/dev/kvm`;裸金属用户请在 BIOS 中启用 VT-x / AMD-V。
- **首次启动卡在 "Running CubeSandbox one-click installer"**:安装器正在下载安装包(数百 MB)并拉取若干 Docker 镜像。用 `docker compose logs -f cube-sandbox` 查看进度。
- **`quickcheck.sh reported issues`**:进入容器查看日志:
```bash
docker compose exec cube-sandbox bash
ls /data/log/
tail -f /data/log/CubeAPI/*.log
```
- **干净重跑安装**:在 `.env` 中设置 `CUBE_FORCE_REINSTALL=1`,然后 `docker compose up -d --force-recreate`。
## 项目信息
- 上游项目:https://github.com/TencentCloud/CubeSandbox
- 许可证:上游项目采用 Apache-2.0;本配置以 as-is 形式提供给 Compose Anything 项目使用。
@@ -0,0 +1,43 @@
#!/usr/bin/env bash
# Thin PID-1 wrapper: capture container runtime env vars into a file that
# systemd EnvironmentFile= can read, then exec systemd as PID 1.
#
# This script runs BEFORE systemd, so it must be kept minimal and must not
# depend on any CubeSandbox service being available.
set -euo pipefail
# Write CUBE_* and TZ vars to /etc/cube-sandbox.env so that
# cube-xfs-mount.service and cube-install.service can pick them up via
# EnvironmentFile=/etc/cube-sandbox.env.
install -m 0644 /dev/null /etc/cube-sandbox.env
printenv | grep -E '^(CUBE_|TZ=)' >> /etc/cube-sandbox.env 2>/dev/null || true
# Mount BPF filesystem required by network-agent eBPF map pinning.
# /sys/fs/bpf is not auto-mounted in Docker containers even when the kernel
# supports BPF; without it network-agent crashes on startup with
# "not on a bpf filesystem" and then a nil-pointer panic.
if ! mountpoint -q /sys/fs/bpf 2>/dev/null; then
mkdir -p /sys/fs/bpf
mount -t bpf none /sys/fs/bpf 2>/dev/null \
|| echo "[cube-init] WARNING: could not mount BPF filesystem; network-agent may fail" >&2
fi
# Redirect CubeMaster's rootfs artifact workspace to the persistent data volume.
# Template builds export the sandbox image into a tar (often > 2 GB) before
# converting it to an ext4 disk image. /tmp is only a 2 GB tmpfs and is wiped on
# every container restart; /data (a named Docker volume) has 50+ GB and is
# persistent.
#
# We use a bind mount instead of a symlink: CubeMaster's Go startup code calls
# os.RemoveAll + os.MkdirAll on this path, which would silently replace a
# symlink with a real tmpfs directory. A bind-mount point returns EBUSY on
# removal, keeping the mount intact so all writes land on /data.
mkdir -p /data/cubemaster-rootfs-artifacts
mkdir -p /tmp/cubemaster-rootfs-artifacts
if ! mountpoint -q /tmp/cubemaster-rootfs-artifacts 2>/dev/null; then
mount --bind /data/cubemaster-rootfs-artifacts /tmp/cubemaster-rootfs-artifacts \
|| echo "[cube-init] WARNING: bind mount for cubemaster-rootfs-artifacts failed; writes may fill tmpfs" >&2
fi
# Hand off to systemd (or whatever CMD was passed to the container).
exec "$@"
@@ -0,0 +1,24 @@
[Unit]
Description=CubeSandbox one-click installer
# Requires both the XFS volume and dockerd to be ready before running.
# install.sh will pull Docker images (MySQL, Redis, CubeProxy, CoreDNS)
# and then register Cubelet / CubeAPI / CubeMaster / network-agent as
# systemd units via `systemctl enable --now`.
After=docker.service cube-xfs-mount.service
Requires=docker.service cube-xfs-mount.service
[Service]
Type=oneshot
RemainAfterExit=yes
EnvironmentFile=-/etc/cube-sandbox.env
ExecStart=/usr/local/bin/cube-install.sh
# First boot downloads ~400 MB + pulls several Docker images; allow 30 min.
TimeoutStartSec=1800
# Retry on transient network failures (e.g. download interrupted).
Restart=on-failure
RestartSec=30s
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
@@ -0,0 +1,160 @@
#!/usr/bin/env bash
# Run the CubeSandbox one-click installer, then run quickcheck.sh.
# Called by cube-install.service (Type=oneshot) after docker.service and
# cube-xfs-mount.service are both active.
set -euo pipefail
log() { printf '[cube-install] %s\n' "$*"; }
err() { printf '[cube-install] ERROR: %s\n' "$*" >&2; }
INSTALL_PREFIX="/usr/local/services/cubetoolbox"
QUICKCHECK="${INSTALL_PREFIX}/scripts/one-click/quickcheck.sh"
UP_SCRIPT="${INSTALL_PREFIX}/scripts/one-click/up-with-deps.sh"
MIRROR="${CUBE_MIRROR:-cn}"
INSTALLER_URL_CN="https://cnb.cool/CubeSandbox/CubeSandbox/-/git/raw/master/deploy/one-click/online-install.sh"
INSTALLER_URL_GH="https://github.com/tencentcloud/CubeSandbox/raw/master/deploy/one-click/online-install.sh"
# /dev/kvm sanity — required by the MicroVM hypervisor.
if [ ! -c /dev/kvm ]; then
err "/dev/kvm is not available inside the container."
err "Ensure the compose stack passes --device /dev/kvm and nested virt is enabled on the host."
exit 1
fi
log "KVM device present: $(ls -l /dev/kvm)"
# Wait for dockerd (started by docker.service) to be ready before install.sh
# tries to pull MySQL / Redis / CubeProxy images.
log "Waiting for docker daemon ..."
for i in $(seq 1 60); do
if docker info >/dev/null 2>&1; then
log "docker ready."
break
fi
sleep 2
done
if ! docker info >/dev/null 2>&1; then
err "docker daemon not ready after 120 s"
exit 1
fi
# Redirect TMPDIR to the persistent /data volume.
# The container's /tmp is a size-limited tmpfs that is wiped on every restart,
# far too small for the install bundle and extracted images. Typical failures
# with the default /tmp:
#   - curl: (23) Failure writing output to destination (out of space)
#   - extracted scripts fail to execute (if /tmp is mounted noexec)
mkdir -p /data/tmp
export TMPDIR=/data/tmp
log "TMPDIR set to $TMPDIR ($(df -h /data/tmp | awk 'NR==2{print $4}') free)"
# Set CAROOT so mkcert can find / create the local CA directory on every boot.
# Without this, up-cube-proxy.sh calls `mkcert -install` which exits with:
# "ERROR: failed to find the default CA location"
# Because up-with-deps.sh runs under set -euo pipefail, that failure aborts
# the entire script before any compute services (network-agent, CubeAPI, etc.)
# are started. Persisting the CA on /data (named volume) means the cert is
# re-used across container restarts rather than regenerated each time.
export CAROOT=/data/mkcert-ca
mkdir -p "$CAROOT"
log "CAROOT set to $CAROOT"
# Run the upstream one-click installer on first boot; on subsequent boots
# just re-launch all services via up-with-deps.sh.
if [ -x "$QUICKCHECK" ] && [ "${CUBE_FORCE_REINSTALL:-0}" != "1" ]; then
log "CubeSandbox already installed at $INSTALL_PREFIX — starting services."
if [ ! -x "$UP_SCRIPT" ]; then
err "up-with-deps.sh not found at $UP_SCRIPT — reinstall required"
exit 1
fi
ONE_CLICK_TOOLBOX_ROOT="$INSTALL_PREFIX" \
ONE_CLICK_RUNTIME_ENV_FILE="${INSTALL_PREFIX}/.one-click.env" \
bash "$UP_SCRIPT" \
|| log "WARNING: up-with-deps.sh exited non-zero; services may still be starting"
else
log "Running CubeSandbox one-click installer (mirror=$MIRROR) ..."
if [ "$MIRROR" = "cn" ]; then
curl -fsSL "$INSTALLER_URL_CN" | MIRROR=cn bash
else
curl -fsSL "$INSTALLER_URL_GH" | bash
fi
fi
# Run quickcheck.sh with retries — network-agent initialises 500 tap interfaces
# which takes ~2 minutes; we retry every 30 s for up to 10 minutes.
QUICKCHECK_PASSED=0
if [ -x "$QUICKCHECK" ]; then
log "Running quickcheck.sh (retrying up to 10 min for network-agent tap init) ..."
for i in $(seq 1 20); do
if ONE_CLICK_TOOLBOX_ROOT="$INSTALL_PREFIX" \
ONE_CLICK_RUNTIME_ENV_FILE="${INSTALL_PREFIX}/.one-click.env" \
"$QUICKCHECK" 2>&1; then
QUICKCHECK_PASSED=1
break
fi
log "quickcheck attempt $i/20 failed — retrying in 30 s ..."
sleep 30
done
else
err "quickcheck.sh not found at $QUICKCHECK — install may have failed."
exit 1
fi
if [ "$QUICKCHECK_PASSED" != "1" ]; then
err "quickcheck.sh never passed after 20 attempts — CubeSandbox is unhealthy."
exit 1
fi
# Ensure containerd-shim-cube-rs is on Cubelet's clean PATH.
# up.sh/up-with-deps.sh launch Cubelet with:
# PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# Cubelet resolves runtime shims from that PATH, so it cannot find
# containerd-shim-cube-rs unless it is symlinked into one of those dirs.
# We create the symlink unconditionally on every boot (both after fresh
# install and after the restart path) so Cubelet can start sandboxes.
SHIM_SRC="${INSTALL_PREFIX}/cube-shim/bin/containerd-shim-cube-rs"
SHIM_DST="/usr/local/bin/containerd-shim-cube-rs"
if [ -x "$SHIM_SRC" ]; then
ln -sf "$SHIM_SRC" "$SHIM_DST"
log "containerd-shim-cube-rs linked: $SHIM_DST -> $SHIM_SRC"
else
log "WARNING: $SHIM_SRC not found — Cubelet will not be able to start MicroVMs"
fi
# Restart Cubelet now that network-agent is confirmed ready.
# On first startup the Cubelet process begins before network-agent has finished
# initialising its 500 TAP interfaces (~2 min). This causes the
# io.cubelet.images-service.v1 plugin to fail with:
# "network-agent health check failed ... context deadline exceeded"
# leaving the gRPC cubelet.services.images.v1.Images service unregistered.
# When CubeMaster later tries to distribute a template artifact to the node it
# gets back gRPC Unimplemented and the build fails.
# Restarting Cubelet here — after quickcheck has confirmed network-agent is up —
# allows the images-service plugin to load successfully on the second boot.
CUBELET_BIN="${INSTALL_PREFIX}/Cubelet/bin/cubelet"
CUBELET_CFG="${INSTALL_PREFIX}/Cubelet/config/config.toml"
CUBELET_DYN="${INSTALL_PREFIX}/Cubelet/dynamicconf/conf.yaml"
CUBELET_LOG="/data/log/Cubelet/Cubelet-req.log"
if [ -x "$CUBELET_BIN" ]; then
log "Restarting Cubelet so images-service plugin loads against ready network-agent ..."
pkill -f "${CUBELET_BIN}" 2>/dev/null || true
sleep 2
mkdir -p "$(dirname "$CUBELET_LOG")"
nohup "$CUBELET_BIN" \
--config "$CUBELET_CFG" \
--dynamic-conf-path "$CUBELET_DYN" \
>>"$CUBELET_LOG" 2>&1 &
CUBELET_PID=$!
log "Cubelet restarted (PID ${CUBELET_PID}) — waiting 10 s for boot ..."
sleep 10
if kill -0 "$CUBELET_PID" 2>/dev/null; then
log "Cubelet is running."
else
log "WARNING: Cubelet PID ${CUBELET_PID} exited — check ${CUBELET_LOG}."
fi
fi
log "==================== CubeSandbox is up ===================="
log " CubeAPI: http://127.0.0.1:3000/health"
log " CubeMaster: http://127.0.0.1:8089/notify/health"
log " network-agent http://127.0.0.1:19090/healthz"
log " Logs: /data/log/{CubeAPI,CubeMaster,Cubelet}/"
log "==========================================================="
@@ -0,0 +1,18 @@
[Unit]
Description=CubeSandbox XFS loop volume mount
# Must run before dockerd and the installer because install.sh validates that
# /data/cubelet is an XFS filesystem before proceeding.
DefaultDependencies=no
Before=cube-install.service docker.service
After=local-fs.target
[Service]
Type=oneshot
RemainAfterExit=yes
EnvironmentFile=-/etc/cube-sandbox.env
ExecStart=/usr/local/bin/cube-xfs-setup.sh
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
@@ -0,0 +1,31 @@
#!/usr/bin/env bash
# Create and mount the XFS-formatted loop volume at /data/cubelet.
# Called by cube-xfs-mount.service (Type=oneshot) before docker.service starts.
#
# install.sh hard-requires that /data/cubelet is on an XFS filesystem;
# it validates this with `df -T /data/cubelet | grep -q xfs`.
set -euo pipefail
log() { printf '[cube-xfs] %s\n' "$*"; }
CUBE_DATA_DIR="${CUBE_DATA_DIR:-/data/cubelet}"
CUBE_XFS_IMG="${CUBE_XFS_IMG:-/data/cubelet.img}"
CUBE_XFS_SIZE="${CUBE_XFS_SIZE:-50G}"
mkdir -p /data "$CUBE_DATA_DIR"
current_fs="$(stat -fc %T "$CUBE_DATA_DIR" 2>/dev/null || echo unknown)"
if [ "$current_fs" = "xfs" ]; then
log "Already mounted: $CUBE_DATA_DIR ($current_fs) — nothing to do."
exit 0
fi
log "Preparing XFS loop volume at $CUBE_XFS_IMG (size=$CUBE_XFS_SIZE) ..."
if [ ! -f "$CUBE_XFS_IMG" ]; then
fallocate -l "$CUBE_XFS_SIZE" "$CUBE_XFS_IMG"
mkfs.xfs -q -f "$CUBE_XFS_IMG"
log "Formatted $CUBE_XFS_IMG as XFS."
fi
mount -o loop "$CUBE_XFS_IMG" "$CUBE_DATA_DIR"
log "Mounted $CUBE_DATA_DIR ($(stat -fc %T "$CUBE_DATA_DIR"))."
@@ -0,0 +1,110 @@
# CubeSandbox running inside a privileged systemd+DinD container.
#
# WHY THIS LOOKS UNUSUAL
# ----------------------
# CubeSandbox is NOT a containerized project upstream. Its core components
# (Cubelet, network-agent, cube-shim, CubeAPI, CubeMaster) ship as host
# binaries, and the official install.sh registers them as systemd units and
# manages them with systemctl.
#
# To run it purely with Docker without modifying the WSL2 host, this stack:
# 1. Runs systemd as PID 1 inside a privileged container so that
# install.sh can call systemctl enable / start / status normally.
# 2. Runs its own dockerd (DinD) for MySQL / Redis / CoreDNS / CubeProxy.
# 3. Mounts an XFS loop volume at /data/cubelet (install.sh hard-requires XFS).
# 4. Executes the upstream online-install.sh via cube-install.service.
#
# The /run and /run/lock paths are tmpfs so systemd can write its runtime
# state (PID files, socket files, etc.) during the container lifetime.
# stop_signal RTMIN+3 is the standard graceful-shutdown signal for systemd.
x-defaults: &defaults
restart: unless-stopped
logging:
driver: json-file
options:
max-size: 100m
max-file: '3'
services:
cube-sandbox:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}compose-anything/cube-sandbox:${CUBE_SANDBOX_VERSION:-0.1.7}
build:
context: .
dockerfile: Dockerfile
args:
- UBUNTU_IMAGE=${UBUNTU_IMAGE:-ubuntu:22.04}
# CubeSandbox needs:
# - /dev/kvm for the MicroVM hypervisor
# - /dev/net/tun for cube TAP interfaces
# - SYS_ADMIN/NET_ADMIN to mount the XFS loop volume and create TAPs
# - Its own dockerd for MySQL / Redis / CubeProxy / CoreDNS
# - systemd as PID 1 so install.sh can register and start services
# The simplest correct configuration is privileged + host network.
privileged: true
network_mode: host
devices:
- /dev/kvm:/dev/kvm
- /dev/net/tun:/dev/net/tun
# cgroupns:host lets the in-container systemd + dockerd share the host's
# (i.e. WSL2's) cgroup v2 hierarchy directly — more reliable than private.
cgroup: host
# systemd needs to write its runtime state to /run; use tmpfs so it does
# not leak across container restarts and does not consume the named volumes.
tmpfs:
- /run:size=100m
- /run/lock:size=10m
- /tmp:size=2g,exec
# SIGRTMIN+3 is the proper graceful-shutdown signal for systemd.
stop_signal: RTMIN+3
environment:
- TZ=${TZ:-Asia/Shanghai}
# cn = pull installer + images via the cnb.cool / Tencent Cloud mirror
# gh = pull from raw.githubusercontent.com (slower in mainland China)
- CUBE_MIRROR=${CUBE_MIRROR:-cn}
# Size of the XFS loop file that backs /data/cubelet
- CUBE_XFS_SIZE=${CUBE_XFS_SIZE:-50G}
# Set to 1 to re-run install.sh even if a previous install is detected
- CUBE_FORCE_REINSTALL=${CUBE_FORCE_REINSTALL:-0}
volumes:
# DinD docker daemon storage (images for MySQL, Redis, CoreDNS, CubeProxy)
- cube_dind_data:/var/lib/docker
# XFS loop image + mounted /data/cubelet + cube-shim disks + logs
- cube_data:/data
# Installed CubeSandbox binaries & scripts
- cube_toolbox:/usr/local/services/cubetoolbox
# No `ports:` block — we use network_mode: host so the CubeAPI on
# 127.0.0.1:3000 inside the container is the same socket as
# 127.0.0.1:3000 on the WSL2 host.
healthcheck:
test:
- CMD-SHELL
- "curl -fsS http://127.0.0.1:3000/health && curl -fsS http://127.0.0.1:8089/notify/health && curl -fsS http://127.0.0.1:19090/healthz"
interval: 30s
timeout: 15s
retries: 5
start_period: 600s # First boot downloads ~400 MB + Docker images; be generous.
deploy:
resources:
limits:
cpus: '${CUBE_CPU_LIMIT:-8}'
memory: ${CUBE_MEMORY_LIMIT:-16G}
reservations:
cpus: '${CUBE_CPU_RESERVATION:-2}'
memory: ${CUBE_MEMORY_RESERVATION:-8G}
volumes:
cube_dind_data:
cube_data:
cube_toolbox:
@@ -0,0 +1,112 @@
#!/usr/bin/env python3
"""
Basic E2B SDK integration test against a local CubeSandbox instance.
Runs three checks:
1. Sandbox creation (debug=True → API at http://localhost:3000)
2. Code execution and output validation
3. Sandbox teardown
Usage (inside the cube-sandbox container):
python3 /root/e2b-test.py
Exit codes:
0 all tests passed
1 any test failed
"""
import sys
PASS = "\033[1;32m[ OK ]\033[0m"
FAIL = "\033[1;31m[FAIL]\033[0m"
INFO = "\033[1;36m[INFO]\033[0m"
def check(label: str, cond: bool, detail: str = "") -> bool:
if cond:
print(f"{PASS} {label}")
else:
print(f"{FAIL} {label}{': ' + detail if detail else ''}")
return cond
def main() -> int:
ok = True
# ------------------------------------------------------------------ #
# 1. Import #
# ------------------------------------------------------------------ #
print(f"{INFO} Importing e2b_code_interpreter …")
try:
from e2b_code_interpreter import Sandbox # type: ignore
except ImportError as exc:
print(f"{FAIL} import failed: {exc}")
return 1
ok &= check("e2b_code_interpreter imported", True)
# ------------------------------------------------------------------ #
# 2. Create sandbox #
# ------------------------------------------------------------------ #
print(f"\n{INFO} Creating sandbox (debug=True → http://localhost:3000) …")
sb = None
try:
# debug=True makes the SDK target http://localhost:3000 instead of
# the E2B cloud and http://localhost:<port> for the envd connection.
sb = Sandbox(debug=True, api_key="local-test", timeout=120)
ok &= check("Sandbox created", sb is not None, f"id={sb.sandbox_id if sb else '?'}")
print(f" sandbox_id = {sb.sandbox_id}")
except Exception as exc:
ok &= check("Sandbox created", False, str(exc))
print(f"\n{INFO} Skipping remaining tests (sandbox creation failed)")
return 0 if ok else 1
# ------------------------------------------------------------------ #
# 3. Execute code #
# ------------------------------------------------------------------ #
print(f"\n{INFO} Running code inside sandbox …")
try:
result = sb.run_code('print("Hello from CubeSandbox!")')
expected = "Hello from CubeSandbox!"
output = (result.text or "").strip()
ok &= check("Code executed without error", not result.error,
str(result.error) if result.error else "")
ok &= check("Output matches expected", output == expected,
f"got {output!r}")
except Exception as exc:
ok &= check("Code execution", False, str(exc))
# ------------------------------------------------------------------ #
# 4. Multi-line / stateful execution #
# ------------------------------------------------------------------ #
print(f"\n{INFO} Running stateful multi-cell execution …")
try:
sb.run_code("x = 40 + 2")
result2 = sb.run_code("print(x)")
output2 = (result2.text or "").strip()
ok &= check("Stateful multi-cell execution", output2 == "42",
f"got {output2!r}")
except Exception as exc:
ok &= check("Stateful multi-cell execution", False, str(exc))
# ------------------------------------------------------------------ #
# 5. Kill sandbox #
# ------------------------------------------------------------------ #
print(f"\n{INFO} Killing sandbox …")
try:
sb.kill()
ok &= check("Sandbox killed", True)
except Exception as exc:
ok &= check("Sandbox killed", False, str(exc))
# ------------------------------------------------------------------ #
# Summary #
# ------------------------------------------------------------------ #
print()
if ok:
print(f"{PASS} All E2B SDK tests passed")
else:
print(f"{FAIL} Some E2B SDK tests FAILED")
return 0 if ok else 1
if __name__ == "__main__":
sys.exit(main())
@@ -0,0 +1,104 @@
#!/usr/bin/env bash
# Smoke test for a running CubeSandbox stack.
#
# Run from the WSL2 host or from inside the cube-sandbox container - both work
# because the container uses network_mode: host.
#
# Steps:
# 1. Health-check all CubeSandbox services
# 2. (Optional, slow) Build a code-interpreter template from a public image
# 3. Create a sandbox via the E2B-compatible REST API, run a tiny payload,
# then destroy it
#
# Skip the slow template-build step with: SKIP_TEMPLATE_BUILD=1 ./smoke-test.sh
set -euo pipefail
# cubemastercli is installed to a non-standard prefix; add it to PATH so this
# script works both when run inside the container and from the WSL2 host.
export PATH="/usr/local/services/cubetoolbox/CubeMaster/bin:${PATH:-}"
CUBE_API="${CUBE_API:-http://127.0.0.1:3000}"
CUBE_MASTER="${CUBE_MASTER:-http://127.0.0.1:8089}"
CUBE_NETAGENT="${CUBE_NETAGENT:-http://127.0.0.1:19090}"
ok() { printf '\033[1;32m[ OK ]\033[0m %s\n' "$*"; }
fail() { printf '\033[1;31m[FAIL]\033[0m %s\n' "$*" >&2; exit 1; }
info() { printf '\033[1;36m[INFO]\033[0m %s\n' "$*"; }
#-------------------------------------------------------------------
# 1. Health checks (matches what install.sh's quickcheck.sh verifies)
#-------------------------------------------------------------------
info "Health: CubeAPI"
curl -fsS "${CUBE_API}/health" >/dev/null && ok "CubeAPI /health" || fail "CubeAPI /health"
echo
info "Health: CubeMaster"
curl -fsS "${CUBE_MASTER}/notify/health" >/dev/null && ok "CubeMaster /notify/health" || fail "CubeMaster /notify/health"
info "Health: network-agent"
curl -fsS "${CUBE_NETAGENT}/healthz" >/dev/null && ok "network-agent /healthz" || fail "network-agent /healthz"
curl -fsS "${CUBE_NETAGENT}/readyz" >/dev/null && ok "network-agent /readyz" || fail "network-agent /readyz"
#-------------------------------------------------------------------
# 2. Optional: build a sandbox template
#-------------------------------------------------------------------
TEMPLATE_ID="${CUBE_TEMPLATE_ID:-}"
if [ -z "$TEMPLATE_ID" ] && [ "${SKIP_TEMPLATE_BUILD:-0}" != "1" ]; then
info "No CUBE_TEMPLATE_ID provided; building one from ccr.ccs.tencentyun.com/ags-image/sandbox-code:latest"
info "(this can take 5-15 minutes; set SKIP_TEMPLATE_BUILD=1 to skip and only run health checks)"
if ! command -v cubemastercli >/dev/null 2>&1; then
# cubemastercli lives inside the container; exec into it
CUBE_CTR="$(docker compose ps -q cube-sandbox 2>/dev/null || true)"
[ -z "$CUBE_CTR" ] && fail "cube-sandbox container not running and cubemastercli not on PATH"
CMC="docker exec -i $CUBE_CTR cubemastercli"
else
CMC="cubemastercli"
fi
JOB_OUT="$($CMC tpl create-from-image \
--image ccr.ccs.tencentyun.com/ags-image/sandbox-code:latest \
--writable-layer-size 1G \
--expose-port 49999 \
--expose-port 49983 \
--probe 49999 2>&1)"
echo "$JOB_OUT"
JOB_ID="$(echo "$JOB_OUT" | grep -oE 'job_id[=: ]+[A-Za-z0-9_-]+' | head -1 | awk '{print $NF}')"
[ -z "$JOB_ID" ] && fail "could not parse job_id from output"
info "Watching job $JOB_ID ..."
$CMC tpl watch --job-id "$JOB_ID"
# Extract template_id from the create-from-image output (it's on the first few
# lines) rather than re-querying the list — list ordering is not guaranteed and
# could return a FAILED entry as the last line.
TEMPLATE_ID="$(echo "$JOB_OUT" | grep -E '\btemplate_id\b' | head -1 | awk '{print $NF}')"
[ -z "$TEMPLATE_ID" ] && fail "could not determine template id after build"
ok "Template built: $TEMPLATE_ID"
elif [ -z "$TEMPLATE_ID" ]; then
info "Skipping sandbox lifecycle test (no CUBE_TEMPLATE_ID and SKIP_TEMPLATE_BUILD=1)"
ok "Health checks passed - CubeSandbox stack is up"
exit 0
fi
#-------------------------------------------------------------------
# 3. Create -> inspect -> destroy a sandbox via REST
#-------------------------------------------------------------------
info "Creating sandbox from template $TEMPLATE_ID ..."
RESP="$(curl -fsS -X POST "${CUBE_API}/sandboxes" \
-H 'Authorization: Bearer dummy' \
-H 'Content-Type: application/json' \
-d "{\"templateID\":\"${TEMPLATE_ID}\"}")"
SANDBOX_ID="$(echo "$RESP" | python3 -c 'import json,sys; print(json.load(sys.stdin).get("sandboxID",""))')"
[ -z "$SANDBOX_ID" ] && fail "no sandboxID in response: $RESP"
ok "Created sandbox $SANDBOX_ID"
info "Inspecting sandbox ..."
curl -fsS "${CUBE_API}/sandboxes/${SANDBOX_ID}" -H 'Authorization: Bearer dummy' \
| python3 -m json.tool
ok "Sandbox is queryable"
info "Destroying sandbox ..."
curl -fsS -X DELETE "${CUBE_API}/sandboxes/${SANDBOX_ID}" -H 'Authorization: Bearer dummy' >/dev/null
ok "Sandbox destroyed"
ok "All smoke tests passed"
@@ -0,0 +1,45 @@
# Source build configuration
DEER_FLOW_VERSION=main
NGINX_VERSION=1.28-alpine
# Network configuration
DEER_FLOW_PORT_OVERRIDE=2026
DEER_FLOW_CORS_ORIGINS=http://localhost:2026
DEER_FLOW_BETTER_AUTH_SECRET=deer-flow-dev-secret-change-me
# Model configuration
DEER_FLOW_MODEL_NAME=openai-default
DEER_FLOW_MODEL_DISPLAY_NAME=OpenAI
DEER_FLOW_MODEL_ID=gpt-4.1-mini
OPENAI_API_KEY=
# Resources - Gateway
DEER_FLOW_GATEWAY_CPU_LIMIT=2.00
DEER_FLOW_GATEWAY_MEMORY_LIMIT=2G
DEER_FLOW_GATEWAY_CPU_RESERVATION=0.50
DEER_FLOW_GATEWAY_MEMORY_RESERVATION=512M
# Resources - LangGraph
DEER_FLOW_LANGGRAPH_CPU_LIMIT=2.00
DEER_FLOW_LANGGRAPH_MEMORY_LIMIT=2G
DEER_FLOW_LANGGRAPH_CPU_RESERVATION=0.50
DEER_FLOW_LANGGRAPH_MEMORY_RESERVATION=512M
# Resources - Frontend
DEER_FLOW_FRONTEND_CPU_LIMIT=1.00
DEER_FLOW_FRONTEND_MEMORY_LIMIT=1G
DEER_FLOW_FRONTEND_CPU_RESERVATION=0.25
DEER_FLOW_FRONTEND_MEMORY_RESERVATION=256M
# Resources - Nginx
DEER_FLOW_NGINX_CPU_LIMIT=0.50
DEER_FLOW_NGINX_MEMORY_LIMIT=256M
DEER_FLOW_NGINX_CPU_RESERVATION=0.10
DEER_FLOW_NGINX_MEMORY_RESERVATION=64M
# Logging
DEER_FLOW_LOG_MAX_SIZE=100m
DEER_FLOW_LOG_MAX_FILE=3
# Timezone
TZ=UTC
@@ -0,0 +1,60 @@
# DeerFlow
[中文文档](README.zh.md)
DeerFlow is a full-stack AI agent application from ByteDance. This Compose setup builds the frontend and backend from source, starts Gateway, LangGraph, and Nginx, and exposes the unified entrypoint on port 2026.
## Quick Start
1. Copy the example environment file:
```bash
cp .env.example .env
```
2. Edit `.env` and set `OPENAI_API_KEY`.
3. Start the stack:
```bash
docker compose up -d
```
4. Open DeerFlow:
- <http://localhost:2026>
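5. (Optional) Confirm the entrypoint is serving (a quick probe; the services' own healthchecks cover the internal ports):

```bash
docker compose ps        # all four services should report "healthy"
curl -fsS -o /dev/null -w '%{http_code}\n' http://localhost:2026/
```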
## Default Ports
| Service | Port | Description |
| ----------- | ---- | ---------------------- |
| Nginx | 2026 | Unified web entrypoint |
| Gateway API | 8001 | Internal only |
| LangGraph | 2024 | Internal only |
| Frontend | 3000 | Internal only |
## Important Environment Variables
| Variable | Description | Default |
| ------------------------------ | ------------------------------------------------------ | -------------------------------- |
| `DEER_FLOW_VERSION` | Git ref used for source builds | `main` |
| `DEER_FLOW_PORT_OVERRIDE` | Host port for the unified entrypoint | `2026` |
| `OPENAI_API_KEY` | OpenAI API key referenced from generated `config.yaml` | - |
| `DEER_FLOW_MODEL_NAME` | Internal model identifier | `openai-default` |
| `DEER_FLOW_MODEL_DISPLAY_NAME` | Display name shown in the app | `OpenAI` |
| `DEER_FLOW_MODEL_ID` | OpenAI model id | `gpt-4.1-mini` |
| `DEER_FLOW_CORS_ORIGINS` | Allowed CORS origins for the gateway | `http://localhost:2026` |
| `DEER_FLOW_BETTER_AUTH_SECRET` | Frontend auth secret | `deer-flow-dev-secret-change-me` |
| `TZ` | Container timezone | `UTC` |
## Notes
- This setup generates a minimal `config.yaml` and `extensions_config.json` inside the backend containers, so no extra config files are required.
- The default sandbox mode is local to avoid requiring Docker socket mounts or Kubernetes provisioner setup.
- DeerFlow upstream usually expects local image builds, so the first build can take several minutes.
- Only an OpenAI-compatible model is wired by default here. If you want Anthropic, Gemini, or a more advanced config, update the generated template logic in `docker-compose.yaml`.
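- To see exactly what was generated before customizing it, you can dump the files from a running backend container (service names as defined in this `docker-compose.yaml`):

```bash
docker compose exec deerflow-gateway cat /tmp/config.yaml
docker compose exec deerflow-gateway cat /tmp/extensions_config.json
```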
## References
- [DeerFlow Repository](https://github.com/bytedance/deer-flow)
- [Project README](https://github.com/bytedance/deer-flow/blob/main/README.md)
@@ -0,0 +1,60 @@
# DeerFlow
[English](README.md)
DeerFlow 是字节跳动开源的全栈 AI Agent 应用。这个 Compose 配置会从源码构建前后端镜像,启动 Gateway、LangGraph 和 Nginx,并通过 2026 端口暴露统一入口。
## 快速开始
1. 复制环境变量示例文件:
```bash
cp .env.example .env
```
2. 编辑 `.env`,至少填写 `OPENAI_API_KEY`。
3. 启动整个栈:
```bash
docker compose up -d
```
4. 打开 DeerFlow:
- <http://localhost:2026>
## 默认端口
| 服务 | 端口 | 说明 |
| ----------- | ---- | ------------- |
| Nginx | 2026 | 统一 Web 入口 |
| Gateway API | 8001 | 仅内部访问 |
| LangGraph | 2024 | 仅内部访问 |
| Frontend | 3000 | 仅内部访问 |
## 关键环境变量
| 变量 | 说明 | 默认值 |
| ------------------------------ | -------------------------------------------- | -------------------------------- |
| `DEER_FLOW_VERSION` | 用于源码构建的 Git 引用 | `main` |
| `DEER_FLOW_PORT_OVERRIDE` | 统一入口对外端口 | `2026` |
| `OPENAI_API_KEY` | 生成的 `config.yaml` 中引用的 OpenAI API Key | - |
| `DEER_FLOW_MODEL_NAME` | 模型内部标识 | `openai-default` |
| `DEER_FLOW_MODEL_DISPLAY_NAME` | 界面展示名称 | `OpenAI` |
| `DEER_FLOW_MODEL_ID` | OpenAI 模型 ID | `gpt-4.1-mini` |
| `DEER_FLOW_CORS_ORIGINS` | Gateway 允许的跨域来源 | `http://localhost:2026` |
| `DEER_FLOW_BETTER_AUTH_SECRET` | 前端鉴权密钥 | `deer-flow-dev-secret-change-me` |
| `TZ` | 容器时区 | `UTC` |
## 说明
- 这个配置会在后端容器内部生成最小可用的 `config.yaml` 和 `extensions_config.json`,因此不需要额外手工准备配置文件。
- 默认使用本地 sandbox 模式,这样不需要挂载 Docker Socket,也不依赖 Kubernetes provisioner。
- DeerFlow 上游通常要求本地构建镜像,因此首次构建耗时可能较长。
- 当前默认只接入了 OpenAI 兼容模型。如果你要改成 Anthropic、Gemini 或更复杂的配置,需要调整 `docker-compose.yaml` 中生成配置文件的模板。
## 参考资料
- [DeerFlow 仓库](https://github.com/bytedance/deer-flow)
- [项目 README](https://github.com/bytedance/deer-flow/blob/main/README_zh.md)
@@ -0,0 +1,171 @@
x-defaults: &defaults
restart: unless-stopped
logging:
driver: json-file
options:
max-size: ${DEER_FLOW_LOG_MAX_SIZE:-100m}
max-file: '${DEER_FLOW_LOG_MAX_FILE:-3}'
services:
deerflow-gateway:
<<: *defaults
build:
context: https://github.com/bytedance/deer-flow.git#${DEER_FLOW_VERSION:-main}
dockerfile: backend/Dockerfile
image: ${GLOBAL_REGISTRY:-}alexsuntop/deer-flow-backend:${DEER_FLOW_VERSION:-main}
environment:
- TZ=${TZ:-UTC}
- OPENAI_API_KEY=${OPENAI_API_KEY:-}
env_file:
- .env
entrypoint:
- /bin/sh
- -ec
command: |
cat >/tmp/config.yaml <<EOF
config_version: 1
models:
- name: ${DEER_FLOW_MODEL_NAME:-openai-default}
display_name: ${DEER_FLOW_MODEL_DISPLAY_NAME:-OpenAI}
use: langchain_openai:ChatOpenAI
model: ${DEER_FLOW_MODEL_ID:-gpt-4.1-mini}
api_key: $$OPENAI_API_KEY
sandbox:
use: deerflow.sandbox.local:LocalSandboxProvider
EOF
cat >/tmp/extensions_config.json <<EOF
{"mcpServers":{},"skills":{}}
EOF
export DEER_FLOW_CONFIG_PATH=/tmp/config.yaml
export DEER_FLOW_EXTENSIONS_CONFIG_PATH=/tmp/extensions_config.json
export GATEWAY_HOST=0.0.0.0
export GATEWAY_PORT=8001
export CORS_ORIGINS=${DEER_FLOW_CORS_ORIGINS:-http://localhost:2026}
exec sh -c 'cd backend && PYTHONPATH=. uv run uvicorn app.gateway.app:app --host 0.0.0.0 --port 8001'
healthcheck:
test:
- CMD-SHELL
- python3 -c "import urllib.request; urllib.request.urlopen('http://127.0.0.1:8001/docs', timeout=5)"
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
deploy:
resources:
limits:
cpus: ${DEER_FLOW_GATEWAY_CPU_LIMIT:-2.00}
memory: ${DEER_FLOW_GATEWAY_MEMORY_LIMIT:-2G}
reservations:
cpus: ${DEER_FLOW_GATEWAY_CPU_RESERVATION:-0.50}
memory: ${DEER_FLOW_GATEWAY_MEMORY_RESERVATION:-512M}
deerflow-langgraph:
<<: *defaults
build:
context: https://github.com/bytedance/deer-flow.git#${DEER_FLOW_VERSION:-main}
dockerfile: backend/Dockerfile
image: ${GLOBAL_REGISTRY:-}alexsuntop/deer-flow-backend:${DEER_FLOW_VERSION:-main}
environment:
- TZ=${TZ:-UTC}
- OPENAI_API_KEY=${OPENAI_API_KEY:-}
env_file:
- .env
entrypoint:
- /bin/sh
- -ec
command: |
cat >/tmp/config.yaml <<EOF
config_version: 1
models:
- name: ${DEER_FLOW_MODEL_NAME:-openai-default}
display_name: ${DEER_FLOW_MODEL_DISPLAY_NAME:-OpenAI}
use: langchain_openai:ChatOpenAI
model: ${DEER_FLOW_MODEL_ID:-gpt-4.1-mini}
api_key: $$OPENAI_API_KEY
sandbox:
use: deerflow.sandbox.local:LocalSandboxProvider
EOF
cat >/tmp/extensions_config.json <<EOF
{"mcpServers":{},"skills":{}}
EOF
export DEER_FLOW_CONFIG_PATH=/tmp/config.yaml
export DEER_FLOW_EXTENSIONS_CONFIG_PATH=/tmp/extensions_config.json
exec sh -c 'cd backend && NO_COLOR=1 uv run langgraph dev --host 0.0.0.0 --no-browser --allow-blocking --no-reload'
healthcheck:
test:
- CMD-SHELL
- python3 -c "import socket; s=socket.create_connection(('127.0.0.1', 2024), 5); s.close()"
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
deploy:
resources:
limits:
cpus: ${DEER_FLOW_LANGGRAPH_CPU_LIMIT:-2.00}
memory: ${DEER_FLOW_LANGGRAPH_MEMORY_LIMIT:-2G}
reservations:
cpus: ${DEER_FLOW_LANGGRAPH_CPU_RESERVATION:-0.50}
memory: ${DEER_FLOW_LANGGRAPH_MEMORY_RESERVATION:-512M}
deerflow-frontend:
<<: *defaults
build:
context: https://github.com/bytedance/deer-flow.git#${DEER_FLOW_VERSION:-main}
dockerfile: frontend/Dockerfile
target: prod
image: ${GLOBAL_REGISTRY:-}alexsuntop/deer-flow-frontend:${DEER_FLOW_VERSION:-main}
environment:
- TZ=${TZ:-UTC}
- BETTER_AUTH_SECRET=${DEER_FLOW_BETTER_AUTH_SECRET:-deer-flow-dev-secret-change-me}
- NEXT_PUBLIC_BACKEND_BASE_URL=
- NEXT_PUBLIC_LANGGRAPH_BASE_URL=/api/langgraph
env_file:
- .env
healthcheck:
test:
- CMD-SHELL
- node -e "fetch('http://127.0.0.1:3000').then((r)=>process.exit(r.ok?0:1)).catch(()=>process.exit(1))"
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
deploy:
resources:
limits:
cpus: ${DEER_FLOW_FRONTEND_CPU_LIMIT:-1.00}
memory: ${DEER_FLOW_FRONTEND_MEMORY_LIMIT:-1G}
reservations:
cpus: ${DEER_FLOW_FRONTEND_CPU_RESERVATION:-0.25}
memory: ${DEER_FLOW_FRONTEND_MEMORY_RESERVATION:-256M}
deerflow-nginx:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}nginx:${NGINX_VERSION:-1.28-alpine}
depends_on:
deerflow-gateway:
condition: service_healthy
deerflow-langgraph:
condition: service_healthy
deerflow-frontend:
condition: service_healthy
ports:
- '${DEER_FLOW_PORT_OVERRIDE:-2026}:2026'
volumes:
- ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
healthcheck:
test:
- CMD-SHELL
- wget --no-verbose --tries=1 --spider http://127.0.0.1:2026 >/dev/null || exit 1
interval: 30s
timeout: 10s
retries: 5
start_period: 10s
deploy:
resources:
limits:
cpus: ${DEER_FLOW_NGINX_CPU_LIMIT:-0.50}
memory: ${DEER_FLOW_NGINX_MEMORY_LIMIT:-256M}
reservations:
cpus: ${DEER_FLOW_NGINX_CPU_RESERVATION:-0.10}
memory: ${DEER_FLOW_NGINX_MEMORY_RESERVATION:-64M}
@@ -0,0 +1,40 @@
server {
listen 2026;
server_name _;
client_max_body_size 50m;
location /api/langgraph/ {
proxy_pass http://deerflow-langgraph:2024/;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_read_timeout 3600s;
proxy_send_timeout 3600s;
}
location /api/ {
proxy_pass http://deerflow-gateway:8001/api/;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 3600s;
proxy_send_timeout 3600s;
}
location / {
proxy_pass http://deerflow-frontend:3000;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_read_timeout 3600s;
proxy_send_timeout 3600s;
}
}
@@ -1,5 +1,5 @@
# MinerU Docker image
MINERU_VERSION=2.7.6
MINERU_VERSION=3.1.0
# Port configurations
MINERU_PORT_OVERRIDE_VLLM=30000
@@ -1,10 +1,7 @@
# Use the official vllm image for gpu with Ampere、Ada Lovelace、Hopper architecture (8.0 <= Compute Capability <= 9.0)
# Use DaoCloud mirrored vllm image for China region for gpu with Volta、Turing、Ampere、Ada Lovelace、Hopper、Blackwell architecture (7.0 <= Compute Capability <= 12.0)
# Compute Capability version query (https://developer.nvidia.com/cuda-gpus)
FROM vllm/vllm-openai:v0.10.2
# Use the official vllm image for gpu with Volta、Turing、Blackwell architecture (7.0 < Compute Capability < 8.0 or Compute Capability >= 10.0)
# support x86_64 architecture and ARM(AArch64) architecture
# FROM vllm/vllm-openai:v0.11.0
FROM docker.m.daocloud.io/vllm/vllm-openai:v0.11.2
# Install libgl for opencv support & Noto fonts for Chinese characters
RUN apt-get update && \
@@ -18,11 +15,11 @@ RUN apt-get update && \
rm -rf /var/lib/apt/lists/*
# Install mineru latest
RUN python3 -m pip install -U 'mineru[core]>=2.7.6' --break-system-packages && \
RUN python3 -m pip install -U 'mineru[core]>=3.0.0' -i https://mirrors.aliyun.com/pypi/simple --break-system-packages && \
python3 -m pip cache purge
# Download models and update the configuration file
RUN /bin/bash -c "mineru-models-download -s huggingface -m all"
RUN /bin/bash -c "mineru-models-download -s modelscope -m all"
# Set the entry point to activate the virtual environment and run the command line tool
ENTRYPOINT ["/bin/bash", "-c", "export MINERU_MODEL_SOURCE=local && exec \"$@\"", "--"]
@@ -39,7 +39,7 @@ mineru -p demo.pdf -o ./output -b vlm-http-client -u http://localhost:30000
## Configuration
- `MINERU_VERSION`: The version for MinerU, default is `2.7.6`.
- `MINERU_VERSION`: The version for MinerU, default is `3.1.0`.
- `MINERU_PORT_OVERRIDE_VLLM`: The host port for the VLLM server, default is `30000`.
- `MINERU_PORT_OVERRIDE_API`: The host port for the API service, default is `8000`.
- `MINERU_PORT_OVERRIDE_GRADIO`: The host port for the Gradio WebUI, default is `7860`.
@@ -39,7 +39,7 @@ mineru -p demo.pdf -o ./output -b vlm-http-client -u http://localhost:30000
## 配置
- `MINERU_VERSION`: MinerU 的 Docker 镜像版本,默认为 `2.7.6`。
- `MINERU_VERSION`: MinerU 的 Docker 镜像版本,默认为 `3.1.0`。
- `MINERU_PORT_OVERRIDE_VLLM`: VLLM 服务器的主机端口,默认为 `30000`。
- `MINERU_PORT_OVERRIDE_API`: API 服务的主机端口,默认为 `8000`。
- `MINERU_PORT_OVERRIDE_GRADIO`: Gradio WebUI 的主机端口,默认为 `7860`。
@@ -1,10 +1,7 @@
# Use DaoCloud mirrored vllm image for China region for gpu with Ampere、Ada Lovelace、Hopper architecture (8.0 <= Compute Capability <= 9.0)
# Use DaoCloud mirrored vllm image for China region for gpu with Volta、Turing、Ampere、Ada Lovelace、Hopper、Blackwell architecture (7.0 <= Compute Capability <= 12.0)
# Compute Capability version query (https://developer.nvidia.com/cuda-gpus)
FROM docker.m.daocloud.io/vllm/vllm-openai:v0.10.2
# Use DaoCloud mirrored vllm image for China region for gpu with Volta、Turing、Blackwell architecture (7.0 < Compute Capability < 8.0 or Compute Capability >= 10.0)
# support x86_64 architecture and ARM(AArch64) architecture
# FROM docker.m.daocloud.io/vllm/vllm-openai:v0.11.0
FROM docker.m.daocloud.io/vllm/vllm-openai:v0.11.2
# Install libgl for opencv support & Noto fonts for Chinese characters
RUN apt-get update && \
@@ -18,7 +15,7 @@ RUN apt-get update && \
rm -rf /var/lib/apt/lists/*
# Install mineru latest
RUN python3 -m pip install -U 'mineru[core]>=2.7.0' -i https://mirrors.aliyun.com/pypi/simple --break-system-packages && \
RUN python3 -m pip install -U 'mineru[core]>=3.0.0' -i https://mirrors.aliyun.com/pypi/simple --break-system-packages && \
python3 -m pip cache purge
# Download models and update the configuration file
@@ -14,7 +14,7 @@ RUN apt-get update && \
# Install mineru latest
RUN python3 -m pip install -U pip -i https://mirrors.aliyun.com/pypi/simple && \
python3 -m pip install 'mineru[core]>=2.7.4' \
python3 -m pip install 'mineru[core]>=3.0.0' \
numpy==1.26.4 \
opencv-python==4.11.0.86 \
-i https://mirrors.aliyun.com/pypi/simple && \
@@ -14,10 +14,7 @@ RUN apt-get update && \
# Install mineru latest
RUN python3 -m pip install -U pip -i https://mirrors.aliyun.com/pypi/simple && \
python3 -m pip install mineru[api,gradio] \
"matplotlib>=3.10,<4" \
"ultralytics>=8.3.48,<9" \
"doclayout_yolo==0.0.4" \
python3 -m pip install "mineru[gradio]>=3.0.0" \
"ftfy>=6.3.1,<7" \
"shapely>=2.0.7,<3" \
"pyclipper>=1.3.0,<2" \
@@ -8,7 +8,7 @@ x-defaults: &defaults
x-mineru-vllm: &mineru-vllm
<<: *defaults
image: ${GLOBAL_REGISTRY:-}alexsuntop/mineru:${MINERU_VERSION:-2.7.6}
image: ${GLOBAL_REGISTRY:-}alexsuntop/mineru:${MINERU_VERSION:-3.1.0}
build:
context: .
dockerfile: ${MINERU_DOCKERFILE_PATH:-Dockerfile}
@@ -17,7 +17,7 @@ RUN echo 'deb http://mirrors.aliyun.com/ubuntu/ noble main restricted universe m
rm -rf /var/lib/apt/lists/* /tmp/aliyun-sources.list
# Install mineru latest
RUN python3 -m pip install "mineru[core]>=2.7.2" \
RUN python3 -m pip install "mineru[core]>=3.0.0" \
numpy==1.26.4 \
opencv-python==4.11.0.86 \
-i https://mirrors.aliyun.com/pypi/simple && \
@@ -14,10 +14,7 @@ RUN apt-get update && \
# Install mineru latest
RUN python3 -m pip install -U pip -i https://mirrors.aliyun.com/pypi/simple && \
python3 -m pip install "mineru[api,gradio]>=2.7.6" \
"matplotlib>=3.10,<4" \
"ultralytics>=8.3.48,<9" \
"doclayout_yolo==0.0.4" \
python3 -m pip install "mineru[gradio]>=3.0.0" \
"ftfy>=6.3.1,<7" \
"shapely>=2.0.7,<3" \
"pyclipper>=1.3.0,<2" \
@@ -21,7 +21,7 @@ RUN sed -i '3s/^Version: 0.15.1+metax3\.1\.0\.4$/Version: 0.21.0+metax3.1.0.4/'
# Install mineru latest
RUN /opt/conda/bin/python3 -m pip install -U pip -i https://mirrors.aliyun.com/pypi/simple && \
/opt/conda/bin/python3 -m pip install 'mineru[core]>=2.6.5' \
/opt/conda/bin/python3 -m pip install 'mineru[core]>=3.0.0' \
numpy==1.26.4 \
opencv-python==4.11.0.86 \
-i https://mirrors.aliyun.com/pypi/simple && \
@@ -1,6 +1,6 @@
# 基础镜像配置 vLLM 或 LMDeploy ,请根据实际需要选择其中一个,要求 amd64(x86-64) CPU + Cambricon MLU.
# Base image containing the LMDEPLOY inference environment, requiring amd64(x86-64) CPU + Cambricon MLU.
FROM crpi-4crprmm5baj1v8iv.cn-hangzhou.personal.cr.aliyuncs.com/lmdeploy_dlinfer/camb:qwen2.5_vl
FROM crpi-4crprmm5baj1v8iv.cn-hangzhou.personal.cr.aliyuncs.com/lmdeploy_dlinfer/camb:mineru25
ARG BACKEND=lmdeploy
# Base image containing the vLLM inference environment, requiring amd64(x86-64) CPU + Cambricon MLU.
# FROM crpi-vofi3w62lkohhxsp.cn-shanghai.personal.cr.aliyuncs.com/opendatalab-mineru/mlu:vllm0.8.3-torch2.6.0-torchmlu1.26.1-ubuntu22.04-py310
@@ -22,7 +22,7 @@ RUN /bin/bash -c '\
source /torch/venv3/pytorch_infer/bin/activate; \
fi && \
python3 -m pip install -U pip -i https://mirrors.aliyun.com/pypi/simple && \
python3 -m pip install "mineru[core]>=2.7.4" \
python3 -m pip install "mineru[core]>=3.0.0" \
numpy==1.26.4 \
opencv-python==4.11.0.86 \
-i https://mirrors.aliyun.com/pypi/simple && \
@@ -18,10 +18,7 @@ RUN python3 -m pip install -U pip -i https://mirrors.aliyun.com/pypi/simple && \
git clone https://gitcode.com/gh_mirrors/vi/vision.git -b v0.20.0 --depth 1 && \
cd vision && \
python3 setup.py install && \
python3 -m pip install "mineru[api,gradio]>=2.7.2" \
"matplotlib>=3.10,<4" \
"ultralytics>=8.3.48,<9" \
"doclayout_yolo==0.0.4" \
python3 -m pip install "mineru[gradio]>=3.0.0" \
"ftfy>=6.3.1,<7" \
"shapely>=2.0.7,<3" \
"pyclipper>=1.3.0,<2" \
@@ -19,7 +19,7 @@ RUN apt-get update && \
# Install mineru latest
RUN python3 -m pip install -U pip -i https://mirrors.aliyun.com/pypi/simple && \
python3 -m pip install 'mineru[core]>=2.6.5' \
python3 -m pip install 'mineru[core]>=3.0.0' \
numpy==1.26.4 \
opencv-python==4.11.0.86 \
-i https://mirrors.aliyun.com/pypi/simple && \
@@ -17,7 +17,7 @@ RUN apt-get update && \
# Install mineru latest
RUN python3 -m pip install -U pip -i https://mirrors.aliyun.com/pypi/simple && \
python3 -m pip install 'mineru[core]>=2.6.5' \
python3 -m pip install 'mineru[core]>=3.0.0' \
numpy==1.26.4 \
opencv-python==4.11.0.86 \
-i https://mirrors.aliyun.com/pypi/simple && \
@@ -0,0 +1,55 @@
# Source build configuration
MULTICA_VERSION=v0.1.32
MULTICA_PGVECTOR_VERSION=pg17
# Ports
MULTICA_BACKEND_PORT_OVERRIDE=8080
MULTICA_FRONTEND_PORT_OVERRIDE=3000
# PostgreSQL
MULTICA_POSTGRES_DB=multica
MULTICA_POSTGRES_USER=multica
MULTICA_POSTGRES_PASSWORD=multica
# Authentication & Security (CHANGEME: update JWT_SECRET for production)
MULTICA_JWT_SECRET=change-me-in-production
# Frontend origin (used by backend for CORS and cookie settings)
MULTICA_FRONTEND_ORIGIN=http://localhost:3000
MULTICA_APP_URL=http://localhost:3000
MULTICA_CORS_ALLOWED_ORIGINS=
MULTICA_COOKIE_DOMAIN=
# Email via Resend (optional)
MULTICA_RESEND_API_KEY=
MULTICA_RESEND_FROM_EMAIL=noreply@multica.ai
# Google OAuth (optional)
MULTICA_GOOGLE_CLIENT_ID=
MULTICA_GOOGLE_CLIENT_SECRET=
MULTICA_GOOGLE_REDIRECT_URI=http://localhost:3000/auth/callback
# Resources - PostgreSQL
MULTICA_POSTGRES_CPU_LIMIT=1.00
MULTICA_POSTGRES_MEMORY_LIMIT=1G
MULTICA_POSTGRES_CPU_RESERVATION=0.25
MULTICA_POSTGRES_MEMORY_RESERVATION=256M
# Resources - Backend
MULTICA_BACKEND_CPU_LIMIT=2.00
MULTICA_BACKEND_MEMORY_LIMIT=2G
MULTICA_BACKEND_CPU_RESERVATION=0.50
MULTICA_BACKEND_MEMORY_RESERVATION=512M
# Resources - Frontend
MULTICA_FRONTEND_CPU_LIMIT=1.00
MULTICA_FRONTEND_MEMORY_LIMIT=1G
MULTICA_FRONTEND_CPU_RESERVATION=0.25
MULTICA_FRONTEND_MEMORY_RESERVATION=256M
# Logging
MULTICA_LOG_MAX_SIZE=100m
MULTICA_LOG_MAX_FILE=3
# Timezone
TZ=UTC
@@ -0,0 +1,77 @@
# Multica
[English](./README.md) | [中文](./README.zh.md)
Multica is an open-source managed agents platform that turns coding agents into real teammates. Assign tasks, track progress, and compound reusable skills — works with Claude Code, Codex, OpenClaw, and OpenCode. This Compose setup builds the Go backend and Next.js frontend from source, starts PostgreSQL with pgvector, and exposes both services.
## Services
- **multica-backend**: Go backend (Chi router, sqlc, gorilla/websocket) with auto-migration on startup
- **multica-frontend**: Next.js 16 web application (App Router, standalone output)
- **multica-postgres**: PostgreSQL 17 with pgvector extension
## Quick Start
1. Copy the example environment file:
```bash
cp .env.example .env
```
2. Edit `.env` and change `MULTICA_JWT_SECRET` to a secure random value:
```bash
MULTICA_JWT_SECRET=$(openssl rand -base64 32)
```
3. Start the stack (first run builds images from source — this takes several minutes):
```bash
docker compose up -d
```
4. Open Multica:
- Frontend: <http://localhost:3000>
- Backend API: <http://localhost:8080>
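5. (Optional) Check that the containers are up (the backend runs database auto-migration on first start, so give it a moment):

```bash
docker compose ps
curl -fsS -o /dev/null -w '%{http_code}\n' http://localhost:3000/
```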
## Default Ports
| Service | Port | Description |
| -------- | ---- | ---------------------- |
| Frontend | 3000 | Web UI |
| Backend | 8080 | REST API and WebSocket |
| Postgres | 5432 | Internal only |
## Important Environment Variables
| Variable | Description | Default |
| -------------------------------- | ------------------------------------------ | ------------------------- |
| `MULTICA_VERSION` | Git ref used for source builds | `v0.1.32` |
| `MULTICA_BACKEND_PORT_OVERRIDE` | Host port for the backend API | `8080` |
| `MULTICA_FRONTEND_PORT_OVERRIDE` | Host port for the web UI | `3000` |
| `MULTICA_JWT_SECRET` | JWT signing secret (change for production) | `change-me-in-production` |
| `MULTICA_POSTGRES_PASSWORD` | PostgreSQL password | `multica` |
| `MULTICA_FRONTEND_ORIGIN` | Frontend URL for CORS and cookies | `http://localhost:3000` |
| `MULTICA_GOOGLE_CLIENT_ID` | Google OAuth client ID (optional) | - |
| `MULTICA_GOOGLE_CLIENT_SECRET` | Google OAuth client secret (optional) | - |
| `MULTICA_RESEND_API_KEY` | Resend API key for email (optional) | - |
| `TZ` | Container timezone | `UTC` |
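To upgrade to a newer release, bump `MULTICA_VERSION` in `.env` and rebuild; a sketch using standard Compose commands:
```bash
# Rebuild the backend and frontend images from the new Git ref, then restart
docker compose build --pull
docker compose up -d
```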
## Storage
| Volume | Description |
| ---------------- | --------------- |
| `multica_pgdata` | PostgreSQL data |
## Security Notes
- Always change `MULTICA_JWT_SECRET` before exposing the service.
- Change `MULTICA_POSTGRES_PASSWORD` for production deployments.
- Google OAuth and email (Resend) are optional; the platform works without them.
- The first build downloads the full Multica repository from GitHub and builds Docker images, so it requires internet access and may take several minutes.
## References
- [Multica Repository](https://github.com/multica-ai/multica)
- [Self-Hosting Guide](https://github.com/multica-ai/multica/blob/main/SELF_HOSTING.md)
+77
@@ -0,0 +1,77 @@
# Multica
[English](./README.md) | [中文](./README.zh.md)
Multica 是一个开源的托管 Agent 平台,能将编码 Agent 变成真正的团队成员。分配任务、跟踪进度、积累可复用技能——支持 Claude Code、Codex、OpenClaw 和 OpenCode。此 Compose 配置从源码构建 Go 后端和 Next.js 前端,启动带有 pgvector 扩展的 PostgreSQL,并暴露两个服务。
## 服务
- **multica-backend**:Go 后端(Chi 路由、sqlc、gorilla/websocket),启动时自动执行数据库迁移
- **multica-frontend**:Next.js 16 Web 应用(App Router,standalone 输出)
- **multica-postgres**:PostgreSQL 17,包含 pgvector 扩展
## 快速开始
1. 复制环境变量示例文件:
```bash
cp .env.example .env
```
2. 编辑 `.env`,将 `MULTICA_JWT_SECRET` 修改为安全的随机值:
```bash
MULTICA_JWT_SECRET=$(openssl rand -base64 32)
```
3. 启动服务(首次运行会从源码构建镜像,需要几分钟):
```bash
docker compose up -d
```
4. 打开 Multica:
- 前端界面:<http://localhost:3000>
- 后端 API:<http://localhost:8080>
## 默认端口
| 服务 | 端口 | 说明 |
| -------- | ---- | --------------------- |
| Frontend | 3000 | Web 界面 |
| Backend | 8080 | REST API 和 WebSocket |
| Postgres | 5432 | 仅内部访问 |
## 关键环境变量
| 变量 | 说明 | 默认值 |
| -------------------------------- | ---------------------------------- | ------------------------- |
| `MULTICA_VERSION` | 用于源码构建的 Git 引用 | `v0.1.32` |
| `MULTICA_BACKEND_PORT_OVERRIDE` | 后端 API 对外端口 | `8080` |
| `MULTICA_FRONTEND_PORT_OVERRIDE` | Web 界面对外端口 | `3000` |
| `MULTICA_JWT_SECRET` | JWT 签名密钥(生产环境必须修改) | `change-me-in-production` |
| `MULTICA_POSTGRES_PASSWORD` | PostgreSQL 密码 | `multica` |
| `MULTICA_FRONTEND_ORIGIN` | 前端 URL,用于 CORS 和 Cookie 设置 | `http://localhost:3000` |
| `MULTICA_GOOGLE_CLIENT_ID` | Google OAuth 客户端 ID(可选) | - |
| `MULTICA_GOOGLE_CLIENT_SECRET` | Google OAuth 客户端密钥(可选) | - |
| `MULTICA_RESEND_API_KEY` | Resend 邮件服务的 API Key(可选) | - |
| `TZ` | 容器时区 | `UTC` |
## 存储
| 卷 | 说明 |
| ---------------- | --------------- |
| `multica_pgdata` | PostgreSQL 数据 |
## 安全说明
- 在对外暴露服务前,务必修改 `MULTICA_JWT_SECRET`。
- 生产环境部署时请修改 `MULTICA_POSTGRES_PASSWORD`。
- Google OAuth 和邮件服务(Resend)均为可选配置,平台在没有它们的情况下也能正常运行。
- 首次构建需要从 GitHub 下载完整的 Multica 仓库并构建 Docker 镜像,因此需要联网,可能需要几分钟。
## 参考资料
- [Multica 仓库](https://github.com/multica-ai/multica)
- [自托管指南](https://github.com/multica-ai/multica/blob/main/SELF_HOSTING.md)
+109
@@ -0,0 +1,109 @@
x-defaults: &defaults
restart: unless-stopped
logging:
driver: json-file
options:
max-size: ${MULTICA_LOG_MAX_SIZE:-100m}
max-file: '${MULTICA_LOG_MAX_FILE:-3}'
services:
multica-postgres:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}pgvector/pgvector:${MULTICA_PGVECTOR_VERSION:-pg17}
environment:
- TZ=${TZ:-UTC}
- POSTGRES_DB=${MULTICA_POSTGRES_DB:-multica}
- POSTGRES_USER=${MULTICA_POSTGRES_USER:-multica}
- POSTGRES_PASSWORD=${MULTICA_POSTGRES_PASSWORD:-multica}
volumes:
- multica_pgdata:/var/lib/postgresql/data
healthcheck:
test: [CMD-SHELL, pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB]
interval: 10s
timeout: 5s
retries: 5
start_period: 10s
deploy:
resources:
limits:
cpus: ${MULTICA_POSTGRES_CPU_LIMIT:-1.00}
memory: ${MULTICA_POSTGRES_MEMORY_LIMIT:-1G}
reservations:
cpus: ${MULTICA_POSTGRES_CPU_RESERVATION:-0.25}
memory: ${MULTICA_POSTGRES_MEMORY_RESERVATION:-256M}
multica-backend:
<<: *defaults
build:
context: https://github.com/multica-ai/multica.git#${MULTICA_VERSION:-v0.1.32}
dockerfile: Dockerfile
depends_on:
multica-postgres:
condition: service_healthy
ports:
- '${MULTICA_BACKEND_PORT_OVERRIDE:-8080}:8080'
environment:
- TZ=${TZ:-UTC}
- DATABASE_URL=postgres://${MULTICA_POSTGRES_USER:-multica}:${MULTICA_POSTGRES_PASSWORD:-multica}@multica-postgres:5432/${MULTICA_POSTGRES_DB:-multica}?sslmode=disable
- PORT=8080
- JWT_SECRET=${MULTICA_JWT_SECRET:-change-me-in-production}
- FRONTEND_ORIGIN=${MULTICA_FRONTEND_ORIGIN:-http://localhost:3000}
- CORS_ALLOWED_ORIGINS=${MULTICA_CORS_ALLOWED_ORIGINS:-}
- MULTICA_APP_URL=${MULTICA_APP_URL:-http://localhost:3000}
- RESEND_API_KEY=${MULTICA_RESEND_API_KEY:-}
- RESEND_FROM_EMAIL=${MULTICA_RESEND_FROM_EMAIL:-noreply@multica.ai}
- GOOGLE_CLIENT_ID=${MULTICA_GOOGLE_CLIENT_ID:-}
- GOOGLE_CLIENT_SECRET=${MULTICA_GOOGLE_CLIENT_SECRET:-}
- GOOGLE_REDIRECT_URI=${MULTICA_GOOGLE_REDIRECT_URI:-http://localhost:3000/auth/callback}
- COOKIE_DOMAIN=${MULTICA_COOKIE_DOMAIN:-}
healthcheck:
test:
- CMD-SHELL
- wget --no-verbose --tries=1 --spider http://127.0.0.1:8080/ || exit 1
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
deploy:
resources:
limits:
cpus: ${MULTICA_BACKEND_CPU_LIMIT:-2.00}
memory: ${MULTICA_BACKEND_MEMORY_LIMIT:-2G}
reservations:
cpus: ${MULTICA_BACKEND_CPU_RESERVATION:-0.50}
memory: ${MULTICA_BACKEND_MEMORY_RESERVATION:-512M}
multica-frontend:
<<: *defaults
build:
context: https://github.com/multica-ai/multica.git#${MULTICA_VERSION:-v0.1.32}
dockerfile: Dockerfile.web
args:
REMOTE_API_URL: http://multica-backend:8080
NEXT_PUBLIC_GOOGLE_CLIENT_ID: ${MULTICA_GOOGLE_CLIENT_ID:-}
depends_on:
- multica-backend
ports:
- '${MULTICA_FRONTEND_PORT_OVERRIDE:-3000}:3000'
environment:
- TZ=${TZ:-UTC}
- HOSTNAME=0.0.0.0
healthcheck:
test:
- CMD-SHELL
- wget --no-verbose --tries=1 --spider http://127.0.0.1:3000/ || exit 1
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
deploy:
resources:
limits:
cpus: ${MULTICA_FRONTEND_CPU_LIMIT:-1.00}
memory: ${MULTICA_FRONTEND_MEMORY_LIMIT:-1G}
reservations:
cpus: ${MULTICA_FRONTEND_CPU_RESERVATION:-0.25}
memory: ${MULTICA_FRONTEND_MEMORY_RESERVATION:-256M}
volumes:
multica_pgdata:
+33
@@ -0,0 +1,33 @@
# Source build configuration
OPENFANG_VERSION=0.1.0
# Network configuration
OPENFANG_PORT_OVERRIDE=4200
# OpenFang runtime configuration
OPENFANG_PROVIDER=anthropic
OPENFANG_MODEL=claude-sonnet-4-20250514
OPENFANG_API_KEY_ENV=ANTHROPIC_API_KEY
OPENFANG_API_KEY=
OPENFANG_LOG_LEVEL=info
OPENFANG_MEMORY_DECAY_RATE=0.05
OPENFANG_EXEC_MODE=allowlist
OPENFANG_EXEC_TIMEOUT_SECS=30
# Provider credentials
ANTHROPIC_API_KEY=
OPENAI_API_KEY=
GROQ_API_KEY=
# Resources
OPENFANG_CPU_LIMIT=2.00
OPENFANG_MEMORY_LIMIT=2G
OPENFANG_CPU_RESERVATION=0.50
OPENFANG_MEMORY_RESERVATION=512M
# Logging
OPENFANG_LOG_MAX_SIZE=100m
OPENFANG_LOG_MAX_FILE=3
# Timezone
TZ=UTC
+71
@@ -0,0 +1,71 @@
# OpenFang
[中文文档](README.zh.md)
OpenFang is an open-source agent operating system. This Compose setup builds the upstream Docker image from the `v0.1.0` source tag and writes a minimal `config.toml` into the persistent data volume on startup.
## Quick Start
1. Copy the example environment file:
```bash
cp .env.example .env
```
2. Set at least one provider API key in `.env`:
- `ANTHROPIC_API_KEY`
- `OPENAI_API_KEY`
- `GROQ_API_KEY`
3. Start OpenFang:
```bash
docker compose up -d
```
4. Open the dashboard:
- <http://localhost:4200>
5. Verify health if needed:
```bash
curl http://localhost:4200/api/health
```
## Default Ports
| Service | Port | Description |
| -------- | ---- | ---------------------- |
| OpenFang | 4200 | Dashboard and REST API |
## Important Environment Variables
| Variable | Description | Default |
| ------------------------ | ------------------------------------------------------------------ | -------------------------- |
| `OPENFANG_VERSION` | Git tag used for the source build | `0.1.0` |
| `OPENFANG_PORT_OVERRIDE` | Host port for OpenFang | `4200` |
| `OPENFANG_PROVIDER` | Default model provider | `anthropic` |
| `OPENFANG_MODEL` | Default model name | `claude-sonnet-4-20250514` |
| `OPENFANG_API_KEY_ENV` | Environment variable name that OpenFang reads for the provider key | `ANTHROPIC_API_KEY` |
| `OPENFANG_API_KEY` | Optional Bearer token to protect the API | - |
| `ANTHROPIC_API_KEY` | Anthropic API key | - |
| `OPENAI_API_KEY` | OpenAI API key | - |
| `GROQ_API_KEY` | Groq API key | - |
| `TZ` | Container timezone | `UTC` |
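For example, pointing the default model at OpenAI instead of Anthropic only takes the variables above; the model name below is an illustrative placeholder, not a tested value:
```bash
# .env: switch the default provider (model name is illustrative)
OPENFANG_PROVIDER=openai
OPENFANG_MODEL=gpt-4o
OPENFANG_API_KEY_ENV=OPENAI_API_KEY
OPENAI_API_KEY=sk-your-key-here
```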
## Volumes
- `openfang_data`: Persistent configuration and runtime data under `/data`.
## Notes
- The generated config binds to `0.0.0.0:4200` for container use.
- If `OPENFANG_API_KEY` is empty, the instance runs without API authentication; the only protection is whatever you place in front of it.
- This setup uses the upstream Dockerfile, so the first build can take several minutes.
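If you do set `OPENFANG_API_KEY`, the table above describes it as a Bearer token, so an authenticated check presumably looks like this (the header scheme is an assumption):
```bash
# Probe the health endpoint with the configured token (standard Bearer scheme assumed)
curl -H "Authorization: Bearer $OPENFANG_API_KEY" http://localhost:4200/api/health
```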
## References
- [OpenFang Repository](https://github.com/RightNow-AI/openfang)
- [Getting Started Guide](https://github.com/RightNow-AI/openfang/blob/main/docs/getting-started.md)
+71
@@ -0,0 +1,71 @@
# OpenFang
[English](README.md)
OpenFang 是一个开源的 Agent Operating System。这个 Compose 配置会基于上游 `v0.1.0` 源码标签构建镜像,并在启动时把最小可用的 `config.toml` 写入持久化数据卷。
## 快速开始
1. 复制环境变量示例文件:
```bash
cp .env.example .env
```
2. 在 `.env` 中至少填写一个模型提供商的 API Key:
- `ANTHROPIC_API_KEY`
- `OPENAI_API_KEY`
- `GROQ_API_KEY`
3. 启动 OpenFang:
```bash
docker compose up -d
```
4. 打开控制台:
- <http://localhost:4200>
5. 如需检查健康状态:
```bash
curl http://localhost:4200/api/health
```
## 默认端口
| 服务 | 端口 | 说明 |
| -------- | ---- | ----------------- |
| OpenFang | 4200 | 控制台与 REST API |
## 关键环境变量
| 变量 | 说明 | 默认值 |
| ------------------------ | ----------------------------------------- | -------------------------- |
| `OPENFANG_VERSION` | 用于源码构建的 Git 标签 | `0.1.0` |
| `OPENFANG_PORT_OVERRIDE` | OpenFang 对外端口 | `4200` |
| `OPENFANG_PROVIDER` | 默认模型提供商 | `anthropic` |
| `OPENFANG_MODEL` | 默认模型名称 | `claude-sonnet-4-20250514` |
| `OPENFANG_API_KEY_ENV` | OpenFang 读取提供商密钥时使用的环境变量名 | `ANTHROPIC_API_KEY` |
| `OPENFANG_API_KEY` | 可选的 API Bearer Token | - |
| `ANTHROPIC_API_KEY` | Anthropic API Key | - |
| `OPENAI_API_KEY` | OpenAI API Key | - |
| `GROQ_API_KEY` | Groq API Key | - |
| `TZ` | 容器时区 | `UTC` |
## 数据卷
- `openfang_data`:持久化 `/data` 下的配置与运行数据。
## 说明
- 生成的配置会监听 `0.0.0.0:4200`,适合容器内运行。
- 如果 `OPENFANG_API_KEY` 为空,实例本身不会启用额外 API 认证,是否暴露到公网需要你自行把控。
- 该服务使用上游 Dockerfile 从源码构建,首次构建通常需要几分钟。
## 参考资料
- [OpenFang 仓库](https://github.com/RightNow-AI/openfang)
- [入门文档](https://github.com/RightNow-AI/openfang/blob/main/docs/getting-started.md)
+71
@@ -0,0 +1,71 @@
x-defaults: &defaults
restart: unless-stopped
logging:
driver: json-file
options:
max-size: ${OPENFANG_LOG_MAX_SIZE:-100m}
max-file: '${OPENFANG_LOG_MAX_FILE:-3}'
services:
openfang:
<<: *defaults
build:
context: https://github.com/RightNow-AI/openfang.git#${OPENFANG_VERSION:-0.1.0}
dockerfile: Dockerfile
image: ${GLOBAL_REGISTRY:-}alexsuntop/openfang:${OPENFANG_VERSION:-0.1.0}
ports:
- '${OPENFANG_PORT_OVERRIDE:-4200}:4200'
environment:
- TZ=${TZ:-UTC}
- OPENFANG_HOME=/data
- ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
- OPENAI_API_KEY=${OPENAI_API_KEY:-}
- GROQ_API_KEY=${GROQ_API_KEY:-}
env_file:
- .env
entrypoint:
- /bin/sh
- -ec
# Pass the script as a single list item: a plain-string command would be shell-word split by Compose
command:
  - |
: > /data/config.toml
if [ -n "${OPENFANG_API_KEY:-}" ]; then
printf 'api_key = "%s"\n' "${OPENFANG_API_KEY}" >> /data/config.toml
fi
cat >> /data/config.toml <<EOF
api_listen = "0.0.0.0:4200"
log_level = "${OPENFANG_LOG_LEVEL:-info}"
[default_model]
provider = "${OPENFANG_PROVIDER:-anthropic}"
model = "${OPENFANG_MODEL:-claude-sonnet-4-20250514}"
api_key_env = "${OPENFANG_API_KEY_ENV:-ANTHROPIC_API_KEY}"
[memory]
decay_rate = ${OPENFANG_MEMORY_DECAY_RATE:-0.05}
[exec_policy]
mode = "${OPENFANG_EXEC_MODE:-allowlist}"
timeout_secs = ${OPENFANG_EXEC_TIMEOUT_SECS:-30}
EOF
exec openfang start
volumes:
- openfang_data:/data
healthcheck:
test:
- CMD-SHELL
- python3 -c "import urllib.request; urllib.request.urlopen('http://127.0.0.1:4200/api/health', timeout=5)"
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
deploy:
resources:
limits:
cpus: ${OPENFANG_CPU_LIMIT:-2.00}
memory: ${OPENFANG_MEMORY_LIMIT:-2G}
reservations:
cpus: ${OPENFANG_CPU_RESERVATION:-0.50}
memory: ${OPENFANG_MEMORY_RESERVATION:-512M}
volumes:
openfang_data:
+31
@@ -0,0 +1,31 @@
# Source build configuration
PAPERCLIP_GIT_REF=main
# Network configuration
PAPERCLIP_PORT_OVERRIDE=3100
PAPERCLIP_PUBLIC_URL=http://localhost:3100
PAPERCLIP_ALLOWED_HOSTNAMES=localhost
# Runtime mode
PAPERCLIP_DEPLOYMENT_MODE=authenticated
PAPERCLIP_DEPLOYMENT_EXPOSURE=private
# Optional external database
DATABASE_URL=
# LLM credentials for local adapters and workflows
OPENAI_API_KEY=
ANTHROPIC_API_KEY=
# Resources
PAPERCLIP_CPU_LIMIT=2.00
PAPERCLIP_MEMORY_LIMIT=4G
PAPERCLIP_CPU_RESERVATION=0.50
PAPERCLIP_MEMORY_RESERVATION=1G
# Logging
PAPERCLIP_LOG_MAX_SIZE=100m
PAPERCLIP_LOG_MAX_FILE=3
# Timezone
TZ=UTC
+67
@@ -0,0 +1,67 @@
# Paperclip
[中文文档](README.zh.md)
Paperclip is an open-source orchestration platform for running AI-native teams. This Compose setup builds the upstream Docker image from source, persists the full Paperclip home directory, and exposes the web UI on port 3100.
## Quick Start
1. Copy the example environment file:
```bash
cp .env.example .env
```
2. Optionally edit `.env`:
- Set `PAPERCLIP_PUBLIC_URL` if you are not using `http://localhost:3100`
- Add `OPENAI_API_KEY` and/or `ANTHROPIC_API_KEY` for local adapters
- Set `DATABASE_URL` if you want to use an external PostgreSQL instance instead of the embedded database (see the example after this list)
3. Start the service:
```bash
docker compose up -d
```
4. Open the UI:
- <http://localhost:3100>
5. Follow the Paperclip onboarding flow in the browser.
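`DATABASE_URL` takes a standard PostgreSQL connection string; a hypothetical example for an external instance (hostname and credentials are placeholders):
```bash
# .env: external PostgreSQL instead of the embedded database (placeholder values)
DATABASE_URL=postgres://paperclip:change-me@db.example.com:5432/paperclip
```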
## Default Ports
| Service | Port | Description |
| --------- | ---- | -------------- |
| Paperclip | 3100 | Web UI and API |
## Important Environment Variables
| Variable | Description | Default |
| ------------------------------- | ---------------------------------------- | ----------------------- |
| `PAPERCLIP_GIT_REF` | Git ref used for the source build | `main` |
| `PAPERCLIP_PORT_OVERRIDE` | Host port for Paperclip | `3100` |
| `PAPERCLIP_PUBLIC_URL` | Public URL for auth and invite flows | `http://localhost:3100` |
| `PAPERCLIP_ALLOWED_HOSTNAMES` | Extra allowed hostnames | `localhost` |
| `PAPERCLIP_DEPLOYMENT_MODE` | Deployment mode | `authenticated` |
| `PAPERCLIP_DEPLOYMENT_EXPOSURE` | Exposure mode | `private` |
| `DATABASE_URL` | Optional external PostgreSQL URL | - |
| `OPENAI_API_KEY` | OpenAI key for bundled local adapters | - |
| `ANTHROPIC_API_KEY` | Anthropic key for bundled local adapters | - |
| `TZ` | Container timezone | `UTC` |
## Volumes
- `paperclip_data`: Stores embedded PostgreSQL data, uploaded files, secrets, and runtime state.
## Notes
- If `DATABASE_URL` is not provided, Paperclip automatically uses embedded PostgreSQL.
- The upstream Docker image includes the UI and server in one container.
- The first source build can take several minutes.
## References
- [Paperclip Repository](https://github.com/paperclipai/paperclip)
- [Docker Deployment Guide](https://github.com/paperclipai/paperclip/blob/main/docs/deploy/docker.md)
+67
@@ -0,0 +1,67 @@
# Paperclip
[English](README.md)
Paperclip 是一个面向 AI 团队编排的开源平台。这个 Compose 配置会从上游源码构建 Docker 镜像,持久化整个 Paperclip Home 目录,并通过 3100 端口暴露 Web 界面。
## 快速开始
1. 复制环境变量示例文件:
```bash
cp .env.example .env
```
2. 按需编辑 `.env`:
- 如果你不通过 `http://localhost:3100` 访问,请修改 `PAPERCLIP_PUBLIC_URL`
- 如果要启用本地适配器,填写 `OPENAI_API_KEY` 和/或 `ANTHROPIC_API_KEY`
- 如果要接入外部 PostgreSQL,而不是内置数据库,请设置 `DATABASE_URL`
3. 启动服务:
```bash
docker compose up -d
```
4. 打开界面:
- <http://localhost:3100>
5. 在浏览器中完成 Paperclip 的初始化流程。
## 默认端口
| 服务 | 端口 | 说明 |
| --------- | ---- | -------------- |
| Paperclip | 3100 | Web 界面与 API |
## 关键环境变量
| 变量 | 说明 | 默认值 |
| ------------------------------- | ---------------------------- | ----------------------- |
| `PAPERCLIP_GIT_REF` | 用于源码构建的 Git 引用 | `main` |
| `PAPERCLIP_PORT_OVERRIDE` | Paperclip 对外端口 | `3100` |
| `PAPERCLIP_PUBLIC_URL` | 认证与邀请流程使用的公开 URL | `http://localhost:3100` |
| `PAPERCLIP_ALLOWED_HOSTNAMES` | 额外允许的主机名 | `localhost` |
| `PAPERCLIP_DEPLOYMENT_MODE` | 部署模式 | `authenticated` |
| `PAPERCLIP_DEPLOYMENT_EXPOSURE` | 暴露模式 | `private` |
| `DATABASE_URL` | 可选的外部 PostgreSQL 连接串 | - |
| `OPENAI_API_KEY` | OpenAI Key | - |
| `ANTHROPIC_API_KEY` | Anthropic Key | - |
| `TZ` | 容器时区 | `UTC` |
## 数据卷
- `paperclip_data`:保存内置 PostgreSQL、上传文件、密钥和运行状态。
## 说明
- 如果没有设置 `DATABASE_URL`,Paperclip 会自动启用内置 PostgreSQL。
- 上游 Docker 镜像已经包含前端和服务端,不需要再拆分多个容器。
- 首次源码构建通常需要几分钟。
## 参考资料
- [Paperclip 仓库](https://github.com/paperclipai/paperclip)
- [Docker 部署文档](https://github.com/paperclipai/paperclip/blob/main/docs/deploy/docker.md)
+53
@@ -0,0 +1,53 @@
x-defaults: &defaults
restart: unless-stopped
logging:
driver: json-file
options:
max-size: ${PAPERCLIP_LOG_MAX_SIZE:-100m}
max-file: '${PAPERCLIP_LOG_MAX_FILE:-3}'
services:
paperclip:
<<: *defaults
build:
context: https://github.com/paperclipai/paperclip.git#${PAPERCLIP_GIT_REF:-main}
dockerfile: Dockerfile
image: ${GLOBAL_REGISTRY:-}alexsuntop/paperclip:${PAPERCLIP_GIT_REF:-main}
ports:
- '${PAPERCLIP_PORT_OVERRIDE:-3100}:3100'
environment:
- TZ=${TZ:-UTC}
- HOST=0.0.0.0
- PORT=3100
- SERVE_UI=true
- PAPERCLIP_HOME=/paperclip
- PAPERCLIP_DEPLOYMENT_MODE=${PAPERCLIP_DEPLOYMENT_MODE:-authenticated}
- PAPERCLIP_DEPLOYMENT_EXPOSURE=${PAPERCLIP_DEPLOYMENT_EXPOSURE:-private}
- PAPERCLIP_PUBLIC_URL=${PAPERCLIP_PUBLIC_URL:-http://localhost:3100}
- PAPERCLIP_ALLOWED_HOSTNAMES=${PAPERCLIP_ALLOWED_HOSTNAMES:-localhost}
- OPENAI_API_KEY=${OPENAI_API_KEY:-}
- ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
- DATABASE_URL=${DATABASE_URL:-}
env_file:
- .env
volumes:
- paperclip_data:/paperclip
healthcheck:
test:
- CMD-SHELL
- curl -fsS http://127.0.0.1:3100/api/health >/dev/null || exit 1
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
deploy:
resources:
limits:
cpus: ${PAPERCLIP_CPU_LIMIT:-2.00}
memory: ${PAPERCLIP_MEMORY_LIMIT:-4G}
reservations:
cpus: ${PAPERCLIP_CPU_RESERVATION:-0.50}
memory: ${PAPERCLIP_MEMORY_RESERVATION:-1G}
volumes:
paperclip_data:
+73
@@ -0,0 +1,73 @@
# Source build configuration
TURBOOCR_VERSION=v2.1.1
# Registry mirror prefix for docker build — leave empty for direct pull.
# China users: set to "docker.m.daocloud.io/" to proxy Docker Hub via DaoCloud.
# Example: TURBOOCR_DOCKER_MIRROR=docker.m.daocloud.io/
TURBOOCR_DOCKER_MIRROR=
# NGC (nvcr.io) mirror prefix for the CUDA 12.x GPU build — leave empty for direct pull.
# Standard Docker Hub mirrors (e.g. DaoCloud) do NOT proxy nvcr.io.
# Set this only if you have a dedicated NGC pull-through proxy.
TURBOOCR_NGC_MIRROR=
# Network configuration
TURBOOCR_HTTP_PORT_OVERRIDE=8000
TURBOOCR_GRPC_PORT_OVERRIDE=50051
# Language bundle: latin (default), chinese, greek, eslav, arabic, korean, thai
TURBOOCR_LANG=
# Set to 1 with TURBOOCR_LANG=chinese to use the 84 MB server rec model
TURBOOCR_SERVER=
# GPU pipeline pool — number of concurrent inference pipelines (~1.4 GB VRAM each).
# Leave empty to let the server choose automatically based on available VRAM.
# Ignored in CPU mode.
TURBOOCR_PIPELINE_POOL_SIZE=
# Set to 1 to skip loading the PP-DocLayoutV3 layout detection model.
# Saves ~300-500 MB VRAM and cuts first-start compilation time by ~28 min on laptop GPUs.
# Only do this if you do not need the ?layout=1 PDF endpoint.
TURBOOCR_DISABLE_LAYOUT=0
# Default PDF parsing mode: ocr (safest) / geometric / auto / auto_verified
TURBOOCR_PDF_MODE=ocr
# Set to 1 to skip the angle classifier (~0.4 ms savings per image)
TURBOOCR_DISABLE_ANGLE_CLS=0
# Maximum detection input dimension in pixels
TURBOOCR_DET_MAX_SIDE=960
# PDF render parallelism
TURBOOCR_PDF_DAEMONS=16
TURBOOCR_PDF_WORKERS=4
# Maximum pages accepted in a single PDF request
TURBOOCR_MAX_PDF_PAGES=2000
# Log level: debug / info / warn / error
TURBOOCR_LOG_LEVEL=info
# Log format: json (structured) / text (human-readable)
TURBOOCR_LOG_FORMAT=json
# Resources — GPU variant (profile: gpu)
# First-start builds TRT engines; 12 G covers the GPU + engine compilation headroom.
TURBOOCR_CPU_LIMIT=8.0
TURBOOCR_MEMORY_LIMIT=12G
TURBOOCR_CPU_RESERVATION=2.0
TURBOOCR_MEMORY_RESERVATION=4G
# Number of NVIDIA GPUs to reserve (GPU variant only)
TURBOOCR_GPU_COUNT=1
# Shared memory — fastpdf2png uses /dev/shm for inter-process PDF page transfers
TURBOOCR_SHM_SIZE=2g
# Logging
TURBOOCR_LOG_MAX_SIZE=100m
TURBOOCR_LOG_MAX_FILE=3
# Timezone
TZ=UTC
+104
@@ -0,0 +1,104 @@
# ============================================================
# TurboOCR — CPU-only build (ONNX Runtime backend, no GPU required)
# Base image: ubuntu:24.04
#
# Produces: /app/build_cpu/paddle_cpu_server (HTTP + gRPC server)
#
# Image size: ~500 MB (vs ~10 GB for the GPU image).
# No TRT compilation on first start — ONNX Runtime is used directly.
# Startup is fast (~30 s) and requires no NVIDIA driver.
#
# Build: docker build -f Dockerfile.cpu -t turboocr-cpu .
# ============================================================
ARG TURBOOCR_VERSION=v2.1.1
ARG ORT_VERSION=1.22.0
# Registry mirror prefix — leave empty for direct pull.
# China users: set to "docker.m.daocloud.io/" to proxy Docker Hub via DaoCloud.
ARG DOCKER_MIRROR=
FROM ${DOCKER_MIRROR}ubuntu:24.04
# Re-declare ARGs after FROM so they remain in scope
ARG TURBOOCR_VERSION
ARG ORT_VERSION
ENV DEBIAN_FRONTEND=noninteractive
# Install build dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
cmake \
g++ \
make \
pkg-config \
libopencv-dev \
nginx \
gosu \
libgrpc++-dev \
libc-ares-dev \
libprotobuf-dev \
protobuf-compiler \
protobuf-compiler-grpc \
libjsoncpp-dev \
uuid-dev \
zlib1g-dev \
libssl-dev \
git \
wget \
curl \
ca-certificates \
&& rm -rf /var/lib/apt/lists/*
# Install Drogon HTTP framework (async, epoll-based)
RUN cd /tmp && \
git clone --depth 1 --branch v1.9.12 https://github.com/drogonframework/drogon.git && \
cd drogon && git submodule update --init && \
mkdir build && cd build && \
cmake .. -DBUILD_EXAMPLES=OFF -DBUILD_CTL=OFF -DBUILD_ORM=OFF \
-DBUILD_POSTGRESQL=OFF -DBUILD_MYSQL=OFF -DBUILD_SQLITE=OFF \
-DBUILD_REDIS=OFF -DBUILD_TESTING=OFF && \
make -j$(nproc) && make install && \
rm -rf /tmp/drogon
# Install ONNX Runtime C++ SDK
RUN cd /tmp && \
wget -q "https://github.com/microsoft/onnxruntime/releases/download/v${ORT_VERSION}/onnxruntime-linux-x64-${ORT_VERSION}.tgz" && \
tar xzf "onnxruntime-linux-x64-${ORT_VERSION}.tgz" && \
cp -r "onnxruntime-linux-x64-${ORT_VERSION}/include/"* /usr/local/include/ && \
cp "onnxruntime-linux-x64-${ORT_VERSION}/lib/libonnxruntime.so"* /usr/local/lib/ && \
ldconfig && rm -rf /tmp/onnxruntime*
# Clone TurboOCR at the pinned release tag
RUN git clone --depth 1 --branch "${TURBOOCR_VERSION}" \
https://github.com/aiptimizer/TurboOCR.git /app
WORKDIR /app
# Install fastpdf2png (PDF renderer — PDFium vendored in third_party/).
# Copy vendored libpdfium first so the installer does not need network access.
RUN cp third_party/pdfium/lib/libpdfium.so /usr/lib/ && ldconfig && \
bash scripts/install_fastpdf2png.sh && \
{ cp bin/libpdfium.so /usr/lib/ 2>/dev/null || true; } && \
ldconfig
# Build CPU-only mode with ONNX Runtime backend
RUN mkdir -p build_cpu && cd build_cpu && \
cmake .. -DUSE_CPU_ONLY=ON -DFETCH_MODELS=OFF && \
make -j$(nproc)
# Create non-root user and redirect /app/models/rec into the named cache volume.
RUN useradd -m -s /bin/bash ocr \
&& chmod +x /app/scripts/entrypoint.sh \
&& mkdir -p /home/ocr/.cache/turbo-ocr/models/rec /app/models \
&& ln -s /home/ocr/.cache/turbo-ocr/models/rec /app/models/rec
# Fetch all PP-OCRv5 language bundles (SHA256-verified from pinned GitHub Release)
ARG OCR_INCLUDE_SERVER=1
ENV OCR_INCLUDE_SERVER=${OCR_INCLUDE_SERVER}
RUN bash scripts/fetch_release_models.sh \
&& chown -R ocr:ocr /app /home/ocr/.cache
EXPOSE 8000 50051
ENTRYPOINT ["/app/scripts/entrypoint.sh"]
CMD ["./build_cpu/paddle_cpu_server"]
+118
@@ -0,0 +1,118 @@
# ============================================================
# TurboOCR — CUDA 12.x build (TensorRT 10.8 / CUDA 12.7)
# Base image: nvcr.io/nvidia/tensorrt:24.12-py3
#
# Supported compute capabilities (NVIDIA GPU reference):
# https://developer.nvidia.com/cuda-gpus
# 7.5 Turing — GTX 16xx / RTX 20xx
# 8.0 Ampere — A100, RTX 30xx server-class
# 8.6 Ampere — RTX 30xx desktop / laptop
# 8.9 Ada — RTX 40xx
#
# Blackwell (CC 12.0) requires CUDA 13.x.
# For that, use the upstream docker/Dockerfile.gpu (tensorrt:26.03-py3).
#
# Build: docker build -f Dockerfile.cuda12 -t turboocr-cuda12 .
# ============================================================
ARG TURBOOCR_VERSION=v2.1.1
ARG CMAKE_VERSION=3.31.6
ARG ORT_VERSION=1.22.0
# NGC registry mirror prefix — leave empty for direct pull from nvcr.io.
# Note: standard Docker Hub mirrors (e.g. DaoCloud) do NOT proxy nvcr.io.
# Set this only if you have a dedicated NGC mirror or a pull-through proxy.
ARG NGC_MIRROR=
FROM ${NGC_MIRROR}nvcr.io/nvidia/tensorrt:24.12-py3
# Re-declare ARGs after FROM so they remain in scope
ARG TURBOOCR_VERSION
ARG CMAKE_VERSION
ARG ORT_VERSION
# Install build dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
pkg-config \
libopencv-dev \
nginx \
gosu \
libgrpc++-dev \
libprotobuf-dev \
protobuf-compiler-grpc \
libjsoncpp-dev \
uuid-dev \
zlib1g-dev \
libssl-dev \
libc-ares-dev \
git \
wget \
curl \
&& rm -rf /var/lib/apt/lists/*
# Install Drogon HTTP framework (async, epoll-based)
RUN cd /tmp && \
git clone --depth 1 --branch v1.9.12 https://github.com/drogonframework/drogon.git && \
cd drogon && git submodule update --init && \
mkdir build && cd build && \
cmake .. -DBUILD_EXAMPLES=OFF -DBUILD_CTL=OFF -DBUILD_ORM=OFF \
-DBUILD_POSTGRESQL=OFF -DBUILD_MYSQL=OFF -DBUILD_SQLITE=OFF \
-DBUILD_REDIS=OFF -DBUILD_TESTING=OFF && \
make -j$(nproc) && make install && \
rm -rf /tmp/drogon
# Upgrade CMake (the base image may ship an older version)
RUN cd /tmp && \
wget -q "https://github.com/Kitware/CMake/releases/download/v${CMAKE_VERSION}/cmake-${CMAKE_VERSION}-linux-x86_64.tar.gz" && \
tar xzf "cmake-${CMAKE_VERSION}-linux-x86_64.tar.gz" && \
cp -r "cmake-${CMAKE_VERSION}-linux-x86_64/bin/"* /usr/local/bin/ && \
cp -r "cmake-${CMAKE_VERSION}-linux-x86_64/share/"* /usr/local/share/ && \
rm -rf /tmp/cmake*
# Install ONNX Runtime C++ SDK (used by the CPU inference fallback path)
RUN cd /tmp && \
wget -q "https://github.com/microsoft/onnxruntime/releases/download/v${ORT_VERSION}/onnxruntime-linux-x64-${ORT_VERSION}.tgz" && \
tar xzf "onnxruntime-linux-x64-${ORT_VERSION}.tgz" && \
cp -r "onnxruntime-linux-x64-${ORT_VERSION}/include/"* /usr/local/include/ && \
cp "onnxruntime-linux-x64-${ORT_VERSION}/lib/libonnxruntime.so"* /usr/local/lib/ && \
ldconfig && rm -rf /tmp/onnxruntime*
# Clone TurboOCR at the pinned release tag
RUN git clone --depth 1 --branch "${TURBOOCR_VERSION}" \
https://github.com/aiptimizer/TurboOCR.git /app
WORKDIR /app
# Install fastpdf2png (PDF renderer — PDFium vendored in third_party/)
RUN bash scripts/install_fastpdf2png.sh && \
cp bin/libpdfium.so /usr/lib/ && ldconfig
# Build GPU mode.
# - CUDA_ARCHITECTURES: 7.5-8.9 covers Turing through Ada Lovelace under CUDA 12.x.
# CC 12.0 (Blackwell) is excluded — it requires CUDA 13.x.
# - TENSORRT_DIR: /usr/local/tensorrt is the cmake default and matches the 24.12-py3
# base image layout. No override needed (upstream 26.03 uses /usr/lib/x86_64-linux-gnu).
# - FETCH_MODELS=OFF: models are fetched in a separate layer below for better caching.
RUN mkdir -p build && cd build && \
cmake .. \
-DFETCH_MODELS=OFF \
-DCMAKE_CUDA_ARCHITECTURES="75;80;86;89" \
&& make -j$(nproc)
# Create non-root user and redirect /app/models/rec into the named cache volume.
# TRT engines built at first start are persisted via: -v turboocr_cache:/home/ocr/.cache/turbo-ocr
RUN useradd -m -s /bin/bash ocr \
&& chmod +x /app/scripts/entrypoint.sh \
&& mkdir -p /home/ocr/.cache/turbo-ocr/models/rec /app/models \
&& ln -s /home/ocr/.cache/turbo-ocr/models/rec /app/models/rec
# Fetch all PP-OCRv5 language bundles (SHA256-verified from pinned GitHub Release)
ARG OCR_INCLUDE_SERVER=1
ENV OCR_INCLUDE_SERVER=${OCR_INCLUDE_SERVER}
RUN bash scripts/fetch_release_models.sh \
&& chown -R ocr:ocr /app /home/ocr/.cache
EXPOSE 8000 50051
ENTRYPOINT ["/app/scripts/entrypoint.sh"]
CMD ["./build/paddle_highspeed_cpp"]
+127
@@ -0,0 +1,127 @@
# TurboOCR — Custom Builds
[中文文档](README.zh.md)
This directory builds [TurboOCR](https://github.com/aiptimizer/TurboOCR) from source for two targets that are not covered by the upstream pre-built images:
| Variant | Dockerfile | Profile | Base image |
| ------- | ---------- | ------- | ---------- |
| **CUDA 12.x** | `Dockerfile.cuda12` | `gpu` | `nvcr.io/nvidia/tensorrt:24.12-py3` (TRT 10.8 / CUDA 12.7) |
| **CPU-only** | `Dockerfile.cpu` | `cpu` | `ubuntu:24.04` (ONNX Runtime) |
The upstream pre-built image targets CUDA 13.x (Blackwell / CC 12.0). Use this directory if your GPU is on CUDA 12.x (Turing through Ada Lovelace, CC 7.5–8.9) or if you have no GPU at all.
## Quick Start
1. Copy the example environment file:
```bash
cp .env.example .env
```
2. Build and start the variant you need:
**CUDA 12.x (GPU — Turing through Ada Lovelace):**
```bash
docker compose --profile gpu up -d --build
```
**CPU-only (no GPU required):**
```bash
docker compose --profile cpu up -d --build
```
3. Access the API at <http://localhost:8000>.
> **Note:** The first build compiles Drogon and TurboOCR from source, which takes 10–30 minutes depending on your CPU core count. Subsequent builds use the Docker layer cache and are fast.
## First-Start Behavior
### GPU variant
On the very first container start, TensorRT compiles 4 ONNX models into engine files. Measured times on an RTX 3070 Laptop:
| Engine | Time |
| ------ | ---- |
| det | ~5 min |
| rec | ~30 min |
| cls | ~4 min |
| layout | ~28 min |
| **Total** | **~67–90 min** |
High-end desktop GPUs finish in ~15 minutes. The container shows `unhealthy` during compilation — this is expected. Once all engines are ready the server starts and the status transitions to `healthy`. Subsequent restarts reuse the cached engines and start in seconds.
> **Tip:** Set `TURBOOCR_DISABLE_LAYOUT=1` to skip the layout detection engine (~28 min savings on laptop GPUs). Use this only if you do not need the `?layout=1` PDF endpoint.
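To watch the compilation and the reported health state during the first start, a sketch using standard Compose commands:
```bash
# Follow the GPU variant's logs while the TRT engines compile
docker compose --profile gpu logs -f turboocr-cuda12
# Inspect the healthcheck status Docker currently reports
docker inspect --format '{{.State.Health.Status}}' \
  "$(docker compose --profile gpu ps -q turboocr-cuda12)"
```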
### CPU variant
No TRT compilation occurs. ONNX Runtime loads the models directly at startup. The container is typically `healthy` within 60 seconds.
## Default Ports
| Port | Protocol | Description |
| ---- | -------- | ----------- |
| 8000 | HTTP | OCR REST API + health/metrics |
| 50051 | gRPC | OCR gRPC API |
## Important Environment Variables
| Variable | Description | Default |
| -------- | ----------- | ------- |
| `TURBOOCR_VERSION` | Git tag used for the source build | `v2.1.1` |
| `TURBOOCR_HTTP_PORT_OVERRIDE` | Host port for the HTTP API | `8000` |
| `TURBOOCR_GRPC_PORT_OVERRIDE` | Host port for the gRPC API | `50051` |
| `TURBOOCR_LANG` | Language bundle: `latin`, `chinese`, `greek`, `eslav`, `arabic`, `korean`, `thai` | `""` (latin) |
| `TURBOOCR_SERVER` | With `chinese`, set to `1` for the 84 MB server rec model | `""` |
| `TURBOOCR_PIPELINE_POOL_SIZE` | Concurrent GPU pipelines (~1.4 GB VRAM each); empty = auto | `""` |
| `TURBOOCR_DISABLE_LAYOUT` | Disable layout detection model (saves ~300–500 MB VRAM) | `0` |
| `TURBOOCR_PDF_MODE` | PDF parsing mode: `ocr` / `geometric` / `auto` / `auto_verified` | `ocr` |
| `TURBOOCR_CPU_LIMIT` | CPU core limit (both variants) | `8.0` |
| `TURBOOCR_MEMORY_LIMIT` | Memory limit — `12G` for GPU, `4G` for CPU | variant default |
| `TURBOOCR_GPU_COUNT` | NVIDIA GPUs to reserve (GPU variant only) | `1` |
| `TURBOOCR_SHM_SIZE` | Shared memory for fastpdf2png — `2g` for GPU, `512m` for CPU | variant default |
| `TZ` | Container timezone | `UTC` |
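For example, running the Chinese bundle with the larger server-class recognition model combines two variables from the table; restart the variant you use afterwards:
```bash
# .env: Chinese language bundle with the 84 MB server rec model
TURBOOCR_LANG=chinese
TURBOOCR_SERVER=1
```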
## Storage
- `turboocr_build_cache` — named volume at `/home/ocr/.cache/turbo-ocr`. Stores TRT engine files (GPU) or the model cache directory (CPU). Must be a named volume — a bind-mount of an empty host directory would shadow the baked-in language bundles and the server would fail to load models.
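To confirm the named volume exists and see where Docker keeps it, a quick check:
```bash
docker volume inspect turboocr_build_cache --format '{{.Mountpoint}}'
```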
## Supported GPU Architectures (CUDA 12.x variant)
| Compute Capability | Architecture | GPUs |
| ------------------ | ------------ | ---- |
| 7.5 | Turing | GTX 16xx, RTX 20xx |
| 8.0 | Ampere | A100, RTX 30xx (server) |
| 8.6 | Ampere | RTX 30xx (desktop / laptop) |
| 8.9 | Ada Lovelace | RTX 40xx |
Blackwell (CC 12.0, RTX 50xx) requires CUDA 13.x — use the upstream pre-built image from `src/turboocr` instead.
## Notes
- Both Dockerfiles build TurboOCR from source via `git clone` inside the image. A working internet connection is required at build time.
- The CUDA 12.x Dockerfile overrides `CMAKE_CUDA_ARCHITECTURES` to `75;80;86;89`, removing CC 12.0 which is not supported by CUDA 12.x.
- TensorRT 10.8 is located at `/usr/local/tensorrt` in the `24.12-py3` base image, which matches the CMake default. No `-DTENSORRT_DIR` override is needed.
- The CPU variant uses ONNX Runtime 1.22.0 and produces a `paddle_cpu_server` binary with both HTTP and gRPC interfaces.
## Endpoints
- HTTP API: <http://localhost:8000>
- gRPC API: `localhost:50051`
- Health: <http://localhost:8000/health>
- Readiness: <http://localhost:8000/health/ready>
- Metrics (Prometheus): <http://localhost:8000/metrics>
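A quick smoke test against the documented endpoints once the container reports `healthy`:
```bash
curl -fsS http://localhost:8000/health
curl -fsS http://localhost:8000/health/ready
curl -s http://localhost:8000/metrics | head -n 5
```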
## Security Notes
- The API has no authentication by default. Put a reverse proxy (nginx, Caddy) in front for production.
- The default PDF mode is `ocr`, which only trusts pixel data and is safe for untrusted PDF uploads.
- Do **not** set `TURBOOCR_PDF_MODE` to `geometric` or `auto` globally if you accept PDFs from untrusted sources.
## References
- [TurboOCR Repository](https://github.com/aiptimizer/TurboOCR)
- [NVIDIA TensorRT Container Releases](https://docs.nvidia.com/deeplearning/tensorrt/container-release-notes/)
- [NVIDIA CUDA GPU Compute Capability Table](https://developer.nvidia.com/cuda-gpus)
+127
@@ -0,0 +1,127 @@
# TurboOCR — 自定义构建
[English](README.md)
此目录从源码构建 [TurboOCR](https://github.com/aiptimizer/TurboOCR),覆盖上游预构建镜像未提供的两个目标:
| 变体 | Dockerfile | Profile | 基础镜像 |
| ---- | ---------- | ------- | -------- |
| **CUDA 12.x** | `Dockerfile.cuda12` | `gpu` | `nvcr.io/nvidia/tensorrt:24.12-py3`(TRT 10.8 / CUDA 12.7) |
| **纯 CPU** | `Dockerfile.cpu` | `cpu` | `ubuntu:24.04`(ONNX Runtime) |
上游预构建镜像针对 CUDA 13.x(Blackwell / CC 12.0)。如果你的 GPU 属于 CUDA 12.x 范围(Turing 到 Ada Lovelace,CC 7.5–8.9),或者没有 GPU,请使用本目录。
## 快速开始
1. 复制示例环境文件:
```bash
cp .env.example .env
```
2. 按需构建并启动对应变体:
**CUDA 12.x(GPU — Turing 到 Ada Lovelace):**
```bash
docker compose --profile gpu up -d --build
```
**纯 CPU(无需 GPU):**
```bash
docker compose --profile cpu up -d --build
```
3. 访问 API:<http://localhost:8000>。
> **说明:** 首次构建需要从源码编译 Drogon 和 TurboOCR,耗时约 10–30 分钟,具体取决于 CPU 核心数。后续构建会复用 Docker 层缓存,速度很快。
## 首次启动说明
### GPU 变体
容器首次启动时,TensorRT 会将 4 个 ONNX 模型编译为引擎文件。在 RTX 3070 Laptop 上的实测耗时:
| 引擎 | 耗时 |
| ---- | ---- |
| det | 约 5 分钟 |
| rec | 约 30 分钟 |
| cls | 约 4 分钟 |
| layout | 约 28 分钟 |
| **合计** | **约 67–90 分钟** |
高端桌面 GPU 约 15 分钟完成。编译期间容器显示 `unhealthy` 属于正常现象——所有引擎构建完成后服务启动,状态切换为 `healthy`。后续重启会复用缓存引擎,几乎瞬间完成。
> **提示:** 设置 `TURBOOCR_DISABLE_LAYOUT=1` 可跳过版面检测引擎的编译(笔记本 GPU 约节省 28 分钟)。仅在不需要 `?layout=1` PDF 端点时使用此选项。
### CPU 变体
无 TRT 编译过程。ONNX Runtime 在启动时直接加载模型,通常在 60 秒内变为 `healthy`。
## 默认端口
| 端口 | 协议 | 说明 |
| ---- | ---- | ---- |
| 8000 | HTTP | OCR REST API + 健康检查/指标 |
| 50051 | gRPC | OCR gRPC API |
## 主要环境变量
| 变量名 | 说明 | 默认值 |
| ------ | ---- | ------ |
| `TURBOOCR_VERSION` | 构建所用的 Git 标签 | `v2.1.1` |
| `TURBOOCR_HTTP_PORT_OVERRIDE` | HTTP API 主机端口 | `8000` |
| `TURBOOCR_GRPC_PORT_OVERRIDE` | gRPC API 主机端口 | `50051` |
| `TURBOOCR_LANG` | 语言包:`latin`、`chinese`、`greek`、`eslav`、`arabic`、`korean`、`thai` | `""`(latin) |
| `TURBOOCR_SERVER` | 当使用 `chinese` 时,设为 `1` 启用 84 MB 服务端识别模型 | `""` |
| `TURBOOCR_PIPELINE_POOL_SIZE` | 并发 GPU 流水线数(每条约 1.4 GB 显存),留空则自动 | `""` |
| `TURBOOCR_DISABLE_LAYOUT` | 禁用版面检测模型(节省约 300–500 MB 显存) | `0` |
| `TURBOOCR_PDF_MODE` | PDF 解析模式:`ocr` / `geometric` / `auto` / `auto_verified` | `ocr` |
| `TURBOOCR_CPU_LIMIT` | CPU 核心限制(两个变体通用) | `8.0` |
| `TURBOOCR_MEMORY_LIMIT` | 内存限制——GPU 变体 `12G`,CPU 变体 `4G` | 变体默认值 |
| `TURBOOCR_GPU_COUNT` | 预留的 NVIDIA GPU 数量(仅 GPU 变体) | `1` |
| `TURBOOCR_SHM_SIZE` | fastpdf2png 共享内存——GPU 变体 `2g`,CPU 变体 `512m` | 变体默认值 |
| `TZ` | 容器时区 | `UTC` |
## 存储
- `turboocr_build_cache`——命名卷,挂载于 `/home/ocr/.cache/turbo-ocr`。用于存储 TRT 引擎文件(GPU 变体)或模型缓存目录(CPU 变体)。必须使用**命名卷**——绑定挂载空主机目录会遮蔽镜像内置语言包,导致服务无法加载模型。
## 支持的 GPU 架构(CUDA 12.x 变体)
| 算力版本 | 架构 | GPU 型号 |
| -------- | ---- | -------- |
| 7.5 | Turing | GTX 16xx、RTX 20xx |
| 8.0 | Ampere | A100、RTX 30xx(服务器) |
| 8.6 | Ampere | RTX 30xx(桌面/笔记本) |
| 8.9 | Ada Lovelace | RTX 40xx |
Blackwell(CC 12.0,RTX 50xx)需要 CUDA 13.x——请改用 `src/turboocr` 中的上游预构建镜像。
## 说明
- 两个 Dockerfile 均在镜像内通过 `git clone` 从源码构建 TurboOCR,构建时需要可访问互联网。
- CUDA 12.x Dockerfile 将 `CMAKE_CUDA_ARCHITECTURES` 设置为 `75;80;86;89`,去除了 CUDA 12.x 不支持的 CC 12.0。
- TensorRT 10.8 在 `24.12-py3` 基础镜像中位于 `/usr/local/tensorrt`,与 CMake 默认值一致,无需额外的 `-DTENSORRT_DIR` 参数。
- CPU 变体使用 ONNX Runtime 1.22.0,生成同时支持 HTTP 和 gRPC 接口的 `paddle_cpu_server` 二进制文件。
## 访问端点
- HTTP API:<http://localhost:8000>
- gRPC API:`localhost:50051`
- 健康检查:<http://localhost:8000/health>
- 就绪检查:<http://localhost:8000/health/ready>
- Prometheus 指标:<http://localhost:8000/metrics>
## 安全说明
- API 默认无身份认证。生产环境请在前面套一层反向代理(nginx、Caddy 等)。
- PDF 默认模式为 `ocr`,只信任像素数据,可安全处理不可信来源的 PDF 上传。
- 如果你的服务接收不可信来源的 PDF,**不要**将 `TURBOOCR_PDF_MODE` 全局设为 `geometric` 或 `auto`。
## 参考链接
- [TurboOCR 仓库](https://github.com/aiptimizer/TurboOCR)
- [NVIDIA TensorRT 容器发布说明](https://docs.nvidia.com/deeplearning/tensorrt/container-release-notes/)
- [NVIDIA CUDA GPU 算力版本对照表](https://developer.nvidia.com/cuda-gpus)
+110
@@ -0,0 +1,110 @@
x-defaults: &defaults
restart: unless-stopped
logging:
driver: json-file
options:
max-size: ${TURBOOCR_LOG_MAX_SIZE:-100m}
max-file: '${TURBOOCR_LOG_MAX_FILE:-3}'
x-turboocr-common: &turboocr-common
<<: *defaults
ports:
- '${TURBOOCR_HTTP_PORT_OVERRIDE:-8000}:8000'
- '${TURBOOCR_GRPC_PORT_OVERRIDE:-50051}:50051'
volumes:
# Named volume persists TRT engines (GPU) or ONNX model cache (CPU).
# Must be a named volume — bind-mounting an empty host dir shadows the
# baked-in language bundles and prevents the server from loading models.
- turboocr_build_cache:/home/ocr/.cache/turbo-ocr
environment:
- TZ=${TZ:-UTC}
# Language bundle: latin (default), chinese, greek, eslav, arabic, korean, thai
- OCR_LANG=${TURBOOCR_LANG:-}
# Set to 1 with OCR_LANG=chinese to use the 84 MB server rec model
- OCR_SERVER=${TURBOOCR_SERVER:-}
# Concurrent GPU pipelines (~1.4 GB VRAM each); empty = auto; ignored in CPU mode
- PIPELINE_POOL_SIZE=${TURBOOCR_PIPELINE_POOL_SIZE:-}
# Set to 1 to disable PP-DocLayoutV3 layout detection (saves ~300-500 MB VRAM)
- DISABLE_LAYOUT=${TURBOOCR_DISABLE_LAYOUT:-0}
# Default PDF mode: ocr (safest) / geometric / auto / auto_verified
- ENABLE_PDF_MODE=${TURBOOCR_PDF_MODE:-ocr}
# Skip angle classifier (~0.4 ms savings)
- DISABLE_ANGLE_CLS=${TURBOOCR_DISABLE_ANGLE_CLS:-0}
# Max detection input size in pixels
- DET_MAX_SIDE=${TURBOOCR_DET_MAX_SIDE:-960}
# PDF render parallelism
- PDF_DAEMONS=${TURBOOCR_PDF_DAEMONS:-16}
- PDF_WORKERS=${TURBOOCR_PDF_WORKERS:-4}
# Maximum pages per PDF request
- MAX_PDF_PAGES=${TURBOOCR_MAX_PDF_PAGES:-2000}
# Log level: debug / info / warn / error
- LOG_LEVEL=${TURBOOCR_LOG_LEVEL:-info}
# Log format: json (structured) / text (human-readable)
- LOG_FORMAT=${TURBOOCR_LOG_FORMAT:-json}
services:
turboocr-cuda12:
<<: *turboocr-common
profiles: [gpu]
build:
context: .
dockerfile: Dockerfile.cuda12
args:
TURBOOCR_VERSION: ${TURBOOCR_VERSION:-v2.1.1}
NGC_MIRROR: ${TURBOOCR_NGC_MIRROR:-}
image: ${GLOBAL_REGISTRY:-}alexsuntop/turboocr-cuda12:${TURBOOCR_VERSION:-v2.1.1}
healthcheck:
test: [CMD, curl, -fsS, 'http://localhost:8000/health']
interval: 30s
timeout: 10s
retries: 5
# First start builds 4 TensorRT engines from ONNX. Measured times on an
# RTX 3070 Laptop: det (~5 min) + rec (~30 min) + cls (~4 min) +
# layout (~28 min) = ~67-90 min. High-end desktop GPUs finish in ~15 min.
# Set TURBOOCR_DISABLE_LAYOUT=1 to skip layout and save ~28 min.
# Subsequent restarts reuse the cached engines and start in seconds.
start_period: 120m
deploy:
resources:
limits:
cpus: ${TURBOOCR_CPU_LIMIT:-8.0}
memory: ${TURBOOCR_MEMORY_LIMIT:-12G}
reservations:
cpus: ${TURBOOCR_CPU_RESERVATION:-2.0}
memory: ${TURBOOCR_MEMORY_RESERVATION:-4G}
devices:
- driver: nvidia
count: ${TURBOOCR_GPU_COUNT:-1}
capabilities: [gpu]
shm_size: ${TURBOOCR_SHM_SIZE:-2g}
turboocr-cpu:
<<: *turboocr-common
profiles: [cpu]
build:
context: .
dockerfile: Dockerfile.cpu
args:
TURBOOCR_VERSION: ${TURBOOCR_VERSION:-v2.1.1}
DOCKER_MIRROR: ${TURBOOCR_DOCKER_MIRROR:-}
image: ${GLOBAL_REGISTRY:-}alexsuntop/turboocr-cpu:${TURBOOCR_VERSION:-v2.1.1}
healthcheck:
test: [CMD, curl, -fsS, 'http://localhost:8000/health']
interval: 30s
timeout: 10s
retries: 5
# CPU mode uses ONNX Runtime directly — no TRT compilation on first start.
# Expect startup in under 60 s on most hardware.
start_period: 2m
deploy:
resources:
limits:
cpus: ${TURBOOCR_CPU_LIMIT:-8.0}
memory: ${TURBOOCR_MEMORY_LIMIT:-4G}
reservations:
cpus: ${TURBOOCR_CPU_RESERVATION:-2.0}
memory: ${TURBOOCR_MEMORY_RESERVATION:-1G}
shm_size: ${TURBOOCR_SHM_SIZE:-512m}
volumes:
turboocr_build_cache:
@@ -18,7 +18,6 @@ services:
mcp-elevenlabs:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}mcp/elevenlabs:${MCP_ELEVENLABS_VERSION:-latest}
-    container_name: mcp-elevenlabs
environment:
- TZ=${TZ:-UTC}
- ELEVENLABS_API_KEY=${ELEVENLABS_API_KEY}
@@ -18,7 +18,6 @@ services:
mcp-firecrawl:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}mcp/firecrawl:${MCP_FIRECRAWL_VERSION:-latest}
-    container_name: mcp-firecrawl
environment:
- TZ=${TZ:-UTC}
- FIRECRAWL_API_KEY=${FIRECRAWL_API_KEY}
@@ -18,7 +18,6 @@ services:
mcp-youtube-transcript:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}mcp/youtube-transcript:${MCP_YOUTUBE_TRANSCRIPT_VERSION:-latest}
-    container_name: mcp-youtube-transcript
environment:
- TZ=${TZ:-UTC}
ports:
+22
@@ -0,0 +1,22 @@
# Global Registry Prefix (optional)
# GLOBAL_REGISTRY=
# AnythingLLM Image Version
# No stable semantic version tags exist; 'latest' tracks the current release.
ANYTHINGLLM_VERSION=latest
# Timezone
TZ=UTC
# Host port for the AnythingLLM web UI
ANYTHINGLLM_PORT_OVERRIDE=3001
# UID/GID for file ownership inside the container
ANYTHINGLLM_UID=1000
ANYTHINGLLM_GID=1000
# Resource Limits
ANYTHINGLLM_CPU_LIMIT=2
ANYTHINGLLM_MEMORY_LIMIT=2G
ANYTHINGLLM_CPU_RESERVATION=0.5
ANYTHINGLLM_MEMORY_RESERVATION=512M
+49
@@ -0,0 +1,49 @@
# AnythingLLM
[English](./README.md) | [中文](./README.zh.md)
Quick start: <https://docs.anythingllm.com>.
This service deploys AnythingLLM, an all-in-one AI application that lets you chat with documents, use multiple LLM providers, and build custom AI agents — with a full RAG pipeline built in.
## Services
- `anythingllm`: The AnythingLLM web application.
## Quick Start
```bash
docker compose up -d
```
Open `http://localhost:3001` and complete the setup wizard to connect your LLM provider.
## Configuration
All LLM providers, vector databases, and agent settings are configured through the web UI after startup. No API keys are required in `.env` unless you want to pre-seed them via environment variables.
| Variable | Description | Default |
| ----------------------------- | ----------------------------------------------- | -------- |
| `ANYTHINGLLM_VERSION` | Image version (`latest` — no stable tags exist) | `latest` |
| `TZ` | Container timezone | `UTC` |
| `ANYTHINGLLM_PORT_OVERRIDE` | Host port for the web UI | `3001` |
| `ANYTHINGLLM_UID` | UID for volume file ownership | `1000` |
| `ANYTHINGLLM_GID` | GID for volume file ownership | `1000` |
| `ANYTHINGLLM_CPU_LIMIT` | CPU limit | `2` |
| `ANYTHINGLLM_MEMORY_LIMIT` | Memory limit | `2G` |
| `ANYTHINGLLM_CPU_RESERVATION` | CPU reservation | `0.5` |
| `ANYTHINGLLM_MEMORY_RESERVATION` | Memory reservation | `512M` |
## Volumes
- `anythingllm_storage`: Persists all application data, uploaded documents, embeddings, and settings.
## Ports
- **3001**: Web UI
## Notes
- The `mintplexlabs/anythingllm` image does not publish stable semantic version tags; `latest` is the only reliable tag.
- Supports OpenAI, Anthropic, Ollama, LM Studio, and many other LLM backends — all configured from the UI.
- The health check uses the `/api/ping` endpoint.
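A quick liveness check against that endpoint:
```bash
curl -fsS http://localhost:3001/api/ping
```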
+49
@@ -0,0 +1,49 @@
# AnythingLLM
[English](./README.md) | [中文](./README.zh.md)
快速开始:<https://docs.anythingllm.com>。
此服务用于部署 AnythingLLM,一款集文档问答、多 LLM 提供商接入和自定义 AI Agent 于一体的全能 AI 应用,内置完整的 RAG 流水线。
## 服务
- `anythingllm`:AnythingLLM Web 应用。
## 快速开始
```bash
docker compose up -d
```
打开 `http://localhost:3001`,按照设置向导连接你的 LLM 提供商。
## 配置
所有 LLM 提供商、向量数据库和 Agent 设置均通过启动后的 Web UI 进行配置,无需在 `.env` 中预设 API Key(除非你希望通过环境变量预填充)。
| 变量 | 说明 | 默认值 |
| ----------------------------- | ----------------------------------- | -------- |
| `ANYTHINGLLM_VERSION` | 镜像版本(无语义化稳定标签,使用 `latest`) | `latest` |
| `TZ` | 容器时区 | `UTC` |
| `ANYTHINGLLM_PORT_OVERRIDE` | Web UI 的宿主机端口 | `3001` |
| `ANYTHINGLLM_UID` | 数据卷文件所有者 UID | `1000` |
| `ANYTHINGLLM_GID` | 数据卷文件所有者 GID | `1000` |
| `ANYTHINGLLM_CPU_LIMIT` | CPU 限制 | `2` |
| `ANYTHINGLLM_MEMORY_LIMIT` | 内存限制 | `2G` |
| `ANYTHINGLLM_CPU_RESERVATION` | CPU 预留 | `0.5` |
| `ANYTHINGLLM_MEMORY_RESERVATION` | 内存预留 | `512M` |
## 数据卷
- `anythingllm_storage`:持久化所有应用数据、上传的文档、嵌入向量和配置。
## 端口
- **3001**:Web UI
## 说明
- `mintplexlabs/anythingllm` 镜像未发布语义化稳定标签,`latest` 是唯一可靠的标签。
- 支持 OpenAI、Anthropic、Ollama、LM Studio 等众多 LLM 后端,均可在 UI 中配置。
- 健康检查使用 `/api/ping` 端点。
+42
@@ -0,0 +1,42 @@
x-defaults: &defaults
restart: unless-stopped
logging:
driver: json-file
options:
max-size: 100m
max-file: '3'
services:
anythingllm:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}mintplexlabs/anythingllm:${ANYTHINGLLM_VERSION:-latest}
ports:
- '${ANYTHINGLLM_PORT_OVERRIDE:-3001}:3001'
volumes:
- anythingllm_storage:/app/server/storage
environment:
- TZ=${TZ:-UTC}
- STORAGE_DIR=/app/server/storage
- UID=${ANYTHINGLLM_UID:-1000}
- GID=${ANYTHINGLLM_GID:-1000}
healthcheck:
test:
- CMD
- node
- -e
- "require('http').get('http://localhost:3001/api/ping',res=>process.exit(res.statusCode===200?0:1)).on('error',()=>process.exit(1))"
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
deploy:
resources:
limits:
cpus: ${ANYTHINGLLM_CPU_LIMIT:-2}
memory: ${ANYTHINGLLM_MEMORY_LIMIT:-2G}
reservations:
cpus: ${ANYTHINGLLM_CPU_RESERVATION:-0.5}
memory: ${ANYTHINGLLM_MEMORY_RESERVATION:-512M}
volumes:
anythingllm_storage:
+1 -1
@@ -1,5 +1,5 @@
# Bifrost Gateway Version
-BIFROST_VERSION=v1.3.63
+BIFROST_VERSION=v1.4.17
# Port to bind to on the host machine
BIFROST_PORT=28080
+1 -1
@@ -12,7 +12,7 @@ Bifrost is a lightweight, high-performance LLM gateway that supports multiple mo
## Configuration
-- `BIFROST_VERSION`: The version of the Bifrost image, default is `v1.3.63`.
+- `BIFROST_VERSION`: The version of the Bifrost image, default is `v1.4.17`.
- `BIFROST_PORT`: The port for the Bifrost service, default is `28080`.
### Telemetry
+1 -1
@@ -12,7 +12,7 @@ Bifrost 是一个轻量级、高性能的 LLM 网关,支持多种模型和提
## 配置
-- `BIFROST_VERSION`: Bifrost 镜像的版本,默认为 `v1.3.63`
+- `BIFROST_VERSION`: Bifrost 镜像的版本,默认为 `v1.4.17`
- `BIFROST_PORT`: Bifrost 服务的端口,默认为 `28080`
### 遥测 (Telemetry)
+1 -1
@@ -9,7 +9,7 @@ x-defaults: &defaults
services:
bifrost:
<<: *defaults
-    image: ${GLOBAL_REGISTRY:-}maximhq/bifrost:${BIFROST_VERSION:-v1.3.63}
+    image: ${GLOBAL_REGISTRY:-}maximhq/bifrost:${BIFROST_VERSION:-v1.4.17}
volumes:
- bifrost_data:/app/data
ports:
-2
@@ -37,7 +37,6 @@ services:
budibase:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}budibase/budibase:${BUDIBASE_VERSION:-3.23.0}
-    container_name: budibase
ports:
- '${BUDIBASE_PORT_OVERRIDE:-10000}:80'
environment:
@@ -98,7 +97,6 @@ services:
redis:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}redis:${REDIS_VERSION:-7-alpine}
-    container_name: budibase-redis
command: redis-server --appendonly yes --maxmemory 256mb --maxmemory-policy allkeys-lru
volumes:
- redis_data:/data
-3
@@ -38,7 +38,6 @@ services:
build:
context: https://github.com/conductor-oss/conductor.git#main:docker/server
dockerfile: Dockerfile
-    container_name: conductor-server
ports:
- '${CONDUCTOR_SERVER_PORT_OVERRIDE:-8080}:8080'
- '${CONDUCTOR_UI_PORT_OVERRIDE:-5000}:5000'
@@ -90,7 +89,6 @@ services:
postgres:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}postgres:${POSTGRES_VERSION:-16-alpine}
-    container_name: conductor-postgres
environment:
- POSTGRES_DB=${POSTGRES_DB:-conductor}
- POSTGRES_USER=${POSTGRES_USER:-conductor}
@@ -119,7 +117,6 @@ services:
elasticsearch:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}elasticsearch:${ELASTICSEARCH_VERSION:-8.11.0}
-    container_name: conductor-elasticsearch
environment:
- discovery.type=single-node
- xpack.security.enabled=false
Some files were not shown because too many files have changed in this diff.