Compare commits


14 Commits

Author SHA1 Message Date
Sun-ZhenXing 53b841926e feat: all cpus reservations to 0.1 2026-05-14 16:22:07 +08:00
Sun-ZhenXing 453a3eab11 feat: add sub2api 2026-05-10 15:18:24 +08:00
Sun-ZhenXing 3456de4586 feat: update GoModel 2026-05-08 11:48:15 +08:00
Sun-ZhenXing 5e8999c625 feat: add GoModel 2026-05-06 17:14:43 +08:00
Sun-ZhenXing c59a1e93f6 feat: add laminar 2026-05-06 09:55:21 +08:00
Sun-ZhenXing 5f8503df42 feat: add build turboocr 2026-04-29 11:54:59 +08:00
Sun-ZhenXing ce16588916 feat: add TurboOCR 2026-04-28 10:05:39 +08:00
Sun-ZhenXing 3483dd80f0 chore: update mineru 2026-04-19 14:12:11 +08:00
Summer Shen 0b5ba69cb0 feat: add more Agent services & easytier 2026-04-19 12:26:54 +08:00
Sun-ZhenXing 0e948befac refactor: signoz 2026-04-15 15:05:16 +08:00
Sun-ZhenXing ea1ca927c8 feat: add multica/ 2026-04-14 15:22:06 +08:00
Sun-ZhenXing 41c4e8fd4e feat: add Docker Compose repository guidelines and quick start instructions 2026-04-11 23:05:35 +08:00
Sun-ZhenXing 6ae63c5d86 feat: add shannon 2026-04-01 17:33:42 +08:00
Sun-ZhenXing b55fa9819b chore: update mineru 2026-03-30 14:17:32 +08:00
351 changed files with 9561 additions and 1293 deletions
+2 -2
@@ -36,10 +36,10 @@ services:
 deploy:
   resources:
     limits:
-      cpus: ${SERVICE_NAME_CPU_LIMIT:-0.50}
+      cpus: ${SERVICE_NAME_CPU_LIMIT:-0.5}
       memory: ${SERVICE_NAME_MEMORY_LIMIT:-256M}
     reservations:
-      cpus: ${SERVICE_NAME_CPU_RESERVATION:-0.25}
+      cpus: ${SERVICE_NAME_CPU_RESERVATION:-0.1}
       memory: ${SERVICE_NAME_MEMORY_RESERVATION:-128M}
 volumes:
@@ -1,69 +0,0 @@
---
applyTo: '**'
---
Compose Anything represents a collection of high-quality, production-ready, and portable Docker Compose configuration files. The primary objective is to allow users to deploy services "out-of-the-box" with minimal configuration while maintaining industry best practices.
The architecture focuses on modularity, security, and orchestrator compatibility (e.g., easy migration to Kubernetes). The technical challenge lies in balancing simplicity (zero-config startup) with robustness (resource limits, health checks, multi-arch support, and security baselines).
## Constraints
1. Out-of-the-box
- Configurations should work out-of-the-box with no extra steps (at most, provide a `.env` file).
2. Simple commands
- Each project ships a single `docker-compose.yaml` file.
- Command complexity should not exceed `docker compose up -d`; if more is needed, provide a `Makefile`.
- For initialization, prefer `healthcheck` with `depends_on` using `condition: service_healthy` to orchestrate startup order.
3. Stable versions
- Pin to the latest stable version instead of `latest`.
- Expose image versions via environment variables (e.g., `FOO_VERSION`).
4. Configuration conventions
- Prefer environment variables over complex CLI flags;
- Pass secrets via env vars or mounted files, never hardcode;
- Provide sensible defaults to enable zero-config startup;
- A commented `.env.example` is required;
- Env var naming: UPPER_SNAKE_CASE with service prefix (e.g., `POSTGRES_*`); use `*_PORT_OVERRIDE` for host port overrides.
5. Profiles
- Use Profiles for optional components/dependencies;
- Recommended names: `gpu` (acceleration), `metrics` (observability/exporters), `dev` (dev-only features).
6. Cross-platform & architectures
- Where images support it, ensure Debian 12+/Ubuntu 22.04+, Windows 10+, macOS 12+ work;
- Support x86-64 and ARM64 as consistently as possible;
- Avoid Linux-only host paths like `/etc/localtime` and `/etc/timezone`; prefer `TZ` env var for time zone.
7. Volumes & mounts
- Prefer relative paths for configuration to improve portability;
- Prefer named volumes for data directories to avoid permission/compat issues of host paths;
- If host paths are necessary, provide a top-level directory variable (e.g., `DATA_DIR`).
8. Resources & logging
- Always limit CPU and memory to prevent resource exhaustion;
- For GPU services, enable a single GPU by default via `deploy.resources.reservations.devices` (maps to device requests) or `gpus` where applicable;
- Limit logs (`json-file` driver: `max-size`/`max-file`).
9. Healthchecks
- Every service should define a `healthcheck` with suitable `interval`, `timeout`, `retries`, and `start_period`;
- Use `depends_on.condition: service_healthy` for dependency chains.
10. Security baseline (apply when possible)
- Run as non-root (expose `PUID`/`PGID` or set `user: "1000:1000"`);
- Read-only root filesystem (`read_only: true`), use `tmpfs`/writable mounts for required paths;
- Least privilege: `cap_drop: ["ALL"]`, add back only what's needed via `cap_add`;
- Avoid `container_name` (hurts scaling and reusable network aliases);
- If exposing Docker socket or other high-risk mounts, clearly document risks and alternatives.
11. Documentation & Discoverability
- Provide clear docs and examples (include admin/initialization notes, and security/license notes when relevant);
- Keep docs LLM-friendly;
- List primary env vars and default ports in the README, and link to `README.md` / `README.zh.md`.
Reference template: [`compose-template.yaml`](../../.compose-template.yaml) in the repo root.
To find image tags, try fetching a URL like `https://hub.docker.com/v2/repositories/library/nginx/tags?page_size=1&ordering=last_updated`.
After updating any services, update `/README.md` and `/README.zh.md` to reflect the changes.
## Final Checklist
1. Is `.env.example` present and fully commented in English?
2. Are CPU/Memory limits applied?
3. Is container_name removed?
4. Are healthcheck and service_healthy conditions correctly implemented?
5. Are the Chinese docs correctly punctuated with spaces between languages?
6. Have the root repository README files been updated to include the new service?
**注意**:所有中文的文档都使用中文的标点符号,如 “,”、“()” 等,中文和英文之间要留有空格。对于 Docker Compose 文件和 `.env.example` 文件中的注释部分,请使用英语而不是中文。请为每个服务提供英文说明 README.md 和中文说明 `README.zh.md`
+85
@@ -0,0 +1,85 @@
# Docker Compose Repository Guidelines
Compose Anything is a collection of production-ready, portable Docker Compose stacks. The default experience should remain simple: users should be able to enter a service directory and start it with `docker compose up -d`, while still getting sensible defaults for resource limits, health checks, security, and documentation.
## Primary Goals
1. Keep every stack easy to start and easy to understand.
2. Prefer portable Compose patterns that work across Windows, macOS, and Linux.
3. Default to production-aware settings instead of demo-only shortcuts.
4. Keep service documentation and root indexes accurate whenever a service changes.
## Required Workflow For Service Changes
1. Read the existing service folder before editing anything.
2. Use the repo root `.compose-template.yaml` as the structural reference when applicable.
3. Update these files together when a service changes: `docker-compose.yaml`, `.env.example`, `README.md`, and `README.zh.md`.
4. Update the root `README.md` and `README.zh.md` whenever a service is added, renamed, removed, or needs a new quick-start entry.
5. Keep the default startup path within `docker compose up -d`. If extra setup is unavoidable, document it clearly and prefer a `Makefile` over ad-hoc instructions.
## Compose Standards
1. Out-of-the-box startup
- A stack should work with zero extra steps, except optionally creating a `.env` file from `.env.example`.
- Defaults must be usable for local evaluation without forcing users to edit configuration first.
2. Command simplicity
- Each project should ship a single `docker-compose.yaml` file.
- Initialization order should use `healthcheck` plus `depends_on.condition: service_healthy` whenever a dependency chain exists.
3. Version pinning
- Pin to a stable image version instead of `latest` whenever a stable tag exists.
- Expose image versions via environment variables such as `REDIS_VERSION` or `POSTGRES_VERSION`.
4. Configuration style
- Prefer environment variables over long CLI flags.
- Never hardcode secrets.
- Provide a fully commented `.env.example` in English.
- Use UPPER_SNAKE_CASE names with a service prefix.
- Use `*_PORT_OVERRIDE` for host port overrides.
5. Profiles
- Use Compose profiles for optional components only.
- Preferred profile names: `gpu`, `metrics`, `dev`.
6. Cross-platform support
- Favor patterns that work on Debian 12+, Ubuntu 22.04+, Windows 10+, and macOS 12+ when upstream images support them.
- Support both x86-64 and ARM64 as consistently as practical.
- Avoid Linux-only host paths such as `/etc/localtime`; prefer `TZ`.
7. Storage and mounts
- Prefer named volumes for application data.
- Prefer relative paths for repo-managed configuration files.
- If host paths are necessary, expose a top-level directory variable such as `DATA_DIR`.
8. Resources and logging
- Every service must define CPU and memory limits.
- GPU services should default to one GPU via `deploy.resources.reservations.devices` or `gpus`.
- Limit container logs with the `json-file` driver and `max-size` / `max-file`.
9. Health checks
- Every long-running service should define a meaningful `healthcheck`.
- Tune `interval`, `timeout`, `retries`, and `start_period` for the actual startup profile of the service.
10. Security baseline
- Run as non-root when practical.
- Use `read_only: true` plus writable mounts or `tmpfs` where feasible.
- Default to `cap_drop: ["ALL"]` and add back only what is required.
- Do not use `container_name`.
- If a stack requires the Docker socket or another high-risk mount, document the risk and safer alternatives.
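As an illustration of how these standards combine, here is a minimal single-service sketch; the service name `myapp`, the `MYAPP_*` variables, and the healthcheck command are hypothetical placeholders, not taken from any stack in this repository:

```yaml
services:
  myapp:
    # Pin a stable tag and expose it via an env var (illustrative values).
    image: nginx:${MYAPP_VERSION:-1.27.4}
    ports:
      - "${MYAPP_PORT_OVERRIDE:-8080}:80"
    environment:
      - TZ=${TZ:-UTC}
    deploy:
      resources:
        limits:
          cpus: ${MYAPP_CPU_LIMIT:-0.5}
          memory: ${MYAPP_MEMORY_LIMIT:-256M}
        reservations:
          cpus: ${MYAPP_CPU_RESERVATION:-0.1}
          memory: ${MYAPP_MEMORY_RESERVATION:-128M}
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
    healthcheck:
      # The probe command depends on what the image ships; adjust per image.
      test: ["CMD-SHELL", "wget -qO- http://localhost:80/ || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s
    volumes:
      - myapp-data:/data

  # A dependent service waits for myapp to become healthy before starting.
  myapp-worker:
    image: nginx:${MYAPP_VERSION:-1.27.4}
    depends_on:
      myapp:
        condition: service_healthy

volumes:
  myapp-data:
```

Not every hardening option fits every image; for example, `read_only: true` and `cap_drop: ["ALL"]` from item 10 may require image-specific writable mounts or capabilities added back.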
## Documentation Standards
1. Every service must provide both `README.md` and `README.zh.md`.
2. Service READMEs should at minimum cover: purpose, services, quick start, key environment variables, storage, and security notes when relevant.
3. The root `README.md` and `README.zh.md` should remain useful as entry points, not just service indexes. Include concise quick-start guidance and at least one concrete example when it helps discovery.
4. List the main environment variables and default ports in the service README.
5. Keep documentation LLM-friendly: predictable headings, short paragraphs, and concrete command examples.
Reference template: `/.compose-template.yaml`
If you need image tags, check the Docker Hub API, for example:
`https://hub.docker.com/v2/repositories/library/nginx/tags?page_size=1&ordering=last_updated`
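A sketch of that lookup as a small script, assuming `curl` and `jq` are installed; `library/nginx` is just an example image path:

```shell
#!/bin/sh
# Compose the Docker Hub tags endpoint for a namespace/name image.
IMAGE="library/nginx"
URL="https://hub.docker.com/v2/repositories/${IMAGE}/tags?page_size=1&ordering=last_updated"

# Print the name of the most recently updated tag.
curl -fsSL "$URL" | jq -r '.results[0].name'
```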
## Final Checklist
1. Is `.env.example` present and fully commented in English?
2. Are CPU and memory limits defined?
3. Has `container_name` been avoided or removed?
4. Are `healthcheck` and `depends_on.condition: service_healthy` used correctly?
5. Are `README.md` and `README.zh.md` both updated for the service?
6. Are the root `README.md` and `README.zh.md` updated if discoverability changed?
7. Are Chinese docs using Chinese punctuation, with spaces between Chinese and English terms?
**注意**:所有中文文档都使用中文标点,如 “,”、“()” 等,中文与英文之间保留空格。Docker Compose 文件和 `.env.example` 文件中的注释必须使用英文。每个服务都必须提供英文 `README.md` 和中文 `README.zh.md`。
+45 -7
@@ -4,20 +4,45 @@
 Compose Anything helps users quickly deploy various services by providing a set of high-quality Docker Compose configuration files. These configurations constrain resource usage, can be easily migrated to systems like K8S, and are easy to understand and modify.
+## Quick Start
+Choose a service directory, then start it with Docker Compose:
+```bash
+git clone https://github.com/Sun-ZhenXing/compose-anything.git
+cd src/<service>
+docker compose up -d
+```
+Most stacks are designed to run with the default settings. Use `.env.example` as a reference, and only create a `.env` file when you need to override ports, passwords, or image versions.
+### Example: Start Redis
+```bash
+cd src/redis
+docker compose up -d
+docker compose exec redis redis-cli ping
+```
+If the stack is healthy, the final command returns `PONG`. By default, Redis is exposed on `localhost:6379`. For authentication, custom ports, or image changes, see [src/redis](./src/redis).
 ## Build Services
 These services require building custom Docker images from source.
 | Service | Version |
 | ------------------------------------------- | ------- |
+| [CubeSandbox](./builds/cube-sandbox) | 0.1.7 |
 | [Debian DinD](./builds/debian-dind) | 0.1.2 |
 | [DeerFlow](./builds/deer-flow) | 2.0 |
 | [goose](./builds/goose) | 1.18.0 |
 | [IOPaint](./builds/io-paint) | 1.6.0 |
 | [K3s inside DinD](./builds/k3s-inside-dind) | 0.2.2 |
-| [MinerU vLLM](./builds/mineru) | 2.7.6 |
+| [MinerU vLLM](./builds/mineru) | 3.1.0 |
+| [Multica](./builds/multica) | v0.1.32 |
 | [OpenFang](./builds/openfang) | 0.1.0 |
 | [Paperclip](./builds/paperclip) | main |
+| [TurboOCR](./builds/turboocr) | v2.1.1 |
 ## Supported Services
@@ -32,6 +57,7 @@ These services require building custom Docker images from source.
 | [Apache Pulsar](./src/pulsar) | 4.0.7 |
 | [Apache RocketMQ](./src/rocketmq) | 5.3.1 |
 | [Agentgateway](./src/agentgateway) | 0.11.2 |
+| [AnythingLLM](./src/anythingllm) | latest |
 | [Bifrost Gateway](./src/bifrost-gateway) | v1.4.17 |
 | [Bolt.diy](./apps/bolt-diy) | latest |
 | [Budibase](./src/budibase) | 3.23.0 |
@@ -50,17 +76,19 @@ These services require building custom Docker images from source.
 | [Doris](./src/doris) | 3.0.0 |
 | [DuckDB](./src/duckdb) | v1.1.3 |
 | [Easy Dataset](./apps/easy-dataset) | 1.5.1 |
+| [EasyTier](./src/easytier) | v2.6.0 |
 | [Elasticsearch](./src/elasticsearch) | 9.3.0 |
 | [etcd](./src/etcd) | 3.6.0 |
 | [FalkorDB](./src/falkordb) | v4.14.11 |
 | [Firecrawl](./src/firecrawl) | latest |
 | [Flowise](./src/flowise) | 3.0.12 |
-| [frpc](./src/frpc) | 0.65.0 |
-| [frps](./src/frps) | 0.65.0 |
+| [frpc](./src/frpc) | 0.68.1 |
+| [frps](./src/frps) | 0.68.1 |
 | [Gitea Runner](./src/gitea-runner) | 0.2.13 |
 | [Gitea](./src/gitea) | 1.25.4-rootless |
 | [GitLab Runner](./src/gitlab-runner) | 17.10.1 |
 | [GitLab](./src/gitlab) | 18.8.3-ce.0 |
+| [GoModel](./src/gomodel) | v0.1.27 |
 | [GPUStack](./src/gpustack) | v0.5.3 |
 | [Grafana](./src/grafana) | 12.3.2 |
 | [Grafana Loki](./src/loki) | 3.3.2 |
@@ -77,13 +105,17 @@ These services require building custom Docker images from source.
 | [Kodbox](./src/kodbox) | 1.62 |
 | [Kong](./src/kong) | 3.8.0 |
 | [Langflow](./apps/langflow) | latest |
+| [Laminar](./src/laminar) | latest |
 | [Langfuse](./apps/langfuse) | 3.115.0 |
+| [Letta](./src/letta) | 0.16.7 |
+| [LibreChat](./apps/librechat) | v0.8.4 |
 | [LibreOffice](./src/libreoffice) | latest |
 | [libSQL Server](./src/libsql) | latest |
 | [LiteLLM](./src/litellm) | main-stable |
 | [llama-swap](./src/llama-swap) | cpu |
-| [llama.cpp](./src/llama.cpp) | server |
+| [llama.cpp](./src/llama-cpp) | server |
 | [LMDeploy](./src/lmdeploy) | v0.11.1 |
+| [LobeChat](./src/lobe-chat) | 1.143.3 |
 | [Logstash](./src/logstash) | 8.16.1 |
 | [MariaDB Galera Cluster](./src/mariadb-galera) | 11.7.2 |
 | [Mattermost](./apps/mattermost) | 11.3 |
@@ -93,7 +125,7 @@ These services require building custom Docker images from source.
 | [Minecraft Bedrock Server](./src/minecraft-bedrock-server) | latest |
 | [MinIO](./src/minio) | 0.20260202 |
 | [MLflow](./src/mlflow) | v2.20.2 |
-| [MoltBot](./apps/moltbot) | main |
+| [OpenClaw](./apps/openclaw) | 2026.2.3 |
 | [MongoDB ReplicaSet Single](./src/mongodb-replicaset-single) | 8.2.3 |
 | [MongoDB ReplicaSet](./src/mongodb-replicaset) | 8.2.3 |
 | [MongoDB Standalone](./src/mongodb-standalone) | 8.2.3 |
@@ -111,7 +143,8 @@ These services require building custom Docker images from source.
 | [Odoo](./src/odoo) | 19.0 |
 | [Ollama](./src/ollama) | 0.14.3 |
 | [Open WebUI](./src/open-webui) | main |
-| [Phoenix (Arize)](./src/phoenix) | 13.19.2 |
+| [Phoenix (Arize)](./src/phoenix) | 15.5.0 |
+| [Pingap](./src/pingap) | 0.12.7-full |
 | [Pingora Proxy Manager](./src/pingora-proxy-manager) | v1.0.3 |
 | [Open WebUI Rust](./src/open-webui-rust) | latest |
 | [OpenCode](./src/opencode) | 1.1.27 |
@@ -136,6 +169,7 @@ These services require building custom Docker images from source.
 | [PyTorch](./src/pytorch) | 2.6.0 |
 | [Qdrant](./src/qdrant) | 1.15.4 |
 | [RabbitMQ](./src/rabbitmq) | 4.2.3 |
+| [RAGFlow](./apps/ragflow) | v0.24.0 |
 | [Ray](./src/ray) | 2.42.1 |
 | [Redpanda](./src/redpanda) | v24.3.1 |
 | [Redis Cluster](./src/redis-cluster) | 8.2.1 |
@@ -145,15 +179,19 @@ These services require building custom Docker images from source.
 | [Restate](./src/restate) | 1.5.3 |
 | [SearXNG](./src/searxng) | 2025.1.20-1ce14ef99 |
 | [Selenium](./src/selenium) | 144.0-20260120 |
+| [Shannon](./apps/shannon) | v0.3.1 |
 | [SigNoz](./src/signoz) | 0.55.0 |
 | [Sim](./apps/sim) | latest |
+| [Skyvern](./apps/skyvern) | v1.0.31 |
 | [Stable Diffusion WebUI](./apps/stable-diffusion-webui-docker) | latest |
 | [Stirling-PDF](./apps/stirling-pdf) | latest |
+| [Sub2API](./src/sub2api) | 0.1.124 |
 | [Temporal](./src/temporal) | 1.24.2 |
 | [TiDB](./src/tidb) | v8.5.0 |
 | [TiKV](./src/tikv) | v8.5.0 |
 | [Trigger.dev](./src/trigger-dev) | v4.2.0 |
 | [TrailBase](./src/trailbase) | 0.22.4 |
+| [TurboOCR](./src/turboocr) | v2.1.1 |
 | [Valkey Cluster](./src/valkey-cluster) | 8.0 |
 | [Valkey](./src/valkey) | 8.0 |
 | [Verdaccio](./src/verdaccio) | 6.1.2 |
@@ -189,7 +227,7 @@ These services require building custom Docker images from source.
 | [OpenWeather](./mcp-servers/openweather) | latest |
 | [Paper Search](./mcp-servers/paper-search) | latest |
 | [Playwright](./mcp-servers/playwright) | latest |
-| [Redis MCP](./mcp-servers/redis-mcp) | latest |
+| [Redis MCP](./mcp-servers/redis) | latest |
 | [Rust Filesystem](./mcp-servers/rust-mcp-filesystem) | latest |
 | [Sequential Thinking](./mcp-servers/sequentialthinking) | latest |
 | [SQLite](./mcp-servers/sqlite) | latest |
+46 -8
@@ -4,20 +4,45 @@
 Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,帮助用户快速部署各种服务。这些配置约束了资源使用,可快速迁移到 K8S 等系统,并且易于理解和修改。
+## 快速开始
+先进入目标服务目录,再使用 Docker Compose 启动:
+```bash
+git clone https://github.com/Sun-ZhenXing/compose-anything.git
+cd src/<service>
+docker compose up -d
+```
+大多数配置都可以直接使用默认值启动。`.env.example` 用于说明可选配置项;只有在你需要覆盖端口、密码或镜像版本时,才需要额外创建 `.env` 文件。
+### 示例:快速启动 Redis
+```bash
+cd src/redis
+docker compose up -d
+docker compose exec redis redis-cli ping
+```
+如果服务正常,最后一条命令会返回 `PONG`。默认情况下,Redis 会暴露在 `localhost:6379`。如果需要认证、自定义端口或调整镜像版本,请查看 [src/redis](./src/redis)。
 ## 构建服务
 这些服务需要从源代码构建自定义 Docker 镜像。
 | 服务 | 版本 |
-| ------------------------------------------- | ------ |
+| ------------------------------------------- | ------- |
+| [CubeSandbox](./builds/cube-sandbox) | 0.1.7 |
 | [Debian DinD](./builds/debian-dind) | 0.1.2 |
 | [DeerFlow](./builds/deer-flow) | 2.0 |
 | [goose](./builds/goose) | 1.18.0 |
 | [IOPaint](./builds/io-paint) | 1.6.0 |
 | [K3s inside DinD](./builds/k3s-inside-dind) | 0.2.2 |
-| [MinerU vLLM](./builds/mineru) | 2.7.6 |
+| [MinerU vLLM](./builds/mineru) | 3.1.0 |
+| [Multica](./builds/multica) | v0.1.32 |
 | [OpenFang](./builds/openfang) | 0.1.0 |
 | [Paperclip](./builds/paperclip) | main |
+| [TurboOCR](./builds/turboocr) | v2.1.1 |
 ## 已经支持的服务
@@ -32,6 +57,7 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
 | [Apache Pulsar](./src/pulsar) | 4.0.7 |
 | [Apache RocketMQ](./src/rocketmq) | 5.3.1 |
 | [Agentgateway](./src/agentgateway) | 0.11.2 |
+| [AnythingLLM](./src/anythingllm) | latest |
 | [Bifrost Gateway](./src/bifrost-gateway) | v1.4.17 |
 | [Bolt.diy](./apps/bolt-diy) | latest |
 | [Budibase](./src/budibase) | 3.23.0 |
@@ -50,17 +76,19 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
 | [Doris](./src/doris) | 3.0.0 |
 | [DuckDB](./src/duckdb) | v1.1.3 |
 | [Easy Dataset](./apps/easy-dataset) | 1.5.1 |
+| [EasyTier](./src/easytier) | v2.6.0 |
 | [Elasticsearch](./src/elasticsearch) | 9.3.0 |
 | [etcd](./src/etcd) | 3.6.0 |
 | [FalkorDB](./src/falkordb) | v4.14.11 |
 | [Firecrawl](./src/firecrawl) | latest |
 | [Flowise](./src/flowise) | 3.0.12 |
-| [frpc](./src/frpc) | 0.65.0 |
-| [frps](./src/frps) | 0.65.0 |
+| [frpc](./src/frpc) | 0.68.1 |
+| [frps](./src/frps) | 0.68.1 |
 | [Gitea Runner](./src/gitea-runner) | 0.2.13 |
 | [Gitea](./src/gitea) | 1.25.4-rootless |
 | [GitLab Runner](./src/gitlab-runner) | 17.10.1 |
 | [GitLab](./src/gitlab) | 18.8.3-ce.0 |
+| [GoModel](./src/gomodel) | v0.1.27 |
 | [GPUStack](./src/gpustack) | v0.5.3 |
 | [Grafana](./src/grafana) | 12.3.2 |
 | [Grafana Loki](./src/loki) | 3.3.2 |
@@ -77,13 +105,17 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
 | [Kodbox](./src/kodbox) | 1.62 |
 | [Kong](./src/kong) | 3.8.0 |
 | [Langflow](./apps/langflow) | latest |
+| [Laminar](./src/laminar) | latest |
 | [Langfuse](./apps/langfuse) | 3.115.0 |
+| [Letta](./src/letta) | 0.16.7 |
+| [LibreChat](./apps/librechat) | v0.8.4 |
 | [LibreOffice](./src/libreoffice) | latest |
 | [libSQL Server](./src/libsql) | latest |
 | [LiteLLM](./src/litellm) | main-stable |
 | [llama-swap](./src/llama-swap) | cpu |
-| [llama.cpp](./src/llama.cpp) | server |
+| [llama.cpp](./src/llama-cpp) | server |
 | [LMDeploy](./src/lmdeploy) | v0.11.1 |
+| [LobeChat](./src/lobe-chat) | 1.143.3 |
 | [Logstash](./src/logstash) | 8.16.1 |
 | [MariaDB Galera Cluster](./src/mariadb-galera) | 11.7.2 |
 | [Mattermost](./apps/mattermost) | 11.3 |
@@ -93,7 +125,7 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
 | [Minecraft Bedrock Server](./src/minecraft-bedrock-server) | latest |
 | [MinIO](./src/minio) | 0.20260202 |
 | [MLflow](./src/mlflow) | v2.20.2 |
-| [MoltBot](./apps/moltbot) | main |
+| [OpenClaw](./apps/openclaw) | 2026.2.3 |
 | [MongoDB ReplicaSet Single](./src/mongodb-replicaset-single) | 8.2.3 |
 | [MongoDB ReplicaSet](./src/mongodb-replicaset) | 8.2.3 |
 | [MongoDB Standalone](./src/mongodb-standalone) | 8.2.3 |
@@ -111,7 +143,8 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
 | [Odoo](./src/odoo) | 19.0 |
 | [Ollama](./src/ollama) | 0.14.3 |
 | [Open WebUI](./src/open-webui) | main |
-| [Phoenix (Arize)](./src/phoenix) | 13.19.2 |
+| [Phoenix (Arize)](./src/phoenix) | 15.5.0 |
+| [Pingap](./src/pingap) | 0.12.7-full |
 | [Pingora Proxy Manager](./src/pingora-proxy-manager) | v1.0.3 |
 | [Open WebUI Rust](./src/open-webui-rust) | latest |
 | [OpenCode](./src/opencode) | 1.1.27 |
@@ -136,6 +169,7 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
 | [PyTorch](./src/pytorch) | 2.6.0 |
 | [Qdrant](./src/qdrant) | 1.15.4 |
 | [RabbitMQ](./src/rabbitmq) | 4.2.3 |
+| [RAGFlow](./apps/ragflow) | v0.24.0 |
 | [Ray](./src/ray) | 2.42.1 |
 | [Redpanda](./src/redpanda) | v24.3.1 |
 | [Redis Cluster](./src/redis-cluster) | 8.2.1 |
@@ -145,15 +179,19 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
 | [Restate](./src/restate) | 1.5.3 |
 | [SearXNG](./src/searxng) | 2025.1.20-1ce14ef99 |
 | [Selenium](./src/selenium) | 144.0-20260120 |
+| [Shannon](./apps/shannon) | v0.3.1 |
 | [SigNoz](./src/signoz) | 0.55.0 |
 | [Sim](./apps/sim) | latest |
+| [Skyvern](./apps/skyvern) | v1.0.31 |
 | [Stable Diffusion WebUI](./apps/stable-diffusion-webui-docker) | latest |
 | [Stirling-PDF](./apps/stirling-pdf) | latest |
+| [Sub2API](./src/sub2api) | 0.1.124 |
 | [Temporal](./src/temporal) | 1.24.2 |
 | [TiDB](./src/tidb) | v8.5.0 |
 | [TiKV](./src/tikv) | v8.5.0 |
 | [Trigger.dev](./src/trigger-dev) | v4.2.0 |
 | [TrailBase](./src/trailbase) | 0.22.4 |
+| [TurboOCR](./src/turboocr) | v2.1.1 |
 | [Valkey Cluster](./src/valkey-cluster) | 8.0 |
 | [Valkey](./src/valkey) | 8.0 |
 | [Verdaccio](./src/verdaccio) | 6.1.2 |
@@ -189,7 +227,7 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
 | [OpenWeather](./mcp-servers/openweather) | latest |
 | [Paper Search](./mcp-servers/paper-search) | latest |
 | [Playwright](./mcp-servers/playwright) | latest |
-| [Redis MCP](./mcp-servers/redis-mcp) | latest |
+| [Redis MCP](./mcp-servers/redis) | latest |
 | [Rust Filesystem](./mcp-servers/rust-mcp-filesystem) | latest |
 | [Sequential Thinking](./mcp-servers/sequentialthinking) | latest |
 | [SQLite](./mcp-servers/sqlite) | latest |
+2 -2
@@ -19,10 +19,10 @@ services:
 deploy:
   resources:
     limits:
-      cpus: ${BOLT_DIY_CPU_LIMIT:-2.00}
+      cpus: ${BOLT_DIY_CPU_LIMIT:-2.0}
       memory: ${BOLT_DIY_MEMORY_LIMIT:-2G}
     reservations:
-      cpus: ${BOLT_DIY_CPU_RESERVATION:-0.5}
+      cpus: ${BOLT_DIY_CPU_RESERVATION:-0.1}
       memory: ${BOLT_DIY_MEMORY_RESERVATION:-512M}
 healthcheck:
   test:
+4 -4
@@ -26,9 +26,9 @@ REDIS_PASSWORD=
 REDIS_PORT_OVERRIDE=6379
 # Redis resource limits
-REDIS_CPU_LIMIT=0.25
+REDIS_CPU_LIMIT=0.3
 REDIS_MEMORY_LIMIT=256M
-REDIS_CPU_RESERVATION=0.10
+REDIS_CPU_RESERVATION=0.1
 REDIS_MEMORY_RESERVATION=128M
 # ===========================
@@ -48,7 +48,7 @@ POSTGRES_PORT_OVERRIDE=5432
 # PostgreSQL resource limits
 POSTGRES_CPU_LIMIT=1.0
 POSTGRES_MEMORY_LIMIT=512M
-POSTGRES_CPU_RESERVATION=0.25
+POSTGRES_CPU_RESERVATION=0.1
 POSTGRES_MEMORY_RESERVATION=256M
 # ===========================
@@ -72,5 +72,5 @@ NPM_REGISTRY_URL=
 # BuildingAI resource limits
 BUILDINGAI_CPU_LIMIT=2.0
 BUILDINGAI_MEMORY_LIMIT=3584M
-BUILDINGAI_CPU_RESERVATION=0.5
+BUILDINGAI_CPU_RESERVATION=0.1
 BUILDINGAI_MEMORY_RESERVATION=512M
+4 -4
@@ -38,10 +38,10 @@ services:
     deploy:
       resources:
         limits:
-          cpus: ${REDIS_CPU_LIMIT:-0.25}
+          cpus: ${REDIS_CPU_LIMIT:-0.3}
           memory: ${REDIS_MEMORY_LIMIT:-256M}
         reservations:
-          cpus: ${REDIS_CPU_RESERVATION:-0.10}
+          cpus: ${REDIS_CPU_RESERVATION:-0.1}
           memory: ${REDIS_MEMORY_RESERVATION:-128M}
   postgres:
@@ -69,7 +69,7 @@ services:
           cpus: ${POSTGRES_CPU_LIMIT:-1.0}
           memory: ${POSTGRES_MEMORY_LIMIT:-512M}
         reservations:
-          cpus: ${POSTGRES_CPU_RESERVATION:-0.25}
+          cpus: ${POSTGRES_CPU_RESERVATION:-0.1}
           memory: ${POSTGRES_MEMORY_RESERVATION:-256M}
   buildingai:
@@ -108,7 +108,7 @@ services:
           cpus: ${BUILDINGAI_CPU_LIMIT:-2.0}
           memory: ${BUILDINGAI_MEMORY_LIMIT:-3584M}
         reservations:
-          cpus: ${BUILDINGAI_CPU_RESERVATION:-0.5}
+          cpus: ${BUILDINGAI_CPU_RESERVATION:-0.1}
           memory: ${BUILDINGAI_MEMORY_RESERVATION:-512M}
 volumes:
+2 -2
@@ -89,8 +89,8 @@ DASHSCOPE_API_KEY=
 #! ==================================================
 # CPU limits (default: 4.00 cores limit, 1.00 cores reservation)
-DEEPTUTOR_CPU_LIMIT=4.00
-DEEPTUTOR_CPU_RESERVATION=1.00
+DEEPTUTOR_CPU_LIMIT=4.0
+DEEPTUTOR_CPU_RESERVATION=0.1
 # Memory limits (default: 8G limit, 2G reservation)
 DEEPTUTOR_MEMORY_LIMIT=8G
+2 -2
@@ -57,10 +57,10 @@ services:
     deploy:
       resources:
         limits:
-          cpus: ${DEEPTUTOR_CPU_LIMIT:-4.00}
+          cpus: ${DEEPTUTOR_CPU_LIMIT:-4.0}
           memory: ${DEEPTUTOR_MEMORY_LIMIT:-8G}
         reservations:
-          cpus: ${DEEPTUTOR_CPU_RESERVATION:-1.00}
+          cpus: ${DEEPTUTOR_CPU_RESERVATION:-0.1}
           memory: ${DEEPTUTOR_MEMORY_RESERVATION:-2G}
 volumes:
+6 -6
@@ -37,7 +37,7 @@ services:
           cpus: ${DIFY_API_CPU_LIMIT:-1.0}
           memory: ${DIFY_API_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: ${DIFY_API_CPU_RESERVATION:-0.5}
+          cpus: ${DIFY_API_CPU_RESERVATION:-0.1}
           memory: ${DIFY_API_MEMORY_RESERVATION:-1G}
     healthcheck:
       test:
@@ -83,7 +83,7 @@ services:
           cpus: ${DIFY_WORKER_CPU_LIMIT:-1.0}
           memory: ${DIFY_WORKER_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: ${DIFY_WORKER_CPU_RESERVATION:-0.5}
+          cpus: ${DIFY_WORKER_CPU_RESERVATION:-0.1}
           memory: ${DIFY_WORKER_MEMORY_RESERVATION:-1G}
   dify-web:
@@ -104,7 +104,7 @@ services:
           cpus: ${DIFY_WEB_CPU_LIMIT:-0.5}
           memory: ${DIFY_WEB_MEMORY_LIMIT:-512M}
         reservations:
-          cpus: ${DIFY_WEB_CPU_RESERVATION:-0.25}
+          cpus: ${DIFY_WEB_CPU_RESERVATION:-0.1}
           memory: ${DIFY_WEB_MEMORY_RESERVATION:-256M}
   dify-db:
@@ -124,7 +124,7 @@ services:
           cpus: ${DIFY_DB_CPU_LIMIT:-0.5}
           memory: ${DIFY_DB_MEMORY_LIMIT:-512M}
         reservations:
-          cpus: ${DIFY_DB_CPU_RESERVATION:-0.25}
+          cpus: ${DIFY_DB_CPU_RESERVATION:-0.1}
           memory: ${DIFY_DB_MEMORY_RESERVATION:-256M}
     healthcheck:
       test: [CMD-SHELL, pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB]
@@ -144,7 +144,7 @@ services:
     deploy:
       resources:
         limits:
-          cpus: ${DIFY_REDIS_CPU_LIMIT:-0.25}
+          cpus: ${DIFY_REDIS_CPU_LIMIT:-0.3}
           memory: ${DIFY_REDIS_MEMORY_LIMIT:-256M}
         reservations:
           cpus: ${DIFY_REDIS_CPU_RESERVATION:-0.1}
@@ -176,7 +176,7 @@ services:
           cpus: ${DIFY_WEAVIATE_CPU_LIMIT:-0.5}
           memory: ${DIFY_WEAVIATE_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: ${DIFY_WEAVIATE_CPU_RESERVATION:-0.25}
+          cpus: ${DIFY_WEAVIATE_CPU_RESERVATION:-0.1}
           memory: ${DIFY_WEAVIATE_MEMORY_RESERVATION:-512M}
     healthcheck:
       test:
+1 -1
@@ -26,7 +26,7 @@ services:
           cpus: ${EASY_DATASET_CPU_LIMIT:-2.0}
           memory: ${EASY_DATASET_MEMORY_LIMIT:-4G}
         reservations:
-          cpus: ${EASY_DATASET_CPU_RESERVATION:-0.5}
+          cpus: ${EASY_DATASET_CPU_RESERVATION:-0.1}
           memory: ${EASY_DATASET_MEMORY_RESERVATION:-1G}
     healthcheck:
       test: [CMD, wget, --no-verbose, --tries=1, --spider, 'http://localhost:1717']
+2 -2
@@ -53,13 +53,13 @@ DO_NOT_TRACK=false
 # Resource Limits - Langflow
 LANGFLOW_CPU_LIMIT=2.0
 LANGFLOW_MEMORY_LIMIT=2G
-LANGFLOW_CPU_RESERVATION=0.5
+LANGFLOW_CPU_RESERVATION=0.1
 LANGFLOW_MEMORY_RESERVATION=512M
 # Resource Limits - PostgreSQL
 POSTGRES_CPU_LIMIT=1.0
 POSTGRES_MEMORY_LIMIT=1G
-POSTGRES_CPU_RESERVATION=0.25
+POSTGRES_CPU_RESERVATION=0.1
 POSTGRES_MEMORY_RESERVATION=256M
 # Logging Configuration
+2 -2
@@ -93,7 +93,7 @@ services:
           cpus: '${LANGFLOW_CPU_LIMIT:-2.0}'
           memory: '${LANGFLOW_MEMORY_LIMIT:-2G}'
         reservations:
-          cpus: '${LANGFLOW_CPU_RESERVATION:-0.5}'
+          cpus: '${LANGFLOW_CPU_RESERVATION:-0.1}'
           memory: '${LANGFLOW_MEMORY_RESERVATION:-512M}'
   postgres:
@@ -119,7 +119,7 @@ services:
           cpus: '${POSTGRES_CPU_LIMIT:-1.0}'
           memory: '${POSTGRES_MEMORY_LIMIT:-1G}'
         reservations:
-          cpus: '${POSTGRES_CPU_RESERVATION:-0.25}'
+          cpus: '${POSTGRES_CPU_RESERVATION:-0.1}'
           memory: '${POSTGRES_MEMORY_RESERVATION:-256M}'
 volumes:
+6 -6
@@ -102,35 +102,35 @@ LANGFUSE_INIT_USER_PASSWORD=
 # Resource Limits - Langfuse Worker
 LANGFUSE_WORKER_CPU_LIMIT=2.0
 LANGFUSE_WORKER_MEMORY_LIMIT=2G
-LANGFUSE_WORKER_CPU_RESERVATION=0.5
+LANGFUSE_WORKER_CPU_RESERVATION=0.1
 LANGFUSE_WORKER_MEMORY_RESERVATION=512M
 # Resource Limits - Langfuse Web
 LANGFUSE_WEB_CPU_LIMIT=2.0
 LANGFUSE_WEB_MEMORY_LIMIT=2G
-LANGFUSE_WEB_CPU_RESERVATION=0.5
+LANGFUSE_WEB_CPU_RESERVATION=0.1
 LANGFUSE_WEB_MEMORY_RESERVATION=512M
 # Resource Limits - ClickHouse
 CLICKHOUSE_CPU_LIMIT=2.0
 CLICKHOUSE_MEMORY_LIMIT=4G
-CLICKHOUSE_CPU_RESERVATION=0.5
+CLICKHOUSE_CPU_RESERVATION=0.1
 CLICKHOUSE_MEMORY_RESERVATION=1G
 # Resource Limits - MinIO
 MINIO_CPU_LIMIT=1.0
 MINIO_MEMORY_LIMIT=1G
-MINIO_CPU_RESERVATION=0.25
+MINIO_CPU_RESERVATION=0.1
 MINIO_MEMORY_RESERVATION=256M
 # Resource Limits - Redis
 REDIS_CPU_LIMIT=1.0
 REDIS_MEMORY_LIMIT=512M
-REDIS_CPU_RESERVATION=0.25
+REDIS_CPU_RESERVATION=0.1
 REDIS_MEMORY_RESERVATION=256M
 # Resource Limits - PostgreSQL
 POSTGRES_CPU_LIMIT=2.0
 POSTGRES_MEMORY_LIMIT=2G
-POSTGRES_CPU_RESERVATION=0.5
+POSTGRES_CPU_RESERVATION=0.1
 POSTGRES_MEMORY_RESERVATION=512M
+6 -6
@@ -85,7 +85,7 @@ services:
           cpus: ${LANGFUSE_WORKER_CPU_LIMIT:-2.0}
           memory: ${LANGFUSE_WORKER_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: ${LANGFUSE_WORKER_CPU_RESERVATION:-0.5}
+          cpus: ${LANGFUSE_WORKER_CPU_RESERVATION:-0.1}
           memory: ${LANGFUSE_WORKER_MEMORY_RESERVATION:-512M}
     healthcheck:
       test:
@@ -126,7 +126,7 @@ services:
           cpus: ${LANGFUSE_WEB_CPU_LIMIT:-2.0}
           memory: ${LANGFUSE_WEB_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: ${LANGFUSE_WEB_CPU_RESERVATION:-0.5}
+          cpus: ${LANGFUSE_WEB_CPU_RESERVATION:-0.1}
           memory: ${LANGFUSE_WEB_MEMORY_RESERVATION:-512M}
     healthcheck:
       test:
@@ -171,7 +171,7 @@ services:
           cpus: ${CLICKHOUSE_CPU_LIMIT:-2.0}
           memory: ${CLICKHOUSE_MEMORY_LIMIT:-4G}
         reservations:
-          cpus: ${CLICKHOUSE_CPU_RESERVATION:-0.5}
+          cpus: ${CLICKHOUSE_CPU_RESERVATION:-0.1}
           memory: ${CLICKHOUSE_MEMORY_RESERVATION:-1G}
   minio:
@@ -200,7 +200,7 @@ services:
           cpus: ${MINIO_CPU_LIMIT:-1.0}
           memory: ${MINIO_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: ${MINIO_CPU_RESERVATION:-0.25}
+          cpus: ${MINIO_CPU_RESERVATION:-0.1}
           memory: ${MINIO_MEMORY_RESERVATION:-256M}
   redis:
@@ -222,7 +222,7 @@ services:
           cpus: ${REDIS_CPU_LIMIT:-1.0}
           memory: ${REDIS_MEMORY_LIMIT:-512M}
         reservations:
-          cpus: ${REDIS_CPU_RESERVATION:-0.25}
+          cpus: ${REDIS_CPU_RESERVATION:-0.1}
           memory: ${REDIS_MEMORY_RESERVATION:-256M}
   postgres:
@@ -249,7 +249,7 @@ services:
           cpus: ${POSTGRES_CPU_LIMIT:-2.0}
           memory: ${POSTGRES_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: ${POSTGRES_CPU_RESERVATION:-0.5}
+          cpus: ${POSTGRES_CPU_RESERVATION:-0.1}
           memory: ${POSTGRES_MEMORY_RESERVATION:-512M}
 volumes:
+50
@@ -0,0 +1,50 @@
# Global Registry Prefix (optional)
# GLOBAL_REGISTRY=
# Service Versions
LIBRECHAT_VERSION=v0.8.4
MONGODB_VERSION=8.0
MEILISEARCH_VERSION=v1.12.8
# Timezone
TZ=UTC
# Host port for the LibreChat web UI
LIBRECHAT_PORT_OVERRIDE=3080
# Security Secrets (CHANGEME: generate with: openssl rand -hex 32)
JWT_SECRET=changeme_jwt_secret_please_change_CHANGEME
JWT_REFRESH_SECRET=changeme_jwt_refresh_secret_CHANGEME
MEILI_MASTER_KEY=changeme_meili_master_key_CHANGEME
# Encryption Keys
# CREDS_KEY must be exactly 32 characters
CREDS_KEY=changeme_creds_key_32_chars_only
# CREDS_IV must be exactly 16 characters
CREDS_IV=changeme_iv_16ch
# Registration
ALLOW_REGISTRATION=true
ALLOW_SOCIAL_LOGIN=false
# LLM Provider API Keys (optional; configure via UI or here)
# OPENAI_API_KEY=sk-...
# ANTHROPIC_API_KEY=sk-ant-...
# Resource Limits - LibreChat
LIBRECHAT_CPU_LIMIT=2.0
LIBRECHAT_MEMORY_LIMIT=2G
LIBRECHAT_CPU_RESERVATION=0.1
LIBRECHAT_MEMORY_RESERVATION=512M
# Resource Limits - MongoDB
MONGODB_CPU_LIMIT=1.0
MONGODB_MEMORY_LIMIT=1G
MONGODB_CPU_RESERVATION=0.1
MONGODB_MEMORY_RESERVATION=256M
# Resource Limits - Meilisearch
MEILISEARCH_CPU_LIMIT=0.5
MEILISEARCH_MEMORY_LIMIT=512M
MEILISEARCH_CPU_RESERVATION=0.1
MEILISEARCH_MEMORY_RESERVATION=128M
+82
@@ -0,0 +1,82 @@
# LibreChat
[English](./README.md) | [中文](./README.zh.md)
Quick start: <https://docs.librechat.ai>.
This service deploys LibreChat, an open-source AI chat platform that supports OpenAI, Anthropic, Google, Ollama, and many other providers in a single unified interface with conversation history, file uploads, code execution, and multi-user support.
## Services
- **librechat**: The LibreChat web application (Node.js).
- **mongodb**: MongoDB database for conversation and user data.
- **meilisearch**: Full-text search engine for message indexing.
## Quick Start
1. Copy `.env.example` to `.env`:
```bash
cp .env.example .env
```
2. Update the secrets in `.env` (generate with `openssl rand -hex 32`):
```
JWT_SECRET, JWT_REFRESH_SECRET, MEILI_MASTER_KEY, CREDS_KEY, CREDS_IV
```
3. Start the services:
```bash
docker compose up -d
```
4. Open `http://localhost:3080` and register the first user account.
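The length rules above (hex secrets of at least 32 chars, `CREDS_KEY` exactly 32, `CREDS_IV` exactly 16) can all be satisfied with plain `openssl`; here is a minimal sketch — the variable names simply mirror the `.env` keys, and writing the values into `.env` is left to you:

```shell
# `openssl rand -hex N` prints 2*N hex characters.
JWT_SECRET=$(openssl rand -hex 32)          # 64 hex chars
JWT_REFRESH_SECRET=$(openssl rand -hex 32)
MEILI_MASTER_KEY=$(openssl rand -hex 32)
CREDS_KEY=$(openssl rand -hex 16)           # exactly 32 characters
CREDS_IV=$(openssl rand -hex 8)             # exactly 16 characters
printf '%s\n' "CREDS_KEY length: ${#CREDS_KEY}" "CREDS_IV length: ${#CREDS_IV}"
```

Append the resulting values to `.env` before the first `docker compose up`, since `CREDS_KEY`/`CREDS_IV` cannot be rotated without losing stored credentials.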
## Core Environment Variables
| Variable | Description | Default |
| --------------------- | -------------------------------------------------------- | ---------------------------- |
| `LIBRECHAT_VERSION` | Image version | `v0.8.4` |
| `LIBRECHAT_PORT_OVERRIDE` | Host port for the web UI | `3080` |
| `JWT_SECRET` | JWT signing secret (min 32 chars) — **CHANGEME** | placeholder |
| `JWT_REFRESH_SECRET` | JWT refresh signing secret — **CHANGEME** | placeholder |
| `MEILI_MASTER_KEY` | Meilisearch master key — **CHANGEME** | placeholder |
| `CREDS_KEY` | Encryption key for stored credentials (exactly 32 chars) | placeholder |
| `CREDS_IV` | Encryption IV (exactly 16 chars) | placeholder |
| `ALLOW_REGISTRATION` | Allow new user registration | `true` |
| `OPENAI_API_KEY` | OpenAI API key (optional; can also configure in UI) | *(empty)* |
| `ANTHROPIC_API_KEY` | Anthropic API key (optional) | *(empty)* |
## Volumes
- `librechat_images`: User-uploaded images served by the web UI.
- `librechat_logs`: Application log files.
- `librechat_mongo_data`: MongoDB data persistence.
- `librechat_meilisearch_data`: Meilisearch index data.
## Ports
- **3080**: LibreChat web UI
## Security Notes
- Generate all secrets before any external exposure: `openssl rand -hex 32`
- `CREDS_KEY` and `CREDS_IV` encrypt stored API keys — losing them makes stored credentials unrecoverable.
- Set `ALLOW_REGISTRATION=false` after creating admin accounts to lock down signups.
## Resource Requirements
| Service | CPU Limit | Memory Limit |
| ------------ | --------- | ------------ |
| librechat | 2 | 2 GB |
| mongodb | 1 | 1 GB |
| meilisearch | 0.5 | 512 MB |
Total recommended: **4+ GB RAM**.
## Documentation
- [LibreChat Docs](https://docs.librechat.ai)
- [GitHub](https://github.com/danny-avila/LibreChat)
+82
@@ -0,0 +1,82 @@
# LibreChat
[English](./README.md) | [中文](./README.zh.md)
快速开始:<https://docs.librechat.ai>。
此服务用于部署 LibreChat,一个开源 AI 对话平台,在单一统一界面中支持 OpenAI、Anthropic、Google、Ollama 等众多提供商,具备对话历史、文件上传、代码执行和多用户支持。
## 服务
- **librechat**:LibreChat Web 应用(Node.js)。
- **mongodb**:用于存储对话和用户数据的 MongoDB 数据库。
- **meilisearch**:用于消息索引的全文搜索引擎。
## 快速开始
1. 将 `.env.example` 复制为 `.env`:
```bash
cp .env.example .env
```
2. 更新 `.env` 中的密钥(使用 `openssl rand -hex 32` 生成):
```
JWT_SECRET、JWT_REFRESH_SECRET、MEILI_MASTER_KEY、CREDS_KEY、CREDS_IV
```
3. 启动服务:
```bash
docker compose up -d
```
4. 打开 `http://localhost:3080`,注册第一个用户账号。
## 核心环境变量
| 变量 | 说明 | 默认值 |
| ------------------------ | ------------------------------------------------------- | -------- |
| `LIBRECHAT_VERSION` | 镜像版本 | `v0.8.4` |
| `LIBRECHAT_PORT_OVERRIDE`| Web UI 宿主机端口 | `3080` |
| `JWT_SECRET` | JWT 签名密钥(至少 32 字符)——**请修改** | 占位符 |
| `JWT_REFRESH_SECRET` | JWT 刷新签名密钥——**请修改** | 占位符 |
| `MEILI_MASTER_KEY` | Meilisearch 主密钥——**请修改** | 占位符 |
| `CREDS_KEY` | 存储凭证的加密密钥(恰好 32 字符) | 占位符 |
| `CREDS_IV` | 加密 IV(恰好 16 字符) | 占位符 |
| `ALLOW_REGISTRATION` | 允许新用户注册 | `true` |
| `OPENAI_API_KEY` | OpenAI API Key(可选;也可在 UI 中配置) | *(空)* |
| `ANTHROPIC_API_KEY` | Anthropic API Key(可选) | *(空)* |
## 数据卷
- `librechat_images`:用户上传的图片,由 Web UI 提供服务。
- `librechat_logs`:应用日志文件。
- `librechat_mongo_data`:MongoDB 数据持久化。
- `librechat_meilisearch_data`:Meilisearch 索引数据。
## 端口
- **3080**:LibreChat Web UI
## 安全说明
- 在对外暴露之前,请生成所有密钥:`openssl rand -hex 32`。
- `CREDS_KEY` 和 `CREDS_IV` 用于加密存储的 API Key——丢失后存储的凭证将无法恢复。
- 创建管理员账号后,将 `ALLOW_REGISTRATION` 设为 `false` 以禁止新用户注册。
## 资源需求
| 服务 | CPU 限制 | 内存限制 |
| ----------- | -------- | -------- |
| librechat | 2 | 2 GB |
| mongodb | 1 | 1 GB |
| meilisearch | 0.5 | 512 MB |
推荐总计:**4+ GB RAM**。
## 文档
- [LibreChat 文档](https://docs.librechat.ai)
- [GitHub](https://github.com/danny-avila/LibreChat)
+108
@@ -0,0 +1,108 @@
# Make sure to change the secret placeholders before exposing this stack externally.
# Fields marked with CHANGEME must be updated for any non-local deployment.
x-defaults: &defaults
restart: unless-stopped
logging:
driver: json-file
options:
max-size: 100m
max-file: '3'
services:
librechat:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}librechat/librechat:${LIBRECHAT_VERSION:-v0.8.4}
depends_on:
mongodb:
condition: service_healthy
meilisearch:
condition: service_healthy
ports:
- '${LIBRECHAT_PORT_OVERRIDE:-3080}:3080'
volumes:
- librechat_images:/app/client/public/images
- librechat_logs:/app/api/logs
environment:
- TZ=${TZ:-UTC}
- MONGO_URI=mongodb://mongodb:27017/LibreChat
- MEILI_HOST=http://meilisearch:7700
- MEILI_MASTER_KEY=${MEILI_MASTER_KEY:-changeme_meili_master_key_CHANGEME}
- JWT_SECRET=${JWT_SECRET:-changeme_jwt_secret_please_change_CHANGEME}
- JWT_REFRESH_SECRET=${JWT_REFRESH_SECRET:-changeme_jwt_refresh_secret_CHANGEME}
- CREDS_KEY=${CREDS_KEY:-changeme_creds_key_32_chars_only}
- CREDS_IV=${CREDS_IV:-changeme_iv_16ch}
- ALLOW_REGISTRATION=${ALLOW_REGISTRATION:-true}
- ALLOW_SOCIAL_LOGIN=${ALLOW_SOCIAL_LOGIN:-false}
- OPENAI_API_KEY=${OPENAI_API_KEY:-}
- ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
healthcheck:
test:
- CMD
- node
- -e
- "require('http').get('http://localhost:3080/health',res=>process.exit(res.statusCode===200?0:1)).on('error',()=>process.exit(1))"
interval: 30s
timeout: 10s
retries: 5
start_period: 40s
deploy:
resources:
limits:
cpus: ${LIBRECHAT_CPU_LIMIT:-2.0}
memory: ${LIBRECHAT_MEMORY_LIMIT:-2G}
reservations:
cpus: ${LIBRECHAT_CPU_RESERVATION:-0.1}
memory: ${LIBRECHAT_MEMORY_RESERVATION:-512M}
mongodb:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}mongo:${MONGODB_VERSION:-8.0}
volumes:
- librechat_mongo_data:/data/db
environment:
- TZ=${TZ:-UTC}
healthcheck:
test: [CMD, mongosh, --eval, "db.adminCommand('ping')"]
interval: 10s
timeout: 10s
retries: 5
start_period: 20s
deploy:
resources:
limits:
cpus: ${MONGODB_CPU_LIMIT:-1.0}
memory: ${MONGODB_MEMORY_LIMIT:-1G}
reservations:
cpus: ${MONGODB_CPU_RESERVATION:-0.1}
memory: ${MONGODB_MEMORY_RESERVATION:-256M}
meilisearch:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}getmeili/meilisearch:${MEILISEARCH_VERSION:-v1.12.8}
volumes:
- librechat_meilisearch_data:/meili_data
environment:
- TZ=${TZ:-UTC}
- MEILI_MASTER_KEY=${MEILI_MASTER_KEY:-changeme_meili_master_key_CHANGEME}
- MEILI_NO_ANALYTICS=true
healthcheck:
test: [CMD-SHELL, 'curl -sf http://localhost:7700/health || exit 1']
interval: 10s
timeout: 10s
retries: 5
start_period: 10s
deploy:
resources:
limits:
cpus: ${MEILISEARCH_CPU_LIMIT:-0.5}
memory: ${MEILISEARCH_MEMORY_LIMIT:-512M}
reservations:
cpus: ${MEILISEARCH_CPU_RESERVATION:-0.1}
memory: ${MEILISEARCH_MEMORY_RESERVATION:-128M}
volumes:
librechat_images:
librechat_logs:
librechat_mongo_data:
librechat_meilisearch_data:
+4 -4
@@ -15,15 +15,15 @@ POSTGRES_PASSWORD=mmchangeit
 MATTERMOST_ENABLE_LOCAL_MODE=false
 # Resources - Mattermost
-MATTERMOST_CPU_LIMIT=2.00
+MATTERMOST_CPU_LIMIT=2.0
 MATTERMOST_MEMORY_LIMIT=2G
-MATTERMOST_CPU_RESERVATION=0.50
+MATTERMOST_CPU_RESERVATION=0.1
 MATTERMOST_MEMORY_RESERVATION=512M
 # Resources - PostgreSQL
-MATTERMOST_DB_CPU_LIMIT=1.00
+MATTERMOST_DB_CPU_LIMIT=1.0
 MATTERMOST_DB_MEMORY_LIMIT=1G
-MATTERMOST_DB_CPU_RESERVATION=0.25
+MATTERMOST_DB_CPU_RESERVATION=0.1
 MATTERMOST_DB_MEMORY_RESERVATION=256M
 # Logging
+4 -4
@@ -27,10 +27,10 @@ services:
     deploy:
       resources:
         limits:
-          cpus: ${MATTERMOST_DB_CPU_LIMIT:-1.00}
+          cpus: ${MATTERMOST_DB_CPU_LIMIT:-1.0}
           memory: ${MATTERMOST_DB_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: ${MATTERMOST_DB_CPU_RESERVATION:-0.25}
+          cpus: ${MATTERMOST_DB_CPU_RESERVATION:-0.1}
           memory: ${MATTERMOST_DB_MEMORY_RESERVATION:-256M}
   mattermost:
@@ -68,10 +68,10 @@ services:
     deploy:
       resources:
         limits:
-          cpus: ${MATTERMOST_CPU_LIMIT:-2.00}
+          cpus: ${MATTERMOST_CPU_LIMIT:-2.0}
           memory: ${MATTERMOST_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: ${MATTERMOST_CPU_RESERVATION:-0.50}
+          cpus: ${MATTERMOST_CPU_RESERVATION:-0.1}
           memory: ${MATTERMOST_MEMORY_RESERVATION:-512M}
 volumes:
+2 -2
@@ -45,7 +45,7 @@ services:
           cpus: ${N8N_CPU_LIMIT:-2.0}
           memory: ${N8N_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: ${N8N_CPU_RESERVATION:-0.5}
+          cpus: ${N8N_CPU_RESERVATION:-0.1}
           memory: ${N8N_MEMORY_RESERVATION:-512M}
     healthcheck:
       test: [CMD, wget, --no-verbose, --tries=1, --spider, 'http://localhost:5678/healthz']
@@ -70,7 +70,7 @@ services:
           cpus: ${N8N_DB_CPU_LIMIT:-1.0}
           memory: ${N8N_DB_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: ${N8N_DB_CPU_RESERVATION:-0.5}
+          cpus: ${N8N_DB_CPU_RESERVATION:-0.1}
           memory: ${N8N_DB_MEMORY_RESERVATION:-512M}
 volumes:
+1 -1
@@ -137,7 +137,7 @@ GATEWAY_PORT=18790
 # CPU limits
 NANOBOT_CPU_LIMIT=1.0
-NANOBOT_CPU_RESERVATION=0.5
+NANOBOT_CPU_RESERVATION=0.1
 # Memory limits
 NANOBOT_MEMORY_LIMIT=1G
+2 -2
@@ -57,7 +57,7 @@ services:
       - NANOBOT_GATEWAY__PORT=${GATEWAY_PORT:-18790}
     command: ${NANOBOT_COMMAND:-gateway}
     healthcheck:
-      test: [CMD, python, -c, import sys; sys.exit(0)]
+      test: [CMD, python, -c, "import urllib.request; urllib.request.urlopen('http://localhost:18790/')"]
       interval: 30s
       timeout: 10s
       retries: 3
@@ -68,7 +68,7 @@ services:
           cpus: ${NANOBOT_CPU_LIMIT:-1.0}
           memory: ${NANOBOT_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: ${NANOBOT_CPU_RESERVATION:-0.5}
+          cpus: ${NANOBOT_CPU_RESERVATION:-0.1}
           memory: ${NANOBOT_MEMORY_RESERVATION:-512M}
 volumes:
+1 -1
@@ -46,7 +46,7 @@ CLAUDE_WEB_COOKIE=
 # Gateway service resource limits
 OPENCLAW_CPU_LIMIT=2.0
 OPENCLAW_MEMORY_LIMIT=2G
-OPENCLAW_CPU_RESERVATION=1.0
+OPENCLAW_CPU_RESERVATION=0.1
 OPENCLAW_MEMORY_RESERVATION=1G
 # CLI service resource limits
+6 -5
@@ -13,7 +13,7 @@ x-defaults: &defaults
 services:
   openclaw-gateway:
     <<: *defaults
-    image: ${GLOBAL_REGISTRY:-ghcr.io}/openclaw/openclaw:${OPENCLAW_VERSION:-2026.2.3}
+    image: ${GLOBAL_REGISTRY:-ghcr.io/}openclaw/openclaw:${OPENCLAW_VERSION:-2026.2.3}
     environment:
       - TZ=${TZ:-UTC}
       - HOME=/home/node
@@ -55,12 +55,13 @@ services:
           cpus: ${OPENCLAW_CPU_LIMIT:-2.0}
           memory: ${OPENCLAW_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: ${OPENCLAW_CPU_RESERVATION:-1.0}
+          cpus: ${OPENCLAW_CPU_RESERVATION:-0.1}
           memory: ${OPENCLAW_MEMORY_RESERVATION:-1G}
   openclaw-cli:
     <<: *defaults
-    image: ${GLOBAL_REGISTRY:-ghcr.io}/openclaw/openclaw:${OPENCLAW_VERSION:-2026.2.3}
+    restart: no
+    image: ${GLOBAL_REGISTRY:-ghcr.io/}openclaw/openclaw:${OPENCLAW_VERSION:-2026.2.3}
     environment:
       - TZ=${TZ:-UTC}
       - HOME=/home/node
@@ -70,8 +71,8 @@ services:
       - CLAUDE_WEB_SESSION_KEY=${CLAUDE_WEB_SESSION_KEY:-}
       - CLAUDE_WEB_COOKIE=${CLAUDE_WEB_COOKIE:-}
     volumes:
-      - moltbot_config:/home/node/.clawdbot
-      - moltbot_workspace:/home/node/clawd
+      - openclaw_config:/home/node/.openclaw
+      - openclaw_workspace:/home/node/openclaw-workspace
     stdin_open: true
     tty: true
     entrypoint: [node, dist/index.js]
+2 -2
@@ -62,7 +62,7 @@ OPENLIT_CPU_LIMIT=1.0
 OPENLIT_MEMORY_LIMIT=2G
 # CPU reservation for OpenLIT
-OPENLIT_CPU_RESERVATION=0.25
+OPENLIT_CPU_RESERVATION=0.1
 # Memory reservation for OpenLIT
 OPENLIT_MEMORY_RESERVATION=512M
@@ -77,7 +77,7 @@ CLICKHOUSE_CPU_LIMIT=2.0
 CLICKHOUSE_MEMORY_LIMIT=4G
 # CPU reservation for ClickHouse
-CLICKHOUSE_CPU_RESERVATION=0.5
+CLICKHOUSE_CPU_RESERVATION=0.1
 # Memory reservation for ClickHouse
 CLICKHOUSE_MEMORY_RESERVATION=2G
+74
@@ -0,0 +1,74 @@
<?xml version="1.0"?>
<clickhouse>
<logger>
<!-- Set console log level to warning (only critical messages) -->
<level>warning</level>
<console>true</console>
</logger>
<!-- Configure trace_log table settings -->
<trace_log>
<!-- Only log critical trace events (level 6 and above - more restrictive) -->
<level>6</level>
<!-- Reduce the frequency of trace log flushing -->
<flush_interval_milliseconds>120000</flush_interval_milliseconds>
<!-- Set table TTL to reduce storage (7 days) -->
<table_ttl>604800</table_ttl>
</trace_log>
    <!-- Configure text_log table settings (another system table that can grow large) -->
<text_log>
<!-- Only log warning level and above -->
<level>warning</level>
<!-- Set TTL to 7 days -->
<table_ttl>604800</table_ttl>
<!-- Reduce flush frequency -->
<flush_interval_milliseconds>120000</flush_interval_milliseconds>
</text_log>
<!-- Reduce other system table logging -->
<query_log>
<!-- Only log slow queries (over 1 second) -->
<log_queries_min_query_duration_ms>1000</log_queries_min_query_duration_ms>
<!-- Reduce flush frequency -->
<flush_interval_milliseconds>60000</flush_interval_milliseconds>
<!-- Set TTL to 7 days -->
<table_ttl>604800</table_ttl>
</query_log>
<!-- Configure system log levels -->
<system_log>
<level>warning</level>
</system_log>
<!-- Reduce metric log verbosity -->
<metric_log>
<collect_interval_milliseconds>60000</collect_interval_milliseconds>
<flush_interval_milliseconds>120000</flush_interval_milliseconds>
<!-- Set TTL to 7 days -->
<table_ttl>604800</table_ttl>
</metric_log>
<!-- Configure asynchronous metric log (reduce storage) -->
<asynchronous_metric_log>
<collect_interval_milliseconds>60000</collect_interval_milliseconds>
<flush_interval_milliseconds>120000</flush_interval_milliseconds>
<!-- Set TTL to 7 days -->
<table_ttl>604800</table_ttl>
</asynchronous_metric_log>
<!-- Configure part log (reduce verbosity) -->
<part_log>
<level>warning</level>
<flush_interval_milliseconds>120000</flush_interval_milliseconds>
<!-- Set TTL to 7 days -->
<table_ttl>604800</table_ttl>
</part_log>
<!-- Configure latency log (reduce storage) -->
<latency_log>
<flush_interval_milliseconds>120000</flush_interval_milliseconds>
<!-- Set TTL to 7 days -->
<table_ttl>604800</table_ttl>
</latency_log>
</clickhouse>
+326
@@ -0,0 +1,326 @@
#!/bin/bash
set -e
echo "==================== ClickHouse Initialization ===================="
clickhouse-client --query "CREATE DATABASE IF NOT EXISTS ${CLICKHOUSE_DATABASE}"
echo "✅ Database $CLICKHOUSE_DATABASE created successfully"
echo ""
echo "Creating OTEL tables required by OpenTelemetry Collector..."
clickhouse-client --database="${CLICKHOUSE_DATABASE}" --query "
CREATE TABLE IF NOT EXISTS otel_traces
(
\`Timestamp\` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
\`TraceId\` String CODEC(ZSTD(1)),
\`SpanId\` String CODEC(ZSTD(1)),
\`ParentSpanId\` String CODEC(ZSTD(1)),
\`TraceState\` String CODEC(ZSTD(1)),
\`SpanName\` LowCardinality(String) CODEC(ZSTD(1)),
\`SpanKind\` LowCardinality(String) CODEC(ZSTD(1)),
\`ServiceName\` LowCardinality(String) CODEC(ZSTD(1)),
\`ResourceAttributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`ScopeName\` String CODEC(ZSTD(1)),
\`ScopeVersion\` String CODEC(ZSTD(1)),
\`SpanAttributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`Duration\` UInt64 CODEC(ZSTD(1)),
\`StatusCode\` LowCardinality(String) CODEC(ZSTD(1)),
\`StatusMessage\` String CODEC(ZSTD(1)),
\`Events.Timestamp\` Array(DateTime64(9)) CODEC(ZSTD(1)),
\`Events.Name\` Array(LowCardinality(String)) CODEC(ZSTD(1)),
\`Events.Attributes\` Array(Map(LowCardinality(String), String)) CODEC(ZSTD(1)),
\`Links.TraceId\` Array(String) CODEC(ZSTD(1)),
\`Links.SpanId\` Array(String) CODEC(ZSTD(1)),
\`Links.TraceState\` Array(String) CODEC(ZSTD(1)),
\`Links.Attributes\` Array(Map(LowCardinality(String), String)) CODEC(ZSTD(1)),
INDEX idx_trace_id TraceId TYPE bloom_filter(0.001) GRANULARITY 1,
INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_span_attr_key mapKeys(SpanAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_span_attr_value mapValues(SpanAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_duration Duration TYPE minmax GRANULARITY 1
)
ENGINE = MergeTree
PARTITION BY toDate(Timestamp)
ORDER BY (ServiceName, SpanName, toDateTime(Timestamp))
TTL toDateTime(Timestamp) + toIntervalHour(730)
SETTINGS index_granularity = 8192, ttl_only_drop_parts = 1
"
clickhouse-client --database="${CLICKHOUSE_DATABASE}" --query "
CREATE TABLE IF NOT EXISTS otel_logs
(
\`Timestamp\` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
\`TimestampTime\` DateTime DEFAULT toDateTime(Timestamp),
\`TraceId\` String CODEC(ZSTD(1)),
\`SpanId\` String CODEC(ZSTD(1)),
\`TraceFlags\` UInt8,
\`SeverityText\` LowCardinality(String) CODEC(ZSTD(1)),
\`SeverityNumber\` UInt8,
\`ServiceName\` LowCardinality(String) CODEC(ZSTD(1)),
\`Body\` String CODEC(ZSTD(1)),
\`ResourceSchemaUrl\` LowCardinality(String) CODEC(ZSTD(1)),
\`ResourceAttributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`ScopeSchemaUrl\` LowCardinality(String) CODEC(ZSTD(1)),
\`ScopeName\` String CODEC(ZSTD(1)),
\`ScopeVersion\` LowCardinality(String) CODEC(ZSTD(1)),
\`ScopeAttributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`LogAttributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
INDEX idx_trace_id TraceId TYPE bloom_filter(0.001) GRANULARITY 1,
INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_log_attr_key mapKeys(LogAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_log_attr_value mapValues(LogAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_body Body TYPE tokenbf_v1(32768, 3, 0) GRANULARITY 8
)
ENGINE = MergeTree
PARTITION BY toDate(TimestampTime)
PRIMARY KEY (ServiceName, TimestampTime)
ORDER BY (ServiceName, TimestampTime, Timestamp)
TTL TimestampTime + toIntervalHour(730)
SETTINGS index_granularity = 8192, ttl_only_drop_parts = 1
"
clickhouse-client --database="${CLICKHOUSE_DATABASE}" --query "
CREATE TABLE IF NOT EXISTS otel_metrics_gauge
(
\`ResourceAttributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`ResourceSchemaUrl\` String CODEC(ZSTD(1)),
\`ScopeName\` String CODEC(ZSTD(1)),
\`ScopeVersion\` String CODEC(ZSTD(1)),
\`ScopeAttributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`ScopeDroppedAttrCount\` UInt32 CODEC(ZSTD(1)),
\`ScopeSchemaUrl\` String CODEC(ZSTD(1)),
\`ServiceName\` LowCardinality(String) CODEC(ZSTD(1)),
\`MetricName\` String CODEC(ZSTD(1)),
\`MetricDescription\` String CODEC(ZSTD(1)),
\`MetricUnit\` String CODEC(ZSTD(1)),
\`Attributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`StartTimeUnix\` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
\`TimeUnix\` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
\`Value\` Float64 CODEC(ZSTD(1)),
\`Flags\` UInt32 CODEC(ZSTD(1)),
\`Exemplars.FilteredAttributes\` Array(Map(LowCardinality(String), String)) CODEC(ZSTD(1)),
\`Exemplars.TimeUnix\` Array(DateTime64(9)) CODEC(ZSTD(1)),
\`Exemplars.Value\` Array(Float64) CODEC(ZSTD(1)),
\`Exemplars.SpanId\` Array(String) CODEC(ZSTD(1)),
\`Exemplars.TraceId\` Array(String) CODEC(ZSTD(1)),
INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_key mapKeys(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_value mapValues(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1
)
ENGINE = MergeTree
PARTITION BY toDate(TimeUnix)
ORDER BY (ServiceName, MetricName, Attributes, toUnixTimestamp64Nano(TimeUnix))
TTL toDateTime(TimeUnix) + toIntervalHour(730)
SETTINGS index_granularity = 8192, ttl_only_drop_parts = 1
"
clickhouse-client --database="${CLICKHOUSE_DATABASE}" --query "
CREATE TABLE IF NOT EXISTS otel_metrics_sum
(
\`ResourceAttributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`ResourceSchemaUrl\` String CODEC(ZSTD(1)),
\`ScopeName\` String CODEC(ZSTD(1)),
\`ScopeVersion\` String CODEC(ZSTD(1)),
\`ScopeAttributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`ScopeDroppedAttrCount\` UInt32 CODEC(ZSTD(1)),
\`ScopeSchemaUrl\` String CODEC(ZSTD(1)),
\`ServiceName\` LowCardinality(String) CODEC(ZSTD(1)),
\`MetricName\` String CODEC(ZSTD(1)),
\`MetricDescription\` String CODEC(ZSTD(1)),
\`MetricUnit\` String CODEC(ZSTD(1)),
\`Attributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`StartTimeUnix\` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
\`TimeUnix\` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
\`Value\` Float64 CODEC(ZSTD(1)),
\`Flags\` UInt32 CODEC(ZSTD(1)),
\`Exemplars.FilteredAttributes\` Array(Map(LowCardinality(String), String)) CODEC(ZSTD(1)),
\`Exemplars.TimeUnix\` Array(DateTime64(9)) CODEC(ZSTD(1)),
\`Exemplars.Value\` Array(Float64) CODEC(ZSTD(1)),
\`Exemplars.SpanId\` Array(String) CODEC(ZSTD(1)),
\`Exemplars.TraceId\` Array(String) CODEC(ZSTD(1)),
\`AggregationTemporality\` Int32 CODEC(ZSTD(1)),
\`IsMonotonic\` Bool CODEC(Delta(1), ZSTD(1)),
INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_key mapKeys(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_value mapValues(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1
)
ENGINE = MergeTree
PARTITION BY toDate(TimeUnix)
ORDER BY (ServiceName, MetricName, Attributes, toUnixTimestamp64Nano(TimeUnix))
TTL toDateTime(TimeUnix) + toIntervalHour(730)
SETTINGS index_granularity = 8192, ttl_only_drop_parts = 1
"
clickhouse-client --database="${CLICKHOUSE_DATABASE}" --query "
CREATE TABLE IF NOT EXISTS otel_metrics_histogram
(
\`ResourceAttributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`ResourceSchemaUrl\` String CODEC(ZSTD(1)),
\`ScopeName\` String CODEC(ZSTD(1)),
\`ScopeVersion\` String CODEC(ZSTD(1)),
\`ScopeAttributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`ScopeDroppedAttrCount\` UInt32 CODEC(ZSTD(1)),
\`ScopeSchemaUrl\` String CODEC(ZSTD(1)),
\`ServiceName\` LowCardinality(String) CODEC(ZSTD(1)),
\`MetricName\` String CODEC(ZSTD(1)),
\`MetricDescription\` String CODEC(ZSTD(1)),
\`MetricUnit\` String CODEC(ZSTD(1)),
\`Attributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`StartTimeUnix\` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
\`TimeUnix\` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
\`Count\` UInt64 CODEC(Delta(8), ZSTD(1)),
\`Sum\` Float64 CODEC(ZSTD(1)),
\`BucketCounts\` Array(UInt64) CODEC(ZSTD(1)),
\`ExplicitBounds\` Array(Float64) CODEC(ZSTD(1)),
\`Exemplars.FilteredAttributes\` Array(Map(LowCardinality(String), String)) CODEC(ZSTD(1)),
\`Exemplars.TimeUnix\` Array(DateTime64(9)) CODEC(ZSTD(1)),
\`Exemplars.Value\` Array(Float64) CODEC(ZSTD(1)),
\`Exemplars.SpanId\` Array(String) CODEC(ZSTD(1)),
\`Exemplars.TraceId\` Array(String) CODEC(ZSTD(1)),
\`Flags\` UInt32 CODEC(ZSTD(1)),
\`Min\` Float64 CODEC(ZSTD(1)),
\`Max\` Float64 CODEC(ZSTD(1)),
\`AggregationTemporality\` Int32 CODEC(ZSTD(1)),
INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_key mapKeys(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_value mapValues(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1
)
ENGINE = MergeTree
PARTITION BY toDate(TimeUnix)
ORDER BY (ServiceName, MetricName, Attributes, toUnixTimestamp64Nano(TimeUnix))
TTL toDateTime(TimeUnix) + toIntervalHour(730)
SETTINGS index_granularity = 8192, ttl_only_drop_parts = 1
"
clickhouse-client --database="${CLICKHOUSE_DATABASE}" --query "
CREATE TABLE IF NOT EXISTS otel_metrics_summary
(
\`ResourceAttributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`ResourceSchemaUrl\` String CODEC(ZSTD(1)),
\`ScopeName\` String CODEC(ZSTD(1)),
\`ScopeVersion\` String CODEC(ZSTD(1)),
\`ScopeAttributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`ScopeDroppedAttrCount\` UInt32 CODEC(ZSTD(1)),
\`ScopeSchemaUrl\` String CODEC(ZSTD(1)),
\`ServiceName\` LowCardinality(String) CODEC(ZSTD(1)),
\`MetricName\` String CODEC(ZSTD(1)),
\`MetricDescription\` String CODEC(ZSTD(1)),
\`MetricUnit\` String CODEC(ZSTD(1)),
\`Attributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`StartTimeUnix\` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
\`TimeUnix\` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
\`Count\` UInt64 CODEC(Delta(8), ZSTD(1)),
\`Sum\` Float64 CODEC(ZSTD(1)),
\`ValueAtQuantiles.Quantile\` Array(Float64) CODEC(ZSTD(1)),
\`ValueAtQuantiles.Value\` Array(Float64) CODEC(ZSTD(1)),
\`Flags\` UInt32 CODEC(ZSTD(1)),
INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_key mapKeys(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_value mapValues(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1
)
ENGINE = MergeTree
PARTITION BY toDate(TimeUnix)
ORDER BY (ServiceName, MetricName, Attributes, toUnixTimestamp64Nano(TimeUnix))
TTL toDateTime(TimeUnix) + toIntervalHour(730)
SETTINGS index_granularity = 8192, ttl_only_drop_parts = 1
"
clickhouse-client --database="${CLICKHOUSE_DATABASE}" --query "
CREATE TABLE IF NOT EXISTS otel_metrics_exponential_histogram
(
\`ResourceAttributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`ResourceSchemaUrl\` String CODEC(ZSTD(1)),
\`ScopeName\` String CODEC(ZSTD(1)),
\`ScopeVersion\` String CODEC(ZSTD(1)),
\`ScopeAttributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`ScopeDroppedAttrCount\` UInt32 CODEC(ZSTD(1)),
\`ScopeSchemaUrl\` String CODEC(ZSTD(1)),
\`ServiceName\` LowCardinality(String) CODEC(ZSTD(1)),
\`MetricName\` String CODEC(ZSTD(1)),
\`MetricDescription\` String CODEC(ZSTD(1)),
\`MetricUnit\` String CODEC(ZSTD(1)),
\`Attributes\` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
\`StartTimeUnix\` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
\`TimeUnix\` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
\`Count\` UInt64 CODEC(Delta(8), ZSTD(1)),
\`Sum\` Float64 CODEC(ZSTD(1)),
\`Scale\` Int32 CODEC(ZSTD(1)),
\`ZeroCount\` UInt64 CODEC(ZSTD(1)),
\`PositiveOffset\` Int32 CODEC(ZSTD(1)),
\`PositiveBucketCounts\` Array(UInt64) CODEC(ZSTD(1)),
\`NegativeOffset\` Int32 CODEC(ZSTD(1)),
\`NegativeBucketCounts\` Array(UInt64) CODEC(ZSTD(1)),
\`Exemplars.FilteredAttributes\` Array(Map(LowCardinality(String), String)) CODEC(ZSTD(1)),
\`Exemplars.TimeUnix\` Array(DateTime64(9)) CODEC(ZSTD(1)),
\`Exemplars.Value\` Array(Float64) CODEC(ZSTD(1)),
\`Exemplars.SpanId\` Array(String) CODEC(ZSTD(1)),
\`Exemplars.TraceId\` Array(String) CODEC(ZSTD(1)),
\`Flags\` UInt32 CODEC(ZSTD(1)),
\`Min\` Float64 CODEC(ZSTD(1)),
\`Max\` Float64 CODEC(ZSTD(1)),
\`AggregationTemporality\` Int32 CODEC(ZSTD(1)),
INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_key mapKeys(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_value mapValues(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1
)
ENGINE = MergeTree
PARTITION BY toDate(TimeUnix)
ORDER BY (ServiceName, MetricName, Attributes, toUnixTimestamp64Nano(TimeUnix))
TTL toDateTime(TimeUnix) + toIntervalHour(730)
SETTINGS index_granularity = 8192, ttl_only_drop_parts = 1
"
clickhouse-client --database="${CLICKHOUSE_DATABASE}" --query "
CREATE TABLE IF NOT EXISTS otel_traces_trace_id_ts
(
\`TraceId\` String CODEC(ZSTD(1)),
\`Start\` DateTime CODEC(Delta(4), ZSTD(1)),
\`End\` DateTime CODEC(Delta(4), ZSTD(1)),
INDEX idx_trace_id TraceId TYPE bloom_filter(0.01) GRANULARITY 1
)
ENGINE = MergeTree
PARTITION BY toDate(Start)
ORDER BY (TraceId, Start)
TTL toDateTime(Start) + toIntervalHour(730)
SETTINGS index_granularity = 8192, ttl_only_drop_parts = 1
"
clickhouse-client --database="${CLICKHOUSE_DATABASE}" --query "
CREATE MATERIALIZED VIEW IF NOT EXISTS otel_traces_trace_id_ts_mv TO otel_traces_trace_id_ts
(
\`TraceId\` String,
\`Start\` DateTime64(9),
\`End\` DateTime64(9)
)
AS SELECT
TraceId,
min(Timestamp) AS Start,
max(Timestamp) AS End
FROM otel_traces
WHERE TraceId != ''
GROUP BY TraceId
"
echo "✅ All 9 OTEL tables created successfully"
echo "===================================================================="
@@ -0,0 +1,52 @@
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
processors:
batch:
memory_limiter:
# 80% of maximum memory up to 2G
limit_mib: 1500
# 25% of limit up to 2G
spike_limit_mib: 512
check_interval: 5s
exporters:
clickhouse:
endpoint: tcp://${env:INIT_DB_HOST}:9000?dial_timeout=10s
database: ${env:INIT_DB_DATABASE}
username: ${env:INIT_DB_USERNAME}
password: ${env:INIT_DB_PASSWORD}
ttl: 730h
logs_table_name: otel_logs
traces_table_name: otel_traces
# Metrics use separate tables by type: otel_metrics_gauge, otel_metrics_sum,
# otel_metrics_histogram, otel_metrics_summary, otel_metrics_exponential_histogram
timeout: 5s
retry_on_failure:
enabled: true
initial_interval: 5s
max_interval: 30s
max_elapsed_time: 300s
service:
pipelines:
logs:
receivers: [otlp]
      processors: [memory_limiter, batch]
exporters: [clickhouse]
traces:
receivers: [otlp]
processors: [memory_limiter, batch]
exporters: [clickhouse]
metrics:
receivers: [otlp]
processors: [memory_limiter, batch]
exporters: [clickhouse]
# telemetry:
# metrics:
# address: localhost:8888
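A quick way to smoke-test this pipeline is to post a single log record to the collector's OTLP/HTTP endpoint and then look for it in `otel_logs`. A sketch, assuming the collector's HTTP port 4318 is reachable on localhost (field names follow the OTLP JSON encoding; the `smoke-test` service name is an arbitrary marker):

```bash
# One log record in OTLP/HTTP JSON form; service.name lands in the
# ServiceName column of otel_logs.
PAYLOAD='{"resourceLogs":[{"resource":{"attributes":[{"key":"service.name","value":{"stringValue":"smoke-test"}}]},"scopeLogs":[{"logRecords":[{"timeUnixNano":"'"$(date +%s)"'000000000","severityText":"INFO","body":{"stringValue":"hello from smoke-test"}}]}]}]}'

OTLP_HTTP="http://localhost:4318"  # assumed collector endpoint

if command -v curl >/dev/null 2>&1 \
    && curl -s -o /dev/null --connect-timeout 2 "${OTLP_HTTP}" 2>/dev/null; then
    curl -s -X POST "${OTLP_HTTP}/v1/logs" \
        -H 'Content-Type: application/json' -d "${PAYLOAD}"
else
    # Collector not reachable from here; show the payload instead.
    printf '%s\n' "${PAYLOAD}"
fi
```

Once the batch processor flushes, `SELECT Body FROM otel_logs WHERE ServiceName = 'smoke-test'` on the ClickHouse side should return the record.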
+5 -2
@@ -23,6 +25,8 @@ services:
       - CLICKHOUSE_ALWAYS_RUN_INITDB_SCRIPTS=true
     volumes:
       - clickhouse_data:/var/lib/clickhouse
+      - ./assets/clickhouse-config.xml:/etc/clickhouse-server/config.d/custom-config.xml:ro
+      - ./assets/clickhouse-init.sh:/docker-entrypoint-initdb.d/init.sh:ro
     ports:
       - '${CLICKHOUSE_HTTP_PORT_OVERRIDE:-8123}:8123'
       - '${CLICKHOUSE_NATIVE_PORT_OVERRIDE:-9000}:9000'
@@ -38,7 +40,7 @@ services:
           cpus: ${CLICKHOUSE_CPU_LIMIT:-2.0}
           memory: ${CLICKHOUSE_MEMORY_LIMIT:-4G}
         reservations:
-          cpus: ${CLICKHOUSE_CPU_RESERVATION:-0.5}
+          cpus: ${CLICKHOUSE_CPU_RESERVATION:-0.1}
           memory: ${CLICKHOUSE_MEMORY_RESERVATION:-2G}
   openlit:
@@ -77,6 +79,7 @@ services:
         condition: service_healthy
     volumes:
       - openlit_data:/app/client/data
+      - ./assets/otel-collector-config.yaml:/etc/otel/otel-collector-config.yaml:ro
     healthcheck:
       test: [CMD, wget, --quiet, --tries=1, --spider, 'http://localhost:${OPENLIT_INTERNAL_PORT:-3000}/health']
       interval: 30s
@@ -89,7 +92,7 @@ services:
           cpus: ${OPENLIT_CPU_LIMIT:-1.0}
           memory: ${OPENLIT_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: ${OPENLIT_CPU_RESERVATION:-0.25}
+          cpus: ${OPENLIT_CPU_RESERVATION:-0.1}
           memory: ${OPENLIT_MEMORY_RESERVATION:-512M}
 volumes:
+1 -1
@@ -36,7 +36,7 @@ ZO_S3_SECRET_KEY=
 # Resource limits
 # CPU limits (in cores)
 OPENOBSERVE_CPU_LIMIT=2.0
-OPENOBSERVE_CPU_RESERVATION=0.5
+OPENOBSERVE_CPU_RESERVATION=0.1
 # Memory limits
 OPENOBSERVE_MEMORY_LIMIT=2G
+1 -1
@@ -40,7 +40,7 @@ services:
           cpus: ${OPENOBSERVE_CPU_LIMIT:-2.0}
           memory: ${OPENOBSERVE_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: ${OPENOBSERVE_CPU_RESERVATION:-0.5}
+          cpus: ${OPENOBSERVE_CPU_RESERVATION:-0.1}
           memory: ${OPENOBSERVE_MEMORY_RESERVATION:-512M}
 volumes:
+1 -1
@@ -35,7 +35,7 @@ OPENSANDBOX_SERVER_CPU_LIMIT=2.0
 # OpenSandbox Server CPU reservation
 # Default: 1.0 (1 CPU core)
-OPENSANDBOX_SERVER_CPU_RESERVATION=1.0
+OPENSANDBOX_SERVER_CPU_RESERVATION=0.1
 # OpenSandbox Server memory limit
 # Default: 2G
+1 -1
@@ -41,7 +41,7 @@ services:
           cpus: ${OPENSANDBOX_SERVER_CPU_LIMIT:-2.0}
           memory: ${OPENSANDBOX_SERVER_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: ${OPENSANDBOX_SERVER_CPU_RESERVATION:-1.0}
+          cpus: ${OPENSANDBOX_SERVER_CPU_RESERVATION:-0.1}
           memory: ${OPENSANDBOX_SERVER_MEMORY_RESERVATION:-1G}
 volumes:
+9 -9
@@ -3,7 +3,7 @@ GLOBAL_REGISTRY=
 TZ=UTC
 # Opik Version
-OPIK_VERSION=1.10.23
+OPIK_VERSION=1.11.9
 # Opik Frontend Port
 OPIK_PORT_OVERRIDE=5173
@@ -81,47 +81,47 @@ NGINX_CONF_SUFFIX=local
 # Resource Limits - MySQL
 MYSQL_CPU_LIMIT=1.0
 MYSQL_MEMORY_LIMIT=1G
-MYSQL_CPU_RESERVATION=0.5
+MYSQL_CPU_RESERVATION=0.1
 MYSQL_MEMORY_RESERVATION=512M
 # Resource Limits - Redis
 REDIS_CPU_LIMIT=0.5
 REDIS_MEMORY_LIMIT=512M
-REDIS_CPU_RESERVATION=0.25
+REDIS_CPU_RESERVATION=0.1
 REDIS_MEMORY_RESERVATION=256M
 # Resource Limits - ZooKeeper
 ZOOKEEPER_CPU_LIMIT=0.5
 ZOOKEEPER_MEMORY_LIMIT=1G
-ZOOKEEPER_CPU_RESERVATION=0.25
+ZOOKEEPER_CPU_RESERVATION=0.1
 ZOOKEEPER_MEMORY_RESERVATION=512M
 # Resource Limits - ClickHouse
 CLICKHOUSE_CPU_LIMIT=2.0
 CLICKHOUSE_MEMORY_LIMIT=4G
-CLICKHOUSE_CPU_RESERVATION=0.5
+CLICKHOUSE_CPU_RESERVATION=0.1
 CLICKHOUSE_MEMORY_RESERVATION=1G
 # Resource Limits - MinIO
 MINIO_CPU_LIMIT=1.0
 MINIO_MEMORY_LIMIT=1G
-MINIO_CPU_RESERVATION=0.25
+MINIO_CPU_RESERVATION=0.1
 MINIO_MEMORY_RESERVATION=512M
 # Resource Limits - Backend
 BACKEND_CPU_LIMIT=2.0
 BACKEND_MEMORY_LIMIT=2G
-BACKEND_CPU_RESERVATION=0.5
+BACKEND_CPU_RESERVATION=0.1
 BACKEND_MEMORY_RESERVATION=1G
 # Resource Limits - Python Backend
 PYTHON_BACKEND_CPU_LIMIT=1.0
 PYTHON_BACKEND_MEMORY_LIMIT=1G
-PYTHON_BACKEND_CPU_RESERVATION=0.5
+PYTHON_BACKEND_CPU_RESERVATION=0.1
 PYTHON_BACKEND_MEMORY_RESERVATION=512M
 # Resource Limits - Frontend
 FRONTEND_CPU_LIMIT=0.5
 FRONTEND_MEMORY_LIMIT=512M
-FRONTEND_CPU_RESERVATION=0.25
+FRONTEND_CPU_RESERVATION=0.1
 FRONTEND_MEMORY_RESERVATION=256M
+14 -13
@@ -33,7 +33,7 @@ services:
           cpus: ${MYSQL_CPU_LIMIT:-1.0}
           memory: ${MYSQL_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: ${MYSQL_CPU_RESERVATION:-0.5}
+          cpus: ${MYSQL_CPU_RESERVATION:-0.1}
           memory: ${MYSQL_MEMORY_RESERVATION:-512M}
   redis:
@@ -56,7 +56,7 @@ services:
           cpus: ${REDIS_CPU_LIMIT:-0.5}
           memory: ${REDIS_MEMORY_LIMIT:-512M}
         reservations:
-          cpus: ${REDIS_CPU_RESERVATION:-0.25}
+          cpus: ${REDIS_CPU_RESERVATION:-0.1}
           memory: ${REDIS_MEMORY_RESERVATION:-256M}
   zookeeper:
@@ -93,7 +93,7 @@ services:
           cpus: ${ZOOKEEPER_CPU_LIMIT:-0.5}
           memory: ${ZOOKEEPER_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: ${ZOOKEEPER_CPU_RESERVATION:-0.25}
+          cpus: ${ZOOKEEPER_CPU_RESERVATION:-0.1}
           memory: ${ZOOKEEPER_MEMORY_RESERVATION:-512M}
   clickhouse-init:
@@ -148,7 +148,7 @@ services:
           cpus: ${CLICKHOUSE_CPU_LIMIT:-2.0}
           memory: ${CLICKHOUSE_MEMORY_LIMIT:-4G}
         reservations:
-          cpus: ${CLICKHOUSE_CPU_RESERVATION:-0.5}
+          cpus: ${CLICKHOUSE_CPU_RESERVATION:-0.1}
           memory: ${CLICKHOUSE_MEMORY_RESERVATION:-1G}
   minio:
@@ -176,7 +176,7 @@ services:
           cpus: ${MINIO_CPU_LIMIT:-1.0}
           memory: ${MINIO_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: ${MINIO_CPU_RESERVATION:-0.25}
+          cpus: ${MINIO_CPU_RESERVATION:-0.1}
           memory: ${MINIO_MEMORY_RESERVATION:-512M}
   minio-init:
@@ -201,7 +201,7 @@ services:
   backend:
     <<: *defaults
-    image: ${GLOBAL_REGISTRY:-}ghcr.io/comet-ml/opik/opik-backend:${OPIK_VERSION:-1.10.23}
+    image: ${GLOBAL_REGISTRY:-}ghcr.io/comet-ml/opik/opik-backend:${OPIK_VERSION:-1.11.9}
     command: [bash, -c, './run_db_migrations.sh && ./entrypoint.sh']
     environment:
       TZ: ${TZ:-UTC}
@@ -231,6 +231,7 @@ services:
       TOGGLE_OPIK_AI_ENABLED: ${TOGGLE_OPIK_AI_ENABLED:-false}
       TOGGLE_GUARDRAILS_ENABLED: ${TOGGLE_GUARDRAILS_ENABLED:-false}
       TOGGLE_WELCOME_WIZARD_ENABLED: ${TOGGLE_WELCOME_WIZARD_ENABLED:-true}
+      TOGGLE_RUNNERS_ENABLED: ${TOGGLE_RUNNERS_ENABLED:-false}
       CORS: ${CORS:-false}
       ATTACHMENTS_STRIP_MIN_SIZE: ${ATTACHMENTS_STRIP_MIN_SIZE:-256000}
       JACKSON_MAX_STRING_LENGTH: ${JACKSON_MAX_STRING_LENGTH:-104857600}
@@ -257,26 +258,26 @@ services:
           cpus: ${BACKEND_CPU_LIMIT:-2.0}
           memory: ${BACKEND_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: ${BACKEND_CPU_RESERVATION:-0.5}
+          cpus: ${BACKEND_CPU_RESERVATION:-0.1}
           memory: ${BACKEND_MEMORY_RESERVATION:-1G}
     volumes:
       - backend_tmp:/tmp
   python-backend:
     <<: *defaults
-    image: ${GLOBAL_REGISTRY:-}ghcr.io/comet-ml/opik/opik-python-backend:${OPIK_VERSION:-1.10.23}
+    image: ${GLOBAL_REGISTRY:-}ghcr.io/comet-ml/opik/opik-python-backend:${OPIK_VERSION:-1.11.9}
     privileged: true
     environment:
       TZ: ${TZ:-UTC}
       OPIK_OTEL_SDK_ENABLED: 'false'
-      PYTHON_CODE_EXECUTOR_IMAGE_TAG: ${OPIK_VERSION:-1.10.23}
+      PYTHON_CODE_EXECUTOR_IMAGE_TAG: ${OPIK_VERSION:-1.11.9}
       PYTHON_CODE_EXECUTOR_STRATEGY: ${PYTHON_CODE_EXECUTOR_STRATEGY:-process}
       PYTHON_CODE_EXECUTOR_CONTAINERS_NUM: ${PYTHON_CODE_EXECUTOR_CONTAINERS_NUM:-5}
       PYTHON_CODE_EXECUTOR_EXEC_TIMEOUT_IN_SECS: ${PYTHON_CODE_EXECUTOR_EXEC_TIMEOUT_IN_SECS:-3}
       PYTHON_CODE_EXECUTOR_ALLOW_NETWORK: ${PYTHON_CODE_EXECUTOR_ALLOW_NETWORK:-false}
       PYTHON_CODE_EXECUTOR_CPU_SHARES: ${PYTHON_CODE_EXECUTOR_CPU_SHARES:-512}
       PYTHON_CODE_EXECUTOR_MEM_LIMIT: ${PYTHON_CODE_EXECUTOR_MEM_LIMIT:-256m}
-      OPIK_VERSION: ${OPIK_VERSION:-1.10.23}
+      OPIK_VERSION: ${OPIK_VERSION:-1.11.9}
       OPIK_REVERSE_PROXY_URL: http://frontend:5173/api
       PYTHON_BACKEND_PORT: ${PYTHON_BACKEND_PORT:-8000}
       OPENAI_API_KEY: ${OPENAI_API_KEY:-}
@@ -306,14 +307,14 @@ services:
           cpus: ${PYTHON_BACKEND_CPU_LIMIT:-1.0}
           memory: ${PYTHON_BACKEND_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: ${PYTHON_BACKEND_CPU_RESERVATION:-0.5}
+          cpus: ${PYTHON_BACKEND_CPU_RESERVATION:-0.1}
           memory: ${PYTHON_BACKEND_MEMORY_RESERVATION:-512M}
     volumes:
       - python_backend_docker:/var/lib/docker
   frontend:
     <<: *defaults
-    image: ${GLOBAL_REGISTRY:-}ghcr.io/comet-ml/opik/opik-frontend:${OPIK_VERSION:-1.10.23}
+    image: ${GLOBAL_REGISTRY:-}ghcr.io/comet-ml/opik/opik-frontend:${OPIK_VERSION:-1.11.9}
     ports:
       - '${OPIK_PORT_OVERRIDE:-5173}:80'
     environment:
@@ -335,7 +336,7 @@ services:
           cpus: ${FRONTEND_CPU_LIMIT:-0.5}
           memory: ${FRONTEND_MEMORY_LIMIT:-512M}
         reservations:
-          cpus: ${FRONTEND_CPU_RESERVATION:-0.25}
+          cpus: ${FRONTEND_CPU_RESERVATION:-0.1}
           memory: ${FRONTEND_MEMORY_RESERVATION:-256M}
     volumes:
+55
@@ -0,0 +1,55 @@
# Global Registry Prefix (optional)
# GLOBAL_REGISTRY=
# Service Versions
RAGFLOW_VERSION=v0.24.0
ELASTICSEARCH_VERSION=8.11.3
MYSQL_VERSION=8.0.39
REDIS_VERSION=7
MINIO_VERSION=RELEASE.2025-01-20T14-49-07Z
# Timezone
TZ=UTC
# Host port for the RAGFlow web UI (Nginx reverse proxy)
RAGFLOW_PORT_OVERRIDE=80
# MinIO web console port
MINIO_CONSOLE_PORT_OVERRIDE=9001
# Secrets (CHANGEME: use strong random values in production)
SECRET_KEY=changeme_secret_key_CHANGEME
MYSQL_PASSWORD=ragflow
REDIS_PASSWORD=redispassword
MINIO_USER=minioadmin
MINIO_PASSWORD=minioadmin
# Resource Limits - RAGFlow
RAGFLOW_CPU_LIMIT=4.0
RAGFLOW_MEMORY_LIMIT=4G
RAGFLOW_CPU_RESERVATION=0.1
RAGFLOW_MEMORY_RESERVATION=2G
# Resource Limits - Elasticsearch
ELASTICSEARCH_CPU_LIMIT=2.0
ELASTICSEARCH_MEMORY_LIMIT=2G
ELASTICSEARCH_CPU_RESERVATION=0.1
ELASTICSEARCH_MEMORY_RESERVATION=1G
# Resource Limits - MySQL
MYSQL_CPU_LIMIT=1.0
MYSQL_MEMORY_LIMIT=1G
MYSQL_CPU_RESERVATION=0.1
MYSQL_MEMORY_RESERVATION=256M
# Resource Limits - Redis
REDIS_CPU_LIMIT=0.5
REDIS_MEMORY_LIMIT=512M
REDIS_CPU_RESERVATION=0.1
REDIS_MEMORY_RESERVATION=128M
# Resource Limits - MinIO
MINIO_CPU_LIMIT=1.0
MINIO_MEMORY_LIMIT=1G
MINIO_CPU_RESERVATION=0.1
MINIO_MEMORY_RESERVATION=256M
+84
@@ -0,0 +1,84 @@
# RAGFlow
[English](./README.md) | [中文](./README.zh.md)
Quick start: <https://ragflow.io/docs>.
This service deploys RAGFlow, an open-source Retrieval-Augmented Generation engine based on deep document understanding. It provides intelligent question answering over complex documents (PDFs, Word, PowerPoint, etc.) with accurate, traceable citations.
> **Platform note**: This stack is **x86-64 (amd64) only**. ARM64 is not supported by the official image.
>
> **Resource note**: Elasticsearch alone requires ~2 GB RAM. Provision at least **8 GB RAM** total before starting.
## Services
- **ragflow**: The RAGFlow web application and API server (Nginx on port 80, API on port 9380).
- **es01**: Elasticsearch single-node cluster for vector and full-text search.
- **mysql**: MySQL 8 database for metadata and workflow state.
- **redis**: Redis for task queues and caching.
- **minio**: S3-compatible object storage for document and chunk storage.
## Quick Start
1. Copy `.env.example` to `.env`:
```bash
cp .env.example .env
```
2. Update the secrets in `.env`:
```
SECRET_KEY, MYSQL_PASSWORD, REDIS_PASSWORD, MINIO_PASSWORD
```
3. Start the services (initial startup may take 2–5 minutes):
```bash
docker compose up -d
```
4. Open `http://localhost` and register the first admin account.
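For step 2, the `CHANGEME` placeholders can be replaced with strong random values. A sketch using `openssl` (assumed available; variable names match the `.env.example` above, and the generated values are printed here for review rather than written into `.env`):

```bash
# Generate random credentials; -hex avoids characters that need quoting in .env.
SECRET_KEY="$(openssl rand -hex 32)"
MYSQL_PASSWORD="$(openssl rand -hex 16)"
REDIS_PASSWORD="$(openssl rand -hex 16)"
MINIO_PASSWORD="$(openssl rand -hex 16)"

# Review, then paste into .env (or redirect this output with >> .env).
printf 'SECRET_KEY=%s\nMYSQL_PASSWORD=%s\nREDIS_PASSWORD=%s\nMINIO_PASSWORD=%s\n' \
    "${SECRET_KEY}" "${MYSQL_PASSWORD}" "${REDIS_PASSWORD}" "${MINIO_PASSWORD}"
```

Rotate these values before exposing the stack beyond localhost; the MinIO defaults (`minioadmin`/`minioadmin`) are widely known.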
## Core Environment Variables
| Variable | Description | Default |
| ---------------------- | -------------------------------------------------------- | -------------------------------- |
| `RAGFLOW_VERSION` | RAGFlow image version | `v0.24.0` |
| `RAGFLOW_PORT_OVERRIDE`| Host port for the web UI | `80` |
| `SECRET_KEY` | Application secret key — **CHANGEME** | placeholder |
| `MYSQL_PASSWORD` | MySQL root password (also used by RAGFlow) | `ragflow` |
| `REDIS_PASSWORD` | Redis authentication password | `redispassword` |
| `MINIO_USER` | MinIO root user | `minioadmin` |
| `MINIO_PASSWORD` | MinIO root password | `minioadmin` |
| `MINIO_CONSOLE_PORT_OVERRIDE` | MinIO web console host port | `9001` |
## Volumes
- `ragflow_logs`: RAGFlow application logs.
- `ragflow_es_data`: Elasticsearch index data.
- `ragflow_mysql_data`: MySQL database files.
- `ragflow_redis_data`: Redis persistence.
- `ragflow_minio_data`: Object storage for documents and embeddings.
## Ports
- **80**: RAGFlow web UI and API (via Nginx)
- **9001**: MinIO web console
## Resource Requirements
| Service | CPU Limit | Memory Limit |
| ------------- | --------- | ------------ |
| ragflow | 4 | 4 GB |
| elasticsearch | 2 | 2 GB |
| mysql | 1 | 1 GB |
| redis | 0.5 | 512 MB |
| minio | 1 | 1 GB |
Total recommended: **8+ GB RAM**, **4+ CPU cores**.
## Documentation
- [RAGFlow Docs](https://ragflow.io/docs)
- [GitHub](https://github.com/infiniflow/ragflow)
+84
@@ -0,0 +1,84 @@
# RAGFlow
[English](./README.md) | [中文](./README.zh.md)
快速开始:<https://ragflow.io/docs>。
此服务用于部署 RAGFlow,一个基于深度文档理解的开源检索增强生成引擎。它能对复杂文档(PDF、Word、PowerPoint 等)进行智能问答,并提供精准的引用和引文追踪。
> **平台说明**:此 Stack 仅支持 **x86-64(amd64)**,官方镜像不支持 ARM64。
>
> **资源说明**:仅 Elasticsearch 就需要约 2 GB RAM,启动前请确保系统至少有 **8 GB RAM**。
## 服务
- **ragflow**:RAGFlow Web 应用和 API 服务器(Nginx 监听 80 端口,API 监听 9380 端口)。
- **es01**:单节点 Elasticsearch 集群,用于向量和全文检索。
- **mysql**:MySQL 8 数据库,用于元数据和工作流状态存储。
- **redis**:Redis,用于任务队列和缓存。
- **minio**:S3 兼容对象存储,用于文档和分块存储。
## 快速开始
1. 将 `.env.example` 复制为 `.env`:
```bash
cp .env.example .env
```
2. 更新 `.env` 中的密钥:
```
SECRET_KEY、MYSQL_PASSWORD、REDIS_PASSWORD、MINIO_PASSWORD
```
3. 启动服务(首次启动可能需要 2~5 分钟):
```bash
docker compose up -d
```
4. 打开 `http://localhost`,注册第一个管理员账号。
## Core Environment Variables
| Variable | Description | Default |
| ----------------------------- | ------------------------------------------ | --------------- |
| `RAGFLOW_VERSION` | RAGFlow image version | `v0.24.0` |
| `RAGFLOW_PORT_OVERRIDE` | Host port for the web UI | `80` |
| `SECRET_KEY` | Application secret key (**change this**) | placeholder |
| `MYSQL_PASSWORD` | MySQL root password (also used by RAGFlow) | `ragflow` |
| `REDIS_PASSWORD` | Redis authentication password | `redispassword` |
| `MINIO_USER` | MinIO root username | `minioadmin` |
| `MINIO_PASSWORD` | MinIO root password | `minioadmin` |
| `MINIO_CONSOLE_PORT_OVERRIDE` | MinIO web console host port | `9001` |
## Volumes
- `ragflow_logs`: RAGFlow application logs.
- `ragflow_es_data`: Elasticsearch index data.
- `ragflow_mysql_data`: MySQL database files.
- `ragflow_redis_data`: Redis persistence.
- `ragflow_minio_data`: Object storage for documents and embeddings.
## Ports
- **80**: RAGFlow web UI and API (via Nginx)
- **9001**: MinIO web console
## Resource Requirements
| Service | CPU Limit | Memory Limit |
| ------------- | --------- | ------------ |
| ragflow | 4 | 4 GB |
| elasticsearch | 2 | 2 GB |
| mysql | 1 | 1 GB |
| redis | 0.5 | 512 MB |
| minio | 1 | 1 GB |
Total recommended: **8+ GB RAM**, **4+ CPU cores**.
## Documentation
- [RAGFlow Docs](https://ragflow.io/docs)
- [GitHub](https://github.com/infiniflow/ragflow)
+157
View File
@@ -0,0 +1,157 @@
# RAGFlow requires substantial system resources.
# Elasticsearch alone needs ~2 GB RAM. Total recommended: 8+ GB RAM.
# This stack is x86-64 (amd64) only; ARM64 is not supported.
x-defaults: &defaults
restart: unless-stopped
logging:
driver: json-file
options:
max-size: 100m
max-file: '3'
services:
ragflow:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}infiniflow/ragflow:${RAGFLOW_VERSION:-v0.24.0}
depends_on:
mysql:
condition: service_healthy
redis:
condition: service_healthy
minio:
condition: service_healthy
es01:
condition: service_healthy
ports:
- '${RAGFLOW_PORT_OVERRIDE:-80}:80'
volumes:
- ragflow_logs:/ragflow/logs
environment:
- TZ=${TZ:-UTC}
- MYSQL_PASSWORD=${MYSQL_PASSWORD:-ragflow}
- MINIO_USER=${MINIO_USER:-minioadmin}
- MINIO_PASSWORD=${MINIO_PASSWORD:-minioadmin}
- REDIS_PASSWORD=${REDIS_PASSWORD:-redispassword}
- SECRET_KEY=${SECRET_KEY:-changeme_secret_key_CHANGEME}
healthcheck:
test: [CMD-SHELL, 'curl -sf http://localhost/ > /dev/null 2>&1 || exit 1']
interval: 30s
timeout: 15s
retries: 10
start_period: 120s
deploy:
resources:
limits:
cpus: ${RAGFLOW_CPU_LIMIT:-4.0}
memory: ${RAGFLOW_MEMORY_LIMIT:-4G}
reservations:
cpus: ${RAGFLOW_CPU_RESERVATION:-0.1}
memory: ${RAGFLOW_MEMORY_RESERVATION:-2G}
es01:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}elasticsearch:${ELASTICSEARCH_VERSION:-8.11.3}
environment:
- TZ=${TZ:-UTC}
- discovery.type=single-node
- xpack.security.enabled=false
- ES_JAVA_OPTS=-Xms512m -Xmx1g
volumes:
- ragflow_es_data:/usr/share/elasticsearch/data
healthcheck:
test: [CMD-SHELL, "curl -sf http://localhost:9200/_cluster/health | grep -qE '\"status\":\"(green|yellow)\"' || exit 1"]
interval: 15s
timeout: 10s
retries: 10
start_period: 60s
deploy:
resources:
limits:
cpus: ${ELASTICSEARCH_CPU_LIMIT:-2.0}
memory: ${ELASTICSEARCH_MEMORY_LIMIT:-2G}
reservations:
cpus: ${ELASTICSEARCH_CPU_RESERVATION:-0.1}
memory: ${ELASTICSEARCH_MEMORY_RESERVATION:-1G}
mysql:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}mysql:${MYSQL_VERSION:-8.0.39}
environment:
- TZ=${TZ:-UTC}
- MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD:-ragflow}
- MYSQL_DATABASE=rag_flow
volumes:
- ragflow_mysql_data:/var/lib/mysql
healthcheck:
test: [CMD, mysqladmin, ping, -h, localhost]
interval: 10s
timeout: 10s
retries: 5
start_period: 30s
deploy:
resources:
limits:
cpus: ${MYSQL_CPU_LIMIT:-1.0}
memory: ${MYSQL_MEMORY_LIMIT:-1G}
reservations:
cpus: ${MYSQL_CPU_RESERVATION:-0.1}
memory: ${MYSQL_MEMORY_RESERVATION:-256M}
redis:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}redis:${REDIS_VERSION:-7}
command: >
--requirepass ${REDIS_PASSWORD:-redispassword}
--maxmemory-policy noeviction
environment:
- REDIS_PASSWORD=${REDIS_PASSWORD:-redispassword}
volumes:
- ragflow_redis_data:/data
healthcheck:
test: [CMD-SHELL, 'redis-cli -a $$REDIS_PASSWORD ping | grep -q PONG']
interval: 5s
timeout: 10s
retries: 10
deploy:
resources:
limits:
cpus: ${REDIS_CPU_LIMIT:-0.5}
memory: ${REDIS_MEMORY_LIMIT:-512M}
reservations:
cpus: ${REDIS_CPU_RESERVATION:-0.1}
memory: ${REDIS_MEMORY_RESERVATION:-128M}
minio:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}minio/minio:${MINIO_VERSION:-RELEASE.2025-01-20T14-49-07Z}
command: server /data --console-address ':9001'
environment:
- TZ=${TZ:-UTC}
- MINIO_ROOT_USER=${MINIO_USER:-minioadmin}
- MINIO_ROOT_PASSWORD=${MINIO_PASSWORD:-minioadmin}
volumes:
- ragflow_minio_data:/data
ports:
- '${MINIO_CONSOLE_PORT_OVERRIDE:-9001}:9001'
healthcheck:
test: [CMD-SHELL, 'curl -sf http://localhost:9000/minio/health/live || exit 1']
interval: 10s
timeout: 5s
retries: 10
start_period: 10s
deploy:
resources:
limits:
cpus: ${MINIO_CPU_LIMIT:-1.0}
memory: ${MINIO_MEMORY_LIMIT:-1G}
reservations:
cpus: ${MINIO_CPU_RESERVATION:-0.1}
memory: ${MINIO_MEMORY_RESERVATION:-256M}
volumes:
ragflow_logs:
ragflow_es_data:
ragflow_mysql_data:
ragflow_redis_data:
ragflow_minio_data:
+172
View File
@@ -0,0 +1,172 @@
# Global Settings
GLOBAL_REGISTRY=
TZ=UTC
# Shannon Version (applies to gateway, orchestrator, llm-service, and agent-core)
SHANNON_VERSION=v0.3.1
# ============================================================
# LLM API Keys — at least one provider is required
# ============================================================
OPENAI_API_KEY=
ANTHROPIC_API_KEY=
GOOGLE_API_KEY=
XAI_API_KEY=
DEEPSEEK_API_KEY=
# Optional tool/search API keys
SERPAPI_API_KEY=
FIRECRAWL_API_KEY=
# ============================================================
# Security
# ============================================================
# IMPORTANT: Change this in production!
JWT_SECRET=development-only-secret-change-in-production
# Set to 0 to enable JWT authentication in production
GATEWAY_SKIP_AUTH=1
# ============================================================
# Service Versions
# ============================================================
POSTGRES_VERSION=pg16
REDIS_VERSION=7.2-alpine
QDRANT_VERSION=v1.17
TEMPORAL_VERSION=1.28.3
TEMPORAL_UI_VERSION=2.40.1
# ============================================================
# Ports (host-side overrides)
# ============================================================
GATEWAY_PORT_OVERRIDE=8080
TEMPORAL_UI_PORT_OVERRIDE=8088
# ============================================================
# Database Configuration
# ============================================================
POSTGRES_USER=shannon
POSTGRES_PASSWORD=shannon
POSTGRES_DB=shannon
POSTGRES_PORT=5432
POSTGRES_SSLMODE=disable
# ============================================================
# Redis Configuration
# ============================================================
REDIS_URL=redis://redis:6379
REDIS_ADDR=redis:6379
REDIS_TTL_SECONDS=3600
# ============================================================
# Qdrant Configuration
# ============================================================
QDRANT_HOST=qdrant
QDRANT_PORT=6333
# ============================================================
# Temporal Configuration
# ============================================================
TEMPORAL_NAMESPACE=default
# ============================================================
# LLM Service Configuration
# ============================================================
LLM_SERVICE_URL=http://llm-service:8001
DEFAULT_MODEL_TIER=small
MAX_TOKENS=2000
TEMPERATURE=0.7
MAX_TOKENS_PER_REQUEST=10000
MODELS_CONFIG_PATH=/app/config/models.yaml
# ============================================================
# Agent Core Configuration
# ============================================================
# WASI sandbox for secure code execution
SHANNON_USE_WASI_SANDBOX=1
WASI_MEMORY_LIMIT_MB=512
WASI_TIMEOUT_SECONDS=60
RUST_LOG=info
# ============================================================
# Orchestrator / Gateway Configuration
# ============================================================
ORCHESTRATOR_GRPC=orchestrator:50052
ADMIN_SERVER=http://orchestrator:8081
WORKFLOW_SYNTH_BYPASS_SINGLE=true
PROVIDER_RATE_CONTROL_ENABLED=false
# Worker pool sizes per priority queue
WORKER_ACT_CRITICAL=12
WORKER_WF_CRITICAL=12
WORKER_ACT_HIGH=10
WORKER_WF_HIGH=10
WORKER_ACT_NORMAL=8
WORKER_WF_NORMAL=8
WORKER_ACT_LOW=4
WORKER_WF_LOW=4
# ============================================================
# Observability
# ============================================================
OTEL_ENABLED=false
# OTEL_EXPORTER_OTLP_ENDPOINT=localhost:4317
DEBUG=false
ENVIRONMENT=production
# ============================================================
# Resource Limits
# ============================================================
# Gateway
GATEWAY_CPU_LIMIT=1.0
GATEWAY_MEMORY_LIMIT=512M
GATEWAY_CPU_RESERVATION=0.1
GATEWAY_MEMORY_RESERVATION=256M
# Orchestrator
ORCHESTRATOR_CPU_LIMIT=2.0
ORCHESTRATOR_MEMORY_LIMIT=2G
ORCHESTRATOR_CPU_RESERVATION=0.1
ORCHESTRATOR_MEMORY_RESERVATION=512M
# LLM Service
LLM_SERVICE_CPU_LIMIT=2.0
LLM_SERVICE_MEMORY_LIMIT=2G
LLM_SERVICE_CPU_RESERVATION=0.1
LLM_SERVICE_MEMORY_RESERVATION=512M
# Agent Core
AGENT_CORE_CPU_LIMIT=2.0
AGENT_CORE_MEMORY_LIMIT=2G
AGENT_CORE_CPU_RESERVATION=0.1
AGENT_CORE_MEMORY_RESERVATION=512M
# PostgreSQL
POSTGRES_CPU_LIMIT=1.0
POSTGRES_MEMORY_LIMIT=1G
POSTGRES_CPU_RESERVATION=0.1
POSTGRES_MEMORY_RESERVATION=256M
# Redis
REDIS_CPU_LIMIT=0.5
REDIS_MEMORY_LIMIT=512M
REDIS_CPU_RESERVATION=0.1
REDIS_MEMORY_RESERVATION=128M
# Qdrant
QDRANT_CPU_LIMIT=1.0
QDRANT_MEMORY_LIMIT=1G
QDRANT_CPU_RESERVATION=0.1
QDRANT_MEMORY_RESERVATION=256M
# Temporal
TEMPORAL_CPU_LIMIT=1.0
TEMPORAL_MEMORY_LIMIT=1G
TEMPORAL_CPU_RESERVATION=0.1
TEMPORAL_MEMORY_RESERVATION=256M
# Temporal UI (metrics profile)
TEMPORAL_UI_CPU_LIMIT=0.5
TEMPORAL_UI_MEMORY_LIMIT=256M
TEMPORAL_UI_CPU_RESERVATION=0.1
TEMPORAL_UI_MEMORY_RESERVATION=128M
+41
View File
@@ -0,0 +1,41 @@
.PHONY: setup up up-monitoring down logs ps
# Download required config files from the Shannon repository and prepare .env
setup:
@echo "Creating config directory..."
mkdir -p config
@echo "Downloading Shannon configuration files..."
curl -sSL https://raw.githubusercontent.com/Kocoro-lab/Shannon/main/config/models.yaml \
-o config/models.yaml
curl -sSL https://raw.githubusercontent.com/Kocoro-lab/Shannon/main/config/features.yaml \
-o config/features.yaml
@if [ ! -f .env ]; then \
cp .env.example .env; \
echo "Created .env from .env.example. Edit it to add your LLM API keys."; \
else \
echo ".env already exists, skipping copy."; \
fi
@echo ""
@echo "Setup complete! Next steps:"
@echo " 1. Edit .env and set at least one LLM API key (OPENAI_API_KEY or ANTHROPIC_API_KEY)"
@echo " 2. Run: make up"
# Start all services (use `make up-monitoring` to also include the Temporal UI dashboard)
up:
docker compose up -d
# Start all services including Temporal UI monitoring dashboard
up-monitoring:
docker compose --profile metrics up -d
# Stop all services
down:
docker compose down
# View logs for all services
logs:
docker compose logs -f
# Show service status
ps:
docker compose ps
+125
View File
@@ -0,0 +1,125 @@
# Shannon
[English](./README.md) | [中文](./README.zh.md)
This service deploys [Shannon](https://github.com/Kocoro-lab/Shannon), a production-oriented multi-agent orchestration framework. Shannon provides time-travel debugging via Temporal workflows, hard token budgets per task/agent, real-time observability dashboards, WASI sandbox for secure code execution, OPA policy governance, and multi-tenant isolation — all with native support for OpenAI, Anthropic, Google, DeepSeek, and local models.
> **Note:** The `agent-core` service is only built for `linux/amd64`. On Apple Silicon (ARM64), Docker Desktop uses Rosetta emulation automatically.
## Services
- **gateway**: HTTP API gateway — primary entry point for all client requests (port `8080`)
- **orchestrator**: Core workflow orchestration engine powered by Temporal
- **llm-service**: LLM provider abstraction with model routing, fallback, and budget control
- **agent-core**: Rust-based agent execution runtime with WASI sandbox support
- **postgres**: PostgreSQL with pgvector extension for state and vector storage
- **redis**: Redis for caching, job queues, and rate limiting
- **qdrant**: Qdrant vector database for semantic memory
- **temporal**: Temporal workflow engine for durable, fault-tolerant task execution
- **temporal-ui**: Temporal Web UI for workflow debugging (enabled via `metrics` profile)
## Quick Start
### Prerequisites
- Docker & Docker Compose v2
- `curl` (for the setup script)
- At least one LLM API key (OpenAI, Anthropic, Google, etc.)
### 1. Run Setup
```bash
make setup
```
This downloads the required `config/models.yaml` and `config/features.yaml` from the Shannon repository and creates a local `.env` file.
### 2. Add Your LLM API Key
Edit `.env` and set at least one LLM provider key:
```env
# Choose at least one:
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
```
Also update `JWT_SECRET` and set `GATEWAY_SKIP_AUTH=0` for production deployments.
### 3. Start Services
```bash
make up
```
Access the Shannon API at `http://localhost:8080`.
### 4. (Optional) Enable Temporal UI Dashboard
To also start the Temporal workflow debugging UI:
```bash
make up-monitoring
```
Access Temporal UI at `http://localhost:8088`.
## Core Environment Variables
| Variable | Description | Default |
| --------------------------- | ------------------------------------------ | ---------------------------------------------- |
| `SHANNON_VERSION` | Version for all Shannon service images | `v0.3.1` |
| `OPENAI_API_KEY` | OpenAI API key (at least one key required) | `` |
| `ANTHROPIC_API_KEY` | Anthropic API key | `` |
| `GOOGLE_API_KEY` | Google AI API key | `` |
| `JWT_SECRET` | Secret for JWT token signing | `development-only-secret-change-in-production` |
| `GATEWAY_SKIP_AUTH` | Skip auth (set to `0` to enable in prod) | `1` |
| `GATEWAY_PORT_OVERRIDE` | Host port for the API gateway | `8080` |
| `TEMPORAL_UI_PORT_OVERRIDE` | Host port for the Temporal UI | `8088` |
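For production-style deployments, several of these variables are typically overridden together in `.env`. A minimal sketch (the secret is a placeholder; choose your own values):

```env
# Illustrative production overrides for .env
JWT_SECRET=replace-with-a-long-random-secret
GATEWAY_SKIP_AUTH=0
GATEWAY_PORT_OVERRIDE=8080
```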
## Database Configuration
| Variable | Description | Default |
| ------------------- | ------------------------ | ------------ |
| `POSTGRES_VERSION` | pgvector image tag | `pg16` |
| `POSTGRES_USER` | PostgreSQL username | `shannon` |
| `POSTGRES_PASSWORD` | PostgreSQL password | `shannon` |
| `POSTGRES_DB` | PostgreSQL database name | `shannon` |
| `REDIS_VERSION` | Redis image tag | `7.2-alpine` |
| `QDRANT_VERSION` | Qdrant image tag | `v1.17` |
## Agent Configuration
| Variable | Description | Default |
| -------------------------- | -------------------------------------- | --------- |
| `DEFAULT_MODEL_TIER` | Default model complexity tier | `small` |
| `SHANNON_USE_WASI_SANDBOX` | Enable WASI sandbox for code execution | `1` |
| `WASI_MEMORY_LIMIT_MB` | Memory limit for WASI sandbox (MB) | `512` |
| `WASI_TIMEOUT_SECONDS` | Execution timeout for WASI sandbox | `60` |
| `TEMPORAL_NAMESPACE` | Temporal namespace for workflows | `default` |
## Observability (Optional)
| Variable | Description | Default |
| ----------------------------- | ---------------------------- | ------- |
| `OTEL_ENABLED` | Enable OpenTelemetry tracing | `false` |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | OTLP collector endpoint | `` |
## Security Notes
- By default, `GATEWAY_SKIP_AUTH=1` disables JWT authentication for easy local development.
- **For production**, set `GATEWAY_SKIP_AUTH=0` and use a strong `JWT_SECRET`.
- Passwords in `.env.example` are for local development only — always change them before deploying to a shared or public environment.
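One way to generate a strong `JWT_SECRET`, assuming `openssl` is available on the host (a sketch, not part of the stack itself):

```shell
# Produce 64 hex characters (256 bits of entropy) for JWT signing
JWT_SECRET=$(openssl rand -hex 32)
echo "JWT_SECRET=${JWT_SECRET}"
```

Copy the printed line into `.env`, replacing the development default.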
## Configuration Files
Shannon uses YAML configuration files under `./config/`:
- `config/models.yaml` — LLM providers, model tiers, pricing, and routing rules
- `config/features.yaml` — Feature flags, execution modes, and workflow settings
These are downloaded from the official Shannon repository by `make setup` and can be customized as needed.
## License
Shannon is licensed under the [Apache 2.0 License](https://github.com/Kocoro-lab/Shannon/blob/main/LICENSE).
+125
View File
@@ -0,0 +1,125 @@
# Shannon
[English](./README.md) | [中文](./README.zh.md)
This service deploys [Shannon](https://github.com/Kocoro-lab/Shannon), a production-oriented multi-agent orchestration framework. Shannon provides time-travel debugging via Temporal workflows, hard token budgets per task/agent, real-time observability dashboards, WASI sandbox for secure code execution, OPA policy governance, and multi-tenant isolation — all with native support for OpenAI, Anthropic, Google, DeepSeek, and local models.
> **Note:** The `agent-core` service is only built for `linux/amd64`. On Apple Silicon (ARM64), Docker Desktop uses Rosetta emulation automatically.
## Services
- **gateway**: HTTP API gateway — primary entry point for all client requests (port `8080`)
- **orchestrator**: Core workflow orchestration engine powered by Temporal
- **llm-service**: LLM provider abstraction with model routing, fallback, and budget control
- **agent-core**: Rust-based agent execution runtime with WASI sandbox support
- **postgres**: PostgreSQL with pgvector extension for state and vector storage
- **redis**: Redis for caching, job queues, and rate limiting
- **qdrant**: Qdrant vector database for semantic memory
- **temporal**: Temporal workflow engine for durable, fault-tolerant task execution
- **temporal-ui**: Temporal Web UI for workflow debugging (enabled via `metrics` profile)
## Quick Start
### Prerequisites
- Docker & Docker Compose v2
- `curl` (for the setup script)
- At least one LLM API key (OpenAI, Anthropic, Google, etc.)
### 1. Run Setup
```bash
make setup
```
This downloads the required `config/models.yaml` and `config/features.yaml` from the Shannon repository and creates a local `.env` file.
### 2. Add Your LLM API Key
Edit `.env` and set at least one LLM provider key:
```env
# Choose at least one:
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
```
Also update `JWT_SECRET` and set `GATEWAY_SKIP_AUTH=0` for production deployments.
### 3. Start Services
```bash
make up
```
Access the Shannon API at `http://localhost:8080`.
### 4. (Optional) Enable Temporal UI Dashboard
To also start the Temporal workflow debugging UI:
```bash
make up-monitoring
```
Access Temporal UI at `http://localhost:8088`.
## Core Environment Variables
| Variable | Description | Default |
| --------------------------- | ------------------------------------------ | ---------------------------------------------- |
| `SHANNON_VERSION` | Version for all Shannon service images | `v0.3.1` |
| `OPENAI_API_KEY` | OpenAI API key (at least one key required) | `` |
| `ANTHROPIC_API_KEY` | Anthropic API key | `` |
| `GOOGLE_API_KEY` | Google AI API key | `` |
| `JWT_SECRET` | Secret for JWT token signing | `development-only-secret-change-in-production` |
| `GATEWAY_SKIP_AUTH` | Skip auth (set to `0` to enable in prod) | `1` |
| `GATEWAY_PORT_OVERRIDE` | Host port for the API gateway | `8080` |
| `TEMPORAL_UI_PORT_OVERRIDE` | Host port for the Temporal UI | `8088` |
## Database Configuration
| Variable | Description | Default |
| ------------------- | ------------------------ | ------------ |
| `POSTGRES_VERSION` | pgvector image tag | `pg16` |
| `POSTGRES_USER` | PostgreSQL username | `shannon` |
| `POSTGRES_PASSWORD` | PostgreSQL password | `shannon` |
| `POSTGRES_DB` | PostgreSQL database name | `shannon` |
| `REDIS_VERSION` | Redis image tag | `7.2-alpine` |
| `QDRANT_VERSION` | Qdrant image tag | `v1.17` |
## Agent Configuration
| Variable | Description | Default |
| -------------------------- | -------------------------------------- | --------- |
| `DEFAULT_MODEL_TIER` | Default model complexity tier | `small` |
| `SHANNON_USE_WASI_SANDBOX` | Enable WASI sandbox for code execution | `1` |
| `WASI_MEMORY_LIMIT_MB` | Memory limit for WASI sandbox (MB) | `512` |
| `WASI_TIMEOUT_SECONDS` | Execution timeout for WASI sandbox | `60` |
| `TEMPORAL_NAMESPACE` | Temporal namespace for workflows | `default` |
## Observability (Optional)
| Variable | Description | Default |
| ----------------------------- | ---------------------------- | ------- |
| `OTEL_ENABLED` | Enable OpenTelemetry tracing | `false` |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | OTLP collector endpoint | `` |
## Security Notes
- By default, `GATEWAY_SKIP_AUTH=1` disables JWT authentication for easy local development.
- **For production**, set `GATEWAY_SKIP_AUTH=0` and use a strong `JWT_SECRET`.
- Passwords in `.env.example` are for local development only — always change them before deploying to a shared or public environment.
## Configuration Files
Shannon uses YAML configuration files under `./config/`:
- `config/models.yaml` — LLM providers, model tiers, pricing, and routing rules
- `config/features.yaml` — Feature flags, execution modes, and workflow settings
These are downloaded from the official Shannon repository by `make setup` and can be customized as needed.
## License
Shannon is licensed under the [Apache 2.0 License](https://github.com/Kocoro-lab/Shannon/blob/main/LICENSE).
+353
View File
@@ -0,0 +1,353 @@
# Shannon - Production-Oriented Multi-Agent Orchestration Framework
# https://github.com/Kocoro-lab/Shannon
#
# NOTE: Run `make setup` before first launch to download required config files
# and create your .env file, then add at least one LLM API key.
x-defaults: &defaults
restart: unless-stopped
logging:
driver: json-file
options:
max-size: 100m
max-file: '3'
x-shannon-config: &shannon-config
volumes:
- ./config:/app/config:ro
services:
postgres:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}pgvector/pgvector:${POSTGRES_VERSION:-pg16}
environment:
TZ: ${TZ:-UTC}
POSTGRES_USER: ${POSTGRES_USER:-shannon}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-shannon}
POSTGRES_DB: ${POSTGRES_DB:-shannon}
volumes:
- postgres_data:/var/lib/postgresql/data
healthcheck:
test: [CMD-SHELL, 'pg_isready -U ${POSTGRES_USER:-shannon} -d ${POSTGRES_DB:-shannon}']
interval: 5s
timeout: 5s
retries: 20
start_period: 15s
deploy:
resources:
limits:
cpus: ${POSTGRES_CPU_LIMIT:-1.0}
memory: ${POSTGRES_MEMORY_LIMIT:-1G}
reservations:
cpus: ${POSTGRES_CPU_RESERVATION:-0.1}
memory: ${POSTGRES_MEMORY_RESERVATION:-256M}
redis:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}redis:${REDIS_VERSION:-7.2-alpine}
volumes:
- redis_data:/data
healthcheck:
test: [CMD, redis-cli, ping]
interval: 5s
timeout: 5s
retries: 10
start_period: 5s
deploy:
resources:
limits:
cpus: ${REDIS_CPU_LIMIT:-0.5}
memory: ${REDIS_MEMORY_LIMIT:-512M}
reservations:
cpus: ${REDIS_CPU_RESERVATION:-0.1}
memory: ${REDIS_MEMORY_RESERVATION:-128M}
qdrant:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}qdrant/qdrant:${QDRANT_VERSION:-v1.17}
environment:
TZ: ${TZ:-UTC}
volumes:
- qdrant_data:/qdrant/storage
healthcheck:
test: [CMD-SHELL, 'wget -qO- http://localhost:6333/health | grep -q ok || exit 1']
interval: 10s
timeout: 5s
retries: 10
start_period: 15s
deploy:
resources:
limits:
cpus: ${QDRANT_CPU_LIMIT:-1.0}
memory: ${QDRANT_MEMORY_LIMIT:-1G}
reservations:
cpus: ${QDRANT_CPU_RESERVATION:-0.1}
memory: ${QDRANT_MEMORY_RESERVATION:-256M}
temporal:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}temporalio/auto-setup:${TEMPORAL_VERSION:-1.28.3}
environment:
TZ: ${TZ:-UTC}
DB: postgres12
DB_PORT: ${POSTGRES_PORT:-5432}
POSTGRES_USER: ${POSTGRES_USER:-shannon}
POSTGRES_PWD: ${POSTGRES_PASSWORD:-shannon}
POSTGRES_SEEDS: postgres
depends_on:
postgres:
condition: service_healthy
healthcheck:
test: [CMD-SHELL, 'temporal operator cluster health --address localhost:7233 | grep -q SERVING || exit 1']
interval: 15s
timeout: 10s
retries: 10
start_period: 60s
deploy:
resources:
limits:
cpus: ${TEMPORAL_CPU_LIMIT:-1.0}
memory: ${TEMPORAL_MEMORY_LIMIT:-1G}
reservations:
cpus: ${TEMPORAL_CPU_RESERVATION:-0.1}
memory: ${TEMPORAL_MEMORY_RESERVATION:-256M}
temporal-ui:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}temporalio/ui:${TEMPORAL_UI_VERSION:-2.40.1}
environment:
TZ: ${TZ:-UTC}
TEMPORAL_ADDRESS: temporal:7233
ports:
- '${TEMPORAL_UI_PORT_OVERRIDE:-8088}:8080'
depends_on:
temporal:
condition: service_healthy
healthcheck:
test: [CMD-SHELL, 'wget -qO- http://localhost:8080 > /dev/null || exit 1']
interval: 15s
timeout: 5s
retries: 5
start_period: 20s
profiles:
- metrics
deploy:
resources:
limits:
cpus: ${TEMPORAL_UI_CPU_LIMIT:-0.5}
memory: ${TEMPORAL_UI_MEMORY_LIMIT:-256M}
reservations:
cpus: ${TEMPORAL_UI_CPU_RESERVATION:-0.1}
memory: ${TEMPORAL_UI_MEMORY_RESERVATION:-128M}
llm-service:
<<: [*defaults, *shannon-config]
image: ${GLOBAL_REGISTRY:-}waylandzhang/llm-service:${SHANNON_VERSION:-v0.3.1}
environment:
TZ: ${TZ:-UTC}
# LLM API Keys (at least one is required)
OPENAI_API_KEY: ${OPENAI_API_KEY:-}
ANTHROPIC_API_KEY: ${ANTHROPIC_API_KEY:-}
GOOGLE_API_KEY: ${GOOGLE_API_KEY:-}
XAI_API_KEY: ${XAI_API_KEY:-}
DEEPSEEK_API_KEY: ${DEEPSEEK_API_KEY:-}
# Optional search/tool API keys
SERPAPI_API_KEY: ${SERPAPI_API_KEY:-}
FIRECRAWL_API_KEY: ${FIRECRAWL_API_KEY:-}
# Internal service configuration
POSTGRES_HOST: postgres
POSTGRES_PORT: ${POSTGRES_PORT:-5432}
POSTGRES_USER: ${POSTGRES_USER:-shannon}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-shannon}
POSTGRES_DB: ${POSTGRES_DB:-shannon}
POSTGRES_SSLMODE: ${POSTGRES_SSLMODE:-disable}
REDIS_URL: ${REDIS_URL:-redis://redis:6379}
REDIS_ADDR: ${REDIS_ADDR:-redis:6379}
QDRANT_HOST: ${QDRANT_HOST:-qdrant}
QDRANT_PORT: ${QDRANT_PORT:-6333}
AGENT_CORE_ADDR: agent-core:50051
# Config paths
LLM_CONFIG_PATH: /app/config
MODELS_CONFIG_PATH: ${MODELS_CONFIG_PATH:-/app/config/models.yaml}
# Model selection
DEFAULT_MODEL_TIER: ${DEFAULT_MODEL_TIER:-small}
MAX_TOKENS: ${MAX_TOKENS:-2000}
TEMPERATURE: ${TEMPERATURE:-0.7}
MAX_TOKENS_PER_REQUEST: ${MAX_TOKENS_PER_REQUEST:-10000}
# Telemetry
OTEL_ENABLED: ${OTEL_ENABLED:-false}
DEBUG: ${DEBUG:-false}
ENVIRONMENT: ${ENVIRONMENT:-production}
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
qdrant:
condition: service_healthy
agent-core:
condition: service_started
healthcheck:
test: [CMD-SHELL, 'wget -qO- http://localhost:8001/health > /dev/null || exit 1']
interval: 15s
timeout: 5s
retries: 10
start_period: 30s
deploy:
resources:
limits:
cpus: ${LLM_SERVICE_CPU_LIMIT:-2.0}
memory: ${LLM_SERVICE_MEMORY_LIMIT:-2G}
reservations:
cpus: ${LLM_SERVICE_CPU_RESERVATION:-0.1}
memory: ${LLM_SERVICE_MEMORY_RESERVATION:-512M}
agent-core:
<<: [*defaults, *shannon-config]
# Note: agent-core is only built for linux/amd64.
# On Apple Silicon (ARM64), Docker Desktop uses Rosetta emulation automatically.
image: ${GLOBAL_REGISTRY:-}waylandzhang/agent-core:${SHANNON_VERSION:-v0.3.1}
platform: linux/amd64
environment:
TZ: ${TZ:-UTC}
RUST_LOG: ${RUST_LOG:-info}
CONFIG_PATH: /app/config/features.yaml
WASI_MEMORY_LIMIT_MB: ${WASI_MEMORY_LIMIT_MB:-512}
WASI_TIMEOUT_SECONDS: ${WASI_TIMEOUT_SECONDS:-60}
SHANNON_USE_WASI_SANDBOX: ${SHANNON_USE_WASI_SANDBOX:-1}
ENFORCE_TIMEOUT_SECONDS: ${ENFORCE_TIMEOUT_SECONDS:-300}
ENFORCE_MAX_TOKENS: ${ENFORCE_MAX_TOKENS:-32768}
OTEL_ENABLED: ${OTEL_ENABLED:-false}
volumes:
- ./config:/app/config:ro
- shannon_sessions:/app/sessions
healthcheck:
test: [CMD-SHELL, 'pgrep -x shannon-agent-core > /dev/null || exit 1']
interval: 15s
timeout: 5s
retries: 5
start_period: 20s
deploy:
resources:
limits:
cpus: ${AGENT_CORE_CPU_LIMIT:-2.0}
memory: ${AGENT_CORE_MEMORY_LIMIT:-2G}
reservations:
cpus: ${AGENT_CORE_CPU_RESERVATION:-0.1}
memory: ${AGENT_CORE_MEMORY_RESERVATION:-512M}
orchestrator:
<<: [*defaults, *shannon-config]
image: ${GLOBAL_REGISTRY:-}waylandzhang/orchestrator:${SHANNON_VERSION:-v0.3.1}
environment:
TZ: ${TZ:-UTC}
# Temporal workflow engine
TEMPORAL_HOST_PORT: temporal:7233
TEMPORAL_NAMESPACE: ${TEMPORAL_NAMESPACE:-default}
# Internal service URLs
LLM_SERVICE_URL: ${LLM_SERVICE_URL:-http://llm-service:8001}
QDRANT_HOST: ${QDRANT_HOST:-qdrant}
QDRANT_PORT: ${QDRANT_PORT:-6333}
# Database and cache
POSTGRES_HOST: postgres
POSTGRES_PORT: ${POSTGRES_PORT:-5432}
POSTGRES_USER: ${POSTGRES_USER:-shannon}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-shannon}
POSTGRES_DB: ${POSTGRES_DB:-shannon}
POSTGRES_SSLMODE: ${POSTGRES_SSLMODE:-disable}
REDIS_ADDR: ${REDIS_ADDR:-redis:6379}
REDIS_URL: ${REDIS_URL:-redis://redis:6379}
REDIS_TTL_SECONDS: ${REDIS_TTL_SECONDS:-3600}
# Worker pool sizing
WORKER_ACT_CRITICAL: ${WORKER_ACT_CRITICAL:-12}
WORKER_WF_CRITICAL: ${WORKER_WF_CRITICAL:-12}
WORKER_ACT_HIGH: ${WORKER_ACT_HIGH:-10}
WORKER_WF_HIGH: ${WORKER_WF_HIGH:-10}
WORKER_ACT_NORMAL: ${WORKER_ACT_NORMAL:-8}
WORKER_WF_NORMAL: ${WORKER_WF_NORMAL:-8}
WORKER_ACT_LOW: ${WORKER_ACT_LOW:-4}
WORKER_WF_LOW: ${WORKER_WF_LOW:-4}
# Workflow settings
WORKFLOW_SYNTH_BYPASS_SINGLE: ${WORKFLOW_SYNTH_BYPASS_SINGLE:-true}
PROVIDER_RATE_CONTROL_ENABLED: ${PROVIDER_RATE_CONTROL_ENABLED:-false}
# Security
JWT_SECRET: ${JWT_SECRET:-development-only-secret-change-in-production}
# Telemetry
OTEL_ENABLED: ${OTEL_ENABLED:-false}
DEBUG: ${DEBUG:-false}
ENVIRONMENT: ${ENVIRONMENT:-production}
depends_on:
temporal:
condition: service_healthy
redis:
condition: service_healthy
postgres:
condition: service_healthy
llm-service:
condition: service_healthy
agent-core:
condition: service_started
healthcheck:
test: [CMD-SHELL, 'wget -qO- http://localhost:8081/health > /dev/null || exit 1']
interval: 15s
timeout: 5s
retries: 10
start_period: 60s
deploy:
resources:
limits:
cpus: ${ORCHESTRATOR_CPU_LIMIT:-2.0}
memory: ${ORCHESTRATOR_MEMORY_LIMIT:-2G}
reservations:
cpus: ${ORCHESTRATOR_CPU_RESERVATION:-0.1}
memory: ${ORCHESTRATOR_MEMORY_RESERVATION:-512M}
gateway:
<<: [*defaults, *shannon-config]
image: ${GLOBAL_REGISTRY:-}waylandzhang/gateway:${SHANNON_VERSION:-v0.3.1}
environment:
TZ: ${TZ:-UTC}
PORT: ${GATEWAY_PORT:-8080}
ORCHESTRATOR_GRPC: ${ORCHESTRATOR_GRPC:-orchestrator:50052}
ADMIN_SERVER: ${ADMIN_SERVER:-http://orchestrator:8081}
# Database and cache
POSTGRES_HOST: postgres
POSTGRES_PORT: ${POSTGRES_PORT:-5432}
POSTGRES_USER: ${POSTGRES_USER:-shannon}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-shannon}
POSTGRES_DB: ${POSTGRES_DB:-shannon}
POSTGRES_SSLMODE: ${POSTGRES_SSLMODE:-disable}
REDIS_URL: ${REDIS_URL:-redis://redis:6379}
# Security
JWT_SECRET: ${JWT_SECRET:-development-only-secret-change-in-production}
# Set GATEWAY_SKIP_AUTH=0 to enable authentication in production
GATEWAY_SKIP_AUTH: ${GATEWAY_SKIP_AUTH:-1}
ports:
- '${GATEWAY_PORT_OVERRIDE:-8080}:8080'
depends_on:
orchestrator:
condition: service_healthy
redis:
condition: service_healthy
postgres:
condition: service_healthy
healthcheck:
test: [CMD-SHELL, 'wget -qO- http://localhost:8080/health > /dev/null || exit 1']
interval: 15s
timeout: 5s
retries: 10
start_period: 30s
deploy:
resources:
limits:
cpus: ${GATEWAY_CPU_LIMIT:-1.0}
memory: ${GATEWAY_MEMORY_LIMIT:-512M}
reservations:
cpus: ${GATEWAY_CPU_RESERVATION:-0.1}
memory: ${GATEWAY_MEMORY_RESERVATION:-256M}
volumes:
postgres_data:
redis_data:
qdrant_data:
shannon_sessions:
+4 -4
View File
@@ -70,7 +70,7 @@ POSTGRES_DB=simstudio
 # Resource Limits - Main Application
 # -----------------------------------------------------------------------------
 SIM_CPU_LIMIT=4.0
-SIM_CPU_RESERVATION=2.0
+SIM_CPU_RESERVATION=0.1
 SIM_MEMORY_LIMIT=8G
 SIM_MEMORY_RESERVATION=4G
@@ -78,7 +78,7 @@ SIM_MEMORY_RESERVATION=4G
 # Resource Limits - Realtime Server
 # -----------------------------------------------------------------------------
 SIM_REALTIME_CPU_LIMIT=2.0
-SIM_REALTIME_CPU_RESERVATION=1.0
+SIM_REALTIME_CPU_RESERVATION=0.1
 SIM_REALTIME_MEMORY_LIMIT=4G
 SIM_REALTIME_MEMORY_RESERVATION=2G
@@ -86,7 +86,7 @@ SIM_REALTIME_MEMORY_RESERVATION=2G
 # Resource Limits - Database Migrations
 # -----------------------------------------------------------------------------
 SIM_MIGRATIONS_CPU_LIMIT=1.0
-SIM_MIGRATIONS_CPU_RESERVATION=0.5
+SIM_MIGRATIONS_CPU_RESERVATION=0.1
 SIM_MIGRATIONS_MEMORY_LIMIT=512M
 SIM_MIGRATIONS_MEMORY_RESERVATION=256M
@@ -94,7 +94,7 @@ SIM_MIGRATIONS_MEMORY_RESERVATION=256M
 # Resource Limits - PostgreSQL
 # -----------------------------------------------------------------------------
 POSTGRES_CPU_LIMIT=2.0
-POSTGRES_CPU_RESERVATION=1.0
+POSTGRES_CPU_RESERVATION=0.1
 POSTGRES_MEMORY_LIMIT=2G
 POSTGRES_MEMORY_RESERVATION=1G
+4 -4
View File
@@ -48,7 +48,7 @@ services:
         cpus: ${SIM_CPU_LIMIT:-4.0}
         memory: ${SIM_MEMORY_LIMIT:-8G}
       reservations:
-        cpus: ${SIM_CPU_RESERVATION:-2.0}
+        cpus: ${SIM_CPU_RESERVATION:-0.1}
         memory: ${SIM_MEMORY_RESERVATION:-4G}

   realtime:
@@ -77,7 +77,7 @@ services:
         cpus: ${SIM_REALTIME_CPU_LIMIT:-2.0}
         memory: ${SIM_REALTIME_MEMORY_LIMIT:-4G}
       reservations:
-        cpus: ${SIM_REALTIME_CPU_RESERVATION:-1.0}
+        cpus: ${SIM_REALTIME_CPU_RESERVATION:-0.1}
         memory: ${SIM_REALTIME_MEMORY_RESERVATION:-2G}

   migrations:
@@ -102,7 +102,7 @@ services:
         cpus: ${SIM_MIGRATIONS_CPU_LIMIT:-1.0}
         memory: ${SIM_MIGRATIONS_MEMORY_LIMIT:-512M}
       reservations:
-        cpus: ${SIM_MIGRATIONS_CPU_RESERVATION:-0.5}
+        cpus: ${SIM_MIGRATIONS_CPU_RESERVATION:-0.1}
         memory: ${SIM_MIGRATIONS_MEMORY_RESERVATION:-256M}

   db:
@@ -129,7 +129,7 @@ services:
         cpus: ${POSTGRES_CPU_LIMIT:-2.0}
         memory: ${POSTGRES_MEMORY_LIMIT:-2G}
       reservations:
-        cpus: ${POSTGRES_CPU_RESERVATION:-1.0}
+        cpus: ${POSTGRES_CPU_RESERVATION:-0.1}
         memory: ${POSTGRES_MEMORY_RESERVATION:-1G}

 volumes:
+48
@@ -0,0 +1,48 @@
# Global Registry Prefix (optional)
# GLOBAL_REGISTRY=
# Service Versions
SKYVERN_VERSION=v1.0.31
POSTGRES_VERSION=15
# Timezone
TZ=UTC
# Host ports
SKYVERN_PORT_OVERRIDE=8000
SKYVERN_UI_PORT_OVERRIDE=8080
# Skyvern API Key (CHANGEME: set a strong random key for the REST API)
SKYVERN_API_KEY=changeme_skyvern_api_key_CHANGEME
# Browser type: chromium-headless (default), chromium, or chrome
BROWSER_TYPE=chromium-headless
# LLM Provider API Keys (at least one is required for task automation)
# OPENAI_API_KEY=sk-...
# ANTHROPIC_API_KEY=sk-ant-...
# PostgreSQL password
POSTGRES_PASSWORD=skyvern
# UI → API connection (must be the address reachable from the user's browser)
VITE_API_BASE_URL=http://localhost:8000
VITE_WSS_BASE_URL=ws://localhost:8000
# Resource Limits - Skyvern backend (includes Playwright + Chromium)
SKYVERN_CPU_LIMIT=2.0
SKYVERN_MEMORY_LIMIT=4G
SKYVERN_CPU_RESERVATION=0.1
SKYVERN_MEMORY_RESERVATION=1G
# Resource Limits - Skyvern UI
SKYVERN_UI_CPU_LIMIT=0.5
SKYVERN_UI_MEMORY_LIMIT=256M
SKYVERN_UI_CPU_RESERVATION=0.1
SKYVERN_UI_MEMORY_RESERVATION=64M
# Resource Limits - PostgreSQL
POSTGRES_CPU_LIMIT=1.0
POSTGRES_MEMORY_LIMIT=1G
POSTGRES_CPU_RESERVATION=0.1
POSTGRES_MEMORY_RESERVATION=256M
+84
@@ -0,0 +1,84 @@
# Skyvern
[English](./README.md) | [中文](./README.zh.md)
Quick start: <https://docs.skyvern.com>.
This service deploys Skyvern, an AI-powered browser automation platform that uses LLMs and computer vision to execute tasks in web browsers. It can fill forms, navigate websites, and complete multi-step workflows without custom scripts.
## Services
- **skyvern**: The Skyvern API server with embedded Playwright + Chromium.
- **skyvern-ui**: React-based web UI for task management and browser session viewing.
- **postgres**: PostgreSQL database for task history and state.
## Quick Start
1. Copy `.env.example` to `.env`:
```bash
cp .env.example .env
```
2. Set your LLM API key and change the Skyvern API key in `.env`:
```
SKYVERN_API_KEY=your-strong-api-key
OPENAI_API_KEY=sk-...
```
3. Start the services:
```bash
docker compose up -d
```
4. Open `http://localhost:8080` for the web UI, or send tasks to the API at `http://localhost:8000`.
## Core Environment Variables
| Variable | Description | Default |
| ----------------------- | -------------------------------------------------------------------- | -------------------- |
| `SKYVERN_VERSION` | Image version (applies to both skyvern and skyvern-ui) | `v1.0.31` |
| `SKYVERN_PORT_OVERRIDE` | Host port for the API | `8000` |
| `SKYVERN_UI_PORT_OVERRIDE` | Host port for the web UI | `8080` |
| `SKYVERN_API_KEY` | API key for authenticating requests to the Skyvern server — **CHANGEME** | placeholder |
| `BROWSER_TYPE` | Browser type: `chromium-headless`, `chromium`, or `chrome` | `chromium-headless` |
| `OPENAI_API_KEY` | OpenAI API key (recommended for best results) | *(empty)* |
| `ANTHROPIC_API_KEY` | Anthropic API key (alternative to OpenAI) | *(empty)* |
| `POSTGRES_PASSWORD` | PostgreSQL password | `skyvern` |
| `VITE_API_BASE_URL` | Skyvern API URL as seen from the user's browser | `http://localhost:8000` |
| `VITE_WSS_BASE_URL` | WebSocket URL for live session streaming | `ws://localhost:8000` |
## Volumes
- `skyvern_artifacts`: Downloaded files and task artifacts.
- `skyvern_videos`: Browser session recordings.
- `skyvern_har`: HTTP Archive (HAR) files for debugging.
- `skyvern_postgres_data`: PostgreSQL data persistence.
## Ports
- **8000**: Skyvern REST API
- **8080**: Skyvern web UI
## Resource Requirements
| Service | CPU Limit | Memory Limit |
| ---------- | --------- | ------------ |
| skyvern | 2 | 4 GB |
| skyvern-ui | 0.5 | 256 MB |
| postgres | 1 | 1 GB |
The `skyvern` service includes Playwright and Chromium. Allocate **4+ GB RAM** and **2+ CPU cores** for reliable browser automation.
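Browser-heavy workloads may need more than the defaults. All limits are overridable through `.env`; for example (illustrative values, not recommendations):

```
SKYVERN_CPU_LIMIT=4.0
SKYVERN_MEMORY_LIMIT=8G
```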
## Notes
- Database migrations run automatically on startup via Alembic.
- If deploying behind a reverse proxy, update `VITE_API_BASE_URL` and `VITE_WSS_BASE_URL` to your public domain.
- The `SKYVERN_API_KEY` must be included in API requests as the `x-api-key` header.
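A minimal authenticated request can be made with `curl`. The heartbeat path below is the one used by the container's own health check; the header placement is the point of the example (the heartbeat endpoint itself may not enforce authentication):

```bash
curl -sf -H "x-api-key: $SKYVERN_API_KEY" http://localhost:8000/api/v1/heartbeat
```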
## Documentation
- [Skyvern Docs](https://docs.skyvern.com)
- [GitHub](https://github.com/Skyvern-AI/skyvern)
+84
@@ -0,0 +1,84 @@
# Skyvern
[English](./README.md) | [中文](./README.zh.md)
快速开始:<https://docs.skyvern.com>。
此服务用于部署 Skyvern,一个由 AI 驱动的浏览器自动化平台,使用 LLM 和计算机视觉在 Web 浏览器中执行任务。无需编写自定义脚本,即可填写表单、导航网站和完成多步骤工作流。
## 服务
- **skyvern**:集成了 Playwright + Chromium 的 Skyvern API 服务器。
- **skyvern-ui**:用于任务管理和浏览器会话查看的 React Web UI。
- **postgres**:PostgreSQL 数据库,用于存储任务历史和状态。
## 快速开始
1. 将 `.env.example` 复制为 `.env`:
```bash
cp .env.example .env
```
2. 在 `.env` 中设置 LLM API Key 并更改 Skyvern API Key:
```
SKYVERN_API_KEY=your-strong-api-key
OPENAI_API_KEY=sk-...
```
3. 启动服务:
```bash
docker compose up -d
```
4. 打开 `http://localhost:8080` 访问 Web UI,或通过 `http://localhost:8000` 向 API 发送任务。
## 核心环境变量
| 变量 | 说明 | 默认值 |
| -------------------------- | ------------------------------------------------------- | ------------------------ |
| `SKYVERN_VERSION`          | 镜像版本(同时适用于 skyvern 和 skyvern-ui)            | `v1.0.31`                |
| `SKYVERN_PORT_OVERRIDE` | API 宿主机端口 | `8000` |
| `SKYVERN_UI_PORT_OVERRIDE` | Web UI 宿主机端口 | `8080` |
| `SKYVERN_API_KEY` | 请求 Skyvern 服务器的认证 API Key——**请修改** | 占位符 |
| `BROWSER_TYPE` | 浏览器类型:`chromium-headless`、`chromium` 或 `chrome` | `chromium-headless` |
| `OPENAI_API_KEY` | OpenAI API Key(推荐,效果最佳) | *(空)* |
| `ANTHROPIC_API_KEY`        | Anthropic API Key(OpenAI 的替代方案)                  | *(空)*                   |
| `POSTGRES_PASSWORD` | PostgreSQL 密码 | `skyvern` |
| `VITE_API_BASE_URL` | 从用户浏览器访问的 Skyvern API URL | `http://localhost:8000` |
| `VITE_WSS_BASE_URL` | 实时会话流的 WebSocket URL | `ws://localhost:8000` |
## 数据卷
- `skyvern_artifacts`:下载的文件和任务产物。
- `skyvern_videos`:浏览器会话录像。
- `skyvern_har`:用于调试的 HTTP 存档(HAR)文件。
- `skyvern_postgres_data`:PostgreSQL 数据持久化。
## 端口
- **8000**:Skyvern REST API
- **8080**:Skyvern Web UI
## 资源需求
| 服务 | CPU 限制 | 内存限制 |
| ---------- | -------- | -------- |
| skyvern | 2 | 4 GB |
| skyvern-ui | 0.5 | 256 MB |
| postgres | 1 | 1 GB |
`skyvern` 服务包含 Playwright 和 Chromium,需分配 **4+ GB RAM** 和 **2+ CPU 核心**以保证浏览器自动化的稳定运行。
## 说明
- 数据库迁移通过 Alembic 在启动时自动运行。
- 如果部署在反向代理后,请将 `VITE_API_BASE_URL` 和 `VITE_WSS_BASE_URL` 更新为你的公网域名。
- API 请求中必须在 `x-api-key` 请求头中包含 `SKYVERN_API_KEY`。
## 文档
- [Skyvern 文档](https://docs.skyvern.com)
- [GitHub](https://github.com/Skyvern-AI/skyvern)
+110
@@ -0,0 +1,110 @@
# Change SKYVERN_API_KEY before exposing this stack externally.
# Fields marked with CHANGEME must be updated for any non-local deployment.
x-defaults: &defaults
restart: unless-stopped
logging:
driver: json-file
options:
max-size: 100m
max-file: '3'
services:
skyvern:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}skyvern/skyvern:${SKYVERN_VERSION:-v1.0.31}
depends_on:
postgres:
condition: service_healthy
ports:
- '${SKYVERN_PORT_OVERRIDE:-8000}:8000'
volumes:
- skyvern_artifacts:/data/artifacts
- skyvern_videos:/data/videos
- skyvern_har:/data/har
environment:
- TZ=${TZ:-UTC}
- DATABASE_STRING=postgresql+psycopg2://skyvern:${POSTGRES_PASSWORD:-skyvern}@postgres:5432/skyvern
- SKYVERN_API_KEY=${SKYVERN_API_KEY:-changeme_skyvern_api_key_CHANGEME}
- BROWSER_TYPE=${BROWSER_TYPE:-chromium-headless}
- VIDEO_PATH=/data/videos
- HAR_PATH=/data/har
- ARTIFACT_STORAGE_PATH=/data/artifacts
- OPENAI_API_KEY=${OPENAI_API_KEY:-}
- ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
healthcheck:
test:
- CMD
- python3
- -c
- "import urllib.request; urllib.request.urlopen('http://localhost:8000/api/v1/heartbeat')"
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
deploy:
resources:
limits:
cpus: ${SKYVERN_CPU_LIMIT:-2.0}
memory: ${SKYVERN_MEMORY_LIMIT:-4G}
reservations:
cpus: ${SKYVERN_CPU_RESERVATION:-0.1}
memory: ${SKYVERN_MEMORY_RESERVATION:-1G}
skyvern-ui:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}skyvern/skyvern-ui:${SKYVERN_VERSION:-v1.0.31}
depends_on:
skyvern:
condition: service_healthy
ports:
- '${SKYVERN_UI_PORT_OVERRIDE:-8080}:8080'
environment:
- TZ=${TZ:-UTC}
- VITE_API_BASE_URL=${VITE_API_BASE_URL:-http://localhost:8000}
- VITE_WSS_BASE_URL=${VITE_WSS_BASE_URL:-ws://localhost:8000}
healthcheck:
test: [CMD-SHELL, 'curl -sf http://localhost:8080/ > /dev/null 2>&1 || exit 1']
interval: 30s
timeout: 10s
retries: 3
start_period: 15s
deploy:
resources:
limits:
cpus: ${SKYVERN_UI_CPU_LIMIT:-0.5}
memory: ${SKYVERN_UI_MEMORY_LIMIT:-256M}
reservations:
cpus: ${SKYVERN_UI_CPU_RESERVATION:-0.1}
memory: ${SKYVERN_UI_MEMORY_RESERVATION:-64M}
postgres:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}postgres:${POSTGRES_VERSION:-15}
environment:
- POSTGRES_USER=skyvern
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-skyvern}
- POSTGRES_DB=skyvern
- TZ=UTC
- PGTZ=UTC
volumes:
- skyvern_postgres_data:/var/lib/postgresql/data
healthcheck:
test: [CMD-SHELL, pg_isready -U skyvern]
interval: 5s
timeout: 5s
retries: 10
deploy:
resources:
limits:
cpus: ${POSTGRES_CPU_LIMIT:-1.0}
memory: ${POSTGRES_MEMORY_LIMIT:-1G}
reservations:
cpus: ${POSTGRES_CPU_RESERVATION:-0.1}
memory: ${POSTGRES_MEMORY_RESERVATION:-256M}
volumes:
skyvern_artifacts:
skyvern_videos:
skyvern_har:
skyvern_postgres_data:
@@ -26,7 +26,7 @@ services:
cpus: ${SD_WEBUI_CPU_LIMIT:-4.0} cpus: ${SD_WEBUI_CPU_LIMIT:-4.0}
memory: ${SD_WEBUI_MEMORY_LIMIT:-16G} memory: ${SD_WEBUI_MEMORY_LIMIT:-16G}
reservations: reservations:
cpus: ${SD_WEBUI_CPU_RESERVATION:-2.0} cpus: ${SD_WEBUI_CPU_RESERVATION:-0.1}
memory: ${SD_WEBUI_MEMORY_RESERVATION:-8G} memory: ${SD_WEBUI_MEMORY_RESERVATION:-8G}
devices: devices:
- driver: nvidia - driver: nvidia
+1 -1
@@ -2,7 +2,7 @@
STIRLING_VERSION="latest" STIRLING_VERSION="latest"
# Port override # Port override
PORT_OVERRIDE=8080 STIRLING_PORT_OVERRIDE=8080
# Security settings # Security settings
ENABLE_SECURITY="false" ENABLE_SECURITY="false"
+1 -1
@@ -13,7 +13,7 @@ This service deploys Stirling-PDF, a locally hosted web-based PDF manipulation t
| Variable Name | Description | Default Value | | Variable Name | Description | Default Value |
| -------------------- | ------------------------------------- | -------------- | | -------------------- | ------------------------------------- | -------------- |
| STIRLING_VERSION | Stirling-PDF image version | `latest` | | STIRLING_VERSION | Stirling-PDF image version | `latest` |
| PORT_OVERRIDE | Host port mapping | `8080` | | STIRLING_PORT_OVERRIDE | Host port mapping | `8080` |
| ENABLE_SECURITY | Enable security features | `false` | | ENABLE_SECURITY | Enable security features | `false` |
| ENABLE_LOGIN | Enable login functionality | `false` | | ENABLE_LOGIN | Enable login functionality | `false` |
| INITIAL_USERNAME | Initial admin username | `admin` | | INITIAL_USERNAME | Initial admin username | `admin` |
+1 -1
@@ -13,7 +13,7 @@
| 变量名 | 说明 | 默认值 | | 变量名 | 说明 | 默认值 |
| -------------------- | ---------------------- | -------------- | | -------------------- | ---------------------- | -------------- |
| STIRLING_VERSION | Stirling-PDF 镜像版本 | `latest` | | STIRLING_VERSION | Stirling-PDF 镜像版本 | `latest` |
| PORT_OVERRIDE | 主机端口映射 | `8080` | | STIRLING_PORT_OVERRIDE | 主机端口映射 | `8080` |
| ENABLE_SECURITY | 启用安全功能 | `false` | | ENABLE_SECURITY | 启用安全功能 | `false` |
| ENABLE_LOGIN | 启用登录功能 | `false` | | ENABLE_LOGIN | 启用登录功能 | `false` |
| INITIAL_USERNAME | 初始管理员用户名 | `admin` | | INITIAL_USERNAME | 初始管理员用户名 | `admin` |
+2 -2
@@ -11,7 +11,7 @@ services:
<<: *defaults <<: *defaults
image: ${GLOBAL_REGISTRY:-}stirlingtools/stirling-pdf:${STIRLING_VERSION:-latest} image: ${GLOBAL_REGISTRY:-}stirlingtools/stirling-pdf:${STIRLING_VERSION:-latest}
ports: ports:
- '${PORT_OVERRIDE:-8080}:8080' - '${STIRLING_PORT_OVERRIDE:-8080}:8080'
volumes: volumes:
- stirling_trainingData:/usr/share/tessdata - stirling_trainingData:/usr/share/tessdata
- stirling_configs:/configs - stirling_configs:/configs
@@ -41,7 +41,7 @@ services:
cpus: ${STIRLING_CPU_LIMIT:-2.0} cpus: ${STIRLING_CPU_LIMIT:-2.0}
memory: ${STIRLING_MEMORY_LIMIT:-4G} memory: ${STIRLING_MEMORY_LIMIT:-4G}
reservations: reservations:
cpus: ${STIRLING_CPU_RESERVATION:-1.0} cpus: ${STIRLING_CPU_RESERVATION:-0.1}
memory: ${STIRLING_MEMORY_RESERVATION:-2G} memory: ${STIRLING_MEMORY_RESERVATION:-2G}
healthcheck: healthcheck:
test: [CMD, wget, --no-verbose, --tries=1, --spider, 'http://localhost:8080/'] test: [CMD, wget, --no-verbose, --tries=1, --spider, 'http://localhost:8080/']
+36
@@ -0,0 +1,36 @@
# --- Image / build ---
# Override prefix when pushing to a private registry (e.g. registry.example.com/)
GLOBAL_REGISTRY=
# Tag of the locally built image
CUBE_SANDBOX_VERSION=0.1.7
# Base image for the wrapper container.
# Default works globally. In mainland China, override with a regional mirror:
# UBUNTU_IMAGE=docker.m.daocloud.io/library/ubuntu:22.04
# UBUNTU_IMAGE=ccr.ccs.tencentyun.com/library/ubuntu:22.04
UBUNTU_IMAGE=ubuntu:22.04
# --- Runtime ---
# Timezone inside the container
TZ=Asia/Shanghai
# Mirror used by the upstream installer:
# cn -> https://cnb.cool/CubeSandbox + Tencent Cloud container registry (recommended in China)
# gh -> https://github.com (slower in China but works elsewhere)
CUBE_MIRROR=cn
# Size of the XFS-formatted loop file mounted at /data/cubelet inside the
# container. install.sh hard-requires XFS; the file lives on the cube_data
# named volume so it persists across container restarts.
CUBE_XFS_SIZE=50G
# Set to 1 to force re-running install.sh on next start
CUBE_FORCE_REINSTALL=0
# --- Resources ---
# CubeSandbox runs MySQL + Redis + CubeProxy + CoreDNS + CubeMaster + CubeAPI +
# Cubelet + network-agent inside the wrapper container, then spawns MicroVMs.
# Give it enough headroom; 16 GiB / 8 vCPU is a comfortable single-node default.
CUBE_CPU_LIMIT=8.0
CUBE_MEMORY_LIMIT=16G
CUBE_CPU_RESERVATION=0.1
CUBE_MEMORY_RESERVATION=8G
+134
@@ -0,0 +1,134 @@
# CubeSandbox in a privileged systemd+DinD container.
#
# CubeSandbox's official install.sh is designed for bare metal / VMs and
# requires a running systemd (it registers all services as systemd units).
# This image therefore runs systemd as PID 1 rather than tini.
#
# UBUNTU_IMAGE may be overridden to use a regional mirror, e.g.:
# docker.m.daocloud.io/library/ubuntu:22.04 (China DaoCloud mirror)
# ccr.ccs.tencentyun.com/library/ubuntu:22.04 (Tencent Cloud mirror)
ARG UBUNTU_IMAGE=ubuntu:22.04
FROM ${UBUNTU_IMAGE}
ENV DEBIAN_FRONTEND=noninteractive \
LANG=C.UTF-8 \
LC_ALL=C.UTF-8
# Core system deps + systemd as the container init system.
# deploy/one-click/install.sh requires: tar, rg (ripgrep), ss (iproute2),
# bash, curl, sed, pgrep (procps), date, docker, python3, ip (iproute2), awk (gawk).
# Plus DinD prerequisites: iptables, ca-certificates, gnupg.
# Plus xfsprogs for the XFS-backed /data/cubelet (install.sh hard requirement).
RUN apt-get update && apt-get install -y --no-install-recommends \
systemd \
systemd-sysv \
dbus \
ca-certificates \
curl \
gnupg \
lsb-release \
bash \
tar \
ripgrep \
iproute2 \
procps \
gawk \
sed \
python3 \
python3-pip \
iptables \
kmod \
xfsprogs \
e2fsprogs \
util-linux \
file \
less \
&& rm -rf /var/lib/apt/lists/*
# Mask systemd units that are irrelevant or will fail in a container context.
RUN for unit in \
getty@tty1.service \
apt-daily.service \
apt-daily-upgrade.service \
apt-daily.timer \
apt-daily-upgrade.timer \
motd-news.service \
motd-news.timer \
systemd-networkd.service \
systemd-networkd-wait-online.service \
systemd-udevd.service \
systemd-udevd-control.socket \
systemd-udevd-kernel.socket \
systemd-logind.service \
e2scrub_reap.service \
apparmor.service; do \
ln -sf /dev/null "/etc/systemd/system/${unit}"; \
done
# Install Docker CE + Compose plugin from the official Docker apt repository.
RUN install -m 0755 -d /etc/apt/keyrings \
&& curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
| gpg --dearmor -o /etc/apt/keyrings/docker.gpg \
&& chmod a+r /etc/apt/keyrings/docker.gpg \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" \
> /etc/apt/sources.list.d/docker.list \
&& apt-get update \
&& apt-get install -y --no-install-recommends \
docker-ce \
docker-ce-cli \
containerd.io \
docker-buildx-plugin \
docker-compose-plugin \
&& rm -rf /var/lib/apt/lists/*
# Configure Docker daemon defaults.
RUN mkdir -p /etc/docker && printf '%s\n' \
'{' \
' "log-driver": "json-file",' \
' "log-opts": { "max-size": "50m", "max-file": "3" },' \
' "storage-driver": "overlay2"' \
'}' > /etc/docker/daemon.json
# Install E2B Python SDK so smoke tests can run from inside the container
# without polluting the WSL2 host with pip packages.
RUN pip3 install --no-cache-dir --break-system-packages \
e2b-code-interpreter==1.0.* \
requests \
|| pip3 install --no-cache-dir \
e2b-code-interpreter==1.0.* \
requests
# Persistent locations the installer writes to.
VOLUME ["/var/lib/docker", "/data", "/usr/local/services/cubetoolbox"]
# Helper scripts for the bootstrap flow.
COPY cube-init.sh /usr/local/bin/cube-init.sh
COPY cube-xfs-setup.sh /usr/local/bin/cube-xfs-setup.sh
COPY cube-install.sh /usr/local/bin/cube-install.sh
RUN chmod +x \
/usr/local/bin/cube-init.sh \
/usr/local/bin/cube-xfs-setup.sh \
/usr/local/bin/cube-install.sh
# Systemd service units for the CubeSandbox bootstrap sequence.
COPY cube-xfs-mount.service /etc/systemd/system/cube-xfs-mount.service
COPY cube-install.service /etc/systemd/system/cube-install.service
# Enable services by creating the wanted-by symlinks that systemctl enable
# would create (systemctl cannot run during a Docker image build).
RUN mkdir -p /etc/systemd/system/multi-user.target.wants \
&& ln -sf /etc/systemd/system/cube-xfs-mount.service \
/etc/systemd/system/multi-user.target.wants/cube-xfs-mount.service \
&& ln -sf /etc/systemd/system/cube-install.service \
/etc/systemd/system/multi-user.target.wants/cube-install.service \
&& ln -sf /lib/systemd/system/docker.service \
/etc/systemd/system/multi-user.target.wants/docker.service \
&& ln -sf /lib/systemd/system/containerd.service \
/etc/systemd/system/multi-user.target.wants/containerd.service
# cube-init.sh captures CUBE_* and TZ env vars from the container runtime
# into /etc/cube-sandbox.env (readable by systemd EnvironmentFile=), then
# execs /lib/systemd/systemd as PID 1.
ENTRYPOINT ["/usr/local/bin/cube-init.sh"]
CMD ["/lib/systemd/systemd"]
+150
@@ -0,0 +1,150 @@
# CubeSandbox
Run [TencentCloud CubeSandbox](https://github.com/TencentCloud/CubeSandbox) — a KVM-based MicroVM sandbox compatible with the E2B SDK — entirely inside a single privileged Docker container, without modifying the host system.
## Why this is unusual
CubeSandbox is **not** a containerized project upstream. Its core components (Cubelet, network-agent, cube-shim, cube-runtime, CubeAPI, CubeMaster) ship as host binaries and the official `install.sh` writes them to `/usr/local/services/cubetoolbox`, then starts them as native processes that talk to the host containerd.
This stack runs the **entire installer inside one privileged container** that:
1. Runs its own `dockerd` (Docker-in-Docker) for MySQL / Redis / CubeProxy / CoreDNS dependencies.
2. Creates an XFS-formatted loop volume at `/data/cubelet` (install.sh hard-requires XFS).
3. Executes the upstream [`online-install.sh`](https://github.com/TencentCloud/CubeSandbox/blob/master/deploy/one-click/online-install.sh) on first boot.
4. Tails logs to keep the container alive.
The result is essentially a **single-node CubeSandbox appliance container** suitable for evaluating CubeSandbox without changing your host.
## Features
- Built on Ubuntu 22.04 (the project's primary test environment)
- Self-contained: no host packages installed, no host paths mounted
- KVM passed through via `/dev/kvm`
- Persistent volumes for installed binaries, sandbox data, and DinD storage
- Health check covering CubeAPI, CubeMaster, and network-agent
- China-mainland mirror (`MIRROR=cn`) used by default
- Smoke-test script included (`smoke-test.sh`)
## Requirements
- Linux host (or WSL2 with KVM passthrough) with `/dev/kvm` available to Docker
- Nested virtualization enabled (Intel VT-x / AMD-V exposed)
- cgroup v2 (modern kernels — Debian 12+, Ubuntu 22.04+, kernel 5.10+)
- ≥ 16 GiB RAM and ≥ 8 vCPU recommended (8 GiB is the upstream minimum)
- ≥ 60 GiB free disk for the XFS loop file + Docker image layers
- Outbound internet to download the install bundle (~hundreds of MB) and Docker images
> On WSL2: confirm `/dev/kvm` is present (`ls -l /dev/kvm`) and your user is in the `kvm` group on the host distro.
## Quick Start
1. Copy the example environment file (optional — defaults work):
```bash
cp .env.example .env
```
2. Build and start (the first run downloads the CubeSandbox bundle and several Docker images — expect 5-20 minutes):
```bash
docker compose up -d --build
```
3. Watch the bootstrap log:
```bash
docker compose logs -f cube-sandbox
```
Wait for the `==================== CubeSandbox is up ====================` banner.
4. Verify all services are healthy:
```bash
curl -fsS http://127.0.0.1:3000/health && echo # CubeAPI
curl -fsS http://127.0.0.1:8089/notify/health && echo # CubeMaster
curl -fsS http://127.0.0.1:19090/healthz && echo # network-agent
```
5. (Optional) Run the smoke test:
```bash
bash smoke-test.sh # Health checks only
SKIP_TEMPLATE_BUILD=1 bash smoke-test.sh # Skip the slow template build
```
## Endpoints
Because the container uses `network_mode: host`, all CubeSandbox HTTP endpoints are reachable directly on the host loopback:
| Service | URL |
| ------------- | ------------------------------------ |
| CubeAPI | `http://127.0.0.1:3000` |
| CubeMaster | `http://127.0.0.1:8089` |
| network-agent | `http://127.0.0.1:19090` |
The CubeAPI exposes the E2B-compatible REST surface; point the [`e2b` Python SDK](https://e2b.dev) at `http://127.0.0.1:3000` to create sandboxes.
## Configuration
Key environment variables (see `.env.example` for the full list):
| Variable | Description | Default |
| -------------------------- | ------------------------------------------------------------ | ---------------- |
| `GLOBAL_REGISTRY` | Image registry prefix when pushing to a private registry | _(empty)_ |
| `CUBE_SANDBOX_VERSION` | Tag of the locally built wrapper image | `0.1.7` |
| `UBUNTU_IMAGE` | Base Ubuntu version | `22.04` |
| `TZ` | Container timezone | `Asia/Shanghai` |
| `CUBE_MIRROR` | Installer mirror — `cn` (China CDN) or `gh` (GitHub) | `cn` |
| `CUBE_XFS_SIZE` | Size of the XFS loop file backing `/data/cubelet` | `50G` |
| `CUBE_FORCE_REINSTALL` | Set to `1` to re-run `install.sh` on next start | `0` |
| `CUBE_CPU_LIMIT`           | CPU limit                                                    | `8.0`            |
| `CUBE_MEMORY_LIMIT`        | Memory limit                                                 | `16G`            |
| `CUBE_CPU_RESERVATION`     | CPU reservation                                              | `0.1`            |
| `CUBE_MEMORY_RESERVATION`  | Memory reservation                                           | `8G`             |
## Storage
Three named volumes hold persistent state — your installed CubeSandbox survives `docker compose down && up`:
| Volume | Path inside container | Purpose |
| --------------- | ----------------------------------- | -------------------------------------------------- |
| `cube_dind_data` | `/var/lib/docker` | DinD daemon images / containers / volumes |
| `cube_data` | `/data` | XFS loop image, `/data/cubelet`, sandbox disks, logs |
| `cube_toolbox` | `/usr/local/services/cubetoolbox` | Installed CubeSandbox binaries and scripts |
To wipe everything and reinstall from scratch:
```bash
docker compose down -v
docker compose up -d --build
```
## Security Considerations
⚠️ This stack is **highly privileged by design**. Only run it in trusted environments.
- `privileged: true` — required to mount the XFS loop volume, manage TAP interfaces, and run KVM
- `network_mode: host` — required so Cubelet can register the node IP and manage host TAP interfaces
- `cgroup: host` — required for the in-container `dockerd` to share the host's cgroup v2 hierarchy
- `/dev/kvm` and `/dev/net/tun` are passed through
These permissions are equivalent to what `online-install.sh` would request if it were run directly on your host. The advantage of the container wrapper is that all installer side-effects are confined to the three named volumes above, so removing the stack leaves no host residue.
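Expressed as a compose fragment, those bullets map to service settings along these lines (a sketch mirroring the list above, not the complete file):

```yaml
services:
  cube-sandbox:
    privileged: true      # XFS loop mount, TAP management, KVM
    network_mode: host    # Cubelet registers the node IP and manages host TAP interfaces
    cgroup: host          # in-container dockerd shares the host cgroup v2 hierarchy
    devices:
      - /dev/kvm
      - /dev/net/tun
```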
## Troubleshooting
- **`/dev/kvm not found`** — the host does not expose KVM to Docker. On WSL2, confirm nested virtualization is enabled and the kernel exposes `/dev/kvm`. On bare metal, ensure VT-x / AMD-V is enabled in BIOS.
- **First boot hangs at "Running CubeSandbox one-click installer"** — the installer is downloading the bundle (~hundreds of MB) and pulling several Docker images. Check progress with `docker compose logs -f cube-sandbox`.
- **`quickcheck.sh reported issues`** — open a shell in the container and inspect logs:
```bash
docker compose exec cube-sandbox bash
ls /data/log/
tail -f /data/log/CubeAPI/*.log
```
- **Re-run the installer cleanly** — set `CUBE_FORCE_REINSTALL=1` in `.env` and `docker compose up -d --force-recreate`.
## Project Information
- Upstream: https://github.com/TencentCloud/CubeSandbox
- License: upstream project is Apache-2.0; this configuration is provided as-is for the Compose Anything project.
+151
@@ -0,0 +1,151 @@
# CubeSandbox
在单个特权 Docker 容器内完整运行 [腾讯云 CubeSandbox](https://github.com/TencentCloud/CubeSandbox)——一个基于 KVM、兼容 E2B SDK 的 MicroVM 沙箱——无需修改宿主系统。
## 为什么这个栈与众不同
CubeSandbox 上游**并不是**一个容器化项目。它的核心组件(Cubelet、network-agent、cube-shim、cube-runtime、CubeAPI、CubeMaster)以宿主机二进制形式分发,官方 `install.sh` 会把它们写入 `/usr/local/services/cubetoolbox`,然后作为本机进程启动并与宿主 containerd 集成。
本栈把**整个安装器塞进一个特权容器**:
1. 容器内自起一个 `dockerd`(Docker-in-Docker),用于运行 MySQL / Redis / CubeProxy / CoreDNS 等依赖。
2. 在 `/data/cubelet` 创建一个 XFS 格式的 loop 卷(install.sh 强制要求 XFS)。
3. 首次启动时执行上游的 [`online-install.sh`](https://github.com/TencentCloud/CubeSandbox/blob/master/deploy/one-click/online-install.sh)。
4. 通过 tail 日志保持容器存活。
最终得到一个**单节点 CubeSandbox 一体化容器**,方便在不改动宿主的前提下评估 CubeSandbox。
## 特性
- 基于 Ubuntu 22.04(项目主要测试环境)
- 自包含:不安装宿主机软件包,不挂载宿主路径
- 通过 `/dev/kvm` 透传 KVM
- 三个持久化命名卷分别保存安装产物、沙箱数据和 DinD 存储
- 健康检查覆盖 CubeAPI、CubeMaster、network-agent
- 默认使用国内镜像 (`MIRROR=cn`)
- 内置冒烟测试脚本(`smoke-test.sh`)
## 环境要求
- Linux 宿主(或开启 KVM 透传的 WSL2),`/dev/kvm` 对 Docker 可见
- 已开启嵌套虚拟化(暴露 Intel VT-x / AMD-V)
- cgroup v2(现代内核——Debian 12+、Ubuntu 22.04+、kernel 5.10+)
- 推荐 ≥ 16 GiB 内存、≥ 8 vCPU(上游最低 8 GiB)
- 至少 60 GiB 空闲磁盘,用于 XFS loop 文件 + Docker 镜像层
- 可访问外网,用于下载安装包(数百 MB)和 Docker 镜像
> WSL2 用户:先确认 `/dev/kvm` 存在(`ls -l /dev/kvm`),并且当前用户在宿主发行版的 `kvm` 组中。
## 快速开始
1. 复制示例环境文件(可选,默认值即可使用):
```bash
cp .env.example .env
```
2. 构建并启动(首次运行会下载 CubeSandbox 安装包和若干 Docker 镜像,预计 5-20 分钟):
```bash
docker compose up -d --build
```
3. 观察启动日志:
```bash
docker compose logs -f cube-sandbox
```
等待出现 `==================== CubeSandbox is up ====================` 横幅。
4. 验证所有服务健康:
```bash
curl -fsS http://127.0.0.1:3000/health && echo # CubeAPI
curl -fsS http://127.0.0.1:8089/notify/health && echo # CubeMaster
curl -fsS http://127.0.0.1:19090/healthz && echo # network-agent
```
5. (可选)运行冒烟测试:
```bash
bash smoke-test.sh # 仅做健康检查
SKIP_TEMPLATE_BUILD=1 bash smoke-test.sh # 跳过较慢的模板构建步骤
```
## 服务端点
由于容器使用 `network_mode: host`,CubeSandbox 的所有 HTTP 端点都直接暴露在宿主回环地址上:
| 服务 | URL |
| ------------- | ------------------------------------ |
| CubeAPI | `http://127.0.0.1:3000` |
| CubeMaster | `http://127.0.0.1:8089` |
| network-agent | `http://127.0.0.1:19090` |
CubeAPI 暴露兼容 E2B 的 REST 接口;将 [`e2b` Python SDK](https://e2b.dev) 指向 `http://127.0.0.1:3000` 即可创建沙箱。
## 配置项
主要环境变量(完整列表见 `.env.example`):
| 变量 | 描述 | 默认值 |
| -------------------------- | --------------------------------------------------- | --------------- |
| `GLOBAL_REGISTRY` | 推送到私有仓库时使用的镜像前缀 | _(空)_ |
| `CUBE_SANDBOX_VERSION` | 本地构建的封装镜像 tag | `0.1.7` |
| `UBUNTU_IMAGE` | 基础 Ubuntu 版本 | `22.04` |
| `TZ` | 容器时区 | `Asia/Shanghai` |
| `CUBE_MIRROR`              | 安装器镜像源——`cn`(国内 CDN)或 `gh`(GitHub)      | `cn`            |
| `CUBE_XFS_SIZE` | `/data/cubelet` 背后 XFS loop 文件大小 | `50G` |
| `CUBE_FORCE_REINSTALL` | 设为 `1` 时下次启动会重跑 `install.sh` | `0` |
| `CUBE_CPU_LIMIT`           | CPU 上限                                            | `8.0`           |
| `CUBE_MEMORY_LIMIT`        | 内存上限                                            | `16G`           |
| `CUBE_CPU_RESERVATION`     | CPU 预留                                            | `0.1`           |
| `CUBE_MEMORY_RESERVATION`  | 内存预留                                            | `8G`            |
## 存储
三个命名卷保存所有持久化状态——`docker compose down && up` 不会丢失安装:
| 卷 | 容器内路径 | 用途 |
| ---------------- | ----------------------------------- | --------------------------------------------------- |
| `cube_dind_data` | `/var/lib/docker` | DinD 守护进程的镜像 / 容器 / 卷 |
| `cube_data` | `/data` | XFS loop 文件、`/data/cubelet`、沙箱磁盘、日志 |
| `cube_toolbox` | `/usr/local/services/cubetoolbox` | 已安装的 CubeSandbox 二进制和脚本 |
完全清空并从头重装:
```bash
docker compose down -v
docker compose up -d --build
```
## 安全说明
⚠️ 本栈**按设计是高特权的**,仅在受信环境中使用。
- `privileged: true`——挂载 XFS loop 卷、管理 TAP 接口、运行 KVM 所必需
- `network_mode: host`——Cubelet 注册节点 IP、管理宿主 TAP 接口所必需
- `cgroup: host`——容器内的 `dockerd` 共享宿主 cgroup v2 层级所必需
- 透传 `/dev/kvm` 和 `/dev/net/tun`
这些权限等同于直接在宿主上运行 `online-install.sh` 所需的权限。容器封装的好处在于:所有安装副作用都被限制在上述三个命名卷内,删除本栈不会在宿主上留下任何残留。
## 故障排查
- **`/dev/kvm not found`**:宿主未对 Docker 暴露 KVM。WSL2 用户请确认嵌套虚拟化已启用且内核暴露 `/dev/kvm`;裸金属用户请在 BIOS 中启用 VT-x / AMD-V。
- **首次启动卡在 "Running CubeSandbox one-click installer"**:安装器正在下载安装包(数百 MB)并拉取若干 Docker 镜像。用 `docker compose logs -f cube-sandbox` 查看进度。
- **`quickcheck.sh reported issues`**:进入容器查看日志:
```bash
docker compose exec cube-sandbox bash
ls /data/log/
tail -f /data/log/CubeAPI/*.log
```
- **干净重跑安装**:在 `.env` 中设置 `CUBE_FORCE_REINSTALL=1`,然后 `docker compose up -d --force-recreate`。
## 项目信息
- 上游项目:https://github.com/TencentCloud/CubeSandbox
- 许可证:上游项目采用 Apache-2.0;本配置以 as-is 形式提供给 Compose Anything 项目使用。
+43
@@ -0,0 +1,43 @@
#!/usr/bin/env bash
# Thin PID-1 wrapper: capture container runtime env vars into a file that
# systemd EnvironmentFile= can read, then exec systemd as PID 1.
#
# This script runs BEFORE systemd, so it must be kept minimal and must not
# depend on any CubeSandbox service being available.
set -euo pipefail
# Write CUBE_* and TZ vars to /etc/cube-sandbox.env so that
# cube-xfs-mount.service and cube-install.service can pick them up via
# EnvironmentFile=/etc/cube-sandbox.env.
install -m 0644 /dev/null /etc/cube-sandbox.env
printenv | grep -E '^(CUBE_|TZ=)' >> /etc/cube-sandbox.env 2>/dev/null || true
# Mount BPF filesystem required by network-agent eBPF map pinning.
# /sys/fs/bpf is not auto-mounted in Docker containers even when the kernel
# supports BPF; without it network-agent crashes on startup with
# "not on a bpf filesystem" and then a nil-pointer panic.
if ! mountpoint -q /sys/fs/bpf 2>/dev/null; then
mkdir -p /sys/fs/bpf
mount -t bpf none /sys/fs/bpf 2>/dev/null \
|| echo "[cube-init] WARNING: could not mount BPF filesystem; network-agent may fail" >&2
fi
# Redirect CubeMaster's rootfs artifact workspace to the persistent data volume.
# Template builds export the sandbox image into a tar (often > 2 GB) before
# converting it to an ext4 disk image. /tmp is only a 2 GB tmpfs and is wiped on
# every container restart; /data (a named Docker volume) has 50+ GB and is
# persistent.
#
# We use a bind mount instead of a symlink: CubeMaster's Go startup code calls
# os.RemoveAll + os.MkdirAll on this path, which would silently replace a
# symlink with a real tmpfs directory. A bind-mount point returns EBUSY on
# removal, keeping the mount intact so all writes land on /data.
mkdir -p /data/cubemaster-rootfs-artifacts
mkdir -p /tmp/cubemaster-rootfs-artifacts
if ! mountpoint -q /tmp/cubemaster-rootfs-artifacts 2>/dev/null; then
mount --bind /data/cubemaster-rootfs-artifacts /tmp/cubemaster-rootfs-artifacts \
|| echo "[cube-init] WARNING: bind mount for cubemaster-rootfs-artifacts failed; writes may fill tmpfs" >&2
fi
# Hand off to systemd (or whatever CMD was passed to the container).
exec "$@"
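The CUBE_*/TZ capture above can be exercised in isolation. A minimal sketch (variable values are made up for illustration):

```shell
# Hypothetical values, mirroring the entrypoint's env-capture filter.
export CUBE_MIRROR=cn
export TZ=Asia/Shanghai
export UNRELATED_VAR=ignored

# Same filter as the entrypoint: keep only CUBE_*-prefixed vars and TZ.
captured="$(printenv | grep -E '^(CUBE_|TZ=)')"
printf '%s\n' "$captured"
```

The resulting /etc/cube-sandbox.env then feeds both systemd units through `EnvironmentFile=`.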
+24
View File
@@ -0,0 +1,24 @@
[Unit]
Description=CubeSandbox one-click installer
# Requires both the XFS volume and dockerd to be ready before running.
# install.sh will pull Docker images (MySQL, Redis, CubeProxy, CoreDNS)
# and then register Cubelet / CubeAPI / CubeMaster / network-agent as
# systemd units via `systemctl enable --now`.
After=docker.service cube-xfs-mount.service
Requires=docker.service cube-xfs-mount.service
[Service]
Type=oneshot
RemainAfterExit=yes
EnvironmentFile=-/etc/cube-sandbox.env
ExecStart=/usr/local/bin/cube-install.sh
# First boot downloads ~400 MB + pulls several Docker images; allow 30 min.
TimeoutStartSec=1800
# Retry on transient network failures (e.g. download interrupted).
Restart=on-failure
RestartSec=30s
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
+160
View File
@@ -0,0 +1,160 @@
#!/usr/bin/env bash
# Run the CubeSandbox one-click installer, then run quickcheck.sh.
# Called by cube-install.service (Type=oneshot) after docker.service and
# cube-xfs-mount.service are both active.
set -euo pipefail
log() { printf '[cube-install] %s\n' "$*"; }
err() { printf '[cube-install] ERROR: %s\n' "$*" >&2; }
INSTALL_PREFIX="/usr/local/services/cubetoolbox"
QUICKCHECK="${INSTALL_PREFIX}/scripts/one-click/quickcheck.sh"
UP_SCRIPT="${INSTALL_PREFIX}/scripts/one-click/up-with-deps.sh"
MIRROR="${CUBE_MIRROR:-cn}"
INSTALLER_URL_CN="https://cnb.cool/CubeSandbox/CubeSandbox/-/git/raw/master/deploy/one-click/online-install.sh"
INSTALLER_URL_GH="https://github.com/tencentcloud/CubeSandbox/raw/master/deploy/one-click/online-install.sh"
# /dev/kvm sanity — required by the MicroVM hypervisor.
if [ ! -c /dev/kvm ]; then
err "/dev/kvm is not available inside the container."
err "Ensure the compose stack passes --device /dev/kvm and nested virt is enabled on the host."
exit 1
fi
log "KVM device present: $(ls -l /dev/kvm)"
# Wait for dockerd (started by docker.service) to be ready before install.sh
# tries to pull MySQL / Redis / CubeProxy images.
log "Waiting for docker daemon ..."
for i in $(seq 1 60); do
if docker info >/dev/null 2>&1; then
log "docker ready."
break
fi
sleep 2
done
if ! docker info >/dev/null 2>&1; then
err "docker daemon not ready after 120 s"
exit 1
fi
# Redirect TMPDIR to the 50 GB XFS volume.
# /tmp is only 256 MB (tmpfs) and mounted noexec — both cause install failures:
# - curl: (23) Failure writing output to destination (out of space)
# - extracted scripts fail to execute (noexec mount flag)
mkdir -p /data/tmp
export TMPDIR=/data/tmp
log "TMPDIR set to $TMPDIR ($(df -h /data/tmp | awk 'NR==2{print $4}') free)"
# Set CAROOT so mkcert can find / create the local CA directory on every boot.
# Without this, up-cube-proxy.sh calls `mkcert -install` which exits with:
# "ERROR: failed to find the default CA location"
# Because up-with-deps.sh runs under set -euo pipefail, that failure aborts
# the entire script before any compute services (network-agent, CubeAPI, etc.)
# are started. Persisting the CA on /data (named volume) means the cert is
# re-used across container restarts rather than regenerated each time.
export CAROOT=/data/mkcert-ca
mkdir -p "$CAROOT"
log "CAROOT set to $CAROOT"
# Run the upstream one-click installer on first boot; on subsequent boots
# just re-launch all services via up-with-deps.sh.
if [ -x "$QUICKCHECK" ] && [ "${CUBE_FORCE_REINSTALL:-0}" != "1" ]; then
log "CubeSandbox already installed at $INSTALL_PREFIX — starting services."
if [ ! -x "$UP_SCRIPT" ]; then
err "up-with-deps.sh not found at $UP_SCRIPT — reinstall required"
exit 1
fi
ONE_CLICK_TOOLBOX_ROOT="$INSTALL_PREFIX" \
ONE_CLICK_RUNTIME_ENV_FILE="${INSTALL_PREFIX}/.one-click.env" \
bash "$UP_SCRIPT" \
|| log "WARNING: up-with-deps.sh exited non-zero; services may still be starting"
else
log "Running CubeSandbox one-click installer (mirror=$MIRROR) ..."
if [ "$MIRROR" = "cn" ]; then
curl -fsSL "$INSTALLER_URL_CN" | MIRROR=cn bash
else
curl -fsSL "$INSTALLER_URL_GH" | bash
fi
fi
# Run quickcheck.sh with retries — network-agent initialises 500 tap interfaces
# which takes ~2 minutes; we retry every 30 s for up to 10 minutes.
QUICKCHECK_PASSED=0
if [ -x "$QUICKCHECK" ]; then
log "Running quickcheck.sh (retrying up to 10 min for network-agent tap init) ..."
for i in $(seq 1 20); do
if ONE_CLICK_TOOLBOX_ROOT="$INSTALL_PREFIX" \
ONE_CLICK_RUNTIME_ENV_FILE="${INSTALL_PREFIX}/.one-click.env" \
"$QUICKCHECK" 2>&1; then
QUICKCHECK_PASSED=1
break
fi
log "quickcheck attempt $i/20 failed — retrying in 30 s ..."
sleep 30
done
else
err "quickcheck.sh not found at $QUICKCHECK — install may have failed."
exit 1
fi
if [ "$QUICKCHECK_PASSED" != "1" ]; then
err "quickcheck.sh never passed after 20 attempts — CubeSandbox is unhealthy."
exit 1
fi
# Ensure containerd-shim-cube-rs is on Cubelet's clean PATH.
# up.sh/up-with-deps.sh launch Cubelet with:
# PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# Cubelet resolves runtime shims from that PATH, so it cannot find
# containerd-shim-cube-rs unless it is symlinked into one of those dirs.
# We create the symlink unconditionally on every boot (both after fresh
# install and after the restart path) so Cubelet can start sandboxes.
SHIM_SRC="${INSTALL_PREFIX}/cube-shim/bin/containerd-shim-cube-rs"
SHIM_DST="/usr/local/bin/containerd-shim-cube-rs"
if [ -x "$SHIM_SRC" ]; then
ln -sf "$SHIM_SRC" "$SHIM_DST"
log "containerd-shim-cube-rs linked: $SHIM_DST -> $SHIM_SRC"
else
log "WARNING: $SHIM_SRC not found — Cubelet will not be able to start MicroVMs"
fi
# Restart Cubelet now that network-agent is confirmed ready.
# On first startup the Cubelet process begins before network-agent has finished
# initialising its 500 TAP interfaces (~2 min). This causes the
# io.cubelet.images-service.v1 plugin to fail with:
# "network-agent health check failed ... context deadline exceeded"
# leaving the gRPC cubelet.services.images.v1.Images service unregistered.
# When CubeMaster later tries to distribute a template artifact to the node it
# gets back gRPC Unimplemented and the build fails.
# Restarting Cubelet here — after quickcheck has confirmed network-agent is up —
# allows the images-service plugin to load successfully on the second boot.
CUBELET_BIN="${INSTALL_PREFIX}/Cubelet/bin/cubelet"
CUBELET_CFG="${INSTALL_PREFIX}/Cubelet/config/config.toml"
CUBELET_DYN="${INSTALL_PREFIX}/Cubelet/dynamicconf/conf.yaml"
CUBELET_LOG="/data/log/Cubelet/Cubelet-req.log"
if [ -x "$CUBELET_BIN" ]; then
log "Restarting Cubelet so images-service plugin loads against ready network-agent ..."
pkill -f "${CUBELET_BIN}" 2>/dev/null || true
sleep 2
mkdir -p "$(dirname "$CUBELET_LOG")"
nohup "$CUBELET_BIN" \
--config "$CUBELET_CFG" \
--dynamic-conf-path "$CUBELET_DYN" \
>>"$CUBELET_LOG" 2>&1 &
CUBELET_PID=$!
log "Cubelet restarted (PID ${CUBELET_PID}) — waiting 10 s for boot ..."
sleep 10
if kill -0 "$CUBELET_PID" 2>/dev/null; then
log "Cubelet is running."
else
log "WARNING: Cubelet PID ${CUBELET_PID} exited — check ${CUBELET_LOG}."
fi
fi
log "==================== CubeSandbox is up ===================="
log " CubeAPI: http://127.0.0.1:3000/health"
log " CubeMaster: http://127.0.0.1:8089/notify/health"
log " network-agent http://127.0.0.1:19090/healthz"
log " Logs: /data/log/{CubeAPI,CubeMaster,Cubelet}/"
log "==========================================================="
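The `kill -0` probe used for the Cubelet liveness check above sends no signal at all; it only verifies that the PID still exists. A self-contained sketch of the same pattern:

```shell
# Start a short-lived background job (stand-in for the Cubelet process).
sleep 2 &
pid=$!

# kill -0 succeeds while the process exists ...
if kill -0 "$pid" 2>/dev/null; then
  echo "process $pid is alive"
fi

# ... and fails once it has exited and been reaped.
wait "$pid"
if ! kill -0 "$pid" 2>/dev/null; then
  echo "process $pid has exited"
fi
```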
@@ -0,0 +1,18 @@
[Unit]
Description=CubeSandbox XFS loop volume mount
# Must run before dockerd and the installer because install.sh validates that
# /data/cubelet is an XFS filesystem before proceeding.
DefaultDependencies=no
Before=cube-install.service docker.service
After=local-fs.target
[Service]
Type=oneshot
RemainAfterExit=yes
EnvironmentFile=-/etc/cube-sandbox.env
ExecStart=/usr/local/bin/cube-xfs-setup.sh
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
+31
View File
@@ -0,0 +1,31 @@
#!/usr/bin/env bash
# Create and mount the XFS-formatted loop volume at /data/cubelet.
# Called by cube-xfs-mount.service (Type=oneshot) before docker.service starts.
#
# install.sh hard-requires that /data/cubelet is on an XFS filesystem;
# it validates this with `df -T /data/cubelet | grep -q xfs`.
set -euo pipefail
log() { printf '[cube-xfs] %s\n' "$*"; }
CUBE_DATA_DIR="${CUBE_DATA_DIR:-/data/cubelet}"
CUBE_XFS_IMG="${CUBE_XFS_IMG:-/data/cubelet.img}"
CUBE_XFS_SIZE="${CUBE_XFS_SIZE:-50G}"
mkdir -p /data "$CUBE_DATA_DIR"
current_fs="$(stat -fc %T "$CUBE_DATA_DIR" 2>/dev/null || echo unknown)"
if [ "$current_fs" = "xfs" ]; then
log "Already mounted: $CUBE_DATA_DIR ($current_fs) — nothing to do."
exit 0
fi
log "Preparing XFS loop volume at $CUBE_XFS_IMG (size=$CUBE_XFS_SIZE) ..."
if [ ! -f "$CUBE_XFS_IMG" ]; then
fallocate -l "$CUBE_XFS_SIZE" "$CUBE_XFS_IMG"
mkfs.xfs -q -f "$CUBE_XFS_IMG"
log "Formatted $CUBE_XFS_IMG as XFS."
fi
mount -o loop "$CUBE_XFS_IMG" "$CUBE_DATA_DIR"
log "Mounted $CUBE_DATA_DIR ($(stat -fc %T "$CUBE_DATA_DIR"))."
+110
View File
@@ -0,0 +1,110 @@
# CubeSandbox running inside a privileged systemd+DinD container.
#
# WHY THIS LOOKS UNUSUAL
# ----------------------
# CubeSandbox is NOT a containerized project upstream. Its core components
# (Cubelet, network-agent, cube-shim, CubeAPI, CubeMaster) ship as host
# binaries, and the official install.sh registers them as systemd units and
# manages them with systemctl.
#
# To run it purely with Docker without modifying the WSL2 host, this stack:
# 1. Runs systemd as PID 1 inside a privileged container so that
# install.sh can call systemctl enable / start / status normally.
# 2. Runs its own dockerd (DinD) for MySQL / Redis / CoreDNS / CubeProxy.
# 3. Mounts an XFS loop volume at /data/cubelet (install.sh hard-requires XFS).
# 4. Executes the upstream online-install.sh via cube-install.service.
#
# The /run and /run/lock paths are tmpfs so systemd can write its runtime
# state (PID files, socket files, etc.) during the container lifetime.
# stop_signal RTMIN+3 is the standard graceful-shutdown signal for systemd.
x-defaults: &defaults
restart: unless-stopped
logging:
driver: json-file
options:
max-size: 100m
max-file: '3'
services:
cube-sandbox:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}compose-anything/cube-sandbox:${CUBE_SANDBOX_VERSION:-0.1.7}
build:
context: .
dockerfile: Dockerfile
args:
- UBUNTU_IMAGE=${UBUNTU_IMAGE:-ubuntu:22.04}
# CubeSandbox needs:
# - /dev/kvm for the MicroVM hypervisor
# - /dev/net/tun for cube TAP interfaces
# - SYS_ADMIN/NET_ADMIN to mount the XFS loop volume and create TAPs
# - Its own dockerd for MySQL / Redis / CubeProxy / CoreDNS
# - systemd as PID 1 so install.sh can register and start services
# The simplest correct configuration is privileged + host network.
privileged: true
network_mode: host
devices:
- /dev/kvm:/dev/kvm
- /dev/net/tun:/dev/net/tun
# cgroupns:host lets the in-container systemd + dockerd share the host's
# (i.e. WSL2's) cgroup v2 hierarchy directly — more reliable than private.
cgroup: host
# systemd needs to write its runtime state to /run; use tmpfs so it does
# not leak across container restarts and does not consume the named volumes.
tmpfs:
- /run:size=100m
- /run/lock:size=10m
- /tmp:size=2g,exec
# SIGRTMIN+3 is the proper graceful-shutdown signal for systemd.
stop_signal: RTMIN+3
environment:
- TZ=${TZ:-Asia/Shanghai}
# cn = pull installer + images via the cnb.cool / Tencent Cloud mirror
# gh = pull from raw.githubusercontent.com (slower in mainland China)
- CUBE_MIRROR=${CUBE_MIRROR:-cn}
# Size of the XFS loop file that backs /data/cubelet
- CUBE_XFS_SIZE=${CUBE_XFS_SIZE:-50G}
# Set to 1 to re-run install.sh even if a previous install is detected
- CUBE_FORCE_REINSTALL=${CUBE_FORCE_REINSTALL:-0}
volumes:
# DinD docker daemon storage (images for MySQL, Redis, CoreDNS, CubeProxy)
- cube_dind_data:/var/lib/docker
# XFS loop image + mounted /data/cubelet + cube-shim disks + logs
- cube_data:/data
# Installed CubeSandbox binaries & scripts
- cube_toolbox:/usr/local/services/cubetoolbox
# No `ports:` block — we use network_mode: host so the CubeAPI on
# 127.0.0.1:3000 inside the container is the same socket as
# 127.0.0.1:3000 on the WSL2 host.
healthcheck:
test:
- CMD-SHELL
- 'curl -fsS http://127.0.0.1:3000/health && curl -fsS http://127.0.0.1:8089/notify/health && curl -fsS http://127.0.0.1:19090/healthz'
interval: 30s
timeout: 15s
retries: 5
start_period: 600s # First boot downloads ~400 MB + Docker images; be generous.
deploy:
resources:
limits:
cpus: '${CUBE_CPU_LIMIT:-8.0}'
memory: ${CUBE_MEMORY_LIMIT:-16G}
reservations:
cpus: '${CUBE_CPU_RESERVATION:-0.1}'
memory: ${CUBE_MEMORY_RESERVATION:-8G}
volumes:
cube_dind_data:
cube_data:
cube_toolbox:
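The `image:` line above relies on the repo-wide `${GLOBAL_REGISTRY:-}` convention: when the variable is set, its value must include the trailing slash so it concatenates cleanly onto the image name. A sketch with a hypothetical mirror registry:

```shell
# Unset: the empty default applies and the bare image name is used.
unset GLOBAL_REGISTRY CUBE_SANDBOX_VERSION
echo "${GLOBAL_REGISTRY:-}compose-anything/cube-sandbox:${CUBE_SANDBOX_VERSION:-0.1.7}"

# Set (hypothetical mirror): note the trailing slash in the value.
GLOBAL_REGISTRY="registry.example.com/"
echo "${GLOBAL_REGISTRY:-}compose-anything/cube-sandbox:${CUBE_SANDBOX_VERSION:-0.1.7}"
```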
+112
View File
@@ -0,0 +1,112 @@
#!/usr/bin/env python3
"""
Basic E2B SDK integration test against a local CubeSandbox instance.
Runs three checks:
1. Sandbox creation (debug=True → API at http://localhost:3000)
2. Code execution and output validation
3. Sandbox teardown
Usage (inside the cube-sandbox container):
python3 /root/e2b-test.py
Exit codes:
0 all tests passed
1 any test failed
"""
import sys
PASS = "\033[1;32m[ OK ]\033[0m"
FAIL = "\033[1;31m[FAIL]\033[0m"
INFO = "\033[1;36m[INFO]\033[0m"
def check(label: str, cond: bool, detail: str = "") -> bool:
if cond:
print(f"{PASS} {label}")
else:
print(f"{FAIL} {label}{': ' + detail if detail else ''}")
return cond
def main() -> int:
ok = True
# ------------------------------------------------------------------ #
# 1. Import #
# ------------------------------------------------------------------ #
print(f"{INFO} Importing e2b_code_interpreter …")
try:
from e2b_code_interpreter import Sandbox # type: ignore
except ImportError as exc:
print(f"{FAIL} import failed: {exc}")
return 1
ok &= check("e2b_code_interpreter imported", True)
# ------------------------------------------------------------------ #
# 2. Create sandbox #
# ------------------------------------------------------------------ #
print(f"\n{INFO} Creating sandbox (debug=True → http://localhost:3000) …")
sb = None
try:
# debug=True makes the SDK target http://localhost:3000 instead of
# the E2B cloud and http://localhost:<port> for the envd connection.
sb = Sandbox(debug=True, api_key="local-test", timeout=120)
ok &= check("Sandbox created", sb is not None, f"id={sb.sandbox_id if sb else '?'}")
print(f" sandbox_id = {sb.sandbox_id}")
except Exception as exc:
ok &= check("Sandbox created", False, str(exc))
print(f"\n{INFO} Skipping remaining tests (sandbox creation failed)")
return 0 if ok else 1
# ------------------------------------------------------------------ #
# 3. Execute code #
# ------------------------------------------------------------------ #
print(f"\n{INFO} Running code inside sandbox …")
try:
result = sb.run_code('print("Hello from CubeSandbox!")')
expected = "Hello from CubeSandbox!"
output = (result.text or "").strip()
ok &= check("Code executed without error", not result.error,
str(result.error) if result.error else "")
ok &= check("Output matches expected", output == expected,
f"got {output!r}")
except Exception as exc:
ok &= check("Code execution", False, str(exc))
# ------------------------------------------------------------------ #
# 4. Multi-line / stateful execution #
# ------------------------------------------------------------------ #
print(f"\n{INFO} Running stateful multi-cell execution …")
try:
sb.run_code("x = 40 + 2")
result2 = sb.run_code("print(x)")
output2 = (result2.text or "").strip()
ok &= check("Stateful multi-cell execution", output2 == "42",
f"got {output2!r}")
except Exception as exc:
ok &= check("Stateful multi-cell execution", False, str(exc))
# ------------------------------------------------------------------ #
# 5. Kill sandbox #
# ------------------------------------------------------------------ #
print(f"\n{INFO} Killing sandbox …")
try:
sb.kill()
ok &= check("Sandbox killed", True)
except Exception as exc:
ok &= check("Sandbox killed", False, str(exc))
# ------------------------------------------------------------------ #
# Summary #
# ------------------------------------------------------------------ #
print()
if ok:
print(f"{PASS} All E2B SDK tests passed")
else:
print(f"{FAIL} Some E2B SDK tests FAILED")
return 0 if ok else 1
if __name__ == "__main__":
sys.exit(main())
+104
View File
@@ -0,0 +1,104 @@
#!/usr/bin/env bash
# Smoke test for a running CubeSandbox stack.
#
# Run from the WSL2 host or from inside the cube-sandbox container - both work
# because the container uses network_mode: host.
#
# Steps:
# 1. Health-check all CubeSandbox services
# 2. (Optional, slow) Build a code-interpreter template from a public image
# 3. Create a sandbox via the E2B-compatible REST API, run a tiny payload,
# then destroy it
#
# Skip the slow template-build step with: SKIP_TEMPLATE_BUILD=1 ./smoke-test.sh
set -euo pipefail
# cubemastercli is installed to a non-standard prefix; add it to PATH so this
# script works both when run inside the container and from the WSL2 host.
export PATH="/usr/local/services/cubetoolbox/CubeMaster/bin:${PATH:-}"
CUBE_API="${CUBE_API:-http://127.0.0.1:3000}"
CUBE_MASTER="${CUBE_MASTER:-http://127.0.0.1:8089}"
CUBE_NETAGENT="${CUBE_NETAGENT:-http://127.0.0.1:19090}"
ok() { printf '\033[1;32m[ OK ]\033[0m %s\n' "$*"; }
fail() { printf '\033[1;31m[FAIL]\033[0m %s\n' "$*" >&2; exit 1; }
info() { printf '\033[1;36m[INFO]\033[0m %s\n' "$*"; }
#-------------------------------------------------------------------
# 1. Health checks (matches what install.sh's quickcheck.sh verifies)
#-------------------------------------------------------------------
info "Health: CubeAPI"
curl -fsS "${CUBE_API}/health" >/dev/null && ok "CubeAPI /health" || fail "CubeAPI /health"
echo
info "Health: CubeMaster"
curl -fsS "${CUBE_MASTER}/notify/health" >/dev/null && ok "CubeMaster /notify/health" || fail "CubeMaster /notify/health"
info "Health: network-agent"
curl -fsS "${CUBE_NETAGENT}/healthz" >/dev/null && ok "network-agent /healthz" || fail "network-agent /healthz"
curl -fsS "${CUBE_NETAGENT}/readyz" >/dev/null && ok "network-agent /readyz" || fail "network-agent /readyz"
#-------------------------------------------------------------------
# 2. Optional: build a sandbox template
#-------------------------------------------------------------------
TEMPLATE_ID="${CUBE_TEMPLATE_ID:-}"
if [ -z "$TEMPLATE_ID" ] && [ "${SKIP_TEMPLATE_BUILD:-0}" != "1" ]; then
info "No CUBE_TEMPLATE_ID provided; building one from ccr.ccs.tencentyun.com/ags-image/sandbox-code:latest"
info "(this can take 5-15 minutes; set SKIP_TEMPLATE_BUILD=1 to skip and only run health checks)"
if ! command -v cubemastercli >/dev/null 2>&1; then
# cubemastercli lives inside the container; exec into it
CUBE_CTR="$(docker compose ps -q cube-sandbox 2>/dev/null || true)"
[ -z "$CUBE_CTR" ] && fail "cube-sandbox container not running and cubemastercli not on PATH"
CMC="docker exec -i $CUBE_CTR cubemastercli"
else
CMC="cubemastercli"
fi
JOB_OUT="$($CMC tpl create-from-image \
--image ccr.ccs.tencentyun.com/ags-image/sandbox-code:latest \
--writable-layer-size 1G \
--expose-port 49999 \
--expose-port 49983 \
--probe 49999 2>&1)"
echo "$JOB_OUT"
JOB_ID="$(echo "$JOB_OUT" | grep -oE 'job_id[=: ]+[A-Za-z0-9_-]+' | head -1 | awk '{print $NF}')"
[ -z "$JOB_ID" ] && fail "could not parse job_id from output"
info "Watching job $JOB_ID ..."
$CMC tpl watch --job-id "$JOB_ID"
# Extract template_id from the create-from-image output (it's on the first few
# lines) rather than re-querying the list — list ordering is not guaranteed and
# could return a FAILED entry as the last line.
TEMPLATE_ID="$(echo "$JOB_OUT" | grep -E '\btemplate_id\b' | head -1 | awk '{print $NF}')"
[ -z "$TEMPLATE_ID" ] && fail "could not determine template id after build"
ok "Template built: $TEMPLATE_ID"
elif [ -z "$TEMPLATE_ID" ]; then
info "Skipping sandbox lifecycle test (no CUBE_TEMPLATE_ID and SKIP_TEMPLATE_BUILD=1)"
ok "Health checks passed - CubeSandbox stack is up"
exit 0
fi
#-------------------------------------------------------------------
# 3. Create -> inspect -> destroy a sandbox via REST
#-------------------------------------------------------------------
info "Creating sandbox from template $TEMPLATE_ID ..."
RESP="$(curl -fsS -X POST "${CUBE_API}/sandboxes" \
-H 'Authorization: Bearer dummy' \
-H 'Content-Type: application/json' \
-d "{\"templateID\":\"${TEMPLATE_ID}\"}")"
SANDBOX_ID="$(echo "$RESP" | python3 -c 'import json,sys; print(json.load(sys.stdin).get("sandboxID",""))')"
[ -z "$SANDBOX_ID" ] && fail "no sandboxID in response: $RESP"
ok "Created sandbox $SANDBOX_ID"
info "Inspecting sandbox ..."
curl -fsS "${CUBE_API}/sandboxes/${SANDBOX_ID}" -H 'Authorization: Bearer dummy' \
| python3 -m json.tool
ok "Sandbox is queryable"
info "Destroying sandbox ..."
curl -fsS -X DELETE "${CUBE_API}/sandboxes/${SANDBOX_ID}" -H 'Authorization: Bearer dummy' >/dev/null
ok "Sandbox destroyed"
ok "All smoke tests passed"
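The job_id extraction in step 2 can be exercised against a made-up output line (the real cubemastercli output format may differ):

```shell
# Hypothetical cubemastercli output line; the real format may differ.
JOB_OUT='template build submitted, job_id: job-abc123'

# Same pipeline as the smoke test: grab the first job_id token.
JOB_ID="$(echo "$JOB_OUT" | grep -oE 'job_id[=: ]+[A-Za-z0-9_-]+' | head -1 | awk '{print $NF}')"
echo "$JOB_ID"   # job-abc123
```

Note the pipeline assumes a separator after `job_id`; with the `job_id=value` form, `awk '{print $NF}'` would return the whole `job_id=value` token.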
+1 -1
View File
@@ -25,7 +25,7 @@ INSTALL_NVIDIA_TOOLKIT=false
 # Resource limits
 DIND_CPU_LIMIT=2.0
 DIND_MEMORY_LIMIT=4G
-DIND_CPU_RESERVATION=1.0
+DIND_CPU_RESERVATION=0.1
 DIND_MEMORY_RESERVATION=2G

 # Docker daemon options
+2 -2
View File
@@ -44,7 +44,7 @@ services:
 cpus: ${DIND_CPU_LIMIT:-2.0}
 memory: ${DIND_MEMORY_LIMIT:-4G}
 reservations:
-cpus: ${DIND_CPU_RESERVATION:-1.0}
+cpus: ${DIND_CPU_RESERVATION:-0.1}
 memory: ${DIND_MEMORY_RESERVATION:-2G}

 # GPU-enabled DinD (optional)
@@ -84,7 +84,7 @@ services:
 cpus: ${DIND_CPU_LIMIT:-2.0}
 memory: ${DIND_MEMORY_LIMIT:-4G}
 reservations:
-cpus: ${DIND_CPU_RESERVATION:-1.0}
+cpus: ${DIND_CPU_RESERVATION:-0.1}
 memory: ${DIND_MEMORY_RESERVATION:-2G}
 devices:
 - driver: nvidia
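All of the defaults touched by these hunks use the same `${VAR:-default}` substitution rule: a value set in `.env` (or the environment) wins, otherwise the compose-file default applies. Illustrated with one of the variables above:

```shell
# Default applies when the variable is unset.
unset DIND_CPU_RESERVATION
echo "${DIND_CPU_RESERVATION:-0.1}"

# An explicit override wins over the default.
DIND_CPU_RESERVATION=0.5
echo "${DIND_CPU_RESERVATION:-0.1}"
```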
+8 -8
View File
@@ -14,27 +14,27 @@ DEER_FLOW_MODEL_ID=gpt-4.1-mini
 OPENAI_API_KEY=

 # Resources - Gateway
-DEER_FLOW_GATEWAY_CPU_LIMIT=2.00
+DEER_FLOW_GATEWAY_CPU_LIMIT=2.0
 DEER_FLOW_GATEWAY_MEMORY_LIMIT=2G
-DEER_FLOW_GATEWAY_CPU_RESERVATION=0.50
+DEER_FLOW_GATEWAY_CPU_RESERVATION=0.1
 DEER_FLOW_GATEWAY_MEMORY_RESERVATION=512M

 # Resources - LangGraph
-DEER_FLOW_LANGGRAPH_CPU_LIMIT=2.00
+DEER_FLOW_LANGGRAPH_CPU_LIMIT=2.0
 DEER_FLOW_LANGGRAPH_MEMORY_LIMIT=2G
-DEER_FLOW_LANGGRAPH_CPU_RESERVATION=0.50
+DEER_FLOW_LANGGRAPH_CPU_RESERVATION=0.1
 DEER_FLOW_LANGGRAPH_MEMORY_RESERVATION=512M

 # Resources - Frontend
-DEER_FLOW_FRONTEND_CPU_LIMIT=1.00
+DEER_FLOW_FRONTEND_CPU_LIMIT=1.0
 DEER_FLOW_FRONTEND_MEMORY_LIMIT=1G
-DEER_FLOW_FRONTEND_CPU_RESERVATION=0.25
+DEER_FLOW_FRONTEND_CPU_RESERVATION=0.1
 DEER_FLOW_FRONTEND_MEMORY_RESERVATION=256M

 # Resources - Nginx
-DEER_FLOW_NGINX_CPU_LIMIT=0.50
+DEER_FLOW_NGINX_CPU_LIMIT=0.5
 DEER_FLOW_NGINX_MEMORY_LIMIT=256M
-DEER_FLOW_NGINX_CPU_RESERVATION=0.10
+DEER_FLOW_NGINX_CPU_RESERVATION=0.1
 DEER_FLOW_NGINX_MEMORY_RESERVATION=64M

 # Logging
+8 -8
View File
@@ -53,10 +53,10 @@ services:
 deploy:
 resources:
 limits:
-cpus: ${DEER_FLOW_GATEWAY_CPU_LIMIT:-2.00}
+cpus: ${DEER_FLOW_GATEWAY_CPU_LIMIT:-2.0}
 memory: ${DEER_FLOW_GATEWAY_MEMORY_LIMIT:-2G}
 reservations:
-cpus: ${DEER_FLOW_GATEWAY_CPU_RESERVATION:-0.50}
+cpus: ${DEER_FLOW_GATEWAY_CPU_RESERVATION:-0.1}
 memory: ${DEER_FLOW_GATEWAY_MEMORY_RESERVATION:-512M}

 deerflow-langgraph:
@@ -102,10 +102,10 @@ services:
 deploy:
 resources:
 limits:
-cpus: ${DEER_FLOW_LANGGRAPH_CPU_LIMIT:-2.00}
+cpus: ${DEER_FLOW_LANGGRAPH_CPU_LIMIT:-2.0}
 memory: ${DEER_FLOW_LANGGRAPH_MEMORY_LIMIT:-2G}
 reservations:
-cpus: ${DEER_FLOW_LANGGRAPH_CPU_RESERVATION:-0.50}
+cpus: ${DEER_FLOW_LANGGRAPH_CPU_RESERVATION:-0.1}
 memory: ${DEER_FLOW_LANGGRAPH_MEMORY_RESERVATION:-512M}

 deerflow-frontend:
@@ -133,10 +133,10 @@ services:
 deploy:
 resources:
 limits:
-cpus: ${DEER_FLOW_FRONTEND_CPU_LIMIT:-1.00}
+cpus: ${DEER_FLOW_FRONTEND_CPU_LIMIT:-1.0}
 memory: ${DEER_FLOW_FRONTEND_MEMORY_LIMIT:-1G}
 reservations:
-cpus: ${DEER_FLOW_FRONTEND_CPU_RESERVATION:-0.25}
+cpus: ${DEER_FLOW_FRONTEND_CPU_RESERVATION:-0.1}
 memory: ${DEER_FLOW_FRONTEND_MEMORY_RESERVATION:-256M}

 deerflow-nginx:
@@ -164,8 +164,8 @@ services:
 deploy:
 resources:
 limits:
-cpus: ${DEER_FLOW_NGINX_CPU_LIMIT:-0.50}
+cpus: ${DEER_FLOW_NGINX_CPU_LIMIT:-0.5}
 memory: ${DEER_FLOW_NGINX_MEMORY_LIMIT:-256M}
 reservations:
-cpus: ${DEER_FLOW_NGINX_CPU_RESERVATION:-0.10}
+cpus: ${DEER_FLOW_NGINX_CPU_RESERVATION:-0.1}
 memory: ${DEER_FLOW_NGINX_MEMORY_RESERVATION:-64M}
+2 -2
View File
@@ -52,8 +52,8 @@ GOOSE_MODEL=gpt-4
 # ============================================

 # CPU limits
-GOOSE_CPU_LIMIT=2.00
+GOOSE_CPU_LIMIT=2.0
-GOOSE_CPU_RESERVATION=0.50
+GOOSE_CPU_RESERVATION=0.1

 # Memory limits
 GOOSE_MEMORY_LIMIT=2G
+2 -2
View File
@@ -44,10 +44,10 @@ services:
 deploy:
 resources:
 limits:
-cpus: ${GOOSE_CPU_LIMIT:-2.00}
+cpus: ${GOOSE_CPU_LIMIT:-2.0}
 memory: ${GOOSE_MEMORY_LIMIT:-2G}
 reservations:
-cpus: ${GOOSE_CPU_RESERVATION:-0.50}
+cpus: ${GOOSE_CPU_RESERVATION:-0.1}
 memory: ${GOOSE_MEMORY_RESERVATION:-512M}

 volumes:
+1 -1
View File
@@ -33,7 +33,7 @@ services:
 cpus: '2.0'
 memory: 4G
 reservations:
-cpus: '1.0'
+cpus: '0.1'
 memory: 2G
 devices:
 - driver: nvidia
+2 -2
View File
@@ -41,12 +41,12 @@ K3S_DISABLE_SERVICES=traefik
 # Resource Limits
 # CPU limit (cores)
-K3S_DIND_CPU_LIMIT=2.00
+K3S_DIND_CPU_LIMIT=2.0
 # Memory limit
 K3S_DIND_MEMORY_LIMIT=4G

 # Resource Reservations
 # CPU reservation (cores)
-K3S_DIND_CPU_RESERVATION=0.50
+K3S_DIND_CPU_RESERVATION=0.1
 # Memory reservation
 K3S_DIND_MEMORY_RESERVATION=1G
+2 -2
View File
@@ -44,10 +44,10 @@ services:
 deploy:
 resources:
 limits:
-cpus: ${K3S_DIND_CPU_LIMIT:-2.00}
+cpus: ${K3S_DIND_CPU_LIMIT:-2.0}
 memory: ${K3S_DIND_MEMORY_LIMIT:-4G}
 reservations:
-cpus: ${K3S_DIND_CPU_RESERVATION:-0.50}
+cpus: ${K3S_DIND_CPU_RESERVATION:-0.1}
 memory: ${K3S_DIND_MEMORY_RESERVATION:-1G}

 volumes:
+2 -2
View File
@@ -45,8 +45,8 @@ MICROSANDBOX_PORT_OVERRIDE=5555
 # CPU limits
 # MicroSandbox requires more CPU for KVM virtualization
-MICROSANDBOX_CPU_LIMIT=4
+MICROSANDBOX_CPU_LIMIT=4.0
-MICROSANDBOX_CPU_RESERVATION=1
+MICROSANDBOX_CPU_RESERVATION=0.1

 # Memory limits
 # MicroSandbox requires more memory for running VMs
+2 -2
View File
@@ -66,10 +66,10 @@ services:
 deploy:
 resources:
 limits:
-cpus: ${MICROSANDBOX_CPU_LIMIT:-4}
+cpus: ${MICROSANDBOX_CPU_LIMIT:-4.0}
 memory: ${MICROSANDBOX_MEMORY_LIMIT:-4G}
 reservations:
-cpus: ${MICROSANDBOX_CPU_RESERVATION:-1}
+cpus: ${MICROSANDBOX_CPU_RESERVATION:-0.1}
 memory: ${MICROSANDBOX_MEMORY_RESERVATION:-1G}

 volumes:
+1 -1
View File
@@ -1,5 +1,5 @@
 # MinerU Docker image
-MINERU_VERSION=2.7.6
+MINERU_VERSION=3.1.0

 # Port configurations
 MINERU_PORT_OVERRIDE_VLLM=30000
+4 -7
View File
@@ -1,10 +1,7 @@
-# Use the official vllm image for gpu with Ampere、Ada Lovelace、Hopper architecture (8.0 <= Compute Capability <= 9.0)
+# Use DaoCloud mirrored vllm image for China region for gpu with Volta、Turing、Ampere、Ada Lovelace、Hopper、Blackwell architecture (7.0 <= Compute Capability <= 12.0)
 # Compute Capability version query (https://developer.nvidia.com/cuda-gpus)
-FROM vllm/vllm-openai:v0.10.2
-
-# Use the official vllm image for gpu with Volta、Turing、Blackwell architecture (7.0 < Compute Capability < 8.0 or Compute Capability >= 10.0)
 # support x86_64 architecture and ARM(AArch64) architecture
-# FROM vllm/vllm-openai:v0.11.0
+FROM docker.m.daocloud.io/vllm/vllm-openai:v0.11.2

 # Install libgl for opencv support & Noto fonts for Chinese characters
 RUN apt-get update && \
@@ -18,11 +15,11 @@ RUN apt-get update && \
 rm -rf /var/lib/apt/lists/*

 # Install mineru latest
-RUN python3 -m pip install -U 'mineru[core]>=2.7.6' --break-system-packages && \
+RUN python3 -m pip install -U 'mineru[core]>=3.0.0' -i https://mirrors.aliyun.com/pypi/simple --break-system-packages && \
 python3 -m pip cache purge

 # Download models and update the configuration file
-RUN /bin/bash -c "mineru-models-download -s huggingface -m all"
+RUN /bin/bash -c "mineru-models-download -s modelscope -m all"

 # Set the entry point to activate the virtual environment and run the command line tool
 ENTRYPOINT ["/bin/bash", "-c", "export MINERU_MODEL_SOURCE=local && exec \"$@\"", "--"]
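The ENTRYPOINT retained above is worth unpacking: `bash -c '... exec "$@"' --` exports the env var and then execs whatever CMD Docker appends, with `--` filling the `$0` slot so the real command lands in `$@`. The same mechanism reproduced outside Docker:

```shell
# Stand-in for Docker's ENTRYPOINT+CMD composition: printenv plays the
# role of the CMD appended after the ENTRYPOINT array.
bash -c 'export MINERU_MODEL_SOURCE=local && exec "$@"' -- printenv MINERU_MODEL_SOURCE
# prints "local"
```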
+1 -1
View File
@@ -39,7 +39,7 @@ mineru -p demo.pdf -o ./output -b vlm-http-client -u http://localhost:30000
 ## Configuration

-- `MINERU_VERSION`: The version for MinerU, default is `2.7.6`.
+- `MINERU_VERSION`: The version for MinerU, default is `3.1.0`.
 - `MINERU_PORT_OVERRIDE_VLLM`: The host port for the VLLM server, default is `30000`.
 - `MINERU_PORT_OVERRIDE_API`: The host port for the API service, default is `8000`.
 - `MINERU_PORT_OVERRIDE_GRADIO`: The host port for the Gradio WebUI, default is `7860`.
+1 -1
View File
@@ -39,7 +39,7 @@ mineru -p demo.pdf -o ./output -b vlm-http-client -u http://localhost:30000
 ## Configuration

-- `MINERU_VERSION`: MinerU Docker image version, default `2.7.6`.
+- `MINERU_VERSION`: MinerU Docker image version, default `3.1.0`.
 - `MINERU_PORT_OVERRIDE_VLLM`: Host port for the VLLM server, default `30000`.
 - `MINERU_PORT_OVERRIDE_API`: Host port for the API service, default `8000`.
 - `MINERU_PORT_OVERRIDE_GRADIO`: Host port for the Gradio WebUI, default `7860`.
+3 -6
@@ -1,10 +1,7 @@
-# Use DaoCloud mirrored vllm image for China region for gpu with Ampere、Ada Lovelace、Hopper architecture (8.0 <= Compute Capability <= 9.0)
+# Use DaoCloud mirrored vllm image for China region for gpu with Volta、Turing、Ampere、Ada Lovelace、Hopper、Blackwell architecture (7.0 <= Compute Capability <= 12.0)
 # Compute Capability version query (https://developer.nvidia.com/cuda-gpus)
-FROM docker.m.daocloud.io/vllm/vllm-openai:v0.10.2
-# Use DaoCloud mirrored vllm image for China region for gpu with Volta、Turing、Blackwell architecture (7.0 < Compute Capability < 8.0 or Compute Capability >= 10.0)
 # support x86_64 architecture and ARM(AArch64) architecture
-# FROM docker.m.daocloud.io/vllm/vllm-openai:v0.11.0
+FROM docker.m.daocloud.io/vllm/vllm-openai:v0.11.2
 # Install libgl for opencv support & Noto fonts for Chinese characters
 RUN apt-get update && \
@@ -18,7 +15,7 @@ RUN apt-get update && \
     rm -rf /var/lib/apt/lists/*
 # Install mineru latest
-RUN python3 -m pip install -U 'mineru[core]>=2.7.0' -i https://mirrors.aliyun.com/pypi/simple --break-system-packages && \
+RUN python3 -m pip install -U 'mineru[core]>=3.0.0' -i https://mirrors.aliyun.com/pypi/simple --break-system-packages && \
     python3 -m pip cache purge
 # Download models and update the configuration file
+1 -1
@@ -14,7 +14,7 @@ RUN apt-get update && \
 # Install mineru latest
 RUN python3 -m pip install -U pip -i https://mirrors.aliyun.com/pypi/simple && \
-    python3 -m pip install 'mineru[core]>=2.7.4' \
+    python3 -m pip install 'mineru[core]>=3.0.0' \
     numpy==1.26.4 \
     opencv-python==4.11.0.86 \
     -i https://mirrors.aliyun.com/pypi/simple && \
+1 -4
@@ -14,10 +14,7 @@ RUN apt-get update && \
 # Install mineru latest
 RUN python3 -m pip install -U pip -i https://mirrors.aliyun.com/pypi/simple && \
-    python3 -m pip install mineru[api,gradio] \
-    "matplotlib>=3.10,<4" \
-    "ultralytics>=8.3.48,<9" \
-    "doclayout_yolo==0.0.4" \
+    python3 -m pip install "mineru[gradio]>=3.0.0" \
     "ftfy>=6.3.1,<7" \
     "shapely>=2.0.7,<3" \
     "pyclipper>=1.3.0,<2" \
+2 -2
@@ -8,7 +8,7 @@ x-defaults: &defaults
 x-mineru-vllm: &mineru-vllm
   <<: *defaults
-  image: ${GLOBAL_REGISTRY:-}alexsuntop/mineru:${MINERU_VERSION:-2.7.6}
+  image: ${GLOBAL_REGISTRY:-}alexsuntop/mineru:${MINERU_VERSION:-3.1.0}
   build:
     context: .
     dockerfile: ${MINERU_DOCKERFILE_PATH:-Dockerfile}
@@ -28,7 +28,7 @@ x-mineru-vllm: &mineru-vllm
         cpus: '16.0'
         memory: 32G
       reservations:
-        cpus: '8.0'
+        cpus: '0.1'
         memory: 16G
       devices:
         - driver: nvidia
+1 -1
@@ -17,7 +17,7 @@ RUN echo 'deb http://mirrors.aliyun.com/ubuntu/ noble main restricted universe m
     rm -rf /var/lib/apt/lists/* /tmp/aliyun-sources.list
 # Install mineru latest
-RUN python3 -m pip install "mineru[core]>=2.7.2" \
+RUN python3 -m pip install "mineru[core]>=3.0.0" \
     numpy==1.26.4 \
     opencv-python==4.11.0.86 \
     -i https://mirrors.aliyun.com/pypi/simple && \
+1 -4
@@ -14,10 +14,7 @@ RUN apt-get update && \
 # Install mineru latest
 RUN python3 -m pip install -U pip -i https://mirrors.aliyun.com/pypi/simple && \
-    python3 -m pip install "mineru[api,gradio]>=2.7.6" \
-    "matplotlib>=3.10,<4" \
-    "ultralytics>=8.3.48,<9" \
-    "doclayout_yolo==0.0.4" \
+    python3 -m pip install "mineru[gradio]>=3.0.0" \
     "ftfy>=6.3.1,<7" \
     "shapely>=2.0.7,<3" \
     "pyclipper>=1.3.0,<2" \
+1 -1
@@ -21,7 +21,7 @@ RUN sed -i '3s/^Version: 0.15.1+metax3\.1\.0\.4$/Version: 0.21.0+metax3.1.0.4/'
 # Install mineru latest
 RUN /opt/conda/bin/python3 -m pip install -U pip -i https://mirrors.aliyun.com/pypi/simple && \
-    /opt/conda/bin/python3 -m pip install 'mineru[core]>=2.6.5' \
+    /opt/conda/bin/python3 -m pip install 'mineru[core]>=3.0.0' \
     numpy==1.26.4 \
     opencv-python==4.11.0.86 \
     -i https://mirrors.aliyun.com/pypi/simple && \
+2 -2
@@ -1,6 +1,6 @@
 # 基础镜像配置 vLLM 或 LMDeploy ,请根据实际需要选择其中一个,要求 amd64(x86-64) CPU + Cambricon MLU.
 # Base image containing the LMDEPLOY inference environment, requiring amd64(x86-64) CPU + Cambricon MLU.
-FROM crpi-4crprmm5baj1v8iv.cn-hangzhou.personal.cr.aliyuncs.com/lmdeploy_dlinfer/camb:qwen2.5_vl
+FROM crpi-4crprmm5baj1v8iv.cn-hangzhou.personal.cr.aliyuncs.com/lmdeploy_dlinfer/camb:mineru25
 ARG BACKEND=lmdeploy
 # Base image containing the vLLM inference environment, requiring amd64(x86-64) CPU + Cambricon MLU.
 # FROM crpi-vofi3w62lkohhxsp.cn-shanghai.personal.cr.aliyuncs.com/opendatalab-mineru/mlu:vllm0.8.3-torch2.6.0-torchmlu1.26.1-ubuntu22.04-py310
@@ -22,7 +22,7 @@ RUN /bin/bash -c '\
     source /torch/venv3/pytorch_infer/bin/activate; \
     fi && \
     python3 -m pip install -U pip -i https://mirrors.aliyun.com/pypi/simple && \
-    python3 -m pip install "mineru[core]>=2.7.4" \
+    python3 -m pip install "mineru[core]>=3.0.0" \
     numpy==1.26.4 \
     opencv-python==4.11.0.86 \
     -i https://mirrors.aliyun.com/pypi/simple && \
+1 -4
@@ -18,10 +18,7 @@ RUN python3 -m pip install -U pip -i https://mirrors.aliyun.com/pypi/simple && \
     git clone https://gitcode.com/gh_mirrors/vi/vision.git -b v0.20.0 --depth 1 && \
     cd vision && \
     python3 setup.py install && \
-    python3 -m pip install "mineru[api,gradio]>=2.7.2" \
-    "matplotlib>=3.10,<4" \
-    "ultralytics>=8.3.48,<9" \
-    "doclayout_yolo==0.0.4" \
+    python3 -m pip install "mineru[gradio]>=3.0.0" \
     "ftfy>=6.3.1,<7" \
     "shapely>=2.0.7,<3" \
     "pyclipper>=1.3.0,<2" \
+1 -1
@@ -19,7 +19,7 @@ RUN apt-get update && \
 # Install mineru latest
 RUN python3 -m pip install -U pip -i https://mirrors.aliyun.com/pypi/simple && \
-    python3 -m pip install 'mineru[core]>=2.6.5' \
+    python3 -m pip install 'mineru[core]>=3.0.0' \
     numpy==1.26.4 \
     opencv-python==4.11.0.86 \
     -i https://mirrors.aliyun.com/pypi/simple && \
+1 -1
@@ -17,7 +17,7 @@ RUN apt-get update && \
 # Install mineru latest
 RUN python3 -m pip install -U pip -i https://mirrors.aliyun.com/pypi/simple && \
-    python3 -m pip install 'mineru[core]>=2.6.5' \
+    python3 -m pip install 'mineru[core]>=3.0.0' \
     numpy==1.26.4 \
     opencv-python==4.11.0.86 \
     -i https://mirrors.aliyun.com/pypi/simple && \
+55
@@ -0,0 +1,55 @@
# Source build configuration
MULTICA_VERSION=v0.1.32
MULTICA_PGVECTOR_VERSION=pg17
# Ports
MULTICA_BACKEND_PORT_OVERRIDE=8080
MULTICA_FRONTEND_PORT_OVERRIDE=3000
# PostgreSQL
MULTICA_POSTGRES_DB=multica
MULTICA_POSTGRES_USER=multica
MULTICA_POSTGRES_PASSWORD=multica
# Authentication & Security (CHANGEME: update JWT_SECRET for production)
MULTICA_JWT_SECRET=change-me-in-production
# Frontend origin (used by backend for CORS and cookie settings)
MULTICA_FRONTEND_ORIGIN=http://localhost:3000
MULTICA_APP_URL=http://localhost:3000
MULTICA_CORS_ALLOWED_ORIGINS=
MULTICA_COOKIE_DOMAIN=
# Email via Resend (optional)
MULTICA_RESEND_API_KEY=
MULTICA_RESEND_FROM_EMAIL=noreply@multica.ai
# Google OAuth (optional)
MULTICA_GOOGLE_CLIENT_ID=
MULTICA_GOOGLE_CLIENT_SECRET=
MULTICA_GOOGLE_REDIRECT_URI=http://localhost:3000/auth/callback
# Resources - PostgreSQL
MULTICA_POSTGRES_CPU_LIMIT=1.0
MULTICA_POSTGRES_MEMORY_LIMIT=1G
MULTICA_POSTGRES_CPU_RESERVATION=0.1
MULTICA_POSTGRES_MEMORY_RESERVATION=256M
# Resources - Backend
MULTICA_BACKEND_CPU_LIMIT=2.0
MULTICA_BACKEND_MEMORY_LIMIT=2G
MULTICA_BACKEND_CPU_RESERVATION=0.1
MULTICA_BACKEND_MEMORY_RESERVATION=512M
# Resources - Frontend
MULTICA_FRONTEND_CPU_LIMIT=1.0
MULTICA_FRONTEND_MEMORY_LIMIT=1G
MULTICA_FRONTEND_CPU_RESERVATION=0.1
MULTICA_FRONTEND_MEMORY_RESERVATION=256M
# Logging
MULTICA_LOG_MAX_SIZE=100m
MULTICA_LOG_MAX_FILE=3
# Timezone
TZ=UTC
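These values are consumed by the compose file through `${VAR:-default}` fallbacks, so any variable left unset (or the whole `.env` omitted) falls back to the defaults shown; the resolution follows standard shell parameter expansion:

```shell
# Unset -> the fallback applies; set -> the configured value wins.
unset MULTICA_BACKEND_PORT_OVERRIDE
echo "${MULTICA_BACKEND_PORT_OVERRIDE:-8080}"   # prints 8080
export MULTICA_BACKEND_PORT_OVERRIDE=9090
echo "${MULTICA_BACKEND_PORT_OVERRIDE:-8080}"   # prints 9090
```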
+77
@@ -0,0 +1,77 @@
# Multica
[English](./README.md) | [中文](./README.zh.md)
Multica is an open-source managed agents platform that turns coding agents into real teammates. Assign tasks, track progress, and compound reusable skills — works with Claude Code, Codex, OpenClaw, and OpenCode. This Compose setup builds the Go backend and Next.js frontend from source, starts PostgreSQL with pgvector, and exposes both services.
## Services
- **multica-backend**: Go backend (Chi router, sqlc, gorilla/websocket) with auto-migration on startup
- **multica-frontend**: Next.js 16 web application (App Router, standalone output)
- **multica-postgres**: PostgreSQL 17 with pgvector extension
## Quick Start
1. Copy the example environment file:
```bash
cp .env.example .env
```
2. Edit `.env` and change `MULTICA_JWT_SECRET` to a secure random value:
```bash
MULTICA_JWT_SECRET=$(openssl rand -base64 32)
```
3. Start the stack (first run builds images from source — this takes several minutes):
```bash
docker compose up -d
```
4. Open Multica:
- Frontend: <http://localhost:3000>
- Backend API: <http://localhost:8080>
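Note that Compose reads `.env` literally and does not evaluate command substitutions, so the `$(openssl …)` in step 2 must run in a shell; one way to write the generated value into `.env` (a sketch, assuming GNU sed's in-place `-i`):

```shell
# Generate a random secret in the shell, then patch the .env line in place.
SECRET=$(openssl rand -base64 32)
sed -i "s|^MULTICA_JWT_SECRET=.*|MULTICA_JWT_SECRET=${SECRET}|" .env
grep '^MULTICA_JWT_SECRET=' .env   # shows the new value
```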
## Default Ports
| Service | Port | Description |
| -------- | ---- | ---------------------- |
| Frontend | 3000 | Web UI |
| Backend | 8080 | REST API and WebSocket |
| Postgres | 5432 | Internal only |
## Important Environment Variables
| Variable | Description | Default |
| -------------------------------- | ------------------------------------------ | ------------------------- |
| `MULTICA_VERSION` | Git ref used for source builds | `v0.1.32` |
| `MULTICA_BACKEND_PORT_OVERRIDE` | Host port for the backend API | `8080` |
| `MULTICA_FRONTEND_PORT_OVERRIDE` | Host port for the web UI | `3000` |
| `MULTICA_JWT_SECRET` | JWT signing secret (change for production) | `change-me-in-production` |
| `MULTICA_POSTGRES_PASSWORD` | PostgreSQL password | `multica` |
| `MULTICA_FRONTEND_ORIGIN` | Frontend URL for CORS and cookies | `http://localhost:3000` |
| `MULTICA_GOOGLE_CLIENT_ID` | Google OAuth client ID (optional) | - |
| `MULTICA_GOOGLE_CLIENT_SECRET` | Google OAuth client secret (optional) | - |
| `MULTICA_RESEND_API_KEY` | Resend API key for email (optional) | - |
| `TZ` | Container timezone | `UTC` |
## Storage
| Volume | Description |
| ---------------- | --------------- |
| `multica_pgdata` | PostgreSQL data |
## Security Notes
- Always change `MULTICA_JWT_SECRET` before exposing the service.
- Change `MULTICA_POSTGRES_PASSWORD` for production deployments.
- Google OAuth and email (Resend) are optional; the platform works without them.
- The first build downloads the full Multica repository from GitHub and builds Docker images, so it requires internet access and may take several minutes.
## References
- [Multica Repository](https://github.com/multica-ai/multica)
- [Self-Hosting Guide](https://github.com/multica-ai/multica/blob/main/SELF_HOSTING.md)
+77
@@ -0,0 +1,77 @@
# Multica
[English](./README.md) | [中文](./README.zh.md)
Multica 是一个开源的托管 Agent 平台,能将编码 Agent 变成真正的团队成员。分配任务、跟踪进度、积累可复用技能——支持 Claude Code、Codex、OpenClaw 和 OpenCode。此 Compose 配置从源码构建 Go 后端和 Next.js 前端,启动带有 pgvector 扩展的 PostgreSQL,并暴露两个服务。
## 服务
- **multica-backend**Go 后端(Chi 路由、sqlc、gorilla/websocket),启动时自动执行数据库迁移
- **multica-frontend**Next.js 16 Web 应用(App Routerstandalone 输出)
- **multica-postgres**PostgreSQL 17,包含 pgvector 扩展
## 快速开始
1. 复制环境变量示例文件:
```bash
cp .env.example .env
```
2. 编辑 `.env`,将 `MULTICA_JWT_SECRET` 修改为安全的随机值:
```bash
MULTICA_JWT_SECRET=$(openssl rand -base64 32)
```
3. 启动服务(首次运行会从源码构建镜像,需要几分钟):
```bash
docker compose up -d
```
4. 打开 Multica
- 前端界面:<http://localhost:3000>
- 后端 API<http://localhost:8080>
## 默认端口
| 服务 | 端口 | 说明 |
| -------- | ---- | --------------------- |
| Frontend | 3000 | Web 界面 |
| Backend | 8080 | REST API 和 WebSocket |
| Postgres | 5432 | 仅内部访问 |
## 关键环境变量
| 变量 | 说明 | 默认值 |
| -------------------------------- | ---------------------------------- | ------------------------- |
| `MULTICA_VERSION` | 用于源码构建的 Git 引用 | `v0.1.32` |
| `MULTICA_BACKEND_PORT_OVERRIDE` | 后端 API 对外端口 | `8080` |
| `MULTICA_FRONTEND_PORT_OVERRIDE` | Web 界面对外端口 | `3000` |
| `MULTICA_JWT_SECRET` | JWT 签名密钥(生产环境必须修改) | `change-me-in-production` |
| `MULTICA_POSTGRES_PASSWORD` | PostgreSQL 密码 | `multica` |
| `MULTICA_FRONTEND_ORIGIN` | 前端 URL,用于 CORS 和 Cookie 设置 | `http://localhost:3000` |
| `MULTICA_GOOGLE_CLIENT_ID` | Google OAuth 客户端 ID(可选) | - |
| `MULTICA_GOOGLE_CLIENT_SECRET` | Google OAuth 客户端密钥(可选) | - |
| `MULTICA_RESEND_API_KEY` | Resend 邮件服务的 API Key(可选) | - |
| `TZ` | 容器时区 | `UTC` |
## 存储
| 卷 | 说明 |
| ---------------- | --------------- |
| `multica_pgdata` | PostgreSQL 数据 |
## 安全说明
- 在对外暴露服务前,务必修改 `MULTICA_JWT_SECRET`。
- 生产环境部署时请修改 `MULTICA_POSTGRES_PASSWORD`。
- Google OAuth 和邮件服务(Resend)均为可选配置,平台在没有它们的情况下也能正常运行。
- 首次构建需要从 GitHub 下载完整的 Multica 仓库并构建 Docker 镜像,因此需要联网,可能需要几分钟。
## 参考资料
- [Multica 仓库](https://github.com/multica-ai/multica)
- [自托管指南](https://github.com/multica-ai/multica/blob/main/SELF_HOSTING.md)
+109
@@ -0,0 +1,109 @@
x-defaults: &defaults
restart: unless-stopped
logging:
driver: json-file
options:
max-size: ${MULTICA_LOG_MAX_SIZE:-100m}
max-file: '${MULTICA_LOG_MAX_FILE:-3}'
services:
multica-postgres:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}pgvector/pgvector:${MULTICA_PGVECTOR_VERSION:-pg17}
environment:
- TZ=${TZ:-UTC}
- POSTGRES_DB=${MULTICA_POSTGRES_DB:-multica}
- POSTGRES_USER=${MULTICA_POSTGRES_USER:-multica}
- POSTGRES_PASSWORD=${MULTICA_POSTGRES_PASSWORD:-multica}
volumes:
- multica_pgdata:/var/lib/postgresql/data
healthcheck:
test: [CMD-SHELL, pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB]
interval: 10s
timeout: 5s
retries: 5
start_period: 10s
deploy:
resources:
limits:
cpus: ${MULTICA_POSTGRES_CPU_LIMIT:-1.0}
memory: ${MULTICA_POSTGRES_MEMORY_LIMIT:-1G}
reservations:
cpus: ${MULTICA_POSTGRES_CPU_RESERVATION:-0.1}
memory: ${MULTICA_POSTGRES_MEMORY_RESERVATION:-256M}
multica-backend:
<<: *defaults
build:
context: https://github.com/multica-ai/multica.git#${MULTICA_VERSION:-v0.1.32}
dockerfile: Dockerfile
depends_on:
multica-postgres:
condition: service_healthy
ports:
- '${MULTICA_BACKEND_PORT_OVERRIDE:-8080}:8080'
environment:
- TZ=${TZ:-UTC}
- DATABASE_URL=postgres://${MULTICA_POSTGRES_USER:-multica}:${MULTICA_POSTGRES_PASSWORD:-multica}@multica-postgres:5432/${MULTICA_POSTGRES_DB:-multica}?sslmode=disable
- PORT=8080
- JWT_SECRET=${MULTICA_JWT_SECRET:-change-me-in-production}
- FRONTEND_ORIGIN=${MULTICA_FRONTEND_ORIGIN:-http://localhost:3000}
- CORS_ALLOWED_ORIGINS=${MULTICA_CORS_ALLOWED_ORIGINS:-}
- MULTICA_APP_URL=${MULTICA_APP_URL:-http://localhost:3000}
- RESEND_API_KEY=${MULTICA_RESEND_API_KEY:-}
- RESEND_FROM_EMAIL=${MULTICA_RESEND_FROM_EMAIL:-noreply@multica.ai}
- GOOGLE_CLIENT_ID=${MULTICA_GOOGLE_CLIENT_ID:-}
- GOOGLE_CLIENT_SECRET=${MULTICA_GOOGLE_CLIENT_SECRET:-}
- GOOGLE_REDIRECT_URI=${MULTICA_GOOGLE_REDIRECT_URI:-http://localhost:3000/auth/callback}
- COOKIE_DOMAIN=${MULTICA_COOKIE_DOMAIN:-}
healthcheck:
test:
- CMD-SHELL
- wget --no-verbose --tries=1 --spider http://127.0.0.1:8080/ || exit 1
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
deploy:
resources:
limits:
cpus: ${MULTICA_BACKEND_CPU_LIMIT:-2.0}
memory: ${MULTICA_BACKEND_MEMORY_LIMIT:-2G}
reservations:
cpus: ${MULTICA_BACKEND_CPU_RESERVATION:-0.1}
memory: ${MULTICA_BACKEND_MEMORY_RESERVATION:-512M}
multica-frontend:
<<: *defaults
build:
context: https://github.com/multica-ai/multica.git#${MULTICA_VERSION:-v0.1.32}
dockerfile: Dockerfile.web
args:
REMOTE_API_URL: http://multica-backend:8080
NEXT_PUBLIC_GOOGLE_CLIENT_ID: ${MULTICA_GOOGLE_CLIENT_ID:-}
depends_on:
- multica-backend
ports:
- '${MULTICA_FRONTEND_PORT_OVERRIDE:-3000}:3000'
environment:
- TZ=${TZ:-UTC}
- HOSTNAME=0.0.0.0
healthcheck:
test:
- CMD-SHELL
- wget --no-verbose --tries=1 --spider http://127.0.0.1:3000/ || exit 1
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
deploy:
resources:
limits:
cpus: ${MULTICA_FRONTEND_CPU_LIMIT:-1.0}
memory: ${MULTICA_FRONTEND_MEMORY_LIMIT:-1G}
reservations:
cpus: ${MULTICA_FRONTEND_CPU_RESERVATION:-0.1}
memory: ${MULTICA_FRONTEND_MEMORY_RESERVATION:-256M}
volumes:
multica_pgdata:
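A detail worth noting in the healthchecks above: Compose interpolates a single `$`, so `$$POSTGRES_USER` passes a literal `$POSTGRES_USER` through to the container, where the shell expands it against the container environment at check time. The effect, sketched outside Docker:

```shell
# Compose rewrites `$$VAR` to the literal `$VAR` before the container
# shell ever sees it; the shell then expands it at runtime.
export POSTGRES_USER=multica POSTGRES_DB=multica
sh -c 'echo pg_isready -U $POSTGRES_USER -d $POSTGRES_DB'
# prints: pg_isready -U multica -d multica
```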

Some files were not shown because too many files have changed in this diff.