Compare commits
10 Commits
febd1601a2...a9679a484f
| Author | SHA1 | Date |
|---|---|---|
| | a9679a484f | |
| | 8f30f94184 | |
| | 0b11022ef8 | |
| | 3cc5acafbd | |
| | 9a079fe79b | |
| | 4f4dbfba27 | |
| | 861bb6bb40 | |
| | 1c42cb2800 | |
| | 5f9820e7db | |
| | 42aa5c40d6 | |
README.md (11 changes)
@@ -12,6 +12,7 @@ Compose Anything helps users quickly deploy various services by providing a set
| [Apache HBase](./src/hbase) | 2.6 |
| [Apache HTTP Server](./src/apache) | 2.4.62 |
| [Apache Kafka](./src/kafka) | 7.8.0 |
| [Apache Pulsar](./src/pulsar) | 4.0.7 |
| [Apache RocketMQ](./src/rocketmq) | 5.3.1 |
| [Bifrost Gateway](./src/bifrost-gateway) | 1.2.15 |
| [Bolt.diy](./src/bolt-diy) | latest |

@@ -56,8 +57,8 @@ Compose Anything helps users quickly deploy various services by providing a set
| [Milvus Standalone Embed](./src/milvus-standalone-embed) | 2.6.2 |
| [Milvus Standalone](./src/milvus-standalone) | 2.6.2 |
| [Minecraft Bedrock Server](./src/minecraft-bedrock-server) | latest |
| [MinerU SGALNG](./src/mineru-sgalng) | 2.2.2 |
| [MinerU vLLM](./builds/mineru-vllm) | 2.5.4 |
| [MinerU SGLang](./src/mineru-sglang) | 2.2.2 |
| [MinerU vLLM](./builds/mineru-vllm) | 2.6.4 |
| [MinIO](./src/minio) | RELEASE.2025-09-07T16-13-09Z |
| [MLflow](./src/mlflow) | v2.20.2 |
| [MongoDB ReplicaSet Single](./src/mongodb-replicaset-single) | 8.0.13 |

@@ -67,6 +68,7 @@ Compose Anything helps users quickly deploy various services by providing a set
| [n8n](./src/n8n) | 1.114.0 |
| [Nacos](./src/nacos) | v3.1.0 |
| [NebulaGraph](./src/nebulagraph) | v3.8.0 |
| [NexaSDK](./src/nexa-sdk) | v0.2.62 |
| [Neo4j](./src/neo4j) | 5.27.4 |
| [Nginx](./src/nginx) | 1.29.1 |
| [Node Exporter](./src/node-exporter) | v1.8.2 |

@@ -74,6 +76,9 @@ Compose Anything helps users quickly deploy various services by providing a set
| [Odoo](./src/odoo) | 19.0 |
| [Ollama](./src/ollama) | 0.12.0 |
| [Open WebUI](./src/open-webui) | main |
| [Phoenix (Arize)](./src/phoenix) | 12.19.0 |
| [Pingora Proxy Manager](./src/pingora-proxy-manager) | v1.0.3 |
| [Open WebUI Rust](./src/open-webui-rust) | latest |
| [OpenCoze](./src/opencoze) | See Docs |
| [OpenCut](./src/opencut) | latest |
| [OpenList](./src/openlist) | latest |

@@ -87,6 +92,7 @@ Compose Anything helps users quickly deploy various services by providing a set
| [Qdrant](./src/qdrant) | 1.15.4 |
| [RabbitMQ](./src/rabbitmq) | 4.1.4 |
| [Ray](./src/ray) | 2.42.1 |
| [Redpanda](./src/redpanda) | v24.3.1 |
| [Redis Cluster](./src/redis-cluster) | 8.2.1 |
| [Redis](./src/redis) | 8.2.1 |
| [Restate Cluster](./src/restate-cluster) | 1.5.3 |

@@ -97,6 +103,7 @@ Compose Anything helps users quickly deploy various services by providing a set
| [Temporal](./src/temporal) | 1.24.2 |
| [TiDB](./src/tidb) | v8.5.0 |
| [TiKV](./src/tikv) | v8.5.0 |
| [Trigger.dev](./src/trigger-dev) | v4.2.0 |
| [Valkey Cluster](./src/valkey-cluster) | 8.0 |
| [Valkey](./src/valkey) | 8.0 |
| [Verdaccio](./src/verdaccio) | 6.1.2 |
README.zh.md (11 changes)
@@ -12,6 +12,7 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
| [Apache HBase](./src/hbase) | 2.6 |
| [Apache HTTP Server](./src/apache) | 2.4.62 |
| [Apache Kafka](./src/kafka) | 7.8.0 |
| [Apache Pulsar](./src/pulsar) | 4.0.7 |
| [Apache RocketMQ](./src/rocketmq) | 5.3.1 |
| [Bifrost Gateway](./src/bifrost-gateway) | 1.2.15 |
| [Bolt.diy](./src/bolt-diy) | latest |

@@ -56,8 +57,8 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
| [Milvus Standalone Embed](./src/milvus-standalone-embed) | 2.6.2 |
| [Milvus Standalone](./src/milvus-standalone) | 2.6.2 |
| [Minecraft Bedrock Server](./src/minecraft-bedrock-server) | latest |
| [MinerU SGALNG](./src/mineru-sgalng) | 2.2.2 |
| [MinerU vLLM](./builds/mineru-vllm) | 2.5.4 |
| [MinerU SGLang](./src/mineru-sglang) | 2.2.2 |
| [MinerU vLLM](./builds/mineru-vllm) | 2.6.4 |
| [MinIO](./src/minio) | RELEASE.2025-09-07T16-13-09Z |
| [MLflow](./src/mlflow) | v2.20.2 |
| [MongoDB ReplicaSet Single](./src/mongodb-replicaset-single) | 8.0.13 |

@@ -67,6 +68,7 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
| [n8n](./src/n8n) | 1.114.0 |
| [Nacos](./src/nacos) | v3.1.0 |
| [NebulaGraph](./src/nebulagraph) | v3.8.0 |
| [NexaSDK](./src/nexa-sdk) | v0.2.62 |
| [Neo4j](./src/neo4j) | 5.27.4 |
| [Nginx](./src/nginx) | 1.29.1 |
| [Node Exporter](./src/node-exporter) | v1.8.2 |

@@ -74,6 +76,9 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
| [Odoo](./src/odoo) | 19.0 |
| [Ollama](./src/ollama) | 0.12.0 |
| [Open WebUI](./src/open-webui) | main |
| [Phoenix (Arize)](./src/phoenix) | 12.19.0 |
| [Pingora Proxy Manager](./src/pingora-proxy-manager) | v1.0.3 |
| [Open WebUI Rust](./src/open-webui-rust) | latest |
| [OpenCoze](./src/opencoze) | See Docs |
| [OpenCut](./src/opencut) | latest |
| [OpenList](./src/openlist) | latest |

@@ -87,6 +92,7 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
| [Qdrant](./src/qdrant) | 1.15.4 |
| [RabbitMQ](./src/rabbitmq) | 4.1.4 |
| [Ray](./src/ray) | 2.42.1 |
| [Redpanda](./src/redpanda) | v24.3.1 |
| [Redis Cluster](./src/redis-cluster) | 8.2.1 |
| [Redis](./src/redis) | 8.2.1 |
| [Restate Cluster](./src/restate-cluster) | 1.5.3 |

@@ -97,6 +103,7 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
| [Temporal](./src/temporal) | 1.24.2 |
| [TiDB](./src/tidb) | v8.5.0 |
| [TiKV](./src/tikv) | v8.5.0 |
| [Trigger.dev](./src/trigger-dev) | v4.2.0 |
| [Valkey Cluster](./src/valkey-cluster) | 8.0 |
| [Valkey](./src/valkey) | 8.0 |
| [Verdaccio](./src/verdaccio) | 6.1.2 |
@@ -7,9 +7,9 @@ x-defaults: &defaults
      max-file: "3"

services:
  lama-cleaner:
  io-paint:
    <<: *defaults
    image: ${DOCKER_REGISTRY:-docker.io}/local/lama-cleaner:${BUILD_VERSION:-1.6.0}
    image: ${DOCKER_REGISTRY:-docker.io}/alexsuntop/io-paint:${BUILD_VERSION:-1.6.0}
    ports:
      - 8080:8080
    build:
@@ -1,109 +0,0 @@
x-defaults: &defaults
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: "3"

x-mineru-vllm: &mineru-vllm
  <<: *defaults
  image: ${MINERU_DOCKER_IMAGE:-alexsuntop/mineru:2.6.2}
  build:
    context: .
    dockerfile: Dockerfile
  environment:
    TZ: ${TZ:-UTC}
    MINERU_MODEL_SOURCE: local
  ulimits:
    memlock: -1
    stack: 67108864
  ipc: host
  deploy:
    resources:
      limits:
        cpus: '16.0'
        memory: 32G
      reservations:
        cpus: '8.0'
        memory: 16G
        devices:
          - driver: nvidia
            device_ids: [ '0' ]
            capabilities: [ gpu ]

services:
  mineru-vllm-server:
    <<: *mineru-vllm
    profiles: ["vllm-server"]
    ports:
      - ${MINERU_PORT_OVERRIDE_VLLM:-30000}:30000
    entrypoint: mineru-vllm-server
    command:
      - --host 0.0.0.0
      - --port 30000

      # If using multiple GPUs, increase throughput using vllm's multi-GPU parallel mode.
      # - --data-parallel-size 2
      # If running on a single GPU and encountering VRAM shortage, reduce the KV cache size by this parameter,
      # if VRAM issues persist, try lowering it further to `0.4` or below.
      # - --gpu-memory-utilization 0.5

    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:30000/health || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s

  mineru-api:
    <<: *mineru-vllm
    profiles: ["api"]
    ports:
      - ${MINERU_PORT_OVERRIDE_API:-8000}:8000
    entrypoint: mineru-api
    command:
      - --host 0.0.0.0
      - --port 8000

      # If using multiple GPUs, increase throughput using vllm's multi-GPU parallel mode.
      # - --data-parallel-size 2
      # If running on a single GPU and encountering VRAM shortage, reduce the KV cache size by this parameter,
      # if VRAM issues persist, try lowering it further to `0.4` or below.
      # - --gpu-memory-utilization 0.5
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s

  mineru-gradio:
    <<: *mineru-vllm
    profiles: ["gradio"]
    ports:
      - ${MINERU_PORT_OVERRIDE_GRADIO:-7860}:7860
    entrypoint: mineru-gradio
    command:
      - --server-name 0.0.0.0
      - --server-port 7860

      # Enable the vllm engine for Gradio
      - --enable-vllm-engine true
      # If you want to disable the API, set this to false
      # - --enable-api false
      # If you want to limit the number of pages for conversion, set this to a specific number
      # - --max-convert-pages 20

      # If using multiple GPUs, increase throughput using vllm's multi-GPU parallel mode.
      # - --data-parallel-size 2
      # If running on a single GPU and encountering VRAM shortage, reduce the KV cache size by this parameter,
      # if VRAM issues persist, try lowering it further to `0.4` or below.
      # - --gpu-memory-utilization 0.5
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:7860/"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s
@@ -1,5 +1,5 @@
# MinerU Docker image
MINERU_DOCKER_IMAGE=alexsuntop/mineru:2.5.4
MINERU_DOCKER_IMAGE=alexsuntop/mineru:2.6.5

# Port configurations
MINERU_PORT_OVERRIDE_VLLM=30000
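The `.env` change above only bumps the image tag, but elsewhere in this PR the image reference is rebuilt from two variables (`GLOBAL_REGISTRY` prefix plus `MINERU_VERSION` tag). A minimal shell sketch of how each style resolves, assuming the variable names from the diffs (the mirror prefix is a hypothetical value):

```shell
# Old style: the whole image reference lives in one variable.
unset MINERU_DOCKER_IMAGE
echo "${MINERU_DOCKER_IMAGE:-alexsuntop/mineru:2.6.2}"   # -> alexsuntop/mineru:2.6.2

# New style: registry prefix and tag are interpolated separately.
unset GLOBAL_REGISTRY MINERU_VERSION
echo "${GLOBAL_REGISTRY:-}alexsuntop/mineru:${MINERU_VERSION:-2.6.5}"
# -> alexsuntop/mineru:2.6.5

# With a hypothetical mirror prefix set:
GLOBAL_REGISTRY=registry.example.internal/
echo "${GLOBAL_REGISTRY:-}alexsuntop/mineru:${MINERU_VERSION:-2.6.5}"
# -> registry.example.internal/alexsuntop/mineru:2.6.5
```

Splitting the reference lets one variable retarget every image at a private mirror without repeating the tag.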
@@ -39,7 +39,7 @@ mineru -p demo.pdf -o ./output -b vlm-http-client -u http://localhost:30000

## Configuration

- `MINERU_DOCKER_IMAGE`: The Docker image for MinerU, default is `alexsuntop/mineru:2.5.4`.
- `MINERU_VERSION`: The version for MinerU, default is `2.6.5`.
- `MINERU_PORT_OVERRIDE_VLLM`: The host port for the VLLM server, default is `30000`.
- `MINERU_PORT_OVERRIDE_API`: The host port for the API service, default is `8000`.
- `MINERU_PORT_OVERRIDE_GRADIO`: The host port for the Gradio WebUI, default is `7860`.
@@ -39,7 +39,7 @@ mineru -p demo.pdf -o ./output -b vlm-http-client -u http://localhost:30000

## 配置

- `MINERU_DOCKER_IMAGE`: MinerU 的 Docker 镜像,默认为 `alexsuntop/mineru:2.5.4`。
- `MINERU_VERSION`: MinerU 的 Docker 镜像版本,默认为 `2.6.5`。
- `MINERU_PORT_OVERRIDE_VLLM`: VLLM 服务器的主机端口,默认为 `30000`。
- `MINERU_PORT_OVERRIDE_API`: API 服务的主机端口,默认为 `8000`。
- `MINERU_PORT_OVERRIDE_GRADIO`: Gradio WebUI 的主机端口,默认为 `7860`。
builds/mineru/docker-compose.yaml (new file, 139 lines)
@@ -0,0 +1,139 @@
x-defaults: &defaults
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: "3"

x-mineru-vllm: &mineru-vllm
  <<: *defaults
  image: ${GLOBAL_REGISTRY:-}alexsuntop/mineru:${MINERU_VERSION:-2.6.5}
  build:
    context: .
    dockerfile: Dockerfile
  environment:
    TZ: ${TZ:-UTC}
    MINERU_MODEL_SOURCE: local
  ulimits:
    memlock: -1
    stack: 67108864
  ipc: host
  deploy:
    resources:
      limits:
        cpus: '16.0'
        memory: 32G
      reservations:
        cpus: '8.0'
        memory: 16G
        devices:
          - driver: nvidia
            device_ids: [ '0' ]
            capabilities: [ gpu ]

services:
  mineru-openai-server:
    <<: *mineru-vllm
    profiles: ["openai-server"]
    ports:
      - ${MINERU_PORT_OVERRIDE_VLLM:-30000}:30000
    entrypoint: mineru-openai-server
    command:
      # ==================== Engine Selection ====================
      # WARNING: Only ONE engine can be enabled at a time!
      # Choose 'vllm' OR 'lmdeploy' (uncomment one line below)
      - --engine vllm
      # --engine lmdeploy

      # ==================== vLLM Engine Parameters ====================
      # Uncomment if using --engine vllm
      - --host 0.0.0.0
      - --port 30000
      # Multi-GPU configuration (increase throughput)
      # --data-parallel-size 2
      # Single GPU memory optimization (reduce if VRAM insufficient)
      # --gpu-memory-utilization 0.5  # Try 0.4 or lower if issues persist

      # ==================== LMDeploy Engine Parameters ====================
      # Uncomment if using --engine lmdeploy
      # --server-name 0.0.0.0
      # --server-port 30000
      # Multi-GPU configuration (increase throughput)
      # --dp 2
      # Single GPU memory optimization (reduce if VRAM insufficient)
      # --cache-max-entry-count 0.5  # Try 0.4 or lower if issues persist
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:30000/health || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s

  mineru-api:
    <<: *mineru-vllm
    profiles: ["api"]
    ports:
      - ${MINERU_PORT_OVERRIDE_API:-8000}:8000
    entrypoint: mineru-api
    command:
      # ==================== Server Configuration ====================
      - --host 0.0.0.0
      - --port 8000

      # ==================== vLLM Engine Parameters ====================
      # Multi-GPU configuration
      # --data-parallel-size 2
      # Single GPU memory optimization
      # --gpu-memory-utilization 0.5  # Try 0.4 or lower if VRAM insufficient

      # ==================== LMDeploy Engine Parameters ====================
      # Multi-GPU configuration
      # --dp 2
      # Single GPU memory optimization
      # --cache-max-entry-count 0.5  # Try 0.4 or lower if VRAM insufficient
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s

  mineru-gradio:
    <<: *mineru-vllm
    profiles: ["gradio"]
    ports:
      - ${MINERU_PORT_OVERRIDE_GRADIO:-7860}:7860
    entrypoint: mineru-gradio
    command:
      # ==================== Gradio Server Configuration ====================
      - --server-name 0.0.0.0
      - --server-port 7860

      # ==================== Gradio Feature Settings ====================
      # --enable-api false  # Disable API endpoint
      # --max-convert-pages 20  # Limit conversion page count

      # ==================== Engine Selection ====================
      # WARNING: Only ONE engine can be enabled at a time!

      # Option 1: vLLM Engine (recommended for most users)
      - --enable-vllm-engine true
      # Multi-GPU configuration
      # --data-parallel-size 2
      # Single GPU memory optimization
      # --gpu-memory-utilization 0.5  # Try 0.4 or lower if VRAM insufficient

      # Option 2: LMDeploy Engine
      # --enable-lmdeploy-engine true
      # Multi-GPU configuration
      # --dp 2
      # Single GPU memory optimization
      # --cache-max-entry-count 0.5  # Try 0.4 or lower if VRAM insufficient
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:7860/"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s
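Because every service in the new compose file carries a `profiles` entry, nothing starts with a bare `docker compose up`; one profile must be chosen explicitly. A hedged sketch of `.env`-style overrides driving it, assuming the variable names from the file above (values are illustrative, and the `docker compose` invocations are shown only as comments):

```shell
# Example .env-style overrides for builds/mineru/docker-compose.yaml (illustrative values).
GLOBAL_REGISTRY=                 # empty -> pull alexsuntop/mineru directly from the default registry
MINERU_VERSION=2.6.5             # tag interpolated into the image reference
MINERU_PORT_OVERRIDE_VLLM=30000  # host port for mineru-openai-server

# Exactly one profile is selected at run time, e.g.:
#   docker compose --profile openai-server up -d
#   docker compose --profile gradio up -d
```

Profiles keep the three mutually exclusive entrypoints (OpenAI server, API, Gradio) in one file without ever launching them together.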
@@ -27,11 +27,11 @@ services:
    deploy:
      resources:
        limits:
          cpus: '1.00'
          memory: 512M
          cpus: ${APACHE_CPU_LIMIT:-1.00}
          memory: ${APACHE_MEMORY_LIMIT:-512M}
        reservations:
          cpus: '0.25'
          memory: 128M
          cpus: ${APACHE_CPU_RESERVATION:-0.25}
          memory: ${APACHE_MEMORY_RESERVATION:-128M}
    healthcheck:
      test: ["CMD", "httpd", "-t"]
      interval: 30s

@@ -29,11 +29,11 @@ services:
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 1G
          cpus: ${APISIX_CPU_LIMIT:-1.0}
          memory: ${APISIX_MEMORY_LIMIT:-1G}
        reservations:
          cpus: '0.25'
          memory: 256M
          cpus: ${APISIX_CPU_RESERVATION:-0.25}
          memory: ${APISIX_MEMORY_RESERVATION:-256M}
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:9080/apisix/status || exit 1"]
      interval: 30s

@@ -83,11 +83,11 @@ services:
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
          cpus: ${ETCD_CPU_LIMIT:-0.5}
          memory: ${ETCD_MEMORY_LIMIT:-512M}
        reservations:
          cpus: '0.1'
          memory: 128M
          cpus: ${ETCD_CPU_RESERVATION:-0.1}
          memory: ${ETCD_MEMORY_RESERVATION:-128M}
    healthcheck:
      test: ["CMD", "etcdctl", "endpoint", "health"]
      interval: 30s

@@ -115,11 +115,11 @@ services:
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
          cpus: ${APISIX_DASHBOARD_CPU_LIMIT:-0.5}
          memory: ${APISIX_DASHBOARD_MEMORY_LIMIT:-512M}
        reservations:
          cpus: '0.1'
          memory: 128M
          cpus: ${APISIX_DASHBOARD_CPU_RESERVATION:-0.1}
          memory: ${APISIX_DASHBOARD_MEMORY_RESERVATION:-128M}

volumes:
  apisix_logs:

@@ -19,11 +19,11 @@ services:
    deploy:
      resources:
        limits:
          cpus: '0.50'
          memory: 512M
          cpus: ${BIFROST_CPU_LIMIT:-0.50}
          memory: ${BIFROST_MEMORY_LIMIT:-512M}
        reservations:
          cpus: '0.10'
          memory: 128M
          cpus: ${BIFROST_CPU_RESERVATION:-0.10}
          memory: ${BIFROST_MEMORY_RESERVATION:-128M}
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/health"]
      interval: 30s
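The pattern repeated across these hunks swaps hard-coded `deploy.resources` values for `${VAR:-default}` interpolation, so each limit can be overridden from the environment or an `.env` file while the old value survives as the default. Compose's `${VAR:-default}` matches POSIX shell parameter expansion, so the fallback behavior is easy to sanity-check; a minimal sketch (the variable name mirrors the diff, the script is illustrative):

```shell
# ${VAR:-default}: use VAR when it is set and non-empty, otherwise the default.
unset APACHE_CPU_LIMIT
echo "cpus=${APACHE_CPU_LIMIT:-1.00}"   # prints cpus=1.00 (fallback)

APACHE_CPU_LIMIT=2.0
echo "cpus=${APACHE_CPU_LIMIT:-1.00}"   # prints cpus=2.0 (override wins)

# An empty string also triggers the default with :- (unlike plain ${VAR-default}).
APACHE_CPU_LIMIT=
echo "cpus=${APACHE_CPU_LIMIT:-1.00}"   # prints cpus=1.00
```

The same expansion rule applies to every `*_CPU_LIMIT`, `*_MEMORY_LIMIT`, `*_CPU_RESERVATION`, and `*_MEMORY_RESERVATION` variable introduced below.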
@@ -19,11 +19,11 @@ services:
    deploy:
      resources:
        limits:
          cpus: '2.00'
          memory: 2G
          cpus: ${BOLT_DIY_CPU_LIMIT:-2.00}
          memory: ${BOLT_DIY_MEMORY_LIMIT:-2G}
        reservations:
          cpus: '0.5'
          memory: 512M
          cpus: ${BOLT_DIY_CPU_RESERVATION:-0.5}
          memory: ${BOLT_DIY_MEMORY_RESERVATION:-512M}
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:5173/"]
      interval: 30s

@@ -18,11 +18,11 @@ services:
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 4G
          cpus: ${BYTEBOT_DESKTOP_CPU_LIMIT:-2.0}
          memory: ${BYTEBOT_DESKTOP_MEMORY_LIMIT:-4G}
        reservations:
          cpus: '1.0'
          memory: 2G
          cpus: ${BYTEBOT_DESKTOP_CPU_RESERVATION:-1.0}
          memory: ${BYTEBOT_DESKTOP_MEMORY_RESERVATION:-2G}
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:9990/"]
      interval: 30s

@@ -50,11 +50,11 @@ services:
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 1G
          cpus: ${BYTEBOT_AGENT_CPU_LIMIT:-1.0}
          memory: ${BYTEBOT_AGENT_MEMORY_LIMIT:-1G}
        reservations:
          cpus: '0.5'
          memory: 512M
          cpus: ${BYTEBOT_AGENT_CPU_RESERVATION:-0.5}
          memory: ${BYTEBOT_AGENT_MEMORY_RESERVATION:-512M}
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:9991/health"]
      interval: 30s
@@ -77,11 +77,17 @@ services:
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
          cpus: ${BYTEBOT_UI_CPU_LIMIT:-0.5}
          memory: ${BYTEBOT_UI_MEMORY_LIMIT:-512M}
        reservations:
          cpus: '0.25'
          memory: 256M
          cpus: ${BYTEBOT_UI_CPU_RESERVATION:-0.25}
          memory: ${BYTEBOT_UI_MEMORY_RESERVATION:-256M}
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:9992/"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s

  bytebot-db:
    <<: *defaults

@@ -97,11 +103,11 @@ services:
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
          cpus: ${BYTEBOT_DB_CPU_LIMIT:-0.5}
          memory: ${BYTEBOT_DB_MEMORY_LIMIT:-512M}
        reservations:
          cpus: '0.25'
          memory: 256M
          cpus: ${BYTEBOT_DB_CPU_RESERVATION:-0.25}
          memory: ${BYTEBOT_DB_MEMORY_RESERVATION:-256M}
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
      interval: 30s
@@ -33,11 +33,11 @@ services:
    deploy:
      resources:
        limits:
          cpus: '2.00'
          memory: 2G
          cpus: ${CASSANDRA_CPU_LIMIT:-2.00}
          memory: ${CASSANDRA_MEMORY_LIMIT:-2G}
        reservations:
          cpus: '0.50'
          memory: 1G
          cpus: ${CASSANDRA_CPU_RESERVATION:-0.50}
          memory: ${CASSANDRA_MEMORY_RESERVATION:-1G}
    healthcheck:
      test: ["CMD-SHELL", "cqlsh -e 'DESCRIBE CLUSTER'"]
      interval: 30s

@@ -20,11 +20,11 @@ services:
    deploy:
      resources:
        limits:
          cpus: "0.5"
          memory: 512M
          cpus: ${CLASH_CPU_LIMIT:-0.5}
          memory: ${CLASH_MEMORY_LIMIT:-512M}
        reservations:
          cpus: "0.25"
          memory: 256M
          cpus: ${CLASH_CPU_RESERVATION:-0.25}
          memory: ${CLASH_MEMORY_RESERVATION:-256M}
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:80/"]
      interval: 30s

@@ -35,11 +35,11 @@ services:
    deploy:
      resources:
        limits:
          cpus: '4.0'
          memory: 4G
          cpus: ${CLICKHOUSE_CPU_LIMIT:-4.0}
          memory: ${CLICKHOUSE_MEMORY_LIMIT:-4G}
        reservations:
          cpus: '1.0'
          memory: 1G
          cpus: ${CLICKHOUSE_CPU_RESERVATION:-1.0}
          memory: ${CLICKHOUSE_MEMORY_RESERVATION:-1G}
    healthcheck:
      test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://localhost:8123/ping || exit 1"]
      interval: 30s
@@ -38,11 +38,11 @@ services:
    deploy:
      resources:
        limits:
          cpus: '1.00'
          memory: 512M
          cpus: ${CONSUL_CPU_LIMIT:-1.00}
          memory: ${CONSUL_MEMORY_LIMIT:-512M}
        reservations:
          cpus: '0.25'
          memory: 128M
          cpus: ${CONSUL_CPU_RESERVATION:-0.25}
          memory: ${CONSUL_MEMORY_RESERVATION:-128M}
    healthcheck:
      test: ["CMD-SHELL", "consul members"]
      interval: 30s

@@ -34,11 +34,11 @@ services:
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 2G
          cpus: ${DIFY_API_CPU_LIMIT:-1.0}
          memory: ${DIFY_API_MEMORY_LIMIT:-2G}
        reservations:
          cpus: '0.5'
          memory: 1G
          cpus: ${DIFY_API_CPU_RESERVATION:-0.5}
          memory: ${DIFY_API_MEMORY_RESERVATION:-1G}
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:5001/health"]
      interval: 30s

@@ -73,11 +73,11 @@ services:
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 2G
          cpus: ${DIFY_WORKER_CPU_LIMIT:-1.0}
          memory: ${DIFY_WORKER_MEMORY_LIMIT:-2G}
        reservations:
          cpus: '0.5'
          memory: 1G
          cpus: ${DIFY_WORKER_CPU_RESERVATION:-0.5}
          memory: ${DIFY_WORKER_MEMORY_RESERVATION:-1G}

  dify-web:
    <<: *defaults

@@ -94,11 +94,11 @@ services:
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
          cpus: ${DIFY_WEB_CPU_LIMIT:-0.5}
          memory: ${DIFY_WEB_MEMORY_LIMIT:-512M}
        reservations:
          cpus: '0.25'
          memory: 256M
          cpus: ${DIFY_WEB_CPU_RESERVATION:-0.25}
          memory: ${DIFY_WEB_MEMORY_RESERVATION:-256M}

  dify-db:
    <<: *defaults

@@ -114,11 +114,11 @@ services:
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
          cpus: ${DIFY_DB_CPU_LIMIT:-0.5}
          memory: ${DIFY_DB_MEMORY_LIMIT:-512M}
        reservations:
          cpus: '0.25'
          memory: 256M
          cpus: ${DIFY_DB_CPU_RESERVATION:-0.25}
          memory: ${DIFY_DB_MEMORY_RESERVATION:-256M}
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
      interval: 30s

@@ -137,11 +137,11 @@ services:
    deploy:
      resources:
        limits:
          cpus: '0.25'
          memory: 256M
          cpus: ${DIFY_REDIS_CPU_LIMIT:-0.25}
          memory: ${DIFY_REDIS_MEMORY_LIMIT:-256M}
        reservations:
          cpus: '0.1'
          memory: 128M
          cpus: ${DIFY_REDIS_CPU_RESERVATION:-0.1}
          memory: ${DIFY_REDIS_MEMORY_RESERVATION:-128M}
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s

@@ -166,11 +166,11 @@ services:
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 1G
          cpus: ${DIFY_WEAVIATE_CPU_LIMIT:-0.5}
          memory: ${DIFY_WEAVIATE_MEMORY_LIMIT:-1G}
        reservations:
          cpus: '0.25'
          memory: 512M
          cpus: ${DIFY_WEAVIATE_CPU_RESERVATION:-0.25}
          memory: ${DIFY_WEAVIATE_MEMORY_RESERVATION:-512M}
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/v1/.well-known/ready"]
      interval: 30s
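The healthchecks in these hunks all reduce to the same contract: the probe command's exit status decides healthy vs unhealthy, which is why `curl -f … || exit 1`, `wget --spider …`, `pg_isready`, and `redis-cli ping` are interchangeable here. A minimal shell sketch of that exit-code contract (`probe` is a hypothetical stand-in for the real HTTP or CLI check):

```shell
# A healthcheck is just a command whose exit status Docker samples on an interval.
probe() { return "$1"; }   # stand-in: 0 = endpoint answered, non-zero = it did not

probe 0 || exit 1          # success: the "|| exit 1" never fires
echo "healthy"

# A failing probe propagates a non-zero status, which counts as one failed try
# (after `retries` consecutive failures the container is marked unhealthy).
if ! (probe 7 || exit 1); then
  echo "unhealthy"
fi
```

The `|| exit 1` matters for probes like `curl -f` whose failure codes vary: it normalizes any failure to a single non-zero status.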
@@ -31,8 +31,8 @@ services:
    deploy:
      resources:
        limits:
          cpus: '0.50'
          memory: 128M
          cpus: ${DNSMASQ_CPU_LIMIT:-0.50}
          memory: ${DNSMASQ_MEMORY_LIMIT:-128M}
        reservations:
          cpus: '0.10'
          memory: 32M
          cpus: ${DNSMASQ_CPU_RESERVATION:-0.10}
          memory: ${DNSMASQ_MEMORY_RESERVATION:-32M}

@@ -27,11 +27,11 @@ services:
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 1G
          cpus: ${REGISTRY_CPU_LIMIT:-1.0}
          memory: ${REGISTRY_MEMORY_LIMIT:-1G}
        reservations:
          cpus: '0.1'
          memory: 128M
          cpus: ${REGISTRY_CPU_RESERVATION:-0.1}
          memory: ${REGISTRY_MEMORY_RESERVATION:-128M}
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:5000/"]
      interval: 30s

@@ -24,11 +24,11 @@ services:
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 512M
          cpus: ${DOCKGE_CPU_LIMIT:-1.0}
          memory: ${DOCKGE_MEMORY_LIMIT:-512M}
        reservations:
          cpus: '0.25'
          memory: 256M
          cpus: ${DOCKGE_CPU_RESERVATION:-0.25}
          memory: ${DOCKGE_MEMORY_RESERVATION:-256M}
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:5001/"]
      interval: 30s

@@ -22,11 +22,11 @@ services:
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 4G
          cpus: ${DUCKDB_CPU_LIMIT:-2.0}
          memory: ${DUCKDB_MEMORY_LIMIT:-4G}
        reservations:
          cpus: '0.5'
          memory: 512M
          cpus: ${DUCKDB_CPU_RESERVATION:-0.5}
          memory: ${DUCKDB_MEMORY_RESERVATION:-512M}
    healthcheck:
      test: ["CMD-SHELL", "duckdb /data/duckdb.db -c 'SELECT 1' || exit 1"]
      interval: 30s

@@ -23,11 +23,11 @@ services:
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 4G
          cpus: ${EASY_DATASET_CPU_LIMIT:-2.0}
          memory: ${EASY_DATASET_MEMORY_LIMIT:-4G}
        reservations:
          cpus: '0.5'
          memory: 1G
          cpus: ${EASY_DATASET_CPU_RESERVATION:-0.5}
          memory: ${EASY_DATASET_MEMORY_RESERVATION:-1G}
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:1717"]
      interval: 30s
@@ -36,11 +36,11 @@ services:
    deploy:
      resources:
        limits:
          cpus: '2.00'
          memory: 2G
          cpus: ${ELASTICSEARCH_CPU_LIMIT:-2.00}
          memory: ${ELASTICSEARCH_MEMORY_LIMIT:-2G}
        reservations:
          cpus: '0.50'
          memory: 1G
          cpus: ${ELASTICSEARCH_CPU_RESERVATION:-0.50}
          memory: ${ELASTICSEARCH_MEMORY_RESERVATION:-1G}
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:9200/_cluster/health || exit 1"]
      interval: 30s

@@ -50,11 +50,11 @@ services:
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 1G
          cpus: ${ETCD_CPU_LIMIT:-1.0}
          memory: ${ETCD_MEMORY_LIMIT:-1G}
        reservations:
          cpus: '0.25'
          memory: 256M
          cpus: ${ETCD_CPU_RESERVATION:-0.25}
          memory: ${ETCD_MEMORY_RESERVATION:-256M}
    healthcheck:
      test: ["CMD", "etcdctl", "endpoint", "health"]
      interval: 30s

@@ -49,11 +49,11 @@ services:
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 1G
          cpus: ${PLAYWRIGHT_CPU_LIMIT:-1.0}
          memory: ${PLAYWRIGHT_MEMORY_LIMIT:-1G}
        reservations:
          cpus: "0.5"
          memory: 512M
          cpus: ${PLAYWRIGHT_CPU_RESERVATION:-0.5}
          memory: ${PLAYWRIGHT_MEMORY_RESERVATION:-512M}

  api:
    <<: *defaults

@@ -78,11 +78,11 @@ services:
    deploy:
      resources:
        limits:
          cpus: "2.0"
          memory: 4G
          cpus: ${FIRECRAWL_API_CPU_LIMIT:-2.0}
          memory: ${FIRECRAWL_API_MEMORY_LIMIT:-4G}
        reservations:
          cpus: "1.0"
          memory: 2G
          cpus: ${FIRECRAWL_API_CPU_RESERVATION:-1.0}
          memory: ${FIRECRAWL_API_MEMORY_RESERVATION:-2G}
    healthcheck:
      test:
        [

@@ -107,11 +107,11 @@ services:
    deploy:
      resources:
        limits:
          cpus: "0.5"
          memory: 512M
          cpus: ${REDIS_CPU_LIMIT:-0.5}
          memory: ${REDIS_MEMORY_LIMIT:-512M}
        reservations:
          cpus: "0.25"
          memory: 256M
          cpus: ${REDIS_CPU_RESERVATION:-0.25}
          memory: ${REDIS_MEMORY_RESERVATION:-256M}
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s

@@ -133,11 +133,11 @@ services:
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 1G
          cpus: ${NUQPOSTGRES_CPU_LIMIT:-1.0}
          memory: ${NUQPOSTGRES_MEMORY_LIMIT:-1G}
        reservations:
          cpus: "0.5"
          memory: 512M
          cpus: ${NUQPOSTGRES_CPU_RESERVATION:-0.5}
          memory: ${NUQPOSTGRES_MEMORY_RESERVATION:-512M}
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-postgres}"]
      interval: 10s
@@ -29,11 +29,11 @@ services:
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 2G
          cpus: ${FLINK_JOBMANAGER_CPU_LIMIT:-2.0}
          memory: ${FLINK_JOBMANAGER_MEMORY_LIMIT:-2G}
        reservations:
          cpus: '0.5'
          memory: 1G
          cpus: ${FLINK_JOBMANAGER_CPU_RESERVATION:-0.5}
          memory: ${FLINK_JOBMANAGER_MEMORY_RESERVATION:-1G}
    healthcheck:
      test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://localhost:8081 || exit 1"]
      interval: 30s

@@ -62,11 +62,11 @@ services:
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 2G
          cpus: ${FLINK_TASKMANAGER_CPU_LIMIT:-2.0}
          memory: ${FLINK_TASKMANAGER_MEMORY_LIMIT:-2G}
        reservations:
          cpus: '0.5'
          memory: 1G
          cpus: ${FLINK_TASKMANAGER_CPU_RESERVATION:-0.5}
          memory: ${FLINK_TASKMANAGER_MEMORY_RESERVATION:-1G}
    healthcheck:
      test: ["CMD-SHELL", "ps aux | grep -v grep | grep -q taskmanager || exit 1"]
      interval: 30s
@@ -20,8 +20,8 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '0.5'
-          memory: 128M
+          cpus: ${FRPC_CPU_LIMIT:-0.5}
+          memory: ${FRPC_MEMORY_LIMIT:-128M}
         reservations:
-          cpus: '0.1'
-          memory: 64M
+          cpus: ${FRPC_CPU_RESERVATION:-0.1}
+          memory: ${FRPC_MEMORY_RESERVATION:-64M}
@@ -25,11 +25,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '0.5'
-          memory: 128M
+          cpus: ${FRPS_CPU_LIMIT:-0.5}
+          memory: ${FRPS_MEMORY_LIMIT:-128M}
         reservations:
-          cpus: '0.1'
-          memory: 64M
+          cpus: ${FRPS_CPU_RESERVATION:-0.1}
+          memory: ${FRPS_MEMORY_RESERVATION:-64M}
     healthcheck:
       test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:${FRP_ADMIN_PORT:-7890}/"]
       interval: 30s
@@ -24,11 +24,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '1.0'
-          memory: 2G
+          cpus: ${GITEA_RUNNER_CPU_LIMIT:-1.0}
+          memory: ${GITEA_RUNNER_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: '0.5'
-          memory: 1G
+          cpus: ${GITEA_RUNNER_CPU_RESERVATION:-0.5}
+          memory: ${GITEA_RUNNER_MEMORY_RESERVATION:-1G}
 
 volumes:
   gitea_runner_data:
@@ -30,11 +30,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '1.0'
-          memory: 1G
+          cpus: ${GITEA_CPU_LIMIT:-1.0}
+          memory: ${GITEA_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: '0.5'
-          memory: 512M
+          cpus: ${GITEA_CPU_RESERVATION:-0.5}
+          memory: ${GITEA_MEMORY_RESERVATION:-512M}
     healthcheck:
       test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/"]
       interval: 30s
@@ -55,11 +55,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '1.0'
-          memory: 1G
+          cpus: ${GITEA_DB_CPU_LIMIT:-1.0}
+          memory: ${GITEA_DB_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: '0.5'
-          memory: 512M
+          cpus: ${GITEA_DB_CPU_RESERVATION:-0.5}
+          memory: ${GITEA_DB_MEMORY_RESERVATION:-512M}
     healthcheck:
       test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
       interval: 30s
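The gitea-db healthcheck writes `$$POSTGRES_USER` rather than `${POSTGRES_USER}`: Compose collapses `$$` to a literal `$`, so the variable is expanded by the shell inside the container at check time, not by Compose when the file is parsed. The difference is ordinary immediate-vs-deferred expansion, shown here in plain shell:

```shell
# Double quotes expand now; single quotes keep the literal "$VAR" for later,
# which is what "$$VAR" in a Compose file turns into.
POSTGRES_USER=gitea                     # hypothetical value for illustration
now="pg_isready -U $POSTGRES_USER"      # expanded immediately
later='pg_isready -U $POSTGRES_USER'    # literal, expanded at runtime
echo "$now"      # -> pg_isready -U gitea
echo "$later"    # -> pg_isready -U $POSTGRES_USER
```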
@@ -18,8 +18,8 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '1.0'
-          memory: 2G
+          cpus: ${GITLAB_RUNNER_CPU_LIMIT:-1.0}
+          memory: ${GITLAB_RUNNER_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: '0.5'
-          memory: 1G
+          cpus: ${GITLAB_RUNNER_CPU_RESERVATION:-0.5}
+          memory: ${GITLAB_RUNNER_MEMORY_RESERVATION:-1G}
@@ -24,11 +24,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '2.0'
-          memory: 8G
+          cpus: ${GITLAB_CPU_LIMIT:-2.0}
+          memory: ${GITLAB_MEMORY_LIMIT:-8G}
         reservations:
-          cpus: '1.0'
-          memory: 4G
+          cpus: ${GITLAB_CPU_RESERVATION:-1.0}
+          memory: ${GITLAB_MEMORY_RESERVATION:-4G}
     healthcheck:
       test: ["CMD", "/opt/gitlab/bin/gitlab-healthcheck", "--fail"]
       interval: 60s
@@ -26,11 +26,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '8.0'
-          memory: 8G
+          cpus: ${GPUSTACK_CPU_LIMIT:-8.0}
+          memory: ${GPUSTACK_MEMORY_LIMIT:-8G}
         reservations:
-          cpus: '2.0'
-          memory: 4G
+          cpus: ${GPUSTACK_CPU_RESERVATION:-2.0}
+          memory: ${GPUSTACK_MEMORY_RESERVATION:-4G}
           devices:
             - driver: nvidia
               device_ids: [ '0' ]
@@ -31,11 +31,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '1.0'
-          memory: 1G
+          cpus: ${GRAFANA_CPU_LIMIT:-1.0}
+          memory: ${GRAFANA_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: '0.25'
-          memory: 256M
+          cpus: ${GRAFANA_CPU_RESERVATION:-0.25}
+          memory: ${GRAFANA_MEMORY_RESERVATION:-256M}
     healthcheck:
       test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/api/health"]
       interval: 30s
@@ -29,11 +29,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '2.0'
-          memory: 2G
+          cpus: ${HALO_CPU_LIMIT:-2.0}
+          memory: ${HALO_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: '0.5'
-          memory: 512M
+          cpus: ${HALO_CPU_RESERVATION:-0.5}
+          memory: ${HALO_MEMORY_RESERVATION:-512M}
     healthcheck:
       test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8090/actuator/health"]
       interval: 30s
@@ -61,11 +61,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '1.0'
-          memory: 1G
+          cpus: ${HALO_DB_CPU_LIMIT:-1.0}
+          memory: ${HALO_DB_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: '0.25'
-          memory: 256M
+          cpus: ${HALO_DB_CPU_RESERVATION:-0.25}
+          memory: ${HALO_DB_MEMORY_RESERVATION:-256M}
 
 volumes:
   halo_data:
@@ -38,6 +38,14 @@ services:
       - REGISTRY_STORAGE_PROVIDER_NAME=filesystem
       - READ_ONLY=false
       - RELOAD_KEY=${HARBOR_RELOAD_KEY:-}
+    deploy:
+      resources:
+        limits:
+          cpus: ${HARBOR_CORE_CPU_LIMIT:-2.0}
+          memory: ${HARBOR_CORE_MEMORY_LIMIT:-2G}
+        reservations:
+          cpus: ${HARBOR_CORE_CPU_RESERVATION:-0.5}
+          memory: ${HARBOR_CORE_MEMORY_RESERVATION:-1G}
     healthcheck:
       test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/api/v2.0/ping"]
       interval: 30s
@@ -67,6 +75,14 @@ services:
       - POSTGRESQL_USERNAME=postgres
       - POSTGRESQL_PASSWORD=${HARBOR_DB_PASSWORD:-password}
       - POSTGRESQL_DATABASE=registry
+    deploy:
+      resources:
+        limits:
+          cpus: ${HARBOR_JOBSERVICE_CPU_LIMIT:-1.0}
+          memory: ${HARBOR_JOBSERVICE_MEMORY_LIMIT:-1G}
+        reservations:
+          cpus: ${HARBOR_JOBSERVICE_CPU_RESERVATION:-0.25}
+          memory: ${HARBOR_JOBSERVICE_MEMORY_RESERVATION:-512M}
 
   # Harbor Registry
   harbor-registry:
@@ -77,6 +93,14 @@ services:
     environment:
       - TZ=${TZ:-UTC}
       - REGISTRY_HTTP_SECRET=${HARBOR_REGISTRY_SECRET:-}
+    deploy:
+      resources:
+        limits:
+          cpus: ${HARBOR_REGISTRY_CPU_LIMIT:-1.0}
+          memory: ${HARBOR_REGISTRY_MEMORY_LIMIT:-1G}
+        reservations:
+          cpus: ${HARBOR_REGISTRY_CPU_RESERVATION:-0.25}
+          memory: ${HARBOR_REGISTRY_MEMORY_RESERVATION:-512M}
     healthcheck:
       test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:5000/"]
       interval: 30s
@@ -90,6 +114,14 @@ services:
     image: ${GLOBAL_REGISTRY:-}goharbor/harbor-portal:${HARBOR_VERSION:-v2.12.0}
     environment:
       - TZ=${TZ:-UTC}
+    deploy:
+      resources:
+        limits:
+          cpus: ${HARBOR_PORTAL_CPU_LIMIT:-0.5}
+          memory: ${HARBOR_PORTAL_MEMORY_LIMIT:-512M}
+        reservations:
+          cpus: ${HARBOR_PORTAL_CPU_RESERVATION:-0.25}
+          memory: ${HARBOR_PORTAL_MEMORY_RESERVATION:-256M}
 
   # Harbor Proxy (Nginx)
   harbor-proxy:
@@ -107,6 +139,14 @@ services:
         condition: service_healthy
     environment:
       - TZ=${TZ:-UTC}
+    deploy:
+      resources:
+        limits:
+          cpus: ${HARBOR_PROXY_CPU_LIMIT:-1.0}
+          memory: ${HARBOR_PROXY_MEMORY_LIMIT:-512M}
+        reservations:
+          cpus: ${HARBOR_PROXY_CPU_RESERVATION:-0.25}
+          memory: ${HARBOR_PROXY_MEMORY_RESERVATION:-256M}
     healthcheck:
       test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/"]
       interval: 30s
@@ -127,11 +167,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '1.00'
-          memory: 1G
+          cpus: ${HARBOR_DB_CPU_LIMIT:-1.00}
+          memory: ${HARBOR_DB_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: '0.25'
-          memory: 256M
+          cpus: ${HARBOR_DB_CPU_RESERVATION:-0.25}
+          memory: ${HARBOR_DB_MEMORY_RESERVATION:-256M}
     healthcheck:
       test: ["CMD-SHELL", "pg_isready -U postgres"]
       interval: 30s
@@ -150,11 +190,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '0.50'
-          memory: 256M
+          cpus: ${HARBOR_REDIS_CPU_LIMIT:-0.50}
+          memory: ${HARBOR_REDIS_MEMORY_LIMIT:-256M}
         reservations:
-          cpus: '0.10'
-          memory: 64M
+          cpus: ${HARBOR_REDIS_CPU_RESERVATION:-0.10}
+          memory: ${HARBOR_REDIS_MEMORY_RESERVATION:-64M}
     healthcheck:
       test: ["CMD", "redis-cli", "ping"]
       interval: 10s
@@ -25,11 +25,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '4.0'
-          memory: 4G
+          cpus: ${HBASE_CPU_LIMIT:-4.0}
+          memory: ${HBASE_MEMORY_LIMIT:-4G}
         reservations:
-          cpus: '1.0'
-          memory: 2G
+          cpus: ${HBASE_CPU_RESERVATION:-1.0}
+          memory: ${HBASE_MEMORY_RESERVATION:-2G}
     healthcheck:
       test: ["CMD-SHELL", "echo 'status' | hbase shell -n || exit 1"]
       interval: 30s
@@ -28,11 +28,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '2.00'
-          memory: 3G
+          cpus: ${JENKINS_CPU_LIMIT:-2.00}
+          memory: ${JENKINS_MEMORY_LIMIT:-3G}
         reservations:
-          cpus: '0.50'
-          memory: 1G
+          cpus: ${JENKINS_CPU_RESERVATION:-0.50}
+          memory: ${JENKINS_MEMORY_RESERVATION:-1G}
     healthcheck:
       test: ["CMD-SHELL", "curl -f http://localhost:8080/login || exit 1"]
       interval: 30s
@@ -22,11 +22,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '2.00'
-          memory: 2G
+          cpus: ${OFFICECONVERTER_CPU_LIMIT:-2.00}
+          memory: ${OFFICECONVERTER_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: '0.50'
-          memory: 512M
+          cpus: ${OFFICECONVERTER_CPU_RESERVATION:-0.50}
+          memory: ${OFFICECONVERTER_MEMORY_RESERVATION:-512M}
     healthcheck:
       test: ["CMD", "curl", "-f", "http://localhost:8000/ready"]
       interval: 30s
@@ -28,11 +28,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '1.00'
-          memory: 1G
+          cpus: ${ZOOKEEPER_CPU_LIMIT:-1.00}
+          memory: ${ZOOKEEPER_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: '0.25'
-          memory: 256M
+          cpus: ${ZOOKEEPER_CPU_RESERVATION:-0.25}
+          memory: ${ZOOKEEPER_MEMORY_RESERVATION:-256M}
     healthcheck:
       test: ["CMD-SHELL", "echo ruok | nc localhost 2181 | grep imok"]
       interval: 30s
@@ -76,11 +76,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '2.00'
-          memory: 2G
+          cpus: ${KAFKA_CPU_LIMIT:-2.00}
+          memory: ${KAFKA_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: '0.50'
-          memory: 1G
+          cpus: ${KAFKA_CPU_RESERVATION:-0.50}
+          memory: ${KAFKA_MEMORY_RESERVATION:-1G}
     healthcheck:
       test: ["CMD-SHELL", "kafka-broker-api-versions --bootstrap-server localhost:9092"]
       interval: 30s
@@ -108,11 +108,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '0.50'
-          memory: 512M
+          cpus: ${KAFKA_UI_CPU_LIMIT:-0.50}
+          memory: ${KAFKA_UI_MEMORY_LIMIT:-512M}
         reservations:
-          cpus: '0.10'
-          memory: 128M
+          cpus: ${KAFKA_UI_CPU_RESERVATION:-0.10}
+          memory: ${KAFKA_UI_MEMORY_RESERVATION:-128M}
     profiles:
       - ui
@@ -29,11 +29,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '1.00'
-          memory: 1G
+          cpus: ${KIBANA_CPU_LIMIT:-1.00}
+          memory: ${KIBANA_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: '0.25'
-          memory: 512M
+          cpus: ${KIBANA_CPU_RESERVATION:-0.25}
+          memory: ${KIBANA_MEMORY_RESERVATION:-512M}
     healthcheck:
       test: ["CMD-SHELL", "curl -f http://localhost:5601/api/status || exit 1"]
       interval: 30s
@@ -32,11 +32,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '2.0'
-          memory: 2G
+          cpus: ${KODBOX_CPU_LIMIT:-2.0}
+          memory: ${KODBOX_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: '0.5'
-          memory: 256M
+          cpus: ${KODBOX_CPU_RESERVATION:-0.5}
+          memory: ${KODBOX_MEMORY_RESERVATION:-256M}
     healthcheck:
       test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:80/"]
       interval: 30s
@@ -68,11 +68,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '1.0'
-          memory: 1G
+          cpus: ${KODBOX_DB_CPU_LIMIT:-1.0}
+          memory: ${KODBOX_DB_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: '0.25'
-          memory: 256M
+          cpus: ${KODBOX_DB_CPU_RESERVATION:-0.25}
+          memory: ${KODBOX_DB_MEMORY_RESERVATION:-256M}
 
   kodbox-redis:
     <<: *defaults
@@ -94,11 +94,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '0.50'
-          memory: 512M
+          cpus: ${KODBOX_REDIS_CPU_LIMIT:-0.50}
+          memory: ${KODBOX_REDIS_MEMORY_LIMIT:-512M}
         reservations:
-          cpus: '0.25'
-          memory: 128M
+          cpus: ${KODBOX_REDIS_CPU_RESERVATION:-0.25}
+          memory: ${KODBOX_REDIS_MEMORY_RESERVATION:-128M}
 
 volumes:
   kodbox_data:
@@ -21,11 +21,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '1.00'
-          memory: 512M
+          cpus: ${KONG_DB_CPU_LIMIT:-1.00}
+          memory: ${KONG_DB_MEMORY_LIMIT:-512M}
         reservations:
-          cpus: '0.25'
-          memory: 128M
+          cpus: ${KONG_DB_CPU_RESERVATION:-0.25}
+          memory: ${KONG_DB_MEMORY_RESERVATION:-128M}
     healthcheck:
       test: ["CMD-SHELL", "pg_isready -U kong"]
       interval: 30s
@@ -83,11 +83,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '1.00'
-          memory: 1G
+          cpus: ${KONG_CPU_LIMIT:-1.00}
+          memory: ${KONG_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: '0.25'
-          memory: 256M
+          cpus: ${KONG_CPU_RESERVATION:-0.25}
+          memory: ${KONG_MEMORY_RESERVATION:-256M}
     healthcheck:
       test: ["CMD-SHELL", "kong health"]
       interval: 30s
@@ -113,11 +113,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '0.50'
-          memory: 256M
+          cpus: ${KONGA_CPU_LIMIT:-0.50}
+          memory: ${KONGA_MEMORY_LIMIT:-256M}
         reservations:
-          cpus: '0.10'
-          memory: 64M
+          cpus: ${KONGA_CPU_RESERVATION:-0.10}
+          memory: ${KONGA_MEMORY_RESERVATION:-64M}
     profiles:
       - gui
@@ -26,11 +26,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '2.0'
-          memory: 2G
+          cpus: ${LANGFUSE_CPU_LIMIT:-2.0}
+          memory: ${LANGFUSE_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: '0.5'
-          memory: 512M
+          cpus: ${LANGFUSE_CPU_RESERVATION:-0.5}
+          memory: ${LANGFUSE_MEMORY_RESERVATION:-512M}
     healthcheck:
       test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/api/public/health"]
       interval: 30s
@@ -57,11 +57,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '1.0'
-          memory: 1G
+          cpus: ${LANGFUSE_DB_CPU_LIMIT:-1.0}
+          memory: ${LANGFUSE_DB_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: '0.25'
-          memory: 256M
+          cpus: ${LANGFUSE_DB_CPU_RESERVATION:-0.25}
+          memory: ${LANGFUSE_DB_MEMORY_RESERVATION:-256M}
 
 volumes:
   langfuse_db_data:
@@ -27,11 +27,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '2.00'
-          memory: 2G
+          cpus: ${LIBREOFFICE_CPU_LIMIT:-2.00}
+          memory: ${LIBREOFFICE_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: '0.50'
-          memory: 512M
+          cpus: ${LIBREOFFICE_CPU_RESERVATION:-0.50}
+          memory: ${LIBREOFFICE_MEMORY_RESERVATION:-512M}
     healthcheck:
       test: ["CMD", "curl", "-f", "-k", "https://localhost:3001/"]
       interval: 30s
@@ -41,11 +41,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '2.00'
-          memory: 2G
+          cpus: ${LITELLM_CPU_LIMIT:-2.00}
+          memory: ${LITELLM_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: '0.50'
-          memory: 512M
+          cpus: ${LITELLM_CPU_RESERVATION:-0.50}
+          memory: ${LITELLM_MEMORY_RESERVATION:-512M}
 
   db:
     <<: *defaults
@@ -68,11 +68,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '1.00'
-          memory: 1G
+          cpus: ${LITELLM_DB_CPU_LIMIT:-1.00}
+          memory: ${LITELLM_DB_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: '0.25'
-          memory: 256M
+          cpus: ${LITELLM_DB_CPU_RESERVATION:-0.25}
+          memory: ${LITELLM_DB_MEMORY_RESERVATION:-256M}
 
   prometheus:
     <<: *defaults
@@ -99,11 +99,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '1.00'
-          memory: 1G
+          cpus: ${LITELLM_PROMETHEUS_CPU_LIMIT:-1.00}
+          memory: ${LITELLM_PROMETHEUS_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: '0.25'
-          memory: 256M
+          cpus: ${LITELLM_PROMETHEUS_CPU_RESERVATION:-0.25}
+          memory: ${LITELLM_PROMETHEUS_MEMORY_RESERVATION:-256M}
 
 volumes:
   prometheus_data:
@@ -38,11 +38,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '1.50'
-          memory: 2G
+          cpus: ${LOGSTASH_CPU_LIMIT:-1.50}
+          memory: ${LOGSTASH_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: '0.50'
-          memory: 1G
+          cpus: ${LOGSTASH_CPU_RESERVATION:-0.50}
+          memory: ${LOGSTASH_MEMORY_RESERVATION:-1G}
     healthcheck:
       test: ["CMD-SHELL", "curl -f http://localhost:9600/_node/stats || exit 1"]
       interval: 30s
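Many of these healthchecks end in `|| exit 1`, as in the logstash test. Docker only inspects the exit status (0 = healthy, nonzero = unhealthy); the `|| exit 1` clause normalizes any failure code to 1. A minimal shell sketch of that convention, with hypothetical stand-ins for the probe command:

```shell
# Healthcheck commands communicate via exit status alone.
( probe() { return 0; };  probe || exit 1 ); echo "healthy: $?"      # -> healthy: 0
( probe() { return 22; }; probe || exit 1 ); echo "unhealthy: $?"    # -> unhealthy: 1
```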
@@ -28,11 +28,11 @@ x-mariadb-galera: &mariadb-galera
   deploy:
     resources:
       limits:
-        cpus: '2.0'
-        memory: 2G
+        cpus: ${MARIADB_CPU_LIMIT:-2.0}
+        memory: ${MARIADB_MEMORY_LIMIT:-2G}
       reservations:
-        cpus: '1.0'
-        memory: 1G
+        cpus: ${MARIADB_CPU_RESERVATION:-1.0}
+        memory: ${MARIADB_MEMORY_RESERVATION:-1G}
   healthcheck:
     test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
     interval: 30s
@@ -37,11 +37,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '2.0'
-          memory: 4G
+          cpus: ${MILVUS_EMBED_CPU_LIMIT:-2.0}
+          memory: ${MILVUS_EMBED_MEMORY_LIMIT:-4G}
         reservations:
-          cpus: '1.0'
-          memory: 2G
+          cpus: ${MILVUS_EMBED_CPU_RESERVATION:-1.0}
+          memory: ${MILVUS_EMBED_MEMORY_RESERVATION:-2G}
 
   attu:
     <<: *defaults
@@ -56,11 +56,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '0.25'
-          memory: 256M
+          cpus: ${ATTU_CPU_LIMIT:-0.25}
+          memory: ${ATTU_MEMORY_LIMIT:-256M}
         reservations:
-          cpus: '0.1'
-          memory: 128M
+          cpus: ${ATTU_CPU_RESERVATION:-0.1}
+          memory: ${ATTU_MEMORY_RESERVATION:-128M}
 
 
 volumes:
@@ -28,11 +28,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '0.25'
-          memory: 256M
+          cpus: ${ETCD_CPU_LIMIT:-0.25}
+          memory: ${ETCD_MEMORY_LIMIT:-256M}
         reservations:
-          cpus: '0.1'
-          memory: 128M
+          cpus: ${ETCD_CPU_RESERVATION:-0.1}
+          memory: ${ETCD_MEMORY_RESERVATION:-128M}
 
   minio:
     <<: *defaults
@@ -56,11 +56,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '0.5'
-          memory: 512M
+          cpus: ${MINIO_STANDALONE_CPU_LIMIT:-0.5}
+          memory: ${MINIO_STANDALONE_MEMORY_LIMIT:-512M}
         reservations:
-          cpus: '0.1'
-          memory: 256M
+          cpus: ${MINIO_STANDALONE_CPU_RESERVATION:-0.1}
+          memory: ${MINIO_STANDALONE_MEMORY_RESERVATION:-256M}
 
   milvus-standalone:
     <<: *defaults
@@ -92,11 +92,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '2.0'
-          memory: 4G
+          cpus: ${MILVUS_CPU_LIMIT:-2.0}
+          memory: ${MILVUS_MEMORY_LIMIT:-4G}
         reservations:
-          cpus: '1.0'
-          memory: 2G
+          cpus: ${MILVUS_CPU_RESERVATION:-1.0}
+          memory: ${MILVUS_MEMORY_RESERVATION:-2G}
 
   attu:
     <<: *defaults
@@ -114,11 +114,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '0.25'
-          memory: 256M
+          cpus: ${ATTU_CPU_LIMIT:-0.25}
+          memory: ${ATTU_MEMORY_LIMIT:-256M}
         reservations:
-          cpus: '0.1'
-          memory: 128M
+          cpus: ${ATTU_CPU_RESERVATION:-0.1}
+          memory: ${ATTU_MEMORY_RESERVATION:-128M}
     healthcheck:
       test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/"]
       interval: 30s
@@ -37,11 +37,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '2.0'
-          memory: 2G
+          cpus: ${BEDROCK_CPU_LIMIT:-2.0}
+          memory: ${BEDROCK_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: '1.0'
-          memory: 1G
+          cpus: ${BEDROCK_CPU_RESERVATION:-1.0}
+          memory: ${BEDROCK_MEMORY_RESERVATION:-1G}
     healthcheck:
       test: ["CMD-SHELL", "[ -f /data/valid_known_packs.json ]"]
       interval: 30s
@@ -1,7 +0,0 @@
-# MinerU SGLang Docker image
-MINERU_DOCKER_IMAGE=alexsuntop/mineru-sglang:2.2.2
-
-# Port configurations
-MINERU_PORT_OVERRIDE_SGLANG=30000
-MINERU_PORT_OVERRIDE_API=8000
-MINERU_PORT_OVERRIDE_GRADIO=7860
@@ -1,45 +0,0 @@
-# MinerU SGLang
-
-[English](./README.md) | [中文](./README.zh.md)
-
-This service runs MinerU with the SGLang backend.
-
-## Start Services
-
-- **SGLang backend server**:
-
-  ```bash
-  docker compose --profile sglang-server up -d
-  ```
-
-- **Document parse API**:
-
-  ```bash
-  docker compose --profile api up -d
-  ```
-
-- **Gradio WebUI**:
-
-  ```bash
-  docker compose --profile gradio up -d
-  ```
-
-## Test SGLang backend
-
-```bash
-pip install mineru
-mineru -p demo.pdf -o ./output -b vlm-sglang-client -u http://localhost:30000
-```
-
-## Services
-
-- `mineru-sglang-server`: The SGLang backend server.
-- `mineru-api`: The document parsing API.
-- `mineru-gradio`: The Gradio WebUI.
-
-## Configuration
-
-- `MINERU_DOCKER_IMAGE`: The Docker image for MinerU SGLang, default is `alexsuntop/mineru-sglang:2.2.2`.
-- `MINERU_PORT_OVERRIDE_SGLANG`: The host port for the SGLang server, default is `30000`.
-- `MINERU_PORT_OVERRIDE_API`: The host port for the API service, default is `8000`.
-- `MINERU_PORT_OVERRIDE_GRADIO`: The host port for the Gradio WebUI, default is `7860`.
@@ -1,45 +0,0 @@
-# MinerU SGLang
-
-[English](./README.md) | [中文](./README.zh.md)
-
-This service runs MinerU with the SGLang backend.
-
-## Start Services
-
-- **SGLang backend server**:
-
-  ```bash
-  docker compose --profile sglang-server up -d
-  ```
-
-- **Document parse API**:
-
-  ```bash
-  docker compose --profile api up -d
-  ```
-
-- **Gradio WebUI**:
-
-  ```bash
-  docker compose --profile gradio up -d
-  ```
-
-## Test SGLang backend
-
-```bash
-pip install mineru
-mineru -p demo.pdf -o ./output -b vlm-sglang-client -u http://localhost:30000
-```
-
-## Services
-
-- `mineru-sglang-server`: The SGLang backend server.
-- `mineru-api`: The document parsing API.
-- `mineru-gradio`: The Gradio WebUI.
-
-## Configuration
-
-- `MINERU_DOCKER_IMAGE`: The Docker image for MinerU SGLang, default is `alexsuntop/mineru-sglang:2.2.2`.
-- `MINERU_PORT_OVERRIDE_SGLANG`: The host port for the SGLang server, default is `30000`.
-- `MINERU_PORT_OVERRIDE_API`: The host port for the API service, default is `8000`.
-- `MINERU_PORT_OVERRIDE_GRADIO`: The host port for the Gradio WebUI, default is `7860`.
@@ -1,105 +0,0 @@
-x-defaults: &defaults
-  restart: unless-stopped
-  logging:
-    driver: json-file
-    options:
-      max-size: 100m
-      max-file: "3"
-
-x-mineru-sglang: &mineru-sglang
-  <<: *defaults
-  image: ${MINERU_DOCKER_IMAGE:-alexsuntop/mineru-sglang:2.2.2}
-  environment:
-    TZ: ${TZ:-UTC}
-    MINERU_MODEL_SOURCE: local
-  ulimits:
-    memlock: -1
-    stack: 67108864
-  ipc: host
-  deploy:
-    resources:
-      limits:
-        cpus: '8.0'
-        memory: 4G
-      reservations:
-        cpus: '1.0'
-        memory: 2G
-        devices:
-          - driver: nvidia
-            device_ids: [ '0' ]
-            capabilities: [ gpu ]
-
-services:
-  mineru-sglang-server:
-    <<: *mineru-sglang
-    profiles: ["sglang-server"]
-    ports:
-      - ${MINERU_PORT_OVERRIDE_SGLANG:-30000}:30000
-    entrypoint: mineru-sglang-server
-    command:
-      - --host 0.0.0.0
-      - --port 30000
-
-      # If using multiple GPUs, increase throughput using vllm's multi-GPU parallel mode.
-      # - --data-parallel-size 2
-      # If running on a single GPU and encountering VRAM shortage, reduce the KV cache size by this parameter,
-      # if VRAM issues persist, try lowering it further to `0.4` or below.
-      # - --gpu-memory-utilization 0.5
-
-    healthcheck:
-      test: ["CMD-SHELL", "curl -f http://localhost:30000/health || exit 1"]
-      interval: 30s
-      timeout: 10s
-      retries: 3
-      start_period: 60s
-
-  mineru-api:
-    <<: *mineru-sglang
-    profiles: ["api"]
-    ports:
-      - ${MINERU_PORT_OVERRIDE_API:-8000}:8000
-    entrypoint: mineru-api
-    command:
-      - --host 0.0.0.0
-      - --port 8000
-
-      # If using multiple GPUs, increase throughput using vllm's multi-GPU parallel mode.
-      # - --data-parallel-size 2
-      # If running on a single GPU and encountering VRAM shortage, reduce the KV cache size by this parameter,
-      # if VRAM issues persist, try lowering it further to `0.4` or below.
-      # - --gpu-memory-utilization 0.5
-    healthcheck:
-      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8000/health"]
-      interval: 30s
-      timeout: 10s
-      retries: 3
-      start_period: 60s
-
-  mineru-gradio:
-    <<: *mineru-sglang
-    profiles: ["gradio"]
-    ports:
-      - ${MINERU_PORT_OVERRIDE_GRADIO:-7860}:7860
-    entrypoint: mineru-gradio
-    command:
-      - --server-name 0.0.0.0
-      - --server-port 7860
-
-      # Enable the vllm engine for Gradio
-      - --enable-vllm-engine true
-      # If you want to disable the API, set this to false
-      # - --enable-api false
-      # If you want to limit the number of pages for conversion, set this to a specific number
-      # - --max-convert-pages 20
-
-      # If using multiple GPUs, increase throughput using vllm's multi-GPU parallel mode.
-      # - --data-parallel-size 2
-      # If running on a single GPU and encountering VRAM shortage, reduce the KV cache size by this parameter,
-      # if VRAM issues persist, try lowering it further to `0.4` or below.
-      # - --gpu-memory-utilization 0.5
-    healthcheck:
-      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:7860/"]
-      interval: 30s
-      timeout: 10s
-      retries: 3
-      start_period: 60s
@@ -30,11 +30,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '1.0'
-          memory: 1G
+          cpus: ${MINIO_CPU_LIMIT:-1.0}
+          memory: ${MINIO_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: '0.5'
-          memory: 512M
+          cpus: ${MINIO_CPU_RESERVATION:-0.5}
+          memory: ${MINIO_MEMORY_RESERVATION:-512M}
 
 
 volumes:
@@ -20,11 +20,11 @@ services:
     deploy:
      resources:
        limits:
-          cpus: '1.0'
-          memory: 1G
+          cpus: ${POSTGRES_MLFLOW_CPU_LIMIT:-1.0}
+          memory: ${POSTGRES_MLFLOW_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: '0.5'
-          memory: 512M
+          cpus: ${POSTGRES_MLFLOW_CPU_RESERVATION:-0.5}
+          memory: ${POSTGRES_MLFLOW_MEMORY_RESERVATION:-512M}
     healthcheck:
       test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-mlflow}"]
       interval: 10s
@@ -48,11 +48,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '1.0'
-          memory: 1G
+          cpus: ${MINIO_MLFLOW_CPU_LIMIT:-1.0}
+          memory: ${MINIO_MLFLOW_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: '0.5'
-          memory: 512M
+          cpus: ${MINIO_MLFLOW_CPU_RESERVATION:-0.5}
+          memory: ${MINIO_MLFLOW_MEMORY_RESERVATION:-512M}
     healthcheck:
       test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
       interval: 30s
@@ -108,11 +108,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '2.0'
-          memory: 2G
+          cpus: ${MLFLOW_CPU_LIMIT:-2.0}
+          memory: ${MLFLOW_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: '1.0'
-          memory: 1G
+          cpus: ${MLFLOW_CPU_RESERVATION:-1.0}
+          memory: ${MLFLOW_MEMORY_RESERVATION:-1G}
     healthcheck:
       test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:5000/health"]
       interval: 30s
@@ -37,11 +37,12 @@ x-mongo: &mongo
   deploy:
     resources:
       limits:
-        cpus: '0.50'
-        memory: 1G
+        cpus: ${MONGO_REPLICA_SINGLE_CPU_LIMIT:-1.00}
+        memory: ${MONGO_REPLICA_SINGLE_MEMORY_LIMIT:-2048M}
       reservations:
-        cpus: '0.25'
-        memory: 256M
+        cpus: ${MONGO_REPLICA_SINGLE_CPU_RESERVATION:-0.50}
+        memory: ${MONGO_REPLICA_SINGLE_MEMORY_RESERVATION:-1024M}
 
+
 services:
   mongo1:
@@ -100,11 +101,12 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '0.25'
-          memory: 256M
+          cpus: ${MONGO_REPLICA_SINGLE_INIT_CPU_LIMIT:-1.00}
+          memory: ${MONGO_REPLICA_SINGLE_INIT_MEMORY_LIMIT:-2048M}
         reservations:
-          cpus: '0.10'
-          memory: 128M
+          cpus: ${MONGO_REPLICA_SINGLE_INIT_CPU_RESERVATION:-0.50}
+          memory: ${MONGO_REPLICA_SINGLE_INIT_MEMORY_RESERVATION:-1024M}
 
+
 volumes:
   mongo_data:
@@ -36,11 +36,11 @@ x-mongo: &mongo
   deploy:
     resources:
       limits:
-        cpus: '0.50'
-        memory: 1G
+        cpus: ${MONGO_REPLICA_CPU_LIMIT:-1.00}
+        memory: ${MONGO_REPLICA_MEMORY_LIMIT:-2048M}
       reservations:
-        cpus: '0.25'
-        memory: 256M
+        cpus: ${MONGO_REPLICA_CPU_RESERVATION:-0.50}
+        memory: ${MONGO_REPLICA_MEMORY_RESERVATION:-1024M}
 
 services:
   mongo1:
@@ -117,8 +117,8 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '0.25'
-          memory: 256M
+          cpus: ${MONGO_REPLICA_INIT_CPU_LIMIT:-1.00}
+          memory: ${MONGO_REPLICA_INIT_MEMORY_LIMIT:-2048M}
         reservations:
-          cpus: '0.10'
-          memory: 128M
+          cpus: ${MONGO_REPLICA_INIT_CPU_RESERVATION:-0.50}
+          memory: ${MONGO_REPLICA_INIT_MEMORY_RESERVATION:-1024M}
@@ -19,20 +19,21 @@ services:
       - "${MONGO_PORT_OVERRIDE:-27017}:27017"
     volumes:
       - mongo_data:/data/db
-    deploy:
-      resources:
-        limits:
-          cpus: '0.50'
-          memory: 1G
-        reservations:
-          cpus: '0.25'
-          memory: 256M
     healthcheck:
       test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
       interval: 30s
       timeout: 10s
       retries: 3
       start_period: 30s
+    deploy:
+      resources:
+        limits:
+          cpus: ${MONGO_CPU_LIMIT:-1.00}
+          memory: ${MONGO_MEMORY_LIMIT:-2048M}
+        reservations:
+          cpus: ${MONGO_CPU_RESERVATION:-0.50}
+          memory: ${MONGO_MEMORY_RESERVATION:-1024M}

 volumes:
   mongo_data:
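Across these hunks, hard-coded resource values are replaced with `${VAR:-default}` placeholders. Compose resolves them with shell-style parameter expansion: a variable that is unset or empty falls back to the default after `:-`. A minimal sketch of that resolution rule (hypothetical helper, not part of this repo):

```python
import re

def expand(template: str, env: dict) -> str:
    """Resolve ${VAR:-default} the way Docker Compose does:
    fall back to the default when VAR is unset or empty."""
    def repl(m):
        var, default = m.group(1), m.group(2)
        value = env.get(var, "")
        return value if value != "" else default
    return re.sub(r"\$\{(\w+):-([^}]*)\}", repl, template)

# Unset -> default from the compose file; set -> the .env override wins.
print(expand("cpus: ${MONGO_CPU_LIMIT:-1.00}", {}))                        # cpus: 1.00
print(expand("cpus: ${MONGO_CPU_LIMIT:-1.00}", {"MONGO_CPU_LIMIT": "2"}))  # cpus: 2
```

This is why the changes are backward compatible: with no `.env` override, each service keeps a working default.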
@@ -24,11 +24,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '2.0'
-          memory: 2G
+          cpus: ${MYSQL_CPU_LIMIT:-2.0}
+          memory: ${MYSQL_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: '0.5'
-          memory: 512M
+          cpus: ${MYSQL_CPU_RESERVATION:-0.5}
+          memory: ${MYSQL_MEMORY_RESERVATION:-512M}
     healthcheck:
       test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-u", "root", "-p$$MYSQL_ROOT_PASSWORD"]
       interval: 30s
@@ -42,11 +42,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '2.0'
-          memory: 2G
+          cpus: ${N8N_CPU_LIMIT:-2.0}
+          memory: ${N8N_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: '0.5'
-          memory: 512M
+          cpus: ${N8N_CPU_RESERVATION:-0.5}
+          memory: ${N8N_MEMORY_RESERVATION:-512M}
     healthcheck:
       test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:5678/healthz"]
       interval: 30s
@@ -59,27 +59,19 @@ services:
     image: ${GLOBAL_REGISTRY:-}postgres:${POSTGRES_VERSION:-17.2-alpine3.21}
     environment:
       - TZ=${TZ:-UTC}
-      - POSTGRES_USER=${DB_POSTGRESDB_USER:-n8n}
-      - POSTGRES_PASSWORD=${DB_POSTGRESDB_PASSWORD:-n8n123}
-      - POSTGRES_DB=${DB_POSTGRESDB_DATABASE:-n8n}
+      - POSTGRES_USER=${POSTGRES_USER:-n8n}
+      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-}
+      - POSTGRES_DB=${POSTGRES_DB:-n8n}
     volumes:
       - n8n_db_data:/var/lib/postgresql/data
     healthcheck:
       test: ["CMD-SHELL", "pg_isready -U ${DB_POSTGRESDB_USER:-n8n}"]
       interval: 10s
       timeout: 5s
       retries: 5
       start_period: 30s
     profiles:
       - postgres
     deploy:
       resources:
         limits:
-          cpus: '1.0'
-          memory: 1G
+          cpus: ${N8N_DB_CPU_LIMIT:-1.0}
+          memory: ${N8N_DB_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: '0.25'
-          memory: 256M
+          cpus: ${N8N_DB_CPU_RESERVATION:-0.5}
+          memory: ${N8N_DB_MEMORY_RESERVATION:-512M}

 volumes:
   n8n_data:
@@ -31,11 +31,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '1.0'
-          memory: 1G
+          cpus: ${NACOS_CPU_LIMIT:-1.0}
+          memory: ${NACOS_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: '0.5'
-          memory: 512M
+          cpus: ${NACOS_CPU_RESERVATION:-0.5}
+          memory: ${NACOS_MEMORY_RESERVATION:-512M}
     healthcheck:
       test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8848/nacos/"]
       interval: 30s
@@ -30,11 +30,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '0.5'
-          memory: 512M
+          cpus: ${NEBULA_METAD_CPU_LIMIT:-0.5}
+          memory: ${NEBULA_METAD_MEMORY_LIMIT:-512M}
         reservations:
-          cpus: '0.25'
-          memory: 256M
+          cpus: ${NEBULA_METAD_CPU_RESERVATION:-0.25}
+          memory: ${NEBULA_METAD_MEMORY_RESERVATION:-256M}
     healthcheck:
       test: ["CMD", "/usr/local/nebula/bin/nebula-metad", "--version"]
       interval: 30s
@@ -68,11 +68,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '1.0'
-          memory: 1G
+          cpus: ${NEBULA_STORAGED_CPU_LIMIT:-1.0}
+          memory: ${NEBULA_STORAGED_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: '0.5'
-          memory: 512M
+          cpus: ${NEBULA_STORAGED_CPU_RESERVATION:-0.5}
+          memory: ${NEBULA_STORAGED_MEMORY_RESERVATION:-512M}
     healthcheck:
       test: ["CMD", "/usr/local/nebula/bin/nebula-storaged", "--version"]
       interval: 30s
@@ -106,11 +106,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '1.0'
-          memory: 1G
+          cpus: ${NEBULA_GRAPHD_CPU_LIMIT:-1.0}
+          memory: ${NEBULA_GRAPHD_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: '0.5'
-          memory: 512M
+          cpus: ${NEBULA_GRAPHD_CPU_RESERVATION:-0.5}
+          memory: ${NEBULA_GRAPHD_MEMORY_RESERVATION:-512M}
     healthcheck:
       test: ["CMD", "/usr/local/nebula/bin/nebula-graphd", "--version"]
       interval: 30s
@@ -28,11 +28,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '2.0'
-          memory: 2G
+          cpus: ${NEO4J_CPU_LIMIT:-2.0}
+          memory: ${NEO4J_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: '0.5'
-          memory: 1G
+          cpus: ${NEO4J_CPU_RESERVATION:-0.5}
+          memory: ${NEO4J_MEMORY_RESERVATION:-1G}
     healthcheck:
       test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:7474/"]
       interval: 30s
20 src/nexa-sdk/.env.example Normal file
@@ -0,0 +1,20 @@
# NexaSDK Docker Configuration

# Image version (e.g., v0.2.62, v0.2.62-cuda, latest, latest-cuda)
NEXA_SDK_VERSION=v0.2.62

# Host port for NexaSDK REST API
NEXA_SDK_PORT_OVERRIDE=18181

# Nexa API token (required for model access)
# Obtain from https://sdk.nexa.ai -> Deployment -> Create Token
NEXA_TOKEN=

# Timezone
TZ=UTC

# Resource limits
NEXA_SDK_CPU_LIMIT=4.0
NEXA_SDK_MEMORY_LIMIT=8G
NEXA_SDK_CPU_RESERVATION=1.0
NEXA_SDK_MEMORY_RESERVATION=2G
105 src/nexa-sdk/README.md Normal file
@@ -0,0 +1,105 @@
# NexaSDK

[English](./README.md) | [中文](./README.zh.md)

This service deploys NexaSDK Docker for running AI models with an OpenAI-compatible REST API. Supports LLM, Embeddings, Reranking, Computer Vision, and ASR models.

## Features

- **OpenAI-compatible API**: Drop-in replacement for OpenAI API endpoints
- **Multiple Model Types**: LLM, VLM, Embeddings, Reranking, CV, ASR
- **GPU Acceleration**: CUDA support for NVIDIA GPUs
- **NPU Support**: Optimized for Qualcomm NPU on ARM64

## Supported Models

| Modality      | Models                                                  |
| ------------- | ------------------------------------------------------- |
| **LLM**       | `NexaAI/LFM2-1.2B-npu`, `NexaAI/Granite-4.0-h-350M-NPU` |
| **VLM**       | `NexaAI/OmniNeural-4B`                                  |
| **Embedding** | `NexaAI/embeddinggemma-300m-npu`, `NexaAI/EmbedNeural`  |
| **Rerank**    | `NexaAI/jina-v2-rerank-npu`                             |
| **CV**        | `NexaAI/yolov12-npu`, `NexaAI/convnext-tiny-npu-IoT`    |
| **ASR**       | `NexaAI/parakeet-tdt-0.6b-v3-npu`                       |

## Usage

### CPU Mode

```bash
docker compose up -d
```

### GPU Mode (CUDA)

```bash
docker compose --profile gpu up -d nexa-sdk-cuda
```

### Pull a Model

```bash
docker exec -it nexa-sdk nexa pull NexaAI/Granite-4.0-h-350M-NPU
```

### Interactive CLI

```bash
docker exec -it nexa-sdk nexa infer NexaAI/Granite-4.0-h-350M-NPU
```

### API Examples

- Chat completions:

  ```bash
  curl -X POST http://localhost:18181/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
      "model": "NexaAI/Granite-4.0-h-350M-NPU",
      "messages": [{"role": "user", "content": "Hello!"}]
    }'
  ```

- Embeddings:

  ```bash
  curl -X POST http://localhost:18181/v1/embeddings \
    -H "Content-Type: application/json" \
    -d '{
      "model": "NexaAI/EmbedNeural",
      "input": "Hello, world!"
    }'
  ```

- Swagger UI: Visit `http://localhost:18181/docs/ui`

## Services

- `nexa-sdk`: CPU-based NexaSDK service (default)
- `nexa-sdk-cuda`: GPU-accelerated service with CUDA support (profile: `gpu`)

## Configuration

| Variable                 | Description               | Default   |
| ------------------------ | ------------------------- | --------- |
| `NEXA_SDK_VERSION`       | NexaSDK image version     | `v0.2.62` |
| `NEXA_SDK_PORT_OVERRIDE` | Host port for REST API    | `18181`   |
| `NEXA_TOKEN`             | Nexa API token (required) | -         |
| `TZ`                     | Timezone                  | `UTC`     |

## Volumes

- `nexa_data`: Volume for storing downloaded models and data

## Getting a Token

1. Create an account at [sdk.nexa.ai](https://sdk.nexa.ai)
2. Go to **Deployment → Create Token**
3. Copy the token to your `.env` file

## References

- [NexaSDK Documentation](https://docs.nexa.ai/nexa-sdk-docker/overview)
- [Docker Hub](https://hub.docker.com/r/nexa4ai/nexasdk)
- [Supported Models](https://docs.nexa.ai/nexa-sdk-docker/overview#supported-models)
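The curl examples in the README above rely on the server speaking the OpenAI chat-completions schema. The same request body, and the response-parsing path, can be sketched from Python; the stubbed response below is an assumption of the OpenAI-compatible shape (`choices[0].message.content`), not output captured from NexaSDK itself:

```python
import json

def chat_payload(model: str, prompt: str) -> str:
    # Same JSON body the curl example posts to /v1/chat/completions.
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })

def reply_text(response: dict) -> str:
    # OpenAI-compatible servers return the assistant's answer at this path.
    return response["choices"][0]["message"]["content"]

payload = chat_payload("NexaAI/Granite-4.0-h-350M-NPU", "Hello!")
# Stubbed response in the assumed OpenAI-compatible shape:
stub = {"choices": [{"message": {"role": "assistant", "content": "Hi there!"}}]}
print(reply_text(stub))  # Hi there!
```

Posting `payload` to `http://localhost:18181/v1/chat/completions` with any HTTP client should mirror the curl call.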
105 src/nexa-sdk/README.zh.md Normal file
@@ -0,0 +1,105 @@
# NexaSDK

[English](./README.md) | [中文](./README.zh.md)

此服务用于部署 NexaSDK Docker,运行兼容 OpenAI 的 REST API 的 AI 模型。支持 LLM、Embeddings、Reranking、计算机视觉和 ASR 模型。

## 特性

- **OpenAI 兼容 API**:可直接替换 OpenAI API 端点
- **多种模型类型**:LLM、VLM、Embeddings、Reranking、CV、ASR
- **GPU 加速**:支持 NVIDIA GPU 的 CUDA 加速
- **NPU 支持**:针对 ARM64 上的 Qualcomm NPU 优化

## 支持的模型

| 类型          | 模型                                                    |
| ------------- | ------------------------------------------------------- |
| **LLM**       | `NexaAI/LFM2-1.2B-npu`、`NexaAI/Granite-4.0-h-350M-NPU` |
| **VLM**       | `NexaAI/OmniNeural-4B`                                  |
| **Embedding** | `NexaAI/embeddinggemma-300m-npu`、`NexaAI/EmbedNeural`  |
| **Rerank**    | `NexaAI/jina-v2-rerank-npu`                             |
| **CV**        | `NexaAI/yolov12-npu`、`NexaAI/convnext-tiny-npu-IoT`    |
| **ASR**       | `NexaAI/parakeet-tdt-0.6b-v3-npu`                       |

## 用法

### CPU 模式

```bash
docker compose up -d
```

### GPU 模式(CUDA)

```bash
docker compose --profile gpu up -d nexa-sdk-cuda
```

### 拉取模型

```bash
docker exec -it nexa-sdk nexa pull NexaAI/Granite-4.0-h-350M-NPU
```

### 交互式 CLI

```bash
docker exec -it nexa-sdk nexa infer NexaAI/Granite-4.0-h-350M-NPU
```

### API 示例

- 聊天补全:

  ```bash
  curl -X POST http://localhost:18181/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
      "model": "NexaAI/Granite-4.0-h-350M-NPU",
      "messages": [{"role": "user", "content": "Hello!"}]
    }'
  ```

- Embeddings:

  ```bash
  curl -X POST http://localhost:18181/v1/embeddings \
    -H "Content-Type: application/json" \
    -d '{
      "model": "NexaAI/EmbedNeural",
      "input": "Hello, world!"
    }'
  ```

- Swagger UI:访问 `http://localhost:18181/docs/ui`

## 服务

- `nexa-sdk`:基于 CPU 的 NexaSDK 服务(默认)
- `nexa-sdk-cuda`:支持 CUDA 的 GPU 加速服务(profile:`gpu`)

## 配置

| 变量                     | 描述                  | 默认值    |
| ------------------------ | --------------------- | --------- |
| `NEXA_SDK_VERSION`       | NexaSDK 镜像版本      | `v0.2.62` |
| `NEXA_SDK_PORT_OVERRIDE` | REST API 的主机端口   | `18181`   |
| `NEXA_TOKEN`             | Nexa API 令牌(必需) | -         |
| `TZ`                     | 时区                  | `UTC`     |

## 卷

- `nexa_data`:用于存储下载的模型和数据的卷

## 获取令牌

1. 在 [sdk.nexa.ai](https://sdk.nexa.ai) 创建账户
2. 进入 **Deployment → Create Token**
3. 将令牌复制到 `.env` 文件中

## 参考资料

- [NexaSDK 文档](https://docs.nexa.ai/nexa-sdk-docker/overview)
- [Docker Hub](https://hub.docker.com/r/nexa4ai/nexasdk)
- [支持的模型](https://docs.nexa.ai/nexa-sdk-docker/overview#supported-models)
74 src/nexa-sdk/docker-compose.yaml Normal file
@@ -0,0 +1,74 @@
# NexaSDK Docker Compose Configuration
# OpenAI-compatible API for LLM, Embeddings, Reranking, and more
# Supports both CPU and GPU (CUDA/NPU) acceleration

x-defaults: &defaults
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: "3"

services:
  nexa-sdk:
    <<: *defaults
    image: ${GLOBAL_REGISTRY:-}nexa4ai/nexasdk:${NEXA_SDK_VERSION:-v0.2.62}
    ports:
      - "${NEXA_SDK_PORT_OVERRIDE:-18181}:18181"
    volumes:
      - nexa_data:/data
    environment:
      - TZ=${TZ:-UTC}
      - NEXA_TOKEN=${NEXA_TOKEN:-}
    command: serve
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:18181/docs/ui"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s
    deploy:
      resources:
        limits:
          cpus: ${NEXA_SDK_CPU_LIMIT:-4.0}
          memory: ${NEXA_SDK_MEMORY_LIMIT:-8G}
        reservations:
          cpus: ${NEXA_SDK_CPU_RESERVATION:-1.0}
          memory: ${NEXA_SDK_MEMORY_RESERVATION:-2G}

  # GPU-accelerated service with CUDA support
  nexa-sdk-cuda:
    <<: *defaults
    profiles:
      - gpu
    image: ${GLOBAL_REGISTRY:-}nexa4ai/nexasdk:${NEXA_SDK_VERSION:-v0.2.62}-cuda
    ports:
      - "${NEXA_SDK_PORT_OVERRIDE:-18181}:18181"
    volumes:
      - nexa_data:/data
    environment:
      - TZ=${TZ:-UTC}
      - NEXA_TOKEN=${NEXA_TOKEN:-}
    command: serve
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:18181/docs/ui"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s
    deploy:
      resources:
        limits:
          cpus: ${NEXA_SDK_CPU_LIMIT:-8.0}
          memory: ${NEXA_SDK_MEMORY_LIMIT:-16G}
        reservations:
          cpus: ${NEXA_SDK_CPU_RESERVATION:-2.0}
          memory: ${NEXA_SDK_MEMORY_RESERVATION:-4G}
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

volumes:
  nexa_data:
@@ -28,11 +28,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '1.00'
-          memory: 512M
+          cpus: ${NGINX_CPU_LIMIT:-1.00}
+          memory: ${NGINX_MEMORY_LIMIT:-512M}
         reservations:
-          cpus: '0.25'
-          memory: 64M
+          cpus: ${NGINX_CPU_RESERVATION:-0.25}
+          memory: ${NGINX_MEMORY_RESERVATION:-64M}
     healthcheck:
       test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:80/"]
       interval: 30s
@@ -24,11 +24,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '0.25'
-          memory: 128M
+          cpus: ${NODE_EXPORTER_CPU_LIMIT:-0.25}
+          memory: ${NODE_EXPORTER_MEMORY_LIMIT:-128M}
         reservations:
-          cpus: '0.1'
-          memory: 64M
+          cpus: ${NODE_EXPORTER_CPU_RESERVATION:-0.1}
+          memory: ${NODE_EXPORTER_MEMORY_RESERVATION:-64M}
     healthcheck:
       test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:9100/metrics"]
       interval: 30s
@@ -28,11 +28,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '4.0'
-          memory: 10G
+          cpus: ${OCEANBASE_CPU_LIMIT:-4.0}
+          memory: ${OCEANBASE_MEMORY_LIMIT:-10G}
         reservations:
-          cpus: '2.0'
-          memory: 8G
+          cpus: ${OCEANBASE_CPU_RESERVATION:-2.0}
+          memory: ${OCEANBASE_MEMORY_RESERVATION:-8G}
     healthcheck:
       test: ["CMD-SHELL", "mysql -h127.0.0.1 -P2881 -uroot -p$$OB_ROOT_PASSWORD -e 'SELECT 1' || exit 1"]
       interval: 30s
@@ -28,11 +28,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '2.0'
-          memory: 2G
+          cpus: ${ODOO_CPU_LIMIT:-2.0}
+          memory: ${ODOO_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: '0.5'
-          memory: 1G
+          cpus: ${ODOO_CPU_RESERVATION:-0.5}
+          memory: ${ODOO_MEMORY_RESERVATION:-1G}
     healthcheck:
       test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8069/"]
       interval: 30s
@@ -54,11 +54,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '1.0'
-          memory: 1G
+          cpus: ${ODOO_DB_CPU_LIMIT:-1.0}
+          memory: ${ODOO_DB_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: '0.25'
-          memory: 512M
+          cpus: ${ODOO_DB_CPU_RESERVATION:-0.25}
+          memory: ${ODOO_DB_MEMORY_RESERVATION:-512M}
     healthcheck:
       test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-odoo}"]
       interval: 10s
@@ -1,5 +1,5 @@
 # Ollama Version
-OLLAMA_VERSION=0.12.0
+OLLAMA_VERSION=0.12.10

 # Port to bind to on the host machine
 OLLAMA_PORT_OVERRIDE=11434
@@ -9,13 +9,13 @@ This service deploys Ollama for running local LLM models.
 - Pull DeepSeek R1 7B model:

   ```bash
-  docker exec -it ollama ollama pull deepseek-r1:7b
+  docker exec -it ollama-ollama-1 ollama pull deepseek-r1:7b
   ```

 - List all local models:

   ```bash
-  docker exec -it ollama ollama list
+  docker exec -it ollama-ollama-1 ollama list
   ```

 - Get all local models via API:
@@ -36,3 +36,25 @@ This service deploys Ollama for running local LLM models.
 ## Volumes

 - `ollama_models`: A volume for storing Ollama models.
+
+## Troubleshooting
+
+### GPU Becomes Unavailable After Long Run (Linux Docker)
+
+If Ollama initially works on the GPU in a Docker container, but then switches to running on CPU after some period of time, with errors in the server log reporting GPU discovery failures, this can be resolved by disabling systemd cgroup management in Docker.
+
+Edit `/etc/docker/daemon.json` on the host and add `"exec-opts": ["native.cgroupdriver=cgroupfs"]` to the Docker configuration:
+
+```json
+{
+  "exec-opts": ["native.cgroupdriver=cgroupfs"]
+}
+```
+
+Then restart Docker:
+
+```bash
+sudo systemctl restart docker
+```
+
+For more details, see [Ollama Troubleshooting - Linux Docker](https://docs.ollama.com/troubleshooting#linux-docker).
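Hand-editing `daemon.json` risks clobbering settings that are already there (log drivers, registry mirrors, and so on). A hedged sketch of merging the `exec-opts` entry in while preserving existing keys; the helper name and the scratch file are illustrative, and on a real host you would target `/etc/docker/daemon.json` with root privileges and a backup:

```python
import json
from pathlib import Path

def add_cgroupfs_exec_opt(path: Path) -> dict:
    """Merge the cgroupfs exec-opt into a daemon.json, keeping other keys."""
    config = json.loads(path.read_text()) if path.exists() else {}
    opts = config.setdefault("exec-opts", [])
    if "native.cgroupdriver=cgroupfs" not in opts:  # idempotent
        opts.append("native.cgroupdriver=cgroupfs")
    path.write_text(json.dumps(config, indent=2) + "\n")
    return config

# Demonstrate against a scratch file, not the real /etc/docker/daemon.json:
scratch = Path("daemon.json.example")
scratch.write_text('{"log-driver": "json-file"}')
add_cgroupfs_exec_opt(scratch)
print(scratch.read_text())
```

After writing the real file, `sudo systemctl restart docker` applies the change, as the README notes.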
@@ -9,13 +9,13 @@
 - 拉取 DeepSeek R1 7B 模型:

   ```bash
-  docker exec -it ollama ollama pull deepseek-r1:7b
+  docker exec -it ollama-ollama-1 ollama pull deepseek-r1:7b
   ```

 - 列出本地所有模型:

   ```bash
-  docker exec -it ollama ollama list
+  docker exec -it ollama-ollama-1 ollama list
   ```

 - 通过 API 获取本地所有模型:
@@ -36,3 +36,25 @@
 ## 卷

 - `ollama_models`: 用于存储 Ollama 模型的卷。
+
+## 故障排除
+
+### 长时间运行后 GPU 离线(Linux Docker)
+
+如果 Ollama 在 Docker 容器中最初可以正常使用 GPU,但运行一段时间后切换到 CPU 运行,且服务器日志中报告 GPU 发现失败的错误,可以通过禁用 Docker 的 systemd cgroup 管理来解决此问题。
+
+编辑主机上的 `/etc/docker/daemon.json` 文件,添加 `"exec-opts": ["native.cgroupdriver=cgroupfs"]` 到 Docker 配置中:
+
+```json
+{
+  "exec-opts": ["native.cgroupdriver=cgroupfs"]
+}
+```
+
+然后重启 Docker:
+
+```bash
+sudo systemctl restart docker
+```
+
+更多详情请参阅 [Ollama 故障排除 - Linux Docker](https://docs.ollama.com/troubleshooting#linux-docker)。
@@ -9,7 +9,7 @@ x-defaults: &defaults
 services:
   ollama:
     <<: *defaults
-    image: ${GLOBAL_REGISTRY:-}ollama/ollama:${OLLAMA_VERSION:-0.12.6}
+    image: ${GLOBAL_REGISTRY:-}ollama/ollama:${OLLAMA_VERSION:-0.12.10}
     ports:
       - "${OLLAMA_PORT_OVERRIDE:-11434}:11434"
     volumes:
@@ -17,24 +17,24 @@ services:
     environment:
       - TZ=${TZ:-UTC}
+    ipc: host
-    deploy:
-      resources:
-        limits:
-          cpus: '8.0'
-          memory: 4G
-        reservations:
-          cpus: '2.0'
-          memory: 2G
-          devices:
-            - driver: nvidia
-              device_ids: [ '0' ]
-              capabilities: [ gpu ]
     healthcheck:
       test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:11434/"]
       interval: 30s
       timeout: 10s
       retries: 3
       start_period: 30s
+    deploy:
+      resources:
+        limits:
+          cpus: ${OLLAMA_CPU_LIMIT:-8.0}
+          memory: ${OLLAMA_MEMORY_LIMIT:-16G}
+        reservations:
+          cpus: ${OLLAMA_CPU_RESERVATION:-2.0}
+          memory: ${OLLAMA_MEMORY_RESERVATION:-4G}
+          devices:
+            - driver: nvidia
+              device_ids: [ '0' ]
+              capabilities: [ gpu ]

 volumes:
   ollama_models:
@@ -18,20 +18,15 @@ services:
       - TZ=${TZ:-UTC}
     env_file:
       - .env
+    # healthcheck already built into the image
     deploy:
       resources:
         limits:
-          cpus: '1'
-          memory: 512M
+          cpus: ${OPEN_WEBUI_CPU_LIMIT:-1}
+          memory: ${OPEN_WEBUI_MEMORY_LIMIT:-512M}
         reservations:
-          cpus: '0.1'
-          memory: 128M
-    healthcheck:
-      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/health"]
-      interval: 30s
-      timeout: 10s
-      retries: 3
-      start_period: 30s
+          cpus: ${OPEN_WEBUI_CPU_RESERVATION:-0.1}
+          memory: ${OPEN_WEBUI_MEMORY_RESERVATION:-128M}

 volumes:
   open_webui_data:
@@ -24,5 +24,5 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '0.1'
-          memory: 64M
+          cpus: ${OPENCOZE_INFO_CPU_LIMIT:-0.1}
+          memory: ${OPENCOZE_INFO_MEMORY_LIMIT:-64M}
@@ -28,11 +28,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '2.00'
-          memory: 1G
+          cpus: ${OPENCUT_DB_CPU_LIMIT:-2.00}
+          memory: ${OPENCUT_DB_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: '0.50'
-          memory: 256M
+          cpus: ${OPENCUT_DB_CPU_RESERVATION:-0.50}
+          memory: ${OPENCUT_DB_MEMORY_RESERVATION:-256M}

   redis:
     <<: *defaults
@@ -48,11 +48,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '1.00'
-          memory: 512M
+          cpus: ${OPENCUT_REDIS_CPU_LIMIT:-1.00}
+          memory: ${OPENCUT_REDIS_MEMORY_LIMIT:-512M}
         reservations:
-          cpus: '0.25'
-          memory: 128M
+          cpus: ${OPENCUT_REDIS_CPU_RESERVATION:-0.25}
+          memory: ${OPENCUT_REDIS_MEMORY_RESERVATION:-128M}

   serverless-redis-http:
     <<: *defaults
@@ -75,11 +75,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '1.00'
-          memory: 256M
+          cpus: ${OPENCUT_SRH_CPU_LIMIT:-1.00}
+          memory: ${OPENCUT_SRH_MEMORY_LIMIT:-256M}
         reservations:
-          cpus: '0.25'
-          memory: 64M
+          cpus: ${OPENCUT_SRH_CPU_RESERVATION:-0.25}
+          memory: ${OPENCUT_SRH_MEMORY_RESERVATION:-64M}

   web:
     <<: *defaults
@@ -116,11 +116,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '2.00'
-          memory: 2G
+          cpus: ${OPENCUT_WEB_CPU_LIMIT:-2.00}
+          memory: ${OPENCUT_WEB_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: '0.50'
-          memory: 512M
+          cpus: ${OPENCUT_WEB_CPU_RESERVATION:-0.50}
+          memory: ${OPENCUT_WEB_MEMORY_RESERVATION:-512M}

 volumes:
   postgres_data:
@@ -22,11 +22,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '1.0'
-          memory: 1G
+          cpus: ${OPENLIST_CPU_LIMIT:-1.0}
+          memory: ${OPENLIST_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: '0.25'
-          memory: 256M
+          cpus: ${OPENLIST_CPU_RESERVATION:-0.25}
+          memory: ${OPENLIST_MEMORY_RESERVATION:-256M}
     healthcheck:
       test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:5244/"]
       interval: 30s
@@ -34,11 +34,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '2.0'
-          memory: 2G
+          cpus: ${OPENSEARCH_CPU_LIMIT:-2.0}
+          memory: ${OPENSEARCH_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: '1.0'
-          memory: 1G
+          cpus: ${OPENSEARCH_CPU_RESERVATION:-1.0}
+          memory: ${OPENSEARCH_MEMORY_RESERVATION:-1G}
     healthcheck:
       test: ["CMD-SHELL", "curl -f http://localhost:9200/_cluster/health || exit 1"]
       interval: 30s
@@ -61,11 +61,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '1.0'
-          memory: 1G
+          cpus: ${OPENSEARCH_DASHBOARDS_CPU_LIMIT:-1.0}
+          memory: ${OPENSEARCH_DASHBOARDS_MEMORY_LIMIT:-1G}
         reservations:
-          cpus: '0.5'
-          memory: 512M
+          cpus: ${OPENSEARCH_DASHBOARDS_CPU_RESERVATION:-0.5}
+          memory: ${OPENSEARCH_DASHBOARDS_MEMORY_RESERVATION:-512M}
     healthcheck:
       test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:5601/api/status"]
       interval: 30s
31 src/phoenix/.env.example Normal file
@@ -0,0 +1,31 @@
# Phoenix version
PHOENIX_VERSION=version-12.19.0

# Timezone
TZ=UTC

# Phoenix ports
PHOENIX_PORT_OVERRIDE=6006       # UI and OTLP HTTP collector
PHOENIX_GRPC_PORT_OVERRIDE=4317  # OTLP gRPC collector

# Phoenix configuration
PHOENIX_ENABLE_PROMETHEUS=false
PHOENIX_SECRET=  # Optional: Set for authentication, generate with: openssl rand -base64 32

# PostgreSQL configuration
POSTGRES_VERSION=17.2-alpine3.21
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=phoenix

# Resource limits for Phoenix
PHOENIX_CPU_LIMIT=2.0
PHOENIX_MEMORY_LIMIT=2G
PHOENIX_CPU_RESERVATION=0.5
PHOENIX_MEMORY_RESERVATION=512M

# Resource limits for PostgreSQL
PHOENIX_DB_CPU_LIMIT=1.0
PHOENIX_DB_MEMORY_LIMIT=1G
PHOENIX_DB_CPU_RESERVATION=0.25
PHOENIX_DB_MEMORY_RESERVATION=256M
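The compose file in this change interpolates the Postgres values above into `PHOENIX_SQL_DATABASE_URL` for the `phoenix` service. A quick sketch of how the connection string resolves with these example defaults (`phoenix-db` is the compose service hostname):

```python
# Values from .env.example; any of them can be overridden in .env.
user, password, db = "postgres", "postgres", "phoenix"

# Mirrors the compose-file template:
# postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@phoenix-db:5432/${POSTGRES_DB}
url = f"postgresql://{user}:{password}@phoenix-db:5432/{db}"
print(url)  # postgresql://postgres:postgres@phoenix-db:5432/phoenix
```

Changing `POSTGRES_PASSWORD` in `.env` is enough; the URL picks it up without further edits.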
100 src/phoenix/README.md Normal file
@@ -0,0 +1,100 @@
# Arize Phoenix

[English](./README.md) | [中文](./README.zh.md)

Arize Phoenix is an open-source AI observability platform for LLM applications. It provides tracing, evaluation, datasets, and experiments to help you build and improve AI applications.

## Services

- `phoenix`: The main Phoenix application server with UI and OpenTelemetry collectors.
- `phoenix-db`: PostgreSQL database for persistent storage.

## Ports

| Port | Protocol | Description                               |
| ---- | -------- | ----------------------------------------- |
| 6006 | HTTP     | UI and OTLP HTTP collector (`/v1/traces`) |
| 4317 | gRPC     | OTLP gRPC collector                       |

## Environment Variables

| Variable Name              | Description                           | Default Value     |
| -------------------------- | ------------------------------------- | ----------------- |
| PHOENIX_VERSION            | Phoenix image version                 | `version-12.19.0` |
| PHOENIX_PORT_OVERRIDE      | Host port for Phoenix UI and HTTP API | `6006`            |
| PHOENIX_GRPC_PORT_OVERRIDE | Host port for OTLP gRPC collector     | `4317`            |
| PHOENIX_ENABLE_PROMETHEUS  | Enable Prometheus metrics endpoint    | `false`           |
| PHOENIX_SECRET             | Secret for authentication (optional)  | `""`              |
| POSTGRES_VERSION           | PostgreSQL image version              | `17.2-alpine3.21` |
| POSTGRES_USER              | PostgreSQL username                   | `postgres`        |
| POSTGRES_PASSWORD          | PostgreSQL password                   | `postgres`        |
| POSTGRES_DB                | PostgreSQL database name              | `phoenix`         |

## Volumes

- `phoenix_db_data`: PostgreSQL data volume for persistent storage.

## Getting Started

1. Copy the example environment file:

   ```bash
   cp .env.example .env
   ```

2. (Optional) For production, set a secure password and secret:

   ```bash
   # Generate a secret for authentication
   openssl rand -base64 32
   ```

3. Start the services:

   ```bash
   docker compose up -d
   ```

4. Access the Phoenix UI at `http://localhost:6006`

## Sending Traces

Phoenix accepts OpenTelemetry-compatible traces. You can send traces using:

### HTTP (OTLP)

Send traces to `http://localhost:6006/v1/traces`

### gRPC (OTLP)

Send traces to `localhost:4317`

### Python Example

```python
from phoenix.otel import register

tracer_provider = register(
    project_name="my-llm-app",
    endpoint="http://localhost:6006/v1/traces",
)
```

## Features

- **Tracing**: Capture and visualize LLM application traces with OpenTelemetry support.
- **Evaluation**: Run evaluations using built-in or custom evaluators.
- **Datasets**: Create and manage datasets for testing and evaluation.
- **Experiments**: Run experiments to compare model performance.
- **Playground**: Test prompts with different models interactively.

## Documentation

For more information, visit the [official Phoenix documentation](https://docs.arize.com/phoenix).

## Security Notes

- Change the default PostgreSQL password in production.
- Set `PHOENIX_SECRET` for authentication if exposing Phoenix publicly.
- Consider using a reverse proxy with SSL/TLS in production.
- Regularly back up the PostgreSQL database.
100 src/phoenix/README.zh.md Normal file
@@ -0,0 +1,100 @@
# Arize Phoenix

[English](./README.md) | [中文](./README.zh.md)

Arize Phoenix 是一个开源的 AI 可观测性平台,专为 LLM 应用设计。它提供追踪、评估、数据集和实验功能,帮助你构建和改进 AI 应用。

## 服务

- `phoenix`:Phoenix 主应用服务器,包含 UI 和 OpenTelemetry 采集器。
- `phoenix-db`:用于持久化存储的 PostgreSQL 数据库。

## 端口

| 端口 | 协议 | 描述                                   |
| ---- | ---- | -------------------------------------- |
| 6006 | HTTP | UI 和 OTLP HTTP 采集器(`/v1/traces`) |
| 4317 | gRPC | OTLP gRPC 采集器                       |

## 环境变量

| 变量名                     | 描述                              | 默认值            |
| -------------------------- | --------------------------------- | ----------------- |
| PHOENIX_VERSION            | Phoenix 镜像版本                  | `version-12.19.0` |
| PHOENIX_PORT_OVERRIDE      | Phoenix UI 和 HTTP API 的主机端口 | `6006`            |
| PHOENIX_GRPC_PORT_OVERRIDE | OTLP gRPC 采集器的主机端口        | `4317`            |
| PHOENIX_ENABLE_PROMETHEUS  | 启用 Prometheus 指标端点          | `false`           |
| PHOENIX_SECRET             | 认证密钥(可选)                  | `""`              |
| POSTGRES_VERSION           | PostgreSQL 镜像版本               | `17.2-alpine3.21` |
| POSTGRES_USER              | PostgreSQL 用户名                 | `postgres`        |
| POSTGRES_PASSWORD          | PostgreSQL 密码                   | `postgres`        |
| POSTGRES_DB                | PostgreSQL 数据库名               | `phoenix`         |

## 数据卷

- `phoenix_db_data`:PostgreSQL 数据卷,用于持久化存储。

## 快速开始

1. 复制示例环境文件:

   ```bash
   cp .env.example .env
   ```

2. (可选)生产环境下,请设置安全的密码和密钥:

   ```bash
   # 生成认证密钥
   openssl rand -base64 32
   ```

3. 启动服务:

   ```bash
   docker compose up -d
   ```

4. 访问 Phoenix UI:`http://localhost:6006`

## 发送追踪数据

Phoenix 支持 OpenTelemetry 兼容的追踪数据。你可以通过以下方式发送追踪:

### HTTP(OTLP)

发送追踪到 `http://localhost:6006/v1/traces`

### gRPC(OTLP)

发送追踪到 `localhost:4317`

### Python 示例

```python
from phoenix.otel import register

tracer_provider = register(
    project_name="my-llm-app",
    endpoint="http://localhost:6006/v1/traces",
)
```

## 功能特性

- **追踪**:捕获和可视化 LLM 应用追踪,支持 OpenTelemetry。
- **评估**:使用内置或自定义评估器运行评估。
- **数据集**:创建和管理用于测试和评估的数据集。
- **实验**:运行实验以比较模型性能。
- **Playground**:交互式测试不同模型的提示词。

## 文档

更多信息请访问 [Phoenix 官方文档](https://docs.arize.com/phoenix)。

## 安全说明

- 生产环境请更改默认的 PostgreSQL 密码。
- 如果公开暴露 Phoenix,请设置 `PHOENIX_SECRET` 进行认证。
- 生产环境建议使用反向代理并启用 SSL/TLS。
- 定期备份 PostgreSQL 数据库。
src/phoenix/docker-compose.yaml (new file, 68 lines)
@@ -0,0 +1,68 @@

# Arize Phoenix - AI Observability and Evaluation Platform
# https://docs.arize.com/phoenix

x-defaults: &defaults
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: "3"

services:
  phoenix:
    <<: *defaults
    image: ${GLOBAL_REGISTRY:-}arizephoenix/phoenix:${PHOENIX_VERSION:-version-12.19.0}
    ports:
      - "${PHOENIX_PORT_OVERRIDE:-6006}:6006" # UI and OTLP HTTP collector
      - "${PHOENIX_GRPC_PORT_OVERRIDE:-4317}:4317" # OTLP gRPC collector
    environment:
      - TZ=${TZ:-UTC}
      - PHOENIX_SQL_DATABASE_URL=postgresql://${POSTGRES_USER:-postgres}:${POSTGRES_PASSWORD:-postgres}@phoenix-db:5432/${POSTGRES_DB:-phoenix}
      - PHOENIX_ENABLE_PROMETHEUS=${PHOENIX_ENABLE_PROMETHEUS:-false}
      - PHOENIX_SECRET=${PHOENIX_SECRET:-}
    depends_on:
      phoenix-db:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:6006/healthz"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s
    deploy:
      resources:
        limits:
          cpus: ${PHOENIX_CPU_LIMIT:-2.0}
          memory: ${PHOENIX_MEMORY_LIMIT:-2G}
        reservations:
          cpus: ${PHOENIX_CPU_RESERVATION:-0.5}
          memory: ${PHOENIX_MEMORY_RESERVATION:-512M}

  phoenix-db:
    <<: *defaults
    image: ${GLOBAL_REGISTRY:-}postgres:${POSTGRES_VERSION:-17.2-alpine3.21}
    environment:
      - TZ=${TZ:-UTC}
      - POSTGRES_USER=${POSTGRES_USER:-postgres}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-postgres}
      - POSTGRES_DB=${POSTGRES_DB:-phoenix}
    volumes:
      - phoenix_db_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-postgres}"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 30s
    deploy:
      resources:
        limits:
          cpus: ${PHOENIX_DB_CPU_LIMIT:-1.0}
          memory: ${PHOENIX_DB_MEMORY_LIMIT:-1G}
        reservations:
          cpus: ${PHOENIX_DB_CPU_RESERVATION:-0.25}
          memory: ${PHOENIX_DB_MEMORY_RESERVATION:-256M}

volumes:
  phoenix_db_data:
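Nearly every value in this compose file goes through Docker Compose's `${VAR:-default}` interpolation, which falls back to the default when the variable is unset or empty. Compose implements this natively; the helper below is only an illustrative sketch of the semantics:

```python
import os
import re


def resolve(template, env=None):
    """Resolve compose-style ${VAR:-default} substitutions (simplified sketch)."""
    env = os.environ if env is None else env

    def repl(match):
        name, default = match.group(1), match.group(2)
        value = env.get(name, "")
        return value if value else default  # ':-' falls back on unset OR empty

    return re.sub(r"\$\{(\w+):-([^}]*)\}", repl, template)


# Unset variables fall back to the default...
print(resolve("${GLOBAL_REGISTRY:-}arizephoenix/phoenix:${PHOENIX_VERSION:-version-12.19.0}", {}))
# ...while set ones take precedence.
print(resolve("${PHOENIX_PORT_OVERRIDE:-6006}:6006", {"PHOENIX_PORT_OVERRIDE": "16006"}))
```

This is why an empty `GLOBAL_REGISTRY` simply disappears from the image reference instead of producing a broken tag.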
src/pingora-proxy-manager/.env.example (new file, 33 lines)
@@ -0,0 +1,33 @@

# Pingora Proxy Manager Configuration
# https://github.com/DDULDDUCK/pingora-proxy-manager

# Image version (default: v1.0.3)
# Available tags: latest, slim, amd64-slim, v1.0.3
PINGORA_VERSION=v1.0.3

# Timezone setting (default: UTC)
TZ=UTC

# JWT secret for authentication (CHANGE THIS IN PRODUCTION!)
# Used for API authentication and session management
PINGORA_JWT_SECRET=changeme_in_production_please

# Log level (trace, debug, info, warn, error)
PINGORA_LOG_LEVEL=info

# Port overrides
# HTTP proxy port (container listens on 8080)
PINGORA_HTTP_PORT_OVERRIDE=80
# Dashboard/API port (container listens on 81)
PINGORA_DASHBOARD_PORT_OVERRIDE=81
# HTTPS proxy port (container listens on 443)
PINGORA_HTTPS_PORT_OVERRIDE=443

# Resource limits
PINGORA_CPU_LIMIT=2.00
PINGORA_MEMORY_LIMIT=512M
PINGORA_CPU_RESERVATION=0.50
PINGORA_MEMORY_RESERVATION=256M

# Optional: Global registry prefix (e.g., registry.example.com/)
# GLOBAL_REGISTRY=
src/pingora-proxy-manager/README.md (new file, 82 lines)
@@ -0,0 +1,82 @@

# Pingora Proxy Manager

A high-performance, zero-downtime reverse proxy manager built on Cloudflare's [Pingora](https://github.com/cloudflare/pingora). Simple, Modern, and Fast.

## Features

- **⚡️ High Performance**: Built on Rust & Pingora, capable of handling high traffic with low latency
- **🔄 Zero-Downtime Configuration**: Dynamic reconfiguration without restarting the process
- **🔒 SSL/TLS Automation**:
  - HTTP-01 challenge for single domains
  - DNS-01 challenge for wildcard certificates (`*.example.com`) via Cloudflare, AWS Route53, etc.
- **🌐 Proxy Hosts**: Easy management of virtual hosts, locations, and path rewriting
- **📡 Streams (L4)**: TCP and UDP forwarding for databases, game servers, etc.
- **🛡️ Access Control**: IP whitelisting/blacklisting and Basic Authentication support
- **🎨 Modern Dashboard**: Clean and responsive UI built with React, Tailwind CSS, and shadcn/ui
- **🐳 Docker Ready**: Single-container deployment for easy setup and maintenance

## Quick Start

```bash
docker compose up -d
```

Access the dashboard at `http://localhost:81`.

**Default Credentials:**

- Username: `admin`
- Password: `changeme` (Please change this immediately!)

## Ports

| Port | Description |
| ---------------------------- | ------------- |
| 80 (host) → 8080 (container) | HTTP Proxy |
| 81 (host) → 81 (container) | Dashboard/API |
| 443 (host) → 443 (container) | HTTPS Proxy |

## Environment Variables

| Variable | Default | Description |
| --------------------------------- | ------------------------------- | -------------------------------------------------------- |
| `PINGORA_VERSION` | `v1.0.3` | Docker image version |
| `TZ` | `UTC` | Timezone |
| `PINGORA_JWT_SECRET` | `changeme_in_production_please` | JWT secret for authentication (**change in production**) |
| `PINGORA_LOG_LEVEL` | `info` | Log level (trace, debug, info, warn, error) |
| `PINGORA_HTTP_PORT_OVERRIDE` | `80` | Host port for HTTP proxy |
| `PINGORA_DASHBOARD_PORT_OVERRIDE` | `81` | Host port for Dashboard/API |
| `PINGORA_HTTPS_PORT_OVERRIDE` | `443` | Host port for HTTPS proxy |

## Volumes

| Volume | Path | Description |
| -------------- | ------------------ | -------------------------------- |
| `pingora_data` | `/app/data` | SQLite database and certificates |
| `pingora_logs` | `/app/logs` | Application logs |
| `letsencrypt` | `/etc/letsencrypt` | Let's Encrypt certificates |

## Architecture

- **Data Plane (8080/443)**: Pingora handles all traffic with high efficiency
- **Control Plane (81)**: Axum serves the API and Dashboard
- **SSL Management**: Integrated Certbot for robust ACME handling
- **State Management**: ArcSwap for lock-free configuration reads
- **Database**: SQLite for persistent storage of hosts and certificates

## Security Notes

- **Always change the default credentials** immediately after deployment
- **Set a strong `PINGORA_JWT_SECRET`** in production environments
- The container runs with minimal capabilities (`NET_BIND_SERVICE` only)
- Read-only root filesystem enabled for enhanced security
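Since a leftover default `PINGORA_JWT_SECRET` is the easiest misconfiguration to ship, a small pre-deployment check of the `.env` file can catch it. The sketch below is illustrative only and not part of the project:

```python
# Known weak values for the JWT secret (the last one mirrors this repo's .env.example)
WEAK_SECRETS = {"", "changeme", "changeme_in_production_please"}


def check_env(text):
    """Return warnings for weak JWT secrets found in .env-style text (sketch)."""
    warnings = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks and comments
            continue
        key, _, value = line.partition("=")
        if key.endswith("JWT_SECRET") and value in WEAK_SECRETS:
            warnings.append(f"{key} is still a known default; generate a strong secret")
    return warnings


print(check_env("PINGORA_JWT_SECRET=changeme_in_production_please\nPINGORA_LOG_LEVEL=info"))
```

Running it against the stock `.env.example` flags the secret; against a randomized one it stays quiet.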
## References

- [Pingora Proxy Manager GitHub](https://github.com/DDULDDUCK/pingora-proxy-manager)
- [Cloudflare Pingora](https://github.com/cloudflare/pingora)
- [Docker Hub](https://hub.docker.com/r/dduldduck/pingora-proxy-manager)

## License

MIT License - see the [upstream project](https://github.com/DDULDDUCK/pingora-proxy-manager/blob/master/LICENSE) for details.
src/pingora-proxy-manager/README.zh.md (new file, 82 lines)
@@ -0,0 +1,82 @@

# Pingora Proxy Manager

基于 Cloudflare [Pingora](https://github.com/cloudflare/pingora) 构建的高性能、零停机反向代理管理器。简单、现代、快速。

## 特性

- **⚡️ 高性能**:基于 Rust 和 Pingora 构建,能够以低延迟处理高流量
- **🔄 零停机配置**:动态重新配置,无需重启进程
- **🔒 SSL/TLS 自动化**:
  - HTTP-01 验证用于单个域名
  - DNS-01 验证用于通配符证书(`*.example.com`),支持 Cloudflare、AWS Route53 等
- **🌐 代理主机**:轻松管理虚拟主机、位置和路径重写
- **📡 流(L4)**:TCP 和 UDP 转发,适用于数据库、游戏服务器等
- **🛡️ 访问控制**:支持 IP 白名单/黑名单和基本认证
- **🎨 现代化仪表板**:使用 React、Tailwind CSS 和 shadcn/ui 构建的简洁响应式 UI
- **🐳 Docker 就绪**:单容器部署,易于设置和维护

## 快速开始

```bash
docker compose up -d
```

访问仪表板:`http://localhost:81`

**默认凭据:**

- 用户名:`admin`
- 密码:`changeme`(请立即更改!)

## 端口

| 端口 | 描述 |
| ------------------------ | ---------- |
| 80(主机)→ 8080(容器) | HTTP 代理 |
| 81(主机)→ 81(容器) | 仪表板/API |
| 443(主机)→ 443(容器) | HTTPS 代理 |

## 环境变量

| 变量 | 默认值 | 描述 |
| --------------------------------- | ------------------------------- | ------------------------------------------- |
| `PINGORA_VERSION` | `v1.0.3` | Docker 镜像版本 |
| `TZ` | `UTC` | 时区 |
| `PINGORA_JWT_SECRET` | `changeme_in_production_please` | 认证用的 JWT 密钥(**生产环境必须更改**) |
| `PINGORA_LOG_LEVEL` | `info` | 日志级别(trace、debug、info、warn、error) |
| `PINGORA_HTTP_PORT_OVERRIDE` | `80` | HTTP 代理的主机端口 |
| `PINGORA_DASHBOARD_PORT_OVERRIDE` | `81` | 仪表板/API 的主机端口 |
| `PINGORA_HTTPS_PORT_OVERRIDE` | `443` | HTTPS 代理的主机端口 |

## 卷

| 卷 | 路径 | 描述 |
| -------------- | ------------------ | ------------------- |
| `pingora_data` | `/app/data` | SQLite 数据库和证书 |
| `pingora_logs` | `/app/logs` | 应用程序日志 |
| `letsencrypt` | `/etc/letsencrypt` | Let's Encrypt 证书 |

## 架构

- **数据平面(8080/443)**:Pingora 高效处理所有流量
- **控制平面(81)**:Axum 提供 API 和仪表板服务
- **SSL 管理**:集成 Certbot 进行可靠的 ACME 处理
- **状态管理**:使用 ArcSwap 实现无锁配置读取
- **数据库**:SQLite 用于持久化存储主机和证书

## 安全注意事项

- 部署后**立即更改默认凭据**
- 在生产环境中**为 `PINGORA_JWT_SECRET` 设置强随机密钥**
- 容器以最小权限运行(仅 `NET_BIND_SERVICE`)
- 启用只读根文件系统以增强安全性

## 参考链接

- [Pingora Proxy Manager GitHub](https://github.com/DDULDDUCK/pingora-proxy-manager)
- [Cloudflare Pingora](https://github.com/cloudflare/pingora)
- [Docker Hub](https://hub.docker.com/r/dduldduck/pingora-proxy-manager)

## 许可证

MIT 许可证 - 详见[上游项目](https://github.com/DDULDDUCK/pingora-proxy-manager/blob/master/LICENSE)。
src/pingora-proxy-manager/docker-compose.yaml (new file, 54 lines)
@@ -0,0 +1,54 @@

# Pingora Proxy Manager - High-performance reverse proxy built on Cloudflare's Pingora
# https://github.com/DDULDDUCK/pingora-proxy-manager

x-defaults: &defaults
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: "3"

services:
  pingora-proxy-manager:
    <<: *defaults
    image: ${GLOBAL_REGISTRY:-}dduldduck/pingora-proxy-manager:${PINGORA_VERSION:-v1.0.3}
    ports:
      - "${PINGORA_HTTP_PORT_OVERRIDE:-80}:8080"
      - "${PINGORA_DASHBOARD_PORT_OVERRIDE:-81}:81"
      - "${PINGORA_HTTPS_PORT_OVERRIDE:-443}:443"
    volumes:
      - pingora_data:/app/data
      - pingora_logs:/app/logs
      - letsencrypt:/etc/letsencrypt
    environment:
      - TZ=${TZ:-UTC}
      - JWT_SECRET=${PINGORA_JWT_SECRET:-changeme_in_production_please}
      - RUST_LOG=${PINGORA_LOG_LEVEL:-info}
    healthcheck:
      test: ["CMD", "sh", "-c", "wget -q --spider http://127.0.0.1:81/api/login || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s
    deploy:
      resources:
        limits:
          cpus: "${PINGORA_CPU_LIMIT:-2.00}"
          memory: "${PINGORA_MEMORY_LIMIT:-512M}"
        reservations:
          cpus: "${PINGORA_CPU_RESERVATION:-0.50}"
          memory: "${PINGORA_MEMORY_RESERVATION:-256M}"
    # Security hardening
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
    read_only: true
    tmpfs:
      - /tmp:size=64M

volumes:
  pingora_data:
  pingora_logs:
  letsencrypt:
@@ -35,11 +35,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '0.25'
-          memory: 256M
+          cpus: ${POCKETBASE_CPU_LIMIT:-0.25}
+          memory: ${POCKETBASE_MEMORY_LIMIT:-256M}
         reservations:
-          cpus: '0.1'
-          memory: 128M
+          cpus: ${POCKETBASE_CPU_RESERVATION:-0.1}
+          memory: ${POCKETBASE_MEMORY_RESERVATION:-128M}
 
 volumes:
   pb_data:
@@ -29,11 +29,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '1.00'
-          memory: 512M
+          cpus: ${PORTAINER_CPU_LIMIT:-1.00}
+          memory: ${PORTAINER_MEMORY_LIMIT:-512M}
         reservations:
-          cpus: '0.25'
-          memory: 128M
+          cpus: ${PORTAINER_CPU_RESERVATION:-0.25}
+          memory: ${PORTAINER_MEMORY_RESERVATION:-128M}
 
 volumes:
   portainer_data:
@@ -23,8 +23,8 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '1.00'
-          memory: 512M
+          cpus: ${PORTKEY_GATEWAY_CPU_LIMIT:-1.00}
+          memory: ${PORTKEY_GATEWAY_MEMORY_LIMIT:-512M}
         reservations:
-          cpus: '0.25'
-          memory: 128M
+          cpus: ${PORTKEY_GATEWAY_CPU_RESERVATION:-0.25}
+          memory: ${PORTKEY_GATEWAY_MEMORY_RESERVATION:-128M}
@@ -25,11 +25,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '2.0'
-          memory: 2G
+          cpus: ${POSTGRES_CPU_LIMIT:-2.0}
+          memory: ${POSTGRES_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: '0.5'
-          memory: 512M
+          cpus: ${POSTGRES_CPU_RESERVATION:-0.5}
+          memory: ${POSTGRES_MEMORY_RESERVATION:-512M}
     healthcheck:
       test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
       interval: 30s
@@ -36,11 +36,11 @@ services:
     deploy:
       resources:
         limits:
-          cpus: '1.0'
-          memory: 2G
+          cpus: ${PROMETHEUS_CPU_LIMIT:-1.0}
+          memory: ${PROMETHEUS_MEMORY_LIMIT:-2G}
         reservations:
-          cpus: '0.25'
-          memory: 512M
+          cpus: ${PROMETHEUS_CPU_RESERVATION:-0.25}
+          memory: ${PROMETHEUS_MEMORY_RESERVATION:-512M}
     healthcheck:
       test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:9090/-/healthy"]
       interval: 30s
src/pulsar/.env.example (new file, 68 lines)
@@ -0,0 +1,68 @@

# Apache Pulsar version
PULSAR_VERSION=4.0.7

# Timezone
TZ=UTC

# Global registry prefix (optional)
# GLOBAL_REGISTRY=your-registry.example.com/

# ==================== Port Overrides ====================

# Pulsar broker port (default: 6650)
# PULSAR_BROKER_PORT_OVERRIDE=6650

# Pulsar HTTP/Admin port (default: 8080)
# PULSAR_HTTP_PORT_OVERRIDE=8080

# ==================== Standalone Mode Configuration ====================

# Enable ZooKeeper for standalone mode (0 = RocksDB, 1 = ZooKeeper)
# PULSAR_STANDALONE_USE_ZOOKEEPER=0

# JVM memory settings for standalone
# PULSAR_MEM=-Xms512m -Xmx512m -XX:MaxDirectMemorySize=256m

# ==================== Resource Limits (Standalone) ====================

# CPU limits
# PULSAR_CPU_LIMIT=2.00
# PULSAR_CPU_RESERVATION=0.50

# Memory limits
# PULSAR_MEMORY_LIMIT=2G
# PULSAR_MEMORY_RESERVATION=512M

# ==================== Cluster Mode Configuration ====================

# Cluster name
# PULSAR_CLUSTER_NAME=cluster-a

# ZooKeeper JVM memory settings
# ZOOKEEPER_MEM=-Xms256m -Xmx256m -XX:MaxDirectMemorySize=256m

# BookKeeper JVM memory settings
# BOOKIE_MEM=-Xms512m -Xmx512m -XX:MaxDirectMemorySize=256m

# Broker JVM memory settings
# BROKER_MEM=-Xms512m -Xmx512m -XX:MaxDirectMemorySize=256m

# ==================== Resource Limits (Cluster Mode) ====================

# ZooKeeper resources
# ZOOKEEPER_CPU_LIMIT=1.00
# ZOOKEEPER_CPU_RESERVATION=0.25
# ZOOKEEPER_MEMORY_LIMIT=512M
# ZOOKEEPER_MEMORY_RESERVATION=256M

# BookKeeper resources
# BOOKIE_CPU_LIMIT=1.00
# BOOKIE_CPU_RESERVATION=0.25
# BOOKIE_MEMORY_LIMIT=1G
# BOOKIE_MEMORY_RESERVATION=512M

# Broker resources
# BROKER_CPU_LIMIT=2.00
# BROKER_CPU_RESERVATION=0.50
# BROKER_MEMORY_LIMIT=2G
# BROKER_MEMORY_RESERVATION=512M
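The `*_MEM` values above are plain JVM flag strings. When choosing the matching container memory limits, it helps to total what the flags actually request; the parser below is only an illustrative sketch:

```python
import re

UNITS = {"k": 1024, "m": 1024**2, "g": 1024**3}


def jvm_flag_bytes(flags, name):
    """Extract a JVM memory flag (e.g. -Xmx512m or -XX:MaxDirectMemorySize=256m) as bytes."""
    match = re.search(rf"{re.escape(name)}=?(\d+)([kmg])", flags, re.IGNORECASE)
    if not match:
        return None
    value, unit = match.groups()
    return int(value) * UNITS[unit.lower()]


flags = "-Xms512m -Xmx512m -XX:MaxDirectMemorySize=256m"
heap = jvm_flag_bytes(flags, "-Xmx")
direct = jvm_flag_bytes(flags, "-XX:MaxDirectMemorySize")
print((heap + direct) // 1024**2)  # heap + direct memory in MiB
```

For the default `PULSAR_MEM` this sums to 768 MiB, comfortably under the 2G standalone limit; remember the JVM also needs headroom for metaspace and thread stacks.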
src/pulsar/README.md (new file, 264 lines)
@@ -0,0 +1,264 @@

# Apache Pulsar

[English](./README.md) | [中文](./README.zh.md)

Apache Pulsar is a cloud-native, distributed messaging and streaming platform. It combines the best features of traditional messaging systems like RabbitMQ with the high throughput of stream-processing systems like Kafka.

## Services

### Default (Standalone Mode)

- `pulsar`: Single-node Pulsar instance for development and testing.
  - Runs with the `--no-functions-worker` flag for simplicity and reduced resource usage
  - Uses RocksDB as the metadata store by default (since Pulsar 2.11+)
  - Runs an embedded BookKeeper (and, when enabled, ZooKeeper) in the same JVM process

### Cluster Mode (profile: `cluster`)

- `zookeeper`: ZooKeeper for cluster coordination.
- `pulsar-init`: Initializes cluster metadata (runs once).
- `bookie`: BookKeeper for persistent message storage.
- `broker`: Pulsar broker for message routing.

## Environment Variables

| Variable Name | Description | Default Value |
| --------------------------------- | ------------------------------------------- | ------------------------------------------------ |
| `PULSAR_VERSION` | Pulsar image version | `4.0.7` |
| `TZ` | Timezone | `UTC` |
| `PULSAR_BROKER_PORT_OVERRIDE` | Host port for Pulsar broker (maps to 6650) | `6650` |
| `PULSAR_HTTP_PORT_OVERRIDE` | Host port for HTTP/Admin API (maps to 8080) | `8080` |
| `PULSAR_STANDALONE_USE_ZOOKEEPER` | Use ZooKeeper in standalone mode (0 or 1) | `0` |
| `PULSAR_MEM` | JVM memory settings for standalone | `-Xms512m -Xmx512m -XX:MaxDirectMemorySize=256m` |
| `PULSAR_CLUSTER_NAME` | Cluster name (cluster mode) | `cluster-a` |

Please modify the `.env` file as needed for your use case.

## Volumes

- `pulsar_data`: Pulsar data directory (standalone mode).
- `pulsar_conf`: Pulsar configuration directory (standalone mode).
- `zookeeper_data`: ZooKeeper data directory (cluster mode).
- `bookie_data`: BookKeeper data directory (cluster mode).

## Usage

### Standalone Mode (Default)

1. Start Pulsar in standalone mode:

   ```bash
   docker compose up -d
   ```

2. Wait for Pulsar to be ready (check the logs):

   ```bash
   docker compose logs -f pulsar
   ```

   You should see a message like:

   ```log
   INFO org.apache.pulsar.broker.PulsarService - messaging service is ready
   ```

3. Verify the cluster is healthy:

   ```bash
   docker exec pulsar bin/pulsar-admin brokers healthcheck
   ```

4. Access Pulsar:
   - Broker: `pulsar://localhost:6650`
   - Admin API: `http://localhost:8080`

### Cluster Mode

1. Start the Pulsar cluster:

   ```bash
   docker compose --profile cluster up -d
   ```

2. Wait for all services to become healthy:

   ```bash
   docker compose --profile cluster ps
   ```

## Management and Monitoring

### Pulsar Admin CLI

The `pulsar-admin` CLI is the recommended tool for managing Pulsar. It is included in the Pulsar container.

```bash
# Check cluster health
docker exec pulsar bin/pulsar-admin brokers healthcheck

# List clusters
docker exec pulsar bin/pulsar-admin clusters list

# List tenants
docker exec pulsar bin/pulsar-admin tenants list

# List namespaces
docker exec pulsar bin/pulsar-admin namespaces list public

# Get broker stats
docker exec pulsar bin/pulsar-admin broker-stats monitoring-metrics
```

### REST Admin API

Pulsar provides a comprehensive REST API for management tasks.

```bash
# Get cluster information
curl http://localhost:8080/admin/v2/clusters

# Get broker stats
curl http://localhost:8080/admin/v2/broker-stats/monitoring-metrics

# List tenants
curl http://localhost:8080/admin/v2/tenants

# List namespaces
curl http://localhost:8080/admin/v2/namespaces/public

# Get topic stats
curl http://localhost:8080/admin/v2/persistent/public/default/my-topic/stats
```

### Monitoring with Prometheus

Pulsar exposes Prometheus metrics at the `/metrics` endpoint:

```bash
# Access Pulsar metrics
curl http://localhost:8080/metrics
```

You can integrate with Prometheus and Grafana for visualization; Pulsar provides official Grafana dashboards.
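The `/metrics` endpoint returns the Prometheus text exposition format, so it is easy to post-process without extra tooling. A minimal parsing sketch (the sample metric line below is hypothetical, not taken from a live broker):

```python
def parse_metrics(text):
    """Parse Prometheus text exposition lines into {metric: value}, ignoring labels (sketch)."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip HELP/TYPE comments
            continue
        name_part, _, value = line.rpartition(" ")
        name = name_part.split("{", 1)[0]  # drop the {label="..."} part
        metrics[name] = float(value)
    return metrics


# Hypothetical sample in the exposition format returned by /metrics
sample = """# TYPE pulsar_topics_count gauge
pulsar_topics_count{cluster="standalone"} 4.0
"""
print(parse_metrics(sample))
```

Real scraping should of course go through Prometheus itself; this only shows that the format is line-oriented and trivially greppable.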
## Testing Pulsar

1. Create a namespace:

   ```bash
   docker exec pulsar bin/pulsar-admin namespaces create public/test-namespace
   ```

2. Create a topic:

   ```bash
   docker exec pulsar bin/pulsar-admin topics create persistent://public/test-namespace/test-topic
   ```

3. List topics:

   ```bash
   docker exec pulsar bin/pulsar-admin topics list public/test-namespace
   ```

4. Produce messages:

   ```bash
   docker exec -it pulsar bin/pulsar-client produce persistent://public/test-namespace/test-topic --messages "Hello Pulsar"
   ```

5. Consume messages:

   ```bash
   docker exec -it pulsar bin/pulsar-client consume persistent://public/test-namespace/test-topic -s "test-subscription" -n 0
   ```

## Client Libraries

Pulsar supports multiple client libraries:

- Java: `org.apache.pulsar:pulsar-client`
- Python: `pip install pulsar-client`
- Go: `github.com/apache/pulsar-client-go`
- Node.js: `pulsar-client`
- C++: native client available

Example (Python):

```python
import pulsar

client = pulsar.Client('pulsar://localhost:6650')

# Producer
producer = client.create_producer('persistent://public/default/my-topic')
producer.send('Hello Pulsar'.encode('utf-8'))

# Consumer
consumer = client.subscribe('persistent://public/default/my-topic', 'my-subscription')
msg = consumer.receive()
print(f"Received: {msg.data().decode('utf-8')}")
consumer.acknowledge(msg)

client.close()
```
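The topic names used above follow the `domain://tenant/namespace/topic` scheme (`persistent` or `non-persistent` domain). When client code needs the individual parts, they can be split out with a small helper (illustrative only; not part of the official client API):

```python
def parse_topic(topic):
    """Split a Pulsar topic name into its domain, tenant, namespace, and local name."""
    domain, _, rest = topic.partition("://")
    tenant, namespace, name = rest.split("/", 2)  # local name may itself contain '/'
    return {"domain": domain, "tenant": tenant, "namespace": namespace, "topic": name}


print(parse_topic("persistent://public/test-namespace/test-topic"))
```

For example, the test topic created earlier resolves to tenant `public`, namespace `test-namespace`, local name `test-topic`.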
## Configuration

- Standalone mode uses RocksDB as the metadata store by default (recommended for single-node deployments).
- Set `PULSAR_STANDALONE_USE_ZOOKEEPER=1` to use ZooKeeper as the metadata store instead.
- The functions worker is disabled by default to reduce resource usage and startup time.
- For production, use cluster mode with dedicated ZooKeeper and BookKeeper instances.

## Troubleshooting

### Standalone Mode Issues

If you encounter connection errors such as "NoRouteToHostException" or "Bookie handle is not available":

1. **Clear existing data** (if upgrading or switching the metadata store):

   ```bash
   docker compose down -v
   docker compose up -d
   ```

2. **Check the container logs**:

   ```bash
   docker compose logs pulsar
   ```

3. **Verify the healthcheck**:

   ```bash
   docker compose ps
   docker exec pulsar bin/pulsar-admin brokers healthcheck
   ```

4. **Ensure sufficient resources**: standalone mode requires at least:
   - 2 GB RAM
   - 2 CPU cores
   - 5 GB disk space

## Ports

| Service | Port | Description |
| ------------- | ---- | ------------------------ |
| Pulsar Broker | 6650 | Binary protocol |
| Pulsar HTTP | 8080 | REST Admin API & metrics |

## Security Notes

- This configuration is intended for development and testing.
- For production:
  - Enable TLS encryption for broker connections.
  - Configure authentication (JWT, OAuth2, etc.).
  - Enable authorization with role-based access control.
  - Use dedicated ZooKeeper and BookKeeper clusters.
  - Regularly update the Pulsar version for security patches.

## License

Apache Pulsar is licensed under the Apache License 2.0.
src/pulsar/README.zh.md (new file, 264 lines)
@@ -0,0 +1,264 @@

# Apache Pulsar

[English](./README.md) | [中文](./README.zh.md)

Apache Pulsar 是一个云原生的分布式消息和流处理平台。它结合了传统消息系统(如 RabbitMQ)的最佳特性和流处理系统(如 Kafka)的高吞吐量优势。

## 服务

### 默认(单机模式)

- `pulsar`:单节点 Pulsar 实例,适用于开发和测试。
  - 使用 `--no-functions-worker` 标志运行,简化部署并减少资源使用
  - 默认使用 RocksDB 作为元数据存储(从 Pulsar 2.11+ 开始)
  - 在同一个 JVM 进程中内嵌 BookKeeper(启用 ZooKeeper 时还包括 ZooKeeper)

### 集群模式(profile: `cluster`)

- `zookeeper`:用于集群协调的 ZooKeeper。
- `pulsar-init`:初始化集群元数据(仅运行一次)。
- `bookie`:用于持久化消息存储的 BookKeeper。
- `broker`:用于消息路由的 Pulsar Broker。

## 环境变量

| 变量名 | 说明 | 默认值 |
| --------------------------------- | -------------------------------------- | ------------------------------------------------ |
| `PULSAR_VERSION` | Pulsar 镜像版本 | `4.0.7` |
| `TZ` | 时区 | `UTC` |
| `PULSAR_BROKER_PORT_OVERRIDE` | Pulsar Broker 主机端口(映射到 6650) | `6650` |
| `PULSAR_HTTP_PORT_OVERRIDE` | HTTP/Admin API 主机端口(映射到 8080) | `8080` |
| `PULSAR_STANDALONE_USE_ZOOKEEPER` | 单机模式使用 ZooKeeper(0 或 1) | `0` |
| `PULSAR_MEM` | 单机模式 JVM 内存设置 | `-Xms512m -Xmx512m -XX:MaxDirectMemorySize=256m` |
| `PULSAR_CLUSTER_NAME` | 集群名称(集群模式) | `cluster-a` |

请根据实际需求修改 `.env` 文件。

## 卷

- `pulsar_data`:Pulsar 数据目录(单机模式)。
- `pulsar_conf`:Pulsar 配置目录(单机模式)。
- `zookeeper_data`:ZooKeeper 数据目录(集群模式)。
- `bookie_data`:BookKeeper 数据目录(集群模式)。

## 使用方法

### 单机模式(默认)

1. 启动 Pulsar 单机模式:

   ```bash
   docker compose up -d
   ```

2. 等待 Pulsar 就绪(查看日志):

   ```bash
   docker compose logs -f pulsar
   ```

   您应该看到类似以下的消息:

   ```log
   INFO org.apache.pulsar.broker.PulsarService - messaging service is ready
   ```

3. 验证集群健康状态:

   ```bash
   docker exec pulsar bin/pulsar-admin brokers healthcheck
   ```

4. 访问 Pulsar:
   - Broker:`pulsar://localhost:6650`
   - Admin API:`http://localhost:8080`

### 集群模式

1. 启动 Pulsar 集群:

   ```bash
   docker compose --profile cluster up -d
   ```

2. 等待所有服务健康:

   ```bash
   docker compose --profile cluster ps
   ```

## 管理与监控

### Pulsar Admin CLI

`pulsar-admin` CLI 是管理 Pulsar 的推荐工具,已包含在 Pulsar 容器中。

```bash
# 检查集群健康状态
docker exec pulsar bin/pulsar-admin brokers healthcheck

# 列出集群
docker exec pulsar bin/pulsar-admin clusters list

# 列出租户
docker exec pulsar bin/pulsar-admin tenants list

# 列出命名空间
docker exec pulsar bin/pulsar-admin namespaces list public

# 获取 broker 统计信息
docker exec pulsar bin/pulsar-admin broker-stats monitoring-metrics
```

### REST Admin API

Pulsar 提供了全面的 REST API 用于管理任务。

```bash
# 获取集群信息
curl http://localhost:8080/admin/v2/clusters

# 获取 broker 统计信息
curl http://localhost:8080/admin/v2/broker-stats/monitoring-metrics

# 列出租户
curl http://localhost:8080/admin/v2/tenants

# 列出命名空间
curl http://localhost:8080/admin/v2/namespaces/public

# 获取主题统计信息
curl http://localhost:8080/admin/v2/persistent/public/default/my-topic/stats
```

### 使用 Prometheus 监控

Pulsar 在 `/metrics` 端点暴露 Prometheus 指标:

```bash
# 访问 Pulsar 指标
curl http://localhost:8080/metrics
```

您可以集成 Prometheus 和 Grafana 进行可视化。Pulsar 提供了官方的 Grafana 仪表板。

## 测试 Pulsar

1. 创建命名空间:

   ```bash
   docker exec pulsar bin/pulsar-admin namespaces create public/test-namespace
   ```

2. 创建主题:

   ```bash
   docker exec pulsar bin/pulsar-admin topics create persistent://public/test-namespace/test-topic
   ```

3. 列出主题:

   ```bash
   docker exec pulsar bin/pulsar-admin topics list public/test-namespace
   ```

4. 生产消息:

   ```bash
   docker exec -it pulsar bin/pulsar-client produce persistent://public/test-namespace/test-topic --messages "Hello Pulsar"
   ```

5. 消费消息:

   ```bash
   docker exec -it pulsar bin/pulsar-client consume persistent://public/test-namespace/test-topic -s "test-subscription" -n 0
   ```

## 客户端库

Pulsar 支持多种客户端库:

- Java:`org.apache.pulsar:pulsar-client`
- Python:`pip install pulsar-client`
- Go:`github.com/apache/pulsar-client-go`
- Node.js:`pulsar-client`
- C++:提供原生客户端

示例(Python):

```python
import pulsar

client = pulsar.Client('pulsar://localhost:6650')

# 生产者
producer = client.create_producer('persistent://public/default/my-topic')
producer.send('Hello Pulsar'.encode('utf-8'))

# 消费者
consumer = client.subscribe('persistent://public/default/my-topic', 'my-subscription')
msg = consumer.receive()
print(f"收到消息: {msg.data().decode('utf-8')}")
consumer.acknowledge(msg)

client.close()
```

## 配置

- 单机模式默认使用 RocksDB 作为元数据存储(推荐用于单节点)。
- 设置 `PULSAR_STANDALONE_USE_ZOOKEEPER=1` 可改用 ZooKeeper 作为元数据存储。
- 默认禁用 Functions Worker 以减少资源使用和启动时间。
- 生产环境请使用集群模式,配置专用的 ZooKeeper 和 BookKeeper 实例。

## 故障排除

### 单机模式问题

如果遇到连接错误,如 "NoRouteToHostException" 或 "Bookie handle is not available":

1. **清除现有数据**(如果升级或切换元数据存储):

   ```bash
   docker compose down -v
   docker compose up -d
   ```

2. **检查容器日志**:

   ```bash
   docker compose logs pulsar
   ```

3. **验证健康检查**:

   ```bash
   docker compose ps
   docker exec pulsar bin/pulsar-admin brokers healthcheck
   ```

4. **确保资源充足**:单机模式至少需要:
   - 2 GB 内存
   - 2 个 CPU 核心
   - 5 GB 磁盘空间

## 端口

| 服务 | 端口 | 说明 |
| ------------- | ---- | --------------------- |
| Pulsar Broker | 6650 | 二进制协议 |
| Pulsar HTTP | 8080 | REST Admin API 和指标 |

## 安全提示

- 此配置用于开发/测试目的。
- 生产环境请:
  - 为 broker 连接启用 TLS 加密。
  - 配置身份验证(JWT、OAuth2 等)。
  - 启用基于角色的访问控制授权。
  - 使用专用的 ZooKeeper 和 BookKeeper 集群。
  - 定期更新 Pulsar 版本以获取安全补丁。

## 许可证

Apache Pulsar 采用 Apache License 2.0 许可证。
Some files were not shown because too many files have changed in this diff.