feat: update Guidelines

Author: Sun-ZhenXing
Date: 2025-10-15 14:00:03 +08:00
parent fe329c80eb
commit 8cf227bd14
76 changed files with 1078 additions and 671 deletions

.compose-template.yaml (new file)

@@ -0,0 +1,41 @@
# Docker Compose Template for Compose Anything
# This template provides a standardized structure for all services
# Copy this template when creating new services
x-default: &default
restart: unless-stopped
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
# Example service structure:
# services:
# service-name:
# <<: *default
# image: image:${VERSION:-latest}
# ports:
# - "${PORT_OVERRIDE:-8080}:8080"
# volumes:
# - service_data:/data
# environment:
# - TZ=${TZ:-UTC}
# - ENV_VAR=${ENV_VAR:-default_value}
# healthcheck:
# test: ["CMD", "command", "to", "check", "health"]
# interval: 30s
# timeout: 10s
# retries: 3
# start_period: 10s
# deploy:
# resources:
# limits:
# cpus: '1.00'
# memory: 512M
# reservations:
# cpus: '0.25'
# memory: 128M
#
# volumes:
# service_data:

@@ -0,0 +1,52 @@
---
applyTo: '**'
---
Compose Anything helps users quickly deploy various services by providing a set of high-quality Docker Compose configuration files. These configurations constrain resource usage, can be easily migrated to systems like K8s, and are easy to understand and modify.
1. Out-of-the-box
- Configurations should work out-of-the-box with no extra steps (at most, provide a `.env` file).
2. Simple commands
- Each project ships a single `docker-compose.yaml` file.
- Command complexity should not exceed `docker compose up -d`; if more is needed, provide a `Makefile`.
- For initialization, prefer `healthcheck` with `depends_on` using `condition: service_healthy` to orchestrate startup order.
3. Stable versions
- Pin to the latest stable version instead of `latest`.
- Expose image versions via environment variables (e.g., `FOO_VERSION`).
4. Configuration conventions
- Prefer environment variables over complex CLI flags;
- Pass secrets via env vars or mounted files, never hardcode;
- Provide sensible defaults to enable zero-config startup;
- A commented `.env.example` is required;
- Env var naming: UPPER_SNAKE_CASE with service prefix (e.g., `POSTGRES_*`); use `*_PORT_OVERRIDE` for host port overrides.
5. Profiles
- Use Profiles for optional components/dependencies;
- Recommended names: `gpu` (acceleration), `metrics` (observability/exporters), `dev` (dev-only features).
6. Cross-platform & architectures
- Where images support it, ensure Debian 12+/Ubuntu 22.04+, Windows 10+, macOS 12+ work;
- Support x86-64 and ARM64 as consistently as possible;
- Avoid Linux-only host paths like `/etc/localtime` and `/etc/timezone`; prefer `TZ` env var for time zone.
7. Volumes & mounts
- Prefer relative paths for configuration to improve portability;
- Prefer named volumes for data directories to avoid permission/compat issues of host paths;
- If host paths are necessary, provide a top-level directory variable (e.g., `DATA_DIR`).
8. Resources & logging
- Always limit CPU and memory to prevent resource exhaustion;
- For GPU services, enable a single GPU by default via `deploy.resources.reservations.devices` (maps to device requests) or `gpus` where applicable;
- Limit logs (`json-file` driver: `max-size`/`max-file`).
9. Healthchecks
- Every service should define a `healthcheck` with suitable `interval`, `timeout`, `retries`, and `start_period`;
- Use `depends_on.condition: service_healthy` for dependency chains.
10. Security baseline (apply when possible)
- Run as non-root (expose `PUID`/`PGID` or set `user: "1000:1000"`);
- Read-only root filesystem (`read_only: true`), use `tmpfs`/writable mounts for required paths;
- Least privilege: `cap_drop: ["ALL"]`, add back only what's needed via `cap_add`;
- Avoid `container_name` (hurts scaling and reusable network aliases);
- If exposing the Docker socket or other high-risk mounts, clearly document risks and alternatives.
11. Documentation & discoverability
- Provide clear docs and examples (include admin/initialization notes, and security/license notes when relevant);
- Keep docs LLM-friendly;
- List primary env vars and default ports in the README, and link to `README.md` / `README.zh.md`.
Reference template: `.compose-template.yaml` in the repo root.
To find available image tags, try fetching a URL like `https://hub.docker.com/v2/repositories/library/nginx/tags?page_size=1&ordering=last_updated`.
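Taken together, the rules above (single file, pinned versions via env vars, `*_PORT_OVERRIDE`, healthchecks gating `depends_on`, resource limits, log rotation) suggest a service shape like this minimal sketch; the service names, image tags, and defaults are illustrative, not taken from this repo:

```yaml
# Hypothetical two-service sketch combining the conventions above.
x-default: &default
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: "3"

services:
  web:
    <<: *default
    image: nginx:${NGINX_VERSION:-1.27-alpine}  # pinned stable tag, overridable via env
    ports:
      - "${WEB_PORT_OVERRIDE:-8080}:80"         # *_PORT_OVERRIDE for host ports
    environment:
      - TZ=${TZ:-UTC}
    depends_on:
      db:
        condition: service_healthy              # wait for the db healthcheck below
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:80/"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
    deploy:
      resources:
        limits:
          cpus: '1.00'
          memory: 512M

  db:
    <<: *default
    image: postgres:${POSTGRES_VERSION:-17-alpine}
    environment:
      - TZ=${TZ:-UTC}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-changeme}
    volumes:
      - db_data:/var/lib/postgresql/data        # named volume for data, per guideline 7
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s

volumes:
  db_data:
```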

@@ -67,37 +67,50 @@ Compose Anything helps users quickly deploy various services by providing a set
## Guidelines
1. **Out-of-the-box**: Configurations should work out-of-the-box, requiring no setup to start (at most, provide a `.env` file).
2. **Simple Commands**
- Each project provides a single `docker-compose.yaml` file.
- Command complexity should not exceed the `docker compose` command; if it does, provide a `Makefile`.
- If a service requires initialization, use `depends_on` to simulate Init containers.
3. **Stable Versions**
- Provide the latest stable image version instead of `latest`.
- Allow version configuration via environment variables.
4. **Highly Configurable**
- Prefer configuration via environment variables rather than complex command-line arguments.
- Sensitive information like passwords should be passed via environment variables or mounted files, not hardcoded.
- Provide reasonable defaults so services can start with zero configuration.
- Provide a well-commented `.env.example` file to help users get started quickly.
- Use Profiles for optional dependencies.
5. **Cross-Platform**: (Where supported by the image) Ensure compatibility with major platforms.
- Compatibility: Debian 12+/Ubuntu 22.04+, Windows 10+, macOS 12+.
- Support multiple architectures where possible, such as x86-64 and ARM64.
6. **Careful Mounting**
- Use relative paths for configuration file mounts to ensure cross-platform compatibility.
- Use named volumes for data directories to avoid permission and compatibility issues with host path mounts.
7. **Default Resource Limits**
- Limit CPU and memory usage for each service to prevent accidental resource exhaustion.
- Limit log file size to prevent logs from filling up the disk.
- For GPU services, enable single GPU by default.
8. **Comprehensive Documentation**
- Provide good documentation and examples to help users get started and understand the configurations.
- Clearly explain how to initialize accounts, admin accounts, etc.
- Provide security and license notes when necessary.
- Offer LLM-friendly documentation for easy querying and understanding by language models.
9. **Best Practices**: Follow other best practices to ensure security, performance, and maintainability.
1. Out-of-the-box
- Configurations should work out-of-the-box with no extra steps (at most, provide a `.env` file).
2. Simple commands
- Each project ships a single `docker-compose.yaml` file.
- Command complexity should not exceed `docker compose up -d`; if more is needed, provide a `Makefile`.
- For initialization, prefer `healthcheck` with `depends_on` using `condition: service_healthy` to orchestrate startup order.
3. Stable versions
- Pin to the latest stable version instead of `latest`.
- Expose image versions via environment variables (e.g., `FOO_VERSION`).
4. Configuration conventions
- Prefer environment variables over complex CLI flags;
- Pass secrets via env vars or mounted files, never hardcode;
- Provide sensible defaults to enable zero-config startup;
- A commented `.env.example` is required;
- Env var naming: UPPER_SNAKE_CASE with service prefix (e.g., `POSTGRES_*`); use `*_PORT_OVERRIDE` for host port overrides.
5. Profiles
- Use Profiles for optional components/dependencies;
- Recommended names: `gpu` (acceleration), `metrics` (observability/exporters), `dev` (dev-only features).
6. Cross-platform & architectures
- Where images support it, ensure Debian 12+/Ubuntu 22.04+, Windows 10+, macOS 12+ work;
- Support x86-64 and ARM64 as consistently as possible;
- Avoid Linux-only host paths like `/etc/localtime` and `/etc/timezone`; prefer `TZ` env var for time zone.
7. Volumes & mounts
- Prefer relative paths for configuration to improve portability;
- Prefer named volumes for data directories to avoid permission/compat issues of host paths;
- If host paths are necessary, provide a top-level directory variable (e.g., `DATA_DIR`).
8. Resources & logging
- Always limit CPU and memory to prevent resource exhaustion;
- For GPU services, enable a single GPU by default via `deploy.resources.reservations.devices` (maps to device requests) or `gpus` where applicable;
- Limit logs (`json-file` driver: `max-size`/`max-file`).
9. Healthchecks
- Every service should define a `healthcheck` with suitable `interval`, `timeout`, `retries`, and `start_period`;
- Use `depends_on.condition: service_healthy` for dependency chains.
10. Security baseline (apply when possible)
- Run as non-root (expose `PUID`/`PGID` or set `user: "1000:1000"`);
- Read-only root filesystem (`read_only: true`), use `tmpfs`/writable mounts for required paths;
- Least privilege: `cap_drop: ["ALL"]`, add back only what's needed via `cap_add`;
- Avoid `container_name` (hurts scaling and reusable network aliases);
- If exposing the Docker socket or other high-risk mounts, clearly document risks and alternatives.
11. Documentation & discoverability
- Provide clear docs and examples (include admin/initialization notes, and security/license notes when relevant);
- Keep docs LLM-friendly;
- List primary env vars and default ports in the README, and link to `README.md` / `README.zh.md`.
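The security baseline in item 10 can be sketched as follows; `example/app` and the specific capability are placeholders, since which capabilities a real service needs (if any) varies:

```yaml
# Hypothetical hardened service illustrating the security baseline.
services:
  app:
    image: example/app:${APP_VERSION:-1.0.0}  # placeholder image, pinned via env var
    user: "1000:1000"        # run as non-root
    read_only: true          # read-only root filesystem
    tmpfs:
      - /tmp                 # writable scratch space for paths the app must write
    cap_drop:
      - ALL                  # drop every capability first...
    cap_add:
      - NET_BIND_SERVICE     # ...then add back only what this service needs
```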
## License
MIT License.
[MIT License](./LICENSE).

@@ -67,37 +67,50 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
## 规范
1. **开箱即用**,配置应该是开箱即用的,无需配置也能启动(最多提供 `.env` 文件);
2. **命令简单**
1. 开箱即用
- 配置应该是开箱即用的,无需额外步骤即可启动(最多提供 `.env` 文件)。
2. 命令简单
- 每个项目提供单一的 `docker-compose.yaml` 文件;
- 命令复杂性避免超过 `docker compose` 命令,如果超过请提供 `Makefile`
- 如果服务需要初始化,可借助 `depends_on` 模拟 Init 容器;
3. **版本稳定**
- 提供一个最新稳定的镜像版本而不是 `latest`
- 允许通过环境变量配置版本号;
4. **充分可配置**
- 尽量通过环境变量配置,而不是通过复杂的命令行参数;
- 环境变量,密码等敏感信息通过环境变量或挂载文件传递,不要硬编码;
- 提供合理默认值,尽量零配置启动;
- 尽可能提供 `.env.example` 文件并有注释,帮助用户快速上手
- 如果是非必要依赖,请使用 Profiles 配置;
5. **跨平台**,(在镜像支持的情况下)请确保主流平台都能正常启动;
- 兼容标准是Debian 12+/Ubuntu 22.04+、Windows 10+、macOS 12+
- 尽可能兼容不同的架构,如 x86-64、ARM64
6. **小心处理挂载**
- 配置文件尽量使用相对路径挂载,确保跨平台兼容性
- 数据目录尽量使用命名卷,避免主机路径挂载带来的权限和兼容性问题
7. **默认资源限制**
- 对每个服务限制 CPU 和内存使用,防止意外的资源耗尽;
- 限制日志的大小,防止日志文件占满磁盘
- 对于 GPU 服务默认启用单卡
8. **文档全面**
- 提供良好的文档和示例,帮助用户快速上手和理解配置;
- 特别要提供如何初始化账户,管理员账户等说明
- 必要时,提供安全和许可说明
- 提供 LLM 友好的文档,方便用户使用 LLM 进行查询和理解;
9. **最佳实践**,遵循其他可能的最佳实践,确保安全性、性能和可维护性。
- 命令复杂度不应超过 `docker compose up -d`;若需要额外流程,请提供 `Makefile`
- 服务需要初始化时,优先使用 `healthcheck` 与 `depends_on` 的 `condition: service_healthy` 组织启动顺序。
3. 版本稳定
- 固定到“最新稳定版”而非 `latest`
- 通过环境变量暴露镜像版本(如 `FOO_VERSION`)。
4. 配置约定
- 尽量通过环境变量配置,而不是复杂的命令行参数;
- 敏感信息通过环境变量或挂载文件传递,不要硬编码;
- 提供合理默认值,实现零配置启动;
- 必须提供带注释的 `.env.example`
- 环境变量命名建议:全大写、下划线分隔,按服务加前缀(如 `POSTGRES_*`),端口覆写统一用 `*_PORT_OVERRIDE`
5. Profiles 规范
- 对“可选组件/依赖”使用 Profiles
- 推荐命名:`gpu`(GPU 加速)、`metrics`(可观测性/导出器)、`dev`(开发特性)。
6. 跨平台与架构
- 在镜像支持前提下,确保 Debian 12+/Ubuntu 22.04+、Windows 10+、macOS 12+ 可用
- 支持 x86-64 与 ARM64 架构尽可能一致
- 避免依赖仅在 Linux 主机存在的主机路径(例如 `/etc/localtime` 与 `/etc/timezone`),统一使用 `TZ` 环境变量传递时区。
7. 卷与挂载
- 配置文件优先使用相对路径,增强跨平台兼容
- 数据目录优先使用“命名卷”,避免主机路径权限/兼容性问题
- 如需主机路径,建议提供顶层目录变量(如 `DATA_DIR`)。
8. 资源与日志
- 必须限制 CPU/内存,防止资源打爆
- GPU 服务默认单卡:可使用 `deploy.resources.reservations.devices`(Compose 支持为 device_requests 映射)或 `gpus`;
- 限制日志大小(`json-file` 的 `max-size`/`max-file`)。
9. 健康检查
- 每个服务应提供 `healthcheck`,包括合适的 `interval`、`timeout`、`retries`、`start_period`;
- 依赖链通过 `depends_on.condition: service_healthy` 组织。
10. 安全基线(能用则用)
- 以非 root 运行(提供 `PUID`/`PGID` 或直接 `user: "1000:1000"`);
- 只读根文件系统(`read_only: true`),必要目录使用 `tmpfs`/可写挂载;
- 最小权限:`cap_drop: ["ALL"]`,按需再 `cap_add`
- 避免使用 `container_name`(影响可扩缩与复用网络别名);
- 如需暴露 Docker 套接字等高危挂载,必须在文档中明确“风险与替代方案”。
11. 文档与可发现性
- 提供清晰文档与示例(含初始化与管理员账号说明、必要的安全/许可说明);
- 提供对 LLM 友好的结构化文档;
- 在 README 中标注主要环境变量与默认端口,并链接到 `README.md` / `README.zh.md`
## 开源协议
MIT License.
[MIT License](./LICENSE).

@@ -1,24 +1,19 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
apache:
<<: *default
image: httpd:${APACHE_VERSION:-2.4.62-alpine3.20}
container_name: apache
ports:
- "${APACHE_HTTP_PORT_OVERRIDE:-80}:80"
- "${APACHE_HTTPS_PORT_OVERRIDE:-443}:443"
volumes:
- *localtime
- *timezone
- apache_logs:/usr/local/apache2/logs
- ./htdocs:/usr/local/apache2/htdocs:ro
@@ -26,6 +21,7 @@ services:
# - ./httpd.conf:/usr/local/apache2/conf/httpd.conf:ro
# - ./ssl:/usr/local/apache2/conf/ssl:ro
environment:
- TZ=${TZ:-UTC}
- APACHE_RUN_USER=${APACHE_RUN_USER:-www-data}
- APACHE_RUN_GROUP=${APACHE_RUN_GROUP:-www-data}
deploy:
@@ -36,6 +32,12 @@ services:
reservations:
cpus: '0.25'
memory: 128M
healthcheck:
test: ["CMD", "httpd", "-t"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
volumes:
apache_logs:

@@ -1,34 +1,31 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
apisix:
<<: *default
image: apache/apisix:${APISIX_VERSION:-3.13.0-debian}
container_name: apisix
ports:
- "${APISIX_HTTP_PORT_OVERRIDE:-9080}:9080"
- "${APISIX_HTTPS_PORT_OVERRIDE:-9443}:9443"
- "${APISIX_ADMIN_PORT_OVERRIDE:-9180}:9180"
volumes:
- *localtime
- *timezone
- apisix_logs:/usr/local/apisix/logs
# Optional: Mount custom configuration
# - ./config.yaml:/usr/local/apisix/conf/config.yaml
# - ./apisix.yaml:/usr/local/apisix/conf/apisix.yaml
environment:
- TZ=${TZ:-UTC}
- APISIX_STAND_ALONE=${APISIX_STAND_ALONE:-false}
depends_on:
- etcd
etcd:
condition: service_healthy
deploy:
resources:
limits:
@@ -37,18 +34,22 @@ services:
reservations:
cpus: '0.25'
memory: 256M
healthcheck:
test: ["CMD-SHELL", "curl -f http://localhost:9080/apisix/status || exit 1"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
etcd:
<<: *default
image: quay.io/coreos/etcd:${ETCD_VERSION:-v3.6.0}
container_name: apisix-etcd
ports:
- "${ETCD_CLIENT_PORT_OVERRIDE:-2379}:2379"
volumes:
- *localtime
- *timezone
- etcd_data:/etcd-data
environment:
- TZ=${TZ:-UTC}
- ETCD_NAME=apisix-etcd
- ETCD_DATA_DIR=/etcd-data
- ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379
@@ -87,23 +88,28 @@ services:
reservations:
cpus: '0.1'
memory: 128M
healthcheck:
test: ["CMD", "etcdctl", "endpoint", "health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
# Optional: APISIX Dashboard
apisix-dashboard:
<<: *default
image: apache/apisix-dashboard:${APISIX_DASHBOARD_VERSION:-3.0.1-alpine}
container_name: apisix-dashboard
ports:
- "${APISIX_DASHBOARD_PORT_OVERRIDE:-9000}:9000"
volumes:
- *localtime
- *timezone
- dashboard_conf:/usr/local/apisix-dashboard/conf
environment:
- TZ=${TZ:-UTC}
- APISIX_DASHBOARD_USER=${APISIX_DASHBOARD_USER:-admin}
- APISIX_DASHBOARD_PASSWORD=${APISIX_DASHBOARD_PASSWORD:-admin}
depends_on:
- apisix
apisix:
condition: service_healthy
profiles:
- dashboard
deploy:

@@ -1,12 +1,10 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
bifrost:
@@ -16,6 +14,8 @@ services:
- bifrost_data:/app/data
ports:
- "${BIFROST_PORT:-28080}:8080"
environment:
- TZ=${TZ:-UTC}
deploy:
resources:
limits:
@@ -24,6 +24,12 @@ services:
reservations:
cpus: '0.10'
memory: 128M
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
volumes:
bifrost_data:

@@ -1,23 +1,19 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
bytebot-desktop:
<<: *default
image: ghcr.io/bytebot-ai/bytebot-desktop:${BYTEBOT_VERSION:-edge}
container_name: bytebot-desktop
ports:
- "${BYTEBOT_DESKTOP_PORT_OVERRIDE:-9990}:9990"
volumes:
- *localtime
- *timezone
environment:
- TZ=${TZ:-UTC}
shm_size: 2gb
deploy:
resources:
@@ -27,25 +23,30 @@ services:
reservations:
cpus: '1.0'
memory: 2G
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:9990/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
bytebot-agent:
<<: *default
image: ghcr.io/bytebot-ai/bytebot-agent:${BYTEBOT_VERSION:-edge}
container_name: bytebot-agent
depends_on:
- bytebot-desktop
- bytebot-db
bytebot-desktop:
condition: service_healthy
bytebot-db:
condition: service_healthy
ports:
- "${BYTEBOT_AGENT_PORT_OVERRIDE:-9991}:9991"
environment:
- TZ=${TZ:-UTC}
- BYTEBOTD_URL=http://bytebot-desktop:9990
- DATABASE_URL=postgresql://${POSTGRES_USER:-bytebot}:${POSTGRES_PASSWORD:-bytebotpass}@bytebot-db:5432/${POSTGRES_DB:-bytebot}
- ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
- OPENAI_API_KEY=${OPENAI_API_KEY:-}
- GEMINI_API_KEY=${GEMINI_API_KEY:-}
volumes:
- *localtime
- *timezone
deploy:
resources:
limits:
@@ -54,21 +55,25 @@ services:
reservations:
cpus: '0.5'
memory: 512M
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:9991/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
bytebot-ui:
<<: *default
image: ghcr.io/bytebot-ai/bytebot-ui:${BYTEBOT_VERSION:-edge}
container_name: bytebot-ui
depends_on:
- bytebot-agent
bytebot-agent:
condition: service_healthy
ports:
- "${BYTEBOT_UI_PORT_OVERRIDE:-9992}:9992"
environment:
- TZ=${TZ:-UTC}
- BYTEBOT_AGENT_BASE_URL=http://localhost:9991
- BYTEBOT_DESKTOP_VNC_URL=http://localhost:9990/websockify
volumes:
- *localtime
- *timezone
deploy:
resources:
limits:
@@ -81,15 +86,13 @@ services:
bytebot-db:
<<: *default
image: postgres:${POSTGRES_VERSION:-17-alpine}
container_name: bytebot-db
environment:
- TZ=${TZ:-UTC}
- POSTGRES_USER=${POSTGRES_USER:-bytebot}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-bytebotpass}
- POSTGRES_DB=${POSTGRES_DB:-bytebot}
- PGDATA=/var/lib/postgresql/data/pgdata
volumes:
- *localtime
- *timezone
- bytebot_db_data:/var/lib/postgresql/data
deploy:
resources:
@@ -99,6 +102,12 @@ services:
reservations:
cpus: '0.25'
memory: 256M
healthcheck:
test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes:
bytebot_db_data:

@@ -1,30 +1,26 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
cassandra:
<<: *default
image: cassandra:${CASSANDRA_VERSION:-5.0.2}
container_name: cassandra
ports:
- "${CASSANDRA_CQL_PORT_OVERRIDE:-9042}:9042"
- "${CASSANDRA_THRIFT_PORT_OVERRIDE:-9160}:9160"
volumes:
- *localtime
- *timezone
- cassandra_data:/var/lib/cassandra
- cassandra_logs:/var/log/cassandra
# Custom configuration
# - ./cassandra.yaml:/etc/cassandra/cassandra.yaml:ro
environment:
- TZ=${TZ:-UTC}
- CASSANDRA_CLUSTER_NAME=${CASSANDRA_CLUSTER_NAME:-Test Cluster}
- CASSANDRA_DC=${CASSANDRA_DC:-datacenter1}
- CASSANDRA_RACK=${CASSANDRA_RACK:-rack1}

@@ -1,12 +1,10 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
clash:
@@ -16,9 +14,9 @@ services:
- "7880:80"
- "7890:7890"
volumes:
- *localtime
- *timezone
- ./config.yaml:/home/runner/.config/clash/config.yaml
environment:
- TZ=${TZ:-UTC}
deploy:
resources:
limits:
@@ -27,3 +25,9 @@ services:
reservations:
cpus: "0.25"
memory: 256M
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:80/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s

@@ -1,18 +1,15 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
consul:
<<: *default
image: consul:${CONSUL_VERSION:-1.20.3}
container_name: consul
ports:
- "${CONSUL_HTTP_PORT_OVERRIDE:-8500}:8500"
- "${CONSUL_DNS_PORT_OVERRIDE:-8600}:8600/udp"
@@ -20,14 +17,13 @@ services:
- "${CONSUL_SERF_WAN_PORT_OVERRIDE:-8302}:8302"
- "${CONSUL_SERVER_RPC_PORT_OVERRIDE:-8300}:8300"
volumes:
- *localtime
- *timezone
- consul_data:/consul/data
- consul_config:/consul/config
# Custom configuration
# - ./consul.json:/consul/config/consul.json:ro
environment:
- TZ=${TZ:-UTC}
- CONSUL_BIND_INTERFACE=${CONSUL_BIND_INTERFACE:-eth0}
- CONSUL_CLIENT_INTERFACE=${CONSUL_CLIENT_INTERFACE:-eth0}
- CONSUL_LOCAL_CONFIG=${CONSUL_LOCAL_CONFIG:-'{"datacenter":"dc1","server":true,"ui_config":{"enabled":true},"bootstrap_expect":1,"log_level":"INFO"}'}

@@ -1,22 +1,22 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
dify-api:
<<: *default
image: langgenius/dify-api:${DIFY_VERSION:-0.18.2}
container_name: dify-api
depends_on:
- dify-db
- dify-redis
dify-db:
condition: service_healthy
dify-redis:
condition: service_healthy
environment:
- TZ=${TZ:-UTC}
- MODE=api
- LOG_LEVEL=${LOG_LEVEL:-INFO}
- SECRET_KEY=${SECRET_KEY:-sk-9f73s3ljTXVcMT3Blb3ljTqtsKiGHXVcMT3BlbkFJLK7U}
@@ -30,8 +30,6 @@ services:
- VECTOR_STORE=${VECTOR_STORE:-weaviate}
- WEAVIATE_ENDPOINT=http://dify-weaviate:8080
volumes:
- *localtime
- *timezone
- dify_storage:/app/api/storage
deploy:
resources:
@@ -41,15 +39,23 @@ services:
reservations:
cpus: '0.5'
memory: 1G
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:5001/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
dify-worker:
<<: *default
image: langgenius/dify-api:${DIFY_VERSION:-0.18.2}
container_name: dify-worker
depends_on:
- dify-db
- dify-redis
dify-db:
condition: service_healthy
dify-redis:
condition: service_healthy
environment:
- TZ=${TZ:-UTC}
- MODE=worker
- LOG_LEVEL=${LOG_LEVEL:-INFO}
- SECRET_KEY=${SECRET_KEY:-sk-9f73s3ljTXVcMT3Blb3ljTqtsKiGHXVcMT3BlbkFJLK7U}
@@ -63,8 +69,6 @@ services:
- VECTOR_STORE=${VECTOR_STORE:-weaviate}
- WEAVIATE_ENDPOINT=http://dify-weaviate:8080
volumes:
- *localtime
- *timezone
- dify_storage:/app/api/storage
deploy:
resources:
@@ -78,10 +82,11 @@ services:
dify-web:
<<: *default
image: langgenius/dify-web:${DIFY_VERSION:-0.18.2}
container_name: dify-web
depends_on:
- dify-api
dify-api:
condition: service_healthy
environment:
- TZ=${TZ:-UTC}
- NEXT_PUBLIC_API_URL=${DIFY_API_URL:-http://localhost:5001}
- NEXT_PUBLIC_APP_URL=${DIFY_APP_URL:-http://localhost:3000}
ports:
@@ -98,15 +103,13 @@ services:
dify-db:
<<: *default
image: postgres:15-alpine
container_name: dify-db
environment:
- TZ=${TZ:-UTC}
- POSTGRES_USER=${POSTGRES_USER:-dify}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-difypass}
- POSTGRES_DB=${POSTGRES_DB:-dify}
- PGDATA=/var/lib/postgresql/data/pgdata
volumes:
- *localtime
- *timezone
- dify_db_data:/var/lib/postgresql/data
deploy:
resources:
@@ -116,15 +119,20 @@ services:
reservations:
cpus: '0.25'
memory: 256M
healthcheck:
test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
dify-redis:
<<: *default
image: redis:7-alpine
container_name: dify-redis
command: redis-server --requirepass ${REDIS_PASSWORD:-}
environment:
- TZ=${TZ:-UTC}
volumes:
- *localtime
- *timezone
- dify_redis_data:/data
deploy:
resources:
@@ -134,22 +142,26 @@ services:
reservations:
cpus: '0.1'
memory: 128M
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 3s
retries: 3
start_period: 5s
dify-weaviate:
<<: *default
image: semitechnologies/weaviate:${WEAVIATE_VERSION:-1.28.12}
container_name: dify-weaviate
profiles:
- weaviate
environment:
- TZ=${TZ:-UTC}
- QUERY_DEFAULTS_LIMIT=25
- AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED=true
- PERSISTENCE_DATA_PATH=/var/lib/weaviate
- DEFAULT_VECTORIZER_MODULE=none
- CLUSTER_HOSTNAME=node1
volumes:
- *localtime
- *timezone
- dify_weaviate_data:/var/lib/weaviate
deploy:
resources:
@@ -159,6 +171,12 @@ services:
reservations:
cpus: '0.25'
memory: 512M
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/v1/.well-known/ready"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes:
dify_storage:

View File

@@ -1,24 +1,21 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
registry:
<<: *default
image: registry:${REGISTRY_VERSION:-3.0.0}
volumes:
- *localtime
- *timezone
- ./certs:/certs:ro
- ./config.yml:/etc/distribution/config.yml:ro
- registry:/var/lib/registry
environment:
TZ: ${TZ:-UTC}
REGISTRY_AUTH: ${REGISTRY_AUTH:-htpasswd}
REGISTRY_AUTH_HTPASSWD_REALM: ${REGISTRY_AUTH_HTPASSWD_REALM:-Registry Realm}
REGISTRY_AUTH_HTPASSWD_PATH: ${REGISTRY_AUTH_HTPASSWD_PATH:-/certs/passwd}
@@ -35,6 +32,12 @@ services:
reservations:
cpus: '0.1'
memory: 128M
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:5000/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
volumes:
registry:

@@ -1,27 +1,23 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
dockge:
<<: *default
image: louislam/dockge:${DOCKGE_VERSION:-1}
container_name: dockge
ports:
- "${PORT_OVERRIDE:-5001}:5001"
volumes:
- *localtime
- *timezone
- /var/run/docker.sock:/var/run/docker.sock
- dockge_data:/app/data
- ${STACKS_DIR:-./stacks}:/opt/stacks
environment:
- TZ=${TZ:-UTC}
- DOCKGE_STACKS_DIR=${DOCKGE_STACKS_DIR:-/opt/stacks}
- PUID=${PUID:-1000}
- PGID=${PGID:-1000}
@@ -33,6 +29,12 @@ services:
reservations:
cpus: '0.25'
memory: 256M
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:5001/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
volumes:
dockge_data:

@@ -1,30 +1,26 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
elasticsearch:
<<: *default
image: docker.elastic.co/elasticsearch/elasticsearch:${ELASTICSEARCH_VERSION:-8.16.1}
container_name: elasticsearch
ports:
- "${ELASTICSEARCH_HTTP_PORT_OVERRIDE:-9200}:9200"
- "${ELASTICSEARCH_TRANSPORT_PORT_OVERRIDE:-9300}:9300"
volumes:
- *localtime
- *timezone
- elasticsearch_data:/usr/share/elasticsearch/data
- elasticsearch_logs:/usr/share/elasticsearch/logs
# Custom configuration
# - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
environment:
- TZ=${TZ:-UTC}
- node.name=elasticsearch
- cluster.name=${ELASTICSEARCH_CLUSTER_NAME:-docker-cluster}
- discovery.type=${ELASTICSEARCH_DISCOVERY_TYPE:-single-node}

@@ -1,26 +1,22 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
etcd:
<<: *default
image: quay.io/coreos/etcd:${ETCD_VERSION:-v3.6.0}
container_name: etcd
ports:
- "${ETCD_CLIENT_PORT_OVERRIDE:-2379}:2379"
- "${ETCD_PEER_PORT_OVERRIDE:-2380}:2380"
volumes:
- *localtime
- *timezone
- etcd_data:/etcd-data
environment:
- TZ=${TZ:-UTC}
- ETCD_NAME=${ETCD_NAME:-etcd-node}
- ETCD_DATA_DIR=/etcd-data
- ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379
@@ -59,6 +55,12 @@ services:
reservations:
cpus: '0.25'
memory: 256M
healthcheck:
test: ["CMD", "etcdctl", "endpoint", "health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
volumes:
etcd_data:

@@ -1,21 +1,19 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
firecrawl:
<<: *default
image: mendableai/firecrawl:${FIRECRAWL_VERSION:-v1.16.0}
container_name: firecrawl
ports:
- "${FIRECRAWL_PORT_OVERRIDE:-3002}:3002"
environment:
TZ: ${TZ:-UTC}
REDIS_URL: redis://:${REDIS_PASSWORD:-firecrawl}@redis:6379
PLAYWRIGHT_MICROSERVICE_URL: http://playwright:3000
PORT: 3002
@@ -23,8 +21,10 @@ services:
SCRAPE_RATE_LIMIT_TOKEN_BUCKET_SIZE: ${SCRAPE_RATE_LIMIT_TOKEN_BUCKET_SIZE:-20}
SCRAPE_RATE_LIMIT_TOKEN_BUCKET_REFILL: ${SCRAPE_RATE_LIMIT_TOKEN_BUCKET_REFILL:-1}
depends_on:
- redis
- playwright
redis:
condition: service_healthy
playwright:
condition: service_started
deploy:
resources:
limits:
@@ -33,15 +33,20 @@ services:
reservations:
cpus: '1.0'
memory: 2G
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3002/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
redis:
<<: *default
image: redis:${REDIS_VERSION:-7.4.2-alpine}
container_name: firecrawl-redis
command: redis-server --requirepass ${REDIS_PASSWORD:-firecrawl} --appendonly yes
environment:
- TZ=${TZ:-UTC}
volumes:
- *localtime
- *timezone
- redis_data:/data
deploy:
resources:
@@ -51,12 +56,18 @@ services:
reservations:
cpus: '0.5'
memory: 256M
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 3s
retries: 3
start_period: 5s
playwright:
<<: *default
image: mendableai/firecrawl-playwright:${PLAYWRIGHT_VERSION:-latest}
container_name: firecrawl-playwright
environment:
TZ: ${TZ:-UTC}
PORT: 3000
PROXY_SERVER: ${PROXY_SERVER:-}
PROXY_USERNAME: ${PROXY_USERNAME:-}

@@ -1,22 +1,19 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
frpc:
<<: *default
image: snowdreamtech/frpc:${FRPC_VERSION:-0.64.0}
volumes:
- *localtime
- *timezone
- ./frpc.toml:/etc/frp/frpc.toml:ro
environment:
TZ: ${TZ:-UTC}
FRP_SERVER_ADDR: ${FRP_SERVER_ADDR}
FRP_SERVER_PORT: ${FRP_SERVER_PORT}
FRP_SERVER_TOKEN: ${FRP_SERVER_TOKEN}

@@ -1,25 +1,22 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
frps:
<<: *default
image: snowdreamtech/frps:${FRPS_VERSION:-0.64.0}
volumes:
- *localtime
- *timezone
- ./frps.toml:/etc/frp/frps.toml:ro
ports:
- ${FRP_PORT_OVERRIDE_SERVER:-9870}:${FRP_SERVER_PORT:-9870}
- ${FRP_PORT_OVERRIDE_ADMIN:-7890}:${FRP_ADMIN_PORT:-7890}
environment:
TZ: ${TZ:-UTC}
FRP_SERVER_TOKEN: ${FRP_SERVER_TOKEN}
FRP_SERVER_PORT: ${FRP_SERVER_PORT:-9870}
FRP_ADMIN_PORT: ${FRP_ADMIN_PORT:-7890}
@@ -33,3 +30,9 @@ services:
reservations:
cpus: '0.1'
memory: 64M
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:${FRP_ADMIN_PORT:-7890}/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s

@@ -1,26 +1,23 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
gitea_runner:
<<: *default
image: gitea/act_runner:0.2.12
environment:
TZ: ${TZ:-UTC}
CONFIG_FILE: /config.yaml
GITEA_INSTANCE_URL: ${INSTANCE_URL:-http://localhost:3000}
GITEA_RUNNER_REGISTRATION_TOKEN: ${REGISTRATION_TOKEN}
GITEA_RUNNER_NAME: ${RUNNER_NAME:-Gitea-Runner}
GITEA_RUNNER_LABELS: ${RUNNER_LABELS:-DockerRunner}
volumes:
- *localtime
- *timezone
- ./config.yaml:/config.yaml:ro
- gitea_runner_data:/data
- /var/run/docker.sock:/var/run/docker.sock

View File

@@ -1,33 +1,31 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
gitea:
<<: *default
image: gitea/gitea:${GITEA_VERSION:-1.24.6-rootless}
environment:
- TZ=${TZ:-UTC}
- GITEA__database__DB_TYPE=${GITEA_DB_TYPE:-postgres}
- GITEA__database__HOST=${GITEA_POSTGRES_HOST:-db:5432}
- GITEA__database__USER=${POSTGRES_USER:-gitea}
- GITEA__database__NAME=${POSTGRES_DB:-gitea}
- GITEA__database__PASSWD=${POSTGRES_PASSWORD:-gitea}
volumes:
- *localtime
- *timezone
- gitea_data:/var/lib/gitea
- ./config:/etc/gitea
ports:
- "${GITEA_HTTP_PORT:-3000}:3000"
- "${GITEA_SSH_PORT:-3022}:22"
depends_on:
db:
condition: service_healthy
deploy:
resources:
limits:
@@ -36,17 +34,22 @@ services:
reservations:
cpus: '0.5'
memory: 512M
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
db:
<<: *default
image: postgres:${POSTGRES_VERSION:-17.6}
environment:
- TZ=${TZ:-UTC}
- POSTGRES_USER=${POSTGRES_USER:-gitea}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-gitea}
- POSTGRES_DB=${POSTGRES_DB:-gitea}
volumes:
- *localtime
- *timezone
- postgres:/var/lib/postgresql/data
deploy:
resources:
@@ -56,6 +59,12 @@ services:
reservations:
cpus: '0.5'
memory: 512M
healthcheck:
test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes:
gitea_data:

View File

@@ -1,22 +1,20 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
gitlab-runner:
<<: *default
image: gitlab/gitlab-runner:${GITLAB_RUNNER_VERSION:-alpine3.21-v18.4.0}
volumes:
- *localtime
- *timezone
- /var/run/docker.sock:/var/run/docker.sock
- ./config:/etc/gitlab-runner
environment:
- TZ=${TZ:-UTC}
deploy:
resources:
limits:

View File

@@ -1,12 +1,10 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
gitlab:
@@ -17,11 +15,12 @@ services:
- "${GITLAB_PORT_OVERRIDE_HTTP:-5080}:80"
- "${GITLAB_PORT_OVERRIDE_SSH:-5022}:22"
volumes:
- *localtime
- *timezone
- ./config:/etc/gitlab
- gitlab_logs:/var/log/gitlab
- gitlab_data:/var/opt/gitlab
environment:
- TZ=${TZ:-UTC}
- GITLAB_OMNIBUS_CONFIG=${GITLAB_OMNIBUS_CONFIG:-}
deploy:
resources:
limits:
@@ -30,6 +29,12 @@ services:
reservations:
cpus: '1.0'
memory: 4G
healthcheck:
test: ["CMD", "/opt/gitlab/bin/gitlab-healthcheck", "--fail"]
interval: 60s
timeout: 30s
retries: 5
start_period: 300s
volumes:
gitlab_logs:

View File

@@ -1,25 +1,21 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
gpustack:
<<: *default
image: gpustack/gpustack:${GPUSTACK_VERSION:-v0.5.3}
container_name: gpustack
ports:
- "${GPUSTACK_PORT_OVERRIDE:-80}:80"
volumes:
- *localtime
- *timezone
- gpustack_data:/var/lib/gpustack
environment:
- TZ=${TZ:-UTC}
- GPUSTACK_DEBUG=${GPUSTACK_DEBUG:-false}
- GPUSTACK_HOST=${GPUSTACK_HOST:-0.0.0.0}
- GPUSTACK_PORT=${GPUSTACK_PORT:-80}
@@ -34,14 +30,19 @@ services:
reservations:
cpus: '1.0'
memory: 2G
# Uncomment below for GPU support
# reservations:
#   devices:
#     - driver: nvidia
#       count: 1
#       capabilities: [gpu]
# Alternatively, set the container runtime directly:
# runtime: nvidia
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:80/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes:
gpustack_data:
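For reference, enabling GPU access for the `gpustack` service amounts to uncommenting the device reservation; a minimal sketch of the resulting section (driver and device count depend on the host):

```yaml
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: 1
          capabilities: [gpu]
```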

View File

@@ -1,23 +1,18 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
grafana:
<<: *default
image: grafana/grafana:${GRAFANA_VERSION:-12.1.1}
container_name: grafana
ports:
- "${GRAFANA_PORT_OVERRIDE:-3000}:3000"
volumes:
- *localtime
- *timezone
- grafana_data:/var/lib/grafana
- grafana_logs:/var/log/grafana
@@ -25,6 +20,7 @@ services:
# - ./grafana.ini:/etc/grafana/grafana.ini
# - ./provisioning:/etc/grafana/provisioning
environment:
- TZ=${TZ:-UTC}
- GF_SECURITY_ADMIN_USER=${GRAFANA_ADMIN_USER:-admin}
- GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_ADMIN_PASSWORD:-admin}
- GF_USERS_ALLOW_SIGN_UP=${GRAFANA_ALLOW_SIGN_UP:-false}
@@ -40,6 +36,12 @@ services:
reservations:
cpus: '0.25'
memory: 256M
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/api/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes:
grafana_data:

View File

@@ -1,23 +1,21 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
halo:
<<: *default
image: halohub/halo:${HALO_VERSION:-2.21.9}
container_name: halo
ports:
- "${HALO_PORT:-8090}:8090"
volumes:
- halo_data:/root/.halo2
environment:
- TZ=${TZ:-UTC}
- SPRING_R2DBC_URL=${SPRING_R2DBC_URL:-r2dbc:pool:postgresql://halo-db:5432/halo}
- SPRING_R2DBC_USERNAME=${POSTGRES_USER:-postgres}
- SPRING_R2DBC_PASSWORD=${POSTGRES_PASSWORD:-postgres}
@@ -36,12 +34,18 @@ services:
reservations:
cpus: '0.5'
memory: 512M
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8090/actuator/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
halo-db:
<<: *default
image: postgres:${POSTGRES_VERSION:-17.2-alpine3.21}
container_name: halo-db
environment:
- TZ=${TZ:-UTC}
- POSTGRES_USER=${POSTGRES_USER:-postgres}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-postgres}
- POSTGRES_DB=${POSTGRES_DB:-halo}

View File

@@ -1,29 +1,27 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
# Harbor Core
harbor-core:
<<: *default
image: goharbor/harbor-core:${HARBOR_VERSION:-v2.12.0}
container_name: harbor-core
depends_on:
harbor-db:
condition: service_healthy
harbor-redis:
condition: service_healthy
volumes:
- *localtime
- *timezone
- harbor_config:/etc/core
- harbor_ca_download:/etc/core/ca
- harbor_secret:/etc/core/certificates
environment:
- TZ=${TZ:-UTC}
- CORE_SECRET=${HARBOR_CORE_SECRET:-}
- JOBSERVICE_SECRET=${HARBOR_JOBSERVICE_SECRET:-}
- DATABASE_TYPE=postgresql
@@ -32,7 +30,7 @@ services:
- POSTGRESQL_USERNAME=postgres
- POSTGRESQL_PASSWORD=${HARBOR_DB_PASSWORD:-password}
- POSTGRESQL_DATABASE=registry
- REGISTRY_URL=http://harbor-registry:5000
- TOKEN_SERVICE_URL=http://harbor-core:8080/service/token
- HARBOR_ADMIN_PASSWORD=${HARBOR_ADMIN_PASSWORD:-Harbor12345}
- CORE_URL=http://harbor-core:8080
@@ -40,20 +38,26 @@ services:
- REGISTRY_STORAGE_PROVIDER_NAME=filesystem
- READ_ONLY=false
- RELOAD_KEY=${HARBOR_RELOAD_KEY:-}
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/api/v2.0/ping"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
# Harbor JobService
harbor-jobservice:
<<: *default
image: goharbor/harbor-jobservice:${HARBOR_VERSION:-v2.12.0}
container_name: harbor-jobservice
depends_on:
harbor-db:
condition: service_healthy
harbor-redis:
condition: service_healthy
volumes:
- *localtime
- *timezone
- harbor_job_logs:/var/log/jobs
environment:
- TZ=${TZ:-UTC}
- CORE_SECRET=${HARBOR_CORE_SECRET:-}
- JOBSERVICE_SECRET=${HARBOR_JOBSERVICE_SECRET:-}
- CORE_URL=http://harbor-core:8080
@@ -68,49 +72,56 @@ services:
harbor-registry:
<<: *default
image: goharbor/registry-photon:${HARBOR_VERSION:-v2.12.0}
container_name: harbor-registry
volumes:
- *localtime
- *timezone
- harbor_registry:/storage
environment:
- TZ=${TZ:-UTC}
- REGISTRY_HTTP_SECRET=${HARBOR_REGISTRY_SECRET:-}
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:5000/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
# Harbor Portal (UI)
harbor-portal:
<<: *default
image: goharbor/harbor-portal:${HARBOR_VERSION:-v2.12.0}
container_name: harbor-portal
volumes:
- *localtime
- *timezone
environment:
- TZ=${TZ:-UTC}
# Harbor Proxy (Nginx)
harbor-proxy:
<<: *default
image: goharbor/nginx-photon:${HARBOR_VERSION:-v2.12.0}
container_name: harbor-proxy
ports:
- "${HARBOR_HTTP_PORT_OVERRIDE:-80}:8080"
- "${HARBOR_HTTPS_PORT_OVERRIDE:-443}:8443"
depends_on:
harbor-core:
condition: service_healthy
harbor-portal:
condition: service_started
harbor-registry:
condition: service_healthy
volumes:
- *localtime
- *timezone
environment:
- TZ=${TZ:-UTC}
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
# Harbor Database
harbor-db:
<<: *default
image: goharbor/harbor-db:${HARBOR_VERSION:-v2.12.0}
container_name: harbor-db
volumes:
- *localtime
- *timezone
- harbor_db:/var/lib/postgresql/data
environment:
- TZ=${TZ:-UTC}
- POSTGRES_PASSWORD=${HARBOR_DB_PASSWORD:-password}
- POSTGRES_DB=registry
deploy:
@@ -121,16 +132,21 @@ services:
reservations:
cpus: '0.25'
memory: 256M
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
# Harbor Redis
harbor-redis:
<<: *default
image: goharbor/redis-photon:${HARBOR_VERSION:-v2.12.0}
container_name: harbor-redis
volumes:
- *localtime
- *timezone
- harbor_redis:/var/lib/redis
environment:
- TZ=${TZ:-UTC}
deploy:
resources:
limits:
@@ -139,6 +155,12 @@ services:
reservations:
cpus: '0.10'
memory: 64M
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 3s
retries: 3
start_period: 5s
volumes:
harbor_config:
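The Harbor secrets above default to empty; a quick way to generate them into a `.env` file is shown below (a sketch — the variable names match the compose file above, the values are random):

```shell
# Generate random 32-char hex secrets for Harbor
# (variable names match the compose file above).
core_secret=$(openssl rand -hex 16)
jobservice_secret=$(openssl rand -hex 16)
registry_secret=$(openssl rand -hex 16)
cat > .env <<EOF
HARBOR_CORE_SECRET=${core_secret}
HARBOR_JOBSERVICE_SECRET=${jobservice_secret}
HARBOR_REGISTRY_SECRET=${registry_secret}
EOF
```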

View File

@@ -1,12 +1,10 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
lama-cleaner:
@@ -17,11 +15,10 @@ services:
build:
context: .
dockerfile: Dockerfile
environment:
TZ: ${TZ:-UTC}
# HF_ENDPOINT: https://hf-mirror.com
volumes:
- *localtime
- *timezone
- ./models:/root/.cache
command:
- iopaint
@@ -32,8 +29,19 @@ services:
- --host=0.0.0.0
deploy:
resources:
limits:
cpus: '2.0'
memory: 4G
reservations:
cpus: '1.0'
memory: 2G
devices:
- driver: nvidia
device_ids: ['0']
capabilities: [compute, utility]
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s

View File

@@ -1,30 +1,26 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
jenkins:
<<: *default
image: jenkins/jenkins:${JENKINS_VERSION:-2.486-lts-jdk17}
container_name: jenkins
ports:
- "${JENKINS_HTTP_PORT_OVERRIDE:-8080}:8080"
- "${JENKINS_AGENT_PORT_OVERRIDE:-50000}:50000"
volumes:
- *localtime
- *timezone
- jenkins_home:/var/jenkins_home
- /var/run/docker.sock:/var/run/docker.sock:ro
# Custom configuration
# - ./jenkins.yaml:/var/jenkins_home/casc_configs/jenkins.yaml:ro
environment:
- TZ=${TZ:-UTC}
- JENKINS_OPTS=${JENKINS_OPTS:---httpPort=8080}
- JAVA_OPTS=${JAVA_OPTS:--Djenkins.install.runSetupWizard=false -Xmx2g}
- CASC_JENKINS_CONFIG=${CASC_JENKINS_CONFIG:-/var/jenkins_home/casc_configs}

View File

@@ -1,27 +1,23 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
# Zookeeper for Kafka coordination
zookeeper:
<<: *default
image: confluentinc/cp-zookeeper:${KAFKA_VERSION:-7.8.0}
container_name: zookeeper
ports:
- "${ZOOKEEPER_CLIENT_PORT_OVERRIDE:-2181}:2181"
volumes:
- *localtime
- *timezone
- zookeeper_data:/var/lib/zookeeper/data
- zookeeper_log:/var/lib/zookeeper/log
environment:
- TZ=${TZ:-UTC}
- ZOOKEEPER_CLIENT_PORT=2181
- ZOOKEEPER_TICK_TIME=2000
- ZOOKEEPER_SYNC_LIMIT=5
@@ -37,22 +33,27 @@ services:
reservations:
cpus: '0.25'
memory: 256M
healthcheck:
test: ["CMD-SHELL", "echo ruok | nc localhost 2181 | grep imok"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
# Kafka broker
kafka:
<<: *default
image: confluentinc/cp-kafka:${KAFKA_VERSION:-7.8.0}
container_name: kafka
depends_on:
zookeeper:
condition: service_healthy
ports:
- "${KAFKA_BROKER_PORT_OVERRIDE:-9092}:9092"
- "${KAFKA_JMX_PORT_OVERRIDE:-9999}:9999"
volumes:
- *localtime
- *timezone
- kafka_data:/var/lib/kafka/data
environment:
- TZ=${TZ:-UTC}
- KAFKA_BROKER_ID=1
- KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
- KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092
@@ -91,16 +92,15 @@ services:
kafka-ui:
<<: *default
image: provectuslabs/kafka-ui:${KAFKA_UI_VERSION:-v0.7.2}
container_name: kafka-ui
depends_on:
kafka:
condition: service_healthy
zookeeper:
condition: service_healthy
ports:
- "${KAFKA_UI_PORT_OVERRIDE:-8080}:8080"
volumes:
- *localtime
- *timezone
environment:
- TZ=${TZ:-UTC}
- KAFKA_CLUSTERS_0_NAME=local
- KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
- KAFKA_CLUSTERS_0_ZOOKEEPER=zookeeper:2181

View File

@@ -1,28 +1,24 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
kibana:
<<: *default
image: docker.elastic.co/kibana/kibana:${KIBANA_VERSION:-8.16.1}
container_name: kibana
ports:
- "${KIBANA_PORT_OVERRIDE:-5601}:5601"
volumes:
- *localtime
- *timezone
- kibana_data:/usr/share/kibana/data
# Custom configuration
# - ./kibana.yml:/usr/share/kibana/config/kibana.yml:ro
environment:
- TZ=${TZ:-UTC}
- SERVERNAME=kibana
- ELASTICSEARCH_HOSTS=${ELASTICSEARCH_HOSTS:-http://elasticsearch:9200}
- ELASTICSEARCH_USERNAME=${ELASTICSEARCH_USERNAME:-}

View File

@@ -1,23 +1,21 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
kodbox:
<<: *default
image: kodcloud/kodbox:${KODBOX_VERSION:-1.62}
container_name: kodbox
ports:
- "${KODBOX_PORT:-80}:80"
volumes:
- kodbox_data:/var/www/html
environment:
- TZ=${TZ:-UTC}
- MYSQL_HOST=${MYSQL_HOST:-kodbox-db}
- MYSQL_PORT=${MYSQL_PORT:-3306}
- MYSQL_DATABASE=${MYSQL_DATABASE:-kodbox}
@@ -39,12 +37,18 @@ services:
reservations:
cpus: '0.5'
memory: 256M
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:80/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
kodbox-db:
<<: *default
image: mysql:${MYSQL_VERSION:-9.4.0}
container_name: kodbox-db
environment:
- TZ=${TZ:-UTC}
- MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD:-root123}
- MYSQL_DATABASE=${MYSQL_DATABASE:-kodbox}
- MYSQL_USER=${MYSQL_USER:-kodbox}
@@ -73,11 +77,12 @@ services:
kodbox-redis:
<<: *default
image: redis:${REDIS_VERSION:-8.2.1-alpine3.22}
container_name: kodbox-redis
command:
- redis-server
- --requirepass
- ${REDIS_PASSWORD:-}
environment:
- TZ=${TZ:-UTC}
volumes:
- kodbox_redis_data:/data
healthcheck:

View File

@@ -1,24 +1,20 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
# Kong Database
kong-db:
<<: *default
image: postgres:${POSTGRES_VERSION:-16.6-alpine3.21}
container_name: kong-db
volumes:
- *localtime
- *timezone
- kong_db_data:/var/lib/postgresql/data
environment:
- TZ=${TZ:-UTC}
- POSTGRES_USER=kong
- POSTGRES_DB=kong
- POSTGRES_PASSWORD=${KONG_DB_PASSWORD:-kongpass}
@@ -35,15 +31,17 @@ services:
interval: 30s
timeout: 5s
retries: 5
start_period: 30s
# Kong Database Migration
kong-migrations:
<<: *default
image: kong:${KONG_VERSION:-3.8.0-alpine}
container_name: kong-migrations
depends_on:
kong-db:
condition: service_healthy
environment:
- TZ=${TZ:-UTC}
- KONG_DATABASE=postgres
- KONG_PG_HOST=kong-db
- KONG_PG_USER=kong
@@ -56,22 +54,21 @@ services:
kong:
<<: *default
image: kong:${KONG_VERSION:-3.8.0-alpine}
container_name: kong
depends_on:
kong-db:
condition: service_healthy
kong-migrations:
condition: service_completed_successfully
ports:
- "${KONG_PROXY_PORT_OVERRIDE:-8000}:8000"
- "${KONG_PROXY_SSL_PORT_OVERRIDE:-8443}:8443"
- "${KONG_ADMIN_API_PORT_OVERRIDE:-8001}:8001"
- "${KONG_ADMIN_SSL_PORT_OVERRIDE:-8444}:8444"
volumes:
- *localtime
- *timezone
# Custom configuration
# - ./kong.conf:/etc/kong/kong.conf:ro
environment:
- TZ=${TZ:-UTC}
- KONG_DATABASE=postgres
- KONG_PG_HOST=kong-db
- KONG_PG_USER=kong
@@ -102,16 +99,15 @@ services:
kong-gui:
<<: *default
image: pantsel/konga:${KONGA_VERSION:-latest}
container_name: kong-gui
depends_on:
kong:
condition: service_healthy
ports:
- "${KONG_GUI_PORT_OVERRIDE:-1337}:1337"
volumes:
- *localtime
- *timezone
- konga_data:/app/kongadata
environment:
- TZ=${TZ:-UTC}
- NODE_ENV=production
- KONGA_HOOK_TIMEOUT=120000
deploy:

View File

@@ -1,21 +1,19 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
langfuse-server:
<<: *default
image: langfuse/langfuse:${LANGFUSE_VERSION:-3.115.0}
container_name: langfuse-server
ports:
- "${LANGFUSE_PORT:-3000}:3000"
environment:
- TZ=${TZ:-UTC}
- DATABASE_URL=postgresql://${POSTGRES_USER:-postgres}:${POSTGRES_PASSWORD:-postgres}@langfuse-db:5432/${POSTGRES_DB:-langfuse}
- NEXTAUTH_URL=${NEXTAUTH_URL:-http://localhost:3000}
- NEXTAUTH_SECRET=${NEXTAUTH_SECRET}
@@ -33,12 +31,18 @@ services:
reservations:
cpus: '0.5'
memory: 512M
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/api/public/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
langfuse-db:
<<: *default
image: postgres:${POSTGRES_VERSION:-17.2-alpine3.21}
container_name: langfuse-db
environment:
- TZ=${TZ:-UTC}
- POSTGRES_USER=${POSTGRES_USER:-postgres}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-postgres}
- POSTGRES_DB=${POSTGRES_DB:-langfuse}

View File

@@ -1,26 +1,21 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
logstash:
<<: *default
image: docker.elastic.co/logstash/logstash:${LOGSTASH_VERSION:-8.16.1}
container_name: logstash
ports:
- "${LOGSTASH_BEATS_PORT_OVERRIDE:-5044}:5044"
- "${LOGSTASH_TCP_PORT_OVERRIDE:-5000}:5000/tcp"
- "${LOGSTASH_UDP_PORT_OVERRIDE:-5000}:5000/udp"
- "${LOGSTASH_HTTP_PORT_OVERRIDE:-9600}:9600"
volumes:
- *localtime
- *timezone
- logstash_data:/usr/share/logstash/data
- logstash_logs:/usr/share/logstash/logs
- ./pipeline:/usr/share/logstash/pipeline:ro
@@ -29,6 +24,7 @@ services:
# - ./logstash.yml:/usr/share/logstash/config/logstash.yml:ro
# - ./pipelines.yml:/usr/share/logstash/config/pipelines.yml:ro
environment:
- TZ=${TZ:-UTC}
- XPACK_MONITORING_ENABLED=${LOGSTASH_MONITORING_ENABLED:-false}
- XPACK_MONITORING_ELASTICSEARCH_HOSTS=${ELASTICSEARCH_HOSTS:-http://elasticsearch:9200}
- ELASTICSEARCH_HOSTS=${ELASTICSEARCH_HOSTS:-http://elasticsearch:9200}

View File

@@ -1,17 +1,16 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
x-mariadb-galera: &mariadb-galera
<<: *default
image: mariadb:${MARIADB_VERSION:-11.7.2}
environment: &galera-env
TZ: ${TZ:-UTC}
MARIADB_ROOT_PASSWORD: ${MARIADB_ROOT_PASSWORD:-galera}
MARIADB_GALERA_CLUSTER_NAME: ${MARIADB_GALERA_CLUSTER_NAME:-galera_cluster}
MARIADB_GALERA_CLUSTER_ADDRESS: gcomm://mariadb-galera-1,mariadb-galera-2,mariadb-galera-3
@@ -34,11 +33,16 @@ x-mariadb-galera: &mariadb-galera
reservations:
cpus: '1.0'
memory: 1G
healthcheck:
test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
services:
mariadb-galera-1:
<<: *mariadb-galera
container_name: mariadb-galera-1
hostname: mariadb-galera-1
ports:
- "${MARIADB_PORT_1_OVERRIDE:-3306}:3306"
@@ -57,13 +61,10 @@ services:
- --default_storage_engine=InnoDB
- --innodb_autoinc_lock_mode=2
volumes:
- *localtime
- *timezone
- mariadb_galera_1_data:/var/lib/mysql
mariadb-galera-2:
<<: *mariadb-galera
container_name: mariadb-galera-2
hostname: mariadb-galera-2
ports:
- "${MARIADB_PORT_2_OVERRIDE:-3307}:3306"
@@ -81,15 +82,13 @@ services:
- --default_storage_engine=InnoDB
- --innodb_autoinc_lock_mode=2
volumes:
- *localtime
- *timezone
- mariadb_galera_2_data:/var/lib/mysql
depends_on:
mariadb-galera-1:
condition: service_healthy
mariadb-galera-3:
<<: *mariadb-galera
container_name: mariadb-galera-3
hostname: mariadb-galera-3
ports:
- "${MARIADB_PORT_3_OVERRIDE:-3308}:3306"
@@ -107,11 +106,10 @@ services:
- --default_storage_engine=InnoDB
- --innodb_autoinc_lock_mode=2
volumes:
- *localtime
- *timezone
- mariadb_galera_3_data:/var/lib/mysql
depends_on:
mariadb-galera-1:
condition: service_healthy
volumes:
mariadb_galera_1_data:

View File

@@ -1,12 +1,10 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
milvus-standalone-embed:
@@ -15,14 +13,13 @@ services:
security_opt:
- seccomp:unconfined
environment:
TZ: ${TZ:-UTC}
ETCD_USE_EMBED: "true"
ETCD_DATA_DIR: /var/lib/milvus/etcd
ETCD_CONFIG_PATH: /milvus/configs/embed_etcd.yaml
COMMON_STORAGETYPE: local
DEPLOY_MODE: STANDALONE
volumes:
- *localtime
- *timezone
- milvus_data:/var/lib/milvus
- ./embed_etcd.yaml:/milvus/configs/embed_etcd.yaml
- ./user.yaml:/milvus/configs/user.yaml
@@ -52,7 +49,8 @@ services:
profiles:
- attu
environment:
TZ: ${TZ:-UTC}
MILVUS_URL: ${MILVUS_URL:-milvus-standalone-embed:19530}
ports:
- "${ATTU_OVERRIDE_PORT:-8000}:3000"
deploy:

View File

@@ -1,25 +1,22 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
etcd:
<<: *default
image: quay.io/coreos/etcd:${ETCD_VERSION:-v3.5.18}
environment:
- TZ=${TZ:-UTC}
- ETCD_AUTO_COMPACTION_MODE=revision
- ETCD_AUTO_COMPACTION_RETENTION=1000
- ETCD_QUOTA_BACKEND_BYTES=4294967296
- ETCD_SNAPSHOT_COUNT=50000
volumes:
- *localtime
- *timezone
- etcd_data:/etcd
command: etcd -advertise-client-urls=http://etcd:2379 -listen-client-urls http://0.0.0.0:2379 --data-dir /etcd
healthcheck:
@@ -27,6 +24,7 @@ services:
interval: 30s
timeout: 20s
retries: 3
start_period: 30s
deploy:
resources:
limits:
@@ -40,14 +38,13 @@ services:
<<: *default
image: minio/minio:${MINIO_VERSION:-RELEASE.2024-12-18T13-15-44Z}
environment:
TZ: ${TZ:-UTC}
MINIO_ROOT_USER: ${MINIO_ROOT_USER:-minioadmin}
MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD:-minioadmin}
ports:
- "${MINIO_PORT_OVERRIDE_API:-9000}:9000"
- "${MINIO_PORT_OVERRIDE_WEBUI:-9001}:9001"
volumes:
- *localtime
- *timezone
- minio_data:/minio_data
command: minio server /minio_data --console-address ":9001"
healthcheck:
@@ -55,6 +52,7 @@ services:
interval: 30s
timeout: 20s
retries: 3
start_period: 30s
deploy:
resources:
limits:
@@ -66,11 +64,12 @@ services:
milvus-standalone:
<<: *default
image: milvusdb/milvus:${MILVUS_VERSION:-v2.6.3}
command: ["milvus", "run", "standalone"]
security_opt:
- seccomp:unconfined
environment:
TZ: ${TZ:-UTC}
ETCD_ENDPOINTS: etcd:2379
MINIO_ADDRESS: minio:9000
MQ_TYPE: woodpecker
@@ -86,8 +85,10 @@ services:
- "${MILVUS_PORT_OVERRIDE_HTTP:-19530}:19530"
- "${MILVUS_PORT_OVERRIDE_WEBUI:-9091}:9091"
depends_on:
etcd:
condition: service_healthy
minio:
condition: service_healthy
deploy:
resources:
limits:
@@ -99,13 +100,17 @@ services:
attu:
<<: *default
image: zilliz/attu:${ATTU_VERSION:-v2.6.1}
profiles:
- attu
environment:
- TZ=${TZ:-UTC}
- MILVUS_URL=${MILVUS_URL:-milvus-standalone:19530}
ports:
- "${ATTU_PORT_OVERRIDE:-8000}:3000"
depends_on:
milvus-standalone:
condition: service_healthy
deploy:
resources:
limits:
@@ -114,6 +119,12 @@ services:
reservations:
cpus: '0.1'
memory: 128M
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
volumes:
etcd_data:

View File

@@ -1,19 +1,17 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
minecraft-bedrock:
<<: *default
image: itzg/minecraft-bedrock-server:${BEDROCK_VERSION:-latest}
container_name: minecraft-bedrock-server
environment:
TZ: ${TZ:-UTC}
EULA: "${EULA:-TRUE}"
VERSION: "${MINECRAFT_VERSION:-LATEST}"
GAMEMODE: "${GAMEMODE:-survival}"
@@ -33,8 +31,6 @@ services:
- "${SERVER_PORT_OVERRIDE:-19132}:19132/udp"
- "${SERVER_PORT_V6_OVERRIDE:-19133}:19133/udp"
volumes:
- *localtime
- *timezone
- bedrock_data:/data
stdin_open: true
tty: true
@@ -46,6 +42,12 @@ services:
reservations:
cpus: '1.0'
memory: 1G
healthcheck:
test: ["CMD-SHELL", "[ -f /data/valid_known_packs.json ]"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
volumes:
bedrock_data:

View File

@@ -1,17 +1,16 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
x-mineru-sglang: &mineru-sglang
<<: *default
image: ${MINERU_DOCKER_IMAGE:-alexsuntop/mineru-sglang:2.2.2}
environment:
TZ: ${TZ:-UTC}
MINERU_MODEL_SOURCE: local
ulimits:
memlock: -1
@@ -49,6 +48,10 @@ services:
healthcheck:
test: ["CMD-SHELL", "curl -f http://localhost:30000/health || exit 1"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
mineru-api:
<<: *mineru-sglang
@@ -65,6 +68,12 @@ services:
# If running on a single GPU and encountering VRAM shortage, reduce the KV cache size by this parameter,
# if VRAM issues persist, try lowering it further to `0.4` or below.
# - --gpu-memory-utilization 0.5
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8000/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
mineru-gradio:
<<: *mineru-sglang
@@ -88,3 +97,9 @@ services:
# If running on a single GPU and encountering VRAM shortage, reduce the KV cache size by this parameter,
# if VRAM issues persist, try lowering it further to `0.4` or below.
# - --gpu-memory-utilization 0.5
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:7860/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s

View File

@@ -1,12 +1,10 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
x-mineru-vllm: &mineru-vllm
<<: *default
@@ -15,6 +13,7 @@ x-mineru-vllm: &mineru-vllm
context: .
dockerfile: Dockerfile
environment:
TZ: ${TZ:-UTC}
MINERU_MODEL_SOURCE: local
ulimits:
memlock: -1
@@ -36,7 +35,6 @@ x-mineru-vllm: &mineru-vllm
services:
mineru-vllm-server:
<<: *mineru-vllm
container_name: mineru-vllm-server
profiles: ["vllm-server"]
ports:
- ${MINERU_PORT_OVERRIDE_VLLM:-30000}:30000
@@ -53,11 +51,14 @@ services:
healthcheck:
test: ["CMD-SHELL", "curl -f http://localhost:30000/health || exit 1"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
mineru-api:
<<: *mineru-vllm
container_name: mineru-api
profiles: ["api"]
ports:
- ${MINERU_PORT_OVERRIDE_API:-8000}:8000
@@ -71,10 +72,15 @@ services:
# If running on a single GPU and encountering VRAM shortage, reduce the KV cache size by this parameter,
# if VRAM issues persist, try lowering it further to `0.4` or below.
# - --gpu-memory-utilization 0.5
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8000/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
mineru-gradio:
<<: *mineru-vllm
container_name: mineru-gradio
profiles: ["gradio"]
ports:
- ${MINERU_PORT_OVERRIDE_GRADIO:-7860}:7860
@@ -95,3 +101,9 @@ services:
# If running on a single GPU and encountering VRAM shortage, reduce the KV cache size by this parameter,
# if VRAM issues persist, try lowering it further to `0.4` or below.
# - --gpu-memory-utilization 0.5
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:7860/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s

View File

@@ -1,12 +1,10 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
minio:
@@ -16,11 +14,10 @@ services:
- "${MINIO_PORT_OVERRIDE_API:-9000}:9000"
- "${MINIO_PORT_OVERRIDE_WEBUI:-9001}:9001"
environment:
TZ: ${TZ:-UTC}
MINIO_ROOT_USER: ${MINIO_ROOT_USER:-minioadmin}
MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD:-minioadmin}
volumes:
- *localtime
- *timezone
- minio_data:/data
- ./config:/root/.minio/
command: server --console-address ':9001' /data
@@ -30,6 +27,14 @@ services:
timeout: 20s
retries: 5
start_period: 30s
deploy:
resources:
limits:
cpus: '1.0'
memory: 1G
reservations:
cpus: '0.5'
memory: 512M
volumes:

View File

@@ -1,25 +1,21 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
postgres:
<<: *default
image: postgres:${POSTGRES_VERSION:-17.6-alpine}
container_name: mlflow-postgres
environment:
TZ: ${TZ:-UTC}
POSTGRES_USER: ${POSTGRES_USER:-mlflow}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-mlflow}
POSTGRES_DB: ${POSTGRES_DB:-mlflow}
volumes:
- *localtime
- *timezone
- postgres_data:/var/lib/postgresql/data
deploy:
resources:
@@ -29,21 +25,25 @@ services:
reservations:
cpus: '0.5'
memory: 512M
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-mlflow}"]
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
minio:
<<: *default
image: minio/minio:${MINIO_VERSION:-RELEASE.2025-01-07T16-13-09Z}
container_name: mlflow-minio
command: server /data --console-address ":9001"
environment:
TZ: ${TZ:-UTC}
MINIO_ROOT_USER: ${MINIO_ROOT_USER:-minio}
MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD:-minio123}
ports:
- "${MINIO_PORT_OVERRIDE:-9000}:9000"
- "${MINIO_CONSOLE_PORT_OVERRIDE:-9001}:9001"
volumes:
- *localtime
- *timezone
- minio_data:/data
deploy:
resources:
@@ -53,13 +53,19 @@ services:
reservations:
cpus: '0.5'
memory: 512M
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
minio-init:
<<: *default
image: minio/mc:${MINIO_MC_VERSION:-RELEASE.2025-01-07T17-25-52Z}
container_name: mlflow-minio-init
depends_on:
- minio
minio:
condition: service_healthy
entrypoint: >
/bin/sh -c "
sleep 5;
@@ -72,14 +78,17 @@ services:
mlflow:
<<: *default
image: ghcr.io/mlflow/mlflow:${MLFLOW_VERSION:-v2.20.2}
container_name: mlflow
depends_on:
- postgres
- minio
- minio-init
postgres:
condition: service_healthy
minio:
condition: service_healthy
minio-init:
condition: service_completed_successfully
ports:
- "${MLFLOW_PORT_OVERRIDE:-5000}:5000"
environment:
TZ: ${TZ:-UTC}
MLFLOW_BACKEND_STORE_URI: postgresql://${POSTGRES_USER:-mlflow}:${POSTGRES_PASSWORD:-mlflow}@postgres:5432/${POSTGRES_DB:-mlflow}
MLFLOW_ARTIFACT_ROOT: s3://${MINIO_BUCKET:-mlflow}/
MLFLOW_S3_ENDPOINT_URL: http://minio:9000
@@ -104,6 +113,12 @@ services:
reservations:
cpus: '1.0'
memory: 1G
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:5000/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes:
postgres_data:

View File

@@ -48,6 +48,7 @@ This service sets up a MongoDB replica set with three members.
## Configuration
- `TZ`: The timezone for the container, default is `UTC`.
- `MONGO_VERSION`: The version of the MongoDB image, default is `8.0.13`.
- `MONGO_INITDB_ROOT_USERNAME`: The root username for the database, default is `root`.
- `MONGO_INITDB_ROOT_PASSWORD`: The root password for the database, default is `password`.
@@ -60,3 +61,7 @@ This service sets up a MongoDB replica set with three members.
## Volumes
- `secrets/rs0.key`: The key file for authenticating members of the replica set.
## Security
The replica set key file is mounted read-only and copied to `/tmp` inside the container with proper permissions (400). This approach ensures cross-platform compatibility (Windows/Linux/macOS) while maintaining security requirements. The key file is never modified on the host system.
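For reference, a keyfile like `secrets/rs0.key` can be generated on the host before first start (a standard MongoDB keyfile recipe; the `secrets/` path is this project's convention):

```shell
# Generate a MongoDB replica-set keyfile (base64, up to 1024 characters)
# and restrict it before the containers mount it read-only.
mkdir -p secrets
openssl rand -base64 756 > secrets/rs0.key
chmod 400 secrets/rs0.key
```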

View File

@@ -48,6 +48,7 @@
## Configuration
- `TZ`: The timezone for the container, default is `UTC`.
- `MONGO_VERSION`: The version of the MongoDB image, default is `8.0.13`.
- `MONGO_INITDB_ROOT_USERNAME`: The root username for the database, default is `root`.
- `MONGO_INITDB_ROOT_PASSWORD`: The root password for the database, default is `password`.
@@ -60,3 +61,7 @@
## Volumes
- `secrets/rs0.key`: The key file for authenticating members of the replica set.
## Security
The replica set key file is mounted read-only and copied to `/tmp` inside the container with the proper permissions (400). This approach ensures cross-platform compatibility (Windows/Linux/macOS) while meeting the security requirements. The key file is never modified on the host system.

View File

@@ -1,8 +1,5 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
@@ -12,26 +9,21 @@ x-mongo: &mongo
<<: *default
image: mongo:${MONGO_VERSION:-8.0.13}
environment:
TZ: ${TZ:-UTC}
MONGO_INITDB_ROOT_USERNAME: ${MONGO_INITDB_ROOT_USERNAME:-root}
MONGO_INITDB_ROOT_PASSWORD: ${MONGO_INITDB_ROOT_PASSWORD:-password}
MONGO_INITDB_DATABASE: ${MONGO_INITDB_DATABASE:-admin}
command:
- mongod
- --replSet
- ${MONGO_REPLICA_SET_NAME:-rs0}
- --keyFile
- /secrets/rs0.key
volumes:
- *localtime
- *timezone
- ./secrets/rs0.key:/secrets/rs0.key
- ./secrets/rs0.key:/data/rs0.key:ro
entrypoint:
- bash
- -c
- |
chmod 400 /secrets/rs0.key
chown 999:999 /secrets/rs0.key
exec docker-entrypoint.sh $$@
cp /data/rs0.key /tmp/rs0.key
chmod 400 /tmp/rs0.key
chown 999:999 /tmp/rs0.key
export MONGO_INITDB_ROOT_USERNAME MONGO_INITDB_ROOT_PASSWORD MONGO_INITDB_DATABASE
exec docker-entrypoint.sh mongod --replSet ${MONGO_REPLICA_SET_NAME:-rs0} --keyFile /tmp/rs0.key
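The copy-then-chmod step above can be exercised in isolation (a sketch with stand-in paths; the real entrypoint also chowns the key to the mongodb user, which requires root inside the container):

```shell
# Simulate the entrypoint's permission fix: copy the read-only key to a
# writable location and clamp it to mode 400, as mongod requires.
SRC=$(mktemp)                        # stand-in for /data/rs0.key (ro mount)
printf 'dummy-key-material\n' > "$SRC"
cp "$SRC" /tmp/rs0.demo.key
chmod 400 /tmp/rs0.demo.key
MODE=$(stat -c %a /tmp/rs0.demo.key 2>/dev/null || stat -f %Lp /tmp/rs0.demo.key)
echo "keyfile mode: $MODE"
```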
deploy:
resources:
limits:

View File

@@ -1,26 +1,23 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
mongo:
<<: *default
image: mongo:${MONGO_VERSION:-8.0.13}
environment:
TZ: ${TZ:-UTC}
MONGO_INITDB_ROOT_USERNAME: ${MONGO_INITDB_ROOT_USERNAME:-root}
MONGO_INITDB_ROOT_PASSWORD: ${MONGO_INITDB_ROOT_PASSWORD:-password}
MONGO_INITDB_DATABASE: ${MONGO_INITDB_DATABASE:-admin}
ports:
- "${MONGO_PORT_OVERRIDE:-27017}:27017"
volumes:
- *localtime
- *timezone
- mongo_data:/data/db
deploy:
resources:
@@ -30,6 +27,12 @@ services:
reservations:
cpus: '0.25'
memory: 256M
healthcheck:
test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes:
mongo_data:

View File

@@ -1,12 +1,10 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
mysql:
@@ -15,16 +13,28 @@ services:
ports:
- "${MYSQL_PORT_OVERRIDE:-3306}:3306"
volumes:
- *localtime
- *timezone
- mysql_data:/var/lib/mysql
# Initialize database with scripts in ./init.sql
# - ./init.sql:/docker-entrypoint-initdb.d/init.sql:ro
environment:
TZ: ${TZ:-UTC}
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD:-password}
MYSQL_ROOT_HOST: ${MYSQL_ROOT_HOST:-%}
deploy:
resources:
limits:
cpus: '2.0'
memory: 2G
reservations:
cpus: '0.5'
memory: 512M
healthcheck:
test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-u", "root", "-p$$MYSQL_ROOT_PASSWORD"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
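One detail worth noting in the healthcheck above: Compose interpolates `${VAR}` itself, while `$$` escapes to a literal `$`, so `MYSQL_ROOT_PASSWORD` is expanded inside the container instead of by Compose. A quick illustration of the escaping:

```shell
# Compose turns `$$` into a literal `$` before the string reaches the
# container, so `-p$$MYSQL_ROOT_PASSWORD` arrives as `-p$MYSQL_ROOT_PASSWORD`.
TEMPLATE='-p$$MYSQL_ROOT_PASSWORD'
ESCAPED=$(printf '%s' "$TEMPLATE" | sed 's/\$\$/$/g')
echo "$ESCAPED"
```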
volumes:
mysql_data:

View File

@@ -1,23 +1,21 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
n8n:
<<: *default
image: n8nio/n8n:${N8N_VERSION:-1.114.0}
container_name: n8n
ports:
- "${N8N_PORT:-5678}:5678"
volumes:
- n8n_data:/home/node/.n8n
environment:
- TZ=${TZ:-UTC}
- N8N_BASIC_AUTH_ACTIVE=${N8N_BASIC_AUTH_ACTIVE:-true}
- N8N_BASIC_AUTH_USER=${N8N_BASIC_AUTH_USER:-}
- N8N_BASIC_AUTH_PASSWORD=${N8N_BASIC_AUTH_PASSWORD:-}
@@ -26,7 +24,6 @@ services:
- N8N_PROTOCOL=${N8N_PROTOCOL:-http}
- WEBHOOK_URL=${WEBHOOK_URL:-http://localhost:5678/}
- GENERIC_TIMEZONE=${GENERIC_TIMEZONE:-UTC}
- TZ=${TZ:-UTC}
# Database configuration (optional, uses SQLite by default)
- DB_TYPE=${DB_TYPE:-sqlite}
- DB_POSTGRESDB_DATABASE=${DB_POSTGRESDB_DATABASE:-n8n}
@@ -50,12 +47,18 @@ services:
reservations:
cpus: '0.5'
memory: 512M
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:5678/healthz"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
n8n-db:
<<: *default
image: postgres:${POSTGRES_VERSION:-17.2-alpine3.21}
container_name: n8n-db
environment:
- TZ=${TZ:-UTC}
- POSTGRES_USER=${DB_POSTGRESDB_USER:-n8n}
- POSTGRES_PASSWORD=${DB_POSTGRESDB_PASSWORD:-n8n123}
- POSTGRES_DB=${DB_POSTGRESDB_DATABASE:-n8n}

View File

@@ -1,27 +1,23 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
nacos:
<<: *default
image: nacos/nacos-server:${NACOS_VERSION:-v3.1.0-slim}
container_name: nacos
ports:
- "${NACOS_HTTP_PORT_OVERRIDE:-8848}:8848"
- "${NACOS_GRPC_PORT_OVERRIDE:-9848}:9848"
- "${NACOS_GRPC_PORT2_OVERRIDE:-9849}:9849"
volumes:
- *localtime
- *timezone
- nacos_logs:/home/nacos/logs
environment:
- TZ=${TZ:-UTC}
- MODE=${NACOS_MODE:-standalone}
- PREFER_HOST_MODE=hostname
- NACOS_AUTH_ENABLE=${NACOS_AUTH_ENABLE:-true}
@@ -40,6 +36,12 @@ services:
reservations:
cpus: '0.5'
memory: 512M
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8848/nacos/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes:
nacos_logs:

View File

@@ -1,19 +1,17 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
metad:
<<: *default
image: vesoft/nebula-metad:${NEBULA_VERSION:-v3.8.0}
container_name: nebula-metad
environment:
- TZ=${TZ:-UTC}
- USER=root
command:
- --meta_server_addrs=metad:9559
@@ -23,8 +21,6 @@ services:
- --data_path=/data/meta
- --log_dir=/logs
volumes:
- *localtime
- *timezone
- nebula_meta_data:/data/meta
- nebula_meta_logs:/logs
ports:
@@ -36,12 +32,21 @@ services:
limits:
cpus: '0.5'
memory: 512M
reservations:
cpus: '0.25'
memory: 256M
healthcheck:
test: ["CMD", "/usr/local/nebula/bin/nebula-metad", "--version"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
storaged:
<<: *default
image: vesoft/nebula-storaged:${NEBULA_VERSION:-v3.8.0}
container_name: nebula-storaged
environment:
- TZ=${TZ:-UTC}
- USER=root
command:
- --meta_server_addrs=metad:9559
@@ -51,10 +56,9 @@ services:
- --data_path=/data/storage
- --log_dir=/logs
depends_on:
- metad
metad:
condition: service_healthy
volumes:
- *localtime
- *timezone
- nebula_storage_data:/data/storage
- nebula_storage_logs:/logs
ports:
@@ -66,12 +70,21 @@ services:
limits:
cpus: '1.0'
memory: 1G
reservations:
cpus: '0.5'
memory: 512M
healthcheck:
test: ["CMD", "/usr/local/nebula/bin/nebula-storaged", "--version"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
graphd:
<<: *default
image: vesoft/nebula-graphd:${NEBULA_VERSION:-v3.8.0}
container_name: nebula-graphd
environment:
- TZ=${TZ:-UTC}
- USER=root
command:
- --meta_server_addrs=metad:9559
@@ -80,11 +93,11 @@ services:
- --ws_ip=graphd
- --log_dir=/logs
depends_on:
- metad
- storaged
metad:
condition: service_healthy
storaged:
condition: service_healthy
volumes:
- *localtime
- *timezone
- nebula_graph_logs:/logs
ports:
- "${NEBULA_GRAPHD_PORT_OVERRIDE:-9669}:9669"
@@ -95,6 +108,15 @@ services:
limits:
cpus: '1.0'
memory: 1G
reservations:
cpus: '0.5'
memory: 512M
healthcheck:
test: ["CMD", "/usr/local/nebula/bin/nebula-graphd", "--version"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes:
nebula_meta_data:

View File

@@ -1,29 +1,25 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
neo4j:
<<: *default
image: neo4j:${NEO4J_VERSION:-5.27.4-community}
container_name: neo4j
ports:
- "${NEO4J_HTTP_PORT_OVERRIDE:-7474}:7474"
- "${NEO4J_BOLT_PORT_OVERRIDE:-7687}:7687"
volumes:
- *localtime
- *timezone
- neo4j_data:/data
- neo4j_logs:/logs
- neo4j_import:/var/lib/neo4j/import
- neo4j_plugins:/plugins
environment:
- TZ=${TZ:-UTC}
- NEO4J_AUTH=${NEO4J_AUTH:-neo4j/password}
- NEO4J_ACCEPT_LICENSE_AGREEMENT=${NEO4J_ACCEPT_LICENSE_AGREEMENT:-yes}
- NEO4J_dbms_memory_pagecache_size=${NEO4J_PAGECACHE_SIZE:-512M}
@@ -37,6 +33,12 @@ services:
reservations:
cpus: '0.5'
memory: 1G
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:7474/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
volumes:
neo4j_data:

View File

@@ -1,24 +1,19 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
nginx:
<<: *default
image: nginx:${NGINX_VERSION:-1.29.2-alpine3.22}
container_name: nginx
ports:
- "${NGINX_HTTP_PORT_OVERRIDE:-80}:80"
- "${NGINX_HTTPS_PORT_OVERRIDE:-443}:443"
volumes:
- *localtime
- *timezone
- nginx_logs:/var/log/nginx
- ./html:/usr/share/nginx/html:ro
@@ -27,6 +22,7 @@ services:
# - ./conf.d:/etc/nginx/conf.d:ro
# - ./ssl:/etc/nginx/ssl:ro
environment:
- TZ=${TZ:-UTC}
- NGINX_HOST=${NGINX_HOST:-localhost}
- NGINX_PORT=${NGINX_PORT:-80}
deploy:
@@ -37,6 +33,12 @@ services:
reservations:
cpus: '0.25'
memory: 64M
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:80/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
volumes:
nginx_logs:

View File

@@ -1,18 +1,15 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
node-exporter:
<<: *default
image: prom/node-exporter:${NODE_EXPORTER_VERSION:-v1.8.2}
container_name: node-exporter
ports:
- "${NODE_EXPORTER_PORT_OVERRIDE:-9100}:9100"
command:
@@ -20,6 +17,8 @@ services:
- '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys'
- '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
environment:
- TZ=${TZ:-UTC}
volumes:
- '/:/host:ro,rslave'
deploy:
@@ -30,6 +29,12 @@ services:
reservations:
cpus: '0.1'
memory: 64M
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:9100/metrics"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
# Run with host network for accurate metrics
# network_mode: host

View File

@@ -1,28 +1,25 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
odoo:
<<: *default
image: odoo:${ODOO_VERSION:-19.0}
container_name: odoo
depends_on:
- odoo-db
odoo-db:
condition: service_healthy
ports:
- "${ODOO_PORT_OVERRIDE:-8069}:8069"
volumes:
- *localtime
- *timezone
- odoo_web_data:/var/lib/odoo
- odoo_addons:/mnt/extra-addons
environment:
- TZ=${TZ:-UTC}
- HOST=odoo-db
- USER=${POSTGRES_USER:-odoo}
- PASSWORD=${POSTGRES_PASSWORD:-odoopass}
@@ -36,19 +33,23 @@ services:
reservations:
cpus: '0.5'
memory: 1G
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8069/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
odoo-db:
<<: *default
image: postgres:${POSTGRES_VERSION:-17-alpine}
container_name: odoo-db
environment:
- TZ=${TZ:-UTC}
- POSTGRES_USER=${POSTGRES_USER:-odoo}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-odoopass}
- POSTGRES_DB=${POSTGRES_DB:-postgres}
- PGDATA=/var/lib/postgresql/data/pgdata
volumes:
- *localtime
- *timezone
- odoo_db_data:/var/lib/postgresql/data
deploy:
resources:
@@ -58,6 +59,12 @@ services:
reservations:
cpus: '0.25'
memory: 512M
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-odoo}"]
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
volumes:
odoo_web_data:

View File

@@ -1,12 +1,10 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
ollama:
@@ -15,9 +13,9 @@ services:
ports:
- "${OLLAMA_PORT_OVERRIDE:-11434}:11434"
volumes:
- *localtime
- *timezone
- ollama_models:/root/.ollama
environment:
- TZ=${TZ:-UTC}
ipc: host
deploy:
resources:
@@ -31,6 +29,12 @@ services:
- driver: nvidia
device_ids: [ '0' ]
capabilities: [ gpu ]
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:11434/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes:
ollama_models:

View File

@@ -1,12 +1,10 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
open_webui:
@@ -15,9 +13,9 @@ services:
ports:
- "${OPEN_WEBUI_PORT_OVERRIDE:-8080}:8080"
volumes:
- *localtime
- *timezone
- open_webui_data:/app/backend/data
environment:
- TZ=${TZ:-UTC}
env_file:
- .env
deploy:
@@ -28,6 +26,12 @@ services:
reservations:
cpus: '0.1'
memory: 128M
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes:
open_webui_data:

View File

@@ -1,12 +1,10 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
# Note: OpenCoze is a complex platform that requires multiple services.
@@ -15,7 +13,8 @@ services:
opencoze-info:
image: alpine:latest
container_name: opencoze-info
environment:
- TZ=${TZ:-UTC}
command: >
sh -c "echo 'OpenCoze requires a complex multi-service setup.' &&
echo 'Please visit https://github.com/coze-dev/coze-studio for full deployment instructions.' &&

View File

@@ -1,29 +1,24 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
openlist:
<<: *default
image: openlistteam/openlist:${OPENLIST_VERSION:-latest}
container_name: openlist
ports:
- "${OPENLIST_PORT_OVERRIDE:-5244}:5244"
volumes:
- *localtime
- *timezone
- openlist_data:/opt/openlist/data
environment:
- TZ=${TZ:-UTC}
- PUID=${PUID:-0}
- PGID=${PGID:-0}
- UMASK=${UMASK:-022}
- TZ=${TZ:-Asia/Shanghai}
deploy:
resources:
limits:
@@ -32,6 +27,12 @@ services:
reservations:
cpus: '0.25'
memory: 256M
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:5244/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes:
openlist_data:

View File

@@ -1,19 +1,17 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
opensearch:
<<: *default
image: opensearchproject/opensearch:${OPENSEARCH_VERSION:-2.19.0}
container_name: opensearch
environment:
TZ: ${TZ:-UTC}
cluster.name: ${CLUSTER_NAME:-opensearch-cluster}
node.name: opensearch
discovery.type: single-node
@@ -32,8 +30,6 @@ services:
- "${OPENSEARCH_PORT_OVERRIDE:-9200}:9200"
- "${OPENSEARCH_PERF_ANALYZER_PORT_OVERRIDE:-9600}:9600"
volumes:
- *localtime
- *timezone
- opensearch_data:/usr/share/opensearch/data
deploy:
resources:
@@ -43,18 +39,25 @@ services:
reservations:
cpus: '1.0'
memory: 1G
healthcheck:
test: ["CMD-SHELL", "curl -f http://localhost:9200/_cluster/health || exit 1"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
opensearch-dashboards:
<<: *default
image: opensearchproject/opensearch-dashboards:${OPENSEARCH_DASHBOARDS_VERSION:-2.19.0}
container_name: opensearch-dashboards
ports:
- "${OPENSEARCH_DASHBOARDS_PORT_OVERRIDE:-5601}:5601"
environment:
TZ: ${TZ:-UTC}
OPENSEARCH_HOSTS: '["https://opensearch:9200"]'
DISABLE_SECURITY_DASHBOARDS_PLUGIN: ${DISABLE_SECURITY_PLUGIN:-false}
depends_on:
- opensearch
opensearch:
condition: service_healthy
deploy:
resources:
limits:
@@ -63,6 +66,12 @@ services:
reservations:
cpus: '0.5'
memory: 512M
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:5601/api/status"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes:
opensearch_data:

View File

@@ -1,18 +1,17 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
pocketbase:
<<: *default
image: ghcr.io/muchobien/pocketbase:${PB_VERSION:-0.30.0}
environment:
TZ: ${TZ:-UTC}
# Optional ENCRYPTION (Ensure this is a 32-character long encryption key)
# $ openssl rand -hex 16
# https://pocketbase.io/docs/going-to-production/#enable-settings-encryption
@@ -22,8 +21,6 @@ services:
ports:
- "${PB_PORT:-8090}:8090"
volumes:
- *localtime
- *timezone
- pb_data:/pb_data
# optional public and hooks folders
@@ -34,6 +31,7 @@ services:
interval: 5s
timeout: 5s
retries: 5
start_period: 10s
deploy:
resources:
limits:

View File

@@ -1,24 +1,21 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
postgres:
<<: *default
image: postgres:${POSTGRES_VERSION:-17.6}
environment:
TZ: ${TZ:-UTC}
POSTGRES_USER: ${POSTGRES_USER:-postgres}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-postgres}
POSTGRES_DB: ${POSTGRES_DB:-postgres}
volumes:
- *localtime
- *timezone
- postgres_data:/var/lib/postgresql/data
# Initialize the database with a custom SQL script
@@ -28,11 +25,17 @@ services:
deploy:
resources:
limits:
cpus: '1.0'
memory: 1G
cpus: '2.0'
memory: 2G
reservations:
cpus: '0.5'
memory: 512M
healthcheck:
test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes:
postgres_data:

View File

@@ -1,23 +1,18 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
prometheus:
<<: *default
image: prom/prometheus:${PROMETHEUS_VERSION:-v3.5.0}
container_name: prometheus
ports:
- "${PROMETHEUS_PORT_OVERRIDE:-9090}:9090"
volumes:
- *localtime
- *timezone
- prometheus_data:/prometheus
# Optional: Mount custom configuration
@@ -34,6 +29,7 @@ services:
- '--web.enable-admin-api'
- '--web.external-url=${PROMETHEUS_EXTERNAL_URL:-http://localhost:9090}'
environment:
- TZ=${TZ:-UTC}
- PROMETHEUS_RETENTION_TIME=${PROMETHEUS_RETENTION_TIME:-15d}
- PROMETHEUS_RETENTION_SIZE=${PROMETHEUS_RETENTION_SIZE:-}
user: "65534:65534" # nobody user
@@ -45,6 +41,12 @@ services:
reservations:
cpus: '0.25'
memory: 512M
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:9090/-/healthy"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes:
prometheus_data:

View File

@@ -1,22 +1,20 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
pytorch:
<<: *default
image: pytorch/pytorch:${PYTORCH_VERSION:-2.6.0-cuda12.6-cudnn9-runtime}
container_name: pytorch
ports:
- "${JUPYTER_PORT_OVERRIDE:-8888}:8888"
- "${TENSORBOARD_PORT_OVERRIDE:-6006}:6006"
environment:
TZ: ${TZ:-UTC}
NVIDIA_VISIBLE_DEVICES: ${NVIDIA_VISIBLE_DEVICES:-all}
NVIDIA_DRIVER_CAPABILITIES: ${NVIDIA_DRIVER_CAPABILITIES:-compute,utility}
JUPYTER_ENABLE_LAB: ${JUPYTER_ENABLE_LAB:-yes}
@@ -25,8 +23,6 @@ services:
jupyter lab --ip=0.0.0.0 --port=8888 --no-browser --allow-root
--NotebookApp.token='${JUPYTER_TOKEN:-pytorch}'"
volumes:
- *localtime
- *timezone
- pytorch_notebooks:/workspace
- pytorch_data:/data
working_dir: /workspace
@@ -42,6 +38,12 @@ services:
- driver: nvidia
count: ${GPU_COUNT:-1}
capabilities: [gpu]
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8888/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
volumes:
pytorch_notebooks:

View File

@@ -1,12 +1,10 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
qdrant:
@@ -16,10 +14,9 @@ services:
- "${QDRANT_HTTP_PORT:-6333}:6333"
- "${QDRANT_GRPC_PORT:-6334}:6334"
volumes:
- *localtime
- *timezone
- qdrant_data:/qdrant/storage:z
environment:
- TZ=${TZ:-UTC}
- QDRANT__SERVICE__API_KEY=${QDRANT_API_KEY}
- QDRANT__SERVICE__JWT_RBAC=${QDRANT_JWT_RBAC:-false}
deploy:
@@ -30,6 +27,12 @@ services:
reservations:
cpus: '0.5'
memory: 256M
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:6333/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes:
qdrant_data:

View File

@@ -1,12 +1,10 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
rabbitmq:
@@ -14,12 +12,11 @@ services:
image: rabbitmq:${RABBITMQ_VERSION:-4.1.4-management-alpine}
volumes:
- rabbitmq_data:/var/lib/rabbitmq
- *localtime
- *timezone
ports:
- ${RABBITMQ_PORT:-5672}:5672
- ${RABBITMQ_MANAGEMENT_PORT:-15672}:15672
environment:
TZ: ${TZ:-UTC}
RABBITMQ_DEFAULT_USER: ${RABBITMQ_DEFAULT_USER:-admin}
RABBITMQ_DEFAULT_PASS: ${RABBITMQ_DEFAULT_PASS:-password}
deploy:
@@ -30,6 +27,12 @@ services:
reservations:
cpus: '0.5'
memory: 512M
healthcheck:
test: ["CMD", "rabbitmq-diagnostics", "-q", "ping"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes:
rabbitmq_data:

View File

@@ -1,29 +1,25 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
ray-head:
<<: *default
image: rayproject/ray:${RAY_VERSION:-2.42.1-py312}
container_name: ray-head
command: ray start --head --dashboard-host=0.0.0.0 --port=6379 --block
ports:
- "${RAY_DASHBOARD_PORT_OVERRIDE:-8265}:8265"
- "${RAY_CLIENT_PORT_OVERRIDE:-10001}:10001"
- "${RAY_GCS_PORT_OVERRIDE:-6379}:6379"
environment:
TZ: ${TZ:-UTC}
RAY_NUM_CPUS: ${RAY_HEAD_NUM_CPUS:-4}
RAY_MEMORY: ${RAY_HEAD_MEMORY:-8589934592}
volumes:
- *localtime
- *timezone
- ray_storage:/tmp/ray
deploy:
resources:
@@ -33,20 +29,24 @@ services:
reservations:
cpus: '2.0'
memory: 4G
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8265/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
ray-worker-1:
<<: *default
image: rayproject/ray:${RAY_VERSION:-2.42.1-py312}
container_name: ray-worker-1
command: ray start --address=ray-head:6379 --block
depends_on:
- ray-head
ray-head:
condition: service_healthy
environment:
TZ: ${TZ:-UTC}
RAY_NUM_CPUS: ${RAY_WORKER_NUM_CPUS:-2}
RAY_MEMORY: ${RAY_WORKER_MEMORY:-4294967296}
volumes:
- *localtime
- *timezone
deploy:
resources:
limits:
@@ -59,16 +59,14 @@ services:
ray-worker-2:
<<: *default
image: rayproject/ray:${RAY_VERSION:-2.42.1-py312}
container_name: ray-worker-2
command: ray start --address=ray-head:6379 --block
depends_on:
- ray-head
ray-head:
condition: service_healthy
environment:
TZ: ${TZ:-UTC}
RAY_NUM_CPUS: ${RAY_WORKER_NUM_CPUS:-2}
RAY_MEMORY: ${RAY_WORKER_MEMORY:-4294967296}
volumes:
- *localtime
- *timezone
deploy:
resources:
limits:

View File

@@ -1,17 +1,16 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
redis-cluster-init:
image: redis:${REDIS_VERSION:-8.2.1-alpine}
container_name: redis-cluster-init
environment:
- TZ=${TZ:-UTC}
command: >
sh -c "
echo 'Waiting for all Redis instances to start...' &&
@@ -22,116 +21,170 @@ services:
--cluster-replicas 1 --cluster-yes
"
depends_on:
- redis-1
- redis-2
- redis-3
- redis-4
- redis-5
- redis-6
redis-1:
condition: service_healthy
redis-2:
condition: service_healthy
redis-3:
condition: service_healthy
redis-4:
condition: service_healthy
redis-5:
condition: service_healthy
redis-6:
condition: service_healthy
profiles:
- init
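The `redis-cli --cluster create` call in the init command takes all six nodes on one line; the host list can be built from the service names (a sketch, assuming the compose service names `redis-1` through `redis-6`):

```shell
# Build the node list handed to `redis-cli --cluster create`.
NODES=""
for i in 1 2 3 4 5 6; do
  NODES="$NODES redis-$i:6379"
done
NODES=${NODES# }   # trim the leading space
echo "$NODES"
```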
redis-1:
<<: *default
image: redis:${REDIS_VERSION:-8.2.1-alpine}
container_name: redis-1
environment:
- TZ=${TZ:-UTC}
command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --appendonly yes
ports:
- "7000:6379"
volumes:
- *localtime
- *timezone
- redis_1_data:/data
deploy:
resources:
limits:
cpus: '0.5'
memory: 512M
reservations:
cpus: '0.25'
memory: 256M
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 3s
retries: 3
start_period: 5s
redis-2:
<<: *default
image: redis:${REDIS_VERSION:-8.2.1-alpine}
container_name: redis-2
environment:
- TZ=${TZ:-UTC}
command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --appendonly yes
ports:
- "7001:6379"
volumes:
- *localtime
- *timezone
- redis_2_data:/data
deploy:
resources:
limits:
cpus: '0.5'
memory: 512M
reservations:
cpus: '0.25'
memory: 256M
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 3s
retries: 3
start_period: 5s
redis-3:
<<: *default
image: redis:${REDIS_VERSION:-8.2.1-alpine}
container_name: redis-3
environment:
- TZ=${TZ:-UTC}
command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --appendonly yes
ports:
- "7002:6379"
volumes:
- *localtime
- *timezone
- redis_3_data:/data
deploy:
resources:
limits:
cpus: '0.5'
memory: 512M
reservations:
cpus: '0.25'
memory: 256M
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 3s
retries: 3
start_period: 5s
redis-4:
<<: *default
image: redis:${REDIS_VERSION:-8.2.1-alpine}
container_name: redis-4
environment:
- TZ=${TZ:-UTC}
command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --appendonly yes
ports:
- "7003:6379"
volumes:
- *localtime
- *timezone
- redis_4_data:/data
deploy:
resources:
limits:
cpus: '0.5'
memory: 512M
reservations:
cpus: '0.25'
memory: 256M
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 3s
retries: 3
start_period: 5s
redis-5:
<<: *default
image: redis:${REDIS_VERSION:-8.2.1-alpine}
container_name: redis-5
environment:
- TZ=${TZ:-UTC}
command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --appendonly yes
ports:
- "7004:6379"
volumes:
- *localtime
- *timezone
- redis_5_data:/data
deploy:
resources:
limits:
cpus: '0.5'
memory: 512M
reservations:
cpus: '0.25'
memory: 256M
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 3s
retries: 3
start_period: 5s
redis-6:
<<: *default
image: redis:${REDIS_VERSION:-8.2.1-alpine}
container_name: redis-6
environment:
- TZ=${TZ:-UTC}
command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --appendonly yes
ports:
- "7005:6379"
volumes:
- *localtime
- *timezone
- redis_6_data:/data
deploy:
resources:
limits:
cpus: '0.5'
memory: 512M
reservations:
cpus: '0.25'
memory: 256M
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 3s
retries: 3
start_period: 5s
volumes:
redis_1_data:

View File

@@ -1,11 +1,11 @@
# App version
REDIS_VERSION="8.2.1-alpine3.22"
# Redis version
REDIS_VERSION=8.2.1-alpine3.22
# Set to 1 to skip fixing data-directory permissions on startup
SKIP_FIX_PERMS=1
# Password for the default "default" user
REDIS_PASSWORD="passw0rd"
# Password for Redis authentication (leave empty for no password)
REDIS_PASSWORD=passw0rd
# Port to bind to on the host machine
REDIS_PORT_OVERRIDE=6379
# Timezone (e.g., UTC, Asia/Shanghai, America/New_York)
TZ=UTC

View File

@@ -1,28 +1,24 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
redis:
<<: *default
image: redis:${REDIS_VERSION:-8.2.1-alpine3.22}
container_name: redis
ports:
- "${REDIS_PORT_OVERRIDE:-6379}:6379"
volumes:
- *localtime
- *timezone
- redis_data:/data
# Use a custom redis.conf file
# - ./redis.conf:/etc/redis/redis.conf
environment:
- TZ=${TZ:-UTC}
- REDIS_PASSWORD=${REDIS_PASSWORD:-}
- SKIP_FIX_PERMS=${SKIP_FIX_PERMS:-}
command:
- sh
@@ -33,6 +29,12 @@ services:
else
redis-server
fi
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 3s
retries: 3
start_period: 5s
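The conditional `command:` above starts Redis with `--requirepass` only when `REDIS_PASSWORD` is non-empty. The branch logic can be checked without Redis installed (a sketch that echoes the would-be command line rather than exec'ing it):

```shell
# Echo the server command the compose `command:` block would choose.
start_cmd() {
  if [ -n "${REDIS_PASSWORD:-}" ]; then
    echo "redis-server --requirepass $REDIS_PASSWORD"
  else
    echo "redis-server"
  fi
}

REDIS_PASSWORD=""
CMD_OPEN=$(start_cmd)
REDIS_PASSWORD=passw0rd
CMD_AUTH=$(start_cmd)
echo "open: $CMD_OPEN / auth: $CMD_AUTH"
```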
deploy:
resources:
limits:

View File

@@ -1,27 +1,23 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
stable-diffusion-webui:
<<: *default
image: ghcr.io/absolutelyludicrous/sdnext:${SD_WEBUI_VERSION:-latest}
container_name: stable-diffusion-webui
ports:
- "${SD_WEBUI_PORT_OVERRIDE:-7860}:7860"
environment:
TZ: ${TZ:-UTC}
CLI_ARGS: ${CLI_ARGS:---listen --api --skip-version-check}
NVIDIA_VISIBLE_DEVICES: ${NVIDIA_VISIBLE_DEVICES:-all}
NVIDIA_DRIVER_CAPABILITIES: ${NVIDIA_DRIVER_CAPABILITIES:-compute,utility}
volumes:
- *localtime
- *timezone
- sd_webui_data:/data
- sd_webui_output:/output
deploy:
@@ -36,6 +32,12 @@ services:
- driver: nvidia
count: ${GPU_COUNT:-1}
capabilities: [gpu]
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:7860/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 120s
volumes:
sd_webui_data:

View File

@@ -1,28 +1,24 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
stirling-pdf:
<<: *default
image: stirlingtools/stirling-pdf:${STIRLING_VERSION:-latest}
container_name: stirling-pdf
ports:
- "${PORT_OVERRIDE:-8080}:8080"
volumes:
- *localtime
- *timezone
- stirling_trainingData:/usr/share/tessdata
- stirling_configs:/configs
- stirling_logs:/logs
- stirling_customFiles:/customFiles
environment:
- TZ=${TZ:-UTC}
- DOCKER_ENABLE_SECURITY=${ENABLE_SECURITY:-false}
- SECURITY_ENABLELOGIN=${ENABLE_LOGIN:-false}
- SECURITY_INITIALLOGIN_USERNAME=${INITIAL_USERNAME:-admin}
@@ -47,6 +43,12 @@ services:
reservations:
cpus: '1.0'
memory: 2G
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes:
stirling_trainingData:

View File

@@ -1,19 +1,16 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
x-valkey-node: &valkey-node
<<: *default
image: valkey/valkey:${VALKEY_VERSION:-8.0-alpine}
volumes:
- *localtime
- *timezone
environment:
- TZ=${TZ:-UTC}
command:
- valkey-server
- --cluster-enabled
@@ -36,85 +33,80 @@ x-valkey-node: &valkey-node
reservations:
cpus: '0.25'
memory: 256M
healthcheck:
test: ["CMD", "valkey-cli", "-a", "${VALKEY_PASSWORD:-passw0rd}", "ping"]
interval: 10s
timeout: 3s
retries: 3
start_period: 5s
services:
valkey-node-1:
<<: *valkey-node
container_name: valkey-node-1
ports:
- "${VALKEY_PORT_1:-7001}:6379"
- "${VALKEY_BUS_PORT_1:-17001}:16379"
volumes:
- *localtime
- *timezone
- valkey_data_1:/data
valkey-node-2:
<<: *valkey-node
container_name: valkey-node-2
ports:
- "${VALKEY_PORT_2:-7002}:6379"
- "${VALKEY_BUS_PORT_2:-17002}:16379"
volumes:
- *localtime
- *timezone
- valkey_data_2:/data
valkey-node-3:
<<: *valkey-node
container_name: valkey-node-3
ports:
- "${VALKEY_PORT_3:-7003}:6379"
- "${VALKEY_BUS_PORT_3:-17003}:16379"
volumes:
- *localtime
- *timezone
- valkey_data_3:/data
valkey-node-4:
<<: *valkey-node
container_name: valkey-node-4
ports:
- "${VALKEY_PORT_4:-7004}:6379"
- "${VALKEY_BUS_PORT_4:-17004}:16379"
volumes:
- *localtime
- *timezone
- valkey_data_4:/data
valkey-node-5:
<<: *valkey-node
container_name: valkey-node-5
ports:
- "${VALKEY_PORT_5:-7005}:6379"
- "${VALKEY_BUS_PORT_5:-17005}:16379"
volumes:
- *localtime
- *timezone
- valkey_data_5:/data
valkey-node-6:
<<: *valkey-node
container_name: valkey-node-6
ports:
- "${VALKEY_PORT_6:-7006}:6379"
- "${VALKEY_BUS_PORT_6:-17006}:16379"
volumes:
- *localtime
- *timezone
- valkey_data_6:/data
valkey-cluster-init:
<<: *default
image: valkey/valkey:${VALKEY_VERSION:-8.0-alpine}
container_name: valkey-cluster-init
environment:
- TZ=${TZ:-UTC}
     depends_on:
-      - valkey-node-1
-      - valkey-node-2
-      - valkey-node-3
-      - valkey-node-4
-      - valkey-node-5
-      - valkey-node-6
+      valkey-node-1:
+        condition: service_healthy
+      valkey-node-2:
+        condition: service_healthy
+      valkey-node-3:
+        condition: service_healthy
+      valkey-node-4:
+        condition: service_healthy
+      valkey-node-5:
+        condition: service_healthy
+      valkey-node-6:
+        condition: service_healthy
command:
- sh
- -c

View File

@@ -1,26 +1,23 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
valkey:
<<: *default
image: valkey/valkey:${VALKEY_VERSION:-8.0-alpine}
container_name: valkey
ports:
- "${VALKEY_PORT_OVERRIDE:-6379}:6379"
volumes:
- *localtime
- *timezone
- valkey_data:/data
# Use a custom valkey.conf file
# - ./valkey.conf:/etc/valkey/valkey.conf
environment:
- TZ=${TZ:-UTC}
command:
- sh
- -c
@@ -38,6 +35,12 @@ services:
reservations:
cpus: '0.25'
memory: 256M
healthcheck:
test: ["CMD", "valkey-cli", "ping"]
interval: 10s
timeout: 3s
retries: 3
start_period: 5s
volumes:
valkey_data:

View File

@@ -1,25 +1,21 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
vllm:
<<: *default
image: vllm/vllm-openai:${VLLM_VERSION:-v0.8.0}
container_name: vllm
ports:
- "${VLLM_PORT_OVERRIDE:-8000}:8000"
volumes:
- *localtime
- *timezone
- vllm_models:/root/.cache/huggingface
environment:
- TZ=${TZ:-UTC}
- HF_TOKEN=${HF_TOKEN:-}
command:
- --model
@@ -47,6 +43,12 @@ services:
# capabilities: [gpu]
# runtime: nvidia # Uncomment for GPU support
shm_size: 4g
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8000/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
volumes:
vllm_models:
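The comments in this file indicate the GPU reservation is left disabled by default. For reference, enabling it would mirror the `deploy.resources.reservations.devices` block the stable-diffusion-webui service in this same commit uses — a sketch, assuming an NVIDIA runtime is installed on the host:

```yaml
# Assumed GPU enablement for the vllm service (adapt to your host)
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: ${GPU_COUNT:-1}
          capabilities: [gpu]
```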

View File

@@ -1,28 +1,24 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
zookeeper:
<<: *default
image: zookeeper:${ZOOKEEPER_VERSION:-3.9.3}
container_name: zookeeper
ports:
- "${ZOOKEEPER_CLIENT_PORT_OVERRIDE:-2181}:2181"
- "${ZOOKEEPER_ADMIN_PORT_OVERRIDE:-8080}:8080"
volumes:
- *localtime
- *timezone
- zookeeper_data:/data
- zookeeper_datalog:/datalog
- zookeeper_logs:/logs
environment:
- TZ=${TZ:-UTC}
- ZOO_MY_ID=1
- ZOO_PORT=2181
- ZOO_TICK_TIME=${ZOO_TICK_TIME:-2000}
@@ -38,6 +34,12 @@ services:
reservations:
cpus: '0.25'
memory: 512M
healthcheck:
test: ["CMD-SHELL", "echo ruok | nc localhost 2181 | grep imok"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes:
zookeeper_data:
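The `ruok` healthcheck above relies on ZooKeeper's four-letter-word commands, which have been restricted by a whitelist since 3.5 (only `srvr` is enabled by default). If the check reports unhealthy on a fresh deployment, the official image's whitelist variable can enable it — a sketch, assuming the variable is not already set in the elided part of the file:

```yaml
environment:
  # Whitelist the 4lw commands the healthcheck depends on
  - ZOO_4LW_COMMANDS_WHITELIST=srvr,ruok
```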