feat: update Guidelines

commit 8cf227bd14 (parent fe329c80eb)
Author: Sun-ZhenXing
Date: 2025-10-15 14:00:03 +08:00
76 changed files with 1078 additions and 671 deletions

.compose-template.yaml (new file)

@@ -0,0 +1,41 @@
# Docker Compose Template for Compose Anything
# This template provides a standardized structure for all services
# Copy this template when creating new services

x-default: &default
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: "3"

# Example service structure:
# services:
#   service-name:
#     <<: *default
#     image: image:${VERSION:-latest}
#     ports:
#       - "${PORT_OVERRIDE:-8080}:8080"
#     volumes:
#       - service_data:/data
#     environment:
#       - TZ=${TZ:-UTC}
#       - ENV_VAR=${ENV_VAR:-default_value}
#     healthcheck:
#       test: ["CMD", "command", "to", "check", "health"]
#       interval: 30s
#       timeout: 10s
#       retries: 3
#       start_period: 10s
#     deploy:
#       resources:
#         limits:
#           cpus: '1.00'
#           memory: 512M
#         reservations:
#           cpus: '0.25'
#           memory: 128M
#
# volumes:
#   service_data:
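As an illustration, a hypothetical service built from this template might look like the sketch below. The `nginx` image, port, and volume names are placeholders, not a shipped configuration:

```yaml
# Hypothetical docker-compose.yaml derived from the template above.
# The service name, image, port, and volume are illustrative only.
x-default: &default
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: "3"

services:
  nginx:
    <<: *default
    image: nginx:${NGINX_VERSION:-1.27-alpine}
    ports:
      - "${NGINX_PORT_OVERRIDE:-8080}:80"
    volumes:
      - nginx_data:/usr/share/nginx/html:ro
    environment:
      - TZ=${TZ:-UTC}
    healthcheck:
      # busybox wget ships in alpine-based images
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:80/"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
    deploy:
      resources:
        limits:
          cpus: '1.00'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 128M

volumes:
  nginx_data:
```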


@@ -0,0 +1,52 @@
---
applyTo: '**'
---
Compose Anything helps users quickly deploy various services by providing a set of high-quality Docker Compose configuration files. These configurations constrain resource usage, can be easily migrated to systems like K8S, and are easy to understand and modify.

1. Out-of-the-box
   - Configurations should work out-of-the-box with no extra steps (at most, provide a `.env` file).
2. Simple commands
   - Each project ships a single `docker-compose.yaml` file.
   - Command complexity should not exceed `docker compose up -d`; if more is needed, provide a `Makefile`.
   - For initialization, prefer `healthcheck` with `depends_on` using `condition: service_healthy` to orchestrate startup order.
3. Stable versions
   - Pin to the latest stable version instead of `latest`.
   - Expose image versions via environment variables (e.g., `FOO_VERSION`).
4. Configuration conventions
   - Prefer environment variables over complex CLI flags;
   - Pass secrets via env vars or mounted files, never hardcode;
   - Provide sensible defaults to enable zero-config startup;
   - A commented `.env.example` is required;
   - Env var naming: UPPER_SNAKE_CASE with service prefix (e.g., `POSTGRES_*`); use `*_PORT_OVERRIDE` for host port overrides.
5. Profiles
   - Use Profiles for optional components/dependencies;
   - Recommended names: `gpu` (acceleration), `metrics` (observability/exporters), `dev` (dev-only features).
6. Cross-platform & architectures
   - Where images support it, ensure Debian 12+/Ubuntu 22.04+, Windows 10+, macOS 12+ work;
   - Support x86-64 and ARM64 as consistently as possible;
   - Avoid Linux-only host paths like `/etc/localtime` and `/etc/timezone`; prefer the `TZ` env var for the time zone.
7. Volumes & mounts
   - Prefer relative paths for configuration to improve portability;
   - Prefer named volumes for data directories to avoid the permission/compatibility issues of host paths;
   - If host paths are necessary, provide a top-level directory variable (e.g., `DATA_DIR`).
8. Resources & logging
   - Always limit CPU and memory to prevent resource exhaustion;
   - For GPU services, enable a single GPU by default via `deploy.resources.reservations.devices` (maps to device requests) or `gpus` where applicable;
   - Limit logs (`json-file` driver: `max-size`/`max-file`).
9. Healthchecks
   - Every service should define a `healthcheck` with suitable `interval`, `timeout`, `retries`, and `start_period`;
   - Use `depends_on.condition: service_healthy` for dependency chains.
10. Security baseline (apply when possible)
    - Run as non-root (expose `PUID`/`PGID` or set `user: "1000:1000"`);
    - Read-only root filesystem (`read_only: true`), with `tmpfs`/writable mounts for required paths;
    - Least privilege: `cap_drop: ["ALL"]`, adding back only what's needed via `cap_add`;
    - Avoid `container_name` (it hurts scaling and reusable network aliases);
    - If exposing the Docker socket or other high-risk mounts, clearly document the risks and alternatives.
11. Documentation & discoverability
    - Provide clear docs and examples (include admin/initialization notes, and security/license notes when relevant);
    - Keep docs LLM-friendly;
    - List primary env vars and default ports in the README, and link to `README.md` / `README.zh.md`.
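The security-baseline items above can be sketched as a compose fragment. This is a minimal illustration, assuming a generic service; the image name and paths are placeholders:

```yaml
# Sketch of the security baseline applied to a generic service.
# "example/app" and the mounted paths are hypothetical.
services:
  app:
    image: example/app:1.0.0
    user: "1000:1000"        # run as non-root
    read_only: true          # read-only root filesystem
    tmpfs:
      - /tmp                 # writable scratch space for required paths
    cap_drop:
      - ALL                  # drop all capabilities by default
    # cap_add:
    #   - NET_BIND_SERVICE   # add back only what's actually needed
```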
Reference template: `.compose-template.yaml` in the repo root.
To find image tags, try fetching a URL like `https://hub.docker.com/v2/repositories/library/nginx/tags?page_size=1&ordering=last_updated`.
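A quick sketch of that lookup; the `library` namespace applies to official images (other images use `<user>/<repo>` instead), and the `jq` filter is an assumption about the response shape:

```shell
# Build the Docker Hub tags URL for an official image (namespace "library").
image="nginx"
url="https://hub.docker.com/v2/repositories/library/${image}/tags?page_size=1&ordering=last_updated"
echo "$url"
# With network access, the most recently updated tag could be extracted, e.g.:
#   curl -fsSL "$url" | jq -r '.results[0].name'
```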


@@ -67,37 +67,50 @@ Compose Anything helps users quickly deploy various services by providing a set
 ## Guidelines
 
-1. **Out-of-the-box**: Configurations should work out-of-the-box, requiring no setup to start (at most, provide a `.env` file).
-2. **Simple Commands**
-   - Each project provides a single `docker-compose.yaml` file.
-   - Command complexity should not exceed the `docker compose` command; if it does, provide a `Makefile`.
-   - If a service requires initialization, use `depends_on` to simulate Init containers.
-3. **Stable Versions**
-   - Provide the latest stable image version instead of `latest`.
-   - Allow version configuration via environment variables.
-4. **Highly Configurable**
-   - Prefer configuration via environment variables rather than complex command-line arguments.
-   - Sensitive information like passwords should be passed via environment variables or mounted files, not hardcoded.
-   - Provide reasonable defaults so services can start with zero configuration.
-   - Provide a well-commented `.env.example` file to help users get started quickly.
-   - Use Profiles for optional dependencies.
-5. **Cross-Platform**: (Where supported by the image) Ensure compatibility with major platforms.
-   - Compatibility: Debian 12+/Ubuntu 22.04+, Windows 10+, macOS 12+.
-   - Support multiple architectures where possible, such as x86-64 and ARM64.
-6. **Careful Mounting**
-   - Use relative paths for configuration file mounts to ensure cross-platform compatibility.
-   - Use named volumes for data directories to avoid permission and compatibility issues with host path mounts.
-7. **Default Resource Limits**
-   - Limit CPU and memory usage for each service to prevent accidental resource exhaustion.
-   - Limit log file size to prevent logs from filling up the disk.
-   - For GPU services, enable single GPU by default.
-8. **Comprehensive Documentation**
-   - Provide good documentation and examples to help users get started and understand the configurations.
-   - Clearly explain how to initialize accounts, admin accounts, etc.
-   - Provide security and license notes when necessary.
-   - Offer LLM-friendly documentation for easy querying and understanding by language models.
-9. **Best Practices**: Follow other best practices to ensure security, performance, and maintainability.
+1. Out-of-the-box
+   - Configurations should work out-of-the-box with no extra steps (at most, provide a `.env` file).
+2. Simple commands
+   - Each project ships a single `docker-compose.yaml` file.
+   - Command complexity should not exceed `docker compose up -d`; if more is needed, provide a `Makefile`.
+   - For initialization, prefer `healthcheck` with `depends_on` using `condition: service_healthy` to orchestrate startup order.
+3. Stable versions
+   - Pin to the latest stable version instead of `latest`.
+   - Expose image versions via environment variables (e.g., `FOO_VERSION`).
+4. Configuration conventions
+   - Prefer environment variables over complex CLI flags;
+   - Pass secrets via env vars or mounted files, never hardcode;
+   - Provide sensible defaults to enable zero-config startup;
+   - A commented `.env.example` is required;
+   - Env var naming: UPPER_SNAKE_CASE with service prefix (e.g., `POSTGRES_*`); use `*_PORT_OVERRIDE` for host port overrides.
+5. Profiles
+   - Use Profiles for optional components/dependencies;
+   - Recommended names: `gpu` (acceleration), `metrics` (observability/exporters), `dev` (dev-only features).
+6. Cross-platform & architectures
+   - Where images support it, ensure Debian 12+/Ubuntu 22.04+, Windows 10+, macOS 12+ work;
+   - Support x86-64 and ARM64 as consistently as possible;
+   - Avoid Linux-only host paths like `/etc/localtime` and `/etc/timezone`; prefer the `TZ` env var for the time zone.
+7. Volumes & mounts
+   - Prefer relative paths for configuration to improve portability;
+   - Prefer named volumes for data directories to avoid the permission/compatibility issues of host paths;
+   - If host paths are necessary, provide a top-level directory variable (e.g., `DATA_DIR`).
+8. Resources & logging
+   - Always limit CPU and memory to prevent resource exhaustion;
+   - For GPU services, enable a single GPU by default via `deploy.resources.reservations.devices` (maps to device requests) or `gpus` where applicable;
+   - Limit logs (`json-file` driver: `max-size`/`max-file`).
+9. Healthchecks
+   - Every service should define a `healthcheck` with suitable `interval`, `timeout`, `retries`, and `start_period`;
+   - Use `depends_on.condition: service_healthy` for dependency chains.
+10. Security baseline (apply when possible)
+    - Run as non-root (expose `PUID`/`PGID` or set `user: "1000:1000"`);
+    - Read-only root filesystem (`read_only: true`), with `tmpfs`/writable mounts for required paths;
+    - Least privilege: `cap_drop: ["ALL"]`, adding back only what's needed via `cap_add`;
+    - Avoid `container_name` (it hurts scaling and reusable network aliases);
+    - If exposing the Docker socket or other high-risk mounts, clearly document the risks and alternatives.
+11. Documentation & discoverability
+    - Provide clear docs and examples (include admin/initialization notes, and security/license notes when relevant);
+    - Keep docs LLM-friendly;
+    - List primary env vars and default ports in the README, and link to `README.md` / `README.zh.md`.
 
 ## License
 
-MIT License.
+[MIT License](./LICENSE).


@@ -67,37 +67,50 @@ Compose Anything helps users quickly deploy various services by providing a set
 ## Guidelines
 
-1. **Out-of-the-box**: Configurations should be out-of-the-box, able to start without any setup (at most, provide a `.env` file);
-2. **Simple Commands**
-   - Each project provides a single `docker-compose.yaml` file;
-   - Command complexity should not exceed the `docker compose` command; if it does, provide a `Makefile`;
-   - If a service requires initialization, use `depends_on` to simulate Init containers;
-3. **Stable Versions**
-   - Provide the latest stable image version instead of `latest`;
-   - Allow the version number to be configured via environment variables;
-4. **Highly Configurable**
-   - Prefer configuration via environment variables rather than complex command-line arguments;
-   - Pass passwords and other sensitive information via environment variables or mounted files, never hardcode;
-   - Provide reasonable defaults so services can start with zero configuration;
-   - Provide a commented `.env.example` file where possible to help users get started quickly;
-   - Use Profiles for non-essential dependencies;
-5. **Cross-platform**: (where the image supports it) ensure mainstream platforms can start normally;
-   - Compatibility baseline: Debian 12+/Ubuntu 22.04+, Windows 10+, macOS 12+;
-   - Support different architectures where possible, such as x86-64 and ARM64;
-6. **Careful Mounting**
-   - Mount configuration files via relative paths where possible to ensure cross-platform compatibility;
-   - Use named volumes for data directories to avoid the permission and compatibility issues of host path mounts;
-7. **Default Resource Limits**
-   - Limit CPU and memory usage for each service to prevent accidental resource exhaustion;
-   - Limit log size to prevent log files from filling the disk;
-   - For GPU services, enable a single GPU by default;
-8. **Comprehensive Documentation**
-   - Provide good documentation and examples to help users get started and understand the configurations;
-   - In particular, explain how to initialize accounts, admin accounts, etc.;
-   - Provide security and license notes when necessary;
-   - Provide LLM-friendly documentation so users can query and understand it with LLMs;
-9. **Best Practices**: Follow other applicable best practices to ensure security, performance, and maintainability.
+1. Out-of-the-box
+   - Configurations should work out-of-the-box with no extra steps (at most, provide a `.env` file).
+2. Simple commands
+   - Each project ships a single `docker-compose.yaml` file;
+   - Command complexity should not exceed `docker compose up -d`; if extra steps are needed, provide a `Makefile`;
+   - If a service needs initialization, prefer `healthcheck` with `depends_on` using `condition: service_healthy` to orchestrate startup order.
+3. Stable versions
+   - Pin to the latest stable version instead of `latest`;
+   - Expose image versions via environment variables (e.g., `FOO_VERSION`).
+4. Configuration conventions
+   - Prefer environment variables over complex command-line arguments;
+   - Pass sensitive information via environment variables or mounted files, never hardcode;
+   - Provide reasonable defaults to achieve zero-config startup;
+   - A commented `.env.example` is required;
+   - Env var naming: UPPER_SNAKE_CASE with a service prefix (e.g., `POSTGRES_*`); use `*_PORT_OVERRIDE` uniformly for host port overrides.
+5. Profiles
+   - Use Profiles for optional components/dependencies;
+   - Recommended names: `gpu` (GPU acceleration), `metrics` (observability/exporters), `dev` (dev-only features).
+6. Cross-platform & architectures
+   - Where images support it, ensure Debian 12+/Ubuntu 22.04+, Windows 10+, macOS 12+ work;
+   - Support x86-64 and ARM64 as consistently as possible;
+   - Avoid host paths that exist only on Linux hosts (e.g., `/etc/localtime` and `/etc/timezone`); pass the time zone uniformly via the `TZ` environment variable.
+7. Volumes & mounts
+   - Prefer relative paths for configuration files to improve cross-platform compatibility;
+   - Prefer named volumes for data directories to avoid the permission/compatibility issues of host paths;
+   - If host paths are needed, provide a top-level directory variable (e.g., `DATA_DIR`).
+8. Resources & logging
+   - Always limit CPU/memory to prevent resource exhaustion;
+   - GPU services default to a single GPU: use `deploy.resources.reservations.devices` (Compose maps this to device requests) or `gpus`;
+   - Limit log size (`json-file`: `max-size`/`max-file`).
+9. Healthchecks
+   - Every service should provide a `healthcheck` with suitable `interval`, `timeout`, `retries`, and `start_period`;
+   - Organize dependency chains via `depends_on.condition: service_healthy`.
+10. Security baseline (apply where possible)
+    - Run as non-root (provide `PUID`/`PGID` or set `user: "1000:1000"` directly);
+    - Read-only root filesystem (`read_only: true`), with `tmpfs`/writable mounts for required directories;
+    - Least privilege: `cap_drop: ["ALL"]`, then `cap_add` as needed;
+    - Avoid `container_name` (it hurts scalability and reusable network aliases);
+    - If high-risk mounts such as the Docker socket are needed, the docs must clearly state the risks and alternatives.
+11. Documentation & discoverability
+    - Provide clear docs and examples (including initialization and admin-account notes, and security/license notes where needed);
+    - Provide structured, LLM-friendly documentation;
+    - Note the primary environment variables and default ports in the README, and link to `README.md` / `README.zh.md`.
 
 ## License
 
-MIT License.
+[MIT License](./LICENSE).


@@ -1,24 +1,19 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"
 
 services:
   apache:
     <<: *default
     image: httpd:${APACHE_VERSION:-2.4.62-alpine3.20}
-    container_name: apache
     ports:
       - "${APACHE_HTTP_PORT_OVERRIDE:-80}:80"
       - "${APACHE_HTTPS_PORT_OVERRIDE:-443}:443"
     volumes:
-      - *localtime
-      - *timezone
       - apache_logs:/usr/local/apache2/logs
       - ./htdocs:/usr/local/apache2/htdocs:ro
@@ -26,6 +21,7 @@ services:
       # - ./httpd.conf:/usr/local/apache2/conf/httpd.conf:ro
       # - ./ssl:/usr/local/apache2/conf/ssl:ro
     environment:
+      - TZ=${TZ:-UTC}
       - APACHE_RUN_USER=${APACHE_RUN_USER:-www-data}
       - APACHE_RUN_GROUP=${APACHE_RUN_GROUP:-www-data}
     deploy:
@@ -36,6 +32,12 @@ services:
       reservations:
         cpus: '0.25'
         memory: 128M
+    healthcheck:
+      test: ["CMD", "httpd", "-t"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 10s
 
 volumes:
   apache_logs:


@@ -1,34 +1,31 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"
 
 services:
   apisix:
     <<: *default
     image: apache/apisix:${APISIX_VERSION:-3.13.0-debian}
-    container_name: apisix
     ports:
       - "${APISIX_HTTP_PORT_OVERRIDE:-9080}:9080"
       - "${APISIX_HTTPS_PORT_OVERRIDE:-9443}:9443"
       - "${APISIX_ADMIN_PORT_OVERRIDE:-9180}:9180"
     volumes:
-      - *localtime
-      - *timezone
       - apisix_logs:/usr/local/apisix/logs
       # Optional: Mount custom configuration
       # - ./config.yaml:/usr/local/apisix/conf/config.yaml
       # - ./apisix.yaml:/usr/local/apisix/conf/apisix.yaml
     environment:
+      - TZ=${TZ:-UTC}
       - APISIX_STAND_ALONE=${APISIX_STAND_ALONE:-false}
     depends_on:
-      - etcd
+      etcd:
+        condition: service_healthy
     deploy:
       resources:
         limits:
@@ -37,18 +34,22 @@ services:
       reservations:
         cpus: '0.25'
         memory: 256M
+    healthcheck:
+      test: ["CMD-SHELL", "curl -f http://localhost:9080/apisix/status || exit 1"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 10s
 
   etcd:
     <<: *default
     image: quay.io/coreos/etcd:${ETCD_VERSION:-v3.6.0}
-    container_name: apisix-etcd
     ports:
       - "${ETCD_CLIENT_PORT_OVERRIDE:-2379}:2379"
     volumes:
-      - *localtime
-      - *timezone
       - etcd_data:/etcd-data
     environment:
+      - TZ=${TZ:-UTC}
       - ETCD_NAME=apisix-etcd
       - ETCD_DATA_DIR=/etcd-data
       - ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379
@@ -87,23 +88,28 @@ services:
       reservations:
         cpus: '0.1'
         memory: 128M
+    healthcheck:
+      test: ["CMD", "etcdctl", "endpoint", "health"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 10s
 
   # Optional: APISIX Dashboard
   apisix-dashboard:
     <<: *default
     image: apache/apisix-dashboard:${APISIX_DASHBOARD_VERSION:-3.0.1-alpine}
-    container_name: apisix-dashboard
     ports:
       - "${APISIX_DASHBOARD_PORT_OVERRIDE:-9000}:9000"
     volumes:
-      - *localtime
-      - *timezone
       - dashboard_conf:/usr/local/apisix-dashboard/conf
     environment:
+      - TZ=${TZ:-UTC}
       - APISIX_DASHBOARD_USER=${APISIX_DASHBOARD_USER:-admin}
       - APISIX_DASHBOARD_PASSWORD=${APISIX_DASHBOARD_PASSWORD:-admin}
     depends_on:
-      - apisix
+      apisix:
+        condition: service_healthy
     profiles:
       - dashboard
     deploy:


@@ -1,12 +1,10 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"
 
 services:
   bifrost:
@@ -16,6 +14,8 @@ services:
       - bifrost_data:/app/data
     ports:
       - "${BIFROST_PORT:-28080}:8080"
+    environment:
+      - TZ=${TZ:-UTC}
     deploy:
       resources:
         limits:
@@ -24,6 +24,12 @@ services:
       reservations:
         cpus: '0.10'
         memory: 128M
+    healthcheck:
+      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/health"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 10s
 
 volumes:
   bifrost_data:


@@ -1,23 +1,19 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"
 
 services:
   bytebot-desktop:
     <<: *default
     image: ghcr.io/bytebot-ai/bytebot-desktop:${BYTEBOT_VERSION:-edge}
-    container_name: bytebot-desktop
     ports:
       - "${BYTEBOT_DESKTOP_PORT_OVERRIDE:-9990}:9990"
-    volumes:
-      - *localtime
-      - *timezone
+    environment:
+      - TZ=${TZ:-UTC}
     shm_size: 2gb
     deploy:
       resources:
@@ -27,25 +23,30 @@ services:
       reservations:
         cpus: '1.0'
         memory: 2G
+    healthcheck:
+      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:9990/"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 30s
 
   bytebot-agent:
     <<: *default
     image: ghcr.io/bytebot-ai/bytebot-agent:${BYTEBOT_VERSION:-edge}
-    container_name: bytebot-agent
     depends_on:
-      - bytebot-desktop
-      - bytebot-db
+      bytebot-desktop:
+        condition: service_healthy
+      bytebot-db:
+        condition: service_healthy
     ports:
       - "${BYTEBOT_AGENT_PORT_OVERRIDE:-9991}:9991"
     environment:
+      - TZ=${TZ:-UTC}
       - BYTEBOTD_URL=http://bytebot-desktop:9990
       - DATABASE_URL=postgresql://${POSTGRES_USER:-bytebot}:${POSTGRES_PASSWORD:-bytebotpass}@bytebot-db:5432/${POSTGRES_DB:-bytebot}
       - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
       - OPENAI_API_KEY=${OPENAI_API_KEY:-}
       - GEMINI_API_KEY=${GEMINI_API_KEY:-}
-    volumes:
-      - *localtime
-      - *timezone
     deploy:
       resources:
         limits:
@@ -54,21 +55,25 @@ services:
       reservations:
         cpus: '0.5'
         memory: 512M
+    healthcheck:
+      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:9991/health"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 30s
 
   bytebot-ui:
     <<: *default
     image: ghcr.io/bytebot-ai/bytebot-ui:${BYTEBOT_VERSION:-edge}
-    container_name: bytebot-ui
     depends_on:
-      - bytebot-agent
+      bytebot-agent:
+        condition: service_healthy
     ports:
       - "${BYTEBOT_UI_PORT_OVERRIDE:-9992}:9992"
     environment:
+      - TZ=${TZ:-UTC}
       - BYTEBOT_AGENT_BASE_URL=http://localhost:9991
       - BYTEBOT_DESKTOP_VNC_URL=http://localhost:9990/websockify
-    volumes:
-      - *localtime
-      - *timezone
     deploy:
       resources:
         limits:
@@ -81,15 +86,13 @@ services:
   bytebot-db:
     <<: *default
     image: postgres:${POSTGRES_VERSION:-17-alpine}
-    container_name: bytebot-db
     environment:
+      - TZ=${TZ:-UTC}
       - POSTGRES_USER=${POSTGRES_USER:-bytebot}
       - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-bytebotpass}
       - POSTGRES_DB=${POSTGRES_DB:-bytebot}
       - PGDATA=/var/lib/postgresql/data/pgdata
     volumes:
-      - *localtime
-      - *timezone
       - bytebot_db_data:/var/lib/postgresql/data
     deploy:
       resources:
@@ -99,6 +102,12 @@ services:
       reservations:
         cpus: '0.25'
         memory: 256M
+    healthcheck:
+      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 30s
 
 volumes:
   bytebot_db_data:


@@ -1,30 +1,26 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"
 
 services:
   cassandra:
     <<: *default
     image: cassandra:${CASSANDRA_VERSION:-5.0.2}
-    container_name: cassandra
     ports:
       - "${CASSANDRA_CQL_PORT_OVERRIDE:-9042}:9042"
       - "${CASSANDRA_THRIFT_PORT_OVERRIDE:-9160}:9160"
     volumes:
-      - *localtime
-      - *timezone
       - cassandra_data:/var/lib/cassandra
       - cassandra_logs:/var/log/cassandra
       # Custom configuration
       # - ./cassandra.yaml:/etc/cassandra/cassandra.yaml:ro
     environment:
+      - TZ=${TZ:-UTC}
       - CASSANDRA_CLUSTER_NAME=${CASSANDRA_CLUSTER_NAME:-Test Cluster}
       - CASSANDRA_DC=${CASSANDRA_DC:-datacenter1}
       - CASSANDRA_RACK=${CASSANDRA_RACK:-rack1}


@@ -1,12 +1,10 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"
 
 services:
   clash:
@@ -16,9 +14,9 @@ services:
       - "7880:80"
       - "7890:7890"
     volumes:
-      - *localtime
-      - *timezone
       - ./config.yaml:/home/runner/.config/clash/config.yaml
+    environment:
+      - TZ=${TZ:-UTC}
     deploy:
       resources:
         limits:
@@ -27,3 +25,9 @@ services:
       reservations:
         cpus: "0.25"
         memory: 256M
+    healthcheck:
+      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:80/"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 10s


@@ -1,18 +1,15 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"
 
 services:
   consul:
     <<: *default
     image: consul:${CONSUL_VERSION:-1.20.3}
-    container_name: consul
     ports:
       - "${CONSUL_HTTP_PORT_OVERRIDE:-8500}:8500"
       - "${CONSUL_DNS_PORT_OVERRIDE:-8600}:8600/udp"
@@ -20,14 +17,13 @@ services:
       - "${CONSUL_SERF_WAN_PORT_OVERRIDE:-8302}:8302"
       - "${CONSUL_SERVER_RPC_PORT_OVERRIDE:-8300}:8300"
     volumes:
-      - *localtime
-      - *timezone
       - consul_data:/consul/data
       - consul_config:/consul/config
       # Custom configuration
       # - ./consul.json:/consul/config/consul.json:ro
     environment:
+      - TZ=${TZ:-UTC}
       - CONSUL_BIND_INTERFACE=${CONSUL_BIND_INTERFACE:-eth0}
       - CONSUL_CLIENT_INTERFACE=${CONSUL_CLIENT_INTERFACE:-eth0}
       - CONSUL_LOCAL_CONFIG=${CONSUL_LOCAL_CONFIG:-'{"datacenter":"dc1","server":true,"ui_config":{"enabled":true},"bootstrap_expect":1,"log_level":"INFO"}'}


@@ -1,22 +1,22 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"
 
 services:
   dify-api:
     <<: *default
     image: langgenius/dify-api:${DIFY_VERSION:-0.18.2}
-    container_name: dify-api
     depends_on:
-      - dify-db
-      - dify-redis
+      dify-db:
+        condition: service_healthy
+      dify-redis:
+        condition: service_healthy
     environment:
+      - TZ=${TZ:-UTC}
       - MODE=api
       - LOG_LEVEL=${LOG_LEVEL:-INFO}
       - SECRET_KEY=${SECRET_KEY:-sk-9f73s3ljTXVcMT3Blb3ljTqtsKiGHXVcMT3BlbkFJLK7U}
@@ -30,8 +30,6 @@ services:
       - VECTOR_STORE=${VECTOR_STORE:-weaviate}
       - WEAVIATE_ENDPOINT=http://dify-weaviate:8080
     volumes:
-      - *localtime
-      - *timezone
       - dify_storage:/app/api/storage
     deploy:
       resources:
@@ -41,15 +39,23 @@ services:
       reservations:
         cpus: '0.5'
         memory: 1G
+    healthcheck:
+      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:5001/health"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 30s
 
   dify-worker:
     <<: *default
     image: langgenius/dify-api:${DIFY_VERSION:-0.18.2}
-    container_name: dify-worker
     depends_on:
-      - dify-db
-      - dify-redis
+      dify-db:
+        condition: service_healthy
+      dify-redis:
+        condition: service_healthy
     environment:
+      - TZ=${TZ:-UTC}
       - MODE=worker
       - LOG_LEVEL=${LOG_LEVEL:-INFO}
       - SECRET_KEY=${SECRET_KEY:-sk-9f73s3ljTXVcMT3Blb3ljTqtsKiGHXVcMT3BlbkFJLK7U}
@@ -63,8 +69,6 @@ services:
       - VECTOR_STORE=${VECTOR_STORE:-weaviate}
       - WEAVIATE_ENDPOINT=http://dify-weaviate:8080
     volumes:
-      - *localtime
-      - *timezone
       - dify_storage:/app/api/storage
     deploy:
       resources:
@@ -78,10 +82,11 @@ services:
   dify-web:
     <<: *default
     image: langgenius/dify-web:${DIFY_VERSION:-0.18.2}
-    container_name: dify-web
     depends_on:
-      - dify-api
+      dify-api:
+        condition: service_healthy
     environment:
+      - TZ=${TZ:-UTC}
       - NEXT_PUBLIC_API_URL=${DIFY_API_URL:-http://localhost:5001}
       - NEXT_PUBLIC_APP_URL=${DIFY_APP_URL:-http://localhost:3000}
     ports:
@@ -98,15 +103,13 @@ services:
   dify-db:
     <<: *default
     image: postgres:15-alpine
-    container_name: dify-db
     environment:
+      - TZ=${TZ:-UTC}
       - POSTGRES_USER=${POSTGRES_USER:-dify}
       - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-difypass}
       - POSTGRES_DB=${POSTGRES_DB:-dify}
       - PGDATA=/var/lib/postgresql/data/pgdata
     volumes:
-      - *localtime
-      - *timezone
       - dify_db_data:/var/lib/postgresql/data
     deploy:
       resources:
@@ -116,15 +119,20 @@ services:
       reservations:
         cpus: '0.25'
         memory: 256M
+    healthcheck:
+      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 30s
 
   dify-redis:
     <<: *default
     image: redis:7-alpine
-    container_name: dify-redis
     command: redis-server --requirepass ${REDIS_PASSWORD:-}
+    environment:
+      - TZ=${TZ:-UTC}
     volumes:
-      - *localtime
-      - *timezone
       - dify_redis_data:/data
     deploy:
       resources:
@@ -134,22 +142,26 @@ services:
       reservations:
         cpus: '0.1'
         memory: 128M
+    healthcheck:
+      test: ["CMD", "redis-cli", "ping"]
+      interval: 10s
+      timeout: 3s
+      retries: 3
+      start_period: 5s
 
   dify-weaviate:
     <<: *default
     image: semitechnologies/weaviate:${WEAVIATE_VERSION:-1.28.12}
-    container_name: dify-weaviate
     profiles:
       - weaviate
     environment:
+      - TZ=${TZ:-UTC}
       - QUERY_DEFAULTS_LIMIT=25
       - AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED=true
       - PERSISTENCE_DATA_PATH=/var/lib/weaviate
       - DEFAULT_VECTORIZER_MODULE=none
       - CLUSTER_HOSTNAME=node1
     volumes:
-      - *localtime
-      - *timezone
       - dify_weaviate_data:/var/lib/weaviate
     deploy:
       resources:
@@ -159,6 +171,12 @@ services:
reservations: reservations:
cpus: '0.25' cpus: '0.25'
memory: 512M memory: 512M
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/v1/.well-known/ready"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes: volumes:
dify_storage: dify_storage:
@@ -1,24 +1,21 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"
 services:
   registry:
     <<: *default
     image: registry:${REGISTRY_VERSION:-3.0.0}
     volumes:
-      - *localtime
-      - *timezone
       - ./certs:/certs:ro
       - ./config.yml:/etc/distribution/config.yml:ro
       - registry:/var/lib/registry
     environment:
+      TZ: ${TZ:-UTC}
       REGISTRY_AUTH: ${REGISTRY_AUTH:-htpasswd}
       REGISTRY_AUTH_HTPASSWD_REALM: ${REGISTRY_AUTH_HTPASSWD_REALM:-Registry Realm}
       REGISTRY_AUTH_HTPASSWD_PATH: ${REGISTRY_AUTH_HTPASSWD_PATH:-/certs/passwd}
@@ -35,6 +32,12 @@ services:
         reservations:
           cpus: '0.1'
           memory: 128M
+    healthcheck:
+      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:5000/"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 10s
 volumes:
   registry:
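The removal of the shared `volumes` anchors from `x-default` above follows from how YAML merge keys behave: `<<: *default` is a shallow merge of top-level keys only, so a service-level `volumes:` key replaces the anchor's list rather than appending to it. A sketch of the pitfall the old layout had to work around (service and volume names illustrative):

```yaml
x-default: &default
  restart: unless-stopped
  volumes:
    - /etc/localtime:/etc/localtime:ro

services:
  app:
    <<: *default        # merges top-level keys only
    volumes:            # this key REPLACES the anchor's volumes list
      - app_data:/data  # the localtime mount is silently dropped
```

That is why every service previously had to repeat `- *localtime` and `- *timezone` explicitly; switching to a `TZ` environment variable removes the duplication entirely.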
@@ -1,27 +1,23 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"
 services:
   dockge:
     <<: *default
     image: louislam/dockge:${DOCKGE_VERSION:-1}
-    container_name: dockge
     ports:
       - "${PORT_OVERRIDE:-5001}:5001"
     volumes:
-      - *localtime
-      - *timezone
       - /var/run/docker.sock:/var/run/docker.sock
       - dockge_data:/app/data
       - ${STACKS_DIR:-./stacks}:/opt/stacks
     environment:
+      - TZ=${TZ:-UTC}
       - DOCKGE_STACKS_DIR=${DOCKGE_STACKS_DIR:-/opt/stacks}
      - PUID=${PUID:-1000}
       - PGID=${PGID:-1000}
@@ -33,6 +29,12 @@ services:
         reservations:
           cpus: '0.25'
           memory: 256M
+    healthcheck:
+      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:5001/"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 10s
 volumes:
   dockge_data:
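A caveat on the `wget --spider` probes added throughout this commit: they assume the image ships BusyBox or GNU wget. A minimal image without it will be reported unhealthy even when the service is fine, in which case the probe should use whatever HTTP client the image actually contains. A hedged curl-based alternative (endpoint path illustrative):

```yaml
services:
  app:
    healthcheck:
      # -f fails on HTTP errors, -sS stays quiet except on real failures
      test: ["CMD", "curl", "-fsS", "http://localhost:5001/"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
```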
@@ -1,30 +1,26 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"
 services:
   elasticsearch:
     <<: *default
     image: docker.elastic.co/elasticsearch/elasticsearch:${ELASTICSEARCH_VERSION:-8.16.1}
-    container_name: elasticsearch
     ports:
       - "${ELASTICSEARCH_HTTP_PORT_OVERRIDE:-9200}:9200"
       - "${ELASTICSEARCH_TRANSPORT_PORT_OVERRIDE:-9300}:9300"
     volumes:
-      - *localtime
-      - *timezone
       - elasticsearch_data:/usr/share/elasticsearch/data
       - elasticsearch_logs:/usr/share/elasticsearch/logs
       # Custom configuration
       # - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
     environment:
+      - TZ=${TZ:-UTC}
       - node.name=elasticsearch
       - cluster.name=${ELASTICSEARCH_CLUSTER_NAME:-docker-cluster}
       - discovery.type=${ELASTICSEARCH_DISCOVERY_TYPE:-single-node}
@@ -1,26 +1,22 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"
 services:
   etcd:
     <<: *default
     image: quay.io/coreos/etcd:${ETCD_VERSION:-v3.6.0}
-    container_name: etcd
     ports:
       - "${ETCD_CLIENT_PORT_OVERRIDE:-2379}:2379"
       - "${ETCD_PEER_PORT_OVERRIDE:-2380}:2380"
     volumes:
-      - *localtime
-      - *timezone
       - etcd_data:/etcd-data
     environment:
+      - TZ=${TZ:-UTC}
       - ETCD_NAME=${ETCD_NAME:-etcd-node}
       - ETCD_DATA_DIR=/etcd-data
       - ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379
@@ -59,6 +55,12 @@ services:
         reservations:
           cpus: '0.25'
           memory: 256M
+    healthcheck:
+      test: ["CMD", "etcdctl", "endpoint", "health"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 10s
 volumes:
   etcd_data:
@@ -1,21 +1,19 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"
 services:
   firecrawl:
     <<: *default
     image: mendableai/firecrawl:${FIRECRAWL_VERSION:-v1.16.0}
-    container_name: firecrawl
     ports:
       - "${FIRECRAWL_PORT_OVERRIDE:-3002}:3002"
     environment:
+      TZ: ${TZ:-UTC}
       REDIS_URL: redis://:${REDIS_PASSWORD:-firecrawl}@redis:6379
       PLAYWRIGHT_MICROSERVICE_URL: http://playwright:3000
       PORT: 3002
@@ -23,8 +21,10 @@ services:
       SCRAPE_RATE_LIMIT_TOKEN_BUCKET_SIZE: ${SCRAPE_RATE_LIMIT_TOKEN_BUCKET_SIZE:-20}
       SCRAPE_RATE_LIMIT_TOKEN_BUCKET_REFILL: ${SCRAPE_RATE_LIMIT_TOKEN_BUCKET_REFILL:-1}
     depends_on:
-      - redis
-      - playwright
+      redis:
+        condition: service_healthy
+      playwright:
+        condition: service_started
     deploy:
       resources:
         limits:
@@ -33,15 +33,20 @@ services:
         reservations:
           cpus: '1.0'
           memory: 2G
+    healthcheck:
+      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3002/health"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 30s
   redis:
     <<: *default
     image: redis:${REDIS_VERSION:-7.4.2-alpine}
-    container_name: firecrawl-redis
     command: redis-server --requirepass ${REDIS_PASSWORD:-firecrawl} --appendonly yes
+    environment:
+      - TZ=${TZ:-UTC}
     volumes:
-      - *localtime
-      - *timezone
       - redis_data:/data
     deploy:
       resources:
@@ -51,12 +56,18 @@ services:
         reservations:
           cpus: '0.5'
           memory: 256M
+    healthcheck:
+      test: ["CMD", "redis-cli", "ping"]
+      interval: 10s
+      timeout: 3s
+      retries: 3
+      start_period: 5s
   playwright:
     <<: *default
     image: mendableai/firecrawl-playwright:${PLAYWRIGHT_VERSION:-latest}
-    container_name: firecrawl-playwright
     environment:
       TZ: ${TZ:-UTC}
       PORT: 3000
       PROXY_SERVER: ${PROXY_SERVER:-}
       PROXY_USERNAME: ${PROXY_USERNAME:-}
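A caveat on the plain `redis-cli ping` check above: the server is started with `--requirepass`, so an unauthenticated PING gets a `NOAUTH` error reply, and depending on the redis-cli version the client may still exit 0, making the check pass vacuously. If that matters, a stricter variant can authenticate and grep for `PONG`; here the password is interpolated by Compose at config time, reusing the same variable the `command` uses:

```yaml
services:
  redis:
    command: redis-server --requirepass ${REDIS_PASSWORD:-firecrawl} --appendonly yes
    healthcheck:
      # Compose substitutes the password before the container runs;
      # grep makes PONG (not NOAUTH) the success signal
      test: ["CMD-SHELL", "redis-cli -a '${REDIS_PASSWORD:-firecrawl}' ping | grep PONG"]
      interval: 10s
      timeout: 3s
      retries: 3
```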
@@ -1,22 +1,19 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"
 services:
   frpc:
     <<: *default
     image: snowdreamtech/frpc:${FRPC_VERSION:-0.64.0}
     volumes:
-      - *localtime
-      - *timezone
       - ./frpc.toml:/etc/frp/frpc.toml:ro
     environment:
+      TZ: ${TZ:-UTC}
       FRP_SERVER_ADDR: ${FRP_SERVER_ADDR}
       FRP_SERVER_PORT: ${FRP_SERVER_PORT}
       FRP_SERVER_TOKEN: ${FRP_SERVER_TOKEN}
@@ -1,25 +1,22 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"
 services:
   frps:
     <<: *default
     image: snowdreamtech/frps:${FRPS_VERSION:-0.64.0}
     volumes:
-      - *localtime
-      - *timezone
       - ./frps.toml:/etc/frp/frps.toml:ro
     ports:
       - ${FRP_PORT_OVERRIDE_SERVER:-9870}:${FRP_SERVER_PORT:-9870}
       - ${FRP_PORT_OVERRIDE_ADMIN:-7890}:${FRP_ADMIN_PORT:-7890}
     environment:
+      TZ: ${TZ:-UTC}
       FRP_SERVER_TOKEN: ${FRP_SERVER_TOKEN}
       FRP_SERVER_PORT: ${FRP_SERVER_PORT:-9870}
       FRP_ADMIN_PORT: ${FRP_ADMIN_PORT:-7890}
@@ -33,3 +30,9 @@ services:
         reservations:
           cpus: '0.1'
           memory: 64M
+    healthcheck:
+      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:${FRP_ADMIN_PORT:-7890}/"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 10s
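The frps healthcheck above relies on Compose interpolating `${FRP_ADMIN_PORT:-7890}` into the `test` array at config time (exec-form healthchecks do not go through a shell, so container-side expansion would not happen anyway), which keeps the probe in lockstep with the same variable used in the port mapping and environment. With `FRP_ADMIN_PORT` unset, the rendered config is equivalent to:

```yaml
services:
  frps:
    healthcheck:
      # what the engine actually runs after Compose substitution
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:7890/"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
```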
@@ -1,26 +1,23 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"
 services:
   gitea_runner:
     <<: *default
     image: gitea/act_runner:0.2.12
     environment:
+      TZ: ${TZ:-UTC}
       CONFIG_FILE: /config.yaml
       GITEA_INSTANCE_URL: ${INSTANCE_URL:-http://localhost:3000}
       GITEA_RUNNER_REGISTRATION_TOKEN: ${REGISTRATION_TOKEN}
       GITEA_RUNNER_NAME: ${RUNNER_NAME:-Gitea-Runner}
       GITEA_RUNNER_LABELS: ${RUNNER_LABELS:-DockerRunner}
     volumes:
-      - *localtime
-      - *timezone
       - ./config.yaml:/config.yaml:ro
       - gitea_runner_data:/data
       - /var/run/docker.sock:/var/run/docker.sock
@@ -1,33 +1,31 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"
 services:
   gitea:
     <<: *default
     image: gitea/gitea:${GITEA_VERSION:-1.24.6-rootless}
     environment:
+      - TZ=${TZ:-UTC}
       - GITEA__database__DB_TYPE=${GITEA_DB_TYPE:-postgres}
       - GITEA__database__HOST=${GITEA_POSTGRES_HOST:-db:5432}
       - GITEA__database__USER=${POSTGRES_USER:-gitea}
       - GITEA__database__NAME=${POSTGRES_DB:-gitea}
       - GITEA__database__PASSWD=${POSTGRES_PASSWORD:-gitea}
     volumes:
-      - *localtime
-      - *timezone
       - gitea_data:/var/lib/gitea
       - ./config:/etc/gitea
     ports:
       - "${GITEA_HTTP_PORT:-3000}:3000"
       - "${GITEA_SSH_PORT:-3022}:22"
     depends_on:
-      - db
+      db:
+        condition: service_healthy
     deploy:
       resources:
         limits:
@@ -36,17 +34,22 @@ services:
         reservations:
           cpus: '0.5'
           memory: 512M
+    healthcheck:
+      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 30s
   db:
     <<: *default
     image: postgres:${POSTGRES_VERSION:-17.6}
     environment:
+      - TZ=${TZ:-UTC}
       - POSTGRES_USER=${POSTGRES_USER:-gitea}
       - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-gitea}
       - POSTGRES_DB=${POSTGRES_DB:-gitea}
     volumes:
-      - *localtime
-      - *timezone
       - postgres:/var/lib/postgresql/data
     deploy:
       resources:
@@ -56,6 +59,12 @@ services:
         reservations:
           cpus: '0.5'
           memory: 512M
+    healthcheck:
+      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 30s
 volumes:
   gitea_data:
@@ -1,22 +1,20 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"
 services:
   gitlab-runner:
     <<: *default
     image: gitlab/gitlab-runner:${GITLAB_RUNNER_VERSION:-alpine3.21-v18.4.0}
     volumes:
-      - *localtime
-      - *timezone
       - /var/run/docker.sock:/var/run/docker.sock
       - ./config:/etc/gitlab-runner
+    environment:
+      - TZ=${TZ:-UTC}
     deploy:
       resources:
         limits:
@@ -1,12 +1,10 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"
 services:
   gitlab:
@@ -17,11 +15,12 @@ services:
       - "${GITLAB_PORT_OVERRIDE_HTTP:-5080}:80"
       - "${GITLAB_PORT_OVERRIDE_SSH:-5022}:22"
     volumes:
-      - *localtime
-      - *timezone
       - ./config:/etc/gitlab
       - gitlab_logs:/var/log/gitlab
       - gitlab_data:/var/opt/gitlab
+    environment:
+      - TZ=${TZ:-UTC}
+      - GITLAB_OMNIBUS_CONFIG=${GITLAB_OMNIBUS_CONFIG:-}
     deploy:
       resources:
         limits:
@@ -30,6 +29,12 @@ services:
         reservations:
           cpus: '1.0'
           memory: 4G
+    healthcheck:
+      test: ["CMD", "/opt/gitlab/bin/gitlab-healthcheck", "--fail"]
+      interval: 60s
+      timeout: 30s
+      retries: 5
+      start_period: 300s
 volumes:
   gitlab_logs:
@@ -1,25 +1,21 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"
 services:
   gpustack:
     <<: *default
     image: gpustack/gpustack:${GPUSTACK_VERSION:-v0.5.3}
-    container_name: gpustack
     ports:
       - "${GPUSTACK_PORT_OVERRIDE:-80}:80"
     volumes:
-      - *localtime
-      - *timezone
       - gpustack_data:/var/lib/gpustack
     environment:
+      - TZ=${TZ:-UTC}
       - GPUSTACK_DEBUG=${GPUSTACK_DEBUG:-false}
       - GPUSTACK_HOST=${GPUSTACK_HOST:-0.0.0.0}
       - GPUSTACK_PORT=${GPUSTACK_PORT:-80}
@@ -35,13 +31,18 @@ services:
         cpus: '1.0'
         memory: 2G
       # Uncomment below for GPU support
-      # reservations:
       #   devices:
       #     - driver: nvidia
       #       count: 1
       #       capabilities: [gpu]
     # For GPU support, uncomment the following section
     # runtime: nvidia
+    healthcheck:
+      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:80/health"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 30s
 volumes:
   gpustack_data:
@@ -1,23 +1,18 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"
 services:
   grafana:
     <<: *default
     image: grafana/grafana:${GRAFANA_VERSION:-12.1.1}
-    container_name: grafana
     ports:
       - "${GRAFANA_PORT_OVERRIDE:-3000}:3000"
     volumes:
-      - *localtime
-      - *timezone
       - grafana_data:/var/lib/grafana
       - grafana_logs:/var/log/grafana
@@ -25,6 +20,7 @@ services:
       # - ./grafana.ini:/etc/grafana/grafana.ini
       # - ./provisioning:/etc/grafana/provisioning
     environment:
+      - TZ=${TZ:-UTC}
       - GF_SECURITY_ADMIN_USER=${GRAFANA_ADMIN_USER:-admin}
       - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_ADMIN_PASSWORD:-admin}
       - GF_USERS_ALLOW_SIGN_UP=${GRAFANA_ALLOW_SIGN_UP:-false}
@@ -40,6 +36,12 @@ services:
         reservations:
           cpus: '0.25'
           memory: 256M
+    healthcheck:
+      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/api/health"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 30s
 volumes:
   grafana_data:
@@ -1,23 +1,21 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"
 services:
   halo:
     <<: *default
     image: halohub/halo:${HALO_VERSION:-2.21.9}
-    container_name: halo
     ports:
       - "${HALO_PORT:-8090}:8090"
     volumes:
       - halo_data:/root/.halo2
     environment:
+      - TZ=${TZ:-UTC}
       - SPRING_R2DBC_URL=${SPRING_R2DBC_URL:-r2dbc:pool:postgresql://halo-db:5432/halo}
       - SPRING_R2DBC_USERNAME=${POSTGRES_USER:-postgres}
       - SPRING_R2DBC_PASSWORD=${POSTGRES_PASSWORD:-postgres}
@@ -36,12 +34,18 @@ services:
         reservations:
           cpus: '0.5'
           memory: 512M
+    healthcheck:
+      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8090/actuator/health"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 60s
   halo-db:
     <<: *default
     image: postgres:${POSTGRES_VERSION:-17.2-alpine3.21}
-    container_name: halo-db
     environment:
+      - TZ=${TZ:-UTC}
       - POSTGRES_USER=${POSTGRES_USER:-postgres}
       - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-postgres}
       - POSTGRES_DB=${POSTGRES_DB:-halo}
@@ -1,29 +1,27 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"
 services:
   # Harbor Core
   harbor-core:
     <<: *default
     image: goharbor/harbor-core:${HARBOR_VERSION:-v2.12.0}
-    container_name: harbor-core
     depends_on:
-      - harbor-db
-      - harbor-redis
+      harbor-db:
+        condition: service_healthy
+      harbor-redis:
+        condition: service_healthy
     volumes:
-      - *localtime
-      - *timezone
       - harbor_config:/etc/core
       - harbor_ca_download:/etc/core/ca
       - harbor_secret:/etc/core/certificates
     environment:
+      - TZ=${TZ:-UTC}
       - CORE_SECRET=${HARBOR_CORE_SECRET:-}
       - JOBSERVICE_SECRET=${HARBOR_JOBSERVICE_SECRET:-}
       - DATABASE_TYPE=postgresql
@@ -32,7 +30,7 @@ services:
       - POSTGRESQL_USERNAME=postgres
       - POSTGRESQL_PASSWORD=${HARBOR_DB_PASSWORD:-password}
       - POSTGRESQL_DATABASE=registry
-      - REGISTRY_URL=http://registry:5000
+      - REGISTRY_URL=http://harbor-registry:5000
       - TOKEN_SERVICE_URL=http://harbor-core:8080/service/token
       - HARBOR_ADMIN_PASSWORD=${HARBOR_ADMIN_PASSWORD:-Harbor12345}
       - CORE_URL=http://harbor-core:8080
@@ -40,20 +38,26 @@ services:
       - REGISTRY_STORAGE_PROVIDER_NAME=filesystem
       - READ_ONLY=false
       - RELOAD_KEY=${HARBOR_RELOAD_KEY:-}
+    healthcheck:
+      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/api/v2.0/ping"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 30s
   # Harbor JobService
   harbor-jobservice:
     <<: *default
     image: goharbor/harbor-jobservice:${HARBOR_VERSION:-v2.12.0}
-    container_name: harbor-jobservice
     depends_on:
-      - harbor-db
-      - harbor-redis
+      harbor-db:
+        condition: service_healthy
+      harbor-redis:
+        condition: service_healthy
     volumes:
-      - *localtime
-      - *timezone
       - harbor_job_logs:/var/log/jobs
     environment:
+      - TZ=${TZ:-UTC}
       - CORE_SECRET=${HARBOR_CORE_SECRET:-}
       - JOBSERVICE_SECRET=${HARBOR_JOBSERVICE_SECRET:-}
       - CORE_URL=http://harbor-core:8080
@@ -68,49 +72,56 @@ services:
   harbor-registry:
     <<: *default
     image: goharbor/registry-photon:${HARBOR_VERSION:-v2.12.0}
-    container_name: harbor-registry
     volumes:
-      - *localtime
-      - *timezone
       - harbor_registry:/storage
     environment:
+      - TZ=${TZ:-UTC}
       - REGISTRY_HTTP_SECRET=${HARBOR_REGISTRY_SECRET:-}
+    healthcheck:
+      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:5000/"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 10s
   # Harbor Portal (UI)
   harbor-portal:
     <<: *default
     image: goharbor/harbor-portal:${HARBOR_VERSION:-v2.12.0}
-    container_name: harbor-portal
-    volumes:
-      - *localtime
-      - *timezone
+    environment:
+      - TZ=${TZ:-UTC}
   # Harbor Proxy (Nginx)
   harbor-proxy:
     <<: *default
     image: goharbor/nginx-photon:${HARBOR_VERSION:-v2.12.0}
-    container_name: harbor-proxy
     ports:
       - "${HARBOR_HTTP_PORT_OVERRIDE:-80}:8080"
       - "${HARBOR_HTTPS_PORT_OVERRIDE:-443}:8443"
     depends_on:
-      - harbor-core
-      - harbor-portal
-      - harbor-registry
-    volumes:
-      - *localtime
-      - *timezone
+      harbor-core:
+        condition: service_healthy
+      harbor-portal:
+        condition: service_started
+      harbor-registry:
+        condition: service_healthy
+    environment:
+      - TZ=${TZ:-UTC}
+    healthcheck:
+      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 10s
   # Harbor Database
   harbor-db:
     <<: *default
     image: goharbor/harbor-db:${HARBOR_VERSION:-v2.12.0}
-    container_name: harbor-db
     volumes:
-      - *localtime
-      - *timezone
       - harbor_db:/var/lib/postgresql/data
     environment:
+      - TZ=${TZ:-UTC}
       - POSTGRES_PASSWORD=${HARBOR_DB_PASSWORD:-password}
       - POSTGRES_DB=registry
     deploy:
@@ -121,16 +132,21 @@ services:
         reservations:
           cpus: '0.25'
           memory: 256M
+    healthcheck:
+      test: ["CMD-SHELL", "pg_isready -U postgres"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 30s
   # Harbor Redis
   harbor-redis:
     <<: *default
     image: goharbor/redis-photon:${HARBOR_VERSION:-v2.12.0}
-    container_name: harbor-redis
     volumes:
-      - *localtime
-      - *timezone
       - harbor_redis:/var/lib/redis
+    environment:
+      - TZ=${TZ:-UTC}
     deploy:
       resources:
         limits:
@@ -139,6 +155,12 @@ services:
         reservations:
           cpus: '0.10'
           memory: 64M
+    healthcheck:
+      test: ["CMD", "redis-cli", "ping"]
+      interval: 10s
+      timeout: 3s
+      retries: 3
+      start_period: 5s
 volumes:
   harbor_config:
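The harbor-proxy changes above illustrate the general `depends_on` pattern this commit applies everywhere: `service_healthy` for dependencies that define a healthcheck, `service_started` for those (like the portal) that do not. A condensed sketch of the two conditions (service names illustrative):

```yaml
services:
  proxy:
    depends_on:
      core:
        condition: service_healthy  # wait until the dependency's healthcheck passes
      portal:
        condition: service_started  # no healthcheck defined; just wait for the container to start
```

Note that `service_healthy` on a dependency with no `healthcheck` would be a config error, so the split is forced, not stylistic.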
@@ -1,12 +1,10 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"
 services:
   lama-cleaner:
@@ -17,11 +15,10 @@ services:
     build:
       context: .
       dockerfile: Dockerfile
-    # environment:
+    environment:
+      TZ: ${TZ:-UTC}
       # HF_ENDPOINT: https://hf-mirror.com
     volumes:
-      - *localtime
-      - *timezone
       - ./models:/root/.cache
     command:
       - iopaint
@@ -32,8 +29,19 @@ services:
       - --host=0.0.0.0
     deploy:
       resources:
+        limits:
+          cpus: '2.0'
+          memory: 4G
         reservations:
+          cpus: '1.0'
+          memory: 2G
           devices:
             - driver: nvidia
               device_ids: ['0']
               capabilities: [compute, utility]
+    healthcheck:
+      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 60s
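The lama-cleaner service above combines CPU/memory limits with an NVIDIA device reservation. Two caveats worth noting: the `devices` reservation requires the NVIDIA Container Toolkit on the host, and with plain `docker compose` (not Swarm) CPU/memory `reservations` are best-effort while `limits` are enforced. The shape used above, isolated:

```yaml
deploy:
  resources:
    limits:
      cpus: '2.0'      # hard ceiling, enforced by the engine
      memory: 4G
    reservations:
      cpus: '1.0'      # best-effort hint under plain docker compose
      memory: 2G
      devices:
        - driver: nvidia
          device_ids: ['0']               # pin to the first GPU
          capabilities: [compute, utility]
```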
@@ -1,30 +1,26 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"
 services:
   jenkins:
     <<: *default
     image: jenkins/jenkins:${JENKINS_VERSION:-2.486-lts-jdk17}
-    container_name: jenkins
     ports:
       - "${JENKINS_HTTP_PORT_OVERRIDE:-8080}:8080"
       - "${JENKINS_AGENT_PORT_OVERRIDE:-50000}:50000"
     volumes:
-      - *localtime
-      - *timezone
       - jenkins_home:/var/jenkins_home
       - /var/run/docker.sock:/var/run/docker.sock:ro
       # Custom configuration
       # - ./jenkins.yaml:/var/jenkins_home/casc_configs/jenkins.yaml:ro
     environment:
+      - TZ=${TZ:-UTC}
       - JENKINS_OPTS=${JENKINS_OPTS:---httpPort=8080}
       - JAVA_OPTS=${JAVA_OPTS:--Djenkins.install.runSetupWizard=false -Xmx2g}
       - CASC_JENKINS_CONFIG=${CASC_JENKINS_CONFIG:-/var/jenkins_home/casc_configs}
@@ -1,27 +1,23 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"

 services:
   # Zookeeper for Kafka coordination
   zookeeper:
     <<: *default
     image: confluentinc/cp-zookeeper:${KAFKA_VERSION:-7.8.0}
-    container_name: zookeeper
     ports:
       - "${ZOOKEEPER_CLIENT_PORT_OVERRIDE:-2181}:2181"
     volumes:
-      - *localtime
-      - *timezone
       - zookeeper_data:/var/lib/zookeeper/data
       - zookeeper_log:/var/lib/zookeeper/log
     environment:
+      - TZ=${TZ:-UTC}
       - ZOOKEEPER_CLIENT_PORT=2181
       - ZOOKEEPER_TICK_TIME=2000
       - ZOOKEEPER_SYNC_LIMIT=5
@@ -37,22 +33,27 @@ services:
       reservations:
         cpus: '0.25'
         memory: 256M
+    healthcheck:
+      test: ["CMD-SHELL", "echo ruok | nc localhost 2181 | grep imok"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 10s

   # Kafka broker
   kafka:
     <<: *default
     image: confluentinc/cp-kafka:${KAFKA_VERSION:-7.8.0}
-    container_name: kafka
     depends_on:
-      - zookeeper
+      zookeeper:
+        condition: service_healthy
     ports:
       - "${KAFKA_BROKER_PORT_OVERRIDE:-9092}:9092"
       - "${KAFKA_JMX_PORT_OVERRIDE:-9999}:9999"
     volumes:
-      - *localtime
-      - *timezone
       - kafka_data:/var/lib/kafka/data
     environment:
+      - TZ=${TZ:-UTC}
       - KAFKA_BROKER_ID=1
       - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
       - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092
@@ -91,16 +92,15 @@ services:
   kafka-ui:
     <<: *default
     image: provectuslabs/kafka-ui:${KAFKA_UI_VERSION:-latest}
-    container_name: kafka-ui
     depends_on:
-      - kafka
-      - zookeeper
+      kafka:
+        condition: service_healthy
+      zookeeper:
+        condition: service_healthy
     ports:
       - "${KAFKA_UI_PORT_OVERRIDE:-8080}:8080"
-    volumes:
-      - *localtime
-      - *timezone
     environment:
+      - TZ=${TZ:-UTC}
       - KAFKA_CLUSTERS_0_NAME=local
       - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
       - KAFKA_CLUSTERS_0_ZOOKEEPER=zookeeper:2181
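Read as a whole, the orchestration change above follows one pattern: the upstream service gains a `healthcheck`, and its dependents switch from the bare `depends_on` list to the long form with `condition: service_healthy`. A minimal standalone sketch of that pattern (service and image names here are illustrative, not from this repo):

```yaml
services:
  db:
    image: postgres:16.6-alpine3.21
    healthcheck:
      # pg_isready exits 0 once the server accepts connections
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 30s

  app:
    image: example/app:1.0  # hypothetical dependent service
    depends_on:
      db:
        condition: service_healthy  # wait for a passing healthcheck, not just container start
```

With the short list form (`depends_on: [db]`), Compose only waits for the dependency's container to start; the long form blocks `app` until `db` actually reports healthy.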

View File

@@ -1,28 +1,24 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"

 services:
   kibana:
     <<: *default
     image: docker.elastic.co/kibana/kibana:${KIBANA_VERSION:-8.16.1}
-    container_name: kibana
     ports:
       - "${KIBANA_PORT_OVERRIDE:-5601}:5601"
     volumes:
-      - *localtime
-      - *timezone
       - kibana_data:/usr/share/kibana/data
       # Custom configuration
       # - ./kibana.yml:/usr/share/kibana/config/kibana.yml:ro
     environment:
+      - TZ=${TZ:-UTC}
       - SERVERNAME=kibana
       - ELASTICSEARCH_HOSTS=${ELASTICSEARCH_HOSTS:-http://elasticsearch:9200}
       - ELASTICSEARCH_USERNAME=${ELASTICSEARCH_USERNAME:-}

View File

@@ -1,23 +1,21 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"

 services:
   kodbox:
     <<: *default
     image: kodcloud/kodbox:${KODBOX_VERSION:-1.62}
-    container_name: kodbox
     ports:
       - "${KODBOX_PORT:-80}:80"
     volumes:
       - kodbox_data:/var/www/html
     environment:
+      - TZ=${TZ:-UTC}
       - MYSQL_HOST=${MYSQL_HOST:-kodbox-db}
       - MYSQL_PORT=${MYSQL_PORT:-3306}
       - MYSQL_DATABASE=${MYSQL_DATABASE:-kodbox}
@@ -39,12 +37,18 @@ services:
       reservations:
         cpus: '0.5'
         memory: 256M
+    healthcheck:
+      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:80/"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 30s

   kodbox-db:
     <<: *default
     image: mysql:${MYSQL_VERSION:-9.4.0}
-    container_name: kodbox-db
     environment:
+      - TZ=${TZ:-UTC}
       - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD:-root123}
       - MYSQL_DATABASE=${MYSQL_DATABASE:-kodbox}
       - MYSQL_USER=${MYSQL_USER:-kodbox}
@@ -73,11 +77,12 @@ services:
   kodbox-redis:
     <<: *default
     image: redis:${REDIS_VERSION:-8.2.1-alpine3.22}
-    container_name: kodbox-redis
     command:
       - redis-server
       - --requirepass
       - ${REDIS_PASSWORD:-}
+    environment:
+      - TZ=${TZ:-UTC}
     volumes:
       - kodbox_redis_data:/data
     healthcheck:

View File

@@ -1,24 +1,20 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"

 services:
   # Kong Database
   kong-db:
     <<: *default
     image: postgres:${POSTGRES_VERSION:-16.6-alpine3.21}
-    container_name: kong-db
     volumes:
-      - *localtime
-      - *timezone
       - kong_db_data:/var/lib/postgresql/data
     environment:
+      - TZ=${TZ:-UTC}
       - POSTGRES_USER=kong
       - POSTGRES_DB=kong
       - POSTGRES_PASSWORD=${KONG_DB_PASSWORD:-kongpass}
@@ -35,15 +31,17 @@ services:
       interval: 30s
       timeout: 5s
       retries: 5
+      start_period: 30s

   # Kong Database Migration
   kong-migrations:
     <<: *default
     image: kong:${KONG_VERSION:-3.8.0-alpine}
-    container_name: kong-migrations
     depends_on:
-      - kong-db
+      kong-db:
+        condition: service_healthy
     environment:
+      - TZ=${TZ:-UTC}
       - KONG_DATABASE=postgres
       - KONG_PG_HOST=kong-db
       - KONG_PG_USER=kong
@@ -56,22 +54,21 @@ services:
   kong:
     <<: *default
     image: kong:${KONG_VERSION:-3.8.0-alpine}
-    container_name: kong
     depends_on:
-      - kong-db
-      - kong-migrations
+      kong-db:
+        condition: service_healthy
+      kong-migrations:
+        condition: service_completed_successfully
     ports:
       - "${KONG_PROXY_PORT_OVERRIDE:-8000}:8000"
       - "${KONG_PROXY_SSL_PORT_OVERRIDE:-8443}:8443"
       - "${KONG_ADMIN_API_PORT_OVERRIDE:-8001}:8001"
       - "${KONG_ADMIN_SSL_PORT_OVERRIDE:-8444}:8444"
-    volumes:
-      - *localtime
-      - *timezone
       # Custom configuration
       # - ./kong.conf:/etc/kong/kong.conf:ro
     environment:
+      - TZ=${TZ:-UTC}
       - KONG_DATABASE=postgres
       - KONG_PG_HOST=kong-db
       - KONG_PG_USER=kong
@@ -102,16 +99,15 @@ services:
   kong-gui:
     <<: *default
     image: pantsel/konga:${KONGA_VERSION:-latest}
-    container_name: kong-gui
     depends_on:
-      - kong
+      kong:
+        condition: service_healthy
     ports:
       - "${KONG_GUI_PORT_OVERRIDE:-1337}:1337"
     volumes:
-      - *localtime
-      - *timezone
       - konga_data:/app/kongadata
     environment:
+      - TZ=${TZ:-UTC}
       - NODE_ENV=production
       - KONGA_HOOK_TIMEOUT=120000
     deploy:

View File

@@ -1,21 +1,19 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"

 services:
   langfuse-server:
     <<: *default
     image: langfuse/langfuse:${LANGFUSE_VERSION:-3.115.0}
-    container_name: langfuse-server
     ports:
       - "${LANGFUSE_PORT:-3000}:3000"
     environment:
+      - TZ=${TZ:-UTC}
       - DATABASE_URL=postgresql://${POSTGRES_USER:-postgres}:${POSTGRES_PASSWORD:-postgres}@langfuse-db:5432/${POSTGRES_DB:-langfuse}
       - NEXTAUTH_URL=${NEXTAUTH_URL:-http://localhost:3000}
       - NEXTAUTH_SECRET=${NEXTAUTH_SECRET}
@@ -33,12 +31,18 @@ services:
       reservations:
         cpus: '0.5'
         memory: 512M
+    healthcheck:
+      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/api/public/health"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 30s

   langfuse-db:
     <<: *default
     image: postgres:${POSTGRES_VERSION:-17.2-alpine3.21}
-    container_name: langfuse-db
     environment:
+      - TZ=${TZ:-UTC}
       - POSTGRES_USER=${POSTGRES_USER:-postgres}
       - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-postgres}
       - POSTGRES_DB=${POSTGRES_DB:-langfuse}

View File

@@ -1,26 +1,21 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"

 services:
   logstash:
     <<: *default
     image: docker.elastic.co/logstash/logstash:${LOGSTASH_VERSION:-8.16.1}
-    container_name: logstash
     ports:
       - "${LOGSTASH_BEATS_PORT_OVERRIDE:-5044}:5044"
       - "${LOGSTASH_TCP_PORT_OVERRIDE:-5000}:5000/tcp"
       - "${LOGSTASH_UDP_PORT_OVERRIDE:-5000}:5000/udp"
       - "${LOGSTASH_HTTP_PORT_OVERRIDE:-9600}:9600"
     volumes:
-      - *localtime
-      - *timezone
       - logstash_data:/usr/share/logstash/data
       - logstash_logs:/usr/share/logstash/logs
       - ./pipeline:/usr/share/logstash/pipeline:ro
@@ -29,6 +24,7 @@ services:
       # - ./logstash.yml:/usr/share/logstash/config/logstash.yml:ro
       # - ./pipelines.yml:/usr/share/logstash/config/pipelines.yml:ro
     environment:
+      - TZ=${TZ:-UTC}
       - XPACK_MONITORING_ENABLED=${LOGSTASH_MONITORING_ENABLED:-false}
       - XPACK_MONITORING_ELASTICSEARCH_HOSTS=${ELASTICSEARCH_HOSTS:-http://elasticsearch:9200}
       - ELASTICSEARCH_HOSTS=${ELASTICSEARCH_HOSTS:-http://elasticsearch:9200}

View File

@@ -1,17 +1,16 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"

 x-mariadb-galera: &mariadb-galera
   <<: *default
   image: mariadb:${MARIADB_VERSION:-11.7.2}
   environment: &galera-env
+    TZ: ${TZ:-UTC}
     MARIADB_ROOT_PASSWORD: ${MARIADB_ROOT_PASSWORD:-galera}
     MARIADB_GALERA_CLUSTER_NAME: ${MARIADB_GALERA_CLUSTER_NAME:-galera_cluster}
     MARIADB_GALERA_CLUSTER_ADDRESS: gcomm://mariadb-galera-1,mariadb-galera-2,mariadb-galera-3
@@ -34,11 +33,16 @@ x-mariadb-galera: &mariadb-galera
     reservations:
       cpus: '1.0'
       memory: 1G
+  healthcheck:
+    test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
+    interval: 30s
+    timeout: 10s
+    retries: 3
+    start_period: 60s

 services:
   mariadb-galera-1:
     <<: *mariadb-galera
-    container_name: mariadb-galera-1
     hostname: mariadb-galera-1
     ports:
       - "${MARIADB_PORT_1_OVERRIDE:-3306}:3306"
@@ -57,13 +61,10 @@ services:
       - --default_storage_engine=InnoDB
       - --innodb_autoinc_lock_mode=2
     volumes:
-      - *localtime
-      - *timezone
       - mariadb_galera_1_data:/var/lib/mysql

   mariadb-galera-2:
     <<: *mariadb-galera
-    container_name: mariadb-galera-2
     hostname: mariadb-galera-2
     ports:
       - "${MARIADB_PORT_2_OVERRIDE:-3307}:3306"
@@ -81,15 +82,13 @@ services:
       - --default_storage_engine=InnoDB
       - --innodb_autoinc_lock_mode=2
     volumes:
-      - *localtime
-      - *timezone
       - mariadb_galera_2_data:/var/lib/mysql
     depends_on:
-      - mariadb-galera-1
+      mariadb-galera-1:
+        condition: service_healthy

   mariadb-galera-3:
     <<: *mariadb-galera
-    container_name: mariadb-galera-3
     hostname: mariadb-galera-3
     ports:
       - "${MARIADB_PORT_3_OVERRIDE:-3308}:3306"
@@ -107,11 +106,10 @@ services:
       - --default_storage_engine=InnoDB
       - --innodb_autoinc_lock_mode=2
     volumes:
-      - *localtime
-      - *timezone
       - mariadb_galera_3_data:/var/lib/mysql
     depends_on:
-      - mariadb-galera-1
+      mariadb-galera-1:
+        condition: service_healthy

 volumes:
   mariadb_galera_1_data:

View File

@@ -1,12 +1,10 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"

 services:
   milvus-standalone-embed:
@@ -15,14 +13,13 @@ services:
     security_opt:
       - seccomp:unconfined
     environment:
-      - ETCD_USE_EMBED=true
-      - ETCD_DATA_DIR=/var/lib/milvus/etcd
-      - ETCD_CONFIG_PATH=/milvus/configs/embed_etcd.yaml
-      - COMMON_STORAGETYPE=local
-      - DEPLOY_MODE=STANDALONE
+      TZ: ${TZ:-UTC}
+      ETCD_USE_EMBED: "true"
+      ETCD_DATA_DIR: /var/lib/milvus/etcd
+      ETCD_CONFIG_PATH: /milvus/configs/embed_etcd.yaml
+      COMMON_STORAGETYPE: local
+      DEPLOY_MODE: STANDALONE
     volumes:
-      - *localtime
-      - *timezone
       - milvus_data:/var/lib/milvus
       - ./embed_etcd.yaml:/milvus/configs/embed_etcd.yaml
       - ./user.yaml:/milvus/configs/user.yaml
@@ -52,7 +49,8 @@ services:
     profiles:
       - attu
     environment:
-      - MILVUS_URL=${MILVUS_URL:-milvus-standalone-embed:19530}
+      TZ: ${TZ:-UTC}
+      MILVUS_URL: ${MILVUS_URL:-milvus-standalone-embed:19530}
     ports:
       - "${ATTU_OVERRIDE_PORT:-8000}:3000"
     deploy:
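This file also converts `environment` from list syntax to map syntax. The two forms are equivalent in Compose, but the map form requires quoting bare booleans so YAML does not coerce them before they reach the container. A short sketch of both forms:

```yaml
# List form: each entry is the literal string "KEY=value"
environment:
  - ETCD_USE_EMBED=true

# Map form: values are YAML scalars, so the boolean must be quoted
# to arrive in the container as the string "true"
environment:
  ETCD_USE_EMBED: "true"
```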

View File

@@ -1,25 +1,22 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"

 services:
   etcd:
     <<: *default
     image: quay.io/coreos/etcd:${ETCD_VERSION:-v3.5.18}
     environment:
+      - TZ=${TZ:-UTC}
       - ETCD_AUTO_COMPACTION_MODE=revision
       - ETCD_AUTO_COMPACTION_RETENTION=1000
       - ETCD_QUOTA_BACKEND_BYTES=4294967296
       - ETCD_SNAPSHOT_COUNT=50000
     volumes:
-      - *localtime
-      - *timezone
       - etcd_data:/etcd
     command: etcd -advertise-client-urls=http://etcd:2379 -listen-client-urls http://0.0.0.0:2379 --data-dir /etcd
     healthcheck:
@@ -27,6 +24,7 @@ services:
       interval: 30s
       timeout: 20s
       retries: 3
+      start_period: 30s
     deploy:
       resources:
         limits:
@@ -40,14 +38,13 @@ services:
     <<: *default
     image: minio/minio:${MINIO_VERSION:-RELEASE.2024-12-18T13-15-44Z}
     environment:
+      TZ: ${TZ:-UTC}
       MINIO_ROOT_USER: ${MINIO_ROOT_USER:-minioadmin}
       MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD:-minioadmin}
     ports:
       - "${MINIO_PORT_OVERRIDE_API:-9000}:9000"
       - "${MINIO_PORT_OVERRIDE_WEBUI:-9001}:9001"
     volumes:
-      - *localtime
-      - *timezone
       - minio_data:/minio_data
     command: minio server /minio_data --console-address ":9001"
     healthcheck:
@@ -55,6 +52,7 @@ services:
       interval: 30s
       timeout: 20s
       retries: 3
+      start_period: 30s
     deploy:
       resources:
         limits:
@@ -66,11 +64,12 @@ services:
   milvus-standalone:
     <<: *default
-    image: milvusdb/milvus:${MILVUS_VERSION:-v2.6.2}
+    image: milvusdb/milvus:${MILVUS_VERSION:-v2.6.3}
     command: ["milvus", "run", "standalone"]
     security_opt:
       - seccomp:unconfined
     environment:
+      TZ: ${TZ:-UTC}
       ETCD_ENDPOINTS: etcd:2379
       MINIO_ADDRESS: minio:9000
       MQ_TYPE: woodpecker
@@ -86,8 +85,10 @@ services:
       - "${MILVUS_PORT_OVERRIDE_HTTP:-19530}:19530"
       - "${MILVUS_PORT_OVERRIDE_WEBUI:-9091}:9091"
     depends_on:
-      - etcd
-      - minio
+      etcd:
+        condition: service_healthy
+      minio:
+        condition: service_healthy
     deploy:
       resources:
         limits:
@@ -99,13 +100,17 @@ services:
   attu:
     <<: *default
-    image: zilliz/attu:${ATTU_VERSION:-v2.6.0}
+    image: zilliz/attu:${ATTU_VERSION:-v2.6.1}
     profiles:
       - attu
     environment:
+      - TZ=${TZ:-UTC}
       - MILVUS_URL=${MILVUS_URL:-milvus-standalone:19530}
     ports:
       - "${ATTU_PORT_OVERRIDE:-8000}:3000"
+    depends_on:
+      milvus-standalone:
+        condition: service_healthy
     deploy:
       resources:
         limits:
@@ -114,6 +119,12 @@ services:
       reservations:
         cpus: '0.1'
         memory: 128M
+    healthcheck:
+      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 10s

 volumes:
   etcd_data:

View File

@@ -1,19 +1,17 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"

 services:
   minecraft-bedrock:
     <<: *default
     image: itzg/minecraft-bedrock-server:${BEDROCK_VERSION:-latest}
-    container_name: minecraft-bedrock-server
     environment:
+      TZ: ${TZ:-UTC}
       EULA: "${EULA:-TRUE}"
       VERSION: "${MINECRAFT_VERSION:-LATEST}"
       GAMEMODE: "${GAMEMODE:-survival}"
@@ -33,8 +31,6 @@ services:
       - "${SERVER_PORT_OVERRIDE:-19132}:19132/udp"
       - "${SERVER_PORT_V6_OVERRIDE:-19133}:19133/udp"
     volumes:
-      - *localtime
-      - *timezone
       - bedrock_data:/data
     stdin_open: true
     tty: true
@@ -46,6 +42,12 @@ services:
       reservations:
         cpus: '1.0'
         memory: 1G
+    healthcheck:
+      test: ["CMD-SHELL", "[ -f /data/valid_known_packs.json ]"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 60s

 volumes:
   bedrock_data:

View File

@@ -1,17 +1,16 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"

 x-mineru-sglang: &mineru-sglang
   <<: *default
   image: ${MINERU_DOCKER_IMAGE:-alexsuntop/mineru-sglang:2.2.2}
   environment:
+    TZ: ${TZ:-UTC}
     MINERU_MODEL_SOURCE: local
   ulimits:
     memlock: -1
@@ -49,6 +48,10 @@ services:
     healthcheck:
       test: ["CMD-SHELL", "curl -f http://localhost:30000/health || exit 1"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 60s

   mineru-api:
     <<: *mineru-sglang
@@ -65,6 +68,12 @@ services:
       # If running on a single GPU and encountering VRAM shortage, reduce the KV cache size by this parameter,
       # if VRAM issues persist, try lowering it further to `0.4` or below.
       # - --gpu-memory-utilization 0.5
+    healthcheck:
+      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8000/health"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 60s

   mineru-gradio:
     <<: *mineru-sglang
@@ -88,3 +97,9 @@ services:
       # If running on a single GPU and encountering VRAM shortage, reduce the KV cache size by this parameter,
       # if VRAM issues persist, try lowering it further to `0.4` or below.
       # - --gpu-memory-utilization 0.5
+    healthcheck:
+      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:7860/"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 60s

View File

@@ -1,12 +1,10 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"

 x-mineru-vllm: &mineru-vllm
   <<: *default
@@ -15,6 +13,7 @@ x-mineru-vllm: &mineru-vllm
     context: .
     dockerfile: Dockerfile
   environment:
+    TZ: ${TZ:-UTC}
     MINERU_MODEL_SOURCE: local
   ulimits:
     memlock: -1
@@ -36,7 +35,6 @@ x-mineru-vllm: &mineru-vllm
 services:
   mineru-vllm-server:
     <<: *mineru-vllm
-    container_name: mineru-vllm-server
     profiles: ["vllm-server"]
     ports:
       - ${MINERU_PORT_OVERRIDE_VLLM:-30000}:30000
@@ -53,11 +51,14 @@ services:
     healthcheck:
       test: ["CMD-SHELL", "curl -f http://localhost:30000/health || exit 1"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 60s

   mineru-api:
     <<: *mineru-vllm
-    container_name: mineru-api
     profiles: ["api"]
     ports:
       - ${MINERU_PORT_OVERRIDE_API:-8000}:8000
@@ -71,10 +72,15 @@ services:
       # If running on a single GPU and encountering VRAM shortage, reduce the KV cache size by this parameter,
      # if VRAM issues persist, try lowering it further to `0.4` or below.
       # - --gpu-memory-utilization 0.5
+    healthcheck:
+      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8000/health"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 60s

   mineru-gradio:
     <<: *mineru-vllm
-    container_name: mineru-gradio
     profiles: ["gradio"]
     ports:
       - ${MINERU_PORT_OVERRIDE_GRADIO:-7860}:7860
@@ -95,3 +101,9 @@ services:
       # If running on a single GPU and encountering VRAM shortage, reduce the KV cache size by this parameter,
       # if VRAM issues persist, try lowering it further to `0.4` or below.
       # - --gpu-memory-utilization 0.5
+    healthcheck:
+      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:7860/"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 60s

View File

@@ -1,12 +1,10 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"

 services:
   minio:
@@ -16,11 +14,10 @@ services:
       - "${MINIO_PORT_OVERRIDE_API:-9000}:9000"
       - "${MINIO_PORT_OVERRIDE_WEBUI:-9001}:9001"
     environment:
+      TZ: ${TZ:-UTC}
       MINIO_ROOT_USER: ${MINIO_ROOT_USER:-minioadmin}
       MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD:-minioadmin}
     volumes:
-      - *localtime
-      - *timezone
       - minio_data:/data
       - ./config:/root/.minio/
     command: server --console-address ':9001' /data
@@ -30,6 +27,14 @@ services:
       timeout: 20s
       retries: 5
       start_period: 30s
+    deploy:
+      resources:
+        limits:
+          cpus: '1.0'
+          memory: 1G
+        reservations:
+          cpus: '0.5'
+          memory: 512M

 volumes:

View File

@@ -1,25 +1,21 @@
x-default: &default x-default: &default
restart: unless-stopped restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging: logging:
driver: json-file driver: json-file
options: options:
max-size: 100m max-size: 100m
max-file: "3"
services: services:
postgres: postgres:
<<: *default <<: *default
image: postgres:${POSTGRES_VERSION:-17.6-alpine} image: postgres:${POSTGRES_VERSION:-17.6-alpine}
container_name: mlflow-postgres
environment: environment:
TZ: ${TZ:-UTC}
POSTGRES_USER: ${POSTGRES_USER:-mlflow} POSTGRES_USER: ${POSTGRES_USER:-mlflow}
     POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-mlflow}
     POSTGRES_DB: ${POSTGRES_DB:-mlflow}
   volumes:
-    - *localtime
-    - *timezone
     - postgres_data:/var/lib/postgresql/data
   deploy:
     resources:
@@ -29,21 +25,25 @@ services:
       reservations:
         cpus: '0.5'
         memory: 512M
+    healthcheck:
+      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-mlflow}"]
+      interval: 10s
+      timeout: 5s
+      retries: 5
+      start_period: 30s

   minio:
     <<: *default
     image: minio/minio:${MINIO_VERSION:-RELEASE.2025-01-07T16-13-09Z}
-    container_name: mlflow-minio
     command: server /data --console-address ":9001"
     environment:
+      TZ: ${TZ:-UTC}
       MINIO_ROOT_USER: ${MINIO_ROOT_USER:-minio}
       MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD:-minio123}
     ports:
       - "${MINIO_PORT_OVERRIDE:-9000}:9000"
       - "${MINIO_CONSOLE_PORT_OVERRIDE:-9001}:9001"
     volumes:
-      - *localtime
-      - *timezone
       - minio_data:/data
     deploy:
       resources:
@@ -53,13 +53,19 @@ services:
       reservations:
         cpus: '0.5'
         memory: 512M
+    healthcheck:
+      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 30s

   minio-init:
     <<: *default
     image: minio/mc:${MINIO_MC_VERSION:-RELEASE.2025-01-07T17-25-52Z}
-    container_name: mlflow-minio-init
     depends_on:
-      - minio
+      minio:
+        condition: service_healthy
     entrypoint: >
       /bin/sh -c "
       sleep 5;
@@ -72,14 +78,17 @@ services:

   mlflow:
     <<: *default
     image: ghcr.io/mlflow/mlflow:${MLFLOW_VERSION:-v2.20.2}
-    container_name: mlflow
     depends_on:
-      - postgres
-      - minio
-      - minio-init
+      postgres:
+        condition: service_healthy
+      minio:
+        condition: service_healthy
+      minio-init:
+        condition: service_completed_successfully
     ports:
       - "${MLFLOW_PORT_OVERRIDE:-5000}:5000"
     environment:
+      TZ: ${TZ:-UTC}
       MLFLOW_BACKEND_STORE_URI: postgresql://${POSTGRES_USER:-mlflow}:${POSTGRES_PASSWORD:-mlflow}@postgres:5432/${POSTGRES_DB:-mlflow}
       MLFLOW_ARTIFACT_ROOT: s3://${MINIO_BUCKET:-mlflow}/
       MLFLOW_S3_ENDPOINT_URL: http://minio:9000
@@ -104,6 +113,12 @@ services:
       reservations:
         cpus: '1.0'
         memory: 1G
+    healthcheck:
+      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:5000/health"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 30s

 volumes:
   postgres_data:
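The healthchecks introduced in this commit all share the interval/timeout/retries/start_period shape. Docker only counts consecutive failures after `start_period` has elapsed, so a rough upper bound on the time before a container is flagged unhealthy is `start_period + retries * (interval + timeout)`. A quick sketch with the common 30s/10s/3/30s values used here (this is an approximation for reasoning about `depends_on: condition: service_healthy` waits, not Docker's exact scheduler):

```shell
# Upper-bound estimate for time-to-unhealthy with the common
# 30s/10s/3/30s healthcheck values used in these compose files.
start_period=30; interval=30; timeout=10; retries=3
worst_case=$((start_period + retries * (interval + timeout)))
echo "worst case: ${worst_case}s"   # prints: worst case: 150s
```

Probes that should gate other services quickly (like `pg_isready` above) use a tighter 10s/5s/5 cycle for the same reason.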
@@ -48,6 +48,7 @@ This service sets up a MongoDB replica set with three members.
 ## Configuration

+- `TZ`: The timezone for the container, default is `UTC`.
 - `MONGO_VERSION`: The version of the MongoDB image, default is `8.0.13`.
 - `MONGO_INITDB_ROOT_USERNAME`: The root username for the database, default is `root`.
 - `MONGO_INITDB_ROOT_PASSWORD`: The root password for the database, default is `password`.
@@ -60,3 +61,7 @@ This service sets up a MongoDB replica set with three members.
 ## Volumes

 - `secrets/rs0.key`: The key file for authenticating members of the replica set.
+
+## Security
+
+The replica set key file is mounted read-only and copied to `/tmp` inside the container with restrictive permissions (400). This approach ensures cross-platform compatibility (Windows/Linux/macOS) while meeting mongod's key-file permission requirements. The key file on the host system is never modified.
@@ -48,6 +48,7 @@
 ## 配置

+- `TZ`: 容器的时区,默认为 `UTC`。
 - `MONGO_VERSION`: MongoDB 镜像的版本,默认为 `8.0.13`。
 - `MONGO_INITDB_ROOT_USERNAME`: 数据库的 root 用户名,默认为 `root`。
 - `MONGO_INITDB_ROOT_PASSWORD`: 数据库的 root 密码,默认为 `password`。
@@ -60,3 +61,7 @@
 ## 卷

 - `secrets/rs0.key`: 用于副本集成员之间认证的密钥文件。
+
+## 安全性
+
+副本集密钥文件以只读方式挂载,并在容器内复制到 `/tmp` 目录,设置适当的权限(400)。这种方法确保了跨平台兼容性(Windows/Linux/macOS),同时满足安全要求。主机系统上的密钥文件永远不会被修改。
@@ -1,8 +1,5 @@
x-default: &default x-default: &default
restart: unless-stopped restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging: logging:
driver: json-file driver: json-file
options: options:
@@ -12,26 +9,21 @@ x-mongo: &mongo
<<: *default <<: *default
image: mongo:${MONGO_VERSION:-8.0.13} image: mongo:${MONGO_VERSION:-8.0.13}
environment: environment:
TZ: ${TZ:-UTC}
MONGO_INITDB_ROOT_USERNAME: ${MONGO_INITDB_ROOT_USERNAME:-root} MONGO_INITDB_ROOT_USERNAME: ${MONGO_INITDB_ROOT_USERNAME:-root}
MONGO_INITDB_ROOT_PASSWORD: ${MONGO_INITDB_ROOT_PASSWORD:-password} MONGO_INITDB_ROOT_PASSWORD: ${MONGO_INITDB_ROOT_PASSWORD:-password}
MONGO_INITDB_DATABASE: ${MONGO_INITDB_DATABASE:-admin} MONGO_INITDB_DATABASE: ${MONGO_INITDB_DATABASE:-admin}
command:
- mongod
- --replSet
- ${MONGO_REPLICA_SET_NAME:-rs0}
- --keyFile
- /secrets/rs0.key
volumes: volumes:
- *localtime - ./secrets/rs0.key:/data/rs0.key:ro
- *timezone
- ./secrets/rs0.key:/secrets/rs0.key
entrypoint: entrypoint:
- bash - bash
- -c - -c
- | - |
chmod 400 /secrets/rs0.key cp /data/rs0.key /tmp/rs0.key
chown 999:999 /secrets/rs0.key chmod 400 /tmp/rs0.key
exec docker-entrypoint.sh $$@ chown 999:999 /tmp/rs0.key
export MONGO_INITDB_ROOT_USERNAME MONGO_INITDB_ROOT_PASSWORD MONGO_INITDB_DATABASE
exec docker-entrypoint.sh mongod --replSet ${MONGO_REPLICA_SET_NAME:-rs0} --keyFile /tmp/rs0.key
deploy: deploy:
resources: resources:
limits: limits:
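The rewritten entrypoint above works around the fact that a key file bind-mounted from a Windows or macOS host often cannot be `chmod`ed in place: it copies the read-only mount to a container-private path and tightens permissions on the copy, which mongod requires before it will use a key file. A minimal sketch of the same copy-then-restrict sequence outside Docker (paths are stand-ins for the mounted `/data/rs0.key`; the real entrypoint also runs `chown 999:999`, skipped here because it needs root, and GNU `stat` is assumed):

```shell
# Simulate the entrypoint's copy-then-restrict sequence.
src=$(mktemp)                        # stand-in for the read-only mount
printf 'replica-set-shared-key\n' > "$src"
cp "$src" /tmp/rs0.key.demo          # private, writable copy
chmod 400 /tmp/rs0.key.demo          # mongod rejects group/world-readable key files
stat -c '%a' /tmp/rs0.key.demo       # prints: 400
rm -f "$src" /tmp/rs0.key.demo
```

The host file is only ever read, so the read-only `:ro` mount is safe on every platform.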
@@ -1,26 +1,23 @@
x-default: &default x-default: &default
restart: unless-stopped restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging: logging:
driver: json-file driver: json-file
options: options:
max-size: 100m max-size: 100m
max-file: "3"
services: services:
mongo: mongo:
<<: *default <<: *default
image: mongo:${MONGO_VERSION:-8.0.13} image: mongo:${MONGO_VERSION:-8.0.13}
environment: environment:
TZ: ${TZ:-UTC}
MONGO_INITDB_ROOT_USERNAME: ${MONGO_INITDB_ROOT_USERNAME:-root} MONGO_INITDB_ROOT_USERNAME: ${MONGO_INITDB_ROOT_USERNAME:-root}
MONGO_INITDB_ROOT_PASSWORD: ${MONGO_INITDB_ROOT_PASSWORD:-password} MONGO_INITDB_ROOT_PASSWORD: ${MONGO_INITDB_ROOT_PASSWORD:-password}
MONGO_INITDB_DATABASE: ${MONGO_INITDB_DATABASE:-admin} MONGO_INITDB_DATABASE: ${MONGO_INITDB_DATABASE:-admin}
ports: ports:
- "${MONGO_PORT_OVERRIDE:-27017}:27017" - "${MONGO_PORT_OVERRIDE:-27017}:27017"
volumes: volumes:
- *localtime
- *timezone
- mongo_data:/data/db - mongo_data:/data/db
deploy: deploy:
resources: resources:
@@ -30,6 +27,12 @@ services:
reservations: reservations:
cpus: '0.25' cpus: '0.25'
memory: 256M memory: 256M
healthcheck:
test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes: volumes:
mongo_data: mongo_data:
@@ -1,12 +1,10 @@
x-default: &default x-default: &default
restart: unless-stopped restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging: logging:
driver: json-file driver: json-file
options: options:
max-size: 100m max-size: 100m
max-file: "3"
services: services:
mysql: mysql:
@@ -15,16 +13,28 @@ services:
ports: ports:
- "${MYSQL_PORT_OVERRIDE:-3306}:3306" - "${MYSQL_PORT_OVERRIDE:-3306}:3306"
volumes: volumes:
- *localtime
- *timezone
- mysql_data:/var/lib/mysql - mysql_data:/var/lib/mysql
# Initialize database with scripts in ./init.sql # Initialize database with scripts in ./init.sql
# - ./init.sql:/docker-entrypoint-initdb.d/init.sql:ro # - ./init.sql:/docker-entrypoint-initdb.d/init.sql:ro
environment: environment:
TZ: ${TZ:-UTC}
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD:-password} MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD:-password}
MYSQL_ROOT_HOST: ${MYSQL_ROOT_HOST:-%} MYSQL_ROOT_HOST: ${MYSQL_ROOT_HOST:-%}
deploy:
resources:
limits:
cpus: '2.0'
memory: 2G
reservations:
cpus: '0.5'
memory: 512M
healthcheck:
test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-u", "root", "-p$$MYSQL_ROOT_PASSWORD"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes: volumes:
mysql_data: mysql_data:
@@ -1,23 +1,21 @@
x-default: &default x-default: &default
restart: unless-stopped restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging: logging:
driver: json-file driver: json-file
options: options:
max-size: 100m max-size: 100m
max-file: "3"
services: services:
n8n: n8n:
<<: *default <<: *default
image: n8nio/n8n:${N8N_VERSION:-1.114.0} image: n8nio/n8n:${N8N_VERSION:-1.114.0}
container_name: n8n
ports: ports:
- "${N8N_PORT:-5678}:5678" - "${N8N_PORT:-5678}:5678"
volumes: volumes:
- n8n_data:/home/node/.n8n - n8n_data:/home/node/.n8n
environment: environment:
- TZ=${TZ:-UTC}
- N8N_BASIC_AUTH_ACTIVE=${N8N_BASIC_AUTH_ACTIVE:-true} - N8N_BASIC_AUTH_ACTIVE=${N8N_BASIC_AUTH_ACTIVE:-true}
- N8N_BASIC_AUTH_USER=${N8N_BASIC_AUTH_USER:-} - N8N_BASIC_AUTH_USER=${N8N_BASIC_AUTH_USER:-}
- N8N_BASIC_AUTH_PASSWORD=${N8N_BASIC_AUTH_PASSWORD:-} - N8N_BASIC_AUTH_PASSWORD=${N8N_BASIC_AUTH_PASSWORD:-}
@@ -26,7 +24,6 @@ services:
- N8N_PROTOCOL=${N8N_PROTOCOL:-http} - N8N_PROTOCOL=${N8N_PROTOCOL:-http}
- WEBHOOK_URL=${WEBHOOK_URL:-http://localhost:5678/} - WEBHOOK_URL=${WEBHOOK_URL:-http://localhost:5678/}
- GENERIC_TIMEZONE=${GENERIC_TIMEZONE:-UTC} - GENERIC_TIMEZONE=${GENERIC_TIMEZONE:-UTC}
- TZ=${TZ:-UTC}
# Database configuration (optional, uses SQLite by default) # Database configuration (optional, uses SQLite by default)
- DB_TYPE=${DB_TYPE:-sqlite} - DB_TYPE=${DB_TYPE:-sqlite}
- DB_POSTGRESDB_DATABASE=${DB_POSTGRESDB_DATABASE:-n8n} - DB_POSTGRESDB_DATABASE=${DB_POSTGRESDB_DATABASE:-n8n}
@@ -50,12 +47,18 @@ services:
reservations: reservations:
cpus: '0.5' cpus: '0.5'
memory: 512M memory: 512M
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:5678/healthz"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
n8n-db: n8n-db:
<<: *default <<: *default
image: postgres:${POSTGRES_VERSION:-17.2-alpine3.21} image: postgres:${POSTGRES_VERSION:-17.2-alpine3.21}
container_name: n8n-db
environment: environment:
- TZ=${TZ:-UTC}
- POSTGRES_USER=${DB_POSTGRESDB_USER:-n8n} - POSTGRES_USER=${DB_POSTGRESDB_USER:-n8n}
- POSTGRES_PASSWORD=${DB_POSTGRESDB_PASSWORD:-n8n123} - POSTGRES_PASSWORD=${DB_POSTGRESDB_PASSWORD:-n8n123}
- POSTGRES_DB=${DB_POSTGRESDB_DATABASE:-n8n} - POSTGRES_DB=${DB_POSTGRESDB_DATABASE:-n8n}
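Every tunable in these files uses the `${VAR:-default}` form, which Compose resolves from the host environment or a `.env` file and which falls back to the default when the variable is unset or empty. The same expansion rule exists in POSIX shell, so the behavior can be sanity-checked directly (`N8N_PORT` here is just a convenient example variable):

```shell
# ${VAR:-default} falls back when VAR is unset *or* empty.
unset N8N_PORT
echo "${N8N_PORT:-5678}"   # prints: 5678 (unset -> default)
N8N_PORT=
echo "${N8N_PORT:-5678}"   # prints: 5678 (empty -> default)
N8N_PORT=8080
echo "${N8N_PORT:-5678}"   # prints: 8080 (set -> value)
```

This is why empty assignments like `N8N_BASIC_AUTH_USER=${N8N_BASIC_AUTH_USER:-}` are safe: an unset variable simply becomes an empty string instead of an error.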
@@ -1,27 +1,23 @@
x-default: &default x-default: &default
restart: unless-stopped restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging: logging:
driver: json-file driver: json-file
options: options:
max-size: 100m max-size: 100m
max-file: "3"
services: services:
nacos: nacos:
<<: *default <<: *default
image: nacos/nacos-server:${NACOS_VERSION:-v3.1.0-slim} image: nacos/nacos-server:${NACOS_VERSION:-v3.1.0-slim}
container_name: nacos
ports: ports:
- "${NACOS_HTTP_PORT_OVERRIDE:-8848}:8848" - "${NACOS_HTTP_PORT_OVERRIDE:-8848}:8848"
- "${NACOS_GRPC_PORT_OVERRIDE:-9848}:9848" - "${NACOS_GRPC_PORT_OVERRIDE:-9848}:9848"
- "${NACOS_GRPC_PORT2_OVERRIDE:-9849}:9849" - "${NACOS_GRPC_PORT2_OVERRIDE:-9849}:9849"
volumes: volumes:
- *localtime
- *timezone
- nacos_logs:/home/nacos/logs - nacos_logs:/home/nacos/logs
environment: environment:
- TZ=${TZ:-UTC}
- MODE=${NACOS_MODE:-standalone} - MODE=${NACOS_MODE:-standalone}
- PREFER_HOST_MODE=hostname - PREFER_HOST_MODE=hostname
- NACOS_AUTH_ENABLE=${NACOS_AUTH_ENABLE:-true} - NACOS_AUTH_ENABLE=${NACOS_AUTH_ENABLE:-true}
@@ -40,6 +36,12 @@ services:
reservations: reservations:
cpus: '0.5' cpus: '0.5'
memory: 512M memory: 512M
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8848/nacos/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes: volumes:
nacos_logs: nacos_logs:
@@ -1,19 +1,17 @@
x-default: &default x-default: &default
restart: unless-stopped restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging: logging:
driver: json-file driver: json-file
options: options:
max-size: 100m max-size: 100m
max-file: "3"
services: services:
metad: metad:
<<: *default <<: *default
image: vesoft/nebula-metad:${NEBULA_VERSION:-v3.8.0} image: vesoft/nebula-metad:${NEBULA_VERSION:-v3.8.0}
container_name: nebula-metad
environment: environment:
- TZ=${TZ:-UTC}
- USER=root - USER=root
command: command:
- --meta_server_addrs=metad:9559 - --meta_server_addrs=metad:9559
@@ -23,8 +21,6 @@ services:
- --data_path=/data/meta - --data_path=/data/meta
- --log_dir=/logs - --log_dir=/logs
volumes: volumes:
- *localtime
- *timezone
- nebula_meta_data:/data/meta - nebula_meta_data:/data/meta
- nebula_meta_logs:/logs - nebula_meta_logs:/logs
ports: ports:
@@ -36,12 +32,21 @@ services:
limits: limits:
cpus: '0.5' cpus: '0.5'
memory: 512M memory: 512M
reservations:
cpus: '0.25'
memory: 256M
healthcheck:
test: ["CMD", "/usr/local/nebula/bin/nebula-metad", "--version"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
storaged: storaged:
<<: *default <<: *default
image: vesoft/nebula-storaged:${NEBULA_VERSION:-v3.8.0} image: vesoft/nebula-storaged:${NEBULA_VERSION:-v3.8.0}
container_name: nebula-storaged
environment: environment:
- TZ=${TZ:-UTC}
- USER=root - USER=root
command: command:
- --meta_server_addrs=metad:9559 - --meta_server_addrs=metad:9559
@@ -51,10 +56,9 @@ services:
- --data_path=/data/storage - --data_path=/data/storage
- --log_dir=/logs - --log_dir=/logs
depends_on: depends_on:
- metad metad:
condition: service_healthy
volumes: volumes:
- *localtime
- *timezone
- nebula_storage_data:/data/storage - nebula_storage_data:/data/storage
- nebula_storage_logs:/logs - nebula_storage_logs:/logs
ports: ports:
@@ -66,12 +70,21 @@ services:
limits: limits:
cpus: '1.0' cpus: '1.0'
memory: 1G memory: 1G
reservations:
cpus: '0.5'
memory: 512M
healthcheck:
test: ["CMD", "/usr/local/nebula/bin/nebula-storaged", "--version"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
graphd: graphd:
<<: *default <<: *default
image: vesoft/nebula-graphd:${NEBULA_VERSION:-v3.8.0} image: vesoft/nebula-graphd:${NEBULA_VERSION:-v3.8.0}
container_name: nebula-graphd
environment: environment:
- TZ=${TZ:-UTC}
- USER=root - USER=root
command: command:
- --meta_server_addrs=metad:9559 - --meta_server_addrs=metad:9559
@@ -80,11 +93,11 @@ services:
- --ws_ip=graphd - --ws_ip=graphd
- --log_dir=/logs - --log_dir=/logs
depends_on: depends_on:
- metad metad:
- storaged condition: service_healthy
storaged:
condition: service_healthy
volumes: volumes:
- *localtime
- *timezone
- nebula_graph_logs:/logs - nebula_graph_logs:/logs
ports: ports:
- "${NEBULA_GRAPHD_PORT_OVERRIDE:-9669}:9669" - "${NEBULA_GRAPHD_PORT_OVERRIDE:-9669}:9669"
@@ -95,6 +108,15 @@ services:
limits: limits:
cpus: '1.0' cpus: '1.0'
memory: 1G memory: 1G
reservations:
cpus: '0.5'
memory: 512M
healthcheck:
test: ["CMD", "/usr/local/nebula/bin/nebula-graphd", "--version"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes: volumes:
nebula_meta_data: nebula_meta_data:
@@ -1,29 +1,25 @@
x-default: &default x-default: &default
restart: unless-stopped restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging: logging:
driver: json-file driver: json-file
options: options:
max-size: 100m max-size: 100m
max-file: "3"
services: services:
neo4j: neo4j:
<<: *default <<: *default
image: neo4j:${NEO4J_VERSION:-5.27.4-community} image: neo4j:${NEO4J_VERSION:-5.27.4-community}
container_name: neo4j
ports: ports:
- "${NEO4J_HTTP_PORT_OVERRIDE:-7474}:7474" - "${NEO4J_HTTP_PORT_OVERRIDE:-7474}:7474"
- "${NEO4J_BOLT_PORT_OVERRIDE:-7687}:7687" - "${NEO4J_BOLT_PORT_OVERRIDE:-7687}:7687"
volumes: volumes:
- *localtime
- *timezone
- neo4j_data:/data - neo4j_data:/data
- neo4j_logs:/logs - neo4j_logs:/logs
- neo4j_import:/var/lib/neo4j/import - neo4j_import:/var/lib/neo4j/import
- neo4j_plugins:/plugins - neo4j_plugins:/plugins
environment: environment:
- TZ=${TZ:-UTC}
- NEO4J_AUTH=${NEO4J_AUTH:-neo4j/password} - NEO4J_AUTH=${NEO4J_AUTH:-neo4j/password}
- NEO4J_ACCEPT_LICENSE_AGREEMENT=${NEO4J_ACCEPT_LICENSE_AGREEMENT:-yes} - NEO4J_ACCEPT_LICENSE_AGREEMENT=${NEO4J_ACCEPT_LICENSE_AGREEMENT:-yes}
- NEO4J_dbms_memory_pagecache_size=${NEO4J_PAGECACHE_SIZE:-512M} - NEO4J_dbms_memory_pagecache_size=${NEO4J_PAGECACHE_SIZE:-512M}
@@ -37,6 +33,12 @@ services:
reservations: reservations:
cpus: '0.5' cpus: '0.5'
memory: 1G memory: 1G
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:7474/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
volumes: volumes:
neo4j_data: neo4j_data:
@@ -1,24 +1,19 @@
x-default: &default x-default: &default
restart: unless-stopped restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging: logging:
driver: json-file driver: json-file
options: options:
max-size: 100m max-size: 100m
max-file: "3"
services: services:
nginx: nginx:
<<: *default <<: *default
image: nginx:${NGINX_VERSION:-1.29.2-alpine3.22} image: nginx:${NGINX_VERSION:-1.29.2-alpine3.22}
container_name: nginx
ports: ports:
- "${NGINX_HTTP_PORT_OVERRIDE:-80}:80" - "${NGINX_HTTP_PORT_OVERRIDE:-80}:80"
- "${NGINX_HTTPS_PORT_OVERRIDE:-443}:443" - "${NGINX_HTTPS_PORT_OVERRIDE:-443}:443"
volumes: volumes:
- *localtime
- *timezone
- nginx_logs:/var/log/nginx - nginx_logs:/var/log/nginx
- ./html:/usr/share/nginx/html:ro - ./html:/usr/share/nginx/html:ro
@@ -27,6 +22,7 @@ services:
# - ./conf.d:/etc/nginx/conf.d:ro # - ./conf.d:/etc/nginx/conf.d:ro
# - ./ssl:/etc/nginx/ssl:ro # - ./ssl:/etc/nginx/ssl:ro
environment: environment:
- TZ=${TZ:-UTC}
- NGINX_HOST=${NGINX_HOST:-localhost} - NGINX_HOST=${NGINX_HOST:-localhost}
- NGINX_PORT=${NGINX_PORT:-80} - NGINX_PORT=${NGINX_PORT:-80}
deploy: deploy:
@@ -37,6 +33,12 @@ services:
reservations: reservations:
cpus: '0.25' cpus: '0.25'
memory: 64M memory: 64M
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:80/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
volumes: volumes:
nginx_logs: nginx_logs:
@@ -1,18 +1,15 @@
x-default: &default x-default: &default
restart: unless-stopped restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging: logging:
driver: json-file driver: json-file
options: options:
max-size: 100m max-size: 100m
max-file: "3"
services: services:
node-exporter: node-exporter:
<<: *default <<: *default
image: prom/node-exporter:${NODE_EXPORTER_VERSION:-v1.8.2} image: prom/node-exporter:${NODE_EXPORTER_VERSION:-v1.8.2}
container_name: node-exporter
ports: ports:
- "${NODE_EXPORTER_PORT_OVERRIDE:-9100}:9100" - "${NODE_EXPORTER_PORT_OVERRIDE:-9100}:9100"
command: command:
@@ -20,6 +17,8 @@ services:
- '--path.procfs=/host/proc' - '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys' - '--path.sysfs=/host/sys'
- '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)' - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
environment:
- TZ=${TZ:-UTC}
volumes: volumes:
- '/:/host:ro,rslave' - '/:/host:ro,rslave'
deploy: deploy:
@@ -30,6 +29,12 @@ services:
reservations: reservations:
cpus: '0.1' cpus: '0.1'
memory: 64M memory: 64M
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:9100/metrics"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
# Run with host network for accurate metrics # Run with host network for accurate metrics
# network_mode: host # network_mode: host
@@ -1,28 +1,25 @@
x-default: &default x-default: &default
restart: unless-stopped restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging: logging:
driver: json-file driver: json-file
options: options:
max-size: 100m max-size: 100m
max-file: "3"
services: services:
odoo: odoo:
<<: *default <<: *default
image: odoo:${ODOO_VERSION:-19.0} image: odoo:${ODOO_VERSION:-19.0}
container_name: odoo
depends_on: depends_on:
- odoo-db odoo-db:
condition: service_healthy
ports: ports:
- "${ODOO_PORT_OVERRIDE:-8069}:8069" - "${ODOO_PORT_OVERRIDE:-8069}:8069"
volumes: volumes:
- *localtime
- *timezone
- odoo_web_data:/var/lib/odoo - odoo_web_data:/var/lib/odoo
- odoo_addons:/mnt/extra-addons - odoo_addons:/mnt/extra-addons
environment: environment:
- TZ=${TZ:-UTC}
- HOST=odoo-db - HOST=odoo-db
- USER=${POSTGRES_USER:-odoo} - USER=${POSTGRES_USER:-odoo}
- PASSWORD=${POSTGRES_PASSWORD:-odoopass} - PASSWORD=${POSTGRES_PASSWORD:-odoopass}
@@ -36,19 +33,23 @@ services:
reservations: reservations:
cpus: '0.5' cpus: '0.5'
memory: 1G memory: 1G
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8069/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
odoo-db: odoo-db:
<<: *default <<: *default
image: postgres:${POSTGRES_VERSION:-17-alpine} image: postgres:${POSTGRES_VERSION:-17-alpine}
container_name: odoo-db
environment: environment:
- TZ=${TZ:-UTC}
- POSTGRES_USER=${POSTGRES_USER:-odoo} - POSTGRES_USER=${POSTGRES_USER:-odoo}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-odoopass} - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-odoopass}
- POSTGRES_DB=${POSTGRES_DB:-postgres} - POSTGRES_DB=${POSTGRES_DB:-postgres}
- PGDATA=/var/lib/postgresql/data/pgdata - PGDATA=/var/lib/postgresql/data/pgdata
volumes: volumes:
- *localtime
- *timezone
- odoo_db_data:/var/lib/postgresql/data - odoo_db_data:/var/lib/postgresql/data
deploy: deploy:
resources: resources:
@@ -58,6 +59,12 @@ services:
reservations: reservations:
cpus: '0.25' cpus: '0.25'
memory: 512M memory: 512M
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-odoo}"]
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
volumes: volumes:
odoo_web_data: odoo_web_data:
@@ -1,12 +1,10 @@
x-default: &default x-default: &default
restart: unless-stopped restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging: logging:
driver: json-file driver: json-file
options: options:
max-size: 100m max-size: 100m
max-file: "3"
services: services:
ollama: ollama:
@@ -15,9 +13,9 @@ services:
ports: ports:
- "${OLLAMA_PORT_OVERRIDE:-11434}:11434" - "${OLLAMA_PORT_OVERRIDE:-11434}:11434"
volumes: volumes:
- *localtime
- *timezone
- ollama_models:/root/.ollama - ollama_models:/root/.ollama
environment:
- TZ=${TZ:-UTC}
ipc: host ipc: host
deploy: deploy:
resources: resources:
@@ -31,6 +29,12 @@ services:
- driver: nvidia - driver: nvidia
device_ids: [ '0' ] device_ids: [ '0' ]
capabilities: [ gpu ] capabilities: [ gpu ]
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:11434/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes: volumes:
ollama_models: ollama_models:
@@ -1,12 +1,10 @@
x-default: &default x-default: &default
restart: unless-stopped restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging: logging:
driver: json-file driver: json-file
options: options:
max-size: 100m max-size: 100m
max-file: "3"
services: services:
open_webui: open_webui:
@@ -15,9 +13,9 @@ services:
ports: ports:
- "${OPEN_WEBUI_PORT_OVERRIDE:-8080}:8080" - "${OPEN_WEBUI_PORT_OVERRIDE:-8080}:8080"
volumes: volumes:
- *localtime
- *timezone
- open_webui_data:/app/backend/data - open_webui_data:/app/backend/data
environment:
- TZ=${TZ:-UTC}
env_file: env_file:
- .env - .env
deploy: deploy:
@@ -28,6 +26,12 @@ services:
reservations: reservations:
cpus: '0.1' cpus: '0.1'
memory: 128M memory: 128M
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes: volumes:
open_webui_data: open_webui_data:
@@ -1,12 +1,10 @@
x-default: &default x-default: &default
restart: unless-stopped restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging: logging:
driver: json-file driver: json-file
options: options:
max-size: 100m max-size: 100m
max-file: "3"
services: services:
# Note: OpenCoze is a complex platform that requires multiple services. # Note: OpenCoze is a complex platform that requires multiple services.
@@ -15,7 +13,8 @@ services:
opencoze-info: opencoze-info:
image: alpine:latest image: alpine:latest
container_name: opencoze-info environment:
- TZ=${TZ:-UTC}
command: > command: >
sh -c "echo 'OpenCoze requires a complex multi-service setup.' && sh -c "echo 'OpenCoze requires a complex multi-service setup.' &&
echo 'Please visit https://github.com/coze-dev/coze-studio for full deployment instructions.' && echo 'Please visit https://github.com/coze-dev/coze-studio for full deployment instructions.' &&
@@ -1,29 +1,24 @@
x-default: &default x-default: &default
restart: unless-stopped restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging: logging:
driver: json-file driver: json-file
options: options:
max-size: 100m max-size: 100m
max-file: "3"
services: services:
openlist: openlist:
<<: *default <<: *default
image: openlistteam/openlist:${OPENLIST_VERSION:-latest} image: openlistteam/openlist:${OPENLIST_VERSION:-latest}
container_name: openlist
ports: ports:
- "${OPENLIST_PORT_OVERRIDE:-5244}:5244" - "${OPENLIST_PORT_OVERRIDE:-5244}:5244"
volumes: volumes:
- *localtime
- *timezone
- openlist_data:/opt/openlist/data - openlist_data:/opt/openlist/data
environment: environment:
- TZ=${TZ:-UTC}
- PUID=${PUID:-0} - PUID=${PUID:-0}
- PGID=${PGID:-0} - PGID=${PGID:-0}
- UMASK=${UMASK:-022} - UMASK=${UMASK:-022}
- TZ=${TZ:-Asia/Shanghai}
deploy: deploy:
resources: resources:
limits: limits:
@@ -32,6 +27,12 @@ services:
reservations: reservations:
cpus: '0.25' cpus: '0.25'
memory: 256M memory: 256M
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:5244/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes: volumes:
openlist_data: openlist_data:
@@ -1,19 +1,17 @@
x-default: &default x-default: &default
restart: unless-stopped restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging: logging:
driver: json-file driver: json-file
options: options:
max-size: 100m max-size: 100m
max-file: "3"
services: services:
opensearch: opensearch:
<<: *default <<: *default
image: opensearchproject/opensearch:${OPENSEARCH_VERSION:-2.19.0} image: opensearchproject/opensearch:${OPENSEARCH_VERSION:-2.19.0}
container_name: opensearch
environment: environment:
TZ: ${TZ:-UTC}
cluster.name: ${CLUSTER_NAME:-opensearch-cluster} cluster.name: ${CLUSTER_NAME:-opensearch-cluster}
node.name: opensearch node.name: opensearch
discovery.type: single-node discovery.type: single-node
@@ -32,8 +30,6 @@ services:
- "${OPENSEARCH_PORT_OVERRIDE:-9200}:9200" - "${OPENSEARCH_PORT_OVERRIDE:-9200}:9200"
- "${OPENSEARCH_PERF_ANALYZER_PORT_OVERRIDE:-9600}:9600" - "${OPENSEARCH_PERF_ANALYZER_PORT_OVERRIDE:-9600}:9600"
volumes: volumes:
- *localtime
- *timezone
- opensearch_data:/usr/share/opensearch/data - opensearch_data:/usr/share/opensearch/data
deploy: deploy:
resources: resources:
@@ -43,18 +39,25 @@ services:
reservations: reservations:
cpus: '1.0' cpus: '1.0'
memory: 1G memory: 1G
healthcheck:
test: ["CMD-SHELL", "curl -f http://localhost:9200/_cluster/health || exit 1"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
opensearch-dashboards: opensearch-dashboards:
<<: *default <<: *default
image: opensearchproject/opensearch-dashboards:${OPENSEARCH_DASHBOARDS_VERSION:-2.19.0} image: opensearchproject/opensearch-dashboards:${OPENSEARCH_DASHBOARDS_VERSION:-2.19.0}
container_name: opensearch-dashboards
ports: ports:
- "${OPENSEARCH_DASHBOARDS_PORT_OVERRIDE:-5601}:5601" - "${OPENSEARCH_DASHBOARDS_PORT_OVERRIDE:-5601}:5601"
environment: environment:
TZ: ${TZ:-UTC}
OPENSEARCH_HOSTS: '["https://opensearch:9200"]' OPENSEARCH_HOSTS: '["https://opensearch:9200"]'
DISABLE_SECURITY_DASHBOARDS_PLUGIN: ${DISABLE_SECURITY_PLUGIN:-false} DISABLE_SECURITY_DASHBOARDS_PLUGIN: ${DISABLE_SECURITY_PLUGIN:-false}
depends_on: depends_on:
- opensearch opensearch:
condition: service_healthy
deploy: deploy:
resources: resources:
limits: limits:
@@ -63,6 +66,12 @@ services:
reservations: reservations:
cpus: '0.5' cpus: '0.5'
memory: 512M memory: 512M
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:5601/api/status"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes: volumes:
opensearch_data: opensearch_data:
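The OpenSearch probe above uses `CMD-SHELL` with `curl -f ... || exit 1`. `curl -f` already exits non-zero on HTTP errors, but with various status codes; the `|| exit 1` collapses every failure mode to exit status 1, which Docker treats uniformly as a failed probe. The pattern in isolation, with `false` standing in for a failing `curl -f`:

```shell
# Normalize any failing probe command to exit status 1.
probe() { false || exit 1; }   # 'false' stands in for: curl -f <url>
status=0
( probe ) || status=$?         # run in a subshell so this script survives
echo "probe exit status: $status"   # prints: probe exit status: 1
```

Docker only distinguishes 0 (healthy) from non-zero (unhealthy), so the normalization is mostly a defensive convention rather than a functional requirement.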
@@ -1,18 +1,17 @@
x-default: &default x-default: &default
restart: unless-stopped restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging: logging:
driver: json-file driver: json-file
options: options:
max-size: 100m max-size: 100m
max-file: "3"
services: services:
pocketbase: pocketbase:
<<: *default <<: *default
image: ghcr.io/muchobien/pocketbase:${PB_VERSION:-0.30.0} image: ghcr.io/muchobien/pocketbase:${PB_VERSION:-0.30.0}
environment: environment:
TZ: ${TZ:-UTC}
# Optional ENCRYPTION (Ensure this is a 32-character long encryption key) # Optional ENCRYPTION (Ensure this is a 32-character long encryption key)
# $ openssl rand -hex 16 # $ openssl rand -hex 16
# https://pocketbase.io/docs/going-to-production/#enable-settings-encryption # https://pocketbase.io/docs/going-to-production/#enable-settings-encryption
@@ -22,8 +21,6 @@ services:
ports: ports:
- "${PB_PORT:-8090}:8090" - "${PB_PORT:-8090}:8090"
volumes: volumes:
- *localtime
- *timezone
- pb_data:/pb_data - pb_data:/pb_data
# optional public and hooks folders # optional public and hooks folders
@@ -34,6 +31,7 @@ services:
interval: 5s interval: 5s
timeout: 5s timeout: 5s
retries: 5 retries: 5
start_period: 10s
deploy: deploy:
resources: resources:
limits: limits:
@@ -1,24 +1,21 @@
x-default: &default x-default: &default
restart: unless-stopped restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging: logging:
driver: json-file driver: json-file
options: options:
max-size: 100m max-size: 100m
max-file: "3"
services: services:
postgres: postgres:
<<: *default <<: *default
image: postgres:${POSTGRES_VERSION:-17.6} image: postgres:${POSTGRES_VERSION:-17.6}
environment: environment:
TZ: ${TZ:-UTC}
POSTGRES_USER: ${POSTGRES_USER:-postgres} POSTGRES_USER: ${POSTGRES_USER:-postgres}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-postgres} POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-postgres}
POSTGRES_DB: ${POSTGRES_DB:-postgres} POSTGRES_DB: ${POSTGRES_DB:-postgres}
volumes: volumes:
- *localtime
- *timezone
- postgres_data:/var/lib/postgresql/data - postgres_data:/var/lib/postgresql/data
# Initialize the database with a custom SQL script # Initialize the database with a custom SQL script
@@ -28,11 +25,17 @@ services:
deploy: deploy:
resources: resources:
limits: limits:
cpus: '1.0' cpus: '2.0'
memory: 1G memory: 2G
reservations: reservations:
cpus: '0.5' cpus: '0.5'
memory: 512M memory: 512M
healthcheck:
test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes: volumes:
postgres_data: postgres_data:
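The Postgres healthcheck above (and the MySQL one earlier) writes `$$POSTGRES_USER` rather than `$POSTGRES_USER`: in a Compose file, `$$` escapes to a literal `$`, so the variable is expanded by the shell inside the container rather than interpolated by Compose on the host. A rough simulation of the two expansion stages (the `sed`/`eval` pipeline is only a stand-in for what Compose and the container shell do):

```shell
# Stage 1: Compose collapses $$ into a literal $.
compose_value='pg_isready -U $$POSTGRES_USER'
container_value=$(printf '%s' "$compose_value" | sed 's/\$\$/$/g')
# Stage 2: the container's shell expands the remaining variable.
POSTGRES_USER=postgres
expanded=$(eval "printf '%s' \"$container_value\"")
echo "$expanded"   # prints: pg_isready -U postgres
```

Without the doubled `$`, Compose would substitute the host-side value (or an empty string) at parse time, which is rarely what a container-side healthcheck wants.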
@@ -1,23 +1,18 @@
x-default: &default x-default: &default
restart: unless-stopped restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging: logging:
driver: json-file driver: json-file
options: options:
max-size: 100m max-size: 100m
max-file: "3"
services: services:
prometheus: prometheus:
<<: *default <<: *default
image: prom/prometheus:${PROMETHEUS_VERSION:-v3.5.0} image: prom/prometheus:${PROMETHEUS_VERSION:-v3.5.0}
container_name: prometheus
ports: ports:
- "${PROMETHEUS_PORT_OVERRIDE:-9090}:9090" - "${PROMETHEUS_PORT_OVERRIDE:-9090}:9090"
volumes: volumes:
- *localtime
- *timezone
- prometheus_data:/prometheus - prometheus_data:/prometheus
# Optional: Mount custom configuration # Optional: Mount custom configuration
@@ -34,6 +29,7 @@ services:
- '--web.enable-admin-api' - '--web.enable-admin-api'
- '--web.external-url=${PROMETHEUS_EXTERNAL_URL:-http://localhost:9090}' - '--web.external-url=${PROMETHEUS_EXTERNAL_URL:-http://localhost:9090}'
environment: environment:
- TZ=${TZ:-UTC}
- PROMETHEUS_RETENTION_TIME=${PROMETHEUS_RETENTION_TIME:-15d} - PROMETHEUS_RETENTION_TIME=${PROMETHEUS_RETENTION_TIME:-15d}
- PROMETHEUS_RETENTION_SIZE=${PROMETHEUS_RETENTION_SIZE:-} - PROMETHEUS_RETENTION_SIZE=${PROMETHEUS_RETENTION_SIZE:-}
user: "65534:65534" # nobody user user: "65534:65534" # nobody user
@@ -45,6 +41,12 @@ services:
reservations: reservations:
cpus: '0.25' cpus: '0.25'
memory: 512M memory: 512M
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:9090/-/healthy"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes: volumes:
prometheus_data: prometheus_data:
@@ -1,22 +1,20 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"

 services:
   pytorch:
     <<: *default
     image: pytorch/pytorch:${PYTORCH_VERSION:-2.6.0-cuda12.6-cudnn9-runtime}
-    container_name: pytorch
     ports:
       - "${JUPYTER_PORT_OVERRIDE:-8888}:8888"
       - "${TENSORBOARD_PORT_OVERRIDE:-6006}:6006"
     environment:
+      TZ: ${TZ:-UTC}
       NVIDIA_VISIBLE_DEVICES: ${NVIDIA_VISIBLE_DEVICES:-all}
       NVIDIA_DRIVER_CAPABILITIES: ${NVIDIA_DRIVER_CAPABILITIES:-compute,utility}
       JUPYTER_ENABLE_LAB: ${JUPYTER_ENABLE_LAB:-yes}
@@ -25,8 +23,6 @@ services:
       jupyter lab --ip=0.0.0.0 --port=8888 --no-browser --allow-root
       --NotebookApp.token='${JUPYTER_TOKEN:-pytorch}'"
     volumes:
-      - *localtime
-      - *timezone
       - pytorch_notebooks:/workspace
       - pytorch_data:/data
     working_dir: /workspace
@@ -42,6 +38,12 @@ services:
           - driver: nvidia
             count: ${GPU_COUNT:-1}
             capabilities: [gpu]
+    healthcheck:
+      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8888/"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 60s
 volumes:
   pytorch_notebooks:
@@ -1,12 +1,10 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"

 services:
   qdrant:
@@ -16,10 +14,9 @@ services:
       - "${QDRANT_HTTP_PORT:-6333}:6333"
       - "${QDRANT_GRPC_PORT:-6334}:6334"
     volumes:
-      - *localtime
-      - *timezone
       - qdrant_data:/qdrant/storage:z
     environment:
+      - TZ=${TZ:-UTC}
       - QDRANT__SERVICE__API_KEY=${QDRANT_API_KEY}
       - QDRANT__SERVICE__JWT_RBAC=${QDRANT_JWT_RBAC:-false}
     deploy:
@@ -30,6 +27,12 @@ services:
       reservations:
         cpus: '0.5'
         memory: 256M
+    healthcheck:
+      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:6333/"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 30s
 volumes:
   qdrant_data:
@@ -1,12 +1,10 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"

 services:
   rabbitmq:
@@ -14,12 +12,11 @@ services:
     image: rabbitmq:${RABBITMQ_VERSION:-4.1.4-management-alpine}
     volumes:
       - rabbitmq_data:/var/lib/rabbitmq
-      - *localtime
-      - *timezone
     ports:
       - ${RABBITMQ_PORT:-5672}:5672
       - ${RABBITMQ_MANAGEMENT_PORT:-15672}:15672
     environment:
+      TZ: ${TZ:-UTC}
       RABBITMQ_DEFAULT_USER: ${RABBITMQ_DEFAULT_USER:-admin}
       RABBITMQ_DEFAULT_PASS: ${RABBITMQ_DEFAULT_PASS:-password}
     deploy:
@@ -30,6 +27,12 @@ services:
       reservations:
         cpus: '0.5'
         memory: 512M
+    healthcheck:
+      test: ["CMD", "rabbitmq-diagnostics", "-q", "ping"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 30s
 volumes:
   rabbitmq_data:
@@ -1,29 +1,25 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"

 services:
   ray-head:
     <<: *default
     image: rayproject/ray:${RAY_VERSION:-2.42.1-py312}
-    container_name: ray-head
     command: ray start --head --dashboard-host=0.0.0.0 --port=6379 --block
     ports:
       - "${RAY_DASHBOARD_PORT_OVERRIDE:-8265}:8265"
       - "${RAY_CLIENT_PORT_OVERRIDE:-10001}:10001"
       - "${RAY_GCS_PORT_OVERRIDE:-6379}:6379"
     environment:
+      TZ: ${TZ:-UTC}
       RAY_NUM_CPUS: ${RAY_HEAD_NUM_CPUS:-4}
       RAY_MEMORY: ${RAY_HEAD_MEMORY:-8589934592}
     volumes:
-      - *localtime
-      - *timezone
       - ray_storage:/tmp/ray
     deploy:
       resources:
@@ -33,20 +29,24 @@ services:
       reservations:
         cpus: '2.0'
         memory: 4G
+    healthcheck:
+      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8265/"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 30s
   ray-worker-1:
     <<: *default
     image: rayproject/ray:${RAY_VERSION:-2.42.1-py312}
-    container_name: ray-worker-1
     command: ray start --address=ray-head:6379 --block
     depends_on:
-      - ray-head
+      ray-head:
+        condition: service_healthy
     environment:
+      TZ: ${TZ:-UTC}
       RAY_NUM_CPUS: ${RAY_WORKER_NUM_CPUS:-2}
       RAY_MEMORY: ${RAY_WORKER_MEMORY:-4294967296}
-    volumes:
-      - *localtime
-      - *timezone
     deploy:
       resources:
         limits:
@@ -59,16 +59,14 @@ services:
   ray-worker-2:
     <<: *default
     image: rayproject/ray:${RAY_VERSION:-2.42.1-py312}
-    container_name: ray-worker-2
     command: ray start --address=ray-head:6379 --block
     depends_on:
-      - ray-head
+      ray-head:
+        condition: service_healthy
     environment:
+      TZ: ${TZ:-UTC}
       RAY_NUM_CPUS: ${RAY_WORKER_NUM_CPUS:-2}
       RAY_MEMORY: ${RAY_WORKER_MEMORY:-4294967296}
-    volumes:
-      - *localtime
-      - *timezone
     deploy:
       resources:
         limits:
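The healthcheck knobs used throughout this commit bound how quickly a failed container is noticed. As an approximation of Docker's behavior (not an exact reproduction of its health-state machine), a container that fails from the start is flagged unhealthy after roughly `start_period + retries × (interval + timeout)` seconds. A quick shell sketch of that arithmetic, using the `ray-head` values above:

```shell
#!/bin/sh
# Approximate worst-case seconds before Docker marks a container
# unhealthy: the start-up grace period, plus `retries` consecutive
# failing probes, each taking up to interval + timeout.
worst_case() {
  start_period=$1; retries=$2; interval=$3; timeout=$4
  echo $(( start_period + retries * (interval + timeout) ))
}

worst_case 30 3 30 10   # ray-head parameters above: prints 150
```

With the Redis-style parameters (`start_period: 5s`, `interval: 10s`, `timeout: 3s`, `retries: 3`) the same bound is 44 seconds, which is why the cluster init jobs can gate on `service_healthy` without waiting long.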
@@ -1,17 +1,16 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"

 services:
   redis-cluster-init:
     image: redis:${REDIS_VERSION:-8.2.1-alpine}
-    container_name: redis-cluster-init
+    environment:
+      - TZ=${TZ:-UTC}
     command: >
       sh -c "
       echo 'Waiting for all Redis instances to start...' &&
@@ -22,116 +21,170 @@ services:
       --cluster-replicas 1 --cluster-yes
       "
     depends_on:
-      - redis-1
-      - redis-2
-      - redis-3
-      - redis-4
-      - redis-5
-      - redis-6
+      redis-1:
+        condition: service_healthy
+      redis-2:
+        condition: service_healthy
+      redis-3:
+        condition: service_healthy
+      redis-4:
+        condition: service_healthy
+      redis-5:
+        condition: service_healthy
+      redis-6:
+        condition: service_healthy
     profiles:
       - init
   redis-1:
     <<: *default
     image: redis:${REDIS_VERSION:-8.2.1-alpine}
-    container_name: redis-1
+    environment:
+      - TZ=${TZ:-UTC}
     command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --appendonly yes
     ports:
       - "7000:6379"
     volumes:
-      - *localtime
-      - *timezone
       - redis_1_data:/data
     deploy:
       resources:
         limits:
           cpus: '0.5'
           memory: 512M
+        reservations:
+          cpus: '0.25'
+          memory: 256M
+    healthcheck:
+      test: ["CMD", "redis-cli", "ping"]
+      interval: 10s
+      timeout: 3s
+      retries: 3
+      start_period: 5s
   redis-2:
     <<: *default
     image: redis:${REDIS_VERSION:-8.2.1-alpine}
-    container_name: redis-2
+    environment:
+      - TZ=${TZ:-UTC}
     command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --appendonly yes
     ports:
       - "7001:6379"
     volumes:
-      - *localtime
-      - *timezone
       - redis_2_data:/data
     deploy:
       resources:
         limits:
           cpus: '0.5'
           memory: 512M
+        reservations:
+          cpus: '0.25'
+          memory: 256M
+    healthcheck:
+      test: ["CMD", "redis-cli", "ping"]
+      interval: 10s
+      timeout: 3s
+      retries: 3
+      start_period: 5s
   redis-3:
     <<: *default
     image: redis:${REDIS_VERSION:-8.2.1-alpine}
-    container_name: redis-3
+    environment:
+      - TZ=${TZ:-UTC}
     command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --appendonly yes
     ports:
       - "7002:6379"
     volumes:
-      - *localtime
-      - *timezone
       - redis_3_data:/data
     deploy:
       resources:
         limits:
           cpus: '0.5'
           memory: 512M
+        reservations:
+          cpus: '0.25'
+          memory: 256M
+    healthcheck:
+      test: ["CMD", "redis-cli", "ping"]
+      interval: 10s
+      timeout: 3s
+      retries: 3
+      start_period: 5s
   redis-4:
     <<: *default
     image: redis:${REDIS_VERSION:-8.2.1-alpine}
-    container_name: redis-4
+    environment:
+      - TZ=${TZ:-UTC}
     command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --appendonly yes
     ports:
       - "7003:6379"
     volumes:
-      - *localtime
-      - *timezone
       - redis_4_data:/data
     deploy:
       resources:
         limits:
           cpus: '0.5'
           memory: 512M
+        reservations:
+          cpus: '0.25'
+          memory: 256M
+    healthcheck:
+      test: ["CMD", "redis-cli", "ping"]
+      interval: 10s
+      timeout: 3s
+      retries: 3
+      start_period: 5s
   redis-5:
     <<: *default
     image: redis:${REDIS_VERSION:-8.2.1-alpine}
-    container_name: redis-5
+    environment:
+      - TZ=${TZ:-UTC}
     command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --appendonly yes
     ports:
       - "7004:6379"
     volumes:
-      - *localtime
-      - *timezone
       - redis_5_data:/data
     deploy:
       resources:
         limits:
           cpus: '0.5'
           memory: 512M
+        reservations:
+          cpus: '0.25'
+          memory: 256M
+    healthcheck:
+      test: ["CMD", "redis-cli", "ping"]
+      interval: 10s
+      timeout: 3s
+      retries: 3
+      start_period: 5s
   redis-6:
     <<: *default
     image: redis:${REDIS_VERSION:-8.2.1-alpine}
-    container_name: redis-6
+    environment:
+      - TZ=${TZ:-UTC}
     command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --appendonly yes
     ports:
       - "7005:6379"
     volumes:
-      - *localtime
-      - *timezone
       - redis_6_data:/data
     deploy:
       resources:
         limits:
           cpus: '0.5'
           memory: 512M
+        reservations:
+          cpus: '0.25'
+          memory: 256M
+    healthcheck:
+      test: ["CMD", "redis-cli", "ping"]
+      interval: 10s
+      timeout: 3s
+      retries: 3
+      start_period: 5s
 volumes:
   redis_1_data:
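Note the deliberate looseness of the `redis-cli ping` probe: it passes as soon as a node answers, before any cluster exists, which is exactly what lets the `--profile init` job gate on `service_healthy` and then run `--cluster create`. A stricter probe that checks cluster membership would deadlock the bootstrap, so it is only suitable once the cluster has been formed. A hypothetical fragment of such a stricter check, for illustration only:

```yaml
# Hypothetical stricter probe: healthy only once the node reports
# cluster_state:ok in CLUSTER INFO. Do NOT use this while the init
# job still gates on service_healthy, or startup will never complete.
healthcheck:
  test: ["CMD-SHELL", "redis-cli cluster info | grep -q 'cluster_state:ok'"]
  interval: 10s
  timeout: 3s
  retries: 3
  start_period: 5s
```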
@@ -1,11 +1,11 @@
-# App version
-REDIS_VERSION="8.2.1-alpine3.22"
-# Skip fixing permissions, set to 1 to skip
-SKIP_FIX_PERMS=1
-# Password for the default "default" user
-REDIS_PASSWORD="passw0rd"
+# Redis version
+REDIS_VERSION=8.2.1-alpine3.22
+# Password for Redis authentication (leave empty for no password)
+REDIS_PASSWORD=passw0rd
 # Port to bind to on the host machine
 REDIS_PORT_OVERRIDE=6379
+# Timezone (e.g., UTC, Asia/Shanghai, America/New_York)
+TZ=UTC
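The `${VAR:-default}` syntax used across these files follows shell-style parameter expansion, which Compose implements for `.env` interpolation: a set, non-empty variable wins, and the default applies when the variable is unset or empty (plain `${VAR-default}` treats empty as set). A quick shell illustration of the same rules:

```shell
#!/bin/sh
# Unset variable: the default after ':-' is used
unset REDIS_VERSION
echo "${REDIS_VERSION:-8.2.1-alpine3.22}"   # 8.2.1-alpine3.22

# Set variable: its value wins over the default
REDIS_VERSION=7.4.0-alpine
echo "${REDIS_VERSION:-8.2.1-alpine3.22}"   # 7.4.0-alpine

# Empty counts as unset with ':-', but not with plain '-'
EMPTY=
echo "${EMPTY:-fallback}"                   # fallback
echo "${EMPTY-fallback}"                    # prints an empty line
```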
@@ -1,28 +1,24 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"

 services:
   redis:
     <<: *default
     image: redis:${REDIS_VERSION:-8.2.1-alpine3.22}
-    container_name: redis
     ports:
       - "${REDIS_PORT_OVERRIDE:-6379}:6379"
     volumes:
-      - *localtime
-      - *timezone
       - redis_data:/data
       # Use a custom redis.conf file
       # - ./redis.conf:/etc/redis/redis.conf
     environment:
+      - TZ=${TZ:-UTC}
+      - REDIS_PASSWORD=${REDIS_PASSWORD:-}
       - SKIP_FIX_PERMS=${SKIP_FIX_PERMS:-}
     command:
       - sh
@@ -33,6 +29,12 @@ services:
       else
         redis-server
       fi
+    healthcheck:
+      test: ["CMD", "redis-cli", "ping"]
+      interval: 10s
+      timeout: 3s
+      retries: 3
+      start_period: 5s
     deploy:
       resources:
         limits:
@@ -1,27 +1,23 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"

 services:
   stable-diffusion-webui:
     <<: *default
     image: ghcr.io/absolutelyludicrous/sdnext:${SD_WEBUI_VERSION:-latest}
-    container_name: stable-diffusion-webui
     ports:
       - "${SD_WEBUI_PORT_OVERRIDE:-7860}:7860"
     environment:
+      TZ: ${TZ:-UTC}
       CLI_ARGS: ${CLI_ARGS:---listen --api --skip-version-check}
       NVIDIA_VISIBLE_DEVICES: ${NVIDIA_VISIBLE_DEVICES:-all}
       NVIDIA_DRIVER_CAPABILITIES: ${NVIDIA_DRIVER_CAPABILITIES:-compute,utility}
     volumes:
-      - *localtime
-      - *timezone
       - sd_webui_data:/data
       - sd_webui_output:/output
     deploy:
@@ -36,6 +32,12 @@ services:
           - driver: nvidia
             count: ${GPU_COUNT:-1}
             capabilities: [gpu]
+    healthcheck:
+      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:7860/"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 120s
 volumes:
   sd_webui_data:
@@ -1,28 +1,24 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"

 services:
   stirling-pdf:
     <<: *default
     image: stirlingtools/stirling-pdf:${STIRLING_VERSION:-latest}
-    container_name: stirling-pdf
     ports:
       - "${PORT_OVERRIDE:-8080}:8080"
     volumes:
-      - *localtime
-      - *timezone
       - stirling_trainingData:/usr/share/tessdata
       - stirling_configs:/configs
       - stirling_logs:/logs
       - stirling_customFiles:/customFiles
     environment:
+      - TZ=${TZ:-UTC}
      - DOCKER_ENABLE_SECURITY=${ENABLE_SECURITY:-false}
      - SECURITY_ENABLELOGIN=${ENABLE_LOGIN:-false}
      - SECURITY_INITIALLOGIN_USERNAME=${INITIAL_USERNAME:-admin}
@@ -47,6 +43,12 @@ services:
       reservations:
         cpus: '1.0'
         memory: 2G
+    healthcheck:
+      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 30s
 volumes:
   stirling_trainingData:
@@ -1,19 +1,16 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"

 x-valkey-node: &valkey-node
   <<: *default
   image: valkey/valkey:${VALKEY_VERSION:-8.0-alpine}
-  volumes:
-    - *localtime
-    - *timezone
+  environment:
+    - TZ=${TZ:-UTC}
   command:
     - valkey-server
     - --cluster-enabled
@@ -36,85 +33,80 @@ x-valkey-node: &valkey-node
       reservations:
         cpus: '0.25'
         memory: 256M
+  healthcheck:
+    test: ["CMD", "valkey-cli", "-a", "${VALKEY_PASSWORD:-passw0rd}", "ping"]
+    interval: 10s
+    timeout: 3s
+    retries: 3
+    start_period: 5s

 services:
   valkey-node-1:
     <<: *valkey-node
-    container_name: valkey-node-1
     ports:
       - "${VALKEY_PORT_1:-7001}:6379"
       - "${VALKEY_BUS_PORT_1:-17001}:16379"
     volumes:
-      - *localtime
-      - *timezone
       - valkey_data_1:/data
   valkey-node-2:
     <<: *valkey-node
-    container_name: valkey-node-2
     ports:
       - "${VALKEY_PORT_2:-7002}:6379"
       - "${VALKEY_BUS_PORT_2:-17002}:16379"
     volumes:
-      - *localtime
-      - *timezone
       - valkey_data_2:/data
   valkey-node-3:
     <<: *valkey-node
-    container_name: valkey-node-3
     ports:
       - "${VALKEY_PORT_3:-7003}:6379"
       - "${VALKEY_BUS_PORT_3:-17003}:16379"
     volumes:
-      - *localtime
-      - *timezone
       - valkey_data_3:/data
   valkey-node-4:
     <<: *valkey-node
-    container_name: valkey-node-4
     ports:
       - "${VALKEY_PORT_4:-7004}:6379"
       - "${VALKEY_BUS_PORT_4:-17004}:16379"
     volumes:
-      - *localtime
-      - *timezone
       - valkey_data_4:/data
   valkey-node-5:
     <<: *valkey-node
-    container_name: valkey-node-5
     ports:
       - "${VALKEY_PORT_5:-7005}:6379"
       - "${VALKEY_BUS_PORT_5:-17005}:16379"
     volumes:
-      - *localtime
-      - *timezone
       - valkey_data_5:/data
   valkey-node-6:
     <<: *valkey-node
-    container_name: valkey-node-6
     ports:
       - "${VALKEY_PORT_6:-7006}:6379"
       - "${VALKEY_BUS_PORT_6:-17006}:16379"
     volumes:
-      - *localtime
-      - *timezone
       - valkey_data_6:/data
   valkey-cluster-init:
     <<: *default
     image: valkey/valkey:${VALKEY_VERSION:-8.0-alpine}
-    container_name: valkey-cluster-init
+    environment:
+      - TZ=${TZ:-UTC}
     depends_on:
-      - valkey-node-1
-      - valkey-node-2
-      - valkey-node-3
-      - valkey-node-4
-      - valkey-node-5
-      - valkey-node-6
+      valkey-node-1:
+        condition: service_healthy
+      valkey-node-2:
+        condition: service_healthy
+      valkey-node-3:
+        condition: service_healthy
+      valkey-node-4:
+        condition: service_healthy
+      valkey-node-5:
+        condition: service_healthy
+      valkey-node-6:
+        condition: service_healthy
     command:
       - sh
       - -c
x-default: &default x-default: &default
restart: unless-stopped restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging: logging:
driver: json-file driver: json-file
options: options:
max-size: 100m max-size: 100m
max-file: "3"
services: services:
valkey: valkey:
<<: *default <<: *default
image: valkey/valkey:${VALKEY_VERSION:-8.0-alpine} image: valkey/valkey:${VALKEY_VERSION:-8.0-alpine}
container_name: valkey
ports: ports:
- "${VALKEY_PORT_OVERRIDE:-6379}:6379" - "${VALKEY_PORT_OVERRIDE:-6379}:6379"
volumes: volumes:
- *localtime
- *timezone
- valkey_data:/data - valkey_data:/data
# Use a custom valkey.conf file # Use a custom valkey.conf file
# - ./valkey.conf:/etc/valkey/valkey.conf # - ./valkey.conf:/etc/valkey/valkey.conf
environment:
- TZ=${TZ:-UTC}
command: command:
- sh - sh
- -c - -c
@@ -38,6 +35,12 @@ services:
reservations: reservations:
cpus: '0.25' cpus: '0.25'
memory: 256M memory: 256M
healthcheck:
test: ["CMD", "valkey-cli", "ping"]
interval: 10s
timeout: 3s
retries: 3
start_period: 5s
volumes: volumes:
valkey_data: valkey_data:
@@ -1,25 +1,21 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"

 services:
   vllm:
     <<: *default
     image: vllm/vllm-openai:${VLLM_VERSION:-v0.8.0}
-    container_name: vllm
     ports:
       - "${VLLM_PORT_OVERRIDE:-8000}:8000"
     volumes:
-      - *localtime
-      - *timezone
       - vllm_models:/root/.cache/huggingface
     environment:
+      - TZ=${TZ:-UTC}
       - HF_TOKEN=${HF_TOKEN:-}
     command:
       - --model
@@ -47,6 +43,12 @@ services:
     #           capabilities: [gpu]
     # runtime: nvidia # Uncomment for GPU support
     shm_size: 4g
+    healthcheck:
+      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8000/health"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 60s
 volumes:
   vllm_models:
@@ -1,28 +1,24 @@
 x-default: &default
   restart: unless-stopped
-  volumes:
-    - &localtime /etc/localtime:/etc/localtime:ro
-    - &timezone /etc/timezone:/etc/timezone:ro
   logging:
     driver: json-file
     options:
       max-size: 100m
+      max-file: "3"

 services:
   zookeeper:
     <<: *default
     image: zookeeper:${ZOOKEEPER_VERSION:-3.9.3}
-    container_name: zookeeper
     ports:
       - "${ZOOKEEPER_CLIENT_PORT_OVERRIDE:-2181}:2181"
       - "${ZOOKEEPER_ADMIN_PORT_OVERRIDE:-8080}:8080"
     volumes:
-      - *localtime
-      - *timezone
       - zookeeper_data:/data
       - zookeeper_datalog:/datalog
       - zookeeper_logs:/logs
     environment:
+      - TZ=${TZ:-UTC}
       - ZOO_MY_ID=1
       - ZOO_PORT=2181
       - ZOO_TICK_TIME=${ZOO_TICK_TIME:-2000}
@@ -38,6 +34,12 @@ services:
       reservations:
         cpus: '0.25'
         memory: 512M
+    healthcheck:
+      test: ["CMD-SHELL", "echo ruok | nc localhost 2181 | grep imok"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 30s
 volumes:
   zookeeper_data:
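One caveat worth knowing about the `echo ruok | nc localhost 2181` probe: since ZooKeeper 3.5 the four-letter-word commands are gated by a whitelist (`4lw.commands.whitelist`), and recent defaults do not include `ruok`. The official image exposes this setting through the `ZOO_4LW_COMMANDS_WHITELIST` environment variable. If the healthcheck stays unhealthy while the server otherwise runs fine, whitelisting the command is the likely fix; a sketch (verify the variable name against the image documentation for your tag):

```yaml
services:
  zookeeper:
    image: zookeeper:${ZOOKEEPER_VERSION:-3.9.3}
    environment:
      # Allow the health probe's 4lw command through the whitelist
      - ZOO_4LW_COMMANDS_WHITELIST=ruok,srvr,stat
    healthcheck:
      test: ["CMD-SHELL", "echo ruok | nc localhost 2181 | grep imok"]
```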