feat: add more

Author: Sun-ZhenXing
Date: 2025-10-06 21:48:39 +08:00
parent f330e00fa0
commit 3c609b5989
120 changed files with 7698 additions and 59 deletions


@@ -22,12 +22,15 @@ Compose Anything helps users quickly deploy various services by providing a set
| [GitLab](./src/gitlab) | 17.10.4-ce.0 |
| [GitLab Runner](./src/gitlab-runner) | 17.10.1 |
| [Grafana](./src/grafana) | 12.1.1 |
| [Halo](./src/halo) | 2.21.9 |
| [Harbor](./src/harbor) | v2.12.0 |
| [IOPaint](./src/io-paint) | latest |
| [Jenkins](./src/jenkins) | 2.486-lts |
| [Apache Kafka](./src/kafka) | 7.8.0 |
| [Kibana](./src/kibana) | 8.16.1 |
| [Kodbox](./src/kodbox) | 1.62 |
| [Kong](./src/kong) | 3.8.0 |
| [Langfuse](./src/langfuse) | 3.115.0 |
| [Logstash](./src/logstash) | 8.16.1 |
| [Milvus Standalone](./src/milvus-standalone) | 2.6.2 |
| [Milvus Standalone Embed](./src/milvus-standalone-embed) | 2.6.2 |
@@ -37,6 +40,7 @@ Compose Anything helps users quickly deploy various services by providing a set
| [MongoDB ReplicaSet](./src/mongodb-replicaset) | 8.0.13 |
| [MongoDB Standalone](./src/mongodb-standalone) | 8.0.13 |
| [MySQL](./src/mysql) | 9.4.0 |
| [n8n](./src/n8n) | 1.114.0 |
| [Nginx](./src/nginx) | 1.29.1 |
| [Ollama](./src/ollama) | 0.12.0 |
| [Open WebUI](./src/open-webui) | main |
@@ -47,6 +51,20 @@ Compose Anything helps users quickly deploy various services by providing a set
| [Qdrant](./src/qdrant) | 1.15.4 |
| [RabbitMQ](./src/rabbitmq) | 4.1.4 |
| [Redis](./src/redis) | 8.2.1 |
| [Redis Cluster](./src/redis-cluster) | 8.2.1 |
| [ZooKeeper](./src/zookeeper) | 3.9.3 |
| [Nacos](./src/nacos) | v3.1.0 |
| [Dify](./src/dify) | 0.18.2 |
| [GPUStack](./src/gpustack) | v0.5.3 |
| [vLLM](./src/vllm) | v0.8.0 |
| [Bytebot](./src/bytebot) | edge |
| [Neo4j](./src/neo4j) | 5.27.4 |
| [NebulaGraph](./src/nebulagraph) | v3.8.0 |
| [Kuzu](./src/kuzu) | N/A (Embedded) |
| [Odoo](./src/odoo) | 19.0 |
| [OpenCoze](./src/opencoze) | See Docs |
| [OpenList](./src/openlist) | latest |
| [Node Exporter](./src/node-exporter) | v1.8.2 |
## Guidelines


@@ -4,49 +4,67 @@ Compose Anything helps users quickly deploy various services by providing a set of high-quality Docker Compose files,
## Supported Services
| Service                                                       | Version                       |
| ------------------------------------------------------------- | ----------------------------- |
| [Apache HTTP Server](./src/apache) | 2.4.62 |
| [Apache APISIX](./src/apisix) | 3.13.0 |
| [Bifrost Gateway](./src/bifrost-gateway) | 1.2.15 |
| [Apache Cassandra](./src/cassandra) | 5.0.2 |
| [Clash](./src/clash) | 1.18.0 |
| [HashiCorp Consul](./src/consul) | 1.20.3 |
| [Dify](./src/dify) | latest |
| [Dockge](./src/dockge) | 1 |
| [Docker Registry](./src/docker-registry) | 3.0.0 |
| [Elasticsearch](./src/elasticsearch) | 8.16.1 |
| [etcd](./src/etcd) | 3.6.0 |
| [Firecrawl](./src/firecrawl) | v1.16.0 |
| [frpc](./src/frpc) | 0.64.0 |
| [frps](./src/frps) | 0.64.0 |
| [Gitea](./src/gitea) | 1.24.6 |
| [Gitea Runner](./src/gitea-runner) | 0.2.12 |
| [GitLab](./src/gitlab) | 17.10.4-ce.0 |
| [GitLab Runner](./src/gitlab-runner) | 17.10.1 |
| [Grafana](./src/grafana) | 12.1.1 |
| [Halo](./src/halo) | 2.21.9 |
| [Harbor](./src/harbor) | v2.12.0 |
| [IOPaint](./src/io-paint) | latest |
| [Jenkins](./src/jenkins) | 2.486-lts |
| [Apache Kafka](./src/kafka) | 7.8.0 |
| [Kibana](./src/kibana) | 8.16.1 |
| [Kodbox](./src/kodbox) | 1.62 |
| [Kong](./src/kong) | 3.8.0 |
| [Langfuse](./src/langfuse) | 3.115.0 |
| [Logstash](./src/logstash) | 8.16.1 |
| [MariaDB Galera](./src/mariadb-galera) | 11.7.2 |
| [Minecraft Bedrock Server](./src/minecraft-bedrock-server) | latest |
| [Milvus Standalone](./src/milvus-standalone) | 2.6.2 |
| [Milvus Standalone Embed](./src/milvus-standalone-embed) | 2.6.2 |
| [MinerU SGLang](./src/mineru-sgalng)                          | 2.2.2                         |
| [MinerU v2](./src/mineru-v2) | 2.5.3 |
| [MinIO](./src/minio) | RELEASE.2025-09-07T16-13-09Z |
| [MLflow](./src/mlflow) | v2.20.2 |
| [MongoDB ReplicaSet](./src/mongodb-replicaset) | 8.0.13 |
| [MongoDB Standalone](./src/mongodb-standalone) | 8.0.13 |
| [MySQL](./src/mysql) | 9.4.0 |
| [n8n](./src/n8n) | 1.114.0 |
| [Nginx](./src/nginx) | 1.29.1 |
| [Ollama](./src/ollama) | 0.12.0 |
| [Open WebUI](./src/open-webui) | main |
| [OpenCoze](./src/opencoze)                                    | See Docs                      |
| [OpenCut](./src/opencut) | latest |
| [OpenSearch](./src/opensearch) | 2.19.0 |
| [PocketBase](./src/pocketbase) | 0.30.0 |
| [PostgreSQL](./src/postgres) | 17.6 |
| [Prometheus](./src/prometheus) | 3.5.0 |
| [PyTorch](./src/pytorch) | 2.6.0-cuda12.6-cudnn9-runtime |
| [Qdrant](./src/qdrant) | 1.15.4 |
| [RabbitMQ](./src/rabbitmq) | 4.1.4 |
| [Ray](./src/ray) | 2.42.1-py312 |
| [Redis](./src/redis) | 8.2.1 |
| [Stable Diffusion WebUI](./src/stable-diffusion-webui-docker) | latest |
| [Stirling PDF](./src/stirling-pdf) | latest |
| [Valkey](./src/valkey) | 8.0-alpine |
| [Valkey Cluster](./src/valkey-cluster) | 8.0-alpine |
## Guidelines

src/bytebot/.env.example

@@ -0,0 +1,20 @@
# Bytebot version
BYTEBOT_VERSION="edge"
# PostgreSQL version
POSTGRES_VERSION="17-alpine"
# Database configuration
POSTGRES_USER="bytebot"
POSTGRES_PASSWORD="bytebotpass"
POSTGRES_DB="bytebot"
# AI API Keys (at least one required)
ANTHROPIC_API_KEY=""
OPENAI_API_KEY=""
GEMINI_API_KEY=""
# Port overrides
BYTEBOT_DESKTOP_PORT_OVERRIDE=9990
BYTEBOT_AGENT_PORT_OVERRIDE=9991
BYTEBOT_UI_PORT_OVERRIDE=9992

src/bytebot/README.md

@@ -0,0 +1,78 @@
# Bytebot
[English](./README.md) | [中文](./README.zh.md)
This service deploys Bytebot, an open-source AI desktop agent that automates computer tasks.
## Services
- `bytebot-desktop`: Containerized Linux desktop environment
- `bytebot-agent`: AI agent for task processing
- `bytebot-ui`: Web interface for task management
- `bytebot-db`: PostgreSQL database
## Environment Variables
| Variable Name | Description | Default Value |
| ----------------------------- | ------------------------------ | ------------- |
| BYTEBOT_VERSION | Bytebot image version | `edge` |
| POSTGRES_VERSION | PostgreSQL version | `17-alpine` |
| POSTGRES_USER | PostgreSQL username | `bytebot` |
| POSTGRES_PASSWORD | PostgreSQL password | `bytebotpass` |
| POSTGRES_DB | PostgreSQL database name | `bytebot` |
| ANTHROPIC_API_KEY | Anthropic API key (for Claude) | `""` |
| OPENAI_API_KEY | OpenAI API key (for GPT) | `""` |
| GEMINI_API_KEY | Google Gemini API key | `""` |
| BYTEBOT_DESKTOP_PORT_OVERRIDE | Desktop port override | `9990` |
| BYTEBOT_AGENT_PORT_OVERRIDE | Agent port override | `9991` |
| BYTEBOT_UI_PORT_OVERRIDE | UI port override | `9992` |
At least one AI API key is required.
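All of these variables use Compose's `${VAR:-default}` substitution, so any value left unset in `.env` falls back to the default above. As a quick sketch, the fallback rule matches POSIX shell parameter expansion, which Compose mirrors:

```shell
#!/bin/sh
# Compose resolves ${BYTEBOT_VERSION:-edge} the same way POSIX sh does:
# use the variable if it is set and non-empty, otherwise the default.
unset BYTEBOT_VERSION
echo "${BYTEBOT_VERSION:-edge}"    # -> edge (unset, default wins)

BYTEBOT_VERSION=v1.0
echo "${BYTEBOT_VERSION:-edge}"    # -> v1.0 (set, value wins)
```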
## Volumes
- `bytebot_db_data`: PostgreSQL data
## Usage
### Start Bytebot
```bash
docker compose up -d
```
### Access
- Web UI: <http://localhost:9992>
- Agent API: <http://localhost:9991>
- Desktop VNC: <http://localhost:9990/vnc>
### Create Tasks
1. Open <http://localhost:9992>
2. Create a new task with natural language description
3. Watch the agent work in the desktop environment
## Features
- Natural language task automation
- Visual desktop environment with VNC
- Supports multiple AI models (Claude, GPT, Gemini)
- Web-based task management interface
## Notes
- Requires at least one AI API key to function
- Desktop environment uses shared memory (2GB)
- First startup may take a few minutes
- Suitable for development and testing
## Security
- Change default database password in production
- Keep AI API keys secure
- Consider using environment files instead of command-line arguments
## License
Bytebot is licensed under Apache License 2.0. See [Bytebot GitHub](https://github.com/bytebot-ai/bytebot) for more information.


@@ -0,0 +1,104 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
bytebot-desktop:
<<: *default
image: ghcr.io/bytebot-ai/bytebot-desktop:${BYTEBOT_VERSION:-edge}
container_name: bytebot-desktop
ports:
- "${BYTEBOT_DESKTOP_PORT_OVERRIDE:-9990}:9990"
volumes:
- *localtime
- *timezone
shm_size: 2gb
deploy:
resources:
limits:
cpus: '2.0'
memory: 4G
reservations:
cpus: '1.0'
memory: 2G
bytebot-agent:
<<: *default
image: ghcr.io/bytebot-ai/bytebot-agent:${BYTEBOT_VERSION:-edge}
container_name: bytebot-agent
depends_on:
- bytebot-desktop
- bytebot-db
ports:
- "${BYTEBOT_AGENT_PORT_OVERRIDE:-9991}:9991"
environment:
- BYTEBOTD_URL=http://bytebot-desktop:9990
- DATABASE_URL=postgresql://${POSTGRES_USER:-bytebot}:${POSTGRES_PASSWORD:-bytebotpass}@bytebot-db:5432/${POSTGRES_DB:-bytebot}
- ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
- OPENAI_API_KEY=${OPENAI_API_KEY:-}
- GEMINI_API_KEY=${GEMINI_API_KEY:-}
volumes:
- *localtime
- *timezone
deploy:
resources:
limits:
cpus: '1.0'
memory: 1G
reservations:
cpus: '0.5'
memory: 512M
bytebot-ui:
<<: *default
image: ghcr.io/bytebot-ai/bytebot-ui:${BYTEBOT_VERSION:-edge}
container_name: bytebot-ui
depends_on:
- bytebot-agent
ports:
- "${BYTEBOT_UI_PORT_OVERRIDE:-9992}:9992"
environment:
- BYTEBOT_AGENT_BASE_URL=http://localhost:9991
- BYTEBOT_DESKTOP_VNC_URL=http://localhost:9990/websockify
volumes:
- *localtime
- *timezone
deploy:
resources:
limits:
cpus: '0.5'
memory: 512M
reservations:
cpus: '0.25'
memory: 256M
bytebot-db:
<<: *default
image: postgres:${POSTGRES_VERSION:-17-alpine}
container_name: bytebot-db
environment:
- POSTGRES_USER=${POSTGRES_USER:-bytebot}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-bytebotpass}
- POSTGRES_DB=${POSTGRES_DB:-bytebot}
- PGDATA=/var/lib/postgresql/data/pgdata
volumes:
- *localtime
- *timezone
- bytebot_db_data:/var/lib/postgresql/data
deploy:
resources:
limits:
cpus: '0.5'
memory: 512M
reservations:
cpus: '0.25'
memory: 256M
volumes:
bytebot_db_data:

src/consul/README.md

@@ -0,0 +1,76 @@
# Consul
[Consul](https://www.consul.io/) is a service networking solution to automate network configurations, discover services, and enable secure connectivity across any cloud or runtime.
## Features
- Service Discovery: Automatically discover and register services
- Health Checking: Monitor service health and availability
- Key/Value Store: Store configuration data
- Multi-Datacenter: Support for multiple datacenters
- Service Mesh: Secure service-to-service communication
## Quick Start
Start the Consul server:
```bash
docker compose up -d
```
## Configuration
### Environment Variables
- `CONSUL_VERSION`: Consul version (default: `1.20.3`)
- `CONSUL_HTTP_PORT_OVERRIDE`: HTTP API port (default: `8500`)
- `CONSUL_DNS_PORT_OVERRIDE`: DNS query port (default: `8600`)
- `CONSUL_SERF_LAN_PORT_OVERRIDE`: Serf LAN port (default: `8301`)
- `CONSUL_SERF_WAN_PORT_OVERRIDE`: Serf WAN port (default: `8302`)
- `CONSUL_SERVER_RPC_PORT_OVERRIDE`: Server RPC port (default: `8300`)
- `CONSUL_BIND_INTERFACE`: Network interface to bind (default: `eth0`)
- `CONSUL_CLIENT_INTERFACE`: Client network interface (default: `eth0`)
## Access
- Web UI: <http://localhost:8500>
- HTTP API: <http://localhost:8500/v1>
- DNS Query: localhost:8600
## Default Configuration
The default configuration runs Consul in server mode with:
- Single node (bootstrap mode)
- Web UI enabled
- Log level: INFO
- Datacenter: dc1
## Custom Configuration
Uncomment the configuration volume in `docker-compose.yaml` and create `consul.json`:
```json
{
"datacenter": "dc1",
"server": true,
"ui_config": {
"enabled": true
},
"bootstrap_expect": 1,
"log_level": "INFO"
}
```
## Health Check
Check Consul cluster members:
```bash
docker compose exec consul consul members
```
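Before mounting a custom `consul.json`, it is worth checking that the file is valid JSON on the host, since a malformed file will keep the agent from booting. A minimal sketch (assumes `python3` is available on the host; Consul itself can additionally validate option names with `consul validate`):

```shell
#!/bin/sh
# Write the custom config and check its JSON syntax before
# restarting the container. This checks syntax only -- Consul
# validates the actual option names at startup.
cat > consul.json <<'EOF'
{
  "datacenter": "dc1",
  "server": true,
  "ui_config": { "enabled": true },
  "bootstrap_expect": 1,
  "log_level": "INFO"
}
EOF
python3 -m json.tool consul.json > /dev/null && echo "consul.json: valid JSON"
```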
## Resources
- Resource Limits: 1 CPU, 512MB RAM
- Resource Reservations: 0.25 CPU, 128MB RAM

src/consul/README.zh.md

@@ -0,0 +1,76 @@
# Consul
[Consul](https://www.consul.io/) is a service networking solution to automate network configurations, discover services, and enable secure connectivity across any cloud or runtime.
## Features
- Service Discovery: Automatically discover and register services
- Health Checking: Monitor service health and availability
- Key/Value Store: Store configuration data
- Multi-Datacenter: Support for multiple datacenters
- Service Mesh: Secure service-to-service communication
## Quick Start
Start the Consul server:
```bash
docker compose up -d
```
## Configuration
### Environment Variables
- `CONSUL_VERSION`: Consul version (default: `1.20.3`)
- `CONSUL_HTTP_PORT_OVERRIDE`: HTTP API port (default: `8500`)
- `CONSUL_DNS_PORT_OVERRIDE`: DNS query port (default: `8600`)
- `CONSUL_SERF_LAN_PORT_OVERRIDE`: Serf LAN port (default: `8301`)
- `CONSUL_SERF_WAN_PORT_OVERRIDE`: Serf WAN port (default: `8302`)
- `CONSUL_SERVER_RPC_PORT_OVERRIDE`: Server RPC port (default: `8300`)
- `CONSUL_BIND_INTERFACE`: Network interface to bind (default: `eth0`)
- `CONSUL_CLIENT_INTERFACE`: Client network interface (default: `eth0`)
## Access
- Web UI: <http://localhost:8500>
- HTTP API: <http://localhost:8500/v1>
- DNS Query: localhost:8600
## Default Configuration
The default configuration runs Consul in server mode with:
- Single node (bootstrap mode)
- Web UI enabled
- Log level: INFO
- Datacenter: dc1
## Custom Configuration
Uncomment the configuration volume in `docker-compose.yaml` and create `consul.json`:
```json
{
  "datacenter": "dc1",
  "server": true,
  "ui_config": {
    "enabled": true
  },
  "bootstrap_expect": 1,
  "log_level": "INFO"
}
```
## Health Check
Check Consul cluster members:
```bash
docker compose exec consul consul members
```
## Resources
- Resource Limits: 1 CPU, 512MB RAM
- Resource Reservations: 0.25 CPU, 128MB RAM

src/dify/.env.example

@@ -0,0 +1,28 @@
# Dify version
DIFY_VERSION="0.18.2"
# Database configuration
POSTGRES_USER="dify"
POSTGRES_PASSWORD="difypass"
POSTGRES_DB="dify"
# Redis configuration
REDIS_PASSWORD=""
# Application configuration
SECRET_KEY="sk-xxxxxx"
LOG_LEVEL="INFO"
# API URLs
DIFY_API_URL="http://localhost:5001"
DIFY_APP_URL="http://localhost:3000"
# Port override
DIFY_PORT_OVERRIDE=3000
# Storage type: local, s3, azure-blob, etc.
STORAGE_TYPE="local"
# Vector store type: weaviate, milvus, qdrant, etc.
VECTOR_STORE="weaviate"
WEAVIATE_VERSION="1.28.12"

src/dify/README.md

@@ -0,0 +1,84 @@
# Dify
[English](./README.md) | [中文](./README.zh.md)
This service deploys Dify, an LLM app development platform that combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more.
## Services
- `dify-api`: API service for Dify
- `dify-worker`: Background worker for async tasks
- `dify-web`: Web frontend interface
- `dify-db`: PostgreSQL database
- `dify-redis`: Redis cache
- `dify-weaviate`: Weaviate vector database (optional profile)
## Environment Variables
| Variable Name | Description | Default Value |
| ------------------ | ------------------------------------------ | ------------- |
| DIFY_VERSION | Dify image version | `0.18.2` |
| POSTGRES_USER | PostgreSQL username | `dify` |
| POSTGRES_PASSWORD | PostgreSQL password | `difypass` |
| POSTGRES_DB | PostgreSQL database name | `dify` |
| REDIS_PASSWORD | Redis password (empty for no auth) | `""` |
| SECRET_KEY | Secret key for encryption | (auto) |
| LOG_LEVEL | Log level | `INFO` |
| DIFY_PORT_OVERRIDE | Host port mapping for web interface | `3000` |
| STORAGE_TYPE | Storage type (local, s3, azure-blob, etc.) | `local` |
| VECTOR_STORE | Vector store type (weaviate, milvus, etc.) | `weaviate` |
| WEAVIATE_VERSION | Weaviate version (if using weaviate) | `1.28.12` |
Please modify the `.env` file as needed for your use case.
## Volumes
- `dify_storage`: Storage for uploaded files and generated content
- `dify_db_data`: PostgreSQL data
- `dify_redis_data`: Redis persistence data
- `dify_weaviate_data`: Weaviate vector database data
## Usage
### Start Dify with Weaviate
```bash
docker compose --profile weaviate up -d
```
### Start Dify without Vector Database
```bash
docker compose up -d
```
### Access
- Web Interface: <http://localhost:3000>
- API Docs: <http://localhost:5001/docs>
### First Time Setup
1. Open <http://localhost:3000>
2. Create an admin account
3. Configure your LLM API keys (OpenAI, Azure OpenAI, Anthropic, etc.)
4. Start creating your AI applications
## Notes
- First startup may take a few minutes for database initialization
- Change `SECRET_KEY` in production for security
- For production deployment, consider using external PostgreSQL and Redis
- Supports multiple LLM providers: OpenAI, Azure OpenAI, Anthropic, Google, local models via Ollama, etc.
- Vector database is optional but recommended for RAG capabilities
## Security
- Change default passwords in production
- Use strong `SECRET_KEY`
- Enable authentication on Redis in production
- Consider using TLS for API connections in production
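One way to generate a strong `SECRET_KEY` is to draw it from `/dev/urandom` (a sketch; any source of roughly 42 random bytes works, and the `sk-` prefix simply mirrors the example keys in this repo's `.env.example`):

```shell
#!/bin/sh
# Generate a random secret and append it to .env.
# 42 random bytes base64-encode to 56 characters, giving a
# 59-character key including the "sk-" prefix.
SECRET_KEY="sk-$(head -c 42 /dev/urandom | base64 | tr -d '\n')"
echo "SECRET_KEY=\"$SECRET_KEY\"" >> .env
echo "generated key of length ${#SECRET_KEY}"   # -> generated key of length 59
```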
## License
Dify is licensed under Apache License 2.0. See [Dify GitHub](https://github.com/langgenius/dify) for more information.

src/dify/README.zh.md

@@ -0,0 +1,84 @@
# Dify
[English](./README.md) | [中文](./README.zh.md)
This service deploys Dify, an LLM app development platform that combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more.
## Services
- `dify-api`: API service for Dify
- `dify-worker`: Background worker for async tasks
- `dify-web`: Web frontend interface
- `dify-db`: PostgreSQL database
- `dify-redis`: Redis cache
- `dify-weaviate`: Weaviate vector database (optional profile)
## Environment Variables
| Variable Name      | Description                                | Default Value |
| ------------------ | ------------------------------------------ | ------------- |
| DIFY_VERSION       | Dify image version                         | `0.18.2`      |
| POSTGRES_USER      | PostgreSQL username                        | `dify`        |
| POSTGRES_PASSWORD  | PostgreSQL password                        | `difypass`    |
| POSTGRES_DB        | PostgreSQL database name                   | `dify`        |
| REDIS_PASSWORD     | Redis password (empty for no auth)         | `""`          |
| SECRET_KEY         | Secret key for encryption                  | (auto)        |
| LOG_LEVEL          | Log level                                  | `INFO`        |
| DIFY_PORT_OVERRIDE | Host port mapping for web interface        | `3000`        |
| STORAGE_TYPE       | Storage type (local, s3, azure-blob, etc.) | `local`       |
| VECTOR_STORE       | Vector store type (weaviate, milvus, etc.) | `weaviate`    |
| WEAVIATE_VERSION   | Weaviate version (if using weaviate)       | `1.28.12`     |
Please modify the `.env` file as needed for your use case.
## Volumes
- `dify_storage`: Storage for uploaded files and generated content
- `dify_db_data`: PostgreSQL data
- `dify_redis_data`: Redis persistence data
- `dify_weaviate_data`: Weaviate vector database data
## Usage
### Start Dify with Weaviate
```bash
docker compose --profile weaviate up -d
```
### Start Dify without a Vector Database
```bash
docker compose up -d
```
### Access
- Web Interface: <http://localhost:3000>
- API Docs: <http://localhost:5001/docs>
### First Time Setup
1. Open <http://localhost:3000>
2. Create an admin account
3. Configure your LLM API keys (OpenAI, Azure OpenAI, Anthropic, etc.)
4. Start creating your AI applications
## Notes
- First startup may take a few minutes for database initialization
- Change `SECRET_KEY` in production for security
- For production deployment, consider using external PostgreSQL and Redis
- Supports multiple LLM providers: OpenAI, Azure OpenAI, Anthropic, Google, local models via Ollama, etc.
- Vector database is optional but recommended for RAG capabilities
## Security
- Change default passwords in production
- Use a strong `SECRET_KEY`
- Enable authentication on Redis in production
- Consider using TLS for API connections in production
## License
Dify is licensed under Apache License 2.0. See [Dify GitHub](https://github.com/langgenius/dify) for more information.


@@ -0,0 +1,167 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
dify-api:
<<: *default
image: langgenius/dify-api:${DIFY_VERSION:-0.18.2}
container_name: dify-api
depends_on:
- dify-db
- dify-redis
environment:
- MODE=api
- LOG_LEVEL=${LOG_LEVEL:-INFO}
- SECRET_KEY=${SECRET_KEY:-sk-9f73s3ljTXVcMT3Blb3ljTqtsKiGHXVcMT3BlbkFJLK7U}
- DATABASE_URL=postgresql://${POSTGRES_USER:-dify}:${POSTGRES_PASSWORD:-difypass}@dify-db:5432/${POSTGRES_DB:-dify}
- REDIS_HOST=dify-redis
- REDIS_PORT=6379
- REDIS_DB=0
- REDIS_PASSWORD=${REDIS_PASSWORD:-}
- CELERY_BROKER_URL=redis://:${REDIS_PASSWORD:-}@dify-redis:6379/1
- STORAGE_TYPE=${STORAGE_TYPE:-local}
- VECTOR_STORE=${VECTOR_STORE:-weaviate}
- WEAVIATE_ENDPOINT=http://dify-weaviate:8080
volumes:
- *localtime
- *timezone
- dify_storage:/app/api/storage
deploy:
resources:
limits:
cpus: '1.0'
memory: 2G
reservations:
cpus: '0.5'
memory: 1G
dify-worker:
<<: *default
image: langgenius/dify-api:${DIFY_VERSION:-0.18.2}
container_name: dify-worker
depends_on:
- dify-db
- dify-redis
environment:
- MODE=worker
- LOG_LEVEL=${LOG_LEVEL:-INFO}
- SECRET_KEY=${SECRET_KEY:-sk-9f73s3ljTXVcMT3Blb3ljTqtsKiGHXVcMT3BlbkFJLK7U}
- DATABASE_URL=postgresql://${POSTGRES_USER:-dify}:${POSTGRES_PASSWORD:-difypass}@dify-db:5432/${POSTGRES_DB:-dify}
- REDIS_HOST=dify-redis
- REDIS_PORT=6379
- REDIS_DB=0
- REDIS_PASSWORD=${REDIS_PASSWORD:-}
- CELERY_BROKER_URL=redis://:${REDIS_PASSWORD:-}@dify-redis:6379/1
- STORAGE_TYPE=${STORAGE_TYPE:-local}
- VECTOR_STORE=${VECTOR_STORE:-weaviate}
- WEAVIATE_ENDPOINT=http://dify-weaviate:8080
volumes:
- *localtime
- *timezone
- dify_storage:/app/api/storage
deploy:
resources:
limits:
cpus: '1.0'
memory: 2G
reservations:
cpus: '0.5'
memory: 1G
dify-web:
<<: *default
image: langgenius/dify-web:${DIFY_VERSION:-0.18.2}
container_name: dify-web
depends_on:
- dify-api
environment:
- NEXT_PUBLIC_API_URL=${DIFY_API_URL:-http://localhost:5001}
- NEXT_PUBLIC_APP_URL=${DIFY_APP_URL:-http://localhost:3000}
ports:
- "${DIFY_PORT_OVERRIDE:-3000}:3000"
deploy:
resources:
limits:
cpus: '0.5'
memory: 512M
reservations:
cpus: '0.25'
memory: 256M
dify-db:
<<: *default
image: postgres:15-alpine
container_name: dify-db
environment:
- POSTGRES_USER=${POSTGRES_USER:-dify}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-difypass}
- POSTGRES_DB=${POSTGRES_DB:-dify}
- PGDATA=/var/lib/postgresql/data/pgdata
volumes:
- *localtime
- *timezone
- dify_db_data:/var/lib/postgresql/data
deploy:
resources:
limits:
cpus: '0.5'
memory: 512M
reservations:
cpus: '0.25'
memory: 256M
dify-redis:
<<: *default
image: redis:7-alpine
container_name: dify-redis
command: redis-server --requirepass ${REDIS_PASSWORD:-}
volumes:
- *localtime
- *timezone
- dify_redis_data:/data
deploy:
resources:
limits:
cpus: '0.25'
memory: 256M
reservations:
cpus: '0.1'
memory: 128M
dify-weaviate:
<<: *default
image: semitechnologies/weaviate:${WEAVIATE_VERSION:-1.28.12}
container_name: dify-weaviate
profiles:
- weaviate
environment:
- QUERY_DEFAULTS_LIMIT=25
- AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED=true
- PERSISTENCE_DATA_PATH=/var/lib/weaviate
- DEFAULT_VECTORIZER_MODULE=none
- CLUSTER_HOSTNAME=node1
volumes:
- *localtime
- *timezone
- dify_weaviate_data:/var/lib/weaviate
deploy:
resources:
limits:
cpus: '0.5'
memory: 1G
reservations:
cpus: '0.25'
memory: 512M
volumes:
dify_storage:
dify_db_data:
dify_redis_data:
dify_weaviate_data:

src/dockge/.env.example

@@ -0,0 +1,15 @@
# Dockge version
DOCKGE_VERSION="1"
# Port override
PORT_OVERRIDE=5001
# Stacks directory on host
STACKS_DIR="./stacks"
# Stacks directory inside container
DOCKGE_STACKS_DIR="/opt/stacks"
# User and group IDs
PUID=1000
PGID=1000

src/dockge/README.md

@@ -0,0 +1,52 @@
# Dockge
[English](./README.md) | [中文](./README.zh.md)
This service deploys Dockge, a fancy, easy-to-use and reactive self-hosted docker compose stack-oriented manager.
## Services
- `dockge`: The Dockge web interface for managing Docker Compose stacks.
## Environment Variables
| Variable Name | Description | Default Value |
| ----------------- | ------------------------------------- | ------------- |
| DOCKGE_VERSION | Dockge image version | `1` |
| PORT_OVERRIDE | Host port mapping | `5001` |
| STACKS_DIR | Directory on host for storing stacks | `./stacks` |
| DOCKGE_STACKS_DIR | Directory inside container for stacks | `/opt/stacks` |
| PUID | User ID to run the service | `1000` |
| PGID | Group ID to run the service | `1000` |
Please modify the `.env` file as needed for your use case.
## Volumes
- `dockge_data`: A volume for storing Dockge application data.
- Docker socket: Mounted to allow Dockge to manage Docker containers.
- Stacks directory: Where your docker-compose.yaml files are stored.
## Features
- 🧑‍💼 Manage your `compose.yaml` files
- ⌨️ Interactive Editor for `compose.yaml`
- 🦦 Interactive Web Terminal
- 🏪 Convert `docker run ...` commands into `compose.yaml`
- 📙 File based structure - doesn't kidnap your compose files
- 🚄 Reactive - Everything is just responsive
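As an illustration of the `docker run ...` conversion, here is a hand-written sketch of the kind of `compose.yaml` Dockge produces for a simple command (the exact YAML Dockge generates may differ by version):

```shell
#!/bin/sh
# The command:
#   docker run -d -p 8080:80 --name web \
#     -v web_data:/usr/share/nginx/html nginx:alpine
# roughly corresponds to this compose.yaml:
cat <<'EOF' > compose.yaml
services:
  web:
    image: nginx:alpine
    container_name: web
    ports:
      - "8080:80"
    volumes:
      - web_data:/usr/share/nginx/html
volumes:
  web_data:
EOF
grep -q "image: nginx:alpine" compose.yaml && echo "compose.yaml written"
```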
## Security Notes
- Dockge requires access to the Docker socket, which grants it full control over Docker.
- Only run Dockge on trusted networks.
- Consider using authentication if exposing to the internet.
- The default setup stores data in a named volume for persistence.
## First Run
On first run, you will be prompted to create an admin account. Make sure to use a strong password.
## License
Dockge is licensed under the MIT License.

src/dockge/README.zh.md

@@ -0,0 +1,52 @@
# Dockge
[English](./README.md) | [中文](./README.zh.md)
This service deploys Dockge, a fancy, easy-to-use and reactive self-hosted docker compose stack-oriented manager.
## Services
- `dockge`: The Dockge web interface for managing Docker Compose stacks.
## Environment Variables
| Variable Name     | Description                           | Default Value |
| ----------------- | ------------------------------------- | ------------- |
| DOCKGE_VERSION    | Dockge image version                  | `1`           |
| PORT_OVERRIDE     | Host port mapping                     | `5001`        |
| STACKS_DIR        | Directory on host for storing stacks  | `./stacks`    |
| DOCKGE_STACKS_DIR | Directory inside container for stacks | `/opt/stacks` |
| PUID              | User ID to run the service            | `1000`        |
| PGID              | Group ID to run the service           | `1000`        |
Please modify the `.env` file as needed for your use case.
## Volumes
- `dockge_data`: A volume for storing Dockge application data.
- Docker socket: Mounted to allow Dockge to manage Docker containers.
- Stacks directory: Where your docker-compose.yaml files are stored.
## Features
- 🧑‍💼 Manage your `compose.yaml` files
- ⌨️ Interactive Editor for `compose.yaml`
- 🦦 Interactive Web Terminal
- 🏪 Convert `docker run ...` commands into `compose.yaml`
- 📙 File based structure - doesn't kidnap your compose files
- 🚄 Reactive - Everything is just responsive
## Security Notes
- Dockge requires access to the Docker socket, which grants it full control over Docker.
- Only run Dockge on trusted networks.
- Consider using authentication if exposing to the internet.
- The default setup stores data in a named volume for persistence.
## First Run
On first run, you will be prompted to create an admin account. Make sure to use a strong password.
## License
Dockge is licensed under the MIT License.


@@ -0,0 +1,38 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
dockge:
<<: *default
image: louislam/dockge:${DOCKGE_VERSION:-1}
container_name: dockge
ports:
- "${PORT_OVERRIDE:-5001}:5001"
volumes:
- *localtime
- *timezone
- /var/run/docker.sock:/var/run/docker.sock
- dockge_data:/app/data
- ${STACKS_DIR:-./stacks}:/opt/stacks
environment:
- DOCKGE_STACKS_DIR=${DOCKGE_STACKS_DIR:-/opt/stacks}
- PUID=${PUID:-1000}
- PGID=${PGID:-1000}
deploy:
resources:
limits:
cpus: '1.0'
memory: 512M
reservations:
cpus: '0.25'
memory: 256M
volumes:
dockge_data:


@@ -0,0 +1,25 @@
# Firecrawl version
FIRECRAWL_VERSION="v1.16.0"
# Redis version
REDIS_VERSION="7.4.2-alpine"
# Playwright version
PLAYWRIGHT_VERSION="latest"
# Redis configuration
REDIS_PASSWORD="firecrawl"
# Firecrawl configuration
NUM_WORKERS_PER_QUEUE=8
SCRAPE_RATE_LIMIT_TOKEN_BUCKET_SIZE=20
SCRAPE_RATE_LIMIT_TOKEN_BUCKET_REFILL=1
# Playwright configuration (optional)
PROXY_SERVER=""
PROXY_USERNAME=""
PROXY_PASSWORD=""
BLOCK_MEDIA="true"
# Port overrides
FIRECRAWL_PORT_OVERRIDE=3002

src/firecrawl/README.md

@@ -0,0 +1,96 @@
# Firecrawl
[English](./README.md) | [中文](./README.zh.md)
This service deploys Firecrawl, a web scraping and crawling API powered by Playwright and headless browsers.
## Services
- `firecrawl`: The main Firecrawl API server.
- `redis`: Redis for job queue and caching.
- `playwright`: Playwright service for browser automation.
## Environment Variables
| Variable Name | Description | Default Value |
| ------------------------------------- | ----------------------------------- | -------------- |
| FIRECRAWL_VERSION | Firecrawl image version | `v1.16.0` |
| REDIS_VERSION | Redis image version | `7.4.2-alpine` |
| PLAYWRIGHT_VERSION | Playwright service version | `latest` |
| REDIS_PASSWORD | Redis password | `firecrawl` |
| NUM_WORKERS_PER_QUEUE | Number of workers per queue | `8` |
| SCRAPE_RATE_LIMIT_TOKEN_BUCKET_SIZE | Token bucket size for rate limiting | `20` |
| SCRAPE_RATE_LIMIT_TOKEN_BUCKET_REFILL | Token refill rate per second | `1` |
| PROXY_SERVER | Proxy server URL (optional) | `""` |
| PROXY_USERNAME | Proxy username (optional) | `""` |
| PROXY_PASSWORD | Proxy password (optional) | `""` |
| BLOCK_MEDIA | Block media content | `true` |
| FIRECRAWL_PORT_OVERRIDE | Firecrawl API port | `3002` |
Please modify the `.env` file as needed for your use case.
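The two `SCRAPE_RATE_LIMIT_*` variables describe what appears to be a token bucket: the bucket holds at most `BUCKET_SIZE` tokens, refills at `BUCKET_REFILL` tokens per second, and each scrape spends a token. A minimal sketch of the refill arithmetic, using the defaults above (integer seconds; an illustration of the sizing, not Firecrawl's actual implementation):

```shell
#!/bin/sh
# Token bucket with capacity 20, refilling 1 token/sec (the defaults).
SIZE=20
REFILL=1
tokens=0     # bucket drained by a burst of 20 scrapes
elapsed=8    # seconds since the burst

# Refill, capped at the bucket capacity.
tokens=$((tokens + REFILL * elapsed))
[ "$tokens" -gt "$SIZE" ] && tokens=$SIZE
echo "tokens available after ${elapsed}s: $tokens"   # -> 8
```

So after a full burst, sustained throughput is bounded by the refill rate, while the bucket size sets the allowed burst.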
## Volumes
- `redis_data`: Redis data storage for job queues and caching.
## Usage
### Start the Services
```bash
docker compose up -d
```
### Access the API
The Firecrawl API will be available at:
```text
http://localhost:3002
```
### Example API Calls
**Scrape a Single Page:**
```bash
curl -X POST http://localhost:3002/v0/scrape \
-H "Content-Type: application/json" \
-d '{
"url": "https://example.com"
}'
```
**Crawl a Website:**
```bash
curl -X POST http://localhost:3002/v0/crawl \
-H "Content-Type: application/json" \
-d '{
"url": "https://example.com",
"crawlerOptions": {
"limit": 100
}
}'
```
## Features
- **Web Scraping**: Extract clean content from any webpage
- **Web Crawling**: Recursively crawl entire websites
- **JavaScript Rendering**: Full support for dynamic JavaScript-rendered pages
- **Markdown Output**: Clean markdown conversion of web content
- **Rate Limiting**: Built-in rate limiting to prevent abuse
- **Proxy Support**: Optional proxy configuration for all requests
## Notes
- The service uses Playwright for browser automation, supporting complex web pages
- Redis is used for job queuing and caching
- Rate limiting is configurable via environment variables
- For production use, consider scaling the number of workers
- BLOCK_MEDIA can reduce memory usage by blocking images/videos
## License
Firecrawl is licensed under the AGPL-3.0 License.


@@ -0,0 +1,96 @@
# Firecrawl
[English](./README.md) | [中文](./README.zh.md)
This service deploys Firecrawl, a web scraping and crawling API powered by Playwright and headless browsers.
## Services
- `firecrawl`: The main Firecrawl API server.
- `redis`: Redis for job queue and caching.
- `playwright`: Playwright service for browser automation.
## Environment Variables
| Variable Name                         | Description                         | Default Value  |
| ------------------------------------- | ----------------------------------- | -------------- |
| FIRECRAWL_VERSION                     | Firecrawl image version             | `v1.16.0`      |
| REDIS_VERSION                         | Redis image version                 | `7.4.2-alpine` |
| PLAYWRIGHT_VERSION                    | Playwright service version          | `latest`       |
| REDIS_PASSWORD                        | Redis password                      | `firecrawl`    |
| NUM_WORKERS_PER_QUEUE                 | Number of workers per queue         | `8`            |
| SCRAPE_RATE_LIMIT_TOKEN_BUCKET_SIZE   | Token bucket size for rate limiting | `20`           |
| SCRAPE_RATE_LIMIT_TOKEN_BUCKET_REFILL | Token refill rate per second        | `1`            |
| PROXY_SERVER                          | Proxy server URL (optional)         | `""`           |
| PROXY_USERNAME                        | Proxy username (optional)           | `""`           |
| PROXY_PASSWORD                        | Proxy password (optional)           | `""`           |
| BLOCK_MEDIA                           | Block media content                 | `true`         |
| FIRECRAWL_PORT_OVERRIDE               | Firecrawl API port                  | `3002`         |
Please modify the `.env` file as needed for your use case.
## Volumes
- `redis_data`: Redis data storage for job queues and caching.
## Usage
### Start the Services
```bash
docker compose up -d
```
### Access the API
The Firecrawl API will be available at:
```text
http://localhost:3002
```
### Example API Calls
**Scrape a Single Page:**
```bash
curl -X POST http://localhost:3002/v0/scrape \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com"
  }'
```
**Crawl a Website:**
```bash
curl -X POST http://localhost:3002/v0/crawl \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com",
    "crawlerOptions": {
      "limit": 100
    }
  }'
```
## Features
- **Web Scraping**: Extract clean content from any webpage
- **Web Crawling**: Recursively crawl entire websites
- **JavaScript Rendering**: Full support for dynamic JavaScript-rendered pages
- **Markdown Output**: Clean markdown conversion of web content
- **Rate Limiting**: Built-in rate limiting to prevent abuse
- **Proxy Support**: Optional proxy configuration for all requests
## Notes
- The service uses Playwright for browser automation, supporting complex web pages
- Redis is used for job queuing and caching
- Rate limiting is configurable via environment variables
- For production use, consider scaling the number of workers
- BLOCK_MEDIA can reduce memory usage by blocking images/videos
## License
Firecrawl is licensed under the AGPL-3.0 License.

x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
firecrawl:
<<: *default
image: mendableai/firecrawl:${FIRECRAWL_VERSION:-v1.16.0}
container_name: firecrawl
ports:
- "${FIRECRAWL_PORT_OVERRIDE:-3002}:3002"
environment:
REDIS_URL: redis://:${REDIS_PASSWORD:-firecrawl}@redis:6379
PLAYWRIGHT_MICROSERVICE_URL: http://playwright:3000
PORT: 3002
NUM_WORKERS_PER_QUEUE: ${NUM_WORKERS_PER_QUEUE:-8}
SCRAPE_RATE_LIMIT_TOKEN_BUCKET_SIZE: ${SCRAPE_RATE_LIMIT_TOKEN_BUCKET_SIZE:-20}
SCRAPE_RATE_LIMIT_TOKEN_BUCKET_REFILL: ${SCRAPE_RATE_LIMIT_TOKEN_BUCKET_REFILL:-1}
depends_on:
- redis
- playwright
deploy:
resources:
limits:
cpus: '2.0'
memory: 4G
reservations:
cpus: '1.0'
memory: 2G
redis:
<<: *default
image: redis:${REDIS_VERSION:-7.4.2-alpine}
container_name: firecrawl-redis
command: redis-server --requirepass ${REDIS_PASSWORD:-firecrawl} --appendonly yes
volumes:
- *localtime
- *timezone
- redis_data:/data
deploy:
resources:
limits:
cpus: '1.0'
memory: 512M
reservations:
cpus: '0.5'
memory: 256M
playwright:
<<: *default
image: mendableai/firecrawl-playwright:${PLAYWRIGHT_VERSION:-latest}
container_name: firecrawl-playwright
environment:
PORT: 3000
PROXY_SERVER: ${PROXY_SERVER:-}
PROXY_USERNAME: ${PROXY_USERNAME:-}
PROXY_PASSWORD: ${PROXY_PASSWORD:-}
BLOCK_MEDIA: ${BLOCK_MEDIA:-true}
deploy:
resources:
limits:
cpus: '2.0'
memory: 2G
reservations:
cpus: '1.0'
memory: 1G
volumes:
redis_data:

src/gpustack/.env.example
# GPUStack version
GPUSTACK_VERSION="v0.5.3"
# Server configuration
GPUSTACK_HOST="0.0.0.0"
GPUSTACK_PORT=80
GPUSTACK_DEBUG=false
# Admin bootstrap password
GPUSTACK_BOOTSTRAP_PASSWORD="admin"
# Token for worker registration (auto-generated if not set)
GPUSTACK_TOKEN=""
# Hugging Face token for model downloads
HF_TOKEN=""
# Port to bind to on the host machine
GPUSTACK_PORT_OVERRIDE=80

src/gpustack/README.md
# GPUStack
[English](./README.md) | [中文](./README.zh.md)
This service deploys GPUStack, an open-source GPU cluster manager for running large language models (LLMs).
## Services
- `gpustack`: GPUStack server with built-in worker
## Environment Variables
| Variable Name | Description | Default Value |
| --------------------------- | -------------------------------------- | ------------- |
| GPUSTACK_VERSION | GPUStack image version | `v0.5.3` |
| GPUSTACK_HOST | Host to bind the server to | `0.0.0.0` |
| GPUSTACK_PORT | Port to bind the server to | `80` |
| GPUSTACK_DEBUG | Enable debug mode | `false` |
| GPUSTACK_BOOTSTRAP_PASSWORD | Password for the bootstrap admin user | `admin` |
| GPUSTACK_TOKEN | Token for worker registration | (auto) |
| HF_TOKEN | Hugging Face token for model downloads | `""` |
| GPUSTACK_PORT_OVERRIDE | Host port mapping | `80` |
Please modify the `.env` file as needed for your use case.
## Volumes
- `gpustack_data`: Data directory for GPUStack
## GPU Support
### NVIDIA GPU
Uncomment the GPU-related configuration in `docker-compose.yaml`:
```yaml
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 1
capabilities: [gpu]
runtime: nvidia
```
### AMD GPU (ROCm)
Use the ROCm-specific image:
```yaml
image: gpustack/gpustack:v0.5.3-rocm
```
## Usage
### Start GPUStack
```bash
docker compose up -d
```
### Access
- Web UI: <http://localhost:80>
- Default credentials: `admin` / `admin` (configured via `GPUSTACK_BOOTSTRAP_PASSWORD`)
### Deploy a Model
1. Log in to the web UI
2. Navigate to Models
3. Click "Deploy Model"
4. Select a model from the catalog or add a custom model
5. Configure the model parameters
6. Click "Deploy"
### Add Worker Nodes
To add more GPU nodes to the cluster:
1. Get the registration token from the server:
```bash
docker exec gpustack cat /var/lib/gpustack/token
```
2. Start a worker on another node:
```bash
docker run -d --name gpustack-worker \
--gpus all \
--network host \
--ipc host \
-v gpustack-data:/var/lib/gpustack \
gpustack/gpustack:v0.5.3 \
--server-url http://your-server-ip:80 \
--token YOUR_TOKEN
```
## Features
- **Model Management**: Deploy and manage LLM models from Hugging Face, ModelScope, or custom sources
- **GPU Scheduling**: Automatic GPU allocation and scheduling
- **Multi-Backend**: Supports llama-box, vLLM, and other backends
- **API Compatible**: OpenAI-compatible API endpoint
- **Web UI**: User-friendly web interface for management
- **Monitoring**: Resource usage and model metrics
## API Usage
GPUStack provides an OpenAI-compatible API:
```bash
curl http://localhost:80/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"model": "llama-3.2-3b-instruct",
"messages": [{"role": "user", "content": "Hello!"}]
}'
```
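The same call can be issued from Python. The sketch below only builds the request: the endpoint path follows the OpenAI convention shown above, and the model name and API key are placeholders.

```python
import json

def build_chat_request(base_url, api_key, model, user_message):
    """Build (url, headers, body) for an OpenAI-compatible chat completion call."""
    url = f"{base_url}/v1/chat/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    })
    return url, headers, body

url, headers, body = build_chat_request(
    "http://localhost:80", "YOUR_API_KEY", "llama-3.2-3b-instruct", "Hello!"
)
# Send with any HTTP client, e.g. requests.post(url, headers=headers, data=body)
print(url)  # http://localhost:80/v1/chat/completions
```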
## Notes
- For production use, change the default password
- GPU support requires NVIDIA Docker runtime or AMD ROCm support
- Model downloads can be large (several GB); ensure sufficient disk space
- First model deployment may take time as it downloads the model files
## Security
- Change default admin password after first login
- Use strong passwords for API keys
- Consider using TLS for production deployments
- Restrict network access to trusted sources
## License
GPUStack is licensed under Apache License 2.0. See [GPUStack GitHub](https://github.com/gpustack/gpustack) for more information.

x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
gpustack:
<<: *default
image: gpustack/gpustack:${GPUSTACK_VERSION:-v0.5.3}
container_name: gpustack
ports:
- "${GPUSTACK_PORT_OVERRIDE:-80}:80"
volumes:
- *localtime
- *timezone
- gpustack_data:/var/lib/gpustack
environment:
- GPUSTACK_DEBUG=${GPUSTACK_DEBUG:-false}
- GPUSTACK_HOST=${GPUSTACK_HOST:-0.0.0.0}
- GPUSTACK_PORT=${GPUSTACK_PORT:-80}
- GPUSTACK_TOKEN=${GPUSTACK_TOKEN:-}
- GPUSTACK_BOOTSTRAP_PASSWORD=${GPUSTACK_BOOTSTRAP_PASSWORD:-admin}
- HF_TOKEN=${HF_TOKEN:-}
deploy:
resources:
limits:
cpus: '2.0'
memory: 4G
reservations:
cpus: '1.0'
memory: 2G
      # Uncomment below for NVIDIA GPU support (merge into the reservations above):
      # reservations:
      #   devices:
      #     - driver: nvidia
      #       count: 1
      #       capabilities: [gpu]
    # ...and enable the NVIDIA runtime:
    # runtime: nvidia
volumes:
gpustack_data:

src/halo/README.md
# Halo
[English](./README.md) | [中文](./README.zh.md)
This service deploys Halo, a powerful and easy-to-use open-source blogging and content management system.
## Services
- `halo`: The main Halo application server.
- `halo-db`: PostgreSQL database for Halo.
## Environment Variables
| Variable Name | Description | Default Value |
| ------------------------ | -------------------------------------------------------------- | ----------------------- |
| HALO_VERSION | Halo image version | `2.21.9` |
| HALO_PORT | Host port mapping for Halo web interface | `8090` |
| POSTGRES_VERSION | PostgreSQL image version | `17.2-alpine3.21` |
| POSTGRES_USER | PostgreSQL username | `postgres` |
| POSTGRES_PASSWORD | PostgreSQL password (required) | `postgres` |
| POSTGRES_DB | PostgreSQL database name | `halo` |
| SPRING_R2DBC_URL | R2DBC connection URL | (auto-configured) |
| SPRING_SQL_INIT_PLATFORM | SQL platform type | `postgresql` |
| HALO_EXTERNAL_URL | External URL for Halo | `http://localhost:8090` |
| HALO_ADMIN_USERNAME | Initial admin username | `admin` |
| HALO_ADMIN_PASSWORD | Initial admin password (leave empty to set during first login) | `""` |
Please create a `.env` file and modify it as needed for your use case.
## Volumes
- `halo_data`: A volume for storing Halo application data.
- `halo_db_data`: A volume for storing PostgreSQL data.
## Getting Started
1. (Optional) Create a `.env` file to customize settings:
```env
POSTGRES_PASSWORD=your-secure-password
HALO_EXTERNAL_URL=https://yourdomain.com
```
2. Start the services:
```bash
docker compose up -d
```
3. Access Halo at `http://localhost:8090`
4. Follow the setup wizard to create your admin account (if `HALO_ADMIN_PASSWORD` is not set)
## Initial Setup
On first access, you'll be guided through the initial setup:
- Set your admin account credentials (if not configured via environment)
- Configure site information
- Choose and install a theme from the marketplace
## Documentation
For more information, visit the [official Halo documentation](https://docs.halo.run).
## Theme and Plugin Marketplace
Visit the [Halo Application Store](https://www.halo.run/store/apps) to browse themes and plugins.
## Security Notes
- Change default database password in production
- Use HTTPS in production environments
- Set a strong admin password
- Regularly backup both data volumes
- Keep Halo and PostgreSQL updated to the latest stable versions

src/halo/README.zh.md
# Halo
[English](./README.md) | [中文](./README.zh.md)
此服务部署 Halo,一个强大易用的开源博客和内容管理系统。
## 服务
- `halo`: Halo 主应用服务器。
- `halo-db`: Halo 的 PostgreSQL 数据库。
## 环境变量
| 变量名 | 描述 | 默认值 |
| ------------------------ | -------------------------------------- | ----------------------- |
| HALO_VERSION | Halo 镜像版本 | `2.21.9` |
| HALO_PORT | Halo Web 界面的主机端口映射 | `8090` |
| POSTGRES_VERSION | PostgreSQL 镜像版本 | `17.2-alpine3.21` |
| POSTGRES_USER | PostgreSQL 用户名 | `postgres` |
| POSTGRES_PASSWORD | PostgreSQL 密码(必需) | `postgres` |
| POSTGRES_DB | PostgreSQL 数据库名 | `halo` |
| SPRING_R2DBC_URL | R2DBC 连接 URL | (自动配置) |
| SPRING_SQL_INIT_PLATFORM | SQL 平台类型 | `postgresql` |
| HALO_EXTERNAL_URL | Halo 的外部 URL | `http://localhost:8090` |
| HALO_ADMIN_USERNAME | 初始管理员用户名 | `admin` |
| HALO_ADMIN_PASSWORD | 初始管理员密码(留空则在首次登录时设置) | `""` |
请创建 `.env` 文件并根据需要进行修改。
## 数据卷
- `halo_data`: 用于存储 Halo 应用数据的卷。
- `halo_db_data`: 用于存储 PostgreSQL 数据的卷。
## 快速开始
1. (可选)创建 `.env` 文件以自定义设置:
```env
POSTGRES_PASSWORD=your-secure-password
HALO_EXTERNAL_URL=https://yourdomain.com
```
2. 启动服务:
```bash
docker compose up -d
```
3. 访问 `http://localhost:8090`
4. 按照设置向导创建管理员账户(如果未通过 `HALO_ADMIN_PASSWORD` 配置)
## 初始设置
首次访问时,您将看到初始设置向导:
- 设置管理员账户凭据(如果未通过环境变量配置)
- 配置站点信息
- 从应用市场选择并安装主题
## 文档
更多信息请访问 [Halo 官方文档](https://docs.halo.run)。
## 主题和插件市场
访问 [Halo 应用市场](https://www.halo.run/store/apps) 浏览主题和插件。
## 安全提示
- 在生产环境中更改默认数据库密码
- 在生产环境中使用 HTTPS
- 设置强管理员密码
- 定期备份两个数据卷
- 保持 Halo 和 PostgreSQL 更新到最新稳定版本

x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
halo:
<<: *default
image: halohub/halo:${HALO_VERSION:-2.21.9}
container_name: halo
ports:
- "${HALO_PORT:-8090}:8090"
volumes:
- halo_data:/root/.halo2
environment:
- SPRING_R2DBC_URL=${SPRING_R2DBC_URL:-r2dbc:pool:postgresql://halo-db:5432/halo}
- SPRING_R2DBC_USERNAME=${POSTGRES_USER:-postgres}
- SPRING_R2DBC_PASSWORD=${POSTGRES_PASSWORD:-postgres}
- SPRING_SQL_INIT_PLATFORM=${SPRING_SQL_INIT_PLATFORM:-postgresql}
- HALO_EXTERNAL_URL=${HALO_EXTERNAL_URL:-http://localhost:8090}
- HALO_SECURITY_INITIALIZER_SUPERADMINUSERNAME=${HALO_ADMIN_USERNAME:-admin}
- HALO_SECURITY_INITIALIZER_SUPERADMINPASSWORD=${HALO_ADMIN_PASSWORD:-}
depends_on:
halo-db:
condition: service_healthy
deploy:
resources:
limits:
cpus: '2.0'
memory: 2G
reservations:
cpus: '0.5'
memory: 512M
halo-db:
<<: *default
image: postgres:${POSTGRES_VERSION:-17.2-alpine3.21}
container_name: halo-db
environment:
- POSTGRES_USER=${POSTGRES_USER:-postgres}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-postgres}
- POSTGRES_DB=${POSTGRES_DB:-halo}
- PGUSER=${POSTGRES_USER:-postgres}
volumes:
- halo_db_data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready"]
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
deploy:
resources:
limits:
cpus: '1.0'
memory: 1G
reservations:
cpus: '0.25'
memory: 256M
volumes:
halo_data:
halo_db_data:

src/harbor/README.md
# Harbor
[Harbor](https://goharbor.io/) is an open source registry that secures artifacts with policies and role-based access control, ensures images are scanned and free from vulnerabilities, and signs images as trusted.
## Features
- Security and Vulnerability Analysis: Scan images for vulnerabilities
- Content Trust: Sign and verify images
- Policy-based Replication: Replicate images across registries
- Role-based Access Control: Fine-grained access control
- Webhook Notifications: Notify external services on events
- Multi-tenancy: Support for multiple projects
## Quick Start
Start Harbor:
```bash
docker compose up -d
```
## Configuration
### Environment Variables
- `HARBOR_VERSION`: Harbor version (default: `v2.12.0`)
- `HARBOR_HTTP_PORT_OVERRIDE`: HTTP port (default: `80`)
- `HARBOR_HTTPS_PORT_OVERRIDE`: HTTPS port (default: `443`)
- `HARBOR_ADMIN_PASSWORD`: Admin password (default: `Harbor12345`)
- `HARBOR_DB_PASSWORD`: Database password (default: `password`)
- `HARBOR_CORE_SECRET`: Core service secret
- `HARBOR_JOBSERVICE_SECRET`: Job service secret
- `HARBOR_REGISTRY_SECRET`: Registry HTTP secret
- `HARBOR_RELOAD_KEY`: Configuration reload key
## Access
- Web UI: <http://localhost>
- Docker Registry: <http://localhost>
Default credentials:
- Username: `admin`
- Password: `Harbor12345` (or value of `HARBOR_ADMIN_PASSWORD`)
## Usage
### Login to Harbor
```bash
docker login localhost
```
### Push an Image
```bash
docker tag myimage:latest localhost/myproject/myimage:latest
docker push localhost/myproject/myimage:latest
```
### Pull an Image
```bash
docker pull localhost/myproject/myimage:latest
```
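Images pushed to Harbor follow the `<registry>/<project>/<image>:<tag>` naming shown above. A small helper to compose such references (the function is ours, for illustration only):

```python
def harbor_ref(registry, project, image, tag="latest"):
    """Compose a Harbor image reference: <registry>/<project>/<image>:<tag>."""
    for part in (registry, project, image):
        # Each path component must be non-empty and slash-free.
        if not part or "/" in part:
            raise ValueError(f"invalid component: {part!r}")
    return f"{registry}/{project}/{image}:{tag}"

print(harbor_ref("localhost", "myproject", "myimage"))
# localhost/myproject/myimage:latest
```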
## Important Notes
⚠️ **Security Warning**:
- Change the default admin password immediately after first login
- Set secure values for all secret environment variables
- Use HTTPS in production environments
## Components
- **harbor-core**: Core API server
- **harbor-portal**: Web UI
- **harbor-jobservice**: Background job service
- **harbor-registry**: Docker registry
- **harbor-db**: PostgreSQL database
- **harbor-redis**: Redis cache
- **harbor-proxy**: Nginx reverse proxy
## Resources
- Core: 1 CPU, 2G RAM
- JobService: 0.5 CPU, 512M RAM
- Registry: 0.5 CPU, 512M RAM
- Database: 1 CPU, 1G RAM
- Redis: 0.5 CPU, 256M RAM

src/harbor/README.zh.md
# Harbor
[Harbor](https://goharbor.io/) 是一个开源的容器镜像仓库,通过策略和基于角色的访问控制来保护制品,确保镜像经过扫描且没有漏洞,并将镜像签名为可信任的。
## 功能特性
- 安全与漏洞分析:扫描镜像漏洞
- 内容信任:签名和验证镜像
- 基于策略的复制:跨注册表复制镜像
- 基于角色的访问控制:细粒度的访问控制
- Webhook 通知:事件发生时通知外部服务
- 多租户:支持多个项目
## 快速开始
启动 Harbor
```bash
docker compose up -d
```
## 配置
### 环境变量
- `HARBOR_VERSION`: Harbor 版本(默认:`v2.12.0`
- `HARBOR_HTTP_PORT_OVERRIDE`: HTTP 端口(默认:`80`
- `HARBOR_HTTPS_PORT_OVERRIDE`: HTTPS 端口(默认:`443`
- `HARBOR_ADMIN_PASSWORD`: 管理员密码(默认:`Harbor12345`
- `HARBOR_DB_PASSWORD`: 数据库密码(默认:`password`
- `HARBOR_CORE_SECRET`: 核心服务密钥
- `HARBOR_JOBSERVICE_SECRET`: 作业服务密钥
- `HARBOR_REGISTRY_SECRET`: 注册表 HTTP 密钥
- `HARBOR_RELOAD_KEY`: 配置重载密钥
## 访问
- Web UI: <http://localhost>
- Docker 镜像仓库: <http://localhost>
默认凭据:
- 用户名:`admin`
- 密码:`Harbor12345`(或 `HARBOR_ADMIN_PASSWORD` 的值)
## 使用方法
### 登录到 Harbor
```bash
docker login localhost
```
### 推送镜像
```bash
docker tag myimage:latest localhost/myproject/myimage:latest
docker push localhost/myproject/myimage:latest
```
### 拉取镜像
```bash
docker pull localhost/myproject/myimage:latest
```
## 重要提示
⚠️ **安全警告**
- 首次登录后立即更改默认管理员密码
- 为所有密钥环境变量设置安全的值
- 在生产环境中使用 HTTPS
## 组件
- **harbor-core**: 核心 API 服务器
- **harbor-portal**: Web UI
- **harbor-jobservice**: 后台作业服务
- **harbor-registry**: Docker 镜像仓库
- **harbor-db**: PostgreSQL 数据库
- **harbor-redis**: Redis 缓存
- **harbor-proxy**: Nginx 反向代理
## 资源配置
- Core: 1 CPU2G 内存
- JobService: 0.5 CPU512M 内存
- Registry: 0.5 CPU512M 内存
- Database: 1 CPU1G 内存
- Redis: 0.5 CPU256M 内存

src/kibana/README.md
# Kibana
[Kibana](https://www.elastic.co/kibana) is a free and open user interface that lets you visualize your Elasticsearch data and navigate the Elastic Stack.
## Features
- Data Visualization: Create beautiful visualizations and dashboards
- Search and Filter: Powerful search capabilities
- Machine Learning: Detect anomalies and patterns
- Alerting: Set up alerts based on your data
- Security: User authentication and authorization
## Quick Start
Start Kibana (requires Elasticsearch):
```bash
docker compose up -d
```
## Configuration
### Environment Variables
- `KIBANA_VERSION`: Kibana version (default: `8.16.1`)
- `KIBANA_PORT_OVERRIDE`: HTTP port (default: `5601`)
- `ELASTICSEARCH_HOSTS`: Elasticsearch hosts (default: `http://elasticsearch:9200`)
- `ELASTICSEARCH_USERNAME`: Elasticsearch username
- `ELASTICSEARCH_PASSWORD`: Elasticsearch password
- `KIBANA_SECURITY_ENABLED`: Enable security (default: `false`)
- `KIBANA_ENCRYPTION_KEY`: Encryption key for saved objects
- `KIBANA_LOG_LEVEL`: Log level (default: `info`)
## Access
- Web UI: <http://localhost:5601>
## Prerequisites
Kibana requires Elasticsearch to be running. Make sure Elasticsearch is accessible at the configured `ELASTICSEARCH_HOSTS`.
## Custom Configuration
Uncomment the configuration volume in `docker-compose.yaml` and create `kibana.yml`:
```yaml
server.name: kibana
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://elasticsearch:9200"]
monitoring.ui.container.elasticsearch.enabled: true
```
## Health Check
Check Kibana status:
```bash
curl http://localhost:5601/api/status
```
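In a script, the JSON returned by `/api/status` can be inspected directly. The sketch below parses a trimmed sample response; the `status.overall.level` field matches Kibana 8.x, but verify against your version:

```python
import json

# Trimmed sample of a Kibana 8.x /api/status response (illustrative only).
sample = '{"name": "kibana", "status": {"overall": {"level": "available"}}}'

def kibana_healthy(payload: str) -> bool:
    """Return True when the overall status level reports 'available'."""
    status = json.loads(payload)
    return status.get("status", {}).get("overall", {}).get("level") == "available"

print(kibana_healthy(sample))  # True
```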
## Resources
- Resource Limits: 1 CPU, 1G RAM
- Resource Reservations: 0.25 CPU, 512M RAM
## Common Tasks
### Create Index Pattern
1. Navigate to Management → Stack Management → Index Patterns
2. Click "Create index pattern"
3. Enter your index pattern (e.g., `logstash-*`)
4. Select the time field
5. Click "Create index pattern"
### Create Visualization
1. Navigate to Analytics → Visualize Library
2. Click "Create visualization"
3. Select visualization type
4. Configure the visualization
5. Save the visualization
## Integration
Kibana works with:
- Elasticsearch (required)
- Logstash (optional)
- Beats (optional)
- APM Server (optional)

src/kibana/README.zh.md
# Kibana
[Kibana](https://www.elastic.co/kibana) 是一个免费且开源的用户界面,可让您可视化 Elasticsearch 数据并浏览 Elastic Stack。
## 功能特性
- 数据可视化:创建美观的可视化和仪表板
- 搜索和过滤:强大的搜索功能
- 机器学习:检测异常和模式
- 告警:基于数据设置告警
- 安全性:用户身份验证和授权
## 快速开始
启动 Kibana需要 Elasticsearch
```bash
docker compose up -d
```
## 配置
### 环境变量
- `KIBANA_VERSION`: Kibana 版本(默认:`8.16.1`
- `KIBANA_PORT_OVERRIDE`: HTTP 端口(默认:`5601`
- `ELASTICSEARCH_HOSTS`: Elasticsearch 主机(默认:`http://elasticsearch:9200`
- `ELASTICSEARCH_USERNAME`: Elasticsearch 用户名
- `ELASTICSEARCH_PASSWORD`: Elasticsearch 密码
- `KIBANA_SECURITY_ENABLED`: 启用安全(默认:`false`
- `KIBANA_ENCRYPTION_KEY`: 保存对象的加密密钥
- `KIBANA_LOG_LEVEL`: 日志级别(默认:`info`
## 访问
- Web UI: <http://localhost:5601>
## 前置要求
Kibana 需要运行 Elasticsearch。确保 Elasticsearch 在配置的 `ELASTICSEARCH_HOSTS` 可访问。
## 自定义配置
`docker-compose.yaml` 中取消配置卷的注释,并创建 `kibana.yml`
```yaml
server.name: kibana
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://elasticsearch:9200"]
monitoring.ui.container.elasticsearch.enabled: true
```
## 健康检查
检查 Kibana 状态:
```bash
curl http://localhost:5601/api/status
```
## 资源配置
- 资源限制1 CPU1G 内存
- 资源预留0.25 CPU512M 内存
## 常见任务
### 创建索引模式
1. 导航到 Management → Stack Management → Index Patterns
2. 点击 "Create index pattern"
3. 输入索引模式(例如:`logstash-*`
4. 选择时间字段
5. 点击 "Create index pattern"
### 创建可视化
1. 导航到 Analytics → Visualize Library
2. 点击 "Create visualization"
3. 选择可视化类型
4. 配置可视化
5. 保存可视化
## 集成
Kibana 与以下组件配合使用:
- Elasticsearch必需
- Logstash可选
- Beats可选
- APM Server可选

src/kodbox/README.md
# Kodbox
[English](./README.md) | [中文](./README.zh.md)
This service deploys Kodbox, a powerful web-based file manager and cloud storage platform with Windows-like user experience.
## Services
- `kodbox`: The main Kodbox application server.
- `kodbox-db`: MySQL database for Kodbox.
- `kodbox-redis`: Redis for caching and session management.
## Environment Variables
| Variable Name | Description | Default Value |
| ------------------- | ------------------------------------------ | ------------------ |
| KODBOX_VERSION | Kodbox image version | `1.62` |
| KODBOX_PORT | Host port mapping for Kodbox web interface | `80` |
| MYSQL_VERSION | MySQL image version | `9.4.0` |
| MYSQL_HOST | MySQL host | `kodbox-db` |
| MYSQL_PORT | MySQL port | `3306` |
| MYSQL_DATABASE | MySQL database name | `kodbox` |
| MYSQL_USER | MySQL username | `kodbox` |
| MYSQL_PASSWORD | MySQL password | `kodbox123` |
| MYSQL_ROOT_PASSWORD | MySQL root password | `root123` |
| REDIS_VERSION | Redis image version | `8.2.1-alpine3.22` |
| REDIS_HOST | Redis host | `kodbox-redis` |
| REDIS_PORT | Redis port | `6379` |
| REDIS_PASSWORD | Redis password (leave empty for no auth) | `""` |
Please create a `.env` file and modify it as needed for your use case.
## Volumes
- `kodbox_data`: A volume for storing Kodbox application and user files.
- `kodbox_db_data`: A volume for storing MySQL data.
- `kodbox_redis_data`: A volume for storing Redis data.
## Getting Started
1. (Optional) Create a `.env` file to customize settings:
```env
KODBOX_PORT=8080
MYSQL_PASSWORD=your-secure-password
MYSQL_ROOT_PASSWORD=your-secure-root-password
```
2. Start the services:
```bash
docker compose up -d
```
3. Access Kodbox at `http://localhost` (or your configured port)
4. Follow the installation wizard on first access
## Initial Setup
On first access, the installation wizard will guide you through:
- Database configuration (automatically filled from environment variables)
- Admin account creation
- Basic settings configuration
**Note**: If you change database credentials in `.env`, make sure to update them during the installation wizard as well.
## Features
- **Windows-like Interface**: Familiar desktop experience in web browser
- **Multi-cloud Support**: Connect to local disk, FTP, WebDAV, and various cloud storage services
- **File Management**: Full-featured file operations with drag-and-drop support
- **Online Preview**: Preview 100+ file formats including Office, PDF, images, videos
- **Online Editing**: Built-in text editor with syntax highlighting for 120+ languages
- **Team Collaboration**: Fine-grained permission control and file sharing
- **Plugin System**: Extend functionality with plugins
## Documentation
For more information, visit the [official Kodbox documentation](https://doc.kodcloud.com/).
## Security Notes
- Change all default passwords in production
- Use HTTPS in production environments
- Regularly backup all data volumes
- Keep Kodbox, MySQL, and Redis updated to the latest stable versions
- Consider setting a Redis password in production environments

src/kodbox/README.zh.md
# Kodbox
[English](./README.md) | [中文](./README.zh.md)
此服务部署 Kodbox,一个功能强大的 Web 文件管理器和云存储平台,具有类似 Windows 的用户体验。
## 服务
- `kodbox`: Kodbox 主应用服务器。
- `kodbox-db`: Kodbox 的 MySQL 数据库。
- `kodbox-redis`: 用于缓存和会话管理的 Redis。
## 环境变量
| 变量名 | 描述 | 默认值 |
| ------------------- | ------------------------------ | ------------------ |
| KODBOX_VERSION | Kodbox 镜像版本 | `1.62` |
| KODBOX_PORT | Kodbox Web 界面的主机端口映射 | `80` |
| MYSQL_VERSION | MySQL 镜像版本 | `9.4.0` |
| MYSQL_HOST | MySQL 主机 | `kodbox-db` |
| MYSQL_PORT | MySQL 端口 | `3306` |
| MYSQL_DATABASE | MySQL 数据库名 | `kodbox` |
| MYSQL_USER | MySQL 用户名 | `kodbox` |
| MYSQL_PASSWORD | MySQL 密码 | `kodbox123` |
| MYSQL_ROOT_PASSWORD | MySQL root 密码 | `root123` |
| REDIS_VERSION | Redis 镜像版本 | `8.2.1-alpine3.22` |
| REDIS_HOST | Redis 主机 | `kodbox-redis` |
| REDIS_PORT | Redis 端口 | `6379` |
| REDIS_PASSWORD | Redis 密码(留空表示不需要认证) | `""` |
请创建 `.env` 文件并根据需要进行修改。
## 数据卷
- `kodbox_data`: 用于存储 Kodbox 应用和用户文件的卷。
- `kodbox_db_data`: 用于存储 MySQL 数据的卷。
- `kodbox_redis_data`: 用于存储 Redis 数据的卷。
## 快速开始
1. (可选)创建 `.env` 文件以自定义设置:
```env
KODBOX_PORT=8080
MYSQL_PASSWORD=your-secure-password
MYSQL_ROOT_PASSWORD=your-secure-root-password
```
2. 启动服务:
```bash
docker compose up -d
```
3. 访问 `http://localhost`(或您配置的端口)
4. 首次访问时按照安装向导操作
## 初始设置
首次访问时,安装向导将引导您完成:
- 数据库配置(从环境变量自动填充)
- 创建管理员账户
- 基本设置配置
**注意**: 如果您在 `.env` 中更改了数据库凭据,请确保在安装向导中也进行相应更新。
## 功能特性
- **类 Windows 界面**: 在 Web 浏览器中提供熟悉的桌面体验
- **多云支持**: 连接本地磁盘、FTP、WebDAV 和各种云存储服务
- **文件管理**: 支持拖放的全功能文件操作
- **在线预览**: 预览 100+ 种文件格式,包括 Office、PDF、图片、视频
- **在线编辑**: 内置文本编辑器,支持 120+ 种语言的语法高亮
- **团队协作**: 细粒度权限控制和文件共享
- **插件系统**: 通过插件扩展功能
## 文档
更多信息请访问 [Kodbox 官方文档](https://doc.kodcloud.com/)。
## 安全提示
- 在生产环境中更改所有默认密码
- 在生产环境中使用 HTTPS
- 定期备份所有数据卷
- 保持 Kodbox、MySQL 和 Redis 更新到最新稳定版本
- 在生产环境中考虑为 Redis 设置密码

x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
kodbox:
<<: *default
image: kodcloud/kodbox:${KODBOX_VERSION:-1.62}
container_name: kodbox
ports:
- "${KODBOX_PORT:-80}:80"
volumes:
- kodbox_data:/var/www/html
environment:
- MYSQL_HOST=${MYSQL_HOST:-kodbox-db}
- MYSQL_PORT=${MYSQL_PORT:-3306}
- MYSQL_DATABASE=${MYSQL_DATABASE:-kodbox}
- MYSQL_USER=${MYSQL_USER:-kodbox}
- MYSQL_PASSWORD=${MYSQL_PASSWORD:-kodbox123}
- REDIS_HOST=${REDIS_HOST:-kodbox-redis}
- REDIS_PORT=${REDIS_PORT:-6379}
- REDIS_PASSWORD=${REDIS_PASSWORD:-}
depends_on:
kodbox-db:
condition: service_healthy
kodbox-redis:
condition: service_healthy
deploy:
resources:
limits:
cpus: '2.0'
memory: 2G
reservations:
cpus: '0.5'
memory: 256M
kodbox-db:
<<: *default
image: mysql:${MYSQL_VERSION:-9.4.0}
container_name: kodbox-db
environment:
- MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD:-root123}
- MYSQL_DATABASE=${MYSQL_DATABASE:-kodbox}
- MYSQL_USER=${MYSQL_USER:-kodbox}
- MYSQL_PASSWORD=${MYSQL_PASSWORD:-kodbox123}
command:
- --character-set-server=utf8mb4
- --collation-server=utf8mb4_unicode_ci
- --default-authentication-plugin=mysql_native_password
volumes:
- kodbox_db_data:/var/lib/mysql
healthcheck:
test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-u", "root", "-p${MYSQL_ROOT_PASSWORD:-root123}"]
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
deploy:
resources:
limits:
cpus: '1.0'
memory: 1G
reservations:
cpus: '0.25'
memory: 256M
kodbox-redis:
<<: *default
image: redis:${REDIS_VERSION:-8.2.1-alpine3.22}
container_name: kodbox-redis
command:
- redis-server
- --requirepass
- ${REDIS_PASSWORD:-}
volumes:
- kodbox_redis_data:/data
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 5s
timeout: 3s
retries: 5
start_period: 10s
deploy:
resources:
limits:
cpus: '0.50'
memory: 512M
reservations:
cpus: '0.25'
memory: 128M
volumes:
kodbox_data:
kodbox_db_data:
kodbox_redis_data:

src/kuzu/README.md
# Kuzu
Kuzu is an embedded graph database and does not provide an official Docker image for standalone deployment.
Kuzu is designed to be embedded directly in applications. To use it:
1. **Python**: Install via pip
```bash
pip install kuzu
```
2. **C++**: Build from source or use pre-built libraries
3. **Node.js**: Install via npm
```bash
npm install kuzu
```
## Example Usage (Python)
```python
import kuzu
# Create a database
db = kuzu.Database("./test_db")
conn = kuzu.Connection(db)
# Create schema
conn.execute("CREATE NODE TABLE Person(name STRING, age INT64, PRIMARY KEY(name))")
conn.execute("CREATE REL TABLE Knows(FROM Person TO Person)")
# Insert data
conn.execute("CREATE (:Person {name: 'Alice', age: 30})")
conn.execute("CREATE (:Person {name: 'Bob', age: 25})")
conn.execute("MATCH (a:Person), (b:Person) WHERE a.name = 'Alice' AND b.name = 'Bob' CREATE (a)-[:Knows]->(b)")
# Query
result = conn.execute("MATCH (a:Person)-[:Knows]->(b:Person) RETURN a.name, b.name")
while result.has_next():
print(result.get_next())
```
## Reference
- [Kuzu GitHub](https://github.com/kuzudb/kuzu)
- [Kuzu Documentation](https://kuzudb.com)
## Notes
Kuzu is an embedded database and does not run as a standalone service. It's designed to be integrated directly into your application.
For a standalone graph database service, consider:
- [Neo4j](../neo4j/)
- [NebulaGraph](../nebulagraph/)

src/langfuse/README.md
# Langfuse
[English](./README.md) | [中文](./README.zh.md)
This service deploys Langfuse, an open-source LLM engineering platform for observability, metrics, evaluations, and prompt management.
## Services
- `langfuse-server`: The main Langfuse application server.
- `langfuse-db`: PostgreSQL database for Langfuse.
## Environment Variables
| Variable Name | Description | Default Value |
| ------------------------------------- | ----------------------------------------------- | ----------------------- |
| LANGFUSE_VERSION | Langfuse image version | `3.115.0` |
| LANGFUSE_PORT | Host port mapping for Langfuse web interface | `3000` |
| POSTGRES_VERSION | PostgreSQL image version | `17.2-alpine3.21` |
| POSTGRES_USER | PostgreSQL username | `postgres` |
| POSTGRES_PASSWORD | PostgreSQL password | `postgres` |
| POSTGRES_DB | PostgreSQL database name | `langfuse` |
| NEXTAUTH_URL | Public URL of your Langfuse instance | `http://localhost:3000` |
| NEXTAUTH_SECRET | Secret for NextAuth.js (required, generate one) | `""` |
| SALT | Salt for encryption (required, generate one) | `""` |
| TELEMETRY_ENABLED | Enable telemetry | `true` |
| LANGFUSE_ENABLE_EXPERIMENTAL_FEATURES | Enable experimental features | `false` |
**Important**: You must set `NEXTAUTH_SECRET` and `SALT` for production use. Generate them using:
```bash
# For NEXTAUTH_SECRET
openssl rand -base64 32
# For SALT
openssl rand -base64 32
```
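If `openssl` is not available, equivalent secrets can be generated from Python's standard library. A sketch producing the same kind of 32-byte base64 values:

```python
import base64
import os

def gen_secret(num_bytes: int = 32) -> str:
    """Base64-encode `num_bytes` of OS randomness, like `openssl rand -base64 32`."""
    return base64.b64encode(os.urandom(num_bytes)).decode("ascii")

nextauth_secret = gen_secret()
salt = gen_secret()
print(f"NEXTAUTH_SECRET={nextauth_secret}")
print(f"SALT={salt}")
```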
Please create a `.env` file and modify it as needed for your use case.
## Volumes
- `langfuse_db_data`: A volume for storing PostgreSQL data.
## Getting Started
1. Create a `.env` file with required secrets:
```env
NEXTAUTH_SECRET=your-generated-secret-here
SALT=your-generated-salt-here
POSTGRES_PASSWORD=your-secure-password
```
2. Start the services:
```bash
docker compose up -d
```
3. Access Langfuse at `http://localhost:3000`
4. Create your first account on the setup page
## Documentation
For more information, visit the [official Langfuse documentation](https://langfuse.com/docs).
## Security Notes
- Change default passwords in production
- Use strong, randomly generated values for `NEXTAUTH_SECRET` and `SALT`
- Consider using a reverse proxy with SSL/TLS in production
- Regularly backup the PostgreSQL database

src/langfuse/README.zh.md
# Langfuse
[English](./README.md) | [中文](./README.zh.md)
此服务部署 Langfuse,一个用于 LLM 应用可观测性、指标、评估和提示管理的开源平台。
## 服务
- `langfuse-server`: Langfuse 主应用服务器。
- `langfuse-db`: Langfuse 的 PostgreSQL 数据库。
## 环境变量
| 变量名 | 描述 | 默认值 |
| ------------------------------------- | ------------------------------- | ----------------------- |
| LANGFUSE_VERSION | Langfuse 镜像版本 | `3.115.0` |
| LANGFUSE_PORT | Langfuse Web 界面的主机端口映射 | `3000` |
| POSTGRES_VERSION | PostgreSQL 镜像版本 | `17.2-alpine3.21` |
| POSTGRES_USER | PostgreSQL 用户名 | `postgres` |
| POSTGRES_PASSWORD | PostgreSQL 密码 | `postgres` |
| POSTGRES_DB | PostgreSQL 数据库名 | `langfuse` |
| NEXTAUTH_URL | Langfuse 实例的公开 URL | `http://localhost:3000` |
| NEXTAUTH_SECRET | NextAuth.js 密钥(必需,需要生成) | `""` |
| SALT | 加密盐值(必需,需要生成) | `""` |
| TELEMETRY_ENABLED | 启用遥测 | `true` |
| LANGFUSE_ENABLE_EXPERIMENTAL_FEATURES | 启用实验性功能 | `false` |
**重要提示**: 在生产环境中必须设置 `NEXTAUTH_SECRET``SALT`。使用以下命令生成:
```bash
# 生成 NEXTAUTH_SECRET
openssl rand -base64 32
# 生成 SALT
openssl rand -base64 32
```
请创建 `.env` 文件并根据需要进行修改。
## 数据卷
- `langfuse_db_data`: 用于存储 PostgreSQL 数据的卷。
## 快速开始
1. 创建包含必需密钥的 `.env` 文件:
```env
NEXTAUTH_SECRET=your-generated-secret-here
SALT=your-generated-salt-here
POSTGRES_PASSWORD=your-secure-password
```
2. 启动服务:
```bash
docker compose up -d
```
3. 访问 `http://localhost:3000`
4. 在设置页面创建您的第一个账户
## 文档
更多信息请访问 [Langfuse 官方文档](https://langfuse.com/docs)。
## 安全提示
- 在生产环境中更改默认密码
- 为 `NEXTAUTH_SECRET` 和 `SALT` 使用强随机生成的值
- 在生产环境中考虑使用带 SSL/TLS 的反向代理
- 定期备份 PostgreSQL 数据库


@@ -0,0 +1,63 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
langfuse-server:
<<: *default
image: langfuse/langfuse:${LANGFUSE_VERSION:-3.115.0}
container_name: langfuse-server
ports:
- "${LANGFUSE_PORT:-3000}:3000"
environment:
- DATABASE_URL=postgresql://${POSTGRES_USER:-postgres}:${POSTGRES_PASSWORD:-postgres}@langfuse-db:5432/${POSTGRES_DB:-langfuse}
- NEXTAUTH_URL=${NEXTAUTH_URL:-http://localhost:3000}
- NEXTAUTH_SECRET=${NEXTAUTH_SECRET}
- SALT=${SALT}
- TELEMETRY_ENABLED=${TELEMETRY_ENABLED:-true}
- LANGFUSE_ENABLE_EXPERIMENTAL_FEATURES=${LANGFUSE_ENABLE_EXPERIMENTAL_FEATURES:-false}
depends_on:
langfuse-db:
condition: service_healthy
deploy:
resources:
limits:
cpus: '2.0'
memory: 2G
reservations:
cpus: '0.5'
memory: 512M
langfuse-db:
<<: *default
image: postgres:${POSTGRES_VERSION:-17.2-alpine3.21}
container_name: langfuse-db
environment:
- POSTGRES_USER=${POSTGRES_USER:-postgres}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-postgres}
- POSTGRES_DB=${POSTGRES_DB:-langfuse}
volumes:
- langfuse_db_data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-postgres}"]
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
deploy:
resources:
limits:
cpus: '1.0'
memory: 1G
reservations:
cpus: '0.25'
memory: 256M
volumes:
langfuse_db_data:

src/logstash/README.md Normal file

@@ -0,0 +1,96 @@
# Logstash
[Logstash](https://www.elastic.co/logstash) is a free and open server-side data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends it to your favorite "stash."
## Features
- Data Ingestion: Collect data from various sources
- Data Transformation: Parse, filter, and enrich data
- Data Output: Send data to Elasticsearch, databases, or other destinations
- Plugin Ecosystem: Extensive plugin library for inputs, filters, and outputs
## Quick Start
Start Logstash:
```bash
docker compose up -d
```
## Configuration
### Environment Variables
- `LOGSTASH_VERSION`: Logstash version (default: `8.16.1`)
- `LOGSTASH_BEATS_PORT_OVERRIDE`: Beats input port (default: `5044`)
- `LOGSTASH_TCP_PORT_OVERRIDE`: TCP input port (default: `5000`)
- `LOGSTASH_UDP_PORT_OVERRIDE`: UDP input port (default: `5000`)
- `LOGSTASH_HTTP_PORT_OVERRIDE`: HTTP API port (default: `9600`)
- `LOGSTASH_MONITORING_ENABLED`: Enable monitoring (default: `false`)
- `ELASTICSEARCH_HOSTS`: Elasticsearch hosts (default: `http://elasticsearch:9200`)
- `ELASTICSEARCH_USERNAME`: Elasticsearch username
- `ELASTICSEARCH_PASSWORD`: Elasticsearch password
- `LS_JAVA_OPTS`: Java options (default: `-Xmx1g -Xms1g`)
- `LOGSTASH_PIPELINE_WORKERS`: Number of pipeline workers (default: `2`)
- `LOGSTASH_PIPELINE_BATCH_SIZE`: Pipeline batch size (default: `125`)
- `LOGSTASH_PIPELINE_BATCH_DELAY`: Pipeline batch delay in ms (default: `50`)
- `LOGSTASH_LOG_LEVEL`: Log level (default: `info`)
## Pipeline Configuration
Create pipeline configuration files in the `./pipeline` directory. Example `logstash.conf`:
```conf
input {
beats {
port => 5044
}
tcp {
port => 5000
}
}
filter {
grok {
match => { "message" => "%{COMBINEDAPACHELOG}" }
}
date {
match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
}
}
output {
elasticsearch {
hosts => ["${ELASTICSEARCH_HOSTS}"]
index => "logstash-%{+YYYY.MM.dd}"
}
stdout {
codec => rubydebug
}
}
```
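To illustrate what the `grok` and `date` filters above do, here is the same transformation sketched in plain Python (a simplified stand-in for `%{COMBINEDAPACHELOG}`, not Logstash itself):

```python
import re
from datetime import datetime

# Simplified stand-in for %{COMBINEDAPACHELOG}: extract a few common fields.
PATTERN = re.compile(
    r'(?P<clientip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+) \S+" (?P<response>\d{3}) (?P<bytes>\d+)'
)

def parse_line(line: str) -> dict:
    event = PATTERN.match(line).groupdict()
    # Equivalent of the date filter pattern "dd/MMM/yyyy:HH:mm:ss Z".
    ts = datetime.strptime(event["timestamp"], "%d/%b/%Y:%H:%M:%S %z")
    event["@timestamp"] = ts
    # Elasticsearch output index name: "logstash-%{+YYYY.MM.dd}".
    event["_index"] = ts.strftime("logstash-%Y.%m.%d")
    return event

line = '127.0.0.1 - - [06/Oct/2025:21:48:39 +0800] "GET /index.html HTTP/1.1" 200 1024'
event = parse_line(line)
print(event["_index"])  # logstash-2025.10.06
```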
## Access
- HTTP API: <http://localhost:9600>
- Monitoring: <http://localhost:9600/_node/stats>
## Health Check
Check Logstash status:
```bash
curl http://localhost:9600/_node/stats
```
## Custom Configuration
Uncomment the configuration volumes in `docker-compose.yaml` and create:
- `logstash.yml`: Main configuration
- `pipelines.yml`: Pipeline definitions
## Resources
- Resource Limits: 1.5 CPU, 2G RAM
- Resource Reservations: 0.5 CPU, 1G RAM

src/logstash/README.zh.md Normal file

@@ -0,0 +1,96 @@
# Logstash
[Logstash](https://www.elastic.co/logstash) 是一个免费且开源的服务器端数据处理管道,从多种来源摄取数据,转换数据,然后将其发送到您喜欢的"存储"位置。
## 功能特性
- 数据摄取:从各种来源收集数据
- 数据转换:解析、过滤和丰富数据
- 数据输出:将数据发送到 Elasticsearch、数据库或其他目标
- 插件生态系统:用于输入、过滤器和输出的广泛插件库
## 快速开始
启动 Logstash:
```bash
docker compose up -d
```
## 配置
### 环境变量
- `LOGSTASH_VERSION`: Logstash 版本(默认:`8.16.1`)
- `LOGSTASH_BEATS_PORT_OVERRIDE`: Beats 输入端口(默认:`5044`)
- `LOGSTASH_TCP_PORT_OVERRIDE`: TCP 输入端口(默认:`5000`)
- `LOGSTASH_UDP_PORT_OVERRIDE`: UDP 输入端口(默认:`5000`)
- `LOGSTASH_HTTP_PORT_OVERRIDE`: HTTP API 端口(默认:`9600`)
- `LOGSTASH_MONITORING_ENABLED`: 启用监控(默认:`false`)
- `ELASTICSEARCH_HOSTS`: Elasticsearch 主机(默认:`http://elasticsearch:9200`)
- `ELASTICSEARCH_USERNAME`: Elasticsearch 用户名
- `ELASTICSEARCH_PASSWORD`: Elasticsearch 密码
- `LS_JAVA_OPTS`: Java 选项(默认:`-Xmx1g -Xms1g`)
- `LOGSTASH_PIPELINE_WORKERS`: 管道工作线程数(默认:`2`)
- `LOGSTASH_PIPELINE_BATCH_SIZE`: 管道批处理大小(默认:`125`)
- `LOGSTASH_PIPELINE_BATCH_DELAY`: 管道批处理延迟(毫秒)(默认:`50`)
- `LOGSTASH_LOG_LEVEL`: 日志级别(默认:`info`)
## 管道配置
在 `./pipeline` 目录中创建管道配置文件。示例 `logstash.conf`:
```conf
input {
beats {
port => 5044
}
tcp {
port => 5000
}
}
filter {
grok {
match => { "message" => "%{COMBINEDAPACHELOG}" }
}
date {
match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
}
}
output {
elasticsearch {
hosts => ["${ELASTICSEARCH_HOSTS}"]
index => "logstash-%{+YYYY.MM.dd}"
}
stdout {
codec => rubydebug
}
}
```
## 访问
- HTTP API: <http://localhost:9600>
- 监控: <http://localhost:9600/_node/stats>
## 健康检查
检查 Logstash 状态:
```bash
curl http://localhost:9600/_node/stats
```
## 自定义配置
在 `docker-compose.yaml` 中取消配置卷的注释,并创建:
- `logstash.yml`: 主配置
- `pipelines.yml`: 管道定义
## 资源配置
- 资源限制:1.5 CPU,2G 内存
- 资源预留:0.5 CPU,1G 内存


@@ -0,0 +1,13 @@
# MariaDB version (must support Galera)
MARIADB_VERSION="11.7.2"
# MariaDB root password
MARIADB_ROOT_PASSWORD="galera"
# Galera cluster configuration
MARIADB_GALERA_CLUSTER_NAME="galera_cluster"
# Port overrides for each node
MARIADB_PORT_1_OVERRIDE=3306
MARIADB_PORT_2_OVERRIDE=3307
MARIADB_PORT_3_OVERRIDE=3308


@@ -0,0 +1,89 @@
# MariaDB Galera Cluster
[English](./README.md) | [中文](./README.zh.md)
This service deploys a 3-node MariaDB Galera Cluster for high availability and synchronous multi-master replication.
## Services
- `mariadb-galera-1`: First MariaDB Galera node (bootstrap node).
- `mariadb-galera-2`: Second MariaDB Galera node.
- `mariadb-galera-3`: Third MariaDB Galera node.
## Environment Variables
| Variable Name | Description | Default Value |
| --------------------------- | --------------------- | ---------------- |
| MARIADB_VERSION | MariaDB image version | `11.7.2` |
| MARIADB_ROOT_PASSWORD | Root user password | `galera` |
| MARIADB_GALERA_CLUSTER_NAME | Galera cluster name | `galera_cluster` |
| MARIADB_PORT_1_OVERRIDE | Node 1 port | `3306` |
| MARIADB_PORT_2_OVERRIDE | Node 2 port | `3307` |
| MARIADB_PORT_3_OVERRIDE | Node 3 port | `3308` |
Please modify the `.env` file as needed for your use case.
## Volumes
- `mariadb_galera_1_data`: Node 1 data storage.
- `mariadb_galera_2_data`: Node 2 data storage.
- `mariadb_galera_3_data`: Node 3 data storage.
## Usage
### Start the Cluster
```bash
docker compose up -d
```
The first node (mariadb-galera-1) bootstraps the cluster with `--wsrep-new-cluster`. The other nodes join automatically.
### Connect to the Cluster
Connect to any node:
```bash
mysql -h 127.0.0.1 -P 3306 -u root -p
```
Or:
```bash
mysql -h 127.0.0.1 -P 3307 -u root -p
mysql -h 127.0.0.1 -P 3308 -u root -p
```
### Check Cluster Status
```sql
SHOW STATUS LIKE 'wsrep_cluster_size';
SHOW STATUS LIKE 'wsrep_local_state_comment';
```
The `wsrep_cluster_size` should be 3, and `wsrep_local_state_comment` should show "Synced".
## Features
- **Multi-Master Replication**: All nodes accept writes simultaneously
- **Synchronous Replication**: Data is replicated to all nodes before commit
- **Automatic Failover**: If one node fails, the cluster continues operating
- **High Availability**: No single point of failure with 3 nodes
- **Read/Write on Any Node**: Connect to any node for read and write operations
## Notes
- The first node (mariadb-galera-1) must start first as the bootstrap node
- All nodes must be able to communicate with each other
- For production, consider adding more nodes (5, 7, etc.) for better fault tolerance
- Use an odd number of nodes to avoid split-brain scenarios
- Change the default root password for production use
- The cluster uses `rsync` for State Snapshot Transfer (SST)
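The odd-node-count advice can be made concrete: a Galera cluster keeps serving only while a majority (quorum) of nodes remains reachable, so fault tolerance follows directly from the node count. A quick sketch:

```python
def galera_tolerance(nodes: int) -> dict:
    """Majority quorum: the cluster survives while more than half the nodes remain."""
    quorum = nodes // 2 + 1
    return {"nodes": nodes, "quorum": quorum, "tolerated_failures": nodes - quorum}

for n in (3, 4, 5):
    print(galera_tolerance(n))
```

A 3-node cluster tolerates 1 failed node; 4 nodes still only tolerate 1, which is why odd sizes (3, 5, 7) are recommended.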
## Scaling
To add more nodes, add new service definitions following the pattern of nodes 2 and 3, and update the `wsrep_cluster_address` to include all nodes.
## License
MariaDB is licensed under the GPL v2.


@@ -0,0 +1,89 @@
# MariaDB Galera 集群
[English](./README.md) | [中文](./README.zh.md)
此服务用于部署 3 节点 MariaDB Galera 集群,提供高可用性和同步多主复制。
## 服务
- `mariadb-galera-1`: 第一个 MariaDB Galera 节点(引导节点)。
- `mariadb-galera-2`: 第二个 MariaDB Galera 节点。
- `mariadb-galera-3`: 第三个 MariaDB Galera 节点。
## 环境变量
| 变量名 | 说明 | 默认值 |
| --------------------------- | ---------------- | ---------------- |
| MARIADB_VERSION | MariaDB 镜像版本 | `11.7.2` |
| MARIADB_ROOT_PASSWORD | root 用户密码 | `galera` |
| MARIADB_GALERA_CLUSTER_NAME | Galera 集群名称 | `galera_cluster` |
| MARIADB_PORT_1_OVERRIDE | 节点 1 端口 | `3306` |
| MARIADB_PORT_2_OVERRIDE | 节点 2 端口 | `3307` |
| MARIADB_PORT_3_OVERRIDE | 节点 3 端口 | `3308` |
请根据实际需求修改 `.env` 文件。
## 卷
- `mariadb_galera_1_data`: 节点 1 数据存储。
- `mariadb_galera_2_data`: 节点 2 数据存储。
- `mariadb_galera_3_data`: 节点 3 数据存储。
## 使用方法
### 启动集群
```bash
docker compose up -d
```
第一个节点 (mariadb-galera-1) 使用 `--wsrep-new-cluster` 引导集群。其他节点自动加入。
### 连接到集群
连接到任何节点:
```bash
mysql -h 127.0.0.1 -P 3306 -u root -p
```
或者:
```bash
mysql -h 127.0.0.1 -P 3307 -u root -p
mysql -h 127.0.0.1 -P 3308 -u root -p
```
### 检查集群状态
```sql
SHOW STATUS LIKE 'wsrep_cluster_size';
SHOW STATUS LIKE 'wsrep_local_state_comment';
```
`wsrep_cluster_size` 应该为 3,`wsrep_local_state_comment` 应显示 "Synced"。
## 功能
- **多主复制**: 所有节点同时接受写入
- **同步复制**: 数据在提交前复制到所有节点
- **自动故障转移**: 如果一个节点失败,集群继续运行
- **高可用性**: 3 个节点无单点故障
- **任意节点读写**: 连接到任何节点进行读写操作
## 注意事项
- 第一个节点 (mariadb-galera-1) 必须首先启动作为引导节点
- 所有节点必须能够相互通信
- 对于生产环境,考虑添加更多节点(5、7 等)以获得更好的容错能力
- 使用奇数个节点以避免脑裂场景
- 生产环境请更改默认 root 密码
- 集群使用 `rsync` 进行状态快照传输(SST)
## 扩展
要添加更多节点,请按照节点 2 和 3 的模式添加新的服务定义,并更新 `wsrep_cluster_address` 以包含所有节点。
## 许可证
MariaDB 使用 GPL v2 许可证授权。


@@ -0,0 +1,119 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
x-mariadb-galera: &mariadb-galera
<<: *default
image: mariadb:${MARIADB_VERSION:-11.7.2}
environment: &galera-env
MARIADB_ROOT_PASSWORD: ${MARIADB_ROOT_PASSWORD:-galera}
MARIADB_GALERA_CLUSTER_NAME: ${MARIADB_GALERA_CLUSTER_NAME:-galera_cluster}
MARIADB_GALERA_CLUSTER_ADDRESS: gcomm://mariadb-galera-1,mariadb-galera-2,mariadb-galera-3
command:
- --wsrep-new-cluster
- --wsrep_node_address=${WSREP_NODE_ADDRESS}
- --wsrep_cluster_name=${MARIADB_GALERA_CLUSTER_NAME:-galera_cluster}
- --wsrep_cluster_address=gcomm://mariadb-galera-1,mariadb-galera-2,mariadb-galera-3
- --wsrep_sst_method=rsync
- --wsrep_on=ON
- --wsrep_provider=/usr/lib/galera/libgalera_smm.so
- --binlog_format=row
- --default_storage_engine=InnoDB
- --innodb_autoinc_lock_mode=2
deploy:
resources:
limits:
cpus: '2.0'
memory: 2G
reservations:
cpus: '1.0'
memory: 1G
services:
mariadb-galera-1:
<<: *mariadb-galera
container_name: mariadb-galera-1
hostname: mariadb-galera-1
ports:
- "${MARIADB_PORT_1_OVERRIDE:-3306}:3306"
environment:
<<: *galera-env
WSREP_NODE_ADDRESS: mariadb-galera-1
command:
- --wsrep-new-cluster
- --wsrep_node_address=mariadb-galera-1
- --wsrep_cluster_name=${MARIADB_GALERA_CLUSTER_NAME:-galera_cluster}
- --wsrep_cluster_address=gcomm://mariadb-galera-1,mariadb-galera-2,mariadb-galera-3
- --wsrep_sst_method=rsync
- --wsrep_on=ON
- --wsrep_provider=/usr/lib/galera/libgalera_smm.so
- --binlog_format=row
- --default_storage_engine=InnoDB
- --innodb_autoinc_lock_mode=2
volumes:
- *localtime
- *timezone
- mariadb_galera_1_data:/var/lib/mysql
mariadb-galera-2:
<<: *mariadb-galera
container_name: mariadb-galera-2
hostname: mariadb-galera-2
ports:
- "${MARIADB_PORT_2_OVERRIDE:-3307}:3306"
environment:
<<: *galera-env
WSREP_NODE_ADDRESS: mariadb-galera-2
command:
- --wsrep_node_address=mariadb-galera-2
- --wsrep_cluster_name=${MARIADB_GALERA_CLUSTER_NAME:-galera_cluster}
- --wsrep_cluster_address=gcomm://mariadb-galera-1,mariadb-galera-2,mariadb-galera-3
- --wsrep_sst_method=rsync
- --wsrep_on=ON
- --wsrep_provider=/usr/lib/galera/libgalera_smm.so
- --binlog_format=row
- --default_storage_engine=InnoDB
- --innodb_autoinc_lock_mode=2
volumes:
- *localtime
- *timezone
- mariadb_galera_2_data:/var/lib/mysql
depends_on:
- mariadb-galera-1
mariadb-galera-3:
<<: *mariadb-galera
container_name: mariadb-galera-3
hostname: mariadb-galera-3
ports:
- "${MARIADB_PORT_3_OVERRIDE:-3308}:3306"
environment:
<<: *galera-env
WSREP_NODE_ADDRESS: mariadb-galera-3
command:
- --wsrep_node_address=mariadb-galera-3
- --wsrep_cluster_name=${MARIADB_GALERA_CLUSTER_NAME:-galera_cluster}
- --wsrep_cluster_address=gcomm://mariadb-galera-1,mariadb-galera-2,mariadb-galera-3
- --wsrep_sst_method=rsync
- --wsrep_on=ON
- --wsrep_provider=/usr/lib/galera/libgalera_smm.so
- --binlog_format=row
- --default_storage_engine=InnoDB
- --innodb_autoinc_lock_mode=2
volumes:
- *localtime
- *timezone
- mariadb_galera_3_data:/var/lib/mysql
depends_on:
- mariadb-galera-1
volumes:
mariadb_galera_1_data:
mariadb_galera_2_data:
mariadb_galera_3_data:


@@ -40,8 +40,8 @@ services:
<<: *default
image: minio/minio:${MINIO_VERSION:-RELEASE.2024-12-18T13-15-44Z}
environment:
MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY:-minioadmin}
MINIO_SECRET_KEY: ${MINIO_SECRET_KEY:-minioadmin}
MINIO_ROOT_USER: ${MINIO_ROOT_USER:-minioadmin}
MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD:-minioadmin}
ports:
- "${MINIO_PORT_OVERRIDE_API:-9000}:9000"
- "${MINIO_PORT_OVERRIDE_WEBUI:-9001}:9001"


@@ -0,0 +1,47 @@
# Bedrock server version (Docker image tag)
BEDROCK_VERSION="latest"
# Minecraft version (LATEST, PREVIEW, or specific version like 1.20.81.01)
MINECRAFT_VERSION="LATEST"
# Accept EULA (must be TRUE to start server)
EULA="TRUE"
# Game mode (survival, creative, adventure)
GAMEMODE="survival"
# Difficulty (peaceful, easy, normal, hard)
DIFFICULTY="easy"
# Server name
SERVER_NAME="Dedicated Server"
# Maximum number of players
MAX_PLAYERS="10"
# Allow cheats
ALLOW_CHEATS="false"
# Level/world name
LEVEL_NAME="Bedrock level"
# Level seed (leave empty for random)
LEVEL_SEED=""
# Online mode
ONLINE_MODE="true"
# Enable whitelist
WHITE_LIST="false"
# Server ports
SERVER_PORT="19132"
SERVER_PORT_V6="19133"
# Host port mappings
SERVER_PORT_OVERRIDE=19132
SERVER_PORT_V6_OVERRIDE=19133
# User and group IDs
UID=1000
GID=1000


@@ -0,0 +1,54 @@
# Minecraft Bedrock Server
[English](./README.md) | [中文](./README.zh.md)
This service deploys a Minecraft Bedrock Edition dedicated server.
## Services
- `minecraft-bedrock`: The Minecraft Bedrock server.
## Environment Variables
| Variable Name | Description | Default Value |
| ----------------------- | ------------------------------------------------ | ------------------ |
| BEDROCK_VERSION | Bedrock server Docker image version | `latest` |
| MINECRAFT_VERSION | Minecraft version (LATEST, PREVIEW, or specific) | `LATEST` |
| EULA | Accept Minecraft EULA (must be TRUE) | `TRUE` |
| GAMEMODE | Game mode (survival, creative, adventure) | `survival` |
| DIFFICULTY | Difficulty (peaceful, easy, normal, hard) | `easy` |
| SERVER_NAME | Server name | `Dedicated Server` |
| MAX_PLAYERS | Maximum number of players | `10` |
| ALLOW_CHEATS | Allow cheats | `false` |
| LEVEL_NAME | Level/world name | `Bedrock level` |
| LEVEL_SEED | Level seed (empty for random) | `""` |
| ONLINE_MODE | Enable online mode | `true` |
| WHITE_LIST | Enable whitelist | `false` |
| SERVER_PORT | Server port (IPv4) | `19132` |
| SERVER_PORT_V6 | Server port (IPv6) | `19133` |
| SERVER_PORT_OVERRIDE | Host port mapping for IPv4 | `19132` |
| SERVER_PORT_V6_OVERRIDE | Host port mapping for IPv6 | `19133` |
| UID | User ID to run the server | `1000` |
| GID | Group ID to run the server | `1000` |
Please modify the `.env` file as needed for your use case.
## Volumes
- `bedrock_data`: A volume for storing Minecraft world data and server files.
## Ports
- **19132/udp**: The main Bedrock server port (IPv4).
- **19133/udp**: The Bedrock server port (IPv6).
## Notes
- You must accept the Minecraft EULA by setting `EULA=TRUE`.
- The server uses UDP protocol, so ensure your firewall allows UDP traffic on the specified ports.
- To enable whitelist, set `WHITE_LIST=true` and add player XUIDs to the `allowlist.json` file in the data volume.
- Supports both `LATEST` stable releases and `PREVIEW` versions.
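To whitelist a player, each entry in `allowlist.json` carries the player's gamertag and XUID (a sketch generated with Python; the player values are hypothetical, and the `ignoresPlayerLimit`/`name`/`xuid` fields follow the commonly documented Bedrock allowlist format):

```python
import json

# Hypothetical player entry; replace name and xuid with real values.
allowlist = [
    {"ignoresPlayerLimit": False, "name": "ExamplePlayer", "xuid": "2535400000000000"}
]

# Written to allowlist.json inside the bedrock_data volume.
print(json.dumps(allowlist, indent=2))
```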
## License
Minecraft is a trademark of Mojang AB. This Docker image uses the official Minecraft Bedrock Server software, which is subject to the [Minecraft End User License Agreement](https://minecraft.net/terms).


@@ -0,0 +1,54 @@
# Minecraft Bedrock 服务器
[English](./README.md) | [中文](./README.zh.md)
此服务用于部署 Minecraft 基岩版专用服务器。
## 服务
- `minecraft-bedrock`: Minecraft 基岩版服务器。
## 环境变量
| 变量名 | 说明 | 默认值 |
| ----------------------- | ----------------------------------------- | ------------------ |
| BEDROCK_VERSION | 基岩版服务器 Docker 镜像版本 | `latest` |
| MINECRAFT_VERSION | Minecraft 版本(LATEST、PREVIEW 或具体版本) | `LATEST` |
| EULA | 接受 Minecraft EULA(必须为 TRUE) | `TRUE` |
| GAMEMODE | 游戏模式(survival、creative、adventure) | `survival` |
| DIFFICULTY | 难度(peaceful、easy、normal、hard) | `easy` |
| SERVER_NAME | 服务器名称 | `Dedicated Server` |
| MAX_PLAYERS | 最大玩家数 | `10` |
| ALLOW_CHEATS | 允许作弊 | `false` |
| LEVEL_NAME | 世界名称 | `Bedrock level` |
| LEVEL_SEED | 世界种子(留空随机生成) | `""` |
| ONLINE_MODE | 启用在线模式 | `true` |
| WHITE_LIST | 启用白名单 | `false` |
| SERVER_PORT | 服务器端口(IPv4) | `19132` |
| SERVER_PORT_V6 | 服务器端口(IPv6) | `19133` |
| SERVER_PORT_OVERRIDE | 主机端口映射(IPv4) | `19132` |
| SERVER_PORT_V6_OVERRIDE | 主机端口映射(IPv6) | `19133` |
| UID | 运行服务器的用户 ID | `1000` |
| GID | 运行服务器的组 ID | `1000` |
请根据实际需求修改 `.env` 文件。
## 卷
- `bedrock_data`: 用于存储 Minecraft 世界数据和服务器文件的卷。
## 端口
- **19132/udp**: 主要的基岩版服务器端口(IPv4)。
- **19133/udp**: 基岩版服务器端口(IPv6)。
## 注意事项
- 必须设置 `EULA=TRUE` 以接受 Minecraft 最终用户许可协议。
- 服务器使用 UDP 协议,请确保防火墙允许指定端口的 UDP 流量。
- 要启用白名单,设置 `WHITE_LIST=true` 并在数据卷中的 `allowlist.json` 文件中添加玩家 XUID。
- 支持 `LATEST` 稳定版本和 `PREVIEW` 预览版本。
## 许可证
Minecraft 是 Mojang AB 的商标。此 Docker 镜像使用官方 Minecraft 基岩版服务器软件,受 [Minecraft 最终用户许可协议](https://minecraft.net/terms)约束。


@@ -0,0 +1,51 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
minecraft-bedrock:
<<: *default
image: itzg/minecraft-bedrock-server:${BEDROCK_VERSION:-latest}
container_name: minecraft-bedrock-server
environment:
EULA: "${EULA:-TRUE}"
VERSION: "${MINECRAFT_VERSION:-LATEST}"
GAMEMODE: "${GAMEMODE:-survival}"
DIFFICULTY: "${DIFFICULTY:-easy}"
SERVER_NAME: "${SERVER_NAME:-Dedicated Server}"
MAX_PLAYERS: "${MAX_PLAYERS:-10}"
ALLOW_CHEATS: "${ALLOW_CHEATS:-false}"
LEVEL_NAME: "${LEVEL_NAME:-Bedrock level}"
LEVEL_SEED: "${LEVEL_SEED:-}"
ONLINE_MODE: "${ONLINE_MODE:-true}"
WHITE_LIST: "${WHITE_LIST:-false}"
SERVER_PORT: "${SERVER_PORT:-19132}"
SERVER_PORT_V6: "${SERVER_PORT_V6:-19133}"
UID: "${UID:-1000}"
GID: "${GID:-1000}"
ports:
- "${SERVER_PORT_OVERRIDE:-19132}:19132/udp"
- "${SERVER_PORT_V6_OVERRIDE:-19133}:19133/udp"
volumes:
- *localtime
- *timezone
- bedrock_data:/data
stdin_open: true
tty: true
deploy:
resources:
limits:
cpus: '2.0'
memory: 2G
reservations:
cpus: '1.0'
memory: 1G
volumes:
bedrock_data:


@@ -1,5 +1,5 @@
# MinerU Docker image
MINERU_DOCKER_IMAGE=alexsuntop/mineru:2.5.3
MINERU_DOCKER_IMAGE=alexsuntop/mineru:2.5.4
# Port configurations
MINERU_PORT_OVERRIDE_VLLM=30000


@@ -0,0 +1,27 @@
# Use the official vllm image for gpu with Ampere architecture and above (Compute Capability>=8.0)
# Compute Capability version query (https://developer.nvidia.com/cuda-gpus)
FROM vllm/vllm-openai:v0.10.1.1
# Use the official vllm image for gpu with Turing architecture and below (Compute Capability<8.0)
# FROM vllm/vllm-openai:v0.10.2
# Install libgl for opencv support & Noto fonts for Chinese characters
RUN apt-get update && \
apt-get install -y \
fonts-noto-core \
fonts-noto-cjk \
fontconfig \
libgl1 && \
fc-cache -fv && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# Install mineru latest
RUN python3 -m pip install -U 'mineru[core]' --break-system-packages && \
python3 -m pip cache purge
# Download models and update the configuration file
RUN /bin/bash -c "mineru-models-download -s huggingface -m all"
# Set the entry point to activate the virtual environment and run the command line tool
ENTRYPOINT ["/bin/bash", "-c", "export MINERU_MODEL_SOURCE=local && exec \"$@\"", "--"]


@@ -39,7 +39,7 @@ mineru -p demo.pdf -o ./output -b vlm-http-client -u http://localhost:30000
## Configuration
- `MINERU_DOCKER_IMAGE`: The Docker image for MinerU, default is `alexsuntop/mineru:2.5.3`.
- `MINERU_DOCKER_IMAGE`: The Docker image for MinerU, default is `alexsuntop/mineru:2.5.4`.
- `MINERU_PORT_OVERRIDE_VLLM`: The host port for the VLLM server, default is `30000`.
- `MINERU_PORT_OVERRIDE_API`: The host port for the API service, default is `8000`.
- `MINERU_PORT_OVERRIDE_GRADIO`: The host port for the Gradio WebUI, default is `7860`.


@@ -39,7 +39,7 @@ mineru -p demo.pdf -o ./output -b vlm-http-client -u http://localhost:30000
## 配置
- `MINERU_DOCKER_IMAGE`: MinerU 的 Docker 镜像,默认为 `alexsuntop/mineru:2.5.3`
- `MINERU_DOCKER_IMAGE`: MinerU 的 Docker 镜像,默认为 `alexsuntop/mineru:2.5.4`
- `MINERU_PORT_OVERRIDE_VLLM`: VLLM 服务器的主机端口,默认为 `30000`
- `MINERU_PORT_OVERRIDE_API`: API 服务的主机端口,默认为 `8000`
- `MINERU_PORT_OVERRIDE_GRADIO`: Gradio WebUI 的主机端口,默认为 `7860`


@@ -10,7 +10,10 @@ x-default: &default
x-mineru-vllm: &mineru-vllm
<<: *default
image: ${MINERU_DOCKER_IMAGE:-alexsuntop/mineru:2.5.3}
image: ${MINERU_DOCKER_IMAGE:-alexsuntop/mineru:2.5.4}
build:
context: .
dockerfile: Dockerfile
environment:
MINERU_MODEL_SOURCE: local
ulimits:
@@ -21,10 +24,10 @@ x-mineru-vllm: &mineru-vllm
resources:
limits:
cpus: '8.0'
memory: 4G
memory: 16G
reservations:
cpus: '1.0'
memory: 2G
cpus: '4.0'
memory: 8G
devices:
- driver: nvidia
device_ids: [ '0' ]


@@ -16,10 +16,8 @@ services:
- "${MINIO_PORT_OVERRIDE_API:-9000}:9000"
- "${MINIO_PORT_OVERRIDE_WEBUI:-9001}:9001"
environment:
MINIO_ROOT_USER: ${MINIO_ROOT_USER:-root}
MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD:-password}
MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY}
MINIO_SECRET_KEY: ${MINIO_SECRET_KEY}
MINIO_ROOT_USER: ${MINIO_ROOT_USER:-minioadmin}
MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD:-minioadmin}
volumes:
- *localtime
- *timezone

src/mlflow/.env.example Normal file

@@ -0,0 +1,24 @@
# MLflow version
MLFLOW_VERSION="v2.20.2"
# PostgreSQL version
POSTGRES_VERSION="17.6-alpine"
# PostgreSQL configuration
POSTGRES_USER="mlflow"
POSTGRES_PASSWORD="mlflow"
POSTGRES_DB="mlflow"
# MinIO version
MINIO_VERSION="RELEASE.2025-01-07T16-13-09Z"
MINIO_MC_VERSION="RELEASE.2025-01-07T17-25-52Z"
# MinIO configuration
MINIO_ROOT_USER="minio"
MINIO_ROOT_PASSWORD="minio123"
MINIO_BUCKET="mlflow"
# Port overrides
MLFLOW_PORT_OVERRIDE=5000
MINIO_PORT_OVERRIDE=9000
MINIO_CONSOLE_PORT_OVERRIDE=9001

src/mlflow/README.md Normal file

@@ -0,0 +1,92 @@
# MLflow
[English](./README.md) | [中文](./README.zh.md)
This service deploys MLflow with PostgreSQL backend and MinIO artifact storage.
## Services
- `mlflow`: MLflow tracking server.
- `postgres`: PostgreSQL database for MLflow metadata.
- `minio`: MinIO server for artifact storage (S3-compatible).
- `minio-init`: Initialization service to create the MLflow bucket.
## Environment Variables
| Variable Name | Description | Default Value |
| --------------------------- | -------------------------- | ------------------------------ |
| MLFLOW_VERSION | MLflow image version | `v2.20.2` |
| POSTGRES_VERSION | PostgreSQL image version | `17.6-alpine` |
| POSTGRES_USER | PostgreSQL username | `mlflow` |
| POSTGRES_PASSWORD | PostgreSQL password | `mlflow` |
| POSTGRES_DB | PostgreSQL database name | `mlflow` |
| MINIO_VERSION | MinIO image version | `RELEASE.2025-01-07T16-13-09Z` |
| MINIO_MC_VERSION | MinIO client version | `RELEASE.2025-01-07T17-25-52Z` |
| MINIO_ROOT_USER | MinIO root username | `minio` |
| MINIO_ROOT_PASSWORD | MinIO root password | `minio123` |
| MINIO_BUCKET | MinIO bucket for artifacts | `mlflow` |
| MLFLOW_PORT_OVERRIDE | MLflow server port | `5000` |
| MINIO_PORT_OVERRIDE | MinIO API port | `9000` |
| MINIO_CONSOLE_PORT_OVERRIDE | MinIO Console port | `9001` |
Please modify the `.env` file as needed for your use case.
## Volumes
- `postgres_data`: PostgreSQL data storage.
- `minio_data`: MinIO data storage for artifacts.
## Usage
### Access MLflow UI
After starting the services, access the MLflow UI at:
```text
http://localhost:5000
```
### Configure MLflow Client
In your Python scripts or notebooks:
```python
import mlflow
# Set the tracking URI
mlflow.set_tracking_uri("http://localhost:5000")
# Your MLflow code here
with mlflow.start_run():
mlflow.log_param("param1", 5)
mlflow.log_metric("metric1", 0.89)
```
### MinIO Console
Access the MinIO console at:
```text
http://localhost:9001
```
Login with the credentials specified in `MINIO_ROOT_USER` and `MINIO_ROOT_PASSWORD`.
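Because MLflow clients upload artifacts directly to the S3 store, the client process also needs the MinIO endpoint and credentials. A minimal sketch, mirroring this compose file's defaults:

```python
import os

# Point the MLflow client at the compose file's MinIO instance.
os.environ["MLFLOW_S3_ENDPOINT_URL"] = "http://localhost:9000"
os.environ["AWS_ACCESS_KEY_ID"] = "minio"         # MINIO_ROOT_USER
os.environ["AWS_SECRET_ACCESS_KEY"] = "minio123"  # MINIO_ROOT_PASSWORD

# With these set, mlflow.log_artifact(...) can write to s3://mlflow/.
```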
## Features
- **Experiment Tracking**: Track ML experiments with parameters, metrics, and artifacts
- **Model Registry**: Version and manage ML models
- **Projects**: Package ML code in a reusable format
- **Models**: Deploy ML models to various platforms
- **Persistent Storage**: PostgreSQL for metadata, MinIO for artifacts
## Notes
- The `minio-init` service runs once to create the bucket and then stops.
- For production use, change all default passwords.
- Consider using external PostgreSQL and S3-compatible storage for production.
- The setup uses named volumes for data persistence.
## License
MLflow is licensed under the Apache License 2.0.

src/mlflow/README.zh.md Normal file

@@ -0,0 +1,92 @@
# MLflow
[English](./README.md) | [中文](./README.zh.md)
此服务用于部署带有 PostgreSQL 后端和 MinIO 工件存储的 MLflow。
## 服务
- `mlflow`: MLflow 跟踪服务器。
- `postgres`: 用于 MLflow 元数据的 PostgreSQL 数据库。
- `minio`: 用于工件存储的 MinIO 服务器(S3 兼容)。
- `minio-init`: 创建 MLflow 存储桶的初始化服务。
## 环境变量
| 变量名 | 说明 | 默认值 |
| --------------------------- | --------------------- | ------------------------------ |
| MLFLOW_VERSION | MLflow 镜像版本 | `v2.20.2` |
| POSTGRES_VERSION | PostgreSQL 镜像版本 | `17.6-alpine` |
| POSTGRES_USER | PostgreSQL 用户名 | `mlflow` |
| POSTGRES_PASSWORD | PostgreSQL 密码 | `mlflow` |
| POSTGRES_DB | PostgreSQL 数据库名称 | `mlflow` |
| MINIO_VERSION | MinIO 镜像版本 | `RELEASE.2025-01-07T16-13-09Z` |
| MINIO_MC_VERSION | MinIO 客户端版本 | `RELEASE.2025-01-07T17-25-52Z` |
| MINIO_ROOT_USER | MinIO 根用户名 | `minio` |
| MINIO_ROOT_PASSWORD | MinIO 根密码 | `minio123` |
| MINIO_BUCKET | 工件的 MinIO 存储桶 | `mlflow` |
| MLFLOW_PORT_OVERRIDE | MLflow 服务器端口 | `5000` |
| MINIO_PORT_OVERRIDE | MinIO API 端口 | `9000` |
| MINIO_CONSOLE_PORT_OVERRIDE | MinIO 控制台端口 | `9001` |
请根据实际需求修改 `.env` 文件。
## 卷
- `postgres_data`: PostgreSQL 数据存储。
- `minio_data`: 工件的 MinIO 数据存储。
## 使用方法
### 访问 MLflow UI
启动服务后,在以下地址访问 MLflow UI:
```text
http://localhost:5000
```
### 配置 MLflow 客户端
在你的 Python 脚本或笔记本中:
```python
import mlflow
# 设置跟踪 URI
mlflow.set_tracking_uri("http://localhost:5000")
# 你的 MLflow 代码
with mlflow.start_run():
mlflow.log_param("param1", 5)
mlflow.log_metric("metric1", 0.89)
```
### MinIO 控制台
在以下地址访问 MinIO 控制台:
```text
http://localhost:9001
```
使用 `MINIO_ROOT_USER` 和 `MINIO_ROOT_PASSWORD` 中指定的凭据登录。
## 功能
- **实验跟踪**: 使用参数、指标和工件跟踪 ML 实验
- **模型注册表**: 版本化和管理 ML 模型
- **项目**: 以可重用格式打包 ML 代码
- **模型**: 将 ML 模型部署到各种平台
- **持久存储**: PostgreSQL 用于元数据,MinIO 用于工件
## 注意事项
- `minio-init` 服务运行一次以创建存储桶,然后停止。
- 对于生产环境,请更改所有默认密码。
- 考虑使用外部 PostgreSQL 和 S3 兼容存储用于生产环境。
- 该设置使用命名卷进行数据持久化。
## 许可证
MLflow 使用 Apache License 2.0 授权。


@@ -0,0 +1,110 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
postgres:
<<: *default
image: postgres:${POSTGRES_VERSION:-17.6-alpine}
container_name: mlflow-postgres
environment:
POSTGRES_USER: ${POSTGRES_USER:-mlflow}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-mlflow}
POSTGRES_DB: ${POSTGRES_DB:-mlflow}
volumes:
- *localtime
- *timezone
- postgres_data:/var/lib/postgresql/data
deploy:
resources:
limits:
cpus: '1.0'
memory: 1G
reservations:
cpus: '0.5'
memory: 512M
minio:
<<: *default
image: minio/minio:${MINIO_VERSION:-RELEASE.2025-01-07T16-13-09Z}
container_name: mlflow-minio
command: server /data --console-address ":9001"
environment:
MINIO_ROOT_USER: ${MINIO_ROOT_USER:-minio}
MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD:-minio123}
ports:
- "${MINIO_PORT_OVERRIDE:-9000}:9000"
- "${MINIO_CONSOLE_PORT_OVERRIDE:-9001}:9001"
volumes:
- *localtime
- *timezone
- minio_data:/data
deploy:
resources:
limits:
cpus: '1.0'
memory: 1G
reservations:
cpus: '0.5'
memory: 512M
minio-init:
<<: *default
image: minio/mc:${MINIO_MC_VERSION:-RELEASE.2025-01-07T17-25-52Z}
container_name: mlflow-minio-init
depends_on:
- minio
entrypoint: >
/bin/sh -c "
sleep 5;
/usr/bin/mc config host add minio http://minio:9000 ${MINIO_ROOT_USER:-minio} ${MINIO_ROOT_PASSWORD:-minio123};
/usr/bin/mc mb minio/${MINIO_BUCKET:-mlflow} --ignore-existing;
exit 0;
"
restart: "no"
mlflow:
<<: *default
image: ghcr.io/mlflow/mlflow:${MLFLOW_VERSION:-v2.20.2}
container_name: mlflow
depends_on:
- postgres
- minio
- minio-init
ports:
- "${MLFLOW_PORT_OVERRIDE:-5000}:5000"
environment:
MLFLOW_BACKEND_STORE_URI: postgresql://${POSTGRES_USER:-mlflow}:${POSTGRES_PASSWORD:-mlflow}@postgres:5432/${POSTGRES_DB:-mlflow}
MLFLOW_ARTIFACT_ROOT: s3://${MINIO_BUCKET:-mlflow}/
MLFLOW_S3_ENDPOINT_URL: http://minio:9000
AWS_ACCESS_KEY_ID: ${MINIO_ROOT_USER:-minio}
AWS_SECRET_ACCESS_KEY: ${MINIO_ROOT_PASSWORD:-minio123}
command:
- mlflow
- server
- --host
- "0.0.0.0"
- --port
- "5000"
- --backend-store-uri
- postgresql://${POSTGRES_USER:-mlflow}:${POSTGRES_PASSWORD:-mlflow}@postgres:5432/${POSTGRES_DB:-mlflow}
- --default-artifact-root
- s3://${MINIO_BUCKET:-mlflow}/
deploy:
resources:
limits:
cpus: '2.0'
memory: 2G
reservations:
cpus: '1.0'
memory: 1G
volumes:
postgres_data:
minio_data:


@@ -30,3 +30,6 @@ services:
reservations:
cpus: '0.25'
memory: 256M
volumes:
mongo_data:

src/n8n/README.md Normal file

@@ -0,0 +1,118 @@
# n8n
[English](./README.md) | [中文](./README.zh.md)
This service deploys n8n, a fair-code workflow automation platform with native AI capabilities.
## Services
- `n8n`: The main n8n application server.
- `n8n-db`: PostgreSQL database for n8n (optional, uses SQLite by default).
## Profiles
- `default`: Runs n8n with SQLite (no external database required).
- `postgres`: Runs n8n with PostgreSQL database.
To use PostgreSQL, start with:
```bash
docker compose --profile postgres up -d
```
## Environment Variables
| Variable Name | Description | Default Value |
| ----------------------- | ------------------------------------------------ | ------------------------ |
| N8N_VERSION | n8n image version | `1.114.0` |
| N8N_PORT | Host port mapping for n8n web interface | `5678` |
| N8N_BASIC_AUTH_ACTIVE | Enable basic authentication | `true` |
| N8N_BASIC_AUTH_USER | Basic auth username (required if auth is active) | `""` |
| N8N_BASIC_AUTH_PASSWORD | Basic auth password (required if auth is active) | `""` |
| N8N_HOST | Host address | `0.0.0.0` |
| N8N_PROTOCOL | Protocol (http or https) | `http` |
| WEBHOOK_URL | Webhook URL for external access | `http://localhost:5678/` |
| GENERIC_TIMEZONE | Timezone for n8n | `UTC` |
| TZ | System timezone | `UTC` |
| DB_TYPE | Database type (sqlite or postgresdb) | `sqlite` |
| DB_POSTGRESDB_DATABASE | PostgreSQL database name | `n8n` |
| DB_POSTGRESDB_HOST | PostgreSQL host | `n8n-db` |
| DB_POSTGRESDB_PORT | PostgreSQL port | `5432` |
| DB_POSTGRESDB_USER | PostgreSQL username | `n8n` |
| DB_POSTGRESDB_PASSWORD | PostgreSQL password | `n8n123` |
| POSTGRES_VERSION | PostgreSQL image version | `17.2-alpine3.21` |
| EXECUTIONS_MODE | Execution mode (regular or queue) | `regular` |
| N8N_ENCRYPTION_KEY | Encryption key for credentials | `""` |
Please create a `.env` file and modify it as needed for your use case.
## Volumes
- `n8n_data`: A volume for storing n8n data (workflows, credentials, etc.).
- `n8n_db_data`: A volume for storing PostgreSQL data (when using PostgreSQL profile).
## Getting Started
### SQLite (Default)
1. Create a `.env` file with authentication credentials:
```env
N8N_BASIC_AUTH_USER=admin
N8N_BASIC_AUTH_PASSWORD=your-secure-password
```
2. Start the service:
```bash
docker compose up -d
```
3. Access n8n at `http://localhost:5678`
### PostgreSQL
1. Create a `.env` file with authentication and database credentials:
```env
N8N_BASIC_AUTH_USER=admin
N8N_BASIC_AUTH_PASSWORD=your-secure-password
DB_TYPE=postgresdb
DB_POSTGRESDB_PASSWORD=your-db-password
```
2. Start the service with PostgreSQL profile:
```bash
docker compose --profile postgres up -d
```
3. Access n8n at `http://localhost:5678`
## Features
- **Visual Workflow Builder**: Create workflows with an intuitive drag-and-drop interface
- **400+ Integrations**: Connect to popular services and APIs
- **AI-Native**: Built-in LangChain support for AI workflows
- **Code When Needed**: Write JavaScript/Python or use visual nodes
- **Self-Hosted**: Full control over your data and deployments
- **Webhook Support**: Trigger workflows from external events
- **Scheduled Executions**: Run workflows on a schedule
## Documentation
For more information, visit the [official n8n documentation](https://docs.n8n.io/).
## Community Resources
- [n8n Community Forum](https://community.n8n.io/)
- [Workflow Templates](https://n8n.io/workflows)
- [Integration List](https://n8n.io/integrations)
## Security Notes
- Always set `N8N_BASIC_AUTH_USER` and `N8N_BASIC_AUTH_PASSWORD` in production
- Use HTTPS in production environments (set `N8N_PROTOCOL=https`)
- Consider setting `N8N_ENCRYPTION_KEY` for credential encryption
- Regularly backup the n8n data volume
- Keep n8n updated to the latest stable version
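The `N8N_ENCRYPTION_KEY` suggestion above can be generated with a one-liner (a sketch assuming `openssl` is available; any sufficiently long random string works):

```shell
# Generate a random 32-byte hex key and print it in .env format (openssl assumed available)
N8N_ENCRYPTION_KEY=$(openssl rand -hex 32)
echo "N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}"
```

Keep the key stable across upgrades; changing it renders previously saved credentials undecryptable.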

src/n8n/README.zh.md Normal file

@@ -0,0 +1,118 @@
# n8n
[English](./README.md) | [中文](./README.zh.md)
此服务部署 n8n,一个具有原生 AI 功能的公平代码工作流自动化平台。
## 服务
- `n8n`: n8n 主应用服务器。
- `n8n-db`: n8n 的 PostgreSQL 数据库(可选,默认使用 SQLite)。
## 配置文件
- `default`: 使用 SQLite 运行 n8n(不需要外部数据库)。
- `postgres`: 使用 PostgreSQL 数据库运行 n8n。
要使用 PostgreSQL,请使用以下命令启动:
```bash
docker compose --profile postgres up -d
```
## 环境变量
| 变量名 | 描述 | 默认值 |
| ----------------------- | -------------------------------- | ------------------------ |
| N8N_VERSION | n8n 镜像版本 | `1.114.0` |
| N8N_PORT | n8n Web 界面的主机端口映射 | `5678` |
| N8N_BASIC_AUTH_ACTIVE | 启用基本认证 | `true` |
| N8N_BASIC_AUTH_USER | 基本认证用户名(认证启用时必需) | `""` |
| N8N_BASIC_AUTH_PASSWORD | 基本认证密码(认证启用时必需) | `""` |
| N8N_HOST | 主机地址 | `0.0.0.0` |
| N8N_PROTOCOL | 协议(http 或 https) | `http` |
| WEBHOOK_URL | 外部访问的 Webhook URL | `http://localhost:5678/` |
| GENERIC_TIMEZONE | n8n 时区 | `UTC` |
| TZ | 系统时区 | `UTC` |
| DB_TYPE | 数据库类型(sqlite 或 postgresdb) | `sqlite` |
| DB_POSTGRESDB_DATABASE | PostgreSQL 数据库名 | `n8n` |
| DB_POSTGRESDB_HOST | PostgreSQL 主机 | `n8n-db` |
| DB_POSTGRESDB_PORT | PostgreSQL 端口 | `5432` |
| DB_POSTGRESDB_USER | PostgreSQL 用户名 | `n8n` |
| DB_POSTGRESDB_PASSWORD | PostgreSQL 密码 | `n8n123` |
| POSTGRES_VERSION | PostgreSQL 镜像版本 | `17.2-alpine3.21` |
| EXECUTIONS_MODE | 执行模式(regular 或 queue) | `regular` |
| N8N_ENCRYPTION_KEY | 凭据加密密钥 | `""` |
请创建 `.env` 文件并根据需要进行修改。
## 数据卷
- `n8n_data`: 用于存储 n8n 数据(工作流、凭据等)的卷。
- `n8n_db_data`: 用于存储 PostgreSQL 数据的卷(使用 PostgreSQL 配置文件时)。
## 快速开始
### SQLite(默认)
1. 创建包含认证凭据的 `.env` 文件:
```env
N8N_BASIC_AUTH_USER=admin
N8N_BASIC_AUTH_PASSWORD=your-secure-password
```
2. 启动服务:
```bash
docker compose up -d
```
3. 访问 `http://localhost:5678`
### PostgreSQL
1. 创建包含认证和数据库凭据的 `.env` 文件:
```env
N8N_BASIC_AUTH_USER=admin
N8N_BASIC_AUTH_PASSWORD=your-secure-password
DB_TYPE=postgresdb
DB_POSTGRESDB_PASSWORD=your-db-password
```
2. 使用 PostgreSQL 配置文件启动服务:
```bash
docker compose --profile postgres up -d
```
3. 访问 `http://localhost:5678`
## 功能特性
- **可视化工作流构建器**: 使用直观的拖放界面创建工作流
- **400+ 集成**: 连接到流行的服务和 API
- **原生 AI**: 内置 LangChain 支持用于 AI 工作流
- **按需编码**: 编写 JavaScript/Python 或使用可视化节点
- **自托管**: 完全控制您的数据和部署
- **Webhook 支持**: 通过外部事件触发工作流
- **定时执行**: 按计划运行工作流
## 文档
更多信息请访问 [n8n 官方文档](https://docs.n8n.io/)。
## 社区资源
- [n8n 社区论坛](https://community.n8n.io/)
- [工作流模板](https://n8n.io/workflows)
- [集成列表](https://n8n.io/integrations)
## 安全提示
- 在生产环境中始终设置 `N8N_BASIC_AUTH_USER` 和 `N8N_BASIC_AUTH_PASSWORD`
- 在生产环境中使用 HTTPS(设置 `N8N_PROTOCOL=https`)
- 考虑设置 `N8N_ENCRYPTION_KEY` 用于凭据加密
- 定期备份 n8n 数据卷
- 保持 n8n 更新到最新稳定版本


@@ -0,0 +1,83 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
n8n:
<<: *default
image: n8nio/n8n:${N8N_VERSION:-1.114.0}
container_name: n8n
ports:
- "${N8N_PORT:-5678}:5678"
    volumes:
      - *localtime
      - *timezone
      - n8n_data:/home/node/.n8n
environment:
- N8N_BASIC_AUTH_ACTIVE=${N8N_BASIC_AUTH_ACTIVE:-true}
- N8N_BASIC_AUTH_USER=${N8N_BASIC_AUTH_USER:-}
- N8N_BASIC_AUTH_PASSWORD=${N8N_BASIC_AUTH_PASSWORD:-}
- N8N_HOST=${N8N_HOST:-0.0.0.0}
- N8N_PORT=${N8N_PORT:-5678}
- N8N_PROTOCOL=${N8N_PROTOCOL:-http}
- WEBHOOK_URL=${WEBHOOK_URL:-http://localhost:5678/}
- GENERIC_TIMEZONE=${GENERIC_TIMEZONE:-UTC}
- TZ=${TZ:-UTC}
# Database configuration (optional, uses SQLite by default)
- DB_TYPE=${DB_TYPE:-sqlite}
- DB_POSTGRESDB_DATABASE=${DB_POSTGRESDB_DATABASE:-n8n}
- DB_POSTGRESDB_HOST=${DB_POSTGRESDB_HOST:-n8n-db}
- DB_POSTGRESDB_PORT=${DB_POSTGRESDB_PORT:-5432}
- DB_POSTGRESDB_USER=${DB_POSTGRESDB_USER:-n8n}
- DB_POSTGRESDB_PASSWORD=${DB_POSTGRESDB_PASSWORD:-}
# Execution mode
- EXECUTIONS_MODE=${EXECUTIONS_MODE:-regular}
- N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY:-}
    depends_on:
      n8n-db:
        condition: service_healthy
        required: false # only enforced when the postgres profile is active (Compose v2.20+)
deploy:
resources:
limits:
cpus: '2.0'
memory: 2G
reservations:
cpus: '0.5'
memory: 512M
n8n-db:
<<: *default
image: postgres:${POSTGRES_VERSION:-17.2-alpine3.21}
container_name: n8n-db
environment:
- POSTGRES_USER=${DB_POSTGRESDB_USER:-n8n}
- POSTGRES_PASSWORD=${DB_POSTGRESDB_PASSWORD:-n8n123}
- POSTGRES_DB=${DB_POSTGRESDB_DATABASE:-n8n}
    volumes:
      - *localtime
      - *timezone
      - n8n_db_data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${DB_POSTGRESDB_USER:-n8n}"]
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
profiles:
- postgres
deploy:
resources:
limits:
cpus: '1.0'
memory: 1G
reservations:
cpus: '0.25'
memory: 256M
volumes:
n8n_data:
n8n_db_data:

src/nacos/.env.example Normal file

@@ -0,0 +1,24 @@
# Nacos version
NACOS_VERSION="v3.1.0-slim"
# Mode: standalone or cluster
NACOS_MODE="standalone"
# Authentication settings
NACOS_AUTH_ENABLE=true
NACOS_AUTH_TOKEN="SecretKey012345678901234567890123456789012345678901234567890123456789"
NACOS_AUTH_IDENTITY_KEY="serverIdentity"
NACOS_AUTH_IDENTITY_VALUE="security"
# Database platform (leave empty for embedded db, or set to mysql)
SPRING_DATASOURCE_PLATFORM=""
# JVM settings
JVM_XMS="512m"
JVM_XMX="512m"
JVM_XMN="256m"
# Port overrides
NACOS_HTTP_PORT_OVERRIDE=8848
NACOS_GRPC_PORT_OVERRIDE=9848
NACOS_GRPC_PORT2_OVERRIDE=9849


@@ -0,0 +1,45 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
nacos:
<<: *default
image: nacos/nacos-server:${NACOS_VERSION:-v3.1.0-slim}
container_name: nacos
ports:
- "${NACOS_HTTP_PORT_OVERRIDE:-8848}:8848"
- "${NACOS_GRPC_PORT_OVERRIDE:-9848}:9848"
- "${NACOS_GRPC_PORT2_OVERRIDE:-9849}:9849"
volumes:
- *localtime
- *timezone
- nacos_logs:/home/nacos/logs
environment:
- MODE=${NACOS_MODE:-standalone}
- PREFER_HOST_MODE=hostname
- NACOS_AUTH_ENABLE=${NACOS_AUTH_ENABLE:-true}
- NACOS_AUTH_TOKEN=${NACOS_AUTH_TOKEN:-SecretKey012345678901234567890123456789012345678901234567890123456789}
- NACOS_AUTH_IDENTITY_KEY=${NACOS_AUTH_IDENTITY_KEY:-serverIdentity}
- NACOS_AUTH_IDENTITY_VALUE=${NACOS_AUTH_IDENTITY_VALUE:-security}
- SPRING_DATASOURCE_PLATFORM=${SPRING_DATASOURCE_PLATFORM:-}
- JVM_XMS=${JVM_XMS:-512m}
- JVM_XMX=${JVM_XMX:-512m}
- JVM_XMN=${JVM_XMN:-256m}
deploy:
resources:
limits:
cpus: '1.0'
memory: 1G
reservations:
cpus: '0.5'
memory: 512M
volumes:
nacos_logs:


@@ -0,0 +1,5 @@
# NebulaGraph version
NEBULA_VERSION="v3.8.0"
# Port override for GraphD
NEBULA_GRAPHD_PORT_OVERRIDE=9669

src/nebulagraph/README.md Normal file

@@ -0,0 +1,53 @@
# NebulaGraph
[English](./README.md) | [中文](./README.zh.md)
This service deploys NebulaGraph, a distributed, fast open-source graph database.
## Services
- `metad`: Meta service for cluster management
- `storaged`: Storage service for data persistence
- `graphd`: Query service for client connections
## Environment Variables
| Variable Name | Description | Default Value |
| --------------------------- | -------------------- | ------------- |
| NEBULA_VERSION | NebulaGraph version | `v3.8.0` |
| NEBULA_GRAPHD_PORT_OVERRIDE | GraphD port override | `9669` |
## Volumes
- `nebula_meta_data`: Meta service data
- `nebula_storage_data`: Storage service data
- `nebula_*_logs`: Log files for each service
## Usage
### Start NebulaGraph
```bash
docker compose up -d
```
### Connect to NebulaGraph
```bash
# Using console
docker run --rm -it --network host vesoft/nebula-console:v3.8.0 -addr 127.0.0.1 -port 9669 -u root -p nebula
```
## Access
- GraphD: <tcp://localhost:9669>
## Notes
- Default credentials: root/nebula
- Wait 20-30 seconds after startup for services to be ready
- On NebulaGraph 3.x, register the storage host once before creating spaces, e.g. run `ADD HOSTS "storaged":9779;` from the console
- Suitable for development and testing
## License
NebulaGraph is licensed under Apache License 2.0.


@@ -0,0 +1,104 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
metad:
<<: *default
image: vesoft/nebula-metad:${NEBULA_VERSION:-v3.8.0}
container_name: nebula-metad
environment:
- USER=root
command:
- --meta_server_addrs=metad:9559
- --local_ip=metad
- --ws_ip=metad
- --port=9559
- --data_path=/data/meta
- --log_dir=/logs
volumes:
- *localtime
- *timezone
- nebula_meta_data:/data/meta
- nebula_meta_logs:/logs
ports:
- "9559:9559"
- "19559:19559"
- "19560:19560"
deploy:
resources:
limits:
cpus: '0.5'
memory: 512M
storaged:
<<: *default
image: vesoft/nebula-storaged:${NEBULA_VERSION:-v3.8.0}
container_name: nebula-storaged
environment:
- USER=root
command:
- --meta_server_addrs=metad:9559
- --local_ip=storaged
- --ws_ip=storaged
- --port=9779
- --data_path=/data/storage
- --log_dir=/logs
depends_on:
- metad
volumes:
- *localtime
- *timezone
- nebula_storage_data:/data/storage
- nebula_storage_logs:/logs
ports:
- "9779:9779"
- "19779:19779"
- "19780:19780"
deploy:
resources:
limits:
cpus: '1.0'
memory: 1G
graphd:
<<: *default
image: vesoft/nebula-graphd:${NEBULA_VERSION:-v3.8.0}
container_name: nebula-graphd
environment:
- USER=root
command:
- --meta_server_addrs=metad:9559
- --port=9669
- --local_ip=graphd
- --ws_ip=graphd
- --log_dir=/logs
depends_on:
- metad
- storaged
volumes:
- *localtime
- *timezone
- nebula_graph_logs:/logs
ports:
- "${NEBULA_GRAPHD_PORT_OVERRIDE:-9669}:9669"
- "19669:19669"
- "19670:19670"
deploy:
resources:
limits:
cpus: '1.0'
memory: 1G
volumes:
nebula_meta_data:
nebula_meta_logs:
nebula_storage_data:
nebula_storage_logs:
nebula_graph_logs:

src/neo4j/.env.example Normal file

@@ -0,0 +1,17 @@
# Neo4j version (community or enterprise)
NEO4J_VERSION="5.27.4-community"
# Authentication (format: username/password or "none" to disable)
NEO4J_AUTH="neo4j/password"
# Accept license agreement (required for enterprise edition)
NEO4J_ACCEPT_LICENSE_AGREEMENT="yes"
# Memory configuration
NEO4J_PAGECACHE_SIZE="512M"
NEO4J_HEAP_INIT_SIZE="512M"
NEO4J_HEAP_MAX_SIZE="1G"
# Port overrides
NEO4J_HTTP_PORT_OVERRIDE=7474
NEO4J_BOLT_PORT_OVERRIDE=7687


@@ -0,0 +1,45 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
neo4j:
<<: *default
image: neo4j:${NEO4J_VERSION:-5.27.4-community}
container_name: neo4j
ports:
- "${NEO4J_HTTP_PORT_OVERRIDE:-7474}:7474"
- "${NEO4J_BOLT_PORT_OVERRIDE:-7687}:7687"
volumes:
- *localtime
- *timezone
- neo4j_data:/data
- neo4j_logs:/logs
- neo4j_import:/var/lib/neo4j/import
- neo4j_plugins:/plugins
environment:
- NEO4J_AUTH=${NEO4J_AUTH:-neo4j/password}
- NEO4J_ACCEPT_LICENSE_AGREEMENT=${NEO4J_ACCEPT_LICENSE_AGREEMENT:-yes}
      - NEO4J_server_memory_pagecache_size=${NEO4J_PAGECACHE_SIZE:-512M}
      - NEO4J_server_memory_heap_initial__size=${NEO4J_HEAP_INIT_SIZE:-512M}
      - NEO4J_server_memory_heap_max__size=${NEO4J_HEAP_MAX_SIZE:-1G}
deploy:
resources:
limits:
cpus: '2.0'
memory: 2G
reservations:
cpus: '0.5'
memory: 1G
volumes:
neo4j_data:
neo4j_logs:
neo4j_import:
neo4j_plugins:


@@ -0,0 +1,5 @@
# Node Exporter version
NODE_EXPORTER_VERSION="v1.8.2"
# Port to bind to on the host machine
NODE_EXPORTER_PORT_OVERRIDE=9100


@@ -0,0 +1,81 @@
# Node Exporter
[English](./README.md) | [中文](./README.zh.md)
This service deploys Prometheus Node Exporter, which exposes hardware and OS metrics from *NIX kernels.
## Services
- `node-exporter`: Prometheus Node Exporter service
## Environment Variables
| Variable Name | Description | Default Value |
| --------------------------- | --------------------- | ------------- |
| NODE_EXPORTER_VERSION | Node Exporter version | `v1.8.2` |
| NODE_EXPORTER_PORT_OVERRIDE | Host port mapping | `9100` |
Please modify the `.env` file as needed for your use case.
## Usage
### Start Node Exporter
```bash
docker compose up -d
```
### Access Metrics
- Metrics endpoint: <http://localhost:9100/metrics>
### Configure Prometheus
Add this scrape config to your Prometheus configuration:
```yaml
scrape_configs:
- job_name: 'node'
static_configs:
- targets: ['localhost:9100']
```
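When Prometheus itself runs in a container on the same Docker network, pointing at the service name instead of `localhost` is the usual pattern (a sketch; the shared network is an assumption about your setup):

```yaml
scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets: ['node-exporter:9100']  # resolvable when Prometheus shares a Docker network with this service
```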
## Metrics Collected
Node Exporter collects a wide variety of system metrics:
- **CPU**: usage, frequency, temperature
- **Memory**: usage, available, cached
- **Disk**: I/O, space usage
- **Network**: traffic, errors
- **File system**: mount points, usage
- **Load**: system load averages
- **And many more**
## Network Mode
For more accurate metrics, you can run Node Exporter with host network mode. Uncomment in `docker-compose.yaml`:
```yaml
network_mode: host
```
Note: When using host network mode, port mapping is not needed.
## Notes
- Node Exporter should run on each host you want to monitor
- The service needs access to host filesystem and processes
- Metrics are exposed in Prometheus format
- No authentication is provided by default
## Security
- Bind to localhost only if running Prometheus on the same host
- Use firewall rules to restrict access to the metrics endpoint
- Consider using a reverse proxy with authentication for production
- Monitor access logs for suspicious activity
## License
Node Exporter is licensed under Apache License 2.0. See [Node Exporter GitHub](https://github.com/prometheus/node_exporter) for more information.


@@ -0,0 +1,36 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
node-exporter:
<<: *default
image: prom/node-exporter:${NODE_EXPORTER_VERSION:-v1.8.2}
container_name: node-exporter
ports:
- "${NODE_EXPORTER_PORT_OVERRIDE:-9100}:9100"
command:
- '--path.rootfs=/host'
- '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys'
- '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
volumes:
- '/:/host:ro,rslave'
deploy:
resources:
limits:
cpus: '0.25'
memory: 128M
reservations:
cpus: '0.1'
memory: 64M
    # For more accurate metrics, uncomment the line below inside this service
    # and remove the ports mapping:
    # network_mode: host
volumes: {}

src/odoo/.env.example Normal file

@@ -0,0 +1,13 @@
# Odoo version
ODOO_VERSION="19.0"
# PostgreSQL version
POSTGRES_VERSION="17-alpine"
# Database configuration
POSTGRES_USER="odoo"
POSTGRES_PASSWORD="odoopass"
POSTGRES_DB="postgres"
# Port to bind to on the host machine
ODOO_PORT_OVERRIDE=8069

src/odoo/README.md Normal file

@@ -0,0 +1,101 @@
# Odoo
[Odoo](https://www.odoo.com/) is a suite of open source business apps that cover all your company needs: CRM, eCommerce, accounting, inventory, point of sale, project management, etc.
## Features
- Modular: Choose from over 30,000 apps
- Integrated: All apps work seamlessly together
- Open Source: Free to use and customize
- Scalable: From small businesses to enterprises
- User-Friendly: Modern and intuitive interface
## Quick Start
Start Odoo with PostgreSQL:
```bash
docker compose up -d
```
## Configuration
### Environment Variables
- `ODOO_VERSION`: Odoo version (default: `19.0`)
- `ODOO_PORT_OVERRIDE`: HTTP port (default: `8069`)
- `POSTGRES_VERSION`: PostgreSQL version (default: `17-alpine`)
- `POSTGRES_USER`: Database user (default: `odoo`)
- `POSTGRES_PASSWORD`: Database password (default: `odoopass`)
- `POSTGRES_DB`: Database name (default: `postgres`)
## Access
- Web UI: <http://localhost:8069>
## First Time Setup
1. Navigate to <http://localhost:8069>
2. Create a new database:
- Master password: (set a strong password)
- Database name: (e.g., `mycompany`)
- Email: Your admin email
- Password: Your admin password
3. Choose apps to install
4. Start using Odoo!
## Custom Addons
Place custom addons in the `odoo_addons` volume. The directory structure should be:
```text
odoo_addons/
├── addon1/
│ ├── __init__.py
│ ├── __manifest__.py
│ └── ...
└── addon2/
├── __init__.py
├── __manifest__.py
└── ...
```
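Each addon directory needs a `__manifest__.py`; a minimal sketch looks like the following (names and version are illustrative; the real file contains just the bare dict literal, shown here assigned to a variable so it can be inspected):

```python
# Minimal Odoo addon manifest (illustrative values; a real __manifest__.py is just the dict literal)
manifest = {
    "name": "Addon1",            # human-readable addon name
    "version": "19.0.1.0.0",     # conventionally prefixed with the Odoo major version
    "depends": ["base"],         # modules that must be installed first
    "data": [],                  # XML/CSV data files to load
    "installable": True,
}
```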
## Database Management
### Create a New Database
1. Go to <http://localhost:8069/web/database/manager>
2. Click "Create Database"
3. Fill in the required information
4. Click "Create"
### Backup Database
1. Go to <http://localhost:8069/web/database/manager>
2. Select your database
3. Click "Backup"
4. Save the backup file
### Restore Database
1. Go to <http://localhost:8069/web/database/manager>
2. Click "Restore Database"
3. Upload your backup file
4. Click "Restore"
## Resources
- Resource Limits: 2 CPU, 2G RAM (Odoo), 1 CPU, 1G RAM (Database)
- Resource Reservations: 0.5 CPU, 1G RAM (Odoo), 0.25 CPU, 512M RAM (Database)
## Production Considerations
For production deployments:
1. Set a strong master password
2. Use HTTPS (configure reverse proxy)
3. Regular database backups
4. Monitor resource usage
5. Keep Odoo and addons updated
6. Configure email settings for notifications
7. Set up proper logging and monitoring

src/odoo/README.zh.md Normal file

@@ -0,0 +1,101 @@
# Odoo
[Odoo](https://www.odoo.com/) 是一套开源商业应用程序,涵盖了公司的所有需求:CRM、电子商务、会计、库存、销售点、项目管理等。
## 功能特性
- 模块化:从 30,000 多个应用中选择
- 集成化:所有应用无缝协作
- 开源:免费使用和定制
- 可扩展:从小型企业到大型企业
- 用户友好:现代直观的界面
## 快速开始
使用 PostgreSQL 启动 Odoo:
```bash
docker compose up -d
```
## 配置
### 环境变量
- `ODOO_VERSION`: Odoo 版本(默认:`19.0`)
- `ODOO_PORT_OVERRIDE`: HTTP 端口(默认:`8069`)
- `POSTGRES_VERSION`: PostgreSQL 版本(默认:`17-alpine`)
- `POSTGRES_USER`: 数据库用户(默认:`odoo`)
- `POSTGRES_PASSWORD`: 数据库密码(默认:`odoopass`)
- `POSTGRES_DB`: 数据库名称(默认:`postgres`)
## 访问
- Web UI: <http://localhost:8069>
## 首次设置
1. 导航到 <http://localhost:8069>
2. 创建新数据库:
- 主密码:(设置一个强密码)
   - 数据库名称:(例如:`mycompany`)
- 邮箱:您的管理员邮箱
- 密码:您的管理员密码
3. 选择要安装的应用
4. 开始使用 Odoo!
## 自定义插件
将自定义插件放在 `odoo_addons` 卷中。目录结构应该是:
```text
odoo_addons/
├── addon1/
│ ├── __init__.py
│ ├── __manifest__.py
│ └── ...
└── addon2/
├── __init__.py
├── __manifest__.py
└── ...
```
## 数据库管理
### 创建新数据库
1. 访问 <http://localhost:8069/web/database/manager>
2. 点击"创建数据库"
3. 填写必要信息
4. 点击"创建"
### 备份数据库
1. 访问 <http://localhost:8069/web/database/manager>
2. 选择您的数据库
3. 点击"备份"
4. 保存备份文件
### 恢复数据库
1. 访问 <http://localhost:8069/web/database/manager>
2. 点击"恢复数据库"
3. 上传您的备份文件
4. 点击"恢复"
## 资源配置
- 资源限制:2 CPU、2G 内存(Odoo);1 CPU、1G 内存(数据库)
- 资源预留:0.5 CPU、1G 内存(Odoo);0.25 CPU、512M 内存(数据库)
## 生产环境考虑因素
对于生产环境部署:
1. 设置强主密码
2. 使用 HTTPS(配置反向代理)
3. 定期数据库备份
4. 监控资源使用情况
5. 保持 Odoo 和插件更新
6. 配置电子邮件设置以接收通知
7. 设置适当的日志记录和监控


@@ -0,0 +1,65 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
odoo:
<<: *default
image: odoo:${ODOO_VERSION:-19.0}
container_name: odoo
depends_on:
- odoo-db
ports:
- "${ODOO_PORT_OVERRIDE:-8069}:8069"
volumes:
- *localtime
- *timezone
- odoo_web_data:/var/lib/odoo
- odoo_addons:/mnt/extra-addons
environment:
- HOST=odoo-db
- USER=${POSTGRES_USER:-odoo}
- PASSWORD=${POSTGRES_PASSWORD:-odoopass}
- DB_PORT=5432
- DB_NAME=${POSTGRES_DB:-postgres}
deploy:
resources:
limits:
cpus: '2.0'
memory: 2G
reservations:
cpus: '0.5'
memory: 1G
odoo-db:
<<: *default
image: postgres:${POSTGRES_VERSION:-17-alpine}
container_name: odoo-db
environment:
- POSTGRES_USER=${POSTGRES_USER:-odoo}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-odoopass}
- POSTGRES_DB=${POSTGRES_DB:-postgres}
- PGDATA=/var/lib/postgresql/data/pgdata
volumes:
- *localtime
- *timezone
- odoo_db_data:/var/lib/postgresql/data
deploy:
resources:
limits:
cpus: '1.0'
memory: 1G
reservations:
cpus: '0.25'
memory: 512M
volumes:
odoo_web_data:
odoo_addons:
odoo_db_data:

src/opencoze/README.md Normal file

@@ -0,0 +1,89 @@
# OpenCoze
[English](./README.md) | [中文](./README.zh.md)
OpenCoze is a comprehensive AI application development platform based on Coze Studio.
## Important Notice
OpenCoze requires a complex multi-service architecture that includes:
- MySQL (database)
- Redis (caching)
- Elasticsearch (search engine)
- MinIO (object storage)
- etcd (distributed configuration)
- Milvus (vector database)
- NSQ (message queue)
- Coze Server (main application)
- Nginx (reverse proxy)
Due to the complexity of this setup, **we recommend using the official docker-compose configuration directly** from the Coze Studio repository.
## Official Deployment
1. Clone the official repository:
```bash
git clone https://github.com/coze-dev/coze-studio.git
cd coze-studio/docker
```
2. Follow the official deployment guide:
- [Official Documentation](https://github.com/coze-dev/coze-studio)
- [Docker Deployment Guide](https://github.com/coze-dev/coze-studio/tree/main/docker)
3. The official docker-compose includes all necessary services with proper configuration.
## System Requirements
- **Minimum Resources**:
- CPU: 8 cores
- RAM: 16GB
- Disk: 100GB SSD
- **Recommended Resources**:
- CPU: 16 cores
- RAM: 32GB
- Disk: 200GB SSD
## Key Features
- **AI Bot Builder**: Visual interface for creating AI-powered chatbots
- **Workflow Automation**: Design complex workflows with AI capabilities
- **Knowledge Base**: Manage and utilize knowledge bases for AI responses
- **Plugin System**: Extend functionality with custom plugins
- **Multi-model Support**: Integration with various LLM providers
- **Team Collaboration**: Multi-user workspace with permission management
## Getting Started
For detailed setup instructions, please refer to:
- [Official GitHub Repository](https://github.com/coze-dev/coze-studio)
- [Official Docker Compose](https://github.com/coze-dev/coze-studio/blob/main/docker/docker-compose.yml)
## Alternative: Cloud Version
If self-hosting is too complex, consider using the cloud version:
- [Coze Cloud](https://www.coze.com/) (Official cloud service)
## Security Notes
When deploying OpenCoze:
- Change all default passwords
- Use strong encryption keys
- Enable HTTPS with valid SSL certificates
- Implement proper firewall rules
- Regularly backup all data volumes
- Keep all services updated to the latest versions
- Monitor resource usage and performance
## Support
For issues and questions:
- [GitHub Issues](https://github.com/coze-dev/coze-studio/issues)
- [Official Documentation](https://github.com/coze-dev/coze-studio)

src/opencoze/README.zh.md Normal file

@@ -0,0 +1,89 @@
# OpenCoze
[English](./README.md) | [中文](./README.zh.md)
OpenCoze 是一个基于 Coze Studio 的综合性 AI 应用开发平台。
## 重要提示
OpenCoze 需要一个复杂的多服务架构,包括:
- MySQL(数据库)
- Redis(缓存)
- Elasticsearch(搜索引擎)
- MinIO(对象存储)
- etcd(分布式配置)
- Milvus(向量数据库)
- NSQ(消息队列)
- Coze Server(主应用)
- Nginx(反向代理)
由于设置的复杂性,**我们建议直接使用 Coze Studio 仓库中的官方 docker-compose 配置**。
## 官方部署
1. 克隆官方仓库:
```bash
git clone https://github.com/coze-dev/coze-studio.git
cd coze-studio/docker
```
2. 遵循官方部署指南:
- [官方文档](https://github.com/coze-dev/coze-studio)
- [Docker 部署指南](https://github.com/coze-dev/coze-studio/tree/main/docker)
3. 官方 docker-compose 包含所有必需的服务及适当的配置。
## 系统要求
- **最低要求**:
- CPU: 8 核
- 内存: 16GB
- 磁盘: 100GB SSD
- **推荐配置**:
- CPU: 16 核
- 内存: 32GB
- 磁盘: 200GB SSD
## 主要功能
- **AI 机器人构建器**: 用于创建 AI 驱动的聊天机器人的可视化界面
- **工作流自动化**: 设计具有 AI 能力的复杂工作流
- **知识库**: 管理和利用知识库进行 AI 响应
- **插件系统**: 使用自定义插件扩展功能
- **多模型支持**: 与各种 LLM 提供商集成
- **团队协作**: 具有权限管理的多用户工作区
## 快速开始
详细的设置说明请参考:
- [官方 GitHub 仓库](https://github.com/coze-dev/coze-studio)
- [官方 Docker Compose](https://github.com/coze-dev/coze-studio/blob/main/docker/docker-compose.yml)
## 替代方案:云版本
如果自托管过于复杂,可以考虑使用云版本:
- [Coze 云服务](https://www.coze.com/)(官方云服务)
## 安全提示
部署 OpenCoze 时:
- 更改所有默认密码
- 使用强加密密钥
- 启用带有有效 SSL 证书的 HTTPS
- 实施适当的防火墙规则
- 定期备份所有数据卷
- 保持所有服务更新到最新版本
- 监控资源使用和性能
## 支持
如有问题和疑问:
- [GitHub Issues](https://github.com/coze-dev/coze-studio/issues)
- [官方文档](https://github.com/coze-dev/coze-studio)


@@ -0,0 +1,29 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
# Note: OpenCoze is a complex platform that requires multiple services.
# This is a placeholder configuration. For full deployment, please refer to:
# https://github.com/coze-dev/coze-studio/tree/main/docker
opencoze-info:
image: alpine:latest
container_name: opencoze-info
command: >
sh -c "echo 'OpenCoze requires a complex multi-service setup.' &&
echo 'Please visit https://github.com/coze-dev/coze-studio for full deployment instructions.' &&
echo 'The official docker-compose includes: MySQL, Redis, Elasticsearch, MinIO, etcd, Milvus, NSQ, and the Coze server.' &&
echo 'For production deployment, consider using their official docker-compose.yml directly.' &&
tail -f /dev/null"
deploy:
resources:
limits:
cpus: '0.1'
memory: 64M

src/openlist/.env.example Normal file

@@ -0,0 +1,13 @@
# OpenList version
OPENLIST_VERSION="latest"
# User and group IDs
PUID=0
PGID=0
UMASK=022
# Timezone
TZ="Asia/Shanghai"
# Port override
OPENLIST_PORT_OVERRIDE=5244

src/openlist/README.md Normal file

@@ -0,0 +1,53 @@
# OpenList
[English](./README.md) | [中文](./README.zh.md)
This service deploys OpenList, a file list program that supports multiple storage providers.
## Services
- `openlist`: OpenList service
## Environment Variables
| Variable Name | Description | Default Value |
| ---------------------- | ----------------- | --------------- |
| OPENLIST_VERSION | OpenList version | `latest` |
| PUID | User ID | `0` |
| PGID | Group ID | `0` |
| UMASK | UMASK | `022` |
| TZ | Timezone | `Asia/Shanghai` |
| OPENLIST_PORT_OVERRIDE | Host port mapping | `5244` |
## Volumes
- `openlist_data`: Data directory
## Usage
### Start OpenList
```bash
docker compose up -d
```
### Access
- Web UI: <http://localhost:5244>
### Initial Setup
1. Open <http://localhost:5244>
2. Complete the initial setup wizard
3. Configure storage providers
4. Start managing files
## Notes
- First startup requires initial configuration
- Supports multiple cloud storage providers
- Community-driven fork of AList
## License
OpenList follows the original AList license. See [OpenList GitHub](https://github.com/OpenListTeam/OpenList) for more information.


@@ -0,0 +1,37 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
openlist:
<<: *default
image: openlistteam/openlist:${OPENLIST_VERSION:-latest}
container_name: openlist
ports:
- "${OPENLIST_PORT_OVERRIDE:-5244}:5244"
volumes:
- *localtime
- *timezone
- openlist_data:/opt/openlist/data
environment:
- PUID=${PUID:-0}
- PGID=${PGID:-0}
- UMASK=${UMASK:-022}
- TZ=${TZ:-Asia/Shanghai}
deploy:
resources:
limits:
cpus: '1.0'
memory: 1G
reservations:
cpus: '0.25'
memory: 256M
volumes:
openlist_data:


@@ -0,0 +1,22 @@
# OpenSearch version
OPENSEARCH_VERSION="2.19.0"
# OpenSearch Dashboards version
OPENSEARCH_DASHBOARDS_VERSION="2.19.0"
# Cluster configuration
CLUSTER_NAME="opensearch-cluster"
# JVM heap size (should be 50% of container memory)
OPENSEARCH_HEAP_SIZE="512m"
# Admin password (minimum 8 chars with upper, lower, digit, special char)
OPENSEARCH_ADMIN_PASSWORD="Admin@123"
# Security plugin (set to true to disable for testing)
DISABLE_SECURITY_PLUGIN="false"
# Port overrides
OPENSEARCH_PORT_OVERRIDE=9200
OPENSEARCH_PERF_ANALYZER_PORT_OVERRIDE=9600
OPENSEARCH_DASHBOARDS_PORT_OVERRIDE=5601

src/opensearch/README.md Normal file

@@ -0,0 +1,99 @@
# OpenSearch
[English](./README.md) | [中文](./README.zh.md)
This service deploys OpenSearch (Elasticsearch fork) with OpenSearch Dashboards (Kibana fork).
## Services
- `opensearch`: OpenSearch server for search and analytics.
- `opensearch-dashboards`: OpenSearch Dashboards for visualization.
## Environment Variables
| Variable Name | Description | Default Value |
| -------------------------------------- | ----------------------------- | -------------------- |
| OPENSEARCH_VERSION | OpenSearch image version | `2.19.0` |
| OPENSEARCH_DASHBOARDS_VERSION | OpenSearch Dashboards version | `2.19.0` |
| CLUSTER_NAME | Cluster name | `opensearch-cluster` |
| OPENSEARCH_HEAP_SIZE | JVM heap size | `512m` |
| OPENSEARCH_ADMIN_PASSWORD | Admin password | `Admin@123` |
| DISABLE_SECURITY_PLUGIN | Disable security plugin | `false` |
| OPENSEARCH_PORT_OVERRIDE | OpenSearch API port | `9200` |
| OPENSEARCH_PERF_ANALYZER_PORT_OVERRIDE | Performance Analyzer port | `9600` |
| OPENSEARCH_DASHBOARDS_PORT_OVERRIDE | Dashboards UI port | `5601` |
Please modify the `.env` file as needed for your use case.
## Volumes
- `opensearch_data`: OpenSearch data storage.
## Usage
### Start the Services
```bash
docker compose up -d
```
### Access OpenSearch
OpenSearch API:
```bash
curl -XGET https://localhost:9200 -u 'admin:Admin@123' --insecure
```
### Access OpenSearch Dashboards
Open your browser and navigate to:
```text
http://localhost:5601
```
Login with username `admin` and the password set in `OPENSEARCH_ADMIN_PASSWORD`.
### Create an Index
```bash
curl -XPUT https://localhost:9200/my-index -u 'admin:Admin@123' --insecure
```
### Index a Document
```bash
curl -XPOST https://localhost:9200/my-index/_doc -u 'admin:Admin@123' --insecure \
-H 'Content-Type: application/json' \
-d '{"title": "Hello OpenSearch", "content": "This is a test document"}'
```
### Search Documents
```bash
curl -XGET https://localhost:9200/my-index/_search -u 'admin:Admin@123' --insecure \
-H 'Content-Type: application/json' \
-d '{"query": {"match": {"title": "Hello"}}}'
```
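For loading more than a handful of documents, the `_bulk` endpoint is faster than one request per document. A minimal sketch of how its newline-delimited (NDJSON) body is assembled — the index name and fields follow the examples above; send the result with `Content-Type: application/x-ndjson`:

```python
import json

def build_bulk_payload(index, docs):
    """Build an NDJSON body for the /_bulk endpoint: each document becomes
    an action line followed by a source line; the body must end with '\n'."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

payload = build_bulk_payload("my-index", [
    {"title": "Hello OpenSearch", "content": "This is a test document"},
    {"title": "Bulk indexing", "content": "Two documents in one request"},
])
print(payload)
```

After writing `payload` to a file such as `bulk.ndjson`, it can be sent with `curl -XPOST https://localhost:9200/_bulk -u 'admin:Admin@123' --insecure -H 'Content-Type: application/x-ndjson' --data-binary @bulk.ndjson`.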
## Features
- **Full-Text Search**: Advanced search capabilities with relevance scoring
- **Analytics**: Real-time data analysis and aggregations
- **Visualization**: Rich dashboards with OpenSearch Dashboards
- **Security**: Built-in security plugin with authentication and encryption
- **RESTful API**: Easy integration with any programming language
- **Scalable**: Single-node for development, cluster mode for production
## Notes
- The admin password must be at least 8 characters and include an uppercase letter, a lowercase letter, a digit, and a special character
- For production, change the admin password and consider using external certificates
- JVM heap size should be set to 50% of available memory (max 31GB)
- Security plugin can be disabled for testing by setting `DISABLE_SECURITY_PLUGIN=true`
- For cluster mode, add more nodes and configure `discovery.seed_hosts`
## License
OpenSearch is licensed under the Apache License 2.0.


@@ -0,0 +1,99 @@
# OpenSearch
[English](./README.md) | [中文](./README.zh.md)
This service deploys OpenSearch (a fork of Elasticsearch) together with OpenSearch Dashboards (a fork of Kibana).
## Services
- `opensearch`: OpenSearch server for search and analytics.
- `opensearch-dashboards`: OpenSearch Dashboards for visualization.
## Environment Variables
| Variable Name | Description | Default Value |
| -------------------------------------- | ----------------------------- | -------------------- |
| OPENSEARCH_VERSION | OpenSearch image version | `2.19.0` |
| OPENSEARCH_DASHBOARDS_VERSION | OpenSearch Dashboards version | `2.19.0` |
| CLUSTER_NAME | Cluster name | `opensearch-cluster` |
| OPENSEARCH_HEAP_SIZE | JVM heap size | `512m` |
| OPENSEARCH_ADMIN_PASSWORD | Admin password | `Admin@123` |
| DISABLE_SECURITY_PLUGIN | Disable security plugin | `false` |
| OPENSEARCH_PORT_OVERRIDE | OpenSearch API port | `9200` |
| OPENSEARCH_PERF_ANALYZER_PORT_OVERRIDE | Performance Analyzer port | `9600` |
| OPENSEARCH_DASHBOARDS_PORT_OVERRIDE | Dashboards UI port | `5601` |
Please modify the `.env` file as needed for your use case.
## Volumes
- `opensearch_data`: OpenSearch data storage.
## Usage
### Start the Services
```bash
docker-compose up -d
```
### Access OpenSearch
OpenSearch API:
```bash
curl -XGET https://localhost:9200 -u 'admin:Admin@123' --insecure
```
### Access OpenSearch Dashboards
Open your browser and navigate to:
```text
http://localhost:5601
```
Log in with username `admin` and the password set in `OPENSEARCH_ADMIN_PASSWORD`.
### Create an Index
```bash
curl -XPUT https://localhost:9200/my-index -u 'admin:Admin@123' --insecure
```
### Index a Document
```bash
curl -XPOST https://localhost:9200/my-index/_doc -u 'admin:Admin@123' --insecure \
  -H 'Content-Type: application/json' \
  -d '{"title": "Hello OpenSearch", "content": "This is a test document"}'
```
### Search Documents
```bash
curl -XGET https://localhost:9200/my-index/_search -u 'admin:Admin@123' --insecure \
  -H 'Content-Type: application/json' \
  -d '{"query": {"match": {"title": "Hello"}}}'
```
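For loading more than a handful of documents, the `_bulk` endpoint is faster than one request per document. A minimal sketch of how its newline-delimited (NDJSON) body is assembled — the index name and fields follow the examples above; send the result with `Content-Type: application/x-ndjson`:

```python
import json

def build_bulk_payload(index, docs):
    """Build an NDJSON body for the /_bulk endpoint: each document becomes
    an action line followed by a source line; the body must end with '\n'."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

payload = build_bulk_payload("my-index", [
    {"title": "Hello OpenSearch", "content": "This is a test document"},
    {"title": "Bulk indexing", "content": "Two documents in one request"},
])
print(payload)
```

After writing `payload` to a file such as `bulk.ndjson`, it can be sent with `curl -XPOST https://localhost:9200/_bulk -u 'admin:Admin@123' --insecure -H 'Content-Type: application/x-ndjson' --data-binary @bulk.ndjson`.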
## Features
- **Full-Text Search**: Advanced search capabilities with relevance scoring
- **Analytics**: Real-time data analysis and aggregations
- **Visualization**: Rich dashboards with OpenSearch Dashboards
- **Security**: Built-in security plugin with authentication and encryption
- **RESTful API**: Easy integration with any programming language
- **Scalable**: Single-node for development, cluster mode for production
## Notes
- The admin password must be at least 8 characters and include an uppercase letter, a lowercase letter, a digit, and a special character
- For production, change the admin password and consider using external certificates
- Set the JVM heap size to about 50% of available memory (at most 31GB)
- The security plugin can be disabled for testing by setting `DISABLE_SECURITY_PLUGIN=true`
- For cluster mode, add more nodes and configure `discovery.seed_hosts`
## License
OpenSearch is licensed under the Apache License 2.0.


@@ -0,0 +1,68 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
opensearch:
<<: *default
image: opensearchproject/opensearch:${OPENSEARCH_VERSION:-2.19.0}
container_name: opensearch
environment:
cluster.name: ${CLUSTER_NAME:-opensearch-cluster}
node.name: opensearch
discovery.type: single-node
bootstrap.memory_lock: true
OPENSEARCH_JAVA_OPTS: "-Xms${OPENSEARCH_HEAP_SIZE:-512m} -Xmx${OPENSEARCH_HEAP_SIZE:-512m}"
OPENSEARCH_INITIAL_ADMIN_PASSWORD: ${OPENSEARCH_ADMIN_PASSWORD:-Admin@123}
DISABLE_SECURITY_PLUGIN: ${DISABLE_SECURITY_PLUGIN:-false}
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
ports:
- "${OPENSEARCH_PORT_OVERRIDE:-9200}:9200"
- "${OPENSEARCH_PERF_ANALYZER_PORT_OVERRIDE:-9600}:9600"
volumes:
- *localtime
- *timezone
- opensearch_data:/usr/share/opensearch/data
deploy:
resources:
limits:
cpus: '2.0'
memory: 2G
reservations:
cpus: '1.0'
memory: 1G
opensearch-dashboards:
<<: *default
image: opensearchproject/opensearch-dashboards:${OPENSEARCH_DASHBOARDS_VERSION:-2.19.0}
container_name: opensearch-dashboards
ports:
- "${OPENSEARCH_DASHBOARDS_PORT_OVERRIDE:-5601}:5601"
environment:
OPENSEARCH_HOSTS: '["https://opensearch:9200"]'
DISABLE_SECURITY_DASHBOARDS_PLUGIN: ${DISABLE_SECURITY_PLUGIN:-false}
depends_on:
- opensearch
deploy:
resources:
limits:
cpus: '1.0'
memory: 1G
reservations:
cpus: '0.5'
memory: 512M
volumes:
opensearch_data:

src/pytorch/.env.example

@@ -0,0 +1,15 @@
# PyTorch version with CUDA support
PYTORCH_VERSION="2.6.0-cuda12.6-cudnn9-runtime"
# Jupyter configuration
JUPYTER_ENABLE_LAB="yes"
JUPYTER_TOKEN="pytorch"
# NVIDIA GPU configuration
NVIDIA_VISIBLE_DEVICES="all"
NVIDIA_DRIVER_CAPABILITIES="compute,utility"
GPU_COUNT=1
# Port overrides
JUPYTER_PORT_OVERRIDE=8888
TENSORBOARD_PORT_OVERRIDE=6006

src/pytorch/README.md

@@ -0,0 +1,153 @@
# PyTorch
[English](./README.md) | [中文](./README.zh.md)
This service deploys PyTorch with CUDA support, Jupyter Lab, and TensorBoard for deep learning development.
## Services
- `pytorch`: PyTorch container with GPU support, Jupyter Lab, and TensorBoard.
## Prerequisites
**NVIDIA GPU Required**: This service requires an NVIDIA GPU with CUDA support and the NVIDIA Container Toolkit installed.
### Install NVIDIA Container Toolkit
**Linux:**
```bash
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
```
**Windows (Docker Desktop):**
Ensure you have WSL2 with NVIDIA drivers installed and Docker Desktop configured to use the WSL2 backend.
## Environment Variables
| Variable Name | Description | Default Value |
| -------------------------- | -------------------------- | ------------------------------- |
| PYTORCH_VERSION | PyTorch image version | `2.6.0-cuda12.6-cudnn9-runtime` |
| JUPYTER_ENABLE_LAB | Enable Jupyter Lab | `yes` |
| JUPYTER_TOKEN | Jupyter access token | `pytorch` |
| NVIDIA_VISIBLE_DEVICES | GPUs to use | `all` |
| NVIDIA_DRIVER_CAPABILITIES | Driver capabilities | `compute,utility` |
| GPU_COUNT | Number of GPUs to allocate | `1` |
| JUPYTER_PORT_OVERRIDE | Jupyter Lab port | `8888` |
| TENSORBOARD_PORT_OVERRIDE | TensorBoard port | `6006` |
Please modify the `.env` file as needed for your use case.
## Volumes
- `pytorch_notebooks`: Jupyter notebooks and scripts.
- `pytorch_data`: Training data and datasets.
## Usage
### Start the Service
```bash
docker-compose up -d
```
### Access Jupyter Lab
Open your browser and navigate to:
```text
http://localhost:8888
```
Log in with the token specified in `JUPYTER_TOKEN` (default: `pytorch`).
### Verify GPU Access
In a Jupyter notebook:
```python
import torch
print(f"PyTorch version: {torch.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")
print(f"CUDA version: {torch.version.cuda}")
print(f"Number of GPUs: {torch.cuda.device_count()}")
if torch.cuda.is_available():
print(f"GPU name: {torch.cuda.get_device_name(0)}")
```
### Example Training Script
```python
import torch
import torch.nn as nn
import torch.optim as optim
# Set device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Define a simple model
model = nn.Sequential(
nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 10)
).to(device)
# Create dummy data
x = torch.randn(64, 784).to(device)
y = torch.randint(0, 10, (64,)).to(device)
# Training (one step)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters())
optimizer.zero_grad()
output = model(x)
loss = criterion(output, y)
loss.backward()
optimizer.step()
print(f"Loss: {loss.item()}")
```
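As a quick sanity check that needs no GPU, the parameter count of the model above can be computed by hand; it should match `sum(p.numel() for p in model.parameters())`:

```python
# Parameter count of nn.Sequential(Linear(784, 128), ReLU(), Linear(128, 10)):
# a Linear(in_features, out_features) layer has in*out weights plus out biases;
# ReLU has no parameters.
layer1 = 784 * 128 + 128   # 100480
layer2 = 128 * 10 + 10     # 1290
total = layer1 + layer2
print(total)  # 101770
```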
### Access TensorBoard
The TensorBoard port is exposed, but TensorBoard itself has to be started manually:
```python
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter('/workspace/runs')
```
Then start TensorBoard:
```bash
docker exec pytorch tensorboard --logdir=/workspace/runs --host=0.0.0.0
```
Access at: `http://localhost:6006`
## Features
- **GPU Acceleration**: CUDA support for fast training
- **Jupyter Lab**: Interactive development environment
- **TensorBoard**: Visualization for training metrics
- **Pre-installed**: PyTorch, CUDA, cuDNN ready to use
- **Persistent Storage**: Notebooks and data stored in volumes
## Notes
- GPU is required for optimal performance
- Recommended: 8GB+ VRAM for most deep learning tasks
- The container installs Jupyter and TensorBoard on first start
- Use `pytorch/pytorch:*-devel` for building custom extensions
- For multi-GPU training, adjust `GPU_COUNT` and use `torch.nn.parallel.DistributedDataParallel` (preferred over the older `torch.nn.DataParallel`)
## License
PyTorch is licensed under a BSD-style license.

src/pytorch/README.zh.md

@@ -0,0 +1,153 @@
# PyTorch
[English](./README.md) | [中文](./README.zh.md)
This service deploys PyTorch with CUDA support, Jupyter Lab, and TensorBoard for deep learning development.
## Services
- `pytorch`: PyTorch container with GPU support, Jupyter Lab, and TensorBoard.
## Prerequisites
**NVIDIA GPU Required**: This service requires an NVIDIA GPU with CUDA support and the NVIDIA Container Toolkit installed.
### Install NVIDIA Container Toolkit
**Linux:**
```bash
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
```
**Windows (Docker Desktop):**
Ensure you have WSL2 with NVIDIA drivers installed and Docker Desktop configured to use the WSL2 backend.
## Environment Variables
| Variable Name | Description | Default Value |
| -------------------------- | -------------------------- | ------------------------------- |
| PYTORCH_VERSION | PyTorch image version | `2.6.0-cuda12.6-cudnn9-runtime` |
| JUPYTER_ENABLE_LAB | Enable Jupyter Lab | `yes` |
| JUPYTER_TOKEN | Jupyter access token | `pytorch` |
| NVIDIA_VISIBLE_DEVICES | GPUs to use | `all` |
| NVIDIA_DRIVER_CAPABILITIES | Driver capabilities | `compute,utility` |
| GPU_COUNT | Number of GPUs to allocate | `1` |
| JUPYTER_PORT_OVERRIDE | Jupyter Lab port | `8888` |
| TENSORBOARD_PORT_OVERRIDE | TensorBoard port | `6006` |
Please modify the `.env` file as needed for your use case.
## Volumes
- `pytorch_notebooks`: Jupyter notebooks and scripts.
- `pytorch_data`: Training data and datasets.
## Usage
### Start the Service
```bash
docker-compose up -d
```
### Access Jupyter Lab
Open your browser and navigate to:
```text
http://localhost:8888
```
Log in with the token specified in `JUPYTER_TOKEN` (default: `pytorch`).
### Verify GPU Access
In a Jupyter notebook:
```python
import torch
print(f"PyTorch version: {torch.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")
print(f"CUDA version: {torch.version.cuda}")
print(f"Number of GPUs: {torch.cuda.device_count()}")
if torch.cuda.is_available():
    print(f"GPU name: {torch.cuda.get_device_name(0)}")
```
### Example Training Script
```python
import torch
import torch.nn as nn
import torch.optim as optim
# Set device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Define a simple model
model = nn.Sequential(
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 10)
).to(device)
# Create dummy data
x = torch.randn(64, 784).to(device)
y = torch.randint(0, 10, (64,)).to(device)
# Training (one step)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters())
optimizer.zero_grad()
output = model(x)
loss = criterion(output, y)
loss.backward()
optimizer.step()
print(f"Loss: {loss.item()}")
```
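As a quick sanity check that needs no GPU, the parameter count of the model above can be computed by hand; it should match `sum(p.numel() for p in model.parameters())`:

```python
# Parameter count of nn.Sequential(Linear(784, 128), ReLU(), Linear(128, 10)):
# a Linear(in_features, out_features) layer has in*out weights plus out biases;
# ReLU has no parameters.
layer1 = 784 * 128 + 128   # 100480
layer2 = 128 * 10 + 10     # 1290
total = layer1 + layer2
print(total)  # 101770
```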
### Access TensorBoard
The TensorBoard port is exposed, but TensorBoard itself has to be started manually:
```python
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter('/workspace/runs')
```
Then start TensorBoard:
```bash
docker exec pytorch tensorboard --logdir=/workspace/runs --host=0.0.0.0
```
Access it at: `http://localhost:6006`
## Features
- **GPU Acceleration**: CUDA support for fast training
- **Jupyter Lab**: Interactive development environment
- **TensorBoard**: Visualization for training metrics
- **Pre-installed**: PyTorch, CUDA, and cuDNN ready to use
- **Persistent Storage**: Notebooks and data stored in volumes
## Notes
- A GPU is required for reasonable performance
- Recommended: 8GB+ VRAM for most deep learning tasks
- The container installs Jupyter and TensorBoard on first start
- Use `pytorch/pytorch:*-devel` images to build custom extensions
- For multi-GPU training, adjust `GPU_COUNT` and use `torch.nn.parallel.DistributedDataParallel` (preferred over the older `torch.nn.DataParallel`)
## License
PyTorch is licensed under a BSD-style license.


@@ -0,0 +1,48 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
pytorch:
<<: *default
image: pytorch/pytorch:${PYTORCH_VERSION:-2.6.0-cuda12.6-cudnn9-runtime}
container_name: pytorch
ports:
- "${JUPYTER_PORT_OVERRIDE:-8888}:8888"
- "${TENSORBOARD_PORT_OVERRIDE:-6006}:6006"
environment:
NVIDIA_VISIBLE_DEVICES: ${NVIDIA_VISIBLE_DEVICES:-all}
NVIDIA_DRIVER_CAPABILITIES: ${NVIDIA_DRIVER_CAPABILITIES:-compute,utility}
JUPYTER_ENABLE_LAB: ${JUPYTER_ENABLE_LAB:-yes}
command: >
bash -c "pip install --no-cache-dir jupyter tensorboard &&
jupyter lab --ip=0.0.0.0 --port=8888 --no-browser --allow-root
--NotebookApp.token='${JUPYTER_TOKEN:-pytorch}'"
volumes:
- *localtime
- *timezone
- pytorch_notebooks:/workspace
- pytorch_data:/data
working_dir: /workspace
deploy:
resources:
limits:
cpus: '4.0'
memory: 16G
reservations:
cpus: '2.0'
memory: 8G
devices:
- driver: nvidia
count: ${GPU_COUNT:-1}
capabilities: [gpu]
volumes:
pytorch_notebooks:
pytorch_data:

src/ray/.env.example

@@ -0,0 +1,15 @@
# Ray version
RAY_VERSION="2.42.1-py312"
# Ray head node configuration
RAY_HEAD_NUM_CPUS=4
RAY_HEAD_MEMORY=8589934592 # 8GB in bytes
# Ray worker node configuration
RAY_WORKER_NUM_CPUS=2
RAY_WORKER_MEMORY=4294967296 # 4GB in bytes
# Port overrides
RAY_DASHBOARD_PORT_OVERRIDE=8265
RAY_CLIENT_PORT_OVERRIDE=10001
RAY_GCS_PORT_OVERRIDE=6379

src/ray/README.md

@@ -0,0 +1,142 @@
# Ray
[English](./README.md) | [中文](./README.zh.md)
This service deploys a Ray cluster with 1 head node and 2 worker nodes for distributed computing.
## Services
- `ray-head`: Ray head node with dashboard.
- `ray-worker-1`: First Ray worker node.
- `ray-worker-2`: Second Ray worker node.
## Environment Variables
| Variable Name | Description | Default Value |
| --------------------------- | -------------------------- | ------------------ |
| RAY_VERSION | Ray image version | `2.42.1-py312` |
| RAY_HEAD_NUM_CPUS | Head node CPU count | `4` |
| RAY_HEAD_MEMORY | Head node memory (bytes) | `8589934592` (8GB) |
| RAY_WORKER_NUM_CPUS | Worker node CPU count | `2` |
| RAY_WORKER_MEMORY | Worker node memory (bytes) | `4294967296` (4GB) |
| RAY_DASHBOARD_PORT_OVERRIDE | Ray Dashboard port | `8265` |
| RAY_CLIENT_PORT_OVERRIDE | Ray Client Server port | `10001` |
| RAY_GCS_PORT_OVERRIDE | Ray GCS Server port | `6379` |
Please modify the `.env` file as needed for your use case.
## Volumes
- `ray_storage`: Shared storage for Ray temporary files.
## Usage
### Start the Cluster
```bash
docker-compose up -d
```
### Access Ray Dashboard
Open your browser and navigate to:
```text
http://localhost:8265
```
The dashboard shows cluster status, running jobs, and resource usage.
### Connect from Python Client
```python
import ray
# Connect to the Ray cluster
ray.init(address="ray://localhost:10001")
# Run a simple task
@ray.remote
def hello_world():
return "Hello from Ray!"
# Execute the task
result = ray.get(hello_world.remote())
print(result)
# Check cluster resources
print(ray.cluster_resources())
```
### Distributed Computing Example
```python
import ray
import time
ray.init(address="ray://localhost:10001")
@ray.remote
def compute_task(x):
time.sleep(1)
return x * x
# Submit 100 tasks in parallel
results = ray.get([compute_task.remote(i) for i in range(100)])
print(f"Sum of squares: {sum(results)}")
```
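The parallel run above should agree with a serial computation; as a quick check of the expected output (plain Python, no cluster required):

```python
# Sum of squares 0^2 + 1^2 + ... + 99^2, the value the Ray example prints:
expected = sum(i * i for i in range(100))
print(f"Sum of squares: {expected}")  # Sum of squares: 328350
```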
### Using Ray Data
```python
import ray
ray.init(address="ray://localhost:10001")
# Create a dataset (each row is a dict like {"id": 0})
ds = ray.data.range(1000)
# Process data in parallel
result = ds.map(lambda row: {"id": row["id"] * 2}).take(10)
print(result)
```
## Features
- **Distributed Computing**: Scale Python applications across multiple nodes
- **Auto-scaling**: Dynamic resource allocation
- **Ray Dashboard**: Web UI for monitoring and debugging
- **Ray Data**: Distributed data processing
- **Ray Train**: Distributed training for ML models
- **Ray Serve**: Model serving and deployment
- **Ray Tune**: Hyperparameter tuning
## Notes
- Workers automatically connect to the head node
- The cluster has 1 head node (4 CPU, 8GB RAM) and 2 workers (2 CPU, 4GB RAM each)
- Total cluster resources: 8 CPUs, 16GB RAM
- Add more workers by duplicating the worker service definition
- For GPU support, use `rayproject/ray-ml` image and configure NVIDIA runtime
- The head node's GCS listens on port 6379 (Redis's traditional port) for cluster communication
## Scaling
To add more worker nodes, add new service definitions:
```yaml
ray-worker-3:
<<: *default
image: rayproject/ray:${RAY_VERSION:-2.42.1-py312}
container_name: ray-worker-3
command: ray start --address=ray-head:6379 --block
depends_on:
- ray-head
environment:
RAY_NUM_CPUS: ${RAY_WORKER_NUM_CPUS:-2}
RAY_MEMORY: ${RAY_WORKER_MEMORY:-4294967296}
```
## License
Ray is licensed under the Apache License 2.0.

src/ray/README.zh.md

@@ -0,0 +1,142 @@
# Ray
[English](./README.md) | [中文](./README.zh.md)
This service deploys a Ray cluster with 1 head node and 2 worker nodes for distributed computing.
## Services
- `ray-head`: Ray head node with dashboard.
- `ray-worker-1`: First Ray worker node.
- `ray-worker-2`: Second Ray worker node.
## Environment Variables
| Variable Name | Description | Default Value |
| --------------------------- | -------------------------- | ------------------ |
| RAY_VERSION | Ray image version | `2.42.1-py312` |
| RAY_HEAD_NUM_CPUS | Head node CPU count | `4` |
| RAY_HEAD_MEMORY | Head node memory (bytes) | `8589934592` (8GB) |
| RAY_WORKER_NUM_CPUS | Worker node CPU count | `2` |
| RAY_WORKER_MEMORY | Worker node memory (bytes) | `4294967296` (4GB) |
| RAY_DASHBOARD_PORT_OVERRIDE | Ray Dashboard port | `8265` |
| RAY_CLIENT_PORT_OVERRIDE | Ray Client Server port | `10001` |
| RAY_GCS_PORT_OVERRIDE | Ray GCS Server port | `6379` |
Please modify the `.env` file as needed for your use case.
## Volumes
- `ray_storage`: Shared storage for Ray temporary files.
## Usage
### Start the Cluster
```bash
docker-compose up -d
```
### Access Ray Dashboard
Open your browser and navigate to:
```text
http://localhost:8265
```
The dashboard shows cluster status, running jobs, and resource usage.
### Connect from Python Client
```python
import ray
# Connect to the Ray cluster
ray.init(address="ray://localhost:10001")
# Run a simple task
@ray.remote
def hello_world():
    return "Hello from Ray!"
# Execute the task
result = ray.get(hello_world.remote())
print(result)
# Check cluster resources
print(ray.cluster_resources())
```
### Distributed Computing Example
```python
import ray
import time
ray.init(address="ray://localhost:10001")
@ray.remote
def compute_task(x):
    time.sleep(1)
    return x * x
# Submit 100 tasks in parallel
results = ray.get([compute_task.remote(i) for i in range(100)])
print(f"Sum of squares: {sum(results)}")
```
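The parallel run above should agree with a serial computation; as a quick check of the expected output (plain Python, no cluster required):

```python
# Sum of squares 0^2 + 1^2 + ... + 99^2, the value the Ray example prints:
expected = sum(i * i for i in range(100))
print(f"Sum of squares: {expected}")  # Sum of squares: 328350
```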
### Using Ray Data
```python
import ray
ray.init(address="ray://localhost:10001")
# Create a dataset (each row is a dict like {"id": 0})
ds = ray.data.range(1000)
# Process data in parallel
result = ds.map(lambda row: {"id": row["id"] * 2}).take(10)
print(result)
```
## Features
- **Distributed Computing**: Scale Python applications across multiple nodes
- **Auto-scaling**: Dynamic resource allocation
- **Ray Dashboard**: Web UI for monitoring and debugging
- **Ray Data**: Distributed data processing
- **Ray Train**: Distributed training for ML models
- **Ray Serve**: Model serving and deployment
- **Ray Tune**: Hyperparameter tuning
## Notes
- Workers automatically connect to the head node
- The cluster has 1 head node (4 CPU, 8GB RAM) and 2 workers (2 CPU, 4GB RAM each)
- Total cluster resources: 8 CPUs, 16GB RAM
- Add more workers by duplicating the worker service definition
- For GPU support, use the `rayproject/ray-ml` image and configure the NVIDIA runtime
- The head node's GCS listens on port 6379 (Redis's traditional port) for cluster communication
## Scaling
To add more worker nodes, add new service definitions:
```yaml
ray-worker-3:
  <<: *default
  image: rayproject/ray:${RAY_VERSION:-2.42.1-py312}
  container_name: ray-worker-3
  command: ray start --address=ray-head:6379 --block
  depends_on:
    - ray-head
  environment:
    RAY_NUM_CPUS: ${RAY_WORKER_NUM_CPUS:-2}
    RAY_MEMORY: ${RAY_WORKER_MEMORY:-4294967296}
```
## License
Ray is licensed under the Apache License 2.0.


@@ -0,0 +1,82 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
ray-head:
<<: *default
image: rayproject/ray:${RAY_VERSION:-2.42.1-py312}
container_name: ray-head
command: ray start --head --dashboard-host=0.0.0.0 --port=6379 --block
ports:
- "${RAY_DASHBOARD_PORT_OVERRIDE:-8265}:8265"
- "${RAY_CLIENT_PORT_OVERRIDE:-10001}:10001"
- "${RAY_GCS_PORT_OVERRIDE:-6379}:6379"
environment:
RAY_NUM_CPUS: ${RAY_HEAD_NUM_CPUS:-4}
RAY_MEMORY: ${RAY_HEAD_MEMORY:-8589934592}
volumes:
- *localtime
- *timezone
- ray_storage:/tmp/ray
deploy:
resources:
limits:
cpus: '4.0'
memory: 8G
reservations:
cpus: '2.0'
memory: 4G
ray-worker-1:
<<: *default
image: rayproject/ray:${RAY_VERSION:-2.42.1-py312}
container_name: ray-worker-1
command: ray start --address=ray-head:6379 --block
depends_on:
- ray-head
environment:
RAY_NUM_CPUS: ${RAY_WORKER_NUM_CPUS:-2}
RAY_MEMORY: ${RAY_WORKER_MEMORY:-4294967296}
volumes:
- *localtime
- *timezone
deploy:
resources:
limits:
cpus: '2.0'
memory: 4G
reservations:
cpus: '1.0'
memory: 2G
ray-worker-2:
<<: *default
image: rayproject/ray:${RAY_VERSION:-2.42.1-py312}
container_name: ray-worker-2
command: ray start --address=ray-head:6379 --block
depends_on:
- ray-head
environment:
RAY_NUM_CPUS: ${RAY_WORKER_NUM_CPUS:-2}
RAY_MEMORY: ${RAY_WORKER_MEMORY:-4294967296}
volumes:
- *localtime
- *timezone
deploy:
resources:
limits:
cpus: '2.0'
memory: 4G
reservations:
cpus: '1.0'
memory: 2G
volumes:
ray_storage:


@@ -0,0 +1,2 @@
# Redis version
REDIS_VERSION="8.2.1-alpine"

src/redis-cluster/README.md

@@ -0,0 +1,120 @@
# Redis Cluster
[English](./README.md) | [中文](./README.zh.md)
This service deploys a Redis Cluster with 6 nodes (3 masters + 3 replicas).
## Services
- `redis-1` to `redis-6`: Redis cluster nodes
- `redis-cluster-init`: Initialization container (one-time setup)
## Environment Variables
| Variable Name | Description | Default Value |
| ------------- | ------------------- | -------------- |
| REDIS_VERSION | Redis image version | `8.2.1-alpine` |
Please modify the `.env` file as needed for your use case.
## Volumes
- `redis_1_data` to `redis_6_data`: Data persistence for each Redis node
## Usage
### Start Redis Cluster
```bash
# Start all Redis nodes
docker compose up -d
# Initialize the cluster (first time only)
docker compose --profile init up redis-cluster-init
# Verify cluster status
docker exec redis-1 redis-cli --cluster check redis-1:6379
```
### Connect to Cluster
```bash
# Connect using redis-cli
docker exec -it redis-1 redis-cli -c
# Test cluster
127.0.0.1:6379> CLUSTER INFO
127.0.0.1:6379> SET mykey "Hello"
127.0.0.1:6379> GET mykey
```
### Access from Application
Use a cluster-mode connection from your application:
```python
# Python example (redis-py >= 4.1)
from redis.cluster import RedisCluster, ClusterNode
startup_nodes = [
    ClusterNode("localhost", 7000),
    ClusterNode("localhost", 7001),
    ClusterNode("localhost", 7002),
]
rc = RedisCluster(startup_nodes=startup_nodes, decode_responses=True)
rc.set("foo", "bar")
print(rc.get("foo"))
```
Note that cluster clients follow redirects to the addresses the nodes announce; when connecting from the host rather than from inside the compose network, you may need to set `cluster-announce-ip` and `cluster-announce-port` on each node.
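Redis Cluster assigns each key to one of the 16384 slots by taking CRC16 of the key modulo 16384, honoring `{...}` hash tags. A small self-contained sketch of that mapping, useful for predicting which master will own a key:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XModem variant), the checksum Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # Only the substring between the first '{' and the following '}' is
    # hashed (if non-empty), so keys sharing a hash tag share a slot.
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

print(key_slot("foo"))  # 12182, matching CLUSTER KEYSLOT foo
print(key_slot("{user}:name") == key_slot("{user}:age"))  # True
```

This also shows why multi-key operations in a cluster require all keys to map to the same slot, which hash tags make possible.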
## Ports
- 7000-7005: Mapped to each Redis node's 6379 port
## Cluster Information
- **Masters**: 3 nodes (redis-1, redis-2, redis-3)
- **Replicas**: 3 nodes (redis-4, redis-5, redis-6)
- **Total slots**: 16384 (distributed across masters)
## Adding New Nodes
To add more nodes to the cluster:
1. Add new service in `docker-compose.yaml`
2. Start the new node
3. Add to cluster:
```bash
docker exec redis-1 redis-cli --cluster add-node new-node-ip:6379 redis-1:6379
docker exec redis-1 redis-cli --cluster reshard redis-1:6379
```
## Removing Nodes
```bash
# Remove a node
docker exec redis-1 redis-cli --cluster del-node redis-1:6379 <node-id>
```
## Notes
- Cluster initialization only needs to be done once
- Each node stores a subset of the data
- Automatic failover is handled by Redis Cluster
- Minimum 3 master nodes required for production
- Data is automatically replicated to replica nodes
## Security
- Add password authentication for production:
```yaml
command: redis-server --requirepass yourpassword --cluster-enabled yes ...
```
- Use firewall rules to restrict access
- Consider using TLS for inter-node communication in production
## License
Redis is available under the Redis Source Available License 2.0 (RSALv2); since Redis 8, SSPLv1 and AGPLv3 are also offered as licensing options. See [Redis GitHub](https://github.com/redis/redis) for more information.


@@ -0,0 +1,142 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
redis-cluster-init:
image: redis:${REDIS_VERSION:-8.2.1-alpine}
container_name: redis-cluster-init
command: >
sh -c "
echo 'Waiting for all Redis instances to start...' &&
sleep 5 &&
redis-cli --cluster create
redis-1:6379 redis-2:6379 redis-3:6379
redis-4:6379 redis-5:6379 redis-6:6379
--cluster-replicas 1 --cluster-yes
"
depends_on:
- redis-1
- redis-2
- redis-3
- redis-4
- redis-5
- redis-6
profiles:
- init
redis-1:
<<: *default
image: redis:${REDIS_VERSION:-8.2.1-alpine}
container_name: redis-1
command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --appendonly yes
ports:
- "7000:6379"
volumes:
- *localtime
- *timezone
- redis_1_data:/data
deploy:
resources:
limits:
cpus: '0.5'
memory: 512M
redis-2:
<<: *default
image: redis:${REDIS_VERSION:-8.2.1-alpine}
container_name: redis-2
command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --appendonly yes
ports:
- "7001:6379"
volumes:
- *localtime
- *timezone
- redis_2_data:/data
deploy:
resources:
limits:
cpus: '0.5'
memory: 512M
redis-3:
<<: *default
image: redis:${REDIS_VERSION:-8.2.1-alpine}
container_name: redis-3
command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --appendonly yes
ports:
- "7002:6379"
volumes:
- *localtime
- *timezone
- redis_3_data:/data
deploy:
resources:
limits:
cpus: '0.5'
memory: 512M
redis-4:
<<: *default
image: redis:${REDIS_VERSION:-8.2.1-alpine}
container_name: redis-4
command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --appendonly yes
ports:
- "7003:6379"
volumes:
- *localtime
- *timezone
- redis_4_data:/data
deploy:
resources:
limits:
cpus: '0.5'
memory: 512M
redis-5:
<<: *default
image: redis:${REDIS_VERSION:-8.2.1-alpine}
container_name: redis-5
command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --appendonly yes
ports:
- "7004:6379"
volumes:
- *localtime
- *timezone
- redis_5_data:/data
deploy:
resources:
limits:
cpus: '0.5'
memory: 512M
redis-6:
<<: *default
image: redis:${REDIS_VERSION:-8.2.1-alpine}
container_name: redis-6
command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --appendonly yes
ports:
- "7005:6379"
volumes:
- *localtime
- *timezone
- redis_6_data:/data
deploy:
resources:
limits:
cpus: '0.5'
memory: 512M
volumes:
redis_1_data:
redis_2_data:
redis_3_data:
redis_4_data:
redis_5_data:
redis_6_data:


@@ -25,9 +25,14 @@ services:
environment:
- SKIP_FIX_PERMS=${SKIP_FIX_PERMS:-}
command:
- redis-server
- --requirepass
- ${REDIS_PASSWORD:-}
- sh
- -c
- |
if [ -n "$${REDIS_PASSWORD}" ]; then
redis-server --requirepass "$${REDIS_PASSWORD}"
else
redis-server
fi
deploy:
resources:
limits:


@@ -0,0 +1,13 @@
# Stable Diffusion WebUI version
SD_WEBUI_VERSION="latest"
# CLI arguments for WebUI
CLI_ARGS="--listen --api --skip-version-check"
# NVIDIA GPU configuration
NVIDIA_VISIBLE_DEVICES="all"
NVIDIA_DRIVER_CAPABILITIES="compute,utility"
GPU_COUNT=1
# Port overrides
SD_WEBUI_PORT_OVERRIDE=7860


@@ -0,0 +1,122 @@
# Stable Diffusion WebUI Docker
[English](./README.md) | [中文](./README.zh.md)
This service deploys Stable Diffusion WebUI (SD.Next) for AI image generation.
## Services
- `stable-diffusion-webui`: Stable Diffusion WebUI with GPU support.
## Prerequisites
**NVIDIA GPU Required**: This service requires an NVIDIA GPU with CUDA support and the NVIDIA Container Toolkit installed.
### Install NVIDIA Container Toolkit
**Linux:**
```bash
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
```
**Windows (Docker Desktop):**
Ensure you have WSL2 with NVIDIA drivers installed and Docker Desktop configured to use the WSL2 backend.
## Environment Variables
| Variable Name | Description | Default Value |
| -------------------------- | -------------------------- | ------------------------------------- |
| SD_WEBUI_VERSION | SD WebUI image version | `latest` |
| CLI_ARGS | Command-line arguments | `--listen --api --skip-version-check` |
| NVIDIA_VISIBLE_DEVICES | GPUs to use | `all` |
| NVIDIA_DRIVER_CAPABILITIES | Driver capabilities | `compute,utility` |
| GPU_COUNT | Number of GPUs to allocate | `1` |
| SD_WEBUI_PORT_OVERRIDE | WebUI port | `7860` |
Please modify the `.env` file as needed for your use case.
## Volumes
- `sd_webui_data`: Model files, extensions, and configuration.
- `sd_webui_output`: Generated images output directory.
## Usage
### Start the Service
```bash
docker-compose up -d
```
### Access the WebUI
Open your browser and navigate to:
```text
http://localhost:7860
```
### Download Models
On first start, you need to download models. The WebUI will guide you through this process, or you can manually place models in the `/data/models` directory.
Common model locations:
- Stable Diffusion models: `/data/models/Stable-diffusion/`
- VAE models: `/data/models/VAE/`
- LoRA models: `/data/models/Lora/`
- Embeddings: `/data/models/embeddings/`
### Generate Images
1. Select a model from the dropdown
2. Enter your prompt
3. Adjust parameters (steps, CFG scale, sampler, etc.)
4. Click "Generate"
## Features
- **Text-to-Image**: Generate images from text prompts
- **Image-to-Image**: Transform existing images
- **Inpainting**: Edit specific parts of images
- **Upscaling**: Enhance image resolution
- **API Access**: RESTful API for automation
- **Extensions**: Support for custom extensions
- **Multiple Models**: Support for various SD models (1.5, 2.x, SDXL, etc.)
## Notes
- First startup may take time to download dependencies and models
- Recommended: 8GB+ VRAM for SD 1.5, 12GB+ for SDXL
- GPU is required; CPU-only mode is extremely slow
- Generated images are saved in the `sd_webui_output` volume
- Models can be large (2-7GB each); ensure adequate disk space
## API Usage
With the `--api` flag enabled, interactive API docs are available at:
```text
http://localhost:7860/docs
```
Example API call:
```bash
curl -X POST http://localhost:7860/sdapi/v1/txt2img \
-H "Content-Type: application/json" \
-d '{
"prompt": "a beautiful landscape",
"steps": 20
}'
```
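The txt2img response is JSON whose `images` array holds base64-encoded PNGs. A minimal decoding sketch — the field name follows the WebUI API's response shape, and the sample payload below is a stand-in, not real server output:

```python
import base64
import json

def decode_images(response_text):
    """Return the raw image bytes from a txt2img JSON response body."""
    data = json.loads(response_text)
    return [base64.b64decode(b64) for b64 in data.get("images", [])]

# Round-trip demonstration with a stand-in payload (no server required):
sample = json.dumps({"images": [base64.b64encode(b"fake-png-bytes").decode()]})
print(decode_images(sample))  # [b'fake-png-bytes']
```

Each decoded byte string can be written to a `.png` file as-is.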
## License
Stable Diffusion models have various licenses. Please check individual model licenses before use.


@@ -0,0 +1,122 @@
# Stable Diffusion WebUI Docker
[English](./README.md) | [中文](./README.zh.md)
This service deploys Stable Diffusion WebUI (SD.Next) for AI image generation.
## Services
- `stable-diffusion-webui`: Stable Diffusion WebUI with GPU support.
## Prerequisites
**NVIDIA GPU Required**: This service requires an NVIDIA GPU with CUDA support and the NVIDIA Container Toolkit installed.
### Install NVIDIA Container Toolkit
**Linux:**
```bash
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
```
**Windows (Docker Desktop):**
Ensure you have WSL2 with NVIDIA drivers installed and Docker Desktop configured to use the WSL2 backend.
## Environment Variables
| Variable Name | Description | Default Value |
| -------------------------- | -------------------------- | ------------------------------------- |
| SD_WEBUI_VERSION | SD WebUI image version | `latest` |
| CLI_ARGS | Command-line arguments | `--listen --api --skip-version-check` |
| NVIDIA_VISIBLE_DEVICES | GPUs to use | `all` |
| NVIDIA_DRIVER_CAPABILITIES | Driver capabilities | `compute,utility` |
| GPU_COUNT | Number of GPUs to allocate | `1` |
| SD_WEBUI_PORT_OVERRIDE | WebUI port | `7860` |
Please modify the `.env` file as needed for your use case.
## Volumes
- `sd_webui_data`: Model files, extensions, and configuration.
- `sd_webui_output`: Generated images output directory.
## Usage
### Start the Service
```bash
docker-compose up -d
```
### Access the WebUI
Open your browser and navigate to:
```text
http://localhost:7860
```
### Download Models
On first start you need to download models. The WebUI will guide you through this process, or you can manually place models in the `/data/models` directory.
Common model locations:
- Stable Diffusion models: `/data/models/Stable-diffusion/`
- VAE models: `/data/models/VAE/`
- LoRA models: `/data/models/Lora/`
- Embeddings: `/data/models/embeddings/`
### Generate Images
1. Select a model from the dropdown
2. Enter your prompt
3. Adjust parameters (steps, CFG scale, sampler, etc.)
4. Click "Generate"
## Features
- **Text-to-Image**: Generate images from text prompts
- **Image-to-Image**: Transform existing images
- **Inpainting**: Edit specific parts of images
- **Upscaling**: Enhance image resolution
- **API Access**: RESTful API for automation
- **Extensions**: Support for custom extensions
- **Multiple Models**: Support for various SD models (1.5, 2.x, SDXL, etc.)
## Notes
- The first startup may take a while to download dependencies and models
- Recommended: 8GB+ VRAM for SD 1.5, 12GB+ for SDXL
- A GPU is required; CPU-only mode is extremely slow
- Generated images are saved in the `sd_webui_output` volume
- Models can be large (2-7GB each); ensure adequate disk space
## API Usage
With the `--api` flag enabled, interactive API docs are available at:
```text
http://localhost:7860/docs
```
Example API call:
```bash
curl -X POST http://localhost:7860/sdapi/v1/txt2img \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "a beautiful landscape",
    "steps": 20
  }'
```
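The txt2img response is JSON whose `images` array holds base64-encoded PNGs. A minimal decoding sketch — the field name follows the WebUI API's response shape, and the sample payload below is a stand-in, not real server output:

```python
import base64
import json

def decode_images(response_text):
    """Return the raw image bytes from a txt2img JSON response body."""
    data = json.loads(response_text)
    return [base64.b64decode(b64) for b64 in data.get("images", [])]

# Round-trip demonstration with a stand-in payload (no server required):
sample = json.dumps({"images": [base64.b64encode(b"fake-png-bytes").decode()]})
print(decode_images(sample))  # [b'fake-png-bytes']
```

Each decoded byte string can be written to a `.png` file as-is.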
## License
Stable Diffusion models are distributed under a variety of licenses. Please check each model's license before use.

Some files were not shown because too many files have changed in this diff Show More