feat: Add Temporal and Windmill services with configuration files

- Implemented Temporal service with Docker Compose, including PostgreSQL setup and environment variables for configuration.
- Added Temporal README and Chinese translation for documentation.
- Introduced Windmill service with Docker Compose, including PostgreSQL setup and environment variables for configuration.
- Added Windmill README and Chinese translation for documentation.
- Updated MongoDB configurations to use host.docker.internal for better compatibility.
This commit is contained in:
Sun-ZhenXing
2025-11-01 19:40:54 +08:00
parent 843ebc24a1
commit 0f54723be1
22 changed files with 2805 additions and 6 deletions

src/budibase/.env.example (new file, 52 lines)

@@ -0,0 +1,52 @@
# Budibase Configuration
# Version
BUDIBASE_VERSION=3.23.0
REDIS_VERSION=7-alpine
# Port Configuration
BUDIBASE_PORT_OVERRIDE=10000
# Internal Service Ports (usually no need to change)
BUDIBASE_APP_PORT=4002
BUDIBASE_WORKER_PORT=4003
BUDIBASE_MINIO_PORT=4004
BUDIBASE_COUCH_DB_PORT=4005
BUDIBASE_REDIS_PORT=6379
# Environment
BUDIBASE_ENVIRONMENT=PRODUCTION
BUDIBASE_DEPLOYMENT_ENVIRONMENT=docker
# Security Settings - IMPORTANT: Change these in production!
BUDIBASE_INTERNAL_API_KEY=changeme_internal_api_key_minimum_32_chars
BUDIBASE_JWT_SECRET=changeme_jwt_secret_minimum_32_chars
BUDIBASE_MINIO_ACCESS_KEY=budibase
BUDIBASE_MINIO_SECRET_KEY=budibase
BUDIBASE_COUCHDB_USER=admin
BUDIBASE_COUCHDB_PASSWORD=admin
# Admin User - IMPORTANT: Change these!
BUDIBASE_ADMIN_EMAIL=admin@budibase.com
BUDIBASE_ADMIN_PASSWORD=changeme
# Optional: Account Portal URL
BUDIBASE_ACCOUNT_PORTAL_URL=https://account.budibase.app
# Optional: PostHog Analytics Token (leave empty to disable)
BUDIBASE_POSTHOG_TOKEN=
# Timezone
TZ=UTC
# Resource Limits - Budibase
BUDIBASE_CPU_LIMIT=2.0
BUDIBASE_CPU_RESERVATION=0.5
BUDIBASE_MEMORY_LIMIT=2G
BUDIBASE_MEMORY_RESERVATION=512M
# Resource Limits - Redis
REDIS_CPU_LIMIT=0.5
REDIS_CPU_RESERVATION=0.1
REDIS_MEMORY_LIMIT=512M
REDIS_MEMORY_RESERVATION=128M

src/budibase/README.md (new file, 142 lines)

@@ -0,0 +1,142 @@
# Budibase
Budibase is an all-in-one low-code platform for building modern internal tools and dashboards. Build CRUD apps, admin panels, approval workflows, and more in minutes.
## Features
- **Visual App Builder**: Drag-and-drop interface for building apps quickly
- **Built-in Database**: Spreadsheet-like database or connect to external data sources
- **Multi-tenant Support**: User management and role-based access control
- **Automation**: Build workflows and automations without code
- **Custom Plugins**: Extend functionality with custom components
- **API & Webhooks**: REST API, GraphQL, and webhook support
- **Self-hosted**: Full control over your data
## Quick Start
1. Copy `.env.example` to `.env`:
```bash
cp .env.example .env
```
2. **IMPORTANT**: Edit `.env` and change the following security settings:
- `BUDIBASE_INTERNAL_API_KEY` - Generate a random 32+ character string
- `BUDIBASE_JWT_SECRET` - Generate a random 32+ character string
- `BUDIBASE_ADMIN_EMAIL` - Your admin email
- `BUDIBASE_ADMIN_PASSWORD` - A strong password
- `BUDIBASE_MINIO_ACCESS_KEY` and `BUDIBASE_MINIO_SECRET_KEY` - MinIO credentials
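The 32+ character secrets above can be generated with `openssl` (assuming it is installed), for example:

```shell
# Generate random values for BUDIBASE_INTERNAL_API_KEY and BUDIBASE_JWT_SECRET
# (base64 of 32 random bytes yields a 44-character string)
openssl rand -base64 32
openssl rand -base64 32
```

Paste each generated value into the corresponding `.env` entry.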
3. Start Budibase:
```bash
docker compose up -d
```
4. Access Budibase at `http://localhost:10000`
5. Log in with your configured admin credentials
## Default Configuration
| Service | Port | Description |
| -------- | ----- | -------------- |
| Budibase | 10000 | Web UI and API |
**Default Admin Credentials** (Change these!):
- Email: `admin@budibase.com`
- Password: `changeme`
## Environment Variables
Key environment variables (see `.env.example` for full list):
| Variable | Description | Default |
| --------------------------- | ---------------------------- | -------------------- |
| `BUDIBASE_VERSION` | Budibase image version | `3.23.0` |
| `BUDIBASE_PORT_OVERRIDE` | Host port for UI | `10000` |
| `BUDIBASE_INTERNAL_API_KEY` | Internal API key (32+ chars) | **Must change!** |
| `BUDIBASE_JWT_SECRET` | JWT secret (32+ chars) | **Must change!** |
| `BUDIBASE_ADMIN_EMAIL` | Admin user email | `admin@budibase.com` |
| `BUDIBASE_ADMIN_PASSWORD` | Admin user password | `changeme` |
| `BUDIBASE_ENVIRONMENT` | Environment mode | `PRODUCTION` |
| `TZ` | Timezone | `UTC` |
## Resource Requirements
**Minimum**:
- CPU: 0.5 cores
- RAM: 512MB
- Disk: 2GB
**Recommended**:
- CPU: 2 cores
- RAM: 2GB
- Disk: 10GB
## Volumes
- `budibase_data`: Budibase application data (database, files, configs)
- `redis_data`: Redis cache data
## Security Considerations
1. **Change Default Credentials**: Always change the default admin credentials
2. **Strong Secrets**: Use strong random strings for API keys and JWT secrets
3. **Environment Variables**: Store sensitive values in `.env` file, never commit to version control
4. **SSL/TLS**: Use reverse proxy (nginx, Traefik) with SSL in production
5. **Firewall**: Restrict access to port 10000 in production environments
6. **Backups**: Regularly backup the `budibase_data` volume
## Upgrading
1. Pull the latest image:
```bash
docker compose pull
```
2. Restart the services:
```bash
docker compose up -d
```
3. Check logs:
```bash
docker compose logs -f
```
## Troubleshooting
**Service won't start:**
- Check logs: `docker compose logs budibase`
- Ensure ports are not in use: `netstat -an | grep 10000`
- Verify environment variables are set correctly
**Cannot login:**
- Verify admin credentials in `.env` file
- Reset admin password by recreating the container with new credentials
**Performance issues:**
- Increase resource limits in `.env` file
- Check Redis memory usage: `docker compose exec redis redis-cli INFO memory`
## References
- Official Website: <https://budibase.com>
- Documentation: <https://docs.budibase.com>
- GitHub: <https://github.com/Budibase/budibase>
- Community: <https://github.com/Budibase/budibase/discussions>
- Docker Hub: <https://hub.docker.com/r/budibase/budibase>
## License
Budibase is licensed under GPL-3.0. See [LICENSE](https://github.com/Budibase/budibase/blob/master/LICENSE) for more information.

src/budibase/README.zh.md (new file, 142 lines)

@@ -0,0 +1,142 @@
# Budibase
Budibase 是一个一体化的低代码平台,用于快速构建现代内部工具和仪表板。可以在几分钟内构建 CRUD 应用、管理面板、审批工作流等。
## 功能特点
- **可视化应用构建器**:通过拖放界面快速构建应用
- **内置数据库**:类似电子表格的数据库或连接到外部数据源
- **多租户支持**:用户管理和基于角色的访问控制
- **自动化**:无需编码即可构建工作流和自动化流程
- **自定义插件**:使用自定义组件扩展功能
- **API 和 Webhook**REST API、GraphQL 和 webhook 支持
- **自托管**:完全控制您的数据
## 快速开始
1. 复制 `.env.example` 为 `.env`
```bash
cp .env.example .env
```
2. **重要**:编辑 `.env` 并更改以下安全设置:
- `BUDIBASE_INTERNAL_API_KEY` - 生成一个 32 位以上的随机字符串
- `BUDIBASE_JWT_SECRET` - 生成一个 32 位以上的随机字符串
- `BUDIBASE_ADMIN_EMAIL` - 您的管理员邮箱
- `BUDIBASE_ADMIN_PASSWORD` - 一个强密码
- `BUDIBASE_MINIO_ACCESS_KEY` 和 `BUDIBASE_MINIO_SECRET_KEY` - MinIO 凭证
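上述 32 字符以上的密钥可以用 `openssl` 生成(假设系统已安装 openssl),示例如下:

```shell
# 为 BUDIBASE_INTERNAL_API_KEY 和 BUDIBASE_JWT_SECRET 生成随机值
# (对 32 个随机字节做 base64 编码,得到 44 个字符的字符串)
openssl rand -base64 32
openssl rand -base64 32
```

将生成的值分别填入 `.env` 中对应的条目。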
3. 启动 Budibase
```bash
docker compose up -d
```
4. 访问 `http://localhost:10000`
5. 使用配置的管理员凭证登录
## 默认配置
| 服务 | 端口 | 说明 |
| -------- | ----- | ------------- |
| Budibase | 10000 | Web UI 和 API |
**默认管理员凭证**(请更改!):
- 邮箱:`admin@budibase.com`
- 密码:`changeme`
## 环境变量
主要环境变量(完整列表请参阅 `.env.example`
| 变量 | 说明 | 默认值 |
| --------------------------- | ------------------------- | -------------------- |
| `BUDIBASE_VERSION` | Budibase 镜像版本 | `3.23.0` |
| `BUDIBASE_PORT_OVERRIDE` | UI 的主机端口 | `10000` |
| `BUDIBASE_INTERNAL_API_KEY` | 内部 API 密钥32+ 字符) | **必须更改!** |
| `BUDIBASE_JWT_SECRET` | JWT 密钥32+ 字符) | **必须更改!** |
| `BUDIBASE_ADMIN_EMAIL` | 管理员用户邮箱 | `admin@budibase.com` |
| `BUDIBASE_ADMIN_PASSWORD` | 管理员用户密码 | `changeme` |
| `BUDIBASE_ENVIRONMENT` | 环境模式 | `PRODUCTION` |
| `TZ` | 时区 | `UTC` |
## 资源需求
**最低要求**
- CPU0.5 核心
- 内存512MB
- 磁盘2GB
**推荐配置**
- CPU2 核心
- 内存2GB
- 磁盘10GB
## 数据卷
- `budibase_data`Budibase 应用数据(数据库、文件、配置)
- `redis_data`Redis 缓存数据
## 安全注意事项
1. **更改默认凭证**:始终更改默认管理员凭证
2. **强密钥**:为 API 密钥和 JWT 密钥使用强随机字符串
3. **环境变量**:将敏感值存储在 `.env` 文件中,切勿提交到版本控制
4. **SSL/TLS**:在生产环境中使用带 SSL 的反向代理nginx、Traefik
5. **防火墙**:在生产环境中限制对 10000 端口的访问
6. **备份**:定期备份 `budibase_data` 数据卷
## 升级
1. 拉取最新镜像:
```bash
docker compose pull
```
2. 重启服务:
```bash
docker compose up -d
```
3. 检查日志:
```bash
docker compose logs -f
```
## 故障排除
**服务无法启动:**
- 检查日志:`docker compose logs budibase`
- 确保端口未被占用:`netstat -an | grep 10000`
- 验证环境变量设置正确
**无法登录:**
- 验证 `.env` 文件中的管理员凭证
- 通过使用新凭证重新创建容器来重置管理员密码
**性能问题:**
- 在 `.env` 文件中增加资源限制
- 检查 Redis 内存使用:`docker compose exec redis redis-cli INFO memory`
## 参考资料
- 官方网站:<https://budibase.com>
- 文档:<https://docs.budibase.com>
- GitHub<https://github.com/Budibase/budibase>
- 社区:<https://github.com/Budibase/budibase/discussions>
- Docker Hub<https://hub.docker.com/r/budibase/budibase>
## 许可证
Budibase 使用 GPL-3.0 许可证。详情请参阅 [LICENSE](https://github.com/Budibase/budibase/blob/master/LICENSE)。


@@ -0,0 +1,116 @@
# Budibase - Low-code platform for building internal tools
# https://github.com/Budibase/budibase
#
# Budibase is an all-in-one low-code platform for building modern internal tools and
# dashboards. It allows you to build apps quickly with a spreadsheet-like database,
# drag-and-drop UI, and pre-built components.
#
# Key Features:
# - Visual app builder with drag-and-drop interface
# - Built-in database or connect to external data sources
# - Multi-tenant support with user management
# - REST API, GraphQL, and webhooks support
# - Custom plugins and automation support
#
# Default Credentials:
# - Access UI at http://localhost:10000
# - Default admin email: admin@budibase.com
# - Default password: changeme
#
# Security Notes:
# - Change default admin credentials immediately
# - Use strong INTERNAL_API_KEY and JWT_SECRET in production
# - Store sensitive data in .env file
# - Enable SSL/TLS in production environments
#
# License: GPL-3.0 (https://github.com/Budibase/budibase/blob/master/LICENSE)
x-default: &default
restart: unless-stopped
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
budibase:
<<: *default
image: budibase/budibase:${BUDIBASE_VERSION:-3.23.0}
container_name: budibase
ports:
- "${BUDIBASE_PORT_OVERRIDE:-10000}:80"
environment:
# Core settings
- APP_PORT=${BUDIBASE_APP_PORT:-4002}
- WORKER_PORT=${BUDIBASE_WORKER_PORT:-4003}
- MINIO_PORT=${BUDIBASE_MINIO_PORT:-4004}
- COUCH_DB_PORT=${BUDIBASE_COUCH_DB_PORT:-4005}
- REDIS_PORT=${BUDIBASE_REDIS_PORT:-6379}
- BUDIBASE_ENVIRONMENT=${BUDIBASE_ENVIRONMENT:-PRODUCTION}
# Security - REQUIRED: Override these in .env file
- INTERNAL_API_KEY=${BUDIBASE_INTERNAL_API_KEY:-changeme_internal_api_key_minimum_32_chars}
- JWT_SECRET=${BUDIBASE_JWT_SECRET:-changeme_jwt_secret_minimum_32_chars}
- MINIO_ACCESS_KEY=${BUDIBASE_MINIO_ACCESS_KEY:-budibase}
- MINIO_SECRET_KEY=${BUDIBASE_MINIO_SECRET_KEY:-budibase}
- COUCHDB_USER=${BUDIBASE_COUCHDB_USER:-admin}
- COUCHDB_PASSWORD=${BUDIBASE_COUCHDB_PASSWORD:-admin}
# Admin user - REQUIRED: Override these in .env file
- BB_ADMIN_USER_EMAIL=${BUDIBASE_ADMIN_EMAIL:-admin@budibase.com}
- BB_ADMIN_USER_PASSWORD=${BUDIBASE_ADMIN_PASSWORD:-changeme}
# Optional settings
- DEPLOYMENT_ENVIRONMENT=${BUDIBASE_DEPLOYMENT_ENVIRONMENT:-docker}
- POSTHOG_TOKEN=${BUDIBASE_POSTHOG_TOKEN:-}
- ACCOUNT_PORTAL_URL=${BUDIBASE_ACCOUNT_PORTAL_URL:-https://account.budibase.app}
- REDIS_URL=redis://redis:6379
- TZ=${TZ:-UTC}
volumes:
- budibase_data:/data
depends_on:
redis:
condition: service_healthy
healthcheck:
test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
deploy:
resources:
limits:
cpus: "${BUDIBASE_CPU_LIMIT:-2.0}"
memory: "${BUDIBASE_MEMORY_LIMIT:-2G}"
reservations:
cpus: "${BUDIBASE_CPU_RESERVATION:-0.5}"
memory: "${BUDIBASE_MEMORY_RESERVATION:-512M}"
redis:
<<: *default
image: redis:${REDIS_VERSION:-7-alpine}
container_name: budibase-redis
command: redis-server --appendonly yes --maxmemory 256mb --maxmemory-policy allkeys-lru
volumes:
- redis_data:/data
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 5s
retries: 5
deploy:
resources:
limits:
cpus: "${REDIS_CPU_LIMIT:-0.5}"
memory: "${REDIS_MEMORY_LIMIT:-512M}"
reservations:
cpus: "${REDIS_CPU_RESERVATION:-0.1}"
memory: "${REDIS_MEMORY_RESERVATION:-128M}"
volumes:
budibase_data:
driver: local
redis_data:
driver: local


@@ -0,0 +1,42 @@
# Conductor Configuration
# Versions
POSTGRES_VERSION=16-alpine
ELASTICSEARCH_VERSION=8.11.0
# Port Configuration
CONDUCTOR_SERVER_PORT_OVERRIDE=8080
CONDUCTOR_UI_PORT_OVERRIDE=5000
CONDUCTOR_GRPC_PORT=8090
# PostgreSQL Configuration
POSTGRES_DB=conductor
POSTGRES_USER=conductor
POSTGRES_PASSWORD=conductor
# Logging
CONDUCTOR_LOG_LEVEL=INFO
# Elasticsearch Configuration
ELASTICSEARCH_HEAP_SIZE=512m
# Timezone
TZ=UTC
# Resource Limits - Conductor Server
CONDUCTOR_CPU_LIMIT=2.0
CONDUCTOR_CPU_RESERVATION=0.5
CONDUCTOR_MEMORY_LIMIT=2G
CONDUCTOR_MEMORY_RESERVATION=512M
# Resource Limits - PostgreSQL
POSTGRES_CPU_LIMIT=1.0
POSTGRES_CPU_RESERVATION=0.25
POSTGRES_MEMORY_LIMIT=1G
POSTGRES_MEMORY_RESERVATION=256M
# Resource Limits - Elasticsearch
ELASTICSEARCH_CPU_LIMIT=2.0
ELASTICSEARCH_CPU_RESERVATION=0.5
ELASTICSEARCH_MEMORY_LIMIT=2G
ELASTICSEARCH_MEMORY_RESERVATION=1G

src/conductor/README.md (new file, 169 lines)

@@ -0,0 +1,169 @@
# Conductor
Netflix Conductor is a workflow orchestration engine that runs in the cloud. It allows you to orchestrate microservices and workflows with a visual workflow designer.
## Features
- **Visual Workflow Designer**: Drag-and-drop interface for building complex workflows
- **Microservice Orchestration**: Coordinate multiple services with decision logic
- **Task Management**: Built-in retry mechanisms and error handling
- **Scalable Architecture**: Designed for high-throughput scenarios
- **REST API**: Full REST API with SDKs for Java, Python, Go, C#
- **Monitoring**: Real-time monitoring and metrics via Prometheus
## Quick Start
1. Copy `.env.example` to `.env`:
```bash
cp .env.example .env
```
2. (Optional) Edit `.env` to customize database passwords and other settings
3. Start Conductor (note: first run will build the image, may take several minutes):
```bash
docker compose up -d
```
4. Wait for services to be healthy (check with `docker compose ps`)
5. Access Conductor UI at `http://localhost:5000`
6. API is available at `http://localhost:8080`
## Default Configuration
| Service | Port | Description |
| ---------------- | ---- | ---------------------------- |
| Conductor Server | 8080 | REST API |
| Conductor UI | 5000 | Web UI |
| PostgreSQL | 5432 | Database (internal) |
| Elasticsearch | 9200 | Search & indexing (internal) |
**Authentication**: No authentication is configured by default. Add an authentication layer (reverse proxy with OAuth2, LDAP, etc.) in production.
## Environment Variables
Key environment variables (see `.env.example` for full list):
| Variable | Description | Default |
| -------------------------------- | --------------------- | ----------- |
| `CONDUCTOR_SERVER_PORT_OVERRIDE` | Host port for API | `8080` |
| `CONDUCTOR_UI_PORT_OVERRIDE` | Host port for UI | `5000` |
| `POSTGRES_DB` | Database name | `conductor` |
| `POSTGRES_USER` | Database user | `conductor` |
| `POSTGRES_PASSWORD` | Database password | `conductor` |
| `ELASTICSEARCH_VERSION` | Elasticsearch version | `8.11.0` |
| `CONDUCTOR_LOG_LEVEL` | Log level | `INFO` |
| `TZ` | Timezone | `UTC` |
## Resource Requirements
**Minimum**:
- CPU: 1 core
- RAM: 1.5GB
- Disk: 5GB
**Recommended**:
- CPU: 4+ cores
- RAM: 4GB+
- Disk: 20GB+
## Volumes
- `postgres_data`: PostgreSQL database data
- `elasticsearch_data`: Elasticsearch indices
- `conductor_logs`: Conductor server logs
## Using Conductor
### Creating a Workflow
1. Access the UI at `http://localhost:5000`
2. Go to "Definitions" > "Workflow Defs"
3. Click "Define Workflow" and use the visual editor
4. Define tasks and their execution logic
5. Save and execute your workflow
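Workflow definitions are JSON documents; a minimal two-task sketch (the names here are illustrative, not from this repo):

```json
{
  "name": "sample_workflow",
  "description": "Hypothetical two-step workflow",
  "version": 1,
  "schemaVersion": 2,
  "tasks": [
    {
      "name": "fetch_data",
      "taskReferenceName": "fetch_data_ref",
      "type": "SIMPLE"
    },
    {
      "name": "process_data",
      "taskReferenceName": "process_data_ref",
      "type": "SIMPLE"
    }
  ]
}
```

A definition like this should be registerable via the metadata API, e.g. `curl -X POST http://localhost:8080/api/metadata/workflow -H 'Content-Type: application/json' -d @workflow.json`.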
### Using the API
Example: Get server information
```bash
curl http://localhost:8080/api/
```
Example: List workflows
```bash
curl http://localhost:8080/api/metadata/workflow
```
### SDKs
Conductor provides official SDKs:
- Java: <https://github.com/conductor-oss/conductor/tree/main/java-sdk>
- Python: <https://github.com/conductor-oss/conductor/tree/main/python-sdk>
- Go: <https://github.com/conductor-oss/conductor/tree/main/go-sdk>
- C#: <https://github.com/conductor-oss/conductor/tree/main/csharp-sdk>
## Security Considerations
1. **Authentication**: Configure authentication for production use
2. **Database Passwords**: Use strong passwords for PostgreSQL
3. **Network Security**: Use firewall rules to restrict access
4. **SSL/TLS**: Enable HTTPS with a reverse proxy
5. **Elasticsearch**: Consider enabling X-Pack security for production
## Upgrading
To upgrade Conductor:
1. Update version in `.env` file (if using versioned tags)
2. Pull latest image and restart:
```bash
docker compose pull
docker compose up -d
```
3. Check logs for any migration messages:
```bash
docker compose logs -f conductor-server
```
## Troubleshooting
**Service won't start:**
- Check logs: `docker compose logs conductor-server`
- Ensure database is healthy: `docker compose ps postgres`
- Verify Elasticsearch: `docker compose ps elasticsearch`
**UI not accessible:**
- Check if port 5000 is available: `netstat -an | grep 5000`
- Verify service is running: `docker compose ps conductor-server`
**Performance issues:**
- Increase resource limits in `.env`
- Monitor Elasticsearch heap size
- Check database connection pool settings
## References
- Official Website: <https://conductor-oss.org>
- Documentation: <https://docs.conductor-oss.org>
- GitHub: <https://github.com/conductor-oss/conductor>
- Community: <https://github.com/conductor-oss/conductor/discussions>
## License
Conductor is licensed under Apache-2.0. See [LICENSE](https://github.com/conductor-oss/conductor/blob/main/LICENSE) for more information.

src/conductor/README.zh.md (new file, 169 lines)

@@ -0,0 +1,169 @@
# Conductor
Netflix Conductor 是一个在云端运行的工作流编排引擎,允许您通过可视化工作流设计器来编排微服务和工作流。
## 功能特点
- **可视化工作流设计器**:通过拖放界面构建复杂工作流
- **微服务编排**:使用决策逻辑协调多个服务
- **任务管理**:内置重试机制和错误处理
- **可扩展架构**:为高吞吐量场景而设计
- **REST API**:完整的 REST API,提供 Java、Python、Go、C# SDK
- **监控**:通过 Prometheus 进行实时监控和指标收集
## 快速开始
1. 复制 `.env.example` 为 `.env`
```bash
cp .env.example .env
```
2. (可选)编辑 `.env` 自定义数据库密码和其他设置
3. 启动 Conductor(注意:首次运行将构建镜像,可能需要几分钟):
```bash
docker compose up -d
```
4. 等待服务健康检查通过(使用 `docker compose ps` 检查)
5. 访问 Conductor UI`http://localhost:5000`
6. API 地址:`http://localhost:8080`
## 默认配置
| 服务 | 端口 | 说明 |
| ---------------- | ---- | ------------------ |
| Conductor Server | 8080 | REST API |
| Conductor UI | 5000 | Web UI |
| PostgreSQL | 5432 | 数据库(内部) |
| Elasticsearch | 9200 | 搜索与索引(内部) |
**身份验证**:默认未配置身份验证。在生产环境中应添加身份验证层(使用 OAuth2、LDAP 等的反向代理)。
## 环境变量
主要环境变量(完整列表请参阅 `.env.example`
| 变量 | 说明 | 默认值 |
| -------------------------------- | ------------------ | ----------- |
| `CONDUCTOR_SERVER_PORT_OVERRIDE` | API 的主机端口 | `8080` |
| `CONDUCTOR_UI_PORT_OVERRIDE` | UI 的主机端口 | `5000` |
| `POSTGRES_DB` | 数据库名称 | `conductor` |
| `POSTGRES_USER` | 数据库用户 | `conductor` |
| `POSTGRES_PASSWORD` | 数据库密码 | `conductor` |
| `ELASTICSEARCH_VERSION` | Elasticsearch 版本 | `8.11.0` |
| `CONDUCTOR_LOG_LEVEL` | 日志级别 | `INFO` |
| `TZ` | 时区 | `UTC` |
## 资源需求
**最低要求**
- CPU1 核心
- 内存1.5GB
- 磁盘5GB
**推荐配置**
- CPU4+ 核心
- 内存4GB+
- 磁盘20GB+
## 数据卷
- `postgres_data`PostgreSQL 数据库数据
- `elasticsearch_data`Elasticsearch 索引
- `conductor_logs`Conductor 服务器日志
## 使用 Conductor
### 创建工作流
1. 访问 UI`http://localhost:5000`
2. 进入 "Definitions" > "Workflow Defs"
3. 点击 "Define Workflow" 并使用可视化编辑器
4. 定义任务及其执行逻辑
5. 保存并执行您的工作流
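工作流定义是 JSON 文档;一个最小的两任务示例草稿(其中的名称仅作演示,并非来自本仓库):

```json
{
  "name": "sample_workflow",
  "description": "示例:两步工作流",
  "version": 1,
  "schemaVersion": 2,
  "tasks": [
    {
      "name": "fetch_data",
      "taskReferenceName": "fetch_data_ref",
      "type": "SIMPLE"
    },
    {
      "name": "process_data",
      "taskReferenceName": "process_data_ref",
      "type": "SIMPLE"
    }
  ]
}
```

此类定义应可通过元数据 API 注册,例如 `curl -X POST http://localhost:8080/api/metadata/workflow -H 'Content-Type: application/json' -d @workflow.json`。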
### 使用 API
示例:获取服务器信息
```bash
curl http://localhost:8080/api/
```
示例:列出工作流
```bash
curl http://localhost:8080/api/metadata/workflow
```
### SDK
Conductor 提供官方 SDK
- Java<https://github.com/conductor-oss/conductor/tree/main/java-sdk>
- Python<https://github.com/conductor-oss/conductor/tree/main/python-sdk>
- Go<https://github.com/conductor-oss/conductor/tree/main/go-sdk>
- C#<https://github.com/conductor-oss/conductor/tree/main/csharp-sdk>
## 安全注意事项
1. **身份验证**:生产环境中配置身份验证
2. **数据库密码**:为 PostgreSQL 使用强密码
3. **网络安全**:使用防火墙规则限制访问
4. **SSL/TLS**:通过反向代理启用 HTTPS
5. **Elasticsearch**:生产环境中考虑启用 X-Pack 安全功能
## 升级
升级 Conductor
1. 在 `.env` 文件中更新版本(如果使用版本标签)
2. 拉取最新镜像并重启:
```bash
docker compose pull
docker compose up -d
```
3. 检查日志查看迁移消息:
```bash
docker compose logs -f conductor-server
```
## 故障排除
**服务无法启动:**
- 检查日志:`docker compose logs conductor-server`
- 确保数据库健康:`docker compose ps postgres`
- 验证 Elasticsearch`docker compose ps elasticsearch`
**UI 无法访问:**
- 检查端口 5000 是否可用:`netstat -an | grep 5000`
- 验证服务运行状态:`docker compose ps conductor-server`
**性能问题:**
- 在 `.env` 中增加资源限制
- 监控 Elasticsearch 堆大小
- 检查数据库连接池设置
## 参考资料
- 官方网站:<https://conductor-oss.org>
- 文档:<https://docs.conductor-oss.org>
- GitHub<https://github.com/conductor-oss/conductor>
- 社区:<https://github.com/conductor-oss/conductor/discussions>
## 许可证
Conductor 使用 Apache-2.0 许可证。详情请参阅 [LICENSE](https://github.com/conductor-oss/conductor/blob/main/LICENSE)。


@@ -0,0 +1,141 @@
# Conductor - Netflix Workflow Orchestration Engine
# https://github.com/conductor-oss/conductor
#
# Conductor is a platform for orchestrating microservices and workflows. It was
# originally developed by Netflix to manage their microservices architecture.
#
# Key Features:
# - Visual workflow designer with drag-and-drop interface
# - Support for complex workflows with decision logic
# - Task retry and error handling mechanisms
# - Scalable architecture for high-throughput scenarios
# - REST API and multiple language SDKs
#
# Default Credentials:
# - Access UI at http://localhost:5000
# - No authentication by default (add reverse proxy with auth in production)
#
# Security Notes:
# - Add authentication layer in production (OAuth2, LDAP, etc.)
# - Use strong database passwords
# - Enable SSL/TLS in production
# - Restrict network access to Conductor services
#
# License: Apache-2.0 (https://github.com/conductor-oss/conductor/blob/main/LICENSE)
x-default: &default
restart: unless-stopped
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
conductor-server:
<<: *default
image: conductor:server
build:
context: https://github.com/conductor-oss/conductor.git#main:docker/server
dockerfile: Dockerfile
container_name: conductor-server
ports:
- "${CONDUCTOR_SERVER_PORT_OVERRIDE:-8080}:8080"
- "${CONDUCTOR_UI_PORT_OVERRIDE:-5000}:5000"
environment:
# Database configuration
- spring.datasource.url=jdbc:postgresql://postgres:5432/${POSTGRES_DB}
- spring.datasource.username=${POSTGRES_USER}
- spring.datasource.password=${POSTGRES_PASSWORD}
# Elasticsearch configuration
- conductor.elasticsearch.url=http://elasticsearch:9200
- conductor.indexing.enabled=true
# Server configuration
- conductor.grpc-server.port=${CONDUCTOR_GRPC_PORT:-8090}
- conductor.metrics-prometheus.enabled=true
- LOG_LEVEL=${CONDUCTOR_LOG_LEVEL:-INFO}
- TZ=${TZ:-UTC}
volumes:
- conductor_logs:/app/logs
depends_on:
postgres:
condition: service_healthy
elasticsearch:
condition: service_healthy
healthcheck:
test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost:8080/health"]
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
deploy:
resources:
limits:
cpus: "${CONDUCTOR_CPU_LIMIT:-2.0}"
memory: "${CONDUCTOR_MEMORY_LIMIT:-2G}"
reservations:
cpus: "${CONDUCTOR_CPU_RESERVATION:-0.5}"
memory: "${CONDUCTOR_MEMORY_RESERVATION:-512M}"
postgres:
<<: *default
image: postgres:${POSTGRES_VERSION:-16-alpine}
container_name: conductor-postgres
environment:
- POSTGRES_DB=${POSTGRES_DB:-conductor}
- POSTGRES_USER=${POSTGRES_USER:-conductor}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-conductor}
- POSTGRES_INITDB_ARGS=--encoding=UTF8
- TZ=${TZ:-UTC}
volumes:
- postgres_data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-conductor} -d ${POSTGRES_DB:-conductor}"]
interval: 10s
timeout: 5s
retries: 5
deploy:
resources:
limits:
cpus: "${POSTGRES_CPU_LIMIT:-1.0}"
memory: "${POSTGRES_MEMORY_LIMIT:-1G}"
reservations:
cpus: "${POSTGRES_CPU_RESERVATION:-0.25}"
memory: "${POSTGRES_MEMORY_RESERVATION:-256M}"
elasticsearch:
<<: *default
image: elasticsearch:${ELASTICSEARCH_VERSION:-8.11.0}
container_name: conductor-elasticsearch
environment:
- discovery.type=single-node
- xpack.security.enabled=false
- ES_JAVA_OPTS=-Xms${ELASTICSEARCH_HEAP_SIZE:-512m} -Xmx${ELASTICSEARCH_HEAP_SIZE:-512m}
- TZ=${TZ:-UTC}
volumes:
- elasticsearch_data:/usr/share/elasticsearch/data
healthcheck:
test: ["CMD-SHELL", "curl -f http://localhost:9200/_cluster/health || exit 1"]
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
deploy:
resources:
limits:
cpus: "${ELASTICSEARCH_CPU_LIMIT:-2.0}"
memory: "${ELASTICSEARCH_MEMORY_LIMIT:-2G}"
reservations:
cpus: "${ELASTICSEARCH_CPU_RESERVATION:-0.5}"
memory: "${ELASTICSEARCH_MEMORY_RESERVATION:-1G}"
volumes:
postgres_data:
driver: local
elasticsearch_data:
driver: local
conductor_logs:
driver: local

src/kestra/.env.example (new file, 39 lines)

@@ -0,0 +1,39 @@
# Kestra Configuration
# Versions
KESTRA_VERSION=latest-full
POSTGRES_VERSION=16-alpine
# Port Configuration
KESTRA_PORT_OVERRIDE=8080
KESTRA_MANAGEMENT_PORT=8081
# PostgreSQL Configuration
POSTGRES_DB=kestra
POSTGRES_USER=kestra
POSTGRES_PASSWORD=k3str4
# Basic Authentication (optional, set enabled=true to activate)
KESTRA_BASIC_AUTH_ENABLED=false
KESTRA_BASIC_AUTH_USERNAME=admin
KESTRA_BASIC_AUTH_PASSWORD=admin
# Java Options
KESTRA_JAVA_OPTS=-Xmx1g
# Timezone
TZ=UTC
# Logging - removed, using template defaults
# Resource Limits - Kestra
KESTRA_CPU_LIMIT=2.0
KESTRA_CPU_RESERVATION=0.5
KESTRA_MEMORY_LIMIT=2G
KESTRA_MEMORY_RESERVATION=512M
# Resource Limits - PostgreSQL
POSTGRES_CPU_LIMIT=1.0
POSTGRES_CPU_RESERVATION=0.25
POSTGRES_MEMORY_LIMIT=1G
POSTGRES_MEMORY_RESERVATION=256M

src/kestra/README.md (new file, 185 lines)

@@ -0,0 +1,185 @@
# Kestra
Kestra is an infinitely scalable orchestration and scheduling platform that allows you to declare, run, schedule, and monitor millions of workflows declaratively in code.
## Features
- **Declarative YAML**: Define workflows in simple YAML syntax
- **Event-Driven**: Trigger workflows based on events, schedules, or APIs
- **Multi-Language Support**: Execute Python, Node.js, Shell, SQL, and more
- **Real-Time Monitoring**: Live logs and execution tracking
- **Plugin Ecosystem**: Extensive library of integrations
- **Version Control**: Git integration for workflow versioning
- **Scalable**: Handle millions of workflow executions
## Quick Start
1. Copy `.env.example` to `.env`:
```bash
cp .env.example .env
```
2. (Optional) Edit `.env` to customize settings, especially if enabling basic auth
3. Start Kestra:
```bash
docker compose up -d
```
4. Wait for services to be ready (check with `docker compose logs -f kestra`)
5. Access Kestra UI at `http://localhost:8080`
## Default Configuration
| Service | Port | Description |
| ----------------- | ---- | -------------------- |
| Kestra | 8080 | Web UI and API |
| Kestra Management | 8081 | Management endpoints |
| PostgreSQL | 5432 | Database (internal) |
**Authentication**: No authentication by default. Set `KESTRA_BASIC_AUTH_ENABLED=true` in `.env` to enable basic authentication.
## Environment Variables
Key environment variables (see `.env.example` for full list):
| Variable | Description | Default |
| ---------------------------- | -------------------- | ------------- |
| `KESTRA_VERSION` | Kestra image version | `latest-full` |
| `KESTRA_PORT_OVERRIDE` | Host port for UI/API | `8080` |
| `KESTRA_MANAGEMENT_PORT` | Management port | `8081` |
| `POSTGRES_DB` | Database name | `kestra` |
| `POSTGRES_USER` | Database user | `kestra` |
| `POSTGRES_PASSWORD` | Database password | `k3str4` |
| `KESTRA_BASIC_AUTH_ENABLED` | Enable basic auth | `false` |
| `KESTRA_BASIC_AUTH_USERNAME` | Auth username | `admin` |
| `KESTRA_BASIC_AUTH_PASSWORD` | Auth password | `admin` |
| `TZ` | Timezone | `UTC` |
## Resource Requirements
**Minimum**:
- CPU: 1 core
- RAM: 1GB
- Disk: 5GB
**Recommended**:
- CPU: 2+ cores
- RAM: 2GB+
- Disk: 20GB+
## Volumes
- `postgres_data`: PostgreSQL database data
- `kestra_data`: Kestra storage (workflow outputs, files)
- `kestra_logs`: Kestra application logs
## Using Kestra
### Creating a Workflow
1. Access the UI at `http://localhost:8080`
2. Go to "Flows" and click "Create"
3. Define your workflow in YAML:
```yaml
id: hello-world
namespace: company.team
tasks:
- id: hello
type: io.kestra.plugin.core.log.Log
message: Hello, World!
```
4. Save and execute
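Beyond manual runs, flows can also fire on a schedule via a trigger block. A minimal sketch (the flow id and cron expression are illustrative; assumes a recent Kestra version where `io.kestra.plugin.core.trigger.Schedule` is available):

```yaml
id: hello-scheduled
namespace: company.team
tasks:
  - id: hello
    type: io.kestra.plugin.core.log.Log
    message: Hello on a schedule
triggers:
  - id: every-hour
    type: io.kestra.plugin.core.trigger.Schedule
    cron: "0 * * * *"
```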
### Using the API
Example: List flows
```bash
curl http://localhost:8080/api/v1/flows/search
```
Example: Trigger execution
```bash
curl -X POST http://localhost:8080/api/v1/executions/company.team/hello-world
```
### CLI
Install Kestra CLI:
```bash
curl -L -o kestra https://github.com/kestra-io/kestra/releases/latest/download/kestra
chmod +x kestra
```
### Docker Task Runner
Kestra can execute tasks in Docker containers. The compose file mounts `/var/run/docker.sock` to enable this feature. Use the `io.kestra.plugin.scripts.runner.docker.Docker` task runner type.
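A sketch of a flow that runs a shell task inside a container (the image and commands are illustrative; assumes a recent Kestra version with the script plugins bundled, as in the `latest-full` image):

```yaml
id: docker-task-example
namespace: company.team
tasks:
  - id: in-container
    type: io.kestra.plugin.scripts.shell.Commands
    taskRunner:
      type: io.kestra.plugin.scripts.runner.docker.Docker
    containerImage: ubuntu:22.04
    commands:
      - echo "Hello from a container"
```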
## Security Considerations
1. **Authentication**: Enable basic auth or configure SSO (OIDC) for production
2. **Database Passwords**: Use strong passwords for PostgreSQL
3. **Docker Socket**: Mounting Docker socket grants container control; ensure proper security
4. **Network Access**: Restrict access with firewall rules
5. **SSL/TLS**: Use reverse proxy with HTTPS in production
## Upgrading
To upgrade Kestra:
1. Update `KESTRA_VERSION` in `.env`
2. Pull and restart:
```bash
docker compose pull
docker compose up -d
```
3. Check logs:
```bash
docker compose logs -f kestra
```
## Troubleshooting
**Service won't start:**
- Check logs: `docker compose logs kestra`
- Verify database: `docker compose ps postgres`
- Ensure Docker socket is accessible
**Cannot execute Docker tasks:**
- Verify `/var/run/docker.sock` is mounted
- Check Docker daemon is running
- Review task logs in Kestra UI
**Performance issues:**
- Increase resource limits in `.env`
- Check database performance
- Monitor Java heap usage (adjust `KESTRA_JAVA_OPTS`)
## References
- Official Website: <https://kestra.io>
- Documentation: <https://kestra.io/docs>
- GitHub: <https://github.com/kestra-io/kestra>
- Community: <https://kestra.io/slack>
- Plugin Hub: <https://kestra.io/plugins>
## License
Kestra is licensed under Apache-2.0. See [LICENSE](https://github.com/kestra-io/kestra/blob/develop/LICENSE) for more information.

src/kestra/README.zh.md (new file, 185 lines)

@@ -0,0 +1,185 @@
# Kestra
Kestra 是一个无限可扩展的编排和调度平台,允许您以声明方式在代码中定义、运行、调度和监控数百万个工作流。
## 功能特点
- **声明式 YAML**:使用简单的 YAML 语法定义工作流
- **事件驱动**:基于事件、计划或 API 触发工作流
- **多语言支持**:执行 Python、Node.js、Shell、SQL 等
- **实时监控**:实时日志和执行跟踪
- **插件生态系统**:丰富的集成库
- **版本控制**Git 集成用于工作流版本管理
- **可扩展**:处理数百万个工作流执行
## 快速开始
1. 复制 `.env.example` 为 `.env`
```bash
cp .env.example .env
```
2. (可选)编辑 `.env` 自定义设置,特别是启用基本身份验证
3. 启动 Kestra
```bash
docker compose up -d
```
4. 等待服务就绪(使用 `docker compose logs -f kestra` 检查)
5. 访问 Kestra UI`http://localhost:8080`
## 默认配置
| 服务 | 端口 | 说明 |
| ----------------- | ---- | -------------- |
| Kestra | 8080 | Web UI 和 API |
| Kestra Management | 8081 | 管理端点 |
| PostgreSQL | 5432 | 数据库(内部) |
**身份验证**:默认无身份验证。在 `.env` 中设置 `KESTRA_BASIC_AUTH_ENABLED=true` 以启用基本身份验证。
## 环境变量
主要环境变量(完整列表请参阅 `.env.example`
| 变量 | 说明 | 默认值 |
| ---------------------------- | ----------------- | ------------- |
| `KESTRA_VERSION` | Kestra 镜像版本 | `latest-full` |
| `KESTRA_PORT_OVERRIDE` | UI/API 的主机端口 | `8080` |
| `KESTRA_MANAGEMENT_PORT` | 管理端口 | `8081` |
| `POSTGRES_DB` | 数据库名称 | `kestra` |
| `POSTGRES_USER` | 数据库用户 | `kestra` |
| `POSTGRES_PASSWORD` | 数据库密码 | `k3str4` |
| `KESTRA_BASIC_AUTH_ENABLED` | 启用基本身份验证 | `false` |
| `KESTRA_BASIC_AUTH_USERNAME` | 验证用户名 | `admin` |
| `KESTRA_BASIC_AUTH_PASSWORD` | 验证密码 | `admin` |
| `TZ` | 时区 | `UTC` |
## 资源需求
**最低要求**
- CPU1 核心
- 内存1GB
- 磁盘5GB
**推荐配置**
- CPU2+ 核心
- 内存2GB+
- 磁盘20GB+
## 数据卷
- `postgres_data`PostgreSQL 数据库数据
- `kestra_data`Kestra 存储(工作流输出、文件)
- `kestra_logs`Kestra 应用日志
## 使用 Kestra
### 创建工作流
1. 访问 UI`http://localhost:8080`
2. 进入 "Flows" 并点击 "Create"
3. 用 YAML 定义您的工作流:
```yaml
id: hello-world
namespace: company.team
tasks:
- id: hello
type: io.kestra.plugin.core.log.Log
message: Hello, World!
```
4. 保存并执行
### 使用 API
示例:列出流
```bash
curl http://localhost:8080/api/v1/flows/search
```
示例:触发执行
```bash
curl -X POST http://localhost:8080/api/v1/executions/company.team/hello-world
```
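上面的 curl 调用也可以用 Python 标准库封装。下面是一个最小示例(假设使用默认端口 8080 且未启用基本身份验证;`execution_url` 是本示例引入的辅助函数,并非 Kestra 官方客户端的一部分):

```python
import json
import urllib.request

KESTRA_URL = "http://localhost:8080"  # 对应默认的 KESTRA_PORT_OVERRIDE

def execution_url(base: str, namespace: str, flow_id: str) -> str:
    # 与上面 curl 示例相同的触发端点
    return f"{base}/api/v1/executions/{namespace}/{flow_id}"

def trigger_execution(namespace: str, flow_id: str) -> dict:
    # 以 POST 方式触发一次执行,返回该执行的 JSON 描述
    req = urllib.request.Request(
        execution_url(KESTRA_URL, namespace, flow_id), method="POST"
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(trigger_execution("company.team", "hello-world"))
```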
### CLI
安装 Kestra CLI
```bash
curl -Lo kestra https://github.com/kestra-io/kestra/releases/latest/download/kestra
chmod +x kestra
```
### Docker 任务运行器
Kestra 可以在 Docker 容器中执行任务。compose 文件挂载了 `/var/run/docker.sock` 以启用此功能。使用 `io.kestra.plugin.scripts.runner.docker.Docker` 任务类型。
## 安全注意事项
1. **身份验证**:生产环境中启用基本身份验证或配置 SSOOIDC
2. **数据库密码**:为 PostgreSQL 使用强密码
3. **Docker Socket**:挂载 Docker socket 会授予容器对宿主机 Docker 的控制权限,请采取相应的安全措施
4. **网络访问**:使用防火墙规则限制访问
5. **SSL/TLS**:在生产环境中使用带 HTTPS 的反向代理
## 升级
升级 Kestra
1. 在 `.env` 中更新 `KESTRA_VERSION`
2. 拉取并重启:
```bash
docker compose pull
docker compose up -d
```
3. 检查日志:
```bash
docker compose logs -f kestra
```
## 故障排除
**服务无法启动:**
- 检查日志:`docker compose logs kestra`
- 验证数据库:`docker compose ps postgres`
- 确保 Docker socket 可访问
**无法执行 Docker 任务:**
- 验证 `/var/run/docker.sock` 已挂载
- 检查 Docker 守护进程是否运行
- 在 Kestra UI 中查看任务日志
**性能问题:**
- 在 `.env` 中增加资源限制
- 检查数据库性能
- 监控 Java 堆使用(调整 `KESTRA_JAVA_OPTS`
## 参考资料
- 官方网站:<https://kestra.io>
- 文档:<https://kestra.io/docs>
- GitHub<https://github.com/kestra-io/kestra>
- 社区:<https://kestra.io/slack>
- 插件中心:<https://kestra.io/plugins>
## 许可证
Kestra 使用 Apache-2.0 许可证。详情请参阅 [LICENSE](https://github.com/kestra-io/kestra/blob/develop/LICENSE)。


@@ -0,0 +1,125 @@
# Kestra - Event-driven Orchestration Platform
# https://github.com/kestra-io/kestra
#
# Kestra is an infinitely scalable orchestration and scheduling platform that allows
# you to declare, run, schedule, and monitor millions of workflows declaratively in code.
#
# Key Features:
# - Declarative YAML-based workflow definitions
# - Event-driven orchestration with triggers
# - Built-in scheduling and cron support
# - Support for multiple programming languages (Python, Node.js, etc.)
# - Real-time monitoring and logging
# - Plugin ecosystem for integrations
#
# Default Credentials:
# - Access UI at http://localhost:8080
# - No authentication by default (configure in production)
#
# Security Notes:
# - Configure authentication in production (basic auth, OAuth2, OIDC)
# - Use strong database passwords
# - Enable SSL/TLS in production
# - Restrict network access appropriately
#
# License: Apache-2.0 (https://github.com/kestra-io/kestra/blob/develop/LICENSE)
x-default: &default
restart: unless-stopped
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
kestra:
<<: *default
image: kestra/kestra:${KESTRA_VERSION:-latest-full}
container_name: kestra
command: server standalone
ports:
- "${KESTRA_PORT_OVERRIDE:-8080}:8080"
- "${KESTRA_MANAGEMENT_PORT:-8081}:8081"
environment:
# Database configuration
- KESTRA_CONFIGURATION=datasources.postgres.url=jdbc:postgresql://postgres:5432/${POSTGRES_DB}
- KESTRA_CONFIGURATION_datasources_postgres_username=${POSTGRES_USER}
- KESTRA_CONFIGURATION_datasources_postgres_password=${POSTGRES_PASSWORD}
- KESTRA_CONFIGURATION_datasources_postgres_driverClassName=org.postgresql.Driver
# Server configuration
- KESTRA_CONFIGURATION_micronaut_server_port=8080
- KESTRA_CONFIGURATION_kestra_server_basic--auth_enabled=${KESTRA_BASIC_AUTH_ENABLED:-false}
- KESTRA_CONFIGURATION_kestra_server_basic--auth_username=${KESTRA_BASIC_AUTH_USERNAME:-admin}
- KESTRA_CONFIGURATION_kestra_server_basic--auth_password=${KESTRA_BASIC_AUTH_PASSWORD:-admin}
# Storage configuration
- KESTRA_CONFIGURATION_kestra_storage_type=local
- KESTRA_CONFIGURATION_kestra_storage_local_base--path=/app/storage
# Repository configuration
- KESTRA_CONFIGURATION_kestra_repository_type=postgres
# Queue configuration
- KESTRA_CONFIGURATION_kestra_queue_type=postgres
# Other settings
- TZ=${TZ:-UTC}
- JAVA_OPTS=${KESTRA_JAVA_OPTS:--Xmx1g}
volumes:
- kestra_data:/app/storage
- kestra_logs:/app/logs
- /var/run/docker.sock:/var/run/docker.sock:ro # For Docker task runner
depends_on:
postgres:
condition: service_healthy
healthcheck:
test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost:8080/health"]
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
deploy:
resources:
limits:
cpus: "${KESTRA_CPU_LIMIT:-2.0}"
memory: "${KESTRA_MEMORY_LIMIT:-2G}"
reservations:
cpus: "${KESTRA_CPU_RESERVATION:-0.5}"
memory: "${KESTRA_MEMORY_RESERVATION:-512M}"
postgres:
<<: *default
image: postgres:${POSTGRES_VERSION:-16-alpine}
container_name: kestra-postgres
environment:
- POSTGRES_DB=${POSTGRES_DB:-kestra}
- POSTGRES_USER=${POSTGRES_USER:-kestra}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-k3str4}
- POSTGRES_INITDB_ARGS=--encoding=UTF8
- TZ=${TZ:-UTC}
volumes:
- postgres_data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-kestra} -d ${POSTGRES_DB:-kestra}"]
interval: 10s
timeout: 5s
retries: 5
deploy:
resources:
limits:
cpus: "${POSTGRES_CPU_LIMIT:-1.0}"
memory: "${POSTGRES_MEMORY_LIMIT:-1G}"
reservations:
cpus: "${POSTGRES_CPU_RESERVATION:-0.25}"
memory: "${POSTGRES_MEMORY_RESERVATION:-256M}"
volumes:
postgres_data:
driver: local
kestra_data:
driver: local
kestra_logs:
driver: local


@@ -32,6 +32,8 @@ x-mongo: &mongo
timeout: 3s
retries: 10
start_period: 30s
extra_hosts:
- "host.docker.internal:host-gateway"
deploy:
resources:
limits:
@@ -60,7 +62,7 @@ services:
MONGO_INITDB_ROOT_PASSWORD: ${MONGO_INITDB_ROOT_PASSWORD:-password}
MONGO_REPLICA_SET_NAME: ${MONGO_REPLICA_SET_NAME:-rs0}
MONGO_PORT_1: ${MONGO_PORT_OVERRIDE_1:-27017}
MONGO_HOST: ${MONGO_HOST:-mongo1}
MONGO_HOST: ${MONGO_HOST:-host.docker.internal}
volumes:
- ./secrets/rs0.key:/data/rs0.key:ro
entrypoint:
@@ -80,7 +82,7 @@ services:
const config = {
_id: '$${MONGO_REPLICA_SET_NAME}',
members: [
{ _id: 0, host: 'mongo1:27017' }
{ _id: 0, host: '$${MONGO_HOST}:$${MONGO_PORT_1}' }
]
};


@@ -31,6 +31,8 @@ x-mongo: &mongo
timeout: 3s
retries: 10
start_period: 30s
extra_hosts:
- "host.docker.internal:host-gateway"
deploy:
resources:
limits:
@@ -75,7 +77,7 @@ services:
MONGO_PORT_1: ${MONGO_PORT_OVERRIDE_1:-27017}
MONGO_PORT_2: ${MONGO_PORT_OVERRIDE_2:-27018}
MONGO_PORT_3: ${MONGO_PORT_OVERRIDE_3:-27019}
MONGO_HOST: ${MONGO_HOST:-mongo1}
MONGO_HOST: ${MONGO_HOST:-host.docker.internal}
volumes:
- ./secrets/rs0.key:/data/rs0.key:ro
entrypoint:
@@ -95,9 +97,9 @@ services:
const config = {
_id: '$${MONGO_REPLICA_SET_NAME}',
members: [
{ _id: 0, host: 'mongo1:27017' },
{ _id: 1, host: 'mongo2:27017' },
{ _id: 2, host: 'mongo3:27017' }
{ _id: 0, host: '$${MONGO_HOST}:$${MONGO_PORT_1}' },
{ _id: 1, host: '$${MONGO_HOST}:$${MONGO_PORT_2}' },
{ _id: 2, host: '$${MONGO_HOST}:$${MONGO_PORT_3}' },
]
};

src/temporal/.env.example Normal file

@@ -0,0 +1,39 @@
# Temporal Configuration
# Versions
TEMPORAL_VERSION=1.24.2
TEMPORAL_UI_VERSION=2.28.0
POSTGRES_VERSION=16-alpine
# Port Configuration
TEMPORAL_FRONTEND_PORT_OVERRIDE=7233
TEMPORAL_UI_PORT_OVERRIDE=8233
# PostgreSQL Configuration
POSTGRES_DB=temporal
POSTGRES_USER=temporal
POSTGRES_PASSWORD=temporal
# Logging
TEMPORAL_LOG_LEVEL=info
# Timezone
TZ=UTC
# Resource Limits - Temporal Server
TEMPORAL_CPU_LIMIT=2.0
TEMPORAL_CPU_RESERVATION=0.5
TEMPORAL_MEMORY_LIMIT=2G
TEMPORAL_MEMORY_RESERVATION=512M
# Resource Limits - Temporal UI
TEMPORAL_UI_CPU_LIMIT=0.5
TEMPORAL_UI_CPU_RESERVATION=0.1
TEMPORAL_UI_MEMORY_LIMIT=512M
TEMPORAL_UI_MEMORY_RESERVATION=128M
# Resource Limits - PostgreSQL
POSTGRES_CPU_LIMIT=1.0
POSTGRES_CPU_RESERVATION=0.25
POSTGRES_MEMORY_LIMIT=1G
POSTGRES_MEMORY_RESERVATION=256M

src/temporal/README.md Normal file

@@ -0,0 +1,239 @@
# Temporal
Temporal is a scalable and reliable runtime for durable executions, called Temporal Workflow Executions. It enables developers to write simple, resilient code without worrying about failures, retries, or state management.
## Features
- **Durable Execution**: Workflows survive failures, restarts, and even code deployments
- **Built-in Reliability**: Automatic retries, timeouts, and error handling
- **Long-Running Workflows**: Support workflows that run for days, months, or years
- **Multi-Language SDKs**: Official support for Go, Java, TypeScript, Python, PHP, .NET
- **Advanced Visibility**: Search and filter workflow executions
- **Event-Driven**: Signals, queries, and updates for workflow interaction
## Quick Start
1. Copy `.env.example` to `.env`:
```bash
cp .env.example .env
```
2. (Optional) Edit `.env` to customize database passwords and settings
3. Start Temporal:
```bash
docker compose up -d
```
4. Wait for services to be ready (check with `docker compose logs -f temporal`)
5. Access Temporal Web UI at `http://localhost:8233`
6. Frontend service is available at `localhost:7233` (gRPC)
## Default Configuration
| Service | Port | Description |
| ----------------- | ---- | ---------------------- |
| Temporal Frontend | 7233 | gRPC endpoint for SDKs |
| Temporal Web UI | 8233 | Web interface |
| PostgreSQL | 5432 | Database (internal) |
**Authentication**: No authentication by default. Configure mTLS and authorization for production use.
## Environment Variables
Key environment variables (see `.env.example` for full list):
| Variable | Description | Default |
| --------------------------------- | ----------------------- | ---------- |
| `TEMPORAL_VERSION` | Temporal server version | `1.24.2` |
| `TEMPORAL_UI_VERSION` | Temporal UI version | `2.28.0` |
| `TEMPORAL_FRONTEND_PORT_OVERRIDE` | Frontend gRPC port | `7233` |
| `TEMPORAL_UI_PORT_OVERRIDE` | Web UI port | `8233` |
| `POSTGRES_DB` | Database name | `temporal` |
| `POSTGRES_USER` | Database user | `temporal` |
| `POSTGRES_PASSWORD` | Database password | `temporal` |
| `TEMPORAL_LOG_LEVEL` | Log level | `info` |
| `TZ` | Timezone | `UTC` |
## Resource Requirements
**Minimum**:
- CPU: 1 core
- RAM: 1GB
- Disk: 5GB
**Recommended**:
- CPU: 2+ cores
- RAM: 2GB+
- Disk: 20GB+
## Volumes
- `postgres_data`: PostgreSQL database data
- `temporal_data`: Temporal configuration and state
## Using Temporal
### Install SDK
Choose your language:
**Go:**
```bash
go get go.temporal.io/sdk
```
**TypeScript:**
```bash
npm install @temporalio/client @temporalio/worker
```
**Python:**
```bash
pip install temporalio
```
### Write a Workflow (Python Example)
```python
from temporalio import workflow, activity
from datetime import timedelta
@activity.defn
async def say_hello(name: str) -> str:
return f"Hello, {name}!"
@workflow.defn
class HelloWorkflow:
@workflow.run
async def run(self, name: str) -> str:
return await workflow.execute_activity(
say_hello,
name,
start_to_close_timeout=timedelta(seconds=10),
)
```
### Run a Worker
```python
import asyncio
from temporalio.client import Client
from temporalio.worker import Worker
# HelloWorkflow and say_hello come from the previous snippet
async def main():
    client = await Client.connect("localhost:7233")
    worker = Worker(
        client,
        task_queue="hello-queue",
        workflows=[HelloWorkflow],
        activities=[say_hello],
    )
    await worker.run()
if __name__ == "__main__":
    asyncio.run(main())
```
### Execute a Workflow
```python
import asyncio
from temporalio.client import Client
# HelloWorkflow comes from the earlier snippet
async def main():
    client = await Client.connect("localhost:7233")
    result = await client.execute_workflow(
        HelloWorkflow.run,
        "World",
        id="hello-workflow",
        task_queue="hello-queue",
    )
    print(result)
if __name__ == "__main__":
    asyncio.run(main())
```
### Using tctl CLI
The admin-tools container (dev profile) includes tctl:
```bash
docker compose --profile dev run temporal-admin-tools
tctl namespace list
tctl workflow list
```
## Profiles
- `dev`: Include admin-tools container for CLI access
To enable dev profile:
```bash
docker compose --profile dev up -d
```
## Security Considerations
1. **Authentication**: Configure mTLS for production deployments
2. **Authorization**: Set up authorization rules for namespaces and workflows
3. **Database Passwords**: Use strong PostgreSQL passwords
4. **Network Security**: Restrict access to Temporal ports
5. **Encryption**: Enable encryption at rest for sensitive data
## Upgrading
To upgrade Temporal:
1. Update versions in `.env`
2. Pull and restart:
```bash
docker compose pull
docker compose up -d
```
3. Check logs for migration messages:
```bash
docker compose logs -f temporal
```
## Troubleshooting
**Service won't start:**
- Check logs: `docker compose logs temporal`
- Verify database: `docker compose ps postgres`
- Ensure sufficient resources allocated
**Cannot connect from SDK:**
- Verify port 7233 is accessible
- Check firewall rules
- Ensure SDK version compatibility
**Web UI not loading:**
- Check UI logs: `docker compose logs temporal-ui`
- Verify frontend is healthy: `docker compose ps temporal`
- Clear browser cache
## References
- Official Website: <https://temporal.io>
- Documentation: <https://docs.temporal.io>
- GitHub: <https://github.com/temporalio/temporal>
- Community: <https://community.temporal.io>
- SDK Documentation: <https://docs.temporal.io/dev-guide>
## License
Temporal is licensed under MIT. See [LICENSE](https://github.com/temporalio/temporal/blob/master/LICENSE) for more information.

src/temporal/README.zh.md Normal file

@@ -0,0 +1,239 @@
# Temporal
Temporal 是一个可扩展且可靠的持久化执行运行时,用于运行称为 Temporal 工作流执行的可重入流程。它使开发人员能够编写简单、有弹性的代码,而无需担心故障、重试或状态管理。
## 功能特点
- **持久化执行**:工作流可以在故障、重启甚至代码部署后继续运行
- **内置可靠性**:自动重试、超时和错误处理
- **长期运行的工作流**:支持运行数天、数月或数年的工作流
- **多语言 SDK**:官方支持 Go、Java、TypeScript、Python、PHP、.NET
- **高级可见性**:搜索和过滤工作流执行
- **事件驱动**:通过信号、查询和更新与工作流交互
## 快速开始
1. 复制 `.env.example``.env`
```bash
cp .env.example .env
```
2. (可选)编辑 `.env` 自定义数据库密码和设置
3. 启动 Temporal
```bash
docker compose up -d
```
4. 等待服务就绪(使用 `docker compose logs -f temporal` 检查)
5. 访问 Temporal Web UI`http://localhost:8233`
6. Frontend 服务地址:`localhost:7233`gRPC
## 默认配置
| 服务 | 端口 | 说明 |
| ----------------- | ---- | ---------------- |
| Temporal Frontend | 7233 | SDK 的 gRPC 端点 |
| Temporal Web UI | 8233 | Web 界面 |
| PostgreSQL | 5432 | 数据库(内部) |
**身份验证**:默认无身份验证。生产环境中请配置 mTLS 和授权。
## 环境变量
主要环境变量(完整列表请参阅 `.env.example`
| 变量 | 说明 | 默认值 |
| --------------------------------- | ------------------- | ---------- |
| `TEMPORAL_VERSION` | Temporal 服务器版本 | `1.24.2` |
| `TEMPORAL_UI_VERSION` | Temporal UI 版本 | `2.28.0` |
| `TEMPORAL_FRONTEND_PORT_OVERRIDE` | Frontend gRPC 端口 | `7233` |
| `TEMPORAL_UI_PORT_OVERRIDE` | Web UI 端口 | `8233` |
| `POSTGRES_DB` | 数据库名称 | `temporal` |
| `POSTGRES_USER` | 数据库用户 | `temporal` |
| `POSTGRES_PASSWORD` | 数据库密码 | `temporal` |
| `TEMPORAL_LOG_LEVEL` | 日志级别 | `info` |
| `TZ` | 时区 | `UTC` |
## 资源需求
**最低要求**
- CPU1 核心
- 内存1GB
- 磁盘5GB
**推荐配置**
- CPU2+ 核心
- 内存2GB+
- 磁盘20GB+
## 数据卷
- `postgres_data`PostgreSQL 数据库数据
- `temporal_data`Temporal 配置和状态
## 使用 Temporal
### 安装 SDK
选择您的语言:
**Go**
```bash
go get go.temporal.io/sdk
```
**TypeScript**
```bash
npm install @temporalio/client @temporalio/worker
```
**Python**
```bash
pip install temporalio
```
### 编写工作流Python 示例)
```python
from temporalio import workflow, activity
from datetime import timedelta
@activity.defn
async def say_hello(name: str) -> str:
return f"Hello, {name}!"
@workflow.defn
class HelloWorkflow:
@workflow.run
async def run(self, name: str) -> str:
return await workflow.execute_activity(
say_hello,
name,
start_to_close_timeout=timedelta(seconds=10),
)
```
### 运行 Worker
```python
import asyncio
from temporalio.client import Client
from temporalio.worker import Worker
# HelloWorkflow 和 say_hello 定义在前一个代码片段中
async def main():
    client = await Client.connect("localhost:7233")
    worker = Worker(
        client,
        task_queue="hello-queue",
        workflows=[HelloWorkflow],
        activities=[say_hello],
    )
    await worker.run()
if __name__ == "__main__":
    asyncio.run(main())
```
### 执行工作流
```python
import asyncio
from temporalio.client import Client
# HelloWorkflow 定义在前面的代码片段中
async def main():
    client = await Client.connect("localhost:7233")
    result = await client.execute_workflow(
        HelloWorkflow.run,
        "World",
        id="hello-workflow",
        task_queue="hello-queue",
    )
    print(result)
if __name__ == "__main__":
    asyncio.run(main())
```
### 使用 tctl CLI
admin-tools 容器dev 配置文件)包含 tctl
```bash
docker compose --profile dev run temporal-admin-tools
tctl namespace list
tctl workflow list
```
## 配置文件
- `dev`:包含用于 CLI 访问的 admin-tools 容器
启用 dev 配置文件:
```bash
docker compose --profile dev up -d
```
## 安全注意事项
1. **身份验证**:为生产部署配置 mTLS
2. **授权**:为命名空间和工作流设置授权规则
3. **数据库密码**:使用强 PostgreSQL 密码
4. **网络安全**:限制对 Temporal 端口的访问
5. **加密**:为敏感数据启用静态加密
## 升级
升级 Temporal
1. 在 `.env` 中更新版本
2. 拉取并重启:
```bash
docker compose pull
docker compose up -d
```
3. 检查日志查看迁移消息:
```bash
docker compose logs -f temporal
```
## 故障排除
**服务无法启动:**
- 检查日志:`docker compose logs temporal`
- 验证数据库:`docker compose ps postgres`
- 确保分配了足够的资源
**SDK 无法连接:**
- 验证端口 7233 可访问
- 检查防火墙规则
- 确保 SDK 版本兼容性
**Web UI 无法加载:**
- 检查 UI 日志:`docker compose logs temporal-ui`
- 验证 frontend 健康:`docker compose ps temporal`
- 清除浏览器缓存
## 参考资料
- 官方网站:<https://temporal.io>
- 文档:<https://docs.temporal.io>
- GitHub<https://github.com/temporalio/temporal>
- 社区:<https://community.temporal.io>
- SDK 文档:<https://docs.temporal.io/dev-guide>
## 许可证
Temporal 使用 MIT 许可证。详情请参阅 [LICENSE](https://github.com/temporalio/temporal/blob/master/LICENSE)。


@@ -0,0 +1,166 @@
# Temporal - Durable Execution Platform
# https://github.com/temporalio/temporal
#
# Temporal is a scalable and reliable runtime for microservice orchestration that enables
# developers to write simple, resilient code without worrying about failures, retries, or
# state management.
#
# Key Features:
# - Durable workflow execution with automatic state management
# - Built-in retry and error handling
# - Support for long-running workflows (days, months, years)
# - Multiple language SDKs (Go, Java, TypeScript, Python, PHP, .NET)
# - Advanced visibility with search capabilities
# - Multi-cluster replication support
#
# Default Credentials:
# - Access Web UI at http://localhost:8233
# - Frontend service at localhost:7233 (gRPC)
# - No authentication by default
#
# Security Notes:
# - Configure authentication and authorization in production
# - Use strong database passwords
# - Enable mTLS for production deployments
# - Restrict network access appropriately
#
# License: MIT (https://github.com/temporalio/temporal/blob/master/LICENSE)
x-default: &default
restart: unless-stopped
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
temporal:
<<: *default
image: temporalio/auto-setup:${TEMPORAL_VERSION:-1.24.2}
container_name: temporal
ports:
- "${TEMPORAL_FRONTEND_PORT_OVERRIDE:-7233}:7233" # Frontend gRPC
environment:
# Database configuration
- DB=postgresql
- DB_PORT=5432
- POSTGRES_SEEDS=postgres
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PWD=${POSTGRES_PASSWORD}
# Visibility database (using same Postgres)
- DBNAME=${POSTGRES_DB}
- VISIBILITY_DBNAME=${POSTGRES_DB}
# Server configuration
- DYNAMIC_CONFIG_FILE_PATH=config/dynamicconfig/development-sql.yaml
- ENABLE_ES=false
- SKIP_DB_CREATE=false
- SKIP_DEFAULT_NAMESPACE_CREATION=false
# Other settings
- TZ=${TZ:-UTC}
- LOG_LEVEL=${TEMPORAL_LOG_LEVEL:-info}
volumes:
- temporal_data:/etc/temporal
depends_on:
postgres:
condition: service_healthy
healthcheck:
test: ["CMD", "tctl", "--address", "temporal:7233", "cluster", "health"]
interval: 30s
timeout: 10s
retries: 5
start_period: 90s
deploy:
resources:
limits:
cpus: "${TEMPORAL_CPU_LIMIT:-2.0}"
memory: "${TEMPORAL_MEMORY_LIMIT:-2G}"
reservations:
cpus: "${TEMPORAL_CPU_RESERVATION:-0.5}"
memory: "${TEMPORAL_MEMORY_RESERVATION:-512M}"
temporal-ui:
<<: *default
image: temporalio/ui:${TEMPORAL_UI_VERSION:-2.28.0}
container_name: temporal-ui
ports:
- "${TEMPORAL_UI_PORT_OVERRIDE:-8233}:8080"
environment:
- TEMPORAL_ADDRESS=temporal:7233
- TEMPORAL_CORS_ORIGINS=http://localhost:8233
- TEMPORAL_UI_ENABLED=true
- TZ=${TZ:-UTC}
depends_on:
temporal:
condition: service_healthy
healthcheck:
test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost:8080"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
deploy:
resources:
limits:
cpus: "${TEMPORAL_UI_CPU_LIMIT:-0.5}"
memory: "${TEMPORAL_UI_MEMORY_LIMIT:-512M}"
reservations:
cpus: "${TEMPORAL_UI_CPU_RESERVATION:-0.1}"
memory: "${TEMPORAL_UI_MEMORY_RESERVATION:-128M}"
temporal-admin-tools:
<<: *default
image: temporalio/admin-tools:${TEMPORAL_VERSION:-1.24.2}
container_name: temporal-admin-tools
profiles:
- dev
environment:
- TEMPORAL_ADDRESS=temporal:7233
- TEMPORAL_CLI_ADDRESS=temporal:7233
- TZ=${TZ:-UTC}
depends_on:
temporal:
condition: service_healthy
stdin_open: true
tty: true
deploy:
resources:
limits:
cpus: "0.5"
memory: "256M"
postgres:
<<: *default
image: postgres:${POSTGRES_VERSION:-16-alpine}
container_name: temporal-postgres
environment:
- POSTGRES_DB=${POSTGRES_DB:-temporal}
- POSTGRES_USER=${POSTGRES_USER:-temporal}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-temporal}
- POSTGRES_INITDB_ARGS=--encoding=UTF8
- TZ=${TZ:-UTC}
volumes:
- postgres_data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-temporal} -d ${POSTGRES_DB:-temporal}"]
interval: 10s
timeout: 5s
retries: 5
deploy:
resources:
limits:
cpus: "${POSTGRES_CPU_LIMIT:-1.0}"
memory: "${POSTGRES_MEMORY_LIMIT:-1G}"
reservations:
cpus: "${POSTGRES_CPU_RESERVATION:-0.25}"
memory: "${POSTGRES_MEMORY_RESERVATION:-256M}"
volumes:
postgres_data:
driver: local
temporal_data:
driver: local

src/windmill/.env.example Normal file

@@ -0,0 +1,53 @@
# Windmill Configuration
# Versions
WINDMILL_VERSION=main
WINDMILL_LSP_VERSION=latest
POSTGRES_VERSION=16-alpine
# Port Configuration
WINDMILL_PORT_OVERRIDE=8000
WINDMILL_LSP_PORT=3001
# Base URL
WINDMILL_BASE_URL=http://localhost:8000
# PostgreSQL Configuration
POSTGRES_DB=windmill
POSTGRES_USER=windmill
POSTGRES_PASSWORD=changeme
# Superadmin Credentials - IMPORTANT: Change these!
WINDMILL_SUPERADMIN_EMAIL=admin@windmill.dev
WINDMILL_SUPERADMIN_PASSWORD=changeme
# Optional: Enterprise License Key
WINDMILL_LICENSE_KEY=
# Worker Configuration
WINDMILL_WORKER_TAGS=
WINDMILL_NUM_WORKERS=3
# Logging
WINDMILL_LOG_LEVEL=info
# Timezone
TZ=UTC
# Resource Limits - Server
WINDMILL_SERVER_CPU_LIMIT=1.0
WINDMILL_SERVER_CPU_RESERVATION=0.25
WINDMILL_SERVER_MEMORY_LIMIT=1G
WINDMILL_SERVER_MEMORY_RESERVATION=256M
# Resource Limits - Worker
WINDMILL_WORKER_CPU_LIMIT=2.0
WINDMILL_WORKER_CPU_RESERVATION=0.5
WINDMILL_WORKER_MEMORY_LIMIT=2G
WINDMILL_WORKER_MEMORY_RESERVATION=512M
# Resource Limits - PostgreSQL
POSTGRES_CPU_LIMIT=1.0
POSTGRES_CPU_RESERVATION=0.25
POSTGRES_MEMORY_LIMIT=1G
POSTGRES_MEMORY_RESERVATION=256M

src/windmill/README.md Normal file

@@ -0,0 +1,194 @@
# Windmill
Windmill is an open-source developer infrastructure platform that lets you quickly build production-grade, multi-step automations and internal apps from minimal scripts in Python, TypeScript, Go, Bash, or SQL.
## Features
- **Multi-Language Support**: Write scripts in Python, TypeScript, Go, Bash, SQL
- **Auto-Generated UIs**: Automatic UI generation from scripts
- **Visual Workflow Builder**: Build complex workflows with code execution
- **Scheduling**: Built-in cron-based scheduling
- **Webhooks**: Trigger scripts via HTTP webhooks
- **Version Control**: Built-in Git sync and audit logs
- **Multi-Tenant**: Workspace-based multi-tenancy
## Quick Start
1. Copy `.env.example` to `.env`:
```bash
cp .env.example .env
```
2. **IMPORTANT**: Edit `.env` and change:
- `WINDMILL_SUPERADMIN_EMAIL` - Your admin email
- `WINDMILL_SUPERADMIN_PASSWORD` - A strong password
- `POSTGRES_PASSWORD` - A strong database password
3. Start Windmill:
```bash
docker compose up -d
```
4. Wait for services to be ready
5. Access Windmill UI at `http://localhost:8000`
6. Log in with your configured superadmin credentials
## Default Configuration
| Service | Port | Description |
| --------------- | ---- | ----------------------------- |
| Windmill Server | 8000 | Web UI and API |
| PostgreSQL | 5432 | Database (internal) |
| Windmill LSP | 3001 | Language Server (dev profile) |
**Default Credentials** (Change these!):
- Email: `admin@windmill.dev`
- Password: `changeme`
## Environment Variables
Key environment variables (see `.env.example` for full list):
| Variable | Description | Default |
| ------------------------------ | ---------------------- | ----------------------- |
| `WINDMILL_VERSION` | Windmill image version | `main` |
| `WINDMILL_PORT_OVERRIDE` | Host port for UI | `8000` |
| `WINDMILL_BASE_URL` | Base URL | `http://localhost:8000` |
| `WINDMILL_SUPERADMIN_EMAIL` | Superadmin email | `admin@windmill.dev` |
| `WINDMILL_SUPERADMIN_PASSWORD` | Superadmin password | **Must change!** |
| `POSTGRES_PASSWORD` | Database password | `changeme` |
| `WINDMILL_NUM_WORKERS` | Number of workers | `3` |
| `WINDMILL_LICENSE_KEY` | Enterprise license | (empty) |
| `TZ` | Timezone | `UTC` |
## Resource Requirements
**Minimum**:
- CPU: 1 core
- RAM: 1GB
- Disk: 5GB
**Recommended**:
- CPU: 3+ cores (1 for server, 2+ for workers)
- RAM: 3GB+
- Disk: 20GB+
## Volumes
- `postgres_data`: PostgreSQL database data
- `windmill_server_data`: Windmill server data
- `windmill_worker_data`: Worker execution data
## Using Windmill
### Creating a Script
1. Access the UI at `http://localhost:8000`
2. Create a workspace or use default
3. Go to "Scripts" and click "New Script"
4. Write your script (Python example):
```python
def main(name: str = "world"):
return f"Hello {name}!"
```
5. Save and run
### Creating a Workflow
1. Go to "Flows" and click "New Flow"
2. Use the visual editor to add steps
3. Each step can be a script, flow, or approval
4. Configure inputs and outputs between steps
5. Deploy and run
### Using the API
Example: List scripts
```bash
curl -H "Authorization: Bearer YOUR_TOKEN" \
http://localhost:8000/api/w/workspace/scripts/list
```
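The same call can be scripted with the Python standard library. A minimal sketch, assuming a token generated in the Windmill UI and the default base URL (`scripts_list_url` is a helper introduced here for illustration, not part of the Windmill API):

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # matches the WINDMILL_BASE_URL default
TOKEN = "YOUR_TOKEN"                # paste a token created in the Windmill UI

def scripts_list_url(base: str, workspace: str) -> str:
    # Same endpoint as the curl example above
    return f"{base}/api/w/{workspace}/scripts/list"

def list_scripts(workspace: str) -> list:
    # GET the script list with a Bearer token, returning parsed JSON
    req = urllib.request.Request(
        scripts_list_url(BASE_URL, workspace),
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(list_scripts("workspace"))
```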
### Scheduling
1. Open any script or flow
2. Click "Schedule"
3. Set cron expression or interval
4. Save
## Profiles
- `dev`: Include LSP service for code intelligence (port 3001)
To enable dev profile:
```bash
docker compose --profile dev up -d
```
## Security Considerations
1. **Change Default Credentials**: Always change superadmin credentials
2. **Database Password**: Use a strong PostgreSQL password
3. **Docker Socket**: Mounting Docker socket grants container control
4. **SSL/TLS**: Use reverse proxy with HTTPS in production
5. **License Key**: Keep enterprise license key secure if using
## Upgrading
To upgrade Windmill:
1. Update `WINDMILL_VERSION` in `.env`
2. Pull and restart:
```bash
docker compose pull
docker compose up -d
```
3. Check logs:
```bash
docker compose logs -f windmill-server
```
## Troubleshooting
**Service won't start:**
- Check logs: `docker compose logs windmill-server`
- Verify database: `docker compose ps postgres`
- Ensure Docker socket is accessible
**Cannot login:**
- Verify credentials in `.env`
- Check server logs for authentication errors
- Try resetting password via CLI
**Workers not processing:**
- Check worker logs: `docker compose logs windmill-worker`
- Verify database connection
- Increase `WINDMILL_NUM_WORKERS` if needed
## References
- Official Website: <https://windmill.dev>
- Documentation: <https://docs.windmill.dev>
- GitHub: <https://github.com/windmill-labs/windmill>
- Community: <https://discord.gg/V7PM2YHsPB>
## License
Windmill is licensed under AGPLv3. See [LICENSE](https://github.com/windmill-labs/windmill/blob/main/LICENSE) for more information.

src/windmill/README.zh.md Normal file

@@ -0,0 +1,194 @@
# Windmill
Windmill 是一个开源的开发者基础设施平台,允许您从最少的 Python、TypeScript、Go、Bash、SQL 脚本快速构建生产级的多步骤自动化和内部应用。
## 功能特点
- **多语言支持**:使用 Python、TypeScript、Go、Bash、SQL 编写脚本
- **自动生成 UI**:从脚本自动生成用户界面
- **可视化工作流构建器**:通过代码执行构建复杂工作流
- **调度**:内置基于 cron 的调度
- **Webhook**:通过 HTTP webhook 触发脚本
- **版本控制**:内置 Git 同步和审计日志
- **多租户**:基于工作区的多租户
## 快速开始
1. 复制 `.env.example``.env`
```bash
cp .env.example .env
```
2. **重要**:编辑 `.env` 并更改:
- `WINDMILL_SUPERADMIN_EMAIL` - 您的管理员邮箱
- `WINDMILL_SUPERADMIN_PASSWORD` - 一个强密码
- `POSTGRES_PASSWORD` - 一个强数据库密码
3. 启动 Windmill
```bash
docker compose up -d
```
4. 等待服务就绪
5. 访问 Windmill UI`http://localhost:8000`
6. 使用配置的超级管理员凭证登录
## 默认配置
| 服务 | 端口 | 说明 |
| --------------- | ---- | -------------------------- |
| Windmill Server | 8000 | Web UI 和 API |
| PostgreSQL | 5432 | 数据库(内部) |
| Windmill LSP | 3001 | 语言服务器dev 配置文件) |
**默认凭证**(请更改!):
- 邮箱:`admin@windmill.dev`
- 密码:`changeme`
## 环境变量
主要环境变量(完整列表请参阅 `.env.example`
| 变量 | 说明 | 默认值 |
| ------------------------------ | ----------------- | ----------------------- |
| `WINDMILL_VERSION` | Windmill 镜像版本 | `main` |
| `WINDMILL_PORT_OVERRIDE` | UI 的主机端口 | `8000` |
| `WINDMILL_BASE_URL` | 基础 URL | `http://localhost:8000` |
| `WINDMILL_SUPERADMIN_EMAIL` | 超级管理员邮箱 | `admin@windmill.dev` |
| `WINDMILL_SUPERADMIN_PASSWORD` | 超级管理员密码 | **必须更改!** |
| `POSTGRES_PASSWORD` | 数据库密码 | `changeme` |
| `WINDMILL_NUM_WORKERS` | Worker 数量 | `3` |
| `WINDMILL_LICENSE_KEY` | 企业许可证 | (空) |
| `TZ` | 时区 | `UTC` |
## 资源需求
**最低要求**
- CPU1 核心
- 内存1GB
- 磁盘5GB
**推荐配置**
- CPU3+ 核心server 1 个,worker 2+ 个)
- 内存3GB+
- 磁盘20GB+
## 数据卷
- `postgres_data`PostgreSQL 数据库数据
- `windmill_server_data`Windmill 服务器数据
- `windmill_worker_data`Worker 执行数据
## 使用 Windmill
### 创建脚本
1. 访问 UI`http://localhost:8000`
2. 创建工作区或使用默认工作区
3. 进入 "Scripts" 并点击 "New Script"
4. 编写您的脚本Python 示例):
```python
def main(name: str = "world"):
return f"Hello {name}!"
```
5. 保存并运行
### 创建工作流
1. 进入 "Flows" 并点击 "New Flow"
2. 使用可视化编辑器添加步骤
3. 每个步骤可以是脚本、流程或审批
4. 配置步骤之间的输入和输出
5. 部署并运行
### 使用 API
示例:列出脚本
```bash
curl -H "Authorization: Bearer YOUR_TOKEN" \
http://localhost:8000/api/w/workspace/scripts/list
```
### 调度
1. 打开任何脚本或流程
2. 点击 "Schedule"
3. 设置 cron 表达式或间隔
4. 保存
## 配置文件
- `dev`:包含用于代码智能的 LSP 服务(端口 3001
启用 dev 配置文件:
```bash
docker compose --profile dev up -d
```
## 安全注意事项
1. **更改默认凭证**:始终更改超级管理员凭证
2. **数据库密码**:使用强 PostgreSQL 密码
3. **Docker Socket**:挂载 Docker socket 授予容器控制权限
4. **SSL/TLS**:在生产环境中使用带 HTTPS 的反向代理
5. **许可证密钥**:如果使用企业许可证,请妥善保管密钥
## 升级
升级 Windmill
1. 在 `.env` 中更新 `WINDMILL_VERSION`
2. 拉取并重启:
```bash
docker compose pull
docker compose up -d
```
3. 检查日志:
```bash
docker compose logs -f windmill-server
```
## 故障排除
**服务无法启动:**
- 检查日志:`docker compose logs windmill-server`
- 验证数据库:`docker compose ps postgres`
- 确保 Docker socket 可访问
**无法登录:**
- 验证 `.env` 中的凭证
- 检查服务器日志中的身份验证错误
- 尝试通过 CLI 重置密码
**Worker 未处理:**
- 检查 worker 日志:`docker compose logs windmill-worker`
- 验证数据库连接
- 如需要增加 `WINDMILL_NUM_WORKERS`
## 参考资料
- 官方网站:<https://windmill.dev>
- 文档:<https://docs.windmill.dev>
- GitHub<https://github.com/windmill-labs/windmill>
- 社区:<https://discord.gg/V7PM2YHsPB>
## 许可证
Windmill 使用 AGPLv3 许可证。详情请参阅 [LICENSE](https://github.com/windmill-labs/windmill/blob/main/LICENSE)。


@@ -0,0 +1,164 @@
# Windmill - Developer Infrastructure Platform
# https://github.com/windmill-labs/windmill
#
# Windmill is an open-source developer platform and workflow engine that allows you to
# quickly build production-grade multi-step automations and internal apps from minimal
# scripts in Python, TypeScript, Go, Bash, SQL, or any Docker image.
#
# Key Features:
# - Write scripts in Python, TypeScript, Go, Bash, SQL
# - Auto-generated UIs from scripts
# - Visual workflow builder with code execution
# - Built-in scheduling and webhooks
# - Version control and audit logs
# - Multi-tenant workspaces
#
# Default Credentials:
# - Access UI at http://localhost:8000
# - Default email: admin@windmill.dev
# - Default password: changeme
#
# Security Notes:
# - Change default admin credentials immediately
# - Use strong database passwords
# - Enable SSL/TLS in production
# - Configure proper authentication (OAuth, SAML)
#
# License: AGPLv3 (https://github.com/windmill-labs/windmill/blob/main/LICENSE)
x-default: &default
restart: unless-stopped
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
windmill-server:
<<: *default
image: ghcr.io/windmill-labs/windmill:${WINDMILL_VERSION:-main}
container_name: windmill-server
ports:
- "${WINDMILL_PORT_OVERRIDE:-8000}:8000"
environment:
# Database configuration
- DATABASE_URL=postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}?sslmode=disable
# Server configuration
- MODE=server
- BASE_URL=${WINDMILL_BASE_URL:-http://localhost:8000}
# Authentication
- SUPERADMIN_EMAIL=${WINDMILL_SUPERADMIN_EMAIL:-admin@windmill.dev}
- SUPERADMIN_PASSWORD=${WINDMILL_SUPERADMIN_PASSWORD:-changeme}
# Optional: License key for enterprise features
- LICENSE_KEY=${WINDMILL_LICENSE_KEY:-}
# Other settings
- TZ=${TZ:-UTC}
- RUST_LOG=${WINDMILL_LOG_LEVEL:-info}
volumes:
- windmill_server_data:/tmp/windmill
depends_on:
postgres:
condition: service_healthy
healthcheck:
test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost:8000/api/version"]
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
deploy:
resources:
limits:
cpus: "${WINDMILL_SERVER_CPU_LIMIT:-1.0}"
memory: "${WINDMILL_SERVER_MEMORY_LIMIT:-1G}"
reservations:
cpus: "${WINDMILL_SERVER_CPU_RESERVATION:-0.25}"
memory: "${WINDMILL_SERVER_MEMORY_RESERVATION:-256M}"
windmill-worker:
<<: *default
image: ghcr.io/windmill-labs/windmill:${WINDMILL_VERSION:-main}
container_name: windmill-worker
environment:
# Database configuration
- DATABASE_URL=postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}?sslmode=disable
# Worker configuration
- MODE=worker
- WORKER_TAGS=${WINDMILL_WORKER_TAGS:-}
- NUM_WORKERS=${WINDMILL_NUM_WORKERS:-3}
# Other settings
- TZ=${TZ:-UTC}
- RUST_LOG=${WINDMILL_LOG_LEVEL:-info}
volumes:
- windmill_worker_data:/tmp/windmill
- /var/run/docker.sock:/var/run/docker.sock:ro # For Docker execution
depends_on:
postgres:
condition: service_healthy
windmill-server:
condition: service_healthy
deploy:
resources:
limits:
cpus: "${WINDMILL_WORKER_CPU_LIMIT:-2.0}"
memory: "${WINDMILL_WORKER_MEMORY_LIMIT:-2G}"
reservations:
cpus: "${WINDMILL_WORKER_CPU_RESERVATION:-0.5}"
memory: "${WINDMILL_WORKER_MEMORY_RESERVATION:-512M}"
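  # Scaling note (sketch): to run several worker containers instead of
  # raising NUM_WORKERS, remove the fixed container_name above and use:
  #   docker compose up -d --scale windmill-worker=3
  # (docker compose refuses to scale a service that pins container_name)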
postgres:
<<: *default
image: postgres:${POSTGRES_VERSION:-16-alpine}
container_name: windmill-postgres
environment:
- POSTGRES_DB=${POSTGRES_DB:-windmill}
- POSTGRES_USER=${POSTGRES_USER:-windmill}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-changeme}
- POSTGRES_INITDB_ARGS=--encoding=UTF8
- TZ=${TZ:-UTC}
volumes:
- postgres_data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-windmill} -d ${POSTGRES_DB:-windmill}"]
interval: 10s
timeout: 5s
retries: 5
deploy:
resources:
limits:
cpus: "${POSTGRES_CPU_LIMIT:-1.0}"
memory: "${POSTGRES_MEMORY_LIMIT:-1G}"
reservations:
cpus: "${POSTGRES_CPU_RESERVATION:-0.25}"
memory: "${POSTGRES_MEMORY_RESERVATION:-256M}"
# Optional: LSP service for code intelligence
windmill-lsp:
<<: *default
image: ghcr.io/windmill-labs/windmill-lsp:${WINDMILL_LSP_VERSION:-latest}
container_name: windmill-lsp
profiles:
- dev
ports:
- "${WINDMILL_LSP_PORT:-3001}:3001"
deploy:
resources:
limits:
cpus: "0.5"
memory: "512M"
volumes:
postgres_data:
driver: local
windmill_server_data:
driver: local
windmill_worker_data:
driver: local
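For reference, the variables consumed by this compose file could be collected into a `.env` sketch. The values shown are simply the fallback defaults used above; the password and superadmin credentials must be changed before any real deployment.

```env
# Windmill .env sketch -- mirrors the fallback defaults in the compose file
WINDMILL_VERSION=main
WINDMILL_PORT_OVERRIDE=8000
WINDMILL_BASE_URL=http://localhost:8000
WINDMILL_SUPERADMIN_EMAIL=admin@windmill.dev
WINDMILL_SUPERADMIN_PASSWORD=changeme
WINDMILL_LOG_LEVEL=info
WINDMILL_NUM_WORKERS=3
WINDMILL_WORKER_TAGS=
WINDMILL_LSP_VERSION=latest
WINDMILL_LSP_PORT=3001
POSTGRES_VERSION=16-alpine
POSTGRES_DB=windmill
POSTGRES_USER=windmill
POSTGRES_PASSWORD=changeme
TZ=UTC
```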