feat: add sim & pingap

This commit is contained in:
Sun-ZhenXing
2025-12-27 11:24:44 +08:00
parent 72b36f2748
commit d536fbc995
25 changed files with 1727 additions and 483 deletions

View File

@@ -48,8 +48,8 @@ Compose Anything helps users quickly deploy various services by providing a set
| [Kibana](./src/kibana) | 8.16.1 | | [Kibana](./src/kibana) | 8.16.1 |
| [Kodbox](./src/kodbox) | 1.62 | | [Kodbox](./src/kodbox) | 1.62 |
| [Kong](./src/kong) | 3.8.0 | | [Kong](./src/kong) | 3.8.0 |
| [Langflow](./src/langflow) | latest | | [Langflow](./apps/langflow) | latest |
| [Langfuse](./src/langfuse) | 3.115.0 | | [Langfuse](./apps/langfuse) | 3.115.0 |
| [LibreOffice](./src/libreoffice) | latest | | [LibreOffice](./src/libreoffice) | latest |
| [LiteLLM](./src/litellm) | main-stable | | [LiteLLM](./src/litellm) | main-stable |
| [Logstash](./src/logstash) | 8.16.1 | | [Logstash](./src/logstash) | 8.16.1 |
@@ -100,6 +100,7 @@ Compose Anything helps users quickly deploy various services by providing a set
| [Restate Cluster](./src/restate-cluster) | 1.5.3 | | [Restate Cluster](./src/restate-cluster) | 1.5.3 |
| [Restate](./src/restate) | 1.5.3 | | [Restate](./src/restate) | 1.5.3 |
| [SearXNG](./src/searxng) | 2025.1.20-1ce14ef99 | | [SearXNG](./src/searxng) | 2025.1.20-1ce14ef99 |
| [Sim](./apps/sim) | latest |
| [Stable Diffusion WebUI](./src/stable-diffusion-webui-docker) | latest | | [Stable Diffusion WebUI](./src/stable-diffusion-webui-docker) | latest |
| [Stirling-PDF](./src/stirling-pdf) | latest | | [Stirling-PDF](./src/stirling-pdf) | latest |
| [Temporal](./src/temporal) | 1.24.2 | | [Temporal](./src/temporal) | 1.24.2 |

View File

@@ -48,8 +48,8 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
| [Kibana](./src/kibana) | 8.16.1 | | [Kibana](./src/kibana) | 8.16.1 |
| [Kodbox](./src/kodbox) | 1.62 | | [Kodbox](./src/kodbox) | 1.62 |
| [Kong](./src/kong) | 3.8.0 | | [Kong](./src/kong) | 3.8.0 |
| [Langflow](./src/langflow) | latest | | [Langflow](./apps/langflow) | latest |
| [Langfuse](./src/langfuse) | 3.115.0 | | [Langfuse](./apps/langfuse) | 3.115.0 |
| [LibreOffice](./src/libreoffice) | latest | | [LibreOffice](./src/libreoffice) | latest |
| [LiteLLM](./src/litellm) | main-stable | | [LiteLLM](./src/litellm) | main-stable |
| [Logstash](./src/logstash) | 8.16.1 | | [Logstash](./src/logstash) | 8.16.1 |
@@ -100,6 +100,7 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
| [Restate Cluster](./src/restate-cluster) | 1.5.3 | | [Restate Cluster](./src/restate-cluster) | 1.5.3 |
| [Restate](./src/restate) | 1.5.3 | | [Restate](./src/restate) | 1.5.3 |
| [SearXNG](./src/searxng) | 2025.1.20-1ce14ef99 | | [SearXNG](./src/searxng) | 2025.1.20-1ce14ef99 |
| [Sim](./apps/sim) | latest |
| [Stable Diffusion WebUI](./src/stable-diffusion-webui-docker) | latest | | [Stable Diffusion WebUI](./src/stable-diffusion-webui-docker) | latest |
| [Stirling-PDF](./src/stirling-pdf) | latest | | [Stirling-PDF](./src/stirling-pdf) | latest |
| [Temporal](./src/temporal) | 1.24.2 | | [Temporal](./src/temporal) | 1.24.2 |

View File

@@ -1,7 +1,7 @@
# Langflow Configuration # Langflow Configuration
# Versions # Versions
LANGFLOW_VERSION=latest LANGFLOW_VERSION=1.1.1
POSTGRES_VERSION=16-alpine POSTGRES_VERSION=16-alpine
# Port Configuration # Port Configuration
@@ -52,12 +52,16 @@ DO_NOT_TRACK=false
# Resource Limits - Langflow # Resource Limits - Langflow
LANGFLOW_CPU_LIMIT=2.0 LANGFLOW_CPU_LIMIT=2.0
LANGFLOW_CPU_RESERVATION=0.5
LANGFLOW_MEMORY_LIMIT=2G LANGFLOW_MEMORY_LIMIT=2G
LANGFLOW_CPU_RESERVATION=0.5
LANGFLOW_MEMORY_RESERVATION=512M LANGFLOW_MEMORY_RESERVATION=512M
# Resource Limits - PostgreSQL # Resource Limits - PostgreSQL
POSTGRES_CPU_LIMIT=1.0 POSTGRES_CPU_LIMIT=1.0
POSTGRES_CPU_RESERVATION=0.25
POSTGRES_MEMORY_LIMIT=1G POSTGRES_MEMORY_LIMIT=1G
POSTGRES_CPU_RESERVATION=0.25
POSTGRES_MEMORY_RESERVATION=256M POSTGRES_MEMORY_RESERVATION=256M
# Logging Configuration
LOG_MAX_SIZE=100m
LOG_MAX_FILE=3

336
apps/langflow/README.md Normal file
View File

@@ -0,0 +1,336 @@
# Langflow
Langflow is a low-code visual framework for building AI applications. It's Python-based and agnostic to any model, API, or database, making it easy to build RAG applications, multi-agent systems, and custom AI workflows.
## Features
- **Visual Flow Builder**: Drag-and-drop interface for building AI applications
- **Multi-Model Support**: Works with OpenAI, Anthropic, Google, HuggingFace, and more
- **RAG Components**: Built-in support for vector databases and retrieval
- **Custom Components**: Create your own Python components
- **Agent Support**: Build multi-agent systems with memory and tools
- **Real-Time Monitoring**: Track executions and debug flows
- **API Integration**: REST API for programmatic access
## Quick Start
1. Copy `.env.example` to `.env`:
```bash
cp .env.example .env
```
2. (Optional) Edit `.env` to customize settings:
- Generate a secure `LANGFLOW_SECRET_KEY` for production
- Set `LANGFLOW_AUTO_LOGIN=false` to require authentication
- Configure superuser credentials
- Add API keys for LLM providers
3. Start Langflow:
```bash
docker compose up -d
```
4. Wait for services to be ready (usually takes 1-2 minutes)
5. Access Langflow UI at `http://localhost:7860`
6. Start building your AI application!
## Default Configuration
| Service | Port | Description |
| ---------- | ---- | ------------------- |
| Langflow | 7860 | Web UI and API |
| PostgreSQL | 5432 | Database (internal) |
**Default Credentials** (if authentication enabled):
- Username: `langflow`
- Password: `langflow`
## Environment Variables
Key environment variables (see `.env.example` for full list):
| Variable | Description | Default |
| ----------------------------- | ----------------------------- | ---------- |
| `LANGFLOW_VERSION` | Langflow image version | `1.1.1` |
| `LANGFLOW_PORT_OVERRIDE` | Host port for UI | `7860` |
| `POSTGRES_PASSWORD` | Database password | `langflow` |
| `LANGFLOW_AUTO_LOGIN` | Auto-login (disable for auth) | `true` |
| `LANGFLOW_SUPERUSER` | Superuser username | `langflow` |
| `LANGFLOW_SUPERUSER_PASSWORD` | Superuser password | `langflow` |
| `LANGFLOW_SECRET_KEY` | Secret key for sessions | (empty) |
| `LANGFLOW_COMPONENTS_PATH` | Custom components directory | (empty) |
| `LANGFLOW_LOAD_FLOWS_PATH` | Auto-load flows directory | (empty) |
| `TZ` | Timezone | `UTC` |
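After editing `.env`, `docker compose config` renders the fully resolved file, which is a quick way to confirm the values above are being picked up (a small check, not required for normal use):
```bash
# Render the resolved compose configuration to confirm .env overrides are applied
docker compose config
# Optionally filter for the Langflow-related settings
docker compose config | grep LANGFLOW_
```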
## Resource Requirements
**Minimum**:
- CPU: 1 core
- RAM: 1GB
- Disk: 5GB
**Recommended**:
- CPU: 2+ cores
- RAM: 2GB+
- Disk: 20GB+
## Volumes
- `postgres_data`: PostgreSQL database data
- `langflow_data`: Langflow configuration, flows, and logs
## Using Langflow
### Building Your First Flow
1. Access the UI at `http://localhost:7860`
2. Click "New Flow" or use a template
3. Drag components from the sidebar to the canvas
4. Connect components by dragging between ports
5. Configure component parameters
6. Click "Run" to test your flow
7. Use the API or integrate with your application
### Adding LLM Providers
To use external LLM providers, configure their API keys:
1. In Langflow UI, go to Settings > Global Variables
2. Add your API keys (e.g., `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`)
3. Reference these variables in your flow components
Alternatively, add them to your `.env` file and restart:
```bash
# Example LLM API Keys (add to .env)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...
```
### Custom Components
To add custom components:
1. Create a directory for your components (e.g., `./custom_components`)
2. Update `.env`:
```bash
LANGFLOW_COMPONENTS_PATH=/app/langflow/custom_components
```
3. Mount the directory in `docker-compose.yaml`:
```yaml
volumes:
- ./custom_components:/app/langflow/custom_components
```
4. Restart Langflow
### Auto-Loading Flows
To automatically load flows on startup:
1. Export your flows as JSON files
2. Create a directory (e.g., `./flows`)
3. Update `.env`:
```bash
LANGFLOW_LOAD_FLOWS_PATH=/app/langflow/flows
```
4. Mount the directory in `docker-compose.yaml`:
```yaml
volumes:
- ./flows:/app/langflow/flows
```
5. Restart Langflow
## API Usage
Langflow provides a REST API for running flows programmatically.
### Get Flow ID
1. Open your flow in the UI
2. The flow ID is in the URL: `http://localhost:7860/flow/{flow_id}`
### Run Flow via API
```bash
curl -X POST http://localhost:7860/api/v1/run/{flow_id} \
-H "Content-Type: application/json" \
-d '{
"inputs": {
"input_field": "your input value"
}
}'
```
### With Authentication
If authentication is enabled, first get a token:
```bash
# Login
curl -X POST http://localhost:7860/api/v1/login \
-H "Content-Type: application/json" \
-d '{
"username": "langflow",
"password": "langflow"
}'
# Use token in subsequent requests
curl -X POST http://localhost:7860/api/v1/run/{flow_id} \
-H "Authorization: Bearer {token}" \
-H "Content-Type: application/json" \
-d '{
"inputs": {
"input_field": "your input value"
}
}'
```
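When scripting against an authenticated instance, the token from the login response can be captured with `jq`; the exact field name depends on the Langflow version, so treat `access_token` below as an assumption and check your own login response first:
```bash
# Sketch: log in once and reuse the token (response field name "access_token" is an assumption)
TOKEN=$(curl -s -X POST http://localhost:7860/api/v1/login \
  -H "Content-Type: application/json" \
  -d '{"username": "langflow", "password": "langflow"}' | jq -r '.access_token')

curl -X POST "http://localhost:7860/api/v1/run/{flow_id}" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"inputs": {"input_field": "your input value"}}'
```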
## Production Deployment
For production deployments:
1. **Enable Authentication**:
```bash
LANGFLOW_AUTO_LOGIN=false
LANGFLOW_SUPERUSER=admin
LANGFLOW_SUPERUSER_PASSWORD=<strong-password>
```
2. **Set Secret Key**:
```bash
# Generate a secure key
python -c "from secrets import token_urlsafe; print(token_urlsafe(32))"
# Add to .env
LANGFLOW_SECRET_KEY=<generated-key>
```
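The two steps can also be combined into one command that appends the generated key to `.env` (same Python one-liner, just redirected):
```bash
# Generate the key and append it to .env in one step
python -c "from secrets import token_urlsafe; print('LANGFLOW_SECRET_KEY=' + token_urlsafe(32))" >> .env
```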
3. **Use Strong Database Password**:
```bash
POSTGRES_PASSWORD=<strong-password>
```
4. **Enable SSL/TLS**: Use a reverse proxy (nginx, traefik) with SSL certificates
5. **Configure Resource Limits**: Adjust CPU and memory limits based on your workload
6. **Backup Database**: Regularly backup the PostgreSQL data volume
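In addition to the volume backup shown in the Maintenance section below, a logical dump is often easier to restore selectively; assuming the default `langflow` user and database from `.env.example`, a sketch looks like:
```bash
# Logical backup of the Langflow database (assumes the default user/db names)
docker compose exec postgres pg_dump -U langflow langflow > langflow-db-$(date +%Y%m%d).sql
```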
## Troubleshooting
### Langflow Won't Start
- Check logs: `docker compose logs langflow`
- Ensure PostgreSQL is healthy: `docker compose ps postgres`
- Verify port 7860 is not in use
### Components Not Loading
- Check custom components path is correct
- Ensure Python dependencies are installed in custom components
- Check logs for component errors
### Slow Performance
- Increase resource limits in `.env`
- Reduce `LANGFLOW_WORKERS` if low on memory
- Optimize your flows (reduce unnecessary components)
### Database Connection Errors
- Verify PostgreSQL is running: `docker compose ps postgres`
- Check database credentials in `.env`
- Ensure `LANGFLOW_DATABASE_URL` is correct
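A quick way to rule out the database itself is a readiness probe plus a manual connection from inside the container (user and database names assume the defaults):
```bash
# Check PostgreSQL readiness and try a manual connection with the default credentials
docker compose exec postgres pg_isready -U langflow
docker compose exec postgres psql -U langflow -d langflow -c "SELECT 1;"
```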
## Maintenance
### Backup
Backup volumes:
```bash
docker compose down
docker run --rm -v compose-anything_postgres_data:/data -v $(pwd):/backup alpine tar czf /backup/postgres-backup.tar.gz -C /data .
docker run --rm -v compose-anything_langflow_data:/data -v $(pwd):/backup alpine tar czf /backup/langflow-backup.tar.gz -C /data .
docker compose up -d
```
### Restore
Restore from backup:
```bash
docker compose down
docker run --rm -v compose-anything_postgres_data:/data -v $(pwd):/backup alpine sh -c "cd /data && tar xzf /backup/postgres-backup.tar.gz"
docker run --rm -v compose-anything_langflow_data:/data -v $(pwd):/backup alpine sh -c "cd /data && tar xzf /backup/langflow-backup.tar.gz"
docker compose up -d
```
### Upgrade
To upgrade Langflow:
1. Update version in `.env`:
```bash
LANGFLOW_VERSION=1.2.0
```
2. Pull new image and restart:
```bash
docker compose pull
docker compose up -d
```
3. Check for breaking changes in release notes
## Useful Commands
```bash
# View logs
docker compose logs -f langflow
# Restart Langflow
docker compose restart langflow
# Access PostgreSQL
docker compose exec postgres psql -U langflow -d langflow
# Check resource usage
docker stats
# Clean up
docker compose down -v # WARNING: Deletes all data
```
## References
- [Official Documentation](https://docs.langflow.org/)
- [GitHub Repository](https://github.com/langflow-ai/langflow)
- [Component Documentation](https://docs.langflow.org/components/)
- [API Documentation](https://docs.langflow.org/api/)
- [Community Discord](https://discord.gg/langflow)
## License
MIT - See [LICENSE](https://github.com/langflow-ai/langflow/blob/main/LICENSE)

336
apps/langflow/README.zh.md Normal file
View File

@@ -0,0 +1,336 @@
# Langflow
Langflow 是一个低代码可视化框架,用于构建 AI 应用。它基于 Python,与任何模型、API 或数据库无关,可轻松构建 RAG 应用、多智能体系统和自定义 AI 工作流。
## 功能特点
- **可视化流构建器**:拖放界面构建 AI 应用
- **多模型支持**:支持 OpenAI、Anthropic、Google、HuggingFace 等
- **RAG 组件**:内置向量数据库和检索支持
- **自定义组件**:创建您自己的 Python 组件
- **智能体支持**:构建具有记忆和工具的多智能体系统
- **实时监控**:跟踪执行并调试流程
- **API 集成**:用于编程访问的 REST API
## 快速开始
1. 复制 `.env.example` 到 `.env`:
```bash
cp .env.example .env
```
2. (可选)编辑 `.env` 自定义设置:
- 为生产环境生成安全的 `LANGFLOW_SECRET_KEY`
- 设置 `LANGFLOW_AUTO_LOGIN=false` 以要求身份验证
- 配置超级用户凭证
- 为 LLM 提供商添加 API 密钥
3. 启动 Langflow
```bash
docker compose up -d
```
4. 等待服务就绪(通常需要 1-2 分钟)
5. 访问 Langflow UI:`http://localhost:7860`
6. 开始构建您的 AI 应用!
## 默认配置
| 服务 | 端口 | 说明 |
| ---------- | ---- | -------------- |
| Langflow | 7860 | Web UI 和 API |
| PostgreSQL | 5432 | 数据库(内部) |
**默认凭证**(如果启用了身份验证):
- 用户名:`langflow`
- 密码:`langflow`
## 环境变量
主要环境变量(完整列表请参阅 `.env.example`):
| 变量 | 说明 | 默认值 |
| ----------------------------- | ------------------------------ | ---------- |
| `LANGFLOW_VERSION` | Langflow 镜像版本 | `1.1.1` |
| `LANGFLOW_PORT_OVERRIDE` | UI 的主机端口 | `7860` |
| `POSTGRES_PASSWORD` | 数据库密码 | `langflow` |
| `LANGFLOW_AUTO_LOGIN` | 自动登录(禁用以启用身份验证) | `true` |
| `LANGFLOW_SUPERUSER` | 超级用户用户名 | `langflow` |
| `LANGFLOW_SUPERUSER_PASSWORD` | 超级用户密码 | `langflow` |
| `LANGFLOW_SECRET_KEY` | 会话密钥 | (空) |
| `LANGFLOW_COMPONENTS_PATH` | 自定义组件目录 | (空) |
| `LANGFLOW_LOAD_FLOWS_PATH` | 自动加载流目录 | (空) |
| `TZ` | 时区 | `UTC` |
## 资源需求
**最低要求**
- CPU1 核心
- 内存1GB
- 磁盘5GB
**推荐配置**
- CPU2+ 核心
- 内存2GB+
- 磁盘20GB+
## 数据卷
- `postgres_data`PostgreSQL 数据库数据
- `langflow_data`Langflow 配置、流和日志
## 使用 Langflow
### 构建您的第一个流
1. 访问 UI:`http://localhost:7860`
2. 点击 "New Flow" 或使用模板
3. 从侧边栏拖动组件到画布
4. 通过在端口之间拖动来连接组件
5. 配置组件参数
6. 点击 "Run" 测试您的流
7. 使用 API 或与您的应用集成
### 添加 LLM 提供商
要使用外部 LLM 提供商,请配置其 API 密钥:
1. 在 Langflow UI 中,转到 Settings > Global Variables
2. 添加您的 API 密钥(例如,`OPENAI_API_KEY`、`ANTHROPIC_API_KEY`)
3. 在您的流组件中引用这些变量
或者,将它们添加到您的 `.env` 文件并重启:
```bash
# LLM API 密钥示例(添加到 .env)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...
```
### 自定义组件
要添加自定义组件:
1. 为您的组件创建一个目录(例如,`./custom_components`)
2. 更新 `.env`
```bash
LANGFLOW_COMPONENTS_PATH=/app/langflow/custom_components
```
3. 在 `docker-compose.yaml` 中挂载目录:
```yaml
volumes:
- ./custom_components:/app/langflow/custom_components
```
4. 重启 Langflow
### 自动加载流
要在启动时自动加载流:
1. 将您的流导出为 JSON 文件
2. 创建一个目录(例如,`./flows`)
3. 更新 `.env`
```bash
LANGFLOW_LOAD_FLOWS_PATH=/app/langflow/flows
```
4. 在 `docker-compose.yaml` 中挂载目录:
```yaml
volumes:
- ./flows:/app/langflow/flows
```
5. 重启 Langflow
## API 使用
Langflow 提供 REST API 用于以编程方式运行流。
### 获取流 ID
1. 在 UI 中打开您的流
2. 流 ID 在 URL 中:`http://localhost:7860/flow/{flow_id}`
### 通过 API 运行流
```bash
curl -X POST http://localhost:7860/api/v1/run/{flow_id} \
-H "Content-Type: application/json" \
-d '{
"inputs": {
"input_field": "your input value"
}
}'
```
### 使用身份验证
如果启用了身份验证,首先获取令牌:
```bash
# 登录
curl -X POST http://localhost:7860/api/v1/login \
-H "Content-Type: application/json" \
-d '{
"username": "langflow",
"password": "langflow"
}'
# 在后续请求中使用令牌
curl -X POST http://localhost:7860/api/v1/run/{flow_id} \
-H "Authorization: Bearer {token}" \
-H "Content-Type: application/json" \
-d '{
"inputs": {
"input_field": "your input value"
}
}'
```
## 生产部署
对于生产部署:
1. **启用身份验证**
```bash
LANGFLOW_AUTO_LOGIN=false
LANGFLOW_SUPERUSER=admin
LANGFLOW_SUPERUSER_PASSWORD=<强密码>
```
2. **设置密钥**
```bash
# 生成安全密钥
python -c "from secrets import token_urlsafe; print(token_urlsafe(32))"
# 添加到 .env
LANGFLOW_SECRET_KEY=<生成的密钥>
```
3. **使用强数据库密码**
```bash
POSTGRES_PASSWORD=<强密码>
```
4. **启用 SSL/TLS**:使用带有 SSL 证书的反向代理(nginx、traefik)
5. **配置资源限制**:根据您的工作负载调整 CPU 和内存限制
6. **备份数据库**:定期备份 PostgreSQL 数据卷
## 故障排除
### Langflow 无法启动
- 查看日志:`docker compose logs langflow`
- 确保 PostgreSQL 健康:`docker compose ps postgres`
- 验证端口 7860 未被使用
### 组件未加载
- 检查自定义组件路径是否正确
- 确保在自定义组件中安装了 Python 依赖项
- 检查日志中的组件错误
### 性能缓慢
- 在 `.env` 中增加资源限制
- 如果内存不足,减少 `LANGFLOW_WORKERS`
- 优化您的流(减少不必要的组件)
### 数据库连接错误
- 验证 PostgreSQL 正在运行:`docker compose ps postgres`
- 检查 `.env` 中的数据库凭证
- 确保 `LANGFLOW_DATABASE_URL` 正确
## 维护
### 备份
备份数据卷:
```bash
docker compose down
docker run --rm -v compose-anything_postgres_data:/data -v $(pwd):/backup alpine tar czf /backup/postgres-backup.tar.gz -C /data .
docker run --rm -v compose-anything_langflow_data:/data -v $(pwd):/backup alpine tar czf /backup/langflow-backup.tar.gz -C /data .
docker compose up -d
```
### 恢复
从备份恢复:
```bash
docker compose down
docker run --rm -v compose-anything_postgres_data:/data -v $(pwd):/backup alpine sh -c "cd /data && tar xzf /backup/postgres-backup.tar.gz"
docker run --rm -v compose-anything_langflow_data:/data -v $(pwd):/backup alpine sh -c "cd /data && tar xzf /backup/langflow-backup.tar.gz"
docker compose up -d
```
### 升级
升级 Langflow
1. 在 `.env` 中更新版本:
```bash
LANGFLOW_VERSION=1.2.0
```
2. 拉取新镜像并重启:
```bash
docker compose pull
docker compose up -d
```
3. 检查发布说明中的重大更改
## 常用命令
```bash
# 查看日志
docker compose logs -f langflow
# 重启 Langflow
docker compose restart langflow
# 访问 PostgreSQL
docker compose exec postgres psql -U langflow -d langflow
# 检查资源使用
docker stats
# 清理
docker compose down -v # 警告:删除所有数据
```
## 参考资料
- [官方文档](https://docs.langflow.org/)
- [GitHub 仓库](https://github.com/langflow-ai/langflow)
- [组件文档](https://docs.langflow.org/components/)
- [API 文档](https://docs.langflow.org/api/)
- [社区 Discord](https://discord.gg/langflow)
## 许可证
MIT - 查看 [LICENSE](https://github.com/langflow-ai/langflow/blob/main/LICENSE)

View File

@@ -12,7 +12,7 @@
# - Real-time monitoring and logging # - Real-time monitoring and logging
# - Multi-user support with authentication # - Multi-user support with authentication
# #
# Default Credentials: # Default Access:
# - Access UI at http://localhost:7860 # - Access UI at http://localhost:7860
# - No authentication by default (set LANGFLOW_AUTO_LOGIN=false to enable) # - No authentication by default (set LANGFLOW_AUTO_LOGIN=false to enable)
# #
@@ -30,15 +30,13 @@ x-defaults: &defaults
logging: logging:
driver: json-file driver: json-file
options: options:
max-size: 100m max-size: ${LOG_MAX_SIZE:-100m}
max-file: "3" max-file: "${LOG_MAX_FILE:-3}"
services: services:
langflow: langflow:
<<: *defaults <<: *defaults
image: ${GLOBAL_REGISTRY:-}langflowai/langflow:${LANGFLOW_VERSION:-latest} image: ${GLOBAL_REGISTRY:-}langflowai/langflow:${LANGFLOW_VERSION:-1.1.1}
pull_policy: always
container_name: langflow
ports: ports:
- "${LANGFLOW_PORT_OVERRIDE:-7860}:7860" - "${LANGFLOW_PORT_OVERRIDE:-7860}:7860"
environment: environment:
@@ -101,7 +99,6 @@ services:
postgres: postgres:
<<: *defaults <<: *defaults
image: ${GLOBAL_REGISTRY:-}postgres:${POSTGRES_VERSION:-16-alpine} image: ${GLOBAL_REGISTRY:-}postgres:${POSTGRES_VERSION:-16-alpine}
container_name: langflow-postgres
environment: environment:
- POSTGRES_DB=${POSTGRES_DB:-langflow} - POSTGRES_DB=${POSTGRES_DB:-langflow}
- POSTGRES_USER=${POSTGRES_USER:-langflow} - POSTGRES_USER=${POSTGRES_USER:-langflow}
@@ -115,6 +112,7 @@ services:
interval: 10s interval: 10s
timeout: 5s timeout: 5s
retries: 5 retries: 5
start_period: 10s
deploy: deploy:
resources: resources:
limits: limits:

View File

@@ -224,10 +224,6 @@ services:
volumes: volumes:
langfuse_postgres_data: langfuse_postgres_data:
driver: local
langfuse_clickhouse_data: langfuse_clickhouse_data:
driver: local
langfuse_clickhouse_logs: langfuse_clickhouse_logs:
driver: local
langfuse_minio_data: langfuse_minio_data:
driver: local

105
apps/sim/.env.example Normal file
View File

@@ -0,0 +1,105 @@
# =============================================================================
# Sim - AI Agent Workflow Builder Configuration
# =============================================================================
# Documentation: https://docs.sim.ai
# GitHub: https://github.com/simstudioai/sim
# -----------------------------------------------------------------------------
# Time Zone Configuration
# -----------------------------------------------------------------------------
TZ=UTC
# -----------------------------------------------------------------------------
# Image Versions
# -----------------------------------------------------------------------------
SIM_VERSION=latest
SIM_REALTIME_VERSION=latest
SIM_MIGRATIONS_VERSION=latest
PGVECTOR_VERSION=pg17
# -----------------------------------------------------------------------------
# Global Registry (optional, e.g., mirror.example.com/)
# -----------------------------------------------------------------------------
# GLOBAL_REGISTRY=
# -----------------------------------------------------------------------------
# Port Overrides
# -----------------------------------------------------------------------------
SIM_PORT_OVERRIDE=3000
SIM_REALTIME_PORT_OVERRIDE=3002
POSTGRES_PORT_OVERRIDE=5432
# -----------------------------------------------------------------------------
# Application Configuration
# -----------------------------------------------------------------------------
NODE_ENV=production
NEXT_PUBLIC_APP_URL=http://localhost:3000
BETTER_AUTH_URL=http://localhost:3000
SOCKET_SERVER_URL=http://localhost:3002
NEXT_PUBLIC_SOCKET_URL=http://localhost:3002
# -----------------------------------------------------------------------------
# Security Secrets (REQUIRED: Generate secure values in production)
# -----------------------------------------------------------------------------
# Generate with: openssl rand -hex 32
BETTER_AUTH_SECRET=your_auth_secret_here
ENCRYPTION_KEY=your_encryption_key_here
# -----------------------------------------------------------------------------
# API Keys (Optional)
# -----------------------------------------------------------------------------
# COPILOT_API_KEY=
# SIM_AGENT_API_URL=
# -----------------------------------------------------------------------------
# Ollama Configuration
# -----------------------------------------------------------------------------
# For external Ollama on host machine:
# - macOS/Windows: http://host.docker.internal:11434
# - Linux: http://YOUR_HOST_IP:11434 (e.g., http://192.168.1.100:11434)
OLLAMA_URL=http://localhost:11434
# -----------------------------------------------------------------------------
# PostgreSQL Configuration
# -----------------------------------------------------------------------------
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=simstudio
# -----------------------------------------------------------------------------
# Resource Limits - Main Application
# -----------------------------------------------------------------------------
SIM_CPU_LIMIT=4.0
SIM_CPU_RESERVATION=2.0
SIM_MEMORY_LIMIT=8G
SIM_MEMORY_RESERVATION=4G
# -----------------------------------------------------------------------------
# Resource Limits - Realtime Server
# -----------------------------------------------------------------------------
SIM_REALTIME_CPU_LIMIT=2.0
SIM_REALTIME_CPU_RESERVATION=1.0
SIM_REALTIME_MEMORY_LIMIT=4G
SIM_REALTIME_MEMORY_RESERVATION=2G
# -----------------------------------------------------------------------------
# Resource Limits - Database Migrations
# -----------------------------------------------------------------------------
SIM_MIGRATIONS_CPU_LIMIT=1.0
SIM_MIGRATIONS_CPU_RESERVATION=0.5
SIM_MIGRATIONS_MEMORY_LIMIT=512M
SIM_MIGRATIONS_MEMORY_RESERVATION=256M
# -----------------------------------------------------------------------------
# Resource Limits - PostgreSQL
# -----------------------------------------------------------------------------
POSTGRES_CPU_LIMIT=2.0
POSTGRES_CPU_RESERVATION=1.0
POSTGRES_MEMORY_LIMIT=2G
POSTGRES_MEMORY_RESERVATION=1G
# -----------------------------------------------------------------------------
# Logging Configuration
# -----------------------------------------------------------------------------
LOG_MAX_SIZE=100m
LOG_MAX_FILE=3

224
apps/sim/README.md Normal file
View File

@@ -0,0 +1,224 @@
# Sim - AI Agent Workflow Builder
Open-source platform for building and deploying AI agent workflows. Developers at organizations ranging from trail-blazing startups to Fortune 500 companies deploy agentic workflows on the Sim platform.
## Features
- **Visual Workflow Builder**: Multi-step AI agents and tools with drag-and-drop interface
- **LLM Orchestration**: Coordinate LLM calls, tools, webhooks, and external APIs
- **Scheduled Execution**: Event-driven and scheduled agent executions
- **RAG Support**: First-class support for retrieval-augmented generation
- **Multi-tenant**: Workspace-based access model for teams
- **100+ Integrations**: Connect with popular services and APIs
## Requirements
| Resource | Minimum | Recommended |
| -------- | --------- | ----------- |
| CPU | 2 cores | 4+ cores |
| RAM | 12 GB | 16+ GB |
| Storage | 20 GB SSD | 50+ GB SSD |
| Docker | 20.10+ | Latest |
## Quick Start
```bash
# Copy environment file
cp .env.example .env
# IMPORTANT: Generate secure secrets in production
sed -i "s/your_auth_secret_here/$(openssl rand -hex 32)/" .env
sed -i "s/your_encryption_key_here/$(openssl rand -hex 32)/" .env
# Start services
docker compose up -d
# View logs
docker compose logs -f simstudio
```
Access the application at [http://localhost:3000](http://localhost:3000)
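Once the containers report healthy, a quick smoke test against the two HTTP endpoints confirms the stack is reachable (the `/health` path on the realtime server mirrors the compose health check):
```bash
# Smoke test: main app and realtime server should both respond
curl -fsS -o /dev/null -w "app: %{http_code}\n" http://localhost:3000
curl -fsS -o /dev/null -w "realtime: %{http_code}\n" http://localhost:3002/health
```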
## Configuration
### Required Environment Variables
Before deployment, update these critical settings in `.env`:
```bash
# Security (REQUIRED - generate with: openssl rand -hex 32)
BETTER_AUTH_SECRET=<your-secret-here>
ENCRYPTION_KEY=<your-secret-here>
# Application URLs (update for production)
NEXT_PUBLIC_APP_URL=https://sim.yourdomain.com
BETTER_AUTH_URL=https://sim.yourdomain.com
NEXT_PUBLIC_SOCKET_URL=https://sim.yourdomain.com
# Database credentials (change defaults in production)
POSTGRES_USER=postgres
POSTGRES_PASSWORD=<strong-password>
POSTGRES_DB=simstudio
```
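A small pre-flight check can catch the placeholder secrets before they reach production; this is only a sketch that greps for the defaults shipped in `.env.example`:
```bash
# Abort if the placeholder secrets from .env.example are still present
if grep -qE "your_(auth_secret|encryption_key)_here" .env; then
  echo "ERROR: replace BETTER_AUTH_SECRET and ENCRYPTION_KEY before deploying" >&2
  exit 1
fi
```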
### Using with Ollama
Sim can work with local AI models using [Ollama](https://ollama.ai):
**External Ollama (running on host machine)**:
```bash
# macOS/Windows
OLLAMA_URL=http://host.docker.internal:11434
# Linux - use your actual host IP
OLLAMA_URL=http://192.168.1.100:11434
```
> **Note**: Inside Docker, `localhost` refers to the container. Use `host.docker.internal` (macOS/Windows) or your host's IP address (Linux).
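To confirm the URL actually resolves from inside the container, you can reuse the `wget` binary that the image's health check already relies on; `/api/tags` is Ollama's standard model-listing endpoint (adjust the URL to your setup):
```bash
# Verify Ollama is reachable from inside the simstudio container
docker compose exec simstudio wget -qO- http://host.docker.internal:11434/api/tags
```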
### Port Configuration
Default ports can be overridden via environment variables:
```bash
SIM_PORT_OVERRIDE=3000 # Main application
SIM_REALTIME_PORT_OVERRIDE=3002 # Realtime server
POSTGRES_PORT_OVERRIDE=5432 # PostgreSQL database
```
### Resource Limits
Adjust resource allocation based on your workload:
```bash
# Main application
SIM_CPU_LIMIT=4.0
SIM_MEMORY_LIMIT=8G
# Realtime server
SIM_REALTIME_CPU_LIMIT=2.0
SIM_REALTIME_MEMORY_LIMIT=4G
# PostgreSQL
POSTGRES_CPU_LIMIT=2.0
POSTGRES_MEMORY_LIMIT=2G
```
## Service Architecture
The deployment consists of 4 services:
1. **simstudio**: Main Next.js application (port 3000)
2. **realtime**: WebSocket server for real-time features (port 3002)
3. **migrations**: Database schema management (runs once)
4. **db**: PostgreSQL 17 with pgvector extension (port 5432)
## Common Operations
### View Logs
```bash
# All services
docker compose logs -f
# Specific service
docker compose logs -f simstudio
```
### Stop Services
```bash
docker compose down
```
### Update to Latest Version
```bash
docker compose pull
docker compose up -d
```
### Backup Database
```bash
docker compose exec db pg_dump -U postgres simstudio > backup_$(date +%Y%m%d).sql
```
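To automate this, the same command can run from cron on the host; the installation path, backup directory, and schedule below are placeholders:
```bash
# Example crontab entry: nightly dump at 03:00 (paths are placeholders)
0 3 * * * cd /opt/sim && docker compose exec -T db pg_dump -U postgres simstudio > /opt/sim/backups/backup_$(date +\%Y\%m\%d).sql
```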
### Restore Database
```bash
cat backup.sql | docker compose exec -T db psql -U postgres simstudio
```
## Security Considerations
- **Change default credentials**: Update `POSTGRES_PASSWORD` in production
- **Generate strong secrets**: Use `openssl rand -hex 32` for all secret values
- **Use HTTPS**: Configure reverse proxy (Nginx/Caddy) with SSL certificates
- **Network isolation**: Keep database on internal network
- **Regular backups**: Automate database backups
- **Update regularly**: Pull latest images to get security patches
## Production Deployment
For production deployments:
1. **Use reverse proxy** (Nginx, Caddy, Traefik) for SSL/TLS termination
2. **Configure firewall** to restrict database access
3. **Set up monitoring** (health checks, metrics, logs)
4. **Enable backups** (automated PostgreSQL backups)
5. **Use external database** for better performance and reliability (optional)
Example Caddy configuration:
```caddy
sim.yourdomain.com {
reverse_proxy localhost:3000
handle /socket.io/* {
reverse_proxy localhost:3002
}
}
```
## Troubleshooting
### Models not showing in dropdown
If using external Ollama on host machine, ensure `OLLAMA_URL` uses `host.docker.internal` or your host's IP address, not `localhost`.
### Database connection errors
- Verify PostgreSQL is healthy: `docker compose ps`
- Check database logs: `docker compose logs db`
- Ensure migrations completed: `docker compose logs migrations`
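If those logs look clean, a direct readiness probe against the database narrows things down further (`pg_isready` is the same check the compose health check uses):
```bash
# Probe the database directly with the default credentials
docker compose exec db pg_isready -U postgres
docker compose exec db psql -U postgres -d simstudio -c "SELECT 1;"
```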
### Port conflicts
If ports are already in use, override them:
```bash
SIM_PORT_OVERRIDE=3100 \
SIM_REALTIME_PORT_OVERRIDE=3102 \
POSTGRES_PORT_OVERRIDE=5433 \
docker compose up -d
```
## Additional Resources
- **Official Documentation**: [https://docs.sim.ai](https://docs.sim.ai)
- **GitHub Repository**: [https://github.com/simstudioai/sim](https://github.com/simstudioai/sim)
- **Cloud-hosted Version**: [https://sim.ai](https://sim.ai)
- **Self-hosting Guide**: [https://docs.sim.ai/self-hosting](https://docs.sim.ai/self-hosting)
## License
This configuration follows the Sim project licensing. Check the [official repository](https://github.com/simstudioai/sim) for license details.
## Support
For issues and questions:
- GitHub Issues: [https://github.com/simstudioai/sim/issues](https://github.com/simstudioai/sim/issues)
- Documentation: [https://docs.sim.ai](https://docs.sim.ai)

224
apps/sim/README.zh.md Normal file
View File

@@ -0,0 +1,224 @@
# Sim - AI Agent Workflow Builder
开源 AI 智能体工作流构建和部署平台。从初创公司到世界 500 强企业的开发者都在 Sim 平台上部署智能体工作流。
## 功能特性
- **可视化工作流构建器**:通过拖拽界面构建多步骤 AI 智能体和工具
- **LLM 编排**:协调 LLM 调用、工具、Webhook 和外部 API
- **计划执行**:支持事件驱动和定时调度的智能体执行
- **RAG 支持**:一流的检索增强生成(RAG)支持
- **多租户**:基于工作空间的团队访问模型
- **100+ 集成**:连接流行的服务和 API
## 系统要求
| 资源 | 最低要求 | 推荐配置 |
| ------ | --------- | ---------------- |
| CPU | 2 核 | 4 核及以上 |
| 内存 | 12 GB | 16 GB 及以上 |
| 存储 | 20 GB SSD | 50 GB 及以上 SSD |
| Docker | 20.10+ | 最新版本 |
## 快速开始
```bash
# 复制环境配置文件
cp .env.example .env
# 重要:在生产环境中生成安全密钥
sed -i "s/your_auth_secret_here/$(openssl rand -hex 32)/" .env
sed -i "s/your_encryption_key_here/$(openssl rand -hex 32)/" .env
# 启动服务
docker compose up -d
# 查看日志
docker compose logs -f simstudio
```
访问应用:[http://localhost:3000](http://localhost:3000)
## 配置说明
### 必需的环境变量
在部署前,请在 `.env` 文件中更新这些关键设置:
```bash
# 安全密钥(必需 - 使用以下命令生成:openssl rand -hex 32)
BETTER_AUTH_SECRET=<your-secret-here>
ENCRYPTION_KEY=<your-secret-here>
# 应用 URL(生产环境需更新)
NEXT_PUBLIC_APP_URL=https://sim.yourdomain.com
BETTER_AUTH_URL=https://sim.yourdomain.com
NEXT_PUBLIC_SOCKET_URL=https://sim.yourdomain.com
# 数据库凭据(生产环境需更改默认值)
POSTGRES_USER=postgres
POSTGRES_PASSWORD=<strong-password>
POSTGRES_DB=simstudio
```
### 使用 Ollama
Sim 可以配合本地 AI 模型使用 [Ollama](https://ollama.ai)
**外部 Ollama(运行在宿主机上)**:
```bash
# macOS/Windows
OLLAMA_URL=http://host.docker.internal:11434
# Linux - 使用宿主机实际 IP
OLLAMA_URL=http://192.168.1.100:11434
```
> **注意**:在 Docker 内部,`localhost` 指向容器本身。请使用 `host.docker.internal`(macOS/Windows)或宿主机 IP 地址(Linux)。
### 端口配置
默认端口可通过环境变量覆盖:
```bash
SIM_PORT_OVERRIDE=3000 # 主应用
SIM_REALTIME_PORT_OVERRIDE=3002 # 实时服务器
POSTGRES_PORT_OVERRIDE=5432 # PostgreSQL 数据库
```
### 资源限制
根据工作负载调整资源分配:
```bash
# 主应用
SIM_CPU_LIMIT=4.0
SIM_MEMORY_LIMIT=8G
# 实时服务器
SIM_REALTIME_CPU_LIMIT=2.0
SIM_REALTIME_MEMORY_LIMIT=4G
# PostgreSQL
POSTGRES_CPU_LIMIT=2.0
POSTGRES_MEMORY_LIMIT=2G
```
## 服务架构
部署包含 4 个服务:
1. **simstudio**:主 Next.js 应用(端口 3000)
2. **realtime**:WebSocket 实时功能服务器(端口 3002)
3. **migrations**:数据库架构管理(仅运行一次)
4. **db**:带 pgvector 扩展的 PostgreSQL 17(端口 5432)
## 常用操作
### 查看日志
```bash
# 所有服务
docker compose logs -f
# 特定服务
docker compose logs -f simstudio
```
### 停止服务
```bash
docker compose down
```
### 更新到最新版本
```bash
docker compose pull
docker compose up -d
```
### 备份数据库
```bash
docker compose exec db pg_dump -U postgres simstudio > backup_$(date +%Y%m%d).sql
```
### 恢复数据库
```bash
cat backup.sql | docker compose exec -T db psql -U postgres simstudio
```
## 安全注意事项
- **更改默认凭据**:在生产环境中更新 `POSTGRES_PASSWORD`
- **生成强密钥**:使用 `openssl rand -hex 32` 生成所有密钥值
- **使用 HTTPS**:配置反向代理(Nginx/Caddy)和 SSL 证书
- **网络隔离**:将数据库保持在内部网络
- **定期备份**:自动化数据库备份
- **定期更新**:拉取最新镜像获取安全补丁
## 生产环境部署
生产环境部署建议:
1. **使用反向代理**(Nginx、Caddy、Traefik)进行 SSL/TLS 终止
2. **配置防火墙**限制数据库访问
3. **设置监控**(健康检查、指标、日志)
4. **启用备份**(自动化 PostgreSQL 备份)
5. **使用外部数据库**以获得更好的性能和可靠性(可选)
Caddy 配置示例:
```caddy
sim.yourdomain.com {
reverse_proxy localhost:3000
handle /socket.io/* {
reverse_proxy localhost:3002
}
}
```
## 故障排查
### 模型不显示在下拉列表中
如果使用宿主机上的外部 Ollama请确保 `OLLAMA_URL` 使用 `host.docker.internal` 或宿主机 IP 地址,而不是 `localhost`
### 数据库连接错误
- 验证 PostgreSQL 健康状态:`docker compose ps`
- 检查数据库日志:`docker compose logs db`
- 确保迁移完成:`docker compose logs migrations`
### 端口冲突
如果端口已被占用,可以覆盖它们:
```bash
SIM_PORT_OVERRIDE=3100 \
SIM_REALTIME_PORT_OVERRIDE=3102 \
POSTGRES_PORT_OVERRIDE=5433 \
docker compose up -d
```
## 相关资源
- **官方文档**[https://docs.sim.ai](https://docs.sim.ai)
- **GitHub 仓库**[https://github.com/simstudioai/sim](https://github.com/simstudioai/sim)
- **云托管版本**[https://sim.ai](https://sim.ai)
- **自托管指南**[https://docs.sim.ai/self-hosting](https://docs.sim.ai/self-hosting)
## 许可证
此配置遵循 Sim 项目许可。请查看[官方仓库](https://github.com/simstudioai/sim)了解许可证详情。
## 支持
如有问题和疑问:
- GitHub Issues[https://github.com/simstudioai/sim/issues](https://github.com/simstudioai/sim/issues)
- 文档:[https://docs.sim.ai](https://docs.sim.ai)

View File

@@ -0,0 +1,136 @@
# Sim - AI Agent Workflow Builder
# Open-source platform to build and deploy AI agent workflows
# Documentation: https://docs.sim.ai
x-defaults: &defaults
restart: unless-stopped
logging:
driver: json-file
options:
max-size: ${LOG_MAX_SIZE:-100m}
max-file: "${LOG_MAX_FILE:-3}"
services:
simstudio:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}ghcr.io/simstudioai/simstudio:${SIM_VERSION:-latest}
ports:
- "${SIM_PORT_OVERRIDE:-3000}:3000"
environment:
- TZ=${TZ:-UTC}
- NODE_ENV=production
- DATABASE_URL=postgresql://${POSTGRES_USER:-postgres}:${POSTGRES_PASSWORD:-postgres}@db:5432/${POSTGRES_DB:-simstudio}
- BETTER_AUTH_URL=${NEXT_PUBLIC_APP_URL:-http://localhost:3000}
- NEXT_PUBLIC_APP_URL=${NEXT_PUBLIC_APP_URL:-http://localhost:3000}
- BETTER_AUTH_SECRET=${BETTER_AUTH_SECRET:-your_auth_secret_here}
- ENCRYPTION_KEY=${ENCRYPTION_KEY:-your_encryption_key_here}
- COPILOT_API_KEY=${COPILOT_API_KEY:-}
- SIM_AGENT_API_URL=${SIM_AGENT_API_URL:-}
- OLLAMA_URL=${OLLAMA_URL:-http://localhost:11434}
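# NOTE: the default above points at the container's own localhost; for an Ollama
# instance on the host, set OLLAMA_URL in .env to http://host.docker.internal:11434
# (macOS/Windows) or to the host's LAN IP (Linux), as described in the README.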
- SOCKET_SERVER_URL=${SOCKET_SERVER_URL:-http://localhost:3002}
- NEXT_PUBLIC_SOCKET_URL=${NEXT_PUBLIC_SOCKET_URL:-http://localhost:3002}
depends_on:
db:
condition: service_healthy
migrations:
condition: service_completed_successfully
realtime:
condition: service_healthy
healthcheck:
test: ["CMD", "wget", "--spider", "--quiet", "http://127.0.0.1:3000"]
interval: 90s
timeout: 10s
retries: 3
start_period: 30s
deploy:
resources:
limits:
cpus: ${SIM_CPU_LIMIT:-4.0}
memory: ${SIM_MEMORY_LIMIT:-8G}
reservations:
cpus: ${SIM_CPU_RESERVATION:-2.0}
memory: ${SIM_MEMORY_RESERVATION:-4G}
realtime:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}ghcr.io/simstudioai/realtime:${SIM_REALTIME_VERSION:-latest}
ports:
- "${SIM_REALTIME_PORT_OVERRIDE:-3002}:3002"
environment:
- TZ=${TZ:-UTC}
- DATABASE_URL=postgresql://${POSTGRES_USER:-postgres}:${POSTGRES_PASSWORD:-postgres}@db:5432/${POSTGRES_DB:-simstudio}
- NEXT_PUBLIC_APP_URL=${NEXT_PUBLIC_APP_URL:-http://localhost:3000}
- BETTER_AUTH_URL=${BETTER_AUTH_URL:-http://localhost:3000}
- BETTER_AUTH_SECRET=${BETTER_AUTH_SECRET:-your_auth_secret_here}
depends_on:
db:
condition: service_healthy
healthcheck:
test: ["CMD", "wget", "--spider", "--quiet", "http://127.0.0.1:3002/health"]
interval: 90s
timeout: 10s
retries: 3
start_period: 30s
deploy:
resources:
limits:
cpus: ${SIM_REALTIME_CPU_LIMIT:-2.0}
memory: ${SIM_REALTIME_MEMORY_LIMIT:-4G}
reservations:
cpus: ${SIM_REALTIME_CPU_RESERVATION:-1.0}
memory: ${SIM_REALTIME_MEMORY_RESERVATION:-2G}
migrations:
image: ${GLOBAL_REGISTRY:-}ghcr.io/simstudioai/migrations:${SIM_MIGRATIONS_VERSION:-latest}
working_dir: /app/packages/db
environment:
- TZ=${TZ:-UTC}
- DATABASE_URL=postgresql://${POSTGRES_USER:-postgres}:${POSTGRES_PASSWORD:-postgres}@db:5432/${POSTGRES_DB:-simstudio}
depends_on:
db:
condition: service_healthy
command: ["bun", "run", "db:migrate"]
restart: "no"
logging:
driver: json-file
options:
max-size: ${LOG_MAX_SIZE:-100m}
max-file: "${LOG_MAX_FILE:-3}"
deploy:
resources:
limits:
cpus: ${SIM_MIGRATIONS_CPU_LIMIT:-1.0}
memory: ${SIM_MIGRATIONS_MEMORY_LIMIT:-512M}
reservations:
cpus: ${SIM_MIGRATIONS_CPU_RESERVATION:-0.5}
memory: ${SIM_MIGRATIONS_MEMORY_RESERVATION:-256M}
db:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}pgvector/pgvector:${PGVECTOR_VERSION:-pg17}
ports:
- "${POSTGRES_PORT_OVERRIDE:-5432}:5432"
environment:
- TZ=${TZ:-UTC}
- POSTGRES_USER=${POSTGRES_USER:-postgres}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-postgres}
- POSTGRES_DB=${POSTGRES_DB:-simstudio}
volumes:
- postgres_data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-postgres}"]
interval: 10s
timeout: 5s
retries: 5
start_period: 10s
deploy:
resources:
limits:
cpus: ${POSTGRES_CPU_LIMIT:-2.0}
memory: ${POSTGRES_MEMORY_LIMIT:-2G}
reservations:
cpus: ${POSTGRES_CPU_RESERVATION:-1.0}
memory: ${POSTGRES_MEMORY_RESERVATION:-1G}
volumes:
postgres_data:

View File

@@ -7,7 +7,7 @@ K3S_VERSION=v1.28.2+k3s1
# K3s DinD Image Version # K3s DinD Image Version
# Built image version tag # Built image version tag
K3S_DIND_VERSION=0.2.0 K3S_DIND_VERSION=0.2.1
# Preload Images # Preload Images
# Whether to pre-download common images during build (true/false) # Whether to pre-download common images during build (true/false)
# Set to false to speed up build time if you have good internet connectivity # Set to false to speed up build time if you have good internet connectivity

View File

@@ -62,13 +62,14 @@ A lightweight Kubernetes distribution (K3s) running inside a Docker-in-Docker (D
| Variable | Default | Description | | Variable | Default | Description |
| ----------------------------- | -------------- | ------------------------------------- | | ----------------------------- | -------------- | ------------------------------------- |
| `K3S_VERSION` | `v1.28.2+k3s1` | K3s version to install | | `K3S_VERSION` | `v1.28.2+k3s1` | K3s version to install |
| `K3S_DIND_VERSION` | `0.2.0` | Built image version tag | | `K3S_DIND_VERSION` | `0.2.1` | Built image version tag |
| `PRELOAD_IMAGES` | `true` | Pre-download images during build | | `PRELOAD_IMAGES` | `true` | Pre-download images during build |
| `TZ` | `UTC` | Container timezone | | `TZ` | `UTC` | Container timezone |
| `K3S_API_PORT_OVERRIDE` | `6443` | Kubernetes API server port | | `K3S_API_PORT_OVERRIDE` | `6443` | Kubernetes API server port |
| `DOCKER_TLS_PORT_OVERRIDE` | `2376` | Docker daemon TLS port | | `DOCKER_TLS_PORT_OVERRIDE` | `2376` | Docker daemon TLS port |
| `K3S_TOKEN` | (empty) | Shared secret for cluster join | | `K3S_TOKEN` | (empty) | Shared secret for cluster join |
| `K3S_DISABLE_SERVICES` | `traefik` | Services to disable (comma-separated) | | `K3S_DISABLE_SERVICES` | `traefik` | Services to disable (comma-separated) |
| `K3S_NODE_NAME` | `k3s-server` | Node name for the K3s server |
| `K3S_DIND_CPU_LIMIT` | `2.00` | CPU limit (cores) | | `K3S_DIND_CPU_LIMIT` | `2.00` | CPU limit (cores) |
| `K3S_DIND_MEMORY_LIMIT` | `4G` | Memory limit | | `K3S_DIND_MEMORY_LIMIT` | `4G` | Memory limit |
| `K3S_DIND_CPU_RESERVATION` | `0.50` | CPU reservation (cores) | | `K3S_DIND_CPU_RESERVATION` | `0.50` | CPU reservation (cores) |

View File

@@ -62,13 +62,14 @@
| 变量 | 默认值 | 说明 | | 变量 | 默认值 | 说明 |
| ----------------------------- | -------------- | ------------------------- | | ----------------------------- | -------------- | ------------------------- |
| `K3S_VERSION` | `v1.28.2+k3s1` | 要安装的 K3s 版本 | | `K3S_VERSION` | `v1.28.2+k3s1` | 要安装的 K3s 版本 |
| `K3S_DIND_VERSION` | `0.2.0` | 构建的镜像版本标签 | | `K3S_DIND_VERSION` | `0.2.1` | 构建的镜像版本标签 |
| `PRELOAD_IMAGES` | `true` | 构建时预下载镜像 | | `PRELOAD_IMAGES` | `true` | 构建时预下载镜像 |
| `TZ` | `UTC` | 容器时区 | | `TZ` | `UTC` | 容器时区 |
| `K3S_API_PORT_OVERRIDE` | `6443` | Kubernetes API 服务器端口 | | `K3S_API_PORT_OVERRIDE` | `6443` | Kubernetes API 服务器端口 |
| `DOCKER_TLS_PORT_OVERRIDE` | `2376` | Docker 守护进程 TLS 端口 | | `DOCKER_TLS_PORT_OVERRIDE` | `2376` | Docker 守护进程 TLS 端口 |
| `K3S_TOKEN` | (空) | 集群加入的共享密钥 | | `K3S_TOKEN` | (空) | 集群加入的共享密钥 |
| `K3S_DISABLE_SERVICES` | `traefik` | 要禁用的服务(逗号分隔) | | `K3S_DISABLE_SERVICES` | `traefik` | 要禁用的服务(逗号分隔) |
| `K3S_NODE_NAME` | `k3s-server` | K3s 服务器的节点名称 |
| `K3S_DIND_CPU_LIMIT` | `2.00` | CPU 限制(核心数) | | `K3S_DIND_CPU_LIMIT` | `2.00` | CPU 限制(核心数) |
| `K3S_DIND_MEMORY_LIMIT` | `4G` | 内存限制 | | `K3S_DIND_MEMORY_LIMIT` | `4G` | 内存限制 |
| `K3S_DIND_CPU_RESERVATION` | `0.50` | CPU 预留(核心数) | | `K3S_DIND_CPU_RESERVATION` | `0.50` | CPU 预留(核心数) |

View File

@@ -13,7 +13,7 @@ x-defaults: &defaults
services: services:
k3s: k3s:
<<: *defaults <<: *defaults
image: ${GLOBAL_REGISTRY:-}alexsuntop/k3s-inside-dind:${K3S_DIND_VERSION:-0.2.0} image: ${GLOBAL_REGISTRY:-}alexsuntop/k3s-inside-dind:${K3S_DIND_VERSION:-0.2.1}
build: build:
context: . context: .
dockerfile: Dockerfile dockerfile: Dockerfile
@@ -31,6 +31,7 @@ services:
- TZ=${TZ:-UTC} - TZ=${TZ:-UTC}
- K3S_TOKEN=${K3S_TOKEN:-} - K3S_TOKEN=${K3S_TOKEN:-}
- K3S_DISABLE_SERVICES=${K3S_DISABLE_SERVICES:-traefik} - K3S_DISABLE_SERVICES=${K3S_DISABLE_SERVICES:-traefik}
- K3S_NODE_NAME=${K3S_NODE_NAME:-k3s-server}
healthcheck: healthcheck:
test: ["CMD", "k3s", "kubectl", "get", "--raw", "/healthz"] test: ["CMD", "k3s", "kubectl", "get", "--raw", "/healthz"]
interval: 30s interval: 30s

View File

@@ -19,7 +19,12 @@ echo "Docker is ready."
echo "Starting K3s..." echo "Starting K3s..."
# Build K3s server arguments # Build K3s server arguments
if [ -n "$K3S_ARGS" ]; then
echo "Using custom K3S_ARGS: $K3S_ARGS"
else
echo "No custom K3S_ARGS provided, using defaults."
K3S_ARGS="--snapshotter=native --write-kubeconfig-mode=644 --https-listen-port=6443" K3S_ARGS="--snapshotter=native --write-kubeconfig-mode=644 --https-listen-port=6443"
fi
# Add disable services if specified # Add disable services if specified
if [ -n "$K3S_DISABLE_SERVICES" ]; then if [ -n "$K3S_DISABLE_SERVICES" ]; then

View File

@@ -1,230 +0,0 @@
# Langflow
Langflow is a low-code visual framework for building AI applications. It's Python-based and agnostic to any model, API, or database, making it easy to build RAG applications, multi-agent systems, and custom AI workflows.
## Features
- **Visual Flow Builder**: Drag-and-drop interface for building AI applications
- **Multi-Model Support**: Works with OpenAI, Anthropic, Google, HuggingFace, and more
- **RAG Components**: Built-in support for vector databases and retrieval
- **Custom Components**: Create your own Python components
- **Agent Support**: Build multi-agent systems with memory and tools
- **Real-Time Monitoring**: Track executions and debug flows
- **API Integration**: REST API for programmatic access
## Quick Start
1. Copy `.env.example` to `.env`:
```bash
cp .env.example .env
```
2. (Optional) Edit `.env` to customize settings:
- Generate a secure `LANGFLOW_SECRET_KEY` for production
- Set `LANGFLOW_AUTO_LOGIN=false` to require authentication
- Configure superuser credentials
- Add API keys for LLM providers
3. Start Langflow:
```bash
docker compose up -d
```
4. Wait for services to be ready
5. Access Langflow UI at `http://localhost:7860`
6. Start building your AI application!
## Default Configuration
| Service | Port | Description |
| ---------- | ---- | ------------------- |
| Langflow | 7860 | Web UI and API |
| PostgreSQL | 5432 | Database (internal) |
**Default Credentials** (if authentication enabled):
- Username: `langflow`
- Password: `langflow`
## Environment Variables
Key environment variables (see `.env.example` for full list):
| Variable | Description | Default |
| ----------------------------- | ----------------------------- | ---------- |
| `LANGFLOW_VERSION` | Langflow image version | `latest` |
| `LANGFLOW_PORT_OVERRIDE` | Host port for UI | `7860` |
| `POSTGRES_PASSWORD` | Database password | `langflow` |
| `LANGFLOW_AUTO_LOGIN` | Auto-login (disable for auth) | `true` |
| `LANGFLOW_SUPERUSER` | Superuser username | `langflow` |
| `LANGFLOW_SUPERUSER_PASSWORD` | Superuser password | `langflow` |
| `LANGFLOW_SECRET_KEY` | Secret key for sessions | (empty) |
| `LANGFLOW_COMPONENTS_PATH` | Custom components directory | (empty) |
| `LANGFLOW_LOAD_FLOWS_PATH` | Auto-load flows directory | (empty) |
| `TZ` | Timezone | `UTC` |
## Resource Requirements
**Minimum**:
- CPU: 1 core
- RAM: 1GB
- Disk: 5GB
**Recommended**:
- CPU: 2+ cores
- RAM: 2GB+
- Disk: 20GB+
## Volumes
- `postgres_data`: PostgreSQL database data
- `langflow_data`: Langflow configuration, flows, and logs
## Using Langflow
### Building Your First Flow
1. Access the UI at `http://localhost:7860`
2. Click "New Flow" or use a template
3. Drag components from the sidebar to the canvas
4. Connect components by dragging between ports
5. Configure component parameters
6. Click "Run" to test your flow
7. Use the API or integrate with your application
### Adding API Keys
You can add API keys for LLM providers in two ways:
#### Option 1: Global Variables (Recommended)
1. Click your profile icon → Settings
2. Go to "Global Variables"
3. Add your API keys (e.g., `OPENAI_API_KEY`)
4. Reference them in components using `{OPENAI_API_KEY}`
#### Option 2: Environment Variables
Add to your `.env` file:
```text
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...
```
Langflow will automatically create global variables from these.
### Using the API
Get your API token from the UI:
1. Click your profile icon → Settings
2. Go to "API Keys"
3. Create a new API key
Example: Run a flow
```bash
curl -X POST http://localhost:7860/api/v1/run/YOUR_FLOW_ID \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{"input_value": "Hello"}'
```
### Custom Components
1. Create a directory for your components
2. Set `LANGFLOW_COMPONENTS_PATH` in `.env`
3. Create Python files with your component classes
4. Restart Langflow to load them
Example component structure:
```python
from langflow import CustomComponent
class MyComponent(CustomComponent):
display_name = "My Component"
description = "Does something cool"
def build(self):
# Your component logic
return result
```
## Security Considerations
1. **Secret Key**: Generate a strong `LANGFLOW_SECRET_KEY` for production:
```bash
python -c "from secrets import token_urlsafe; print(token_urlsafe(32))"
```
2. **Authentication**: Set `LANGFLOW_AUTO_LOGIN=false` to require login
3. **Database Password**: Use a strong PostgreSQL password
4. **API Keys**: Store sensitive keys as global variables, not in flows
5. **SSL/TLS**: Use reverse proxy with HTTPS in production
6. **Network Access**: Restrict access with firewall rules
## Upgrading
To upgrade Langflow:
1. Update `LANGFLOW_VERSION` in `.env` (or use `latest`)
2. Pull and restart:
```bash
docker compose pull
docker compose up -d
```
3. Check logs:
```bash
docker compose logs -f langflow
```
## Troubleshooting
**Service won't start:**
- Check logs: `docker compose logs langflow`
- Verify database: `docker compose ps postgres`
- Ensure sufficient resources allocated
**Cannot access UI:**
- Check port 7860 is not in use: `netstat -an | findstr 7860`
- Verify firewall settings
- Check container health: `docker compose ps`
**API key not working:**
- Verify the key is set in Global Variables
- Check the variable name matches in your components
- Ensure `LANGFLOW_STORE_ENVIRONMENT_VARIABLES=true`
**Flow execution errors:**
- Check component configurations
- Review logs in the UI under each component
- Verify API keys have sufficient credits/permissions
## References
- Official Website: <https://langflow.org>
- Documentation: <https://docs.langflow.org>
- GitHub: <https://github.com/langflow-ai/langflow>
- Discord Community: <https://discord.gg/EqksyE2EX9>
- Docker Hub: <https://hub.docker.com/r/langflowai/langflow>
## License
Langflow is licensed under MIT. See [LICENSE](https://github.com/langflow-ai/langflow/blob/main/LICENSE) for more information.

View File

@@ -1,230 +0,0 @@
# Langflow
Langflow 是一个低代码可视化框架,用于构建 AI 应用。它基于 Python与任何模型、API 或数据库无关,可轻松构建 RAG 应用、多智能体系统和自定义 AI 工作流。
## 功能特点
- **可视化流构建器**:拖放界面构建 AI 应用
- **多模型支持**:支持 OpenAI、Anthropic、Google、HuggingFace 等
- **RAG 组件**:内置向量数据库和检索支持
- **自定义组件**:创建您自己的 Python 组件
- **智能体支持**:构建具有记忆和工具的多智能体系统
- **实时监控**:跟踪执行并调试流程
- **API 集成**:用于编程访问的 REST API
## 快速开始
1. 复制 `.env.example``.env`
```bash
copy .env.example .env
```
2. (可选)编辑 `.env` 自定义设置:
- 为生产环境生成安全的 `LANGFLOW_SECRET_KEY`
- 设置 `LANGFLOW_AUTO_LOGIN=false` 以要求身份验证
- 配置超级用户凭证
- 为 LLM 提供商添加 API 密钥
3. 启动 Langflow
```bash
docker compose up -d
```
4. 等待服务就绪
5. 访问 Langflow UI`http://localhost:7860`
6. 开始构建您的 AI 应用!
## 默认配置
| 服务 | 端口 | 说明 |
| ---------- | ---- | -------------- |
| Langflow | 7860 | Web UI 和 API |
| PostgreSQL | 5432 | 数据库(内部) |
**默认凭证**(如果启用了身份验证):
- 用户名:`langflow`
- 密码:`langflow`
## 环境变量
主要环境变量(完整列表请参阅 `.env.example`
| 变量 | 说明 | 默认值 |
| ----------------------------- | ------------------------------ | ---------- |
| `LANGFLOW_VERSION` | Langflow 镜像版本 | `latest` |
| `LANGFLOW_PORT_OVERRIDE` | UI 的主机端口 | `7860` |
| `POSTGRES_PASSWORD` | 数据库密码 | `langflow` |
| `LANGFLOW_AUTO_LOGIN` | 自动登录(禁用以启用身份验证) | `true` |
| `LANGFLOW_SUPERUSER` | 超级用户用户名 | `langflow` |
| `LANGFLOW_SUPERUSER_PASSWORD` | 超级用户密码 | `langflow` |
| `LANGFLOW_SECRET_KEY` | 会话密钥 | (空) |
| `LANGFLOW_COMPONENTS_PATH` | 自定义组件目录 | (空) |
| `LANGFLOW_LOAD_FLOWS_PATH` | 自动加载流目录 | (空) |
| `TZ` | 时区 | `UTC` |
## 资源需求
**最低要求**
- CPU1 核心
- 内存1GB
- 磁盘5GB
**推荐配置**
- CPU2+ 核心
- 内存2GB+
- 磁盘20GB+
## 数据卷
- `postgres_data`PostgreSQL 数据库数据
- `langflow_data`Langflow 配置、流和日志
## 使用 Langflow
### 构建您的第一个流
1. 访问 UI`http://localhost:7860`
2. 点击 "New Flow" 或使用模板
3. 从侧边栏拖动组件到画布
4. 通过在端口之间拖动来连接组件
5. 配置组件参数
6. 点击 "Run" 测试您的流
7. 使用 API 或与您的应用集成
### 添加 API 密钥
您可以通过两种方式为 LLM 提供商添加 API 密钥:
#### 方法 1全局变量推荐
1. 点击您的个人资料图标 → Settings
2. 进入 "Global Variables"
3. 添加您的 API 密钥(例如 `OPENAI_API_KEY`
4. 在组件中使用 `{OPENAI_API_KEY}` 引用它们
#### 方法 2环境变量
添加到您的 `.env` 文件:
```text
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...
```
Langflow 会自动从这些创建全局变量。
### 使用 API
从 UI 获取您的 API 令牌:
1. 点击您的个人资料图标 → Settings
2. 进入 "API Keys"
3. 创建新的 API 密钥
示例:运行流
```bash
curl -X POST http://localhost:7860/api/v1/run/YOUR_FLOW_ID \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{"input_value": "Hello"}'
```
### 自定义组件
1. 为您的组件创建一个目录
2. 在 `.env` 中设置 `LANGFLOW_COMPONENTS_PATH`
3. 使用您的组件类创建 Python 文件
4. 重启 Langflow 以加载它们
示例组件结构:
```python
from langflow import CustomComponent
class MyComponent(CustomComponent):
display_name = "My Component"
description = "Does something cool"
def build(self):
# Your component logic
return result
```
## 安全注意事项
1. **密钥**:为生产环境生成强 `LANGFLOW_SECRET_KEY`
```bash
python -c "from secrets import token_urlsafe; print(token_urlsafe(32))"
```
2. **身份验证**:设置 `LANGFLOW_AUTO_LOGIN=false` 以要求登录
3. **数据库密码**:使用强 PostgreSQL 密码
4. **API 密钥**:将敏感密钥存储为全局变量,而不是在流中
5. **SSL/TLS**:在生产环境中使用带 HTTPS 的反向代理
6. **网络访问**:使用防火墙规则限制访问
## 升级
升级 Langflow
1. 在 `.env` 中更新 `LANGFLOW_VERSION`(或使用 `latest`
2. 拉取并重启:
```bash
docker compose pull
docker compose up -d
```
3. 检查日志:
```bash
docker compose logs -f langflow
```
## 故障排除
**服务无法启动:**
- 检查日志:`docker compose logs langflow`
- 验证数据库:`docker compose ps postgres`
- 确保分配了足够的资源
**无法访问 UI**
- 检查端口 7860 未被占用:`netstat -an | findstr 7860`
- 验证防火墙设置
- 检查容器健康:`docker compose ps`
**API 密钥不工作:**
- 验证密钥已在全局变量中设置
- 检查变量名称在组件中匹配
- 确保 `LANGFLOW_STORE_ENVIRONMENT_VARIABLES=true`
**流执行错误:**
- 检查组件配置
- 在 UI 中每个组件下查看日志
- 验证 API 密钥有足够的额度/权限
## 参考资料
- 官方网站:<https://langflow.org>
- 文档:<https://docs.langflow.org>
- GitHub<https://github.com/langflow-ai/langflow>
- Discord 社区:<https://discord.gg/EqksyE2EX9>
- Docker Hub<https://hub.docker.com/r/langflowai/langflow>
## 许可证
Langflow 使用 MIT 许可证。详情请参阅 [LICENSE](https://github.com/langflow-ai/langflow/blob/main/LICENSE)。

37
src/pingap/.env.example Normal file
View File

@@ -0,0 +1,37 @@
# Pingap Configuration
# Global Settings
# GLOBAL_REGISTRY=your-registry.com/
TZ=UTC
# Pingap Version
# Use version-full for production (includes OpenTelemetry, Sentry, image compression plugins)
# Available tags: latest, full, 0.12.1, 0.12.1-full
PINGAP_VERSION=0.12.1-full
# Container Name (optional, leave empty for auto-generated name)
PINGAP_CONTAINER_NAME=
# Port Overrides
# HTTP port - exposed on host
PINGAP_HTTP_PORT_OVERRIDE=80
# HTTPS port - exposed on host
PINGAP_HTTPS_PORT_OVERRIDE=443
# Data Directory
# Path for persistent storage of configuration and data
PINGAP_DATA_DIR=./pingap
# Admin Configuration
# Admin interface address (format: host:port/path)
PINGAP_ADMIN_ADDR=0.0.0.0:80/pingap
# Admin username
PINGAP_ADMIN_USER=admin
# Admin password (REQUIRED - set a strong password!)
PINGAP_ADMIN_PASSWORD=changeme
# Resource Limits
PINGAP_CPU_LIMIT=1.0
PINGAP_MEMORY_LIMIT=512M
PINGAP_CPU_RESERVATION=0.5
PINGAP_MEMORY_RESERVATION=256M

127
src/pingap/README.md Normal file
View File

@@ -0,0 +1,127 @@
# Pingap
[中文说明](./README.zh.md)
A high-performance reverse proxy built on Cloudflare's Pingora framework, designed as a more efficient alternative to Nginx, with dynamic configuration, hot reloading, and an intuitive web admin interface.
## Features
- **High Performance**: Built on Cloudflare's Pingora framework for exceptional performance
- **Dynamic Configuration**: Hot-reload configuration changes without downtime
- **Web Admin Interface**: Manage your proxy through an intuitive web UI
- **Plugin Ecosystem**: Rich plugin support for extended functionality
- **Full Version Features**: Includes OpenTelemetry, Sentry, and image compression plugins
- **Zero Downtime**: Configuration changes applied without service interruption
- **TOML Configuration**: Simple and concise configuration files
## Quick Start
1. Copy the environment file and configure it:
```bash
cp .env.example .env
```
2. **IMPORTANT**: Edit `.env` and set a strong password:
```bash
PINGAP_ADMIN_PASSWORD=your-strong-password-here
```
3. Start the service:
```bash
docker compose up -d
```
4. Access the web admin interface at:
```text
http://localhost/pingap/
```
- Default username: `admin`
- Password: The one you set in `.env`
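Before configuring upstreams, it is worth confirming the admin endpoint answers; this probes the same URL as the container's built-in health check and assumes the default port mapping:
```bash
# Check that the admin path responds (status line only)
curl -sI http://localhost/pingap/ | head -n 1
```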
## Configuration
### Environment Variables
| Variable | Description | Default |
| ---------------------------- | ------------------------------------------ | ------------------- |
| `PINGAP_VERSION` | Image version (recommended: `0.12.1-full`) | `0.12.1-full` |
| `PINGAP_HTTP_PORT_OVERRIDE` | HTTP port on host | `80` |
| `PINGAP_HTTPS_PORT_OVERRIDE` | HTTPS port on host | `443` |
| `PINGAP_DATA_DIR` | Data directory for persistent storage | `./pingap` |
| `PINGAP_ADMIN_ADDR` | Admin interface address | `0.0.0.0:80/pingap` |
| `PINGAP_ADMIN_USER` | Admin username | `admin` |
| `PINGAP_ADMIN_PASSWORD` | Admin password (REQUIRED) | - |
| `PINGAP_CPU_LIMIT` | CPU limit | `1.0` |
| `PINGAP_MEMORY_LIMIT` | Memory limit | `512M` |
### Image Versions
- `vicanso/pingap:latest` - Latest development version (not recommended for production)
- `vicanso/pingap:full` - Latest development version with all features
- `vicanso/pingap:0.12.1` - Stable version without extra dependencies
- `vicanso/pingap:0.12.1-full` - **Recommended**: Stable version with OpenTelemetry, Sentry, and image compression
### Persistent Storage
Configuration and data are stored in the `PINGAP_DATA_DIR` directory (default: `./pingap`). This directory will be created automatically on first run.
## Usage
### Viewing Logs
```bash
docker compose logs -f pingap
```
### Restarting After Configuration Changes
While Pingap supports hot-reloading for most configuration changes (upstream, location, certificate), changes to server configuration require a restart:
```bash
docker compose restart pingap
```
### Stopping the Service
```bash
docker compose down
```
## Important Notes
### Security
- **Always set a strong password** for `PINGAP_ADMIN_PASSWORD`
- Change the default admin username if possible
- Consider restricting admin interface access to specific IPs
- Use HTTPS for the admin interface in production
### Production Recommendations
- Use versioned tags (e.g., `0.12.1-full`) instead of `latest` or `full`
- Configure appropriate resource limits based on your traffic
- Set up proper monitoring and logging
- Enable HTTPS with valid certificates
- Regular backups of the `pingap` data directory
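For the backup point above, the data directory is plain files on the host, so a dated tarball is usually enough (the `./pingap` path assumes the default `PINGAP_DATA_DIR`):
```bash
# Snapshot the Pingap configuration directory (default PINGAP_DATA_DIR)
tar czf pingap-config-$(date +%Y%m%d).tar.gz ./pingap
```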
### Docker Best Practices
- The container runs with the `--autoreload` flag for hot configuration updates
- Avoid `--autorestart` in Docker, as it conflicts with the container lifecycle
- Use `docker compose restart` for server-level configuration changes
## Links
- [Official Website](https://pingap.io/)
- [Documentation](https://pingap.io/pingap-zh/docs/docker)
- [GitHub Repository](https://github.com/vicanso/pingap)
- [Docker Hub](https://hub.docker.com/r/vicanso/pingap)
## License
This Docker Compose configuration is provided as-is. Pingap is licensed under the Apache License 2.0.

127
src/pingap/README.zh.md Normal file
View File

@@ -0,0 +1,127 @@
# Pingap
[English](./README.md)
基于 Cloudflare Pingora 构建的高性能反向代理,设计为比 Nginx 更高效的替代方案,具有动态配置、热重载功能和直观的 Web 管理界面。
## 功能特性
- **高性能**:基于 Cloudflare 的 Pingora 框架,性能卓越
- **动态配置**:支持热重载配置更改,无需停机
- **Web 管理界面**:通过直观的 Web UI 管理代理服务
- **插件生态系统**:丰富的插件支持,可扩展功能
- **完整版功能**:包含 OpenTelemetry、Sentry 和图片压缩插件
- **零停机时间**:配置变更无需中断服务
- **TOML 配置**:简单明了的配置文件格式
## 快速开始
1. 复制环境变量文件并进行配置:
```bash
cp .env.example .env
```
2. **重要**:编辑 `.env` 文件,设置强密码:
```bash
PINGAP_ADMIN_PASSWORD=your-strong-password-here
```
3. 启动服务:
```bash
docker compose up -d
```
4. 访问 Web 管理后台:
```text
http://localhost/pingap/
```
- 默认用户名:`admin`
- 密码:在 `.env` 中设置的密码
## 配置说明
### 环境变量
| 变量名 | 说明 | 默认值 |
| ---------------------------- | ------------------------------- | ------------------- |
| `PINGAP_VERSION` | 镜像版本(推荐:`0.12.1-full`) | `0.12.1-full` |
| `PINGAP_HTTP_PORT_OVERRIDE` | 主机 HTTP 端口 | `80` |
| `PINGAP_HTTPS_PORT_OVERRIDE` | 主机 HTTPS 端口 | `443` |
| `PINGAP_DATA_DIR` | 持久化数据目录 | `./pingap` |
| `PINGAP_ADMIN_ADDR` | 管理界面地址 | `0.0.0.0:80/pingap` |
| `PINGAP_ADMIN_USER` | 管理员用户名 | `admin` |
| `PINGAP_ADMIN_PASSWORD` | 管理员密码(必填) | - |
| `PINGAP_CPU_LIMIT` | CPU 限制 | `1.0` |
| `PINGAP_MEMORY_LIMIT` | 内存限制 | `512M` |
### 镜像版本
- `vicanso/pingap:latest` - 最新开发版(不推荐用于生产环境)
- `vicanso/pingap:full` - 包含所有功能的最新开发版
- `vicanso/pingap:0.12.1` - 不含额外依赖的稳定版
- `vicanso/pingap:0.12.1-full` - **推荐**:包含 OpenTelemetry、Sentry 和图片压缩的稳定版
### 持久化存储
配置和数据存储在 `PINGAP_DATA_DIR` 目录中(默认:`./pingap`)。该目录将在首次运行时自动创建。
## 使用方法
### 查看日志
```bash
docker compose logs -f pingap
```
### 配置更改后重启
虽然 Pingap 支持大多数配置更改的热重载(upstream、location、certificate),但对 server 配置的更改需要重启:
```bash
docker compose restart pingap
```
### 停止服务
```bash
docker compose down
```
## 重要提示
### 安全性
- **务必设置强密码**给 `PINGAP_ADMIN_PASSWORD`
- 建议更改默认的管理员用户名
- 考虑限制管理界面只能从特定 IP 访问
- 生产环境中建议对管理界面启用 HTTPS
### 生产环境建议
- 使用带版本号的标签(如 `0.12.1-full`),而非 `latest` 或 `full`
- 根据流量情况配置适当的资源限制
- 设置适当的监控和日志记录
- 启用 HTTPS 并使用有效证书
- 定期备份 `pingap` 数据目录
### Docker 最佳实践
- 容器使用 `--autoreload` 标志以支持配置热更新
- 避免在 Docker 中使用 `--autorestart`,因为它与容器生命周期冲突
- 对于服务器级别的配置更改,使用 `docker compose restart`
## 相关链接
- [官方网站](https://pingap.io/)
- [文档](https://pingap.io/pingap-zh/docs/docker)
- [GitHub 仓库](https://github.com/vicanso/pingap)
- [Docker Hub](https://hub.docker.com/r/vicanso/pingap)
## 许可证
本 Docker Compose 配置按原样提供。Pingap 基于 Apache License 2.0 许可证。

View File

@@ -0,0 +1,44 @@
# Docker Compose for Pingap - High-performance reverse proxy
# https://pingap.io/
x-defaults: &defaults
restart: unless-stopped
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
pingap:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}vicanso/pingap:${PINGAP_VERSION:-0.12.1-full}
container_name: ${PINGAP_CONTAINER_NAME:-}
ports:
- "${PINGAP_HTTP_PORT_OVERRIDE:-80}:80"
- "${PINGAP_HTTPS_PORT_OVERRIDE:-443}:443"
volumes:
- ${PINGAP_DATA_DIR:-./pingap}:/opt/pingap
environment:
- TZ=${TZ:-UTC}
- PINGAP_CONF=/opt/pingap/conf
- PINGAP_ADMIN_ADDR=${PINGAP_ADMIN_ADDR:-0.0.0.0:80/pingap}
- PINGAP_ADMIN_USER=${PINGAP_ADMIN_USER:-admin}
- PINGAP_ADMIN_PASSWORD=${PINGAP_ADMIN_PASSWORD:?PINGAP_ADMIN_PASSWORD must be set}
command:
- pingap
- --autoreload
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "http://localhost:80/pingap/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
deploy:
resources:
limits:
cpus: ${PINGAP_CPU_LIMIT:-1.0}
memory: ${PINGAP_MEMORY_LIMIT:-512M}
reservations:
cpus: ${PINGAP_CPU_RESERVATION:-0.5}
memory: ${PINGAP_MEMORY_RESERVATION:-256M}