feat: add sim & pingap

commit d536fbc995 (parent 72b36f2748)
Author: Sun-ZhenXing
Date: 2025-12-27 11:24:44 +08:00

25 changed files with 1727 additions and 483 deletions


@@ -0,0 +1,67 @@
# Langflow Configuration
# Versions
LANGFLOW_VERSION=1.1.1
POSTGRES_VERSION=16-alpine
# Port Configuration
LANGFLOW_PORT_OVERRIDE=7860
# PostgreSQL Configuration
POSTGRES_DB=langflow
POSTGRES_USER=langflow
POSTGRES_PASSWORD=langflow
# Storage Configuration
LANGFLOW_CONFIG_DIR=/app/langflow
# Server Configuration
LANGFLOW_HOST=0.0.0.0
LANGFLOW_WORKERS=1
# Authentication - IMPORTANT: Configure for production!
# Set LANGFLOW_AUTO_LOGIN=false to require login
LANGFLOW_AUTO_LOGIN=true
LANGFLOW_SUPERUSER=langflow
LANGFLOW_SUPERUSER_PASSWORD=langflow
# Security - IMPORTANT: Generate a secure secret key for production!
# Use: python -c "from secrets import token_urlsafe; print(token_urlsafe(32))"
LANGFLOW_SECRET_KEY=
# Features
LANGFLOW_AUTO_SAVING=true
LANGFLOW_AUTO_SAVING_INTERVAL=1000
LANGFLOW_STORE_ENVIRONMENT_VARIABLES=true
LANGFLOW_FALLBACK_TO_ENV_VAR=true
# Optional: Custom components directory
LANGFLOW_COMPONENTS_PATH=
# Optional: Load flows from directory on startup
LANGFLOW_LOAD_FLOWS_PATH=
# Logging
LANGFLOW_LOG_LEVEL=error
# Timezone
TZ=UTC
# Analytics
DO_NOT_TRACK=false
# Resource Limits - Langflow
LANGFLOW_CPU_LIMIT=2.0
LANGFLOW_MEMORY_LIMIT=2G
LANGFLOW_CPU_RESERVATION=0.5
LANGFLOW_MEMORY_RESERVATION=512M
# Resource Limits - PostgreSQL
POSTGRES_CPU_LIMIT=1.0
POSTGRES_MEMORY_LIMIT=1G
POSTGRES_CPU_RESERVATION=0.25
POSTGRES_MEMORY_RESERVATION=256M
# Logging Configuration
LOG_MAX_SIZE=100m
LOG_MAX_FILE=3

apps/langflow/README.md

@@ -0,0 +1,336 @@
# Langflow
Langflow is a low-code visual framework for building AI applications. It's Python-based and agnostic to any model, API, or database, making it easy to build RAG applications, multi-agent systems, and custom AI workflows.
## Features
- **Visual Flow Builder**: Drag-and-drop interface for building AI applications
- **Multi-Model Support**: Works with OpenAI, Anthropic, Google, HuggingFace, and more
- **RAG Components**: Built-in support for vector databases and retrieval
- **Custom Components**: Create your own Python components
- **Agent Support**: Build multi-agent systems with memory and tools
- **Real-Time Monitoring**: Track executions and debug flows
- **API Integration**: REST API for programmatic access
## Quick Start
1. Copy `.env.example` to `.env`:
```bash
cp .env.example .env
```
2. (Optional) Edit `.env` to customize settings:
- Generate a secure `LANGFLOW_SECRET_KEY` for production
- Set `LANGFLOW_AUTO_LOGIN=false` to require authentication
- Configure superuser credentials
- Add API keys for LLM providers
3. Start Langflow:
```bash
docker compose up -d
```
4. Wait for services to be ready (usually takes 1-2 minutes)
5. Access Langflow UI at `http://localhost:7860`
6. Start building your AI application!
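Step 4 can be scripted instead of watched. The `/health_check` path below comes from this service's own compose healthcheck; the port, retry count, and 2-second interval are assumptions to adjust for your setup:

```shell
# Poll the Langflow health endpoint until it answers or we give up.
wait_for_langflow() {
  url="${1:-http://localhost:7860/health_check}"
  tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "ready"
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  echo "timed out"
  return 1
}

# wait_for_langflow   # blocks up to ~60s with the defaults
```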
## Default Configuration
| Service | Port | Description |
| ---------- | ---- | ------------------- |
| Langflow | 7860 | Web UI and API |
| PostgreSQL | 5432 | Database (internal) |
**Default Credentials** (if authentication enabled):
- Username: `langflow`
- Password: `langflow`
## Environment Variables
Key environment variables (see `.env.example` for full list):
| Variable | Description | Default |
| ----------------------------- | ----------------------------- | ---------- |
| `LANGFLOW_VERSION` | Langflow image version | `1.1.1` |
| `LANGFLOW_PORT_OVERRIDE` | Host port for UI | `7860` |
| `POSTGRES_PASSWORD` | Database password | `langflow` |
| `LANGFLOW_AUTO_LOGIN` | Auto-login (disable for auth) | `true` |
| `LANGFLOW_SUPERUSER` | Superuser username | `langflow` |
| `LANGFLOW_SUPERUSER_PASSWORD` | Superuser password | `langflow` |
| `LANGFLOW_SECRET_KEY` | Secret key for sessions | (empty) |
| `LANGFLOW_COMPONENTS_PATH` | Custom components directory | (empty) |
| `LANGFLOW_LOAD_FLOWS_PATH` | Auto-load flows directory | (empty) |
| `TZ` | Timezone | `UTC` |
## Resource Requirements
**Minimum**:
- CPU: 1 core
- RAM: 1GB
- Disk: 5GB
**Recommended**:
- CPU: 2+ cores
- RAM: 2GB+
- Disk: 20GB+
## Volumes
- `postgres_data`: PostgreSQL database data
- `langflow_data`: Langflow configuration, flows, and logs
## Using Langflow
### Building Your First Flow
1. Access the UI at `http://localhost:7860`
2. Click "New Flow" or use a template
3. Drag components from the sidebar to the canvas
4. Connect components by dragging between ports
5. Configure component parameters
6. Click "Run" to test your flow
7. Use the API or integrate with your application
### Adding LLM Providers
To use external LLM providers, configure their API keys:
1. In Langflow UI, go to Settings > Global Variables
2. Add your API keys (e.g., `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`)
3. Reference these variables in your flow components
Alternatively, add them to your `.env` file and restart:
```bash
# Example LLM API Keys (add to .env)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...
```
### Custom Components
To add custom components:
1. Create a directory for your components (e.g., `./custom_components`)
2. Update `.env`:
```bash
LANGFLOW_COMPONENTS_PATH=/app/langflow/custom_components
```
3. Mount the directory in `docker-compose.yaml`:
```yaml
volumes:
- ./custom_components:/app/langflow/custom_components
```
4. Restart Langflow
### Auto-Loading Flows
To automatically load flows on startup:
1. Export your flows as JSON files
2. Create a directory (e.g., `./flows`)
3. Update `.env`:
```bash
LANGFLOW_LOAD_FLOWS_PATH=/app/langflow/flows
```
4. Mount the directory in `docker-compose.yaml`:
```yaml
volumes:
- ./flows:/app/langflow/flows
```
5. Restart Langflow
## API Usage
Langflow provides a REST API for running flows programmatically.
### Get Flow ID
1. Open your flow in the UI
2. The flow ID is in the URL: `http://localhost:7860/flow/{flow_id}`
### Run Flow via API
```bash
curl -X POST http://localhost:7860/api/v1/run/{flow_id} \
-H "Content-Type: application/json" \
-d '{
"inputs": {
"input_field": "your input value"
}
}'
```
### With Authentication
If authentication is enabled, first get a token:
```bash
# Login
curl -X POST http://localhost:7860/api/v1/login \
-H "Content-Type: application/json" \
-d '{
"username": "langflow",
"password": "langflow"
}'
# Use token in subsequent requests
curl -X POST http://localhost:7860/api/v1/run/{flow_id} \
-H "Authorization: Bearer {token}" \
-H "Content-Type: application/json" \
-d '{
"inputs": {
"input_field": "your input value"
}
}'
```
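The token can be extracted from the login response in the shell without extra tooling. The `access_token` field name is an assumption about the response shape; verify it against your Langflow version before relying on it:

```shell
# Extract the access token from a JSON login response.
# Assumes the response contains an "access_token" string field.
extract_token() {
  printf '%s' "$1" | sed -n 's/.*"access_token"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p'
}

# Hypothetical usage against the login endpoint shown above:
# TOKEN=$(extract_token "$(curl -s -X POST http://localhost:7860/api/v1/login \
#   -H 'Content-Type: application/json' \
#   -d '{"username":"langflow","password":"langflow"}')")
```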
## Production Deployment
For production deployments:
1. **Enable Authentication**:
```bash
LANGFLOW_AUTO_LOGIN=false
LANGFLOW_SUPERUSER=admin
LANGFLOW_SUPERUSER_PASSWORD=<strong-password>
```
2. **Set Secret Key**:
```bash
# Generate a secure key
python -c "from secrets import token_urlsafe; print(token_urlsafe(32))"
# Add to .env
LANGFLOW_SECRET_KEY=<generated-key>
```
3. **Use Strong Database Password**:
```bash
POSTGRES_PASSWORD=<strong-password>
```
4. **Enable SSL/TLS**: Use a reverse proxy (nginx, traefik) with SSL certificates
5. **Configure Resource Limits**: Adjust CPU and memory limits based on your workload
6. **Backup Database**: Regularly backup the PostgreSQL data volume
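For point 6, a logical `pg_dump` complements the volume-level backup described under Maintenance. This is a sketch using the default user and database names from `.env.example`:

```shell
# Build a timestamped filename, then dump the langflow database into it.
backup_name() {
  printf 'langflow-%s.sql' "$(date -u +%Y%m%d-%H%M%S)"
}

# Requires the stack to be running:
# docker compose exec -T postgres pg_dump -U langflow langflow > "$(backup_name)"
```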
## Troubleshooting
### Langflow Won't Start
- Check logs: `docker compose logs langflow`
- Ensure PostgreSQL is healthy: `docker compose ps postgres`
- Verify port 7860 is not in use
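The port check above can be scripted. This sketch tries `ss` first and falls back to `lsof`; either tool may be missing on minimal hosts, in which case the check quietly reports the port as free:

```shell
# Return success if something is already listening on the given TCP port.
port_in_use() {
  port="${1:-7860}"
  if command -v ss >/dev/null 2>&1; then
    ss -ltn 2>/dev/null | grep -q ":${port}[[:space:]]"
  else
    lsof -iTCP:"$port" -sTCP:LISTEN >/dev/null 2>&1
  fi
}

if port_in_use 7860; then
  echo "port 7860 is busy"
else
  echo "port 7860 is free"
fi
```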
### Components Not Loading
- Check custom components path is correct
- Ensure Python dependencies are installed in custom components
- Check logs for component errors
### Slow Performance
- Increase resource limits in `.env`
- Reduce `LANGFLOW_WORKERS` if low on memory
- Optimize your flows (reduce unnecessary components)
### Database Connection Errors
- Verify PostgreSQL is running: `docker compose ps postgres`
- Check database credentials in `.env`
- Ensure `LANGFLOW_DATABASE_URL` is correct
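The compose file derives `LANGFLOW_DATABASE_URL` from the PostgreSQL variables, so rebuilding it locally makes a credential mismatch easy to spot; the commented container check assumes a running stack:

```shell
# Rebuild the URL the compose file constructs from .env values.
langflow_db_url() {
  printf 'postgresql://%s:%s@postgres:5432/%s' \
    "${POSTGRES_USER:-langflow}" \
    "${POSTGRES_PASSWORD:-langflow}" \
    "${POSTGRES_DB:-langflow}"
}

echo "expected: $(langflow_db_url)"
# Compare with what the container actually received:
# docker compose exec langflow printenv LANGFLOW_DATABASE_URL
```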
## Maintenance
### Backup
Backup volumes:
```bash
docker compose down
docker run --rm -v compose-anything_postgres_data:/data -v $(pwd):/backup alpine tar czf /backup/postgres-backup.tar.gz -C /data .
docker run --rm -v compose-anything_langflow_data:/data -v $(pwd):/backup alpine tar czf /backup/langflow-backup.tar.gz -C /data .
docker compose up -d
```
### Restore
Restore from backup:
```bash
docker compose down
docker run --rm -v compose-anything_postgres_data:/data -v $(pwd):/backup alpine sh -c "cd /data && tar xzf /backup/postgres-backup.tar.gz"
docker run --rm -v compose-anything_langflow_data:/data -v $(pwd):/backup alpine sh -c "cd /data && tar xzf /backup/langflow-backup.tar.gz"
docker compose up -d
```
### Upgrade
To upgrade Langflow:
1. Update version in `.env`:
```bash
LANGFLOW_VERSION=1.2.0
```
2. Pull new image and restart:
```bash
docker compose pull
docker compose up -d
```
3. Check for breaking changes in release notes
## Useful Commands
```bash
# View logs
docker compose logs -f langflow
# Restart Langflow
docker compose restart langflow
# Access PostgreSQL
docker compose exec postgres psql -U langflow -d langflow
# Check resource usage
docker stats
# Clean up
docker compose down -v # WARNING: Deletes all data
```
## References
- [Official Documentation](https://docs.langflow.org/)
- [GitHub Repository](https://github.com/langflow-ai/langflow)
- [Component Documentation](https://docs.langflow.org/components/)
- [API Documentation](https://docs.langflow.org/api/)
- [Community Discord](https://discord.gg/langflow)
## License
MIT - See [LICENSE](https://github.com/langflow-ai/langflow/blob/main/LICENSE)

apps/langflow/README.zh.md

@@ -0,0 +1,336 @@
# Langflow
Langflow 是一个低代码可视化框架,用于构建 AI 应用。它基于 Python,与任何模型、API 或数据库无关,可轻松构建 RAG 应用、多智能体系统和自定义 AI 工作流。
## 功能特点
- **可视化流构建器**:拖放界面构建 AI 应用
- **多模型支持**:支持 OpenAI、Anthropic、Google、HuggingFace 等
- **RAG 组件**:内置向量数据库和检索支持
- **自定义组件**:创建您自己的 Python 组件
- **智能体支持**:构建具有记忆和工具的多智能体系统
- **实时监控**:跟踪执行并调试流程
- **API 集成**:用于编程访问的 REST API
## 快速开始
1. 将 `.env.example` 复制为 `.env`:
```bash
cp .env.example .env
```
2. (可选)编辑 `.env` 自定义设置:
- 为生产环境生成安全的 `LANGFLOW_SECRET_KEY`
- 设置 `LANGFLOW_AUTO_LOGIN=false` 以要求身份验证
- 配置超级用户凭证
- 为 LLM 提供商添加 API 密钥
3. 启动 Langflow:
```bash
docker compose up -d
```
4. 等待服务就绪(通常需要 1-2 分钟)
5. 访问 Langflow UI:`http://localhost:7860`
6. 开始构建您的 AI 应用!
## 默认配置
| 服务 | 端口 | 说明 |
| ---------- | ---- | -------------- |
| Langflow | 7860 | Web UI 和 API |
| PostgreSQL | 5432 | 数据库(内部) |
**默认凭证**(如果启用了身份验证):
- 用户名:`langflow`
- 密码:`langflow`
## 环境变量
主要环境变量(完整列表请参阅 `.env.example`):
| 变量 | 说明 | 默认值 |
| ----------------------------- | ------------------------------ | ---------- |
| `LANGFLOW_VERSION` | Langflow 镜像版本 | `1.1.1` |
| `LANGFLOW_PORT_OVERRIDE` | UI 的主机端口 | `7860` |
| `POSTGRES_PASSWORD` | 数据库密码 | `langflow` |
| `LANGFLOW_AUTO_LOGIN` | 自动登录(禁用以启用身份验证) | `true` |
| `LANGFLOW_SUPERUSER` | 超级用户用户名 | `langflow` |
| `LANGFLOW_SUPERUSER_PASSWORD` | 超级用户密码 | `langflow` |
| `LANGFLOW_SECRET_KEY` | 会话密钥 | (空) |
| `LANGFLOW_COMPONENTS_PATH` | 自定义组件目录 | (空) |
| `LANGFLOW_LOAD_FLOWS_PATH` | 自动加载流目录 | (空) |
| `TZ` | 时区 | `UTC` |
## 资源需求
**最低要求**:
- CPU:1 核心
- 内存:1GB
- 磁盘:5GB
**推荐配置**:
- CPU:2+ 核心
- 内存:2GB+
- 磁盘:20GB+
## 数据卷
- `postgres_data`:PostgreSQL 数据库数据
- `langflow_data`:Langflow 配置、流和日志
## 使用 Langflow
### 构建您的第一个流
1. 访问 UI:`http://localhost:7860`
2. 点击 "New Flow" 或使用模板
3. 从侧边栏拖动组件到画布
4. 通过在端口之间拖动来连接组件
5. 配置组件参数
6. 点击 "Run" 测试您的流
7. 使用 API 或与您的应用集成
### 添加 LLM 提供商
要使用外部 LLM 提供商,请配置其 API 密钥:
1. 在 Langflow UI 中,转到 Settings > Global Variables
2. 添加您的 API 密钥(例如,`OPENAI_API_KEY`、`ANTHROPIC_API_KEY`)
3. 在您的流组件中引用这些变量
或者,将它们添加到您的 `.env` 文件并重启:
```bash
# LLM API 密钥示例(添加到 .env)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...
```
### 自定义组件
要添加自定义组件:
1. 为您的组件创建一个目录(例如,`./custom_components`)
2. 更新 `.env`:
```bash
LANGFLOW_COMPONENTS_PATH=/app/langflow/custom_components
```
3. 在 `docker-compose.yaml` 中挂载目录:
```yaml
volumes:
- ./custom_components:/app/langflow/custom_components
```
4. 重启 Langflow
### 自动加载流
要在启动时自动加载流:
1. 将您的流导出为 JSON 文件
2. 创建一个目录(例如,`./flows`)
3. 更新 `.env`:
```bash
LANGFLOW_LOAD_FLOWS_PATH=/app/langflow/flows
```
4. 在 `docker-compose.yaml` 中挂载目录:
```yaml
volumes:
- ./flows:/app/langflow/flows
```
5. 重启 Langflow
## API 使用
Langflow 提供 REST API 用于以编程方式运行流。
### 获取流 ID
1. 在 UI 中打开您的流
2. 流 ID 在 URL 中:`http://localhost:7860/flow/{flow_id}`
### 通过 API 运行流
```bash
curl -X POST http://localhost:7860/api/v1/run/{flow_id} \
-H "Content-Type: application/json" \
-d '{
"inputs": {
"input_field": "your input value"
}
}'
```
### 使用身份验证
如果启用了身份验证,首先获取令牌:
```bash
# 登录
curl -X POST http://localhost:7860/api/v1/login \
-H "Content-Type: application/json" \
-d '{
"username": "langflow",
"password": "langflow"
}'
# 在后续请求中使用令牌
curl -X POST http://localhost:7860/api/v1/run/{flow_id} \
-H "Authorization: Bearer {token}" \
-H "Content-Type: application/json" \
-d '{
"inputs": {
"input_field": "your input value"
}
}'
```
## 生产部署
对于生产部署:
1. **启用身份验证**:
```bash
LANGFLOW_AUTO_LOGIN=false
LANGFLOW_SUPERUSER=admin
LANGFLOW_SUPERUSER_PASSWORD=<强密码>
```
2. **设置密钥**:
```bash
# 生成安全密钥
python -c "from secrets import token_urlsafe; print(token_urlsafe(32))"
# 添加到 .env
LANGFLOW_SECRET_KEY=<生成的密钥>
```
3. **使用强数据库密码**:
```bash
POSTGRES_PASSWORD=<强密码>
```
4. **启用 SSL/TLS**:使用带有 SSL 证书的反向代理(nginx、traefik)
5. **配置资源限制**:根据您的工作负载调整 CPU 和内存限制
6. **备份数据库**:定期备份 PostgreSQL 数据卷
## 故障排除
### Langflow 无法启动
- 查看日志:`docker compose logs langflow`
- 确保 PostgreSQL 健康:`docker compose ps postgres`
- 验证端口 7860 未被使用
### 组件未加载
- 检查自定义组件路径是否正确
- 确保在自定义组件中安装了 Python 依赖项
- 检查日志中的组件错误
### 性能缓慢
- 在 `.env` 中增加资源限制
- 如果内存不足,减少 `LANGFLOW_WORKERS`
- 优化您的流(减少不必要的组件)
### 数据库连接错误
- 验证 PostgreSQL 正在运行:`docker compose ps postgres`
- 检查 `.env` 中的数据库凭证
- 确保 `LANGFLOW_DATABASE_URL` 正确
## 维护
### 备份
备份数据卷:
```bash
docker compose down
docker run --rm -v compose-anything_postgres_data:/data -v $(pwd):/backup alpine tar czf /backup/postgres-backup.tar.gz -C /data .
docker run --rm -v compose-anything_langflow_data:/data -v $(pwd):/backup alpine tar czf /backup/langflow-backup.tar.gz -C /data .
docker compose up -d
```
### 恢复
从备份恢复:
```bash
docker compose down
docker run --rm -v compose-anything_postgres_data:/data -v $(pwd):/backup alpine sh -c "cd /data && tar xzf /backup/postgres-backup.tar.gz"
docker run --rm -v compose-anything_langflow_data:/data -v $(pwd):/backup alpine sh -c "cd /data && tar xzf /backup/langflow-backup.tar.gz"
docker compose up -d
```
### 升级
升级 Langflow:
1. 在 `.env` 中更新版本:
```bash
LANGFLOW_VERSION=1.2.0
```
2. 拉取新镜像并重启:
```bash
docker compose pull
docker compose up -d
```
3. 检查发布说明中的重大更改
## 常用命令
```bash
# 查看日志
docker compose logs -f langflow
# 重启 Langflow
docker compose restart langflow
# 访问 PostgreSQL
docker compose exec postgres psql -U langflow -d langflow
# 检查资源使用
docker stats
# 清理
docker compose down -v # 警告:删除所有数据
```
## 参考资料
- [官方文档](https://docs.langflow.org/)
- [GitHub 仓库](https://github.com/langflow-ai/langflow)
- [组件文档](https://docs.langflow.org/components/)
- [API 文档](https://docs.langflow.org/api/)
- [社区 Discord](https://discord.gg/langflow)
## 许可证
MIT - 查看 [LICENSE](https://github.com/langflow-ai/langflow/blob/main/LICENSE)


@@ -0,0 +1,127 @@
# Langflow - Visual Framework for Building AI Applications
# https://github.com/langflow-ai/langflow
#
# Langflow is a low-code app builder for RAG and multi-agent AI applications.
# It's Python-based and agnostic to any model, API, or database.
#
# Key Features:
# - Visual flow builder for AI applications
# - Support for multiple LLMs (OpenAI, Anthropic, Google, etc.)
# - Built-in components for RAG, agents, and chains
# - Custom component support
# - Real-time monitoring and logging
# - Multi-user support with authentication
#
# Default Access:
# - Access UI at http://localhost:7860
# - No authentication by default (set LANGFLOW_AUTO_LOGIN=false to enable)
#
# Security Notes:
# - Set LANGFLOW_SECRET_KEY for production
# - Use strong database passwords
# - Enable authentication in production
# - Store API keys as global variables, not in flows
# - Enable SSL/TLS in production
#
# License: MIT (https://github.com/langflow-ai/langflow/blob/main/LICENSE)
x-defaults: &defaults
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: ${LOG_MAX_SIZE:-100m}
      max-file: "${LOG_MAX_FILE:-3}"

services:
  langflow:
    <<: *defaults
    image: ${GLOBAL_REGISTRY:-}langflowai/langflow:${LANGFLOW_VERSION:-1.1.1}
    ports:
      - "${LANGFLOW_PORT_OVERRIDE:-7860}:7860"
    environment:
      # Database configuration
      - LANGFLOW_DATABASE_URL=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}
      # Storage configuration
      - LANGFLOW_CONFIG_DIR=${LANGFLOW_CONFIG_DIR:-/app/langflow}
      # Server configuration
      - LANGFLOW_HOST=${LANGFLOW_HOST:-0.0.0.0}
      - LANGFLOW_PORT=7860
      - LANGFLOW_WORKERS=${LANGFLOW_WORKERS:-1}
      # Authentication - IMPORTANT: Configure for production
      - LANGFLOW_AUTO_LOGIN=${LANGFLOW_AUTO_LOGIN:-true}
      - LANGFLOW_SUPERUSER=${LANGFLOW_SUPERUSER:-langflow}
      - LANGFLOW_SUPERUSER_PASSWORD=${LANGFLOW_SUPERUSER_PASSWORD:-langflow}
      - LANGFLOW_SECRET_KEY=${LANGFLOW_SECRET_KEY:-}
      # Features
      - LANGFLOW_AUTO_SAVING=${LANGFLOW_AUTO_SAVING:-true}
      - LANGFLOW_AUTO_SAVING_INTERVAL=${LANGFLOW_AUTO_SAVING_INTERVAL:-1000}
      - LANGFLOW_STORE_ENVIRONMENT_VARIABLES=${LANGFLOW_STORE_ENVIRONMENT_VARIABLES:-true}
      - LANGFLOW_FALLBACK_TO_ENV_VAR=${LANGFLOW_FALLBACK_TO_ENV_VAR:-true}
      # Optional: Custom components path
      - LANGFLOW_COMPONENTS_PATH=${LANGFLOW_COMPONENTS_PATH:-}
      # Optional: Load flows from directory
      - LANGFLOW_LOAD_FLOWS_PATH=${LANGFLOW_LOAD_FLOWS_PATH:-}
      # Logging
      - LANGFLOW_LOG_LEVEL=${LANGFLOW_LOG_LEVEL:-error}
      # Other settings
      - TZ=${TZ:-UTC}
      - DO_NOT_TRACK=${DO_NOT_TRACK:-false}
    volumes:
      - langflow_data:${LANGFLOW_CONFIG_DIR:-/app/langflow}
    depends_on:
      postgres:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "python", "-c", "import requests; requests.get('http://localhost:7860/health_check', timeout=5)"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 60s
    deploy:
      resources:
        limits:
          cpus: "${LANGFLOW_CPU_LIMIT:-2.0}"
          memory: "${LANGFLOW_MEMORY_LIMIT:-2G}"
        reservations:
          cpus: "${LANGFLOW_CPU_RESERVATION:-0.5}"
          memory: "${LANGFLOW_MEMORY_RESERVATION:-512M}"

  postgres:
    <<: *defaults
    image: ${GLOBAL_REGISTRY:-}postgres:${POSTGRES_VERSION:-16-alpine}
    environment:
      - POSTGRES_DB=${POSTGRES_DB:-langflow}
      - POSTGRES_USER=${POSTGRES_USER:-langflow}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-langflow}
      - POSTGRES_INITDB_ARGS=--encoding=UTF8
      - TZ=${TZ:-UTC}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-langflow} -d ${POSTGRES_DB:-langflow}"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 10s
    deploy:
      resources:
        limits:
          cpus: "${POSTGRES_CPU_LIMIT:-1.0}"
          memory: "${POSTGRES_MEMORY_LIMIT:-1G}"
        reservations:
          cpus: "${POSTGRES_CPU_RESERVATION:-0.25}"
          memory: "${POSTGRES_MEMORY_RESERVATION:-256M}"

volumes:
  postgres_data:
  langflow_data:

apps/langfuse/.env.example

@@ -0,0 +1,136 @@
# Global Settings
GLOBAL_REGISTRY=
TZ=UTC
# Service Versions
LANGFUSE_VERSION=3
POSTGRES_VERSION=17
CLICKHOUSE_VERSION=latest
MINIO_VERSION=latest
REDIS_VERSION=7
# Ports
LANGFUSE_PORT_OVERRIDE=3000
LANGFUSE_WORKER_PORT_OVERRIDE=3030
MINIO_PORT_OVERRIDE=9090
MINIO_CONSOLE_PORT_OVERRIDE=9091
# PostgreSQL
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=postgres
# Authentication & Security (CHANGEME: These are defaults, please update them)
NEXTAUTH_URL=http://localhost:3000
NEXTAUTH_SECRET=mysecret
SALT=mysalt
ENCRYPTION_KEY=0000000000000000000000000000000000000000000000000000000000000000
# ClickHouse
CLICKHOUSE_USER=clickhouse
CLICKHOUSE_PASSWORD=clickhouse
CLICKHOUSE_MIGRATION_URL=clickhouse://clickhouse:9000
CLICKHOUSE_URL=http://clickhouse:8123
CLICKHOUSE_CLUSTER_ENABLED=false
# MinIO / S3
MINIO_ROOT_USER=minio
MINIO_ROOT_PASSWORD=miniosecret
# S3 Event Upload
LANGFUSE_S3_EVENT_UPLOAD_BUCKET=langfuse
LANGFUSE_S3_EVENT_UPLOAD_REGION=auto
LANGFUSE_S3_EVENT_UPLOAD_ACCESS_KEY_ID=minio
LANGFUSE_S3_EVENT_UPLOAD_SECRET_ACCESS_KEY=miniosecret
LANGFUSE_S3_EVENT_UPLOAD_ENDPOINT=http://minio:9000
LANGFUSE_S3_EVENT_UPLOAD_FORCE_PATH_STYLE=true
LANGFUSE_S3_EVENT_UPLOAD_PREFIX=events/
# S3 Media Upload
LANGFUSE_S3_MEDIA_UPLOAD_BUCKET=langfuse
LANGFUSE_S3_MEDIA_UPLOAD_REGION=auto
LANGFUSE_S3_MEDIA_UPLOAD_ACCESS_KEY_ID=minio
LANGFUSE_S3_MEDIA_UPLOAD_SECRET_ACCESS_KEY=miniosecret
LANGFUSE_S3_MEDIA_UPLOAD_ENDPOINT=http://localhost:9090
LANGFUSE_S3_MEDIA_UPLOAD_FORCE_PATH_STYLE=true
LANGFUSE_S3_MEDIA_UPLOAD_PREFIX=media/
# S3 Batch Export
LANGFUSE_S3_BATCH_EXPORT_ENABLED=false
LANGFUSE_S3_BATCH_EXPORT_BUCKET=langfuse
LANGFUSE_S3_BATCH_EXPORT_PREFIX=exports/
LANGFUSE_S3_BATCH_EXPORT_REGION=auto
LANGFUSE_S3_BATCH_EXPORT_ENDPOINT=http://minio:9000
LANGFUSE_S3_BATCH_EXPORT_EXTERNAL_ENDPOINT=http://localhost:9090
LANGFUSE_S3_BATCH_EXPORT_ACCESS_KEY_ID=minio
LANGFUSE_S3_BATCH_EXPORT_SECRET_ACCESS_KEY=miniosecret
LANGFUSE_S3_BATCH_EXPORT_FORCE_PATH_STYLE=true
# Redis
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_AUTH=myredissecret
REDIS_TLS_ENABLED=false
REDIS_TLS_CA=/certs/ca.crt
REDIS_TLS_CERT=/certs/redis.crt
REDIS_TLS_KEY=/certs/redis.key
# Features
TELEMETRY_ENABLED=true
LANGFUSE_ENABLE_EXPERIMENTAL_FEATURES=true
LANGFUSE_USE_AZURE_BLOB=false
# Ingestion Queue
LANGFUSE_INGESTION_QUEUE_DELAY_MS=
LANGFUSE_INGESTION_CLICKHOUSE_WRITE_INTERVAL_MS=
# Email/SMTP (Optional)
EMAIL_FROM_ADDRESS=
SMTP_CONNECTION_URL=
# Initialization (Optional - for setting up initial org/project/user)
LANGFUSE_INIT_ORG_ID=
LANGFUSE_INIT_ORG_NAME=
LANGFUSE_INIT_PROJECT_ID=
LANGFUSE_INIT_PROJECT_NAME=
LANGFUSE_INIT_PROJECT_PUBLIC_KEY=
LANGFUSE_INIT_PROJECT_SECRET_KEY=
LANGFUSE_INIT_USER_EMAIL=
LANGFUSE_INIT_USER_NAME=
LANGFUSE_INIT_USER_PASSWORD=
# Resource Limits - Langfuse Worker
LANGFUSE_WORKER_CPU_LIMIT=2.0
LANGFUSE_WORKER_MEMORY_LIMIT=2G
LANGFUSE_WORKER_CPU_RESERVATION=0.5
LANGFUSE_WORKER_MEMORY_RESERVATION=512M
# Resource Limits - Langfuse Web
LANGFUSE_WEB_CPU_LIMIT=2.0
LANGFUSE_WEB_MEMORY_LIMIT=2G
LANGFUSE_WEB_CPU_RESERVATION=0.5
LANGFUSE_WEB_MEMORY_RESERVATION=512M
# Resource Limits - ClickHouse
CLICKHOUSE_CPU_LIMIT=2.0
CLICKHOUSE_MEMORY_LIMIT=4G
CLICKHOUSE_CPU_RESERVATION=0.5
CLICKHOUSE_MEMORY_RESERVATION=1G
# Resource Limits - MinIO
MINIO_CPU_LIMIT=1.0
MINIO_MEMORY_LIMIT=1G
MINIO_CPU_RESERVATION=0.25
MINIO_MEMORY_RESERVATION=256M
# Resource Limits - Redis
REDIS_CPU_LIMIT=1.0
REDIS_MEMORY_LIMIT=512M
REDIS_CPU_RESERVATION=0.25
REDIS_MEMORY_RESERVATION=256M
# Resource Limits - PostgreSQL
POSTGRES_CPU_LIMIT=2.0
POSTGRES_MEMORY_LIMIT=2G
POSTGRES_CPU_RESERVATION=0.5
POSTGRES_MEMORY_RESERVATION=512M

apps/langfuse/README.md

@@ -0,0 +1,169 @@
# Langfuse
[English](./README.md) | [中文](./README.zh.md)
This service deploys Langfuse, an open-source LLM engineering platform for observability, metrics, evaluations, and prompt management.
## Services
- **langfuse-worker**: Background worker service for processing LLM operations
- **langfuse-web**: Main Langfuse web application server
- **postgres**: PostgreSQL database
- **clickhouse**: ClickHouse analytics database for event storage
- **minio**: S3-compatible object storage for media and exports
- **redis**: In-memory data store for caching and job queues
## Quick Start
1. Copy `.env.example` to `.env`:
```bash
cp .env.example .env
```
2. Update critical secrets in `.env`:
```bash
# Generate secure secrets
NEXTAUTH_SECRET=$(openssl rand -base64 32)
ENCRYPTION_KEY=$(openssl rand -hex 32)
POSTGRES_PASSWORD=your-secure-password
CLICKHOUSE_PASSWORD=your-secure-password
MINIO_ROOT_PASSWORD=your-secure-password
REDIS_AUTH=your-secure-redis-password
```
3. Start the services:
```bash
docker compose up -d
```
4. Access Langfuse at `http://localhost:3000`
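Before the first start, the generated secrets from step 2 can be sanity-checked; per the variable table below, `ENCRYPTION_KEY` must be exactly 64 hexadecimal characters:

```shell
# Check that a key is exactly 64 hexadecimal characters,
# the format Langfuse expects for ENCRYPTION_KEY.
valid_encryption_key() {
  key="$1"
  case "$key" in
    *[!0-9a-fA-F]*) return 1 ;;  # contains a non-hex character
  esac
  [ "${#key}" -eq 64 ]
}

if valid_encryption_key "$(openssl rand -hex 32)"; then
  echo "generated key ok"
fi
```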
## Core Environment Variables
| Variable | Description | Default |
| --------------------------------------- | ----------------------------------------------- | ----------------------- |
| `LANGFUSE_VERSION` | Langfuse container image version | `3` |
| `LANGFUSE_PORT_OVERRIDE` | Web interface port | `3000` |
| `NEXTAUTH_URL` | Public URL of Langfuse instance | `http://localhost:3000` |
| `NEXTAUTH_SECRET` | NextAuth.js secret (required for production) | `mysecret` |
| `ENCRYPTION_KEY` | Encryption key for sensitive data (64-char hex) | `0...0` |
| `SALT` | Salt for password hashing | `mysalt` |
| `TELEMETRY_ENABLED` | Enable anonymous telemetry | `true` |
| `LANGFUSE_ENABLE_EXPERIMENTAL_FEATURES` | Enable beta features | `true` |
## Database Configuration
| Variable | Description | Default |
| --------------------- | ------------------- | ------------ |
| `POSTGRES_VERSION` | PostgreSQL version | `17` |
| `POSTGRES_USER` | Database user | `postgres` |
| `POSTGRES_PASSWORD` | Database password | `postgres` |
| `POSTGRES_DB` | Database name | `postgres` |
| `CLICKHOUSE_USER` | ClickHouse user | `clickhouse` |
| `CLICKHOUSE_PASSWORD` | ClickHouse password | `clickhouse` |
## Storage & Cache Configuration
| Variable | Description | Default |
| --------------------- | -------------------- | --------------- |
| `MINIO_ROOT_USER` | MinIO admin username | `minio` |
| `MINIO_ROOT_PASSWORD` | MinIO admin password | `miniosecret` |
| `REDIS_AUTH` | Redis password | `myredissecret` |
## S3/Media Configuration
| Variable | Description | Default |
| ----------------------------------- | ------------------------- | ----------------------- |
| `LANGFUSE_S3_MEDIA_UPLOAD_ENDPOINT` | Media upload S3 endpoint | `http://localhost:9090` |
| `LANGFUSE_S3_EVENT_UPLOAD_ENDPOINT` | Event upload S3 endpoint | `http://minio:9000` |
| `LANGFUSE_S3_BATCH_EXPORT_ENABLED` | Enable batch export to S3 | `false` |
## Volumes
- `langfuse_postgres_data`: PostgreSQL data persistence
- `langfuse_clickhouse_data`: ClickHouse event data
- `langfuse_clickhouse_logs`: ClickHouse logs
- `langfuse_minio_data`: MinIO object storage data
## Resource Limits
All services have configurable CPU and memory limits:
- **langfuse-worker**: 2 CPU cores, 2GB RAM
- **langfuse-web**: 2 CPU cores, 2GB RAM
- **clickhouse**: 2 CPU cores, 4GB RAM
- **minio**: 1 CPU core, 1GB RAM
- **redis**: 1 CPU core, 512MB RAM
- **postgres**: 2 CPU cores, 2GB RAM
Adjust limits in `.env` by modifying `*_CPU_LIMIT`, `*_MEMORY_LIMIT`, `*_CPU_RESERVATION`, and `*_MEMORY_RESERVATION` variables.
## Network Access
- **langfuse-web** (port 3000): Open to all interfaces for external access
- **minio** (port 9090): Open to all interfaces for media uploads
- **All other services**: Bound to `127.0.0.1` (localhost only)
In production, restrict external access using a firewall or reverse proxy.
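One way to restrict access is to bind the published ports to the loopback interface in `docker-compose.yaml`, so only a reverse proxy on the same host can reach them. The snippet below is a sketch of the idea applied to the web service, not the file's full `ports` section:

```yaml
services:
  langfuse-web:
    ports:
      # Only a reverse proxy running on this host can connect:
      - "127.0.0.1:3000:3000"
```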
## Production Setup
For production deployments:
1. **Security**:
- Generate strong secrets with `openssl rand -base64 32` and `openssl rand -hex 32`
- Use a reverse proxy (nginx, Caddy) with SSL/TLS
- Change all default passwords
- Enable HTTPS by setting `NEXTAUTH_URL` to your domain
2. **Persistence**:
- Use external volumes or cloud storage for data
- Configure regular PostgreSQL backups
- Monitor ClickHouse disk usage
3. **Performance**:
- Increase resource limits based on workload
- Consider dedicated ClickHouse cluster for large deployments
- Configure Redis persistence if needed
## Ports
- **3000**: Langfuse web interface (external)
- **3030**: Langfuse worker API (localhost only)
- **5432**: PostgreSQL (localhost only)
- **8123**: ClickHouse HTTP (localhost only)
- **9000**: ClickHouse native (localhost only)
- **9090**: MinIO S3 API (external)
- **9091**: MinIO console (localhost only)
- **6379**: Redis (localhost only)
## Health Checks
All services include health checks with automatic restart on failure.
## Documentation
- [Langfuse Documentation](https://langfuse.com/docs)
- [Langfuse GitHub](https://github.com/langfuse/langfuse)
## Troubleshooting
### Services failing to start
- Check logs: `docker compose logs <service-name>`
- Ensure all required environment variables are set
- Verify sufficient disk space and system resources
### Database connection errors
- Verify `POSTGRES_PASSWORD` matches between services
- Check that PostgreSQL service is healthy: `docker compose ps`
- Ensure ports are not already in use
### MinIO permission issues
- Clear MinIO data and restart: `docker compose down -v`
- Regenerate MinIO credentials in `.env`

apps/langfuse/README.zh.md

@@ -0,0 +1,169 @@
# Langfuse
[English](./README.md) | [中文](./README.zh.md)
此服务部署 Langfuse,一个用于 LLM 应用可观测性、指标、评估和提示管理的开源平台。
## 服务
- **langfuse-worker**:处理 LLM 操作的后台工作者服务
- **langfuse-web**:Langfuse 主 Web 应用服务器
- **postgres**:PostgreSQL 数据库
- **clickhouse**:用于事件存储的 ClickHouse 分析数据库
- **minio**:兼容 S3 的对象存储,用于媒体和导出
- **redis**:用于缓存和作业队列的内存数据存储
## 快速开始
1. 将 `.env.example` 复制为 `.env`:
```bash
cp .env.example .env
```
2. 在 `.env` 中更新关键的密钥:
```bash
# 生成安全的密钥
NEXTAUTH_SECRET=$(openssl rand -base64 32)
ENCRYPTION_KEY=$(openssl rand -hex 32)
POSTGRES_PASSWORD=your-secure-password
CLICKHOUSE_PASSWORD=your-secure-password
MINIO_ROOT_PASSWORD=your-secure-password
REDIS_AUTH=your-secure-redis-password
```
3. 启动服务:
```bash
docker compose up -d
```
4. 访问 `http://localhost:3000` 打开 Langfuse
## 核心环境变量
| 变量 | 描述 | 默认值 |
| --------------------------------------- | ------------------------------------- | ----------------------- |
| `LANGFUSE_VERSION` | Langfuse 容器镜像版本 | `3` |
| `LANGFUSE_PORT_OVERRIDE` | Web 界面端口 | `3000` |
| `NEXTAUTH_URL` | Langfuse 实例的公开 URL | `http://localhost:3000` |
| `NEXTAUTH_SECRET` | NextAuth.js 密钥(生产环境必需) | `mysecret` |
| `ENCRYPTION_KEY` | 敏感数据加密密钥64 个十六进制字符) | `0...0` |
| `SALT` | 密码哈希盐值 | `mysalt` |
| `TELEMETRY_ENABLED` | 启用匿名遥测 | `true` |
| `LANGFUSE_ENABLE_EXPERIMENTAL_FEATURES` | 启用测试版功能 | `true` |
## 数据库配置
| 变量 | 描述 | 默认值 |
| --------------------- | --------------- | ------------ |
| `POSTGRES_VERSION` | PostgreSQL 版本 | `17` |
| `POSTGRES_USER` | 数据库用户 | `postgres` |
| `POSTGRES_PASSWORD` | 数据库密码 | `postgres` |
| `POSTGRES_DB` | 数据库名称 | `postgres` |
| `CLICKHOUSE_USER` | ClickHouse 用户 | `clickhouse` |
| `CLICKHOUSE_PASSWORD` | ClickHouse 密码 | `clickhouse` |
## 存储和缓存配置
| 变量 | 描述 | 默认值 |
| --------------------- | ------------------ | --------------- |
| `MINIO_ROOT_USER` | MinIO 管理员用户名 | `minio` |
| `MINIO_ROOT_PASSWORD` | MinIO 管理员密码 | `miniosecret` |
| `REDIS_AUTH` | Redis 密码 | `myredissecret` |
## S3/媒体配置
| 变量 | 描述 | 默认值 |
| ----------------------------------- | ----------------- | ----------------------- |
| `LANGFUSE_S3_MEDIA_UPLOAD_ENDPOINT` | 媒体上传 S3 端点 | `http://localhost:9090` |
| `LANGFUSE_S3_EVENT_UPLOAD_ENDPOINT` | 事件上传 S3 端点 | `http://minio:9000` |
| `LANGFUSE_S3_BATCH_EXPORT_ENABLED` | 启用批量导出到 S3 | `false` |
## 数据卷
- `langfuse_postgres_data`:PostgreSQL 数据持久化
- `langfuse_clickhouse_data`:ClickHouse 事件数据
- `langfuse_clickhouse_logs`:ClickHouse 日志
- `langfuse_minio_data`:MinIO 对象存储数据
## 资源限制
所有服务都有可配置的 CPU 和内存限制:
- **langfuse-worker**:2 个 CPU 核心,2GB RAM
- **langfuse-web**:2 个 CPU 核心,2GB RAM
- **clickhouse**:2 个 CPU 核心,4GB RAM
- **minio**:1 个 CPU 核心,1GB RAM
- **redis**:1 个 CPU 核心,512MB RAM
- **postgres**:2 个 CPU 核心,2GB RAM
通过修改 `.env` 中的 `*_CPU_LIMIT`、`*_MEMORY_LIMIT`、`*_CPU_RESERVATION` 和 `*_MEMORY_RESERVATION` 变量来调整限制。
## 网络访问
- **langfuse-web**(端口 3000):对所有接口开放,用于外部访问
- **minio**(端口 9090):对所有接口开放,用于媒体上传
- **所有其他服务**:绑定到 `127.0.0.1`(仅限本地)
在生产环境中,使用防火墙或反向代理限制外部访问。
## 生产部署
用于生产部署的建议:
1. **安全性**:
- 使用 `openssl rand -base64 32` 和 `openssl rand -hex 32` 生成强密钥
- 使用具有 SSL/TLS 的反向代理(nginx、Caddy)
- 更改所有默认密码
- 通过将 `NEXTAUTH_URL` 设置为您的域来启用 HTTPS
2. **数据持久化**:
- 对数据使用外部卷或云存储
- 配置定期 PostgreSQL 备份
- 监控 ClickHouse 磁盘使用情况
3. **性能**:
- 根据工作负载增加资源限制
- 大规模部署时考虑使用专用 ClickHouse 集群
- 如需要,配置 Redis 持久化
## 端口
- **3000**:Langfuse Web 界面(外部)
- **3030**:Langfuse 工作者 API(仅限本地)
- **5432**:PostgreSQL(仅限本地)
- **8123**:ClickHouse HTTP(仅限本地)
- **9000**:ClickHouse 原生协议(仅限本地)
- **9090**:MinIO S3 API(外部)
- **9091**:MinIO 控制台(仅限本地)
- **6379**:Redis(仅限本地)
## 健康检查
所有服务都包括健康检查,失败时会自动重新启动。
## Documentation
- [Langfuse documentation](https://langfuse.com/docs)
- [Langfuse GitHub](https://github.com/langfuse/langfuse)
## Troubleshooting
### Services fail to start
- Check the logs: `docker compose logs <service-name>`
- Make sure all required environment variables are set
- Verify there is sufficient disk space and system resources
### Database connection errors
- Verify that `POSTGRES_PASSWORD` matches across services
- Check that the PostgreSQL service is healthy: `docker compose ps`
- Make sure the port is not already in use
### MinIO permission issues
- Clear the MinIO data and restart: `docker compose down -v`
- Regenerate the MinIO credentials in `.env`

# Make sure to update the credential placeholders with your own secrets.
# We mark them with # CHANGEME in the file below.
# In addition, we recommend restricting inbound traffic on the host to
# langfuse-web (port 3000) and minio (port 9090) only.
# All other components are bound to localhost (127.0.0.1) to only accept
# connections from the local machine.
# External connections from other machines will not be able to reach these
# services directly.
x-defaults: &defaults
restart: unless-stopped
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
langfuse-worker:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}langfuse/langfuse-worker:${LANGFUSE_VERSION:-3}
depends_on: &langfuse-depends-on
postgres:
condition: service_healthy
minio:
condition: service_healthy
redis:
condition: service_healthy
clickhouse:
condition: service_healthy
ports:
- ${LANGFUSE_WORKER_PORT_OVERRIDE:-3030}:3030
environment: &langfuse-worker-env
TZ: ${TZ:-UTC}
NEXTAUTH_URL: ${NEXTAUTH_URL:-http://localhost:3000}
DATABASE_URL: ${DATABASE_URL:-postgresql://postgres:postgres@postgres:5432/postgres}
SALT: ${SALT:-mysalt}
ENCRYPTION_KEY: ${ENCRYPTION_KEY:-0000000000000000000000000000000000000000000000000000000000000000}
TELEMETRY_ENABLED: ${TELEMETRY_ENABLED:-true}
LANGFUSE_ENABLE_EXPERIMENTAL_FEATURES: ${LANGFUSE_ENABLE_EXPERIMENTAL_FEATURES:-true}
CLICKHOUSE_MIGRATION_URL: ${CLICKHOUSE_MIGRATION_URL:-clickhouse://clickhouse:9000}
CLICKHOUSE_URL: ${CLICKHOUSE_URL:-http://clickhouse:8123}
CLICKHOUSE_USER: ${CLICKHOUSE_USER:-clickhouse}
CLICKHOUSE_PASSWORD: ${CLICKHOUSE_PASSWORD:-clickhouse}
CLICKHOUSE_CLUSTER_ENABLED: ${CLICKHOUSE_CLUSTER_ENABLED:-false}
LANGFUSE_USE_AZURE_BLOB: ${LANGFUSE_USE_AZURE_BLOB:-false}
LANGFUSE_S3_EVENT_UPLOAD_BUCKET: ${LANGFUSE_S3_EVENT_UPLOAD_BUCKET:-langfuse}
LANGFUSE_S3_EVENT_UPLOAD_REGION: ${LANGFUSE_S3_EVENT_UPLOAD_REGION:-auto}
LANGFUSE_S3_EVENT_UPLOAD_ACCESS_KEY_ID: ${LANGFUSE_S3_EVENT_UPLOAD_ACCESS_KEY_ID:-minio}
LANGFUSE_S3_EVENT_UPLOAD_SECRET_ACCESS_KEY: ${LANGFUSE_S3_EVENT_UPLOAD_SECRET_ACCESS_KEY:-miniosecret}
LANGFUSE_S3_EVENT_UPLOAD_ENDPOINT: ${LANGFUSE_S3_EVENT_UPLOAD_ENDPOINT:-http://minio:9000}
LANGFUSE_S3_EVENT_UPLOAD_FORCE_PATH_STYLE: ${LANGFUSE_S3_EVENT_UPLOAD_FORCE_PATH_STYLE:-true}
LANGFUSE_S3_EVENT_UPLOAD_PREFIX: ${LANGFUSE_S3_EVENT_UPLOAD_PREFIX:-events/}
LANGFUSE_S3_MEDIA_UPLOAD_BUCKET: ${LANGFUSE_S3_MEDIA_UPLOAD_BUCKET:-langfuse}
LANGFUSE_S3_MEDIA_UPLOAD_REGION: ${LANGFUSE_S3_MEDIA_UPLOAD_REGION:-auto}
LANGFUSE_S3_MEDIA_UPLOAD_ACCESS_KEY_ID: ${LANGFUSE_S3_MEDIA_UPLOAD_ACCESS_KEY_ID:-minio}
LANGFUSE_S3_MEDIA_UPLOAD_SECRET_ACCESS_KEY: ${LANGFUSE_S3_MEDIA_UPLOAD_SECRET_ACCESS_KEY:-miniosecret}
LANGFUSE_S3_MEDIA_UPLOAD_ENDPOINT: ${LANGFUSE_S3_MEDIA_UPLOAD_ENDPOINT:-http://localhost:9090}
LANGFUSE_S3_MEDIA_UPLOAD_FORCE_PATH_STYLE: ${LANGFUSE_S3_MEDIA_UPLOAD_FORCE_PATH_STYLE:-true}
LANGFUSE_S3_MEDIA_UPLOAD_PREFIX: ${LANGFUSE_S3_MEDIA_UPLOAD_PREFIX:-media/}
LANGFUSE_S3_BATCH_EXPORT_ENABLED: ${LANGFUSE_S3_BATCH_EXPORT_ENABLED:-false}
LANGFUSE_S3_BATCH_EXPORT_BUCKET: ${LANGFUSE_S3_BATCH_EXPORT_BUCKET:-langfuse}
LANGFUSE_S3_BATCH_EXPORT_PREFIX: ${LANGFUSE_S3_BATCH_EXPORT_PREFIX:-exports/}
LANGFUSE_S3_BATCH_EXPORT_REGION: ${LANGFUSE_S3_BATCH_EXPORT_REGION:-auto}
LANGFUSE_S3_BATCH_EXPORT_ENDPOINT: ${LANGFUSE_S3_BATCH_EXPORT_ENDPOINT:-http://minio:9000}
LANGFUSE_S3_BATCH_EXPORT_EXTERNAL_ENDPOINT: ${LANGFUSE_S3_BATCH_EXPORT_EXTERNAL_ENDPOINT:-http://localhost:9090}
LANGFUSE_S3_BATCH_EXPORT_ACCESS_KEY_ID: ${LANGFUSE_S3_BATCH_EXPORT_ACCESS_KEY_ID:-minio}
LANGFUSE_S3_BATCH_EXPORT_SECRET_ACCESS_KEY: ${LANGFUSE_S3_BATCH_EXPORT_SECRET_ACCESS_KEY:-miniosecret}
LANGFUSE_S3_BATCH_EXPORT_FORCE_PATH_STYLE: ${LANGFUSE_S3_BATCH_EXPORT_FORCE_PATH_STYLE:-true}
LANGFUSE_INGESTION_QUEUE_DELAY_MS: ${LANGFUSE_INGESTION_QUEUE_DELAY_MS:-}
LANGFUSE_INGESTION_CLICKHOUSE_WRITE_INTERVAL_MS: ${LANGFUSE_INGESTION_CLICKHOUSE_WRITE_INTERVAL_MS:-}
REDIS_HOST: ${REDIS_HOST:-redis}
REDIS_PORT: ${REDIS_PORT:-6379}
REDIS_AUTH: ${REDIS_AUTH:-myredissecret}
REDIS_TLS_ENABLED: ${REDIS_TLS_ENABLED:-false}
REDIS_TLS_CA: ${REDIS_TLS_CA:-/certs/ca.crt}
REDIS_TLS_CERT: ${REDIS_TLS_CERT:-/certs/redis.crt}
REDIS_TLS_KEY: ${REDIS_TLS_KEY:-/certs/redis.key}
EMAIL_FROM_ADDRESS: ${EMAIL_FROM_ADDRESS:-}
SMTP_CONNECTION_URL: ${SMTP_CONNECTION_URL:-}
deploy:
resources:
limits:
cpus: ${LANGFUSE_WORKER_CPU_LIMIT:-2.0}
memory: ${LANGFUSE_WORKER_MEMORY_LIMIT:-2G}
reservations:
cpus: ${LANGFUSE_WORKER_CPU_RESERVATION:-0.5}
memory: ${LANGFUSE_WORKER_MEMORY_RESERVATION:-512M}
langfuse-web:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}langfuse/langfuse:${LANGFUSE_VERSION:-3}
depends_on: *langfuse-depends-on
ports:
- "${LANGFUSE_PORT_OVERRIDE:-3000}:3000"
environment:
<<: *langfuse-worker-env
NEXTAUTH_SECRET: ${NEXTAUTH_SECRET:-mysecret}
LANGFUSE_INIT_ORG_ID: ${LANGFUSE_INIT_ORG_ID:-}
LANGFUSE_INIT_ORG_NAME: ${LANGFUSE_INIT_ORG_NAME:-}
LANGFUSE_INIT_PROJECT_ID: ${LANGFUSE_INIT_PROJECT_ID:-}
LANGFUSE_INIT_PROJECT_NAME: ${LANGFUSE_INIT_PROJECT_NAME:-}
LANGFUSE_INIT_PROJECT_PUBLIC_KEY: ${LANGFUSE_INIT_PROJECT_PUBLIC_KEY:-}
LANGFUSE_INIT_PROJECT_SECRET_KEY: ${LANGFUSE_INIT_PROJECT_SECRET_KEY:-}
LANGFUSE_INIT_USER_EMAIL: ${LANGFUSE_INIT_USER_EMAIL:-}
LANGFUSE_INIT_USER_NAME: ${LANGFUSE_INIT_USER_NAME:-}
LANGFUSE_INIT_USER_PASSWORD: ${LANGFUSE_INIT_USER_PASSWORD:-}
deploy:
resources:
limits:
cpus: ${LANGFUSE_WEB_CPU_LIMIT:-2.0}
memory: ${LANGFUSE_WEB_MEMORY_LIMIT:-2G}
reservations:
cpus: ${LANGFUSE_WEB_CPU_RESERVATION:-0.5}
memory: ${LANGFUSE_WEB_MEMORY_RESERVATION:-512M}
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/api/public/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
clickhouse:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}clickhouse/clickhouse-server:${CLICKHOUSE_VERSION:-latest}
user: "101:101"
environment:
CLICKHOUSE_DB: default
CLICKHOUSE_USER: ${CLICKHOUSE_USER:-clickhouse}
CLICKHOUSE_PASSWORD: ${CLICKHOUSE_PASSWORD:-clickhouse}
TZ: ${TZ:-UTC}
volumes:
- langfuse_clickhouse_data:/var/lib/clickhouse
- langfuse_clickhouse_logs:/var/log/clickhouse-server
ports:
- ${CLICKHOUSE_PORT_OVERRIDE:-8123}:8123
- ${CLICKHOUSE_TCP_PORT_OVERRIDE:-9000}:9000
healthcheck:
test: wget --no-verbose --tries=1 --spider http://localhost:8123/ping || exit 1
interval: 5s
timeout: 5s
retries: 10
start_period: 1s
deploy:
resources:
limits:
cpus: ${CLICKHOUSE_CPU_LIMIT:-2.0}
memory: ${CLICKHOUSE_MEMORY_LIMIT:-4G}
reservations:
cpus: ${CLICKHOUSE_CPU_RESERVATION:-0.5}
memory: ${CLICKHOUSE_MEMORY_RESERVATION:-1G}
minio:
<<: *defaults
image: ${CGR_DEV_REGISTRY:-cgr.dev/}chainguard/minio:${MINIO_VERSION:-latest}
entrypoint: sh
# create the 'langfuse' bucket before starting the service
command: -c 'mkdir -p /data/langfuse && minio server --address ":9000" --console-address ":9001" /data'
environment:
MINIO_ROOT_USER: ${MINIO_ROOT_USER:-minio}
MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD:-miniosecret}
TZ: ${TZ:-UTC}
volumes:
- langfuse_minio_data:/data
healthcheck:
test: ["CMD", "mc", "ready", "local"]
interval: 1s
timeout: 5s
retries: 5
start_period: 1s
deploy:
resources:
limits:
cpus: ${MINIO_CPU_LIMIT:-1.0}
memory: ${MINIO_MEMORY_LIMIT:-1G}
reservations:
cpus: ${MINIO_CPU_RESERVATION:-0.25}
memory: ${MINIO_MEMORY_RESERVATION:-256M}
redis:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}redis:${REDIS_VERSION:-7}
command: >
--requirepass ${REDIS_AUTH:-myredissecret}
--maxmemory-policy noeviction
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 3s
timeout: 10s
retries: 10
deploy:
resources:
limits:
cpus: ${REDIS_CPU_LIMIT:-1.0}
memory: ${REDIS_MEMORY_LIMIT:-512M}
reservations:
cpus: ${REDIS_CPU_RESERVATION:-0.25}
memory: ${REDIS_MEMORY_RESERVATION:-256M}
postgres:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}postgres:${POSTGRES_VERSION:-17}
environment:
POSTGRES_USER: ${POSTGRES_USER:-postgres}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-postgres}
POSTGRES_DB: ${POSTGRES_DB:-postgres}
TZ: UTC
PGTZ: UTC
volumes:
- langfuse_postgres_data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 3s
timeout: 3s
retries: 10
deploy:
resources:
limits:
cpus: ${POSTGRES_CPU_LIMIT:-2.0}
memory: ${POSTGRES_MEMORY_LIMIT:-2G}
reservations:
cpus: ${POSTGRES_CPU_RESERVATION:-0.5}
memory: ${POSTGRES_MEMORY_RESERVATION:-512M}
volumes:
langfuse_postgres_data:
langfuse_clickhouse_data:
langfuse_clickhouse_logs:
langfuse_minio_data:

apps/sim/.env.example
# =============================================================================
# Sim - AI Agent Workflow Builder Configuration
# =============================================================================
# Documentation: https://docs.sim.ai
# GitHub: https://github.com/simstudioai/sim
# -----------------------------------------------------------------------------
# Time Zone Configuration
# -----------------------------------------------------------------------------
TZ=UTC
# -----------------------------------------------------------------------------
# Image Versions
# -----------------------------------------------------------------------------
SIM_VERSION=latest
SIM_REALTIME_VERSION=latest
SIM_MIGRATIONS_VERSION=latest
PGVECTOR_VERSION=pg17
# -----------------------------------------------------------------------------
# Global Registry (optional, e.g., mirror.example.com/)
# -----------------------------------------------------------------------------
# GLOBAL_REGISTRY=
# -----------------------------------------------------------------------------
# Port Overrides
# -----------------------------------------------------------------------------
SIM_PORT_OVERRIDE=3000
SIM_REALTIME_PORT_OVERRIDE=3002
POSTGRES_PORT_OVERRIDE=5432
# -----------------------------------------------------------------------------
# Application Configuration
# -----------------------------------------------------------------------------
NODE_ENV=production
NEXT_PUBLIC_APP_URL=http://localhost:3000
BETTER_AUTH_URL=http://localhost:3000
SOCKET_SERVER_URL=http://localhost:3002
NEXT_PUBLIC_SOCKET_URL=http://localhost:3002
# -----------------------------------------------------------------------------
# Security Secrets (REQUIRED: Generate secure values in production)
# -----------------------------------------------------------------------------
# Generate with: openssl rand -hex 32
BETTER_AUTH_SECRET=your_auth_secret_here
ENCRYPTION_KEY=your_encryption_key_here
# -----------------------------------------------------------------------------
# API Keys (Optional)
# -----------------------------------------------------------------------------
# COPILOT_API_KEY=
# SIM_AGENT_API_URL=
# -----------------------------------------------------------------------------
# Ollama Configuration
# -----------------------------------------------------------------------------
# For external Ollama on host machine:
# - macOS/Windows: http://host.docker.internal:11434
# - Linux: http://YOUR_HOST_IP:11434 (e.g., http://192.168.1.100:11434)
OLLAMA_URL=http://localhost:11434
# -----------------------------------------------------------------------------
# PostgreSQL Configuration
# -----------------------------------------------------------------------------
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=simstudio
# -----------------------------------------------------------------------------
# Resource Limits - Main Application
# -----------------------------------------------------------------------------
SIM_CPU_LIMIT=4.0
SIM_CPU_RESERVATION=2.0
SIM_MEMORY_LIMIT=8G
SIM_MEMORY_RESERVATION=4G
# -----------------------------------------------------------------------------
# Resource Limits - Realtime Server
# -----------------------------------------------------------------------------
SIM_REALTIME_CPU_LIMIT=2.0
SIM_REALTIME_CPU_RESERVATION=1.0
SIM_REALTIME_MEMORY_LIMIT=4G
SIM_REALTIME_MEMORY_RESERVATION=2G
# -----------------------------------------------------------------------------
# Resource Limits - Database Migrations
# -----------------------------------------------------------------------------
SIM_MIGRATIONS_CPU_LIMIT=1.0
SIM_MIGRATIONS_CPU_RESERVATION=0.5
SIM_MIGRATIONS_MEMORY_LIMIT=512M
SIM_MIGRATIONS_MEMORY_RESERVATION=256M
# -----------------------------------------------------------------------------
# Resource Limits - PostgreSQL
# -----------------------------------------------------------------------------
POSTGRES_CPU_LIMIT=2.0
POSTGRES_CPU_RESERVATION=1.0
POSTGRES_MEMORY_LIMIT=2G
POSTGRES_MEMORY_RESERVATION=1G
# -----------------------------------------------------------------------------
# Logging Configuration
# -----------------------------------------------------------------------------
LOG_MAX_SIZE=100m
LOG_MAX_FILE=3

apps/sim/README.md
# Sim - AI Agent Workflow Builder
Open-source platform to build and deploy AI agent workflows. Developers from trail-blazing startups to Fortune 500 companies deploy agentic workflows on the Sim platform.
## Features
- **Visual Workflow Builder**: Multi-step AI agents and tools with drag-and-drop interface
- **LLM Orchestration**: Coordinate LLM calls, tools, webhooks, and external APIs
- **Scheduled Execution**: Event-driven and scheduled agent executions
- **RAG Support**: First-class support for retrieval-augmented generation
- **Multi-tenant**: Workspace-based access model for teams
- **100+ Integrations**: Connect with popular services and APIs
## Requirements
| Resource | Minimum | Recommended |
| -------- | --------- | ----------- |
| CPU | 2 cores | 4+ cores |
| RAM | 12 GB | 16+ GB |
| Storage | 20 GB SSD | 50+ GB SSD |
| Docker | 20.10+ | Latest |
## Quick Start
```bash
# Copy environment file
cp .env.example .env
# IMPORTANT: Generate secure secrets in production
sed -i "s/your_auth_secret_here/$(openssl rand -hex 32)/" .env
sed -i "s/your_encryption_key_here/$(openssl rand -hex 32)/" .env
# Start services
docker compose up -d
# View logs
docker compose logs -f simstudio
```
Access the application at [http://localhost:3000](http://localhost:3000)
## Configuration
### Required Environment Variables
Before deployment, update these critical settings in `.env`:
```bash
# Security (REQUIRED - generate with: openssl rand -hex 32)
BETTER_AUTH_SECRET=<your-secret-here>
ENCRYPTION_KEY=<your-secret-here>
# Application URLs (update for production)
NEXT_PUBLIC_APP_URL=https://sim.yourdomain.com
BETTER_AUTH_URL=https://sim.yourdomain.com
NEXT_PUBLIC_SOCKET_URL=https://sim.yourdomain.com
# Database credentials (change defaults in production)
POSTGRES_USER=postgres
POSTGRES_PASSWORD=<strong-password>
POSTGRES_DB=simstudio
```
### Using with Ollama
Sim can work with local AI models using [Ollama](https://ollama.ai):
**External Ollama (running on host machine)**:
```bash
# macOS/Windows
OLLAMA_URL=http://host.docker.internal:11434
# Linux - use your actual host IP
OLLAMA_URL=http://192.168.1.100:11434
```
> **Note**: Inside Docker, `localhost` refers to the container. Use `host.docker.internal` (macOS/Windows) or your host's IP address (Linux).
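On Linux you can alternatively map `host.docker.internal` to the host gateway so the same URL works on every platform; a sketch as a compose override (assumes Docker Engine 20.10+, which supports `host-gateway`):

```yaml
# docker-compose.override.yml (sketch) - makes host.docker.internal
# resolve to the host on Linux as well.
services:
  simstudio:
    extra_hosts:
      - "host.docker.internal:host-gateway"
```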
### Port Configuration
Default ports can be overridden via environment variables:
```bash
SIM_PORT_OVERRIDE=3000 # Main application
SIM_REALTIME_PORT_OVERRIDE=3002 # Realtime server
POSTGRES_PORT_OVERRIDE=5432 # PostgreSQL database
```
### Resource Limits
Adjust resource allocation based on your workload:
```bash
# Main application
SIM_CPU_LIMIT=4.0
SIM_MEMORY_LIMIT=8G
# Realtime server
SIM_REALTIME_CPU_LIMIT=2.0
SIM_REALTIME_MEMORY_LIMIT=4G
# PostgreSQL
POSTGRES_CPU_LIMIT=2.0
POSTGRES_MEMORY_LIMIT=2G
```
## Service Architecture
The deployment consists of 4 services:
1. **simstudio**: Main Next.js application (port 3000)
2. **realtime**: WebSocket server for real-time features (port 3002)
3. **migrations**: Database schema management (runs once)
4. **db**: PostgreSQL 17 with pgvector extension (port 5432)
## Common Operations
### View Logs
```bash
# All services
docker compose logs -f
# Specific service
docker compose logs -f simstudio
```
### Stop Services
```bash
docker compose down
```
### Update to Latest Version
```bash
docker compose pull
docker compose up -d
```
### Backup Database
```bash
docker compose exec db pg_dump -U postgres simstudio > backup_$(date +%Y%m%d).sql
```
### Restore Database
```bash
cat backup.sql | docker compose exec -T db psql -U postgres simstudio
```
## Security Considerations
- **Change default credentials**: Update `POSTGRES_PASSWORD` in production
- **Generate strong secrets**: Use `openssl rand -hex 32` for all secret values
- **Use HTTPS**: Configure reverse proxy (Nginx/Caddy) with SSL certificates
- **Network isolation**: Keep database on internal network
- **Regular backups**: Automate database backups
- **Update regularly**: Pull latest images to get security patches
## Production Deployment
For production deployments:
1. **Use reverse proxy** (Nginx, Caddy, Traefik) for SSL/TLS termination
2. **Configure firewall** to restrict database access
3. **Set up monitoring** (health checks, metrics, logs)
4. **Enable backups** (automated PostgreSQL backups)
5. **Use external database** for better performance and reliability (optional)
Example Caddy configuration:
```caddy
sim.yourdomain.com {
reverse_proxy localhost:3000
handle /socket.io/* {
reverse_proxy localhost:3002
}
}
```
## Troubleshooting
### Models not showing in dropdown
If using external Ollama on host machine, ensure `OLLAMA_URL` uses `host.docker.internal` or your host's IP address, not `localhost`.
### Database connection errors
- Verify PostgreSQL is healthy: `docker compose ps`
- Check database logs: `docker compose logs db`
- Ensure migrations completed: `docker compose logs migrations`
### Port conflicts
If ports are already in use, override them:
```bash
SIM_PORT_OVERRIDE=3100 \
SIM_REALTIME_PORT_OVERRIDE=3102 \
POSTGRES_PORT_OVERRIDE=5433 \
docker compose up -d
```
## Additional Resources
- **Official Documentation**: [https://docs.sim.ai](https://docs.sim.ai)
- **GitHub Repository**: [https://github.com/simstudioai/sim](https://github.com/simstudioai/sim)
- **Cloud-hosted Version**: [https://sim.ai](https://sim.ai)
- **Self-hosting Guide**: [https://docs.sim.ai/self-hosting](https://docs.sim.ai/self-hosting)
## License
This configuration follows the Sim project licensing. Check the [official repository](https://github.com/simstudioai/sim) for license details.
## Support
For issues and questions:
- GitHub Issues: [https://github.com/simstudioai/sim/issues](https://github.com/simstudioai/sim/issues)
- Documentation: [https://docs.sim.ai](https://docs.sim.ai)

apps/sim/README.zh.md
# Sim - AI Agent Workflow Builder
Open-source platform to build and deploy AI agent workflows. Developers from trail-blazing startups to Fortune 500 companies deploy agentic workflows on the Sim platform.
## Features
- **Visual Workflow Builder**: Build multi-step AI agents and tools with a drag-and-drop interface
- **LLM Orchestration**: Coordinate LLM calls, tools, webhooks, and external APIs
- **Scheduled Execution**: Event-driven and scheduled agent executions
- **RAG Support**: First-class support for retrieval-augmented generation (RAG)
- **Multi-tenant**: Workspace-based access model for teams
- **100+ Integrations**: Connect with popular services and APIs
## Requirements
| Resource | Minimum | Recommended |
| -------- | --------- | ----------- |
| CPU | 2 cores | 4+ cores |
| RAM | 12 GB | 16+ GB |
| Storage | 20 GB SSD | 50+ GB SSD |
| Docker | 20.10+ | Latest |
## Quick Start
```bash
# Copy the environment file
cp .env.example .env
# IMPORTANT: Generate secure secrets in production
sed -i "s/your_auth_secret_here/$(openssl rand -hex 32)/" .env
sed -i "s/your_encryption_key_here/$(openssl rand -hex 32)/" .env
# Start services
docker compose up -d
# View logs
docker compose logs -f simstudio
```
Access the application at [http://localhost:3000](http://localhost:3000)
## Configuration
### Required Environment Variables
Before deployment, update these critical settings in `.env`:
```bash
# Security (REQUIRED - generate with: openssl rand -hex 32)
BETTER_AUTH_SECRET=<your-secret-here>
ENCRYPTION_KEY=<your-secret-here>
# Application URLs (update for production)
NEXT_PUBLIC_APP_URL=https://sim.yourdomain.com
BETTER_AUTH_URL=https://sim.yourdomain.com
NEXT_PUBLIC_SOCKET_URL=https://sim.yourdomain.com
# Database credentials (change the defaults in production)
POSTGRES_USER=postgres
POSTGRES_PASSWORD=<strong-password>
POSTGRES_DB=simstudio
```
### Using with Ollama
Sim can work with local AI models via [Ollama](https://ollama.ai):
**External Ollama (running on the host machine)**:
```bash
# macOS/Windows
OLLAMA_URL=http://host.docker.internal:11434
# Linux - use your actual host IP
OLLAMA_URL=http://192.168.1.100:11434
```
> **Note**: Inside Docker, `localhost` refers to the container itself. Use `host.docker.internal` (macOS/Windows) or your host's IP address (Linux).
### Port Configuration
Default ports can be overridden via environment variables:
```bash
SIM_PORT_OVERRIDE=3000          # Main application
SIM_REALTIME_PORT_OVERRIDE=3002 # Realtime server
POSTGRES_PORT_OVERRIDE=5432     # PostgreSQL database
```
### Resource Limits
Adjust resource allocation based on your workload:
```bash
# Main application
SIM_CPU_LIMIT=4.0
SIM_MEMORY_LIMIT=8G
# Realtime server
SIM_REALTIME_CPU_LIMIT=2.0
SIM_REALTIME_MEMORY_LIMIT=4G
# PostgreSQL
POSTGRES_CPU_LIMIT=2.0
POSTGRES_MEMORY_LIMIT=2G
```
## Service Architecture
The deployment consists of 4 services:
1. **simstudio**: Main Next.js application (port 3000)
2. **realtime**: WebSocket server for real-time features (port 3002)
3. **migrations**: Database schema management (runs once)
4. **db**: PostgreSQL 17 with the pgvector extension (port 5432)
## Common Operations
### View Logs
```bash
# All services
docker compose logs -f
# A specific service
docker compose logs -f simstudio
```
### Stop Services
```bash
docker compose down
```
### Update to the Latest Version
```bash
docker compose pull
docker compose up -d
```
### Back Up the Database
```bash
docker compose exec db pg_dump -U postgres simstudio > backup_$(date +%Y%m%d).sql
```
### Restore the Database
```bash
cat backup.sql | docker compose exec -T db psql -U postgres simstudio
```
## Security Considerations
- **Change default credentials**: Update `POSTGRES_PASSWORD` in production
- **Generate strong secrets**: Use `openssl rand -hex 32` for all secret values
- **Use HTTPS**: Configure a reverse proxy (Nginx/Caddy) with SSL certificates
- **Network isolation**: Keep the database on an internal network
- **Regular backups**: Automate database backups
- **Update regularly**: Pull the latest images to get security patches
## Production Deployment
For production deployments:
1. **Use a reverse proxy** (Nginx, Caddy, Traefik) for SSL/TLS termination
2. **Configure a firewall** to restrict database access
3. **Set up monitoring** (health checks, metrics, logs)
4. **Enable backups** (automated PostgreSQL backups)
5. **Use an external database** for better performance and reliability (optional)
Example Caddy configuration:
```caddy
sim.yourdomain.com {
reverse_proxy localhost:3000
handle /socket.io/* {
reverse_proxy localhost:3002
}
}
```
## Troubleshooting
### Models not showing in the dropdown
If using external Ollama on the host machine, make sure `OLLAMA_URL` uses `host.docker.internal` or your host's IP address, not `localhost`.
### Database connection errors
- Verify that PostgreSQL is healthy: `docker compose ps`
- Check the database logs: `docker compose logs db`
- Make sure the migrations completed: `docker compose logs migrations`
### Port conflicts
If the ports are already in use, override them:
```bash
SIM_PORT_OVERRIDE=3100 \
SIM_REALTIME_PORT_OVERRIDE=3102 \
POSTGRES_PORT_OVERRIDE=5433 \
docker compose up -d
```
## Additional Resources
- **Official Documentation**: [https://docs.sim.ai](https://docs.sim.ai)
- **GitHub Repository**: [https://github.com/simstudioai/sim](https://github.com/simstudioai/sim)
- **Cloud-hosted Version**: [https://sim.ai](https://sim.ai)
- **Self-hosting Guide**: [https://docs.sim.ai/self-hosting](https://docs.sim.ai/self-hosting)
## License
This configuration follows the Sim project's licensing. Check the [official repository](https://github.com/simstudioai/sim) for license details.
## Support
For issues and questions:
- GitHub Issues: [https://github.com/simstudioai/sim/issues](https://github.com/simstudioai/sim/issues)
- Documentation: [https://docs.sim.ai](https://docs.sim.ai)

# Sim - AI Agent Workflow Builder
# Open-source platform to build and deploy AI agent workflows
# Documentation: https://docs.sim.ai
x-defaults: &defaults
restart: unless-stopped
logging:
driver: json-file
options:
max-size: ${LOG_MAX_SIZE:-100m}
max-file: "${LOG_MAX_FILE:-3}"
services:
simstudio:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}ghcr.io/simstudioai/simstudio:${SIM_VERSION:-latest}
ports:
- "${SIM_PORT_OVERRIDE:-3000}:3000"
environment:
- TZ=${TZ:-UTC}
- NODE_ENV=production
- DATABASE_URL=postgresql://${POSTGRES_USER:-postgres}:${POSTGRES_PASSWORD:-postgres}@db:5432/${POSTGRES_DB:-simstudio}
- BETTER_AUTH_URL=${NEXT_PUBLIC_APP_URL:-http://localhost:3000}
- NEXT_PUBLIC_APP_URL=${NEXT_PUBLIC_APP_URL:-http://localhost:3000}
- BETTER_AUTH_SECRET=${BETTER_AUTH_SECRET:-your_auth_secret_here}
- ENCRYPTION_KEY=${ENCRYPTION_KEY:-your_encryption_key_here}
- COPILOT_API_KEY=${COPILOT_API_KEY:-}
- SIM_AGENT_API_URL=${SIM_AGENT_API_URL:-}
- OLLAMA_URL=${OLLAMA_URL:-http://localhost:11434}
- SOCKET_SERVER_URL=${SOCKET_SERVER_URL:-http://localhost:3002}
- NEXT_PUBLIC_SOCKET_URL=${NEXT_PUBLIC_SOCKET_URL:-http://localhost:3002}
depends_on:
db:
condition: service_healthy
migrations:
condition: service_completed_successfully
realtime:
condition: service_healthy
healthcheck:
test: ["CMD", "wget", "--spider", "--quiet", "http://127.0.0.1:3000"]
interval: 90s
timeout: 10s
retries: 3
start_period: 30s
deploy:
resources:
limits:
cpus: ${SIM_CPU_LIMIT:-4.0}
memory: ${SIM_MEMORY_LIMIT:-8G}
reservations:
cpus: ${SIM_CPU_RESERVATION:-2.0}
memory: ${SIM_MEMORY_RESERVATION:-4G}
realtime:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}ghcr.io/simstudioai/realtime:${SIM_REALTIME_VERSION:-latest}
ports:
- "${SIM_REALTIME_PORT_OVERRIDE:-3002}:3002"
environment:
- TZ=${TZ:-UTC}
- DATABASE_URL=postgresql://${POSTGRES_USER:-postgres}:${POSTGRES_PASSWORD:-postgres}@db:5432/${POSTGRES_DB:-simstudio}
- NEXT_PUBLIC_APP_URL=${NEXT_PUBLIC_APP_URL:-http://localhost:3000}
- BETTER_AUTH_URL=${BETTER_AUTH_URL:-http://localhost:3000}
- BETTER_AUTH_SECRET=${BETTER_AUTH_SECRET:-your_auth_secret_here}
depends_on:
db:
condition: service_healthy
healthcheck:
test: ["CMD", "wget", "--spider", "--quiet", "http://127.0.0.1:3002/health"]
interval: 90s
timeout: 10s
retries: 3
start_period: 30s
deploy:
resources:
limits:
cpus: ${SIM_REALTIME_CPU_LIMIT:-2.0}
memory: ${SIM_REALTIME_MEMORY_LIMIT:-4G}
reservations:
cpus: ${SIM_REALTIME_CPU_RESERVATION:-1.0}
memory: ${SIM_REALTIME_MEMORY_RESERVATION:-2G}
migrations:
image: ${GLOBAL_REGISTRY:-}ghcr.io/simstudioai/migrations:${SIM_MIGRATIONS_VERSION:-latest}
working_dir: /app/packages/db
environment:
- TZ=${TZ:-UTC}
- DATABASE_URL=postgresql://${POSTGRES_USER:-postgres}:${POSTGRES_PASSWORD:-postgres}@db:5432/${POSTGRES_DB:-simstudio}
depends_on:
db:
condition: service_healthy
command: ["bun", "run", "db:migrate"]
restart: "no"
logging:
driver: json-file
options:
max-size: ${LOG_MAX_SIZE:-100m}
max-file: "${LOG_MAX_FILE:-3}"
deploy:
resources:
limits:
cpus: ${SIM_MIGRATIONS_CPU_LIMIT:-1.0}
memory: ${SIM_MIGRATIONS_MEMORY_LIMIT:-512M}
reservations:
cpus: ${SIM_MIGRATIONS_CPU_RESERVATION:-0.5}
memory: ${SIM_MIGRATIONS_MEMORY_RESERVATION:-256M}
db:
<<: *defaults
image: ${GLOBAL_REGISTRY:-}pgvector/pgvector:${PGVECTOR_VERSION:-pg17}
ports:
- "${POSTGRES_PORT_OVERRIDE:-5432}:5432"
environment:
- TZ=${TZ:-UTC}
- POSTGRES_USER=${POSTGRES_USER:-postgres}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-postgres}
- POSTGRES_DB=${POSTGRES_DB:-simstudio}
volumes:
- postgres_data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-postgres}"]
interval: 10s
timeout: 5s
retries: 5
start_period: 10s
deploy:
resources:
limits:
cpus: ${POSTGRES_CPU_LIMIT:-2.0}
memory: ${POSTGRES_MEMORY_LIMIT:-2G}
reservations:
cpus: ${POSTGRES_CPU_RESERVATION:-1.0}
memory: ${POSTGRES_MEMORY_RESERVATION:-1G}
volumes:
postgres_data: