feat: add sim & pingap

This commit is contained in:
Sun-ZhenXing
2025-12-27 11:24:44 +08:00
parent 72b36f2748
commit d536fbc995
25 changed files with 1727 additions and 483 deletions

View File

@@ -1,63 +0,0 @@
# Langflow Configuration
# Versions
LANGFLOW_VERSION=latest
POSTGRES_VERSION=16-alpine
# Port Configuration
LANGFLOW_PORT_OVERRIDE=7860
# PostgreSQL Configuration
POSTGRES_DB=langflow
POSTGRES_USER=langflow
POSTGRES_PASSWORD=langflow
# Storage Configuration
LANGFLOW_CONFIG_DIR=/app/langflow
# Server Configuration
LANGFLOW_HOST=0.0.0.0
LANGFLOW_WORKERS=1
# Authentication - IMPORTANT: Configure for production!
# Set LANGFLOW_AUTO_LOGIN=false to require login
LANGFLOW_AUTO_LOGIN=true
LANGFLOW_SUPERUSER=langflow
LANGFLOW_SUPERUSER_PASSWORD=langflow
# Security - IMPORTANT: Generate a secure secret key for production!
# Use: python -c "from secrets import token_urlsafe; print(token_urlsafe(32))"
LANGFLOW_SECRET_KEY=
# Features
LANGFLOW_AUTO_SAVING=true
LANGFLOW_AUTO_SAVING_INTERVAL=1000
LANGFLOW_STORE_ENVIRONMENT_VARIABLES=true
LANGFLOW_FALLBACK_TO_ENV_VAR=true
# Optional: Custom components directory
LANGFLOW_COMPONENTS_PATH=
# Optional: Load flows from directory on startup
LANGFLOW_LOAD_FLOWS_PATH=
# Logging
LANGFLOW_LOG_LEVEL=error
# Timezone
TZ=UTC
# Analytics
DO_NOT_TRACK=false
# Resource Limits - Langflow
LANGFLOW_CPU_LIMIT=2.0
LANGFLOW_CPU_RESERVATION=0.5
LANGFLOW_MEMORY_LIMIT=2G
LANGFLOW_MEMORY_RESERVATION=512M
# Resource Limits - PostgreSQL
POSTGRES_CPU_LIMIT=1.0
POSTGRES_CPU_RESERVATION=0.25
POSTGRES_MEMORY_LIMIT=1G
POSTGRES_MEMORY_RESERVATION=256M

View File

@@ -1,230 +0,0 @@
# Langflow
Langflow is a low-code visual framework for building AI applications. It's Python-based and agnostic to any model, API, or database, making it easy to build RAG applications, multi-agent systems, and custom AI workflows.
## Features
- **Visual Flow Builder**: Drag-and-drop interface for building AI applications
- **Multi-Model Support**: Works with OpenAI, Anthropic, Google, HuggingFace, and more
- **RAG Components**: Built-in support for vector databases and retrieval
- **Custom Components**: Create your own Python components
- **Agent Support**: Build multi-agent systems with memory and tools
- **Real-Time Monitoring**: Track executions and debug flows
- **API Integration**: REST API for programmatic access
## Quick Start
1. Copy `.env.example` to `.env`:
```bash
cp .env.example .env
```
2. (Optional) Edit `.env` to customize settings:
- Generate a secure `LANGFLOW_SECRET_KEY` for production
- Set `LANGFLOW_AUTO_LOGIN=false` to require authentication
- Configure superuser credentials
- Add API keys for LLM providers
3. Start Langflow:
```bash
docker compose up -d
```
4. Wait for services to be ready
5. Access Langflow UI at `http://localhost:7860`
6. Start building your AI application!
## Default Configuration
| Service | Port | Description |
| ---------- | ---- | ------------------- |
| Langflow | 7860 | Web UI and API |
| PostgreSQL | 5432 | Database (internal) |
**Default Credentials** (if authentication is enabled):
- Username: `langflow`
- Password: `langflow`
## Environment Variables
Key environment variables (see `.env.example` for full list):
| Variable | Description | Default |
| ----------------------------- | ----------------------------- | ---------- |
| `LANGFLOW_VERSION` | Langflow image version | `latest` |
| `LANGFLOW_PORT_OVERRIDE` | Host port for UI | `7860` |
| `POSTGRES_PASSWORD` | Database password | `langflow` |
| `LANGFLOW_AUTO_LOGIN` | Auto-login (disable for auth) | `true` |
| `LANGFLOW_SUPERUSER` | Superuser username | `langflow` |
| `LANGFLOW_SUPERUSER_PASSWORD` | Superuser password | `langflow` |
| `LANGFLOW_SECRET_KEY` | Secret key for sessions | (empty) |
| `LANGFLOW_COMPONENTS_PATH` | Custom components directory | (empty) |
| `LANGFLOW_LOAD_FLOWS_PATH` | Auto-load flows directory | (empty) |
| `TZ` | Timezone | `UTC` |
## Resource Requirements
**Minimum**:
- CPU: 1 core
- RAM: 1GB
- Disk: 5GB
**Recommended**:
- CPU: 2+ cores
- RAM: 2GB+
- Disk: 20GB+
## Volumes
- `postgres_data`: PostgreSQL database data
- `langflow_data`: Langflow configuration, flows, and logs
## Using Langflow
### Building Your First Flow
1. Access the UI at `http://localhost:7860`
2. Click "New Flow" or use a template
3. Drag components from the sidebar to the canvas
4. Connect components by dragging between ports
5. Configure component parameters
6. Click "Run" to test your flow
7. Use the API or integrate with your application
### Adding API Keys
You can add API keys for LLM providers in two ways:
#### Option 1: Global Variables (Recommended)
1. Click your profile icon → Settings
2. Go to "Global Variables"
3. Add your API keys (e.g., `OPENAI_API_KEY`)
4. Reference them in components using `{OPENAI_API_KEY}`
#### Option 2: Environment Variables
Add to your `.env` file:
```text
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...
```
Langflow will automatically create global variables from these.
### Using the API
Get your API token from the UI:
1. Click your profile icon → Settings
2. Go to "API Keys"
3. Create a new API key
Example: Run a flow
```bash
curl -X POST http://localhost:7860/api/v1/run/YOUR_FLOW_ID \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"input_value": "Hello"}'
```
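The same call from Python, as a minimal sketch using the `requests` package; `YOUR_FLOW_ID` and `YOUR_API_KEY` are placeholders exactly as in the curl example above:
```python
import requests

FLOW_ID = "YOUR_FLOW_ID"  # placeholder, same as in the curl example
API_KEY = "YOUR_API_KEY"  # placeholder, created under Settings -> API Keys

resp = requests.post(
    f"http://localhost:7860/api/v1/run/{FLOW_ID}",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={"input_value": "Hello"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```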
### Custom Components
1. Create a directory for your components
2. Set `LANGFLOW_COMPONENTS_PATH` in `.env`
3. Create Python files with your component classes
4. Restart Langflow to load them
Example component structure:
```python
from langflow import CustomComponent

class MyComponent(CustomComponent):
    display_name = "My Component"
    description = "Does something cool"

    def build(self):
        # Your component logic
        result = "Hello from My Component"
        return result
```
## Security Considerations
1. **Secret Key**: Generate a strong `LANGFLOW_SECRET_KEY` for production:
```bash
python -c "from secrets import token_urlsafe; print(token_urlsafe(32))"
```
2. **Authentication**: Set `LANGFLOW_AUTO_LOGIN=false` to require login
3. **Database Password**: Use a strong PostgreSQL password
4. **API Keys**: Store sensitive keys as global variables, not in flows
5. **SSL/TLS**: Use reverse proxy with HTTPS in production
6. **Network Access**: Restrict access with firewall rules
## Upgrading
To upgrade Langflow:
1. Update `LANGFLOW_VERSION` in `.env` (or use `latest`)
2. Pull and restart:
```bash
docker compose pull
docker compose up -d
```
3. Check logs:
```bash
docker compose logs -f langflow
```
## Troubleshooting
**Service won't start:**
- Check logs: `docker compose logs langflow`
- Verify database: `docker compose ps postgres`
- Ensure sufficient resources allocated
**Cannot access UI** (a quick scripted check is sketched after this list):
- Check that port 7860 is not in use: `netstat -an | grep 7860` (or `findstr 7860` on Windows)
- Verify firewall settings
- Check container health: `docker compose ps`
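A minimal Python sketch of that connectivity check, assuming only the `requests` package; the `/health_check` path is the same endpoint the compose health check uses:
```python
import requests

# Hit the same endpoint as the docker-compose healthcheck (default port 7860).
try:
    resp = requests.get("http://localhost:7860/health_check", timeout=5)
    print(f"Langflow responded with HTTP {resp.status_code}")
except requests.RequestException as exc:
    print(f"Langflow is not reachable: {exc}")
```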
**API key not working:**
- Verify the key is set in Global Variables
- Check the variable name matches in your components
- Ensure `LANGFLOW_STORE_ENVIRONMENT_VARIABLES=true`
**Flow execution errors:**
- Check component configurations
- Review logs in the UI under each component
- Verify API keys have sufficient credits/permissions
## References
- Official Website: <https://langflow.org>
- Documentation: <https://docs.langflow.org>
- GitHub: <https://github.com/langflow-ai/langflow>
- Discord Community: <https://discord.gg/EqksyE2EX9>
- Docker Hub: <https://hub.docker.com/r/langflowai/langflow>
## License
Langflow is licensed under MIT. See [LICENSE](https://github.com/langflow-ai/langflow/blob/main/LICENSE) for more information.

View File

@@ -1,230 +0,0 @@
# Langflow
Langflow is a low-code visual framework for building AI applications. It is Python-based and agnostic to any model, API, or database, making it easy to build RAG applications, multi-agent systems, and custom AI workflows.
## Features
- **Visual Flow Builder**: Drag-and-drop interface for building AI applications
- **Multi-Model Support**: Works with OpenAI, Anthropic, Google, HuggingFace, and more
- **RAG Components**: Built-in support for vector databases and retrieval
- **Custom Components**: Create your own Python components
- **Agent Support**: Build multi-agent systems with memory and tools
- **Real-Time Monitoring**: Track executions and debug flows
- **API Integration**: REST API for programmatic access
## Quick Start
1. Copy `.env.example` to `.env`:
```bash
copy .env.example .env
```
2. (Optional) Edit `.env` to customize settings:
- Generate a secure `LANGFLOW_SECRET_KEY` for production
- Set `LANGFLOW_AUTO_LOGIN=false` to require authentication
- Configure superuser credentials
- Add API keys for LLM providers
3. Start Langflow:
```bash
docker compose up -d
```
4. Wait for the services to be ready
5. Access the Langflow UI at `http://localhost:7860`
6. Start building your AI application!
## Default Configuration
| Service | Port | Description |
| ---------- | ---- | ------------------- |
| Langflow | 7860 | Web UI and API |
| PostgreSQL | 5432 | Database (internal) |
**Default Credentials** (if authentication is enabled):
- Username: `langflow`
- Password: `langflow`
## Environment Variables
Key environment variables (see `.env.example` for the full list):
| Variable | Description | Default |
| ----------------------------- | ---------------------------------------------- | ---------- |
| `LANGFLOW_VERSION` | Langflow image version | `latest` |
| `LANGFLOW_PORT_OVERRIDE` | Host port for the UI | `7860` |
| `POSTGRES_PASSWORD` | Database password | `langflow` |
| `LANGFLOW_AUTO_LOGIN` | Auto-login (disable to require authentication) | `true` |
| `LANGFLOW_SUPERUSER` | Superuser username | `langflow` |
| `LANGFLOW_SUPERUSER_PASSWORD` | Superuser password | `langflow` |
| `LANGFLOW_SECRET_KEY` | Session secret key | (empty) |
| `LANGFLOW_COMPONENTS_PATH` | Custom components directory | (empty) |
| `LANGFLOW_LOAD_FLOWS_PATH` | Auto-load flows directory | (empty) |
| `TZ` | Timezone | `UTC` |
## Resource Requirements
**Minimum**:
- CPU: 1 core
- RAM: 1GB
- Disk: 5GB
**Recommended**:
- CPU: 2+ cores
- RAM: 2GB+
- Disk: 20GB+
## Volumes
- `postgres_data`: PostgreSQL database data
- `langflow_data`: Langflow configuration, flows, and logs
## Using Langflow
### Building Your First Flow
1. Access the UI at `http://localhost:7860`
2. Click "New Flow" or use a template
3. Drag components from the sidebar onto the canvas
4. Connect components by dragging between ports
5. Configure component parameters
6. Click "Run" to test your flow
7. Use the API or integrate with your application
### Adding API Keys
You can add API keys for LLM providers in two ways:
#### Option 1: Global Variables (Recommended)
1. Click your profile icon → Settings
2. Go to "Global Variables"
3. Add your API keys (e.g., `OPENAI_API_KEY`)
4. Reference them in components using `{OPENAI_API_KEY}`
#### Option 2: Environment Variables
Add them to your `.env` file:
```text
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...
```
Langflow will automatically create global variables from these.
### Using the API
Get your API token from the UI:
1. Click your profile icon → Settings
2. Go to "API Keys"
3. Create a new API key
Example: run a flow
```bash
curl -X POST http://localhost:7860/api/v1/run/YOUR_FLOW_ID \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"input_value": "Hello"}'
```
### Custom Components
1. Create a directory for your components
2. Set `LANGFLOW_COMPONENTS_PATH` in `.env`
3. Create Python files with your component classes
4. Restart Langflow to load them
Example component structure:
```python
from langflow import CustomComponent

class MyComponent(CustomComponent):
    display_name = "My Component"
    description = "Does something cool"

    def build(self):
        # Your component logic
        result = "Hello from My Component"
        return result
```
## Security Considerations
1. **Secret key**: Generate a strong `LANGFLOW_SECRET_KEY` for production:
```bash
python -c "from secrets import token_urlsafe; print(token_urlsafe(32))"
```
2. **Authentication**: Set `LANGFLOW_AUTO_LOGIN=false` to require login
3. **Database password**: Use a strong PostgreSQL password
4. **API keys**: Store sensitive keys as global variables, not inside flows
5. **SSL/TLS**: Use a reverse proxy with HTTPS in production
6. **Network access**: Restrict access with firewall rules
## Upgrading
To upgrade Langflow:
1. Update `LANGFLOW_VERSION` in `.env` (or use `latest`)
2. Pull and restart:
```bash
docker compose pull
docker compose up -d
```
3. Check the logs:
```bash
docker compose logs -f langflow
```
## Troubleshooting
**Service won't start:**
- Check the logs: `docker compose logs langflow`
- Verify the database: `docker compose ps postgres`
- Make sure sufficient resources are allocated
**Cannot access the UI:**
- Check that port 7860 is not in use: `netstat -an | findstr 7860`
- Verify firewall settings
- Check container health: `docker compose ps`
**API key not working:**
- Verify the key is set in Global Variables
- Check that the variable name matches in your components
- Make sure `LANGFLOW_STORE_ENVIRONMENT_VARIABLES=true`
**Flow execution errors:**
- Check the component configurations
- Review the logs under each component in the UI
- Verify that the API keys have sufficient credits/permissions
## References
- Official website: <https://langflow.org>
- Documentation: <https://docs.langflow.org>
- GitHub: <https://github.com/langflow-ai/langflow>
- Discord community: <https://discord.gg/EqksyE2EX9>
- Docker Hub: <https://hub.docker.com/r/langflowai/langflow>
## License
Langflow is licensed under MIT. See [LICENSE](https://github.com/langflow-ai/langflow/blob/main/LICENSE) for more information.

View File

@@ -1,129 +0,0 @@
# Langflow - Visual Framework for Building AI Applications
# https://github.com/langflow-ai/langflow
#
# Langflow is a low-code app builder for RAG and multi-agent AI applications.
# It's Python-based and agnostic to any model, API, or database.
#
# Key Features:
# - Visual flow builder for AI applications
# - Support for multiple LLMs (OpenAI, Anthropic, Google, etc.)
# - Built-in components for RAG, agents, and chains
# - Custom component support
# - Real-time monitoring and logging
# - Multi-user support with authentication
#
# Default Credentials:
# - Access UI at http://localhost:7860
# - No authentication by default (set LANGFLOW_AUTO_LOGIN=false to enable)
#
# Security Notes:
# - Set LANGFLOW_SECRET_KEY for production
# - Use strong database passwords
# - Enable authentication in production
# - Store API keys as global variables, not in flows
# - Enable SSL/TLS in production
#
# License: MIT (https://github.com/langflow-ai/langflow/blob/main/LICENSE)
x-defaults: &defaults
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: "3"
services:
  langflow:
    <<: *defaults
    image: ${GLOBAL_REGISTRY:-}langflowai/langflow:${LANGFLOW_VERSION:-latest}
    pull_policy: always
    container_name: langflow
    ports:
      - "${LANGFLOW_PORT_OVERRIDE:-7860}:7860"
    environment:
      # Database configuration
      - LANGFLOW_DATABASE_URL=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}
      # Storage configuration
      - LANGFLOW_CONFIG_DIR=${LANGFLOW_CONFIG_DIR:-/app/langflow}
      # Server configuration
      - LANGFLOW_HOST=${LANGFLOW_HOST:-0.0.0.0}
      - LANGFLOW_PORT=7860
      - LANGFLOW_WORKERS=${LANGFLOW_WORKERS:-1}
      # Authentication - IMPORTANT: Configure for production
      - LANGFLOW_AUTO_LOGIN=${LANGFLOW_AUTO_LOGIN:-true}
      - LANGFLOW_SUPERUSER=${LANGFLOW_SUPERUSER:-langflow}
      - LANGFLOW_SUPERUSER_PASSWORD=${LANGFLOW_SUPERUSER_PASSWORD:-langflow}
      - LANGFLOW_SECRET_KEY=${LANGFLOW_SECRET_KEY:-}
      # Features
      - LANGFLOW_AUTO_SAVING=${LANGFLOW_AUTO_SAVING:-true}
      - LANGFLOW_AUTO_SAVING_INTERVAL=${LANGFLOW_AUTO_SAVING_INTERVAL:-1000}
      - LANGFLOW_STORE_ENVIRONMENT_VARIABLES=${LANGFLOW_STORE_ENVIRONMENT_VARIABLES:-true}
      - LANGFLOW_FALLBACK_TO_ENV_VAR=${LANGFLOW_FALLBACK_TO_ENV_VAR:-true}
      # Optional: Custom components path
      - LANGFLOW_COMPONENTS_PATH=${LANGFLOW_COMPONENTS_PATH:-}
      # Optional: Load flows from directory
      - LANGFLOW_LOAD_FLOWS_PATH=${LANGFLOW_LOAD_FLOWS_PATH:-}
      # Logging
      - LANGFLOW_LOG_LEVEL=${LANGFLOW_LOG_LEVEL:-error}
      # Other settings
      - TZ=${TZ:-UTC}
      - DO_NOT_TRACK=${DO_NOT_TRACK:-false}
    volumes:
      - langflow_data:${LANGFLOW_CONFIG_DIR:-/app/langflow}
    depends_on:
      postgres:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "python", "-c", "import requests; requests.get('http://localhost:7860/health_check', timeout=5)"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 60s
    deploy:
      resources:
        limits:
          cpus: "${LANGFLOW_CPU_LIMIT:-2.0}"
          memory: "${LANGFLOW_MEMORY_LIMIT:-2G}"
        reservations:
          cpus: "${LANGFLOW_CPU_RESERVATION:-0.5}"
          memory: "${LANGFLOW_MEMORY_RESERVATION:-512M}"
  postgres:
    <<: *defaults
    image: ${GLOBAL_REGISTRY:-}postgres:${POSTGRES_VERSION:-16-alpine}
    container_name: langflow-postgres
    environment:
      - POSTGRES_DB=${POSTGRES_DB:-langflow}
      - POSTGRES_USER=${POSTGRES_USER:-langflow}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-langflow}
      - POSTGRES_INITDB_ARGS=--encoding=UTF8
      - TZ=${TZ:-UTC}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-langflow} -d ${POSTGRES_DB:-langflow}"]
      interval: 10s
      timeout: 5s
      retries: 5
    deploy:
      resources:
        limits:
          cpus: "${POSTGRES_CPU_LIMIT:-1.0}"
          memory: "${POSTGRES_MEMORY_LIMIT:-1G}"
        reservations:
          cpus: "${POSTGRES_CPU_RESERVATION:-0.25}"
          memory: "${POSTGRES_MEMORY_RESERVATION:-256M}"
volumes:
  postgres_data:
  langflow_data:

View File

@@ -1,136 +0,0 @@
# Global Settings
GLOBAL_REGISTRY=
TZ=UTC
# Service Versions
LANGFUSE_VERSION=3
POSTGRES_VERSION=17
CLICKHOUSE_VERSION=latest
MINIO_VERSION=latest
REDIS_VERSION=7
# Ports
LANGFUSE_PORT_OVERRIDE=3000
LANGFUSE_WORKER_PORT_OVERRIDE=3030
MINIO_PORT_OVERRIDE=9090
MINIO_CONSOLE_PORT_OVERRIDE=9091
# PostgreSQL
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=postgres
# Authentication & Security (CHANGEME: These are defaults, please update them)
NEXTAUTH_URL=http://localhost:3000
NEXTAUTH_SECRET=mysecret
SALT=mysalt
ENCRYPTION_KEY=0000000000000000000000000000000000000000000000000000000000000000
# ClickHouse
CLICKHOUSE_USER=clickhouse
CLICKHOUSE_PASSWORD=clickhouse
CLICKHOUSE_MIGRATION_URL=clickhouse://clickhouse:9000
CLICKHOUSE_URL=http://clickhouse:8123
CLICKHOUSE_CLUSTER_ENABLED=false
# MinIO / S3
MINIO_ROOT_USER=minio
MINIO_ROOT_PASSWORD=miniosecret
# S3 Event Upload
LANGFUSE_S3_EVENT_UPLOAD_BUCKET=langfuse
LANGFUSE_S3_EVENT_UPLOAD_REGION=auto
LANGFUSE_S3_EVENT_UPLOAD_ACCESS_KEY_ID=minio
LANGFUSE_S3_EVENT_UPLOAD_SECRET_ACCESS_KEY=miniosecret
LANGFUSE_S3_EVENT_UPLOAD_ENDPOINT=http://minio:9000
LANGFUSE_S3_EVENT_UPLOAD_FORCE_PATH_STYLE=true
LANGFUSE_S3_EVENT_UPLOAD_PREFIX=events/
# S3 Media Upload
LANGFUSE_S3_MEDIA_UPLOAD_BUCKET=langfuse
LANGFUSE_S3_MEDIA_UPLOAD_REGION=auto
LANGFUSE_S3_MEDIA_UPLOAD_ACCESS_KEY_ID=minio
LANGFUSE_S3_MEDIA_UPLOAD_SECRET_ACCESS_KEY=miniosecret
LANGFUSE_S3_MEDIA_UPLOAD_ENDPOINT=http://localhost:9090
LANGFUSE_S3_MEDIA_UPLOAD_FORCE_PATH_STYLE=true
LANGFUSE_S3_MEDIA_UPLOAD_PREFIX=media/
# S3 Batch Export
LANGFUSE_S3_BATCH_EXPORT_ENABLED=false
LANGFUSE_S3_BATCH_EXPORT_BUCKET=langfuse
LANGFUSE_S3_BATCH_EXPORT_PREFIX=exports/
LANGFUSE_S3_BATCH_EXPORT_REGION=auto
LANGFUSE_S3_BATCH_EXPORT_ENDPOINT=http://minio:9000
LANGFUSE_S3_BATCH_EXPORT_EXTERNAL_ENDPOINT=http://localhost:9090
LANGFUSE_S3_BATCH_EXPORT_ACCESS_KEY_ID=minio
LANGFUSE_S3_BATCH_EXPORT_SECRET_ACCESS_KEY=miniosecret
LANGFUSE_S3_BATCH_EXPORT_FORCE_PATH_STYLE=true
# Redis
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_AUTH=myredissecret
REDIS_TLS_ENABLED=false
REDIS_TLS_CA=/certs/ca.crt
REDIS_TLS_CERT=/certs/redis.crt
REDIS_TLS_KEY=/certs/redis.key
# Features
TELEMETRY_ENABLED=true
LANGFUSE_ENABLE_EXPERIMENTAL_FEATURES=true
LANGFUSE_USE_AZURE_BLOB=false
# Ingestion Queue
LANGFUSE_INGESTION_QUEUE_DELAY_MS=
LANGFUSE_INGESTION_CLICKHOUSE_WRITE_INTERVAL_MS=
# Email/SMTP (Optional)
EMAIL_FROM_ADDRESS=
SMTP_CONNECTION_URL=
# Initialization (Optional - for setting up initial org/project/user)
LANGFUSE_INIT_ORG_ID=
LANGFUSE_INIT_ORG_NAME=
LANGFUSE_INIT_PROJECT_ID=
LANGFUSE_INIT_PROJECT_NAME=
LANGFUSE_INIT_PROJECT_PUBLIC_KEY=
LANGFUSE_INIT_PROJECT_SECRET_KEY=
LANGFUSE_INIT_USER_EMAIL=
LANGFUSE_INIT_USER_NAME=
LANGFUSE_INIT_USER_PASSWORD=
# Resource Limits - Langfuse Worker
LANGFUSE_WORKER_CPU_LIMIT=2.0
LANGFUSE_WORKER_MEMORY_LIMIT=2G
LANGFUSE_WORKER_CPU_RESERVATION=0.5
LANGFUSE_WORKER_MEMORY_RESERVATION=512M
# Resource Limits - Langfuse Web
LANGFUSE_WEB_CPU_LIMIT=2.0
LANGFUSE_WEB_MEMORY_LIMIT=2G
LANGFUSE_WEB_CPU_RESERVATION=0.5
LANGFUSE_WEB_MEMORY_RESERVATION=512M
# Resource Limits - ClickHouse
CLICKHOUSE_CPU_LIMIT=2.0
CLICKHOUSE_MEMORY_LIMIT=4G
CLICKHOUSE_CPU_RESERVATION=0.5
CLICKHOUSE_MEMORY_RESERVATION=1G
# Resource Limits - MinIO
MINIO_CPU_LIMIT=1.0
MINIO_MEMORY_LIMIT=1G
MINIO_CPU_RESERVATION=0.25
MINIO_MEMORY_RESERVATION=256M
# Resource Limits - Redis
REDIS_CPU_LIMIT=1.0
REDIS_MEMORY_LIMIT=512M
REDIS_CPU_RESERVATION=0.25
REDIS_MEMORY_RESERVATION=256M
# Resource Limits - PostgreSQL
POSTGRES_CPU_LIMIT=2.0
POSTGRES_MEMORY_LIMIT=2G
POSTGRES_CPU_RESERVATION=0.5
POSTGRES_MEMORY_RESERVATION=512M

View File

@@ -1,169 +0,0 @@
# Langfuse
[English](./README.md) | [中文](./README.zh.md)
This service deploys Langfuse, an open-source LLM engineering platform for observability, metrics, evaluations, and prompt management.
## Services
- **langfuse-worker**: Background worker service for processing LLM operations
- **langfuse-web**: Main Langfuse web application server
- **postgres**: PostgreSQL database
- **clickhouse**: ClickHouse analytics database for event storage
- **minio**: S3-compatible object storage for media and exports
- **redis**: In-memory data store for caching and job queues
## Quick Start
1. Copy `.env.example` to `.env`:
```bash
cp .env.example .env
```
2. Update critical secrets in `.env`:
```bash
# Generate secure secrets
NEXTAUTH_SECRET=$(openssl rand -base64 32)
ENCRYPTION_KEY=$(openssl rand -hex 32)
POSTGRES_PASSWORD=your-secure-password
CLICKHOUSE_PASSWORD=your-secure-password
MINIO_ROOT_PASSWORD=your-secure-password
REDIS_AUTH=your-secure-redis-password
```
3. Start the services:
```bash
docker compose up -d
```
4. Access Langfuse at `http://localhost:3000`
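To confirm the deployment from a script, here is a minimal Python sketch using the `requests` package; the `/api/public/health` path is the same endpoint the langfuse-web health check in `docker-compose.yaml` uses, and the port assumes the default `LANGFUSE_PORT_OVERRIDE`:
```python
import requests

# Same endpoint as the langfuse-web healthcheck; HTTP 200 means the web service is up.
resp = requests.get("http://localhost:3000/api/public/health", timeout=10)
print("status:", resp.status_code)
print("body:", resp.text)
```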
## Core Environment Variables
| Variable | Description | Default |
| --------------------------------------- | ----------------------------------------------- | ----------------------- |
| `LANGFUSE_VERSION` | Langfuse container image version | `3` |
| `LANGFUSE_PORT_OVERRIDE` | Web interface port | `3000` |
| `NEXTAUTH_URL` | Public URL of Langfuse instance | `http://localhost:3000` |
| `NEXTAUTH_SECRET` | NextAuth.js secret (required for production) | `mysecret` |
| `ENCRYPTION_KEY` | Encryption key for sensitive data (64-char hex) | `0...0` |
| `SALT` | Salt for password hashing | `mysalt` |
| `TELEMETRY_ENABLED` | Enable anonymous telemetry | `true` |
| `LANGFUSE_ENABLE_EXPERIMENTAL_FEATURES` | Enable beta features | `true` |
## Database Configuration
| Variable | Description | Default |
| --------------------- | ------------------- | ------------ |
| `POSTGRES_VERSION` | PostgreSQL version | `17` |
| `POSTGRES_USER` | Database user | `postgres` |
| `POSTGRES_PASSWORD` | Database password | `postgres` |
| `POSTGRES_DB` | Database name | `postgres` |
| `CLICKHOUSE_USER` | ClickHouse user | `clickhouse` |
| `CLICKHOUSE_PASSWORD` | ClickHouse password | `clickhouse` |
## Storage & Cache Configuration
| Variable | Description | Default |
| --------------------- | -------------------- | --------------- |
| `MINIO_ROOT_USER` | MinIO admin username | `minio` |
| `MINIO_ROOT_PASSWORD` | MinIO admin password | `miniosecret` |
| `REDIS_AUTH` | Redis password | `myredissecret` |
## S3/Media Configuration
| Variable | Description | Default |
| ----------------------------------- | ------------------------- | ----------------------- |
| `LANGFUSE_S3_MEDIA_UPLOAD_ENDPOINT` | Media upload S3 endpoint | `http://localhost:9090` |
| `LANGFUSE_S3_EVENT_UPLOAD_ENDPOINT` | Event upload S3 endpoint | `http://minio:9000` |
| `LANGFUSE_S3_BATCH_EXPORT_ENABLED` | Enable batch export to S3 | `false` |
## Volumes
- `langfuse_postgres_data`: PostgreSQL data persistence
- `langfuse_clickhouse_data`: ClickHouse event data
- `langfuse_clickhouse_logs`: ClickHouse logs
- `langfuse_minio_data`: MinIO object storage data
## Resource Limits
All services have configurable CPU and memory limits:
- **langfuse-worker**: 2 CPU cores, 2GB RAM
- **langfuse-web**: 2 CPU cores, 2GB RAM
- **clickhouse**: 2 CPU cores, 4GB RAM
- **minio**: 1 CPU core, 1GB RAM
- **redis**: 1 CPU core, 512MB RAM
- **postgres**: 2 CPU cores, 2GB RAM
Adjust limits in `.env` by modifying `*_CPU_LIMIT`, `*_MEMORY_LIMIT`, `*_CPU_RESERVATION`, and `*_MEMORY_RESERVATION` variables.
## Network Access
- **langfuse-web** (port 3000): Open to all interfaces for external access
- **minio** (port 9090): Open to all interfaces for media uploads
- **All other services**: Bound to `127.0.0.1` (localhost only)
In production, restrict external access using a firewall or reverse proxy.
## Production Setup
For production deployments:
1. **Security**:
- Generate strong secrets with `openssl rand -base64 32` and `openssl rand -hex 32`
- Use a reverse proxy (nginx, Caddy) with SSL/TLS
- Change all default passwords
- Enable HTTPS by setting `NEXTAUTH_URL` to your domain
2. **Persistence**:
- Use external volumes or cloud storage for data
- Configure regular PostgreSQL backups
- Monitor ClickHouse disk usage
3. **Performance**:
- Increase resource limits based on workload
- Consider dedicated ClickHouse cluster for large deployments
- Configure Redis persistence if needed
## Ports
- **3000**: Langfuse web interface (external)
- **3030**: Langfuse worker API (localhost only)
- **5432**: PostgreSQL (localhost only)
- **8123**: ClickHouse HTTP (localhost only)
- **9000**: ClickHouse native (localhost only)
- **9090**: MinIO S3 API (external)
- **9091**: MinIO console (localhost only)
- **6379**: Redis (localhost only)
## Health Checks
All services include health checks with automatic restart on failure.
## Documentation
- [Langfuse Documentation](https://langfuse.com/docs)
- [Langfuse GitHub](https://github.com/langfuse/langfuse)
## Troubleshooting
### Services failing to start
- Check logs: `docker compose logs <service-name>`
- Ensure all required environment variables are set
- Verify sufficient disk space and system resources
### Database connection errors
- Verify `POSTGRES_PASSWORD` matches between services
- Check that PostgreSQL service is healthy: `docker compose ps`
- Ensure ports are not already in use
### MinIO permission issues
- Clear MinIO data and restart: `docker compose down -v`
- Regenerate MinIO credentials in `.env`

View File

@@ -1,169 +0,0 @@
# Langfuse
[English](./README.md) | [中文](./README.zh.md)
This service deploys Langfuse, an open-source platform for LLM application observability, metrics, evaluations, and prompt management.
## Services
- **langfuse-worker**: Background worker service for processing LLM operations
- **langfuse-web**: Main Langfuse web application server
- **postgres**: PostgreSQL database
- **clickhouse**: ClickHouse analytics database for event storage
- **minio**: S3-compatible object storage for media and exports
- **redis**: In-memory data store for caching and job queues
## Quick Start
1. Copy `.env.example` to `.env`:
```bash
cp .env.example .env
```
2. Update the critical secrets in `.env`:
```bash
# Generate secure secrets
NEXTAUTH_SECRET=$(openssl rand -base64 32)
ENCRYPTION_KEY=$(openssl rand -hex 32)
POSTGRES_PASSWORD=your-secure-password
CLICKHOUSE_PASSWORD=your-secure-password
MINIO_ROOT_PASSWORD=your-secure-password
REDIS_AUTH=your-secure-redis-password
```
3. Start the services:
```bash
docker compose up -d
```
4. Access Langfuse at `http://localhost:3000`
## Core Environment Variables
| Variable | Description | Default |
| --------------------------------------- | ------------------------------------------------ | ----------------------- |
| `LANGFUSE_VERSION` | Langfuse container image version | `3` |
| `LANGFUSE_PORT_OVERRIDE` | Web interface port | `3000` |
| `NEXTAUTH_URL` | Public URL of the Langfuse instance | `http://localhost:3000` |
| `NEXTAUTH_SECRET` | NextAuth.js secret (required for production) | `mysecret` |
| `ENCRYPTION_KEY` | Encryption key for sensitive data (64-char hex) | `0...0` |
| `SALT` | Salt for password hashing | `mysalt` |
| `TELEMETRY_ENABLED` | Enable anonymous telemetry | `true` |
| `LANGFUSE_ENABLE_EXPERIMENTAL_FEATURES` | Enable beta features | `true` |
## Database Configuration
| Variable | Description | Default |
| --------------------- | ------------------- | ------------ |
| `POSTGRES_VERSION` | PostgreSQL version | `17` |
| `POSTGRES_USER` | Database user | `postgres` |
| `POSTGRES_PASSWORD` | Database password | `postgres` |
| `POSTGRES_DB` | Database name | `postgres` |
| `CLICKHOUSE_USER` | ClickHouse user | `clickhouse` |
| `CLICKHOUSE_PASSWORD` | ClickHouse password | `clickhouse` |
## Storage & Cache Configuration
| Variable | Description | Default |
| --------------------- | -------------------- | --------------- |
| `MINIO_ROOT_USER` | MinIO admin username | `minio` |
| `MINIO_ROOT_PASSWORD` | MinIO admin password | `miniosecret` |
| `REDIS_AUTH` | Redis password | `myredissecret` |
## S3/Media Configuration
| Variable | Description | Default |
| ----------------------------------- | ------------------------- | ----------------------- |
| `LANGFUSE_S3_MEDIA_UPLOAD_ENDPOINT` | Media upload S3 endpoint | `http://localhost:9090` |
| `LANGFUSE_S3_EVENT_UPLOAD_ENDPOINT` | Event upload S3 endpoint | `http://minio:9000` |
| `LANGFUSE_S3_BATCH_EXPORT_ENABLED` | Enable batch export to S3 | `false` |
## Volumes
- `langfuse_postgres_data`: PostgreSQL data persistence
- `langfuse_clickhouse_data`: ClickHouse event data
- `langfuse_clickhouse_logs`: ClickHouse logs
- `langfuse_minio_data`: MinIO object storage data
## Resource Limits
All services have configurable CPU and memory limits:
- **langfuse-worker**: 2 CPU cores, 2GB RAM
- **langfuse-web**: 2 CPU cores, 2GB RAM
- **clickhouse**: 2 CPU cores, 4GB RAM
- **minio**: 1 CPU core, 1GB RAM
- **redis**: 1 CPU core, 512MB RAM
- **postgres**: 2 CPU cores, 2GB RAM
Adjust the limits by modifying the `*_CPU_LIMIT`, `*_MEMORY_LIMIT`, `*_CPU_RESERVATION`, and `*_MEMORY_RESERVATION` variables in `.env`.
## Network Access
- **langfuse-web** (port 3000): Open to all interfaces for external access
- **minio** (port 9090): Open to all interfaces for media uploads
- **All other services**: Bound to `127.0.0.1` (localhost only)
In production, restrict external access using a firewall or reverse proxy.
## Production Setup
Recommendations for production deployments:
1. **Security**:
- Generate strong secrets with `openssl rand -base64 32` and `openssl rand -hex 32`
- Use a reverse proxy (nginx, Caddy) with SSL/TLS
- Change all default passwords
- Enable HTTPS by setting `NEXTAUTH_URL` to your domain
2. **Persistence**:
- Use external volumes or cloud storage for data
- Configure regular PostgreSQL backups
- Monitor ClickHouse disk usage
3. **Performance**:
- Increase resource limits based on workload
- Consider a dedicated ClickHouse cluster for large deployments
- Configure Redis persistence if needed
## Ports
- **3000**: Langfuse web interface (external)
- **3030**: Langfuse worker API (localhost only)
- **5432**: PostgreSQL (localhost only)
- **8123**: ClickHouse HTTP (localhost only)
- **9000**: ClickHouse native protocol (localhost only)
- **9090**: MinIO S3 API (external)
- **9091**: MinIO console (localhost only)
- **6379**: Redis (localhost only)
## Health Checks
All services include health checks and restart automatically on failure.
## Documentation
- [Langfuse Documentation](https://langfuse.com/docs)
- [Langfuse GitHub](https://github.com/langfuse/langfuse)
## Troubleshooting
### Services failing to start
- Check the logs: `docker compose logs <service-name>`
- Make sure all required environment variables are set
- Verify there is sufficient disk space and system resources
### Database connection errors
- Verify that `POSTGRES_PASSWORD` matches between services
- Check that the PostgreSQL service is healthy: `docker compose ps`
- Make sure the ports are not already in use
### MinIO permission issues
- Clear the MinIO data and restart: `docker compose down -v`
- Regenerate the MinIO credentials in `.env`

View File

@@ -1,233 +0,0 @@
# Make sure to update the credential placeholders with your own secrets.
# We mark them with # CHANGEME in the file below.
# In addition, we recommend restricting inbound traffic on the host to
# langfuse-web (port 3000) and minio (port 9090) only.
# All other components are bound to localhost (127.0.0.1) to only accept
# connections from the local machine.
# External connections from other machines will not be able to reach these
# services directly.
x-defaults: &defaults
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: "3"
services:
  langfuse-worker:
    <<: *defaults
    image: ${GLOBAL_REGISTRY:-}langfuse/langfuse-worker:${LANGFUSE_VERSION:-3}
    depends_on: &langfuse-depends-on
      postgres:
        condition: service_healthy
      minio:
        condition: service_healthy
      redis:
        condition: service_healthy
      clickhouse:
        condition: service_healthy
    ports:
      - ${LANGFUSE_WORKER_PORT_OVERRIDE:-3030}:3030
    environment: &langfuse-worker-env
      TZ: ${TZ:-UTC}
      NEXTAUTH_URL: ${NEXTAUTH_URL:-http://localhost:3000}
      DATABASE_URL: ${DATABASE_URL:-postgresql://postgres:postgres@postgres:5432/postgres}
      SALT: ${SALT:-mysalt}
      ENCRYPTION_KEY: ${ENCRYPTION_KEY:-0000000000000000000000000000000000000000000000000000000000000000}
      TELEMETRY_ENABLED: ${TELEMETRY_ENABLED:-true}
      LANGFUSE_ENABLE_EXPERIMENTAL_FEATURES: ${LANGFUSE_ENABLE_EXPERIMENTAL_FEATURES:-true}
      CLICKHOUSE_MIGRATION_URL: ${CLICKHOUSE_MIGRATION_URL:-clickhouse://clickhouse:9000}
      CLICKHOUSE_URL: ${CLICKHOUSE_URL:-http://clickhouse:8123}
      CLICKHOUSE_USER: ${CLICKHOUSE_USER:-clickhouse}
      CLICKHOUSE_PASSWORD: ${CLICKHOUSE_PASSWORD:-clickhouse}
      CLICKHOUSE_CLUSTER_ENABLED: ${CLICKHOUSE_CLUSTER_ENABLED:-false}
      LANGFUSE_USE_AZURE_BLOB: ${LANGFUSE_USE_AZURE_BLOB:-false}
      LANGFUSE_S3_EVENT_UPLOAD_BUCKET: ${LANGFUSE_S3_EVENT_UPLOAD_BUCKET:-langfuse}
      LANGFUSE_S3_EVENT_UPLOAD_REGION: ${LANGFUSE_S3_EVENT_UPLOAD_REGION:-auto}
      LANGFUSE_S3_EVENT_UPLOAD_ACCESS_KEY_ID: ${LANGFUSE_S3_EVENT_UPLOAD_ACCESS_KEY_ID:-minio}
      LANGFUSE_S3_EVENT_UPLOAD_SECRET_ACCESS_KEY: ${LANGFUSE_S3_EVENT_UPLOAD_SECRET_ACCESS_KEY:-miniosecret}
      LANGFUSE_S3_EVENT_UPLOAD_ENDPOINT: ${LANGFUSE_S3_EVENT_UPLOAD_ENDPOINT:-http://minio:9000}
      LANGFUSE_S3_EVENT_UPLOAD_FORCE_PATH_STYLE: ${LANGFUSE_S3_EVENT_UPLOAD_FORCE_PATH_STYLE:-true}
      LANGFUSE_S3_EVENT_UPLOAD_PREFIX: ${LANGFUSE_S3_EVENT_UPLOAD_PREFIX:-events/}
      LANGFUSE_S3_MEDIA_UPLOAD_BUCKET: ${LANGFUSE_S3_MEDIA_UPLOAD_BUCKET:-langfuse}
      LANGFUSE_S3_MEDIA_UPLOAD_REGION: ${LANGFUSE_S3_MEDIA_UPLOAD_REGION:-auto}
      LANGFUSE_S3_MEDIA_UPLOAD_ACCESS_KEY_ID: ${LANGFUSE_S3_MEDIA_UPLOAD_ACCESS_KEY_ID:-minio}
      LANGFUSE_S3_MEDIA_UPLOAD_SECRET_ACCESS_KEY: ${LANGFUSE_S3_MEDIA_UPLOAD_SECRET_ACCESS_KEY:-miniosecret}
      LANGFUSE_S3_MEDIA_UPLOAD_ENDPOINT: ${LANGFUSE_S3_MEDIA_UPLOAD_ENDPOINT:-http://localhost:9090}
      LANGFUSE_S3_MEDIA_UPLOAD_FORCE_PATH_STYLE: ${LANGFUSE_S3_MEDIA_UPLOAD_FORCE_PATH_STYLE:-true}
      LANGFUSE_S3_MEDIA_UPLOAD_PREFIX: ${LANGFUSE_S3_MEDIA_UPLOAD_PREFIX:-media/}
      LANGFUSE_S3_BATCH_EXPORT_ENABLED: ${LANGFUSE_S3_BATCH_EXPORT_ENABLED:-false}
      LANGFUSE_S3_BATCH_EXPORT_BUCKET: ${LANGFUSE_S3_BATCH_EXPORT_BUCKET:-langfuse}
      LANGFUSE_S3_BATCH_EXPORT_PREFIX: ${LANGFUSE_S3_BATCH_EXPORT_PREFIX:-exports/}
      LANGFUSE_S3_BATCH_EXPORT_REGION: ${LANGFUSE_S3_BATCH_EXPORT_REGION:-auto}
      LANGFUSE_S3_BATCH_EXPORT_ENDPOINT: ${LANGFUSE_S3_BATCH_EXPORT_ENDPOINT:-http://minio:9000}
      LANGFUSE_S3_BATCH_EXPORT_EXTERNAL_ENDPOINT: ${LANGFUSE_S3_BATCH_EXPORT_EXTERNAL_ENDPOINT:-http://localhost:9090}
      LANGFUSE_S3_BATCH_EXPORT_ACCESS_KEY_ID: ${LANGFUSE_S3_BATCH_EXPORT_ACCESS_KEY_ID:-minio}
      LANGFUSE_S3_BATCH_EXPORT_SECRET_ACCESS_KEY: ${LANGFUSE_S3_BATCH_EXPORT_SECRET_ACCESS_KEY:-miniosecret}
      LANGFUSE_S3_BATCH_EXPORT_FORCE_PATH_STYLE: ${LANGFUSE_S3_BATCH_EXPORT_FORCE_PATH_STYLE:-true}
      LANGFUSE_INGESTION_QUEUE_DELAY_MS: ${LANGFUSE_INGESTION_QUEUE_DELAY_MS:-}
      LANGFUSE_INGESTION_CLICKHOUSE_WRITE_INTERVAL_MS: ${LANGFUSE_INGESTION_CLICKHOUSE_WRITE_INTERVAL_MS:-}
      REDIS_HOST: ${REDIS_HOST:-redis}
      REDIS_PORT: ${REDIS_PORT:-6379}
      REDIS_AUTH: ${REDIS_AUTH:-myredissecret}
      REDIS_TLS_ENABLED: ${REDIS_TLS_ENABLED:-false}
      REDIS_TLS_CA: ${REDIS_TLS_CA:-/certs/ca.crt}
      REDIS_TLS_CERT: ${REDIS_TLS_CERT:-/certs/redis.crt}
      REDIS_TLS_KEY: ${REDIS_TLS_KEY:-/certs/redis.key}
      EMAIL_FROM_ADDRESS: ${EMAIL_FROM_ADDRESS:-}
      SMTP_CONNECTION_URL: ${SMTP_CONNECTION_URL:-}
    deploy:
      resources:
        limits:
          cpus: ${LANGFUSE_WORKER_CPU_LIMIT:-2.0}
          memory: ${LANGFUSE_WORKER_MEMORY_LIMIT:-2G}
        reservations:
          cpus: ${LANGFUSE_WORKER_CPU_RESERVATION:-0.5}
          memory: ${LANGFUSE_WORKER_MEMORY_RESERVATION:-512M}
  langfuse-web:
    <<: *defaults
    image: ${GLOBAL_REGISTRY:-}langfuse/langfuse:${LANGFUSE_VERSION:-3}
    depends_on: *langfuse-depends-on
    ports:
      - "${LANGFUSE_PORT_OVERRIDE:-3000}:3000"
    environment:
      <<: *langfuse-worker-env
      NEXTAUTH_SECRET: ${NEXTAUTH_SECRET:-mysecret}
      LANGFUSE_INIT_ORG_ID: ${LANGFUSE_INIT_ORG_ID:-}
      LANGFUSE_INIT_ORG_NAME: ${LANGFUSE_INIT_ORG_NAME:-}
      LANGFUSE_INIT_PROJECT_ID: ${LANGFUSE_INIT_PROJECT_ID:-}
      LANGFUSE_INIT_PROJECT_NAME: ${LANGFUSE_INIT_PROJECT_NAME:-}
      LANGFUSE_INIT_PROJECT_PUBLIC_KEY: ${LANGFUSE_INIT_PROJECT_PUBLIC_KEY:-}
      LANGFUSE_INIT_PROJECT_SECRET_KEY: ${LANGFUSE_INIT_PROJECT_SECRET_KEY:-}
      LANGFUSE_INIT_USER_EMAIL: ${LANGFUSE_INIT_USER_EMAIL:-}
      LANGFUSE_INIT_USER_NAME: ${LANGFUSE_INIT_USER_NAME:-}
      LANGFUSE_INIT_USER_PASSWORD: ${LANGFUSE_INIT_USER_PASSWORD:-}
    deploy:
      resources:
        limits:
          cpus: ${LANGFUSE_WEB_CPU_LIMIT:-2.0}
          memory: ${LANGFUSE_WEB_MEMORY_LIMIT:-2G}
        reservations:
          cpus: ${LANGFUSE_WEB_CPU_RESERVATION:-0.5}
          memory: ${LANGFUSE_WEB_MEMORY_RESERVATION:-512M}
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/api/public/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s
  clickhouse:
    <<: *defaults
    image: ${GLOBAL_REGISTRY:-}clickhouse/clickhouse-server:${CLICKHOUSE_VERSION:-latest}
    user: "101:101"
    environment:
      CLICKHOUSE_DB: default
      CLICKHOUSE_USER: ${CLICKHOUSE_USER:-clickhouse}
      CLICKHOUSE_PASSWORD: ${CLICKHOUSE_PASSWORD:-clickhouse}
      TZ: ${TZ:-UTC}
    volumes:
      - langfuse_clickhouse_data:/var/lib/clickhouse
      - langfuse_clickhouse_logs:/var/log/clickhouse-server
    ports:
      - ${CLICKHOUSE_PORT_OVERRIDE:-8123}:8123
      - ${CLICKHOUSE_TCP_PORT_OVERRIDE:-9000}:9000
    healthcheck:
      test: wget --no-verbose --tries=1 --spider http://localhost:8123/ping || exit 1
      interval: 5s
      timeout: 5s
      retries: 10
      start_period: 1s
    deploy:
      resources:
        limits:
          cpus: ${CLICKHOUSE_CPU_LIMIT:-2.0}
          memory: ${CLICKHOUSE_MEMORY_LIMIT:-4G}
        reservations:
          cpus: ${CLICKHOUSE_CPU_RESERVATION:-0.5}
          memory: ${CLICKHOUSE_MEMORY_RESERVATION:-1G}
  minio:
    <<: *defaults
    image: ${CGR_DEV_REGISTRY:-cgr.dev/}chainguard/minio:${MINIO_VERSION:-latest}
    entrypoint: sh
    # create the 'langfuse' bucket before starting the service
    command: -c 'mkdir -p /data/langfuse && minio server --address ":9000" --console-address ":9001" /data'
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER:-minio}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD:-miniosecret}
      TZ: ${TZ:-UTC}
    volumes:
      - langfuse_minio_data:/data
    healthcheck:
      test: ["CMD", "mc", "ready", "local"]
      interval: 1s
      timeout: 5s
      retries: 5
      start_period: 1s
    deploy:
      resources:
        limits:
          cpus: ${MINIO_CPU_LIMIT:-1.0}
          memory: ${MINIO_MEMORY_LIMIT:-1G}
        reservations:
          cpus: ${MINIO_CPU_RESERVATION:-0.25}
          memory: ${MINIO_MEMORY_RESERVATION:-256M}
  redis:
    <<: *defaults
    image: ${GLOBAL_REGISTRY:-}redis:${REDIS_VERSION:-7}
    command: >
      --requirepass ${REDIS_AUTH:-myredissecret}
      --maxmemory-policy noeviction
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 3s
      timeout: 10s
      retries: 10
    deploy:
      resources:
        limits:
          cpus: ${REDIS_CPU_LIMIT:-1.0}
          memory: ${REDIS_MEMORY_LIMIT:-512M}
        reservations:
          cpus: ${REDIS_CPU_RESERVATION:-0.25}
          memory: ${REDIS_MEMORY_RESERVATION:-256M}
  postgres:
    <<: *defaults
    image: ${GLOBAL_REGISTRY:-}postgres:${POSTGRES_VERSION:-17}
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-postgres}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-postgres}
      POSTGRES_DB: ${POSTGRES_DB:-postgres}
      TZ: UTC
      PGTZ: UTC
    volumes:
      - langfuse_postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 3s
      timeout: 3s
      retries: 10
    deploy:
      resources:
        limits:
          cpus: ${POSTGRES_CPU_LIMIT:-2.0}
          memory: ${POSTGRES_MEMORY_LIMIT:-2G}
        reservations:
          cpus: ${POSTGRES_CPU_RESERVATION:-0.5}
          memory: ${POSTGRES_MEMORY_RESERVATION:-512M}
volumes:
  langfuse_postgres_data:
    driver: local
  langfuse_clickhouse_data:
    driver: local
  langfuse_clickhouse_logs:
    driver: local
  langfuse_minio_data:
    driver: local

src/pingap/.env.example Normal file
View File

@@ -0,0 +1,37 @@
# Pingap Configuration
# Global Settings
# GLOBAL_REGISTRY=your-registry.com/
TZ=UTC
# Pingap Version
# Use a versioned -full tag for production (includes OpenTelemetry, Sentry, and image compression plugins)
# Available tags: latest, full, 0.12.1, 0.12.1-full
PINGAP_VERSION=0.12.1-full
# Container Name (optional, leave empty for auto-generated name)
PINGAP_CONTAINER_NAME=
# Port Overrides
# HTTP port - exposed on host
PINGAP_HTTP_PORT_OVERRIDE=80
# HTTPS port - exposed on host
PINGAP_HTTPS_PORT_OVERRIDE=443
# Data Directory
# Path for persistent storage of configuration and data
PINGAP_DATA_DIR=./pingap
# Admin Configuration
# Admin interface address (format: host:port/path)
PINGAP_ADMIN_ADDR=0.0.0.0:80/pingap
# Admin username
PINGAP_ADMIN_USER=admin
# Admin password (REQUIRED - set a strong password!)
PINGAP_ADMIN_PASSWORD=changeme
# Resource Limits
PINGAP_CPU_LIMIT=1.0
PINGAP_MEMORY_LIMIT=512M
PINGAP_CPU_RESERVATION=0.5
PINGAP_MEMORY_RESERVATION=256M

src/pingap/README.md Normal file
View File

@@ -0,0 +1,127 @@
# Pingap
[中文说明](./README.zh.md)
A high-performance reverse proxy built on Cloudflare Pingora, designed as a more efficient alternative to Nginx with dynamic configuration, hot-reloading capabilities, and an intuitive web admin interface.
## Features
- **High Performance**: Built on Cloudflare's Pingora framework for exceptional performance
- **Dynamic Configuration**: Hot-reload configuration changes without downtime
- **Web Admin Interface**: Manage your proxy through an intuitive web UI
- **Plugin Ecosystem**: Rich plugin support for extended functionality
- **Full Version Features**: Includes OpenTelemetry, Sentry, and image compression plugins
- **Zero Downtime**: Configuration changes applied without service interruption
- **TOML Configuration**: Simple and concise configuration files
## Quick Start
1. Copy the environment file and configure it:
```bash
cp .env.example .env
```
2. **IMPORTANT**: Edit `.env` and set a strong password:
```bash
PINGAP_ADMIN_PASSWORD=your-strong-password-here
```
3. Start the service:
```bash
docker compose up -d
```
4. Access the web admin interface at:
```text
http://localhost/pingap/
```
- Default username: `admin`
- Password: The one you set in `.env`
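To confirm the admin interface is reachable from a script, here is a minimal Python sketch using the `requests` package; it requests the same URL as the compose health check and assumes the default `PINGAP_HTTP_PORT_OVERRIDE`:
```python
import requests

# Same URL as the docker-compose healthcheck; any HTTP response means Pingap is answering.
try:
    resp = requests.get("http://localhost/pingap/", timeout=5)
    print(f"Pingap admin interface responded with HTTP {resp.status_code}")
except requests.RequestException as exc:
    print(f"Pingap is not reachable: {exc}")
```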
## Configuration
### Environment Variables
| Variable | Description | Default |
| ---------------------------- | ------------------------------------------ | ------------------- |
| `PINGAP_VERSION` | Image version (recommended: `0.12.1-full`) | `0.12.1-full` |
| `PINGAP_HTTP_PORT_OVERRIDE` | HTTP port on host | `80` |
| `PINGAP_HTTPS_PORT_OVERRIDE` | HTTPS port on host | `443` |
| `PINGAP_DATA_DIR` | Data directory for persistent storage | `./pingap` |
| `PINGAP_ADMIN_ADDR` | Admin interface address | `0.0.0.0:80/pingap` |
| `PINGAP_ADMIN_USER` | Admin username | `admin` |
| `PINGAP_ADMIN_PASSWORD` | Admin password (REQUIRED) | - |
| `PINGAP_CPU_LIMIT` | CPU limit | `1.0` |
| `PINGAP_MEMORY_LIMIT` | Memory limit | `512M` |
### Image Versions
- `vicanso/pingap:latest` - Latest development version (not recommended for production)
- `vicanso/pingap:full` - Latest development version with all features
- `vicanso/pingap:0.12.1` - Stable version without extra dependencies
- `vicanso/pingap:0.12.1-full` - **Recommended**: Stable version with OpenTelemetry, Sentry, and image compression
### Persistent Storage
Configuration and data are stored in the `PINGAP_DATA_DIR` directory (default: `./pingap`). This directory will be created automatically on first run.
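A small Python sketch for inspecting that directory from the host, assuming the defaults above (`./pingap` mounted at `/opt/pingap`, with `PINGAP_CONF=/opt/pingap/conf`); the `*.toml` glob is an assumption based on Pingap's TOML configuration format:
```python
from pathlib import Path

data_dir = Path("./pingap")   # PINGAP_DATA_DIR default from .env.example
conf_dir = data_dir / "conf"  # mounted at /opt/pingap/conf inside the container
print("data directory exists:", data_dir.is_dir())
for toml_file in sorted(conf_dir.glob("*.toml")):  # assumption: Pingap writes TOML files here
    print("config file:", toml_file.name)
```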
## Usage
### Viewing Logs
```bash
docker compose logs -f pingap
```
### Restarting After Configuration Changes
While Pingap supports hot-reloading for most configuration changes (upstream, location, certificate), changes to server configuration require a restart:
```bash
docker compose restart pingap
```
### Stopping the Service
```bash
docker compose down
```
## Important Notes
### Security
- **Always set a strong password** for `PINGAP_ADMIN_PASSWORD`
- Change the default admin username if possible
- Consider restricting admin interface access to specific IPs
- Use HTTPS for the admin interface in production
### Production Recommendations
- Use versioned tags (e.g., `0.12.1-full`) instead of `latest` or `full`
- Configure appropriate resource limits based on your traffic
- Set up proper monitoring and logging
- Enable HTTPS with valid certificates
- Regular backups of the `pingap` data directory
### Docker Best Practices
- The container runs with `--autoreload` flag for hot configuration updates
- Avoid using `--autorestart` in Docker as it conflicts with container lifecycle
- Use `docker compose restart` for server-level configuration changes
## Links
- [Official Website](https://pingap.io/)
- [Documentation](https://pingap.io/pingap-zh/docs/docker)
- [GitHub Repository](https://github.com/vicanso/pingap)
- [Docker Hub](https://hub.docker.com/r/vicanso/pingap)
## License
This Docker Compose configuration is provided as-is. Pingap is licensed under the Apache License 2.0.

src/pingap/README.zh.md Normal file
View File

@@ -0,0 +1,127 @@
# Pingap
[English](./README.md)
A high-performance reverse proxy built on Cloudflare Pingora, designed as a more efficient alternative to Nginx with dynamic configuration, hot-reloading, and an intuitive web admin interface.
## Features
- **High Performance**: Built on Cloudflare's Pingora framework for excellent performance
- **Dynamic Configuration**: Hot-reload configuration changes without downtime
- **Web Admin Interface**: Manage the proxy through an intuitive web UI
- **Plugin Ecosystem**: Rich plugin support for extended functionality
- **Full-Version Features**: Includes the OpenTelemetry, Sentry, and image compression plugins
- **Zero Downtime**: Configuration changes are applied without service interruption
- **TOML Configuration**: Simple and concise configuration file format
## Quick Start
1. Copy the environment file and configure it:
```bash
cp .env.example .env
```
2. **IMPORTANT**: Edit `.env` and set a strong password:
```bash
PINGAP_ADMIN_PASSWORD=your-strong-password-here
```
3. Start the service:
```bash
docker compose up -d
```
4. Access the web admin interface at:
```text
http://localhost/pingap/
```
- Default username: `admin`
- Password: the one you set in `.env`
## Configuration
### Environment Variables
| Variable | Description | Default |
| ---------------------------- | ------------------------------------------ | ------------------- |
| `PINGAP_VERSION` | Image version (recommended: `0.12.1-full`) | `0.12.1-full` |
| `PINGAP_HTTP_PORT_OVERRIDE` | HTTP port on the host | `80` |
| `PINGAP_HTTPS_PORT_OVERRIDE` | HTTPS port on the host | `443` |
| `PINGAP_DATA_DIR` | Directory for persistent data | `./pingap` |
| `PINGAP_ADMIN_ADDR` | Admin interface address | `0.0.0.0:80/pingap` |
| `PINGAP_ADMIN_USER` | Admin username | `admin` |
| `PINGAP_ADMIN_PASSWORD` | Admin password (required) | - |
| `PINGAP_CPU_LIMIT` | CPU limit | `1.0` |
| `PINGAP_MEMORY_LIMIT` | Memory limit | `512M` |
### Image Versions
- `vicanso/pingap:latest` - Latest development version (not recommended for production)
- `vicanso/pingap:full` - Latest development version with all features
- `vicanso/pingap:0.12.1` - Stable version without extra dependencies
- `vicanso/pingap:0.12.1-full` - **Recommended**: Stable version with OpenTelemetry, Sentry, and image compression
### Persistent Storage
Configuration and data are stored in the `PINGAP_DATA_DIR` directory (default: `./pingap`). The directory is created automatically on first run.
## Usage
### Viewing Logs
```bash
docker compose logs -f pingap
```
### Restarting After Configuration Changes
Although Pingap hot-reloads most configuration changes (upstream, location, certificate), changes to the server configuration require a restart:
```bash
docker compose restart pingap
```
### Stopping the Service
```bash
docker compose down
```
## Important Notes
### Security
- **Always set a strong password** for `PINGAP_ADMIN_PASSWORD`
- Consider changing the default admin username
- Consider restricting admin interface access to specific IPs
- Use HTTPS for the admin interface in production
### Production Recommendations
- Use versioned tags (e.g., `0.12.1-full`) instead of `latest` or `full`
- Configure appropriate resource limits based on your traffic
- Set up proper monitoring and logging
- Enable HTTPS with valid certificates
- Back up the `pingap` data directory regularly
### Docker Best Practices
- The container runs with the `--autoreload` flag for hot configuration updates
- Avoid using `--autorestart` in Docker, as it conflicts with the container lifecycle
- Use `docker compose restart` for server-level configuration changes
## Links
- [Official Website](https://pingap.io/)
- [Documentation](https://pingap.io/pingap-zh/docs/docker)
- [GitHub Repository](https://github.com/vicanso/pingap)
- [Docker Hub](https://hub.docker.com/r/vicanso/pingap)
## License
This Docker Compose configuration is provided as-is. Pingap is licensed under the Apache License 2.0.

View File

@@ -0,0 +1,44 @@
# Docker Compose for Pingap - High-performance reverse proxy
# https://pingap.io/
x-defaults: &defaults
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: "3"
services:
  pingap:
    <<: *defaults
    image: ${GLOBAL_REGISTRY:-}vicanso/pingap:${PINGAP_VERSION:-0.12.1-full}
    container_name: ${PINGAP_CONTAINER_NAME:-}
    ports:
      - "${PINGAP_HTTP_PORT_OVERRIDE:-80}:80"
      - "${PINGAP_HTTPS_PORT_OVERRIDE:-443}:443"
    volumes:
      - ${PINGAP_DATA_DIR:-./pingap}:/opt/pingap
    environment:
      - TZ=${TZ:-UTC}
      - PINGAP_CONF=/opt/pingap/conf
      - PINGAP_ADMIN_ADDR=${PINGAP_ADMIN_ADDR:-0.0.0.0:80/pingap}
      - PINGAP_ADMIN_USER=${PINGAP_ADMIN_USER:-admin}
      - PINGAP_ADMIN_PASSWORD=${PINGAP_ADMIN_PASSWORD:?PINGAP_ADMIN_PASSWORD must be set}
    command:
      - pingap
      - --autoreload
    healthcheck:
      test: ["CMD", "wget", "--spider", "-q", "http://localhost:80/pingap/"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
    deploy:
      resources:
        limits:
          cpus: ${PINGAP_CPU_LIMIT:-1.0}
          memory: ${PINGAP_MEMORY_LIMIT:-512M}
        reservations:
          cpus: ${PINGAP_CPU_RESERVATION:-0.5}
          memory: ${PINGAP_MEMORY_RESERVATION:-256M}