Add environment configuration and documentation for various services

- Created .env.example files for Kafka, Kibana, KodBox, Kong, Langfuse, Logstash, n8n, Nginx, OceanBase, OpenCoze, RocketMQ, TiDB, and TiKV.
- Added README.md and README.zh.md files for OceanBase, RocketMQ, TiDB, and TiKV, detailing usage, configuration, and access instructions.
- Implemented docker-compose.yaml files for OceanBase, RocketMQ, TiDB, and TiKV, defining service configurations, health checks, and resource limits.
- Included broker.conf for RocketMQ to specify broker settings.
- Established a consistent timezone (UTC) across all services.
- Provided optional port overrides in .env.example files for flexibility in deployment.
This commit is contained in:
Sun-ZhenXing
2025-10-22 11:46:50 +08:00
parent 84e8b85990
commit ece59b42bf
49 changed files with 2326 additions and 0 deletions

5
src/duckdb/.env.example Normal file

@@ -0,0 +1,5 @@
# DuckDB version
DUCKDB_VERSION=v1.1.3
# Timezone
TZ=UTC

91
src/duckdb/README.md Normal file

@@ -0,0 +1,91 @@
# DuckDB
DuckDB is an in-process SQL OLAP database management system designed to support analytical query workloads. It's embedded, zero-dependency, and extremely fast.
## Usage
```bash
docker compose up -d
```
## Access
### Interactive Shell
Access DuckDB CLI:
```bash
docker compose exec duckdb duckdb /data/duckdb.db
```
### Execute Queries
Run queries directly:
```bash
docker compose exec duckdb duckdb /data/duckdb.db -c "SELECT 1"
```
### Execute SQL File
Pipe a SQL file from the host into the CLI; the `-T` flag disables pseudo-TTY allocation so stdin redirection works:
```bash
docker compose exec -T duckdb duckdb /data/duckdb.db < query.sql
```
## Example Usage
```sql
-- Create a table
CREATE TABLE users (id INTEGER, name VARCHAR);
-- Insert data
INSERT INTO users VALUES (1, 'Alice'), (2, 'Bob');
-- Query data
SELECT * FROM users;
-- Load CSV file
COPY users FROM '/import/users.csv' (HEADER);
-- Export to CSV
COPY users TO '/data/users_export.csv' (HEADER);
-- Read Parquet file directly
SELECT * FROM '/import/data.parquet';
```
## Features
- **Embeddable**: No separate server process needed
- **Fast**: Vectorized query execution engine
- **Feature-rich**: Full SQL support with window functions, CTEs, etc.
- **File formats**: Native support for CSV, JSON, Parquet
- **Extensions**: loadable extensions such as `httpfs` and the PostgreSQL scanner
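
The window functions and CTEs mentioned above can be combined in a short sketch, building on the `users` table from the example section:

```sql
-- A CTE feeding a window function; assumes the users table created above
WITH ranked AS (
    SELECT id, name, ROW_NUMBER() OVER (ORDER BY name) AS rn
    FROM users
)
SELECT name FROM ranked WHERE rn = 1;
```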
## Mounting Data Files
To import data files, mount them as volumes:
```yaml
volumes:
  - ./data:/import:ro
```
Then access files in SQL:
```sql
SELECT * FROM '/import/data.csv';
```
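
Writing results back out works the same way; this sketch exports a query result to Parquet (the file paths are illustrative):

```sql
-- Export a query result as Parquet into the persistent /data volume
COPY (SELECT * FROM '/import/data.csv') TO '/data/data.parquet' (FORMAT PARQUET);
```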
## Notes
- DuckDB is designed for analytical (OLAP) workloads, not transactional (OLTP)
- The database file is stored in `/data/duckdb.db`
- Data persists in the named volume `duckdb_data`
- DuckDB can query files directly without importing them
- For production workloads, ensure sufficient memory is allocated
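
Direct file querying (noted above) also has explicit reader functions, useful when you need type inference or globbing; the paths here are illustrative:

```sql
-- read_csv_auto infers column names and types; the glob scans many Parquet files
SELECT COUNT(*) FROM read_csv_auto('/import/users.csv');
SELECT * FROM read_parquet('/import/*.parquet') LIMIT 10;
```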
## References
- [DuckDB Official Documentation](https://duckdb.org/docs/)
- [DuckDB Docker Image](https://hub.docker.com/r/davidgasquez/duckdb)

91
src/duckdb/README.zh.md Normal file

@@ -0,0 +1,91 @@
# DuckDB
DuckDB is an in-process SQL OLAP database management system designed to support analytical query workloads. It is embedded, zero-dependency, and extremely fast.
## Usage
```bash
docker compose up -d
```
## Access
### Interactive Shell
Access the DuckDB CLI:
```bash
docker compose exec duckdb duckdb /data/duckdb.db
```
### Execute Queries
Run queries directly:
```bash
docker compose exec duckdb duckdb /data/duckdb.db -c "SELECT 1"
```
### Execute SQL File
Pipe a SQL file from the host into the CLI; the `-T` flag disables pseudo-TTY allocation so stdin redirection works:
```bash
docker compose exec -T duckdb duckdb /data/duckdb.db < query.sql
```
## Example Usage
```sql
-- Create a table
CREATE TABLE users (id INTEGER, name VARCHAR);
-- Insert data
INSERT INTO users VALUES (1, 'Alice'), (2, 'Bob');
-- Query data
SELECT * FROM users;
-- Load a CSV file
COPY users FROM '/import/users.csv' (HEADER);
-- Export to CSV
COPY users TO '/data/users_export.csv' (HEADER);
-- Read a Parquet file directly
SELECT * FROM '/import/data.parquet';
```
## Features
- **Embeddable**: no separate server process needed
- **Fast**: vectorized query execution engine
- **Feature-rich**: full SQL support, including window functions, CTEs, and more
- **File formats**: native support for CSV, JSON, and Parquet
- **Extensions**: loadable extensions such as `httpfs` and the PostgreSQL scanner
## Mounting Data Files
To import data files, mount them as volumes:
```yaml
volumes:
  - ./data:/import:ro
```
Then access the files in SQL:
```sql
SELECT * FROM '/import/data.csv';
```
## Notes
- DuckDB is designed for analytical (OLAP) workloads, not transactional (OLTP) ones
- The database file is stored at `/data/duckdb.db`
- Data persists in the named volume `duckdb_data`
- DuckDB can query files directly without importing them
- For production workloads, ensure sufficient memory is allocated
## References
- [DuckDB Official Documentation](https://duckdb.org/docs/)
- [DuckDB Docker Image](https://hub.docker.com/r/davidgasquez/duckdb)

38
src/duckdb/docker-compose.yaml Normal file

@@ -0,0 +1,38 @@
x-default: &default
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: "3"

services:
  duckdb:
    <<: *default
    image: davidgasquez/duckdb:${DUCKDB_VERSION:-v1.1.3}
    command: ["duckdb", "/data/duckdb.db"]
    stdin_open: true
    tty: true
    environment:
      TZ: ${TZ:-UTC}
    volumes:
      - duckdb_data:/data
      # Mount additional data files
      # - ./data:/import:ro
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 4G
        reservations:
          cpus: '0.5'
          memory: 512M
    healthcheck:
      # Probe the CLI against an in-memory database: opening /data/duckdb.db here
      # could fail, since DuckDB allows only one process to attach to a database file
      test: ["CMD-SHELL", "duckdb -c 'SELECT 1' || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s

volumes:
  duckdb_data: