Add environment configuration and documentation for various services

- Created .env.example files for Kafka, Kibana, KodBox, Kong, Langfuse, Logstash, n8n, Nginx, OceanBase, OpenCoze, RocketMQ, TiDB, and TiKV.
- Added README.md and README.zh.md files for OceanBase, RocketMQ, TiDB, and TiKV, detailing usage, configuration, and access instructions.
- Implemented docker-compose.yaml files for OceanBase, RocketMQ, TiDB, and TiKV, defining service configurations, health checks, and resource limits.
- Included broker.conf for RocketMQ to specify broker settings.
- Established a consistent timezone (UTC) across all services.
- Provided optional port overrides in .env.example files for flexibility in deployment.
Author: Sun-ZhenXing
Date: 2025-10-22 11:46:50 +08:00
Parent: 84e8b85990
Commit: ece59b42bf

49 changed files with 2326 additions and 0 deletions

src/duckdb/README.md (new file, 91 lines)

# DuckDB
DuckDB is an in-process SQL OLAP database management system designed to support analytical query workloads. It's embedded, zero-dependency, and extremely fast.
## Usage
```bash
docker compose up -d
```
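For orientation, a minimal `docker-compose.yaml` consistent with the commands below might look like the following sketch; the actual file in this directory may differ, and the `entrypoint` override is only there to keep the container alive so `exec` works:
```yaml
services:
  duckdb:
    image: davidgasquez/duckdb  # image linked in the References section
    entrypoint: ["tail", "-f", "/dev/null"]  # keep the container running for `docker compose exec`
    volumes:
      - duckdb_data:/data  # the database file lives at /data/duckdb.db

volumes:
  duckdb_data:
```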
## Access
### Interactive Shell
Access the DuckDB CLI:
```bash
docker compose exec duckdb duckdb /data/duckdb.db
```
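Inside the shell you can run SQL statements directly; for example:
```sql
SHOW TABLES;     -- list tables in the attached database
PRAGMA version;  -- print the DuckDB version
```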
### Execute Queries
Run queries directly:
```bash
docker compose exec duckdb duckdb /data/duckdb.db -c "SELECT 1"
```
### Execute SQL File
```bash
# -T disables pseudo-TTY allocation so the redirected stdin reaches duckdb
docker compose exec -T duckdb duckdb /data/duckdb.db < query.sql
```
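Here `query.sql` is any SQL script on the host; a trivial hypothetical example:
```sql
-- query.sql (hypothetical example file)
CREATE TABLE IF NOT EXISTS hits (ts TIMESTAMP, url VARCHAR);
SELECT count(*) AS n FROM hits;
```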
## Example Usage
```sql
-- Create a table
CREATE TABLE users (id INTEGER, name VARCHAR);
-- Insert data
INSERT INTO users VALUES (1, 'Alice'), (2, 'Bob');
-- Query data
SELECT * FROM users;
-- Load CSV file
COPY users FROM '/import/users.csv' (HEADER);
-- Export to CSV
COPY users TO '/data/users_export.csv' (HEADER);
-- Read Parquet file directly
SELECT * FROM '/import/data.parquet';
```
## Features
- **Embeddable**: No separate server process needed
- **Fast**: Vectorized query execution engine
- **Feature-rich**: Full SQL support with window functions, CTEs, etc.
- **File formats**: Native support for CSV, JSON, Parquet
- **Extensions**: loadable extensions (e.g. `httpfs`, `json`, `postgres`) add remote file access, extra formats, and more; see the example after this list
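As an illustration of the extension mechanism, extensions are installed once and then loaded per session; `httpfs`, for example, enables querying remote files (the URL below is a placeholder):
```sql
INSTALL httpfs;  -- one-time download of the extension
LOAD httpfs;     -- load it into the current session
-- query a remote Parquet file directly (placeholder URL)
SELECT * FROM read_parquet('https://example.com/data.parquet');
```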
## Mounting Data Files
To import data files, mount them into the `duckdb` service as read-only volumes:
```yaml
services:
  duckdb:
    volumes:
      - ./data:/import:ro  # host ./data appears read-only at /import
```
Then access files in SQL:
```sql
SELECT * FROM '/import/data.csv';
```
## Notes
- DuckDB is designed for analytical (OLAP) workloads, not transactional (OLTP)
- The database file is stored in `/data/duckdb.db`
- Data persists in the named volume `duckdb_data`
- DuckDB can query files directly without importing them
- For production workloads, ensure sufficient memory is allocated; see the example below
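For example, a per-session memory cap can be set in SQL (the value here is illustrative):
```sql
SET memory_limit = '4GB';  -- cap DuckDB's memory use for this session
SET threads = 4;           -- optionally bound parallelism as well
```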
## References
- [DuckDB Official Documentation](https://duckdb.org/docs/)
- [DuckDB Docker Image](https://hub.docker.com/r/davidgasquez/duckdb)