Add environment configuration and documentation for various services
- Created .env.example files for Kafka, Kibana, KodBox, Kong, Langfuse, Logstash, n8n, Nginx, OceanBase, OpenCoze, RocketMQ, TiDB, and TiKV.
- Added README.md and README.zh.md files for OceanBase, RocketMQ, TiDB, and TiKV, detailing usage, configuration, and access instructions.
- Implemented docker-compose.yaml files for OceanBase, RocketMQ, TiDB, and TiKV, defining service configurations, health checks, and resource limits.
- Included broker.conf for RocketMQ to specify broker settings.
- Established a consistent timezone (UTC) across all services.
- Provided optional port overrides in .env.example files for flexibility in deployment.
13 src/apache/.env.example Normal file
@@ -0,0 +1,13 @@
# Apache version
APACHE_VERSION=2.4.62-alpine3.20

# Timezone
TZ=UTC

# Apache run user and group
APACHE_RUN_USER=www-data
APACHE_RUN_GROUP=www-data

# Port overrides (optional)
# APACHE_HTTP_PORT_OVERRIDE=80
# APACHE_HTTPS_PORT_OVERRIDE=443
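Throughout these `.env.example` files, the commented `*_PORT_OVERRIDE` variables feed host-side port mappings of the form `"${APACHE_HTTP_PORT_OVERRIDE:-80}:80"` in the corresponding `docker-compose.yaml`. A minimal sketch of using one (the `8080` value is illustrative):

```bash
# Override the host-side HTTP port, then bring Apache up and probe it
cp .env.example .env
echo "APACHE_HTTP_PORT_OVERRIDE=8080" >> .env
docker compose up -d
curl -I http://localhost:8080
```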
22 src/cassandra/.env.example Normal file
@@ -0,0 +1,22 @@
# Cassandra version
CASSANDRA_VERSION=5.0.2

# Timezone
TZ=UTC

# Cluster configuration
CASSANDRA_CLUSTER_NAME=Test Cluster
CASSANDRA_DC=datacenter1
CASSANDRA_RACK=rack1
CASSANDRA_ENDPOINT_SNITCH=GossipingPropertyFileSnitch
CASSANDRA_NUM_TOKENS=256
CASSANDRA_SEEDS=cassandra
CASSANDRA_START_RPC=false

# JVM heap sizes
MAX_HEAP_SIZE=1G
HEAP_NEWSIZE=100M

# Port overrides (optional)
# CASSANDRA_CQL_PORT_OVERRIDE=9042
# CASSANDRA_THRIFT_PORT_OVERRIDE=9160
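A quick check that the cluster settings above were picked up (assuming the Compose service is named `cassandra`, as `CASSANDRA_SEEDS` suggests):

```bash
# Cluster name, datacenter, and rack as seen by the running node
docker compose exec cassandra nodetool status
docker compose exec cassandra cqlsh -e "SELECT cluster_name, data_center, rack FROM system.local;"
```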
17 src/clickhouse/.env.example Normal file
@@ -0,0 +1,17 @@
# ClickHouse version
CLICKHOUSE_VERSION=24.11.1.2557

# Timezone
TZ=UTC

# Database configuration
CLICKHOUSE_DB=default
CLICKHOUSE_USER=default
CLICKHOUSE_PASSWORD=clickhouse
CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT=1

# Port overrides (optional)
# CLICKHOUSE_HTTP_PORT_OVERRIDE=8123
# CLICKHOUSE_NATIVE_PORT_OVERRIDE=9000
# CLICKHOUSE_MYSQL_PORT_OVERRIDE=9004
# CLICKHOUSE_POSTGRES_PORT_OVERRIDE=9005
81 src/clickhouse/README.md Normal file
@@ -0,0 +1,81 @@
# ClickHouse

ClickHouse is a fast open-source column-oriented database management system that allows generating analytical data reports in real time.

## Usage

```bash
docker compose up -d
```

## Configuration

Key environment variables:

- `CLICKHOUSE_DB`: Default database name (default: `default`)
- `CLICKHOUSE_USER`: Default user name (default: `default`)
- `CLICKHOUSE_PASSWORD`: Default user password (default: `clickhouse`)
- `CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT`: Enable SQL-driven access control (default: `1`)

## Ports

- `8123`: HTTP interface
- `9000`: Native TCP protocol
- `9004`: MySQL protocol emulation
- `9005`: PostgreSQL protocol emulation

## Access

### HTTP Interface

```bash
curl 'http://localhost:8123/?user=default&password=clickhouse' -d 'SELECT 1'
```

### ClickHouse Client

```bash
docker compose exec clickhouse clickhouse-client --user default --password clickhouse
```

### MySQL Protocol

```bash
mysql -h127.0.0.1 -P9004 -udefault -pclickhouse
```

### PostgreSQL Protocol

```bash
psql -h127.0.0.1 -p9005 -Udefault
```

## Example Queries

```sql
-- Create a table
CREATE TABLE events (
    event_date Date,
    event_type String,
    user_id UInt32
) ENGINE = MergeTree()
ORDER BY (event_date, event_type);

-- Insert data
INSERT INTO events VALUES ('2024-01-01', 'click', 1), ('2024-01-01', 'view', 2);

-- Query data
SELECT * FROM events;
```

## Notes

- ClickHouse is optimized for OLAP (Online Analytical Processing) workloads
- It excels at aggregating large amounts of data quickly
- For production, consider using a cluster setup with replication
- Custom configurations can be mounted in `/etc/clickhouse-server/config.d/` and `/etc/clickhouse-server/users.d/`
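The last note deserves a sketch. A hypothetical override dropped into `users.d/` could raise the default profile's per-query memory cap (the file name and limit value are illustrative, not part of this repo):

```bash
# Hypothetical users.d override; mount ./users.d/memory.xml into
# /etc/clickhouse-server/users.d/ via the commented volume in docker-compose.yaml.
mkdir -p users.d
cat > users.d/memory.xml <<'EOF'
<clickhouse>
    <profiles>
        <default>
            <max_memory_usage>8000000000</max_memory_usage>
        </default>
    </profiles>
</clickhouse>
EOF
```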

## References

- [ClickHouse Official Documentation](https://clickhouse.com/docs)
- [ClickHouse Docker Hub](https://hub.docker.com/r/clickhouse/clickhouse-server)
81 src/clickhouse/README.zh.md Normal file
@@ -0,0 +1,81 @@
# ClickHouse

ClickHouse 是一个快速的开源列式数据库管理系统,支持实时生成分析数据报告。

## 使用方法

```bash
docker compose up -d
```

## 配置说明

主要环境变量:

- `CLICKHOUSE_DB`:默认数据库名称(默认:`default`)
- `CLICKHOUSE_USER`:默认用户名(默认:`default`)
- `CLICKHOUSE_PASSWORD`:默认用户密码(默认:`clickhouse`)
- `CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT`:启用 SQL 驱动的访问控制(默认:`1`)

## 端口说明

- `8123`:HTTP 接口
- `9000`:Native TCP 协议
- `9004`:MySQL 协议模拟
- `9005`:PostgreSQL 协议模拟

## 访问方式

### HTTP 接口

```bash
curl 'http://localhost:8123/?user=default&password=clickhouse' -d 'SELECT 1'
```

### ClickHouse 客户端

```bash
docker compose exec clickhouse clickhouse-client --user default --password clickhouse
```

### MySQL 协议

```bash
mysql -h127.0.0.1 -P9004 -udefault -pclickhouse
```

### PostgreSQL 协议

```bash
psql -h127.0.0.1 -p9005 -Udefault
```

## 示例查询

```sql
-- 创建表
CREATE TABLE events (
    event_date Date,
    event_type String,
    user_id UInt32
) ENGINE = MergeTree()
ORDER BY (event_date, event_type);

-- 插入数据
INSERT INTO events VALUES ('2024-01-01', 'click', 1), ('2024-01-01', 'view', 2);

-- 查询数据
SELECT * FROM events;
```

## 注意事项

- ClickHouse 专为 OLAP(在线分析处理)工作负载优化
- 擅长快速聚合大量数据
- 生产环境建议使用集群配置和复制功能
- 自定义配置可以挂载到 `/etc/clickhouse-server/config.d/` 和 `/etc/clickhouse-server/users.d/`

## 参考资料

- [ClickHouse 官方文档](https://clickhouse.com/docs)
- [ClickHouse Docker Hub](https://hub.docker.com/r/clickhouse/clickhouse-server)
52 src/clickhouse/docker-compose.yaml Normal file
@@ -0,0 +1,52 @@
x-default: &default
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: "3"

services:
  clickhouse:
    <<: *default
    image: clickhouse/clickhouse-server:${CLICKHOUSE_VERSION:-24.11.1.2557}
    hostname: clickhouse
    environment:
      TZ: ${TZ:-UTC}
      CLICKHOUSE_DB: ${CLICKHOUSE_DB:-default}
      CLICKHOUSE_USER: ${CLICKHOUSE_USER:-default}
      CLICKHOUSE_PASSWORD: ${CLICKHOUSE_PASSWORD:-clickhouse}
      CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT: ${CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT:-1}
    volumes:
      - clickhouse_data:/var/lib/clickhouse
      - clickhouse_logs:/var/log/clickhouse-server
      # Custom configuration
      # - ./config.xml:/etc/clickhouse-server/config.d/config.xml
      # - ./users.xml:/etc/clickhouse-server/users.d/users.xml
    ports:
      - "${CLICKHOUSE_HTTP_PORT_OVERRIDE:-8123}:8123"
      - "${CLICKHOUSE_NATIVE_PORT_OVERRIDE:-9000}:9000"
      - "${CLICKHOUSE_MYSQL_PORT_OVERRIDE:-9004}:9004"
      - "${CLICKHOUSE_POSTGRES_PORT_OVERRIDE:-9005}:9005"
    ulimits:
      nofile:
        soft: 262144
        hard: 262144
    deploy:
      resources:
        limits:
          cpus: '4.0'
          memory: 4G
        reservations:
          cpus: '1.0'
          memory: 1G
    healthcheck:
      test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://localhost:8123/ping || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s

volumes:
  clickhouse_data:
  clickhouse_logs:
19 src/consul/.env.example Normal file
@@ -0,0 +1,19 @@
# Consul version
CONSUL_VERSION=1.20.3

# Timezone
TZ=UTC

# Network interface configuration
CONSUL_BIND_INTERFACE=eth0
CONSUL_CLIENT_INTERFACE=eth0

# Consul local configuration (JSON string)
CONSUL_LOCAL_CONFIG={"datacenter":"dc1","server":true,"ui_config":{"enabled":true},"bootstrap_expect":1,"log_level":"INFO"}

# Port overrides (optional)
# CONSUL_HTTP_PORT_OVERRIDE=8500
# CONSUL_DNS_PORT_OVERRIDE=8600
# CONSUL_SERF_LAN_PORT_OVERRIDE=8301
# CONSUL_SERF_WAN_PORT_OVERRIDE=8302
# CONSUL_SERVER_RPC_PORT_OVERRIDE=8300
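`CONSUL_LOCAL_CONFIG` is merged into the agent configuration at startup, so the single-server bootstrap above can be verified once the container is running (assuming the Compose service is named `consul` and the default HTTP port mapping):

```bash
# The agent should report itself as the sole member and the elected leader
docker compose exec consul consul members
curl http://localhost:8500/v1/status/leader
```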
5 src/duckdb/.env.example Normal file
@@ -0,0 +1,5 @@
# DuckDB version
DUCKDB_VERSION=v1.1.3

# Timezone
TZ=UTC
91 src/duckdb/README.md Normal file
@@ -0,0 +1,91 @@
# DuckDB

DuckDB is an in-process SQL OLAP database management system designed to support analytical query workloads. It's embedded, zero-dependency, and extremely fast.

## Usage

```bash
docker compose up -d
```

## Access

### Interactive Shell

Access the DuckDB CLI:

```bash
docker compose exec duckdb duckdb /data/duckdb.db
```

### Execute Queries

Run queries directly:

```bash
docker compose exec duckdb duckdb /data/duckdb.db -c "SELECT 1"
```

### Execute SQL File

```bash
# -T disables TTY allocation, which is required when redirecting stdin
docker compose exec -T duckdb duckdb /data/duckdb.db < query.sql
```

## Example Usage

```sql
-- Create a table
CREATE TABLE users (id INTEGER, name VARCHAR);

-- Insert data
INSERT INTO users VALUES (1, 'Alice'), (2, 'Bob');

-- Query data
SELECT * FROM users;

-- Load CSV file
COPY users FROM '/import/users.csv' (HEADER);

-- Export to CSV
COPY users TO '/data/users_export.csv' (HEADER);

-- Read Parquet file directly
SELECT * FROM '/import/data.parquet';
```

## Features

- **Embeddable**: No separate server process needed
- **Fast**: Vectorized query execution engine
- **Feature-rich**: Full SQL support with window functions, CTEs, etc.
- **File formats**: Native support for CSV, JSON, Parquet
- **Extensions**: PostgreSQL-compatible extensions

## Mounting Data Files

To import data files, mount them as volumes:

```yaml
volumes:
  - ./data:/import:ro
```

Then access files in SQL:

```sql
SELECT * FROM '/import/data.csv';
```

## Notes

- DuckDB is designed for analytical (OLAP) workloads, not transactional (OLTP)
- The database file is stored in `/data/duckdb.db`
- Data persists in the named volume `duckdb_data`
- DuckDB can query files directly without importing them
- For production workloads, ensure sufficient memory is allocated
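On the last point, DuckDB's working memory can be capped per session; a small sketch (the `2GB` figure and the Parquet path are illustrative):

```bash
# Set a session memory limit before running a heavy aggregation
docker compose exec duckdb duckdb /data/duckdb.db \
  -c "SET memory_limit='2GB'; SELECT count(*) FROM '/import/data.parquet';"
```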

## References

- [DuckDB Official Documentation](https://duckdb.org/docs/)
- [DuckDB Docker Image](https://hub.docker.com/r/davidgasquez/duckdb)
91 src/duckdb/README.zh.md Normal file
@@ -0,0 +1,91 @@
# DuckDB

DuckDB 是一个进程内 SQL OLAP 数据库管理系统,专为支持分析查询工作负载而设计。它是嵌入式的、零依赖的,并且速度极快。

## 使用方法

```bash
docker compose up -d
```

## 访问方式

### 交互式 Shell

访问 DuckDB CLI:

```bash
docker compose exec duckdb duckdb /data/duckdb.db
```

### 执行查询

直接运行查询:

```bash
docker compose exec duckdb duckdb /data/duckdb.db -c "SELECT 1"
```

### 执行 SQL 文件

```bash
# 重定向 stdin 时需要 -T 禁用 TTY 分配
docker compose exec -T duckdb duckdb /data/duckdb.db < query.sql
```

## 使用示例

```sql
-- 创建表
CREATE TABLE users (id INTEGER, name VARCHAR);

-- 插入数据
INSERT INTO users VALUES (1, 'Alice'), (2, 'Bob');

-- 查询数据
SELECT * FROM users;

-- 加载 CSV 文件
COPY users FROM '/import/users.csv' (HEADER);

-- 导出到 CSV
COPY users TO '/data/users_export.csv' (HEADER);

-- 直接读取 Parquet 文件
SELECT * FROM '/import/data.parquet';
```

## 特性

- **可嵌入**:无需单独的服务器进程
- **快速**:向量化查询执行引擎
- **功能丰富**:完整的 SQL 支持,包括窗口函数、CTE 等
- **文件格式**:原生支持 CSV、JSON、Parquet
- **扩展**:兼容 PostgreSQL 的扩展

## 挂载数据文件

要导入数据文件,将它们作为卷挂载:

```yaml
volumes:
  - ./data:/import:ro
```

然后在 SQL 中访问文件:

```sql
SELECT * FROM '/import/data.csv';
```

## 注意事项

- DuckDB 专为分析(OLAP)工作负载设计,而非事务(OLTP)
- 数据库文件存储在 `/data/duckdb.db`
- 数据持久化在命名卷 `duckdb_data` 中
- DuckDB 可以直接查询文件而无需导入
- 生产工作负载需确保分配足够的内存

## 参考资料

- [DuckDB 官方文档](https://duckdb.org/docs/)
- [DuckDB Docker 镜像](https://hub.docker.com/r/davidgasquez/duckdb)
38 src/duckdb/docker-compose.yaml Normal file
@@ -0,0 +1,38 @@
x-default: &default
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: "3"

services:
  duckdb:
    <<: *default
    image: davidgasquez/duckdb:${DUCKDB_VERSION:-v1.1.3}
    command: ["duckdb", "/data/duckdb.db"]
    stdin_open: true
    tty: true
    environment:
      TZ: ${TZ:-UTC}
    volumes:
      - duckdb_data:/data
      # Mount additional data files
      # - ./data:/import:ro
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 4G
        reservations:
          cpus: '0.5'
          memory: 512M
    healthcheck:
      test: ["CMD-SHELL", "duckdb /data/duckdb.db -c 'SELECT 1' || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s

volumes:
  duckdb_data:
20 src/elasticsearch/.env.example Normal file
@@ -0,0 +1,20 @@
# Elasticsearch version
ELASTICSEARCH_VERSION=8.16.1

# Timezone
TZ=UTC

# Cluster configuration
ELASTICSEARCH_CLUSTER_NAME=docker-cluster
ELASTICSEARCH_DISCOVERY_TYPE=single-node

# Security settings
ELASTICSEARCH_SECURITY_ENABLED=false
ELASTICSEARCH_SSL_ENABLED=false

# JVM heap size
ELASTICSEARCH_HEAP_SIZE=1g

# Port overrides (optional)
# ELASTICSEARCH_HTTP_PORT_OVERRIDE=9200
# ELASTICSEARCH_TRANSPORT_PORT_OVERRIDE=9300
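With security disabled as above, the node answers unauthenticated requests, so a basic health probe looks like this (assuming the default HTTP port mapping):

```bash
# A single-node cluster typically reports yellow or green here
curl -s http://localhost:9200/_cluster/health?pretty
```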
9 src/flink/.env.example Normal file
@@ -0,0 +1,9 @@
# Flink version
FLINK_VERSION=1.20.0-scala_2.12-java11

# Timezone
TZ=UTC

# Port overrides (optional)
# FLINK_JOBMANAGER_RPC_PORT_OVERRIDE=6123
# FLINK_JOBMANAGER_UI_PORT_OVERRIDE=8081
104 src/flink/README.md Normal file
@@ -0,0 +1,104 @@
# Apache Flink

Apache Flink is a framework and distributed processing engine for stateful computations over unbounded and bounded data streams.

## Usage

```bash
docker compose up -d
```

## Components

This setup includes:

- **JobManager**: Coordinates Flink jobs and manages resources
- **TaskManager**: Executes tasks and manages data streams

## Configuration

Key configuration properties in `FLINK_PROPERTIES`:

- `jobmanager.rpc.address`: JobManager RPC address
- `jobmanager.memory.process.size`: JobManager memory (default: 1600m)
- `taskmanager.memory.process.size`: TaskManager memory (default: 1600m)
- `taskmanager.numberOfTaskSlots`: Number of task slots per TaskManager (default: 2)

## Ports

- `6123`: JobManager RPC port
- `8081`: Flink Web UI

## Access

### Web UI

Access the Flink Dashboard at: <http://localhost:8081>

### Submit Jobs

Submit a Flink job:

```bash
docker compose exec jobmanager ./bin/flink run /opt/flink/examples/streaming/WordCount.jar
```

Submit a custom job:

```bash
docker compose exec jobmanager ./bin/flink run /opt/flink/jobs/my-job.jar
```

### Job Management

```bash
# List running jobs
docker compose exec jobmanager ./bin/flink list

# Cancel a job
docker compose exec jobmanager ./bin/flink cancel <job-id>

# Show job details
docker compose exec jobmanager ./bin/flink info /path/to/job.jar
```

## Example: WordCount

Run the built-in WordCount example:

```bash
docker compose exec jobmanager ./bin/flink run /opt/flink/examples/streaming/WordCount.jar
```

## Scaling TaskManagers

To scale TaskManagers for more processing capacity:

```bash
docker compose up -d --scale taskmanager=3
```

## Custom Jobs

Mount your custom Flink jobs by uncommenting the volume in `docker-compose.yaml`:

```yaml
volumes:
  - ./jobs:/opt/flink/jobs
```

Then place your JAR files in the `./jobs` directory.

## Notes

- This is a standalone cluster setup suitable for development
- For production, consider using Flink on Kubernetes or YARN
- Adjust memory settings based on your workload requirements
- Task slots determine parallelism; more slots allow more parallel tasks (see the sketch below)
- Data is persisted in the named volume `flink_data`
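Since available parallelism is TaskManagers × slots, scaling out pairs naturally with a higher job parallelism; a sketch using the built-in example:

```bash
# Three TaskManagers x 2 slots each = 6 slots, so -p up to 6 is schedulable
docker compose up -d --scale taskmanager=3
docker compose exec jobmanager ./bin/flink run -p 6 \
  /opt/flink/examples/streaming/WordCount.jar
```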

## References

- [Apache Flink Official Documentation](https://flink.apache.org/docs/stable/)
- [Flink Docker Setup](https://nightlies.apache.org/flink/flink-docs-stable/docs/deployment/resource-providers/standalone/docker/)
- [Flink Docker Hub](https://hub.docker.com/_/flink)
104 src/flink/README.zh.md Normal file
@@ -0,0 +1,104 @@
# Apache Flink

Apache Flink 是一个框架和分布式处理引擎,用于对无界和有界数据流进行有状态计算。

## 使用方法

```bash
docker compose up -d
```

## 组件说明

此配置包含:

- **JobManager**:协调 Flink 作业并管理资源
- **TaskManager**:执行任务并管理数据流

## 配置说明

`FLINK_PROPERTIES` 中的关键配置项:

- `jobmanager.rpc.address`:JobManager RPC 地址
- `jobmanager.memory.process.size`:JobManager 内存(默认:1600m)
- `taskmanager.memory.process.size`:TaskManager 内存(默认:1600m)
- `taskmanager.numberOfTaskSlots`:每个 TaskManager 的任务槽数量(默认:2)

## 端口说明

- `6123`:JobManager RPC 端口
- `8081`:Flink Web UI

## 访问方式

### Web UI

访问 Flink Dashboard:<http://localhost:8081>

### 提交作业

提交 Flink 作业:

```bash
docker compose exec jobmanager ./bin/flink run /opt/flink/examples/streaming/WordCount.jar
```

提交自定义作业:

```bash
docker compose exec jobmanager ./bin/flink run /opt/flink/jobs/my-job.jar
```

### 作业管理

```bash
# 列出运行中的作业
docker compose exec jobmanager ./bin/flink list

# 取消作业
docker compose exec jobmanager ./bin/flink cancel <job-id>

# 显示作业详情
docker compose exec jobmanager ./bin/flink info /path/to/job.jar
```

## 示例:WordCount

运行内置的 WordCount 示例:

```bash
docker compose exec jobmanager ./bin/flink run /opt/flink/examples/streaming/WordCount.jar
```

## 扩展 TaskManager

要扩展 TaskManager 以获得更多处理能力:

```bash
docker compose up -d --scale taskmanager=3
```

## 自定义作业

通过取消注释 `docker-compose.yaml` 中的卷来挂载自定义 Flink 作业:

```yaml
volumes:
  - ./jobs:/opt/flink/jobs
```

然后将 JAR 文件放在 `./jobs` 目录中。

## 注意事项

- 这是一个独立集群配置,适合开发环境
- 生产环境建议在 Kubernetes 或 YARN 上使用 Flink
- 根据工作负载需求调整内存设置
- 任务槽决定并行度,更多槽允许更多并行任务
- 数据持久化在命名卷 `flink_data` 中

## 参考资料

- [Apache Flink 官方文档](https://flink.apache.org/zh/docs/stable/)
- [Flink Docker 设置](https://nightlies.apache.org/flink/flink-docs-stable/docs/deployment/resource-providers/standalone/docker/)
- [Flink Docker Hub](https://hub.docker.com/_/flink)
78 src/flink/docker-compose.yaml Normal file
@@ -0,0 +1,78 @@
x-default: &default
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: "3"

services:
  jobmanager:
    <<: *default
    image: flink:${FLINK_VERSION:-1.20.0-scala_2.12-java11}
    hostname: jobmanager
    command: jobmanager
    environment:
      TZ: ${TZ:-UTC}
      FLINK_PROPERTIES: |
        jobmanager.rpc.address: jobmanager
        jobmanager.memory.process.size: 1600m
        taskmanager.memory.process.size: 1600m
        taskmanager.numberOfTaskSlots: 2
    volumes:
      - flink_data:/opt/flink/data
      # Custom Flink jobs
      # - ./jobs:/opt/flink/jobs
    ports:
      - "${FLINK_JOBMANAGER_RPC_PORT_OVERRIDE:-6123}:6123"
      - "${FLINK_JOBMANAGER_UI_PORT_OVERRIDE:-8081}:8081"
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 2G
        reservations:
          cpus: '0.5'
          memory: 1G
    healthcheck:
      test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://localhost:8081 || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s

  taskmanager:
    <<: *default
    image: flink:${FLINK_VERSION:-1.20.0-scala_2.12-java11}
    command: taskmanager
    environment:
      TZ: ${TZ:-UTC}
      FLINK_PROPERTIES: |
        jobmanager.rpc.address: jobmanager
        jobmanager.memory.process.size: 1600m
        taskmanager.memory.process.size: 1600m
        taskmanager.numberOfTaskSlots: 2
    volumes:
      - flink_data:/opt/flink/data
      # Custom Flink jobs
      # - ./jobs:/opt/flink/jobs
    depends_on:
      jobmanager:
        condition: service_healthy
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 2G
        reservations:
          cpus: '0.5'
          memory: 1G
    healthcheck:
      test: ["CMD-SHELL", "ps aux | grep -v grep | grep -q taskmanager || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s

volumes:
  flink_data:
22 src/halo/.env.example Normal file
@@ -0,0 +1,22 @@
# Halo version
HALO_VERSION=2.21.9

# Timezone
TZ=UTC

# Halo port
HALO_PORT=8090

# External URL (should match your domain)
HALO_EXTERNAL_URL=http://localhost:8090

# Admin credentials
HALO_ADMIN_USERNAME=admin
# HALO_ADMIN_PASSWORD= # Leave empty for random password on first start

# Database configuration
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=halo
SPRING_R2DBC_URL=r2dbc:pool:postgresql://halo-db:5432/halo
SPRING_SQL_INIT_PLATFORM=postgresql
23 src/harbor/.env.example Normal file
@@ -0,0 +1,23 @@
# Harbor version
HARBOR_VERSION=v2.12.0

# Timezone
TZ=UTC

# Harbor secrets (generate random strings for production)
HARBOR_CORE_SECRET=
HARBOR_JOBSERVICE_SECRET=

# Database configuration
POSTGRES_PASSWORD=changeit
REDIS_PASSWORD=

# Admin password
HARBOR_ADMIN_PASSWORD=Harbor12345

# External URL
HARBOR_EXTERNAL_URL=http://localhost:80

# Port overrides (optional)
# HARBOR_HTTP_PORT_OVERRIDE=80
# HARBOR_HTTPS_PORT_OVERRIDE=443
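For the two empty Harbor secrets above, any sufficiently random string works; a hedged way to fill them in (length requirements, if any, come from Harbor itself):

```bash
# Generate random secrets and append them to .env
echo "HARBOR_CORE_SECRET=$(openssl rand -hex 16)" >> .env
echo "HARBOR_JOBSERVICE_SECRET=$(openssl rand -hex 16)" >> .env
```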
12 src/hbase/.env.example Normal file
@@ -0,0 +1,12 @@
# HBase version
HBASE_VERSION=2.6

# Timezone
TZ=UTC

# Port overrides (optional)
# HBASE_MASTER_PORT_OVERRIDE=16000
# HBASE_MASTER_INFO_PORT_OVERRIDE=16010
# HBASE_REGIONSERVER_PORT_OVERRIDE=16020
# HBASE_REGIONSERVER_INFO_PORT_OVERRIDE=16030
# HBASE_ZOOKEEPER_PORT_OVERRIDE=2181
63 src/hbase/README.md Normal file
@@ -0,0 +1,63 @@
# HBase

HBase is a distributed, scalable, big data store built on top of Hadoop. It provides random, real-time read/write access to your Big Data.

## Usage

```bash
docker compose up -d
```

## Configuration

This setup runs HBase in standalone mode with embedded ZooKeeper.

## Ports

- `16000`: HBase Master port
- `16010`: HBase Master Web UI
- `16020`: HBase RegionServer port
- `16030`: HBase RegionServer Web UI
- `2181`: ZooKeeper client port

## Access

### HBase Shell

Access HBase shell:

```bash
docker compose exec hbase hbase shell
```

### Web UI

- HBase Master UI: <http://localhost:16010>
- HBase RegionServer UI: <http://localhost:16030>

### Example Commands

```bash
# List tables
echo "list" | docker compose exec -T hbase hbase shell -n

# Create a table
echo "create 'test', 'cf'" | docker compose exec -T hbase hbase shell -n

# Put data
echo "put 'test', 'row1', 'cf:a', 'value1'" | docker compose exec -T hbase hbase shell -n

# Scan table
echo "scan 'test'" | docker compose exec -T hbase hbase shell -n
```

## Notes

- This is a standalone setup suitable for development and testing
- For production, consider using a distributed HBase cluster with external ZooKeeper and HDFS
- Data is persisted in named volumes

## References

- [HBase Official Documentation](https://hbase.apache.org/book.html)
- [HBase Docker Image](https://hub.docker.com/r/harisekhon/hbase)
63 src/hbase/README.zh.md Normal file
@@ -0,0 +1,63 @@
# HBase

HBase 是一个构建在 Hadoop 之上的分布式、可扩展的大数据存储系统,提供对大数据的随机、实时读写访问。

## 使用方法

```bash
docker compose up -d
```

## 配置说明

此配置运行 HBase 独立模式,内置 ZooKeeper。

## 端口说明

- `16000`:HBase Master 端口
- `16010`:HBase Master Web UI
- `16020`:HBase RegionServer 端口
- `16030`:HBase RegionServer Web UI
- `2181`:ZooKeeper 客户端端口

## 访问方式

### HBase Shell

访问 HBase shell:

```bash
docker compose exec hbase hbase shell
```

### Web UI

- HBase Master UI:<http://localhost:16010>
- HBase RegionServer UI:<http://localhost:16030>

### 示例命令

```bash
# 列出所有表
echo "list" | docker compose exec -T hbase hbase shell -n

# 创建表
echo "create 'test', 'cf'" | docker compose exec -T hbase hbase shell -n

# 插入数据
echo "put 'test', 'row1', 'cf:a', 'value1'" | docker compose exec -T hbase hbase shell -n

# 扫描表
echo "scan 'test'" | docker compose exec -T hbase hbase shell -n
```

## 注意事项

- 这是一个独立模式配置,适合开发和测试
- 生产环境建议使用分布式 HBase 集群,配合外部 ZooKeeper 和 HDFS
- 数据持久化在命名卷中

## 参考资料

- [HBase 官方文档](https://hbase.apache.org/book.html)
- [HBase Docker 镜像](https://hub.docker.com/r/harisekhon/hbase)
42 src/hbase/docker-compose.yaml Normal file
@@ -0,0 +1,42 @@
x-default: &default
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: "3"

services:
  hbase:
    <<: *default
    image: harisekhon/hbase:${HBASE_VERSION:-2.6}
    hostname: hbase
    environment:
      TZ: ${TZ:-UTC}
    volumes:
      - hbase_data:/hbase-data
      - hbase_zookeeper_data:/zookeeper-data
    ports:
      - "${HBASE_MASTER_PORT_OVERRIDE:-16000}:16000"
      - "${HBASE_MASTER_INFO_PORT_OVERRIDE:-16010}:16010"
      - "${HBASE_REGIONSERVER_PORT_OVERRIDE:-16020}:16020"
      - "${HBASE_REGIONSERVER_INFO_PORT_OVERRIDE:-16030}:16030"
      - "${HBASE_ZOOKEEPER_PORT_OVERRIDE:-2181}:2181"
    deploy:
      resources:
        limits:
          cpus: '4.0'
          memory: 4G
        reservations:
          cpus: '1.0'
          memory: 2G
    healthcheck:
      test: ["CMD-SHELL", "echo 'status' | hbase shell -n || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 60s

volumes:
  hbase_data:
  hbase_zookeeper_data:
20 src/jenkins/.env.example Normal file
@@ -0,0 +1,20 @@
# Jenkins version
JENKINS_VERSION=2.486-lts-jdk17

# Timezone
TZ=UTC

# Jenkins options
JENKINS_OPTS=--httpPort=8080
JAVA_OPTS=-Djenkins.install.runSetupWizard=false -Xmx2g

# Configuration as Code
CASC_JENKINS_CONFIG=/var/jenkins_home/casc_configs

# User and group IDs (adjust to match your host user)
JENKINS_USER_ID=1000
JENKINS_GROUP_ID=1000

# Port overrides (optional)
# JENKINS_HTTP_PORT_OVERRIDE=8080
# JENKINS_AGENT_PORT_OVERRIDE=50000
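Because the setup wizard is disabled, initial configuration is expected to come from Configuration as Code. A minimal, hypothetical JCasC file under `CASC_JENKINS_CONFIG` (assumes the Configuration as Code plugin is installed and `./casc_configs` is mounted to `/var/jenkins_home/casc_configs`):

```bash
# Hypothetical minimal JCasC file; mount ./casc_configs into the container
mkdir -p casc_configs
cat > casc_configs/jenkins.yaml <<'EOF'
jenkins:
  systemMessage: "Configured by JCasC"
  numExecutors: 2
EOF
```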
10 src/kafka/.env.example Normal file
@@ -0,0 +1,10 @@
# Kafka version (Confluent Platform version)
KAFKA_VERSION=7.8.0

# Timezone
TZ=UTC

# Port overrides (optional)
# ZOOKEEPER_CLIENT_PORT_OVERRIDE=2181
# KAFKA_BROKER_PORT_OVERRIDE=9092
# KAFKA_JMX_PORT_OVERRIDE=9999
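A quick smoke test once the broker is up (assumes a Compose service named `kafka` and a Confluent Platform image, whose CLI tools are on the PATH):

```bash
# Create a topic, produce one message, and read it back
docker compose exec kafka kafka-topics --bootstrap-server localhost:9092 \
  --create --topic smoke-test --partitions 1 --replication-factor 1
echo "hello" | docker compose exec -T kafka kafka-console-producer \
  --bootstrap-server localhost:9092 --topic smoke-test
docker compose exec kafka kafka-console-consumer --bootstrap-server localhost:9092 \
  --topic smoke-test --from-beginning --max-messages 1
```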
20 src/kibana/.env.example Normal file
@@ -0,0 +1,20 @@
# Kibana version
KIBANA_VERSION=8.16.1

# Timezone
TZ=UTC

# Elasticsearch connection
ELASTICSEARCH_HOSTS=http://elasticsearch:9200
ELASTICSEARCH_USERNAME=
ELASTICSEARCH_PASSWORD=

# Security settings
KIBANA_SECURITY_ENABLED=false
KIBANA_ENCRYPTION_KEY=

# Logging
KIBANA_LOG_LEVEL=info

# Port overrides (optional)
# KIBANA_PORT_OVERRIDE=5601
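Kibana exposes a status API that is handy for healthchecks or for confirming it reached Elasticsearch (assuming the default port mapping):

```bash
# Overall state should become "available" once Kibana connects to Elasticsearch
curl -s http://localhost:5601/api/status
```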
21 src/kodbox/.env.example Normal file
@@ -0,0 +1,21 @@
# KodBox version
KODBOX_VERSION=1.62

# Timezone
TZ=UTC

# Port
KODBOX_PORT=80

# MySQL configuration
MYSQL_HOST=kodbox-db
MYSQL_PORT=3306
MYSQL_DATABASE=kodbox
MYSQL_USER=kodbox
MYSQL_PASSWORD=kodbox123
MYSQL_ROOT_PASSWORD=root123

# Redis configuration
REDIS_HOST=kodbox-redis
REDIS_PORT=6379
REDIS_PASSWORD=
26 src/kong/.env.example Normal file
@@ -0,0 +1,26 @@
# Kong version
KONG_VERSION=3.8.0-alpine

# PostgreSQL version
POSTGRES_VERSION=16.6-alpine3.21

# Timezone
TZ=UTC

# Database password
KONG_DB_PASSWORD=kongpass

# Kong Admin API configuration
KONG_ADMIN_ACCESS_LOG=/dev/stdout
KONG_ADMIN_ERROR_LOG=/dev/stderr
KONG_ADMIN_LISTEN=0.0.0.0:8001

# Kong Proxy configuration
KONG_PROXY_ACCESS_LOG=/dev/stdout
KONG_PROXY_ERROR_LOG=/dev/stderr

# Port overrides (optional)
# KONG_PROXY_PORT_OVERRIDE=8000
# KONG_PROXY_SSL_PORT_OVERRIDE=8443
# KONG_ADMIN_PORT_OVERRIDE=8001
# KONG_ADMIN_SSL_PORT_OVERRIDE=8444
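With the Admin API on 8001, registering a service and a route takes two calls; a sketch with an illustrative upstream:

```bash
# Register an upstream service and a route, then exercise it via the proxy
curl -i -X POST http://localhost:8001/services \
  --data name=example --data url=http://httpbin.org
curl -i -X POST http://localhost:8001/services/example/routes \
  --data 'paths[]=/example'
# Kong strips the route path by default, so this reaches the upstream's /get
curl -i http://localhost:8000/example/get
```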
22 src/langfuse/.env.example Normal file
@@ -0,0 +1,22 @@
# Langfuse version
LANGFUSE_VERSION=3.115.0

# Timezone
TZ=UTC

# Port
LANGFUSE_PORT=3000

# Database configuration
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=langfuse

# NextAuth configuration
NEXTAUTH_URL=http://localhost:3000
NEXTAUTH_SECRET= # Generate with: openssl rand -base64 32
SALT= # Generate with: openssl rand -base64 32

# Feature flags
TELEMETRY_ENABLED=true
LANGFUSE_ENABLE_EXPERIMENTAL_FEATURES=false
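The two secrets must be non-empty before first start; following the inline hints (GNU sed; on macOS use `sed -i ''`):

```bash
# Fill in NEXTAUTH_SECRET and SALT using the commands suggested in the file
sed -i "s|^NEXTAUTH_SECRET=.*|NEXTAUTH_SECRET=$(openssl rand -base64 32)|" .env
sed -i "s|^SALT=.*|SALT=$(openssl rand -base64 32)|" .env
```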
30 src/logstash/.env.example Normal file
@@ -0,0 +1,30 @@
# Logstash version
LOGSTASH_VERSION=8.16.1

# Timezone
TZ=UTC

# Elasticsearch connection
ELASTICSEARCH_HOSTS=http://elasticsearch:9200
ELASTICSEARCH_USERNAME=
ELASTICSEARCH_PASSWORD=

# Monitoring
LOGSTASH_MONITORING_ENABLED=false

# JVM options
LS_JAVA_OPTS=-Xmx1g -Xms1g

# Pipeline configuration
LOGSTASH_PIPELINE_WORKERS=2
LOGSTASH_PIPELINE_BATCH_SIZE=125
LOGSTASH_PIPELINE_BATCH_DELAY=50

# Logging
LOGSTASH_LOG_LEVEL=info

# Port overrides (optional)
# LOGSTASH_BEATS_PORT_OVERRIDE=5044
# LOGSTASH_TCP_PORT_OVERRIDE=5000
# LOGSTASH_UDP_PORT_OVERRIDE=5000
# LOGSTASH_HTTP_PORT_OVERRIDE=9600
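The Beats port above only matters once a pipeline consumes it; a minimal, hypothetical pipeline definition (assumes pipeline files are mounted at the image's default `/usr/share/logstash/pipeline/`):

```bash
# Hypothetical minimal pipeline: Beats in, Elasticsearch out
mkdir -p pipeline
cat > pipeline/beats.conf <<'EOF'
input {
  beats { port => 5044 }
}
output {
  elasticsearch { hosts => ["http://elasticsearch:9200"] }
}
EOF
```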
38 src/n8n/.env.example Normal file
@@ -0,0 +1,38 @@
# n8n version
N8N_VERSION=1.114.0

# Timezone
TZ=UTC
GENERIC_TIMEZONE=UTC

# Port
N8N_PORT=5678

# Basic auth
N8N_BASIC_AUTH_ACTIVE=true
N8N_BASIC_AUTH_USER=
N8N_BASIC_AUTH_PASSWORD=

# Host configuration
N8N_HOST=0.0.0.0
N8N_PROTOCOL=http
WEBHOOK_URL=http://localhost:5678/

# Database configuration (SQLite by default, PostgreSQL optional)
DB_TYPE=sqlite
DB_POSTGRESDB_DATABASE=n8n
DB_POSTGRESDB_HOST=n8n-db
DB_POSTGRESDB_PORT=5432
DB_POSTGRESDB_USER=n8n
DB_POSTGRESDB_PASSWORD=

# Execution mode
EXECUTIONS_MODE=regular

# Encryption key (generate with: openssl rand -base64 32)
N8N_ENCRYPTION_KEY=

# PostgreSQL configuration (if using PostgreSQL)
POSTGRES_USER=n8n
POSTGRES_PASSWORD=n8n
POSTGRES_DB=n8n
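Switching n8n from SQLite to the bundled PostgreSQL means flipping `DB_TYPE` (n8n's PostgreSQL type string is `postgresdb`) and filling in the `DB_POSTGRESDB_*` values; a sketch (GNU sed; on macOS use `sed -i ''`):

```bash
# Point n8n at PostgreSQL instead of SQLite, matching the POSTGRES_* values above
sed -i \
  -e "s/^DB_TYPE=sqlite/DB_TYPE=postgresdb/" \
  -e "s/^DB_POSTGRESDB_PASSWORD=$/DB_POSTGRESDB_PASSWORD=n8n/" \
  .env
docker compose up -d
```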
13 src/nginx/.env.example Normal file
@@ -0,0 +1,13 @@
# Nginx version
NGINX_VERSION=1.29.2-alpine3.22

# Timezone
TZ=UTC

# Server configuration
NGINX_HOST=localhost
NGINX_PORT=80

# Port overrides (optional)
# NGINX_HTTP_PORT_OVERRIDE=80
# NGINX_HTTPS_PORT_OVERRIDE=443
22 src/oceanbase/.env.example Normal file
@@ -0,0 +1,22 @@
# OceanBase version
OCEANBASE_VERSION=4.3.3.1-106000012024110114

# Timezone
TZ=UTC

# Root password
OB_ROOT_PASSWORD=oceanbase

# Cluster configuration
OB_CLUSTER_NAME=obcluster
OB_TENANT_NAME=test
OB_TENANT_PASSWORD=oceanbase

# Resource limits
OB_MEMORY_LIMIT=8G
OB_DATAFILE_SIZE=10G
OB_LOG_DISK_SIZE=6G

# Port overrides (optional)
# OCEANBASE_SQL_PORT_OVERRIDE=2881
# OCEANBASE_RPC_PORT_OVERRIDE=2882
51 src/oceanbase/README.md Normal file
@@ -0,0 +1,51 @@
# OceanBase

OceanBase is a distributed relational database developed by Ant Group. It features high availability and high scalability, and is compatible with MySQL.

## Usage

```bash
docker compose up -d
```

## Configuration

Key environment variables:

- `OB_ROOT_PASSWORD`: Root user password (default: `oceanbase`)
- `OB_TENANT_NAME`: Tenant name (default: `test`)
- `OB_TENANT_PASSWORD`: Tenant password (default: `oceanbase`)
- `OB_MEMORY_LIMIT`: Memory limit (default: `8G`, minimum: `8G`)
- `OB_DATAFILE_SIZE`: Data file size (default: `10G`)
- `OB_LOG_DISK_SIZE`: Log disk size (default: `6G`)

## Ports

- `2881`: MySQL protocol port
- `2882`: RPC port

## Connection

Connect using a MySQL client:

```bash
mysql -h127.0.0.1 -P2881 -uroot@test -poceanbase
```

Or connect to the sys tenant:

```bash
mysql -h127.0.0.1 -P2881 -uroot -poceanbase
```

## Notes

- OceanBase requires at least 8GB of memory to run properly
- First startup may take several minutes to initialize
- Use `slim` mode for development/testing environments
- For production, consider using `normal` mode and a dedicated cluster

## References

- [OceanBase Official Documentation](https://www.oceanbase.com/docs)
- [OceanBase Docker Hub](https://hub.docker.com/r/oceanbase/oceanbase-ce)
51 src/oceanbase/README.zh.md Normal file
@@ -0,0 +1,51 @@
# OceanBase

OceanBase 是由蚂蚁集团开发的分布式关系型数据库,具有高可用、高扩展性的特点,并兼容 MySQL 协议。

## 使用方法

```bash
docker compose up -d
```

## 配置说明

主要环境变量:

- `OB_ROOT_PASSWORD`:root 用户密码(默认:`oceanbase`)
- `OB_TENANT_NAME`:租户名称(默认:`test`)
- `OB_TENANT_PASSWORD`:租户密码(默认:`oceanbase`)
- `OB_MEMORY_LIMIT`:内存限制(默认:`8G`,最小:`8G`)
- `OB_DATAFILE_SIZE`:数据文件大小(默认:`10G`)
- `OB_LOG_DISK_SIZE`:日志磁盘大小(默认:`6G`)

## 端口说明

- `2881`:MySQL 协议端口
- `2882`:RPC 端口

## 连接方式

使用 MySQL 客户端连接:

```bash
mysql -h127.0.0.1 -P2881 -uroot@test -poceanbase
```

或连接到 sys 租户:

```bash
mysql -h127.0.0.1 -P2881 -uroot -poceanbase
```

## 注意事项

- OceanBase 需要至少 8GB 内存才能正常运行
- 首次启动可能需要几分钟时间进行初始化
- 使用 `slim` 模式适合开发/测试环境
- 生产环境建议使用 `normal` 模式和专用集群

## 参考资料

- [OceanBase 官方文档](https://www.oceanbase.com/docs)
- [OceanBase Docker Hub](https://hub.docker.com/r/oceanbase/oceanbase-ce)
44 src/oceanbase/docker-compose.yaml Normal file
@@ -0,0 +1,44 @@
x-default: &default
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: "3"

services:
  oceanbase:
    <<: *default
    image: oceanbase/oceanbase-ce:${OCEANBASE_VERSION:-4.3.3.1-106000012024110114}
    environment:
      TZ: ${TZ:-UTC}
      MODE: slim
      OB_ROOT_PASSWORD: ${OB_ROOT_PASSWORD:-oceanbase}
      OB_CLUSTER_NAME: ${OB_CLUSTER_NAME:-obcluster}
      OB_TENANT_NAME: ${OB_TENANT_NAME:-test}
      OB_TENANT_PASSWORD: ${OB_TENANT_PASSWORD:-oceanbase}
      OB_MEMORY_LIMIT: ${OB_MEMORY_LIMIT:-8G}
      OB_DATAFILE_SIZE: ${OB_DATAFILE_SIZE:-10G}
      OB_LOG_DISK_SIZE: ${OB_LOG_DISK_SIZE:-6G}
    volumes:
      - oceanbase_data:/root/ob
    ports:
      - "${OCEANBASE_SQL_PORT_OVERRIDE:-2881}:2881"
      - "${OCEANBASE_RPC_PORT_OVERRIDE:-2882}:2882"
    deploy:
      resources:
        limits:
          cpus: '4.0'
          memory: 10G
        reservations:
          cpus: '2.0'
          memory: 8G
    healthcheck:
      test: ["CMD-SHELL", "mysql -h127.0.0.1 -P2881 -uroot -p$$OB_ROOT_PASSWORD -e 'SELECT 1' || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 120s

volumes:
  oceanbase_data:
7 src/opencoze/.env.example Normal file
@@ -0,0 +1,7 @@
# Timezone
TZ=UTC

# Note: OpenCoze is a complex multi-service platform.
# This is a placeholder configuration.
# For full deployment, please refer to:
# https://github.com/coze-dev/coze-studio/tree/main/docker
13 src/rocketmq/.env.example Normal file
@@ -0,0 +1,13 @@
# RocketMQ version
ROCKETMQ_VERSION=5.3.1
ROCKETMQ_DASHBOARD_VERSION=2.0.0

# Timezone
TZ=UTC

# Port overrides (optional)
# ROCKETMQ_NAMESRV_PORT_OVERRIDE=9876
# ROCKETMQ_BROKER_PORT_OVERRIDE=10909
# ROCKETMQ_BROKER_VIP_PORT_OVERRIDE=10911
# ROCKETMQ_BROKER_HA_PORT_OVERRIDE=10912
# ROCKETMQ_DASHBOARD_PORT_OVERRIDE=8080
87 src/rocketmq/README.md Normal file
@@ -0,0 +1,87 @@
# RocketMQ

Apache RocketMQ is a distributed messaging and streaming platform with low latency, high performance and reliability, trillion-level capacity and flexible scalability.

## Usage

```bash
docker compose up -d
```

## Components

This setup includes:

- **NameServer**: Manages broker routing information
- **Broker**: Stores and delivers messages
- **Dashboard**: Web UI for monitoring and management

## Configuration

The broker configuration is in `broker.conf`. Key settings:

- `brokerClusterName`: Cluster name
- `brokerName`: Broker name
- `autoCreateTopicEnable`: Auto-create topics (enabled by default)
- `flushDiskType`: Disk flush strategy (`ASYNC_FLUSH` for better performance)

## Ports

- `9876`: NameServer port
- `10909`: Broker listening port (fastRemotingServer)
- `10911`: Broker port (remoting server)
- `10912`: Broker HA port
- `8080`: Dashboard Web UI

## Access

### Dashboard

Access the RocketMQ Dashboard at: <http://localhost:8080>

### Command Line Tools

Execute admin commands:

```bash
# List clusters
docker compose exec broker mqadmin clusterList -n namesrv:9876

# List topics
docker compose exec broker mqadmin topicList -n namesrv:9876

# Create topic
docker compose exec broker mqadmin updateTopic -n namesrv:9876 -c DefaultCluster -t TestTopic

# Query message
docker compose exec broker mqadmin queryMsgById -n namesrv:9876 -i <messageId>
```

## Example: Send and Receive Messages

```bash
# Send messages
docker compose exec broker sh /home/rocketmq/rocketmq/bin/tools.sh org.apache.rocketmq.example.quickstart.Producer

# Consume messages
docker compose exec broker sh /home/rocketmq/rocketmq/bin/tools.sh org.apache.rocketmq.example.quickstart.Consumer
```

## Client Connection

Configure your RocketMQ client to connect to:

- NameServer: `localhost:9876`

## Notes

- This is a single-master setup suitable for development
- For production, use a multi-master or multi-master-multi-slave setup
- Adjust JVM heap sizes in `JAVA_OPT_EXT` based on your needs
- Data is persisted in named volumes

## References

- [RocketMQ Official Documentation](https://rocketmq.apache.org/docs/quick-start/)
- [RocketMQ Docker Hub](https://hub.docker.com/r/apache/rocketmq)
- [RocketMQ Dashboard](https://github.com/apache/rocketmq-dashboard)
87 src/rocketmq/README.zh.md Normal file
@@ -0,0 +1,87 @@
# RocketMQ

Apache RocketMQ 是一个分布式消息和流平台,具有低延迟、高性能和可靠性,万亿级容量和灵活的可扩展性。

## 使用方法

```bash
docker compose up -d
```

## 组件说明

此配置包含:

- **NameServer**:管理 Broker 路由信息
- **Broker**:存储和传递消息
- **Dashboard**:监控和管理的 Web UI

## 配置说明

Broker 配置在 `broker.conf` 文件中,主要设置:

- `brokerClusterName`:集群名称
- `brokerName`:Broker 名称
- `autoCreateTopicEnable`:自动创建主题(默认启用)
- `flushDiskType`:磁盘刷新策略(`ASYNC_FLUSH` 性能更好)

## 端口说明

- `9876`:NameServer 端口
- `10909`:Broker 监听端口(fastRemotingServer)
- `10911`:Broker 端口(remoting server)
- `10912`:Broker HA 端口
- `8080`:Dashboard Web UI

## 访问方式

### Dashboard

访问 RocketMQ Dashboard:<http://localhost:8080>

### 命令行工具

执行管理命令:

```bash
# 列出集群
docker compose exec broker mqadmin clusterList -n namesrv:9876

# 列出主题
docker compose exec broker mqadmin topicList -n namesrv:9876

# 创建主题
docker compose exec broker mqadmin updateTopic -n namesrv:9876 -c DefaultCluster -t TestTopic

# 查询消息
docker compose exec broker mqadmin queryMsgById -n namesrv:9876 -i <messageId>
```

## 示例:发送和接收消息

```bash
# 发送消息
docker compose exec broker sh /home/rocketmq/rocketmq/bin/tools.sh org.apache.rocketmq.example.quickstart.Producer

# 消费消息
docker compose exec broker sh /home/rocketmq/rocketmq/bin/tools.sh org.apache.rocketmq.example.quickstart.Consumer
```

## 客户端连接

配置 RocketMQ 客户端连接到:

- NameServer:`localhost:9876`

## 注意事项

- 这是一个单主配置,适合开发环境
- 生产环境建议使用多主或多主多从配置
- 根据需要在 `JAVA_OPT_EXT` 中调整 JVM 堆大小
- 数据持久化在命名卷中

## 参考资料

- [RocketMQ 官方文档](https://rocketmq.apache.org/docs/quick-start/)
- [RocketMQ Docker Hub](https://hub.docker.com/r/apache/rocketmq)
- [RocketMQ Dashboard](https://github.com/apache/rocketmq-dashboard)
22 src/rocketmq/broker.conf Normal file
@@ -0,0 +1,22 @@
# Broker configuration
brokerClusterName=DefaultCluster
brokerName=broker-a
brokerId=0

# Network settings
brokerIP1=broker
listenPort=10911

# Storage settings
storePathRootDir=/home/rocketmq/store
storePathCommitLog=/home/rocketmq/store/commitlog

# Auto create topic
autoCreateTopicEnable=true
autoCreateSubscriptionGroup=true

# Hour of day at which expired commit logs are deleted (04 = 4:00 AM)
deleteWhen=04

# Flush settings
flushDiskType=ASYNC_FLUSH
98 src/rocketmq/docker-compose.yaml Normal file
@@ -0,0 +1,98 @@
x-default: &default
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: "3"

services:
  namesrv:
    <<: *default
    image: apache/rocketmq:${ROCKETMQ_VERSION:-5.3.1}
    command: sh mqnamesrv
    environment:
      TZ: ${TZ:-UTC}
      JAVA_OPT_EXT: "-Xms512m -Xmx512m"
    ports:
      - "${ROCKETMQ_NAMESRV_PORT_OVERRIDE:-9876}:9876"
    volumes:
      - namesrv_logs:/home/rocketmq/logs
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 1G
        reservations:
          cpus: '0.25'
          memory: 512M
    healthcheck:
      test: ["CMD-SHELL", "mqadmin clusterList -n localhost:9876 || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s

  broker:
    <<: *default
    image: apache/rocketmq:${ROCKETMQ_VERSION:-5.3.1}
    command: sh mqbroker -n namesrv:9876 -c /home/rocketmq/broker.conf
    environment:
      TZ: ${TZ:-UTC}
      JAVA_OPT_EXT: "-Xms1g -Xmx1g"
    ports:
      - "${ROCKETMQ_BROKER_PORT_OVERRIDE:-10909}:10909"
      - "${ROCKETMQ_BROKER_VIP_PORT_OVERRIDE:-10911}:10911"
      - "${ROCKETMQ_BROKER_HA_PORT_OVERRIDE:-10912}:10912"
    volumes:
      - broker_logs:/home/rocketmq/logs
      - broker_store:/home/rocketmq/store
      - ./broker.conf:/home/rocketmq/broker.conf:ro
    depends_on:
      namesrv:
        condition: service_healthy
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 2G
        reservations:
          cpus: '0.5'
          memory: 1G
    healthcheck:
      test: ["CMD-SHELL", "mqadmin clusterList -n namesrv:9876 | grep -q broker || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s

  dashboard:
    <<: *default
    image: apacherocketmq/rocketmq-dashboard:${ROCKETMQ_DASHBOARD_VERSION:-2.0.0}
    environment:
      TZ: ${TZ:-UTC}
      JAVA_OPTS: "-Xms256m -Xmx256m -Drocketmq.namesrv.addr=namesrv:9876"
    ports:
      - "${ROCKETMQ_DASHBOARD_PORT_OVERRIDE:-8080}:8080"
    depends_on:
      namesrv:
        condition: service_healthy
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 256M
    healthcheck:
      test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://localhost:8080 || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s

volumes:
  namesrv_logs:
  broker_logs:
  broker_store:
11 src/tidb/.env.example Normal file
@@ -0,0 +1,11 @@
# TiDB version (applies to PD, TiKV, and TiDB)
TIDB_VERSION=v8.5.0

# Timezone
TZ=UTC

# Port overrides (optional)
# TIDB_PD_PORT_OVERRIDE=2379
# TIDB_TIKV_PORT_OVERRIDE=20160
# TIDB_PORT_OVERRIDE=4000
# TIDB_STATUS_PORT_OVERRIDE=10080
95 src/tidb/README.md Normal file
@@ -0,0 +1,95 @@
# TiDB

TiDB is an open-source, cloud-native, distributed SQL database designed for modern applications. It is MySQL compatible and provides horizontal scalability, strong consistency, and high availability.

## Usage

```bash
docker compose up -d
```

## Components

This setup includes:

- **PD (Placement Driver)**: Manages and schedules TiKV
- **TiKV**: Distributed transactional key-value storage
- **TiDB**: Stateless SQL layer

## Ports

- `4000`: TiDB MySQL protocol port
- `10080`: TiDB status and metrics port
- `2379`: PD client port
- `20160`: TiKV port

## Access

### MySQL Client

TiDB is compatible with the MySQL protocol:

```bash
mysql -h127.0.0.1 -P4000 -uroot
```

### Example Usage

```sql
-- Create database
CREATE DATABASE test;
USE test;

-- Create table
CREATE TABLE users (
    id INT PRIMARY KEY,
    name VARCHAR(50),
    email VARCHAR(100)
);

-- Insert data
INSERT INTO users VALUES (1, 'Alice', 'alice@example.com');

-- Query data
SELECT * FROM users;
```

### Status and Metrics

Check TiDB status:

```bash
curl http://localhost:10080/status
```

## Features

- **MySQL Compatible**: Drop-in replacement for MySQL
- **Horizontal Scalability**: Scale out by adding more nodes
- **Strong Consistency**: ACID transactions across distributed data
- **High Availability**: Automatic failover with no data loss
- **Hybrid Transactional/Analytical Processing (HTAP)**: Both OLTP and OLAP workloads

## Notes

- This is a minimal single-node setup for development
- For production, deploy multiple PD, TiKV, and TiDB nodes
- Consider adding TiFlash for analytical workloads
- Monitor using Prometheus and Grafana for production deployments
- Data is persisted in named volumes

## Advanced Configuration

For production deployments, consider:

- Using separate machines for PD, TiKV, and TiDB
- Deploying at least 3 PD nodes for high availability
- Deploying at least 3 TiKV nodes for data replication
- Adding TiFlash for columnar storage and faster analytical queries
- Setting up monitoring with TiDB Dashboard, Prometheus, and Grafana

## References

- [TiDB Official Documentation](https://docs.pingcap.com/tidb/stable)
- [TiDB Quick Start](https://docs.pingcap.com/tidb/stable/quick-start-with-tidb)
- [TiDB Docker Images](https://hub.docker.com/u/pingcap)
95
src/tidb/README.zh.md
Normal file
95
src/tidb/README.zh.md
Normal file
@@ -0,0 +1,95 @@
|
||||
# TiDB
|
||||
|
||||
TiDB 是一个开源、云原生、分布式 SQL 数据库,专为现代应用程序设计。它兼容 MySQL 协议,提供水平扩展能力、强一致性和高可用性。
|
||||
|
||||
## 使用方法
|
||||
|
||||
```bash
|
||||
docker compose up -d
|
||||
```
|
||||
|
||||
## 组件说明
|
||||
|
||||
此配置包含:
|
||||
|
||||
- **PD (Placement Driver)**:管理和调度 TiKV
|
||||
- **TiKV**:分布式事务键值存储
|
||||
- **TiDB**:无状态 SQL 层
|
||||
|
||||
## 端口说明
|
||||
|
||||
- `4000`:TiDB MySQL 协议端口
|
||||
- `10080`:TiDB 状态和指标端口
|
||||
- `2379`:PD 客户端端口
|
||||
- `20160`:TiKV 端口
|
||||
|
||||
## 访问方式
|
||||
|
||||
### MySQL 客户端
|
||||
|
||||
TiDB 兼容 MySQL 协议:
|
||||
|
||||
```bash
|
||||
mysql -h127.0.0.1 -P4000 -uroot
|
||||
```
|
||||
|
||||
### 使用示例
|
||||
|
||||
```sql
|
||||
-- 创建数据库
|
||||
CREATE DATABASE test;
|
||||
USE test;
|
||||
|
||||
-- 创建表
|
||||
CREATE TABLE users (
|
||||
id INT PRIMARY KEY,
|
||||
name VARCHAR(50),
|
||||
email VARCHAR(100)
|
||||
);
|
||||
|
||||
-- 插入数据
|
||||
INSERT INTO users VALUES (1, 'Alice', 'alice@example.com');
|
||||
|
||||
-- 查询数据
|
||||
SELECT * FROM users;
|
||||
```
|
||||
|
||||
### 状态和指标
|
||||
|
||||
检查 TiDB 状态:
|
||||
|
||||
```bash
|
||||
curl http://localhost:10080/status
|
||||
```
|
||||
|
||||
## 特性
|
||||
|
||||
- **MySQL 兼容**:可作为 MySQL 的直接替代品
|
||||
- **水平扩展**:通过添加更多节点进行扩展
|
||||
- **强一致性**:分布式数据的 ACID 事务
|
||||
- **高可用性**:自动故障转移,无数据丢失
|
||||
- **混合事务/分析处理(HTAP)**:同时支持 OLTP 和 OLAP 工作负载
|
||||
|
||||
## 注意事项
|
||||
|
||||
- 这是一个最小的单节点配置,适合开发环境
|
||||
- 生产环境需要部署多个 PD、TiKV 和 TiDB 节点
|
||||
- 考虑添加 TiFlash 以支持分析工作负载
|
||||
- 生产部署使用 Prometheus 和 Grafana 进行监控
|
||||
- 数据持久化在命名卷中

## 高级配置

生产部署建议:

- 为 PD、TiKV 和 TiDB 使用独立的机器
- 部署至少 3 个 PD 节点以实现高可用
- 部署至少 3 个 TiKV 节点以实现数据复制(参见下方检查命令)
- 添加 TiFlash 以提供列式存储和更快的分析查询
- 使用 TiDB Dashboard、Prometheus 和 Grafana 设置监控
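
可以直接查询 PD 的 HTTP API 查看当前注册的 TiKV store(假设使用默认的 `2379` 端口映射):

```bash
curl http://localhost:2379/pd/api/v1/stores
```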

## 参考资料

- [TiDB 官方文档](https://docs.pingcap.com/zh/tidb/stable)
- [TiDB 快速开始](https://docs.pingcap.com/zh/tidb/stable/quick-start-with-tidb)
- [TiDB Docker 镜像](https://hub.docker.com/u/pingcap)

105
src/tidb/docker-compose.yaml
Normal file
@@ -0,0 +1,105 @@
x-default: &default
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: "3"

services:
  pd:
    <<: *default
    image: pingcap/pd:${TIDB_VERSION:-v8.5.0}
    command:
      - --name=pd
      - --client-urls=http://0.0.0.0:2379
      - --peer-urls=http://0.0.0.0:2380
      - --advertise-client-urls=http://pd:2379
      - --advertise-peer-urls=http://pd:2380
      - --data-dir=/data
    environment:
      TZ: ${TZ:-UTC}
    volumes:
      - pd_data:/data
    ports:
      - "${TIDB_PD_PORT_OVERRIDE:-2379}:2379"
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 1G
        reservations:
          cpus: '0.25'
          memory: 512M
    healthcheck:
      test: ["CMD-SHELL", "wget -q -O - http://localhost:2379/health || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s

  tikv:
    <<: *default
    image: pingcap/tikv:${TIDB_VERSION:-v8.5.0}
    command:
      - --addr=0.0.0.0:20160
      - --advertise-addr=tikv:20160
      - --pd=http://pd:2379
      - --data-dir=/data
    environment:
      TZ: ${TZ:-UTC}
    volumes:
      - tikv_data:/data
    ports:
      - "${TIDB_TIKV_PORT_OVERRIDE:-20160}:20160"
    depends_on:
      pd:
        condition: service_healthy
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 2G
        reservations:
          cpus: '0.5'
          memory: 1G
    healthcheck:
      test: ["CMD-SHELL", "wget -q -O - http://localhost:20180/status || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 60s

  tidb:
    <<: *default
    image: pingcap/tidb:${TIDB_VERSION:-v8.5.0}
    command:
      - --store=tikv
      - --path=pd:2379
      - --advertise-address=tidb
    environment:
      TZ: ${TZ:-UTC}
    ports:
      - "${TIDB_PORT_OVERRIDE:-4000}:4000"
      - "${TIDB_STATUS_PORT_OVERRIDE:-10080}:10080"
    depends_on:
      tikv:
        condition: service_healthy
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 2G
        reservations:
          cpus: '0.5'
          memory: 512M
    healthcheck:
      test: ["CMD-SHELL", "wget -q -O - http://localhost:10080/status || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s

volumes:
  pd_data:
  tikv_data:
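
# A quick sanity check after `docker compose up -d` (assumes the default
# port mappings above; tidb_version() is a TiDB-specific function):
#   docker compose ps
#   mysql -h127.0.0.1 -P4000 -uroot -e 'SELECT tidb_version();'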

11
src/tikv/.env.example
Normal file
@@ -0,0 +1,11 @@
# TiKV version (applies to PD and TiKV)
TIKV_VERSION=v8.5.0

# Timezone
TZ=UTC

# Port overrides (optional)
# TIKV_PD_PORT_OVERRIDE=2379
# TIKV_PD_PEER_PORT_OVERRIDE=2380
# TIKV_PORT_OVERRIDE=20160
# TIKV_STATUS_PORT_OVERRIDE=20180

98
src/tikv/README.md
Normal file
@@ -0,0 +1,98 @@
# TiKV

TiKV is an open-source, distributed, transactional key-value database. It provides APIs in multiple languages and is designed to complement TiDB or to run independently of it.

## Usage

```bash
docker compose up -d
```

## Components

This setup includes:

- **PD (Placement Driver)**: Manages and schedules TiKV clusters
- **TiKV**: Distributed transactional key-value storage engine

## Ports

- `2379`: PD client port
- `2380`: PD peer port
- `20160`: TiKV service port
- `20180`: TiKV status and metrics port

## Access

### Using TiKV Client

TiKV provides client libraries for multiple languages:

- [Rust Client](https://github.com/tikv/client-rust)
- [Go Client](https://github.com/tikv/client-go)
- [Java Client](https://github.com/tikv/client-java)
- [Python Client](https://github.com/tikv/client-py)

### Example (using tikv-client-rust)

```rust
use tikv_client::RawClient;

#[tokio::main]
async fn main() {
    // Connect through PD; the client discovers TiKV nodes from there
    let client = RawClient::new(vec!["127.0.0.1:2379"])
        .await
        .unwrap();

    // Put a key-value pair
    client.put("key".to_owned(), "value".to_owned()).await.unwrap();

    // Get the value back
    let value = client.get("key".to_owned()).await.unwrap();
    println!("Value: {:?}", value);
}
```
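
The example assumes a binary crate with the `tikv-client` and `tokio` dependencies declared, e.g.:

```bash
cargo add tikv-client
cargo add tokio --features full
```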

### Status and Metrics

Check TiKV status:

```bash
curl http://localhost:20180/status
```

Get metrics in Prometheus format:

```bash
curl http://localhost:20180/metrics
```
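
PD also exposes an HTTP API on its client port; for example, to list the cluster members (assuming the default `2379` mapping):

```bash
curl http://localhost:2379/pd/api/v1/members
```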

## Features

- **Distributed Transactions**: ACID transactions across multiple keys
- **Geo-Replication**: Data replication across data centers
- **Horizontal Scalability**: Scale storage by adding more TiKV nodes
- **Consistent Snapshot**: Snapshot isolation for reads
- **Cloud Native**: Designed for cloud environments

## Use Cases

- **As a key-value store**: Standalone distributed key-value database
- **With TiDB**: Storage layer for TiDB (see `tidb` service)
- **Cache backend**: Distributed cache with persistence
- **Metadata store**: Store metadata for distributed systems

## Notes

- This is a minimal single-node setup for development
- For production, deploy at least 3 TiKV nodes for data replication
- Deploy at least 3 PD nodes for high availability
- Monitor using Prometheus and Grafana
- Data is persisted in named volumes

## References

- [TiKV Official Documentation](https://tikv.org/docs/stable/)
- [TiKV Deep Dive](https://tikv.org/deep-dive/)
- [TiKV Docker Images](https://hub.docker.com/r/pingcap/tikv)
- [TiKV Clients](https://tikv.org/docs/stable/develop/clients/introduction/)

98
src/tikv/README.zh.md
Normal file
@@ -0,0 +1,98 @@
# TiKV

TiKV 是一个开源、分布式、事务型键值数据库。它提供多种语言的 API,可以独立使用或与 TiDB 配合使用。

## 使用方法

```bash
docker compose up -d
```

## 组件说明

此配置包含:

- **PD (Placement Driver)**:管理和调度 TiKV 集群
- **TiKV**:分布式事务键值存储引擎

## 端口说明

- `2379`:PD 客户端端口
- `2380`:PD 对等端口
- `20160`:TiKV 服务端口
- `20180`:TiKV 状态和指标端口

## 访问方式

### 使用 TiKV 客户端

TiKV 提供多种语言的客户端库:

- [Rust 客户端](https://github.com/tikv/client-rust)
- [Go 客户端](https://github.com/tikv/client-go)
- [Java 客户端](https://github.com/tikv/client-java)
- [Python 客户端](https://github.com/tikv/client-py)

### 示例(使用 tikv-client-rust)

```rust
use tikv_client::RawClient;

#[tokio::main]
async fn main() {
    // 通过 PD 连接集群,客户端会从 PD 发现 TiKV 节点
    let client = RawClient::new(vec!["127.0.0.1:2379"])
        .await
        .unwrap();

    // 存储键值对
    client.put("key".to_owned(), "value".to_owned()).await.unwrap();

    // 获取值
    let value = client.get("key".to_owned()).await.unwrap();
    println!("Value: {:?}", value);
}
```
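
示例假设项目已声明 `tikv-client` 和 `tokio` 依赖,例如:

```bash
cargo add tikv-client
cargo add tokio --features full
```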

### 状态和指标

检查 TiKV 状态:

```bash
curl http://localhost:20180/status
```

获取 Prometheus 格式的指标:

```bash
curl http://localhost:20180/metrics
```
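
PD 也在其客户端端口上提供 HTTP API,例如列出集群成员(假设使用默认的 `2379` 端口映射):

```bash
curl http://localhost:2379/pd/api/v1/members
```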

## 特性

- **分布式事务**:跨多个键的 ACID 事务
- **地理复制**:跨数据中心的数据复制
- **水平扩展**:通过添加更多 TiKV 节点扩展存储
- **一致性快照**:读取的快照隔离
- **云原生**:专为云环境设计

## 使用场景

- **键值存储**:独立的分布式键值数据库
- **与 TiDB 配合**:作为 TiDB 的存储层(参见 `tidb` 服务)
- **缓存后端**:具有持久化能力的分布式缓存
- **元数据存储**:为分布式系统存储元数据

## 注意事项

- 这是一个最小的单节点配置,适合开发环境
- 生产环境需要部署至少 3 个 TiKV 节点以实现数据复制
- 部署至少 3 个 PD 节点以实现高可用
- 使用 Prometheus 和 Grafana 进行监控
- 数据持久化在命名卷中

## 参考资料

- [TiKV 官方文档](https://tikv.org/docs/stable/)
- [TiKV 深入了解](https://tikv.org/deep-dive/)
- [TiKV Docker 镜像](https://hub.docker.com/r/pingcap/tikv)
- [TiKV 客户端](https://tikv.org/docs/stable/develop/clients/introduction/)

81
src/tikv/docker-compose.yaml
Normal file
@@ -0,0 +1,81 @@
x-default: &default
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: "3"

services:
  pd:
    <<: *default
    image: pingcap/pd:${TIKV_VERSION:-v8.5.0}
    command:
      - --name=pd
      - --client-urls=http://0.0.0.0:2379
      - --peer-urls=http://0.0.0.0:2380
      - --advertise-client-urls=http://pd:2379
      - --advertise-peer-urls=http://pd:2380
      - --data-dir=/data
    environment:
      TZ: ${TZ:-UTC}
    volumes:
      - pd_data:/data
    ports:
      - "${TIKV_PD_PORT_OVERRIDE:-2379}:2379"
      - "${TIKV_PD_PEER_PORT_OVERRIDE:-2380}:2380"
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 1G
        reservations:
          cpus: '0.25'
          memory: 512M
    healthcheck:
      test: ["CMD-SHELL", "wget -q -O - http://localhost:2379/health || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s

  tikv:
    <<: *default
    image: pingcap/tikv:${TIKV_VERSION:-v8.5.0}
    command:
      - --addr=0.0.0.0:20160
      - --advertise-addr=tikv:20160
      - --status-addr=0.0.0.0:20180
      - --pd=http://pd:2379
      - --data-dir=/data
      - --log-file=/logs/tikv.log
    environment:
      TZ: ${TZ:-UTC}
    volumes:
      - tikv_data:/data
      - tikv_logs:/logs
    ports:
      - "${TIKV_PORT_OVERRIDE:-20160}:20160"
      - "${TIKV_STATUS_PORT_OVERRIDE:-20180}:20180"
    depends_on:
      pd:
        condition: service_healthy
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 4G
        reservations:
          cpus: '0.5'
          memory: 2G
    healthcheck:
      test: ["CMD-SHELL", "wget -q -O - http://localhost:20180/status || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 60s

volumes:
  pd_data:
  tikv_data:
  tikv_logs:
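
# A quick sanity check after `docker compose up -d` (assumes the default
# port mappings above):
#   docker compose ps
#   curl http://localhost:20180/status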