feat: add FalkorDB, LMDeploy, and Pogocache with configuration files and documentation
src/falkordb/.env.example (new file)
@@ -0,0 +1,18 @@
# FalkorDB Version
# Latest stable version can be found at https://hub.docker.com/r/falkordb/falkordb/tags
FALKORDB_VERSION=v4.14.11

# Port configuration
# Port for Redis protocol (Graph Database)
FALKORDB_PORT_OVERRIDE=6379
# Port for FalkorDB Browser UI
FALKORDB_BROWSER_PORT_OVERRIDE=3000

# Resource limits
FALKORDB_CPU_LIMIT=1.00
FALKORDB_MEMORY_LIMIT=2G
FALKORDB_CPU_RESERVATION=0.25
FALKORDB_MEMORY_RESERVATION=512M

# Timezone
TZ=UTC
src/falkordb/README.md (new file)
@@ -0,0 +1,31 @@
# FalkorDB

[FalkorDB](https://falkordb.com/) is a low-latency property graph database that leverages sparse matrices and linear algebra for high-performance graph queries. It is a community-driven fork of RedisGraph, optimized for large-scale knowledge graphs and AI-powered applications.

## Getting Started

1. Copy `.env.example` to `.env` and adjust the configuration as needed.
2. Start the service:

   ```bash
   docker compose up -d
   ```

3. Access the FalkorDB Browser at `http://localhost:3000`.
4. Connect to the database using `redis-cli` or any Redis-compatible client on port `6379`.
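As a quick smoke test of the Redis endpoint, you can create and query a tiny graph with FalkorDB's `GRAPH.QUERY` command (a minimal sketch assuming the default port above; the graph key `demo` is arbitrary):

```bash
# Create two nodes and an edge, then read them back ("demo" is just an example key)
redis-cli -p 6379 GRAPH.QUERY demo "CREATE (:Person {name:'Ada'})-[:KNOWS]->(:Person {name:'Grace'})"
redis-cli -p 6379 GRAPH.QUERY demo "MATCH (a)-[:KNOWS]->(b) RETURN a.name, b.name"
```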
## Environment Variables

| Variable                         | Description                  | Default    |
| -------------------------------- | ---------------------------- | ---------- |
| `FALKORDB_VERSION`               | FalkorDB image version       | `v4.14.11` |
| `FALKORDB_PORT_OVERRIDE`         | Host port for Redis protocol | `6379`     |
| `FALKORDB_BROWSER_PORT_OVERRIDE` | Host port for Browser UI     | `3000`     |
| `FALKORDB_CPU_LIMIT`             | Maximum CPU allocation       | `1.00`     |
| `FALKORDB_MEMORY_LIMIT`          | Maximum memory               | `2G`       |

## Resources

- [Official Documentation](https://docs.falkordb.com/)
- [GitHub Repository](https://github.com/FalkorDB/FalkorDB)
- [Docker Hub](https://hub.docker.com/r/falkordb/falkordb)
src/falkordb/README.zh.md (new file)
@@ -0,0 +1,31 @@
# FalkorDB

[FalkorDB](https://falkordb.com/) is a low-latency property graph database that uses sparse matrices and linear algebra for high-performance graph queries. It is a community-driven fork of RedisGraph, optimized for large-scale knowledge graphs and AI-powered applications.

## Quick Start

1. Copy `.env.example` to `.env` and adjust the configuration as needed.
2. Start the service:

   ```bash
   docker compose up -d
   ```

3. Open the FalkorDB Browser UI at `http://localhost:3000`.
4. Connect to the database on port `6379` with `redis-cli` or any Redis-compatible client.

## Environment Variables

| Variable                         | Description                  | Default    |
| -------------------------------- | ---------------------------- | ---------- |
| `FALKORDB_VERSION`               | FalkorDB image version       | `v4.14.11` |
| `FALKORDB_PORT_OVERRIDE`         | Host port for Redis protocol | `6379`     |
| `FALKORDB_BROWSER_PORT_OVERRIDE` | Host port for Browser UI     | `3000`     |
| `FALKORDB_CPU_LIMIT`             | Maximum CPU usage            | `1.00`     |
| `FALKORDB_MEMORY_LIMIT`          | Maximum memory limit         | `2G`       |

## Resources

- [Official Documentation](https://docs.falkordb.com/)
- [GitHub Repository](https://github.com/FalkorDB/FalkorDB)
- [Docker Hub](https://hub.docker.com/r/falkordb/falkordb)
src/falkordb/docker-compose.yaml (new file)
@@ -0,0 +1,36 @@
x-defaults: &defaults
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: "3"

services:
  falkordb:
    <<: *defaults
    image: ${GLOBAL_REGISTRY:-}falkordb/falkordb:${FALKORDB_VERSION:-v4.14.11}
    ports:
      - "${FALKORDB_PORT_OVERRIDE:-6379}:6379"
      - "${FALKORDB_BROWSER_PORT_OVERRIDE:-3000}:3000"
    volumes:
      - falkordb_data:/data
    environment:
      - TZ=${TZ:-UTC}
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
    deploy:
      resources:
        limits:
          cpus: ${FALKORDB_CPU_LIMIT:-1.00}
          memory: ${FALKORDB_MEMORY_LIMIT:-2G}
        reservations:
          cpus: ${FALKORDB_CPU_RESERVATION:-0.25}
          memory: ${FALKORDB_MEMORY_RESERVATION:-512M}

volumes:
  falkordb_data:
src/lmdeploy/.env.example (new file)
@@ -0,0 +1,27 @@
# LMDeploy Version
# Find more tags at: https://hub.docker.com/r/openmmlab/lmdeploy/tags
LMDEPLOY_VERSION=v0.11.1-cu12.8

# Host port override
LMDEPLOY_PORT_OVERRIDE=23333

# Model path or HuggingFace model ID
# Examples:
# - internlm/internlm2-chat-1_8b
# - Qwen/Qwen2.5-7B-Instruct
LMDEPLOY_MODEL=internlm/internlm2-chat-1_8b

# HuggingFace token for private models
HF_TOKEN=

# Resource limits
LMDEPLOY_CPU_LIMIT=4.0
LMDEPLOY_MEMORY_LIMIT=8G
LMDEPLOY_CPU_RESERVATION=2.0
LMDEPLOY_MEMORY_RESERVATION=4G

# Shared memory size (required for some models)
LMDEPLOY_SHM_SIZE=4g

# Timezone
TZ=UTC
src/lmdeploy/README.md (new file)
@@ -0,0 +1,29 @@
# LMDeploy Docker Compose

[LMDeploy](https://github.com/InternLM/lmdeploy) is a toolkit for compressing, deploying, and serving LLMs.

## Quick Start

1. (Optional) Configure the model and port in `.env`.
2. Start the service:

   ```bash
   docker compose up -d
   ```

3. Access the OpenAI-compatible API at `http://localhost:23333/v1`.
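As a minimal sketch of calling the API, assuming the default model from `.env.example` is being served and that the server registers it under its HuggingFace ID:

```bash
# Chat completion request against the OpenAI-compatible endpoint
curl http://localhost:23333/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "internlm/internlm2-chat-1_8b",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}]
      }'
```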
## Configuration

| Environment Variable     | Default                        | Description                          |
| ------------------------ | ------------------------------ | ------------------------------------ |
| `LMDEPLOY_VERSION`       | `v0.11.1-cu12.8`               | LMDeploy image version               |
| `LMDEPLOY_PORT_OVERRIDE` | `23333`                        | Host port for the API server         |
| `LMDEPLOY_MODEL`         | `internlm/internlm2-chat-1_8b` | HuggingFace model ID or local path   |
| `HF_TOKEN`               |                                | HuggingFace token for private models |
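For example, to serve a different model, override the variables in `.env` before starting the stack (a sketch; the Qwen ID below is taken from the `.env.example` comments, and larger models need more GPU memory and a longer warm-up):

```bash
# .env overrides (sketch): swap the served model
LMDEPLOY_MODEL=Qwen/Qwen2.5-7B-Instruct
HF_TOKEN=          # only needed for gated or private models
```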
## Monitoring Health

The service includes a health check that verifies that the OpenAI-compatible `/v1/models` endpoint is responding.

## GPU Support

By default, this configuration reserves 1 NVIDIA GPU. Ensure you have the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html) installed on your host.
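A quick way to confirm that Docker can see the GPU before starting the service (a sketch; the CUDA image tag is only an example, any CUDA base image you already have locally works):

```bash
# Should print the nvidia-smi table; an error here usually means the toolkit is missing or misconfigured
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```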
src/lmdeploy/README.zh.md (new file)
@@ -0,0 +1,29 @@
# LMDeploy Docker Compose

[LMDeploy](https://github.com/InternLM/lmdeploy) is a toolkit for compressing, deploying, and serving large language models (LLMs).

## Quick Start

1. (Optional) Configure the model and port in `.env`.
2. Start the service:

   ```bash
   docker compose up -d
   ```

3. Access the OpenAI-compatible API at `http://localhost:23333/v1`.

## Configuration

| Environment Variable     | Default                        | Description                          |
| ------------------------ | ------------------------------ | ------------------------------------ |
| `LMDEPLOY_VERSION`       | `v0.11.1-cu12.8`               | LMDeploy image version               |
| `LMDEPLOY_PORT_OVERRIDE` | `23333`                        | Host port for the API server         |
| `LMDEPLOY_MODEL`         | `internlm/internlm2-chat-1_8b` | HuggingFace model ID or local path   |
| `HF_TOKEN`               |                                | HuggingFace token for private models |

## Health Check

The configuration includes a health check that verifies that the OpenAI `/v1/models` endpoint is responding.

## GPU Support

By default, this configuration reserves 1 NVIDIA GPU. Make sure the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html) is installed on your host.
src/lmdeploy/docker-compose.yaml (new file)
@@ -0,0 +1,50 @@
x-defaults: &defaults
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: "3"

services:
  lmdeploy:
    <<: *defaults
    image: ${GLOBAL_REGISTRY:-}openmmlab/lmdeploy:${LMDEPLOY_VERSION:-v0.11.1-cu12.8}
    ports:
      - "${LMDEPLOY_PORT_OVERRIDE:-23333}:23333"
    volumes:
      - lmdeploy_data:/root/.cache
    environment:
      - TZ=${TZ:-UTC}
      - HF_TOKEN=${HF_TOKEN:-}
    command:
      - lmdeploy
      - serve
      - api_server
      - ${LMDEPLOY_MODEL:-internlm/internlm2-chat-1_8b}
      - --server-name
      - "0.0.0.0"
      - --server-port
      - "23333"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:23333/v1/models"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s
    deploy:
      resources:
        limits:
          cpus: ${LMDEPLOY_CPU_LIMIT:-4.0}
          memory: ${LMDEPLOY_MEMORY_LIMIT:-8G}
        reservations:
          cpus: ${LMDEPLOY_CPU_RESERVATION:-2.0}
          memory: ${LMDEPLOY_MEMORY_RESERVATION:-4G}
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    shm_size: ${LMDEPLOY_SHM_SIZE:-4g}

volumes:
  lmdeploy_data:
src/pogocache/.env.example (new file)
@@ -0,0 +1,15 @@
# Pogocache Version
POGOCACHE_VERSION=1.3.1

# Host port override
POGOCACHE_PORT_OVERRIDE=9401

# Resource limits
POGOCACHE_CPU_LIMIT=0.50
POGOCACHE_MEMORY_LIMIT=512M
POGOCACHE_CPU_RESERVATION=0.10
POGOCACHE_MEMORY_RESERVATION=128M

# Extra arguments for pogocache
# Example: --auth mypassword --threads 4
POGOCACHE_EXTRA_ARGS=
src/pogocache/README.md (new file)
@@ -0,0 +1,35 @@
# Pogocache

[Pogocache](https://github.com/tidwall/pogocache) is fast caching software built from scratch with a focus on low latency and CPU efficiency. It is a high-performance, multi-protocol alternative to Redis.

## Features

- **Fast**: Faster than Memcached, Valkey, Redis, Dragonfly, and Garnet.
- **Multi-protocol**: Supports the Redis RESP, Memcached, PostgreSQL wire, and HTTP protocols.
- **Persistence**: Supports AOF-style persistence.
- **Resource Efficient**: Low CPU and memory overhead.

## Deployment

```bash
docker compose up -d
```

## Configuration

| Variable                  | Default | Description                                   |
| ------------------------- | ------- | --------------------------------------------- |
| `POGOCACHE_VERSION`       | `1.3.1` | Pogocache image version                       |
| `POGOCACHE_PORT_OVERRIDE` | `9401`  | Host port for Pogocache                       |
| `POGOCACHE_EXTRA_ARGS`    |         | Additional CLI arguments (e.g. `--auth pass`) |
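For example, a hypothetical `.env` that passes extra flags through to the process (the flags are taken from the comment in `.env.example`):

```bash
# .env (sketch): require a password and use four worker threads
POGOCACHE_EXTRA_ARGS=--auth mypassword --threads 4
```

Clients would then authenticate per protocol, e.g. `redis-cli -p 9401 -a mypassword`, assuming Pogocache honors the standard RESP `AUTH` handshake.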
## Accessing Pogocache

- **Redis**: `redis-cli -p 9401`
- **Postgres**: `psql -h localhost -p 9401`
- **HTTP**: `curl http://localhost:9401/key`
- **Memcached**: `telnet localhost 9401`
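As a cross-protocol sketch (assuming the default port and no authentication), a key written over the Redis protocol should be readable over HTTP:

```bash
# Write with redis-cli, then read the same key back over HTTP
redis-cli -p 9401 SET greeting hello
curl http://localhost:9401/greeting
```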
## Persistence

By default, data is persisted to the named volume `pogocache_data` at `/data/pogocache.db`.
src/pogocache/README.zh.md (new file)
@@ -0,0 +1,35 @@
# Pogocache

[Pogocache](https://github.com/tidwall/pogocache) is fast caching software built from scratch with a focus on low latency and CPU efficiency. It is a high-performance, multi-protocol alternative to Redis.

## Features

- **Fast**: Faster than Memcached, Valkey, Redis, Dragonfly, and Garnet.
- **Multi-protocol**: Supports the Redis RESP, Memcached, PostgreSQL wire, and HTTP protocols.
- **Persistence**: Supports AOF-style persistence.
- **Resource Efficient**: Very low CPU and memory overhead.

## Deployment

```bash
docker compose up -d
```

## Configuration

| Variable                  | Default | Description                                   |
| ------------------------- | ------- | --------------------------------------------- |
| `POGOCACHE_VERSION`       | `1.3.1` | Pogocache image version                       |
| `POGOCACHE_PORT_OVERRIDE` | `9401`  | Host port                                     |
| `POGOCACHE_EXTRA_ARGS`    |         | Additional CLI arguments (e.g. `--auth pass`) |

## Access

- **Redis**: `redis-cli -p 9401`
- **Postgres**: `psql -h localhost -p 9401`
- **HTTP**: `curl http://localhost:9401/key`
- **Memcached**: `telnet localhost 9401`

## Persistence

By default, data is persisted to `/data/pogocache.db` in the named volume `pogocache_data`.
src/pogocache/docker-compose.yaml (new file)
@@ -0,0 +1,42 @@
# Docker Compose for Pogocache
# Pogocache is fast caching software built from scratch with a focus on low latency and cpu efficiency.
# See: https://github.com/tidwall/pogocache

x-defaults: &defaults
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 10m
      max-file: "3"

services:
  pogocache:
    <<: *defaults
    image: ${GLOBAL_REGISTRY:-}pogocache/pogocache:${POGOCACHE_VERSION:-1.3.1}
    ports:
      - "${POGOCACHE_PORT_OVERRIDE:-9401}:9401"
    environment:
      - TZ=${TZ:-UTC}
    volumes:
      - pogocache_data:/data
    command: >
      ${POGOCACHE_EXTRA_ARGS:-}
      --persist /data/pogocache.db
    healthcheck:
      test: ["CMD-SHELL", "nc -z localhost 9401 || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 5s
    deploy:
      resources:
        limits:
          cpus: ${POGOCACHE_CPU_LIMIT:-0.50}
          memory: ${POGOCACHE_MEMORY_LIMIT:-512M}
        reservations:
          cpus: ${POGOCACHE_CPU_RESERVATION:-0.10}
          memory: ${POGOCACHE_MEMORY_RESERVATION:-128M}

volumes:
  pogocache_data: