feat: add mcp-servers/**

This commit is contained in:
Sun-ZhenXing
2025-10-23 09:08:07 +08:00
parent ece59b42bf
commit f603ed5db9
57 changed files with 3061 additions and 95 deletions

View File

@@ -51,4 +51,8 @@ Reference template: `.compose-template.yaml` in the repo root.
If you want to find image tags, try fetching a URL like `https://hub.docker.com/v2/repositories/library/nginx/tags?page_size=1&ordering=last_updated`.
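The tag lookup above can be scripted; a minimal sketch that builds the same API URL (the `library/` namespace fallback for official images is an assumption about how you name images, not something this file specifies):

```python
def tags_url(image: str, page_size: int = 1) -> str:
    """Build the Docker Hub v2 tags URL; official images live under the library/ namespace."""
    repo = image if "/" in image else f"library/{image}"
    return (
        f"https://hub.docker.com/v2/repositories/{repo}/tags"
        f"?page_size={page_size}&ordering=last_updated"
    )

# Fetching the result needs network access, e.g.:
# import json, urllib.request
# latest = json.load(urllib.request.urlopen(tags_url("nginx")))["results"][0]["name"]
print(tags_url("nginx"))
```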
注意:所有中文的文档都使用中文的标点符号,如 “,”、“()” 等,中文和英文之间要留有空格。
Every service must have `.env.example`.
After update all of the services, please update `/README.md` & `/README.zh.md` to reflect the changes.
**注意**:所有中文的文档都使用中文的标点符号,如 “,”、“()” 等,中文和英文之间要留有空格。对于 Docker Compose 文件和 `.env.example` 文件中的注释部分,请使用英语而不是中文。请为每个服务提供英文说明 `README.md` 和中文说明 `README.zh.md`。

View File

@@ -14,6 +14,7 @@ Compose Anything helps users quickly deploy various services by providing a set
| [Bytebot](./src/bytebot) | edge |
| [Clash](./src/clash) | 1.18.0 |
| [Dify](./src/dify) | 0.18.2 |
| [DNSMasq](./src/dnsmasq) | 2.91 |
| [Docker Registry](./src/docker-registry) | 3.0.0 |
| [Elasticsearch](./src/elasticsearch) | 8.16.1 |
| [etcd](./src/etcd) | 3.6.0 |
@@ -34,6 +35,7 @@ Compose Anything helps users quickly deploy various services by providing a set
| [Kodbox](./src/kodbox) | 1.62 |
| [Kong](./src/kong) | 3.8.0 |
| [Langfuse](./src/langfuse) | 3.115.0 |
| [LiteLLM](./src/litellm) | main-stable |
| [Logstash](./src/logstash) | 8.16.1 |
| [Milvus Standalone Embed](./src/milvus-standalone-embed) | 2.6.2 |
| [Milvus Standalone](./src/milvus-standalone) | 2.6.2 |
@@ -55,6 +57,7 @@ Compose Anything helps users quickly deploy various services by providing a set
| [OpenCoze](./src/opencoze) | See Docs |
| [OpenCut](./src/opencut) | latest |
| [OpenList](./src/openlist) | latest |
| [PocketBase](./src/pocketbase) | 0.30.0 |
| [Portainer](./src/portainer) | 2.27.3-alpine |
| [PostgreSQL](./src/postgres) | 17.6 |
| [Prometheus](./src/prometheus) | 3.5.0 |
@@ -62,6 +65,8 @@ Compose Anything helps users quickly deploy various services by providing a set
| [RabbitMQ](./src/rabbitmq) | 4.1.4 |
| [Redis Cluster](./src/redis-cluster) | 8.2.1 |
| [Redis](./src/redis) | 8.2.1 |
| [SearXNG](./src/searxng) | 2025.1.20-1ce14ef99 |
| [Verdaccio](./src/verdaccio) | 6.1.2 |
| [vLLM](./src/vllm) | v0.8.0 |
| [ZooKeeper](./src/zookeeper) | 3.9.3 |

View File

@@ -14,6 +14,7 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
| [Bytebot](./src/bytebot) | edge |
| [Clash](./src/clash) | 1.18.0 |
| [Dify](./src/dify) | 0.18.2 |
| [DNSMasq](./src/dnsmasq) | 2.91 |
| [Docker Registry](./src/docker-registry) | 3.0.0 |
| [Elasticsearch](./src/elasticsearch) | 8.16.1 |
| [etcd](./src/etcd) | 3.6.0 |
@@ -34,6 +35,7 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
| [Kodbox](./src/kodbox) | 1.62 |
| [Kong](./src/kong) | 3.8.0 |
| [Langfuse](./src/langfuse) | 3.115.0 |
| [LiteLLM](./src/litellm) | main-stable |
| [Logstash](./src/logstash) | 8.16.1 |
| [Milvus Standalone Embed](./src/milvus-standalone-embed) | 2.6.2 |
| [Milvus Standalone](./src/milvus-standalone) | 2.6.2 |
@@ -55,6 +57,7 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
| [OpenCoze](./src/opencoze) | See Docs |
| [OpenCut](./src/opencut) | latest |
| [OpenList](./src/openlist) | latest |
| [PocketBase](./src/pocketbase) | 0.30.0 |
| [Portainer](./src/portainer) | 2.27.3-alpine |
| [PostgreSQL](./src/postgres) | 17.6 |
| [Prometheus](./src/prometheus) | 3.5.0 |
@@ -62,6 +65,8 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
| [RabbitMQ](./src/rabbitmq) | 4.1.4 |
| [Redis Cluster](./src/redis-cluster) | 8.2.1 |
| [Redis](./src/redis) | 8.2.1 |
| [SearXNG](./src/searxng) | 2025.1.20-1ce14ef99 |
| [Verdaccio](./src/verdaccio) | 6.1.2 |
| [vLLM](./src/vllm) | v0.8.0 |
| [ZooKeeper](./src/zookeeper) | 3.9.3 |

View File

@@ -0,0 +1,17 @@
# Docker image version
DOCKERHUB_MCP_VERSION=latest
# Host port override
DOCKERHUB_MCP_PORT_OVERRIDE=8000
# Docker Hub username (optional, for authentication)
DOCKERHUB_USERNAME=
# Docker Hub password (optional, for authentication)
DOCKERHUB_PASSWORD=
# Docker Hub access token (recommended for authentication)
DOCKERHUB_TOKEN=
# Timezone
TZ=UTC

View File

@@ -0,0 +1,185 @@
# Docker Hub MCP Server
Docker Hub MCP Server provides integration with Docker Hub through the Model Context Protocol, enabling image search, query, and management capabilities.
## Features
- 🔍 **Image Search** - Search for images on Docker Hub
- 📊 **Image Info** - Get detailed image information
- 🏷️ **Tag Management** - View image tags
- 📈 **Statistics** - View download counts and stars
- 👤 **User Management** - Manage Docker Hub account
- 📝 **Repository Info** - Repository information queries
## Environment Variables
| Variable | Default | Description |
| ----------------------------- | -------- | ---------------------------------------------- |
| `DOCKERHUB_MCP_VERSION` | `latest` | Docker image version |
| `DOCKERHUB_MCP_PORT_OVERRIDE` | `8000` | Service port |
| `DOCKERHUB_USERNAME` | - | Docker Hub username (optional, for auth) |
| `DOCKERHUB_PASSWORD` | - | Docker Hub password (optional, for auth) |
| `DOCKERHUB_TOKEN` | - | Docker Hub access token (recommended for auth) |
| `TZ` | `UTC` | Timezone |
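The `*_PORT_OVERRIDE` variables feed Docker Compose's `${VAR:-default}` interpolation used in the compose file; a small sketch of that fallback semantics (an unset *or empty* variable falls back to the default):

```python
import os

def compose_default(var: str, default: str) -> str:
    """Mimic Compose's ${VAR:-default}: unset or empty falls back to the default."""
    return os.environ.get(var) or default

os.environ["DOCKERHUB_MCP_PORT_OVERRIDE"] = ""
print(compose_default("DOCKERHUB_MCP_PORT_OVERRIDE", "8000"))  # falls back to 8000
```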
## Authentication Methods
The service supports three authentication methods:
### 1. No Authentication (Public Access)
Only public images and information can be accessed.
### 2. Username & Password Authentication
```env
DOCKERHUB_USERNAME=your_username
DOCKERHUB_PASSWORD=your_password
```
### 3. Access Token Authentication (Recommended)
```env
DOCKERHUB_TOKEN=your_access_token
```
## Quick Start
### 1. Configure Environment
Create a `.env` file:
#### No Authentication Mode (Public Access Only)
```env
DOCKERHUB_MCP_VERSION=latest
DOCKERHUB_MCP_PORT_OVERRIDE=8000
TZ=Asia/Shanghai
```
#### Token Authentication Mode (Recommended)
```env
DOCKERHUB_MCP_VERSION=latest
DOCKERHUB_MCP_PORT_OVERRIDE=8000
DOCKERHUB_TOKEN=dckr_pat_your_token_here
TZ=Asia/Shanghai
```
### 2. Get Docker Hub Access Token
1. Login to [Docker Hub](https://hub.docker.com/)
2. Click avatar → **Account Settings**
3. Navigate to **Security** → **Access Tokens**
4. Click **New Access Token**
5. Set permissions (read-only recommended)
6. Generate and copy the token
### 3. Start Service
```bash
docker compose up -d
```
### 4. Verify Service
```bash
curl http://localhost:8000/health
```
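How the server is registered depends on your MCP client; as a hedged sketch only, a client with an HTTP transport might use an entry like the following (the `mcpServers` key and the `/mcp` path are assumptions about the client's config schema, not confirmed by this repository):

```json
{
  "mcpServers": {
    "dockerhub": {
      "url": "http://localhost:8000/mcp"
    }
  }
}
```

Consult your client's documentation for the exact field names and transport it expects.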
## Resource Requirements
- Minimum memory: 128MB
- Recommended memory: 512MB
- CPU: 0.25-1.0 cores
## Common Use Cases
1. **Image Search** - Search for Docker images suitable for projects
2. **Version Query** - View all available tags for an image
3. **Dependency Analysis** - Analyze base images and dependencies
4. **Security Check** - View image security scan reports
5. **Download Statistics** - Check image popularity
## API Features
The MCP server provides the following main features:
- ✅ Search public and private images
- ✅ Get image tag lists
- ✅ View detailed image information
- ✅ Query repository statistics
- ✅ Check for image updates
- ✅ View Dockerfiles
## Permission Types
### Read-Only Token Permissions
Recommended for most query operations:
- ✅ Search images
- ✅ View image information
- ✅ Get tag lists
- ❌ Push images
- ❌ Delete images
### Read-Write Token Permissions
For management operations:
- ✅ All read-only operations
- ✅ Push images
- ✅ Delete images
- ✅ Update repository settings
## Security Recommendations
⚠️ **Important**:
1. **Prefer Access Tokens**: More secure than passwords
2. **Least Privilege**: Only grant necessary permissions
3. **Regular Rotation**: Update access tokens regularly
4. **Protect Environment Variables**: Don't commit `.env` to version control
5. **Monitor Access**: Regularly check token usage
6. **Use Read-Only Tokens**: Unless write access is needed
## Rate Limits
Docker Hub has API rate limits:
- **Unauthenticated**: 100 requests/6 hours
- **Free Account**: 200 requests/6 hours
- **Paid Account**: Higher rate limits
Authentication is recommended for higher rate limits.
## Example Queries
Through an AI client you can run queries such as:
1. "Search for images related to nginx"
2. "List all tags for the python:3.11 image"
3. "Get detailed information about the redis:alpine image"
4. "Find the most popular PostgreSQL images"
5. "Compare the sizes of different Node.js images"
## References
- [Docker Hub Official Site](https://hub.docker.com/)
- [Docker Hub API Documentation](https://docs.docker.com/docker-hub/api/latest/)
- [Docker Hub Access Tokens](https://docs.docker.com/docker-hub/access-tokens/)
- [MCP Documentation](https://modelcontextprotocol.io/)
- [Docker Hub - mcp/dockerhub](https://hub.docker.com/r/mcp/dockerhub)
## License
MIT License

View File

@@ -0,0 +1,165 @@
# Docker Hub MCP Server
Docker Hub MCP Server 提供通过模型上下文协议(MCP)与 Docker Hub 集成的能力,实现镜像搜索、查询和管理功能。
## 功能特性
- 🔍 **镜像搜索** - 在 Docker Hub 上搜索镜像
- 📊 **镜像信息** - 获取详细的镜像信息
- 🏷️ **标签管理** - 查看镜像标签
- 📈 **统计信息** - 查看下载量和星标数
- 👤 **用户管理** - 管理 Docker Hub 账户
- 📝 **仓库信息** - 仓库信息查询
## 环境变量
| 变量 | 默认值 | 说明 |
| ----------------------------- | -------- | ----------------------------------- |
| `DOCKERHUB_MCP_VERSION` | `latest` | Docker 镜像版本 |
| `DOCKERHUB_MCP_PORT_OVERRIDE` | `8000` | 服务端口 |
| `DOCKERHUB_USERNAME` | - | Docker Hub 用户名(可选,用于认证) |
| `DOCKERHUB_PASSWORD` | - | Docker Hub 密码(可选,用于认证) |
| `DOCKERHUB_TOKEN` | - | Docker Hub 访问令牌(推荐用于认证) |
| `TZ` | `UTC` | 时区 |
## 认证方式
该服务支持三种认证方式:
### 1. 无认证(公开访问)
仅能访问公开镜像和信息。
### 2. 用户名和密码认证
```env
DOCKERHUB_USERNAME=your_username
DOCKERHUB_PASSWORD=your_password
```
### 3. 访问令牌认证(推荐)
```env
DOCKERHUB_TOKEN=your_access_token
```
## 快速开始
### 1. 配置环境
创建 `.env` 文件:
#### 无认证模式(仅公开访问)
```env
DOCKERHUB_MCP_VERSION=latest
DOCKERHUB_MCP_PORT_OVERRIDE=8000
TZ=Asia/Shanghai
```
#### 令牌认证模式(推荐)
```env
DOCKERHUB_MCP_VERSION=latest
DOCKERHUB_MCP_PORT_OVERRIDE=8000
DOCKERHUB_TOKEN=dckr_pat_your_token_here
TZ=Asia/Shanghai
```
### 2. 获取 Docker Hub 访问令牌
1. 登录 [Docker Hub](https://hub.docker.com/)
2. 点击头像 → **Account Settings**
3. 导航到 **Security** → **Access Tokens**
4. 点击 **New Access Token**
5. 设置权限(推荐只读权限)
6. 生成并复制令牌
### 3. 启动服务
```bash
docker compose up -d
```
### 4. 验证服务
```bash
curl http://localhost:8000/health
```
## 资源需求
- 最小内存:128MB
- 推荐内存:512MB
- CPU:0.25-1.0 核心
## 常见使用场景
1. **镜像搜索** - 搜索适合项目的 Docker 镜像
2. **版本查询** - 查看镜像的所有可用标签
3. **依赖分析** - 分析镜像的基础镜像和依赖
4. **安全检查** - 查看镜像的安全扫描报告
5. **下载统计** - 查看镜像的受欢迎程度
## API 功能
该 MCP 服务器提供以下主要功能:
- ✅ 搜索公开和私有镜像
- ✅ 获取镜像标签列表
- ✅ 查看镜像详细信息
- ✅ 查询仓库统计信息
- ✅ 检查镜像更新
- ✅ 查看 Dockerfiles
## 权限类型
### 只读令牌权限
推荐用于大多数查询操作:
- ✅ 搜索镜像
- ✅ 查看镜像信息
- ✅ 获取标签列表
- ❌ 推送镜像
- ❌ 删除镜像
### 读写令牌权限
用于管理操作:
- ✅ 所有只读操作
- ✅ 推送镜像
- ✅ 删除镜像
- ✅ 更新仓库设置
## 安全建议
⚠️ **重要**:
1. **优先使用访问令牌**:比密码更安全
2. **最小权限原则**:只授予必要的权限
3. **定期轮换**:定期更新访问令牌
4. **保护环境变量**:不要将 `.env` 提交到版本控制
5. **监控访问**:定期检查令牌使用情况
6. **使用只读令牌**:除非需要写权限
## 速率限制
Docker Hub 有 API 速率限制:
- **未认证**:100 次请求 / 6 小时
- **免费账户**:200 次请求 / 6 小时
- **付费账户**:更高的速率限制
建议使用认证以获得更高的速率限制。
## 参考链接
- [Docker Hub 官方网站](https://hub.docker.com/)
- [Docker Hub API 文档](https://docs.docker.com/docker-hub/api/latest/)
- [MCP 文档](https://modelcontextprotocol.io/)
## 许可证
MIT License

View File

@@ -0,0 +1,34 @@
x-default: &default
restart: unless-stopped
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
dockerhub:
<<: *default
image: mcp/dockerhub:${DOCKERHUB_MCP_VERSION:-latest}
environment:
- DOCKERHUB_USERNAME=${DOCKERHUB_USERNAME}
- DOCKERHUB_PASSWORD=${DOCKERHUB_PASSWORD}
- DOCKERHUB_TOKEN=${DOCKERHUB_TOKEN}
- MCP_HOST=0.0.0.0
- TZ=${TZ:-UTC}
ports:
- "${DOCKERHUB_MCP_PORT_OVERRIDE:-8000}:8000"
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "http://localhost:8000/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
deploy:
resources:
limits:
cpus: '1.00'
memory: 512M
reservations:
cpus: '0.25'
memory: 128M

View File

@@ -0,0 +1,32 @@
# MCP Grafana service version
MCP_GRAFANA_VERSION=latest
# Grafana version
GRAFANA_VERSION=latest
# MCP Grafana service port
MCP_GRAFANA_PORT_OVERRIDE=8000
# Grafana port
GRAFANA_PORT_OVERRIDE=3000
# Grafana URL
GRAFANA_URL=http://grafana:3000
# Grafana API key (required)
GRAFANA_API_KEY=
# Grafana admin username
GRAFANA_ADMIN_USER=admin
# Grafana admin password
GRAFANA_ADMIN_PASSWORD=admin
# Grafana plugins to install (comma-separated)
GRAFANA_INSTALL_PLUGINS=
# Grafana root URL
GRAFANA_ROOT_URL=http://localhost:3000
# Timezone
TZ=UTC

View File

@@ -0,0 +1,118 @@
# Grafana MCP Server
Grafana MCP Server provides integration with Grafana monitoring and visualization platform through the Model Context Protocol.
## Features
- 📊 **Dashboard Management** - Create and manage dashboards
- 📈 **Query Datasources** - Query data sources
- 🔍 **Search Dashboards** - Search dashboards
- 🚨 **Incident Investigation** - Investigate incidents
- 📉 **Metrics Analysis** - Analyze metrics
- 🎨 **Visualization** - Data visualization
## Architecture
The service consists of two containers:
- **mcp-grafana**: MCP server providing AI interaction interface with Grafana
- **grafana**: Grafana instance
## Environment Variables
| Variable | Default | Description |
| --------------------------- | ----------------------- | ---------------------------------------- |
| `MCP_GRAFANA_VERSION` | `latest` | MCP Grafana image version |
| `GRAFANA_VERSION` | `latest` | Grafana version |
| `MCP_GRAFANA_PORT_OVERRIDE` | `8000` | MCP service port |
| `GRAFANA_PORT_OVERRIDE` | `3000` | Grafana port |
| `GRAFANA_URL` | `http://grafana:3000` | Grafana instance URL |
| `GRAFANA_API_KEY` | - | Grafana API key (required) |
| `GRAFANA_ADMIN_USER` | `admin` | Admin username |
| `GRAFANA_ADMIN_PASSWORD` | `admin` | Admin password (⚠️ change in production!) |
| `GRAFANA_INSTALL_PLUGINS` | - | Plugins to install (comma-separated) |
| `GRAFANA_ROOT_URL` | `http://localhost:3000` | Grafana root URL |
| `TZ` | `UTC` | Timezone |
## Quick Start
### 1. Configure Environment
Create a `.env` file:
```env
MCP_GRAFANA_VERSION=latest
GRAFANA_VERSION=latest
MCP_GRAFANA_PORT_OVERRIDE=8000
GRAFANA_PORT_OVERRIDE=3000
GRAFANA_ADMIN_USER=admin
GRAFANA_ADMIN_PASSWORD=your_secure_password
GRAFANA_ROOT_URL=http://localhost:3000
TZ=Asia/Shanghai
```
### 2. Start Services
```bash
docker compose up -d
```
### 3. Get API Key
1. Visit Grafana: <http://localhost:3000>
2. Login with admin credentials
3. Navigate to **Configuration** → **API Keys**
4. Create a new API key
5. Add the key to `.env` file: `GRAFANA_API_KEY=your_key_here`
6. Restart mcp-grafana service: `docker compose restart mcp-grafana`
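To sanity-check the key, you can call the Grafana HTTP API with Bearer authentication; a minimal stdlib sketch (the `/api/org` endpoint and Bearer auth follow Grafana's documented HTTP API; the key value here is a placeholder):

```python
import urllib.request

def grafana_request(path: str, api_key: str, base: str = "http://localhost:3000") -> urllib.request.Request:
    """Build an authenticated request for the Grafana HTTP API (Bearer token auth)."""
    return urllib.request.Request(
        f"{base}{path}", headers={"Authorization": f"Bearer {api_key}"}
    )

# Sending it requires a running Grafana instance:
# with urllib.request.urlopen(grafana_request("/api/org", "your_key_here")) as resp:
#     print(resp.status)
```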
### 4. Verify Services
```bash
# Verify MCP service
curl http://localhost:8000/health
# Verify Grafana service
curl http://localhost:3000/api/health
```
## Resource Requirements
- **MCP Service**: 128MB-512MB memory, 0.25-1.0 CPU
- **Grafana**: 256MB-1GB memory, 0.5-2.0 CPU
## Common Use Cases
1. **Dashboard Search** - Find dashboards using natural language
2. **Data Queries** - Query metric data from data sources
3. **Alert Management** - View and manage alert rules
4. **Visualization Creation** - Create new visualization panels
5. **Incident Analysis** - Investigate and analyze monitoring events
## Security Recommendations
⚠️ **Important**: In production environments:
1. Change default admin password
2. Use strong passwords and secure API keys
3. Enable HTTPS/TLS encryption
4. Restrict network access
5. Rotate API keys regularly
6. Set appropriate user permissions
## Data Persistence
- `grafana_data`: Grafana data directory
- `grafana_config`: Grafana configuration directory
- `grafana_logs`: Grafana logs directory
## References
- [Grafana Official Site](https://grafana.com/)
- [Grafana API Documentation](https://grafana.com/docs/grafana/latest/developers/http_api/)
- [MCP Documentation](https://modelcontextprotocol.io/)
- [Docker Hub - grafana/grafana](https://hub.docker.com/r/grafana/grafana)
## License
MIT License

View File

@@ -0,0 +1,118 @@
# Grafana MCP Server
Grafana MCP Server 提供通过模型上下文协议(MCP)与 Grafana 监控和可视化平台集成的能力。
## 功能特性
- 📊 **仪表板管理** - 创建和管理仪表板
- 📈 **查询数据源** - 查询数据源
- 🔍 **搜索仪表板** - 搜索仪表板
- 🚨 **事件调查** - 调查事件
- 📉 **指标分析** - 分析指标
- 🎨 **可视化** - 数据可视化
## 架构
该服务包含两个容器:
- **mcp-grafana**:MCP 服务器,提供与 Grafana 的 AI 交互接口
- **grafana**:Grafana 实例
## 环境变量
| 变量 | 默认值 | 说明 |
| --------------------------- | ----------------------- | -------------------------------- |
| `MCP_GRAFANA_VERSION` | `latest` | MCP Grafana 镜像版本 |
| `GRAFANA_VERSION` | `latest` | Grafana 版本 |
| `MCP_GRAFANA_PORT_OVERRIDE` | `8000` | MCP 服务端口 |
| `GRAFANA_PORT_OVERRIDE` | `3000` | Grafana 端口 |
| `GRAFANA_URL` | `http://grafana:3000` | Grafana 实例 URL |
| `GRAFANA_API_KEY` | - | Grafana API 密钥(必需) |
| `GRAFANA_ADMIN_USER` | `admin` | 管理员用户名 |
| `GRAFANA_ADMIN_PASSWORD` | `admin` | 管理员密码(⚠️ 生产环境请修改!) |
| `GRAFANA_INSTALL_PLUGINS` | - | 要安装的插件(逗号分隔) |
| `GRAFANA_ROOT_URL` | `http://localhost:3000` | Grafana 根 URL |
| `TZ` | `UTC` | 时区 |
## 快速开始
### 1. 配置环境
创建 `.env` 文件:
```env
MCP_GRAFANA_VERSION=latest
GRAFANA_VERSION=latest
MCP_GRAFANA_PORT_OVERRIDE=8000
GRAFANA_PORT_OVERRIDE=3000
GRAFANA_ADMIN_USER=admin
GRAFANA_ADMIN_PASSWORD=your_secure_password
GRAFANA_ROOT_URL=http://localhost:3000
TZ=Asia/Shanghai
```
### 2. 启动服务
```bash
docker compose up -d
```
### 3. 获取 API 密钥
1. 访问 Grafana:<http://localhost:3000>
2. 使用管理员凭据登录
3. 导航到 **Configuration** → **API Keys**
4. 创建新的 API 密钥
5. 将密钥添加到 `.env` 文件:`GRAFANA_API_KEY=your_key_here`
6. 重启 mcp-grafana 服务:`docker compose restart mcp-grafana`
### 4. 验证服务
```bash
# 验证 MCP 服务
curl http://localhost:8000/health
# 验证 Grafana 服务
curl http://localhost:3000/api/health
```
## 资源需求
- **MCP 服务**:128MB-512MB 内存,0.25-1.0 CPU
- **Grafana**:256MB-1GB 内存,0.5-2.0 CPU
## 常见使用场景
1. **仪表板搜索** - 使用自然语言查找仪表板
2. **数据查询** - 从数据源查询指标数据
3. **告警管理** - 查看和管理告警规则
4. **可视化创建** - 创建新的可视化面板
5. **事件分析** - 调查和分析监控事件
## 安全建议
⚠️ **重要**:在生产环境中:
1. 修改默认管理员密码
2. 使用强密码和安全的 API 密钥
3. 启用 HTTPS/TLS 加密
4. 限制网络访问
5. 定期轮换 API 密钥
6. 设置适当的用户权限
## 数据持久化
- `grafana_data`:Grafana 数据目录
- `grafana_config`:Grafana 配置目录
- `grafana_logs`:Grafana 日志目录
## 参考链接
- [Grafana 官方网站](https://grafana.com/)
- [Grafana API 文档](https://grafana.com/docs/grafana/latest/developers/http_api/)
- [MCP 文档](https://modelcontextprotocol.io/)
- [Docker Hub - grafana/grafana](https://hub.docker.com/r/grafana/grafana)
## 许可证
MIT License

View File

@@ -0,0 +1,74 @@
x-default: &default
restart: unless-stopped
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
mcp-grafana:
<<: *default
image: mcp/grafana:${MCP_GRAFANA_VERSION:-latest}
environment:
- GRAFANA_URL=${GRAFANA_URL:-http://grafana:3000}
- GRAFANA_API_KEY=${GRAFANA_API_KEY}
- MCP_HOST=0.0.0.0
- TZ=${TZ:-UTC}
ports:
- "${MCP_GRAFANA_PORT_OVERRIDE:-8000}:8000"
depends_on:
grafana:
condition: service_healthy
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "http://localhost:8000/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
deploy:
resources:
limits:
cpus: '1.00'
memory: 512M
reservations:
cpus: '0.25'
memory: 128M
grafana:
<<: *default
image: grafana/grafana:${GRAFANA_VERSION:-latest}
environment:
- GF_SECURITY_ADMIN_USER=${GRAFANA_ADMIN_USER:-admin}
- GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_ADMIN_PASSWORD:-admin}
- GF_INSTALL_PLUGINS=${GRAFANA_INSTALL_PLUGINS:-}
- GF_SERVER_ROOT_URL=${GRAFANA_ROOT_URL:-http://localhost:3000}
- TZ=${TZ:-UTC}
ports:
- "${GRAFANA_PORT_OVERRIDE:-3000}:3000"
volumes:
- grafana_data:/var/lib/grafana
- grafana_config:/etc/grafana
- grafana_logs:/var/log/grafana
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "http://localhost:3000/api/health"]
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
deploy:
resources:
limits:
cpus: '2.00'
memory: 1G
reservations:
cpus: '0.5'
memory: 256M
volumes:
grafana_data:
driver: local
grafana_config:
driver: local
grafana_logs:
driver: local

View File

@@ -0,0 +1,26 @@
# MCP MongoDB Service version
MCP_MONGODB_VERSION=latest
# MongoDB version
MONGODB_VERSION=7
# MCP MongoDB service port
MCP_MONGODB_PORT_OVERRIDE=8000
# MongoDB port
MONGODB_PORT_OVERRIDE=27017
# MongoDB connection URI
MONGODB_URI=mongodb://mongodb:27017
# MongoDB database name
MONGODB_DATABASE=mcp_db
# MongoDB root username
MONGO_ROOT_USERNAME=admin
# MongoDB root password
MONGO_ROOT_PASSWORD=password
# Timezone
TZ=UTC

View File

@@ -0,0 +1,104 @@
# MongoDB MCP Server
MongoDB MCP Server provides MongoDB database interaction capabilities through the Model Context Protocol, including data querying, insertion, updating, and collection management.
## Features
- 📊 **Database Operations** - CRUD operations support
- 🔍 **Query & Aggregation** - Complex queries and aggregation pipelines
- 📝 **Collection Management** - Create, delete, and modify collections
- 🔐 **Authentication** - Built-in authentication support
- 📈 **Monitoring** - Health checks and resource monitoring
- 🌐 **RESTful API** - MCP protocol-based API interface
## Architecture
The service includes two containers:
- **mcp-mongodb**: MCP protocol adapter providing API interface
- **mongodb**: MongoDB database instance
## Environment Variables
| Variable | Default | Description |
| --------------------------- | ------------------------- | --------------------------------------- |
| `MCP_MONGODB_VERSION` | `latest` | MCP MongoDB service version |
| `MONGODB_VERSION` | `7` | MongoDB version |
| `MCP_MONGODB_PORT_OVERRIDE` | `8000` | MCP service port |
| `MONGODB_PORT_OVERRIDE` | `27017` | MongoDB port |
| `MONGODB_URI` | `mongodb://mongodb:27017` | MongoDB connection URI |
| `MONGODB_DATABASE` | `mcp_db` | Database name |
| `MONGO_ROOT_USERNAME` | `admin` | Root username |
| `MONGO_ROOT_PASSWORD` | `password` | Root password (⚠️ change in production!) |
| `TZ` | `UTC` | Timezone |
## Quick Start
### 1. Configure Environment
Create a `.env` file:
```env
MCP_MONGODB_VERSION=latest
MONGODB_VERSION=7
MCP_MONGODB_PORT_OVERRIDE=8000
MONGODB_PORT_OVERRIDE=27017
MONGODB_DATABASE=mcp_db
MONGO_ROOT_USERNAME=admin
MONGO_ROOT_PASSWORD=your_secure_password
TZ=Asia/Shanghai
```
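Note that when root credentials are set, clients connecting from outside the `mongodb` container may need them in the connection URI; a sketch for building an authenticated `MONGODB_URI` (percent-encoding is required when credentials contain special characters):

```python
from urllib.parse import quote

def mongo_uri(user: str, password: str, host: str = "mongodb", port: int = 27017) -> str:
    """Build an authenticated MongoDB connection URI, percent-encoding the credentials."""
    return f"mongodb://{quote(user, safe='')}:{quote(password, safe='')}@{host}:{port}"

print(mongo_uri("admin", "p@ss/word"))  # mongodb://admin:p%40ss%2Fword@mongodb:27017
```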
### 2. Start Services
```bash
docker compose up -d
```
### 3. Verify Services
Check MCP service:
```bash
curl http://localhost:8000/health
```
Connect to MongoDB:
```bash
docker compose exec mongodb mongosh -u admin -p your_secure_password
```
## Resource Requirements
- **MCP Service**: 128MB-512MB memory, 0.25-1.0 CPU
- **MongoDB**: 512MB-2GB memory, 0.5-2.0 CPU
## Security Recommendations
1. **Change Default Password**: Always change `MONGO_ROOT_PASSWORD` in production
2. **Network Isolation**: Use internal networks, avoid exposing MongoDB port publicly
3. **Enable Authentication**: Ensure MongoDB authentication is enabled
4. **Regular Backups**: Set up regular data backup schedules
## Data Persistence
- `mongodb_data`: MongoDB data directory
- `mongodb_config`: MongoDB configuration directory
## Common Use Cases
1. **Application Backend** - As database backend for applications
2. **Data Analysis** - Store and query analysis data
3. **Document Storage** - Store and retrieve JSON documents
4. **Session Management** - Store user sessions
## References
- [MongoDB Official Documentation](https://docs.mongodb.com/)
- [MCP Documentation](https://modelcontextprotocol.io/)
- [Docker Hub - mongo](https://hub.docker.com/_/mongo)
## License
MIT License

View File

@@ -0,0 +1,104 @@
# MongoDB MCP Server
MongoDB MCP Server 提供通过模型上下文协议(MCP)与 MongoDB 数据库交互的能力,包括数据查询、插入、更新和集合管理。
## 功能特性
- 📊 **数据库操作** - 支持 CRUD 操作
- 🔍 **查询和聚合** - 复杂查询和聚合管道
- 📝 **集合管理** - 创建、删除、修改集合
- 🔐 **身份认证** - 内置认证支持
- 📈 **监控** - 健康检查和资源监控
- 🌐 **RESTful API** - 基于 MCP 协议的 API 接口
## 架构
该服务包含两个容器:
- **mcp-mongodb**:MCP 协议适配器,提供 API 接口
- **mongodb**:MongoDB 数据库实例
## 环境变量
| 变量 | 默认值 | 说明 |
| --------------------------- | ------------------------- | -------------------------------- |
| `MCP_MONGODB_VERSION` | `latest` | MCP MongoDB 服务版本 |
| `MONGODB_VERSION` | `7` | MongoDB 版本 |
| `MCP_MONGODB_PORT_OVERRIDE` | `8000` | MCP 服务端口 |
| `MONGODB_PORT_OVERRIDE` | `27017` | MongoDB 端口 |
| `MONGODB_URI` | `mongodb://mongodb:27017` | MongoDB 连接 URI |
| `MONGODB_DATABASE` | `mcp_db` | 数据库名称 |
| `MONGO_ROOT_USERNAME` | `admin` | 管理员用户名 |
| `MONGO_ROOT_PASSWORD` | `password` | 管理员密码(⚠️ 生产环境请修改!) |
| `TZ` | `UTC` | 时区 |
## 快速开始
### 1. 配置环境
创建 `.env` 文件:
```env
MCP_MONGODB_VERSION=latest
MONGODB_VERSION=7
MCP_MONGODB_PORT_OVERRIDE=8000
MONGODB_PORT_OVERRIDE=27017
MONGODB_DATABASE=mcp_db
MONGO_ROOT_USERNAME=admin
MONGO_ROOT_PASSWORD=your_secure_password
TZ=Asia/Shanghai
```
### 2. 启动服务
```bash
docker compose up -d
```
### 3. 验证服务
检查 MCP 服务:
```bash
curl http://localhost:8000/health
```
连接 MongoDB:
```bash
docker compose exec mongodb mongosh -u admin -p your_secure_password
```
## 资源需求
- **MCP 服务**:128MB-512MB 内存,0.25-1.0 CPU
- **MongoDB**:512MB-2GB 内存,0.5-2.0 CPU
## 安全建议
1. **修改默认密码**:生产环境务必修改 `MONGO_ROOT_PASSWORD`
2. **网络隔离**:使用内部网络,避免 MongoDB 端口暴露到公网
3. **启用认证**:确保 MongoDB 认证已启用
4. **定期备份**:设置定期数据备份计划
## 数据持久化
- `mongodb_data`:MongoDB 数据目录
- `mongodb_config`:MongoDB 配置目录
## 常见使用场景
1. **应用后端** - 作为应用程序的数据库后端
2. **数据分析** - 存储和查询分析数据
3. **文档存储** - 存储和检索 JSON 文档
4. **会话管理** - 存储用户会话
## 参考链接
- [MongoDB 官方文档](https://docs.mongodb.com/)
- [MCP 文档](https://modelcontextprotocol.io/)
- [Docker Hub - mongo](https://hub.docker.com/_/mongo)
## 许可证
MIT License

View File

@@ -0,0 +1,70 @@
x-default: &default
restart: unless-stopped
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
mcp-mongodb:
<<: *default
image: mcp/mongodb:${MCP_MONGODB_VERSION:-latest}
environment:
- MONGODB_URI=${MONGODB_URI:-mongodb://mongodb:27017}
- MONGODB_DATABASE=${MONGODB_DATABASE:-mcp_db}
- MCP_HOST=0.0.0.0
- TZ=${TZ:-UTC}
ports:
- "${MCP_MONGODB_PORT_OVERRIDE:-8000}:8000"
depends_on:
mongodb:
condition: service_healthy
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "http://localhost:8000/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
deploy:
resources:
limits:
cpus: '1.00'
memory: 512M
reservations:
cpus: '0.25'
memory: 128M
mongodb:
<<: *default
image: mongo:${MONGODB_VERSION:-7}
environment:
- MONGO_INITDB_ROOT_USERNAME=${MONGO_ROOT_USERNAME:-admin}
- MONGO_INITDB_ROOT_PASSWORD=${MONGO_ROOT_PASSWORD:-password}
- MONGO_INITDB_DATABASE=${MONGODB_DATABASE:-mcp_db}
- TZ=${TZ:-UTC}
ports:
- "${MONGODB_PORT_OVERRIDE:-27017}:27017"
volumes:
- mongodb_data:/data/db
- mongodb_config:/data/configdb
healthcheck:
test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
deploy:
resources:
limits:
cpus: '2.00'
memory: 2G
reservations:
cpus: '0.5'
memory: 512M
volumes:
mongodb_data:
driver: local
mongodb_config:
driver: local

View File

@@ -0,0 +1,8 @@
# Docker image version
PLAYWRIGHT_VERSION=latest
# Host port override
PLAYWRIGHT_PORT_OVERRIDE=8000
# Timezone
TZ=UTC

View File

@@ -0,0 +1,67 @@
# Playwright MCP Server
Playwright MCP Server provides browser automation and web scraping capabilities through the Model Context Protocol.
## Features
- 🌐 **Browser Automation** - Automate browser operations
- 📸 **Screenshot Capture** - Capture web page screenshots
- 🔍 **Web Scraping** - Intelligent web content extraction
- 📝 **Form Filling** - Automated form filling
- 🎭 **Multi-Browser** - Support for Chromium, Firefox, WebKit
- 🔐 **Cookie & Session** - Cookie and session management
## Environment Variables
| Variable | Default | Description |
| -------------------------- | -------- | -------------------- |
| `PLAYWRIGHT_VERSION` | `latest` | Docker image version |
| `PLAYWRIGHT_PORT_OVERRIDE` | `8000` | Service port |
| `TZ` | `UTC` | Timezone |
## Quick Start
### 1. Configure Environment
Create a `.env` file:
```env
PLAYWRIGHT_VERSION=latest
PLAYWRIGHT_PORT_OVERRIDE=8000
TZ=Asia/Shanghai
```
### 2. Start Service
```bash
docker compose up -d
```
### 3. Verify Service
```bash
curl http://localhost:8000/health
```
## Resource Requirements
- Minimum memory: 512MB
- Recommended memory: 2GB
- Shared memory: 2GB (configured)
## Common Use Cases
1. **Web Screenshots** - Automatically visit and capture screenshots
2. **Data Scraping** - Extract data from dynamic web pages
3. **UI Testing** - Automated UI testing scenarios
4. **Form Automation** - Batch fill and submit forms
## References
- [Playwright Official Site](https://playwright.dev/)
- [MCP Documentation](https://modelcontextprotocol.io/)
- [Docker Hub - mcp/playwright](https://hub.docker.com/r/mcp/playwright)
## License
MIT License

View File

@@ -0,0 +1,67 @@
# Playwright MCP Server
Playwright MCP Server 是一个基于 Playwright 的模型上下文协议(MCP)服务器,提供浏览器自动化和网页抓取功能。
## 功能特性
- 🌐 **浏览器自动化** - 自动化浏览器操作
- 📸 **截图捕获** - 捕获网页截图
- 🔍 **网页抓取** - 智能提取网页内容
- 📝 **表单填写** - 自动化表单填写
- 🎭 **多浏览器支持** - 支持 Chromium、Firefox、WebKit
- 🔐 **Cookie 和会话管理** - Cookie 和会话管理
## 环境变量
| 变量 | 默认值 | 说明 |
| -------------------------- | -------- | --------------- |
| `PLAYWRIGHT_VERSION` | `latest` | Docker 镜像版本 |
| `PLAYWRIGHT_PORT_OVERRIDE` | `8000` | 服务端口 |
| `TZ` | `UTC` | 时区 |
## 快速开始
### 1. 配置环境
创建 `.env` 文件:
```env
PLAYWRIGHT_VERSION=latest
PLAYWRIGHT_PORT_OVERRIDE=8000
TZ=Asia/Shanghai
```
### 2. 启动服务
```bash
docker compose up -d
```
### 3. 验证服务
```bash
curl http://localhost:8000/health
```
## 资源需求
- 最小内存:512MB
- 推荐内存:2GB
- 共享内存:2GB(已配置)
## 常见使用场景
1. **网页截图** - 自动访问并捕获截图
2. **数据抓取** - 从动态网页提取数据
3. **UI 测试** - 自动化 UI 测试场景
4. **表单自动化** - 批量填写和提交表单
## 参考链接
- [Playwright 官方网站](https://playwright.dev/)
- [MCP 文档](https://modelcontextprotocol.io/)
- [Docker Hub - mcp/playwright](https://hub.docker.com/r/mcp/playwright)
## 许可证
MIT License

View File

@@ -0,0 +1,42 @@
x-default: &default
restart: unless-stopped
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
playwright:
<<: *default
image: mcp/playwright:${PLAYWRIGHT_VERSION:-latest}
environment:
- MCP_HOST=0.0.0.0
- TZ=${TZ:-UTC}
ports:
- "${PLAYWRIGHT_PORT_OVERRIDE:-8000}:8000"
    # Mount a volume if access to local files is needed
volumes:
- playwright_data:/app/data
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "http://localhost:8000/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 20s
deploy:
resources:
limits:
cpus: '2.00'
memory: 2G
reservations:
cpus: '0.5'
memory: 512M
    # Playwright needs extra privileges to run browsers
security_opt:
- seccomp:unconfined
shm_size: '2gb'
volumes:
playwright_data:
driver: local

View File

@@ -0,0 +1,14 @@
# MCP Redis version
MCP_REDIS_VERSION=latest
# MCP port (default: 8000)
MCP_PORT_OVERRIDE=8000
# Redis version
REDIS_VERSION=7-alpine
# Redis port (default: 6379)
REDIS_PORT_OVERRIDE=6379
# Timezone
TZ=UTC

View File

@@ -0,0 +1,40 @@
# Redis MCP Server
[English](./README.md) | [中文](./README.zh.md)
This service deploys an MCP (Model Context Protocol) server for Redis, providing a standardized interface to interact with Redis databases.
## Services
- `mcp`: The MCP Redis server
- `redis`: Redis database service
## Environment Variables
| Variable Name | Description | Default Value |
| ------------------- | ----------------------------------------------------------------- | ------------- |
| MCP_REDIS_VERSION | MCP Redis image version | `latest` |
| MCP_PORT_OVERRIDE | Host port mapping for MCP server (maps to port 8000 in container) | 8000 |
| REDIS_VERSION | Redis image version | `7-alpine` |
| REDIS_PORT_OVERRIDE | Host port mapping for Redis (maps to port 6379 in container) | 6379 |
| TZ | Timezone setting | `UTC` |
Please modify the `.env` file as needed for your use case.
## Volumes
- `redis_data`: Redis data persistence
## Ports
- `8000`: MCP server API
- `6379`: Redis database
## Usage
The MCP server provides a standardized interface to interact with Redis. Access the MCP API at `http://localhost:8000`.
## Additional Information
- Model Context Protocol: <https://modelcontextprotocol.io/>
- Redis Documentation: <https://redis.io/documentation>

View File

@@ -0,0 +1,40 @@
# Redis MCP 服务器
[English](./README.md) | [中文](./README.zh.md)
此服务部署一个用于 Redis 的 MCP(模型上下文协议)服务器,提供与 Redis 数据库交互的标准化接口。
## 服务
- `mcp`:MCP Redis 服务器
- `redis`:Redis 数据库服务
## 环境变量
| 变量名 | 说明 | 默认值 |
| ------------------- | ----------------------------------------------- | ---------- |
| MCP_REDIS_VERSION | MCP Redis 镜像版本 | `latest` |
| MCP_PORT_OVERRIDE | MCP 服务器主机端口映射(映射到容器内端口 8000) | 8000 |
| REDIS_VERSION | Redis 镜像版本 | `7-alpine` |
| REDIS_PORT_OVERRIDE | Redis 主机端口映射(映射到容器内端口 6379 | 6379 |
| TZ | 时区设置 | `UTC` |
请根据实际需求修改 `.env` 文件。
## 卷
- `redis_data`Redis 数据持久化
## 端口
- `8000`MCP 服务器 API
- `6379`Redis 数据库
## 使用方法
MCP 服务器提供了与 Redis 交互的标准化接口。访问 MCP API`http://localhost:8000`
## 附加信息
- 模型上下文协议:<https://modelcontextprotocol.io/>
- Redis 文档:<https://redis.io/documentation>


@@ -0,0 +1,64 @@
x-default: &default
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: "3"

services:
  mcp:
    <<: *default
    image: mcp/redis:${MCP_REDIS_VERSION:-latest}
    environment:
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - MCP_HOST=0.0.0.0
      - TZ=${TZ:-UTC}
    ports:
      - "${MCP_PORT_OVERRIDE:-8000}:8000"
    depends_on:
      redis:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "wget", "--spider", "-q", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
    deploy:
      resources:
        limits:
          cpus: '1.00'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 128M

  redis:
    <<: *default
    image: redis:${REDIS_VERSION:-7-alpine}
    command: redis-server --appendonly yes
    ports:
      - "${REDIS_PORT_OVERRIDE:-6379}:6379"
    volumes:
      - redis_data:/data
    environment:
      - TZ=${TZ:-UTC}
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
    deploy:
      resources:
        limits:
          cpus: '0.50'
          memory: 256M
        reservations:
          cpus: '0.10'
          memory: 64M

volumes:
  redis_data:


@@ -0,0 +1,14 @@
# Docker image version
RUST_MCP_FILESYSTEM_VERSION=latest
# Host port override
RUST_MCP_FILESYSTEM_PORT_OVERRIDE=8000
# Allowed paths (inside container)
ALLOWED_PATHS=/projects
# Host workspace path to mount
HOST_WORKSPACE_PATH=./workspace
# Timezone
TZ=UTC


@@ -0,0 +1,119 @@
# Rust MCP Filesystem Server
Rust MCP Filesystem Server is a high-performance filesystem MCP server built with Rust, providing fast and secure file operations.
## Features
- 🚀 **High Performance** - Rust-powered high-performance file operations
- 🔒 **Secure Access** - Configurable access control
- 📁 **File Operations** - File read/write, directory traversal
- 🔍 **File Search** - Fast file searching
- 📊 **File Info** - File metadata queries
- ⚡ **Async I/O** - Asynchronous file I/O operations
## Environment Variables
| Variable | Default | Description |
| ----------------------------------- | ------------- | ---------------------------- |
| `RUST_MCP_FILESYSTEM_VERSION` | `latest` | Docker image version |
| `RUST_MCP_FILESYSTEM_PORT_OVERRIDE` | `8000` | Service port |
| `ALLOWED_PATHS` | `/projects` | Allowed access paths |
| `HOST_WORKSPACE_PATH` | `./workspace` | Host workspace path to mount |
| `TZ` | `UTC` | Timezone |
## Quick Start
### 1. Configure Environment
Create a `.env` file:
```env
RUST_MCP_FILESYSTEM_VERSION=latest
RUST_MCP_FILESYSTEM_PORT_OVERRIDE=8000
ALLOWED_PATHS=/projects
HOST_WORKSPACE_PATH=/path/to/your/workspace
TZ=Asia/Shanghai
```
### 2. Configure File Access
In `docker-compose.yaml`, configure directories to access:
```yaml
volumes:
  # Read-only access
  - /path/to/workspace:/projects/workspace:ro
  # Read-write access (remove :ro)
  - /path/to/data:/projects/data
```
### 3. Start Service
```bash
docker compose up -d
```
### 4. Verify Service
```bash
curl http://localhost:8000/health
```
## Security Features
The service implements multiple layers of security:
1. **Read-only Filesystem**: Container filesystem set to read-only
2. **Permission Restrictions**: Minimized container permissions
3. **Path Restrictions**: Only configured paths can be accessed
4. **No Privilege Escalation**: Prevents privilege escalation
5. **Capability Restrictions**: Only necessary Linux capabilities retained
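The path restriction in particular boils down to resolving each requested path (collapsing any `../` components) and checking it against the allowed roots. A minimal sketch of that idea, not the server's actual Rust implementation:

```python
from pathlib import Path

def is_allowed(requested: str, allowed_roots: list[str]) -> bool:
    """Resolve the path and require it to live under one of the allowed roots."""
    resolved = Path(requested).resolve()
    return any(resolved.is_relative_to(Path(root).resolve())
               for root in allowed_roots)

# /projects matches the ALLOWED_PATHS default from .env.example
assert is_allowed("/projects/workspace/notes.txt", ["/projects"])
assert not is_allowed("/projects/../etc/passwd", ["/projects"])  # ../ escape blocked
```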
## Performance Characteristics
- **Zero-copy**: Leverages Rust's zero-copy features
- **Async I/O**: High-concurrency file operations
- **Memory Safety**: Memory safety guaranteed by Rust
- **Low Resource Usage**: Minimum 64MB memory
## Resource Requirements
- Minimum memory: 64MB
- Recommended memory: 256MB
- CPU: 0.25-1.0 cores
## Common Use Cases
1. **Code Repository Access** - Allow AI to access and analyze codebases
2. **Document Processing** - Read and process document files
3. **Log Analysis** - Analyze log files
4. **Configuration Management** - Read and update configuration files
## Security Recommendations
⚠️ **Important**: When using:
1. Only mount necessary directories
2. Prefer read-only mode (`:ro`)
3. Do not mount sensitive system directories
4. Regularly audit access logs
5. Use firewall to restrict network access
## Comparison with Other Implementations
| Feature | Rust Implementation | Node.js Implementation |
| ------------- | ------------------- | ---------------------- |
| Performance | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ |
| Memory Usage | 64MB+ | 128MB+ |
| Concurrency | Excellent | Good |
| Startup Speed | Fast | Medium |
## References
- [Rust Official Site](https://www.rust-lang.org/)
- [MCP Documentation](https://modelcontextprotocol.io/)
- [Docker Hub - mcp/rust-mcp-filesystem](https://hub.docker.com/r/mcp/rust-mcp-filesystem)
## License
MIT License


@@ -0,0 +1,119 @@
# Rust MCP Filesystem Server
Rust MCP Filesystem Server 是一个使用 Rust 构建的高性能文件系统 MCP 服务器,提供快速、安全的文件操作能力。
## 功能特性
- 🚀 **高性能** - Rust 驱动的高性能文件操作
- 🔒 **安全访问** - 可配置的访问控制
- 📁 **文件操作** - 文件读写、目录遍历
- 🔍 **文件搜索** - 快速文件搜索
- 📊 **文件信息** - 文件元数据查询
- ⚡ **异步 I/O** - 异步文件 I/O 操作
## 环境变量
| 变量 | 默认值 | 说明 |
| ----------------------------------- | ------------- | ---------------------- |
| `RUST_MCP_FILESYSTEM_VERSION` | `latest` | Docker 镜像版本 |
| `RUST_MCP_FILESYSTEM_PORT_OVERRIDE` | `8000` | 服务端口 |
| `ALLOWED_PATHS` | `/projects` | 允许访问的路径 |
| `HOST_WORKSPACE_PATH` | `./workspace` | 要挂载的主机工作区路径 |
| `TZ` | `UTC` | 时区 |
## 快速开始
### 1. 配置环境
创建 `.env` 文件:
```env
RUST_MCP_FILESYSTEM_VERSION=latest
RUST_MCP_FILESYSTEM_PORT_OVERRIDE=8000
ALLOWED_PATHS=/projects
HOST_WORKSPACE_PATH=/path/to/your/workspace
TZ=Asia/Shanghai
```
### 2. 配置文件访问
在 `docker-compose.yaml` 中配置需要访问的目录:
```yaml
volumes:
  # 只读访问
  - /path/to/workspace:/projects/workspace:ro
  # 读写访问(移除 :ro
  - /path/to/data:/projects/data
```
### 3. 启动服务
```bash
docker compose up -d
```
### 4. 验证服务
```bash
curl http://localhost:8000/health
```
## 安全特性
该服务实现了多层安全保护:
1. **只读文件系统**:容器文件系统设置为只读
2. **权限限制**:最小化容器权限
3. **路径限制**:只能访问配置的允许路径
4. **无特权提升**:防止权限提升
5. **Capability 限制**:只保留必要的 Linux Capabilities
## 性能特点
- **零拷贝**:利用 Rust 的零拷贝特性
- **异步 I/O**:高并发文件操作
- **内存安全**Rust 保证的内存安全
- **低资源占用**:最小 64MB 内存
## 资源需求
- 最小内存64MB
- 推荐内存256MB
- CPU0.25-1.0 核心
## 常见使用场景
1. **代码库访问** - 让 AI 访问和分析代码库
2. **文档处理** - 读取和处理文档文件
3. **日志分析** - 分析日志文件
4. **配置管理** - 读取和更新配置文件
## 安全建议
⚠️ **重要**:使用时请注意:
1. 只挂载必要的目录
2. 优先使用只读模式(`:ro`
3. 不要挂载敏感系统目录
4. 定期审查访问日志
5. 使用防火墙限制网络访问
## 与其他实现的对比
| 特性 | Rust 实现 | Node.js 实现 |
| -------- | --------- | ------------ |
| 性能 | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ |
| 内存占用 | 64MB+ | 128MB+ |
| 并发处理 | 优秀 | 良好 |
| 启动速度 | 快速 | 中等 |
## 参考链接
- [Rust 官方网站](https://www.rust-lang.org/)
- [MCP 文档](https://modelcontextprotocol.io/)
- [Docker Hub - mcp/rust-mcp-filesystem](https://hub.docker.com/r/mcp/rust-mcp-filesystem)
## 许可证
MIT License


@@ -0,0 +1,49 @@
x-default: &default
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: "3"

services:
  rust-mcp-filesystem:
    <<: *default
    image: mcp/rust-mcp-filesystem:${RUST_MCP_FILESYSTEM_VERSION:-latest}
    environment:
      - MCP_HOST=0.0.0.0
      - ALLOWED_PATHS=${ALLOWED_PATHS:-/projects}
      - TZ=${TZ:-UTC}
    ports:
      - "${RUST_MCP_FILESYSTEM_PORT_OVERRIDE:-8000}:8000"
    volumes:
      # Mount directories that should be accessible under /projects
      - ${HOST_WORKSPACE_PATH:-./workspace}:/projects/workspace:ro
      # Remove the :ro flag if write access is required
      # - ${HOST_DATA_PATH:-./data}:/projects/data
    healthcheck:
      test: ["CMD", "wget", "--spider", "-q", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
    deploy:
      resources:
        limits:
          cpus: '1.00'
          memory: 256M
        reservations:
          cpus: '0.25'
          memory: 64M
    # Security hardening
    read_only: true
    security_opt:
      - no-new-privileges:true
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - DAC_OVERRIDE
      - FOWNER
      - SETGID
      - SETUID


@@ -0,0 +1,11 @@
# Tavily API Key (required)
TAVILY_API_KEY=your_tavily_api_key_here
# Docker image version
TAVILY_VERSION=latest
# Host port override
TAVILY_PORT_OVERRIDE=8000
# Timezone
TZ=UTC


@@ -0,0 +1,59 @@
# Tavily MCP Server
Tavily MCP Server provides powerful web search and data extraction capabilities through the Model Context Protocol.
## Features
- 🔍 **Web Search** - Intelligent web search using Tavily API
- 📄 **Content Extraction** - Extract and process web page content
- 🗺️ **Web Mapping** - Discover and map website structures
- 📰 **News Search** - Search for latest news and articles
- 🌐 **Multi-source** - Aggregate search across multiple data sources
## Environment Variables
| Variable | Default | Description |
| ---------------------- | -------- | ------------------------- |
| `TAVILY_API_KEY` | - | Tavily API key (required) |
| `TAVILY_VERSION` | `latest` | Docker image version |
| `TAVILY_PORT_OVERRIDE` | `8000` | Service port |
| `TZ` | `UTC` | Timezone |
## Quick Start
### 1. Configure Environment
Create a `.env` file:
```env
TAVILY_API_KEY=your_tavily_api_key_here
TAVILY_VERSION=latest
TAVILY_PORT_OVERRIDE=8000
TZ=Asia/Shanghai
```
### 2. Start Service
```bash
docker compose up -d
```
### 3. Verify Service
```bash
curl http://localhost:8000/health
```
## Get API Key
Visit [Tavily](https://tavily.com/) to obtain an API key.
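Under the hood the server calls Tavily's REST API. A rough sketch of what a search request contains follows; the endpoint and field names track Tavily's public docs at the time of writing, so verify them against the current reference before relying on them.

```python
import json

def tavily_search_request(api_key: str, query: str, max_results: int = 5) -> dict:
    """Assemble a Tavily /search call: endpoint, headers, and JSON body."""
    return {
        "url": "https://api.tavily.com/search",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({
            "api_key": api_key,
            "query": query,
            "max_results": max_results,
        }),
    }

# Placeholder key; use the value of TAVILY_API_KEY from your .env
req = tavily_search_request("tvly-xxxxxx", "latest Docker Compose release")
```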
## References
- [Tavily Official Site](https://tavily.com/)
- [MCP Documentation](https://modelcontextprotocol.io/)
- [Docker Hub - mcp/tavily](https://hub.docker.com/r/mcp/tavily)
## License
MIT License


@@ -0,0 +1,59 @@
# Tavily MCP 服务器
Tavily MCP 服务器通过模型上下文协议提供强大的网络搜索和数据提取功能。
## 功能特性
- 🔍 **网络搜索** - 使用 Tavily API 进行智能网络搜索
- 📄 **内容提取** - 提取和处理网页内容
- 🗺️ **网站映射** - 发现和映射网站结构
- 📰 **新闻搜索** - 搜索最新新闻和文章
- 🌐 **多源聚合** - 跨多个数据源的聚合搜索
## 环境变量
| 变量 | 默认值 | 说明 |
| ---------------------- | -------- | ----------------------- |
| `TAVILY_API_KEY` | - | Tavily API 密钥(必需) |
| `TAVILY_VERSION` | `latest` | Docker 镜像版本 |
| `TAVILY_PORT_OVERRIDE` | `8000` | 服务端口 |
| `TZ` | `UTC` | 时区 |
## 快速开始
### 1. 配置环境
创建 `.env` 文件:
```env
TAVILY_API_KEY=your_tavily_api_key_here
TAVILY_VERSION=latest
TAVILY_PORT_OVERRIDE=8000
TZ=Asia/Shanghai
```
### 2. 启动服务
```bash
docker compose up -d
```
### 3. 验证服务
```bash
curl http://localhost:8000/health
```
## 获取 API 密钥
访问 [Tavily](https://tavily.com/) 获取 API 密钥。
## 参考链接
- [Tavily 官网](https://tavily.com/)
- [MCP 官方文档](https://modelcontextprotocol.io/)
- [Docker Hub - mcp/tavily](https://hub.docker.com/r/mcp/tavily)
## 许可证
MIT License


@@ -0,0 +1,31 @@
x-default: &default
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: "3"

services:
  tavily:
    <<: *default
    image: mcp/tavily:${TAVILY_VERSION:-latest}
    environment:
      - TAVILY_API_KEY=${TAVILY_API_KEY}
      - TZ=${TZ:-UTC}
    ports:
      - "${TAVILY_PORT_OVERRIDE:-8000}:8000"
    healthcheck:
      test: ["CMD", "wget", "--spider", "-q", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
    deploy:
      resources:
        limits:
          cpus: '1.00'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 128M

src/dnsmasq/.env.example Normal file

@@ -0,0 +1,9 @@
# DNSMasq version
DNSMASQ_VERSION=2.91
# DNS port (default: 53)
# Note: Ports below 1024 require NET_BIND_SERVICE capability
DNSMASQ_DNS_PORT_OVERRIDE=53
# Timezone
TZ=UTC

src/dnsmasq/README.md Normal file

@@ -0,0 +1,49 @@
# DNSMasq
[English](./README.md) | [中文](./README.zh.md)
This service deploys DNSMasq, a lightweight DNS forwarder and DHCP server.
## Services
- `dnsmasq`: The DNSMasq service.
## Environment Variables
| Variable Name | Description | Default Value |
| ------------------------- | ---------------------------------------------------- | ------------- |
| DNSMASQ_VERSION | DNSMasq image version | `2.91` |
| DNSMASQ_DNS_PORT_OVERRIDE | Host port mapping (maps to DNS port 53 in container) | 53 |
| TZ | Timezone setting | `UTC` |
Please modify the `.env` file as needed for your use case.
## Configuration
### Configure LAN DNS Resolution
Lines starting with `address` in the `dnsmasq.conf` file will be parsed as LAN DNS resolution rules.
```conf
address=/example.com/192.168.1.123
```
Router Configuration:
- Set the gateway to the router IP
- Bind the server IP address and MAC address, or assign a static IP address
- Configure the DHCP server to use the server IP address as the DNS server
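Each `address=/domain/ip` line is just a slash-separated triple. A small parser sketch (an illustration only, not part of dnsmasq) shows how such rules decompose:

```python
def parse_address_records(conf_text: str) -> dict[str, str]:
    """Collect domain -> IP mappings from dnsmasq `address=/domain/ip` lines."""
    records = {}
    for line in conf_text.splitlines():
        line = line.strip()
        if line.startswith("address="):
            # "address=/example.com/192.168.1.123" splits into 3 parts on "/"
            _, domain, ip = line.split("/", 2)
            records[domain] = ip
    return records

conf = "interface=*\nserver=8.8.8.8\naddress=/example.com/192.168.1.123\n"
print(parse_address_records(conf))  # {'example.com': '192.168.1.123'}
```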
## Volumes
- `dnsmasq.conf`: Configuration file for DNSMasq (mounted to `/etc/dnsmasq.conf`).
## Ports
- `53/tcp`: DNS service (TCP)
- `53/udp`: DNS service (UDP)
## Security Notes
- This service requires `NET_ADMIN` and `NET_BIND_SERVICE` capabilities to bind to privileged ports.
- Ensure proper firewall rules are in place to restrict access to the DNS service.

src/dnsmasq/README.zh.md Normal file

@@ -0,0 +1,13 @@
# 配置局域网 DNS 解析
`dnsmasq.conf` 文件中以 `address` 开头的行会被解析为局域网 DNS 解析。
```conf
address=/example.com/192.168.1.123
```
在路由器中设置:
- 网关为路由器 IP
- 服务器 IP 地址和 MAC 地址绑定,或给定固定 IP 地址
- DHCP 服务器设置 DNS 服务器为服务器 IP 地址

src/dnsmasq/dnsmasq.conf Normal file

@@ -0,0 +1,2 @@
interface=*
server=8.8.8.8


@@ -0,0 +1,38 @@
x-default: &default
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: "3"

services:
  dnsmasq:
    <<: *default
    image: dockurr/dnsmasq:${DNSMASQ_VERSION:-2.91}
    volumes:
      - ./dnsmasq.conf:/etc/dnsmasq.conf:ro
    ports:
      - "${DNSMASQ_DNS_PORT_OVERRIDE:-53}:53/udp"
      - "${DNSMASQ_DNS_PORT_OVERRIDE:-53}:53/tcp"
    environment:
      - TZ=${TZ:-UTC}
    cap_drop:
      - ALL
    cap_add:
      - NET_ADMIN
      - NET_BIND_SERVICE
    healthcheck:
      test: ["CMD", "nslookup", "-timeout=1", "localhost", "127.0.0.1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
    deploy:
      resources:
        limits:
          cpus: '0.50'
          memory: 128M
        reservations:
          cpus: '0.10'
          memory: 32M


@@ -1,19 +1,22 @@
# GPUStack version
GPUSTACK_VERSION="v0.5.3"
GPUSTACK_VERSION=v0.7.1
# Timezone setting
TZ=UTC
# Server configuration
GPUSTACK_HOST="0.0.0.0"
GPUSTACK_HOST=0.0.0.0
GPUSTACK_PORT=80
GPUSTACK_DEBUG=false
# Admin bootstrap password
GPUSTACK_BOOTSTRAP_PASSWORD="admin"
GPUSTACK_BOOTSTRAP_PASSWORD=admin
# Token for worker registration (auto-generated if not set)
GPUSTACK_TOKEN=""
GPUSTACK_TOKEN=
# Hugging Face token for model downloads
HF_TOKEN=""
HF_TOKEN=
# Port to bind to on the host machine
GPUSTACK_PORT_OVERRIDE=80


@@ -2,26 +2,39 @@
[English](./README.md) | [中文](./README.zh.md)
This service deploys GPUStack, an open-source GPU cluster manager for running large language models (LLMs).
GPUStack is an open-source GPU cluster manager for running and scaling large language models (LLMs).
## Quick Start
```bash
docker compose up -d
```
Access the web UI at <http://localhost:80> with default credentials `admin` / `admin`.
## Services
- `gpustack`: GPUStack server with built-in worker
- `gpustack`: GPUStack server with GPU support enabled by default
## Ports
| Service | Port |
| -------- | ---- |
| gpustack | 80 |
## Environment Variables
| Variable Name | Description | Default Value |
| --------------------------- | -------------------------------------- | ------------- |
| GPUSTACK_VERSION | GPUStack image version | `v0.5.3` |
| GPUSTACK_HOST | Host to bind the server to | `0.0.0.0` |
| GPUSTACK_PORT | Port to bind the server to | `80` |
| GPUSTACK_DEBUG | Enable debug mode | `false` |
| GPUSTACK_BOOTSTRAP_PASSWORD | Password for the bootstrap admin user | `admin` |
| GPUSTACK_TOKEN | Token for worker registration | (auto) |
| HF_TOKEN | Hugging Face token for model downloads | `""` |
| GPUSTACK_PORT_OVERRIDE | Host port mapping | `80` |
Please modify the `.env` file as needed for your use case.
| Variable | Description | Default |
| --------------------------- | -------------------------------------- | --------- |
| GPUSTACK_VERSION | GPUStack image version | `v0.7.1` |
| TZ | Timezone setting | `UTC` |
| GPUSTACK_HOST | Host to bind the server to | `0.0.0.0` |
| GPUSTACK_PORT | Port to bind the server to | `80` |
| GPUSTACK_DEBUG | Enable debug mode | `false` |
| GPUSTACK_BOOTSTRAP_PASSWORD | Password for the bootstrap admin user | `admin` |
| GPUSTACK_TOKEN | Token for worker registration | (auto) |
| HF_TOKEN | Hugging Face token for model downloads | (empty) |
| GPUSTACK_PORT_OVERRIDE | Host port mapping | `80` |
## Volumes
@@ -29,84 +42,79 @@ Please modify the `.env` file as needed for your use case.
## GPU Support
### NVIDIA GPU
Uncomment the GPU-related configuration in `docker-compose.yaml`:
This service is configured with NVIDIA GPU support enabled by default. The configuration uses:
```yaml
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: 1
          capabilities: [gpu]
runtime: nvidia
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          device_ids: [ '0' ]
          capabilities: [ gpu ]
```
### Requirements
- NVIDIA GPU with CUDA support
- NVIDIA Container Toolkit installed on the host
- Docker 19.03+ with GPU support
### AMD GPU (ROCm)
Use the ROCm-specific image:
To use AMD GPUs with ROCm support:
```yaml
image: gpustack/gpustack:v0.5.3-rocm
```
1. Use the ROCm-specific image in `docker-compose.yaml`:

   ```yaml
   image: gpustack/gpustack:${GPUSTACK_VERSION:-v0.7.1}-rocm
   ```

2. Change the device driver to `amdgpu`:

   ```yaml
   deploy:
     resources:
       reservations:
         devices:
           - driver: amdgpu
             device_ids: [ '0' ]
             capabilities: [ gpu ]
   ```
## Usage
### Start GPUStack
```bash
docker compose up -d
```
### Access
- Web UI: <http://localhost:80>
- Default credentials: `admin` / `admin` (configured via `GPUSTACK_BOOTSTRAP_PASSWORD`)
### Deploy a Model
1. Log in to the web UI
2. Navigate to Models
3. Click "Deploy Model"
4. Select a model from the catalog or add a custom model
5. Configure the model parameters
6. Click "Deploy"
1. Log in to the web UI at <http://localhost:80>
2. Navigate to **Models** → **Deploy Model**
3. Select a model from the catalog or add a custom model
4. Configure the model parameters
5. Click **Deploy**
### Add Worker Nodes
To add more GPU nodes to the cluster:
To scale your cluster by adding more GPU nodes:
1. Get the registration token from the server:
```bash
docker exec gpustack cat /var/lib/gpustack/token
```
```bash
docker exec gpustack gpustack show-token
```
2. Start a worker on another node:
```bash
docker run -d --name gpustack-worker \
--gpus all \
--network host \
--ipc host \
-v gpustack-data:/var/lib/gpustack \
gpustack/gpustack:v0.5.3 \
--server-url http://your-server-ip:80 \
--token YOUR_TOKEN
```
```bash
docker run -d --name gpustack-worker \
--gpus all \
--network host \
--ipc host \
-v gpustack-worker-data:/var/lib/gpustack \
gpustack/gpustack:v0.7.1 \
gpustack start --server-url http://your-server-ip:80 --token YOUR_TOKEN
```
## Features
- **Model Management**: Deploy and manage LLM models from Hugging Face, ModelScope, or custom sources
- **GPU Scheduling**: Automatic GPU allocation and scheduling
- **Multi-Backend**: Supports llama-box, vLLM, and other backends
- **API Compatible**: OpenAI-compatible API endpoint
- **Web UI**: User-friendly web interface for management
- **Monitoring**: Resource usage and model metrics
## API Usage
### API Usage
GPUStack provides an OpenAI-compatible API:
@@ -120,19 +128,31 @@ curl http://localhost:80/v1/chat/completions \
}'
```
## Features
- **Model Management**: Deploy and manage LLM models from Hugging Face, ModelScope, or custom sources
- **GPU Scheduling**: Automatic GPU allocation and load balancing
- **Multi-Backend**: Supports llama-box, vLLM, and other inference backends
- **OpenAI-Compatible API**: Drop-in replacement for OpenAI API
- **Web UI**: User-friendly web interface for cluster management
- **Monitoring**: Real-time resource usage and model performance metrics
- **Multi-Node**: Scale across multiple GPU servers
## Notes
- For production use, change the default password
- GPU support requires NVIDIA Docker runtime or AMD ROCm support
- Model downloads can be large (several GB), ensure sufficient disk space
- First model deployment may take time as it downloads the model files
- **Production Security**: Change the default `GPUSTACK_BOOTSTRAP_PASSWORD` before deploying
- **GPU Requirements**: NVIDIA GPU with CUDA support is required; ensure NVIDIA Container Toolkit is installed
- **Disk Space**: Model downloads can be several gigabytes; ensure sufficient storage
- **First Deployment**: Initial model deployment may take time as it downloads model files
- **Network**: By default, the service binds to all interfaces (`0.0.0.0`); restrict access in production
## Security
- Change default admin password after first login
- Use strong passwords for API keys
- Consider using TLS for production deployments
- Restrict network access to trusted sources
- **Change Default Password**: Update `GPUSTACK_BOOTSTRAP_PASSWORD` after first login
- **API Keys**: Use strong, unique API keys for accessing the API
- **TLS/HTTPS**: Consider using a reverse proxy with TLS for production
- **Network Access**: Restrict access to trusted networks using firewalls
- **Updates**: Keep GPUStack updated to the latest stable version
## License

src/gpustack/README.zh.md Normal file

@@ -0,0 +1,159 @@
# GPUStack
[English](./README.md) | [中文](./README.zh.md)
GPUStack 是一个开源的 GPU 集群管理器用于运行和扩展大型语言模型LLM
## 快速开始
```bash
docker compose up -d
```
在 <http://localhost:80> 访问 Web UI默认凭据为 `admin` / `admin`
## 服务
- `gpustack`:默认启用 GPU 支持的 GPUStack 服务器
## 端口
| 服务 | 端口 |
| -------- | ---- |
| gpustack | 80 |
## 环境变量
| 变量名 | 描述 | 默认值 |
| --------------------------- | ------------------------- | --------- |
| GPUSTACK_VERSION | GPUStack 镜像版本 | `v0.7.1` |
| TZ | 时区设置 | `UTC` |
| GPUSTACK_HOST | 服务器绑定的主机地址 | `0.0.0.0` |
| GPUSTACK_PORT | 服务器绑定的端口 | `80` |
| GPUSTACK_DEBUG | 启用调试模式 | `false` |
| GPUSTACK_BOOTSTRAP_PASSWORD | 引导管理员用户的密码 | `admin` |
| GPUSTACK_TOKEN | Worker 注册令牌 | (自动) |
| HF_TOKEN | Hugging Face 模型下载令牌 | (空) |
| GPUSTACK_PORT_OVERRIDE | 主机端口映射 | `80` |
## 卷
- `gpustack_data`GPUStack 数据目录
## GPU 支持
本服务默认配置了 NVIDIA GPU 支持。配置使用:
```yaml
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          device_ids: [ '0' ]
          capabilities: [ gpu ]
```
### 要求
- 支持 CUDA 的 NVIDIA GPU
- 主机上安装了 NVIDIA Container Toolkit
- Docker 19.03+ 支持 GPU
### AMD GPUROCm
要使用支持 ROCm 的 AMD GPU
1. 在 `docker-compose.yaml` 中使用 ROCm 特定镜像:

   ```yaml
   image: gpustack/gpustack:${GPUSTACK_VERSION:-v0.7.1}-rocm
   ```

2. 将设备驱动更改为 `amdgpu`

   ```yaml
   deploy:
     resources:
       reservations:
         devices:
           - driver: amdgpu
             device_ids: [ '0' ]
             capabilities: [ gpu ]
   ```
## 使用方法
### 部署模型
1. 在 <http://localhost:80> 登录 Web UI
2. 导航到 **Models** → **Deploy Model**
3. 从目录中选择模型或添加自定义模型
4. 配置模型参数
5. 点击 **Deploy**
### 添加 Worker 节点
通过添加更多 GPU 节点来扩展集群:
1. 从服务器获取注册令牌:

   ```bash
   docker exec gpustack gpustack show-token
   ```

2. 在另一个节点上启动 Worker

   ```bash
   docker run -d --name gpustack-worker \
     --gpus all \
     --network host \
     --ipc host \
     -v gpustack-worker-data:/var/lib/gpustack \
     gpustack/gpustack:v0.7.1 \
     gpustack start --server-url http://your-server-ip:80 --token YOUR_TOKEN
   ```
### API 使用
GPUStack 提供与 OpenAI 兼容的 API
```bash
curl http://localhost:80/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"model": "llama-3.2-3b-instruct",
"messages": [{"role": "user", "content": "Hello!"}]
}'
```
## 功能特性
- **模型管理**:从 Hugging Face、ModelScope 或自定义源部署和管理 LLM 模型
- **GPU 调度**:自动 GPU 分配和负载均衡
- **多后端支持**:支持 llama-box、vLLM 和其他推理后端
- **OpenAI 兼容 API**:可直接替代 OpenAI API
- **Web UI**:用户友好的 Web 界面,用于集群管理
- **监控**:实时资源使用和模型性能指标
- **多节点**:可跨多个 GPU 服务器扩展
## 注意事项
- **生产环境安全**:部署前请更改默认的 `GPUSTACK_BOOTSTRAP_PASSWORD`
- **GPU 要求**:需要支持 CUDA 的 NVIDIA GPU确保已安装 NVIDIA Container Toolkit
- **磁盘空间**:模型下载可能有数 GB确保有足够的存储空间
- **首次部署**:初次部署模型可能需要时间来下载模型文件
- **网络**:默认情况下,服务绑定到所有接口(`0.0.0.0`);在生产环境中请限制访问
## 安全
- **更改默认密码**:首次登录后更新 `GPUSTACK_BOOTSTRAP_PASSWORD`
- **API 密钥**:使用强且唯一的 API 密钥访问 API
- **TLS/HTTPS**:在生产环境中考虑使用带 TLS 的反向代理
- **网络访问**:使用防火墙将访问限制在受信任的网络
- **更新**:保持 GPUStack 更新到最新稳定版本
## 许可证
GPUStack 采用 Apache License 2.0 许可。更多信息请参见 [GPUStack GitHub](https://github.com/gpustack/gpustack)。


@@ -9,7 +9,7 @@ x-default: &default
services:
gpustack:
<<: *default
image: gpustack/gpustack:${GPUSTACK_VERSION:-v0.5.3}
image: gpustack/gpustack:${GPUSTACK_VERSION:-v0.7.1}
ports:
- "${GPUSTACK_PORT_OVERRIDE:-80}:80"
volumes:
@@ -22,21 +22,19 @@ services:
- GPUSTACK_TOKEN=${GPUSTACK_TOKEN:-}
- GPUSTACK_BOOTSTRAP_PASSWORD=${GPUSTACK_BOOTSTRAP_PASSWORD:-admin}
- HF_TOKEN=${HF_TOKEN:-}
ipc: host
deploy:
resources:
limits:
cpus: '8.0'
memory: 8G
reservations:
cpus: '2.0'
memory: 4G
reservations:
cpus: '1.0'
memory: 2G
# Uncomment below for GPU support
# devices:
# - driver: nvidia
# count: 1
# capabilities: [gpu]
# For GPU support, uncomment the following section
# runtime: nvidia
devices:
- driver: nvidia
device_ids: [ '0' ]
capabilities: [ gpu ]
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:80/health"]
interval: 30s

src/litellm/.env.example Normal file

@@ -0,0 +1,55 @@
# LiteLLM version
LITELLM_VERSION=main-stable
# LiteLLM port (default: 4000)
LITELLM_PORT_OVERRIDE=4000
# PostgreSQL configuration
POSTGRES_VERSION=16
POSTGRES_PASSWORD=xxxxxx
POSTGRES_PORT_OVERRIDE=5432
# Prometheus configuration (optional, enable with --profile metrics)
PROMETHEUS_VERSION=v3.3.1
PROMETHEUS_PORT_OVERRIDE=9090
# LiteLLM authentication keys
LITELLM_MASTER_KEY=sk-xxxxxx
LITELLM_SALT_KEY=sk-xxxxxx
# Timezone
TZ=UTC
# ===== API Keys =====
# OpenAI
OPENAI_API_KEY=
OPENAI_BASE_URL=
# Cohere
COHERE_API_KEY=
# OpenRouter
OR_SITE_URL=
OR_APP_NAME=LiteLLM Example app
OR_API_KEY=
# Azure
AZURE_API_BASE=
AZURE_API_VERSION=
AZURE_API_KEY=
# Replicate
REPLICATE_API_KEY=
REPLICATE_API_TOKEN=
# Anthropic
ANTHROPIC_API_KEY=
# Infisical
INFISICAL_TOKEN=
# Novita AI
NOVITA_API_KEY=
# INFINITY
INFINITY_API_KEY=

src/litellm/README.md Normal file

@@ -0,0 +1,111 @@
# LiteLLM
[English](./README.md) | [中文](./README.zh.md)
This service deploys LiteLLM, a unified interface to 100+ LLM APIs (OpenAI, Azure, Anthropic, Cohere, Replicate, etc.) with load balancing, fallbacks, and cost tracking.
## Services
- `litellm`: The LiteLLM proxy service
- `db`: PostgreSQL database for storing model configurations and usage data
- `prometheus`: Prometheus metrics collector (optional, enabled with `--profile metrics`)
## Environment Variables
| Variable Name | Description | Default Value |
| ------------------------ | -------------------------------------------------------------- | ------------- |
| LITELLM_VERSION | LiteLLM image version | `main-stable` |
| LITELLM_PORT_OVERRIDE | Host port mapping for LiteLLM (maps to port 4000 in container) | 4000 |
| POSTGRES_VERSION | PostgreSQL image version | `16` |
| POSTGRES_PASSWORD | PostgreSQL database password | `xxxxxx` |
| POSTGRES_PORT_OVERRIDE | Host port mapping for PostgreSQL | 5432 |
| PROMETHEUS_VERSION | Prometheus image version (used with metrics profile) | `v3.3.1` |
| PROMETHEUS_PORT_OVERRIDE | Host port mapping for Prometheus | 9090 |
| LITELLM_MASTER_KEY | Master key for LiteLLM authentication | `sk-xxxxxx` |
| LITELLM_SALT_KEY | Salt key for secure key generation | `sk-xxxxxx` |
| TZ | Timezone setting | `UTC` |
Additional API keys can be configured in the `.env` file for various LLM providers (OpenAI, Azure, Anthropic, etc.).
Please modify the `.env` file as needed for your use case.
## Volumes
- `postgres_data`: PostgreSQL data persistence
- `prometheus_data`: Prometheus data storage (optional)
- `./config.yaml`: LiteLLM configuration file (optional, uncomment in docker-compose.yaml to use)
- `./prometheus.yml`: Prometheus configuration file (optional)
## Ports
- `4000`: LiteLLM proxy API and Web UI
- `5432`: PostgreSQL database
- `9090`: Prometheus metrics (optional, enabled with `--profile metrics`)
## First-Time Setup
1. Start the services (with optional metrics):

   ```bash
   docker compose up -d
   # Or with Prometheus metrics:
   docker compose --profile metrics up -d
   ```

2. Access the LiteLLM UI at `http://localhost:4000`
3. Default credentials:
   - Username: `admin`
   - Password: Value of `LITELLM_MASTER_KEY` environment variable
4. Configure your LLM API keys in the `.env` file or through the web UI
## Configuration
### Using a Config File
To use a `config.yaml` file for configuration:
1. Create a `config.yaml` file in the same directory as `docker-compose.yaml`
2. Uncomment the volumes and command sections in `docker-compose.yaml`
3. Configure your models, API keys, and routing rules in `config.yaml`
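A minimal `config.yaml` might look like the sketch below; the model names are placeholders, and the `os.environ/` syntax tells LiteLLM to read a key from the environment. See the LiteLLM proxy docs for the full schema.

```yaml
# Illustrative sketch only; adjust model names and providers to your setup.
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20240620
      api_key: os.environ/ANTHROPIC_API_KEY
```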
### API Keys
Add API keys for your LLM providers in the `.env` file:
- `OPENAI_API_KEY`: OpenAI API key
- `ANTHROPIC_API_KEY`: Anthropic API key
- `AZURE_API_KEY`: Azure OpenAI API key
- And more (see `.env.example`)
## Usage
### Making API Calls
Use the LiteLLM proxy endpoint with your master key:
```bash
curl -X POST http://localhost:4000/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $LITELLM_MASTER_KEY" \
-d '{
"model": "gpt-3.5-turbo",
"messages": [{"role": "user", "content": "Hello!"}]
}'
```
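The proxy's responses follow the OpenAI chat completions schema, so the assistant text can be pulled out the same way as with the upstream API. A small sketch:

```python
def extract_reply(response_json: dict) -> str:
    """Pull the assistant message text out of an OpenAI-style
    chat completions response body."""
    return response_json["choices"][0]["message"]["content"]

# Shape of a (truncated) successful response
sample = {"choices": [{"message": {"role": "assistant", "content": "Hello!"}}]}
print(extract_reply(sample))  # Hello!
```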
### Monitoring with Prometheus
If you enabled the metrics profile, access Prometheus at `http://localhost:9090` to view metrics about:
- Request counts and latencies
- Token usage
- Cost tracking
- Error rates
## Additional Information
- Official Documentation: <https://docs.litellm.ai/>
- GitHub Repository: <https://github.com/BerriAI/litellm>
- Supported LLM Providers: <https://docs.litellm.ai/docs/providers>

src/litellm/README.zh.md Normal file

@@ -0,0 +1,3 @@
# LiteLLM
默认情况下,用户名是 `admin`,密码是 `LITELLM_MASTER_KEY` 环境变量的值。


@@ -0,0 +1,110 @@
x-default: &default
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: "3"

services:
  litellm:
    <<: *default
    build:
      context: .
      args:
        target: runtime
    image: ghcr.io/berriai/litellm:${LITELLM_VERSION:-main-stable}
    # Uncomment these lines to start proxy with a config.yaml file
    # volumes:
    #   - ./config.yaml:/app/config.yaml:ro
    # command:
    #   - "--config=/app/config.yaml"
    ports:
      - "${LITELLM_PORT_OVERRIDE:-4000}:4000"
    environment:
      - DATABASE_URL=postgresql://llmproxy:${POSTGRES_PASSWORD}@db:5432/litellm
      - STORE_MODEL_IN_DB=True
      - TZ=${TZ:-UTC}
    env_file:
      - .env
    depends_on:
      db:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:4000/health/liveliness"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    extra_hosts:
      - "host.docker.internal:host-gateway"
    deploy:
      resources:
        limits:
          cpus: '2.00'
          memory: 2G
        reservations:
          cpus: '0.50'
          memory: 512M

  db:
    <<: *default
    image: postgres:${POSTGRES_VERSION:-16}
    environment:
      - POSTGRES_DB=litellm
      - POSTGRES_USER=llmproxy
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - TZ=${TZ:-UTC}
    ports:
      - "${POSTGRES_PORT_OVERRIDE:-5432}:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -d litellm -U llmproxy"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 10s
    deploy:
      resources:
        limits:
          cpus: '1.00'
          memory: 1G
        reservations:
          cpus: '0.25'
          memory: 256M

  prometheus:
    <<: *default
    image: prom/prometheus:${PROMETHEUS_VERSION:-v3.3.1}
    profiles:
      - metrics
    volumes:
      - prometheus_data:/prometheus
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
    ports:
      - "${PROMETHEUS_PORT_OVERRIDE:-9090}:9090"
    command:
      - "--config.file=/etc/prometheus/prometheus.yml"
      - "--storage.tsdb.path=/prometheus"
      - "--storage.tsdb.retention.time=15d"
    environment:
      - TZ=${TZ:-UTC}
    healthcheck:
      test: ["CMD", "wget", "--spider", "-q", "http://localhost:9090/-/healthy"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
    deploy:
      resources:
        limits:
          cpus: '1.00'
          memory: 1G
        reservations:
          cpus: '0.25'
          memory: 256M

volumes:
  prometheus_data:
  postgres_data:


@@ -0,0 +1,7 @@
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'litellm'
    static_configs:
      - targets: ['litellm:4000'] # Assuming LiteLLM exposes metrics at port 4000


@@ -0,0 +1,11 @@
# Portainer version
PORTAINER_VERSION=2.27.3-alpine
# Web UI port (default: 9000)
PORTAINER_WEB_PORT_OVERRIDE=9000
# Edge Agent port (default: 8000)
PORTAINER_EDGE_PORT_OVERRIDE=8000
# Timezone
TZ=UTC

src/portainer/README.md

@@ -0,0 +1,51 @@
# Portainer
[English](./README.md) | [中文](./README.zh.md)
This service deploys Portainer CE (Community Edition), a lightweight management UI for Docker and Docker Swarm.
## Services
- `portainer`: The Portainer CE service.
## Environment Variables
| Variable Name | Description | Default Value |
| ---------------------------- | ----------------------------------------------------------------- | --------------- |
| PORTAINER_VERSION | Portainer image version | `2.27.3-alpine` |
| PORTAINER_WEB_PORT_OVERRIDE | Host port mapping for Web UI (maps to port 9000 in container) | 9000 |
| PORTAINER_EDGE_PORT_OVERRIDE | Host port mapping for Edge Agent (maps to port 8000 in container) | 8000 |
| TZ | Timezone setting | `UTC` |
Please modify the `.env` file as needed for your use case.
## Volumes
- `portainer_data`: A named volume for storing Portainer data.
- `/var/run/docker.sock`: Docker socket (required for Portainer to manage Docker).
## Ports
- `9000`: Portainer Web UI
- `8000`: Portainer Edge Agent
## Security Notes
⚠️ **Warning**: This service mounts the Docker socket (`/var/run/docker.sock`), which grants full control over the Docker daemon. This is required for Portainer to function properly, but it means:
- Any compromise of the Portainer container could lead to full system compromise
- Ensure Portainer is properly secured with strong passwords
- Consider restricting network access to the Portainer UI
- Keep Portainer updated to the latest version
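
One way to act on the network-access point above is a Compose override file that publishes the ports on the loopback interface only. This is a sketch, not part of this service: the file name is just the Compose convention, and the `!override` tag requires Docker Compose v2.24 or newer.

```yaml
# compose.override.yaml (sketch): bind Portainer to localhost so the UI is
# reached through an SSH tunnel or a reverse proxy instead of every interface.
services:
  portainer:
    ports: !override  # replace (not merge) the port list from the base file
      - "127.0.0.1:${PORTAINER_WEB_PORT_OVERRIDE:-9000}:9000"
      - "127.0.0.1:${PORTAINER_EDGE_PORT_OVERRIDE:-8000}:8000"
```

Compose picks this file up automatically when `docker compose up -d` runs in the same directory.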
## First-Time Setup
1. After starting the service, access Portainer at `http://localhost:9000`
2. Create an admin user account (this is required on first launch)
3. Choose to manage the local Docker environment
4. You can now manage your Docker containers, images, networks, and volumes through the UI
## Additional Information
- Official Documentation: <https://docs.portainer.io/>
- GitHub Repository: <https://github.com/portainer/portainer>


@@ -0,0 +1,3 @@
# Portainer
Portainer 是一个轻量级的管理用户界面,用于 Docker(包括 Docker Swarm 集群)。Portainer 提供了一个简单的 Web UI,可以用来管理 Docker 容器、镜像、网络和卷。


@@ -0,0 +1,39 @@
x-default: &default
restart: unless-stopped
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
portainer:
<<: *default
image: portainer/portainer-ce:${PORTAINER_VERSION:-2.27.3-alpine}
ports:
- "${PORTAINER_WEB_PORT_OVERRIDE:-9000}:9000"
- "${PORTAINER_EDGE_PORT_OVERRIDE:-8000}:8000"
volumes:
# ⚠️ Security Warning: Mounting Docker socket grants full control of Docker daemon
# This is required for Portainer to function, but ensure access is properly secured
- /var/run/docker.sock:/var/run/docker.sock
- portainer_data:/data
environment:
- TZ=${TZ:-UTC}
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "http://localhost:9000/api/system/status"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
deploy:
resources:
limits:
cpus: '1.00'
memory: 512M
reservations:
cpus: '0.25'
memory: 128M
volumes:
portainer_data:

src/searxng/.env.example

@@ -0,0 +1,24 @@
# SearXNG version
SEARXNG_VERSION=2025.1.20-1ce14ef99
# SearXNG port (default: 8080)
SEARXNG_PORT_OVERRIDE=8080
# SearXNG hostname, without scheme (used by Caddy and for the base URL)
SEARXNG_HOSTNAME=localhost
# Let's Encrypt email (for HTTPS certificates, set to "internal" for self-signed)
LETSENCRYPT_EMAIL=internal
# uWSGI worker processes and threads
SEARXNG_UWSGI_WORKERS=4
SEARXNG_UWSGI_THREADS=4
# Valkey (Redis) version
VALKEY_VERSION=8-alpine
# Caddy version
CADDY_VERSION=2-alpine
# Timezone
TZ=UTC

src/searxng/README.md

@@ -0,0 +1,75 @@
# SearXNG
[English](./README.md) | [中文](./README.zh.md)
This service deploys SearXNG, a privacy-respecting metasearch engine that aggregates results from multiple search engines without tracking users.
## Services
- `searxng`: The SearXNG metasearch engine
- `redis`: Valkey (Redis-compatible) for caching search results
- `caddy`: Reverse proxy and HTTPS termination (uses host network mode)
## Environment Variables
| Variable Name | Description | Default Value |
| --------------------- | ------------------------------------------------------------------------------ | --------------------- |
| SEARXNG_VERSION | SearXNG image version | `2025.1.20-1ce14ef99` |
| SEARXNG_PORT_OVERRIDE | Host port mapping for SearXNG (maps to port 8080 in container) | 8080 |
| SEARXNG_HOSTNAME      | Hostname (without scheme) used by Caddy and for the SearXNG base URL           | `localhost`           |
| LETSENCRYPT_EMAIL | Email for Let's Encrypt HTTPS certificates (set to "internal" for self-signed) | `internal` |
| SEARXNG_UWSGI_WORKERS | Number of uWSGI worker processes | 4 |
| SEARXNG_UWSGI_THREADS | Number of uWSGI threads per worker | 4 |
| VALKEY_VERSION | Valkey (Redis) image version | `8-alpine` |
| CADDY_VERSION | Caddy reverse proxy version | `2-alpine` |
| TZ | Timezone setting | `UTC` |
Please modify the `.env` file as needed for your use case.
## Volumes
- `caddy-data`: Caddy data storage (certificates, etc.)
- `caddy-config`: Caddy configuration
- `valkey-data`: Valkey data persistence
- `./searxng`: SearXNG configuration directory (mounted to `/etc/searxng`)
## Ports
- `8080`: SearXNG Web UI (published directly; Caddy on the host network additionally proxies it on ports 80/443)
## Configuration
### SearXNG Settings
Edit configuration files in the `./searxng` directory to customize:
- Search engines to use
- UI theme and appearance
- Privacy settings
- Result filtering
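
For example, a minimal `searxng/settings.yml` sketch. `use_default_settings`, `server.secret_key`, and `search.formats` are standard SearXNG settings keys, but verify them against the version you deploy:

```yaml
# searxng/settings.yml (sketch): inherit the image defaults, override selectively.
use_default_settings: true
server:
  # Required on first start; generate a value with: openssl rand -hex 32
  secret_key: "change-me"
search:
  formats:  # response formats to allow; add json to expose the JSON API
    - html
    - json
```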
### HTTPS with Let's Encrypt
To enable HTTPS with Let's Encrypt certificates:
1. Set `LETSENCRYPT_EMAIL` to your email address in `.env`
2. Set `SEARXNG_HOSTNAME` to your domain name (e.g., `search.example.com`)
3. Ensure ports 80 and 443 are accessible from the internet
4. Create or update the `Caddyfile` with your domain configuration
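
A minimal `Caddyfile` along those lines might look as follows. This is a sketch; the upstream searxng-docker project ships a fuller Caddyfile with security headers, and Caddy fills the `{$VAR}` placeholders from the environment variables set in the compose file:

```
# Caddyfile (sketch). SEARXNG_HOSTNAME is the site address; SEARXNG_TLS is
# either "internal" (self-signed) or a Let's Encrypt account email.
{$SEARXNG_HOSTNAME}

tls {$SEARXNG_TLS}

encode zstd gzip

# Caddy runs with network_mode: host, so SearXNG's published port is local.
reverse_proxy localhost:8080
```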
### Self-Signed Certificates
By default (`LETSENCRYPT_EMAIL=internal`), Caddy will use self-signed certificates for HTTPS.
## First-Time Setup
1. Start the services
2. Access SearXNG at `http://localhost:8080` (or your configured hostname)
3. Configure your browser to use SearXNG as the default search engine (optional)
4. Customize settings through the web interface
## Additional Information
- Official Documentation: <https://docs.searxng.org/>
- GitHub Repository: <https://github.com/searxng/searxng>
- Original Project: <https://github.com/searxng/searxng-docker>

src/searxng/README.zh.md

@@ -0,0 +1,75 @@
# SearXNG
[English](./README.md) | [中文](./README.zh.md)
此服务部署 SearXNG,一个尊重隐私的元搜索引擎,它聚合多个搜索引擎的结果而不跟踪用户。
## 服务
- `searxng`:SearXNG 元搜索引擎
- `redis`:Valkey(Redis 兼容),用于缓存搜索结果
- `caddy`:反向代理和 HTTPS 终止(使用主机网络模式)
## 环境变量
| 变量名 | 说明 | 默认值 |
| --------------------- | ------------------------------------------------------------------ | --------------------- |
| SEARXNG_VERSION | SearXNG 镜像版本 | `2025.1.20-1ce14ef99` |
| SEARXNG_PORT_OVERRIDE | SearXNG 主机端口映射(映射到容器内端口 8080)                      | 8080                  |
| SEARXNG_HOSTNAME      | Caddy 反向代理的主机名(不含协议前缀)                             | `localhost`           |
| LETSENCRYPT_EMAIL | Let's Encrypt HTTPS 证书的邮箱(设置为 "internal" 使用自签名证书) | `internal` |
| SEARXNG_UWSGI_WORKERS | uWSGI 工作进程数 | 4 |
| SEARXNG_UWSGI_THREADS | 每个 uWSGI 工作进程的线程数 | 4 |
| VALKEY_VERSION        | Valkey(Redis)镜像版本                                            | `8-alpine`            |
| CADDY_VERSION | Caddy 反向代理版本 | `2-alpine` |
| TZ | 时区设置 | `UTC` |
请根据实际需求修改 `.env` 文件。
## 卷
- `caddy-data`:Caddy 数据存储(证书等)
- `caddy-config`:Caddy 配置
- `valkey-data`:Valkey 数据持久化
- `./searxng`:SearXNG 配置目录(挂载到 `/etc/searxng`)
## 端口
- `8080`:SearXNG Web UI(直接发布;Caddy 使用主机网络,另在 80/443 端口提供反向代理)
## 配置
### SearXNG 设置
编辑 `./searxng` 目录中的配置文件以自定义:
- 要使用的搜索引擎
- UI 主题和外观
- 隐私设置
- 结果过滤
### 使用 Let's Encrypt 启用 HTTPS
要启用使用 Let's Encrypt 证书的 HTTPS:
1. 在 `.env` 中将 `LETSENCRYPT_EMAIL` 设置为你的邮箱地址
2. 将 `SEARXNG_HOSTNAME` 设置为你的域名(例如 `search.example.com`)
3. 确保端口 80 和 443 可从互联网访问
4. 创建或更新 `Caddyfile` 以包含你的域名配置
### 自签名证书
默认情况下(`LETSENCRYPT_EMAIL=internal`),Caddy 将使用自签名证书提供 HTTPS。
## 首次设置
1. 启动服务
2. 访问 SearXNG:`http://localhost:8080`(或你配置的主机名)
3. 将浏览器配置为使用 SearXNG 作为默认搜索引擎(可选)
4. 通过 Web 界面自定义设置
## 附加信息
- 官方文档:<https://docs.searxng.org/>
- GitHub 仓库:<https://github.com/searxng/searxng>
- 原始项目:<https://github.com/searxng/searxng-docker>


@@ -0,0 +1,115 @@
# https://github.com/searxng/searxng-docker/blob/master/docker-compose.yaml
x-default: &default
restart: unless-stopped
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
caddy:
<<: *default
image: docker.io/library/caddy:${CADDY_VERSION:-2-alpine}
network_mode: host
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile:ro
- caddy-data:/data
- caddy-config:/config
environment:
- SEARXNG_HOSTNAME=${SEARXNG_HOSTNAME:-http://localhost}
- SEARXNG_TLS=${LETSENCRYPT_EMAIL:-internal}
- TZ=${TZ:-UTC}
cap_drop:
- ALL
cap_add:
- NET_BIND_SERVICE
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "http://localhost:2019/config/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
deploy:
resources:
limits:
cpus: '0.50'
memory: 256M
reservations:
cpus: '0.10'
memory: 64M
redis:
<<: *default
image: docker.io/valkey/valkey:${VALKEY_VERSION:-8-alpine}
command: valkey-server --save 30 1 --loglevel warning
networks:
- searxng
volumes:
- valkey-data:/data
cap_drop:
- ALL
cap_add:
- SETGID
- SETUID
- DAC_OVERRIDE
healthcheck:
test: ["CMD", "valkey-cli", "ping"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
deploy:
resources:
limits:
cpus: '0.50'
memory: 256M
reservations:
cpus: '0.10'
memory: 64M
searxng:
<<: *default
image: docker.io/searxng/searxng:${SEARXNG_VERSION:-2025.1.20-1ce14ef99}
networks:
- searxng
ports:
- "${SEARXNG_PORT_OVERRIDE:-8080}:8080"
volumes:
- ./searxng:/etc/searxng:rw
environment:
- SEARXNG_BASE_URL=https://${SEARXNG_HOSTNAME:-localhost}/
- UWSGI_WORKERS=${SEARXNG_UWSGI_WORKERS:-4}
- UWSGI_THREADS=${SEARXNG_UWSGI_THREADS:-4}
- TZ=${TZ:-UTC}
cap_drop:
- ALL
cap_add:
- CHOWN
- SETGID
- SETUID
depends_on:
redis:
condition: service_healthy
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "http://localhost:8080/healthz"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
deploy:
resources:
limits:
cpus: '1.00'
memory: 1G
reservations:
cpus: '0.25'
memory: 256M
networks:
searxng:
volumes:
caddy-data:
caddy-config:
valkey-data:


@@ -0,0 +1,11 @@
# Verdaccio version
VERDACCIO_VERSION=6.1.2
# Verdaccio container internal port (default: 4873)
VERDACCIO_PORT=4873
# Verdaccio host port mapping (default: 4873)
VERDACCIO_PORT_OVERRIDE=4873
# Timezone
TZ=UTC

src/verdaccio/README.md

@@ -0,0 +1,77 @@
# Verdaccio
[English](./README.md) | [中文](./README.zh.md)
This service deploys Verdaccio, a lightweight private npm registry proxy.
## Services
- `verdaccio`: The Verdaccio service.
## Environment Variables
| Variable Name | Description | Default Value |
| ----------------------- | ------------------------------------------------------------ | ------------- |
| VERDACCIO_VERSION | Verdaccio image version | `6.1.2` |
| VERDACCIO_PORT | Verdaccio container internal port | 4873 |
| VERDACCIO_PORT_OVERRIDE | Host port mapping (maps to Verdaccio port 4873 in container) | 4873 |
| TZ | Timezone setting | `UTC` |
Please modify the `.env` file as needed for your use case.
## Volumes
- `./storage`: Directory for storing published packages
- `./config`: Directory for Verdaccio configuration files
- `./plugins`: Directory for Verdaccio plugins
## Ports
- `4873`: Verdaccio Web UI and npm registry API
## First-Time Setup
1. After starting the service, access Verdaccio at `http://localhost:4873`
2. Create a user account:
```bash
npm adduser --registry http://localhost:4873
```
3. Configure npm to use your Verdaccio registry:
```bash
npm set registry http://localhost:4873
```
## Usage
### Publish a Package
```bash
npm publish --registry http://localhost:4873
```
### Install Packages
```bash
npm install <package-name> --registry http://localhost:4873
```
### Use as an Upstream Proxy
Verdaccio can proxy requests to the public npm registry. Packages not found locally will be fetched from npmjs.org and cached.
## Configuration
Edit the configuration file in `./config/config.yaml` to customize Verdaccio behavior:
- Authentication settings
- Package access control
- Upstream npm registry settings
- Web UI customization
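
As a sketch of the proxy-related parts, following the shape of Verdaccio's default configuration (verify against the config file shipped with the image):

```yaml
# config/config.yaml (sketch): serve local packages, proxy the rest to npmjs.
uplinks:
  npmjs:
    url: https://registry.npmjs.org/
packages:
  '@*/*':  # scoped packages
    access: $all
    publish: $authenticated
    proxy: npmjs
  '**':
    access: $all
    publish: $authenticated
    proxy: npmjs
```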
## Additional Information
- Official Documentation: <https://verdaccio.org/docs/what-is-verdaccio>
- GitHub Repository: <https://github.com/verdaccio/verdaccio>


@@ -0,0 +1,3 @@
# Verdaccio
Verdaccio 是一个轻量级的私有 npm 注册表,允许用户在本地或私有网络中托管和共享 npm 包。它是一个开源项目,旨在提供一个简单易用的解决方案,以便开发人员可以更好地管理他们的 npm 依赖项。


@@ -0,0 +1,41 @@
x-default: &default
restart: unless-stopped
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
verdaccio:
<<: *default
image: verdaccio/verdaccio:${VERDACCIO_VERSION:-6.1.2}
networks:
- verdaccio
environment:
- VERDACCIO_PORT=${VERDACCIO_PORT:-4873}
- TZ=${TZ:-UTC}
ports:
- "${VERDACCIO_PORT_OVERRIDE:-4873}:4873"
volumes:
- ./storage:/verdaccio/storage
- ./config:/verdaccio/conf
- ./plugins:/verdaccio/plugins
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "http://localhost:4873/-/ping"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
deploy:
resources:
limits:
cpus: '1.00'
memory: 512M
reservations:
cpus: '0.25'
memory: 128M
networks:
verdaccio:
driver: bridge