feat(nanobot): add README and docker-compose configuration for multi-channel AI assistant

feat(openclaw): introduce OpenClaw personal AI assistant with multi-channel support and CLI

fix(mineru): update MinerU version to 2.7.6 in Dockerfile and documentation
Commit d53dffca83 (parent 28ed2462af) by Sun-ZhenXing, 2026-02-07 13:34:36 +08:00
14 changed files with 874 additions and 110 deletions


@@ -1,5 +1,7 @@
# Compose Anything
+[中文说明](README.zh.md) | [English](README.md)
Compose Anything helps users quickly deploy various services by providing a set of high-quality Docker Compose configuration files. These configurations constrain resource usage, can be easily migrated to systems like K8S, and are easy to understand and modify.
## Build Services
@@ -12,7 +14,7 @@ These services require building custom Docker images from source.
| [goose](./builds/goose) | 1.18.0 |
| [IOPaint](./builds/io-paint) | 1.6.0 |
| [K3s inside DinD](./builds/k3s-inside-dind) | 0.2.2 |
-| [MinerU vLLM](./builds/mineru) | 2.7.2 |
+| [MinerU vLLM](./builds/mineru) | 2.7.6 |
## Supported Services
@@ -89,6 +91,7 @@ These services require building custom Docker images from source.
| [MongoDB Standalone](./src/mongodb-standalone) | 8.2.3 |
| [MySQL](./src/mysql) | 9.4.0 |
| [n8n](./apps/n8n) | 1.114.0 |
+| [Nanobot](./apps/nanobot) | v0.1.3.post4 |
| [Nacos](./src/nacos) | v3.1.0 |
| [NebulaGraph](./src/nebulagraph) | v3.8.0 |
| [NexaSDK](./src/nexa-sdk) | v0.2.62 |


@@ -1,5 +1,7 @@
# Compose Anything
+[中文说明](README.zh.md) | [English](README.md)
Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,帮助用户快速部署各种服务。这些配置约束了资源使用,可快速迁移到 K8S 等系统,并且易于理解和修改。
## 构建服务
@@ -12,7 +14,7 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
| [goose](./builds/goose) | 1.18.0 |
| [IOPaint](./builds/io-paint) | 1.6.0 |
| [K3s inside DinD](./builds/k3s-inside-dind) | 0.2.2 |
-| [MinerU vLLM](./builds/mineru) | 2.7.2 |
+| [MinerU vLLM](./builds/mineru) | 2.7.6 |
## 已经支持的服务
@@ -89,6 +91,7 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
| [MongoDB Standalone](./src/mongodb-standalone) | 8.2.3 |
| [MySQL](./src/mysql) | 9.4.0 |
| [n8n](./apps/n8n) | 1.114.0 |
+| [Nanobot](./apps/nanobot) | v0.1.3.post4 |
| [Nacos](./src/nacos) | v3.1.0 |
| [NebulaGraph](./src/nebulagraph) | v3.8.0 |
| [NexaSDK](./src/nexa-sdk) | v0.2.62 |

apps/nanobot/.env.example (new file, 144 lines)

@@ -0,0 +1,144 @@
# Nanobot version
NANOBOT_VERSION=v0.1.3.post4
# Timezone
TZ=UTC
# Port override
NANOBOT_PORT_OVERRIDE=18790
# Command to run (gateway, onboard, status, agent, etc.)
NANOBOT_COMMAND=gateway
# ============================================================================
# LLM Provider Configuration
# ============================================================================
# OpenRouter (recommended for global access to all models)
# Get API key: https://openrouter.ai/keys
OPENROUTER_API_KEY=
# Anthropic (Claude direct access)
# Get API key: https://console.anthropic.com
ANTHROPIC_API_KEY=
# OpenAI (GPT direct access)
# Get API key: https://platform.openai.com
OPENAI_API_KEY=
# Google Gemini
# Get API key: https://aistudio.google.com
GEMINI_API_KEY=
# DeepSeek
# Get API key: https://platform.deepseek.com
DEEPSEEK_API_KEY=
# Groq (LLM + Voice transcription)
# Get API key: https://console.groq.com
GROQ_API_KEY=
# Zhipu AI (智谱 AI)
# Get API key: https://open.bigmodel.cn
ZHIPU_API_KEY=
# Alibaba Cloud DashScope (阿里云通义千问)
# Get API key: https://dashscope.console.aliyun.com
DASHSCOPE_API_KEY=
# Moonshot (月之暗面 Kimi)
# Get API key: https://platform.moonshot.cn
MOONSHOT_API_KEY=
# vLLM / Local LLM Server
# For local models running on vLLM or OpenAI-compatible server
VLLM_API_KEY=dummy
VLLM_API_BASE=http://localhost:8000/v1
# ============================================================================
# Agent Configuration
# ============================================================================
# Model to use (examples: anthropic/claude-opus-4-5, gpt-4, deepseek/deepseek-chat)
NANOBOT_MODEL=anthropic/claude-opus-4-5
# Maximum tokens for model response
NANOBOT_MAX_TOKENS=8192
# Temperature for model (0.0-1.0, higher = more creative)
NANOBOT_TEMPERATURE=0.7
# Maximum tool iterations per turn
NANOBOT_MAX_TOOL_ITERATIONS=20
# ============================================================================
# Channel Configuration
# ============================================================================
# Telegram
# 1. Create bot via @BotFather on Telegram
# 2. Get your user ID from @userinfobot
TELEGRAM_ENABLED=false
TELEGRAM_TOKEN=
TELEGRAM_PROXY=
# Discord
# 1. Create bot at https://discord.com/developers/applications
# 2. Enable MESSAGE CONTENT INTENT in bot settings
DISCORD_ENABLED=false
DISCORD_TOKEN=
# WhatsApp (requires Node.js bridge)
# 1. Run `nanobot channels login` to scan QR code
# 2. Bridge URL points to the WhatsApp bridge server
WHATSAPP_ENABLED=false
WHATSAPP_BRIDGE_URL=ws://localhost:3001
# Feishu (飞书/Lark)
# 1. Create app at https://open.feishu.cn/app
# 2. Enable Bot capability and add im:message permission
FEISHU_ENABLED=false
FEISHU_APP_ID=
FEISHU_APP_SECRET=
FEISHU_ENCRYPT_KEY=
FEISHU_VERIFICATION_TOKEN=
# ============================================================================
# Tools Configuration
# ============================================================================
# Brave Search API (for web search tool)
# Get API key: https://brave.com/search/api/
BRAVE_API_KEY=
# Web search max results
WEB_SEARCH_MAX_RESULTS=5
# Shell command execution timeout (seconds)
EXEC_TIMEOUT=60
# Restrict all tool access to workspace directory
# Set to true for production/sandboxed environments
RESTRICT_TO_WORKSPACE=false
# ============================================================================
# Gateway Configuration
# ============================================================================
# Gateway host (0.0.0.0 allows external connections)
GATEWAY_HOST=0.0.0.0
# Gateway port (internal port, mapped via NANOBOT_PORT_OVERRIDE)
GATEWAY_PORT=18790
# ============================================================================
# Resource Limits
# ============================================================================
# CPU limits
NANOBOT_CPU_LIMIT=1.0
NANOBOT_CPU_RESERVATION=0.5
# Memory limits
NANOBOT_MEMORY_LIMIT=1G
NANOBOT_MEMORY_RESERVATION=512M

apps/nanobot/README.md (new file, 269 lines)

@@ -0,0 +1,269 @@
# Nanobot
[中文说明](README.zh.md) | [English](README.md)
Nanobot is a lightweight, production-ready personal AI assistant with multi-channel support (Telegram, Discord, WhatsApp, Feishu), local model integration, and powerful tool capabilities.
## Features
- 🤖 **Multi-Provider LLM Support**: OpenRouter, Anthropic, OpenAI, DeepSeek, Groq, Gemini, and more
- 🖥️ **Local Models**: Run your own models with vLLM or any OpenAI-compatible server
- 💬 **Multi-Channel**: Telegram, Discord, WhatsApp, and Feishu (飞书) integration
- 🛠️ **Powerful Tools**: File operations, shell execution, web search, and custom skills
- 📅 **Scheduled Tasks**: Cron-like job scheduling with natural language
- 🎯 **Memory & Skills**: Persistent memory and extensible skill system
- 🔒 **Security**: Sandbox mode, access control, and safe command execution
## Quick Start
### Prerequisites
- Docker and Docker Compose installed
- At least one LLM provider API key (recommended: [OpenRouter](https://openrouter.ai/keys))
### Setup
1. **Copy the example environment file:**
```bash
cp .env.example .env
```
2. **Edit `.env` and configure at least one LLM provider:**
```bash
# For OpenRouter (recommended for global access)
OPENROUTER_API_KEY=sk-or-v1-xxxxx
# Or use any other provider
ANTHROPIC_API_KEY=sk-ant-xxxxx
OPENAI_API_KEY=sk-xxxxx
```
3. **Start the service:**
```bash
docker compose up -d
```
4. **Initialize configuration (first time only):**
```bash
docker compose exec nanobot nanobot onboard
```
5. **Check status:**
```bash
docker compose exec nanobot nanobot status
```
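Before moving on, it can help to confirm that step 2 actually took effect. A minimal sketch, assuming the key names from `.env.example` (the `check_env` helper and its messages are illustrative, not part of nanobot):

```shell
# Illustrative helper, not a nanobot command: report whether .env
# defines at least one non-empty LLM provider API key.
check_env() {
  f="${1:-.env}"
  if grep -Eq '^(OPENROUTER|ANTHROPIC|OPENAI|GEMINI|DEEPSEEK|GROQ)_API_KEY=.+' "$f"; then
    echo "provider key found"
  else
    echo "no provider key configured"
  fi
}
```

If it reports no key, the agent will fail on its first LLM call, so fix `.env` before `docker compose up -d`.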
## Usage
### CLI Mode
Chat directly with nanobot:
```bash
docker compose exec nanobot nanobot agent -m "What is 2+2?"
```
Interactive mode:
```bash
docker compose exec nanobot nanobot agent
```
### Gateway Mode (Default)
The default `docker compose up` starts the gateway server, which enables:
- Telegram bot integration
- Discord bot integration
- WhatsApp messaging (requires additional setup)
- Feishu/Lark integration
- HTTP API access (port 18790)
### Channel Setup
#### Telegram
1. Create a bot via [@BotFather](https://t.me/BotFather) on Telegram
2. Get your user ID from [@userinfobot](https://t.me/userinfobot)
3. Configure in `.env`:
```bash
TELEGRAM_ENABLED=true
TELEGRAM_TOKEN=your_bot_token
```
4. Restart the service
#### Discord
1. Create a bot at [Discord Developer Portal](https://discord.com/developers/applications)
2. Enable **MESSAGE CONTENT INTENT** in bot settings
3. Configure in `.env`:
```bash
DISCORD_ENABLED=true
DISCORD_TOKEN=your_bot_token
```
4. Restart the service
#### WhatsApp
Requires Node.js and additional setup. See [official documentation](https://github.com/HKUDS/nanobot#-chat-apps) for details.
#### Feishu (飞书)
1. Create an app at [Feishu Open Platform](https://open.feishu.cn/app)
2. Enable Bot capability and add `im:message` permission
3. Configure in `.env`:
```bash
FEISHU_ENABLED=true
FEISHU_APP_ID=your_app_id
FEISHU_APP_SECRET=your_app_secret
```
4. Restart the service
## Configuration
### Environment Variables
See [.env.example](.env.example) for all available configuration options.
Key settings:
| Variable | Description | Default |
| ----------------------- | ------------------------------------------ | --------------------------- |
| `NANOBOT_MODEL` | LLM model to use | `anthropic/claude-opus-4-5` |
| `NANOBOT_COMMAND` | Command to run (gateway, agent, status) | `gateway` |
| `RESTRICT_TO_WORKSPACE` | Sandbox mode - restrict tools to workspace | `false` |
| `BRAVE_API_KEY` | API key for web search tool | (empty) |
| `TELEGRAM_ENABLED` | Enable Telegram channel | `false` |
| `DISCORD_ENABLED` | Enable Discord channel | `false` |
### LLM Provider Priority
When multiple providers are configured, nanobot will:
1. Match provider based on model name (e.g., `gpt-4` → OpenAI)
2. Fall back to first available API key
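The selection order above can be sketched roughly as follows; the prefix-to-provider mapping here is an assumption based on common model-name conventions, not nanobot's actual routing table:

```shell
# Illustrative only: map a model name to a likely provider,
# falling back to OpenRouter when no known prefix matches.
pick_provider() {
  case "$1" in
    gpt-*|openai/*)       echo "openai" ;;
    claude-*|anthropic/*) echo "anthropic" ;;
    deepseek/*)           echo "deepseek" ;;
    *)                    echo "openrouter" ;;  # first-available fallback
  esac
}
```

For example, `pick_provider gpt-4` selects OpenAI, while an unrecognized name falls through to the fallback.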
### Security
For production deployments:
- Set `RESTRICT_TO_WORKSPACE=true` to sandbox all file and shell operations
- Configure `allowFrom` lists in the config file for channel access control
- Use dedicated user accounts for channel integrations
- Monitor API usage and set spending limits
- Keep credentials in environment variables, never in code
## Scheduled Tasks
Run tasks on a schedule:
```bash
# Add a daily reminder
docker compose exec nanobot nanobot cron add \
--name "morning" \
--message "Good morning! What's on the agenda?" \
--cron "0 9 * * *"
# List scheduled jobs
docker compose exec nanobot nanobot cron list
# Remove a job
docker compose exec nanobot nanobot cron remove <job_id>
```
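The `--cron "0 9 * * *"` schedule above uses standard five-field cron syntax: minute, hour, day of month, month, day of week. A tiny sketch that labels the fields (the helper name is illustrative, not a nanobot command):

```shell
# Illustrative: print the five standard cron fields by name.
explain_cron() {
  set -f        # keep "*" literal (disable pathname expansion)
  set -- $1     # split the expression on whitespace
  echo "minute=$1 hour=$2 day=$3 month=$4 weekday=$5"
  set +f
}
```

So `explain_cron "0 9 * * *"` reads as minute 0 of hour 9, every day of every month.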
## Local Models (vLLM)
Run nanobot with your own local models:
1. **Start a vLLM server:**
```bash
vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
```
2. **Configure in `.env`:**
```bash
VLLM_API_KEY=dummy
VLLM_API_BASE=http://host.docker.internal:8000/v1
NANOBOT_MODEL=meta-llama/Llama-3.1-8B-Instruct
```
3. **Restart the service**
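Note the base URL change between steps: inside the container, `localhost` refers to the container itself, so the host's vLLM server is addressed as `host.docker.internal` (available on Docker Desktop; on Linux you may need an `extra_hosts` entry). A one-line sketch of the rewrite (helper name is illustrative):

```shell
# Illustrative: rewrite a host-side localhost URL to the address
# a container can use to reach the host (Docker Desktop convention).
to_container_url() {
  printf '%s\n' "$1" | sed 's/localhost/host.docker.internal/'
}
```

For example, `to_container_url http://localhost:8000/v1` yields the `VLLM_API_BASE` value used in step 2.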
## Volumes
- `nanobot_config`: Configuration files and credentials
- `nanobot_workspace`: Agent workspace and files
## Ports
- `18790`: Gateway HTTP API (configurable via `NANOBOT_PORT_OVERRIDE`)
## Resource Limits
Default resource limits:
- CPU: 1.0 cores (limit), 0.5 cores (reservation)
- Memory: 1GB (limit), 512MB (reservation)
Adjust via environment variables: `NANOBOT_CPU_LIMIT`, `NANOBOT_MEMORY_LIMIT`, etc.
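For example, to double both ceilings you can set the overrides in `.env` (values are illustrative; if the keys are already present, edit those lines in place rather than appending duplicates):

```shell
# Illustrative: raise the CPU/memory limits consumed by the compose file.
cat >> .env <<'EOF'
NANOBOT_CPU_LIMIT=2.0
NANOBOT_MEMORY_LIMIT=2G
EOF
```

Then re-run `docker compose up -d` so the new limits are applied.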
## Troubleshooting
### Check logs
```bash
docker compose logs -f nanobot
```
### Verify configuration
```bash
docker compose exec nanobot nanobot status
```
### Test LLM connection
```bash
docker compose exec nanobot nanobot agent -m "Hello!"
```
### Common issues
**No API key configured:**
- Ensure at least one provider API key is set in `.env`
- Restart the service after updating environment variables
**Channel not responding:**
- Check that the channel is enabled in `.env`
- Verify bot tokens are correct
- Check logs for connection errors
**File permission errors:**
- Ensure volumes have proper permissions
- Try running with `RESTRICT_TO_WORKSPACE=false` for debugging
## License
Nanobot is an open-source project. See the [official repository](https://github.com/HKUDS/nanobot) for license details.
## Links
- Official Repository: <https://github.com/HKUDS/nanobot>
- Documentation: <https://github.com/HKUDS/nanobot#readme>
- Issues: <https://github.com/HKUDS/nanobot/issues>

apps/nanobot/README.zh.md (new file, 269 lines)

@@ -0,0 +1,269 @@
# Nanobot
[中文说明](README.zh.md) | [English](README.md)
Nanobot 是一个轻量级、生产就绪的个人 AI 助手,支持多渠道(Telegram、Discord、WhatsApp、飞书)、本地模型集成以及强大的工具能力。
## 特性
- 🤖 **多 LLM 提供商支持**:OpenRouter、Anthropic、OpenAI、DeepSeek、Groq、Gemini 等
- 🖥️ **本地模型**:使用 vLLM 或任何 OpenAI 兼容服务器运行您自己的模型
- 💬 **多渠道**:集成 Telegram、Discord、WhatsApp 和飞书
- 🛠️ **强大工具**文件操作、Shell 执行、网络搜索和自定义技能
- 📅 **定时任务**:支持自然语言的类 Cron 任务调度
- 🎯 **记忆与技能**:持久化记忆和可扩展技能系统
- 🔒 **安全性**:沙盒模式、访问控制和安全命令执行
## 快速开始
### 前置要求
- 已安装 Docker 和 Docker Compose
- 至少一个 LLM 提供商 API 密钥(推荐:[OpenRouter](https://openrouter.ai/keys))
### 配置步骤
1. **复制环境变量示例文件:**
```bash
cp .env.example .env
```
2. **编辑 `.env` 并至少配置一个 LLM 提供商:**
```bash
# 使用 OpenRouter(推荐,可访问所有模型)
OPENROUTER_API_KEY=sk-or-v1-xxxxx
# 或使用其他提供商
ANTHROPIC_API_KEY=sk-ant-xxxxx
OPENAI_API_KEY=sk-xxxxx
```
3. **启动服务:**
```bash
docker compose up -d
```
4. **初始化配置(仅首次需要):**
```bash
docker compose exec nanobot nanobot onboard
```
5. **检查状态:**
```bash
docker compose exec nanobot nanobot status
```
## 使用方法
### CLI 模式
直接与 nanobot 对话:
```bash
docker compose exec nanobot nanobot agent -m "2+2 等于多少?"
```
交互模式:
```bash
docker compose exec nanobot nanobot agent
```
### 网关模式(默认)
默认的 `docker compose up` 会启动网关服务器,支持:
- Telegram 机器人集成
- Discord 机器人集成
- WhatsApp 消息(需要额外配置)
- 飞书 / Lark 集成
- HTTP API 访问(端口 18790)
### 渠道配置
#### Telegram
1. 通过 Telegram 上的 [@BotFather](https://t.me/BotFather) 创建机器人
2. 从 [@userinfobot](https://t.me/userinfobot) 获取您的用户 ID
3. 在 `.env` 中配置:
```bash
TELEGRAM_ENABLED=true
TELEGRAM_TOKEN=你的机器人令牌
```
4. 重启服务
#### Discord
1. 在 [Discord 开发者门户](https://discord.com/developers/applications) 创建机器人
2. 在机器人设置中启用 **MESSAGE CONTENT INTENT**
3. 在 `.env` 中配置:
```bash
DISCORD_ENABLED=true
DISCORD_TOKEN=你的机器人令牌
```
4. 重启服务
#### WhatsApp
需要 Node.js 和额外配置。详见 [官方文档](https://github.com/HKUDS/nanobot#-chat-apps)。
#### 飞书
1. 在 [飞书开放平台](https://open.feishu.cn/app) 创建应用
2. 启用机器人能力并添加 `im:message` 权限
3. 在 `.env` 中配置:
```bash
FEISHU_ENABLED=true
FEISHU_APP_ID=你的应用ID
FEISHU_APP_SECRET=你的应用密钥
```
4. 重启服务
## 配置
### 环境变量
所有可用配置选项请参见 [.env.example](.env.example)。
关键设置:
| 变量 | 描述 | 默认值 |
| ----------------------- | -------------------------------------- | --------------------------- |
| `NANOBOT_MODEL` | 要使用的 LLM 模型 | `anthropic/claude-opus-4-5` |
| `NANOBOT_COMMAND` | 要运行的命令(gateway、agent、status) | `gateway` |
| `RESTRICT_TO_WORKSPACE` | 沙盒模式 - 限制工具访问工作空间 | `false` |
| `BRAVE_API_KEY` | 网络搜索工具的 API 密钥 | (空) |
| `TELEGRAM_ENABLED` | 启用 Telegram 渠道 | `false` |
| `DISCORD_ENABLED` | 启用 Discord 渠道 | `false` |
### LLM 提供商优先级
当配置了多个提供商时,nanobot 将:
1. 根据模型名称匹配提供商(例如 `gpt-4` → OpenAI)
2. 回退到第一个可用的 API 密钥
### 安全性
对于生产部署:
- 设置 `RESTRICT_TO_WORKSPACE=true` 以沙盒化所有文件和 Shell 操作
- 在配置文件中为渠道访问控制配置 `allowFrom` 列表
- 为渠道集成使用专用用户账户
- 监控 API 使用并设置支出限制
- 将凭证保存在环境变量中,绝不在代码中
## 定时任务
按计划运行任务:
```bash
# 添加每日提醒
docker compose exec nanobot nanobot cron add \
--name "morning" \
--message "早上好!今天有什么安排?" \
--cron "0 9 * * *"
# 列出计划任务
docker compose exec nanobot nanobot cron list
# 删除任务
docker compose exec nanobot nanobot cron remove <job_id>
```
## 本地模型(vLLM)
使用您自己的本地模型运行 nanobot:
1. **启动 vLLM 服务器:**
```bash
vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
```
2. **在 `.env` 中配置:**
```bash
VLLM_API_KEY=dummy
VLLM_API_BASE=http://host.docker.internal:8000/v1
NANOBOT_MODEL=meta-llama/Llama-3.1-8B-Instruct
```
3. **重启服务**
## 数据卷
- `nanobot_config`:配置文件和凭证
- `nanobot_workspace`:代理工作空间和文件
## 端口
- `18790`:网关 HTTP API(可通过 `NANOBOT_PORT_OVERRIDE` 配置)
## 资源限制
默认资源限制:
- CPU:1.0 核心(限制),0.5 核心(预留)
- 内存:1GB(限制),512MB(预留)
通过环境变量调整:`NANOBOT_CPU_LIMIT`、`NANOBOT_MEMORY_LIMIT` 等。
## 故障排除
### 查看日志
```bash
docker compose logs -f nanobot
```
### 验证配置
```bash
docker compose exec nanobot nanobot status
```
### 测试 LLM 连接
```bash
docker compose exec nanobot nanobot agent -m "你好!"
```
### 常见问题
**未配置 API 密钥:**
- 确保在 `.env` 中至少设置了一个提供商 API 密钥
- 更新环境变量后重启服务
**渠道无响应:**
- 检查渠道是否在 `.env` 中启用
- 验证机器人令牌是否正确
- 检查日志中的连接错误
**文件权限错误:**
- 确保数据卷具有适当的权限
- 调试时尝试使用 `RESTRICT_TO_WORKSPACE=false` 运行
## 许可证
Nanobot 是一个开源项目。许可证详情请参见 [官方仓库](https://github.com/HKUDS/nanobot)。
## 链接
- 官方仓库:<https://github.com/HKUDS/nanobot>
- 文档:<https://github.com/HKUDS/nanobot#readme>
- 问题反馈:<https://github.com/HKUDS/nanobot/issues>


@@ -0,0 +1,76 @@
x-defaults: &defaults
restart: unless-stopped
logging:
driver: json-file
options:
max-size: 100m
max-file: "3"
services:
nanobot:
<<: *defaults
image: ${GLOBAL_REGISTRY:-ghcr.io/}hkuds/nanobot:${NANOBOT_VERSION:-v0.1.3.post4}
ports:
- "${NANOBOT_PORT_OVERRIDE:-18790}:18790"
volumes:
- nanobot_config:/root/.nanobot
- nanobot_workspace:/root/.nanobot/workspace
environment:
- TZ=${TZ:-UTC}
# LLM Provider Configuration
- NANOBOT_PROVIDERS__OPENROUTER__API_KEY=${OPENROUTER_API_KEY:-}
- NANOBOT_PROVIDERS__ANTHROPIC__API_KEY=${ANTHROPIC_API_KEY:-}
- NANOBOT_PROVIDERS__OPENAI__API_KEY=${OPENAI_API_KEY:-}
- NANOBOT_PROVIDERS__GEMINI__API_KEY=${GEMINI_API_KEY:-}
- NANOBOT_PROVIDERS__DEEPSEEK__API_KEY=${DEEPSEEK_API_KEY:-}
- NANOBOT_PROVIDERS__GROQ__API_KEY=${GROQ_API_KEY:-}
- NANOBOT_PROVIDERS__ZHIPU__API_KEY=${ZHIPU_API_KEY:-}
- NANOBOT_PROVIDERS__DASHSCOPE__API_KEY=${DASHSCOPE_API_KEY:-}
- NANOBOT_PROVIDERS__MOONSHOT__API_KEY=${MOONSHOT_API_KEY:-}
- NANOBOT_PROVIDERS__VLLM__API_KEY=${VLLM_API_KEY:-}
- NANOBOT_PROVIDERS__VLLM__API_BASE=${VLLM_API_BASE:-}
# Agent Configuration
- NANOBOT_AGENTS__DEFAULTS__MODEL=${NANOBOT_MODEL:-anthropic/claude-opus-4-5}
- NANOBOT_AGENTS__DEFAULTS__MAX_TOKENS=${NANOBOT_MAX_TOKENS:-8192}
- NANOBOT_AGENTS__DEFAULTS__TEMPERATURE=${NANOBOT_TEMPERATURE:-0.7}
- NANOBOT_AGENTS__DEFAULTS__MAX_TOOL_ITERATIONS=${NANOBOT_MAX_TOOL_ITERATIONS:-20}
# Channel Configuration
- NANOBOT_CHANNELS__TELEGRAM__ENABLED=${TELEGRAM_ENABLED:-false}
- NANOBOT_CHANNELS__TELEGRAM__TOKEN=${TELEGRAM_TOKEN:-}
- NANOBOT_CHANNELS__TELEGRAM__PROXY=${TELEGRAM_PROXY:-}
- NANOBOT_CHANNELS__DISCORD__ENABLED=${DISCORD_ENABLED:-false}
- NANOBOT_CHANNELS__DISCORD__TOKEN=${DISCORD_TOKEN:-}
- NANOBOT_CHANNELS__WHATSAPP__ENABLED=${WHATSAPP_ENABLED:-false}
- NANOBOT_CHANNELS__WHATSAPP__BRIDGE_URL=${WHATSAPP_BRIDGE_URL:-ws://localhost:3001}
- NANOBOT_CHANNELS__FEISHU__ENABLED=${FEISHU_ENABLED:-false}
- NANOBOT_CHANNELS__FEISHU__APP_ID=${FEISHU_APP_ID:-}
- NANOBOT_CHANNELS__FEISHU__APP_SECRET=${FEISHU_APP_SECRET:-}
- NANOBOT_CHANNELS__FEISHU__ENCRYPT_KEY=${FEISHU_ENCRYPT_KEY:-}
- NANOBOT_CHANNELS__FEISHU__VERIFICATION_TOKEN=${FEISHU_VERIFICATION_TOKEN:-}
# Tools Configuration
- NANOBOT_TOOLS__WEB__SEARCH__API_KEY=${BRAVE_API_KEY:-}
- NANOBOT_TOOLS__WEB__SEARCH__MAX_RESULTS=${WEB_SEARCH_MAX_RESULTS:-5}
- NANOBOT_TOOLS__EXEC__TIMEOUT=${EXEC_TIMEOUT:-60}
- NANOBOT_TOOLS__RESTRICT_TO_WORKSPACE=${RESTRICT_TO_WORKSPACE:-false}
# Gateway Configuration
- NANOBOT_GATEWAY__HOST=${GATEWAY_HOST:-0.0.0.0}
- NANOBOT_GATEWAY__PORT=${GATEWAY_PORT:-18790}
command: ${NANOBOT_COMMAND:-gateway}
healthcheck:
test: ["CMD", "python", "-c", "import sys; sys.exit(0)"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
deploy:
resources:
limits:
cpus: ${NANOBOT_CPU_LIMIT:-1.0}
memory: ${NANOBOT_MEMORY_LIMIT:-1G}
reservations:
cpus: ${NANOBOT_CPU_RESERVATION:-0.5}
memory: ${NANOBOT_MEMORY_RESERVATION:-512M}
volumes:
nanobot_config:
nanobot_workspace:


@@ -1,4 +1,4 @@
-# MoltBot Environment Configuration
+# OpenClaw Environment Configuration
# Copy this file to .env and configure the values
# Timezone (default: UTC)
@@ -8,27 +8,27 @@ TZ=UTC
# Examples: docker.io/, ghcr.io/, your-registry.com/
GLOBAL_REGISTRY=
-# MoltBot Version
+# OpenClaw Version
-# Use 'main' for latest, or specific version tag like 'v2026.1.27'
+# Use 'main' for latest, or specific version tag like 'v2026.2.3'
-MOLTBOT_VERSION=main
+OPENCLAW_VERSION=2026.2.3
# === Gateway Configuration ===
# Gateway access token (REQUIRED - generate a secure random token)
# Example: openssl rand -hex 32
-MOLTBOT_GATEWAY_TOKEN=your-secure-token-here
+OPENCLAW_GATEWAY_TOKEN=your-secure-token-here
# Gateway bind address
# Options: loopback (127.0.0.1), lan (0.0.0.0 for LAN access)
-MOLTBOT_GATEWAY_BIND=lan
+OPENCLAW_GATEWAY_BIND=lan
# Gateway internal port (default: 18789)
-MOLTBOT_GATEWAY_PORT=18789
+OPENCLAW_GATEWAY_PORT=18789
# Gateway host port override (default: 18789)
-MOLTBOT_GATEWAY_PORT_OVERRIDE=18789
+OPENCLAW_GATEWAY_PORT_OVERRIDE=18789
# Bridge port override (default: 18790)
-MOLTBOT_BRIDGE_PORT_OVERRIDE=18790
+OPENCLAW_BRIDGE_PORT_OVERRIDE=18790
# === Model API Keys (Optional - if not using OAuth) ===
# Anthropic Claude API Key
@@ -44,11 +44,11 @@ CLAUDE_WEB_COOKIE=
# === Resource Limits ===
# Gateway service resource limits
-MOLTBOT_CPU_LIMIT=2.0
+OPENCLAW_CPU_LIMIT=2.0
-MOLTBOT_MEMORY_LIMIT=2G
+OPENCLAW_MEMORY_LIMIT=2G
-MOLTBOT_CPU_RESERVATION=1.0
+OPENCLAW_CPU_RESERVATION=1.0
-MOLTBOT_MEMORY_RESERVATION=1G
+OPENCLAW_MEMORY_RESERVATION=1G
# CLI service resource limits
-MOLTBOT_CLI_CPU_LIMIT=1.0
+OPENCLAW_CLI_CPU_LIMIT=1.0
-MOLTBOT_CLI_MEMORY_LIMIT=512M
+OPENCLAW_CLI_MEMORY_LIMIT=512M

@@ -1,6 +1,6 @@
-# MoltBot
+# OpenClaw
-MoltBot is a personal AI assistant that runs on your own devices. It integrates with multiple messaging platforms (WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, Microsoft Teams, WebChat) and provides AI-powered assistance across all your channels.
+OpenClaw is a personal AI assistant that runs on your own devices. It integrates with multiple messaging platforms (WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, Microsoft Teams, WebChat) and provides AI-powered assistance across all your channels.
## Features
@@ -32,7 +32,7 @@ MoltBot is a personal AI assistant that runs on your own devices. It integrates
```
3. Edit `.env` and set at least:
-- `MOLTBOT_GATEWAY_TOKEN` - Your generated token
+- `OPENCLAW_GATEWAY_TOKEN` - Your generated token
- `ANTHROPIC_API_KEY` or `OPENAI_API_KEY` - If using API key auth
4. Start the gateway:
@@ -51,8 +51,8 @@ MoltBot is a personal AI assistant that runs on your own devices. It integrates
The gateway can be accessed in two ways:
-- **Loopback** (`MOLTBOT_GATEWAY_BIND=loopback`): Only accessible from the host machine (127.0.0.1)
+- **Loopback** (`OPENCLAW_GATEWAY_BIND=loopback`): Only accessible from the host machine (127.0.0.1)
-- **LAN** (`MOLTBOT_GATEWAY_BIND=lan`): Accessible from your local network (0.0.0.0)
+- **LAN** (`OPENCLAW_GATEWAY_BIND=lan`): Accessible from your local network (0.0.0.0)
For production deployments, consider:
@@ -62,7 +62,7 @@ For production deployments, consider:
### Model Configuration
-MoltBot supports multiple AI model providers:
+OpenClaw supports multiple AI model providers:
- **Anthropic Claude** (Recommended): Claude Pro/Max with OAuth or API key
- **OpenAI**: ChatGPT/Codex with OAuth or API key
@@ -86,7 +86,7 @@ To connect messaging platforms:
4. **Slack**: Set `SLACK_BOT_TOKEN` and `SLACK_APP_TOKEN` in config
-See the [official documentation](https://docs.molt.bot/channels) for detailed setup instructions.
+See the [official documentation](https://docs.openclaw.bot/channels) for detailed setup instructions.
## Using the CLI
@@ -94,23 +94,23 @@ The CLI service is available via the `cli` profile:
```bash
# Run onboarding wizard
-docker compose run --rm --service-ports moltbot-cli onboard
+docker compose run --rm --service-ports openclaw-cli onboard
# List providers
-docker compose run --rm moltbot-cli providers list
+docker compose run --rm openclaw-cli providers list
# Send a message
-docker compose run --rm moltbot-cli message send --to +1234567890 --message "Hello"
+docker compose run --rm openclaw-cli message send --to +1234567890 --message "Hello"
# Check health
-docker compose run --rm moltbot-cli health --port 18789
+docker compose run --rm openclaw-cli health --port 18789
```
## Security Considerations
1. **Gateway Token**: Keep your gateway token secure. This is the authentication method for the Control UI and WebSocket connections.
-2. **DM Access**: By default, MoltBot uses pairing mode for direct messages from unknown senders. They receive a pairing code that you must approve.
+2. **DM Access**: By default, OpenClaw uses pairing mode for direct messages from unknown senders. They receive a pairing code that you must approve.
3. **Network Exposure**: If exposing the gateway beyond localhost, use proper authentication and encryption:
- Set up Tailscale for secure remote access
@@ -128,29 +128,29 @@ docker compose run --rm moltbot-cli health --port 18789
Adjust CPU and memory limits in `.env`:
```env
-MOLTBOT_CPU_LIMIT=2.0
+OPENCLAW_CPU_LIMIT=2.0
-MOLTBOT_MEMORY_LIMIT=2G
+OPENCLAW_MEMORY_LIMIT=2G
-MOLTBOT_CPU_RESERVATION=1.0
+OPENCLAW_CPU_RESERVATION=1.0
-MOLTBOT_MEMORY_RESERVATION=1G
+OPENCLAW_MEMORY_RESERVATION=1G
```
### Persistent Data
Data is stored in two Docker volumes:
-- `moltbot_config`: Configuration files and credentials (~/.clawdbot)
+- `openclaw_config`: Configuration files and credentials (~/.openclaw)
-- `moltbot_workspace`: Agent workspace and skills (~/clawd)
+- `openclaw_workspace`: Agent workspace and skills (~/openclaw-workspace)
To backup your data:
```bash
-docker run --rm -v moltbot_config:/data -v $(pwd):/backup alpine tar czf /backup/moltbot-config-backup.tar.gz /data
+docker run --rm -v openclaw_config:/data -v $(pwd):/backup alpine tar czf /backup/openclaw-config-backup.tar.gz /data
-docker run --rm -v moltbot_workspace:/data -v $(pwd):/backup alpine tar czf /backup/moltbot-workspace-backup.tar.gz /data
+docker run --rm -v openclaw_workspace:/data -v $(pwd):/backup alpine tar czf /backup/openclaw-workspace-backup.tar.gz /data
```
### Custom Configuration File
-Create a custom config file at `~/.clawdbot/moltbot.json` (inside the container):
+Create a custom config file at `~/.openclaw/openclaw.json` (inside the container):
```json
{
@@ -169,7 +169,7 @@ Create a custom config file at `~/.clawdbot/moltbot.json` (inside the container)
### Gateway Won't Start
-1. Check logs: `docker compose logs moltbot-gateway`
+1. Check logs: `docker compose logs openclaw-gateway`
2. Verify gateway token is set in `.env`
3. Ensure port 18789 is not already in use
@@ -190,25 +190,25 @@ Create a custom config file at `~/.clawdbot/moltbot.json` (inside the container)
The doctor command helps diagnose common issues:
```bash
-docker compose run --rm moltbot-cli doctor
+docker compose run --rm openclaw-cli doctor
```
## Documentation
-- [Official Website](https://molt.bot)
+- [Official Website](https://openclaw.bot)
-- [Full Documentation](https://docs.molt.bot)
+- [Full Documentation](https://docs.openclaw.bot)
-- [Getting Started Guide](https://docs.molt.bot/start/getting-started)
+- [Getting Started Guide](https://docs.openclaw.bot/start/getting-started)
-- [Configuration Reference](https://docs.molt.bot/gateway/configuration)
+- [Configuration Reference](https://docs.openclaw.bot/gateway/configuration)
-- [Security Guide](https://docs.molt.bot/gateway/security)
+- [Security Guide](https://docs.openclaw.bot/gateway/security)
-- [Docker Installation](https://docs.molt.bot/install/docker)
+- [Docker Installation](https://docs.openclaw.bot/install/docker)
-- [GitHub Repository](https://github.com/moltbot/moltbot)
+- [GitHub Repository](https://github.com/openclaw/openclaw)
## License
-MoltBot is released under the MIT License. See the [LICENSE](https://github.com/moltbot/moltbot/blob/main/LICENSE) file for details.
+OpenClaw is released under the MIT License. See the [LICENSE](https://github.com/openclaw/openclaw/blob/main/LICENSE) file for details.
## Community
- [Discord](https://discord.gg/clawd)
-- [GitHub Discussions](https://github.com/moltbot/moltbot/discussions)
+- [GitHub Discussions](https://github.com/openclaw/openclaw/discussions)
-- [Issues](https://github.com/moltbot/moltbot/issues)
+- [Issues](https://github.com/openclaw/openclaw/issues)


@@ -1,6 +1,6 @@
# MoltBot # OpenClaw
MoltBot 是一个运行在你自己设备上的个人 AI 助手。它集成了多个消息平台WhatsApp、Telegram、Slack、Discord、Google Chat、Signal、iMessage、Microsoft Teams、WebChat并在所有频道上提供 AI 驱动的帮助。 OpenClaw 是一个运行在你自己设备上的个人 AI 助手。它集成了多个消息平台WhatsApp、Telegram、Slack、Discord、Google Chat、Signal、iMessage、Microsoft Teams、WebChat并在所有频道上提供 AI 驱动的帮助。
## 功能特性 ## 功能特性
@@ -32,7 +32,7 @@ MoltBot 是一个运行在你自己设备上的个人 AI 助手。它集成了
```

3. Edit the `.env` file and set at least:
   - `OPENCLAW_GATEWAY_TOKEN` - the token you generated
   - `ANTHROPIC_API_KEY` or `OPENAI_API_KEY` - if using API key authentication
4. Start the gateway:
@@ -51,8 +51,8 @@ MoltBot is a personal AI assistant that runs on your own devices. It integrates
The gateway can be accessed in two ways:

- **Loopback** (`OPENCLAW_GATEWAY_BIND=loopback`): accessible only from the host (127.0.0.1)
- **LAN** (`OPENCLAW_GATEWAY_BIND=lan`): accessible from the local network (0.0.0.0)

For production deployments, it is recommended to:
@@ -62,7 +62,7 @@ MoltBot is a personal AI assistant that runs on your own devices. It integrates
### Model Configuration

OpenClaw supports multiple AI model providers:

- **Anthropic Claude** (recommended): Claude Pro/Max, supports OAuth or API keys
- **OpenAI**: ChatGPT/Codex, supports OAuth or API keys
@@ -86,7 +86,7 @@ MoltBot supports multiple AI model providers:
4. **Slack**: set `SLACK_BOT_TOKEN` and `SLACK_APP_TOKEN` in the configuration

See the [official documentation](https://docs.openclaw.bot/channels) for detailed setup instructions.

## Using the CLI
@@ -94,23 +94,23 @@ The CLI service is available through the `cli` profile:
```bash
# Run the onboarding wizard
docker compose run --rm --service-ports openclaw-cli onboard

# List providers
docker compose run --rm openclaw-cli providers list

# Send a message
docker compose run --rm openclaw-cli message send --to +1234567890 --message "Hello"

# Check health status
docker compose run --rm openclaw-cli health --port 18789
```
## Security Notes

1. **Gateway token**: keep your gateway token safe. It authenticates the control UI and WebSocket connections.
2. **DM access**: by default, OpenClaw uses pairing mode for direct messages from unknown senders. They receive a pairing code that you must approve.
3. **Network exposure**: if you expose the gateway beyond localhost, use proper authentication and encryption:
   - Set up Tailscale for secure remote access
@@ -128,29 +128,29 @@ docker compose run --rm moltbot-cli health --port 18789
Adjust CPU and memory limits in the `.env` file:

```env
OPENCLAW_CPU_LIMIT=2.0
OPENCLAW_MEMORY_LIMIT=2G
OPENCLAW_CPU_RESERVATION=1.0
OPENCLAW_MEMORY_RESERVATION=1G
```

### Persistent Data

Data is stored in two Docker volumes:

- `openclaw_config`: configuration files and credentials (`~/.openclaw`)
- `openclaw_workspace`: agent workspace and skills (`~/openclaw-workspace`)

To back up the data:

```bash
docker run --rm -v openclaw_config:/data -v $(pwd):/backup alpine tar czf /backup/openclaw-config-backup.tar.gz /data
docker run --rm -v openclaw_workspace:/data -v $(pwd):/backup alpine tar czf /backup/openclaw-workspace-backup.tar.gz /data
```
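To restore a backup, the same pattern works in reverse (a sketch, not an official procedure: `tar czf ... /data` stores paths as `data/...`, so the archive is extracted relative to `/`); stop the services first so files are not overwritten while in use:

```bash
# Stop the gateway before restoring
docker compose down

# Unpack each backup archive back into its volume
docker run --rm -v openclaw_config:/data -v $(pwd):/backup alpine tar xzf /backup/openclaw-config-backup.tar.gz -C /
docker run --rm -v openclaw_workspace:/data -v $(pwd):/backup alpine tar xzf /backup/openclaw-workspace-backup.tar.gz -C /

# Start the services again
docker compose up -d
```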
### Custom Configuration File

Create a custom configuration file at `~/.openclaw/openclaw.json` (inside the container):

```json
{
@@ -169,7 +169,7 @@ docker run --rm -v moltbot_workspace:/data -v $(pwd):/backup alpine tar czf /bac
### Gateway Fails to Start

1. Check the logs: `docker compose logs openclaw-gateway`
2. Verify that the gateway token is set in `.env`
3. Make sure port 18789 is not already in use
@@ -190,25 +190,25 @@ docker run --rm -v moltbot_workspace:/data -v $(pwd):/backup alpine tar czf /bac
The doctor command helps diagnose common issues:

```bash
docker compose run --rm openclaw-cli doctor
```
## 文档 ## 文档
- [官方网站](https://molt.bot) - [官方网站](https://openclaw.bot)
- [完整文档](https://docs.molt.bot) - [完整文档](https://docs.openclaw.bot)
- [入门指南](https://docs.molt.bot/start/getting-started) - [入门指南](https://docs.openclaw.bot/start/getting-started)
- [配置参考](https://docs.molt.bot/gateway/configuration) - [配置参考](https://docs.openclaw.bot/gateway/configuration)
- [安全指南](https://docs.molt.bot/gateway/security) - [安全指南](https://docs.openclaw.bot/gateway/security)
- [Docker 安装](https://docs.molt.bot/install/docker) - [Docker 安装](https://docs.openclaw.bot/install/docker)
- [GitHub 仓库](https://github.com/moltbot/moltbot) - [GitHub 仓库](https://github.com/openclaw/openclaw)
## 许可证 ## 许可证
MoltBot 使用 MIT 许可证发布。详情请参阅 [LICENSE](https://github.com/moltbot/moltbot/blob/main/LICENSE) 文件。 OpenClaw 使用 MIT 许可证发布。详情请参阅 [LICENSE](https://github.com/openclaw/openclaw/blob/main/LICENSE) 文件。
## 社区 ## 社区
- [Discord](https://discord.gg/clawd) - [Discord](https://discord.gg/clawd)
- [GitHub 讨论](https://github.com/moltbot/moltbot/discussions) - [GitHub 讨论](https://github.com/openclaw/openclaw/discussions)
- [问题跟踪](https://github.com/moltbot/moltbot/issues) - [问题跟踪](https://github.com/openclaw/openclaw/issues)


@@ -1,6 +1,6 @@
# OpenClaw - Personal AI Assistant Docker Compose Configuration
# Official Repository: https://github.com/openclaw/openclaw
# Documentation: https://docs.openclaw.bot
x-defaults: &defaults
  restart: unless-stopped
@@ -11,18 +11,18 @@ x-defaults: &defaults
      max-file: "3"

services:
  openclaw-gateway:
    <<: *defaults
    image: ${GLOBAL_REGISTRY:-ghcr.io}/openclaw/openclaw:${OPENCLAW_VERSION:-2026.2.3}
    environment:
      - TZ=${TZ:-UTC}
      - HOME=/home/node
      - NODE_ENV=production
      - TERM=xterm-256color
      # Gateway configuration
      - OPENCLAW_GATEWAY_TOKEN=${OPENCLAW_GATEWAY_TOKEN}
      - OPENCLAW_GATEWAY_BIND=${OPENCLAW_GATEWAY_BIND:-lan}
      - OPENCLAW_GATEWAY_PORT=${OPENCLAW_GATEWAY_PORT:-18789}
      # Optional: Model API keys (if not using OAuth)
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
      - OPENAI_API_KEY=${OPENAI_API_KEY:-}
@@ -30,17 +30,17 @@ services:
      - CLAUDE_WEB_SESSION_KEY=${CLAUDE_WEB_SESSION_KEY:-}
      - CLAUDE_WEB_COOKIE=${CLAUDE_WEB_COOKIE:-}
    volumes:
      - openclaw_config:/home/node/.openclaw
      - openclaw_workspace:/home/node/openclaw-workspace
    ports:
      - "${OPENCLAW_GATEWAY_PORT_OVERRIDE:-18789}:18789"
      - "${OPENCLAW_BRIDGE_PORT_OVERRIDE:-18790}:18790"
    command:
      - node
      - dist/index.js
      - gateway
      - --bind
      - "${OPENCLAW_GATEWAY_BIND:-lan}"
      - --port
      - "18789"
    healthcheck:
@@ -52,15 +52,15 @@ services:
    deploy:
      resources:
        limits:
          cpus: ${OPENCLAW_CPU_LIMIT:-2.0}
          memory: ${OPENCLAW_MEMORY_LIMIT:-2G}
        reservations:
          cpus: ${OPENCLAW_CPU_RESERVATION:-1.0}
          memory: ${OPENCLAW_MEMORY_RESERVATION:-1G}

  openclaw-cli:
    <<: *defaults
    image: ${GLOBAL_REGISTRY:-ghcr.io}/openclaw/openclaw:${OPENCLAW_VERSION:-2026.2.3}
    environment:
      - TZ=${TZ:-UTC}
      - HOME=/home/node
@@ -80,9 +80,9 @@ services:
    deploy:
      resources:
        limits:
          cpus: ${OPENCLAW_CLI_CPU_LIMIT:-1.0}
          memory: ${OPENCLAW_CLI_MEMORY_LIMIT:-512M}

volumes:
  openclaw_config:
  openclaw_workspace:


@@ -19,7 +19,7 @@ RUN apt-get update && \
    rm -rf /var/lib/apt/lists/*

# Install mineru latest
RUN python3 -m pip install -U 'mineru[core]>=2.7.6' --break-system-packages && \
    python3 -m pip cache purge

# Download models and update the configuration file


@@ -39,7 +39,7 @@ mineru -p demo.pdf -o ./output -b vlm-http-client -u http://localhost:30000
## Configuration

- `MINERU_VERSION`: The version for MinerU, default is `2.7.6`.
- `MINERU_PORT_OVERRIDE_VLLM`: The host port for the VLLM server, default is `30000`.
- `MINERU_PORT_OVERRIDE_API`: The host port for the API service, default is `8000`.
- `MINERU_PORT_OVERRIDE_GRADIO`: The host port for the Gradio WebUI, default is `7860`.
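For example, these defaults can be overridden in a `.env` file next to the compose file (the non-default port values below are purely illustrative):

```env
MINERU_VERSION=2.7.6
MINERU_PORT_OVERRIDE_VLLM=30000
MINERU_PORT_OVERRIDE_API=18000
MINERU_PORT_OVERRIDE_GRADIO=17860
```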


@@ -39,7 +39,7 @@ mineru -p demo.pdf -o ./output -b vlm-http-client -u http://localhost:30000
## Configuration

- `MINERU_VERSION`: The Docker image version for MinerU, default is `2.7.6`.
- `MINERU_PORT_OVERRIDE_VLLM`: The host port for the VLLM server, default is `30000`.
- `MINERU_PORT_OVERRIDE_API`: The host port for the API service, default is `8000`.
- `MINERU_PORT_OVERRIDE_GRADIO`: The host port for the Gradio WebUI, default is `7860`.


@@ -8,7 +8,7 @@ x-defaults: &defaults
x-mineru-vllm: &mineru-vllm
  <<: *defaults
  image: ${GLOBAL_REGISTRY:-}alexsuntop/mineru:${MINERU_VERSION:-2.7.6}
  build:
    context: .
    dockerfile: Dockerfile