feat(nanobot): add README and docker-compose configuration for multi-channel AI assistant

feat(openclaw): introduce OpenClaw personal AI assistant with multi-channel support and CLI

fix(mineru): update MinerU version to 2.7.6 in Dockerfile and documentation
Sun-ZhenXing
2026-02-07 13:34:36 +08:00
parent 28ed2462af
commit d53dffca83
14 changed files with 874 additions and 110 deletions

apps/nanobot/.env.example (new file, 144 lines added)

@@ -0,0 +1,144 @@
# Nanobot version
NANOBOT_VERSION=v0.1.3.post4
# Timezone
TZ=UTC
# Port override
NANOBOT_PORT_OVERRIDE=18790
# Command to run (gateway, onboard, status, agent, etc.)
NANOBOT_COMMAND=gateway
# ============================================================================
# LLM Provider Configuration
# ============================================================================
# OpenRouter (recommended for global access to all models)
# Get API key: https://openrouter.ai/keys
OPENROUTER_API_KEY=
# Anthropic (Claude direct access)
# Get API key: https://console.anthropic.com
ANTHROPIC_API_KEY=
# OpenAI (GPT direct access)
# Get API key: https://platform.openai.com
OPENAI_API_KEY=
# Google Gemini
# Get API key: https://aistudio.google.com
GEMINI_API_KEY=
# DeepSeek
# Get API key: https://platform.deepseek.com
DEEPSEEK_API_KEY=
# Groq (LLM + Voice transcription)
# Get API key: https://console.groq.com
GROQ_API_KEY=
# Zhipu AI (智谱 AI)
# Get API key: https://open.bigmodel.cn
ZHIPU_API_KEY=
# Alibaba Cloud DashScope (阿里云通义千问)
# Get API key: https://dashscope.console.aliyun.com
DASHSCOPE_API_KEY=
# Moonshot (月之暗面 Kimi)
# Get API key: https://platform.moonshot.cn
MOONSHOT_API_KEY=
# vLLM / Local LLM Server
# For local models running on vLLM or OpenAI-compatible server
VLLM_API_KEY=dummy
VLLM_API_BASE=http://localhost:8000/v1
# ============================================================================
# Agent Configuration
# ============================================================================
# Model to use (examples: anthropic/claude-opus-4-5, gpt-4, deepseek/deepseek-chat)
NANOBOT_MODEL=anthropic/claude-opus-4-5
# Maximum tokens for model response
NANOBOT_MAX_TOKENS=8192
# Temperature for model (0.0-1.0, higher = more creative)
NANOBOT_TEMPERATURE=0.7
# Maximum tool iterations per turn
NANOBOT_MAX_TOOL_ITERATIONS=20
# ============================================================================
# Channel Configuration
# ============================================================================
# Telegram
# 1. Create bot via @BotFather on Telegram
# 2. Get your user ID from @userinfobot
TELEGRAM_ENABLED=false
TELEGRAM_TOKEN=
TELEGRAM_PROXY=
# Discord
# 1. Create bot at https://discord.com/developers/applications
# 2. Enable MESSAGE CONTENT INTENT in bot settings
DISCORD_ENABLED=false
DISCORD_TOKEN=
# WhatsApp (requires Node.js bridge)
# 1. Run `nanobot channels login` to scan QR code
# 2. Bridge URL points to the WhatsApp bridge server
WHATSAPP_ENABLED=false
WHATSAPP_BRIDGE_URL=ws://localhost:3001
# Feishu (飞书/Lark)
# 1. Create app at https://open.feishu.cn/app
# 2. Enable Bot capability and add im:message permission
FEISHU_ENABLED=false
FEISHU_APP_ID=
FEISHU_APP_SECRET=
FEISHU_ENCRYPT_KEY=
FEISHU_VERIFICATION_TOKEN=
# ============================================================================
# Tools Configuration
# ============================================================================
# Brave Search API (for web search tool)
# Get API key: https://brave.com/search/api/
BRAVE_API_KEY=
# Web search max results
WEB_SEARCH_MAX_RESULTS=5
# Shell command execution timeout (seconds)
EXEC_TIMEOUT=60
# Restrict all tool access to workspace directory
# Set to true for production/sandboxed environments
RESTRICT_TO_WORKSPACE=false
# ============================================================================
# Gateway Configuration
# ============================================================================
# Gateway host (0.0.0.0 allows external connections)
GATEWAY_HOST=0.0.0.0
# Gateway port (internal port, mapped via NANOBOT_PORT_OVERRIDE)
GATEWAY_PORT=18790
# ============================================================================
# Resource Limits
# ============================================================================
# CPU limits
NANOBOT_CPU_LIMIT=1.0
NANOBOT_CPU_RESERVATION=0.5
# Memory limits
NANOBOT_MEMORY_LIMIT=1G
NANOBOT_MEMORY_RESERVATION=512M

apps/nanobot/README.md (new file, 269 lines added)

@@ -0,0 +1,269 @@
# Nanobot
[中文说明](README.zh.md) | [English](README.md)
Nanobot is a lightweight, production-ready personal AI assistant with multi-channel support (Telegram, Discord, WhatsApp, Feishu), local model integration, and powerful tool capabilities.
## Features
- 🤖 **Multi-Provider LLM Support**: OpenRouter, Anthropic, OpenAI, DeepSeek, Groq, Gemini, and more
- 🖥️ **Local Models**: Run your own models with vLLM or any OpenAI-compatible server
- 💬 **Multi-Channel**: Telegram, Discord, WhatsApp, and Feishu (飞书) integration
- 🛠️ **Powerful Tools**: File operations, shell execution, web search, and custom skills
- 📅 **Scheduled Tasks**: Cron-like job scheduling with natural language
- 🎯 **Memory & Skills**: Persistent memory and extensible skill system
- 🔒 **Security**: Sandbox mode, access control, and safe command execution
## Quick Start
### Prerequisites
- Docker and Docker Compose installed
- At least one LLM provider API key (recommended: [OpenRouter](https://openrouter.ai/keys))
### Setup
1. **Copy the example environment file:**
```bash
cp .env.example .env
```
2. **Edit `.env` and configure at least one LLM provider:**
```bash
# For OpenRouter (recommended for global access)
OPENROUTER_API_KEY=sk-or-v1-xxxxx
# Or use any other provider
ANTHROPIC_API_KEY=sk-ant-xxxxx
OPENAI_API_KEY=sk-xxxxx
```
3. **Start the service:**
```bash
docker compose up -d
```
4. **Initialize configuration (first time only):**
```bash
docker compose exec nanobot nanobot onboard
```
5. **Check status:**
```bash
docker compose exec nanobot nanobot status
```
## Usage
### CLI Mode
Chat directly with nanobot:
```bash
docker compose exec nanobot nanobot agent -m "What is 2+2?"
```
Interactive mode:
```bash
docker compose exec nanobot nanobot agent
```
### Gateway Mode (Default)
The default `docker compose up` starts the gateway server, which enables:
- Telegram bot integration
- Discord bot integration
- WhatsApp messaging (requires additional setup)
- Feishu/Lark integration
- HTTP API access (port 18790)
### Channel Setup
#### Telegram
1. Create a bot via [@BotFather](https://t.me/BotFather) on Telegram
2. Get your user ID from [@userinfobot](https://t.me/userinfobot)
3. Configure in `.env`:
```bash
TELEGRAM_ENABLED=true
TELEGRAM_TOKEN=your_bot_token
```
4. Restart the service
#### Discord
1. Create a bot at [Discord Developer Portal](https://discord.com/developers/applications)
2. Enable **MESSAGE CONTENT INTENT** in bot settings
3. Configure in `.env`:
```bash
DISCORD_ENABLED=true
DISCORD_TOKEN=your_bot_token
```
4. Restart the service
#### WhatsApp
Requires Node.js and additional setup. See [official documentation](https://github.com/HKUDS/nanobot#-chat-apps) for details.
#### Feishu (飞书)
1. Create an app at [Feishu Open Platform](https://open.feishu.cn/app)
2. Enable Bot capability and add `im:message` permission
3. Configure in `.env`:
```bash
FEISHU_ENABLED=true
FEISHU_APP_ID=your_app_id
FEISHU_APP_SECRET=your_app_secret
```
4. Restart the service
## Configuration
### Environment Variables
See [.env.example](.env.example) for all available configuration options.
Key settings:
| Variable | Description | Default |
| ----------------------- | ------------------------------------------ | --------------------------- |
| `NANOBOT_MODEL` | LLM model to use | `anthropic/claude-opus-4-5` |
| `NANOBOT_COMMAND` | Command to run (gateway, agent, status) | `gateway` |
| `RESTRICT_TO_WORKSPACE` | Sandbox mode - restrict tools to workspace | `false` |
| `BRAVE_API_KEY` | API key for web search tool | (empty) |
| `TELEGRAM_ENABLED` | Enable Telegram channel | `false` |
| `DISCORD_ENABLED` | Enable Discord channel | `false` |
### LLM Provider Priority
When multiple providers are configured, nanobot will:
1. Match provider based on model name (e.g., `gpt-4` → OpenAI)
2. Fall back to first available API key
### Security
For production deployments:
- Set `RESTRICT_TO_WORKSPACE=true` to sandbox all file and shell operations
- Configure `allowFrom` lists in the config file for channel access control
- Use dedicated user accounts for channel integrations
- Monitor API usage and set spending limits
- Keep credentials in environment variables, never in code
## Scheduled Tasks
Run tasks on a schedule:
```bash
# Add a daily reminder
docker compose exec nanobot nanobot cron add \
--name "morning" \
--message "Good morning! What's on the agenda?" \
--cron "0 9 * * *"
# List scheduled jobs
docker compose exec nanobot nanobot cron list
# Remove a job
docker compose exec nanobot nanobot cron remove <job_id>
```
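For reference, `0 9 * * *` means minute 0, hour 9, every day. A toy matcher (stdlib only, supporting just `*` and plain numbers; nanobot's scheduler is far richer) shows how the five cron fields line up against a timestamp:

```python
from datetime import datetime

def cron_matches(expr: str, dt: datetime) -> bool:
    """Check whether dt matches a 5-field cron expression (toy version)."""
    fields = expr.split()  # minute, hour, day-of-month, month, day-of-week
    values = [dt.minute, dt.hour, dt.day, dt.month, dt.isoweekday() % 7]
    for spec, value in zip(fields, values):
        if spec != "*" and int(spec) != value:
            return False
    return True
```

So `cron_matches("0 9 * * *", ...)` is true exactly at 09:00 on any date.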
## Local Models (vLLM)
Run nanobot with your own local models:
1. **Start a vLLM server:**
```bash
vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
```
2. **Configure in `.env`:**
```bash
VLLM_API_KEY=dummy
VLLM_API_BASE=http://host.docker.internal:8000/v1
NANOBOT_MODEL=meta-llama/Llama-3.1-8B-Instruct
```
3. **Restart the service**
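Any OpenAI-compatible client can talk to this endpoint directly. A minimal stdlib sketch of the request shape (the path and JSON payload follow the standard OpenAI chat-completions wire format; the helper name and prompt are illustrative):

```python
import json
import urllib.request

def build_chat_request(base: str, api_key: str, model: str,
                       prompt: str) -> urllib.request.Request:
    """Build a chat-completions request for an OpenAI-compatible server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=base.rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Sending it (requires the vLLM server from step 1 to be running):
#   with urllib.request.urlopen(build_chat_request(
#           "http://localhost:8000/v1", "dummy",
#           "meta-llama/Llama-3.1-8B-Instruct", "Hello!")) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```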
## Volumes
- `nanobot_config`: Configuration files and credentials
- `nanobot_workspace`: Agent workspace and files
## Ports
- `18790`: Gateway HTTP API (configurable via `NANOBOT_PORT_OVERRIDE`)
## Resource Limits
Default resource limits:
- CPU: 1.0 cores (limit), 0.5 cores (reservation)
- Memory: 1GB (limit), 512MB (reservation)
Adjust via environment variables: `NANOBOT_CPU_LIMIT`, `NANOBOT_MEMORY_LIMIT`, etc.
## Troubleshooting
### Check logs
```bash
docker compose logs -f nanobot
```
### Verify configuration
```bash
docker compose exec nanobot nanobot status
```
### Test LLM connection
```bash
docker compose exec nanobot nanobot agent -m "Hello!"
```
### Common issues
**No API key configured:**
- Ensure at least one provider API key is set in `.env`
- Restart the service after updating environment variables
**Channel not responding:**
- Check that the channel is enabled in `.env`
- Verify bot tokens are correct
- Check logs for connection errors
**File permission errors:**
- Ensure volumes have proper permissions
- Try running with `RESTRICT_TO_WORKSPACE=false` for debugging
## License
Nanobot is an open-source project. See the [official repository](https://github.com/HKUDS/nanobot) for license details.
## Links
- Official Repository: <https://github.com/HKUDS/nanobot>
- Documentation: <https://github.com/HKUDS/nanobot#readme>
- Issues: <https://github.com/HKUDS/nanobot/issues>

apps/nanobot/README.zh.md (new file, 269 lines added)

@@ -0,0 +1,269 @@
# Nanobot
[中文说明](README.zh.md) | [English](README.md)
Nanobot 是一个轻量级、生产就绪的个人 AI 助手,支持多渠道(Telegram、Discord、WhatsApp、飞书)、本地模型集成以及强大的工具能力。
## 特性
- 🤖 **多 LLM 提供商支持**:OpenRouter、Anthropic、OpenAI、DeepSeek、Groq、Gemini 等
- 🖥️ **本地模型**:使用 vLLM 或任何 OpenAI 兼容服务器运行您自己的模型
- 💬 **多渠道**:集成 Telegram、Discord、WhatsApp 和飞书
- 🛠️ **强大工具**:文件操作、Shell 执行、网络搜索和自定义技能
- 📅 **定时任务**:支持自然语言的类 Cron 任务调度
- 🎯 **记忆与技能**:持久化记忆和可扩展技能系统
- 🔒 **安全性**:沙盒模式、访问控制和安全命令执行
## 快速开始
### 前置要求
- 已安装 Docker 和 Docker Compose
- 至少一个 LLM 提供商 API 密钥(推荐:[OpenRouter](https://openrouter.ai/keys))
### 配置步骤
1. **复制环境变量示例文件:**
```bash
cp .env.example .env
```
2. **编辑 `.env` 并至少配置一个 LLM 提供商:**
```bash
# 使用 OpenRouter(推荐,可访问所有模型)
OPENROUTER_API_KEY=sk-or-v1-xxxxx
# 或使用其他提供商
ANTHROPIC_API_KEY=sk-ant-xxxxx
OPENAI_API_KEY=sk-xxxxx
```
3. **启动服务:**
```bash
docker compose up -d
```
4. **初始化配置(仅首次需要):**
```bash
docker compose exec nanobot nanobot onboard
```
5. **检查状态:**
```bash
docker compose exec nanobot nanobot status
```
## 使用方法
### CLI 模式
直接与 nanobot 对话:
```bash
docker compose exec nanobot nanobot agent -m "2+2 等于多少?"
```
交互模式:
```bash
docker compose exec nanobot nanobot agent
```
### 网关模式(默认)
默认的 `docker compose up` 会启动网关服务器,支持:
- Telegram 机器人集成
- Discord 机器人集成
- WhatsApp 消息(需要额外配置)
- 飞书 / Lark 集成
- HTTP API 访问(端口 18790)
### 渠道配置
#### Telegram
1. 通过 Telegram 上的 [@BotFather](https://t.me/BotFather) 创建机器人
2. 从 [@userinfobot](https://t.me/userinfobot) 获取您的用户 ID
3. 在 `.env` 中配置:
```bash
TELEGRAM_ENABLED=true
TELEGRAM_TOKEN=你的机器人令牌
```
4. 重启服务
#### Discord
1. 在 [Discord 开发者门户](https://discord.com/developers/applications) 创建机器人
2. 在机器人设置中启用 **MESSAGE CONTENT INTENT**
3. 在 `.env` 中配置:
```bash
DISCORD_ENABLED=true
DISCORD_TOKEN=你的机器人令牌
```
4. 重启服务
#### WhatsApp
需要 Node.js 和额外配置。详见 [官方文档](https://github.com/HKUDS/nanobot#-chat-apps)。
#### 飞书
1. 在 [飞书开放平台](https://open.feishu.cn/app) 创建应用
2. 启用机器人能力并添加 `im:message` 权限
3. 在 `.env` 中配置:
```bash
FEISHU_ENABLED=true
FEISHU_APP_ID=你的应用ID
FEISHU_APP_SECRET=你的应用密钥
```
4. 重启服务
## 配置
### 环境变量
所有可用配置选项请参见 [.env.example](.env.example)。
关键设置:
| 变量 | 描述 | 默认值 |
| ----------------------- | -------------------------------------- | --------------------------- |
| `NANOBOT_MODEL` | 要使用的 LLM 模型 | `anthropic/claude-opus-4-5` |
| `NANOBOT_COMMAND`       | 要运行的命令(gateway、agent、status) | `gateway`                   |
| `RESTRICT_TO_WORKSPACE` | 沙盒模式 - 限制工具访问工作空间 | `false` |
| `BRAVE_API_KEY` | 网络搜索工具的 API 密钥 | (空) |
| `TELEGRAM_ENABLED` | 启用 Telegram 渠道 | `false` |
| `DISCORD_ENABLED` | 启用 Discord 渠道 | `false` |
### LLM 提供商优先级
当配置了多个提供商时,nanobot 将:
1. 根据模型名称匹配提供商(例如 `gpt-4` → OpenAI)
2. 回退到第一个可用的 API 密钥
### 安全性
对于生产部署:
- 设置 `RESTRICT_TO_WORKSPACE=true` 以沙盒化所有文件和 Shell 操作
- 在配置文件中为渠道访问控制配置 `allowFrom` 列表
- 为渠道集成使用专用用户账户
- 监控 API 使用并设置支出限制
- 将凭证保存在环境变量中,绝不在代码中
## 定时任务
按计划运行任务:
```bash
# 添加每日提醒
docker compose exec nanobot nanobot cron add \
--name "morning" \
--message "早上好!今天有什么安排?" \
--cron "0 9 * * *"
# 列出计划任务
docker compose exec nanobot nanobot cron list
# 删除任务
docker compose exec nanobot nanobot cron remove <job_id>
```
## 本地模型(vLLM)
使用您自己的本地模型运行 nanobot:
1. **启动 vLLM 服务器:**
```bash
vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
```
2. **在 `.env` 中配置:**
```bash
VLLM_API_KEY=dummy
VLLM_API_BASE=http://host.docker.internal:8000/v1
NANOBOT_MODEL=meta-llama/Llama-3.1-8B-Instruct
```
3. **重启服务**
## 数据卷
- `nanobot_config`:配置文件和凭证
- `nanobot_workspace`:代理工作空间和文件
## 端口
- `18790`:网关 HTTP API(可通过 `NANOBOT_PORT_OVERRIDE` 配置)
## 资源限制
默认资源限制:
- CPU:1.0 核心(限制),0.5 核心(预留)
- 内存:1GB(限制),512MB(预留)
通过环境变量调整:`NANOBOT_CPU_LIMIT`、`NANOBOT_MEMORY_LIMIT` 等。
## 故障排除
### 查看日志
```bash
docker compose logs -f nanobot
```
### 验证配置
```bash
docker compose exec nanobot nanobot status
```
### 测试 LLM 连接
```bash
docker compose exec nanobot nanobot agent -m "你好!"
```
### 常见问题
**未配置 API 密钥:**
- 确保在 `.env` 中至少设置了一个提供商 API 密钥
- 更新环境变量后重启服务
**渠道无响应:**
- 检查渠道是否在 `.env` 中启用
- 验证机器人令牌是否正确
- 检查日志中的连接错误
**文件权限错误:**
- 确保数据卷具有适当的权限
- 调试时尝试使用 `RESTRICT_TO_WORKSPACE=false` 运行
## 许可证
Nanobot 是一个开源项目。许可证详情请参见 [官方仓库](https://github.com/HKUDS/nanobot)。
## 链接
- 官方仓库:<https://github.com/HKUDS/nanobot>
- 文档:<https://github.com/HKUDS/nanobot#readme>
- 问题反馈:<https://github.com/HKUDS/nanobot/issues>


@@ -0,0 +1,76 @@
x-defaults: &defaults
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: "3"

services:
  nanobot:
    <<: *defaults
    image: ${GLOBAL_REGISTRY:-ghcr.io/}hkuds/nanobot:${NANOBOT_VERSION:-v0.1.3.post4}
    ports:
      - "${NANOBOT_PORT_OVERRIDE:-18790}:18790"
    volumes:
      - nanobot_config:/root/.nanobot
      - nanobot_workspace:/root/.nanobot/workspace
    environment:
      - TZ=${TZ:-UTC}
      # LLM Provider Configuration
      - NANOBOT_PROVIDERS__OPENROUTER__API_KEY=${OPENROUTER_API_KEY:-}
      - NANOBOT_PROVIDERS__ANTHROPIC__API_KEY=${ANTHROPIC_API_KEY:-}
      - NANOBOT_PROVIDERS__OPENAI__API_KEY=${OPENAI_API_KEY:-}
      - NANOBOT_PROVIDERS__GEMINI__API_KEY=${GEMINI_API_KEY:-}
      - NANOBOT_PROVIDERS__DEEPSEEK__API_KEY=${DEEPSEEK_API_KEY:-}
      - NANOBOT_PROVIDERS__GROQ__API_KEY=${GROQ_API_KEY:-}
      - NANOBOT_PROVIDERS__ZHIPU__API_KEY=${ZHIPU_API_KEY:-}
      - NANOBOT_PROVIDERS__DASHSCOPE__API_KEY=${DASHSCOPE_API_KEY:-}
      - NANOBOT_PROVIDERS__MOONSHOT__API_KEY=${MOONSHOT_API_KEY:-}
      - NANOBOT_PROVIDERS__VLLM__API_KEY=${VLLM_API_KEY:-}
      - NANOBOT_PROVIDERS__VLLM__API_BASE=${VLLM_API_BASE:-}
      # Agent Configuration
      - NANOBOT_AGENTS__DEFAULTS__MODEL=${NANOBOT_MODEL:-anthropic/claude-opus-4-5}
      - NANOBOT_AGENTS__DEFAULTS__MAX_TOKENS=${NANOBOT_MAX_TOKENS:-8192}
      - NANOBOT_AGENTS__DEFAULTS__TEMPERATURE=${NANOBOT_TEMPERATURE:-0.7}
      - NANOBOT_AGENTS__DEFAULTS__MAX_TOOL_ITERATIONS=${NANOBOT_MAX_TOOL_ITERATIONS:-20}
      # Channel Configuration
      - NANOBOT_CHANNELS__TELEGRAM__ENABLED=${TELEGRAM_ENABLED:-false}
      - NANOBOT_CHANNELS__TELEGRAM__TOKEN=${TELEGRAM_TOKEN:-}
      - NANOBOT_CHANNELS__TELEGRAM__PROXY=${TELEGRAM_PROXY:-}
      - NANOBOT_CHANNELS__DISCORD__ENABLED=${DISCORD_ENABLED:-false}
      - NANOBOT_CHANNELS__DISCORD__TOKEN=${DISCORD_TOKEN:-}
      - NANOBOT_CHANNELS__WHATSAPP__ENABLED=${WHATSAPP_ENABLED:-false}
      - NANOBOT_CHANNELS__WHATSAPP__BRIDGE_URL=${WHATSAPP_BRIDGE_URL:-ws://localhost:3001}
      - NANOBOT_CHANNELS__FEISHU__ENABLED=${FEISHU_ENABLED:-false}
      - NANOBOT_CHANNELS__FEISHU__APP_ID=${FEISHU_APP_ID:-}
      - NANOBOT_CHANNELS__FEISHU__APP_SECRET=${FEISHU_APP_SECRET:-}
      - NANOBOT_CHANNELS__FEISHU__ENCRYPT_KEY=${FEISHU_ENCRYPT_KEY:-}
      - NANOBOT_CHANNELS__FEISHU__VERIFICATION_TOKEN=${FEISHU_VERIFICATION_TOKEN:-}
      # Tools Configuration
      - NANOBOT_TOOLS__WEB__SEARCH__API_KEY=${BRAVE_API_KEY:-}
      - NANOBOT_TOOLS__WEB__SEARCH__MAX_RESULTS=${WEB_SEARCH_MAX_RESULTS:-5}
      - NANOBOT_TOOLS__EXEC__TIMEOUT=${EXEC_TIMEOUT:-60}
      - NANOBOT_TOOLS__RESTRICT_TO_WORKSPACE=${RESTRICT_TO_WORKSPACE:-false}
      # Gateway Configuration
      - NANOBOT_GATEWAY__HOST=${GATEWAY_HOST:-0.0.0.0}
      - NANOBOT_GATEWAY__PORT=${GATEWAY_PORT:-18790}
    command: ${NANOBOT_COMMAND:-gateway}
    healthcheck:
      # Trivial liveness probe: passes whenever the container's Python starts
      test: ["CMD", "python", "-c", "import sys; sys.exit(0)"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
    deploy:
      resources:
        limits:
          cpus: ${NANOBOT_CPU_LIMIT:-1.0}
          memory: ${NANOBOT_MEMORY_LIMIT:-1G}
        reservations:
          cpus: ${NANOBOT_CPU_RESERVATION:-0.5}
          memory: ${NANOBOT_MEMORY_RESERVATION:-512M}

volumes:
  nanobot_config:
  nanobot_workspace:
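The double-underscore variables above follow a common nested-settings convention: `NANOBOT_CHANNELS__TELEGRAM__TOKEN` maps to `channels.telegram.token` in the config tree. A sketch of how such a loader could fold the environment into nested config (assumed behavior for illustration, not nanobot's actual implementation):

```python
def nested_from_env(environ: dict, prefix: str = "NANOBOT_",
                    sep: str = "__") -> dict:
    """Fold PREFIX_A__B__C=value env vars into {"a": {"b": {"c": value}}}."""
    config: dict = {}
    for name, value in environ.items():
        if not name.startswith(prefix):
            continue  # ignore unrelated variables such as TZ or PATH
        parts = name[len(prefix):].lower().split(sep)
        node = config
        for part in parts[:-1]:
            node = node.setdefault(part, {})  # descend, creating dicts as needed
        node[parts[-1]] = value
    return config
```

With this convention, any setting in the config file can be overridden from the compose `environment:` block without editing files inside the container.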