feat: Add Chinese documentation and Docker Compose configurations for DeepTutor and llama.cpp

- Created README.zh.md for DeepTutor with comprehensive features, installation steps, and usage instructions in Chinese.
- Added docker-compose.yaml for DeepTutor to define services, environment variables, and resource limits.
- Introduced .env.example for llama.cpp with configuration options for server settings and resource management.
- Added README.md and README.zh.md for llama.cpp detailing features, prerequisites, quick start guides, and API documentation.
- Implemented docker-compose.yaml for llama.cpp to support various server configurations (CPU, CUDA, ROCm) and CLI usage.
Sun-ZhenXing
2026-02-01 16:08:44 +08:00
parent e2ac465417
commit 28ed2462af
10 changed files with 1470 additions and 0 deletions


@@ -0,0 +1,97 @@
# DeepTutor Configuration
# Copy this file to .env and fill in your API keys
#! ==================================================
#! General Settings
#! ==================================================
# Timezone (default: UTC)
TZ=UTC
# User and Group ID for file permissions (default: 1000)
# Adjust if your host user has a different UID/GID
PUID=1000
PGID=1000
# Global registry prefix (optional)
# Example: registry.example.com (no trailing slash; the compose file adds the separator), or leave empty for Docker Hub/GHCR
GLOBAL_REGISTRY=
#! ==================================================
#! DeepTutor Version
#! ==================================================
# Image version (default: latest)
# Available tags: latest, v0.5.x
# See: https://github.com/HKUDS/DeepTutor/pkgs/container/deeptutor
DEEPTUTOR_VERSION=latest
#! ==================================================
#! Port Configuration
#! ==================================================
# Backend port (internal: 8001)
BACKEND_PORT=8001
# Host port override for backend
DEEPTUTOR_BACKEND_PORT_OVERRIDE=8001
# Frontend port (internal: 3782)
FRONTEND_PORT=3782
# Host port override for frontend
DEEPTUTOR_FRONTEND_PORT_OVERRIDE=3782
#! ==================================================
#! API Base URLs
#! ==================================================
# Internal API base URL (used by frontend to communicate with backend)
NEXT_PUBLIC_API_BASE=http://localhost:8001
# External API base URL (for cloud deployment, set to your public URL)
# Example: https://your-server.com:8001
# For local deployment, use the same as NEXT_PUBLIC_API_BASE
NEXT_PUBLIC_API_BASE_EXTERNAL=http://localhost:8001
#! ==================================================
#! LLM API Keys (Required)
#! ==================================================
# OpenAI API Key (Required)
# Get from: https://platform.openai.com/api-keys
OPENAI_API_KEY=sk-your-openai-api-key-here
# OpenAI Base URL (default: https://api.openai.com/v1)
# For OpenAI-compatible APIs (e.g., Azure OpenAI, custom endpoints)
OPENAI_BASE_URL=https://api.openai.com/v1
# Default LLM Model (default: gpt-4o)
# Options: gpt-4o, gpt-4-turbo, gpt-4, gpt-3.5-turbo, etc.
DEFAULT_MODEL=gpt-4o
#! ==================================================
#! Additional LLM API Keys (Optional)
#! ==================================================
# Anthropic API Key (Optional, for Claude models)
# Get from: https://console.anthropic.com/
ANTHROPIC_API_KEY=
# Perplexity API Key (Optional, for web search)
# Get from: https://www.perplexity.ai/settings/api
PERPLEXITY_API_KEY=
# DashScope API Key (Optional, for Alibaba Cloud models)
# Get from: https://dashscope.console.aliyun.com/
DASHSCOPE_API_KEY=
#! ==================================================
#! Resource Limits
#! ==================================================
# CPU limits (default: 4.00 cores limit, 1.00 cores reservation)
DEEPTUTOR_CPU_LIMIT=4.00
DEEPTUTOR_CPU_RESERVATION=1.00
# Memory limits (default: 8G limit, 2G reservation)
DEEPTUTOR_MEMORY_LIMIT=8G
DEEPTUTOR_MEMORY_RESERVATION=2G
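The compose file combines `GLOBAL_REGISTRY` and `DEEPTUTOR_VERSION` into a single image reference using `${VAR:-default}` substitution. A quick shell sketch of how the defaults resolve, using the image path from this commit's compose file:

```shell
# Compose interpolates:
#   image: ${GLOBAL_REGISTRY:-ghcr.io}/hkuds/deeptutor:${DEEPTUTOR_VERSION:-latest}

unset GLOBAL_REGISTRY DEEPTUTOR_VERSION
echo "${GLOBAL_REGISTRY:-ghcr.io}/hkuds/deeptutor:${DEEPTUTOR_VERSION:-latest}"
# -> ghcr.io/hkuds/deeptutor:latest

# ":-" also applies the default when the variable is set but empty,
# which is why the bare "GLOBAL_REGISTRY=" line above still targets GHCR:
GLOBAL_REGISTRY=""
echo "${GLOBAL_REGISTRY:-ghcr.io}"
# -> ghcr.io
```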

apps/deeptutor/README.md

@@ -0,0 +1,248 @@
# DeepTutor
[中文说明](README.zh.md) | English
## Overview
DeepTutor is an AI-powered personalized learning assistant that transforms any document into an interactive learning experience with multi-agent intelligence. It helps you solve problems, generate questions, conduct research, collaborate on writing, organize notes, and guides you through learning paths.
**Project:** <https://github.com/HKUDS/DeepTutor>
**License:** Apache-2.0
**Documentation:** <https://hkuds.github.io/DeepTutor/>
## Features
- **Problem Solving** — Detailed step-by-step solutions with visual diagrams
- **Question Generation** — Adaptive questions based on your knowledge level
- **Research Assistant** — Deep research with multi-agent collaboration
- **Co-Writer** — Interactive idea generation and writing assistance
- **Smart Notebook** — Organize and retrieve learning materials efficiently
- **Guided Learning** — Personalized learning paths and progress tracking
- **Multi-Agent System** — Specialized agents for different learning tasks
- **RAG Integration** — LightRAG and RAG-Anything for knowledge retrieval
- **Code Execution** — Built-in code playground for practice
## Quick Start
### Prerequisites
- Docker and Docker Compose
- OpenAI API key (required)
- Optional: Anthropic, Perplexity, or DashScope API keys
### Installation
1. **Clone this repository**
```bash
git clone <your-compose-anything-repo>
cd apps/deeptutor
```
2. **Configure environment**
```bash
cp .env.example .env
# Edit .env and add your API keys
```
**Required configuration:**
- `OPENAI_API_KEY` — Your OpenAI API key
**Optional configuration:**
- `ANTHROPIC_API_KEY` — For Claude models
- `PERPLEXITY_API_KEY` — For web search
- `DASHSCOPE_API_KEY` — For Alibaba Cloud models
- Adjust ports if needed (default: 8001 for backend, 3782 for frontend)
- Set `NEXT_PUBLIC_API_BASE_EXTERNAL` for cloud deployments
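For instance, to expose the frontend on host port 8080 without touching the container's internal ports, the relevant `.env` fragment might look like this (values are illustrative):

```env
# Host-side ports (what the browser connects to)
DEEPTUTOR_FRONTEND_PORT_OVERRIDE=8080
DEEPTUTOR_BACKEND_PORT_OVERRIDE=8001

# Container-internal ports (usually left at their defaults)
FRONTEND_PORT=3782
BACKEND_PORT=8001
```

The `*_PORT_OVERRIDE` variables change only the host side of the compose port mappings, so services inside the container keep using the internal ports.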
3. **Optional: Custom agent configuration**
Create a `config/agents.yaml` file to customize agent behaviors (see [documentation](https://hkuds.github.io/DeepTutor/guide/config.html) for details).
4. **Start the service**
```bash
docker compose up -d
```
First run takes approximately 30-60 seconds to initialize.
5. **Access the application**
- **Frontend:** <http://localhost:3782>
- **Backend API:** <http://localhost:8001>
- **API Documentation:** <http://localhost:8001/docs>
## Usage
### Create Knowledge Base
1. Navigate to <http://localhost:3782/knowledge>
2. Click "New Knowledge Base"
3. Upload documents (supports PDF, DOCX, TXT, Markdown, HTML, etc.)
4. Wait for processing to complete
### Learning Modes
- **Solve** — Get step-by-step solutions to problems
- **Question** — Generate practice questions based on your materials
- **Research** — Deep research with multi-agent collaboration
- **Co-Writer** — Interactive writing and idea generation
- **Notebook** — Organize and manage your learning materials
- **Guide** — Follow personalized learning paths
### Advanced Features
- **Code Execution** — Practice coding directly in the interface
- **Visual Diagrams** — Automatic diagram generation for complex concepts
- **Export** — Download your work as PDF or Markdown
- **Multi-language** — Support for multiple languages
## Configuration
### Environment Variables
Key environment variables (see [.env.example](.env.example) for all options):
| Variable | Default | Description |
| ------------------------ | ---------- | ------------------------- |
| `OPENAI_API_KEY` | (required) | Your OpenAI API key |
| `DEFAULT_MODEL` | `gpt-4o` | Default LLM model |
| `BACKEND_PORT` | `8001` | Backend server port |
| `FRONTEND_PORT` | `3782` | Frontend application port |
| `DEEPTUTOR_CPU_LIMIT` | `4.00` | CPU limit (cores) |
| `DEEPTUTOR_MEMORY_LIMIT` | `8G` | Memory limit |
### Ports
- **8001** — Backend API server
- **3782** — Frontend web interface
### Volumes
- `deeptutor_data` — User data, knowledge bases, and learning materials
- `./config` — Custom agent configurations (optional)
## Resource Requirements
**Minimum:**
- CPU: 1 core
- Memory: 2GB
- Disk: 2GB + space for knowledge bases
**Recommended:**
- CPU: 4 cores
- Memory: 8GB
- Disk: 10GB+
## Supported Models
DeepTutor supports multiple LLM providers:
- **OpenAI** — GPT-4, GPT-4 Turbo, GPT-3.5 Turbo
- **Anthropic** — Claude 3 (Opus, Sonnet, Haiku)
- **Perplexity** — For web search integration
- **DashScope** — Alibaba Cloud models
- **OpenAI-compatible APIs** — Any API compatible with OpenAI format
## Troubleshooting
### Backend fails to start
- Verify `OPENAI_API_KEY` is set correctly in `.env`
- Check logs: `docker compose logs -f`
- Ensure ports 8001 and 3782 are not in use
- Verify sufficient disk space for volumes
### Frontend cannot connect to backend
- Confirm backend is running: visit <http://localhost:8001/docs>
- For cloud deployments, set `NEXT_PUBLIC_API_BASE_EXTERNAL` to your public URL
- Check firewall settings
### Knowledge base processing fails
- Ensure sufficient memory (recommended 8GB+)
- Check document format is supported
- Review logs for specific errors
### API rate limits
- Monitor your API usage on provider dashboards
- Consider upgrading your API plan
- Use different models for different tasks
## Security Notes
- **API Keys** — Keep your API keys secure, never commit them to version control
- **Network Exposure** — For production deployments, use HTTPS and proper authentication
- **Data Privacy** — User data is stored in Docker volumes; ensure proper backup and security
- **Resource Limits** — Set appropriate CPU and memory limits to prevent resource exhaustion
## Updates
To update to the latest version:
```bash
# Pull the latest image
docker compose pull
# Recreate containers
docker compose up -d
```
To update to a specific version, edit `DEEPTUTOR_VERSION` in `.env` and run:
```bash
docker compose up -d
```
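Pinning can also be scripted. A sketch using `sed`; the tag `v0.5.0` is illustrative, so check the container registry for tags that actually exist:

```shell
# Start from the template if no .env exists yet
[ -f .env ] || cp .env.example .env 2>/dev/null || printf 'DEEPTUTOR_VERSION=latest\n' > .env

# Pin the release tag in .env, keeping a .bak copy of the previous file
sed -i.bak 's/^DEEPTUTOR_VERSION=.*/DEEPTUTOR_VERSION=v0.5.0/' .env
grep '^DEEPTUTOR_VERSION=' .env
# -> DEEPTUTOR_VERSION=v0.5.0
```

Then run `docker compose up -d` to recreate the container on the pinned tag.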
## Advanced Usage
### Custom Agent Configuration
Create `config/agents.yaml` to customize agent behaviors:
```yaml
agents:
solver:
model: gpt-4o
temperature: 0.7
researcher:
model: gpt-4-turbo
max_tokens: 4000
```
See [official documentation](https://hkuds.github.io/DeepTutor/guide/config.html) for detailed configuration options.
### Cloud Deployment
For cloud deployment, additional configuration is needed:
1. Set public URL in `.env`:
```env
NEXT_PUBLIC_API_BASE_EXTERNAL=https://your-domain.com:8001
```
2. Configure reverse proxy (nginx/Caddy) for HTTPS
3. Ensure proper firewall rules
4. Consider using environment-specific secrets management
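A minimal reverse-proxy sketch for step 2, assuming Caddy and a hypothetical domain (`your-domain.com`); Caddy provisions HTTPS automatically for public hostnames:

```
your-domain.com {
    # Frontend web interface
    reverse_proxy localhost:3782
}

api.your-domain.com {
    # Backend API; point NEXT_PUBLIC_API_BASE_EXTERNAL at this hostname
    reverse_proxy localhost:8001
}
```

With a split like this, `NEXT_PUBLIC_API_BASE_EXTERNAL=https://api.your-domain.com` replaces the port-suffixed URL shown above.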
### Using Different Embedding Models
DeepTutor uses `text-embedding-3-large` by default. To use different embedding models, refer to the [official documentation](https://hkuds.github.io/DeepTutor/guide/config.html).
## Links
- **GitHub:** <https://github.com/HKUDS/DeepTutor>
- **Documentation:** <https://hkuds.github.io/DeepTutor/>
- **Issues:** <https://github.com/HKUDS/DeepTutor/issues>
- **Discussions:** <https://github.com/HKUDS/DeepTutor/discussions>
## License
DeepTutor is licensed under the Apache-2.0 License. See the [official repository](https://github.com/HKUDS/DeepTutor) for details.

apps/deeptutor/README.zh.md

@@ -0,0 +1,248 @@
# DeepTutor
中文说明 | [English](README.md)
## 概述
DeepTutor 是一个 AI 驱动的个性化学习助手,通过多智能体系统将任何文档转化为交互式学习体验。它可以帮助您解决问题、生成题目、进行研究、协作写作、整理笔记,并引导您完成学习路径。
**项目地址:** <https://github.com/HKUDS/DeepTutor>
**许可证:** Apache-2.0
**文档:** <https://hkuds.github.io/DeepTutor/>
## 功能特性
- **问题求解** — 提供详细的分步解决方案和可视化图表
- **题目生成** — 根据您的知识水平生成自适应题目
- **研究助手** — 通过多智能体协作进行深度研究
- **协作写作** — 交互式创意生成和写作辅助
- **智能笔记** — 高效组织和检索学习材料
- **引导学习** — 个性化学习路径和进度跟踪
- **多智能体系统** — 针对不同学习任务的专业智能体
- **RAG 集成** — 使用 LightRAG 和 RAG-Anything 进行知识检索
- **代码执行** — 内置代码练习环境
## 快速开始
### 前置要求
- Docker 和 Docker Compose
- OpenAI API 密钥(必需)
- 可选:Anthropic、Perplexity 或 DashScope API 密钥
### 安装步骤
1. **克隆仓库**
```bash
git clone <your-compose-anything-repo>
cd apps/deeptutor
```
2. **配置环境变量**
```bash
cp .env.example .env
# 编辑 .env 文件并添加您的 API 密钥
```
**必需配置:**
- `OPENAI_API_KEY` — 您的 OpenAI API 密钥
**可选配置:**
- `ANTHROPIC_API_KEY` — 用于 Claude 模型
- `PERPLEXITY_API_KEY` — 用于网络搜索
- `DASHSCOPE_API_KEY` — 用于阿里云模型
- 如需调整端口(默认:后端 8001,前端 3782)
- 云端部署时设置 `NEXT_PUBLIC_API_BASE_EXTERNAL`
3. **可选:自定义智能体配置**
创建 `config/agents.yaml` 文件以自定义智能体行为(详见[文档](https://hkuds.github.io/DeepTutor/guide/config.html))。
4. **启动服务**
```bash
docker compose up -d
```
首次运行需要约 30-60 秒初始化。
5. **访问应用**
- **前端界面:** <http://localhost:3782>
- **后端 API:** <http://localhost:8001>
- **API 文档:** <http://localhost:8001/docs>
## 使用方法
### 创建知识库
1. 访问 <http://localhost:3782/knowledge>
2. 点击"新建知识库"
3. 上传文档(支持 PDF、DOCX、TXT、Markdown、HTML 等)
4. 等待处理完成
### 学习模式
- **求解(Solve)** — 获取问题的分步解决方案
- **题目(Question)** — 基于学习材料生成练习题
- **研究(Research)** — 通过多智能体协作进行深度研究
- **协作写作(Co-Writer)** — 交互式写作和创意生成
- **笔记(Notebook)** — 组织和管理学习材料
- **引导(Guide)** — 遵循个性化学习路径
### 高级功能
- **代码执行** — 在界面中直接练习编码
- **可视化图表** — 为复杂概念自动生成图表
- **导出** — 将您的工作下载为 PDF 或 Markdown
- **多语言支持** — 支持多种语言
## 配置说明
### 环境变量
主要环境变量(所有选项见 [.env.example](.env.example)):
| 变量 | 默认值 | 描述 |
| ------------------------ | -------- | -------------------- |
| `OPENAI_API_KEY` | (必需) | 您的 OpenAI API 密钥 |
| `DEFAULT_MODEL` | `gpt-4o` | 默认 LLM 模型 |
| `BACKEND_PORT` | `8001` | 后端服务器端口 |
| `FRONTEND_PORT` | `3782` | 前端应用端口 |
| `DEEPTUTOR_CPU_LIMIT` | `4.00` | CPU 限制(核心数) |
| `DEEPTUTOR_MEMORY_LIMIT` | `8G` | 内存限制 |
### 端口说明
- **8001** — 后端 API 服务器
- **3782** — 前端 Web 界面
### 数据卷
- `deeptutor_data` — 用户数据、知识库和学习材料
- `./config` — 自定义智能体配置(可选)
## 资源要求
**最低配置:**
- CPU:1 核心
- 内存:2GB
- 磁盘:2GB + 知识库所需空间
**推荐配置:**
- CPU:4 核心
- 内存:8GB
- 磁盘:10GB+
## 支持的模型
DeepTutor 支持多个 LLM 提供商:
- **OpenAI** — GPT-4、GPT-4 Turbo、GPT-3.5 Turbo
- **Anthropic** — Claude 3(Opus、Sonnet、Haiku)
- **Perplexity** — 用于网络搜索集成
- **DashScope** — 阿里云模型
- **OpenAI 兼容 API** — 任何与 OpenAI 格式兼容的 API
## 故障排查
### 后端启动失败
- 验证 `.env` 中的 `OPENAI_API_KEY` 是否正确设置
- 查看日志:`docker compose logs -f`
- 确保端口 8001 和 3782 未被占用
- 验证数据卷有足够的磁盘空间
### 前端无法连接后端
- 确认后端正在运行:访问 <http://localhost:8001/docs>
- 云端部署时,将 `NEXT_PUBLIC_API_BASE_EXTERNAL` 设置为您的公网 URL
- 检查防火墙设置
### 知识库处理失败
- 确保有足够的内存(推荐 8GB+)
- 检查文档格式是否支持
- 查看日志了解具体错误
### API 速率限制
- 在提供商控制台监控 API 使用情况
- 考虑升级 API 计划
- 为不同任务使用不同模型
## 安全提示
- **API 密钥** — 妥善保管您的 API 密钥,切勿提交到版本控制系统
- **网络暴露** — 生产环境部署时,使用 HTTPS 和适当的身份验证
- **数据隐私** — 用户数据存储在 Docker 卷中,请确保适当的备份和安全措施
- **资源限制** — 设置合适的 CPU 和内存限制以防止资源耗尽
## 更新
更新到最新版本:
```bash
# 拉取最新镜像
docker compose pull
# 重新创建容器
docker compose up -d
```
更新到特定版本,编辑 `.env` 中的 `DEEPTUTOR_VERSION` 并运行:
```bash
docker compose up -d
```
## 高级用法
### 自定义智能体配置
创建 `config/agents.yaml` 以自定义智能体行为:
```yaml
agents:
solver:
model: gpt-4o
temperature: 0.7
researcher:
model: gpt-4-turbo
max_tokens: 4000
```
详细配置选项请参见[官方文档](https://hkuds.github.io/DeepTutor/guide/config.html)。
### 云端部署
云端部署需要额外配置:
1. 在 `.env` 中设置公网 URL:
```env
NEXT_PUBLIC_API_BASE_EXTERNAL=https://your-domain.com:8001
```
2. 配置反向代理(nginx/Caddy)以支持 HTTPS
3. 确保适当的防火墙规则
4. 考虑使用特定环境的密钥管理
### 使用不同的嵌入模型
DeepTutor 默认使用 `text-embedding-3-large`。要使用不同的嵌入模型,请参考[官方文档](https://hkuds.github.io/DeepTutor/guide/config.html)。
## 相关链接
- **GitHub:** <https://github.com/HKUDS/DeepTutor>
- **文档:** <https://hkuds.github.io/DeepTutor/>
- **问题反馈:** <https://github.com/HKUDS/DeepTutor/issues>
- **讨论区:** <https://github.com/HKUDS/DeepTutor/discussions>
## 许可证
DeepTutor 使用 Apache-2.0 许可证。详情请参见[官方仓库](https://github.com/HKUDS/DeepTutor)。


@@ -0,0 +1,68 @@
# DeepTutor: AI-Powered Personalized Learning Assistant
# https://github.com/HKUDS/DeepTutor
# Transform any document into an interactive learning experience with multi-agent intelligence
x-defaults: &defaults
  restart: unless-stopped
  logging:
    driver: json-file
    options:
      max-size: 100m
      max-file: "3"

services:
  deeptutor:
    <<: *defaults
    image: ${GLOBAL_REGISTRY:-ghcr.io}/hkuds/deeptutor:${DEEPTUTOR_VERSION:-latest}
    ports:
      - "${DEEPTUTOR_BACKEND_PORT_OVERRIDE:-8001}:${BACKEND_PORT:-8001}"
      - "${DEEPTUTOR_FRONTEND_PORT_OVERRIDE:-3782}:${FRONTEND_PORT:-3782}"
    volumes:
      - deeptutor_data:/app/data
      - ./config:/app/config:ro
    environment:
      - TZ=${TZ:-UTC}
      # Backend port
      - BACKEND_PORT=${BACKEND_PORT:-8001}
      # Frontend port
      - FRONTEND_PORT=${FRONTEND_PORT:-3782}
      # API base URLs
      - NEXT_PUBLIC_API_BASE=${NEXT_PUBLIC_API_BASE:-http://localhost:8001}
      - NEXT_PUBLIC_API_BASE_EXTERNAL=${NEXT_PUBLIC_API_BASE_EXTERNAL:-http://localhost:8001}
      # LLM API keys
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - OPENAI_BASE_URL=${OPENAI_BASE_URL:-https://api.openai.com/v1}
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
      - PERPLEXITY_API_KEY=${PERPLEXITY_API_KEY:-}
      - DASHSCOPE_API_KEY=${DASHSCOPE_API_KEY:-}
      # Default LLM model
      - DEFAULT_MODEL=${DEFAULT_MODEL:-gpt-4o}
      # User and group IDs for permission management
      - PUID=${PUID:-1000}
      - PGID=${PGID:-1000}
    healthcheck:
      # CMD-SHELL runs the probe through a shell; the exec ("CMD") form would
      # pass "||" as a literal argument and the fallback would never run.
      test: ["CMD-SHELL", "curl -f http://localhost:${BACKEND_PORT:-8001}/health || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s
    deploy:
      resources:
        limits:
          cpus: ${DEEPTUTOR_CPU_LIMIT:-4.00}
          memory: ${DEEPTUTOR_MEMORY_LIMIT:-8G}
        reservations:
          cpus: ${DEEPTUTOR_CPU_RESERVATION:-1.00}
          memory: ${DEEPTUTOR_MEMORY_RESERVATION:-2G}

volumes:
  deeptutor_data:
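One subtlety worth noting for healthchecks: Docker's exec (`CMD`) form passes every array element to the probe binary as a literal argument, so a shell operator like `||` is never interpreted; only the `CMD-SHELL` form runs the string under `sh -c`. A minimal shell illustration:

```shell
# Under sh -c (what CMD-SHELL does), "||" short-circuits as expected:
sh -c 'false || echo healthcheck-fallback'
# -> healthcheck-fallback

# In the exec form, a probe like curl would instead receive "||", "exit", "1"
# as ordinary arguments, e.g. treating "||" as a URL to fetch.
```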