# DeepTutor Configuration
# Copy this file to .env and fill in your API keys

#! ==================================================
#! General Settings
#! ==================================================
# Timezone (default: UTC)
TZ=UTC
# User and Group ID for file permissions (default: 1000)
# Adjust if your host user has a different UID/GID
PUID=1000
PGID=1000
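# Example (illustrative values): match the IDs of your host user, e.g. as reported
# by `id -u` and `id -g`, so files written to bind mounts stay editable on the host
# PUID=1001
# PGID=1001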
# Global registry prefix (optional)
# Example: registry.example.com/ or leave empty for Docker Hub/GHCR
GLOBAL_REGISTRY=

#! ==================================================
#! DeepTutor Version
#! ==================================================
# Image version (default: latest)
# Available tags: latest, v0.5.x
# See: https://github.com/HKUDS/DeepTutor/pkgs/container/deeptutor
DEEPTUTOR_VERSION=latest
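# Example (hypothetical tag, check the registry link above for real ones): pin to a
# specific release instead of tracking latest
# DEEPTUTOR_VERSION=v0.5.0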

#! ==================================================
#! Port Configuration
#! ==================================================
# Backend port (internal: 8001)
BACKEND_PORT=8001
# Host port override for backend
DEEPTUTOR_BACKEND_PORT_OVERRIDE=8001
# Frontend port (internal: 3782)
FRONTEND_PORT=3782
# Host port override for frontend
DEEPTUTOR_FRONTEND_PORT_OVERRIDE=3782
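# Example (assumption: only the host-side ports change): pick alternate host ports
# if 8001/3782 are already in use, and update NEXT_PUBLIC_API_BASE* below to match
# the new backend port
# DEEPTUTOR_BACKEND_PORT_OVERRIDE=18001
# DEEPTUTOR_FRONTEND_PORT_OVERRIDE=13782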

#! ==================================================
#! API Base URLs
#! ==================================================
# Internal API base URL (used by the frontend to communicate with the backend)
NEXT_PUBLIC_API_BASE=http://localhost:8001
# External API base URL (for cloud deployment, set to your public URL)
# Example: https://your-server.com:8001
# For local deployment, use the same value as NEXT_PUBLIC_API_BASE
NEXT_PUBLIC_API_BASE_EXTERNAL=http://localhost:8001
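# Example (hypothetical domain): cloud deployment where the browser reaches the
# backend through a public URL
# NEXT_PUBLIC_API_BASE_EXTERNAL=https://tutor.example.com:8001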

#! ==================================================
#! LLM API Keys (Required)
#! ==================================================
# OpenAI API Key (Required)
# Get from: https://platform.openai.com/api-keys
OPENAI_API_KEY=sk-your-openai-api-key-here
# OpenAI Base URL (default: https://api.openai.com/v1)
# For OpenAI-compatible APIs (e.g., Azure OpenAI, custom endpoints)
OPENAI_BASE_URL=https://api.openai.com/v1
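# Example (assumption: an OpenAI-compatible server is reachable at this address,
# e.g. a local llama.cpp server exposing /v1):
# OPENAI_BASE_URL=http://localhost:8080/v1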
# Default LLM Model (default: gpt-4o)
# Options: gpt-4o, gpt-4-turbo, gpt-4, gpt-3.5-turbo, etc.
DEFAULT_MODEL=gpt-4o

#! ==================================================
#! Additional LLM API Keys (Optional)
#! ==================================================
# Anthropic API Key (Optional, for Claude models)
# Get from: https://console.anthropic.com/
ANTHROPIC_API_KEY=
# Perplexity API Key (Optional, for web search)
# Get from: https://www.perplexity.ai/settings/api
PERPLEXITY_API_KEY=
# DashScope API Key (Optional, for Alibaba Cloud models)
# Get from: https://dashscope.console.aliyun.com/
DASHSCOPE_API_KEY=

#! ==================================================
#! Resource Limits
#! ==================================================
# CPU limits (default: 4.00 cores limit, 1.00 core reservation)
DEEPTUTOR_CPU_LIMIT=4.00
DEEPTUTOR_CPU_RESERVATION=1.00
# Memory limits (default: 8G limit, 2G reservation)
DEEPTUTOR_MEMORY_LIMIT=8G
DEEPTUTOR_MEMORY_RESERVATION=2G
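# Example (illustrative values): a smaller footprint for a modest host,
# e.g. roughly 2 cores / 4 GB of RAM available to Docker
# DEEPTUTOR_CPU_LIMIT=2.00
# DEEPTUTOR_CPU_RESERVATION=0.50
# DEEPTUTOR_MEMORY_LIMIT=4G
# DEEPTUTOR_MEMORY_RESERVATION=1G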