feat: add services
- Introduced Convex, an open-source reactive database, with README and environment variable configurations.
- Added Chinese translation for Convex documentation.
- Created docker-compose configuration for Convex services.
- Introduced llama-swap, a model-swapping proxy for OpenAI/Anthropic-compatible servers, with comprehensive README and example configuration.
- Added Chinese translation for llama-swap documentation.
- Included example environment file and docker-compose setup for llama-swap.
- Configured health checks and resource limits for both Convex and llama-swap services.
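The health checks and resource limits mentioned above typically follow the standard docker-compose pattern. A minimal sketch of that pattern for the Convex service (the image tag, port, endpoint, and limit values here are illustrative assumptions, not the actual values from this commit; the llama-swap service would follow the same shape):

```yaml
# Hypothetical sketch — image, port, endpoint, and limits are assumed,
# not taken from the compose files added in this commit.
services:
  convex:
    image: ghcr.io/get-convex/convex-backend:latest  # assumed image reference
    ports:
      - "3210:3210"  # assumed backend port
    healthcheck:
      # Probe an HTTP endpoint inside the container; compose marks the
      # service unhealthy after `retries` consecutive failures.
      test: ["CMD", "curl", "-f", "http://localhost:3210/"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    deploy:
      resources:
        limits:
          cpus: "2.0"     # cap CPU usage
          memory: 2048M   # cap memory usage
```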
@@ -1,7 +1,6 @@
 # Use the official vllm image for gpu with Ampere、Ada Lovelace、Hopper architecture (8.0 <= Compute Capability <= 9.0)
 # Compute Capability version query (https://developer.nvidia.com/cuda-gpus)
 # only support x86_64 architecture
-FROM vllm/vllm-openai:v0.10.1.1
+FROM vllm/vllm-openai:v0.10.2
 
 # Use the official vllm image for gpu with Volta、Turing、Blackwell architecture (7.0 < Compute Capability < 8.0 or Compute Capability >= 10.0)
 # support x86_64 architecture and ARM(AArch64) architecture