feat: add more services

Author: Sun-ZhenXing
Date: 2025-10-02 17:46:58 +08:00
parent 30014852ca
commit f330e00fa0
24 changed files with 1489 additions and 0 deletions


@@ -6,10 +6,14 @@ Compose Anything helps users quickly deploy various services by providing a set
| Service | Version |
| -------------------------------------------------------- | ---------------------------- |
| [Apache HTTP Server](./src/apache) | 2.4.62 |
| [Apache APISIX](./src/apisix) | 3.13.0 |
| [Bifrost Gateway](./src/bifrost-gateway) | 1.2.15 |
| [Apache Cassandra](./src/cassandra) | 5.0.2 |
| [Clash](./src/clash) | 1.18.0 |
| [HashiCorp Consul](./src/consul) | 1.20.3 |
| [Docker Registry](./src/docker-registry) | 3.0.0 |
| [Elasticsearch](./src/elasticsearch) | 8.16.1 |
| [etcd](./src/etcd) | 3.6.0 |
| [frpc](./src/frpc) | 0.64.0 |
| [frps](./src/frps) | 0.64.0 |
@@ -18,7 +22,13 @@ Compose Anything helps users quickly deploy various services by providing a set
| [GitLab](./src/gitlab) | 17.10.4-ce.0 |
| [GitLab Runner](./src/gitlab-runner) | 17.10.1 |
| [Grafana](./src/grafana) | 12.1.1 |
| [Harbor](./src/harbor) | v2.12.0 |
| [IOPaint](./src/io-paint) | latest |
| [Jenkins](./src/jenkins) | 2.486-lts |
| [Apache Kafka](./src/kafka) | 7.8.0 |
| [Kibana](./src/kibana) | 8.16.1 |
| [Kong](./src/kong) | 3.8.0 |
| [Logstash](./src/logstash) | 8.16.1 |
| [Milvus Standalone](./src/milvus-standalone) | 2.6.2 |
| [Milvus Standalone Embed](./src/milvus-standalone-embed) | 2.6.2 |
| [MinerU SGALNG](./src/mineru-sgalng) | 2.2.2 |
@@ -27,6 +37,7 @@ Compose Anything helps users quickly deploy various services by providing a set
| [MongoDB ReplicaSet](./src/mongodb-replicaset) | 8.0.13 |
| [MongoDB Standalone](./src/mongodb-standalone) | 8.0.13 |
| [MySQL](./src/mysql) | 9.4.0 |
| [Nginx](./src/nginx) | 1.29.1 |
| [Ollama](./src/ollama) | 0.12.0 |
| [Open WebUI](./src/open-webui) | main |
| [OpenCut](./src/opencut) | latest |


@@ -6,16 +6,29 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
| 服务 | 版本 |
| -------------------------------------------------------- | ---------------------------- |
| [Apache HTTP Server](./src/apache) | 2.4.62 |
| [Apache APISIX](./src/apisix) | 3.13.0 |
| [Bifrost Gateway](./src/bifrost-gateway) | 1.2.15 |
| [Apache Cassandra](./src/cassandra) | 5.0.2 |
| [Clash](./src/clash) | 1.18.0 |
| [HashiCorp Consul](./src/consul) | 1.20.3 |
| [Docker Registry](./src/docker-registry) | 3.0.0 |
| [Elasticsearch](./src/elasticsearch) | 8.16.1 |
| [etcd](./src/etcd) | 3.6.0 |
| [frpc](./src/frpc) | 0.64.0 |
| [frps](./src/frps) | 0.64.0 |
| [Gitea](./src/gitea) | 1.24.6 |
| [Gitea Runner](./src/gitea-runner) | 0.2.12 |
| [GitLab](./src/gitlab) | 17.10.4-ce.0 |
| [GitLab Runner](./src/gitlab-runner) | 17.10.1 |
| [Grafana](./src/grafana) | 12.1.1 |
| [Harbor](./src/harbor) | v2.12.0 |
| [IOPaint](./src/io-paint) | latest |
| [Jenkins](./src/jenkins) | 2.486-lts |
| [Apache Kafka](./src/kafka) | 7.8.0 |
| [Kibana](./src/kibana) | 8.16.1 |
| [Kong](./src/kong) | 3.8.0 |
| [Logstash](./src/logstash) | 8.16.1 |
| [Milvus Standalone](./src/milvus-standalone) | 2.6.2 |
| [Milvus Standalone Embed](./src/milvus-standalone-embed) | 2.6.2 |
| [MinerU SGALNG](./src/mineru-sgalng) | 2.2.2 |
@@ -24,11 +37,13 @@ Compose Anything 通过提供一组高质量的 Docker Compose 配置文件,
| [MongoDB ReplicaSet](./src/mongodb-replicaset) | 8.0.13 |
| [MongoDB Standalone](./src/mongodb-standalone) | 8.0.13 |
| [MySQL](./src/mysql) | 9.4.0 |
| [Nginx](./src/nginx) | 1.29.1 |
| [Ollama](./src/ollama) | 0.12.0 |
| [Open WebUI](./src/open-webui) | main |
| [OpenCut](./src/opencut) | latest |
| [PocketBase](./src/pocketbase) | 0.30.0 |
| [PostgreSQL](./src/postgres) | 17.6 |
| [Prometheus](./src/prometheus) | 3.5.0 |
| [Qdrant](./src/qdrant) | 1.15.4 |
| [RabbitMQ](./src/rabbitmq) | 4.1.4 |
| [Redis](./src/redis) | 8.2.1 |

src/apache/README.md

@@ -0,0 +1,62 @@
# Apache HTTP Server
[English](./README.md) | [中文](./README.zh.md)
This service deploys Apache HTTP Server, a popular open-source web server.
## Services
- `apache`: The Apache HTTP Server service.
## Environment Variables
| Variable Name | Description | Default Value |
| -------------------------- | ---------------------------------------------- | ------------------- |
| APACHE_VERSION | Apache HTTP Server image version | `2.4.62-alpine3.20` |
| APACHE_HTTP_PORT_OVERRIDE | Host port mapping for HTTP (maps to port 80) | 80 |
| APACHE_HTTPS_PORT_OVERRIDE | Host port mapping for HTTPS (maps to port 443) | 443 |
| APACHE_RUN_USER | User to run Apache as | `www-data` |
| APACHE_RUN_GROUP | Group to run Apache as | `www-data` |
Please modify the `.env` file as needed for your use case.
## Volumes
- `apache_logs`: A volume for storing Apache logs.
- `./htdocs`: Directory for web content (mounted as read-only).
- `./httpd.conf`: Optional custom Apache configuration file.
- `./ssl`: Optional SSL certificates directory.
## Usage
1. Create the service directory structure:
```bash
mkdir -p htdocs
```
2. Add your web content to the `htdocs` directory:
```bash
echo "<h1>Hello World</h1>" > htdocs/index.html
```
3. Start the service:
```bash
docker compose up -d
```
4. Access the web server at `http://localhost` (or your configured port).
## Configuration
- Custom Apache configuration can be mounted at `/usr/local/apache2/conf/httpd.conf`
- SSL certificates can be mounted at `/usr/local/apache2/conf/ssl/`
- Web content should be placed in the `htdocs` directory
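The commented-out mounts in the compose file cover the points above. As a minimal sketch (assuming `openssl` is available on the host and the `./ssl` and `./httpd.conf` mounts are uncommented), you can generate a self-signed test certificate and extract the image's default configuration to use as a starting point:

```bash
# Self-signed certificate for local testing only; use real certificates in production
mkdir -p ssl
openssl req -x509 -newkey rsa:4096 -nodes \
  -keyout ssl/server.key -out ssl/server.crt \
  -days 365 -subj "/CN=localhost"

# Copy the default httpd.conf out of the image as a base for customization
docker run --rm httpd:2.4.62-alpine3.20 cat /usr/local/apache2/conf/httpd.conf > httpd.conf
```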
## Security Notes
- The default configuration runs Apache as the `www-data` user for security
- Consider using SSL/TLS certificates for production deployments
- Regularly update the Apache version to get security patches

src/apache/README.zh.md

@@ -0,0 +1,62 @@
# Apache HTTP 服务器
[English](./README.md) | [中文](./README.zh.md)
此服务部署 Apache HTTP 服务器,一个流行的开源 Web 服务器。
## 服务
- `apache`:Apache HTTP 服务器服务。
## 环境变量
| 变量名 | 描述 | 默认值 |
| -------------------------- | ------------------------------------ | ------------------- |
| APACHE_VERSION | Apache HTTP 服务器镜像版本 | `2.4.62-alpine3.20` |
| APACHE_HTTP_PORT_OVERRIDE  | HTTP 主机端口映射(映射到端口 80)   | 80                  |
| APACHE_HTTPS_PORT_OVERRIDE | HTTPS 主机端口映射(映射到端口 443) | 443                 |
| APACHE_RUN_USER | 运行 Apache 的用户 | `www-data` |
| APACHE_RUN_GROUP | 运行 Apache 的组 | `www-data` |
请根据您的使用情况修改 `.env` 文件。
## 卷
- `apache_logs`:用于存储 Apache 日志的卷。
- `./htdocs`:Web 内容目录(以只读方式挂载)。
- `./httpd.conf`:可选的自定义 Apache 配置文件。
- `./ssl`:可选的 SSL 证书目录。
## 使用方法
1. 创建服务目录结构:
```bash
mkdir -p htdocs
```
2. 将您的 Web 内容添加到 `htdocs` 目录:
```bash
echo "<h1>Hello World</h1>" > htdocs/index.html
```
3. 启动服务:
```bash
docker compose up -d
```
4. 在 `http://localhost`(或您配置的端口)访问 Web 服务器。
## 配置
- 自定义 Apache 配置可以挂载到 `/usr/local/apache2/conf/httpd.conf`
- SSL 证书可以挂载到 `/usr/local/apache2/conf/ssl/`
- Web 内容应放置在 `htdocs` 目录中
## 安全注意事项
- 默认配置以 `www-data` 用户身份运行 Apache 以确保安全
- 生产环境部署时考虑使用 SSL/TLS 证书
- 定期更新 Apache 版本以获取安全补丁


@@ -0,0 +1,41 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
apache:
<<: *default
image: httpd:${APACHE_VERSION:-2.4.62-alpine3.20}
container_name: apache
ports:
- "${APACHE_HTTP_PORT_OVERRIDE:-80}:80"
- "${APACHE_HTTPS_PORT_OVERRIDE:-443}:443"
volumes:
- *localtime
- *timezone
- apache_logs:/usr/local/apache2/logs
- ./htdocs:/usr/local/apache2/htdocs:ro
# Custom configuration
# - ./httpd.conf:/usr/local/apache2/conf/httpd.conf:ro
# - ./ssl:/usr/local/apache2/conf/ssl:ro
environment:
- APACHE_RUN_USER=${APACHE_RUN_USER:-www-data}
- APACHE_RUN_GROUP=${APACHE_RUN_GROUP:-www-data}
deploy:
resources:
limits:
cpus: '1.00'
memory: 512M
reservations:
cpus: '0.25'
memory: 128M
volumes:
apache_logs:


@@ -0,0 +1,12 @@
<!DOCTYPE html>
<html>
<head>
<title>Apache HTTP Server</title>
<meta charset="utf-8">
</head>
<body>
<h1>Welcome to Apache HTTP Server</h1>
<p>If you can see this page, the Apache HTTP server is working correctly.</p>
<p>This is the default page provided by the compose-anything project.</p>
</body>
</html>

src/cassandra/README.md

@@ -0,0 +1,90 @@
# Apache Cassandra
[English](./README.md) | [中文](./README.zh.md)
This service deploys Apache Cassandra, a highly scalable NoSQL distributed database.
## Services
- `cassandra`: The Cassandra database service.
## Environment Variables
| Variable Name | Description | Default Value |
| ------------------------------ | ------------------------------------------------ | ----------------------------- |
| CASSANDRA_VERSION | Cassandra image version | `5.0.2` |
| CASSANDRA_CQL_PORT_OVERRIDE | Host port mapping for CQL (maps to port 9042) | 9042 |
| CASSANDRA_THRIFT_PORT_OVERRIDE | Host port mapping for Thrift (maps to port 9160) | 9160 |
| CASSANDRA_CLUSTER_NAME | Name of the Cassandra cluster | `Test Cluster` |
| CASSANDRA_DC | Datacenter name | `datacenter1` |
| CASSANDRA_RACK | Rack name | `rack1` |
| CASSANDRA_ENDPOINT_SNITCH | Endpoint snitch configuration | `GossipingPropertyFileSnitch` |
| CASSANDRA_NUM_TOKENS | Number of tokens per node | 256 |
| CASSANDRA_SEEDS | Seed nodes for cluster discovery | `cassandra` |
| CASSANDRA_START_RPC | Enable Thrift RPC interface | `false` |
| MAX_HEAP_SIZE | Maximum JVM heap size | `1G` |
| HEAP_NEWSIZE | JVM new generation heap size | `100M` |
Please modify the `.env` file as needed for your use case.
## Volumes
- `cassandra_data`: Cassandra data directory.
- `cassandra_logs`: Cassandra log directory.
- `./cassandra.yaml`: Optional custom Cassandra configuration file.
## Usage
1. Start the service:
```bash
docker compose up -d
```
2. Wait for Cassandra to be ready (check logs):
```bash
docker compose logs -f cassandra
```
3. Connect using cqlsh:
```bash
docker exec -it cassandra cqlsh
```
## Basic CQL Commands
```sql
-- Create a keyspace
CREATE KEYSPACE test_keyspace
WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
-- Use the keyspace
USE test_keyspace;
-- Create a table
CREATE TABLE users (
id UUID PRIMARY KEY,
name TEXT,
email TEXT
);
-- Insert data
INSERT INTO users (id, name, email)
VALUES (uuid(), 'John Doe', 'john@example.com');
-- Query data
SELECT * FROM users;
```
## Health Check
The service includes a health check that verifies Cassandra is responding to CQL queries.
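To inspect it manually, a couple of commands (assuming the container name `cassandra` from the compose file):

```bash
# Health status recorded by Docker for the container
docker inspect --format '{{ .State.Health.Status }}' cassandra

# Node and ring status as reported by Cassandra itself
docker exec cassandra nodetool status
```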
## Security Notes
- This configuration is for development/testing purposes
- For production, enable authentication and SSL/TLS
- Configure proper network security and firewall rules
- Regularly backup your data and update Cassandra version
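As a hedged sketch of the backup point above, `nodetool snapshot` takes an on-node snapshot (stored under the `cassandra_data` volume); the keyspace name here is the `test_keyspace` from the CQL example:

```bash
# Snapshot a single keyspace with an explicit tag
docker exec cassandra nodetool snapshot -t manual-backup test_keyspace

# List snapshots currently held by the node
docker exec cassandra nodetool listsnapshots
```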


@@ -0,0 +1,54 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
cassandra:
<<: *default
image: cassandra:${CASSANDRA_VERSION:-5.0.2}
container_name: cassandra
ports:
- "${CASSANDRA_CQL_PORT_OVERRIDE:-9042}:9042"
- "${CASSANDRA_THRIFT_PORT_OVERRIDE:-9160}:9160"
volumes:
- *localtime
- *timezone
- cassandra_data:/var/lib/cassandra
- cassandra_logs:/var/log/cassandra
# Custom configuration
# - ./cassandra.yaml:/etc/cassandra/cassandra.yaml:ro
environment:
- CASSANDRA_CLUSTER_NAME=${CASSANDRA_CLUSTER_NAME:-Test Cluster}
- CASSANDRA_DC=${CASSANDRA_DC:-datacenter1}
- CASSANDRA_RACK=${CASSANDRA_RACK:-rack1}
- CASSANDRA_ENDPOINT_SNITCH=${CASSANDRA_ENDPOINT_SNITCH:-GossipingPropertyFileSnitch}
- CASSANDRA_NUM_TOKENS=${CASSANDRA_NUM_TOKENS:-256}
- CASSANDRA_SEEDS=${CASSANDRA_SEEDS:-cassandra}
- CASSANDRA_START_RPC=${CASSANDRA_START_RPC:-false}
- MAX_HEAP_SIZE=${MAX_HEAP_SIZE:-1G}
- HEAP_NEWSIZE=${HEAP_NEWSIZE:-100M}
deploy:
resources:
limits:
cpus: '2.00'
memory: 2G
reservations:
cpus: '0.50'
memory: 1G
healthcheck:
test: ["CMD-SHELL", "cqlsh -e 'DESCRIBE CLUSTER'"]
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
volumes:
cassandra_data:
cassandra_logs:


@@ -0,0 +1,59 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
consul:
<<: *default
image: hashicorp/consul:${CONSUL_VERSION:-1.20.3}
container_name: consul
ports:
- "${CONSUL_HTTP_PORT_OVERRIDE:-8500}:8500"
- "${CONSUL_DNS_PORT_OVERRIDE:-8600}:8600/udp"
- "${CONSUL_SERF_LAN_PORT_OVERRIDE:-8301}:8301"
- "${CONSUL_SERF_WAN_PORT_OVERRIDE:-8302}:8302"
- "${CONSUL_SERVER_RPC_PORT_OVERRIDE:-8300}:8300"
volumes:
- *localtime
- *timezone
- consul_data:/consul/data
- consul_config:/consul/config
# Custom configuration
# - ./consul.json:/consul/config/consul.json:ro
environment:
- CONSUL_BIND_INTERFACE=${CONSUL_BIND_INTERFACE:-eth0}
- CONSUL_CLIENT_INTERFACE=${CONSUL_CLIENT_INTERFACE:-eth0}
- CONSUL_LOCAL_CONFIG=${CONSUL_LOCAL_CONFIG:-'{"datacenter":"dc1","server":true,"ui_config":{"enabled":true},"bootstrap_expect":1,"log_level":"INFO"}'}
command:
- consul
- agent
- -server
- -bootstrap-expect=1
- -ui
- -client=0.0.0.0
- -bind={{ GetInterfaceIP "eth0" }}
deploy:
resources:
limits:
cpus: '1.00'
memory: 512M
reservations:
cpus: '0.25'
memory: 128M
healthcheck:
test: ["CMD-SHELL", "consul members"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
volumes:
consul_data:
consul_config:
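Once the stack is up, a quick sanity check against the agent — a sketch using the bundled CLI and the HTTP API on the mapped port 8500:

```bash
# Cluster membership as seen by the local agent
docker exec consul consul members

# Current leader and a simple KV round-trip over the HTTP API
curl http://localhost:8500/v1/status/leader
curl -X PUT -d 'hello' http://localhost:8500/v1/kv/compose-anything/demo
curl http://localhost:8500/v1/kv/compose-anything/demo?raw
```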


@@ -0,0 +1,92 @@
# Elasticsearch
[English](./README.md) | [中文](./README.zh.md)
This service deploys Elasticsearch, a distributed search and analytics engine.
## Services
- `elasticsearch`: The Elasticsearch service.
## Environment Variables
| Variable Name | Description | Default Value |
| ------------------------------------- | --------------------------------------------------- | ---------------- |
| ELASTICSEARCH_VERSION | Elasticsearch image version | `8.16.1` |
| ELASTICSEARCH_HTTP_PORT_OVERRIDE | Host port mapping for HTTP (maps to port 9200) | 9200 |
| ELASTICSEARCH_TRANSPORT_PORT_OVERRIDE | Host port mapping for transport (maps to port 9300) | 9300 |
| ELASTICSEARCH_CLUSTER_NAME | Name of the Elasticsearch cluster | `docker-cluster` |
| ELASTICSEARCH_DISCOVERY_TYPE | Discovery type for single-node setup | `single-node` |
| ELASTICSEARCH_SECURITY_ENABLED | Enable X-Pack security features | `false` |
| ELASTICSEARCH_SSL_ENABLED | Enable SSL/TLS | `false` |
| ELASTICSEARCH_HEAP_SIZE | JVM heap size | `1g` |
Please modify the `.env` file as needed for your use case.
## Volumes
- `elasticsearch_data`: Elasticsearch data directory.
- `elasticsearch_logs`: Elasticsearch log directory.
- `./elasticsearch.yml`: Optional custom Elasticsearch configuration file.
## Usage
1. Start the service:
```bash
docker compose up -d
```
2. Wait for Elasticsearch to be ready:
```bash
docker compose logs -f elasticsearch
```
3. Test the connection:
```bash
curl http://localhost:9200
```
## Basic Operations
```bash
# Check cluster health
curl http://localhost:9200/_cluster/health
# List all indices
curl http://localhost:9200/_cat/indices?v
# Create an index
curl -X PUT "localhost:9200/my-index"
# Index a document
curl -X POST "localhost:9200/my-index/_doc" \
-H "Content-Type: application/json" \
-d '{"name": "John Doe", "age": 30}'
# Search documents
curl -X GET "localhost:9200/my-index/_search" \
-H "Content-Type: application/json" \
-d '{"query": {"match_all": {}}}'
```
## Memory Configuration
Elasticsearch requires sufficient memory to operate effectively. The default configuration allocates 1GB of heap memory. For production environments, consider:
- Setting `ELASTICSEARCH_HEAP_SIZE` to 50% of available RAM (but not more than 31GB)
- Ensuring the host has at least 2GB of RAM available
- Configuring swap memory appropriately
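For example, a small `.env` sketch for a host that can dedicate roughly 4 GB to the container (illustrative values only):

```bash
# .env – heap set to half of a ~4 GB container memory budget
ELASTICSEARCH_HEAP_SIZE=2g
# Raise deploy.resources.limits.memory in the compose file accordingly (e.g. 4G)
```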
## Health Check
The service includes a health check that verifies Elasticsearch cluster health.
## Security Notes
- This configuration disables security features for ease of development
- For production, enable X-Pack security, SSL/TLS, and authentication
- Configure proper network security and firewall rules
- Regularly backup your indices and update Elasticsearch version


@@ -0,0 +1,57 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
elasticsearch:
<<: *default
image: docker.elastic.co/elasticsearch/elasticsearch:${ELASTICSEARCH_VERSION:-8.16.1}
container_name: elasticsearch
ports:
- "${ELASTICSEARCH_HTTP_PORT_OVERRIDE:-9200}:9200"
- "${ELASTICSEARCH_TRANSPORT_PORT_OVERRIDE:-9300}:9300"
volumes:
- *localtime
- *timezone
- elasticsearch_data:/usr/share/elasticsearch/data
- elasticsearch_logs:/usr/share/elasticsearch/logs
# Custom configuration
# - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
environment:
- node.name=elasticsearch
- cluster.name=${ELASTICSEARCH_CLUSTER_NAME:-docker-cluster}
- discovery.type=${ELASTICSEARCH_DISCOVERY_TYPE:-single-node}
- bootstrap.memory_lock=true
- xpack.security.enabled=${ELASTICSEARCH_SECURITY_ENABLED:-false}
- xpack.security.http.ssl.enabled=${ELASTICSEARCH_SSL_ENABLED:-false}
- xpack.security.transport.ssl.enabled=${ELASTICSEARCH_SSL_ENABLED:-false}
- "ES_JAVA_OPTS=-Xms${ELASTICSEARCH_HEAP_SIZE:-1g} -Xmx${ELASTICSEARCH_HEAP_SIZE:-1g}"
ulimits:
memlock:
soft: -1
hard: -1
deploy:
resources:
limits:
cpus: '2.00'
memory: 2G
reservations:
cpus: '0.50'
memory: 1G
healthcheck:
test: ["CMD-SHELL", "curl -f http://localhost:9200/_cluster/health || exit 1"]
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
volumes:
elasticsearch_data:
elasticsearch_logs:


@@ -0,0 +1,150 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
# Harbor Core
harbor-core:
<<: *default
image: goharbor/harbor-core:${HARBOR_VERSION:-v2.12.0}
container_name: harbor-core
depends_on:
- harbor-db
- harbor-redis
volumes:
- *localtime
- *timezone
- harbor_config:/etc/core
- harbor_ca_download:/etc/core/ca
- harbor_secret:/etc/core/certificates
environment:
- CORE_SECRET=${HARBOR_CORE_SECRET:-}
- JOBSERVICE_SECRET=${HARBOR_JOBSERVICE_SECRET:-}
- DATABASE_TYPE=postgresql
- POSTGRESQL_HOST=harbor-db
- POSTGRESQL_PORT=5432
- POSTGRESQL_USERNAME=postgres
- POSTGRESQL_PASSWORD=${HARBOR_DB_PASSWORD:-password}
- POSTGRESQL_DATABASE=registry
- REGISTRY_URL=http://harbor-registry:5000
- TOKEN_SERVICE_URL=http://harbor-core:8080/service/token
- HARBOR_ADMIN_PASSWORD=${HARBOR_ADMIN_PASSWORD:-Harbor12345}
- CORE_URL=http://harbor-core:8080
- JOBSERVICE_URL=http://harbor-jobservice:8080
- REGISTRY_STORAGE_PROVIDER_NAME=filesystem
- READ_ONLY=false
- RELOAD_KEY=${HARBOR_RELOAD_KEY:-}
# Harbor JobService
harbor-jobservice:
<<: *default
image: goharbor/harbor-jobservice:${HARBOR_VERSION:-v2.12.0}
container_name: harbor-jobservice
depends_on:
- harbor-db
- harbor-redis
volumes:
- *localtime
- *timezone
- harbor_job_logs:/var/log/jobs
environment:
- CORE_SECRET=${HARBOR_CORE_SECRET:-}
- JOBSERVICE_SECRET=${HARBOR_JOBSERVICE_SECRET:-}
- CORE_URL=http://harbor-core:8080
- DATABASE_TYPE=postgresql
- POSTGRESQL_HOST=harbor-db
- POSTGRESQL_PORT=5432
- POSTGRESQL_USERNAME=postgres
- POSTGRESQL_PASSWORD=${HARBOR_DB_PASSWORD:-password}
- POSTGRESQL_DATABASE=registry
# Harbor Registry
harbor-registry:
<<: *default
image: goharbor/registry-photon:${HARBOR_VERSION:-v2.12.0}
container_name: harbor-registry
volumes:
- *localtime
- *timezone
- harbor_registry:/storage
environment:
- REGISTRY_HTTP_SECRET=${HARBOR_REGISTRY_SECRET:-}
# Harbor Portal (UI)
harbor-portal:
<<: *default
image: goharbor/harbor-portal:${HARBOR_VERSION:-v2.12.0}
container_name: harbor-portal
volumes:
- *localtime
- *timezone
# Harbor Proxy (Nginx)
harbor-proxy:
<<: *default
image: goharbor/nginx-photon:${HARBOR_VERSION:-v2.12.0}
container_name: harbor-proxy
ports:
- "${HARBOR_HTTP_PORT_OVERRIDE:-80}:8080"
- "${HARBOR_HTTPS_PORT_OVERRIDE:-443}:8443"
depends_on:
- harbor-core
- harbor-portal
- harbor-registry
volumes:
- *localtime
- *timezone
# Harbor Database
harbor-db:
<<: *default
image: goharbor/harbor-db:${HARBOR_VERSION:-v2.12.0}
container_name: harbor-db
volumes:
- *localtime
- *timezone
- harbor_db:/var/lib/postgresql/data
environment:
- POSTGRES_PASSWORD=${HARBOR_DB_PASSWORD:-password}
- POSTGRES_DB=registry
deploy:
resources:
limits:
cpus: '1.00'
memory: 1G
reservations:
cpus: '0.25'
memory: 256M
# Harbor Redis
harbor-redis:
<<: *default
image: goharbor/redis-photon:${HARBOR_VERSION:-v2.12.0}
container_name: harbor-redis
volumes:
- *localtime
- *timezone
- harbor_redis:/var/lib/redis
deploy:
resources:
limits:
cpus: '0.50'
memory: 256M
reservations:
cpus: '0.10'
memory: 64M
volumes:
harbor_config:
harbor_ca_download:
harbor_secret:
harbor_job_logs:
harbor_registry:
harbor_db:
harbor_redis:
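The `*_SECRET` variables referenced above default to empty strings, and Harbor expects them to be set. A hedged sketch for generating them into an `.env` file next to the compose file (variable names match the ones used above):

```bash
# Generate random secrets for the Harbor components and append them to .env
cat >> .env <<EOF
HARBOR_CORE_SECRET=$(openssl rand -hex 16)
HARBOR_JOBSERVICE_SECRET=$(openssl rand -hex 16)
HARBOR_REGISTRY_SECRET=$(openssl rand -hex 16)
HARBOR_DB_PASSWORD=$(openssl rand -hex 16)
HARBOR_ADMIN_PASSWORD=ChangeMe12345
EOF
```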

src/jenkins/README.md

@@ -0,0 +1,75 @@
# Jenkins
[English](./README.md) | [中文](./README.zh.md)
This service deploys Jenkins, an open-source automation server for CI/CD pipelines.
## Services
- `jenkins`: The Jenkins automation server.
## Environment Variables
| Variable Name | Description | Default Value |
| --------------------------- | ------------------------------------------------- | ----------------------------------------------- |
| JENKINS_VERSION | Jenkins image version | `2.486-lts-jdk17` |
| JENKINS_HTTP_PORT_OVERRIDE | Host port mapping for HTTP (maps to port 8080) | 8080 |
| JENKINS_AGENT_PORT_OVERRIDE | Host port mapping for agents (maps to port 50000) | 50000 |
| JENKINS_OPTS | Additional Jenkins options | `--httpPort=8080` |
| JAVA_OPTS | Java JVM options | `-Djenkins.install.runSetupWizard=false -Xmx2g` |
| CASC_JENKINS_CONFIG | Configuration as Code directory | `/var/jenkins_home/casc_configs` |
| JENKINS_USER_ID | User ID for Jenkins process | 1000 |
| JENKINS_GROUP_ID | Group ID for Jenkins process | 1000 |
Please modify the `.env` file as needed for your use case.
## Volumes
- `jenkins_home`: A volume for storing Jenkins data, configuration, and workspace.
- `/var/run/docker.sock`: Docker socket (read-only) for Docker-in-Docker functionality.
- `./jenkins.yaml`: Optional Configuration as Code file.
## Initial Setup
1. Start the service:
```bash
docker compose up -d
```
2. Get the initial admin password:
```bash
docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword
```
3. Access Jenkins at `http://localhost:8080`. Note that the default `JAVA_OPTS` above disables the setup wizard (`-Djenkins.install.runSetupWizard=false`); remove that flag if you want the guided setup wizard and the initial admin password flow.
## Configuration as Code
Jenkins can be configured using Configuration as Code (JCasC). Create a `jenkins.yaml` file with your configuration and mount it to `/var/jenkins_home/casc_configs/jenkins.yaml`.
Example configuration:
```yaml
jenkins:
systemMessage: "Jenkins configured automatically by Jenkins Configuration as Code plugin"
securityRealm:
local:
allowsSignup: false
users:
- id: admin
password: admin123
authorizationStrategy:
loggedInUsersCanDoAnything:
allowAnonymousRead: false
```
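To apply it, place the file where `CASC_JENKINS_CONFIG` points (the commented `./jenkins.yaml` mount in the compose file does exactly that) and recreate the container; this assumes the Configuration as Code plugin is installed:

```bash
# After uncommenting the jenkins.yaml volume mount in the compose file:
docker compose up -d --force-recreate jenkins
```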
## Security Notes
- Change the default admin password immediately after setup
- Consider using HTTPS for production deployments
- Regularly update Jenkins and its plugins for security patches
- Use proper authentication and authorization strategies
- Restrict access to the Jenkins web interface


@@ -0,0 +1,48 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
jenkins:
<<: *default
image: jenkins/jenkins:${JENKINS_VERSION:-2.486-lts-jdk17}
container_name: jenkins
ports:
- "${JENKINS_HTTP_PORT_OVERRIDE:-8080}:8080"
- "${JENKINS_AGENT_PORT_OVERRIDE:-50000}:50000"
volumes:
- *localtime
- *timezone
- jenkins_home:/var/jenkins_home
- /var/run/docker.sock:/var/run/docker.sock:ro
# Custom configuration
# - ./jenkins.yaml:/var/jenkins_home/casc_configs/jenkins.yaml:ro
environment:
- JENKINS_OPTS=${JENKINS_OPTS:---httpPort=8080}
- JAVA_OPTS=${JAVA_OPTS:--Djenkins.install.runSetupWizard=false -Xmx2g}
- CASC_JENKINS_CONFIG=${CASC_JENKINS_CONFIG:-/var/jenkins_home/casc_configs}
user: "${JENKINS_USER_ID:-1000}:${JENKINS_GROUP_ID:-1000}"
deploy:
resources:
limits:
cpus: '2.00'
memory: 3G
reservations:
cpus: '0.50'
memory: 1G
healthcheck:
test: ["CMD-SHELL", "curl -f http://localhost:8080/login || exit 1"]
interval: 30s
timeout: 10s
retries: 5
start_period: 120s
volumes:
jenkins_home:

src/kafka/README.md

@@ -0,0 +1,93 @@
# Apache Kafka
[English](./README.md) | [中文](./README.zh.md)
This service deploys Apache Kafka, a distributed streaming platform, along with Zookeeper and optional Kafka UI.
## Services
- `zookeeper`: Zookeeper service for Kafka coordination.
- `kafka`: The Kafka broker service.
- `kafka-ui`: Optional web UI for Kafka management (profile: `ui`).
## Environment Variables
| Variable Name | Description | Default Value |
| -------------------------------- | ---------------------------------------------------- | --------------- |
| KAFKA_VERSION | Kafka image version | `7.8.0` |
| KAFKA_UI_VERSION | Kafka UI image version | `latest` |
| ZOOKEEPER_CLIENT_PORT_OVERRIDE | Host port mapping for Zookeeper (maps to port 2181) | 2181 |
| KAFKA_BROKER_PORT_OVERRIDE | Host port mapping for Kafka (maps to port 9092) | 9092 |
| KAFKA_JMX_PORT_OVERRIDE | Host port mapping for JMX (maps to port 9999) | 9999 |
| KAFKA_UI_PORT_OVERRIDE | Host port mapping for Kafka UI (maps to port 8080) | 8080 |
| KAFKA_NUM_PARTITIONS | Default number of partitions for auto-created topics | 3 |
| KAFKA_DEFAULT_REPLICATION_FACTOR | Default replication factor | 1 |
| KAFKA_AUTO_CREATE_TOPICS_ENABLE | Enable automatic topic creation | `true` |
| KAFKA_DELETE_TOPIC_ENABLE | Enable topic deletion | `true` |
| KAFKA_LOG_RETENTION_HOURS | Log retention time in hours | 168 |
| KAFKA_LOG_SEGMENT_BYTES | Log segment size in bytes | 1073741824 |
| KAFKA_HEAP_OPTS | JVM heap options for Kafka | `-Xmx1G -Xms1G` |
| KAFKA_UI_READONLY | Set Kafka UI to readonly mode | `false` |
Please modify the `.env` file as needed for your use case.
## Volumes
- `zookeeper_data`: Zookeeper data directory.
- `zookeeper_log`: Zookeeper log directory.
- `kafka_data`: Kafka data directory.
## Usage
1. Start Kafka with Zookeeper:
```bash
docker compose up -d
```
2. Start with Kafka UI (optional):
```bash
docker compose --profile ui up -d
```
3. Access Kafka UI at `http://localhost:8080` (if enabled).
## Testing Kafka
1. Create a topic:
```bash
docker exec kafka kafka-topics --create --topic test-topic --bootstrap-server localhost:9092 --partitions 3 --replication-factor 1
```
2. List topics:
```bash
docker exec kafka kafka-topics --list --bootstrap-server localhost:9092
```
3. Produce messages:
```bash
docker exec -it kafka kafka-console-producer --topic test-topic --bootstrap-server localhost:9092
```
4. Consume messages:
```bash
docker exec -it kafka kafka-console-consumer --topic test-topic --from-beginning --bootstrap-server localhost:9092
```
## Configuration
- Kafka is configured for single-node deployment by default
- For production, consider adjusting replication factor and other settings
- Custom Kafka configuration can be added via environment variables
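For example, a small `.env` sketch overriding some of the defaults listed above (values are illustrative only):

```bash
# .env – more partitions, shorter retention, no automatic topic creation
KAFKA_NUM_PARTITIONS=6
KAFKA_LOG_RETENTION_HOURS=72
KAFKA_AUTO_CREATE_TOPICS_ENABLE=false
```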
## Security Notes
- This configuration is for development/testing purposes
- For production, enable SSL/SASL authentication
- Secure Zookeeper communication
- Regularly update Kafka version for security patches


@@ -0,0 +1,122 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
# Zookeeper for Kafka coordination
zookeeper:
<<: *default
image: confluentinc/cp-zookeeper:${KAFKA_VERSION:-7.8.0}
container_name: zookeeper
ports:
- "${ZOOKEEPER_CLIENT_PORT_OVERRIDE:-2181}:2181"
volumes:
- *localtime
- *timezone
- zookeeper_data:/var/lib/zookeeper/data
- zookeeper_log:/var/lib/zookeeper/log
environment:
- ZOOKEEPER_CLIENT_PORT=2181
- ZOOKEEPER_TICK_TIME=2000
- ZOOKEEPER_SYNC_LIMIT=5
- ZOOKEEPER_INIT_LIMIT=10
- ZOOKEEPER_MAX_CLIENT_CNXNS=60
- ZOOKEEPER_AUTOPURGE_SNAP_RETAIN_COUNT=3
- ZOOKEEPER_AUTOPURGE_PURGE_INTERVAL=24
deploy:
resources:
limits:
cpus: '1.00'
memory: 1G
reservations:
cpus: '0.25'
memory: 256M
# Kafka broker
kafka:
<<: *default
image: confluentinc/cp-kafka:${KAFKA_VERSION:-7.8.0}
container_name: kafka
depends_on:
- zookeeper
ports:
- "${KAFKA_BROKER_PORT_OVERRIDE:-9092}:9092"
- "${KAFKA_JMX_PORT_OVERRIDE:-9999}:9999"
volumes:
- *localtime
- *timezone
- kafka_data:/var/lib/kafka/data
environment:
- KAFKA_BROKER_ID=1
- KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
- KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092
- KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT
- KAFKA_INTER_BROKER_LISTENER_NAME=PLAINTEXT
- KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
- KAFKA_TRANSACTION_STATE_LOG_MIN_ISR=1
- KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR=1
- KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS=0
- KAFKA_NUM_PARTITIONS=${KAFKA_NUM_PARTITIONS:-3}
- KAFKA_DEFAULT_REPLICATION_FACTOR=${KAFKA_DEFAULT_REPLICATION_FACTOR:-1}
- KAFKA_AUTO_CREATE_TOPICS_ENABLE=${KAFKA_AUTO_CREATE_TOPICS_ENABLE:-true}
- KAFKA_DELETE_TOPIC_ENABLE=${KAFKA_DELETE_TOPIC_ENABLE:-true}
- KAFKA_LOG_RETENTION_HOURS=${KAFKA_LOG_RETENTION_HOURS:-168}
- KAFKA_LOG_SEGMENT_BYTES=${KAFKA_LOG_SEGMENT_BYTES:-1073741824}
- KAFKA_LOG_RETENTION_CHECK_INTERVAL_MS=300000
- KAFKA_JMX_PORT=9999
- KAFKA_JMX_HOSTNAME=localhost
- KAFKA_HEAP_OPTS=${KAFKA_HEAP_OPTS:--Xmx1G -Xms1G}
deploy:
resources:
limits:
cpus: '2.00'
memory: 2G
reservations:
cpus: '0.50'
memory: 1G
healthcheck:
test: ["CMD-SHELL", "kafka-broker-api-versions --bootstrap-server localhost:9092"]
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
# Kafka UI (optional)
kafka-ui:
<<: *default
image: provectuslabs/kafka-ui:${KAFKA_UI_VERSION:-latest}
container_name: kafka-ui
depends_on:
- kafka
- zookeeper
ports:
- "${KAFKA_UI_PORT_OVERRIDE:-8080}:8080"
volumes:
- *localtime
- *timezone
environment:
- KAFKA_CLUSTERS_0_NAME=local
- KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
- KAFKA_CLUSTERS_0_ZOOKEEPER=zookeeper:2181
- KAFKA_CLUSTERS_0_READONLY=${KAFKA_UI_READONLY:-false}
deploy:
resources:
limits:
cpus: '0.50'
memory: 512M
reservations:
cpus: '0.10'
memory: 128M
profiles:
- ui
volumes:
zookeeper_data:
zookeeper_log:
kafka_data:


@@ -0,0 +1,49 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
kibana:
<<: *default
image: docker.elastic.co/kibana/kibana:${KIBANA_VERSION:-8.16.1}
container_name: kibana
ports:
- "${KIBANA_PORT_OVERRIDE:-5601}:5601"
volumes:
- *localtime
- *timezone
- kibana_data:/usr/share/kibana/data
# Custom configuration
# - ./kibana.yml:/usr/share/kibana/config/kibana.yml:ro
environment:
- SERVERNAME=kibana
- ELASTICSEARCH_HOSTS=${ELASTICSEARCH_HOSTS:-http://elasticsearch:9200}
- ELASTICSEARCH_USERNAME=${ELASTICSEARCH_USERNAME:-}
- ELASTICSEARCH_PASSWORD=${ELASTICSEARCH_PASSWORD:-}
- XPACK_SECURITY_ENABLED=${KIBANA_SECURITY_ENABLED:-false}
- XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY=${KIBANA_ENCRYPTION_KEY:-}
- LOGGING_ROOT_LEVEL=${KIBANA_LOG_LEVEL:-info}
deploy:
resources:
limits:
cpus: '1.00'
memory: 1G
reservations:
cpus: '0.25'
memory: 512M
healthcheck:
test: ["CMD-SHELL", "curl -f http://localhost:5601/api/status || exit 1"]
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
volumes:
kibana_data:


@@ -0,0 +1,130 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
# Kong Database
kong-db:
<<: *default
image: postgres:${POSTGRES_VERSION:-16.6-alpine3.21}
container_name: kong-db
volumes:
- *localtime
- *timezone
- kong_db_data:/var/lib/postgresql/data
environment:
- POSTGRES_USER=kong
- POSTGRES_DB=kong
- POSTGRES_PASSWORD=${KONG_DB_PASSWORD:-kongpass}
deploy:
resources:
limits:
cpus: '1.00'
memory: 512M
reservations:
cpus: '0.25'
memory: 128M
healthcheck:
test: ["CMD-SHELL", "pg_isready -U kong"]
interval: 30s
timeout: 5s
retries: 5
# Kong Database Migration
kong-migrations:
<<: *default
image: kong:${KONG_VERSION:-3.8.0-alpine}
container_name: kong-migrations
depends_on:
- kong-db
environment:
- KONG_DATABASE=postgres
- KONG_PG_HOST=kong-db
- KONG_PG_USER=kong
- KONG_PG_PASSWORD=${KONG_DB_PASSWORD:-kongpass}
- KONG_PG_DATABASE=kong
command: kong migrations bootstrap
restart: "no"
# Kong Gateway
kong:
<<: *default
image: kong:${KONG_VERSION:-3.8.0-alpine}
container_name: kong
depends_on:
- kong-db
- kong-migrations
ports:
- "${KONG_PROXY_PORT_OVERRIDE:-8000}:8000"
- "${KONG_PROXY_SSL_PORT_OVERRIDE:-8443}:8443"
- "${KONG_ADMIN_API_PORT_OVERRIDE:-8001}:8001"
- "${KONG_ADMIN_SSL_PORT_OVERRIDE:-8444}:8444"
volumes:
- *localtime
- *timezone
# Custom configuration
# - ./kong.conf:/etc/kong/kong.conf:ro
environment:
- KONG_DATABASE=postgres
- KONG_PG_HOST=kong-db
- KONG_PG_USER=kong
- KONG_PG_PASSWORD=${KONG_DB_PASSWORD:-kongpass}
- KONG_PG_DATABASE=kong
- KONG_PROXY_ACCESS_LOG=/dev/stdout
- KONG_ADMIN_ACCESS_LOG=/dev/stdout
- KONG_PROXY_ERROR_LOG=/dev/stderr
- KONG_ADMIN_ERROR_LOG=/dev/stderr
- KONG_ADMIN_LISTEN=${KONG_ADMIN_LISTEN:-0.0.0.0:8001}
- KONG_ADMIN_GUI_URL=${KONG_ADMIN_GUI_URL:-http://localhost:8002}
deploy:
resources:
limits:
cpus: '1.00'
memory: 1G
reservations:
cpus: '0.25'
memory: 256M
healthcheck:
test: ["CMD-SHELL", "kong health"]
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
# Kong Manager (Optional GUI)
kong-gui:
<<: *default
image: pantsel/konga:${KONGA_VERSION:-latest}
container_name: kong-gui
depends_on:
- kong
ports:
- "${KONG_GUI_PORT_OVERRIDE:-1337}:1337"
volumes:
- *localtime
- *timezone
- konga_data:/app/kongadata
environment:
- NODE_ENV=production
- KONGA_HOOK_TIMEOUT=120000
deploy:
resources:
limits:
cpus: '0.50'
memory: 256M
reservations:
cpus: '0.10'
memory: 64M
profiles:
- gui
volumes:
kong_db_data:
konga_data:
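Once the gateway is up, services and routes are registered through the Admin API on port 8001; a minimal sketch that proxies `httpbin.org` through the proxy port 8000 (the upstream is purely an example):

```bash
# Register an upstream service
curl -i -X POST http://localhost:8001/services \
  --data name=httpbin --data url=https://httpbin.org

# Attach a route to it
curl -i -X POST http://localhost:8001/services/httpbin/routes \
  --data 'paths[]=/httpbin'

# Send a request through the proxy
curl -i http://localhost:8000/httpbin/get
```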


@@ -0,0 +1,59 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
logstash:
<<: *default
image: docker.elastic.co/logstash/logstash:${LOGSTASH_VERSION:-8.16.1}
container_name: logstash
ports:
- "${LOGSTASH_BEATS_PORT_OVERRIDE:-5044}:5044"
- "${LOGSTASH_TCP_PORT_OVERRIDE:-5000}:5000/tcp"
- "${LOGSTASH_UDP_PORT_OVERRIDE:-5000}:5000/udp"
- "${LOGSTASH_HTTP_PORT_OVERRIDE:-9600}:9600"
volumes:
- *localtime
- *timezone
- logstash_data:/usr/share/logstash/data
- logstash_logs:/usr/share/logstash/logs
- ./pipeline:/usr/share/logstash/pipeline:ro
# Custom configuration
# - ./logstash.yml:/usr/share/logstash/config/logstash.yml:ro
# - ./pipelines.yml:/usr/share/logstash/config/pipelines.yml:ro
environment:
- XPACK_MONITORING_ENABLED=${LOGSTASH_MONITORING_ENABLED:-false}
- XPACK_MONITORING_ELASTICSEARCH_HOSTS=${ELASTICSEARCH_HOSTS:-http://elasticsearch:9200}
- ELASTICSEARCH_HOSTS=${ELASTICSEARCH_HOSTS:-http://elasticsearch:9200}
- ELASTICSEARCH_USERNAME=${ELASTICSEARCH_USERNAME:-}
- ELASTICSEARCH_PASSWORD=${ELASTICSEARCH_PASSWORD:-}
- LS_JAVA_OPTS=${LS_JAVA_OPTS:--Xmx1g -Xms1g}
- PIPELINE_WORKERS=${LOGSTASH_PIPELINE_WORKERS:-2}
- PIPELINE_BATCH_SIZE=${LOGSTASH_PIPELINE_BATCH_SIZE:-125}
- PIPELINE_BATCH_DELAY=${LOGSTASH_PIPELINE_BATCH_DELAY:-50}
- LOG_LEVEL=${LOGSTASH_LOG_LEVEL:-info}
deploy:
resources:
limits:
cpus: '1.50'
memory: 2G
reservations:
cpus: '0.50'
memory: 1G
healthcheck:
test: ["CMD-SHELL", "curl -f http://localhost:9600/_node/stats || exit 1"]
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
volumes:
logstash_data:
logstash_logs:


@@ -0,0 +1,46 @@
input {
beats {
port => 5044
}
tcp {
port => 5000
codec => json_lines
}
udp {
port => 5000
codec => json_lines
}
}
filter {
if [fields][log_type] == "syslog" {
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{GREEDYDATA:message}" }
}
}
if [fields][log_type] == "apache" {
grok {
match => { "message" => "%{COMBINEDAPACHELOG}" }
}
}
date {
match => [ "timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
}
}
output {
elasticsearch {
hosts => ["${ELASTICSEARCH_HOSTS:http://elasticsearch:9200}"]
index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
user => "${ELASTICSEARCH_USERNAME:}"
password => "${ELASTICSEARCH_PASSWORD:}"
}
stdout {
codec => rubydebug
}
}
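A quick way to exercise the pipeline once the stack is running is to push a JSON line into the TCP input and watch the `stdout { codec => rubydebug }` output in the Logstash logs (assumes `nc` is available on the host):

```bash
echo '{"message": "hello from compose-anything"}' | nc -w 1 localhost 5000
docker compose logs -f logstash
```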

src/nginx/README.md

@@ -0,0 +1,54 @@
# Nginx
[English](./README.md) | [中文](./README.zh.md)
This service deploys Nginx, a high-performance web server and reverse proxy server.
## Services
- `nginx`: The Nginx web server service.
## Environment Variables
| Variable Name | Description | Default Value |
| ------------------------- | ---------------------------------------------- | ------------------- |
| NGINX_VERSION | Nginx image version | `1.29.1-alpine3.20` |
| NGINX_HTTP_PORT_OVERRIDE | Host port mapping for HTTP (maps to port 80) | 80 |
| NGINX_HTTPS_PORT_OVERRIDE | Host port mapping for HTTPS (maps to port 443) | 443 |
| NGINX_HOST | Server hostname for configuration | `localhost` |
| NGINX_PORT | Server port for configuration | 80 |
Please modify the `.env` file as needed for your use case.
## Volumes
- `nginx_logs`: A volume for storing Nginx logs.
- `./html`: Directory for web content (mounted as read-only).
- `./nginx.conf`: Optional custom Nginx configuration file.
- `./conf.d`: Optional directory for additional configuration files.
- `./ssl`: Optional SSL certificates directory.
## Usage
1. The `html` directory is already created with a default `index.html` file.
2. Start the service:
```bash
docker compose up -d
```
3. Access the web server at `http://localhost` (or your configured port).
## Configuration
- Custom Nginx configuration can be mounted at `/etc/nginx/nginx.conf`
- Additional server configurations can be placed in the `conf.d` directory
- SSL certificates can be mounted at `/etc/nginx/ssl/`
- Web content should be placed in the `html` directory
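As a sketch of the `conf.d` point above (the commented `./conf.d` mount in the compose file must be enabled; the upstream name is purely illustrative), a reverse-proxy virtual host can be dropped in like this:

```bash
mkdir -p conf.d
cat > conf.d/app.conf <<'EOF'
server {
    listen 80;
    server_name app.localhost;

    location / {
        proxy_pass http://app-backend:3000;   # illustrative upstream
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
EOF

docker compose up -d --force-recreate nginx
```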
## Security Notes
- Consider using SSL/TLS certificates for production deployments
- Regularly update the Nginx version to get security patches
- Review and customize the Nginx configuration for your specific needs

src/nginx/README.zh.md

@@ -0,0 +1,54 @@
# Nginx
[English](./README.md) | [中文](./README.zh.md)
此服务部署 Nginx,一个高性能的 Web 服务器和反向代理服务器。
## 服务
- `nginx`:Nginx Web 服务器服务。
## 环境变量
| 变量名 | 描述 | 默认值 |
| ------------------------- | ------------------------------------ | ------------------- |
| NGINX_VERSION | Nginx 镜像版本 | `1.29.1-alpine3.20` |
| NGINX_HTTP_PORT_OVERRIDE  | HTTP 主机端口映射(映射到端口 80)   | 80                  |
| NGINX_HTTPS_PORT_OVERRIDE | HTTPS 主机端口映射(映射到端口 443) | 443                 |
| NGINX_HOST | 配置的服务器主机名 | `localhost` |
| NGINX_PORT | 配置的服务器端口 | 80 |
请根据您的使用情况修改 `.env` 文件。
## 卷
- `nginx_logs`:用于存储 Nginx 日志的卷。
- `./html`:Web 内容目录(以只读方式挂载)。
- `./nginx.conf`:可选的自定义 Nginx 配置文件。
- `./conf.d`:可选的附加配置文件目录。
- `./ssl`:可选的 SSL 证书目录。
## 使用方法
1. `html` 目录已创建并包含默认的 `index.html` 文件。
2. 启动服务:
```bash
docker compose up -d
```
3. 在 `http://localhost`(或您配置的端口)访问 Web 服务器。
## 配置
- 自定义 Nginx 配置可以挂载到 `/etc/nginx/nginx.conf`
- 附加服务器配置可以放置在 `conf.d` 目录中
- SSL 证书可以挂载到 `/etc/nginx/ssl/`
- Web 内容应放置在 `html` 目录中
## 安全注意事项
- 生产环境部署时考虑使用 SSL/TLS 证书
- 定期更新 Nginx 版本以获取安全补丁
- 根据您的具体需求审查和自定义 Nginx 配置


@@ -0,0 +1,42 @@
x-default: &default
restart: unless-stopped
volumes:
- &localtime /etc/localtime:/etc/localtime:ro
- &timezone /etc/timezone:/etc/timezone:ro
logging:
driver: json-file
options:
max-size: 100m
services:
nginx:
<<: *default
image: nginx:${NGINX_VERSION:-1.29.1-alpine3.20}
container_name: nginx
ports:
- "${NGINX_HTTP_PORT_OVERRIDE:-80}:80"
- "${NGINX_HTTPS_PORT_OVERRIDE:-443}:443"
volumes:
- *localtime
- *timezone
- nginx_logs:/var/log/nginx
- ./html:/usr/share/nginx/html:ro
# Custom configuration
# - ./nginx.conf:/etc/nginx/nginx.conf:ro
# - ./conf.d:/etc/nginx/conf.d:ro
# - ./ssl:/etc/nginx/ssl:ro
environment:
- NGINX_HOST=${NGINX_HOST:-localhost}
- NGINX_PORT=${NGINX_PORT:-80}
deploy:
resources:
limits:
cpus: '1.00'
memory: 512M
reservations:
cpus: '0.25'
memory: 64M
volumes:
nginx_logs:

src/nginx/html/index.html

@@ -0,0 +1,12 @@
<!DOCTYPE html>
<html>
<head>
<title>Nginx</title>
<meta charset="utf-8">
</head>
<body>
<h1>Welcome to Nginx</h1>
<p>If you can see this page, the nginx web server is successfully installed and working.</p>
<p>This is the default page provided by the compose-anything project.</p>
</body>
</html>