refactor: split operator/..

This commit is contained in:
Sun-ZhenXing
2026-04-15 10:44:10 +08:00
parent 72700c4db0
commit d0933d7b55
90 changed files with 2710 additions and 1107 deletions
@@ -1,7 +1,3 @@
---
description: Describe the guidelines for contributing to the Helm Command Template project.
---
# Helm Command Template Project Guidelines
## 1. Project Intent
@@ -35,6 +31,15 @@ When contributing to or maintaining this project, you must adhere to the followi
2. **Installation:** How to use `make install`.
3. **Usage:** Basic verification or connection steps.
* **Operator / Service Separation:**
* **Operators and services MUST be in separate directories.** Never mix an operator deployment and its managed service/cluster in the same directory.
* An operator-only directory should be named `<service>-operator/` (e.g., `mysql-operator/`, `cassandra-operator/`).
* A service/cluster directory should use the plain service name (e.g., `mysql/`, `cassandra/`).
* The operator directory's Makefile should only include `../_template/base.mk` and deploy the operator chart directly via the standard `install` target. Do **not** use `operator.mk` in either directory — `operator.mk` is deprecated for new services.
* The service/cluster directory's README must state that the operator must be installed first, with a link to the operator directory (e.g., `See [mysql-operator](../mysql-operator/)`).
* If a service is inherently operator-only with no separate cluster chart (e.g., Strimzi Kafka), the directory should still be named `<service>-operator/` and include any sample CRD manifests (e.g., `kafka-cluster.yaml`) for creating resources after the operator is installed.
* When evaluating a new service that uses the operator pattern, search the upstream operator's GitHub repository and Helm chart registry to confirm the correct chart names and repo URLs for both the operator and the cluster/service charts.
* **Command Interface:**
* The end-user interaction must remain simple. The primary entry point for any service is executing `make install` inside its directory.
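The layout these rules prescribe can be sketched as a small scaffold. This is a minimal sketch only: `mysql` / `mysql-operator` are the guideline's own illustrative names, and the Makefile body assumes `../_template/base.mk` supplies the standard `install` target.

```shell
# Minimal sketch of the prescribed operator/service split (mysql is the
# guideline's illustrative example; base.mk is assumed to provide install).
set -eu
workdir=$(mktemp -d)
cd "$workdir"

# Operator-only directory: <service>-operator/, includes base.mk directly
# and never the deprecated operator.mk.
mkdir -p mysql-operator
printf '%s\n' \
  'HELM_RELEASE_NAME ?= mysql-operator' \
  'include ../_template/base.mk' \
  > mysql-operator/Makefile

# Service/cluster directory: plain service name; its README points at the
# operator directory that must be installed first.
mkdir -p mysql
printf '%s\n' \
  '# MySQL' \
  'The operator must be installed first. See [mysql-operator](../mysql-operator/).' \
  > mysql/README.md

ls
```

Running the sketch leaves two sibling directories, each self-contained, with `make install` as the only entry point in both.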
+3 -3
@@ -17,9 +17,9 @@
"prepare": "simple-git-hooks"
},
"devDependencies": {
"@antfu/eslint-config": "^7.4.3",
"eslint": "^10.0.1",
"lint-staged": "^16.2.7",
"@antfu/eslint-config": "^8.2.0",
"eslint": "^10.2.0",
"lint-staged": "^16.4.0",
"simple-git-hooks": "^2.13.1"
},
"simple-git-hooks": {
+652 -611
File diff suppressed because it is too large
+5 -1
@@ -1,4 +1,8 @@
# Kubernetes Operator Installation Template
# DEPRECATED: This file is deprecated. Do not use for new services.
# Operators and services should be in separate directories, each using base.mk directly.
# See AGENTS.md "Operator / Service Separation" for details.
#
# Kubernetes Operator Installation Template (DEPRECATED)
# This file provides common targets for deploying services using the Operator pattern.
#
# Usage:
+15
@@ -0,0 +1,15 @@
HELM_RELEASE_NAME ?= k8ssandra-operator
HELM_APPLICATION_NAME ?= k8ssandra-operator
HELM_NAMESPACE ?= k8ssandra-operator
HELM_DIR ?= ./helm
HELM_CHART_VERSION ?=
HELM_VALUES_FILE ?= ./values.yaml
HELM_OCI_REGISTRY ?=
HELM_OCI_NAMESPACE ?=
HELM_OCI_USERNAME ?=
HELM_OCI_PASSWORD ?=
HELM_REPO_NAME ?= k8ssandra
HELM_REPO_URL ?= https://helm.k8ssandra.io/stable
HELM_CHART_REPO ?= $(HELM_REPO_NAME)/k8ssandra-operator
include ../_template/base.mk
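`base.mk` itself is not shown in this diff, so the following is only a hypothetical sketch of the helm invocation it might assemble from the variables above; the flags and their order are illustrative, not the actual template.

```shell
# Hypothetical expansion of `make install` from the Makefile variables above.
# base.mk is not part of this diff; this command shape is an assumption.
HELM_RELEASE_NAME=k8ssandra-operator
HELM_NAMESPACE=k8ssandra-operator
HELM_CHART_REPO=k8ssandra/k8ssandra-operator
HELM_REPO_NAME=k8ssandra
HELM_REPO_URL=https://helm.k8ssandra.io/stable
HELM_VALUES_FILE=./values.yaml

# Register the chart repo, then install/upgrade the operator release.
add_cmd="helm repo add $HELM_REPO_NAME $HELM_REPO_URL"
install_cmd="helm upgrade --install $HELM_RELEASE_NAME $HELM_CHART_REPO --namespace $HELM_NAMESPACE --create-namespace -f $HELM_VALUES_FILE"
echo "$add_cmd"
echo "$install_cmd"
```

The same variable-driven shape is what lets every `<service>-operator/` directory reduce to fifteen assignments plus one `include` line.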
+60
@@ -0,0 +1,60 @@
# K8ssandra Operator
## Introduction
K8ssandra Operator is a Kubernetes operator that manages the lifecycle of Apache Cassandra clusters using K8ssandra. It handles provisioning, scaling, repair scheduling (Reaper), backup management (Medusa), and monitoring.
This chart installs the K8ssandra Operator only. To deploy a K8ssandra/Cassandra cluster using Helm, see the [cassandra](../cassandra/) directory. Alternatively, Cassandra clusters can be created with K8ssandraCluster CRDs — a sample cluster definition is provided in `k8ssandra-cluster.yaml`.
## Installation
To install the K8ssandra Operator, run:
```bash
make install
```
## Usage
After installation, verify the operator is running:
```bash
kubectl get pods -n k8ssandra-operator
```
Create a Cassandra cluster using the sample CRD:
```bash
kubectl apply -f k8ssandra-cluster.yaml -n cassandra
```
Check the cluster status:
```bash
kubectl get k8ssandraclusters -n cassandra
```
Connect with cqlsh once the cluster is ready:
```bash
kubectl exec -it -n cassandra cassandra-cluster-dc1-default-sts-0 -- cqlsh
```
Check that CRDs are registered:
```bash
kubectl get crd | grep k8ssandra
```
## Uninstall
To uninstall:
```bash
make uninstall
```
## Documentation
- [K8ssandra Documentation](https://docs.k8ssandra.io/)
- [K8ssandra Operator GitHub](https://github.com/k8ssandra/k8ssandra-operator)
+60
@@ -0,0 +1,60 @@
# K8ssandra Operator
## Introduction
K8ssandra Operator is a Kubernetes operator that manages the lifecycle of Apache Cassandra clusters based on K8ssandra. It handles provisioning, scaling, repair scheduling (Reaper), backup management (Medusa), and monitoring.
This chart installs the K8ssandra Operator only. To deploy a K8ssandra/Cassandra cluster using Helm, see the [cassandra](../cassandra/) directory. Alternatively, Cassandra clusters can be created with K8ssandraCluster CRDs; a sample cluster definition is provided in `k8ssandra-cluster.yaml`.
## Installation
To install the K8ssandra Operator, run:
```bash
make install
```
## Usage
After installation, verify the operator is running:
```bash
kubectl get pods -n k8ssandra-operator
```
Create a Cassandra cluster using the sample CRD:
```bash
kubectl apply -f k8ssandra-cluster.yaml -n cassandra
```
Check the cluster status:
```bash
kubectl get k8ssandraclusters -n cassandra
```
Connect with cqlsh once the cluster is ready:
```bash
kubectl exec -it -n cassandra cassandra-cluster-dc1-default-sts-0 -- cqlsh
```
Check that CRDs are registered:
```bash
kubectl get crd | grep k8ssandra
```
## Uninstall
To uninstall:
```bash
make uninstall
```
## Documentation
- [K8ssandra Documentation](https://docs.k8ssandra.io/)
- [K8ssandra Operator GitHub](https://github.com/k8ssandra/k8ssandra-operator)
@@ -0,0 +1,38 @@
apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  name: cassandra-cluster
spec:
  cassandra:
    serverVersion: 4.0.1
    datacenters:
      - metadata:
          name: dc1
        size: 3
        racks:
          - name: rack1
          - name: rack2
          - name: rack3
        storageConfig:
          cassandraDataVolumeClaimSpec:
            storageClassName: standard
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 10Gi
        config:
          jvmOptions:
            heapSize: 1Gi
        resources:
          requests:
            cpu: 1000m
            memory: 4Gi
          limits:
            cpu: 2000m
            memory: 4Gi
  reaper:
    autoScheduling:
      enabled: false
  medusa:
    storageProperties: {}
+28
@@ -0,0 +1,28 @@
# K8ssandra Operator Configuration
# https://github.com/k8ssandra/k8ssandra-operator
# Operator configuration
replicaCount: 1
resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi
# Medusa backup configuration
medusa:
  enabled: false
# Prometheus monitoring
monitoring:
  enabled: false
# Cluster-wide configuration
clusterScoped: false
# Webhook configuration
webhook:
  enabled: false
+3 -18
@@ -1,5 +1,5 @@
HELM_RELEASE_NAME ?= cassandra
HELM_APPLICATION_NAME ?= cassandra
HELM_RELEASE_NAME ?= k8ssandra
HELM_APPLICATION_NAME ?= k8ssandra
HELM_NAMESPACE ?= cassandra
HELM_DIR ?= ./helm
HELM_CHART_VERSION ?=
@@ -10,21 +10,6 @@ HELM_OCI_USERNAME ?=
HELM_OCI_PASSWORD ?=
HELM_REPO_NAME ?= k8ssandra
HELM_REPO_URL ?= https://helm.k8ssandra.io/stable
# Operator configuration
OPERATOR_RELEASE_NAME ?= k8ssandra-operator
OPERATOR_NAMESPACE ?= k8ssandra-operator
OPERATOR_CHART_REPO ?= $(HELM_REPO_NAME)/k8ssandra-operator
OPERATOR_CHART_VERSION ?=
OPERATOR_VALUES_FILE ?= ./values.yaml
# Cluster configuration
CLUSTER_RELEASE_NAME ?= cassandra-cluster
CLUSTER_CHART_REPO ?= $(HELM_REPO_NAME)/k8ssandra
CLUSTER_VALUES_FILE ?= ./cluster-values.yaml
# Enable CRD waiting
WAIT_FOR_CRD ?= true
HELM_CHART_REPO ?= $(HELM_REPO_NAME)/k8ssandra
include ../_template/base.mk
include ../_template/operator.mk
+19 -52
@@ -2,13 +2,13 @@
## Introduction
Apache Cassandra is an open-source distributed NoSQL database management system designed to handle large amounts of data across many commodity servers. This deployment uses K8ssandra Operator, which provides a Kubernetes-native way to manage Cassandra clusters.
Apache Cassandra is a free, open-source, distributed, wide-column store NoSQL database management system designed to handle large amounts of data across many commodity servers. This chart deploys a K8ssandra cluster, which manages Cassandra using the K8ssandra Operator.
K8ssandra is a cloud-native distribution of Apache Cassandra that runs on Kubernetes. It includes automation for operational tasks such as repairs, backups, and monitoring.
The K8ssandra Operator must be installed first — see the [cassandra-operator](../cassandra-operator/) directory.
## Installation
To install Cassandra, run:
To install Apache Cassandra, run:
```bash
make install
@@ -16,67 +16,34 @@ make install
## Usage
After installation, you can create a Cassandra cluster:
After installation, verify the deployment:
```bash
# Check if operator is running
kubectl get pods -n cassandra
# Create a Cassandra cluster
kubectl apply -f - <<EOF
apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  name: demo
  namespace: cassandra
spec:
  cassandra:
    serverVersion: "4.0.1"
    datacenters:
      - metadata:
          name: dc1
        size: 3
        storageConfig:
          cassandraDataVolumeClaimSpec:
            storageClassName: standard
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 10Gi
        config:
          jvmOptions:
            heapSize: 1Gi
EOF
```
## Configuration
The default configuration includes:
- K8ssandra Operator for managing Cassandra clusters
- Support for Cassandra 4.x
- Medusa for backup management
- Reaper for repair scheduling
- Metrics collection via Prometheus
## Features
- **Automated Repairs**: Reaper handles repair scheduling
- **Backup/Restore**: Medusa provides backup and restore capabilities
- **Monitoring**: Integrated Prometheus metrics
- **Multi-DC Support**: Deploy across multiple data centers
## Connecting to Cassandra
Check the Cassandra cluster status:
```bash
# Get CQLSH access
kubectl exec -it demo-dc1-default-sts-0 -n cassandra -c cassandra -- cqlsh
kubectl get k8ssandraclusters -n cassandra
```
Connect to Cassandra using cqlsh:
```bash
kubectl exec -it -n cassandra cassandra-cluster-dc1-default-sts-0 -- cqlsh
```
## Uninstall
To uninstall:
To uninstall Cassandra:
```bash
make uninstall
```
## Documentation
- [K8ssandra Documentation](https://docs.k8ssandra.io/)
- [Apache Cassandra Documentation](https://cassandra.apache.org/doc/latest/)
- [K8ssandra Helm Chart](https://github.com/k8ssandra/k8ssandra-helm)
+19 -52
@@ -2,13 +2,13 @@
## Introduction
Apache Cassandra is an open-source distributed NoSQL database management system designed to handle large amounts of data across many commodity servers. This deployment uses the K8ssandra Operator, which provides a Kubernetes-native way to manage Cassandra clusters.
Apache Cassandra is a free, open-source, distributed, wide-column store NoSQL database management system designed to handle large amounts of data across many commodity servers. This chart deploys a Cassandra cluster through K8ssandra, managed by the K8ssandra Operator.
K8ssandra is a cloud-native distribution of Apache Cassandra that runs on Kubernetes. It includes automation for operational tasks such as repairs, backups, and monitoring.
The K8ssandra Operator must be installed first; see the [cassandra-operator](../cassandra-operator/) directory.
## Installation
To install Cassandra, run:
To install Apache Cassandra, run:
```bash
make install
@@ -16,67 +16,34 @@ make install
## Usage
After installation, you can create a Cassandra cluster:
After installation, verify the deployment:
```bash
# Check whether the operator is running
kubectl get pods -n cassandra
# Create a Cassandra cluster
kubectl apply -f - <<EOF
apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  name: demo
  namespace: cassandra
spec:
  cassandra:
    serverVersion: "4.0.1"
    datacenters:
      - metadata:
          name: dc1
        size: 3
        storageConfig:
          cassandraDataVolumeClaimSpec:
            storageClassName: standard
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 10Gi
        config:
          jvmOptions:
            heapSize: 1Gi
EOF
```
## Configuration
The default configuration includes:
- K8ssandra Operator for managing Cassandra clusters
- Support for Cassandra 4.x
- Medusa for backup management
- Reaper for repair scheduling
- Metrics collection via Prometheus
## Features
- **Automated Repairs**: Reaper handles repair scheduling
- **Backup/Restore**: Medusa provides backup and restore capabilities
- **Monitoring**: Integrated Prometheus metrics
- **Multi-DC Support**: Deploy across multiple data centers
## Connecting to Cassandra
Check the Cassandra cluster status:
```bash
# Get cqlsh access
kubectl exec -it demo-dc1-default-sts-0 -n cassandra -c cassandra -- cqlsh
kubectl get k8ssandraclusters -n cassandra
```
Connect to Cassandra using cqlsh:
```bash
kubectl exec -it -n cassandra cassandra-cluster-dc1-default-sts-0 -- cqlsh
```
## Uninstall
To uninstall:
To uninstall Cassandra, run:
```bash
make uninstall
```
## Documentation
- [K8ssandra Documentation](https://docs.k8ssandra.io/)
- [Apache Cassandra Documentation](https://cassandra.apache.org/doc/latest/)
- [K8ssandra Helm Chart](https://github.com/k8ssandra/k8ssandra-helm)
-41
@@ -1,41 +0,0 @@
# K8ssandra Cluster Configuration
# https://github.com/k8ssandra/k8ssandra-operator
# Cluster name
cassandra:
  clusterName: cassandra-cluster
  datacenters:
    - name: dc1
      size: 3
      racks:
        - name: rack1
        - name: rack2
        - name: rack3
      storage:
        storageClassName: standard
        size: 10Gi
  resources:
    requests:
      cpu: 1000m
      memory: 4Gi
    limits:
      cpu: 2000m
      memory: 4Gi
# Stargate configuration
stargate:
  enabled: false
  size: 1
  heapSize: 256Mi
# Reaper configuration
reaper:
  enabled: false
# Medusa backup configuration
medusa:
  enabled: false
# Prometheus monitoring
monitoring:
  enabled: false
+23 -19
@@ -1,28 +1,32 @@
# K8ssandra Operator Configuration
# https://github.com/k8ssandra/k8ssandra-operator
# K8ssandra Helm Chart Values
# https://github.com/k8ssandra/k8ssandra-helm
# Operator configuration
replicaCount: 1
resources:
cassandra:
clusterName: cassandra-cluster
datacenters:
- name: dc1
size: 3
racks:
- name: rack1
- name: rack2
- name: rack3
resources:
requests:
cpu: 100m
memory: 256Mi
cpu: 1000m
memory: 4Gi
limits:
cpu: 500m
memory: 512Mi
cpu: 2000m
memory: 4Gi
stargate:
enabled: false
reaper:
enabled: false
# Medusa backup configuration
medusa:
enabled: false
# Prometheus monitoring
monitoring:
enabled: false
# Cluster-wide configuration
clusterScoped: false
# Webhook configuration
webhook:
prometheus:
enabled: false
+15
@@ -0,0 +1,15 @@
HELM_RELEASE_NAME ?= clickhouse-operator
HELM_APPLICATION_NAME ?= clickhouse-operator
HELM_NAMESPACE ?= clickhouse-operator
HELM_DIR ?= ./helm
HELM_CHART_VERSION ?=
HELM_VALUES_FILE ?= ./values.yaml
HELM_OCI_REGISTRY ?=
HELM_OCI_NAMESPACE ?=
HELM_OCI_USERNAME ?=
HELM_OCI_PASSWORD ?=
HELM_REPO_NAME ?= altinity
HELM_REPO_URL ?= https://helm.altinity.com
HELM_CHART_REPO ?= $(HELM_REPO_NAME)/altinity-clickhouse-operator
include ../_template/base.mk
+91
@@ -0,0 +1,91 @@
# ClickHouse Operator
## Introduction
The Altinity ClickHouse Operator is a Kubernetes operator that manages ClickHouse clusters on Kubernetes. It automates deployment, scaling, configuration, and upgrades of ClickHouse instances using CRDs.
This chart installs the ClickHouse Operator only. ClickHouse clusters are created through ClickHouseInstallation CRDs after the operator is installed.
## Installation
To install the ClickHouse Operator, run:
```bash
make install
```
## Usage
After installation, verify the operator is running:
```bash
kubectl get pods -n clickhouse-operator
```
Check that CRDs are registered:
```bash
kubectl get crd | grep clickhouse
```
### Create a ClickHouse Cluster
```yaml
apiVersion: clickhouse.altinity.com/v1
kind: ClickHouseInstallation
metadata:
  name: clickhouse-cluster
spec:
  defaults:
    templates:
      dataVolumeClaimTemplate: data-volume-template
      podTemplate: clickhouse-pod-template
  configuration:
    clusters:
      - name: clickhouse-cluster
        layout:
          shardsCount: 1
          replicasCount: 1
    settings:
      log_level: information
  templates:
    podTemplates:
      - name: clickhouse-pod-template
        spec:
          containers:
            - name: clickhouse
              image: clickhouse/clickhouse-server:24.8
    volumeClaimTemplates:
      - name: data-volume-template
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 10Gi
Apply the manifest:
```bash
kubectl apply -f clickhouse-cluster.yaml
```
Check the cluster status:
```bash
kubectl get clickhouseinstallation -n clickhouse
```
## Uninstall
To uninstall:
```bash
make uninstall
```
## Documentation
- [ClickHouse Operator Documentation](https://clickhouse.com/docs/en/manage/clickhouse-operator/)
- [Altinity ClickHouse Operator GitHub](https://github.com/Altinity/clickhouse-operator)
+91
@@ -0,0 +1,91 @@
# ClickHouse Operator
## Introduction
The Altinity ClickHouse Operator is a Kubernetes operator that manages ClickHouse clusters on Kubernetes. It automates deployment, scaling, configuration, and upgrades of ClickHouse instances using CRDs.
This chart installs the ClickHouse Operator only. ClickHouse clusters are created through ClickHouseInstallation CRDs after the operator is installed.
## Installation
To install the ClickHouse Operator, run:
```bash
make install
```
## Usage
After installation, verify the operator is running:
```bash
kubectl get pods -n clickhouse-operator
```
Check that CRDs are registered:
```bash
kubectl get crd | grep clickhouse
```
### Create a ClickHouse Cluster
```yaml
apiVersion: clickhouse.altinity.com/v1
kind: ClickHouseInstallation
metadata:
  name: clickhouse-cluster
spec:
  defaults:
    templates:
      dataVolumeClaimTemplate: data-volume-template
      podTemplate: clickhouse-pod-template
  configuration:
    clusters:
      - name: clickhouse-cluster
        layout:
          shardsCount: 1
          replicasCount: 1
    settings:
      log_level: information
  templates:
    podTemplates:
      - name: clickhouse-pod-template
        spec:
          containers:
            - name: clickhouse
              image: clickhouse/clickhouse-server:24.8
    volumeClaimTemplates:
      - name: data-volume-template
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 10Gi
```
Apply the manifest:
```bash
kubectl apply -f clickhouse-cluster.yaml
```
Check the cluster status:
```bash
kubectl get clickhouseinstallation -n clickhouse
```
## Uninstall
To uninstall:
```bash
make uninstall
```
## Documentation
- [ClickHouse Operator Documentation](https://clickhouse.com/docs/en/manage/clickhouse-operator/)
- [Altinity ClickHouse Operator GitHub](https://github.com/Altinity/clickhouse-operator)
+5
@@ -0,0 +1,5 @@
# ClickHouse Operator Configuration
# https://github.com/Altinity/clickhouse-operator
# Default values for altinity-clickhouse-operator
# Uses chart defaults for most settings
+2
@@ -4,6 +4,8 @@
ClickHouse is an open-source column-oriented database management system for online analytical processing (OLAP).
This chart deploys ClickHouse using the Altinity Helm chart. For operator-managed ClickHouse clusters with automated deployment, scaling, and configuration, see the [clickhouse-operator](../clickhouse-operator/) directory.
## Installation
To install ClickHouse, run:
+2
@@ -4,6 +4,8 @@
ClickHouse is an open-source column-oriented database management system for online analytical processing (OLAP).
This chart deploys ClickHouse using the Altinity Helm chart. For operator-managed ClickHouse clusters with automated deployment, scaling, and configuration, see the [clickhouse-operator](../clickhouse-operator/) directory.
## Installation
To install ClickHouse, run:
+15
@@ -0,0 +1,15 @@
HELM_RELEASE_NAME ?= elastic-operator
HELM_APPLICATION_NAME ?= eck-operator
HELM_NAMESPACE ?= elastic-system
HELM_DIR ?= ./helm
HELM_CHART_VERSION ?=
HELM_VALUES_FILE ?= ./values.yaml
HELM_OCI_REGISTRY ?=
HELM_OCI_NAMESPACE ?=
HELM_OCI_USERNAME ?=
HELM_OCI_PASSWORD ?=
HELM_REPO_NAME ?= elastic
HELM_REPO_URL ?= https://helm.elastic.co
HELM_CHART_REPO ?= $(HELM_REPO_NAME)/eck-operator
include ../_template/base.mk
+85
@@ -0,0 +1,85 @@
# ECK Operator (Elastic Cloud on Kubernetes)
## Introduction
ECK (Elastic Cloud on Kubernetes) is the official Kubernetes operator from Elastic that orchestrates Elasticsearch, Kibana, APM Server, Enterprise Search, and Beats on Kubernetes.
This chart installs the ECK Operator only. Elasticsearch, Kibana, and other Elastic resources are created through their respective CRDs after the operator is installed.
## Installation
To install the ECK Operator, run:
```bash
make install
```
## Usage
After installation, verify the operator is running:
```bash
kubectl get pods -n elastic-system
```
Check that CRDs are registered:
```bash
kubectl get crd | grep elastic
```
### Create an Elasticsearch Cluster
```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.15.0
  nodeSets:
    - name: default
      count: 1
      config:
        node.store.allow_mmap: false
```
Apply the manifest:
```bash
kubectl apply -f elasticsearch.yaml
```
### Create a Kibana Instance
```yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 8.15.0
  count: 1
  elasticsearchRef:
    name: quickstart
```
Apply the manifest:
```bash
kubectl apply -f kibana.yaml
```
## Uninstall
To uninstall:
```bash
make uninstall
```
## Documentation
- [ECK Documentation](https://www.elastic.co/guide/en/cloud-on-k8s/current/)
- [ECK GitHub](https://github.com/elastic/cloud-on-k8s)
- [ECK Helm Chart](https://github.com/elastic/cloud-on-k8s/tree/main/deploy/eck-operator)
+85
@@ -0,0 +1,85 @@
# ECK Operator (Elastic Cloud on Kubernetes)
## Introduction
ECK (Elastic Cloud on Kubernetes) is the official Kubernetes operator from Elastic that orchestrates Elasticsearch, Kibana, APM Server, Enterprise Search, and Beats.
This chart installs the ECK Operator only. Elasticsearch, Kibana, and other Elastic resources are created through their respective CRDs after the operator is installed.
## Installation
To install the ECK Operator, run:
```bash
make install
```
## Usage
After installation, verify the operator is running:
```bash
kubectl get pods -n elastic-system
```
Check that CRDs are registered:
```bash
kubectl get crd | grep elastic
```
### Create an Elasticsearch Cluster
```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.15.0
  nodeSets:
    - name: default
      count: 1
      config:
        node.store.allow_mmap: false
```
Apply the manifest:
```bash
kubectl apply -f elasticsearch.yaml
```
### Create a Kibana Instance
```yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 8.15.0
  count: 1
  elasticsearchRef:
    name: quickstart
```
Apply the manifest:
```bash
kubectl apply -f kibana.yaml
```
## Uninstall
To uninstall:
```bash
make uninstall
```
## Documentation
- [ECK Documentation](https://www.elastic.co/guide/en/cloud-on-k8s/current/)
- [ECK GitHub](https://github.com/elastic/cloud-on-k8s)
- [ECK Helm Chart](https://github.com/elastic/cloud-on-k8s/tree/main/deploy/eck-operator)
+20
@@ -0,0 +1,20 @@
# ECK Operator Configuration
# https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-install-helm.html
# Default values for eck-operator
# Uses chart defaults for most settings
# Set to true to install CRDs
installCRDs: true
# Resource configuration
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 512Mi
# Log level
logVerbosity: 0
+2
@@ -4,6 +4,8 @@
Elasticsearch is a distributed, RESTful search and analytics engine capable of addressing a growing number of use cases.
This chart deploys a standalone Elasticsearch instance. For operator-managed Elasticsearch with automated provisioning, scaling, upgrades, and backup, see the [elasticsearch-operator](../elasticsearch-operator/) directory.
## Installation
To install Elasticsearch, run:
+2
@@ -4,6 +4,8 @@
Elasticsearch is a distributed, RESTful search and data analytics engine.
This chart deploys a standalone Elasticsearch instance. For operator-managed Elasticsearch with automated provisioning, scaling, upgrades, and backup, see the [elasticsearch-operator](../elasticsearch-operator/) directory.
## Installation
To install Elasticsearch, run:
+15
@@ -0,0 +1,15 @@
HELM_RELEASE_NAME ?= flink-operator
HELM_APPLICATION_NAME ?= flink-operator
HELM_NAMESPACE ?= flink-operator
HELM_DIR ?= ./helm
HELM_CHART_VERSION ?=
HELM_VALUES_FILE ?= ./values.yaml
HELM_OCI_REGISTRY ?=
HELM_OCI_NAMESPACE ?=
HELM_OCI_USERNAME ?=
HELM_OCI_PASSWORD ?=
HELM_REPO_NAME ?= flink-operator
HELM_REPO_URL ?= https://downloads.apache.org/flink/flink-kubernetes-operator-1.10.0/
HELM_CHART_REPO ?= $(HELM_REPO_NAME)/flink-kubernetes-operator
include ../_template/base.mk
+44
@@ -0,0 +1,44 @@
# Apache Flink Kubernetes Operator
## Introduction
The Apache Flink Kubernetes Operator manages the lifecycle of Apache Flink applications on Kubernetes. This chart installs only the operator; Flink clusters are created through FlinkDeployment CRDs — there is no separate Flink cluster Helm chart.
A sample FlinkDeployment manifest is provided in `flink-deployment.yaml`.
## Installation
```bash
make install
```
## Usage
Verify the operator pods are running:
```bash
kubectl get pods -n flink-operator
```
Create a Flink cluster using the sample CRD:
```bash
kubectl apply -f flink-deployment.yaml -n flink
```
Check the deployed Flink resources:
```bash
kubectl get flinkdeployments -n flink
```
## Uninstall
```bash
make uninstall
```
## Documentation
- [Flink Kubernetes Operator Documentation](https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-stable/)
- [Apache Flink Kubernetes Operator GitHub](https://github.com/apache/flink-kubernetes-operator)
+44
@@ -0,0 +1,44 @@
# Apache Flink Kubernetes Operator
## Introduction
The Apache Flink Kubernetes Operator manages the lifecycle of Apache Flink applications on Kubernetes. This chart installs only the operator; Flink clusters are created through FlinkDeployment CRDs, and there is no separate Flink cluster Helm chart.
A sample FlinkDeployment manifest is provided in `flink-deployment.yaml`.
## Installation
```bash
make install
```
## Usage
First verify the operator pods are running:
```bash
kubectl get pods -n flink-operator
```
Create a Flink cluster using the sample CRD:
```bash
kubectl apply -f flink-deployment.yaml -n flink
```
Check the deployed Flink resources:
```bash
kubectl get flinkdeployments -n flink
```
## Uninstall
```bash
make uninstall
```
## Documentation
- [Flink Kubernetes Operator Documentation](https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-stable/)
- [Apache Flink Kubernetes Operator GitHub](https://github.com/apache/flink-kubernetes-operator)
+25
@@ -0,0 +1,25 @@
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: flink-deployment-example
spec:
  image: flink:1.19
  flinkVersion: v1_19
  flinkConfiguration:
    taskmanager.numberOfTaskSlots: '2'
  serviceAccount: flink
  jobManager:
    resource:
      memory: 1024m
      cpu: 0.5
    replicas: 1
  taskManager:
    resource:
      memory: 2048m
      cpu: 1
    replicas: 2
  job:
    jarURI: local:///opt/flink/examples/streaming/StateMachineExample.jar
    parallelism: 2
    upgradeMode: stateful
    state: running
-30
@@ -1,30 +0,0 @@
HELM_RELEASE_NAME ?= flink
HELM_APPLICATION_NAME ?= flink
HELM_NAMESPACE ?= flink
HELM_DIR ?= ./helm
HELM_CHART_VERSION ?=
HELM_VALUES_FILE ?= ./values.yaml
HELM_OCI_REGISTRY ?=
HELM_OCI_NAMESPACE ?=
HELM_OCI_USERNAME ?=
HELM_OCI_PASSWORD ?=
HELM_REPO_NAME ?= flink-operator
HELM_REPO_URL ?= https://downloads.apache.org/flink/flink-kubernetes-operator-1.9.0/
# Operator configuration
OPERATOR_RELEASE_NAME ?= flink-operator
OPERATOR_NAMESPACE ?= flink-operator
OPERATOR_CHART_REPO ?= $(HELM_REPO_NAME)/flink-operator
OPERATOR_CHART_VERSION ?=
OPERATOR_VALUES_FILE ?= ./values.yaml
# Cluster configuration (Flink uses FlinkDeployment CR, installed via kubectl or separate chart)
CLUSTER_RELEASE_NAME ?= flink-cluster
CLUSTER_CHART_REPO ?= $(HELM_REPO_NAME)/flink-cluster
CLUSTER_VALUES_FILE ?= ./cluster-values.yaml
# Enable CRD waiting
WAIT_FOR_CRD ?= true
include ../_template/base.mk
include ../_template/operator.mk
-29
@@ -1,29 +0,0 @@
# Apache Flink
## Introduction
Apache Flink is a framework and distributed processing engine for stateful computations over unbounded and bounded data streams. Flink has been designed to run in all common cluster environments, perform computations at in-memory speed and at any scale.
## Installation
To install Apache Flink Kubernetes Operator, run:
```bash
make install
```
## Usage
After installation, verify the deployment:
```bash
kubectl get pods -n flink
```
To deploy a Flink job, create a FlinkDeployment custom resource.
## Documentation
- [Official Flink Documentation](https://nightlies.apache.org/flink/flink-docs-stable/)
- [Flink Kubernetes Operator Documentation](https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-stable/)
- [Helm Chart Source](https://github.com/apache/flink-kubernetes-operator)
-29
@@ -1,29 +0,0 @@
# Apache Flink
## Introduction
Apache Flink is a framework and distributed processing engine for stateful computations over unbounded and bounded data streams. Flink has been designed to run in all common cluster environments and to perform computations at in-memory speed and at any scale.
## Installation
To install the Apache Flink Kubernetes Operator, run:
```bash
make install
```
## Usage
After installation, verify the deployment:
```bash
kubectl get pods -n flink
```
To deploy a Flink job, create a FlinkDeployment custom resource.
## Documentation
- [Official Flink Documentation](https://nightlies.apache.org/flink/flink-docs-stable/)
- [Flink Kubernetes Operator Documentation](https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-stable/)
- [Helm Chart Source](https://github.com/apache/flink-kubernetes-operator)
-33
@@ -1,33 +0,0 @@
# Flink Cluster Configuration (FlinkDeployment CR)
# https://github.com/apache/flink-kubernetes-operator
# Flink cluster name
nameOverride: flink-cluster
# Flink version
flinkVersion: v1.19
# Job configuration
job:
  jarURI: local:///opt/flink/examples/streaming/StateMachineExample.jar
  parallelism: 2
  upgradeMode: stateful
  state: running
# TaskManager configuration
taskManager:
  resource:
    memory: 2048m
    cpu: 1
  replicas: 2
# JobManager configuration
jobManager:
  resource:
    memory: 1024m
    cpu: 0.5
  replicas: 1
# Service configuration
service:
  type: ClusterIP
+15
@@ -0,0 +1,15 @@
HELM_RELEASE_NAME ?= strimzi-kafka-operator
HELM_APPLICATION_NAME ?= strimzi-kafka-operator
HELM_NAMESPACE ?= strimzi-operator
HELM_DIR ?= ./helm
HELM_CHART_VERSION ?= 0.50.0
HELM_VALUES_FILE ?= ./values.yaml
HELM_OCI_REGISTRY ?= docker.io
HELM_OCI_NAMESPACE ?=
HELM_OCI_USERNAME ?=
HELM_OCI_PASSWORD ?=
HELM_REPO_NAME ?= strimzi
HELM_REPO_URL ?= https://strimzi.io/charts/
HELM_CHART_REPO ?= $(HELM_REPO_NAME)/strimzi-kafka-operator
include ../_template/base.mk
+48
@@ -0,0 +1,48 @@
# Strimzi Kafka Operator
## Introduction
Strimzi provides a way to run Apache Kafka on Kubernetes. The Strimzi Kafka Operator manages Kafka clusters, topics, and users through Custom Resource Definitions (CRDs).
This chart installs the Strimzi Kafka Operator only. Kafka clusters are created using Kafka CRDs after the operator is installed — there is no separate Kafka cluster Helm chart. A sample cluster definition is provided in `kafka-cluster.yaml`.
## Installation
To install the Strimzi Kafka Operator, run:
```bash
make install
```
## Usage
After installation, verify the operator is running:
```bash
kubectl get pods -n strimzi-operator
```
Create a Kafka cluster using the sample CRD:
```bash
kubectl apply -f kafka-cluster.yaml -n kafka
```
Check the cluster status:
```bash
kubectl get kafka -n kafka
```
## Uninstall
To uninstall:
```bash
make uninstall
```
## Documentation
- [Strimzi Documentation](https://strimzi.io/docs/)
- [Strimzi GitHub](https://github.com/strimzi/strimzi-kafka-operator)
+48
@@ -0,0 +1,48 @@
# Strimzi Kafka Operator
## Introduction
Strimzi provides a way to run Apache Kafka on Kubernetes. The Strimzi Kafka Operator manages Kafka clusters, topics, and users through Custom Resource Definitions (CRDs).
This chart installs the Strimzi Kafka Operator only. Kafka clusters are created using Kafka CRDs after the operator is installed; there is no separate Kafka cluster Helm chart. A sample cluster definition is provided in `kafka-cluster.yaml`.
## Installation
To install the Strimzi Kafka Operator, run:
```bash
make install
```
## Usage
After installation, verify the operator is running:
```bash
kubectl get pods -n strimzi-operator
```
Create a Kafka cluster using the sample CRD:
```bash
kubectl apply -f kafka-cluster.yaml -n kafka
```
Check the cluster status:
```bash
kubectl get kafka -n kafka
```
## Uninstall
To uninstall:
```bash
make uninstall
```
## Documentation
- [Strimzi Documentation](https://strimzi.io/docs/)
- [Strimzi GitHub](https://github.com/strimzi/strimzi-kafka-operator)
-35
@@ -1,35 +0,0 @@
HELM_RELEASE_NAME ?= kafka
HELM_APPLICATION_NAME ?= kafka
HELM_NAMESPACE ?= kafka
HELM_DIR ?= ./helm
HELM_CHART_VERSION ?= 0.50.0
HELM_VALUES_FILE ?= ./values.yaml
HELM_OCI_REGISTRY ?= docker.io
HELM_OCI_NAMESPACE ?=
HELM_REPO_NAME ?= strimzi
HELM_REPO_URL ?= https://strimzi.io/charts/
# Operator configuration (Strimzi only has operator, cluster is created via CRDs)
OPERATOR_RELEASE_NAME ?= strimzi-kafka-operator
OPERATOR_NAMESPACE ?= strimzi-operator
OPERATOR_CHART_REPO ?= $(HELM_REPO_NAME)/strimzi-kafka-operator
OPERATOR_CHART_VERSION ?= $(HELM_CHART_VERSION)
OPERATOR_VALUES_FILE ?= ./values.yaml
# For Strimzi, we only install the operator
# Kafka clusters are created using Kafka CRDs after operator is installed
include ../_template/base.mk
include ../_template/operator.mk
# Override install target to only install operator
.PHONY: install
install: install-operator
# Override uninstall target to only uninstall operator
.PHONY: uninstall
uninstall: uninstall-operator
# Override verify target
.PHONY: verify
verify: verify-operator
-27
@@ -1,27 +0,0 @@
# Kafka
## Introduction
Apache Kafka is an open-source distributed event streaming platform used for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications.
## Installation
To install Kafka, run:
```bash
make install
```
## Usage
After installation, verify the deployment:
```bash
kubectl get pods -n kafka
```
To produce and consume messages, use Kafka tools:
```bash
kubectl -n kafka exec -it kafka-cluster-kafka-0 -- kafka-console-producer.sh --broker-list kafka-cluster-kafka-bootstrap:9092 --topic test
```
-27
View File
@@ -1,27 +0,0 @@
# Kafka
## 简介
Apache Kafka 是一个开源的分布式事件流平台,用于高性能数据管道、流分析、数据集成和关键任务应用。
## 安装
要安装 Kafka,请运行:
```bash
make install
```
## 使用
安装后,验证部署:
```bash
kubectl get pods -n kafka
```
要生产和消费消息,使用 Kafka 工具:
```bash
kubectl -n kafka exec -it kafka-cluster-kafka-0 -- kafka-console-producer.sh --broker-list kafka-cluster-kafka-bootstrap:9092 --topic test
```
+15
View File
@@ -0,0 +1,15 @@
HELM_RELEASE_NAME ?= milvus-operator
HELM_APPLICATION_NAME ?= milvus-operator
HELM_NAMESPACE ?= milvus-operator
HELM_DIR ?= ./helm
HELM_CHART_VERSION ?=
HELM_VALUES_FILE ?= ./values.yaml
HELM_OCI_REGISTRY ?=
HELM_OCI_NAMESPACE ?=
HELM_OCI_USERNAME ?=
HELM_OCI_PASSWORD ?=
HELM_REPO_NAME ?= milvus-operator
HELM_REPO_URL ?= https://zilliztech.github.io/milvus-operator/
HELM_CHART_REPO ?= $(HELM_REPO_NAME)/milvus-operator
include ../_template/base.mk
+87
View File
@@ -0,0 +1,87 @@
# Milvus Operator
## Introduction
Milvus Operator is a Kubernetes operator that automates the deployment and management of Milvus vector database clusters on Kubernetes. It provides an easy solution to deploy and manage the full Milvus service stack including etcd, Pulsar, and MinIO in a scalable and highly available way.
This chart installs the Milvus Operator only. Milvus clusters are created through MilvusCluster CRDs after the operator is installed.
## Installation
To install the Milvus Operator, run:
```bash
make install
```
## Usage
After installation, verify the operator is running:
```bash
kubectl get pods -n milvus-operator
```
Check that CRDs are registered:
```bash
kubectl get crd | grep milvus
```
### Create a Milvus Cluster
```yaml
apiVersion: milvus.io/v1beta1
kind: MilvusCluster
metadata:
name: my-milvus
namespace: milvus
spec:
components:
image: milvusdb/milvus:v2.4.17
proxy:
replicas: 1
rootCoord:
replicas: 1
dataCoord:
replicas: 1
indexCoord:
replicas: 1
queryCoord:
replicas: 1
dataNode:
replicas: 1
indexNode:
replicas: 1
queryNode:
replicas: 1
config:
milvus:
log:
level: info
```
Apply the manifest:
```bash
kubectl apply -f milvus-cluster.yaml
```
Check the cluster status:
```bash
kubectl get milvuscluster -n milvus
```
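Once the cluster reports ready, the Milvus gRPC endpoint (default port 19530) can be reached with a port-forward. The service name below is an assumption based on the operator's default `<name>-milvus` naming for the `my-milvus` cluster above; verify it with `kubectl get svc -n milvus`:

```bash
# forward the Milvus gRPC port to localhost
kubectl port-forward svc/my-milvus-milvus 19530:19530 -n milvus
```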
## Uninstall
To uninstall:
```bash
make uninstall
```
## Documentation
- [Milvus Operator Documentation](https://github.com/zilliztech/milvus-operator)
- [Milvus Documentation](https://milvus.io/docs/)
+87
View File
@@ -0,0 +1,87 @@
# Milvus Operator
## 简介
Milvus Operator 是一个 Kubernetes Operator,用于自动化 Milvus 向量数据库集群在 Kubernetes 上的部署和管理。它提供了简便的方案来部署和管理完整的 Milvus 服务栈,包括 etcd、Pulsar 和 MinIO,支持可扩展和高可用的部署。
此 Chart 仅安装 Milvus Operator。Milvus 集群在 Operator 安装后通过 MilvusCluster CRD 创建。
## 安装
要安装 Milvus Operator,请运行:
```bash
make install
```
## 使用
安装完成后,验证 Operator 是否正常运行:
```bash
kubectl get pods -n milvus-operator
```
检查 CRD 是否已注册:
```bash
kubectl get crd | grep milvus
```
### 创建 Milvus 集群
```yaml
apiVersion: milvus.io/v1beta1
kind: MilvusCluster
metadata:
name: my-milvus
namespace: milvus
spec:
components:
image: milvusdb/milvus:v2.4.17
proxy:
replicas: 1
rootCoord:
replicas: 1
dataCoord:
replicas: 1
indexCoord:
replicas: 1
queryCoord:
replicas: 1
dataNode:
replicas: 1
indexNode:
replicas: 1
queryNode:
replicas: 1
config:
milvus:
log:
level: info
```
应用清单:
```bash
kubectl apply -f milvus-cluster.yaml
```
检查集群状态:
```bash
kubectl get milvuscluster -n milvus
```
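集群就绪后,可通过端口转发访问 Milvus 的 gRPC 端口(默认 19530)。以下 Service 名称假设 Operator 使用默认的 `<name>-milvus` 命名规则,请先通过 `kubectl get svc -n milvus` 确认实际名称:

```bash
# 将 Milvus gRPC 端口转发到本地
kubectl port-forward svc/my-milvus-milvus 19530:19530 -n milvus
```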
## 卸载
卸载:
```bash
make uninstall
```
## 文档
- [Milvus Operator 文档](https://github.com/zilliztech/milvus-operator)
- [Milvus 文档](https://milvus.io/docs/)
+14
View File
@@ -0,0 +1,14 @@
# Milvus Operator Configuration
# https://github.com/zilliztech/milvus-operator
# Default values for milvus-operator
# Uses chart defaults for most settings
# Resource configuration
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
+2
View File
@@ -4,6 +4,8 @@
Milvus is an open-source vector database built to power embedding similarity search and AI applications.
This chart deploys Milvus using the official Helm chart. For operator-managed Milvus clusters with automated stack deployment (including etcd, Pulsar, MinIO), see the [milvus-operator](../milvus-operator/) directory.
## Installation
To install Milvus, run:
+2
View File
@@ -4,6 +4,8 @@
Milvus 是一个开源的向量数据库,专为嵌入相似性搜索和 AI 应用而构建。
此 Chart 使用官方 Helm chart 部署 Milvus。如需 Operator 管理的 Milvus 集群(自动化部署包含 etcd、Pulsar、MinIO 在内的完整栈),请参阅 [milvus-operator](../milvus-operator/) 目录。
## 安装
要安装 Milvus,请运行:
+15
View File
@@ -0,0 +1,15 @@
HELM_RELEASE_NAME ?= mongodb-community-operator
HELM_APPLICATION_NAME ?= community-operator
HELM_NAMESPACE ?= mongodb-operator
HELM_DIR ?= ./helm
HELM_CHART_VERSION ?=
HELM_VALUES_FILE ?= ./values.yaml
HELM_OCI_REGISTRY ?=
HELM_OCI_NAMESPACE ?=
HELM_OCI_USERNAME ?=
HELM_OCI_PASSWORD ?=
HELM_REPO_NAME ?= mongodb
HELM_REPO_URL ?= https://mongodb.github.io/helm-charts
HELM_CHART_REPO ?= $(HELM_REPO_NAME)/community-operator
include ../_template/base.mk
+80
View File
@@ -0,0 +1,80 @@
# MongoDB Community Operator
## Introduction
MongoDB Community Operator is a Kubernetes operator that manages MongoDB Community replica sets on Kubernetes. It automates deployment, scaling, upgrades, and configuration of MongoDB clusters.
This chart installs the MongoDB Community Operator only. MongoDB replica sets are created through MongoDBCommunity CRDs after the operator is installed.
## Installation
To install the MongoDB Community Operator, run:
```bash
make install
```
## Usage
After installation, verify the operator is running:
```bash
kubectl get pods -n mongodb-operator
```
Check that CRDs are registered:
```bash
kubectl get crd | grep mongodb
```
### Create a MongoDB Replica Set
```yaml
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
name: my-mongodb
spec:
members: 3
type: ReplicaSet
version: 7.0.12
security:
authentication:
modes: [SCRAM]
users:
- name: admin
db: admin
passwordSecretRef:
name: my-mongodb-secret
roles:
- name: clusterAdmin
db: admin
- name: userAdminAnyDatabase
db: admin
```
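The `passwordSecretRef` above points at a secret that must exist before the manifest is applied. A minimal way to create it (the password value is a placeholder; by default the operator reads the `password` key):

```bash
# create the password secret referenced by passwordSecretRef
kubectl create secret generic my-mongodb-secret \
  --from-literal=password='<choose-a-strong-password>' \
  -n mongodb
```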
Apply the manifest:
```bash
kubectl apply -f mongodb-replicaset.yaml
```
Check the replica set status:
```bash
kubectl get mongodbcommunity -n mongodb
```
## Uninstall
To uninstall:
```bash
make uninstall
```
## Documentation
- [MongoDB Community Operator Documentation](https://github.com/mongodb/mongodb-kubernetes-operator)
- [MongoDB Kubernetes Documentation](https://www.mongodb.com/docs/kubernetes/)
+80
View File
@@ -0,0 +1,80 @@
# MongoDB Community Operator
## 简介
MongoDB Community Operator 是一个 Kubernetes Operator,用于管理 Kubernetes 上的 MongoDB Community 副本集。它自动化 MongoDB 集群的部署、扩缩容、升级和配置。
此 Chart 仅安装 MongoDB Community Operator。MongoDB 副本集在 Operator 安装后通过 MongoDBCommunity CRD 创建。
## 安装
要安装 MongoDB Community Operator,请运行:
```bash
make install
```
## 使用
安装完成后,验证 Operator 是否正常运行:
```bash
kubectl get pods -n mongodb-operator
```
检查 CRD 是否已注册:
```bash
kubectl get crd | grep mongodb
```
### 创建 MongoDB 副本集
```yaml
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
name: my-mongodb
spec:
members: 3
type: ReplicaSet
version: 7.0.12
security:
authentication:
modes: [SCRAM]
users:
- name: admin
db: admin
passwordSecretRef:
name: my-mongodb-secret
roles:
- name: clusterAdmin
db: admin
- name: userAdminAnyDatabase
db: admin
```
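上面的 `passwordSecretRef` 引用的 Secret 必须在应用清单之前创建。可以按如下方式创建(密码值仅为占位符;Operator 默认读取 `password` 键):

```bash
# 创建 passwordSecretRef 引用的密码 Secret
kubectl create secret generic my-mongodb-secret \
  --from-literal=password='<choose-a-strong-password>' \
  -n mongodb
```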
应用清单:
```bash
kubectl apply -f mongodb-replicaset.yaml
```
检查副本集状态:
```bash
kubectl get mongodbcommunity -n mongodb
```
## 卸载
卸载:
```bash
make uninstall
```
## 文档
- [MongoDB Community Operator 文档](https://github.com/mongodb/mongodb-kubernetes-operator)
- [MongoDB Kubernetes 文档](https://www.mongodb.com/docs/kubernetes/)
+17
View File
@@ -0,0 +1,17 @@
# MongoDB Community Operator Configuration
# https://github.com/mongodb/mongodb-kubernetes-operator
# Default values for community-operator
# Uses chart defaults for most settings
# Resource configuration
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
# Watch namespace (empty string means all namespaces)
watchNamespace: ''
+2
View File
@@ -4,6 +4,8 @@
MongoDB is a source-available cross-platform document-oriented database program.
This chart deploys a standalone MongoDB instance. For operator-managed MongoDB replica sets with automated scaling, upgrades, and backup, see the [mongodb-operator](../mongodb-operator/) directory.
## Installation
To install MongoDB, run:
+2
View File
@@ -4,6 +4,8 @@
MongoDB 是一个源代码可用的跨平台面向文档的数据库程序。
此 Chart 部署独立的 MongoDB 实例。如需 Operator 管理的 MongoDB 副本集(支持自动扩缩容、升级和备份),请参阅 [mongodb-operator](../mongodb-operator/) 目录。
## 安装
要安装 MongoDB,请运行:
+15
View File
@@ -0,0 +1,15 @@
HELM_RELEASE_NAME ?= mysql-operator
HELM_APPLICATION_NAME ?= mysql-operator
HELM_NAMESPACE ?= mysql-operator
HELM_DIR ?= ./helm
HELM_CHART_VERSION ?=
HELM_VALUES_FILE ?= ./values.yaml
HELM_OCI_REGISTRY ?=
HELM_OCI_NAMESPACE ?=
HELM_OCI_USERNAME ?=
HELM_OCI_PASSWORD ?=
HELM_REPO_NAME ?= mysql-operator
HELM_REPO_URL ?= https://mysql.github.io/mysql-operator/
HELM_CHART_REPO ?= $(HELM_REPO_NAME)/mysql-operator
include ../_template/base.mk
+42
View File
@@ -0,0 +1,42 @@
# MySQL Operator
## Introduction
MySQL Operator for Kubernetes manages MySQL InnoDB Cluster setups inside a Kubernetes cluster. It is developed and maintained by the MySQL team at Oracle.
This chart installs the MySQL Operator only. To deploy a MySQL InnoDB Cluster, see the [mysql](../mysql/) directory.
## Installation
To install the MySQL Operator, run:
```bash
make install
```
## Usage
After installation, verify the operator is running:
```bash
kubectl get pods -n mysql-operator
```
Check that CRDs are registered:
```bash
kubectl get crd | grep mysql
```
## Uninstall
To uninstall:
```bash
make uninstall
```
## Documentation
- [MySQL Operator Documentation](https://dev.mysql.com/doc/mysql-operator/en/)
- [MySQL Operator GitHub](https://github.com/mysql/mysql-operator)
+42
View File
@@ -0,0 +1,42 @@
# MySQL Operator
## 简介
MySQL Operator for Kubernetes 在 Kubernetes 集群中管理 MySQL InnoDB Cluster。它由 Oracle 的 MySQL 团队开发和维护。
此 Chart 仅安装 MySQL Operator。要部署 MySQL InnoDB 集群,请参阅 [mysql](../mysql/) 目录。
## 安装
要安装 MySQL Operator,请运行:
```bash
make install
```
## 使用
安装完成后,验证 Operator 是否正常运行:
```bash
kubectl get pods -n mysql-operator
```
检查 CRD 是否已注册:
```bash
kubectl get crd | grep mysql
```
## 卸载
卸载:
```bash
make uninstall
```
## 文档
- [MySQL Operator 文档](https://dev.mysql.com/doc/mysql-operator/en/)
- [MySQL Operator GitHub](https://github.com/mysql/mysql-operator)
+5
View File
@@ -0,0 +1,5 @@
# MySQL Operator Configuration
# https://github.com/mysql/mysql-operator
# Default values for mysql-operator
# Uses chart defaults for most settings
+2 -17
View File
@@ -1,4 +1,4 @@
HELM_RELEASE_NAME ?= mysql
HELM_RELEASE_NAME ?= mysql-cluster
HELM_APPLICATION_NAME ?= mysql
HELM_NAMESPACE ?= mysql
HELM_DIR ?= ./helm
@@ -10,21 +10,6 @@ HELM_OCI_USERNAME ?=
HELM_OCI_PASSWORD ?=
HELM_REPO_NAME ?= mysql-operator
HELM_REPO_URL ?= https://mysql.github.io/mysql-operator/
# Operator configuration
OPERATOR_RELEASE_NAME ?= mysql-operator
OPERATOR_NAMESPACE ?= mysql-operator
OPERATOR_CHART_REPO ?= $(HELM_REPO_NAME)/mysql-operator
OPERATOR_CHART_VERSION ?=
OPERATOR_VALUES_FILE ?=
# Cluster configuration
CLUSTER_RELEASE_NAME ?= mysql-cluster
CLUSTER_CHART_REPO ?= $(HELM_REPO_NAME)/mysql-innodbcluster
CLUSTER_VALUES_FILE ?= ./values.yaml
# Enable CRD waiting
WAIT_FOR_CRD ?= true
HELM_CHART_REPO ?= $(HELM_REPO_NAME)/mysql-innodbcluster
include ../_template/base.mk
include ../_template/operator.mk
+5 -3
View File
@@ -1,12 +1,14 @@
# MySQL (Oracle MySQL Operator)
# MySQL InnoDB Cluster
## Introduction
MySQL Operator for Kubernetes manages MySQL InnoDB Cluster in Kubernetes. It is brought to you by the MySQL team at Oracle.
MySQL InnoDB Cluster provides a complete high availability solution for MySQL. This chart deploys a MySQL InnoDB Cluster instance.
The MySQL Operator must be installed first — see the [mysql-operator](../mysql-operator/) directory.
## Installation
To install MySQL Operator and MySQL InnoDB Cluster, run:
To install MySQL InnoDB Cluster, run:
```bash
make install
+5 -3
View File
@@ -1,12 +1,14 @@
# MySQL (Oracle MySQL Operator)
# MySQL InnoDB 集群
## 简介
MySQL Operator for Kubernetes 在 Kubernetes 中管理 MySQL InnoDB 集群。它由 Oracle 的 MySQL 团队提供
MySQL InnoDB Cluster 为 MySQL 提供完整的高可用解决方案。此 Chart 部署 MySQL InnoDB Cluster 实例
必须先安装 MySQL Operator — 请参阅 [mysql-operator](../mysql-operator/) 目录。
## 安装
要安装 MySQL Operator 和 MySQL InnoDB 集群,请运行:
要安装 MySQL InnoDB 集群,请运行:
```bash
make install
+15
View File
@@ -0,0 +1,15 @@
HELM_RELEASE_NAME ?= nebula-operator
HELM_APPLICATION_NAME ?= nebula-operator
HELM_NAMESPACE ?= nebula-operator-system
HELM_DIR ?= ./helm
HELM_CHART_VERSION ?=
HELM_VALUES_FILE ?= ./values.yaml
HELM_OCI_REGISTRY ?=
HELM_OCI_NAMESPACE ?=
HELM_OCI_USERNAME ?=
HELM_OCI_PASSWORD ?=
HELM_REPO_NAME ?= nebula-operator
HELM_REPO_URL ?= https://vesoft-inc.github.io/nebula-operator/charts
HELM_CHART_REPO ?= $(HELM_REPO_NAME)/nebula-operator
include ../_template/base.mk
+42
View File
@@ -0,0 +1,42 @@
# NebulaGraph Operator
## Introduction
NebulaGraph Operator is a Kubernetes operator that automates the deployment, scaling, and management of NebulaGraph clusters. It extends Kubernetes with Custom Resource Definitions (CRDs) for managing NebulaGraph components.
This chart installs the NebulaGraph Operator only. To deploy a NebulaGraph cluster, see the [nebulagraph](../nebulagraph/) directory.
## Installation
To install the NebulaGraph Operator, run:
```bash
make install
```
## Usage
After installation, verify the operator is running:
```bash
kubectl get pods -n nebula-operator-system
```
Check that CRDs are registered:
```bash
kubectl get crd | grep nebula
```
## Uninstall
To uninstall:
```bash
make uninstall
```
## Documentation
- [NebulaGraph Operator Documentation](https://docs.nebula-graph.io/master/k8s-operator/1.introduction/)
- [NebulaGraph Operator GitHub](https://github.com/vesoft-inc/nebula-operator)
+42
View File
@@ -0,0 +1,42 @@
# NebulaGraph Operator
## 简介
NebulaGraph Operator 是一个 Kubernetes Operator,用于自动化部署、扩展和管理 NebulaGraph 集群。它通过自定义资源定义(CRD)扩展 Kubernetes 以管理 NebulaGraph 组件。
此 Chart 仅安装 NebulaGraph Operator。要部署 NebulaGraph 集群,请参阅 [nebulagraph](../nebulagraph/) 目录。
## 安装
要安装 NebulaGraph Operator,请运行:
```bash
make install
```
## 使用
安装完成后,验证 Operator 是否正常运行:
```bash
kubectl get pods -n nebula-operator-system
```
检查 CRD 是否已注册:
```bash
kubectl get crd | grep nebula
```
## 卸载
卸载:
```bash
make uninstall
```
## 文档
- [NebulaGraph Operator 文档](https://docs.nebula-graph.io/master/k8s-operator/1.introduction/)
- [NebulaGraph Operator GitHub](https://github.com/vesoft-inc/nebula-operator)
+13
View File
@@ -0,0 +1,13 @@
# NebulaGraph Operator Configuration
# https://github.com/vesoft-inc/nebula-operator
# Default values for nebula-operator
replicaCount: 1
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 256Mi
+2 -17
View File
@@ -1,4 +1,4 @@
HELM_RELEASE_NAME ?= nebula
HELM_RELEASE_NAME ?= nebula-cluster
HELM_APPLICATION_NAME ?= nebula
HELM_NAMESPACE ?= nebula
HELM_DIR ?= ./helm
@@ -10,21 +10,6 @@ HELM_OCI_USERNAME ?=
HELM_OCI_PASSWORD ?=
HELM_REPO_NAME ?= nebula-operator
HELM_REPO_URL ?= https://vesoft-inc.github.io/nebula-operator/charts
# Operator configuration
OPERATOR_RELEASE_NAME ?= nebula-operator
OPERATOR_NAMESPACE ?= nebula-operator-system
OPERATOR_CHART_REPO ?= $(HELM_REPO_NAME)/nebula-operator
OPERATOR_CHART_VERSION ?=
OPERATOR_VALUES_FILE ?=
# Cluster configuration
CLUSTER_RELEASE_NAME ?= nebula-cluster
CLUSTER_CHART_REPO ?= $(HELM_REPO_NAME)/nebula-cluster
CLUSTER_VALUES_FILE ?= ./values.yaml
# Enable CRD waiting
WAIT_FOR_CRD ?= true
HELM_CHART_REPO ?= $(HELM_REPO_NAME)/nebula-cluster
include ../_template/base.mk
include ../_template/operator.mk
+4 -11
View File
@@ -1,10 +1,12 @@
# NebulaGraph
# NebulaGraph Cluster
## Introduction
NebulaGraph is an open-source distributed graph database built for super large-scale graphs with milliseconds of latency. It delivers high performance, scalability, and availability for storing and processing graph data.
This Helm chart deploys NebulaGraph cluster using the NebulaGraph Operator on Kubernetes.
This Helm chart deploys a NebulaGraph cluster on Kubernetes.
The NebulaGraph Operator must be installed first — see the [nebulagraph-operator](../nebulagraph-operator/) directory.
## Installation
@@ -14,15 +16,6 @@ To install NebulaGraph, run:
make install
```
## Prerequisites
NebulaGraph Operator must be installed first:
```bash
helm repo add nebula-operator https://vesoft-inc.github.io/nebula-operator/charts
helm install nebula-operator nebula-operator/nebula-operator --namespace nebula-operator --create-namespace
```
## Usage
After installation:
+4 -11
View File
@@ -1,10 +1,12 @@
# NebulaGraph
# NebulaGraph 集群
## 简介
NebulaGraph 是一个开源的分布式图数据库,专为超大规模图数据而设计,具有毫秒级延迟。它为存储和处理图数据提供高性能、可扩展性和可用性。
此 Helm Chart 使用 NebulaGraph Operator 在 Kubernetes 上部署 NebulaGraph 集群。
此 Helm Chart 用于在 Kubernetes 上部署 NebulaGraph 集群。
必须先安装 NebulaGraph Operator — 请参阅 [nebulagraph-operator](../nebulagraph-operator/) 目录。
## 安装
@@ -14,15 +16,6 @@ NebulaGraph 是一个开源的分布式图数据库,专为超大规模图数
make install
```
## 先决条件
必须首先安装 NebulaGraph Operator
```bash
helm repo add nebula-operator https://vesoft-inc.github.io/nebula-operator/charts
helm install nebula-operator nebula-operator/nebula-operator --namespace nebula-operator --create-namespace
```
## 使用
安装完成后:
+15
View File
@@ -0,0 +1,15 @@
HELM_RELEASE_NAME ?= openlit-operator
HELM_APPLICATION_NAME ?= openlit-operator
HELM_NAMESPACE ?= openlit
HELM_DIR ?= ./helm
HELM_CHART_VERSION ?=
HELM_VALUES_FILE ?= ./values.yaml
HELM_OCI_REGISTRY ?=
HELM_OCI_NAMESPACE ?=
HELM_OCI_USERNAME ?=
HELM_OCI_PASSWORD ?=
HELM_REPO_NAME ?= openlit
HELM_REPO_URL ?= https://openlit.github.io/helm/
HELM_CHART_REPO ?= $(HELM_REPO_NAME)/openlit-operator
include ../_template/base.mk
+51
View File
@@ -0,0 +1,51 @@
# OpenLIT Operator
## Introduction
OpenLIT Operator is a Kubernetes operator that provides zero-code AI observability for applications running in Kubernetes. It leverages OpenTelemetry to automatically instrument AI/LLM workloads (such as OpenAI, HuggingFace, LangChain, and vector databases) without any code changes, by injecting instrumentation via init containers.
For the OpenLIT backend that receives and visualizes telemetry data, see the [openlit](../openlit/) directory.
A sample AutoInstrumentation manifest is provided in `auto-instrumentation.yaml`.
## Installation
To install the OpenLIT Operator, run:
```bash
make install
```
## Usage
After installation, verify the operator is running:
```bash
kubectl get pods -n openlit
```
Create an `AutoInstrumentation` resource using the sample CRD:
```bash
kubectl apply -f auto-instrumentation.yaml -n default
```
Restart your application deployment to pick up the instrumentation:
```bash
kubectl rollout restart deployment <your-deployment-name>
```
## Uninstall
To uninstall:
```bash
make uninstall
```
## Documentation
- [OpenLIT Operator Overview](https://docs.openlit.io/latest/operator/overview)
- [Installation Guide](https://docs.openlit.io/latest/operator/installation)
- [Configuration](https://docs.openlit.io/latest/operator/configuration/operator)
+51
View File
@@ -0,0 +1,51 @@
# OpenLIT Operator
## 简介
OpenLIT Operator 是一个 Kubernetes Operator,为运行在 Kubernetes 中的应用提供零代码 AI 可观测性。它利用 OpenTelemetry 通过 init 容器注入探针,自动为 AI/LLM 工作负载(如 OpenAI、HuggingFace、LangChain 和向量数据库)添加可观测性,无需修改任何代码。
如需部署用于接收和可视化遥测数据的 OpenLIT 后端,请参阅 [openlit](../openlit/) 目录。
`auto-instrumentation.yaml` 中提供了示例 AutoInstrumentation 清单。
## 安装
要安装 OpenLIT Operator,请运行:
```bash
make install
```
## 使用
安装完成后,验证 Operator 是否正常运行:
```bash
kubectl get pods -n openlit
```
使用示例 CRD 创建 `AutoInstrumentation` 资源:
```bash
kubectl apply -f auto-instrumentation.yaml -n default
```
重启你的应用 Deployment 以使探针注入生效:
```bash
kubectl rollout restart deployment <your-deployment-name>
```
## 卸载
卸载:
```bash
make uninstall
```
## 文档
- [OpenLIT Operator 概览](https://docs.openlit.io/latest/operator/overview)
- [安装指南](https://docs.openlit.io/latest/operator/installation)
- [配置说明](https://docs.openlit.io/latest/operator/configuration/operator)
@@ -0,0 +1,13 @@
apiVersion: openlit.io/v1alpha1
kind: AutoInstrumentation
metadata:
name: my-instrumentation
spec:
selector:
matchLabels:
app: my-ai-app
python:
instrumentation:
enabled: true
otlp:
endpoint: 'http://openlit.openlit.svc:4318'
+19
View File
@@ -0,0 +1,19 @@
# OpenLIT Operator Helm Chart Values
# https://github.com/openlit/openlit
# Operator image configuration
image:
repository: ghcr.io/openlit/openlit-operator
pullPolicy: IfNotPresent
# Operator replica count
replicaCount: 1
# Resource limits and requests
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 256Mi
+15
View File
@@ -0,0 +1,15 @@
HELM_RELEASE_NAME ?= openlit
HELM_APPLICATION_NAME ?= openlit
HELM_NAMESPACE ?= openlit
HELM_DIR ?= ./helm
HELM_CHART_VERSION ?=
HELM_VALUES_FILE ?= ./values.yaml
HELM_OCI_REGISTRY ?=
HELM_OCI_NAMESPACE ?=
HELM_OCI_USERNAME ?=
HELM_OCI_PASSWORD ?=
HELM_REPO_NAME ?= openlit
HELM_REPO_URL ?= https://openlit.github.io/helm/
HELM_CHART_REPO ?= $(HELM_REPO_NAME)/openlit
include ../_template/base.mk
+45
View File
@@ -0,0 +1,45 @@
# OpenLIT
## Introduction
OpenLIT is an open-source observability platform for AI/LLM applications. It provides monitoring, tracing, and analytics for AI workloads including OpenAI, HuggingFace, LangChain, and vector databases. This chart deploys the OpenLIT backend that receives and visualizes telemetry data.
For zero-code AI instrumentation of your applications, install the [openlit-operator](../openlit-operator/) which automatically injects OpenTelemetry instrumentation via init containers.
## Installation
To install OpenLIT, run:
```bash
make install
```
## Usage
After installation, verify the deployment:
```bash
kubectl get pods -n openlit
```
To access the OpenLIT dashboard, port-forward the service:
```bash
kubectl port-forward svc/openlit 8080:8080 -n openlit
```
Then access the dashboard at <http://localhost:8080>.
## Uninstall
To uninstall OpenLIT:
```bash
make uninstall
```
## Documentation
- [OpenLIT Documentation](https://docs.openlit.io/)
- [OpenLIT GitHub](https://github.com/openlit/openlit)
- [OpenLIT Operator](https://docs.openlit.io/latest/operator/overview)
+45
View File
@@ -0,0 +1,45 @@
# OpenLIT
## 简介
OpenLIT 是一个开源的 AI/LLM 应用可观测性平台。它为 AI 工作负载(包括 OpenAI、HuggingFace、LangChain 和向量数据库)提供监控、追踪和分析功能。此 Chart 部署 OpenLIT 后端,用于接收和可视化遥测数据。
如需对应用进行零代码 AI 探针注入,请安装 [openlit-operator](../openlit-operator/),它通过 init 容器自动注入 OpenTelemetry 探针。
## 安装
要安装 OpenLIT,请运行:
```bash
make install
```
## 使用
安装完成后,验证部署:
```bash
kubectl get pods -n openlit
```
要访问 OpenLIT 仪表板,请端口转发服务:
```bash
kubectl port-forward svc/openlit 8080:8080 -n openlit
```
然后在浏览器中访问 <http://localhost:8080>。
## 卸载
要卸载 OpenLIT,请运行:
```bash
make uninstall
```
## 文档
- [OpenLIT 文档](https://docs.openlit.io/)
- [OpenLIT GitHub](https://github.com/openlit/openlit)
- [OpenLIT Operator](https://docs.openlit.io/latest/operator/overview)
+22
View File
@@ -0,0 +1,22 @@
# OpenLIT Helm Chart Values
# https://github.com/openlit/openlit
# OpenLIT backend configuration
openlit:
# AI observability configuration
ai:
enabled: true
# Resource configuration
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
# Service configuration
service:
type: ClusterIP
port: 8080
+15
View File
@@ -0,0 +1,15 @@
HELM_RELEASE_NAME ?= cnpg
HELM_APPLICATION_NAME ?= cloudnative-pg
HELM_NAMESPACE ?= cnpg-system
HELM_DIR ?= ./helm
HELM_CHART_VERSION ?=
HELM_VALUES_FILE ?= ./values.yaml
HELM_OCI_REGISTRY ?=
HELM_OCI_NAMESPACE ?=
HELM_OCI_USERNAME ?=
HELM_OCI_PASSWORD ?=
HELM_REPO_NAME ?= cnpg
HELM_REPO_URL ?= https://cloudnative-pg.github.io/charts
HELM_CHART_REPO ?= $(HELM_REPO_NAME)/cloudnative-pg
include ../_template/base.mk
+76
View File
@@ -0,0 +1,76 @@
# CloudNativePG Operator
## Introduction
CloudNativePG is a Kubernetes operator for PostgreSQL that manages the entire PostgreSQL lifecycle — provisioning, replication, failover, backup, and monitoring — through Kubernetes-native CRDs.
This chart installs the CloudNativePG Operator only. PostgreSQL clusters are created through Cluster CRDs after the operator is installed.
## Installation
To install the CloudNativePG Operator, run:
```bash
make install
```
## Usage
After installation, verify the operator is running:
```bash
kubectl get pods -n cnpg-system
```
Check that CRDs are registered:
```bash
kubectl get crd | grep cnpg
```
### Create a PostgreSQL Cluster
```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
name: postgres-cluster
spec:
instances: 3
storage:
size: 1Gi
postgresql:
parameters:
max_connections: '200'
```
Apply the manifest:
```bash
kubectl apply -f postgres-cluster.yaml
```
Check the cluster status:
```bash
kubectl get cluster -n postgres
```
Connect to PostgreSQL:
```bash
kubectl exec -it postgres-cluster-1 -n postgres -- psql
```
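CloudNativePG also generates credentials for the default application user in a secret named `<cluster>-app` (the name below assumes that default convention):

```bash
# read the auto-generated application user password
kubectl get secret postgres-cluster-app -n postgres \
  -o jsonpath='{.data.password}' | base64 -d
```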
## Uninstall
To uninstall:
```bash
make uninstall
```
## Documentation
- [CloudNativePG Documentation](https://cloudnative-pg.io/documentation/)
- [CloudNativePG GitHub](https://github.com/cloudnative-pg/cloudnative-pg)
+76
View File
@@ -0,0 +1,76 @@
# CloudNativePG Operator
## 简介
CloudNativePG 是一个 PostgreSQL Kubernetes Operator,通过 Kubernetes 原生 CRD 管理完整的 PostgreSQL 生命周期——配置、复制、故障转移、备份和监控。
此 Chart 仅安装 CloudNativePG Operator。PostgreSQL 集群在 Operator 安装后通过 Cluster CRD 创建。
## 安装
要安装 CloudNativePG Operator,请运行:
```bash
make install
```
## 使用
安装完成后,验证 Operator 是否正常运行:
```bash
kubectl get pods -n cnpg-system
```
检查 CRD 是否已注册:
```bash
kubectl get crd | grep cnpg
```
### 创建 PostgreSQL 集群
```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
name: postgres-cluster
spec:
instances: 3
storage:
size: 1Gi
postgresql:
parameters:
max_connections: '200'
```
应用清单:
```bash
kubectl apply -f postgres-cluster.yaml
```
检查集群状态:
```bash
kubectl get cluster -n postgres
```
连接 PostgreSQL:
```bash
kubectl exec -it postgres-cluster-1 -n postgres -- psql
```
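CloudNativePG 还会为默认应用用户生成凭据,保存在名为 `<cluster>-app` 的 Secret 中(以下名称基于该默认命名规则,仅为假设示例):

```bash
# 读取自动生成的应用用户密码
kubectl get secret postgres-cluster-app -n postgres \
  -o jsonpath='{.data.password}' | base64 -d
```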
## 卸载
卸载:
```bash
make uninstall
```
## 文档
- [CloudNativePG 文档](https://cloudnative-pg.io/documentation/)
- [CloudNativePG GitHub](https://github.com/cloudnative-pg/cloudnative-pg)
+14
View File
@@ -0,0 +1,14 @@
# CloudNativePG Operator Configuration
# https://cloudnative-pg.io/documentation/
# Default values for cloudnative-pg
# Uses chart defaults for most settings
# Resource configuration
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
+2
View File
@@ -4,6 +4,8 @@
PostgreSQL is a powerful, open source object-relational database system with over 35 years of active development.
This chart deploys a standalone PostgreSQL instance. For operator-managed PostgreSQL with automated high availability, failover, backup, and monitoring, see the [postgres-operator](../postgres-operator/) directory.
## Installation
To install PostgreSQL, run:
+3 -1
View File
@@ -2,7 +2,9 @@
## 简介
PostgreSQL 是一个强大的开源对象关系数据库系统,有超过 35 年的活跃开发。
PostgreSQL 是一个功能强大的开源对象关系数据库系统,有超过 35 年的活跃开发历史
此 Chart 部署独立的 PostgreSQL 实例。如需 Operator 管理的 PostgreSQL(支持自动高可用、故障转移、备份和监控),请参阅 [postgres-operator](../postgres-operator/) 目录。
## 安装
+4 -26
View File
@@ -1,6 +1,6 @@
HELM_RELEASE_NAME ?= rabbitmq
HELM_APPLICATION_NAME ?= rabbitmq
HELM_NAMESPACE ?= rabbitmq
HELM_RELEASE_NAME ?= rabbitmq-cluster-operator
HELM_APPLICATION_NAME ?= rabbitmq-cluster-operator
HELM_NAMESPACE ?= rabbitmq-operator
HELM_DIR ?= ./helm
HELM_CHART_VERSION ?= 0.2.0
HELM_VALUES_FILE ?= ./values.yaml
@@ -8,28 +8,6 @@ HELM_OCI_REGISTRY ?= docker.io
HELM_OCI_NAMESPACE ?= cloudpirates
HELM_OCI_USERNAME ?=
HELM_OCI_PASSWORD ?=
# Operator configuration
OPERATOR_RELEASE_NAME ?= rabbitmq-cluster-operator
OPERATOR_NAMESPACE ?= rabbitmq-operator
OPERATOR_CHART_REPO ?= oci://$(HELM_OCI_REGISTRY)/$(HELM_OCI_NAMESPACE)/rabbitmq-cluster-operator
OPERATOR_CHART_VERSION ?= $(HELM_CHART_VERSION)
OPERATOR_VALUES_FILE ?= ./values.yaml
# For RabbitMQ Cluster Operator, we only install the operator
# RabbitMQ clusters are created using RabbitmqCluster CRDs after operator is installed
HELM_CHART_REPO ?= oci://$(HELM_OCI_REGISTRY)/$(HELM_OCI_NAMESPACE)/rabbitmq-cluster-operator
include ../_template/base.mk
include ../_template/operator.mk
# Override install target to only install operator
.PHONY: install
install: install-operator
# Override uninstall target to only uninstall operator
.PHONY: uninstall
uninstall: uninstall-operator
# Override verify target
.PHONY: verify
verify: verify-operator
+3 -1
View File
@@ -4,6 +4,8 @@
The RabbitMQ Cluster Operator is a Kubernetes operator that automates the deployment and management of RabbitMQ clusters on Kubernetes.
For a standalone RabbitMQ deployment (without operator), see the [rabbitmq](../rabbitmq/) directory.
## Installation
To install RabbitMQ Cluster Operator, run:
@@ -17,7 +19,7 @@ make install
After installation, verify the deployment:
```bash
kubectl get pods -n rabbitmq-cluster-operator
kubectl get pods -n rabbitmq-operator
```
To create a RabbitMQ cluster, apply a RabbitmqCluster custom resource:
+4 -2
View File
@@ -2,7 +2,9 @@
## 简介
RabbitMQ Cluster Operator 是一个 Kubernetes 运营商,用于在 Kubernetes 上自动部署和管理 RabbitMQ 集群。
RabbitMQ Cluster Operator 是一个 Kubernetes Operator,用于自动化 Kubernetes 上 RabbitMQ 集群的部署和管理
如需部署独立的 RabbitMQ(不使用 Operator),请参阅 [rabbitmq](../rabbitmq/) 目录。
## 安装
@@ -17,7 +19,7 @@ make install
安装后,验证部署:
```bash
kubectl get pods -n rabbitmq-cluster-operator
kubectl get pods -n rabbitmq-operator
```
要创建 RabbitMQ 集群,请应用 RabbitmqCluster 自定义资源:
+2
View File
@@ -4,6 +4,8 @@
RabbitMQ is the most widely deployed open source message broker.
This chart deploys a standalone RabbitMQ instance using the CloudPirates Helm chart. For operator-managed RabbitMQ clusters with advanced features like automatic scaling and self-healing, see the [rabbitmq-cluster-operator](../rabbitmq-cluster-operator/) directory.
## Installation
To install RabbitMQ, run:
+2
View File
@@ -4,6 +4,8 @@
RabbitMQ 是部署最广泛的开源消息代理。
此 Chart 使用 CloudPirates Helm chart 部署独立的 RabbitMQ 实例。如需 Operator 管理的 RabbitMQ 集群(支持自动扩缩容和自愈等高级功能),请参阅 [rabbitmq-cluster-operator](../rabbitmq-cluster-operator/) 目录。
## 安装
要安装 RabbitMQ,请运行:
+1 -1
View File
@@ -2,7 +2,7 @@ HELM_RELEASE_NAME ?= vault
HELM_APPLICATION_NAME ?= vault
HELM_NAMESPACE ?= vault
HELM_DIR ?= ./helm
HELM_CHART_VERSION ?=
HELM_CHART_VERSION ?= 0.31.0
HELM_VALUES_FILE ?= ./values.yaml
HELM_OCI_REGISTRY ?=
HELM_OCI_NAMESPACE ?=