From 3e284cd6fdc7505ad49428186f9d9b67754956cc Mon Sep 17 00:00:00 2001 From: bsctl Date: Tue, 25 Feb 2025 18:37:38 +0100 Subject: [PATCH 1/4] feat(docs): document cluster-api controlplane provider --- .../content/cluster-api/cluster-autoscaler.md | 1 + .../cluster-api/control-plane-provider.md | 98 +++++++++++++++++++ docs/content/cluster-api/index.md | 11 +++ .../cluster-api/vsphere-infra-provider.md | 1 + docs/content/guides/cluster-api.md | 6 -- docs/mkdocs.yml | 7 +- 6 files changed, 117 insertions(+), 7 deletions(-) create mode 100644 docs/content/cluster-api/cluster-autoscaler.md create mode 100644 docs/content/cluster-api/control-plane-provider.md create mode 100644 docs/content/cluster-api/index.md create mode 100644 docs/content/cluster-api/vsphere-infra-provider.md delete mode 100644 docs/content/guides/cluster-api.md diff --git a/docs/content/cluster-api/cluster-autoscaler.md b/docs/content/cluster-api/cluster-autoscaler.md new file mode 100644 index 00000000..3b224b2c --- /dev/null +++ b/docs/content/cluster-api/cluster-autoscaler.md @@ -0,0 +1 @@ +# Cluster autoscaling \ No newline at end of file diff --git a/docs/content/cluster-api/control-plane-provider.md b/docs/content/cluster-api/control-plane-provider.md new file mode 100644 index 00000000..9c6be419 --- /dev/null +++ b/docs/content/cluster-api/control-plane-provider.md @@ -0,0 +1,98 @@ +# Kamaji Control Plane Provider + +Kamaji can act as a Cluster API Control Plane provider via usage of the `KamajiControlPlane`, custom resource that defines the control plane of a Tenant Cluster. 
+ +Here an example of a `KamajiControlPlane`: + +```yaml +kind: KamajiControlPlane +apiVersion: controlplane.cluster.x-k8s.io/v1alpha1 +metadata: + name: '${CLUSTER_NAME}' + namespace: '${CLUSTER_NAMESPACE}' +spec: + apiServer: + extraArgs: + - --cloud-provider=external + controllerManager: + extraArgs: + - --cloud-provider=external + dataStoreName: default + addons: + coreDNS: {} + kubeProxy: {} + konnectivity: {} + kubelet: + cgroupfs: systemd + preferredAddressTypes: + - InternalIP + - ExternalIP + - Hostname + network: + serviceAddress: '${CONTROL_PLANE_ENDPOINT_IP}' + serviceType: LoadBalancer + version: ${KUBERNETES_VERSION} +``` + +You can use it as reference in a standard `Cluster` custom resource as controlplane provider: + +```yaml +kind: Cluster +apiVersion: cluster.x-k8s.io/v1beta1 +metadata: + labels: + cluster.x-k8s.io/cluster-name: '${CLUSTER_NAME}' + name: '${CLUSTER_NAME}' + namespace: '${CLUSTER_NAMESPACE}' +spec: + controlPlaneRef: + apiVersion: controlplane.cluster.x-k8s.io/v1beta1 + kind: KamajiControlPlane + name: '${CLUSTER_NAME}' + clusterNetwork: + pods: + cidrBlocks: + - '${PODS_CIDR}' + services: + cidrBlocks: + - '${SERVICES_CIDR}' + infrastructureRef: + apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 + kind: ... # your infrastructure kind may vary + name: '${CLUSTER_NAME}' +``` + +!!! info "Full Reference" + For a full reference of the `KamajiControlPlane` custom resource, please see the [Reference APIs](reference/api.md). + +## Getting started with the Kamaji Control Plane Provider + +Cluster API Provider Kamaji is compliant with the `clusterctl` contract, which means you can use it with the `clusterctl` CLI to create and manage your Kamaji-based clusters. + +!!! info "Options for install Cluster API" + There are two ways to getting started with Cluster API: + * using `clusterctl` to install the Cluster API components as stated in this guide + * using the Cluster API Operator. 
Please refer to the [Cluster API Operator](https://cluster-api-operator.sigs.k8s.io/) guide for this option. + +### Prerequisites + +* [`clusterctl`](https://cluster-api.sigs.k8s.io/user/quick-start#install-clusterctl) installed in your workstation to handle the lifecycle of your Kamaji-based clusters. +* [`kubectl`](https://kubernetes.io/docs/tasks/tools/) installed in your workstation to interact with your Kamaji-based clusters. +* [Kamaji](../getting-started/getting-started.md) installed in your Management Cluster. + +### Initialize the Management Cluster + +Using `clusterctl` to initialize the Management Cluster. When executed for the first time, `clusterctl init` will fetch and install the Cluster API components in the Management Cluster + +```bash +clusterctl init --control-plane kamaji +``` + +As a result, the following Cluster API components will be installed: + +* Cluster API Provider in `capi-system` namespace +* Bootstrap Provider in `capi-kubeadm-bootstrap-system` namespace +* Kamaji Control Plane Provider in `kamaji-system` namespace + +We're still missing the infrastructure provider of choice, which is required to create a Kamaji-based cluster. And this is the next step. + diff --git a/docs/content/cluster-api/index.md b/docs/content/cluster-api/index.md new file mode 100644 index 00000000..a4b32e46 --- /dev/null +++ b/docs/content/cluster-api/index.md @@ -0,0 +1,11 @@ +# Cluster APIs Support + +The [Cluster API](https://github.com/kubernetes-sigs/cluster-api) brings declarative, Kubernetes-style APIs to the creation, configuration, and management of Kubernetes clusters. If you're not familiar with the Cluster API project, you can learn more from the [official documentation](https://cluster-api.sigs.k8s.io/). + +Users can utilize Kamaji in two distinct ways: + +* **Standalone:** Kamaji can be used as a standalone Kubernetes Operator installed in the Management Cluster to manage multiple Tenant Control Planes.
Worker nodes of Tenant Clusters can join any infrastructure, whether it be cloud, data-center, or edge, using various automation tools such as _Ansible_, _Terraform_, or even manually with any script calling `kubeadm`. See [yaki](https://goyaki.clastix.io/) as an example. + +* **Cluster API Provider:** Kamaji can be used as a [Cluster API Control Plane Provider](https://cluster-api.sigs.k8s.io/reference/providers#control-plane) to manage multiple Tenant Control Planes across various infrastructures. Kamaji offers seamless integration with the most popular [Cluster API Infrastructure Providers](https://cluster-api.sigs.k8s.io/reference/providers#infrastructure). + +Check the currently supported infrastructure providers and the roadmap on the related [repository](https://github.com/clastix/cluster-api-control-plane-provider-kamaji). \ No newline at end of file diff --git a/docs/content/cluster-api/vsphere-infra-provider.md b/docs/content/cluster-api/vsphere-infra-provider.md new file mode 100644 index 00000000..b7c125ce --- /dev/null +++ b/docs/content/cluster-api/vsphere-infra-provider.md @@ -0,0 +1 @@ +# vSphere Infrastructure Provider \ No newline at end of file diff --git a/docs/content/guides/cluster-api.md b/docs/content/guides/cluster-api.md deleted file mode 100644 index 79727a93..00000000 --- a/docs/content/guides/cluster-api.md +++ /dev/null @@ -1,6 +0,0 @@ -# Cluster APIs Support - -The [Cluster API](https://github.com/kubernetes-sigs/cluster-api) brings declarative, Kubernetes-style APIs to creation of Kubernetes clusters, including configuration and management. - -Kamaji offers seamless integration with the most popular Cluster API Infrastructure Providers. Check the currently supported providers and the roadmap on the related [reposistory](https://github.com/clastix/cluster-api-control-plane-provider-kamaji). 
- diff --git a/docs/mkdocs.yml b/docs/mkdocs.yml index 42aca780..8b4d230b 100644 --- a/docs/mkdocs.yml +++ b/docs/mkdocs.yml @@ -9,7 +9,7 @@ site_author: bsctl site_description: >- Kamaji deploys and operates Kubernetes Control Plane at scale with a fraction of the operational burden. -copyright: Copyright © 2020 - 2023 Clastix Labs +copyright: Copyright © 2020 - 2025 Clastix Labs theme: name: material @@ -60,6 +60,11 @@ nav: - getting-started/getting-started.md - getting-started/kind.md - 'Concepts': concepts.md +- 'Cluster API': + - cluster-api/index.md + - cluster-api/control-plane-provider.md + - cluster-api/vsphere-infra-provider.md + - cluster-api/cluster-autoscaler.md - 'Guides': - guides/index.md - guides/kamaji-azure-deployment.md From 27af649d940604bac7b19b32c8b5d9b9c17a6c85 Mon Sep 17 00:00:00 2001 From: bsctl Date: Wed, 26 Feb 2025 10:29:14 +0100 Subject: [PATCH 2/4] feat(docs): document cluster-api vsphere provider --- docs/content/cluster-api/cluster-class.md | 1 + .../cluster-api/control-plane-provider.md | 21 ++- .../cluster-api/kubevirt-infra-provider.md | 1 + docs/content/cluster-api/other-providers.md | 21 +++ .../cluster-api/vsphere-infra-provider.md | 172 +++++++++++++++++- docs/mkdocs.yml | 4 +- 6 files changed, 209 insertions(+), 11 deletions(-) create mode 100644 docs/content/cluster-api/cluster-class.md create mode 100644 docs/content/cluster-api/kubevirt-infra-provider.md create mode 100644 docs/content/cluster-api/other-providers.md diff --git a/docs/content/cluster-api/cluster-class.md b/docs/content/cluster-api/cluster-class.md new file mode 100644 index 00000000..860bc71a --- /dev/null +++ b/docs/content/cluster-api/cluster-class.md @@ -0,0 +1 @@ +# Cluster Class \ No newline at end of file diff --git a/docs/content/cluster-api/control-plane-provider.md b/docs/content/cluster-api/control-plane-provider.md index 9c6be419..71ae45d0 100644 --- a/docs/content/cluster-api/control-plane-provider.md +++ 
b/docs/content/cluster-api/control-plane-provider.md @@ -1,8 +1,8 @@ # Kamaji Control Plane Provider -Kamaji can act as a Cluster API Control Plane provider via usage of the `KamajiControlPlane`, custom resource that defines the control plane of a Tenant Cluster. +Kamaji can act as a Cluster API Control Plane provider using the `KamajiControlPlane` custom resource, which defines the control plane of a Tenant Cluster. -Here an example of a `KamajiControlPlane`: +Here is an example of a `KamajiControlPlane`: ```yaml kind: KamajiControlPlane @@ -34,7 +34,7 @@ spec: version: ${KUBERNETES_VERSION} ``` -You can use it as reference in a standard `Cluster` custom resource as controlplane provider: +You can use this as a reference in a standard `Cluster` custom resource as the control plane provider: ```yaml kind: Cluster @@ -67,22 +67,23 @@ spec: ## Getting started with the Kamaji Control Plane Provider -Cluster API Provider Kamaji is compliant with the `clusterctl` contract, which means you can use it with the `clusterctl` CLI to create and manage your Kamaji-based clusters. +Cluster API Provider Kamaji is compliant with the `clusterctl` contract, which means you can use it with the `clusterctl` CLI to create and manage your Kamaji based clusters. !!! info "Options for install Cluster API" There are two ways to getting started with Cluster API: - * using `clusterctl` to install the Cluster API components as stated in this guide + + * using `clusterctl` to install the Cluster API components. * using the Cluster API Operator. Please refer to the [Cluster API Operator](https://cluster-api-operator.sigs.k8s.io/) guide for this option. ### Prerequisites -* [`clusterctl`](https://cluster-api.sigs.k8s.io/user/quick-start#install-clusterctl) installed in your workstation to handle the lifecycle of your Kamaji-based clusters. -* [`kubectl`](https://kubernetes.io/docs/tasks/tools/) installed in your workstation to interact with your Kamaji-based clusters.
+* [`clusterctl`](https://cluster-api.sigs.k8s.io/user/quick-start#install-clusterctl) installed in your workstation to handle the lifecycle of your clusters. +* [`kubectl`](https://kubernetes.io/docs/tasks/tools/) installed in your workstation to interact with your clusters. * [Kamaji](../getting-started/getting-started.md) installed in your Management Cluster. ### Initialize the Management Cluster -Using `clusterctl` to initialize the Management Cluster. When executed for the first time, `clusterctl init` will fetch and install the Cluster API components in the Management Cluster +Use `clusterctl` to initialize the Management Cluster. When executed for the first time, `clusterctl init` will fetch and install the Cluster API components in the Management Cluster: ```bash clusterctl init --control-plane kamaji ``` @@ -94,5 +95,7 @@ As a result, the following Cluster API components will be installed: * Bootstrap Provider in `capi-kubeadm-bootstrap-system` namespace * Kamaji Control Plane Provider in `kamaji-system` namespace -We're still missing the infrastructure provider of choice, which is required to create a Kamaji-based cluster. And this is the next step. +The next step, we will be to create a fully functional Kubernetes cluster on VMware vSphere using the Kamaji Control Plane Provider and the vSphere Infrastructure Provider. This is just an example, as Kamaji supports several other infrastructure providers. + +For a complete list of supported infrastructure providers, please refer to the [other providers](other-providers.md) page.
diff --git a/docs/content/cluster-api/kubevirt-infra-provider.md b/docs/content/cluster-api/kubevirt-infra-provider.md new file mode 100644 index 00000000..4d5b68ef --- /dev/null +++ b/docs/content/cluster-api/kubevirt-infra-provider.md @@ -0,0 +1 @@ +# KubeVirt Infrastructure Provider \ No newline at end of file diff --git a/docs/content/cluster-api/other-providers.md b/docs/content/cluster-api/other-providers.md new file mode 100644 index 00000000..70ed7ff9 --- /dev/null +++ b/docs/content/cluster-api/other-providers.md @@ -0,0 +1,21 @@ +# Other Supported Infrastructure Providers + +Kamaji offers seamless integration with the most popular [Cluster API Infrastructure Providers](https://cluster-api.sigs.k8s.io/reference/providers#infrastructure): + +- AWS +- Azure +- Google Cloud +- Equinix/Packet +- Hetzner +- KubeVirt +- Metal³ +- Nutanix +- OpenStack +- Tinkerbell +- vSphere +- IONOS Cloud +- Proxmox by IONOS Cloud + +For the most up-to-date information and technical considerations, please always check the related [repository](https://github.com/clastix/cluster-api-control-plane-provider-kamaji). + + diff --git a/docs/content/cluster-api/vsphere-infra-provider.md b/docs/content/cluster-api/vsphere-infra-provider.md index b7c125ce..b44be307 100644 --- a/docs/content/cluster-api/vsphere-infra-provider.md +++ b/docs/content/cluster-api/vsphere-infra-provider.md @@ -1 +1,171 @@ -# vSphere Infrastructure Provider \ No newline at end of file +# vSphere Infrastructure Provider + +Use the [vSphere Infrastructure Provider](https://github.com/kubernetes-sigs/cluster-api-provider-vsphere) to create a fully functional Kubernetes cluster on **vSphere** using the [Kamaji Control Plane Provider](https://github.com/clastix/cluster-api-control-plane-provider-kamaji). + +!!! 
info "Virtual Machines Placement" + As Kamaji decouples the Control Plane from the infrastructure, the Kamaji Management Cluster hosting the Tenant control Plane, is not required to be on the same vSphere where worker machines will be. As network reachability is satisfied, you can have your Kamaji Management Cluster on a different vSphere or even on a different cloud provider. + +## vSphere Requirements + +You need to access a **vSphere** environment with the following requirements: + +- The vSphere environment should be configured with a DHCP service in the primary VM network for your tenant clusters. Alternatively, you can use an [IPAM Provider](https://github.com/kubernetes-sigs/cluster-api-ipam-provider-in-cluster). + +- Configure one Resource Pool across the hosts onto which the tenant clusters will be provisioned. Every host in the Resource Pool will need access to shared storage. + +- A Template VM based on published [OVA images](https://github.com/kubernetes-sigs/cluster-api-provider-vsphere). For production-like environments, it is highly recommended to build and use your own custom OVA images. Take a look at the [image-builder](https://github.com/kubernetes-sigs/image-builder) project. + +- To use the vSphere Container Storage Interface (CSI), your vSphere cluster needs support for Cloud Native Storage (CNS). CNS relies on a shared datastore. Ensure that your vSphere environment is properly configured to support CNS. + +## Install the vSphere Infrastructure Provider + +In order to use the vSphere Cluster API provider, you must be able to connect and authenticate to a **vCenter**.
Ensure you have credentials to your vCenter server: + +```bash +export VSPHERE_USERNAME="admin@vsphere.local" +export VSPHERE_PASSWORD="*******" +``` + +Install the vSphere Infrastructure Provider: + +```bash +clusterctl init --infrastructure vsphere +``` + +## Install the IPAM Provider + +If you intend to use IPAM to assign addresses to the nodes, you can use the in-cluster [IPAM provider](https://github.com/kubernetes-sigs/cluster-api-ipam-provider-in-cluster) instead of relying on a DHCP service. To do so, initialize the Management Cluster with the `--ipam in-cluster` flag: + +```bash +clusterctl init --ipam in-cluster +``` + +## Create a Tenant Cluster + +Once all the controllers are up and running in the management cluster, you can apply the cluster manifests containing the specifications of the tenant cluster you want to provision. + +### Generate the Cluster Manifest using the template + +Using `clusterctl`, you can generate a tenant cluster manifest for your vSphere environment. Set the environment variables to match your vSphere configuration: + +```bash +# VSphere Configuration +export VSPHERE_USERNAME="admin@vsphere.local" +export VSPHERE_PASSWORD="changeme" +export VSPHERE_SERVER="vcenter.vsphere.local" +export VSPHERE_DATACENTER: "SDDC-Datacenter" +export VSPHERE_DATASTORE: "DefaultDatastore" +export VSPHERE_NETWORK: "VM Networkt" +export VSPHERE_RESOURCE_POOL: "*/Resources" +export VSPHERE_FOLDER: "kamaji-capi-pool" +export VSPHERE_TEMPLATE: "ubuntu-2404-kube-v1.31.0" +export VSPHERE_TLS_THUMBPRINT: "..." +export VSPHERE_STORAGE_POLICY: "" +export KUBERNETES_VERSION: "v1.31.0" +export CPI_IMAGE_K8S_VERSION: "v1.31.0" +export CSI_INSECURE: "1" +export VSPHERE_SSH_USER: "clastix" +export VSPHERE_SSH_AUTHORIZED_KEY: "ssh-rsa AAAAB3N..."
+``` + +If you intend to use IPAM, set the environment variables to match your IPAM configuration: + +```bash +# IPAM Configuration +export NODE_IPAM_POOL_API_GROUP="ipam.cluster.x-k8s.io" +export NODE_IPAM_POOL_KIND="InClusterIPPool" +export NODE_IPAM_POOL_NAME="ipam-ip-pool" +export NODE_IPAM_POOL_RANGE="10.9.62.100-10.9.62.200" +export NODE_IPAM_POOL_PREFIX="24" +export NODE_IPAM_POOL_GATEWAY="10.9.62.1" +``` + +Set the environment variables to match your cluster configuration: + +```bash +# Cluster Configuration +export CLUSTER_NAME="sample" +export CLUSTER_NAMESPACE="default" +export POD_CIDR="10.36.0.0/16" +export SVC_CIDR="10.96.0.0/16" +export CONTROL_PLANE_REPLICAS=2 +export CONTROL_PLANE_ENDPOINT_IP="10.9.62.30" +export KUBERNETES_VERSION="v1.31.0" +export CPI_IMAGE_K8S_VERSION="v1.31.0" +export CSI_INSECURE="1" +export NODE_DISK_SIZE=25 +export NODE_MEMORY_SIZE=8192 +export NODE_CPU_COUNT=2 +export MACHINE_DEPLOY_REPLICAS=3 +export NAMESERVER="8.8.8.8" +``` + +The following command will generate a cluster manifest based on the [`capi-kamaji-vsphere-template.yaml`](https://raw.githubusercontent.com/clastix/cluster-api-control-plane-provider-kamaji/main/templates/capi-kamaji-vsphere-template.yaml) template file: + +```bash +clusterctl generate cluster --from capi-kamaji-vsphere-template.yaml > capi-kamaji-vsphere-cluster.yaml +``` + +### Apply the Cluster Manifest + +Apply the generated cluster manifest to create the tenant cluster: + +```bash +kubectl apply -f capi-kamaji-vsphere-cluster.yaml +``` + +You can check the status of the cluster deployment with `clusterctl`: + +```bash +clusterctl describe cluster sample + +NAME READY SEVERITY REASON SINCE MESSAGE +Cluster/sample True 33m +├─ClusterInfrastructure - VSphereCluster/sample True 34m +├─ControlPlane - KamajiControlPlane/sample True 34m +└─Workers + └─MachineDeployment/sample-md-0 True 80s + └─3 Machines... True 32m See ... 
+``` + +A new tenant cluster named `sample` is created with a Tenant Control Plane and three worker nodes. You can check the status of the tenant cluster with `kubectl`: + +```bash +kubectl get clusters -n default +``` + +and the related Tenant Control Plane created in the Kamaji Management Cluster: + +```bash +kubectl get tcp -n default +``` + +## Access the Tenant Cluster + +To access the tenant cluster, you can extract the `kubeconfig` file from the Kamaji Management Cluster: + +```bash +kubectl get secret sample-kubeconfig -o jsonpath='{.data.value}' | base64 -d > ~/.kube/sample.kubeconfig +``` + +and use it to access the tenant cluster: + +```bash +export KUBECONFIG=~/.kube/sample.kubeconfig +kubectl cluster-info +``` + +## Cloud Controller Manager + +The template file `capi-kamaji-vsphere-template.yaml` includes the external [Cloud Controller Manager (CCM)](https://github.com/kubernetes/cloud-provider-vsphere) configuration for vSphere. The CCM is a Kubernetes controller that manages the cloud provider's resources. The CCM is responsible for creating and managing the cloud provider's resources, such as Load Balancers, Persistent Volumes, and Node Balancers. + +## Delete the Tenant Cluster + +For cluster deletion, use the following command: + +```bash +kubectl delete cluster sample +``` + +!!! warning "Orphan Resources" + Do NOT use `kubectl delete -f capi-kamaji-vsphere-cluster.yaml` as that can result in orphan resources. Always use `kubectl delete cluster sample` to delete the tenant cluster.
\ No newline at end of file diff --git a/docs/mkdocs.yml b/docs/mkdocs.yml index 8b4d230b..8fa7e2c9 100644 --- a/docs/mkdocs.yml +++ b/docs/mkdocs.yml @@ -63,7 +63,10 @@ nav: - 'Cluster API': - cluster-api/index.md - cluster-api/control-plane-provider.md + - cluster-api/kubevirt-infra-provider.md - cluster-api/vsphere-infra-provider.md + - cluster-api/other-providers.md + - cluster-api/cluster-class.md - cluster-api/cluster-autoscaler.md - 'Guides': - guides/index.md @@ -75,7 +78,6 @@ nav: - guides/datastore-migration.md - guides/backup-and-restore.md - guides/certs-lifecycle.md - - guides/cluster-api.md - guides/console.md - 'Use Cases': use-cases.md - 'Reference': From 797ddd3946e324a326c5f8246b53d8452795c13d Mon Sep 17 00:00:00 2001 From: bsctl Date: Wed, 26 Feb 2025 12:05:10 +0100 Subject: [PATCH 3/4] feat(docs): refine cluster-api vsphere provider --- .../content/cluster-api/cluster-autoscaler.md | 2 +- .../cluster-api/control-plane-provider.md | 2 +- .../cluster-api/kubevirt-infra-provider.md | 2 +- docs/content/cluster-api/other-providers.md | 2 +- .../cluster-api/vsphere-infra-provider.md | 63 ++++++++++++------- 5 files changed, 45 insertions(+), 26 deletions(-) diff --git a/docs/content/cluster-api/cluster-autoscaler.md b/docs/content/cluster-api/cluster-autoscaler.md index 3b224b2c..6ff89ae6 100644 --- a/docs/content/cluster-api/cluster-autoscaler.md +++ b/docs/content/cluster-api/cluster-autoscaler.md @@ -1 +1 @@ -# Cluster autoscaling \ No newline at end of file +# Cluster Autoscaler \ No newline at end of file diff --git a/docs/content/cluster-api/control-plane-provider.md b/docs/content/cluster-api/control-plane-provider.md index 71ae45d0..845c9da8 100644 --- a/docs/content/cluster-api/control-plane-provider.md +++ b/docs/content/cluster-api/control-plane-provider.md @@ -95,7 +95,7 @@ As result, `clusterctl` the following Cluster API components will be installed: * Bootstrap Provider in `capi-kubeadm-bootstrap-system` namespace * Kamaji Control Plane 
Provider in `kamaji-system` namespace -The next step, we will be to create a fully functional Kubernetes cluster on VMware vSphere using the Kamaji Control Plane Provider and the vSphere Infrastructure Provider. This is just an example, as Kamaji supports several other infrastructure providers. +The next step will be to create a fully functional Kubernetes cluster using the Kamaji Control Plane Provider and the infrastructure provider of your choice. For a complete list of supported infrastructure providers, please refer to the [other providers](other-providers.md) page. diff --git a/docs/content/cluster-api/kubevirt-infra-provider.md b/docs/content/cluster-api/kubevirt-infra-provider.md index 4d5b68ef..327562f8 100644 --- a/docs/content/cluster-api/kubevirt-infra-provider.md +++ b/docs/content/cluster-api/kubevirt-infra-provider.md @@ -1 +1 @@ -# KubeVirt Infrastructure Provider \ No newline at end of file +# KubeVirt Infra Provider \ No newline at end of file diff --git a/docs/content/cluster-api/other-providers.md b/docs/content/cluster-api/other-providers.md index 70ed7ff9..a5171111 100644 --- a/docs/content/cluster-api/other-providers.md +++ b/docs/content/cluster-api/other-providers.md @@ -1,4 +1,4 @@ -# Other Supported Infrastructure Providers +# Other Infra Providers Kamaji offers seamless integration with the most popular [Cluster API Infrastructure Providers](https://cluster-api.sigs.k8s.io/reference/providers#infrastructure): diff --git a/docs/content/cluster-api/vsphere-infra-provider.md b/docs/content/cluster-api/vsphere-infra-provider.md index b44be307..2c956cdd 100644 --- a/docs/content/cluster-api/vsphere-infra-provider.md +++ b/docs/content/cluster-api/vsphere-infra-provider.md @@ -1,9 +1,9 @@ -# vSphere Infrastructure Provider +# vSphere Infra Provider Use the [vSphere Infrastructure Provider](https://github.com/kubernetes-sigs/cluster-api-provider-vsphere) to create a fully functional Kubernetes cluster on **vSphere** using the [Kamaji Control
Plane Provider](https://github.com/clastix/cluster-api-control-plane-provider-kamaji). -!!! info "Virtual Machines Placement" - As Kamaji decouples the Control Plane from the infrastructure, the Kamaji Management Cluster hosting the Tenant control Plane, is not required to be on the same vSphere where worker machines will be. As network reachability is satisfied, you can have your Kamaji Management Cluster on a different vSphere or even on a different cloud provider. +!!! info "Control Plane and Infrastructure Decoupling" + Kamaji decouples the Control Plane from the infrastructure, so the Kamaji Management Cluster hosting the Tenant Control Plane does not need to be on the same vSphere as the worker machines. As long as network reachability is satisfied, you can have your Kamaji Management Cluster on a different vSphere or even on a different cloud provider. ## vSphere Requirements @@ -42,30 +42,30 @@ clusterctl init --ipam in-cluster ## Create a Tenant Cluster -Once all the controllers are up and running in the management cluster, you can apply the cluster manifests containing the specifications of the tenant cluster you want to provision. +Once all the controllers are up and running in the management cluster, you can generate and apply the cluster manifests of the tenant cluster you want to provision. ### Generate the Cluster Manifest using the template Using `clusterctl`, you can generate a tenant cluster manifest for your vSphere environment. 
Set the environment variables to match your vSphere configuration: ```bash -# VSphere Configuration +# vSphere Configuration export VSPHERE_USERNAME="admin@vsphere.local" export VSPHERE_PASSWORD="changeme" export VSPHERE_SERVER="vcenter.vsphere.local" -export VSPHERE_DATACENTER: "SDDC-Datacenter" -export VSPHERE_DATASTORE: "DefaultDatastore" -export VSPHERE_NETWORK: "VM Networkt" -export VSPHERE_RESOURCE_POOL: "*/Resources" -export VSPHERE_FOLDER: "kamaji-capi-pool" -export VSPHERE_TEMPLATE: "ubuntu-2404-kube-v1.31.0" -export VSPHERE_TLS_THUMBPRINT: "..." -export VSPHERE_STORAGE_POLICY: "" -export KUBERNETES_VERSION: "v1.31.0" -export CPI_IMAGE_K8S_VERSION: "v1.31.0" -export CSI_INSECURE: "1" -export VSPHERE_SSH_USER: "clastix" -export VSPHERE_SSH_AUTHORIZED_KEY: "ssh-rsa AAAAB3N..." +export VSPHERE_DATACENTER="SDDC-Datacenter" +export VSPHERE_DATASTORE="DefaultDatastore" +export VSPHERE_NETWORK="VM Network" +export VSPHERE_RESOURCE_POOL="*/Resources" +export VSPHERE_FOLDER="kamaji-capi-pool" +export VSPHERE_TEMPLATE="ubuntu-2404-kube-v1.31.0" +export VSPHERE_TLS_THUMBPRINT="..." +export VSPHERE_STORAGE_POLICY="" +export KUBERNETES_VERSION="v1.31.0" +export CPI_IMAGE_K8S_VERSION="v1.31.0" +export CSI_INSECURE="1" +export VSPHERE_SSH_USER="clastix" +export VSPHERE_SSH_AUTHORIZED_KEY="ssh-rsa AAAAB3N..." 
``` If you intend to use IPAM, set the environment variables to match your IPAM configuration: ```bash @@ -100,10 +100,11 @@ export MACHINE_DEPLOY_REPLICAS=3 export NAMESERVER="8.8.8.8" ``` -The following command will generate a cluster manifest based on the [`capi-kamaji-vsphere-template.yaml`](https://raw.githubusercontent.com/clastix/cluster-api-control-plane-provider-kamaji/main/templates/capi-kamaji-vsphere-template.yaml) template file: +The following command will generate a cluster manifest based on the [`capi-kamaji-vsphere-template.yaml`](https://raw.githubusercontent.com/clastix/cluster-api-control-plane-provider-kamaji/master/templates/capi-kamaji-vsphere-template.yaml) template file: ```bash -clusterctl generate cluster --from capi-kamaji-vsphere-template.yaml > capi-kamaji-vsphere-cluster.yaml +clusterctl generate cluster \ + --from capi-kamaji-vsphere-template.yaml > capi-kamaji-vsphere-cluster.yaml ``` ### Apply the Cluster Manifest Apply the generated cluster manifest to create the tenant cluster: @@ -140,12 +141,26 @@ and the related Tenant Control Plane created in the Kamaji Management Cluster: kubectl get tcp -n default ``` +## Install the Tenant Cluster as a Helm Release + +Another option to create a Tenant Cluster is to use the Helm Chart: + +```bash +helm repo add clastix https://clastix.github.io/cluster-api-kamaji-vsphere +helm repo update +helm install sample clastix/cluster-api-kamaji-vsphere \ + --set cluster.name=sample \ + --namespace default \ + --values my-values.yaml +``` + ## Access the Tenant Cluster To access the tenant cluster, you can extract the `kubeconfig` file from the Kamaji Management Cluster: ```bash -kubectl get secret sample-kubeconfig -o jsonpath='{.data.value}' | base64 -d > ~/.kube/sample.kubeconfig +kubectl get secret sample-kubeconfig \ + -o jsonpath='{.data.value}' | base64 -d > ~/.kube/sample.kubeconfig ``` and use it to access the tenant cluster: @@ -157,7 +172,11 @@ kubectl cluster-info ``` ## Cloud Controller Manager -The template file `capi-kamaji-vsphere-template.yaml` includes the external
[Cloud Controller Manager (CCM)](https://github.com/kubernetes/cloud-provider-vsphere) configuration for vSphere. The CCM is a Kubernetes controller that manages the cloud provider's resources. The CCM is responsible for creating and managing the cloud provider's resources, such as Load Balancers, Persistent Volumes, and Node Balancers. +The template file `capi-kamaji-vsphere-template.yaml` includes the external [Cloud Controller Manager (CCM)](https://github.com/kubernetes/cloud-provider-vsphere) configuration for vSphere. The CCM is a Kubernetes controller that manages the cloud provider's resources. Usually, the CCM is deployed on control plane nodes, but in this case, the CCM is deployed on the worker nodes as a DaemonSet. + +## vSphere CSI Driver + +The template file `capi-kamaji-vsphere-template.yaml` includes the [vSphere CSI Driver](https://github.com/kubernetes-sigs/vsphere-csi-driver) configuration for vSphere. The vSphere CSI Driver is a Container Storage Interface (CSI) driver that provides a way to use vSphere storage with Kubernetes. The template file also includes a default StorageClass for the vSphere CSI Driver.
## Delete the Tenant Cluster From e433c8400569c93bfe96e26c10a1ee79843fdd63 Mon Sep 17 00:00:00 2001 From: bsctl Date: Wed, 26 Feb 2025 12:09:28 +0100 Subject: [PATCH 4/4] feat(docs): cluster-api vsphere template with dhcp --- docs/content/cluster-api/vsphere-infra-provider.md | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/docs/content/cluster-api/vsphere-infra-provider.md b/docs/content/cluster-api/vsphere-infra-provider.md index 2c956cdd..8a977654 100644 --- a/docs/content/cluster-api/vsphere-infra-provider.md +++ b/docs/content/cluster-api/vsphere-infra-provider.md @@ -107,6 +107,13 @@ clusterctl generate cluster \ --from capi-kamaji-vsphere-template.yaml > capi-kamaji-vsphere-cluster.yaml ``` +If you want to use DHCP instead of IPAM, use a different template file: + +```bash +clusterctl generate cluster \ + --from capi-kamaji-vsphere-dhcp-template.yaml > capi-kamaji-vsphere-cluster.yaml +``` + ### Apply the Cluster Manifest Apply the generated cluster manifest to create the tenant cluster:
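
---

Editor's note: the patches above reference the `NODE_IPAM_POOL_*` variables but never show the pool object they describe. As a hedged illustration only (field names assume the `v1alpha2` API of the in-cluster IPAM provider mentioned in the docs; verify the `apiVersion` against the CRDs installed by `clusterctl init --ipam in-cluster`), those values would map onto a manifest along these lines:

```yaml
# Sketch of an InClusterIPPool matching the NODE_IPAM_POOL_* variables
# used in the vSphere guide. Assumes the in-cluster IPAM provider's
# v1alpha2 API; adjust apiVersion/fields to your installed version.
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: ipam-ip-pool        # NODE_IPAM_POOL_NAME
  namespace: default        # CLUSTER_NAMESPACE
spec:
  addresses:
    - 10.9.62.100-10.9.62.200   # NODE_IPAM_POOL_RANGE
  prefix: 24                    # NODE_IPAM_POOL_PREFIX
  gateway: 10.9.62.1            # NODE_IPAM_POOL_GATEWAY
```

Apply a pool like this to the Management Cluster before generating the cluster manifest, so the address claims created for the machines can be satisfied.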