diff --git a/docs/content/cluster-api/cluster-autoscaler.md b/docs/content/cluster-api/cluster-autoscaler.md
new file mode 100644
index 00000000..6ff89ae6
--- /dev/null
+++ b/docs/content/cluster-api/cluster-autoscaler.md
@@ -0,0 +1 @@
+# Cluster Autoscaler
\ No newline at end of file
diff --git a/docs/content/cluster-api/cluster-class.md b/docs/content/cluster-api/cluster-class.md
new file mode 100644
index 00000000..860bc71a
--- /dev/null
+++ b/docs/content/cluster-api/cluster-class.md
@@ -0,0 +1 @@
+# Cluster Class
\ No newline at end of file
diff --git a/docs/content/cluster-api/control-plane-provider.md b/docs/content/cluster-api/control-plane-provider.md
new file mode 100644
index 00000000..845c9da8
--- /dev/null
+++ b/docs/content/cluster-api/control-plane-provider.md
@@ -0,0 +1,101 @@
+# Kamaji Control Plane Provider
+
+Kamaji can act as a Cluster API Control Plane Provider through the `KamajiControlPlane` custom resource, which defines the control plane of a Tenant Cluster.
+
+Here is an example of a `KamajiControlPlane`:
+
+```yaml
+kind: KamajiControlPlane
+apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
+metadata:
+  name: '${CLUSTER_NAME}'
+  namespace: '${CLUSTER_NAMESPACE}'
+spec:
+  apiServer:
+    extraArgs:
+      - --cloud-provider=external
+  controllerManager:
+    extraArgs:
+      - --cloud-provider=external
+  dataStoreName: default
+  addons:
+    coreDNS: {}
+    kubeProxy: {}
+    konnectivity: {}
+  kubelet:
+    cgroupfs: systemd
+    preferredAddressTypes:
+      - InternalIP
+      - ExternalIP
+      - Hostname
+  network:
+    serviceAddress: '${CONTROL_PLANE_ENDPOINT_IP}'
+    serviceType: LoadBalancer
+  version: ${KUBERNETES_VERSION}
+```
+
+You can then reference it from a standard `Cluster` custom resource as the control plane provider:
+
+```yaml
+kind: Cluster
+apiVersion: cluster.x-k8s.io/v1beta1
+metadata:
+  labels:
+    cluster.x-k8s.io/cluster-name: '${CLUSTER_NAME}'
+  name: '${CLUSTER_NAME}'
+  namespace: '${CLUSTER_NAMESPACE}'
+spec:
+  controlPlaneRef:
+    apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
+    kind: KamajiControlPlane
+    name: '${CLUSTER_NAME}'
+  clusterNetwork:
+    pods:
+      cidrBlocks:
+        - '${PODS_CIDR}'
+    services:
+      cidrBlocks:
+        - '${SERVICES_CIDR}'
+  infrastructureRef:
+    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
+    kind: ... # your infrastructure kind may vary
+    name: '${CLUSTER_NAME}'
+```
+
+!!! info "Full Reference"
+    For a full reference of the `KamajiControlPlane` custom resource, please see the [Reference APIs](reference/api.md).
+
+## Getting started with the Kamaji Control Plane Provider
+
+Cluster API Provider Kamaji is compliant with the `clusterctl` contract, which means you can use the `clusterctl` CLI to create and manage your Kamaji-based clusters.
+
+!!! info "Options for installing Cluster API"
+    There are two ways to get started with Cluster API:
+
+    * using `clusterctl` to install the Cluster API components.
+    * using the Cluster API Operator. Please refer to the [Cluster API Operator](https://cluster-api-operator.sigs.k8s.io/) guide for this option.
+
+### Prerequisites
+
+* [`clusterctl`](https://cluster-api.sigs.k8s.io/user/quick-start#install-clusterctl) installed on your workstation to handle the lifecycle of your clusters.
+* [`kubectl`](https://kubernetes.io/docs/tasks/tools/) installed on your workstation to interact with your clusters.
+* [Kamaji](../getting-started/getting-started.md) installed in your Management Cluster.
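+
+Before initializing the providers, you can verify the prerequisites are in place. The following is a minimal sanity check, assuming Kamaji was installed in the `kamaji-system` namespace (the namespace may vary with your installation method):
+
+```bash
+# Check that the Kamaji controller is up and running
+# (namespace may differ depending on how Kamaji was installed):
+kubectl get pods -n kamaji-system
+
+# Check that the TenantControlPlane CRD is available:
+kubectl api-resources | grep -i tenantcontrolplane
+```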
+### Initialize the Management Cluster
+
+Use `clusterctl` to initialize the Management Cluster. When executed for the first time, `clusterctl init` fetches and installs the Cluster API components in the Management Cluster:
+
+```bash
+clusterctl init --control-plane kamaji
+```
+
+As a result, the following Cluster API components will be installed:
+
+* the core Cluster API Provider in the `capi-system` namespace
+* the Bootstrap Provider in the `capi-kubeadm-bootstrap-system` namespace
+* the Kamaji Control Plane Provider in the `kamaji-system` namespace
+
+The next step is to create a fully functional Kubernetes cluster using the Kamaji Control Plane Provider and the infrastructure provider of your choice.
+
+For a complete list of supported infrastructure providers, please refer to the [other providers](other-providers.md) page.
+
diff --git a/docs/content/cluster-api/index.md b/docs/content/cluster-api/index.md
new file mode 100644
index 00000000..a4b32e46
--- /dev/null
+++ b/docs/content/cluster-api/index.md
@@ -0,0 +1,11 @@
+# Cluster API Support
+
+The [Cluster API](https://github.com/kubernetes-sigs/cluster-api) brings declarative, Kubernetes-style APIs to the creation, configuration, and management of Kubernetes clusters. If you're not familiar with the Cluster API project, you can learn more from the [official documentation](https://cluster-api.sigs.k8s.io/).
+
+You can use Kamaji in two distinct ways:
+
+* **Standalone:** Kamaji can be used as a standalone Kubernetes Operator installed in the Management Cluster to manage multiple Tenant Control Planes. Worker nodes of Tenant Clusters can join any infrastructure, whether cloud, data center, or edge, using automation tools such as _Ansible_ or _Terraform_, or even manually with any script that calls `kubeadm`. See [yaki](https://goyaki.clastix.io/) as an example.
+
+* **Cluster API Provider:** Kamaji can be used as a [Cluster API Control Plane Provider](https://cluster-api.sigs.k8s.io/reference/providers#control-plane) to manage multiple Tenant Control Planes across various infrastructures. Kamaji offers seamless integration with the most popular [Cluster API Infrastructure Providers](https://cluster-api.sigs.k8s.io/reference/providers#infrastructure).
+
+Check the currently supported infrastructure providers and the roadmap on the related [repository](https://github.com/clastix/cluster-api-control-plane-provider-kamaji).
\ No newline at end of file
diff --git a/docs/content/cluster-api/kubevirt-infra-provider.md b/docs/content/cluster-api/kubevirt-infra-provider.md
new file mode 100644
index 00000000..327562f8
--- /dev/null
+++ b/docs/content/cluster-api/kubevirt-infra-provider.md
@@ -0,0 +1 @@
+# KubeVirt Infra Provider
\ No newline at end of file
diff --git a/docs/content/cluster-api/other-providers.md b/docs/content/cluster-api/other-providers.md
new file mode 100644
index 00000000..a5171111
--- /dev/null
+++ b/docs/content/cluster-api/other-providers.md
@@ -0,0 +1,21 @@
+# Other Infra Providers
+
+Kamaji offers seamless integration with the most popular [Cluster API Infrastructure Providers](https://cluster-api.sigs.k8s.io/reference/providers#infrastructure):
+
+- AWS
+- Azure
+- Google Cloud
+- Equinix/Packet
+- Hetzner
+- KubeVirt
+- Metal³
+- Nutanix
+- OpenStack
+- Tinkerbell
+- vSphere
+- IONOS Cloud
+- Proxmox by IONOS Cloud
+
+For the most up-to-date information and technical considerations, please always check the related [repository](https://github.com/clastix/cluster-api-control-plane-provider-kamaji).
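+
+As a general pattern, each of these can be installed next to the Kamaji Control Plane Provider with a single `clusterctl init` invocation. A minimal sketch, assuming the provider short names from the Cluster API providers list (e.g. `aws`, `azure`, `vsphere`, `proxmox`):
+
+```bash
+# Illustrative: initialize the Management Cluster with the Kamaji
+# control plane provider and an infrastructure provider of choice.
+clusterctl init --control-plane kamaji --infrastructure aws
+```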
+
+
diff --git a/docs/content/cluster-api/vsphere-infra-provider.md b/docs/content/cluster-api/vsphere-infra-provider.md
new file mode 100644
index 00000000..8a977654
--- /dev/null
+++ b/docs/content/cluster-api/vsphere-infra-provider.md
@@ -0,0 +1,197 @@
+# vSphere Infra Provider
+
+Use the [vSphere Infrastructure Provider](https://github.com/kubernetes-sigs/cluster-api-provider-vsphere) to create a fully functional Kubernetes cluster on **vSphere** with the [Kamaji Control Plane Provider](https://github.com/clastix/cluster-api-control-plane-provider-kamaji).
+
+!!! info "Control Plane and Infrastructure Decoupling"
+    Kamaji decouples the Control Plane from the infrastructure, so the Kamaji Management Cluster hosting the Tenant Control Plane does not need to be on the same vSphere as the worker machines. As long as network reachability is satisfied, you can have your Kamaji Management Cluster on a different vSphere or even on a different cloud provider.
+
+## vSphere Requirements
+
+You need access to a **vSphere** environment that meets the following requirements:
+
+- The vSphere environment should be configured with a DHCP service in the primary VM network for your tenant clusters. Alternatively, you can use an [IPAM Provider](https://github.com/kubernetes-sigs/cluster-api-ipam-provider-in-cluster).
+
+- Configure one Resource Pool across the hosts onto which the tenant clusters will be provisioned. Every host in the Resource Pool needs access to shared storage.
+
+- A Template VM based on the published [OVA images](https://github.com/kubernetes-sigs/cluster-api-provider-vsphere). For production-like environments, it is highly recommended to build and use your own custom OVA images. Take a look at the [image-builder](https://github.com/kubernetes-sigs/image-builder) project.
+
+- To use the vSphere Container Storage Interface (CSI), your vSphere cluster needs support for Cloud Native Storage (CNS). CNS relies on a shared datastore. Ensure that your vSphere environment is properly configured to support CNS.
+
+## Install the vSphere Infrastructure Provider
+
+In order to use the vSphere Cluster API provider, you must be able to connect and authenticate to a **vCenter**. Ensure you have credentials for your vCenter server:
+
+```bash
+export VSPHERE_USERNAME="admin@vsphere.local"
+export VSPHERE_PASSWORD="*******"
+```
+
+Install the vSphere Infrastructure Provider:
+
+```bash
+clusterctl init --infrastructure vsphere
+```
+
+## Install the IPAM Provider
+
+If you intend to use IPAM to assign addresses to the nodes, you can use the in-cluster [IPAM provider](https://github.com/kubernetes-sigs/cluster-api-ipam-provider-in-cluster) instead of relying on a DHCP service. To do so, initialize the Management Cluster with the `--ipam in-cluster` flag:
+
+```bash
+clusterctl init --ipam in-cluster
+```
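+
+With the IPAM provider installed, node addresses are claimed from an IP pool resource. The following is an illustrative sketch of such a pool, assuming the `ipam.cluster.x-k8s.io/v1alpha2` API of the in-cluster IPAM provider and values matching the `NODE_IPAM_POOL_*` variables set later in this guide:
+
+```bash
+# Illustrative: create an IP pool for tenant cluster nodes, assuming
+# the ipam.cluster.x-k8s.io/v1alpha2 API of the in-cluster IPAM provider.
+kubectl apply -f - <<EOF
+apiVersion: ipam.cluster.x-k8s.io/v1alpha2
+kind: InClusterIPPool
+metadata:
+  name: ipam-ip-pool
+  namespace: default
+spec:
+  # Ranges and/or CIDRs from which node addresses are allocated.
+  addresses:
+    - 10.9.62.100-10.9.62.200
+  prefix: 24
+  gateway: 10.9.62.1
+EOF
+```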
+
+## Create a Tenant Cluster
+
+Once all the controllers are up and running in the Management Cluster, you can generate and apply the cluster manifests of the tenant cluster you want to provision.
+
+### Generate the Cluster Manifest using the template
+
+Using `clusterctl`, you can generate a tenant cluster manifest for your vSphere environment. Set the environment variables to match your vSphere configuration:
+
+```bash
+# vSphere Configuration
+export VSPHERE_USERNAME="admin@vsphere.local"
+export VSPHERE_PASSWORD="changeme"
+export VSPHERE_SERVER="vcenter.vsphere.local"
+export VSPHERE_DATACENTER="SDDC-Datacenter"
+export VSPHERE_DATASTORE="DefaultDatastore"
+export VSPHERE_NETWORK="VM Network"
+export VSPHERE_RESOURCE_POOL="*/Resources"
+export VSPHERE_FOLDER="kamaji-capi-pool"
+export VSPHERE_TEMPLATE="ubuntu-2404-kube-v1.31.0"
+export VSPHERE_TLS_THUMBPRINT="..."
+export VSPHERE_STORAGE_POLICY=""
+export KUBERNETES_VERSION="v1.31.0"
+export CPI_IMAGE_K8S_VERSION="v1.31.0"
+export CSI_INSECURE="1"
+export VSPHERE_SSH_USER="clastix"
+export VSPHERE_SSH_AUTHORIZED_KEY="ssh-rsa AAAAB3N..."
+```
+
+If you intend to use IPAM, set the environment variables to match your IPAM configuration:
+
+```bash
+# IPAM Configuration
+export NODE_IPAM_POOL_API_GROUP="ipam.cluster.x-k8s.io"
+export NODE_IPAM_POOL_KIND="InClusterIPPool"
+export NODE_IPAM_POOL_NAME="ipam-ip-pool"
+export NODE_IPAM_POOL_RANGE="10.9.62.100-10.9.62.200"
+export NODE_IPAM_POOL_PREFIX="24"
+export NODE_IPAM_POOL_GATEWAY="10.9.62.1"
+```
+
+Set the environment variables to match your cluster configuration:
+
+```bash
+# Cluster Configuration
+export CLUSTER_NAME="sample"
+export CLUSTER_NAMESPACE="default"
+export POD_CIDR="10.36.0.0/16"
+export SVC_CIDR="10.96.0.0/16"
+export CONTROL_PLANE_REPLICAS=2
+export CONTROL_PLANE_ENDPOINT_IP="10.9.62.30"
+export KUBERNETES_VERSION="v1.31.0"
+export CPI_IMAGE_K8S_VERSION="v1.31.0"
+export CSI_INSECURE="1"
+export NODE_DISK_SIZE=25
+export NODE_MEMORY_SIZE=8192
+export NODE_CPU_COUNT=2
+export MACHINE_DEPLOY_REPLICAS=3
+export NAMESERVER="8.8.8.8"
+```
+
+The following command will generate a cluster manifest based on the [`capi-kamaji-vsphere-template.yaml`](https://raw.githubusercontent.com/clastix/cluster-api-control-plane-provider-kamaji/master/templates/capi-kamaji-vsphere-template.yaml) template file:
+
+```bash
+clusterctl generate cluster ${CLUSTER_NAME} \
+    --from capi-kamaji-vsphere-template.yaml > capi-kamaji-vsphere-cluster.yaml
+```
+
+If you want to use DHCP instead of IPAM, use a different template file:
+
+```bash
+clusterctl generate cluster ${CLUSTER_NAME} \
+    --from capi-kamaji-vsphere-dhcp-template.yaml > capi-kamaji-vsphere-cluster.yaml
+```
+
+### Apply the Cluster Manifest
+
+Apply the generated cluster manifest to create the tenant cluster:
+
+```bash
+kubectl apply -f capi-kamaji-vsphere-cluster.yaml
+```
+
+You can check the status of the cluster deployment with `clusterctl`:
+
+```bash
+clusterctl describe cluster sample
+
+NAME                                             READY  SEVERITY  REASON  SINCE  MESSAGE
+Cluster/sample                                   True                     33m
+├─ClusterInfrastructure - VSphereCluster/sample  True                     34m
+├─ControlPlane - KamajiControlPlane/sample       True                     34m
+└─Workers
+  └─MachineDeployment/sample-md-0                True                     80s
+    └─3 Machines...                              True                     32m    See ...
+```
+
+A new tenant cluster named `sample` is created with a Tenant Control Plane and three worker nodes. You can check the status of the tenant cluster with `kubectl`:
+
+```bash
+kubectl get clusters -n default
+```
+
+and the related Tenant Control Plane created in the Kamaji Management Cluster:
+
+```bash
+kubectl get tcp -n default
+```
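+
+Worker nodes are managed through a Cluster API `MachineDeployment`, so you can scale the worker pool at any time. A minimal sketch, assuming the `sample-md-0` name shown in the `clusterctl describe` output above:
+
+```bash
+# Illustrative: scale the tenant cluster's worker pool from 3 to 5 nodes.
+kubectl scale machinedeployment sample-md-0 --replicas=5 -n default
+```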
+
+## Install the Tenant Cluster as Helm Release
+
+Another option for creating a Tenant Cluster is to use the Helm chart:
+
+```bash
+helm repo add clastix https://clastix.github.io/cluster-api-kamaji-vsphere
+helm repo update
+helm install sample clastix/cluster-api-kamaji-vsphere \
+    --set cluster.name=sample \
+    --namespace default \
+    --values my-values.yaml
+```
+
+## Access the Tenant Cluster
+
+To access the tenant cluster, you can extract the `kubeconfig` file from the Kamaji Management Cluster:
+
+```bash
+kubectl get secret sample-kubeconfig \
+    -o jsonpath='{.data.value}' | base64 -d > ~/.kube/sample.kubeconfig
+```
+
+and use it to access the tenant cluster:
+
+```bash
+export KUBECONFIG=~/.kube/sample.kubeconfig
+kubectl cluster-info
+```
+
+## Cloud Controller Manager
+
+The template file `capi-kamaji-vsphere-template.yaml` includes the external [Cloud Controller Manager (CCM)](https://github.com/kubernetes/cloud-provider-vsphere) configuration for vSphere. The CCM is a Kubernetes controller that manages the cloud provider's resources. Usually, the CCM is deployed on control plane nodes, but since Kamaji hosts the Tenant Control Plane as pods in the Management Cluster, the tenant cluster has no control plane nodes: the CCM is therefore deployed on the worker nodes as a DaemonSet.
+
+## vSphere CSI Driver
+
+The template file `capi-kamaji-vsphere-template.yaml` includes the [vSphere CSI Driver](https://github.com/kubernetes-sigs/vsphere-csi-driver) configuration for vSphere. The vSphere CSI Driver is a Container Storage Interface (CSI) driver that provides a way to use vSphere storage with Kubernetes. The template file also includes a default storage class for the vSphere CSI Driver.
+
+## Delete the Tenant Cluster
+
+For cluster deletion, use the following command:
+
+```bash
+kubectl delete cluster sample
+```
+
+!!! warning "Orphan Resources"
+    Do NOT use `kubectl delete -f capi-kamaji-vsphere-cluster.yaml`, as that can result in orphaned resources. Always use `kubectl delete cluster sample` to delete the tenant cluster.
\ No newline at end of file
diff --git a/docs/content/guides/cluster-api.md b/docs/content/guides/cluster-api.md
deleted file mode 100644
index 79727a93..00000000
--- a/docs/content/guides/cluster-api.md
+++ /dev/null
@@ -1,6 +0,0 @@
-# Cluster APIs Support
-
-The [Cluster API](https://github.com/kubernetes-sigs/cluster-api) brings declarative, Kubernetes-style APIs to creation of Kubernetes clusters, including configuration and management.
-
-Kamaji offers seamless integration with the most popular Cluster API Infrastructure Providers. Check the currently supported providers and the roadmap on the related [reposistory](https://github.com/clastix/cluster-api-control-plane-provider-kamaji).
-
diff --git a/docs/mkdocs.yml b/docs/mkdocs.yml
index 42aca780..8fa7e2c9 100644
--- a/docs/mkdocs.yml
+++ b/docs/mkdocs.yml
@@ -9,7 +9,7 @@ site_author: bsctl
 site_description: >-
   Kamaji deploys and operates Kubernetes Control Plane at scale with a fraction of the operational burden.
-copyright: Copyright © 2020 - 2023 Clastix Labs +copyright: Copyright © 2020 - 2025 Clastix Labs theme: name: material @@ -60,6 +60,14 @@ nav: - getting-started/getting-started.md - getting-started/kind.md - 'Concepts': concepts.md +- 'Cluster API': + - cluster-api/index.md + - cluster-api/control-plane-provider.md + - cluster-api/kubevirt-infra-provider.md + - cluster-api/vsphere-infra-provider.md + - cluster-api/other-providers.md + - cluster-api/cluster-class.md + - cluster-api/cluster-autoscaler.md - 'Guides': - guides/index.md - guides/kamaji-azure-deployment.md @@ -70,7 +78,6 @@ nav: - guides/datastore-migration.md - guides/backup-and-restore.md - guides/certs-lifecycle.md - - guides/cluster-api.md - guides/console.md - 'Use Cases': use-cases.md - 'Reference':