# vSphere Infrastructure Provider

Use the [vSphere Infrastructure Provider](https://github.com/kubernetes-sigs/cluster-api-provider-vsphere) to create a fully functional Kubernetes cluster on **vSphere** using the [Kamaji Control Plane Provider](https://github.com/clastix/cluster-api-control-plane-provider-kamaji).

!!! info "Virtual Machines Placement"
    Since Kamaji decouples the Control Plane from the infrastructure, the Kamaji Management Cluster hosting the Tenant Control Plane does not need to run on the same vSphere environment where the worker machines are placed. As long as network reachability is satisfied, you can keep your Kamaji Management Cluster on a different vSphere instance or even on a different cloud provider.

## vSphere Requirements

You need access to a **vSphere** environment that meets the following requirements (a quick verification sketch follows the list):

- The vSphere environment should be configured with a DHCP service in the primary VM network for your tenant clusters. Alternatively, you can use an [IPAM Provider](https://github.com/kubernetes-sigs/cluster-api-ipam-provider-in-cluster).

- Configure one Resource Pool across the hosts onto which the tenant clusters will be provisioned. Every host in the Resource Pool needs access to shared storage.

- A Template VM based on the published [OVA images](https://github.com/kubernetes-sigs/cluster-api-provider-vsphere). For production-like environments, it is highly recommended to build and use your own custom OVA images. Take a look at the [image-builder](https://github.com/kubernetes-sigs/image-builder) project.

- To use the vSphere Container Storage Interface (CSI), your vSphere cluster needs support for Cloud Native Storage (CNS). CNS relies on a shared datastore. Ensure that your vSphere environment is properly configured to support CNS.
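
You can check these prerequisites from the command line before installing the provider. The following is only a sketch based on [govc](https://github.com/vmware/govmomi); the datacenter, network, and template names are the example values used later in this guide, so adjust them to your environment.

```bash
# Assumes GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD are already exported.
govc about                              # verify connectivity and credentials to vCenter
govc ls /SDDC-Datacenter/datastore      # list the datastores visible in the datacenter
govc ls /SDDC-Datacenter/network        # confirm the VM network exists
govc vm.info ubuntu-2404-kube-v1.31.0   # confirm the template VM has been imported
```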

## Install the vSphere Infrastructure Provider

In order to use the vSphere Cluster API provider, you must be able to connect and authenticate to a **vCenter** server. Ensure you have credentials for your vCenter:

```bash
export VSPHERE_USERNAME="[email protected]"
export VSPHERE_PASSWORD="*******"
```

Install the vSphere Infrastructure Provider:

```bash
clusterctl init --infrastructure vsphere
```
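
You can verify that the provider controllers came up before moving on. This is a sketch assuming the default namespaces created by `clusterctl`:

```bash
# Cluster API core controllers and the vSphere provider (CAPV), respectively.
kubectl get pods -n capi-system
kubectl get pods -n capv-system
```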

## Install the IPAM Provider

If you intend to use IPAM to assign addresses to the nodes, you can use the in-cluster [IPAM provider](https://github.com/kubernetes-sigs/cluster-api-ipam-provider-in-cluster) instead of relying on a DHCP service. To do so, initialize the Management Cluster with the `--ipam in-cluster` flag:

```bash
clusterctl init --ipam in-cluster
```
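
As a quick sanity check (a sketch, not part of the original procedure), you can confirm that the pool CRD served by the in-cluster IPAM provider is now registered:

```bash
# The in-cluster IPAM provider registers the InClusterIPPool CRD.
kubectl get crd inclusterippools.ipam.cluster.x-k8s.io
```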

## Create a Tenant Cluster

Once all the controllers are up and running in the management cluster, you can apply the cluster manifests containing the specifications of the tenant cluster you want to provision.

### Generate the Cluster Manifest Using the Template

Using `clusterctl`, you can generate a tenant cluster manifest for your vSphere environment. Set the environment variables to match your vSphere configuration:

```bash
# vSphere Configuration
export VSPHERE_USERNAME="[email protected]"
export VSPHERE_PASSWORD="changeme"
export VSPHERE_SERVER="vcenter.vsphere.local"
export VSPHERE_DATACENTER="SDDC-Datacenter"
export VSPHERE_DATASTORE="DefaultDatastore"
export VSPHERE_NETWORK="VM Network"
export VSPHERE_RESOURCE_POOL="*/Resources"
export VSPHERE_FOLDER="kamaji-capi-pool"
export VSPHERE_TEMPLATE="ubuntu-2404-kube-v1.31.0"
export VSPHERE_TLS_THUMBPRINT="..."
export VSPHERE_STORAGE_POLICY=""
export KUBERNETES_VERSION="v1.31.0"
export CPI_IMAGE_K8S_VERSION="v1.31.0"
export CSI_INSECURE="1"
export VSPHERE_SSH_USER="clastix"
export VSPHERE_SSH_AUTHORIZED_KEY="ssh-rsa AAAAB3N..."
```

If you intend to use IPAM, set the environment variables to match your IPAM configuration:

```bash
# IPAM Configuration
export NODE_IPAM_POOL_API_GROUP="ipam.cluster.x-k8s.io"
export NODE_IPAM_POOL_KIND="InClusterIPPool"
export NODE_IPAM_POOL_NAME="ipam-ip-pool"
export NODE_IPAM_POOL_RANGE="10.9.62.100-10.9.62.200"
export NODE_IPAM_POOL_PREFIX="24"
export NODE_IPAM_POOL_GATEWAY="10.9.62.1"
```
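
For reference, these variables describe an `InClusterIPPool` object. The sketch below shows roughly what such a pool looks like (field names follow the provider's `v1alpha2` API); whether the template creates the pool for you or expects an existing one depends on the template itself, so only apply it manually if needed.

```bash
# Sketch only: an InClusterIPPool matching the NODE_IPAM_POOL_* values above.
cat <<EOF | kubectl apply -f -
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: ${NODE_IPAM_POOL_NAME}
  namespace: default
spec:
  addresses:
    - ${NODE_IPAM_POOL_RANGE}
  prefix: ${NODE_IPAM_POOL_PREFIX}
  gateway: ${NODE_IPAM_POOL_GATEWAY}
EOF
```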

Set the environment variables to match your cluster configuration:

```bash
# Cluster Configuration
export CLUSTER_NAME="sample"
export CLUSTER_NAMESPACE="default"
export POD_CIDR="10.36.0.0/16"
export SVC_CIDR="10.96.0.0/16"
export CONTROL_PLANE_REPLICAS=2
export CONTROL_PLANE_ENDPOINT_IP="10.9.62.30"
export KUBERNETES_VERSION="v1.31.0"
export CPI_IMAGE_K8S_VERSION="v1.31.0"
export CSI_INSECURE="1"
export NODE_DISK_SIZE=25
export NODE_MEMORY_SIZE=8192
export NODE_CPU_COUNT=2
export MACHINE_DEPLOY_REPLICAS=3
export NAMESERVER="8.8.8.8"
```

The following command will generate a cluster manifest based on the [`capi-kamaji-vsphere-template.yaml`](https://raw.githubusercontent.com/clastix/cluster-api-control-plane-provider-kamaji/main/templates/capi-kamaji-vsphere-template.yaml) template file:

```bash
clusterctl generate cluster ${CLUSTER_NAME} --from capi-kamaji-vsphere-template.yaml > capi-kamaji-vsphere-cluster.yaml
```
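
Before applying it, you may want to skim the generated manifest, for example to see which object kinds it contains:

```bash
# List the Kubernetes object kinds rendered into the generated manifest.
grep '^kind:' capi-kamaji-vsphere-cluster.yaml | sort | uniq -c
```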

### Apply the Cluster Manifest

Apply the generated cluster manifest to create the tenant cluster:

```bash
kubectl apply -f capi-kamaji-vsphere-cluster.yaml
```
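
While the machines are being provisioned, you can follow the progress from the management cluster. A minimal sketch using the standard Cluster API and CAPV resources:

```bash
# Watch the Cluster API machines and the corresponding vSphere VMs come up.
kubectl get machines -n default -w
kubectl get vspheremachines -n default
```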

You can check the status of the cluster deployment with `clusterctl`:

```bash
clusterctl describe cluster sample

NAME                                              READY  SEVERITY  REASON  SINCE  MESSAGE
Cluster/sample                                    True                     33m
├─ClusterInfrastructure - VSphereCluster/sample   True                     34m
├─ControlPlane - KamajiControlPlane/sample        True                     34m
└─Workers
  └─MachineDeployment/sample-md-0                 True                     80s
    └─3 Machines...                               True                     32m    See ...
```

A new tenant cluster named `sample` is created with a Tenant Control Plane and three worker nodes. You can check the status of the tenant cluster with `kubectl`:

```bash
kubectl get clusters -n default
```

and the related Tenant Control Plane created in the Kamaji Management Cluster:

```bash
kubectl get tcp -n default
```

## Access the Tenant Cluster

To access the tenant cluster, you can extract the `kubeconfig` file from the Kamaji Management Cluster:

```bash
kubectl get secret sample-kubeconfig -o jsonpath='{.data.value}' | base64 -d > ~/.kube/sample.kubeconfig
```

and use it to access the tenant cluster:

```bash
export KUBECONFIG=~/.kube/sample.kubeconfig
kubectl cluster-info
```
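
Once connected, a quick check that all worker nodes have registered and become Ready:

```bash
# Expect the three worker nodes created by the MachineDeployment.
kubectl get nodes -o wide
```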

## Cloud Controller Manager

The template file `capi-kamaji-vsphere-template.yaml` includes the configuration for the external [Cloud Controller Manager (CCM)](https://github.com/kubernetes/cloud-provider-vsphere) for vSphere. The CCM is the Kubernetes controller responsible for creating and managing the cloud provider's resources, such as Load Balancers and Persistent Volumes.
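
To verify the CCM is running in the tenant cluster, a minimal sketch (assuming the manifest deploys it into `kube-system`, as the upstream vSphere cloud provider manifests do):

```bash
# Run against the tenant cluster kubeconfig extracted earlier.
kubectl --kubeconfig ~/.kube/sample.kubeconfig get pods -n kube-system | grep -i cloud-controller
```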

## Delete the Tenant Cluster

For cluster deletion, use the following command:

```bash
kubectl delete cluster sample
```

!!! warning "Orphan Resources"
    Do NOT use `kubectl delete -f capi-kamaji-vsphere-cluster.yaml` as that can result in orphan resources. Always use `kubectl delete cluster sample` to delete the tenant cluster.
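
After deletion completes, you can confirm on the management cluster side that nothing was left behind; a quick sketch:

```bash
# Run against the management cluster; both commands should return no resources.
kubectl get machines,vspheremachines -n default
kubectl get tcp -n default
```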