<!-- END doctoc generated TOC please keep comment here to allow auto update -->
## Summary
As a Service Provider/Consumer, we use a management cluster to provision and manage the lifecycle of Kubernetes clusters using the Kubernetes Cluster API (CAPI).
Two distinct paradigms coexist to address different operational and security requirements.
### Paradigm 1: Isolated Cluster Management
Each Kubernetes cluster operates its own suite of CAPI controllers that target specific namespaces, acting as a hidden implementation engine.
This paradigm avoids using webhooks and prioritizes isolation and granularity.

- **Granular Lifecycle Management**: Independent versioning and upgrades for each cluster's CAPI components.
- **Logging and Metrics**: Per-cluster logging, forwarding, and metric collection.
- **Resource Isolation**: Defined resource budgets for CPU, memory, and storage on a per-cluster basis.
- **Security Requirements**:
  - **Network Policies**: Per-cluster isolation using tailored policies.
  - **Cloud Provider Credentials**: Each cluster uses its own set of isolated credentials.
  - **Kubeconfig Access**: Dedicated access controls for kubeconfig per cluster.
Enabling the existing command-line option `--namespace=<ns1, …>` to be specified multiple times is proposed in PR [#11397](https://github.com/kubernetes-sigs/cluster-api/pull/11397).
---
### Paradigm 2: Centralized Cluster Management
78
+
This paradigm manages multiple Kubernetes clusters using a shared, centralized suite of CAPI controllers. It is designed for scenarios with less stringent isolation requirements.
**Characteristics**:
81
+
- Operates under simplified constraints compared to [Paradigm 1](#paradigm-1-isolated-cluster-management).
82
+
- Reduces management overhead through centralization.
83
+
- Prioritizes ease of use and scalability over strict isolation.
84
+
85
+
The addition of the new command-line option `--excluded-namespace=<ns1, …>` is proposed in PR [#11370](https://github.com/kubernetes-sigs/cluster-api/pull/11370).
---
### Challenge: Coexistence of Both Paradigms
To enable [Paradigm 1](#paradigm-1-isolated-cluster-management) and [Paradigm 2](#paradigm-2-centralized-cluster-management) to coexist within the same management cluster, the following is required:
- **Scope Restriction**: Paradigm 2 must have the ability to restrict its scope to avoid interference with resources owned by Paradigm 1.
- **Resource Segregation**: Paradigm 2 must be unaware of CAPI resources managed by [Paradigm 1](#paradigm-1-isolated-cluster-management) to prevent cross-contamination and conflicts.
This coexistence strategy ensures both paradigms can fulfill their respective use cases without compromising operational integrity.
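The split between the two paradigms can be sketched as the container `args` of the respective controller deployments; a minimal sketch with illustrative namespace names, not the exact manifests:

```yaml
# Paradigm 1 instance (e.g. deployed in capi1-system): watches only
# its assigned namespaces via repeated --namespace flags (PR #11397)
args:
  - --namespace=ns1.1
  - --namespace=ns1.2
---
# Paradigm 2 / last-resort instance: watches all other namespaces by
# excluding the ones claimed above via --excluded-namespace (PR #11370)
args:
  - --excluded-namespace=ns1.1
  - --excluded-namespace=ns1.2
```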
## Motivation
In a multi-tenant environment, a cluster serves as a provisioner with different CAPI providers. Using CAPI here requires careful consideration of namespace isolation
to maintain security and operational boundaries between tenants. In such setups, it is essential to configure the CAPI controller instances
to either watch or exclude specific groups of namespaces based on the isolation requirements.
This can be achieved by setting up namespace-scoped controllers or applying filters, such as label selectors, to define the namespaces each instance should monitor.
By doing so, administrators can ensure that the activities of one tenant do not interfere with others, while also reducing the resource overhead by limiting the scope of CAPI operations.
This approach enhances scalability, security, and manageability, making it well-suited for environments with strict multi-tenancy requirements.
Our motivation is to have a provisioning cluster that also serves as a provisioned cluster, leveraging a hierarchical structure of clusters.
Two namespaces are used by the management cluster, while the remaining namespaces are monitored by the CAPI manager to oversee other managed clusters.
There are some restrictions when using multiple providers; see https://cluster-api.sigs.k8s.io/developer/core/support-multiple-instances.
But we need to:
1. extend the existing feature to limit watching to specified namespaces.
2. add a new feature to watch all namespaces except selected ones.
3. run multiple CAPI controller instances:
- each watching only specified namespaces: `capi1-system`, …, `capi$(N-1)-system`
- and a last-resort instance to watch the remaining namespaces, excluding those already watched by the previously mentioned instances
### Non-Goals/Future Work
Non-goals:
* it's not necessary to work with different versions of the CRDs; we intend to:
* use the same version of CAPI (the same container image):
* share the same CRDs
* the contract and RBAC need to be solved per specific provider (AWS, Azure, ...)
## Proposal
We are proposing to:
* enable selecting multiple namespaces: extend `--namespace=<ns>` to `--namespace=<ns1, …>` to watch the selected namespaces
* the code change involves extending an existing hash to accommodate multiple items.
* This change is only a small and straightforward update of the existing feature to limit watching to specified namespaces. The maintenance complexity shouldn't increase here.
* add the new command-line option `--excluded-namespace=<ns1, …>` to define the list of excluded namespaces
* the code change only sets the option `Cache.Options.DefaultFieldSelector` to exclude objects matching any of the specified namespaces' names
* the maintenance complexity shouldn't increase much here
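The exclusion mechanism can be sketched as the field-selector expression the option translates into: one `metadata.namespace!=<ns>` term per excluded namespace, AND-ed together. The helper below is illustrative (the function name `buildExcludedSelector` is hypothetical); the actual change sets the equivalent selector programmatically via controller-runtime's `Cache.Options.DefaultFieldSelector` rather than a raw string.

```go
package main

import (
	"fmt"
	"strings"
)

// buildExcludedSelector returns the field-selector expression that
// excludes every namespace in the given list. Terms joined by a comma
// are AND-ed by the Kubernetes API server.
func buildExcludedSelector(excluded []string) string {
	terms := make([]string, 0, len(excluded))
	for _, ns := range excluded {
		terms = append(terms, "metadata.namespace!="+ns)
	}
	return strings.Join(terms, ",")
}

func main() {
	fmt.Println(buildExcludedSelector([]string{"capi1-system", "kube-system"}))
	// prints: metadata.namespace!=capi1-system,metadata.namespace!=kube-system
}
```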
Our objectives include:
- Each CAPI instance:
  - runs in a separate namespace and uses its own service account
  - can specify namespaces through command-line arguments:
    - to watch, e.g. `--namespace <ns1> --namespace <ns2>`
    - to exclude from watching, e.g. `--excluded-namespace <ns1> --excluded-namespace <ns2>`
- We do not support multiple versions of CAPI.
- All running CAPI instances:
  - are using the same container image (the same version of CAPI)
    - NOTE: the webhooks defined in the CRDs point to the first instance only
- Cluster roles and access management:
  - the default CAPI deployment defines a global cluster role:
    - the `ClusterRole/capi-aggregated-manager-role`
    - the `ClusterRoleBinding/capi-manager-rolebinding` to bind the service account `<instance-namespace>:capi-manager` of each CAPI instance (e.g. `capi1-system:capi-manager`) to the `ClusterRole`
  - in the case of [Paradigm 1](#paradigm-1-isolated-cluster-management), we can define a cluster role per instance and grant access only to the namespaces that will be watched by the given instance
  - in the case of [Paradigm 2](#paradigm-2-centralized-cluster-management), we need access to all namespaces, as defined in the default CAPI deployment cluster role.
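For the per-instance grants, one option is a `RoleBinding` created in each namespace the instance watches; a sketch with illustrative names (`ns1.1`, `capi1-system`), reusing the role from the default deployment:

```yaml
# Paradigm 1 sketch: grant the instance's service account access only
# in a namespace it watches - one such binding per watched namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: capi1-manager-rolebinding   # illustrative name
  namespace: ns1.1                  # a namespace watched by instance 1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: capi-aggregated-manager-role
subjects:
  - kind: ServiceAccount
    name: capi-manager
    namespace: capi1-system
```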
### A deployment example
Let's consider an example of how to deploy multiple instances for [Paradigm 1+2](#challenge-coexistence-of-both-paradigms):
#### Global resources:
* CRDs (`*.cluster.x-k8s.io`) - the webhooks will point to the first instance, e.g.:
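A sketch of what such a conversion-webhook reference on a shared CRD could look like; the service name `capi-webhook-service` and namespace `capi1-system` are illustrative assumptions:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: clusters.cluster.x-k8s.io
spec:
  conversion:
    strategy: Webhook
    webhook:
      conversionReviewVersions: ["v1", "v1beta1"]
      clientConfig:
        service:
          # the CRDs are shared by all instances, so the conversion
          # webhook can target only one of them - the first instance
          name: capi-webhook-service
          namespace: capi1-system
          path: /convert
```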
### User Stories
We need to deploy multiple CAPI instances in the same cluster and divide the list of namespaces: certain well-known namespaces are assigned to be watched by the given instances, and one instance is defined to watch the rest of them.
E.g.:
* instance1 (deployed in `capi1-system`) is watching `ns1.1`, `ns1.2`, ... `ns1.n1`
* instance2 (deployed in `capi2-system`) is watching `ns2.1`, `ns2.2`, ... `ns2.n2`
* ...
* the last-resort instance (deployed in `capiR-system`) is watching the rest of the namespaces
#### Story 1 - Isolated Cluster Management
We need to limit the list of namespaces to watch. This is possible today, but only for a single namespace; we need one instance to watch multiple namespaces.
#### Story 2 - Centralized Cluster Management
We need to exclude a list of namespaces from watching to reduce management overhead through centralization.
#### Story 3 - Hierarchical deployment using CAPI
A provisioning cluster that is also a provisioned cluster at the same time, using a hierarchical structure of clusters.
Two namespaces are used by the management cluster, while the remaining namespaces are watched by the CAPI manager to oversee other managed clusters.