`docs/how-to/h-cluster-migration.md`
# Cluster migration using MirrorMaker2.0
## Overview
This how-to guide covers executing a cluster migration to a Charmed Kafka K8s deployment using MirrorMaker 2.0, running as a process on each of the Juju units in an active/passive setup. MirrorMaker acts as a consumer from an existing cluster and a producer to the Charmed Kafka K8s cluster. In parallel (one process on each unit), data and consumer offsets for all existing topics are synced one-way until both clusters are in sync, with all data replicated across both in real time.
## MirrorMaker2 overview
Under the hood, MirrorMaker uses Kafka Connect source connectors to replicate data:
- **MirrorSourceConnector** - replicates topics from an original cluster to a new cluster. It also replicates ACLs and is necessary for the MirrorCheckpointConnector to run
- **MirrorCheckpointConnector** - periodically tracks offsets. If enabled, it also synchronizes consumer group offsets between the original and new clusters
- **MirrorHeartbeatConnector** - periodically checks connectivity between the original and new clusters
Together, they are used for cluster-to-cluster replication of topics, consumer groups, topic configuration and ACLs, preserving partitioning and consumer offsets. For more detail on MirrorMaker internals, consult the [MirrorMaker README.md](https://github.com/apache/kafka/blob/trunk/connect/mirror/README.md) and the [MirrorMaker 2.0 KIP](https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0). In practice, it allows one to sync data one-way between two live Kafka clusters with minimal impact on the ongoing production service.
In short, MirrorMaker runs as a distributed service on the new cluster, and consumes all topics, groups and offsets from the still-active original cluster in production, before producing them one-way to the new cluster that may not yet be serving traffic to external clients. The original, in-production cluster is referred to as an ‘active’ cluster, and the new cluster still waiting to serve external clients is ‘passive’. The MirrorMaker service can be configured using much the same configuration as available for Kafka Connect.
## Pre-requisites
- An existing Kafka cluster to migrate from. The cluster needs to be reachable from/to Charmed Kafka K8s
- A bootstrapped Juju K8s cloud running Charmed Kafka K8s to migrate to
  - A tutorial on how to set up a Charmed Kafka deployment can be found as part of the [Charmed Kafka K8s Tutorial](/t/charmed-kafka-k8s-documentation-tutorial-overview/11945)
- The CLI tool `yq` - https://github.com/mikefarah/yq
  - `snap install yq --channel=v3/stable`
## Get cluster details and admin credentials
By design, the `kafka` charm will not expose any available connections until related to by a client. In this case, we deploy `data-integrator` charms and relate them to each `kafka` application, requesting `admin` level privileges:
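A sketch of this step, assuming the Charmed Kafka application is named `kafka-k8s` (the topic name passed to `data-integrator` is a hypothetical placeholder; it is the `admin` role that grants cluster-wide access):

```bash
# deploy a data-integrator and request admin-level privileges
juju deploy data-integrator --config topic-name=admin-topic --config extra-user-roles=admin
juju relate data-integrator kafka-k8s

# print the generated credentials and bootstrap-server addresses
juju run data-integrator/leader get-credentials
```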
To authenticate MirrorMaker to both clusters, it will need full `super.user` permissions on **BOTH** clusters. MirrorMaker supports every `security.protocol` supported by Apache Kafka. In this guide, we assume that the original cluster uses `SASL_PLAINTEXT` authentication; as such, the required information is as follows:
```bash
# comma-separated list of kafka server IPs and ports to connect to
OLD_SERVERS
# string of the sasl.jaas.config property used for authentication
OLD_SASL_JAAS_CONFIG
```
> **NOTE** - If using `SSL` or `SASL_SSL` authentication, review the configuration options supported by Kafka Connect in the [Apache Kafka documentation](https://kafka.apache.org/documentation/#connectconfigs)
## Generating `mm2.properties` file on the Charmed Kafka cluster

MirrorMaker takes a `.properties` file for its configuration to fine-tune behaviour. See below an example `mm2.properties` file that can be placed on each of the Charmed Kafka units using the above credentials:
```bash
# Aliases for each cluster, can be set to any unique alias
clusters = old,new

# Direction of the replication flow - one-way, from 'old' to 'new'
old->new.enabled = true

# Abridged example - substitute the connection details and credentials
# gathered above; security.protocol and sasl.mechanism settings are
# also needed for each cluster alias
old.bootstrap.servers=<OLD_SERVERS>
old.sasl.jaas.config=<OLD_SASL_JAAS_CONFIG>
new.bootstrap.servers=<NEW_SERVERS>
new.sasl.jaas.config=<NEW_SASL_JAAS_CONFIG>
```

Once saved locally (e.g. to `/tmp/mm2.properties`), the file can be placed on each unit with `cat /tmp/mm2.properties | juju ssh kafka-k8s/<id> sudo -i 'sudo tee -a /etc/kafka/mm2.properties'`
where `<id>` is the id of the Charmed Kafka unit.
## Starting a dedicated MirrorMaker cluster
It is strongly advised to run MirrorMaker services on the downstream cluster to avoid service impact due to resource use. Now that the properties are set on each unit of the new cluster, the MirrorMaker services can be started with JMX metrics exporters.
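A sketch of such a start command, assuming the stock Kafka `connect-mirror-maker.sh` script is available on each unit and that the JMX Prometheus exporter jar and its config are already in place (the paths and port are placeholders):

```bash
# attach the JMX Prometheus exporter as a Java agent, then start
# MirrorMaker with the properties file placed on the unit, limiting
# this node's connectors to the target 'new' cluster
KAFKA_OPTS="-javaagent:/path/to/jmx_prometheus_javaagent.jar=9099:/path/to/mm2-exporter.yaml" \
    connect-mirror-maker.sh /etc/kafka/mm2.properties --clusters new
```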
The migration process can be monitored using built-in Kafka bin-commands on the original cluster. In the Charmed Kafka cluster, these bin-commands are also mapped to snap commands on the units (e.g. `charmed-kafka.get-offsets` or `charmed-kafka.topics`).
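For example, a hypothetical spot-check of a replicated topic's offsets on a unit of the new cluster (the unit id, topic name and client configuration file are placeholders):

```bash
juju ssh kafka-k8s/0 'charmed-kafka.get-offsets \
    --bootstrap-server <new_servers> \
    --command-config <client_properties_file> \
    --topic <topic_name>'
```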
There is also a [range of different metrics](https://github.com/apache/kafka/blob/trunk/connect/mirror/README.md) made available by MirrorMaker during the migration.
## Switching client traffic from original cluster to Charmed Kafka cluster
Once happy that all the necessary data has successfully migrated, stop all active consumer applications on the original cluster, and redirect them to the Charmed Kafka cluster, making sure to use the Charmed Kafka cluster server addresses and authentication. After doing so, they will re-join their original consumer groups at the last committed offsets they had originally, and continue consuming as normal.
Finally, the producer client applications can be stopped, updated with the Charmed Kafka cluster server addresses and authentication, and restarted, with any newly produced messages being received by the migrated consumer client applications, completing the migration of both the data and the client applications.
## Stopping MirrorMaker replication
Once confident in the successful completion of the data and client migration, the running processes on each of the charm units can be killed, stopping the MirrorMaker processes active on the Charmed Kafka cluster.
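For example, assuming the services were started with the `connect-mirror-maker.sh` script as sketched above:

```bash
# repeat for every unit id running a MirrorMaker process
juju ssh kafka-k8s/<id> 'pkill -f connect-mirror-maker'
```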
`docs/how-to/h-enable-encryption.md`
# How to enable encryption
To enable encryption, you should first deploy a TLS certificates Provider charm.
## Deploy a TLS Provider charm
The Kafka K8s and ZooKeeper K8s charms implement the Requirer side of the [`tls-certificates/v1`](https://github.com/canonical/charm-relation-interfaces/blob/main/interfaces/tls_certificates/v1/README.md) charm relation.
Therefore, any charm implementing the Provider side could be used.
One possible option, suitable for testing, is the `self-signed-certificates` charm; however, this setup is not recommended for production clusters.
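For example, a minimal sketch for a test deployment, assuming the applications are named `kafka-k8s` and `zookeeper-k8s`:

```bash
# deploy a self-signed TLS certificates provider
juju deploy self-signed-certificates

# relate it to both applications to enable encryption
juju relate self-signed-certificates zookeeper-k8s
juju relate self-signed-certificates kafka-k8s
```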
These steps assume that Charmed Kafka and the COS stack are deployed in two different models, e.g. `<kafka_model_name>` and `<cos_model_name>`.
After this is complete, the monitoring COS stack should be up and running and ready to be used.
### Connect Grafana web interface
To connect to the Grafana web interface, follow the [Browse dashboards](https://charmhub.io/topics/canonical-observability-stack/tutorials/install-microk8s?_ga=2.201254254.1948444620.1704703837-757109492.1701777558#heading--browse-dashboards) section of the MicroK8s "Getting started" guide.
```shell
juju run grafana/leader get-admin-password --model <k8s_cos_controller>:<cos_model_name>
```
## Tune server logging level
To tune the level of the server logs for Kafka and ZooKeeper, configure the `log-level` and `log_level` properties accordingly.
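For example (a sketch; the application names, and which option spelling belongs to which charm, are assumptions to be checked against each charm's configuration listing):

```bash
juju config kafka-k8s log_level=DEBUG
juju config zookeeper-k8s log-level=DEBUG
```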
`docs/how-to/h-integrate-alerts-dashboards.md`
# Integrate custom alerting rules and dashboards
This guide shows you how to integrate an existing set of rules and/or dashboards to your Charmed Kafka and Charmed ZooKeeper deployment to be consumed with the [Canonical Observability Stack (COS)](https://charmhub.io/topics/canonical-observability-stack).

To do so, we will sync resources stored in a git repository to COS Lite.
## Prerequisites
Deploy the `cos-lite` bundle in a Kubernetes environment and integrate Charmed Kafka and Charmed ZooKeeper to the COS offers, as shown in the [How to Enable Monitoring](/t/charmed-kafka-k8s-how-to-enable-monitoring/10291) guide.
This guide will refer to the models that charms are deployed into as:
* `<cos-model>` for the model containing observability charms (and deployed on K8s)
* `<apps-model>` for the model containing Charmed Kafka and Charmed ZooKeeper
* `<apps-model>` for other optional charms (e.g. TLS-certificates operators, `grafana-agent`, `data-integrator`, etc.).
## Create a repository with a custom monitoring setup
Create an empty git repository, or use an existing one, and save your alert rules and dashboard models under the `<path_to_prom_rules>`, `<path_to_loki_rules>` and `<path_to_models>` folders.
If you want a primer on rule writing, refer to the [Prometheus documentation](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
You may also find an example in the [kafka-test-app repository](https://github.com/canonical/kafka-test-app).
Then, push your changes to the remote repository.
## Deploy the COS configuration charm
Deploy the [COS configuration](https://charmhub.io/cos-configuration-k8s) charm in the `<cos-model>` model:
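A deployment sketch, assuming the repository created in the previous step (the URL and branch values are placeholders):

```bash
juju deploy cos-configuration-k8s cos-config \
  --config git_repo=<repository_url> \
  --config git_branch=<branch>
```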
[Note]
Adding, updating or deleting an alert rule or a dashboard in the repository will be reflected in COS once `cos-config` pulls the latest changes.
You need to manually refresh `cos-config`'s local repository with the *sync-now* action if you do not want to wait for the next [update-status event](/t/event-update-status/6484) to pull the latest changes.
[/Note]
## Forward the rules and dashboards
The path to the resource folders can be set after deployment:
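For example, a sketch using the folder placeholders from above (the option names are assumptions based on the `cos-configuration-k8s` charm's configuration):

```bash
juju config cos-config \
  prometheus_alert_rules_path=<path_to_prom_rules> \
  loki_alert_rules_path=<path_to_loki_rules> \
  grafana_dashboards_path=<path_to_models>
```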