Commit 76b4b38
[discourse-gatekeeper] Migrate charm docs (#120)

Initial Vale tests fixes

Co-authored-by: upload-charms-docs-bot <[email protected]>

1 parent 55f121b · commit 76b4b38

20 files changed: +195 −141 lines changed

docs/how-to/h-cluster-migration.md (+17 −15)
@@ -1,31 +1,29 @@
-# Cluster Migration using MirrorMaker2.0
+# Cluster migration using MirrorMaker2.0
 
 ## Overview
 
 This How-To guide covers executing a cluster migration to a Charmed Kafka K8s deployment using MirrorMaker2.0, running as a process on each of the Juju units in an active/passive setup, where MirrorMaker will act as a consumer from an existing cluster, and a producer to the Charmed Kafka K8s cluster. In parallel (one process on each unit), data and consumer offsets for all existing topics will be synced one-way until both clusters are in-sync, with all data replicated across both in real-time.
 
-## MirrorMaker2 Overview
+## MirrorMaker2 overview
 
 Under the hood, MirrorMaker uses Kafka Connect source connectors to replicate data, those being the following:
 - **MirrorSourceConnector** - replicates topics from an original cluster to a new cluster. It also replicates ACLs and is necessary for the MirrorCheckpointConnector to run
 - **MirrorCheckpointConnector** - periodically tracks offsets. If enabled, it also synchronizes consumer group offsets between the original and new clusters
 - **MirrorHeartbeatConnector** - periodically checks connectivity between the original and new clusters
 
-Together, these allow for cluster->cluster replication of topics, consumer groups, topic configuration and ACLs, preserving partitioning and consumer offsets. For more detail on MirrorMaker internals, consult the [MirrorMaker README.md](https://github.com/apache/kafka/blob/trunk/connect/mirror/README.md) and the [MirrorMaker 2.0 KIP](https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0). In practice, it allows one to sync data one-way between two live Kafka clusters with minimal impact on the ongoing production service.
+Together, they are used for cluster->cluster replication of topics, consumer groups, topic configuration and ACLs, preserving partitioning and consumer offsets. For more detail on MirrorMaker internals, consult the [MirrorMaker README.md](https://github.com/apache/kafka/blob/trunk/connect/mirror/README.md) and the [MirrorMaker 2.0 KIP](https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0). In practice, it allows one to sync data one-way between two live Kafka clusters with minimal impact on the ongoing production service.
 
 In short, MirrorMaker runs as a distributed service on the new cluster, and consumes all topics, groups and offsets from the still-active original cluster in production, before producing them one-way to the new cluster that may not yet be serving traffic to external clients. The original, in-production cluster is referred to as an ‘active’ cluster, and the new cluster still waiting to serve external clients is ‘passive’. The MirrorMaker service can be configured using much the same configuration as available for Kafka Connect.
 
-## Deploy and run MirrorMaker
-
-### Pre-requisites
+## Pre-requisites
 
 - An existing Kafka cluster to migrate from. The clusters need to be reachable from/to Charmed Kafka K8s.
 - A bootstrapped Juju K8s cloud running Charmed Kafka K8s to migrate to
   - A tutorial on how to set up a Charmed Kafka deployment can be found as part of the [Charmed Kafka K8s Tutorial](/t/charmed-kafka-k8s-documentation-tutorial-overview/11945)
 - The CLI tool `yq` - https://github.com/mikefarah/yq
   - `snap install yq --channel=v3/stable`
 
-### Get cluster details and admin credentials
+## Get cluster details and admin credentials
 
 By design, the `kafka` charm will not expose any available connections until related to by a client. In this case, we deploy `data-integrator` charms and relate them to each `kafka` application, requesting `admin` level privileges:

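As an illustration, the deploy-and-relate step described above might look like the following sketch. The application names (`kafka-k8s`, `data-integrator`) and the `migration` topic are assumptions to adapt to your model; only the `admin` privilege level comes from the text:

```shell
# Deploy a data-integrator per cluster and request admin privileges
# (application and topic names here are illustrative).
juju deploy data-integrator --config topic-name=migration --config extra-user-roles=admin
juju relate data-integrator kafka-k8s

# Once the relation settles, read back the generated credentials
juju run data-integrator/0 get-credentials
```

The credentials returned by `get-credentials` are what the later `NEW_USERNAME`/`NEW_PASSWORD` exports are built from.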
@@ -49,9 +47,9 @@ export NEW_SERVERS=$(juju show-unit data-integrator/0 | yq -r '.. | .endpoints?
 export NEW_SASL_JAAS_CONFIG="org.apache.kafka.common.security.scram.ScramLoginModule required username=\""${NEW_USERNAME}"\" password=\""${NEW_PASSWORD}\"\;
 ```
 
-### Required source cluster credentials
+## Required source cluster credentials
 
-In order to authenticate MirrorMaker to both clusters, it will need full `super.user` permissions on **BOTH** clusters. MirrorMaker supports every possible `security.protocol` supported by Apache Kafka. In this guide, we will make the assumption that the original cluster is using `SASL_PLAINTEXT` authentication, as such, the required information is as follows:
+To authenticate MirrorMaker to both clusters, it will need full `super.user` permissions on **BOTH** clusters. MirrorMaker supports every possible `security.protocol` supported by Apache Kafka. In this guide, we assume that the original cluster is using `SASL_PLAINTEXT` authentication; as such, the required information is as follows:
 
 ```bash
 # comma-separated list of kafka server IPs and ports to connect to
@@ -63,9 +61,9 @@ OLD_SASL_JAAS_CONFIG
 
 > **NOTE** - If using `SSL` or `SASL_SSL` authentication, review the configuration options supported by Kafka Connect in the [Apache Kafka documentation](https://kafka.apache.org/documentation/#connectconfigs)
 
-### Generating `mm2.properties` file on the Charmed Kafka cluster
+## Generating `mm2.properties` file on the Charmed Kafka cluster
 
-MirrorMaker takes a `.properties` file for its configuration to fine-tune behavior. See below an example `mm2.properties` file that can be placed on each of the Charmed Kafka units using the above credentials:
+MirrorMaker takes a `.properties` file for its configuration to fine-tune behaviour. See below an example `mm2.properties` file that can be placed on each of the Charmed Kafka units using the above credentials:
 
 ```bash
 # Aliases for each cluster, can be set to any unique alias
@@ -131,7 +129,8 @@ cat /tmp/mm2.properties | juju ssh kafka-k8s/<id> sudo -i 'sudo tee -a /etc/kafk
 
 where `<id>` is the id of the Charmed Kafka unit.
 
-### Starting a dedicated MirrorMaker cluster
+## Starting a dedicated MirrorMaker cluster
+
 It is strongly advised to run MirrorMaker services on the downstream cluster to avoid service impact due to resource use. Now that the properties are set on each unit of the new cluster, the MirrorMaker services can be started with JMX metrics exporters using the following:
 
 ```bash
@@ -142,7 +141,7 @@ export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/zookeeper-jaas
 juju ssh kafka-k8s/<id> sudo -i 'cd /opt/kafka/bin && KAFKA_OPTS=$KAFKA_OPTS ./connect-mirror-maker.sh /etc/kafka/mm2.properties'
 ```
 
-### Monitoring and validating data replication
+## Monitoring and validating data replication
 
 The migration process can be monitored using built-in Kafka bin-commands on the original cluster. In the Charmed Kafka cluster, these bin-commands are also mapped to snap commands on the units (e.g. `charmed-kafka.get-offsets` or `charmed-kafka.topics`).

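The `curl ... | grep records_count` check shown further below can be rehearsed offline. This is a sketch only: the metric lines are invented stand-ins for real JMX-exporter output, whose actual names and labels will differ:

```shell
# Fake exporter payload standing in for `curl <unit-ip>:9099/metrics`
# (real metric names/labels will differ).
metrics='mirror_records_count{topic="orders"} 1042
mirror_heartbeats_total 88
process_cpu_seconds_total 3.5'

# Same filtering step as the live check: keep only the record counters.
echo "$metrics" | grep records_count
```

Against a live unit, a steadily growing counter indicates replication is making progress.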
@@ -164,8 +163,11 @@ There is also a [range of different metrics](https://github.com/apache/kafka/blo
 ```
 curl 10.248.204.198:9099/metrics | grep records_count
 ```
-### Switching client traffic from original cluster to Charmed Kafka cluster
+## Switching client traffic from original cluster to Charmed Kafka cluster
+
 Once happy that all the necessary data has successfully migrated, stop all active consumer applications on the original cluster, and redirect them to the Charmed Kafka cluster, making sure to use the Charmed Kafka cluster server addresses and authentication. After doing so, they will re-join their original consumer groups at the last committed offset they had originally, and continue consuming as normal.
 Finally, the producer client applications can be stopped, updated with the Charmed Kafka cluster server addresses and authentication, and restarted, with any newly produced messages being received by the migrated consumer client applications, completing the migration of both the data and the client applications.
-### Stopping MirrorMaker replication
+
+## Stopping MirrorMaker replication
+
 Once confident in the successful completion of the data and client migration, the running processes on each of the charm units can be killed, stopping the MirrorMaker processes active on the Charmed Kafka cluster.
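The kill step above can be scripted. This sketch only prints the per-unit commands; the unit ids and the `pkill` pattern are assumptions to adapt, after which the output can be piped to `sh` to execute:

```shell
# Print one stop command per Charmed Kafka unit; MirrorMaker was started
# via connect-mirror-maker.sh, so that is the process pattern targeted.
for id in 0 1 2; do
  echo "juju ssh kafka-k8s/${id} -- 'pkill -f connect-mirror-maker'"
done
```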

docs/how-to/h-deploy.md (+6 −2)
@@ -8,7 +8,11 @@ To deploy a Charmed Kafka K8s cluster:
 3. Deploy and relate Kafka K8s and ZooKeeper K8s charms.
 4. (Optionally) Create an external admin user
 
-## Juju Controller setup
+In the next subsections, we will cover these steps separately by referring to
+relevant Juju documentation and providing details on the Charmed Kafka K8s specifics.
+If you already have a Juju controller and/or a Juju model, you can skip the associated steps.
+
+## Juju controller setup
 
 Make sure you have a Juju controller accessible from
 your local environment using the [Juju client snap](https://snapcraft.io/juju).
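Bootstrapping such a controller on a local MicroK8s cloud might look like this sketch; the cloud (`microk8s`) and the controller name are assumptions to adapt:

```shell
# Bootstrap a Juju controller onto a local MicroK8s cloud
# (cloud and controller names are illustrative).
juju bootstrap microk8s kafka-controller
```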
@@ -35,7 +39,7 @@ where `<cloud>` -- the cloud to deploy controller to, e.g., `localhost`. For mor
 
 > **Note** See the [How to manage controllers](/t/1111) guide in Juju documentation for more options.
 
-## Juju Model setup
+## Juju model setup
 
 You can create a new Juju model using

docs/how-to/h-enable-encryption.md (+3 −1)
@@ -1,8 +1,10 @@
 # How to enable encryption
 
+To enable encryption, you should first deploy a TLS certificates Provider charm.
+
 ## Deploy a TLS Provider charm
 
-To enable encryption, you should first deploy a TLS certificates Provider charm. The Kafka K8s and ZooKeeper K8s charms implements the Requirer side of the [`tls-certificates/v1`](https://github.com/canonical/charm-relation-interfaces/blob/main/interfaces/tls_certificates/v1/README.md) charm relation.
+The Kafka K8s and ZooKeeper K8s charms implement the Requirer side of the [`tls-certificates/v1`](https://github.com/canonical/charm-relation-interfaces/blob/main/interfaces/tls_certificates/v1/README.md) charm relation.
 Therefore, any charm implementing the Provider side could be used.
 
 One possible option, suitable for testing, could be to use the `self-signed-certificates` charm, although this setup is not recommended for production clusters.
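For a test deployment, the wiring might look like the following sketch; the application names (`self-signed-certificates`, `kafka-k8s`, `zookeeper-k8s`) are assumptions to adapt to your model:

```shell
# Deploy a self-signed certificates provider (testing only, per the note above)
juju deploy self-signed-certificates

# Relate it to both charms, which implement the tls-certificates requirer side
juju relate self-signed-certificates kafka-k8s
juju relate self-signed-certificates zookeeper-k8s
```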

docs/how-to/h-enable-monitoring.md (+14 −2)
@@ -6,35 +6,44 @@ The metrics can be queried by accessing the `http://<kafka-unit-ip>:9101/metrics
66
Additionally, the charm provides integration with the [Canonical Observability Stack](https://charmhub.io/topics/canonical-observability-stack).
77

88
## Prerequisites
9+
910
* A deployed [Charmed Kafka K8s and Charmed ZooKeeper K8s bundle](HERE)
1011
* A deployed [`cos-lite` bundle in a Kubernetes environment](https://charmhub.io/topics/canonical-observability-stack/tutorials/install-microk8s)
1112

1213
## Offer interfaces via the COS controller
14+
1315
First, we will switch to the COS K8s environment and offer COS interfaces to be cross-model integrated with the Charmed Kafka K8s model.
1416

1517
To switch to the Kubernetes controller for the COS model, run
18+
1619
```shell
1720
juju switch <k8s_cos_controller>:<cos_model_name>
1821
```
1922
To offer the COS interfaces, run
23+
2024
```shell
2125
juju offer grafana:grafana-dashboard grafana-dashboards
2226
juju offer loki:logging loki-logging
2327
juju offer prometheus:receive-remote-write prometheus-receive-remote-write
2428
```
29+
2530
## Consume offers via the Kafka model
31+
2632
Next, we will switch to the Charmed Kafka K8s model, find offers, and consume them.
2733

2834
We are currently on the Kubernetes controller for the COS model. To switch to the Kafka model, run
35+
2936
```shell
3037
juju switch <k8s_db_controller>:<kafka_model_name>
3138
```
3239
To find offers, run the following command (make sure not to miss the ":" at the end!):
40+
3341
```shell
3442
juju find-offers <k8s_cos_controller>:
3543
```
3644

37-
The output should be similar to the sample below, where `k8s` is the k8s controller name and `cos` is the model where `cos-lite` has been deployed:
45+
The output should be similar to the sample below, where `k8s` is the K8s controller name and `cos` is the model where `cos-lite` has been deployed:
46+
3847
```shell
3948
Store URL Access Interfaces
4049
k8s admin/cos.grafana-dashboards admin grafana_dashboard:grafana-dashboard
@@ -44,6 +53,7 @@ k8s admin/cos.prometheus-receive-remote-write admin prometheus_remote_write:
 ```
 
 To consume offers to be reachable in the current model, run
+
 ```shell
 juju consume <k8s_cos_controller>:admin/<cos_model_name>.grafana-dashboards
 juju consume <k8s_cos_controller>:admin/<cos_model_name>.loki-logging
@@ -52,6 +62,7 @@ juju consume <k8s_cos_controller>:admin/<cos_model_name>.prometheus-receive-remo
 ## Deploy and integrate Grafana
 
 First, deploy [grafana-agent-k8s](https://charmhub.io/grafana-agent-k8s):
+
 ```shell
 juju deploy grafana-agent-k8s --trust
 ```
@@ -84,14 +95,15 @@ models, e.g. `<kafka_model_name>` and `<cos_model_name>`.
 After this is complete, the monitoring COS stack should be up and running and ready to be used.
 
 ### Connect Grafana web interface
+
 To connect to the Grafana web interface, follow the [Browse dashboards](https://charmhub.io/topics/canonical-observability-stack/tutorials/install-microk8s?_ga=2.201254254.1948444620.1704703837-757109492.1701777558#heading--browse-dashboards) section of the MicroK8s "Getting started" guide.
 ```shell
 juju run grafana/leader get-admin-password --model <k8s_cos_controller>:<cos_model_name>
 ```
 
 ## Tune server logging level
 
-In order to tune the level of the server logs for Kafka and ZooKeeper, configure the `log-level` and `log_level` properties accordingly
+To tune the level of the server logs for Kafka and ZooKeeper, configure the `log-level` and `log_level` properties accordingly.
 
 ### Kafka

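Using the property names mentioned above, tuning verbosity might look like this sketch; the application names and the `DEBUG` value are assumptions to adapt:

```shell
# Kafka exposes `log-level`; ZooKeeper exposes `log_level` (note the underscore)
juju config kafka-k8s log-level=DEBUG
juju config zookeeper-k8s log_level=DEBUG
```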
docs/how-to/h-integrate-alerts-dashboards.md (+5 −8)
@@ -1,30 +1,28 @@
 # Integrate custom alerting rules and dashboards
 
-This guide shows you how to integrate an existing set of rules and/or dashboards to your Charmed Kafka and Charmed Zookeeper deployment to be consumed with the [Canonical Observability Stack (COS)](https://charmhub.io/topics/canonical-observability-stack).
-To do so, we will sync resources stored in a git repo to COS Lite.
+This guide shows you how to integrate an existing set of rules and/or dashboards to your Charmed Kafka and Charmed ZooKeeper deployment to be consumed with the [Canonical Observability Stack (COS)](https://charmhub.io/topics/canonical-observability-stack).
+To do so, we will sync resources stored in a git repository to COS Lite.
 
 ## Prerequisites
 
-Deploy the cos-lite bundle in a Kubernetes environment and integrate Charmed Kafka and Charmed ZooKeeper to the COS offers, as shown in the [How to Enable Monitoring](/t/charmed-kafka-k8s-how-to-enable-monitoring/10291) guide.
+Deploy the `cos-lite` bundle in a Kubernetes environment and integrate Charmed Kafka and Charmed ZooKeeper to the COS offers, as shown in the [How to Enable Monitoring](/t/charmed-kafka-k8s-how-to-enable-monitoring/10291) guide.
 This guide will refer to the models that charms are deployed into as:
 
-* `<cos-model>` for the model containing observabilities charms (and deployed on k8s)
+* `<cos-model>` for the model containing observability charms (and deployed on K8s)
 
 * `<apps-model>` for the model containing Charmed Kafka and Charmed ZooKeeper
 
-* `<apps-model>` for other optional charms (e.g. tls-certificates operators, `grafana-agent`, `data-integrator`, etc.).
+* `<apps-model>` for other optional charms (e.g. TLS-certificates operators, `grafana-agent`, `data-integrator`, etc.).
 
 ## Create a repository with a custom monitoring setup
 
-
 Create an empty git repository, or in an existing one, save your alert rules and dashboard models under the `<path_to_prom_rules>`, `<path_to_loki_rules>` and `<path_to_models>` folders.
 
 If you want a primer to rule writing, refer to the [Prometheus documentation](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
 You may also find an example in the [kafka-test-app repository](https://github.com/canonical/kafka-test-app).
 
 Then, push your changes to the remote repository.
 
-
 ## Deploy the COS configuration charm
 
 Deploy the [COS configuration](https://charmhub.io/cos-configuration-k8s) charm in the `<cos-model>` model:
@@ -43,7 +41,6 @@ Adding, updating or deleting an alert rule or a dashboard in the repository will
 You need to manually refresh `cos-config`'s local repository with the *sync-now* action if you do not want to wait for the next [update-status event](/t/event-update-status/6484) to pull the latest changes.
 [/Note]
 
-
 ## Forward the rules and dashboards
 
 The path to the resource folders can be set after deployment:
