where `<kafka-units>` and `<zookeeper-units>` are the number of units to deploy for Kafka and ZooKeeper. We recommend values of at least `3` and `5`, respectively.
> **NOTE** The `--trust` option is needed for the Kafka application to work properly, e.g., to use NodePort or to run `juju refresh`. For more information about the usage of the trust option, see the [Juju documentation](/t/5476#heading--trust-an-application-with-a-credential).
Connect ZooKeeper and Kafka by relating/integrating the charms:
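For example, assuming the default application names used in this guide (a sketch; on older Juju versions the equivalent command is `juju relate`):

```shell
juju integrate kafka-k8s zookeeper-k8s
```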
`docs/how-to/h-manage-units.md` (+26 -13)
@@ -5,7 +5,7 @@ Unit management guide for scaling and running admin utility scripts.
## Replication and scaling

Increasing the number of Kafka brokers can be achieved by adding more units
to the Charmed Kafka K8s application:

```shell
juju add-unit kafka-k8s -n <num_brokers_to_add>
```
@@ -18,8 +18,8 @@ It is important to note that when adding more units, the Kafka cluster will not
will be used only when new topics and new partitions are created.

Partition reassignment can still be done manually by the admin user by using the
`/opt/kafka/bin/kafka-reassign-partitions.sh` Kafka bin utility script. Please refer to the
[Apache Kafka documentation](https://kafka.apache.org/documentation/#basic_ops_partitionassignment) for more information on the script usage.
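The reassignment script takes a JSON file describing the desired replica placement. A minimal sketch, assuming a hypothetical topic `my-topic` and broker IDs `0`, `1`, and `2`:

```json
{
  "version": 1,
  "partitions": [
    {"topic": "my-topic", "partition": 0, "replicas": [0, 1, 2]}
  ]
}
```

The file is then passed to the script via its `--reassignment-json-file` option, as described in the linked Apache Kafka documentation.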
> **IMPORTANT** Scaling down is currently not supported in the charm automation.
> If partition reassignment is not manually performed before scaling down in order
@@ -29,16 +29,17 @@ its documentation for more information.
## Running Kafka Admin utility scripts

Apache Kafka ships with `bin/*.sh` commands to do various administrative tasks, such as:

* `bin/kafka-config.sh` to update cluster configuration
* `bin/kafka-topics.sh` for topic management
* `bin/kafka-acls.sh` for management of ACLs of Kafka users

Please refer to the upstream [Kafka project](https://github.com/apache/kafka/tree/trunk/bin) or its [documentation](https://kafka.apache.org/documentation/#basic_ops)
for a full list of the bash commands available in Kafka distributions. You can also
use the `--help` argument to print a short summary of the arguments for a given
bash command.

These scripts can be found in the `/opt/kafka/bin` folder of the workload container.
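For instance, a script can be invoked inside the workload container with `juju ssh` (a sketch; the container name `kafka` is an assumption based on common charm conventions):

```shell
# Run a bin script inside the workload container of the leader unit
juju ssh --container kafka kafka-k8s/leader \
    /opt/kafka/bin/kafka-topics.sh --help
```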
> **IMPORTANT** Before running bash scripts, make sure that some listeners have been correctly
> opened by creating appropriate integrations. Please refer to [this table](/t/charmed-kafka-k8s-documentation-reference-listeners/13270) for more
@@ -53,20 +54,22 @@ To run most of the scripts, you need to provide:
### Juju admins of the Kafka deployment

For Juju admins of the Kafka deployment, the bootstrap servers information can
be obtained using the `get-admin-credentials` action output:
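A sketch of the invocation, assuming the application is named `kafka-k8s` as elsewhere in this guide:

```shell
juju run kafka-k8s/leader get-admin-credentials --format yaml
```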
where `kafka-k8s-0.kafka-k8s-endpoints:9092,kafka-k8s-1.kafka-k8s-endpoints:9092,kafka-k8s-2.kafka-k8s-endpoints:9092,kafka-k8s-3.kafka-k8s-endpoints:9092` is the contents of the `$BOOTSTRAP_SERVERS` variable.
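As an illustration, the comma-separated list can be split with standard shell tools, e.g. to list or count the advertised brokers (a sketch using the example value above):

```shell
# Example value taken from the action output shown above
BOOTSTRAP_SERVERS="kafka-k8s-0.kafka-k8s-endpoints:9092,kafka-k8s-1.kafka-k8s-endpoints:9092,kafka-k8s-2.kafka-k8s-endpoints:9092,kafka-k8s-3.kafka-k8s-endpoints:9092"

# One broker endpoint per line
echo "$BOOTSTRAP_SERVERS" | tr ',' '\n'

# Number of brokers
echo "$BOOTSTRAP_SERVERS" | tr ',' '\n' | wc -l
```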
### Juju external users
For external users managed by the [Data Integrator Charm](https://charmhub.io/data-integrator),
the endpoints and credentials can be fetched using the dedicated action:
```shell
juju run data-integrator/leader get-credentials --format yaml
```
The `client.properties` file can be generated by substituting the relevant information in the
file available on the brokers at `/etc/kafka/client.properties`.
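For illustration, the substitution itself can be done with `sed`; the placeholder-style template line below is hypothetical and only shows the mechanics, not the actual contents of `/etc/kafka/client.properties`:

```shell
# Hypothetical template line; the real file layout may differ
template='bootstrap.servers=<BOOTSTRAP_SERVERS>'

# Substitute the value fetched from the deployment
echo "$template" | sed 's|<BOOTSTRAP_SERVERS>|kafka-k8s-0.kafka-k8s-endpoints:9092|'
```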
To do so, fetch the information using `juju` commands: