Commit d3da664

Merge branch 'main' into es/docs/contributing/issues

2 parents c6e862c + 8256a75

5 files changed: +356 −102 lines changed

@@ -0,0 +1,186 @@

---
title: Kubernetes annotation-based discovery for the OpenTelemetry Collector
linkTitle: K8s annotation-based discovery
date: 2025-01-27
author: >
  [Dmitrii Anoshin](https://github.com/dmitryax) (Cisco/Splunk), [Christos
  Markou](https://github.com/ChrsMark) (Elastic)
sig: Collector
issue: opentelemetry-collector-contrib#34427
cSpell:ignore: Dmitrii Anoshin Markou
---

In the world of containers and [Kubernetes](https://kubernetes.io/),
observability is crucial. Users need to know the status of their workloads at
any given time. In other words, they need observability into moving objects.

This is where the [OpenTelemetry Collector](/docs/collector/) and its
[receiver creator](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/v0.117.0/receiver/receivercreator)
component come in handy. Users can set up fairly complex monitoring scenarios
with a self-service approach, following the principle of least privilege at the
cluster level.

The self-service approach is great, but how much self-service can it actually
be? In this blog post, we explore a newly added feature of the Collector that
makes dynamic workload discovery even easier, providing a seamless experience
for both administrators and users.

## Automatic discovery for containers and pods

Applications running in containers and pods become moving targets for the
monitoring system. With automatic discovery, monitoring agents like the
Collector can track changes at the container and pod levels and dynamically
adjust the monitoring configuration.

Today, the Collector, and specifically the receiver creator, can provide such
an experience. Using the receiver creator, observability users can define
configuration "templates" that rely on environment conditions. For example, as
an observability engineer, you can configure your Collectors to enable the
[NGINX receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/v0.117.0/receiver/nginxreceiver)
when an NGINX pod is deployed on the cluster. The following configuration
achieves this:

```yaml
receivers:
  receiver_creator:
    watch_observers: [k8s_observer]
    receivers:
      nginx:
        rule: type == "port" && port == 80 && pod.name matches "(?i)nginx"
        config:
          endpoint: 'http://`endpoint`/nginx_status'
          collection_interval: '15s'
```

The previous configuration is enabled when a pod that exposes port `80` (the
well-known port for NGINX) is discovered via the Kubernetes API and its name
matches the `nginx` keyword.

This is great, and as an SRE or platform engineer managing an observability
solution, you can rely on this to meet your users' needs for monitoring NGINX
workloads. However, what happens if another team wants to monitor a different
type of workload, such as Apache servers? They would need to inform your team,
and you would need to update the configuration with a new conditional
configuration block, take it through a pull request and review process, and
finally deploy it. This deployment would require the Collector instances to
restart for the new configuration to take effect. While this process might not
be a big deal for some teams, there is definitely room for improvement.

So, what if, as a Collector user, you could simply enable automatic discovery
and then let your cluster users tell the Collector how their workloads should
be monitored by annotating their pods properly? That sounds awesome, and it's
not actually something new. OpenTelemetry already supports auto-instrumentation
through the [Kubernetes operator](/docs/kubernetes/operator/automatic/),
allowing users to instrument their applications automatically just by
annotating their pods. In addition, this is a feature that other monitoring
agents in the observability industry already support, and users are familiar
with it.

All this motivation led the OpenTelemetry community
([GitHub issue](https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/17418))
to create a similar feature for the Collector. We are happy to share that
autodiscovery based on Kubernetes annotations is now supported in the Collector
([GitHub issue](https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/34427))!

## A solution

The solution is built on top of the existing functionality provided by the
[Kubernetes observer](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/v0.117.0/extension/observer/k8sobserver)
and the
[receiver creator](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/v0.117.0/receiver/receivercreator).

The K8s observer notifies the receiver creator about objects appearing in the
K8s cluster and provides all the information about them. In addition to the
K8s object metadata, the observer supplies information about the discovered
endpoints that the Collector can connect to. This means that each discovered
endpoint can potentially be used by a particular scraping receiver to fetch
metrics data.

Each scraping receiver has a default configuration with only one required
field: `endpoint`. Given that the endpoint information is provided by the
Kubernetes observer, the only information that the user needs to provide
explicitly is which receiver or scraper should be used to scrape data from a
discovered endpoint. That information can be configured on the Collector, but
as mentioned before, this is inconvenient. A much more convenient place to
define which receiver can be used to scrape telemetry from a particular pod is
the pod itself. A pod's annotations are the natural place to put that kind of
detail. Given that the receiver creator has access to the annotations, it can
instantiate the proper receiver with the receiver's default configuration and
the discovered endpoint.

The following annotation instructs the receiver creator that this particular
pod runs NGINX, and that the
[NGINX receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/v0.117.0/receiver/nginxreceiver)
can be used to scrape metrics from it:

```yaml
io.opentelemetry.discovery.metrics/scraper: nginx
```

Apart from that, discovery on the pod needs to be explicitly enabled with the
following annotation:

```yaml
io.opentelemetry.discovery.metrics/enabled: 'true'
```

In some scenarios, the receiver's default configuration is not suitable for
connecting to a particular pod. In that case, it's possible to define a custom
configuration as part of another annotation:

```yaml
io.opentelemetry.discovery.metrics/config: |
  endpoint: "http://`endpoint`/nginx_status"
  collection_interval: '20s'
  initial_delay: '20s'
  read_buffer_size: '10'
```

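Putting the metrics annotations together, a pod that opts in to discovery might carry metadata like the following sketch (the pod name, image, and the overridden settings are illustrative, not required):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx # illustrative name
  annotations:
    # opt this pod in to metrics discovery
    io.opentelemetry.discovery.metrics/enabled: 'true'
    # tell the receiver creator which scraper to instantiate
    io.opentelemetry.discovery.metrics/scraper: nginx
    # optional: override the scraper's default configuration
    io.opentelemetry.discovery.metrics/config: |
      endpoint: "http://`endpoint`/nginx_status"
      collection_interval: '20s'
spec:
  containers:
    - name: nginx
      image: nginx:1.27
      ports:
        - containerPort: 80
```
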
It's important to mention that the configuration defined in the annotations
cannot point the receiver creator to another pod. The Collector will reject
such configurations.

In addition to metrics scraping, annotation-based discovery also supports log
collection with the filelog receiver. The following annotation can be used to
enable log collection on a particular pod:

```yaml
io.opentelemetry.discovery.logs/enabled: 'true'
```

Similar to metrics, an optional configuration can be provided in the following
form:

```yaml
io.opentelemetry.discovery.logs/config: |
  max_log_size: "2MiB"
  operators:
    - type: container
      id: container-parser
    - type: regex_parser
      regex: '^(?P<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (?P<sev>[A-Z]*) (?P<msg>.*)$'
```

If the set of filelog receiver operators needs to be changed, the full list,
including the default container parser, has to be redefined, because list
config fields are entirely replaced when merged into the default configuration
struct.

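For log collection, the analogous annotations also live on the pod itself. A sketch, with an illustrative pod name and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app # illustrative name
  annotations:
    # opt this pod in to log collection via the filelog receiver
    io.opentelemetry.discovery.logs/enabled: 'true'
    # optional: tweak the filelog receiver defaults; note that
    # redefining 'operators' here would replace the default list entirely
    io.opentelemetry.discovery.logs/config: |
      max_log_size: "2MiB"
spec:
  containers:
    - name: app
      image: busybox:1.36
```
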
The discovery functionality has to be explicitly enabled in the receiver
creator by adding the following configuration field:

```yaml
receivers:
  receiver_creator:
    watch_observers: [k8s_observer]
    discovery:
      enabled: true
```

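For context, the receiver creator only works together with an observer extension, so a runnable Collector configuration also needs the `k8s_observer` extension wired into the service section. A minimal end-to-end sketch, where the `debug` exporter and the observer options are illustrative choices:

```yaml
extensions:
  k8s_observer:
    auth_type: serviceAccount
    observe_pods: true

receivers:
  receiver_creator:
    watch_observers: [k8s_observer]
    discovery:
      enabled: true

exporters:
  debug: {}

service:
  extensions: [k8s_observer]
  pipelines:
    metrics:
      receivers: [receiver_creator]
      exporters: [debug]
    logs:
      receivers: [receiver_creator]
      exporters: [debug]
```
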
## Give it a try

If you are an OpenTelemetry Collector user on Kubernetes and you find this new
feature interesting, see the [Receiver Creator configuration] section to learn
more.

Give it a try and let us know what you think via the `#otel-collector` channel
of the [CNCF Slack workspace](https://slack.cncf.io/).

[Receiver Creator configuration]:
  https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/v0.117.0/receiver/receivercreator/README.md#generate-receiver-configurations-from-provided-hints


content/en/docs/zero-code/php.md +12 −9

@@ -18,11 +18,11 @@ Automatic instrumentation with PHP requires:
   [instrumentation libraries](/ecosystem/registry/?component=instrumentation&language=php)
 - [Configuration](#configuration)
 
+## Install the OpenTelemetry extension
+
 {{% alert title="Important" color="warning" %}}Installing the OpenTelemetry
 extension by itself does not generate traces. {{% /alert %}}
 
-## Install the OpenTelemetry extension
-
 The extension can be installed via pecl,
 [pickle](https://github.com/FriendsOfPHP/pickle) or
 [php-extension-installer](https://github.com/mlocati/docker-php-extension-installer)

@@ -130,13 +130,16 @@ Automatic instrumentation is available for a number commonly used PHP libraries.
 For the full list, see
 [instrumentation libraries on packagist](https://packagist.org/search/?query=open-telemetry&tags=instrumentation).
 
-Let's assume that your application uses Slim Framework and a PSR-18 HTTP client.
-You would then install the SDK and corresponding auto-instrumentation packages
-for these:
+Let's assume that your application uses Slim Framework and a PSR-18 HTTP client,
+and that we will export the traces with the OTLP protocol.
+
+You would then install the SDK, an exporter, and auto-instrumentation packages
+for Slim Framework and PSR-18:
 
 ```shell
 composer require \
   open-telemetry/sdk \
+  open-telemetry/exporter-otlp \
   open-telemetry/opentelemetry-auto-slim \
   open-telemetry/opentelemetry-auto-psr18
 ```

@@ -152,8 +155,8 @@ variables or the `php.ini` file to configure auto-instrumentation.
 OTEL_PHP_AUTOLOAD_ENABLED=true \
 OTEL_SERVICE_NAME=your-service-name \
 OTEL_TRACES_EXPORTER=otlp \
-OTEL_EXPORTER_OTLP_PROTOCOL=grpc \
-OTEL_EXPORTER_OTLP_ENDPOINT=http://collector:4317 \
+OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf \
+OTEL_EXPORTER_OTLP_ENDPOINT=http://collector:4318 \
 OTEL_PROPAGATORS=baggage,tracecontext \
 php myapp.php

@@ -167,8 +170,8 @@ by PHP:
 OTEL_PHP_AUTOLOAD_ENABLED="true"
 OTEL_SERVICE_NAME=your-service-name
 OTEL_TRACES_EXPORTER=otlp
-OTEL_EXPORTER_OTLP_PROTOCOL=grpc
-OTEL_EXPORTER_OTLP_ENDPOINT=http://collector:4317
+OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
+OTEL_EXPORTER_OTLP_ENDPOINT=http://collector:4318
 OTEL_PROPAGATORS=baggage,tracecontext

scripts/content-modules/adjust-pages.pl +5 −5

Original file line numberDiff line numberDiff line change
@@ -82,10 +82,10 @@ ($$$)
8282

8383
return 0 if $patchMsgCount{$key};
8484

85-
if (($vers = $versions{$specName}) ne $targetVers) {
86-
print STDOUT "INFO: remove obsolete patch '$patchID' now that spec '$specName' is at v$vers, not v$targetVers - $0\n";
87-
} elsif (($vers = $versFromSubmod{$specName}) ne $targetVers) {
88-
print STDOUT "INFO [$patchID]: skipping patch '$patchID' since spec '$specName' submodule is at v$vers not v$targetVers - $0\n";
85+
if (($vers = $versions{$specName}) gt $targetVers) {
86+
print STDOUT "INFO: remove obsolete patch '$patchID' now that spec '$specName' is at v$vers > v$targetVers - $0\n";
87+
} elsif (($vers = $versFromSubmod{$specName}) gt $targetVers) {
88+
print STDOUT "INFO [$patchID]: skipping patch '$patchID' since spec '$specName' submodule is at v$vers > v$targetVers - $0\n";
8989
} else {
9090
return 'Apply the patch';
9191
}
@@ -103,7 +103,7 @@ ()
 
 sub patchSemConv1_30_0() {
   return unless $ARGV =~ /^tmp\/semconv\/docs\//
-    && applyPatchOrPrintMsgIf('2025-01-24-emit-an-event', 'semconv', '1.30.0');
+    && applyPatchOrPrintMsgIf('2025-01-24-emit-an-event', 'semconv', '1.30.0-3-g');
 
   s|Emit Event API|Log API|;
   s|(docs/specs/otel/logs/api.md#emit-a)n-event|$1-logrecord|;
