Eliminate logspam using filelog monitor #1032
base: master
Conversation
When using a filelog monitor, the top-level `pluginConfig.message` filter provides a limited stream of log events to be checked against the `rules[].pattern`. I thus expect it to be normal for most log events to fail to match the top-level `pluginConfig.message` filter. Such a condition should not trigger a warning-level log message. And in fact such log messages should be suppressed by default, unless `-v=5` or higher is used for troubleshooting/debug purposes.
The committers listed above are authorized under a signed CLA.
Welcome @skaven81!
Hi @skaven81. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test` on its own line. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/ok-to-test
@wangzhen127 I am not sure if
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: hakman, skaven81
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/cc @wangzhen127
```diff
@@ -121,7 +121,7 @@ func (s *filelogWatcher) watchLoop() {
 		}
 		log, err := s.translator.translate(strings.TrimSuffix(line, "\n"))
 		if err != nil {
-			klog.Warningf("Unable to parse line: %q, %v", line, err)
+			klog.V(5).Infof("Unable to parse line: %q, %v", line, err)
```
There are 2 types of errors: timestamp parsing error and message filtering error. This hides both errors. But I think you only want to hide the message filtering error, right? https://github.com/kubernetes/node-problem-detector/blob/master/pkg/systemlogmonitor/logwatchers/filelog/translator.go#L59
I think in the original design, we only considered the pattern where the message regular expression is common in the file log. This is true for kernel log for example. Can you share more of the use case here? It's probably valid and we may want to log here instead of erroring out: https://github.com/kubernetes/node-problem-detector/blob/master/pkg/systemlogmonitor/logwatchers/filelog/translator.go#L74
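Regarding the two error types mentioned above, here is a rough, self-contained sketch of how an explicit sentinel error could keep timestamp-parse failures at warning level while demoting only the message-filter misses to `V(5)`. The `translate` stand-in and `errMessageFiltered` below are hypothetical, not the actual translator code:

```go
package main

import (
	"errors"
	"fmt"
)

// errMessageFiltered is a hypothetical sentinel error the translator could
// return when a line simply does not match pluginConfig.message, as opposed
// to a genuine parse failure such as a bad timestamp.
var errMessageFiltered = errors.New("log line did not match pluginConfig.message")

// translate is a stand-in for the real translator: it fails "loudly" on an
// unparsable timestamp and "quietly" on a filtered-out message.
func translate(line string) (string, error) {
	switch line {
	case "bad-timestamp":
		return "", fmt.Errorf("failed to parse timestamp in %q", line)
	case "uninteresting":
		return "", errMessageFiltered
	default:
		return line, nil
	}
}

func main() {
	for _, line := range []string{"kernel oops", "uninteresting", "bad-timestamp"} {
		if _, err := translate(line); err != nil {
			if errors.Is(err, errMessageFiltered) {
				// expected filter miss: only worth logging at high verbosity (klog.V(5).Infof)
				fmt.Println("V(5):", err)
			} else {
				// real parse error: keep at warning level (klog.Warningf)
				fmt.Println("WARNING:", err)
			}
			continue
		}
		fmt.Println("matched:", line)
	}
}
```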
> Can you share more of the use case here?
I'll update the introductory comment on the PR with the use cases where this became a problem.
See also #1038
CC @nikhil-bhat
See also #1038
As discussed in the comments above, the original intent was that the `pluginConfig.message` configuration setting for logMonitors was expected to match all of the "expected" log events in the log file, and that the subsequent `rules[].pattern` regexes would then differentiate between the different types of failure modes in the log stream. Thus, the warning message that I've proposed changing to "info" level and suppressing in normal operation was expected to appear only in cases where unexpected log events show up in the log stream.

But this causes problems when trying to detect node problems in a log file that has a wide array of event messages. Because the last `pluginConfig.message` regex capture group is what is used in the node condition and Event message fields, it is only possible to include details about a single failure mode in a given logMonitor configuration. When viewed through this lens, the logMonitor architecture actually works quite well:

- The `pluginConfig.message` regex is configured to isolate the class of log messages in the log file that are related to the failure mode, with the last capture group extracting a diagnostic message to be included with the status condition or Event.
- The `rules` list in the logMonitor JSON enumerates the various sub-classes of the failure mode, with some perhaps generating permanent node conditions while others generate temporary Events.
The problem with using logMonitors in this way is that using `pluginConfig.message` to filter out basically all of the log messages in the log file means every log event generates a warning message due to the `pluginConfig.message` regex not matching. This creates a massive amount of unnecessary logspam that can quickly fill up container log partitions and costs a fortune in enterprise log management platforms like Splunk and Datadog.
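To make the scale of the problem concrete, here is a minimal, self-contained sketch; the regex and log lines are invented for illustration and are not taken from any shipped config. With a narrow message filter, the vast majority of lines fall into the "filtered out" branch, and before this change each of those lines emitted a warning:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical narrow pluginConfig.message filter: only failure-related lines match.
	message := regexp.MustCompile(`task [^ ]+ blocked for more than \d+ seconds`)

	lines := []string{
		"systemd[1]: Started Session 42 of user root.",
		"kernel: INFO: task jbd2/sda1-8 blocked for more than 120 seconds.",
		"sshd[4242]: Accepted publickey for root from 10.0.0.1",
	}
	for _, l := range lines {
		if message.MatchString(l) {
			// only matched lines are checked against rules[].pattern
			fmt.Println("checked against rules[].pattern:", l)
		} else {
			// before this PR, every one of these produced a klog warning
			fmt.Println("filtered out:", l)
		}
	}
}
```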
This PR takes the most direct and lowest-impact approach to resolving this problem: the `Warning()`-level alert is downgraded to `Info()` and is not emitted unless NPD is executed with a non-default verbosity setting.
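For reference, klog's verbosity gating is what makes this suppression work; below is a small standalone sketch (not NPD code, and the `-v` value is set programmatically here only to show the effect):

```go
package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	klog.InitFlags(nil)
	flag.Set("v", "2") // typical default-ish verbosity; V(5) lines are suppressed
	flag.Parse()

	klog.Warningf("always emitted (old behaviour of the filelog watcher)")
	klog.V(5).Infof("emitted only when NPD runs with -v=5 or higher (new behaviour)")
	klog.Flush()
}
```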
It is notable that this PR would not be required if a more comprehensive rework of the logMonitor message capture were performed. If the node condition and Event message were instead captured from the `rules[].pattern` regex rather than the `pluginConfig.message` regex, then logMonitors could remain configured with a broad capture mode that matches all or nearly all of the log messages in the file.

However, even with such a rework, I would still advise that the "log message doesn't match `pluginConfig.message` regex" alert be suppressed unless the administrator is explicitly debugging/troubleshooting NPD. Even in the case where the node condition/event message is captured from `rules[].pattern` (which would be my preference), I would argue that many logMonitors would want to filter the log stream down to a known set of input strings that are then matched against the rule patterns. This keeps the rule patterns simpler and easier to maintain, because they only have to match against a pre-filtered set of log events.

The specific use-case where this logspam problem originated is actually in one of the included sample log monitors: https://github.com/kubernetes/node-problem-detector/blob/master/config/disk-log-message-filelog.json - observe that `pluginConfig.message` only matches the log messages in `/var/log/messages` that actually match a failure condition. ALL other messages are filtered out (and thus trigger the warning log that I've modified in this PR).