The stack language provides some basic techniques to convert an input line into a set of signals that can be used to trigger and visualize alert conditions. This section assumes a familiarity with the stack language and the alerting philosophy.
A signal line is a time series that indicates whether or not a condition is true for a particular interval. It is modelled with zero indicating false and non-zero, typically 1, indicating true. Alerting expressions map some input time series to a set of signal lines that indicate true when in a triggering state.
To start we need an input metric. For this example the input will be a sample metric showing high CPU usage for a period:
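For example, assuming the sample metric is tagged with nf.app=alerttest and name=ssCpuUser (the same tags used in the visualization example later in this section), the input query would be:

nf.app,alerttest,:eq, name,ssCpuUser,:eq, :and, :sum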
Let's say we want to trigger an alert when the CPU usage goes above 80%. To do that, simply use the :gt operator and append 80,:gt to the query:
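Continuing with the sample input query above, the full threshold expression becomes:

nf.app,alerttest,:eq, name,ssCpuUser,:eq, :and, :sum, 80,:gt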
The result is a signal line that is non-zero, typically 1, when in a triggering state and zero when everything is fine.
Our threshold alert above will trigger if the CPU usage is ever recorded to be above the threshold. Alert conditions are often combined with a check for the number of occurrences. This is done by using the :rolling-count operator to get a line showing how many times the input signal has been true within a specified window and then applying a second threshold to the rolling count.
Graphs (left to right): Input, Rolling Count, Dampened Signal.
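As an illustrative sketch, the following expression only triggers when the signal has been true for at least 4 of the last 5 datapoints; the window size of 5 and the occurrence threshold of 4 are arbitrary example values, not requirements:

nf.app,alerttest,:eq, name,ssCpuUser,:eq, :and, :sum, 80,:gt, 5,:rolling-count, 4,:gt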
A signal line is useful for telling whether or not something is in a triggered state, but it can be difficult for a person to follow. Alert expressions can be visualized by showing the input, threshold, and triggering state on the same graph. In the expression below, :2over duplicates the input and threshold so that, after :gt consumes one pair to produce the signal, the originals remain on the stack and can be shown as separate lines alongside the triggered state rendered as a :vspan.
nf.app,alerttest,:eq, name,ssCpuUser,:eq, :and, :sum, 80,:2over, :gt, :vspan, 40,:alpha, triggered,:legend, :rot, input,:legend, :rot, threshold,:legend, :rot
You should now know the basics of crafting an alert expression using the stack language. Other topics that may be of interest:
- Alerting Philosophy: overview of best practices associated with alerts.
- Stack Language Reference: comprehensive list of available operators.
- DES: double exponential smoothing. A technique for detecting anomalies in normally clean input signals where a precise threshold is unknown. For example, the requests per second hitting a service.