In simple words, Service Level Indicators are the metrics that represent how the reliability of the service is perceived by its consumers. They are normalized to a number between 0 and 100 using this formula:
You can read more about the definition of
Good
and
Valid
.
Common SLIs include latency, availability, yield, durability, correctness, etc.
Tip: Use Templates
Service Level Formula
The formula for calculating SLI for the given SLO window is
the percentage of good per valid.
Depending on whether the SLI is time-based or event-based, good and valid are counted as time units or as events.
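Written out, this is the same good-per-valid definition expressed as a percentage:

$$\text{SLI} = \frac{\text{good}}{\text{valid}} \times 100$$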
SLO
Service Level Objective (SLO) is the target percentage of
good {{ sloWindow.eventUnitNorm }}
out of total {{ sloWindow.eventUnitNorm }}
in {{ sloWindow }}.
Using the two sliders below, you can fine-tune the SLO to your needs.
The first slider is for the integer part of the percentage ({{ percL10n(sloInt) }}).
The second slider is for the fractional part of the percentage ({{ percL10n(sloFrac) }}).
Typical SLO values

| Informal Name | SLO Value |
| --- | --- |
| {{ p.title }} | {{ percL10n(p.slo) }} |
Note: Just be mindful of the price tag for this high service level objective!
Everyone wants the highest possible number but
not everyone is willing to pay the price.
Note: This is an unusually low service level objective.
Typically, the service level objective is above {{ percL10n(90) }}, with some rare exceptions.
Please check the Error budget for the implications of your chosen SLO.
$UT
({{ metricUnit }})
The upper threshold (UT) defines the maximum value of the {{ metricName }} (in {{ metricUnit }}) that indicates good {{ sloWindow.eventUnitNorm }}.
UT is part of the SLO definition and, for example, allows for
Multi-Tiered SLOs
.
Maximum {{ metricName }} for good {{ sloWindow.eventUnitNorm }}
The upper threshold must be greater than the lower threshold.
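For intuition, here is a minimal sketch (not the app's code) of how the thresholds decide whether a single measurement counts as good; the latency metric and the 500 ms threshold are illustrative assumptions:

```ts
// Minimal sketch: classify one measurement as good or bad using the
// upper threshold (UT) and an optional lower threshold (LT).
function isGood(value: number, upperThreshold: number, lowerThreshold = Number.NEGATIVE_INFINITY): boolean {
  return value >= lowerThreshold && value <= upperThreshold
}

// Hypothetical latency metric in milliseconds with UT = 500 ms
console.log(isGood(350, 500)) // true: counts as a good event
console.log(isGood(800, 500)) // false: counts as a bad event
```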
Service Level Status Formula
Service Level Status (SLS)
is the percentage of good
{{ sloWindow.eventUnitNorm }} in a given time.
SLS is the current status of the service level and directly relates to the SLO.
Whenever the SLS falls below the SLO, the SLO is breached.
If an SLA is in place, this may have severe consequences.
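As a rough sketch (assuming you already have the good and valid counts from your monitoring), the SLS calculation and the breach check look like this:

```ts
// Minimal sketch: compute the Service Level Status over a window and
// compare it against the SLO target (both expressed in percent).
function serviceLevelStatus(goodCount: number, validCount: number): number {
  return (goodCount / validCount) * 100
}

const slo = 99.9                                   // example target
const sls = serviceLevelStatus(998_755, 1_000_000) // 99.8755
console.log(sls < slo ? 'SLO breached' : 'within SLO') // "SLO breached"
```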
Error budget:
{{ percL10n(errorBudgetPerc) }}
The error budget is one of the core ideas behind using SLIs/SLOs to improve reliability.
Instead of denying or forbidding errors, the error budget allows the system to fail within a predefined limit.
The number one enemy of reliability is change, yet we need change to improve the system.
Error budgets resolve this tension: they give the team a budget of errors to spend on improving the system while keeping the consumers happy enough.
Error budget is the complement of SLO.
It is the percentage of bad {{ sloWindow.eventUnitNorm }} that you can have
before you violate the SLO.
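In other words:

$$\text{error budget} = 100\% - \text{SLO}$$

For example, an SLO of 99.9% leaves an error budget of 0.1%.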
Subtract
Add
{{ numL10n(1) }}
{{ numL10n(10) }}
{{ numL10n(100) }}
{{ numL10n(1000) }}
{{ numL10n(10000) }}
Warning: The error budget is 0 based on your estimated number of valid {{ sloWindow.eventUnitNorm }}.
Expected
Here you can enter your expected load and see
how many {{ eventUnit }} are allowed to fail during the SLO window
while still staying within the error budget.
Number of {{ eventUnit }}
in
{{ sloWindow.humanTime }}
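A minimal sketch of the calculation behind this (the function name and the example numbers are just for illustration):

```ts
// Minimal sketch: how many events may fail during the SLO window
// while still staying within the error budget.
function allowedFailures(expectedEvents: number, sloPercent: number): number {
  const errorBudgetFraction = (100 - sloPercent) / 100
  return Math.round(expectedEvents * errorBudgetFraction)
}

// Example: 1,000,000 expected requests against a 99.9% SLO
console.log(allowedFailures(1_000_000, 99.9)) // 1000
```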
Average cost
How much does a bad
{{ sloWindow.eventUnitNorm }}
cost the business or your team?
This cost will be used to put a tangible number on various windows and events.
It might be hard to put a number on failures, especially if some resilience patterns are part of the architecture.
There are many ways to make failures cheaper.
In a future article, we will discuss the patterns of reliability and how to make errors cheap.
In the meantime, check out the following techniques:
Fallback
Failover
For each of the failed {{ sloWindow.eventUnitNorm }}
Currency
You can set the currency to see how much it costs to violate the SLO.
If you can't put a currency on the errors, feel free to get creative.
Typical Currencies

| Abbreviation | Description |
| --- | --- |
| {{ p.currency }} | {{ p.description }} |
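As a rough sketch, the cost estimate is simply the number of failed events multiplied by the average cost per failure (a flat average is an assumption; real failures rarely cost the same):

```ts
// Minimal sketch: tangible cost of the failures you are budgeting for.
function failureCost(failedEvents: number, avgCostPerFailure: number): number {
  return failedEvents * avgCostPerFailure
}

// Example: 1000 failed events at an average cost of 0.25 per failure
console.log(failureCost(1000, 0.25)) // 250, in your chosen currency
```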
Alerting
What is the point of setting SLIs/SLOs if we are not going to take action
when the SLO is violated?
Alerting on error budgets enables us to stay on top of the reliability of our system.
When using service levels, the alert triggers on the rate of consuming the error budget.
When setting an alert, the burn rate decides how quickly the alert reacts
to errors.
Too fast and it will lead to false positives (alerting unnecessarily)
and alert fatigue (too many alerts).
Too slow and the error budget will be burned before you know it.
Google SRE Workbook goes through
6 alerting strategies based on SLOs
.
Rate: {{ burnRate }}x
Burn rate is the rate at which the error budget is consumed, relative to the SLO window.
A burn rate of 1x means that the error budget will be consumed exactly over the
SLO window (acceptable).
A burn rate of 2x means that the error budget will be consumed in half the
SLO window. This is not acceptable because, at this rate, the SLO will be
violated before the end of the SLO window.
You have selected a burn rate of {{ burnRate }}x.
This means the error budget ({{ errorBudget.eventCountL10n }} failed {{ errorBudget.eventUnitNorm }}) will be consumed in
instead of being spread across {{ sloWindow.humanTime }}.
If the error budget continues to burn at this rate throughout the SLO window,
there will be {{ sloWindowBudgetBurn }}.
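As a quick sanity check, here is a sketch of the relationship, assuming the burn rate stays constant:

```ts
// Minimal sketch: at a constant burn rate, how long until the error
// budget for the whole SLO window is exhausted?
function timeToExhaustBudgetHours(sloWindowHours: number, burnRate: number): number {
  return sloWindowHours / burnRate
}

// Example: a 30-day (720 h) SLO window at a 14.4x burn rate
console.log(timeToExhaustBudgetHours(720, 14.4)) // 50 hours
```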
Google SRE Workbook goes through 6 alerting strategies and
recommends
:
| Burn Rate | Error Budget | Long-Window | Short-Window | Action |
| --- | --- | --- | --- | --- |
| 14.4x | {{ percL10n(2) }} Consumed | 1 hour | 5 minutes | Page |
| 6x | {{ percL10n(5) }} Consumed | 6 hours | 30 minutes | Page |
| 1x | {{ percL10n(10) }} Consumed | 3 days | 6 hours | Ticket |
Note: The above values for Long-Window and Short-Window are based on a 1-month SLO window.
You can see your actual values in the comments below Long-Window and Short-Window.
Error budget is burning at the rate allowed by SLO
Error budget is burning faster than allowed by SLO
Long-window burn: {{ percL10n(longWindowPerc) }}
We don't want to wait for the entire error budget to be consumed before
alerting! It will be too late to take action.
Therefore, the alert should trigger before a significant portion of the
error budget is consumed.
Based on your setup, the alert will trigger after we have consumed
{{ percL10n(longWindowPerc) }} of the entire time allotted for the error budget
(or SLO compliance window) which is
.
Which is
.
Assuming that the entire error budget was available at the beginning of the incident,
the maximum time available to respond before the entire error budget is exhausted is:
Which is
.
Remember that this is the best case scenario.
In reality, you may have much less time if you don't want to consume the entire error budget for an incident.
Also note that the burn rate can be higher than {{ burnRate }}x.
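Here is a sketch of the timing math above, assuming a constant burn rate from the start of the incident and a full budget; the example numbers match the first row of the recommendation table:

```ts
// Minimal sketch: when the long-window alert fires and how much time
// is left to respond before the entire error budget is exhausted.
function longWindowTimings(sloWindowHours: number, burnRate: number, longWindowBudgetPerc: number) {
  const budgetLifetimeHours = sloWindowHours / burnRate               // budget gone after this long
  const detectionHours = budgetLifetimeHours * (longWindowBudgetPerc / 100)
  return { detectionHours, responseHours: budgetLifetimeHours - detectionHours }
}

// Example: 30-day (720 h) window, 14.4x burn rate, alert at 2% budget consumed
console.log(longWindowTimings(720, 14.4, 2)) // { detectionHours: 1, responseHours: 49 }
```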
After Burning {{ percL10n(longWindowPerc) }} of error budget
at or above {{ burnRate }}x
Warning: The alert is too "jumpy" and will trigger too often.
This may lead to alert fatigue or even worse: ignoring the alerts.
Note: The time to resolve (TTR) is too short for a human to react.
It is strongly recommended to automate the incident resolution instead of relying on human response to alerts.
Warning: Remember that the alert will trigger after {{ percL10n(longWindowPerc) }} of the error budget is consumed! That error budget is for {{ sloWindow.humanTime }}.
Based on your settings, an alert burns {{ percL10n(longWindowPerc) }} of the error budget just to trigger, and then it needs some time to resolve too.
How many alerts like this can you have in {{ sloWindow.humanTime }} before the
entire error budget is consumed?
Warning: The Long-Window is too short at this burn rate ({{ burnRate }}x), which may lead to alert fatigue.
Error: Division by zero! The Long-Window is too short for enough valid {{ sloWindow.eventUnitNorm }} to be counted.
Use Short-Window Alert
The purpose of the short-window alert is to reduce false alerts.
It checks a shorter lookback window (hence the name) to make sure that
the burn rate is still high before triggering the alert.
This reduces false positives where an alert would be triggered by a temporarily high burn rate,
at the expense of making the alerting setup more complex.
Ratio: 1/{{ shortWindowDivider }}
The Short-Window is usually 1/12th of the Long-Window (per
Google SRE Workbook
recommendation).
But you can play with different dividers to see how they impact
the detection time of the alert.
The Long-Window alert triggers after consuming {{ longWindowPerc }}%
of the total error budget.
Therefore, the Short-Window alert triggers after consuming:
Converted to time based on the current window ({{ sloWindow.humanTime }}):
This means the alert will trigger only if we are still
burning the error budget at least at the
{{ burnRate }}x burn rate
in the past
.
Still burning above {{ burnRate }}x in the last 1/{{shortWindowDivider}}
of long window
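A sketch of how the Short-Window relates to the Long-Window (the divider default of 12 follows the recommendation above; the example numbers are illustrative):

```ts
// Minimal sketch: derive the Short-Window and the share of error budget
// it must see consumed before the alert fires.
function shortWindow(longWindowMinutes: number, longWindowBudgetPerc: number, divider = 12) {
  return {
    windowMinutes: longWindowMinutes / divider,   // shorter lookback window
    budgetPerc: longWindowBudgetPerc / divider,   // budget consumed before firing
  }
}

// Example: a 60-minute Long-Window that fires at 2% budget consumed
console.log(shortWindow(60, 2)) // windowMinutes: 5, budgetPerc: ≈ 0.167
```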
Warning: The Short-Window is too short at this burn rate ({{ burnRate }}x), which may lead to alert fatigue.
Error: The Short-Window is too short for enough valid events to be counted.
Alert Policy
This is pseudo-code for triggering alerts based on the
SLI metric in relation to the desired SLO (
{{ percL10n(slo) }}
).
You need to translate it to your observability and/or alerting tool.
≤ {{ percL10n(slo) }}
&&
≤ {{ percL10n(slo) }}
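A minimal sketch of the same policy in code, assuming you can compute the SLI (in percent) over the long and short lookback windows in your observability tool; the function and parameter names are placeholders:

```ts
// Minimal sketch: fire only when both windows are at or below the SLO,
// which filters out short spikes that have already recovered.
function shouldAlert(longWindowSli: number, shortWindowSli: number, slo: number): boolean {
  return longWindowSli <= slo && shortWindowSli <= slo
}

console.log(shouldAlert(99.85, 99.2, 99.9)) // true: both windows are below 99.9
```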
Share
This app runs completely in the browser and has no backend.
So all you have to do is copy the following link. Whenever that link is clicked, it opens the app in the exact state it was in when you copied it.