Read the announcement about this app.

Service Level Calculator

SLI

In simple words, Service Level Indicators (SLIs) are the metrics that represent how the consumers of the service perceive its reliability. They are normalized to a number between 0 and 100 using this formula:

SLI = Good / Valid × 100
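For example, if 999,850 out of 1,000,000 valid requests were good:

SLI = 999,850 / 1,000,000 × 100 = 99.985%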

You can read more about the definition of Good and Valid.

Common SLIs include latency, availability, yield, durability, correctness, etc.

🎉Tip: Use Templates

Type

SLIs can be either:

  • Time-Based: Concerned with the duration of good time in a given period. The duration is actually a time window over which the data is aggregated into a good/bad result. In a sense, a Time-Based SLI is also an Event-Based SLI where each event is an aggregation window.
  • Event-Based: Concerned with the count of good events in a set of valid events in a given time period.
Learn more here.

The time slot is the time window over which the metric data is aggregated to calculate a good/bad time period.

For example, probing an endpoint every 60 seconds to see if it is available assumes that a successful probe means the endpoint was available for the entire 60 seconds.

Another example is percentiles. When calculating the 99th percentile of the latency every 5 minutes, the aggregation window is 5 x 60 = 300 seconds.
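As a rough sketch (the probe results below are made up, and the 60-second slot size is just the example from above), turning per-slot probe results into a time-based SLI could look like this:

    # One result per 60-second time slot: True means the probe succeeded,
    # so the whole slot is counted as good time.
    probe_results = [True, True, False, True, True, True, True, False, True, True]

    good_slots = sum(1 for ok in probe_results if ok)
    valid_slots = len(probe_results)

    sli = good_slots / valid_slots * 100   # SLI = Good / Valid x 100
    print(f"Time-based SLI: {sli:.2f}%")   # 80.00% for this sample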

Typical time slot lengths
Time Slot Seconds
{{ p.title }} {{ p.seconds }}

The unit of the event that the SLI is measuring. This is mainly used in the UI to make it easier to understand.

If the Good and Valid events are the same type, you can use a custom event name. Otherwise, you can just use the default and generic "event".

Common event units
Event Unit Use case
{{ p.unit }} {{ p.useCase }}

What are good {{ sloWindow.normUnit }} from the consumer's perspective? What do good {{ sloWindow.normUnit }} look like? What metric can you measure to identify the good {{ sloWindow.normUnit }} among all the valid {{ sloWindow.normUnit }}?

What are the {{ sloWindow.normUnit }} that are important to the service consumer? You probably don't want to count all the {{ sloWindow.normUnit }}. This is an opportunity to narrow down the scope of the optimization and what triggers an alert.

For simplicity, sometimes "total" is used instead of "valid". But there is a difference.

While the Service Level Indicator guides the optimization, the definition of valid narrows the scope of that optimization for two reasons:

  • Focus the optimization effort
  • Clarify responsibility and control

Leave this field empty to use "total" instead.
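As an illustrative sketch (the request log, the /healthz path, the 500-status check, and the 500 ms latency threshold are all assumptions made for the example), defining good and valid as filters over raw events could look like this:

    # Hypothetical request log.
    requests = [
        {"path": "/api/orders", "status": 200, "latency_ms": 120},
        {"path": "/healthz",    "status": 200, "latency_ms": 3},
        {"path": "/api/orders", "status": 500, "latency_ms": 40},
        {"path": "/api/orders", "status": 200, "latency_ms": 900},
    ]

    # Valid: consumer-facing requests only; health checks are out of scope.
    valid = [r for r in requests if r["path"] != "/healthz"]

    # Good: valid requests that succeeded and were fast enough.
    good = [r for r in valid if r["status"] < 500 and r["latency_ms"] < 500]

    sli = len(good) / len(valid) * 100
    print(f"SLI = {len(good)} / {len(valid)} x 100 = {sli:.1f}%")   # 33.3% here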

Service Level Formula

The formula for calculating the SLI for the given SLO window is the percentage of good out of valid.

Depending on whether the SLI is time-based or event-based, the formula calculates the percentage of good time slots or good events.

SLI = Good {{ sloWindow.normUnit }} / Valid {{ sloWindow.normUnit }} × 100

count(good) / count(valid) × 100

Service Level Objective (SLO) is the target percentage of good {{ sloWindow.normUnit }}.

Using the two sliders below you can fine tune the SLO to your needs. The first slider is for the integer part of the percentage ({{ percL10n(sloInt) }}). The second slider is for the fractional part of the percentage ({{ percL10n(sloFrac) }}).

Typical SLO values
Informal Name SLO Value
{{ p.title }} {{percL10n(p.slo)}}

Note: Just be mindful of the price tag for this high service level objective!

Everyone wants the highest possible number but not everyone is willing to pay the price.

Note: this is an unusually low service level objective. Typically, service level objectives are above {{ percL10n(90) }}, with some rare exceptions. Please check the Error budget for the implications of your chosen SLO.

This slider allows fine-tuning the SLO. It is mostly a convenience for deciding on a reasonable error budget while keeping an eye on it.


The SLO window (also known as the compliance period) is the time period for which the SLO is calculated.

Essentially this adjusts the forgiveness of the SLO. For example, if the window is 30 days, we are not concerned with any incidents or SLO breaches that happened before that.

Smaller windows also help prevent the error budget from accumulating too much. For example, if the SLO is 99% for a time-based Availability SLI (uptime), the error budget allows 432 minutes of downtime per month. This amount can be consumed in multiple short downtimes during the month or in one long chunk of downtime. But the same SLO allows only about 100 minutes of downtime per week.
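A quick sketch of that arithmetic (using the 99% SLO and the 30-day and 7-day windows from the example above):

    # Downtime allowed by a 99% time-based availability SLO.
    error_budget = (100 - 99) / 100            # 0.01

    minutes_per_month = 30 * 24 * 60           # 43,200 minutes in 30 days
    minutes_per_week = 7 * 24 * 60             # 10,080 minutes in 7 days

    print(minutes_per_month * error_budget)    # 432.0 minutes per 30 days
    print(minutes_per_week * error_budget)     # 100.8 minutes per week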

It is usually 30 days or 4 weeks.

You can play with different ranges to see how a given SLO translates to different good {{ sloWindow.normUnit }} and how it impacts the error budget.

Typical compliance periods
Window Days Advantage
{{ p.title }} {{ p.days }} {{ p.useCase }}

{{ sloWindow }}

Error budget: {{ percL10n(errorBudgetPerc) }}

Error budget is one of the core ideas behind using SLIs/SLOs to improve reliability. Instead of denying or forbidding errors, the error budget allows the system to fail within a predefined limit.

The number one enemy of reliability is change. But we need change to be able to improve the system. Error budgets resolve that tension: they give the team a budget of errors to spend on improving the system while keeping the consumers happy enough.

Error budget is the complement of SLO. It is the percentage of bad {{ sloWindow.normUnit }} that you can have before you violate the SLO.

error_budget = 100 - SLO = {{ percL10n(100) }} - {{ percL10n(slo) }} = {{ percL10n(errorBudgetPerc) }}

Warning: The error budget is 0 based on the number of valid {{ sloWindow.normUnit }}.

Here you can enter the numbers for your expected load and see how many {{ sloWindow.normUnit }} are allowed to fail during the SLO window while still being within the error budget.
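As a minimal sketch (the 99.9% SLO and the two-million-event load are made-up inputs, not values read from this calculator), the translation from expected load to allowed failures is:

    # How many events may fail during the SLO window while staying in budget?
    slo = 99.9
    error_budget = (100 - slo) / 100        # 0.001

    expected_valid_events = 2_000_000       # expected load over the SLO window
    allowed_failures = expected_valid_events * error_budget

    print(f"Allowed bad events: {allowed_failures:.0f}")   # 2000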


When one of the events violates the condition, how much does it cost the business or your team?

This cost will be used to put a tangible number on various windows and events. It might be hard to put a number on failures, especially if some resilience patterns are part of the architecture.

There are many ways to make failures cheaper. In a future article, we will discuss all patterns of reliability and how to make errors cheap. In the meantime, check out the following techniques:

  • Fallback
  • Failover

You can set the currency to see how much it costs to violate the SLO. If you can't put a currency on the errors, feel free to get creative.

Typical Currencies
Abbreviation Description
{{ p.currency }} {{ p.description }}

Alerting

What is the point of setting SLIs/SLOs if we are not going to take action when the SLO is violated?

Alerting on error budgets enables us to stay on top of the reliability of our system. When using service levels, the alert triggers on the rate at which the error budget is consumed.

When setting an alert, the burn rate decides how quickly the alert reacts to errors.

  • Too fast and it will lead to false positives (alerting unnecessarily) and alert fatigue (too many alerts).
  • Too slow and the error budget will be burned before you know it.

Google SRE Workbook goes through 6 alerting strategies based on SLOs.

Burn rate is the rate at which the error budget is consumed, relative to the baseline of spending exactly one full error budget over the SLO window.

A burn rate of 1x means that the error budget will be consumed exactly over the SLO window (acceptable).

A burn rate of 2x means that the error budget will be consumed in half the SLO window. This is not acceptable because, at this rate, the SLO will be violated before the end of the SLO window.
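A small sketch of that relationship (the 99.9% SLO, the 30-day window, and the currently observed error rate are assumed example values):

    # Burn rate: how fast the budget is being consumed vs. the allowed pace.
    slo = 99.9
    error_budget = (100 - slo) / 100        # 0.1% of valid events may be bad

    observed_error_rate = 0.006             # 0.6% of events are failing right now
    burn_rate = observed_error_rate / error_budget    # 6x

    window_hours = 30 * 24
    hours_to_exhaustion = window_hours / burn_rate    # 120 hours, i.e. 5 days
    print(f"{burn_rate:.0f}x burn rate, budget gone in {hours_to_exhaustion:.0f} h")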

Google SRE Workbook goes through 6 alerting strategies and recommends:

Burn Rate Error Budget Long-Window Short-Window Action
14.4x {{ percL10n(2) }} Consumed 1 hour 5 minutes Page
6x {{ percL10n(5) }} Consumed 6 hours 30 minutes Page
1x {{ percL10n(10) }} Consumed 3 days 6 hours Ticket

Note: The above values for Long-Window and Short-Window are based on a 1-month SLO window. You can see your actual values in the comments below Long-Window and Short-Window.
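Here is a small sketch of where those Long-Window values come from (assuming the 30-day window mentioned in the note): the lookback is the time it takes the given burn rate to consume the given share of the error budget.

    # Long-Window = (budget share consumed) x (SLO window) / (burn rate)
    window_hours = 30 * 24   # 720 hours in a 30-day SLO window

    for burn_rate, budget_share in [(14.4, 0.02), (6, 0.05), (1, 0.10)]:
        hours = budget_share * window_hours / burn_rate
        print(f"{burn_rate}x consuming {budget_share:.0%} -> {hours:g} hours")
    # 14.4x -> 1 hour, 6x -> 6 hours, 1x -> 72 hours (3 days)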

Long-Window

The long-window alert is the "normal" alert. The reason it is called "long" is to distinguish it from the "short-window" alert, which is primarily used to reduce false positives and improve the alert reset time.

We don't want to wait for the entire error budget to be consumed before alerting! It will be too late to take action.

Therefore the alert should trigger before a significant portion of the error budget is consumed.

Based on your setup, the alert will trigger after we have consumed {{ percL10n(longWindowPerc) }} of the entire time allotted for the error budget (or SLO compliance window) which is .

TTTrigger = {{ percL10n(longWindowPerc) }} × {{ sloWindow.humanSec }} = {{ alertLongWindow.humanSec }}

Assuming that the entire error budget was available at the beginning of the incident, the maximum time available to respond before the entire error budget is exhausted is:

Because:

TTRespond max = {{ errorBudgetBurn.humanSec }} - {{ alertLongWindow.humanSec }} = {{ alertTTRWindow.humanSec }}

Which is .

Remember that this is the best case scenario. In reality, you may have much less time if you don't want to consume the entire error budget for an incident. Also note that the burn rate can be higher than {{ burnRate }}x.
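A sketch of those two quantities (the 30-day window, the 6x burn rate, and the 6-hour Long-Window are assumed example values, not your actual settings):

    # Detection time vs. time left to respond, at a constant burn rate.
    window_hours = 30 * 24                       # 720-hour SLO compliance window
    burn_rate = 6
    long_window_perc = 6 / window_hours          # Long-Window as a share of the window

    time_to_trigger = long_window_perc * window_hours               # TTTrigger = 6 hours
    time_to_exhaust_budget = window_hours / burn_rate               # 120 hours at 6x
    max_time_to_respond = time_to_exhaust_budget - time_to_trigger  # 114 hours

    print(time_to_trigger, time_to_exhaust_budget, max_time_to_respond)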

Warning: The alert is too "jumpy" as in: it will trigger too often. This may lead to alert fatigue or even worse: ignoring the alerts.

Note: The time to resolve (TTR) is too short for a human to react. It is strongly recommended to automate the incident resolution instead of relying on human response to alerts.

Warning: Remember that the alert will trigger only after {{ percL10n(longWindowPerc) }} of the error budget is consumed! That error budget is for {{ sloWindow.humanTime }}.

Based on your settings, an alert burns {{ percL10n(longWindowPerc) }} of the error budget just to trigger. Then it needs some time to resolve too.

How many alerts like this can you have in {{ sloWindow.humanTime }} before the entire error budget is consumed?

Warning: The Long-Window is too short at this burn rate ({{ burnRate }}x), which may lead to alert fatigue.

Error: Division by zero! The Long-Window is too short for enough valid {{ sloWindow.normUnit }} to be counted.

Alert Policy

This is pseudo-code for triggering alerts based on the SLI metric in relation to the desired SLO. You need to translate it to your observability and/or alerting tool.

count(good, Long-Window) / count(valid, Long-Window) ≤ threshold
&&
count(good, Short-Window) / count(valid, Short-Window) ≤ threshold
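As a hedged sketch of how such a policy could be evaluated in code (the function names, the 99.9% SLO, the 14.4x burn rate, and the sample counts are assumptions; translate the idea to your own tooling):

    # Multiwindow, multi-burn-rate alert check (sketch).
    def sli(good: int, valid: int) -> float:
        return good / valid * 100 if valid else 100.0

    def should_page(counts_long, counts_short, slo=99.9, burn_rate=14.4):
        # Page only if BOTH lookback windows show the error budget burning at
        # least `burn_rate` times faster than the SLO allows.
        error_budget = 100 - slo
        threshold = 100 - burn_rate * error_budget   # SLI at or below this is bad
        return sli(*counts_long) <= threshold and sli(*counts_short) <= threshold

    # (good, valid) counts over the Long-Window and the Short-Window lookbacks.
    print(should_page(counts_long=(97_000, 100_000), counts_short=(8_050, 8_300)))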

The purpose of the short-window alert is to reduce false alerts.

It checks a shorter lookback window (hence the name) to make sure that the burn rate is still high before triggering the alert. This reduces false positives where an alert is triggered for a temporary high burn rate.

The Short-Window alert reduces false positives at the expense of making the alerting setup more complex.

The Short-Window is usually 1/12th of the Long-Window (per Google SRE Workbook recommendation). But you can play with different dividers to see how they impact the detection time of the alert.

Long-Window alert triggers after consuming {{ longWindowPerc }}% of the total error budget. Therefore, the Short-Window alert triggers after consuming:

Lookback Short = Lookback Long / {{ shortWindowDivider }} = {{ percL10n(longWindowPerc) }} / {{ shortWindowDivider }} = {{ percL10n(alertShortWindowPerc) }}

Converted to time based on the current window ({{ sloWindow.humanTime }}):

Lookback Short = {{ percL10n(alertShortWindowPerc) }} × {{ sloWindow.humanSec }} = {{ alertShortWindow.humanSec }}

This means the alert will trigger only if we are still burning the error budget at least at the x burn rate in the past .

Warning: The Short-Window is too short at this burn rate ({{ burnRate }}x), which may lead to alert fatigue.

Error: The Short-Window is too short for enough valid events to be counted.

Share

This app completely runs in the browser and has no backend.

So all you have to do is copy the following link. Whenever that link is clicked, it opens the app in the exact state it was in when you copied the link.

{{ toastCaption }}
