Calculating Time-to-Resolution from Support Tickets
Your support team has an SLA: critical tickets resolved within 4 hours, high within 24 hours, medium within 72 hours. Someone exports last quarter's tickets from the help desk, hands you a CSV with 1,800 rows, and asks "are we meeting our SLAs?" Each row has a created_at timestamp and a resolved_at timestamp. The math is simple in theory. In practice, you need to handle open tickets, calculate durations, group by priority, and figure out the percentage that met the target.
Here's how to build that analysis in ExploreMyData with five pipeline steps.
The data
A standard ticket export has columns like ticket_id, created_at, resolved_at, status, priority, category, and assignee. Some tickets are still open, meaning resolved_at is empty. Those will mess up any time calculation if you don't filter them out first.
Step 1: Filter to resolved tickets only
Open the toolbar and select Filter. Set the condition to status = 'resolved'. This removes open, pending, and escalated tickets from the analysis. You're measuring resolution time, so you need tickets that actually have a resolution.
If you also want to include tickets marked as "closed" (some systems distinguish between resolved and closed), adjust the filter: status IN ('resolved', 'closed').
| ticket_id | status | priority | created_at | resolved_at | assignee |
|---|---|---|---|---|---|
| TKT-4411 | resolved | critical | 2026-01-08 09:14:00 | 2026-01-08 12:02:00 | A. Sharma |
| TKT-4412 | open | high | 2026-01-08 10:30:00 | | R. Dawson |
| TKT-4413 | resolved | medium | 2026-01-07 14:00:00 | 2026-01-09 11:45:00 | A. Sharma |
TKT-4412 is still open with no resolved_at value - filtering to status = 'resolved' removes it before any time calculation.
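To make the filter logic concrete, here is a sketch of what this step computes, written in plain Python over a list of dicts. The ticket rows are illustrative, and this is not ExploreMyData's actual implementation.

```python
# Sketch of the Filter step: keep only rows whose status is in the
# accepted set, so open tickets never reach the time calculation.
# Ticket data is illustrative.
tickets = [
    {"ticket_id": "TKT-4411", "status": "resolved", "resolved_at": "2026-01-08 12:02:00"},
    {"ticket_id": "TKT-4412", "status": "open", "resolved_at": None},
    {"ticket_id": "TKT-4413", "status": "resolved", "resolved_at": "2026-01-09 11:45:00"},
]

ACCEPTED = {"resolved", "closed"}  # mirrors status IN ('resolved', 'closed')
resolved = [t for t in tickets if t["status"] in ACCEPTED]

print([t["ticket_id"] for t in resolved])  # ['TKT-4411', 'TKT-4413']
```

TKT-4412 drops out because its status is "open", which is exactly why its empty resolved_at can never poison a duration calculation downstream.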
Step 2: Calculate resolution time
Select Date Diff from the Transform group. Set the start column to created_at and the end column to resolved_at. Choose the unit: hours for SLA tracking, or days if your thresholds are measured in days.
This creates a new column - call it resolution_hours - with the number of hours between creation and resolution. Under the hood, DuckDB calculates DATE_DIFF('hour', created_at, resolved_at).
Quick sanity check: scroll through the new column. If you see negative values, the timestamps might be swapped or there's a data quality issue. If you see values in the tens of thousands, some dates might be in the wrong format. Catch these now before the analysis.
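If you want to verify one row by hand, the fractional-hours version of that calculation is straightforward in plain Python. The timestamps below come from ticket TKT-4411 in the sample table; this is a manual check, not the tool's code.

```python
from datetime import datetime

# Manually reproduce resolution_hours for one ticket:
# the elapsed time between created_at and resolved_at, in hours.
fmt = "%Y-%m-%d %H:%M:%S"
created_at = datetime.strptime("2026-01-08 09:14:00", fmt)
resolved_at = datetime.strptime("2026-01-08 12:02:00", fmt)

resolution_hours = (resolved_at - created_at).total_seconds() / 3600
print(round(resolution_hours, 1))  # 2.8
```

A negative result here would mean the start and end columns are swapped, which is the same symptom the sanity check above warns about.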
Step 3: Extract the month
Select Extract Date and pull the month from created_at. This gives you a numeric month column (1-12) that you'll use to see trends over time.
If you want a more readable label, you can follow up with an Add Column step that maps numbers to names. But for pivoting purposes, the numeric month works fine.
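Both the numeric month and the optional readable label are easy to see in a quick Python sketch. The timestamp is illustrative; calendar.month_abbr is one stdlib way to do the number-to-name mapping the Add Column step would handle.

```python
import calendar
from datetime import datetime

# Sketch of the Extract Date step: pull the numeric month (1-12)
# from created_at, plus an optional human-readable abbreviation.
created_at = datetime.strptime("2026-01-08 09:14:00", "%Y-%m-%d %H:%M:%S")
month = created_at.month            # 1-12, good for pivoting/sorting
label = calendar.month_abbr[month]  # optional readable label

print(month, label)  # 1 Jan
```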
Step 4: Flag SLA compliance
Select Add Column. Name the column sla_met and enter:
CASE WHEN priority = 'critical' AND resolution_hours <= 4 THEN 'yes' WHEN priority = 'high' AND resolution_hours <= 24 THEN 'yes' WHEN priority = 'medium' AND resolution_hours <= 72 THEN 'yes' WHEN priority = 'low' AND resolution_hours <= 168 THEN 'yes' ELSE 'no' END
Each priority level gets its own threshold. Critical tickets have 4 hours. High gets 24. Medium gets 72 (three days). Low gets 168 (one week). Adjust these to match your actual SLA targets.
Now every resolved ticket has a yes/no flag. You can immediately see the ratio. But the real value comes from the next step, where you break it down by priority and month.
| ticket_id | priority | resolution_hours | created_at_month | sla_met |
|---|---|---|---|---|
| TKT-4411 | critical | 2.8 | 1 | yes |
| TKT-4413 | medium | 45.8 | 1 | yes |
| TKT-4398 | high | 31.2 | 1 | no |
| TKT-4401 | critical | 6.1 | 1 | no |
TKT-4401 is critical but took 6.1 hours - over the 4-hour threshold. TKT-4398 is high priority but took 31.2 hours, exceeding the 24-hour target.
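The CASE expression reads naturally as a lookup table plus a comparison. Here is the same logic as a small Python function, using the tutorial's example thresholds; swap in your own SLA targets.

```python
# The sla_met CASE expression as a Python function.
# Thresholds mirror the tutorial's example SLAs (hours per priority).
SLA_HOURS = {"critical": 4, "high": 24, "medium": 72, "low": 168}

def sla_met(priority, resolution_hours):
    limit = SLA_HOURS.get(priority)
    # An unknown priority falls through to 'no', like the CASE's ELSE branch.
    return "yes" if limit is not None and resolution_hours <= limit else "no"

print(sla_met("critical", 2.8))  # yes  (TKT-4411, under 4 hours)
print(sla_met("critical", 6.1))  # no   (TKT-4401, over the 4-hour threshold)
print(sla_met("high", 31.2))     # no   (TKT-4398, over the 24-hour target)
```

Keeping the thresholds in one mapping makes the inevitable SLA renegotiation a one-line change rather than a four-branch edit.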
Step 5: Pivot to see the summary
Select Pivot from the Data group. Set the row grouping to priority, the column grouping to created_at_month, and the value to resolution_hours with the aggregation set to AVG.
The result is a table with priority levels as rows and months as columns, showing the average resolution time for each combination. At a glance, you can see whether critical ticket resolution is trending up or down, and whether any priority level is consistently missing its target.
| priority | Month 1 (Jan) | Month 2 (Feb) | Month 3 (Mar) | SLA target |
|---|---|---|---|---|
| critical | 3.9 | 4.8 | 3.1 | ≤4 hrs |
| high | 19.4 | 22.1 | 17.8 | ≤24 hrs |
| medium | 54.2 | 61.0 | 48.5 | ≤72 hrs |
| low | 112.0 | 134.5 | 98.3 | ≤168 hrs |
Average resolution hours by priority and month. February shows critical tickets slipping above the 4-hour SLA - worth investigating.
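Under the hood, an AVG pivot is just a group-by on (priority, month) with a running sum and count. A minimal Python sketch over a few illustrative rows:

```python
from collections import defaultdict

# Sketch of the Pivot step: average resolution_hours per
# (priority, month) cell. Rows are illustrative.
rows = [
    {"priority": "critical", "month": 1, "resolution_hours": 2.8},
    {"priority": "critical", "month": 1, "resolution_hours": 6.1},
    {"priority": "medium",   "month": 1, "resolution_hours": 45.8},
]

sums = defaultdict(lambda: [0.0, 0])  # key -> [total_hours, count]
for r in rows:
    key = (r["priority"], r["month"])
    sums[key][0] += r["resolution_hours"]
    sums[key][1] += 1

pivot = {k: total / n for k, (total, n) in sums.items()}
print(round(pivot[("critical", 1)], 2))  # 4.45 (average of 2.8 and 6.1)
```

One caveat worth knowing about averages: a single 200-hour outlier can drag a cell over the threshold even when most tickets were fine, so pair the average with the compliance-rate view described below.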
Going deeper
The five-step pipeline gives you the core SLA report. But you can extend it. A few ideas:
- By assignee: Change the pivot row grouping to assignee to see which team members resolve tickets fastest.
- By category: Pivot on category to find which issue types take the longest. If "billing" tickets average 48 hours while "password reset" averages 2 hours, that tells you something about process.
- SLA compliance rate: Add another pivot with sla_met as the value and COUNT as the aggregation. Then calculate the percentage of "yes" versus total.
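The compliance-rate calculation from the last idea is a count of "yes" flags over the total. A one-glance Python sketch, with an illustrative sla_met column:

```python
# Sketch of the SLA compliance rate: share of resolved tickets
# whose sla_met flag is 'yes'. The flags list is illustrative.
flags = ["yes", "yes", "no", "yes"]

rate = flags.count("yes") / len(flags) * 100
print(f"{rate:.0f}% of resolved tickets met SLA")  # 75% of resolved tickets met SLA
```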
The full pipeline
- Filter (status = 'resolved')
- Date Diff (resolved_at - created_at in hours)
- Extract Date (month from created_at)
- Add Column (sla_met: yes/no based on priority thresholds)
- Pivot (avg resolution_hours by priority and month)
Five steps. The pipeline takes a flat list of 1,800 tickets and turns it into a summary table that answers the question directly: are we meeting our SLAs, and where are we falling short?
Next month, when the new export comes in, load it and the pipeline rebuilds the report automatically. No formulas, no manual grouping, no pivot table wizardry in a spreadsheet.