QA Velocity Metrics: The 7 Numbers That Prove Your Team Is Getting Better

Bug count per sprint is not a QA metric. Test case count is not a QA metric. Meaningful QA measurement tracks the outcomes the business cares about: escaped defects, release confidence, feedback loop speed, and automation ROI. This guide covers the metrics that demonstrate QA value and identify process weaknesses.


Every QA team gets asked "how do we know QA is working?" At most companies, the answer is some combination of test case count, bugs found per sprint, and automation percentage. These numbers are easy to collect and easy to game.

A QA team that writes low-quality test cases to inflate the count, files noise bugs to look productive, and hits 80% "automated" by covering trivial paths while leaving critical user journeys manual would score well on all of these metrics while delivering no real quality improvement.

Good QA metrics measure outcomes, not activities. Here is the framework.


The Metrics Framework

flowchart LR
    A[Leading Indicators\npredict future quality] --> B[Lagging Indicators\nconfirm past quality]

    A1[Test coverage delta] --> A
    A2[Defect detection rate\nin pre-production] --> A
    A3[Time to detect\na regression] --> A

    B1[Escaped defect rate] --> B
    B2[Mean time to recover\nfrom prod incident] --> B
    B3[Release success rate] --> B

Metric 1: Escaped Defect Rate

The single most important QA metric. An "escaped defect" is a bug that reached production without being caught in QA.

Calculation: $$\text{Escaped Defect Rate} = \frac{\text{Defects Found in Production}}{\text{Total Defects Found (Pre-prod + Prod)}} \times 100$$

Target: < 10% for most SaaS teams (< 5% for mature QA practices)

How to Track:

-- In your bug tracking database (simplified)
SELECT
  DATE_TRUNC('month', created_at) AS month,
  COUNT(*) FILTER (WHERE environment = 'production') AS escaped,
  COUNT(*) FILTER (WHERE environment != 'production') AS caught_early,
  ROUND(
    COUNT(*) FILTER (WHERE environment = 'production') * 100.0 / COUNT(*),
    1
  ) AS escape_rate_pct
FROM defects
WHERE created_at >= NOW() - INTERVAL '6 months'
GROUP BY 1
ORDER BY 1;

Metric 2: Defect Detection Rate by Stage

Where in the pipeline are bugs being caught? Early-stage detection is cheaper:

| Detection Stage                  | Relative Cost to Fix | Ideal Detection % |
| -------------------------------- | -------------------- | ----------------- |
| During development (code review) | 1× (baseline)        | 20–30%            |
| Unit/integration tests           | 1×                   | 30–40%            |
| QA/staging                       | 10×                  | 25–35%            |
| Canary/beta release              | 25×                  | 5–10%             |
| Production (all users)           | 100×                 | < 10%             |

If you're finding most bugs in staging rather than unit tests, your test pyramid is inverted.
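
A stage-by-stage breakdown is easy to compute once each defect record carries the stage that caught it. Here is a hedged sketch; the `Stage` names and the `Defect` shape are illustrative, not from a specific tracker:

```typescript
// Sketch: compute what share of defects each pipeline stage is catching.
// Stage names and the Defect shape are illustrative assumptions.
type Stage = 'dev' | 'unit' | 'staging' | 'canary' | 'production';

interface Defect {
  id: string;
  detectedAt: Stage; // the stage that first caught this defect
}

function detectionRateByStage(defects: Defect[]): Record<Stage, number> {
  const counts: Record<Stage, number> = {
    dev: 0, unit: 0, staging: 0, canary: 0, production: 0,
  };
  for (const d of defects) counts[d.detectedAt] += 1;

  const total = defects.length || 1; // avoid divide-by-zero on empty input
  const rates = {} as Record<Stage, number>;
  for (const stage of Object.keys(counts) as Stage[]) {
    // percentage with one decimal place
    rates[stage] = Math.round((counts[stage] / total) * 1000) / 10;
  }
  return rates;
}
```

Comparing the output against the "Ideal Detection %" column above shows at a glance whether the pyramid is inverted.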


Metric 3: Mean Time to Detect (MTTD)

How long does it take to detect a regression after it's introduced?

Why it matters: A regression that ships on Monday and is detected on Thursday has already been seen by users for 3 days. A regression detected by automated CI in 12 minutes is caught before deployment.

Tracking approach:

  1. For each production incident, record when the defective code was committed
  2. Record when the defect was detected
  3. MTTD = detection time − commit time
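
Given those two timestamps per incident, the calculation is a few lines. A minimal sketch, assuming each incident record stores the commit and detection times (field names are illustrative):

```typescript
// Sketch: MTTD across incidents. Field names are illustrative assumptions.
interface Incident {
  committedAt: Date; // when the defective code was committed
  detectedAt: Date;  // when the defect was first detected
}

function meanTimeToDetectHours(incidents: Incident[]): number {
  if (incidents.length === 0) return 0;
  const totalMs = incidents.reduce(
    (sum, i) => sum + (i.detectedAt.getTime() - i.committedAt.getTime()),
    0,
  );
  return totalMs / incidents.length / 3_600_000; // milliseconds → hours
}
```
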

Improvement levers:

  • More granular CI test coverage → reduces MTTD to minutes
  • Post-deploy smoke tests → caps MTTD at deployment time
  • Real-user monitoring → caps MTTD at hours instead of days

Metric 4: Automation Coverage (with context)

"Automation coverage" is meaningless without knowing what is automated:

// Better automation coverage tracking: weight by risk
interface TestCoverage {
  feature: string;
  riskLevel: 'critical' | 'high' | 'medium' | 'low';
  manualTests: number;
  automatedTests: number;
  coveragePercent: number;
}

// A critical-path checkout flow with 40% automation coverage
// is worse than a low-risk admin page with 20% coverage

Track automation coverage by risk tier:

| Risk Tier | Feature Examples                             | Target Automation |
| --------- | -------------------------------------------- | ----------------- |
| Critical  | Checkout, auth, billing, data export         | > 90%             |
| High      | Core user journeys: create project, run scan | > 70%             |
| Medium    | Secondary features: settings, notifications  | > 50%             |
| Low       | Admin tools, edge-case pages                 | Best effort       |

An 80% automation rate across all tests is a worse outcome than 95% automation on critical paths and 30% on low-risk paths.
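
One way to make that comparison concrete is a single risk-weighted score. This is a sketch, not a standard formula: the weights are illustrative assumptions to tune against your own risk model, and the interface is repeated (trimmed) from above so the snippet stands alone:

```typescript
// Sketch: risk-weighted automation coverage. Weights are illustrative.
type RiskLevel = 'critical' | 'high' | 'medium' | 'low';

interface TestCoverage {
  feature: string;
  riskLevel: RiskLevel;
  coveragePercent: number;
}

const RISK_WEIGHT: Record<RiskLevel, number> = {
  critical: 8, high: 4, medium: 2, low: 1,
};

function riskWeightedCoverage(features: TestCoverage[]): number {
  let weighted = 0;
  let totalWeight = 0;
  for (const f of features) {
    weighted += f.coveragePercent * RISK_WEIGHT[f.riskLevel];
    totalWeight += RISK_WEIGHT[f.riskLevel];
  }
  // weighted average, one decimal place
  return totalWeight === 0 ? 0 : Math.round((weighted / totalWeight) * 10) / 10;
}
```

Under this scoring, undercovered critical paths drag the number down far more than undercovered admin pages, which is the behavior you want the metric to reward.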


Metric 5: Test Suite Feedback Speed

How fast does the test suite tell you if something is broken?

Target benchmarks:

| Suite               | Target Runtime | If Slower                             |
| ------------------- | -------------- | ------------------------------------- |
| Unit tests          | < 60 seconds   | Investigate slow tests, parallelize   |
| Integration tests   | < 5 minutes    | Parallelize, scope to changed modules |
| E2E smoke suite     | < 10 minutes   | Trim to critical path only            |
| Full E2E regression | < 30 minutes   | Parallelize with sharding             |

A test suite that takes 45 minutes to run will be skipped. Developers will merge without waiting for it. Fast feedback loops are a prerequisite for everything else.
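
The budgets in the table above can be encoded as a CI check so regressions in feedback speed get flagged like any other regression. A sketch, with illustrative suite names:

```typescript
// Sketch: flag suites that exceed their feedback-speed budget.
// Budget values mirror the table above; suite names are illustrative.
const RUNTIME_BUDGET_SECONDS: Record<string, number> = {
  unit: 60,
  integration: 5 * 60,
  'e2e-smoke': 10 * 60,
  'e2e-regression': 30 * 60,
};

function overBudgetSuites(runtimes: Record<string, number>): string[] {
  return Object.entries(runtimes)
    // a suite with no budget defined is never flagged
    .filter(([suite, seconds]) => (RUNTIME_BUDGET_SECONDS[suite] ?? Infinity) < seconds)
    .map(([suite]) => suite);
}
```
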


Metric 6: Flaky Test Rate

Flaky tests are tests that fail intermittently without a code change. They erode trust in the test suite.

Tracking:

# Flaky tests fail, then pass on a rerun with no code change. One rough
# proxy via the GitHub Actions API: count recent workflow runs that only
# succeeded on a second (or later) attempt.
gh api 'repos/{owner}/{repo}/actions/runs?per_page=50' \
  --jq '[.workflow_runs[] | select(.run_attempt > 1 and .conclusion == "success")] | length'

Target: < 2% flaky test rate in the stable suite

Why it matters: A 10% flaky rate means developers lose trust and start ignoring failures. "Oh that just fails sometimes" is the most dangerous phrase in QA culture.
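
For per-test tracking (rather than the per-run proxy above), one approach is to flag any test that both fails and passes on the same commit. A hedged sketch; the `TestResult` shape is an illustrative assumption about what your CI history exposes:

```typescript
// Sketch: find flaky tests, defined here as tests with both a pass and a
// fail recorded on the same commit. The TestResult shape is illustrative.
interface TestResult {
  testName: string;
  commitSha: string;
  passed: boolean;
}

function flakyTests(results: TestResult[]): string[] {
  // outcomes seen per (test, commit) pair
  const outcomes = new Map<string, Set<boolean>>();
  for (const r of results) {
    const key = `${r.testName}@${r.commitSha}`;
    if (!outcomes.has(key)) outcomes.set(key, new Set());
    outcomes.get(key)!.add(r.passed);
  }
  const flaky = new Set<string>();
  for (const [key, seen] of outcomes) {
    if (seen.size === 2) flaky.add(key.split('@')[0]); // saw both pass and fail
  }
  return [...flaky].sort();
}
```

Dividing the flagged count by the total number of tests gives the flaky rate to compare against the < 2% target.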

Related articles: see the full QA manager playbook for metrics and team strategy, dashboards that make QA velocity visible to the whole organisation, and turning velocity metrics into the ROI data leadership cares about.


The QA Monthly Report Template

Consistent, clear reporting builds organizational trust in QA:

## QA Monthly Report — [Month Year]

### Quality Indicators

| Metric                            | This Month | Last Month | Trend           |
| --------------------------------- | ---------- | ---------- | --------------- |
| Escaped defect rate               | 6.2%       | 9.1%       | ↓ Improving     |
| Defects caught in QA              | 47         | 39         | ↑ More coverage |
| Production incidents (QA-related) | 2          | 4          | ↓ Improving     |
| MTTD (average)                    | 4.2 hours  | 8.1 hours  | ↓ Improving     |

### Automation Health

| Metric                 | Value                 |
| ---------------------- | --------------------- |
| Critical path coverage | 87% (target: 90%)     |
| Flaky test rate        | 1.8% (target: <2%)    |
| CI smoke suite runtime | 8m 22s (target: <10m) |

### Notable Events

- Blocked release [v2.4.1] on 2026-03-14 due to S2 regression in Apple Pay checkout
- Introduced visual regression suite for dark mode (now covering 12 routes)
- Reduced flaky rate from 4.1% to 1.8% by fixing 6 timing-sensitive tests

### Next Month Focus

1. Increase critical path automation coverage from 87% → 90%
2. Complete security testing for OAuth2 refresh token flow
3. Establish MTTD baseline per feature area

Metrics are not the point. The point is the behavior change they drive: faster feedback loops, fewer escaped defects, more reliable releases. Choose metrics that reward those outcomes and watch the team optimize for things that actually matter.

Get quantifiable data on your application's health: Try ScanlyApp free and see real metrics on what's passing, failing, and changing in your application across every deploy.

Related Posts

Onboarding Junior QA Engineers: A 30-Day Plan That Actually Works
QA Leadership
7 min read

Most engineering onboarding is thrown together. A Notion doc with setup instructions, a week of shadowing, and then 'dive in.' For junior QA specifically, this approach creates slow ramp times, a shallow understanding of the system, and bad habits that take months to unlearn. Here is a 30-day structured onboarding plan that gets junior QA engineers contributing meaningfully in four weeks.

The Art of Blocking a Release: QA's Go/No-Go Decision Framework
QA Leadership
7 min read

Saying no to a release is one of the hardest things a QA engineer does. Done poorly, it creates adversarial relationships and gets QA bypassed. Done well, it protects users, demonstrates the value of the QA function, and earns lasting trust. This guide provides a decision framework and communication playbook for release gate decisions.