
Building a QA Center of Excellence: Standardization That Scales Without the Bureaucracy

A QA Center of Excellence (CoE) standardizes testing practices, tools, and knowledge across teams — but done wrong, it becomes a bottleneck that slows everyone down. This guide covers how to structure a lightweight, effective QA CoE that elevates quality across an entire engineering organization without creating a centralized approval queue.


The phrase "Center of Excellence" often conjures images of approval gates, heavyweight processes, and committees that review pull requests before anything ships. Done that way, a QA CoE becomes the thing that slows engineering down and gets bypassed.

Done right, a QA CoE is an enablement function. It provides the shared tools, documented standards, training resources, and community that allow individual teams to operate autonomously at a high standard of quality, without each team reinventing the wheel or repeating the same mistakes.

The distinction: the CoE provides the platform, not the permissions. Individual teams decide how they work within that platform.


The QA CoE Operating Model

flowchart TD
    A[QA Center of Excellence] --> B[Shared Tools & Frameworks\nPlaywright setup, fixtures, utilities]
    A --> C[Standards & Guidelines\ncoverage targets, naming, test design patterns]
    A --> D[Knowledge Base\nplaybooks, retrospectives, training]
    A --> E[Community of Practice\nweekly sync, office hours, Slack]
    A --> F[Metrics & Visibility\norg-wide quality dashboard]

    B --> G[Feature Team A]
    C --> G
    D --> G

    B --> H[Feature Team B]
    C --> H
    D --> H

    B --> I[Feature Team C]
    C --> I
    D --> I

The individual feature teams retain ownership of their test suites. The CoE maintains the shared infrastructure they build on.


The Three Responsibilities of a QA CoE

1. Shared Test Infrastructure

Maintain and evolve the tooling that all teams use:

// packages/test-utils/src/index.ts
// Shared test utilities maintained by the CoE, consumed by all teams

export { createAuthenticatedPage } from './fixtures/auth';
export { mockApiEndpoints } from './fixtures/api-mock';
export { seedTestDatabase, cleanupTestData } from './fixtures/database';
export { generateTestUser, generateTestOrganization } from './factories/data';
export { waitForNetworkIdle, waitForAnimation } from './utils/waiters';
export { assertAccessibility, assertPagePerformance } from './assertions/quality';

Instead of duplicating authentication fixtures, page factories, and data-seeding utilities, each team imports them from the shared package. When the authentication flow changes, the fix is made once in the shared package and every team's tests pick it up.
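To make the shape of such a shared utility concrete, here is a minimal sketch of what the `generateTestUser` factory exported above might look like. The field names, ID scheme, and roles are assumptions for illustration, not the actual ScanlyApp implementation:

```typescript
// packages/test-utils/src/factories/data.ts (illustrative sketch)

export interface TestUser {
  id: string;
  email: string;
  name: string;
  role: 'admin' | 'member';
}

let counter = 0;

// Deterministic factory: every call yields a unique user,
// and individual tests can override any field they care about.
export function generateTestUser(overrides: Partial<TestUser> = {}): TestUser {
  counter += 1;
  return {
    id: `user-${counter}`,
    email: `qa-user-${counter}@example.test`,
    name: `Test User ${counter}`,
    role: 'member',
    ...overrides,
  };
}
```

A team that needs an admin for one test simply calls `generateTestUser({ role: 'admin' })` instead of hand-rolling its own fixture.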

2. Standards Documentation

The CoE maintains — but does not enforce through process gates — quality standards:

# ScanlyApp Testing Standards v2.1

## Test Naming Convention

Format: [feature] [action] [expected outcome]
Good: "checkout with expired card shows payment error"
Bad: "test_123" or "checkout test"

## Assertion Quality

- Prefer specific assertions over generic ones
  ✅ expect(button).toHaveText('Submit Order')
  ❌ expect(button).toBeVisible()
- Assert on user-observable outcomes, not implementation details
  ✅ expect(page).toHaveURL('/order-confirmation')
  ❌ expect(orderRepository.save).toHaveBeenCalled()

## Coverage Targets by Risk Tier

| Risk          | Minimum Automation |
| ------------- | ------------------ |
| Critical path | 90%                |
| High risk     | 70%                |
| Medium        | 50%                |
| Low           | Best effort        |

## Flaky Test Protocol

1. Tag the test @flaky immediately
2. Create a tracking issue within 24 hours
3. Do not merge new code that makes an existing flaky test worse
4. Fix within 2 sprints or delete the test
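The coverage targets in the table above can be checked mechanically in CI rather than through human review. A minimal sketch, where the tier names and thresholds mirror the standards table and everything else is an assumption:

```typescript
// Hypothetical CI helper: flag a build when a risk tier's automated
// coverage falls below the minimum in the standards document.

type RiskTier = 'critical' | 'high' | 'medium' | 'low';

const MINIMUM_COVERAGE: Record<RiskTier, number> = {
  critical: 90,
  high: 70,
  medium: 50,
  low: 0, // "best effort" -- never blocks a build
};

export function meetsCoverageTarget(tier: RiskTier, actualPercent: number): boolean {
  return actualPercent >= MINIMUM_COVERAGE[tier];
}
```

Keeping the thresholds in one typed map means the standards document and the CI gate cannot silently drift apart.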

3. Community of Practice

The CoE is not just documents and tools — it is a community:

| Ritual                     | Frequency                | Purpose                                                       |
| -------------------------- | ------------------------ | ------------------------------------------------------------- |
| QA Guild Sync              | Weekly (30 min)          | Share learnings, discuss challenges, review upcoming features |
| Test Review Office Hours   | 2× weekly (30 min each)  | Any engineer can bring test code for feedback                 |
| Quarterly QA Retrospective | Quarterly (90 min)       | Process improvements, metrics review, standards update        |
| New Hire QA Onboarding     | Per hire                 | Standardized 30-day plan (see onboarding guide)               |
| Incident Post-Mortems      | Per incident             | Always includes QA gap analysis                               |

Measuring CoE Effectiveness

The CoE's success is measured through the teams it serves:

| CoE Metric                                | Leading Indicator Of            |
| ----------------------------------------- | ------------------------------- |
| % teams using shared test utilities       | Consistency, lower maintenance  |
| % teams meeting coverage targets          | Quality standard adoption       |
| Time to onboard new team to test framework | Ease of adoption               |
| Cross-team defect escape rate             | Org-wide quality outcomes       |
| Flaky test rate (org-wide)                | Test health                     |
| # QA knowledge articles consumed/month    | Knowledge sharing effectiveness |
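Rolling a metric like the org-wide flaky test rate up from per-team numbers is simple enough to sketch. The data shape below is an assumption, not a real ScanlyApp API:

```typescript
// Illustrative metric rollup: org-wide flaky test rate as a percentage,
// aggregated from per-team test suite counts.

interface TeamTestStats {
  team: string;
  totalTests: number;
  flakyTests: number; // tests currently tagged @flaky
}

export function orgFlakyRate(stats: TeamTestStats[]): number {
  const total = stats.reduce((sum, s) => sum + s.totalTests, 0);
  const flaky = stats.reduce((sum, s) => sum + s.flakyTests, 0);
  return total === 0 ? 0 : (flaky / total) * 100;
}
```

Aggregating raw counts (rather than averaging per-team percentages) keeps a small team's noisy suite from skewing the org-wide number.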

Common CoE Anti-Patterns to Avoid

Anti-Pattern 1: The Approval Gate

The CoE reviews and approves all test suites before merging. This creates a bottleneck, breeds resentment, and causes teams to minimize QA to avoid the queue.

Better: The CoE provides automated linting and style checks that run in CI without human approval. Reserve human review for new patterns and architectural decisions.
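One such automated check could enforce the naming convention from the standards document without a human in the loop. A minimal sketch; the exact heuristics (the rejected patterns and the four-word minimum) are assumptions, not an established rule:

```typescript
// Sketch of a CI lint rule for test names, replacing human approval.
// Rejects ID-style names ("test_123") and names too short to carry
// the [feature] [action] [expected outcome] structure.

export function isDescriptiveTestName(name: string): boolean {
  // Reject names that are just "test" plus a number.
  if (/^test[_\s-]?\d+$/i.test(name)) return false;
  // Require at least four words: "checkout test" cannot express
  // a feature, an action, and an expected outcome.
  return name.trim().split(/\s+/).length >= 4;
}
```

Because the check runs in CI, feedback arrives in minutes instead of waiting in a review queue.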

Anti-Pattern 2: The One-Size Tool Mandate

"All teams must use [Tool X], no exceptions." Feature teams have different contexts — a mobile team has different needs than a backend API team.

Better: Define the recommended standard and explain why. Allow exceptions with documented rationale. Let the community vote on standards evolution quarterly.

Anti-Pattern 3: The Ivory Tower CoE

The CoE team only reviews, never does. They write standards for writing tests but have no active test suites themselves.

Better: The CoE maintains the shared test infrastructure as a real, production-quality codebase. CoE members should be embedded in feature teams for at least one sprint per quarter to maintain credibility and stay connected to real problems.

Anti-Pattern 4: Big-Bang Standardization

"Starting Monday, all tests must follow the new standards." Existing test suites that don't comply become technical debt overnight, and teams must choose between shipping features and retroactively fixing tests.

Better: Apply new standards forward (new tests must comply, existing tests migrated opportunistically). Provide migration guides. Celebrate early adopters.
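One common way to apply standards forward-only is a baseline allowlist: pre-existing files are exempt until they are touched, everything else is linted strictly. A sketch with hypothetical file paths:

```typescript
// Forward-only enforcement via a legacy baseline. Files listed here
// predate the standards and are exempt; new files must comply.
// The paths are hypothetical examples.

const LEGACY_BASELINE = new Set([
  'tests/checkout/legacy-cart.spec.ts',
  'tests/auth/old-login.spec.ts',
]);

export function mustComplyWithNewStandards(filePath: string): boolean {
  return !LEGACY_BASELINE.has(filePath);
}
```

Shrinking the baseline over time then becomes a visible, celebrated metric rather than an overnight mandate.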

Related articles: Also see the management playbook for leading a QA Center of Excellence, building the team that a QA Center of Excellence is built around, and onboarding programs that make CoE knowledge transfer systematic.


Starting a CoE from Zero

If your organization has no CoE and you're starting from scratch, the sequence matters:

Month 1: Listen and map
  → Survey all teams: what tools are they using? What's painful?
  → Identify common utilities being duplicated across repos
  → Find the 2-3 people across teams who care most about quality

Month 2: Quick wins
  → Create the shared package with the most-duplicated utilities
  → Establish the weekly sync (even with 4 people)
  → Write down the 5 most important existing best practices

Month 3: Community
  → Open the QA Guild to all engineers, not just "QA people"
  → Host the first office hours session
  → Create the quality metrics dashboard

Month 6: Standards
  → Propose the first formal standards document
  → Gather feedback from all teams before finalizing
  → Automate what can be automated

Year 1: Maturity
  → Ownership model clear: CoE owns the platform, teams own their tests
  → Cross-team escaped defect rate trending downward
  → New engineers onboard to test automation in < 1 week

A QA Center of Excellence built as an enablement function — providing tools, knowledge, and community without imposing process — raises the quality floor for the entire organization while preserving team autonomy and shipping velocity.

Give your whole team visibility into application quality with every deploy: Try ScanlyApp free and run automated checks across all your applications, shareable among the entire engineering organization.
