Testing & Test Strategy
2023-10-26

Generate High-Quality Unit Tests

Design and generate high-quality unit tests that protect core behavior and edge cases while remaining maintainable and robust.

SCENARIO

Act as a senior Test Engineer and Quality Architect responsible for designing high-quality unit tests for production systems. Your task is to generate unit tests that protect critical behavior, detect regressions early, and remain stable as the code evolves.

CORE PRINCIPLE:

Unit tests exist to protect behavior and logic, not to mirror implementation or inflate coverage metrics.

CONTEXT:

The code under test may be newly written, recently modified, or historically fragile. The goal is to design unit tests that provide strong behavioral guarantees with minimal brittleness.

PRIMARY OBJECTIVE:

Generate unit tests that verify correct behavior across normal flows, edge cases, and failure conditions while remaining readable, deterministic, and maintainable.

TEST DESIGN ANALYSIS:

  1. Identify the public contract and intended behavior of the unit
  2. Enumerate valid inputs, invalid inputs, and boundary conditions
  3. Identify side effects, state changes, and error paths

BEHAVIOR COVERAGE:

  • Test normal and representative use cases
  • Cover boundary values, nulls, empties, and extreme inputs
  • Verify error handling, exceptions, and failure responses
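
As a sketch of the coverage checklist above, here is a parametrized pytest suite. The `parse_port` function is a hypothetical unit under test, invented purely for illustration:

```python
import pytest

def parse_port(value):
    """Hypothetical unit under test: parse a TCP port from a string."""
    if value is None or value == "":
        raise ValueError("port is required")
    port = int(value)  # raises ValueError on non-numeric input
    if not (1 <= port <= 65535):
        raise ValueError(f"port out of range: {port}")
    return port

# Normal, representative use cases
@pytest.mark.parametrize("raw, expected", [("80", 80), ("443", 443), ("8080", 8080)])
def test_parse_port_valid(raw, expected):
    assert parse_port(raw) == expected

# Boundary values: the lowest and highest legal ports
@pytest.mark.parametrize("raw, expected", [("1", 1), ("65535", 65535)])
def test_parse_port_boundaries(raw, expected):
    assert parse_port(raw) == expected

# Nulls, empties, out-of-range, and malformed inputs
@pytest.mark.parametrize("raw", [None, "", "0", "65536", "-1", "http"])
def test_parse_port_rejects_invalid(raw):
    with pytest.raises(ValueError):
        parse_port(raw)
```

Each parametrized case reads as a row in a behavior table: one test per rule, so a failure names the exact rule that broke.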

MOCKING & ISOLATION STRATEGY:

  • Identify external dependencies that must be mocked or faked
  • Avoid over-mocking core business logic
  • Prefer testing real logic over internal interactions
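
To make the isolation boundary concrete, here is a minimal sketch using the standard library's `unittest.mock`. `PaymentService` and its fee rule are hypothetical; the point is that only the external gateway is faked while the real business logic runs:

```python
from unittest.mock import Mock

class PaymentService:
    """Hypothetical unit under test: real fee logic, external gateway."""
    def __init__(self, gateway):
        self.gateway = gateway  # external dependency worth faking

    def charge(self, amount_cents):
        if amount_cents <= 0:
            raise ValueError("amount must be positive")
        # Real business logic stays un-mocked: fee is max(30c, 1%)
        total = amount_cents + max(30, amount_cents // 100)
        return self.gateway.submit(total)  # only the boundary is mocked

def test_charge_adds_fee_and_submits_total():
    gateway = Mock()
    gateway.submit.return_value = "txn-123"
    service = PaymentService(gateway)
    assert service.charge(5000) == "txn-123"
    # Assert the observable contract with the dependency, not internals
    gateway.submit.assert_called_once_with(5050)
```

Had the fee calculation itself been mocked, the test would pass even if the pricing rule regressed; keeping the logic real is what makes the test meaningful.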

ASSERTION QUALITY:

  • Assert outcomes and state, not internal implementation steps
  • Use precise, meaningful assertions
  • Ensure failures produce clear, actionable signals
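
A small sketch of the contrast, using a hypothetical `apply_discount` function: precise, outcome-focused assertions pin down exact expected values, while a weak assertion would pass even when the rule is wrong.

```python
def apply_discount(cart_total, coupon):
    """Hypothetical unit under test."""
    rates = {"SAVE10": 0.10, "SAVE25": 0.25}
    rate = rates.get(coupon, 0.0)
    return round(cart_total * (1 - rate), 2)

# Precise assertions: a failure pinpoints which business rule broke.
def test_known_coupon_reduces_total():
    assert apply_discount(200.0, "SAVE25") == 150.0

def test_unknown_coupon_leaves_total_unchanged():
    assert apply_discount(200.0, "BOGUS") == 200.0

# Weak version (avoid): passes for any discount, right or wrong.
#   assert apply_discount(200.0, "SAVE25") < 200.0
```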

WHAT NOT TO DO:

  • Do NOT write tests that simply mirror the code line by line
  • Do NOT over-mock and hide real integration bugs
  • Do NOT assert internal variables, call counts, or ordering unless required by contract
  • Do NOT generate large numbers of low-value or redundant tests

OUTPUT EXPECTATIONS:

  • A focused set of unit tests covering core behavior and edge cases
  • Explanation of what each test protects and why it matters
  • Notes on any risky or ambiguous behavior discovered during test design

QUALITY CHECK:

  • Ensure tests fail before fixes and pass after
  • Verify determinism and absence of flakiness
  • Confirm tests protect behavior, not implementation
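
A common source of flakiness is hidden nondeterminism (randomness, clocks). One way to satisfy the determinism check is to inject the source of randomness so the test can seed it. A minimal sketch with a hypothetical `sample_winners` function:

```python
import random

def sample_winners(entries, k, rng=None):
    """Hypothetical unit: randomness is injected so tests stay deterministic."""
    rng = rng or random.Random()
    return rng.sample(entries, k)

def test_sampling_is_deterministic_with_seeded_rng():
    entries = ["a", "b", "c", "d"]
    first = sample_winners(entries, 2, random.Random(42))
    second = sample_winners(entries, 2, random.Random(42))
    assert first == second           # same seed, same result: no flakiness
    assert len(first) == 2
    assert set(first) <= set(entries)
```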

FINAL CHECK:

  • If this logic changes incorrectly, will these tests catch it immediately?
  • Are the most important invariants and contracts protected?

INPUT:

  • Code or function under test: [Insert Code]
  • Expected behavior: [Describe intent]
  • Dependencies: [Describe external calls or state]

More Testing & Test Strategy Prompts


Master Testing & Test Strategy Prompt

A senior-level framework to design test strategies, generate high-quality tests, and protect critical behavior in production systems.

Act as a senior Test Engineer and Quality Architect with extensive experience designing test strategies for large-scale production systems. Your task is to design a testing approach that maximizes behavioral protection, minimizes regression risk, and builds confidence in future changes.

CORE PRINCIPLE: Tests exist to protect behavior and production stability, not to satisfy coverage metrics or tooling requirements.

CONTEXT: The system contains production code that may be business-critical, lightly tested, recently modified, or at risk of regressions. A testing strategy is required before generating or modifying tests.

PRIMARY OBJECTIVE: Design a test strategy that identifies what must be tested, how it should be tested, and where testing effort provides the highest risk reduction.

SYSTEM & RISK ANALYSIS:

  1. Identify business-critical and user-facing behavior
  2. Highlight high-risk modules, complex logic, and recent changes
  3. Identify integration points, external dependencies, and failure-prone areas

TEST STRATEGY DESIGN:

  • Decide the appropriate test levels (unit, integration, end-to-end, contract)
  • Identify which behavior must be protected by automated tests
  • Determine where mocks, fakes, or real dependencies are required

TEST PRIORITIZATION:

  • Rank test targets by business impact and regression risk
  • Identify areas where missing tests pose unacceptable risk
  • Highlight code paths that require strong behavioral guarantees

QUALITY & COVERAGE GUIDANCE:

  • Evaluate current test coverage and its effectiveness
  • Distinguish coverage gaps that matter from noise
  • Recommend tests that protect logic, boundaries, and failure modes

WHAT NOT TO DO:

  • Do NOT chase coverage percentages without protecting real behavior
  • Do NOT over-mock critical logic and hide integration bugs
  • Do NOT write tests that only assert implementation details
  • Do NOT generate large volumes of low-value tests

OUTPUT EXPECTATIONS:

  • A clear test strategy tailored to the system
  • Recommended test types and priorities
  • List of high-risk areas requiring immediate test coverage
  • Guidance on test structure, data, and assertions

VALIDATION & MAINTENANCE:

  • Define signals that indicate tests are effective or misleading
  • Suggest how to detect flaky, brittle, or low-value tests
  • Recommend long-term test maintenance practices

FINAL CHECK:

  • If this system regresses tomorrow, which tests will catch it first?
  • Are the most valuable business rules truly protected?

INPUT:

  • Code or modules: [Insert Code]
  • System context: [Criticality, users, data sensitivity]
  • Existing tests (if any): [Describe or insert]

Regression Test Generation

Design and generate regression tests that permanently protect fixed behavior and prevent previously resolved bugs from reappearing.

Act as a senior Test Engineer and Quality Architect responsible for preventing regressions in a production system. Your task is to design and generate regression tests that lock in corrected behavior and ensure the same failure can never occur again.

CORE PRINCIPLE: Every fixed bug must become a permanent test. If a regression is not captured by a test, it WILL return.

CONTEXT: A bug, incident, or incorrect behavior has been identified and fixed in a production or staging system. The goal is to ensure this failure can never silently reappear in the future.

PRIMARY OBJECTIVE: Design regression tests that precisely capture the failing scenario, protect the corrected behavior, and detect any future deviations early in the lifecycle.

FAILURE ANALYSIS PHASE:

  1. Describe the original failure or incorrect behavior in plain language
  2. Identify the exact inputs, system state, and environment that triggered it
  3. Determine whether the failure was deterministic, timing-based, or data-dependent

REGRESSION TEST DESIGN:

  • Create tests that reproduce the original failure reliably
  • Encode the expected correct behavior explicitly
  • Isolate the smallest reproducible scenario that demonstrates the bug
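
A sketch of the design above: assume a hypothetical `median` function that previously crashed on empty input (the bug ID `BUG-1042` is invented for illustration). The regression test encodes the exact failing scenario plus adjacent variants:

```python
def median(values):
    """Fixed code: previously raised IndexError on empty input (BUG-1042)."""
    if not values:
        return None  # the fix: empty input is now defined behavior
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

def test_bug_1042_empty_input_does_not_crash():
    # Regression: reproduces the original failing scenario exactly.
    assert median([]) is None

def test_bug_1042_adjacent_variants():
    # Guard related boundaries: single element, duplicates, even length.
    assert median([7]) == 7
    assert median([1, 1]) == 1.0
    assert median([3, 1, 2, 4]) == 2.5
```

Naming the test after the bug ID ties the failure signal back to the original incident, so a future break is immediately actionable.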

SCOPE & PLACEMENT:

  • Decide the correct test level (unit, integration, end-to-end)
  • Identify where the regression test should live in the test suite
  • Ensure the test runs early and consistently in CI pipelines

EDGE CASE & VARIANT ANALYSIS:

  • Identify related edge cases or boundary conditions
  • Consider similar inputs, timing windows, or state transitions
  • Suggest additional tests that guard against adjacent failures

WHAT NOT TO DO:

  • Do NOT write regression tests that only assert implementation details
  • Do NOT create brittle tests tied to logging, formatting, or internal ordering
  • Do NOT skip regression tests for "rare" or "one-time" failures

QUALITY & STABILITY CHECK:

  • Ensure the test fails before the fix and passes after it
  • Verify determinism and eliminate flakiness
  • Confirm the test protects behavior, not the patch

OUTPUT EXPECTATIONS:

  • One or more regression tests that reproduce the original failure
  • Clear explanation of the protected behavior
  • Notes on why this test is critical for long-term stability

FINAL CHECK:

  • If this exact bug reappears in six months, will this test catch it?
  • Is the failure signal clear and actionable when the test breaks?

INPUT:

  • Bug or incident description: [Describe the failure]
  • Fixed code or patch: [Insert code]
  • System context: [Environment, data conditions, dependencies]

Find Coverage Gaps & Missing Tests

Analyze existing tests to identify missing protection in high-risk, business-critical, and failure-prone code paths.

Act as a senior Test Engineer and Quality Architect responsible for evaluating the real effectiveness of a test suite. Your task is to identify coverage gaps that matter, expose false confidence, and recommend high-impact tests that reduce regression risk in production systems.

CORE PRINCIPLE: High coverage does not mean high safety. Tests must protect the right behavior, not just execute lines of code.

CONTEXT: The system has an existing automated test suite and reported coverage metrics, but production regressions or uncertainty remain. The goal is to assess whether critical behavior is truly protected.

PRIMARY OBJECTIVE: Identify untested or weakly tested behavior that represents unacceptable risk and recommend targeted tests that maximize regression protection.

SYSTEM & RISK ANALYSIS:

  1. Identify business-critical and user-facing flows
  2. Highlight complex logic, conditional branches, and edge-heavy code
  3. Identify recent changes, bug-prone areas, and historically unstable modules

COVERAGE INTERPRETATION:

  • Analyze line, branch, and path coverage in context
  • Identify areas with misleading or superficial coverage
  • Highlight code executed only by setup, mocks, or trivial assertions

GAP DETECTION:

  • Identify critical logic with no direct assertions
  • Find error paths, exception handling, and failure modes that are untested
  • Highlight integration points and data boundaries with weak coverage
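
Error paths are a frequent gap: coverage tools show the `try` executed, but nothing asserts what happens when it fails. A minimal sketch with a hypothetical `load_config` function, testing the failure modes directly:

```python
import json

def load_config(raw):
    """Hypothetical unit: happy path is usually covered, error paths rarely."""
    try:
        config = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"config is not valid JSON: {exc}") from exc
    if "timeout" not in config:
        raise ValueError("config missing required key: timeout")
    return config

def test_rejects_malformed_json_with_clear_error():
    try:
        load_config("{not json")
    except ValueError as exc:
        assert "not valid JSON" in str(exc)
    else:
        raise AssertionError("malformed JSON was silently accepted")

def test_rejects_missing_required_key():
    try:
        load_config('{"retries": 3}')
    except ValueError as exc:
        assert "timeout" in str(exc)
    else:
        raise AssertionError("missing key was silently accepted")
```

Note that the tests assert the error message is clear and specific, not just that something raised: that is what turns an untested failure mode into a protected contract.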

PRIORITIZATION:

  • Rank missing tests by business impact and regression risk
  • Identify coverage gaps that could cause silent data corruption or revenue loss
  • Separate low-risk cosmetic gaps from high-risk behavioral gaps

WHAT NOT TO DO:

  • Do NOT chase coverage percentages blindly
  • Do NOT write tests solely to execute uncovered lines
  • Do NOT over-prioritize trivial getters, setters, or boilerplate
  • Do NOT ignore integration and state-based behavior

RECOMMENDED TEST DESIGN:

  • Suggest high-value unit, integration, or end-to-end tests
  • Propose edge-case, boundary, and failure-mode tests
  • Identify tests that should protect contracts and business invariants

OUTPUT EXPECTATIONS:

  • List of high-risk uncovered or weakly covered areas
  • Prioritized test recommendations with justification
  • Explanation of why each gap represents meaningful risk

VALIDATION:

  • Describe how new tests reduce regression probability
  • Suggest metrics or signals to verify improved protection

FINAL CHECK:

  • If this system regresses tomorrow, which missing test would have caught it?
  • Are the most valuable business rules truly protected by tests?

INPUT:

  • Codebase or modules: [Insert Code]
  • Existing tests and coverage report: [Insert or describe]
  • System context: [Criticality, users, business impact]