Prompts Collection

A library of ready-to-use prompts for every coding scenario.

Debugging

Master Debugging Prompt

A universal, step-by-step debugging framework to identify, explain, and fix bugs across any language or stack.

Act as an expert debugger with strong production experience. Use this process regardless of language, framework, or problem complexity.

PROBLEM CONTEXT:

  • Language: [YOUR PROGRAMMING LANGUAGE]
  • Expected behavior: [WHAT SHOULD HAPPEN]
  • Actual behavior: [WHAT IS HAPPENING INSTEAD]
  • Error messages (if any): [PASTE ERROR OR "NONE"]

CODE: [PASTE YOUR CODE HERE]

DEBUGGING TASK:

  1. Walk through the code step by step, including how inputs enter the system and how outputs are produced
  2. Track the value of relevant variables at each significant step
  3. Identify exactly where the logic diverges from the expected behavior
  4. Explain why the bug occurs, including incorrect assumptions, edge cases, or hidden side effects

DEBUGGING APPROACH:

  • Use a rubber duck debugging mindset
  • Simulate the debugging process using print statements, assertions, or checkpoints where appropriate
  • Clearly state what each check is verifying and what outcome is expected

FIX & IMPROVEMENT:

  5. Provide a corrected version of the code with clear explanations for each change
  6. Suggest small improvements that make the code easier to reason about, test, or maintain

VALIDATION:

  7. Explain how to verify that the fix works
  8. Describe how to prevent similar bugs from occurring in the future

If the bug cannot be fully diagnosed, list the most likely failure points in order of probability.
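To illustrate the checkpoint style this prompt asks for, here is a minimal Python sketch (the function and values are hypothetical, not part of the prompt template):

```python
def average(values):
    # Checkpoint 1: verify the precondition before dividing.
    assert len(values) > 0, "expected a non-empty list"
    total = sum(values)
    # Checkpoint 2: trace the intermediate state we expect at this step.
    print(f"total={total}, count={len(values)}")
    return total / len(values)
```

Each checkpoint states what it verifies (non-empty input) or exposes (the running total), which is exactly the narration the rubber-duck approach calls for.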

Debugging

O(1) Performance Analyzer

Systematically analyze code to identify performance bottlenecks and optimize critical paths toward O(1) complexity.

You are an expert performance engineer specializing in algorithmic optimization and critical-path analysis. Your task is to systematically analyze the given codebase through multiple iterations of deep reasoning, with the goal of identifying and eliminating performance bottlenecks. Where possible and reasonable, optimize operations toward O(1) complexity, while clearly documenting tradeoffs.

ANALYSIS PHASES:

PHASE 1: COMPONENT IDENTIFICATION
Iterate through each major component:

  1. What is its primary responsibility?
  2. What operations does it perform?
  3. What data structures does it rely on?
  4. What are its dependencies and assumptions?

PHASE 2: COMPLEXITY ANALYSIS
For each significant operation:

OPERATION: [Name]
CURRENT_COMPLEXITY: [Big-O notation]
BREAKDOWN:

  • Step 1: [Operation] -> O(?)
  • Step 2: [Operation] -> O(?)

BOTTLENECK: [Slowest step]
REASONING: [Why this dominates performance]

PHASE 3: OPTIMIZATION OPPORTUNITIES
For each suboptimal component:

COMPONENT: [Name]
CURRENT_APPROACH:

  • Implementation: [Relevant code]
  • Complexity: [Current Big-O]
  • Limitations: [Why this is not O(1)]

OPTIMIZATION PATH:

  1. First improvement
     • Change: [What to modify]
     • Impact: [Complexity improvement]
     • Code: [Implementation]
  2. Additional improvements (if applicable)

PHASE 4: SYSTEM-WIDE IMPACT
Analyze the effects of proposed changes on:

  1. Memory usage
  2. Cache efficiency
  3. Resource utilization
  4. Scalability under load
  5. Code maintainability

OUTPUT REQUIREMENTS:

  1. PERFORMANCE ANALYSIS
     COMPONENT: [Name]
     ORIGINAL_COMPLEXITY: [Big-O]
     OPTIMIZED_COMPLEXITY: [Target complexity]
     PROOF:
     • Step 1: [Reasoning]
     • Step 2: [Reasoning]
     IMPLEMENTATION: [Code block]

  2. BOTTLENECK IDENTIFICATION
     BOTTLENECK #[n]:
     LOCATION: [Where]
     IMPACT: [Performance cost]
     SOLUTION: [Optimized approach]
     CODE: [Implementation]
     VERIFICATION: [How to validate complexity]

  3. OPTIMIZATION ROADMAP
     STAGE 1:
     • Changes: [What to modify]
     • Expected impact: [Improvement]
     • Implementation: [Code]
     • Verification: [Tests or benchmarks]

     STAGE 2:
     • Continue as needed

ITERATION REQUIREMENTS:

  1. First pass: Identify all operations above O(1)
  2. Second pass: Evaluate feasibility of optimization
  3. Third pass: Design optimized solutions
  4. Fourth pass: Verify correctness and performance
  5. Final pass: Document tradeoffs and assumptions

Remember to:

  • Provide concrete examples
  • Consider edge cases
  • Justify tradeoffs between time, memory, and complexity
  • Avoid over-optimization where it harms clarity or safety

INPUT: Code: [Insert Code]
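As a concrete (hypothetical) example of the kind of rewrite this prompt targets, replacing a list membership check with a set turns an O(n²) loop into O(n), because each `in` check drops from O(n) to amortized O(1):

```python
def find_duplicates_slow(items):
    seen = []                      # list: `in` scans all elements, O(n)
    dupes = []                     # whole loop degrades to O(n^2)
    for item in items:
        if item in seen:
            dupes.append(item)
        seen.append(item)
    return dupes

def find_duplicates_fast(items):
    seen = set()                   # set: `in` is a hash lookup, O(1) amortized
    dupes = []                     # whole loop is O(n)
    for item in items:
        if item in seen:
            dupes.append(item)
        seen.add(item)
    return dupes
```

The tradeoff, of the kind Phase 4 asks to document: the set costs extra memory and requires hashable items.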

Debugging

Security Vulnerability Analyzer

Analyze code to identify, assess, and remediate security vulnerabilities using a multi-layered approach.

You are a security expert with deep experience in application and system security. Your task is to identify, assess, and fix security vulnerabilities in the provided code, focusing on real-world attack scenarios and production risks.

ANALYSIS METHODOLOGY:

LAYER 1: VULNERABILITY SCANNING
Conduct a security audit across the following areas:

  • Input validation and sanitization
  • Authentication mechanisms
  • Authorization and access control
  • Data protection and sensitive data handling

For each finding, identify:

  • Vulnerability type
  • Risk level
  • Possible attack vectors

LAYER 2: MITIGATION STRATEGY
For each identified vulnerability:

VULNERABILITY DETAILS:

  • Description of the issue
  • Potential impact if exploited
  • Exploitation difficulty

SOLUTION:

  • Required code changes
  • Additional security controls or safeguards
  • Validation and verification steps

OUTPUT FORMAT:

VULNERABILITY #[n]:
TYPE: [Category]
SEVERITY: [Critical / High / Medium / Low]
DESCRIPTION:

  • Attack scenario
  • Potential impact
  • Existing or missing protections

REMEDIATION:

  • Code fixes
  • Security measures
  • Testing and validation approach

PRINCIPLES TO FOLLOW:

  • Prioritize vulnerabilities based on real-world risk
  • Avoid theoretical issues with no practical exploit path
  • Follow secure coding best practices
  • Clearly explain tradeoffs between security, performance, and usability

INPUT:
Code: [Insert Code]
Context: [Application type, environment, threat model if known]
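For instance, one of the most common findings in the input-validation layer is SQL injection; the sketch below (hypothetical schema, sqlite3 for brevity) shows the vulnerable pattern and its parameterized-query remediation:

```python
import sqlite3

def get_user_unsafe(conn, username):
    # VULNERABLE: string interpolation lets attacker-controlled input
    # rewrite the query (classic SQL injection).
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{username}'"
    ).fetchone()

def get_user_safe(conn, username):
    # REMEDIATION: parameterized query -- the driver binds the value
    # separately, so it can never be parsed as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchone()
```

With the safe version, a payload like `' OR '1'='1` is treated as a literal (non-matching) name instead of a condition that matches every row.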

Large Refactors

Large Refactor Master Prompt

A senior-level framework to plan and execute large refactors safely, with explicit risk management and stop conditions.

SAFETY NOTICE: Large refactors are inherently high-risk operations. This process does NOT guarantee correctness or system safety. If risks outweigh benefits, the correct recommendation may be to pause, reduce scope, or avoid refactoring entirely.

Act as a senior software engineer and system architect with extensive experience leading large-scale refactors in production systems. Your task is to analyze the provided codebase and design a safe, incremental refactor plan that improves structure, maintainability, and scalability without introducing regressions.

DECISION BOUNDARY: Explicitly recommend NOT proceeding if any of the following are true:

  • Core behavior cannot be validated or monitored
  • Rollback is not possible or is unsafe
  • Business-critical paths lack sufficient test coverage or observability
  • The refactor risk exceeds the expected business or technical benefit

CONTEXT: The codebase is functional but has accumulated technical debt, architectural issues, or scaling limitations. Small fixes are no longer sufficient, and a larger refactor is being considered.

REFACTOR OBJECTIVES:

  • Improve code structure and clarity
  • Reduce coupling and hidden dependencies
  • Increase maintainability and testability
  • Preserve existing behavior and system stability

ANALYSIS PHASE:

  1. Identify core responsibilities and system boundaries
  2. Highlight high-risk areas and change-sensitive components
  3. Detect architectural smells, tight coupling, and implicit dependencies
  4. Assess test coverage, observability, and refactor safety nets

REFACTOR STRATEGY:

  • Propose an incremental refactor plan, not a rewrite
  • Define small, reversible steps with clear ownership
  • Recommend abstractions only when they reduce real complexity
  • Explicitly state what should be refactored now vs deferred

WHAT NOT TO DO:

  • Do NOT rewrite working code without measurable benefit
  • Do NOT refactor and change behavior at the same time
  • Do NOT introduce abstractions without proven need
  • Do NOT refactor untested critical paths without first adding safety nets
  • Do NOT optimize or refactor based on personal preference or aesthetics

RISK MANAGEMENT:

  • Identify regression points and blast radius
  • Suggest feature flags, parallel implementations, or kill switches when appropriate
  • Define rollback strategies for each major step

IMPLEMENTATION GUIDANCE:

  • Show before-and-after examples for critical changes
  • Explain the intent behind each refactor decision
  • Keep changes boring, predictable, and easy to review

VALIDATION:

  • Explain how to verify behavior remains unchanged
  • Suggest tests, metrics, logs, or monitoring to validate each stage
  • Describe signals that indicate the refactor should be paused or rolled back

OUTPUT EXPECTATIONS:

  • A staged refactor roadmap with clear ordering
  • Explicit risks, tradeoffs, and assumptions
  • Areas intentionally left untouched and the reasons why

PRINCIPLES TO FOLLOW:

  • Prefer safety over elegance
  • Optimize for team understanding and long-term ownership
  • Treat refactoring as risk management, not cleanup

INPUT:
Codebase or files to refactor: [Insert Code]
Constraints: [Deadlines, team size, risk tolerance, environment]
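The feature-flag and parallel-implementation guidance above can be sketched as a shadow comparison: the refactored path runs alongside the legacy one, mismatches are logged, and the proven path keeps serving traffic (all function names here are hypothetical):

```python
import logging

def legacy_total(order):
    # Current production behavior, kept as the serving path.
    total = 0
    for item in order:
        total += item["price"] * item["qty"]
    return total

def refactored_total(order):
    # New implementation; a pure refactor must produce identical results.
    return sum(item["price"] * item["qty"] for item in order)

def total(order, shadow_refactor=False):
    old = legacy_total(order)
    if shadow_refactor:
        new = refactored_total(order)
        if new != old:
            # Log, never serve: the flag doubles as a kill switch.
            logging.warning("refactor mismatch: legacy=%s new=%s", old, new)
    return old
```

Because the flag only controls the comparison, turning it off is a zero-risk rollback.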

Large Refactors

Rewrite vs Refactor Decision Prompt

A senior-level framework to decide whether a system should be refactored incrementally or rewritten from scratch.

Act as a senior software engineer and technical decision-maker with experience evaluating high-risk system changes. Your task is to assess whether the provided codebase should be incrementally refactored or fully rewritten, based on technical, operational, and business realities.

DECISION CONTEXT: The current system is functional but may suffer from technical debt, architectural limitations, or long-term maintainability concerns. A major change is being considered.

EVALUATION CRITERIA: Analyze the system across the following dimensions:

  • Codebase size and complexity
  • Business criticality and uptime requirements
  • Test coverage and observability
  • Domain knowledge embedded in the code
  • Team experience and available capacity
  • Delivery timelines and risk tolerance

REFACTOR ASSESSMENT:

  • Identify areas that can be safely refactored incrementally
  • Estimate risk reduction and long-term benefits
  • Call out refactor blockers or unsafe zones

REWRITE ASSESSMENT:

  • Identify what problems a rewrite would actually solve
  • Highlight assumptions that may be invalid or risky
  • Estimate hidden costs such as lost edge-case behavior and delayed value

WHAT NOT TO DO:

  • Do NOT recommend a rewrite based on code aesthetics alone
  • Do NOT assume rewriting is faster or cleaner by default
  • Do NOT ignore business timelines or operational risk

DECISION OUTPUT:

  • Clear recommendation: Refactor, Rewrite, or Hybrid approach
  • Justification grounded in evidence, not preference
  • Explicit risks and failure modes for the chosen path

ALTERNATIVES:

  • If neither option is safe, recommend risk-reduction steps instead
  • Suggest partial rewrites, strangler patterns, or parallel systems where applicable

VALIDATION:

  • Describe signals that would confirm the decision is correct
  • Identify early warning signs that indicate the decision should be revisited

INPUT:
Codebase overview: [Description or code]
Business constraints: [Deadlines, uptime requirements, budget]
Team context: [Team size, experience, ownership]
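Where a hybrid path is chosen, the strangler pattern mentioned above can be sketched as a thin routing facade that migrates endpoints one at a time (all names here are hypothetical stand-ins for real handlers):

```python
def route(request, migrated_paths, new_system, legacy_system):
    # Strangler-fig routing: requests for migrated endpoints go to the
    # new implementation; everything else stays on the proven legacy path.
    # Migration proceeds by growing `migrated_paths` one endpoint at a time.
    if request["path"] in migrated_paths:
        return new_system(request)
    return legacy_system(request)

# Example wiring (stubs standing in for real handlers):
def legacy_system(request):
    return {"handled_by": "legacy", "path": request["path"]}

def new_system(request):
    return {"handled_by": "new", "path": request["path"]}
```

The facade gives each migrated endpoint an instant rollback: remove it from `migrated_paths` and traffic returns to the legacy system.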

Large Refactors

Refactor Scope and Goal Definition

Define clear goals, boundaries, and success criteria before starting a large refactor to prevent scope creep and stalled work.

Act as a senior software engineer helping plan a large refactor before any code changes are made. Your task is to turn a vague refactor intent into a clearly scoped, outcome-driven plan.

CONTEXT: The refactor request is currently broad or unclear (e.g., "clean this up" or "improve the architecture"). Without clear goals, the refactor risks expanding indefinitely or never feeling complete.

YOUR TASK:

  • Translate the stated refactor intent into concrete, measurable goals
  • Define what success looks like when the refactor is finished
  • Identify explicit boundaries to prevent scope creep

GOAL DEFINITION:

  • What specific problems must this refactor solve?
  • What problems are explicitly out of scope?
  • What behavior must remain unchanged?

SCOPE CONTROL:

  • Identify modules, components, or layers included in the refactor
  • Identify areas that must not be touched
  • Call out redesign temptations that should be deferred

COMPLETION CRITERIA:

  • Define objective signals that indicate the refactor is done
  • Specify acceptable follow-up work vs refactor creep

WHAT NOT TO DO:

  • Do NOT refactor and redesign product features at the same time
  • Do NOT expand scope without revisiting goals and timelines
  • Do NOT optimize unrelated code "while you're there"

OUTPUT EXPECTATIONS:

  • A short, written refactor goal statement
  • A clearly defined scope boundary
  • A checklist that signals when the refactor is complete

INPUT:
Refactor request or description: [Insert description]
Constraints: [Deadlines, team size, risk tolerance]

Large Refactors

Refactor Without Reliable Tests

Plan and execute refactors safely when automated tests are missing, brittle, or unreliable.

SAFETY NOTICE: Refactoring without reliable tests significantly increases the risk of regressions. In some cases, the correct recommendation may be to delay refactoring until safety nets are added.

Act as a senior software engineer experienced in refactoring legacy systems with limited or unreliable test coverage. Your task is to determine whether a refactor is safe to attempt and, if so, how to reduce risk before making structural changes.

CONTEXT: The codebase lacks sufficient automated tests, or existing tests are brittle and tightly coupled to implementation details.

RISK ASSESSMENT:

  • Identify critical user-facing and business-critical behaviors
  • Assess current observability (logs, metrics, monitoring)
  • Estimate the blast radius of potential regressions

SAFETY NET STRATEGY:

  • Propose minimal "golden behavior" tests or smoke tests
  • Suggest characterization tests to lock in current behavior
  • Identify areas where refactoring should not begin yet

REFACTOR GUIDANCE:

  • Recommend the smallest possible refactor steps
  • Prioritize isolating code before restructuring it
  • Avoid touching business-critical paths without validation

WHAT NOT TO DO:

  • Do NOT perform large structural changes without observability
  • Do NOT trust brittle tests as proof of safety
  • Do NOT refactor multiple critical paths at once

DECISION OUTPUT:

  • Proceed with refactor / Delay refactor / Reduce scope
  • Justification for the decision
  • Required safety nets before proceeding

INPUT:
Codebase or files: [Insert Code]
Existing tests: [Description or links]
Operational constraints: [Uptime, rollback options]
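A characterization test, as suggested above, asserts what the code currently does rather than what a spec says it should do; a minimal sketch (the discount function is hypothetical):

```python
def legacy_discount(price, tier):
    # Hypothetical legacy function with edge cases we must preserve as-is.
    if tier == "gold":
        return price * 0.8
    if price > 100:
        return price - 5
    return price

def test_characterize_legacy_discount():
    # The expected values below were RECORDED from running the current
    # implementation, not derived from a spec -- the point is to lock in
    # today's behavior so a refactor can be diffed against it.
    assert legacy_discount(100, "gold") == 80.0
    assert legacy_discount(200, "silver") == 195
    assert legacy_discount(50, "silver") == 50
```

If a later refactor changes any of these outputs, the test fails and forces a deliberate decision: was that a regression or an intentional behavior change?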

Incremental Feature Development

Kill Switch and Rollback Planning

Design kill switches and rollback strategies to safely disable new features when issues arise in production.

Act as a senior software engineer responsible for operational safety in production systems. Your task is to design a kill switch and rollback plan for a new feature so that failures can be mitigated quickly without causing widespread impact.

CONTEXT: A new feature is being introduced incrementally, but failures or unexpected behavior may occur once it is exposed to real users.

YOUR TASK:

  1. Identify failure scenarios that require immediate rollback or shutdown
  2. Design kill switches that can disable the feature without redeploying
  3. Ensure rollback actions are fast, safe, and well-understood

KILL SWITCH DESIGN:

  • Enforcement: Where the kill switch is enforced (entry points, service boundaries)
  • Control: Who can trigger it and how
  • Behavior: Expected system behavior when the switch is activated

ROLLBACK STRATEGY:

  • Code rollback vs configuration rollback
  • Data rollback considerations and limitations
  • How to handle partially rolled-out states

WHAT NOT TO DO:

  • Do NOT rely solely on redeployments as a rollback mechanism
  • Do NOT create kill switches that are hard to discover or operate
  • Do NOT ignore data consistency when disabling features

VALIDATION:

  • Explain how to test kill switches safely
  • Describe drills or checks to ensure rollback works under pressure

OUTPUT:

  • Detailed kill switch design
  • Step-by-step rollback procedures
  • Clear ownership and operational playbook

INPUT:
Feature description: [Insert feature details]
Operational constraints: [Uptime requirements, access controls]

Incremental Feature Development

Incremental Feature Development Master Prompt

A senior-level framework for shipping new features incrementally with minimal risk and clear rollback paths.

Act as a senior software engineer experienced in shipping features safely in production systems. Your task is to design an incremental feature development plan that allows new functionality to be introduced, tested, and rolled out without disrupting existing users.

CORE PRINCIPLE: New features must be shipped in small, reversible steps. The system should remain stable, observable, and recoverable at every stage.

FEATURE CONTEXT: A new feature is being added to an existing production system with active users.

INCREMENTAL STRATEGY:

  • Break the feature into the smallest meaningful delivery units
  • Identify which changes can be shipped before the feature is visible
  • Preserve backward compatibility at every step

ISOLATION & CONTROL:

  • Recommend feature flags or configuration gates
  • Define explicit enable, disable, and rollback mechanisms
  • Avoid shared mutable state between old and new behavior

ROLLOUT PLAN:

  • Define rollout stages (internal, beta, partial, full)
  • Specify monitoring signals and success criteria for each stage
  • Define clear rollback triggers

WHAT NOT TO DO:

  • Do NOT ship features that cannot be turned off
  • Do NOT bundle unrelated changes into the same rollout
  • Do NOT remove old behavior until the new path is proven stable

OUTPUT EXPECTATIONS:

  • A staged rollout plan
  • Rollback and kill-switch strategy
  • Explicit risks and mitigations

INPUT:
Feature description: [Insert feature details]
System constraints: [Traffic, uptime, data sensitivity]
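The staged rollout described above is often implemented as a deterministic percentage gate: hashing user and feature together keeps each user's cohort stable as the percentage ramps up (a sketch; the names are hypothetical):

```python
import hashlib

def in_rollout(user_id, feature, percent):
    # Deterministic bucketing: the same user always lands in the same
    # bucket for a given feature, so ramping 1% -> 10% -> 100% only
    # adds users, never flips existing ones back and forth.
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:2], "big")  # 0..65535
    return bucket < percent / 100 * 65536
```

Including the feature name in the hash decorrelates cohorts across features, so the same early users are not exposed to every experiment at once.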

Legacy Code Analysis

Legacy Code Analysis Master Prompt

A senior-level framework to understand, assess, and safely work with unfamiliar or fragile legacy codebases.

Act as a senior software engineer experienced in inheriting, analyzing, and stabilizing legacy production systems. Your task is to analyze the provided codebase to understand its behavior, structure, risks, and constraints before any debugging, refactoring, or feature development is attempted.

CORE PRINCIPLE: Do not change legacy code before you understand what it does, what depends on it, and how it can fail.

CONTEXT: The codebase is unfamiliar, poorly documented, partially outdated, or fragile. The goal is to build a mental model of the system and identify safe and unsafe areas for future work.

SYSTEM UNDERSTANDING PHASE:

  1. Summarize the high-level purpose of the system or module
  2. Identify major components, layers, and responsibilities
  3. Describe how data flows through the system
  4. Highlight critical entry points and hot paths

DEPENDENCY & COUPLING ANALYSIS:

  • Identify internal module dependencies and coupling
  • Call out external services, libraries, and APIs relied upon
  • Highlight hidden or implicit dependencies

RISK & FRAGILITY ASSESSMENT:

  • Identify areas likely to break when modified
  • Highlight business-critical and user-facing paths
  • Detect global state, side effects, or timing assumptions

TESTING & OBSERVABILITY CHECK:

  • Assess existing test coverage and reliability
  • Identify missing safety nets
  • Evaluate logging, monitoring, and debuggability

WHAT NOT TO DO:

  • Do NOT refactor or optimize before understanding behavior
  • Do NOT remove code that "looks unused" without proof
  • Do NOT assume naming or comments reflect reality

OUTPUT EXPECTATIONS:

  • A high-level system map and responsibility breakdown
  • A list of high-risk and low-risk areas
  • Key assumptions and unknowns that require validation
  • Recommendations for safe next steps (debug, test, refactor, or isolate)

PRINCIPLES TO FOLLOW:

  • Prefer observation before modification
  • Optimize for understanding, not elegance
  • Treat legacy code as production-critical until proven otherwise

INPUT:
Codebase or files: [Insert Code]
Context: [What the system is supposed to do, known issues, constraints]

Legacy Code Analysis

Legacy Code Impact & Blast Radius Analysis

Analyze the blast radius of a proposed change in legacy systems and identify exactly what can break before touching production code.

Act as a senior engineer performing a pre-change risk analysis on a fragile legacy production system. Your job is to determine exactly what will break, how badly it will break, and whether this change should be attempted at all.

CORE WARNING: Legacy systems fail silently and punish overconfidence. Assume the code is lying, the documentation is wrong, and the blast radius is larger than it looks.

CHANGE CONTEXT: A modification is being considered in a legacy codebase that has high coupling, poor documentation, and limited test coverage.

PRIMARY OBJECTIVE: Before any code is changed, determine the full impact of this modification and assess the real risk of regressions, data corruption, or production outages.

IMPACT ANALYSIS PHASE:

  1. Identify the exact variable, function, API, schema, or behavior being changed
  2. Trace all direct and indirect callers, consumers, and dependencies
  3. Identify cross-module, cross-service, and cross-database interactions
  4. Highlight hidden control flow such as reflection, dynamic dispatch, callbacks, or configuration-driven behavior

BLAST RADIUS ASSESSMENT:

  • List every component, service, job, or user flow that could be affected
  • Identify business-critical paths that depend on this behavior
  • Call out fan-out effects where a single change propagates widely

RISK CLASSIFICATION: For each affected area, classify:

  • Failure mode (crash, silent data corruption, degraded performance, security exposure)
  • Severity (Low / Medium / High / Catastrophic)
  • Likelihood based on coupling and observability

UNKNOWN & DANGER ZONES:

  • Identify areas where behavior cannot be confidently determined
  • Call out dynamic runtime behavior that static analysis may miss
  • Highlight modules with no tests, no ownership, or production-only execution paths

WHAT NOT TO DO:

  • Do NOT trust search results or static references alone
  • Do NOT assume unused code is actually dead
  • Do NOT proceed if critical flows cannot be traced or validated
  • Do NOT bundle this change with unrelated refactors

DECISION OUTPUT:

  • Clear verdict: Safe to change / High risk / Do NOT proceed
  • Ranked list of top failure risks
  • Areas that must be instrumented, tested, or isolated before any change

SAFETY RECOMMENDATIONS:

  • Suggest characterization or smoke tests to lock in current behavior
  • Recommend feature flags, guards, or shadow paths if change proceeds
  • Define rollback and kill-switch strategy for worst-case failure

FINAL CHECK:

  • If this change fails in production, who will be paged and what breaks first?
  • Is the business impact acceptable if the worst-case scenario occurs?

INPUT:
Proposed change: [Describe the exact change]
Relevant code or files: [Insert Code]
System context: [Business criticality, traffic, data sensitivity]

Legacy Code Analysis

Business Logic & Knowledge Recovery

Reverse-engineer lost business rules and recover the real system behavior from undocumented or misleading legacy code.

Act as a senior engineer tasked with recovering lost business logic from a legacy production system where documentation is missing, outdated, or wrong. Your job is to determine what the system ACTUALLY does, not what people believe it does.

CORE WARNING: In legacy systems, the code is the only reliable source of truth. Comments lie. Documentation rots. Tribal knowledge is incomplete. Assume nothing.

CONTEXT: The system contains business-critical logic that is poorly documented, inconsistently implemented, or understood only by engineers who no longer work here.

PRIMARY OBJECTIVE: Recover the real business rules, invariants, and edge-case behavior embedded in the code so future changes do not accidentally violate critical assumptions.

DISCOVERY PHASE:

  1. Identify all entry points where business rules are enforced (validators, services, controllers, batch jobs)
  2. Trace the full execution paths that implement business decisions
  3. Identify conditional branches that encode policy, pricing, eligibility, limits, or compliance rules
  4. Highlight duplicated or conflicting rule implementations

TRUTH VS BELIEF ANALYSIS:

  • List documented or assumed business rules (from comments, tickets, specs)
  • Compare them against actual code behavior
  • Identify mismatches, silent overrides, or legacy exceptions

EDGE CASE & EXCEPTION MINING:

  • Identify hardcoded thresholds, magic numbers, and special-case flags
  • Highlight historical hacks, grandfathered behavior, or customer-specific logic
  • Call out time-based, locale-based, or data-dependent branching

RISK & FRAGILITY ASSESSMENT:

  • Identify rules that are business-critical or revenue-impacting
  • Highlight logic that is tightly coupled to persistence or external systems
  • Flag behavior that is not covered by tests or monitoring

WHAT NOT TO DO:

  • Do NOT trust comments, TODOs, or outdated specs without code verification
  • Do NOT remove logic that "looks obsolete" without proving it is unused
  • Do NOT simplify conditionals until the full behavior is understood

KNOWLEDGE RECOVERY OUTPUT:

  • A clear list of recovered business rules written in plain language
  • Explicit edge cases and historical exceptions
  • Conflicting or duplicated rules and where they live
  • Unknown or suspicious behavior that requires validation with domain experts

SAFETY RECOMMENDATIONS:

  • Suggest characterization tests to lock in recovered behavior
  • Recommend documentation updates derived directly from code
  • Identify rules that should be isolated behind stable interfaces

FINAL CHECK:

  • If this rule is changed incorrectly, what customer, revenue, or compliance impact occurs?
  • Who in the business must validate this behavior before it is modified?

INPUT:
Code or module containing business logic: [Insert Code]
Known rules or assumptions: [What people think the system does]
Domain context: [Industry, regulations, critical workflows]

Legacy Code Analysis

Dependency & Architecture Mapping

Map hidden dependencies, coupling, and architecture in legacy systems to expose risk, structure, and safe refactor boundaries.

Act as a senior engineer tasked with reverse-engineering the real architecture of a legacy production system. Your job is to uncover how the system is actually wired, where the coupling lives, and which parts are safe or dangerous to change.

CORE WARNING: Legacy architecture is rarely documented and almost never matches diagrams. Assume hidden dependencies exist and that changing the wrong file can cascade across the system.

CONTEXT: The system is large, tightly coupled, poorly documented, and possibly evolved organically over many years.

PRIMARY OBJECTIVE: Build an accurate mental and structural map of the system so future refactors, migrations, or feature changes do not accidentally destabilize production.

ARCHITECTURE DISCOVERY PHASE:

  1. Identify major layers, subsystems, and boundaries (UI, services, domain, data, infra)
  2. Map primary entry points, long-running jobs, background workers, and scheduled tasks
  3. Trace critical request and data flow paths end-to-end

DEPENDENCY ANALYSIS:

  • List direct module-to-module dependencies
  • Identify cross-layer violations and circular references
  • Highlight shared global state, static singletons, or service locators
  • Call out reflection, dynamic loading, configuration-driven wiring, or runtime injection

COUPLING & FRAGILITY ASSESSMENT:

  • Identify tightly coupled clusters that cannot be changed independently
  • Highlight god classes, central coordinators, or "everything depends on this" modules
  • Flag cross-service, cross-repo, or cross-database dependencies that only appear at runtime

SEAM & BOUNDARY DETECTION:

  • Identify natural seams where the system can be split, isolated, or extracted
  • Suggest candidate module boundaries for refactoring or microservice extraction
  • Highlight areas where introducing interfaces or adapters would reduce risk

UNKNOWN & DANGER ZONES:

  • Identify code paths that cannot be statically analyzed
  • Highlight runtime-only wiring and environment-specific behavior
  • Flag modules with no clear ownership or production-only execution

WHAT NOT TO DO:

  • Do NOT trust package structure or folder names to reflect real architecture
  • Do NOT assume dependency graphs are complete without runtime confirmation
  • Do NOT attempt major refactors before this map exists

ARCHITECTURE OUTPUT:

  • A high-level architecture map (components, layers, boundaries)
  • Dependency graph with high-risk coupling highlighted
  • List of fragile hubs and critical shared modules
  • Candidate seams for safe refactoring or migration

SAFETY RECOMMENDATIONS:

  • Identify areas that require isolation before modification
  • Suggest instrumentation or tracing to confirm runtime dependencies
  • Recommend sequencing for future refactors based on coupling risk

FINAL CHECK:

  • If this central module fails, how many systems or customers are affected?
  • Which part of this system would cause the worst outage if changed incorrectly?

INPUT:
Codebase or modules: [Insert Code]
Known architecture assumptions: [What people believe the structure is]
System context: [Monolith, microservices, data stores, critical flows]

Legacy Code Analysis

Understand Before Refactor

Assess whether a legacy system is safe to refactor by exposing hidden risks, missing safety nets, and dangerous assumptions before touching any code.

Act as a senior engineer performing a pre-refactor safety assessment on a fragile legacy production system. Your job is to decide whether refactoring should even begin, and if so, what must be understood or stabilized first to avoid breaking critical behavior.

CORE WARNING: Most legacy refactors fail not because of bad code, but because engineers refactor systems they do not understand. Assume this system can and will break if you proceed blindly.

CONTEXT: The codebase is poorly documented, lightly tested, tightly coupled, or business-critical. A refactor is being considered to improve maintainability, performance, or architecture.

PRIMARY OBJECTIVE: Before any refactoring begins, determine whether the system is sufficiently understood and protected to survive structural change.

SYSTEM UNDERSTANDING CHECK:

  • Summarize what this module or system is responsible for
  • Identify business-critical paths and revenue-impacting behavior
  • List known assumptions, invariants, and undocumented rules

SAFETY NET ASSESSMENT:

  • Evaluate existing test coverage and reliability
  • Identify areas with no tests, no monitoring, or no ownership
  • Assess logging, metrics, and debuggability in production

FRAGILITY & RISK SCAN:

  • Identify tight coupling, global state, and hidden side effects
  • Highlight dynamic behavior, reflection, configuration-driven wiring, or runtime-only paths
  • Flag code that only runs in production or rare edge cases

UNKNOWN TERRITORY:

  • Identify parts of the system whose behavior cannot be confidently explained
  • Call out missing documentation, outdated comments, or contradictory specs
  • Highlight modules with no clear owner or institutional knowledge

WHAT NOT TO DO:

  • Do NOT refactor code you cannot explain end-to-end
  • Do NOT touch business-critical paths without behavior locked in
  • Do NOT combine refactoring with feature changes
  • Do NOT trust green builds as proof of safety

DECISION OUTPUT:

  • Clear verdict: Safe to refactor / Unsafe to refactor / Delay and prepare
  • List of blocking unknowns that must be resolved first
  • Minimum safety steps required before refactoring begins

PREPARATION RECOMMENDATIONS:

  • Suggest characterization tests to capture current behavior
  • Recommend isolating modules, adding guards, or introducing seams
  • Identify monitoring or alerts required before changes
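
A characterization test, as recommended above, pins down whatever the code currently does, even where the behavior looks wrong. This sketch assumes a hypothetical `legacy_discount` function:

```python
# Characterization tests record what legacy code DOES, not what it should do.
# legacy_discount is a hypothetical stand-in for an undocumented legacy function.

def legacy_discount(total: float, code: str) -> float:
    # Imagine this ships in production with no tests and no docs.
    if code == "VIP":
        return round(total * 0.8, 2)
    if total > 100:
        return round(total - 5, 2)
    return total

# Golden outputs captured by running the CURRENT code, then frozen as assertions.
CHARACTERIZATION_CASES = {
    (200.0, "VIP"): 160.0,
    (200.0, ""): 195.0,
    (50.0, ""): 50.0,
    (100.0, ""): 100.0,  # boundary: "> 100" excludes exactly 100 -- locked in as-is
}

def test_characterization():
    for (total, code), expected in CHARACTERIZATION_CASES.items():
        assert legacy_discount(total, code) == expected
```

Once these tests are green, the refactor can change structure freely while keeping the frozen behavior intact.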

FINAL CHECK:

  • If this refactor breaks production, how visible and recoverable is the failure?
  • Is the business impact acceptable if the worst-case scenario occurs?

INPUT:

  • Code or modules to refactor: [Insert Code]
  • Refactor goal: [What you want to change]
  • System context: [Criticality, traffic, users, constraints]

Testing & Test Strategy
Hot

Master Testing & Test Strategy Prompt

A senior-level framework to design test strategies, generate high-quality tests, and protect critical behavior in production systems.

Act as a senior Test Engineer and Quality Architect with extensive experience designing test strategies for large-scale production systems. Your task is to design a testing approach that maximizes behavioral protection, minimizes regression risk, and builds confidence in future changes.

CORE PRINCIPLE: Tests exist to protect behavior and production stability, not to satisfy coverage metrics or tooling requirements.

CONTEXT: The system contains production code that may be business-critical, lightly tested, recently modified, or at risk of regressions. A testing strategy is required before generating or modifying tests.

PRIMARY OBJECTIVE: Design a test strategy that identifies what must be tested, how it should be tested, and where testing effort provides the highest risk reduction.

SYSTEM & RISK ANALYSIS:

  1. Identify business-critical and user-facing behavior
  2. Highlight high-risk modules, complex logic, and recent changes
  3. Identify integration points, external dependencies, and failure-prone areas

TEST STRATEGY DESIGN:

  • Decide the appropriate test levels (unit, integration, end-to-end, contract)
  • Identify which behavior must be protected by automated tests
  • Determine where mocks, fakes, or real dependencies are required

TEST PRIORITIZATION:

  • Rank test targets by business impact and regression risk
  • Identify areas where missing tests pose unacceptable risk
  • Highlight code paths that require strong behavioral guarantees

QUALITY & COVERAGE GUIDANCE:

  • Evaluate current test coverage and its effectiveness
  • Identify coverage gaps that matter vs noise
  • Recommend tests that protect logic, boundaries, and failure modes

WHAT NOT TO DO:

  • Do NOT chase coverage percentages without protecting real behavior
  • Do NOT over-mock critical logic and hide integration bugs
  • Do NOT write tests that only assert implementation details
  • Do NOT generate large volumes of low-value tests

OUTPUT EXPECTATIONS:

  • A clear test strategy tailored to the system
  • Recommended test types and priorities
  • List of high-risk areas requiring immediate test coverage
  • Guidance on test structure, data, and assertions

VALIDATION & MAINTENANCE:

  • Define signals that indicate tests are effective or misleading
  • Suggest how to detect flaky, brittle, or low-value tests
  • Recommend long-term test maintenance practices

FINAL CHECK:

  • If this system regresses tomorrow, which tests will catch it first?
  • Are the most valuable business rules truly protected?

INPUT:

  • Code or modules: [Insert Code]
  • System context: [Criticality, users, data sensitivity]
  • Existing tests (if any): [Describe or insert]

Testing & Test Strategy
Hot

Regression Test Generation

Design and generate regression tests that permanently protect fixed behavior and prevent previously resolved bugs from reappearing.

Act as a senior Test Engineer and Quality Architect responsible for preventing regressions in a production system. Your task is to design and generate regression tests that lock in corrected behavior and ensure the same failure can never occur again.

CORE PRINCIPLE: Every fixed bug must become a permanent test. If a regression is not captured by a test, it WILL return.

CONTEXT: A bug, incident, or incorrect behavior has been identified and fixed in a production or staging system. The goal is to ensure this failure can never silently reappear in the future.

PRIMARY OBJECTIVE: Design regression tests that precisely capture the failing scenario, protect the corrected behavior, and detect any future deviations early in the lifecycle.

FAILURE ANALYSIS PHASE:

  1. Describe the original failure or incorrect behavior in plain language
  2. Identify the exact inputs, system state, and environment that triggered it
  3. Determine whether the failure was deterministic, timing-based, or data-dependent

REGRESSION TEST DESIGN:

  • Create tests that reproduce the original failure reliably
  • Encode the expected correct behavior explicitly
  • Isolate the smallest reproducible scenario that demonstrates the bug
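
As a sketch of these steps, here is a regression test for a hypothetical, already-fixed bug: `parse_amount("1,000")` once returned `1.0` because the comma was treated as a decimal point:

```python
# Minimal sketch: a regression test that encodes a (hypothetical) fixed bug.
# The buggy version did float(text.replace(",", ".")); the fix strips
# thousands separators instead.

def parse_amount(text: str) -> float:
    return float(text.replace(",", ""))

def test_regression_thousands_separator():
    # Exact input from the original failure report -- the smallest reproduction.
    assert parse_amount("1,000") == 1000.0
    # Adjacent variants guarding against similar failures (edge-case analysis).
    assert parse_amount("12,345,678") == 12345678.0
    assert parse_amount("7") == 7.0
```

The test should fail against the buggy implementation and pass against the fix, proving it protects the behavior rather than the patch.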

SCOPE & PLACEMENT:

  • Decide the correct test level (unit, integration, end-to-end)
  • Identify where the regression test should live in the test suite
  • Ensure the test runs early and consistently in CI pipelines

EDGE CASE & VARIANT ANALYSIS:

  • Identify related edge cases or boundary conditions
  • Consider similar inputs, timing windows, or state transitions
  • Suggest additional tests that guard against adjacent failures

WHAT NOT TO DO:

  • Do NOT write regression tests that only assert implementation details
  • Do NOT create brittle tests tied to logging, formatting, or internal ordering
  • Do NOT skip regression tests for "rare" or "one-time" failures

QUALITY & STABILITY CHECK:

  • Ensure the test fails before the fix and passes after it
  • Verify determinism and eliminate flakiness
  • Confirm the test protects behavior, not the patch

OUTPUT EXPECTATIONS:

  • One or more regression tests that reproduce the original failure
  • Clear explanation of the protected behavior
  • Notes on why this test is critical for long-term stability

FINAL CHECK:

  • If this exact bug reappears in six months, will this test catch it?
  • Is the failure signal clear and actionable when the test breaks?

INPUT:

  • Bug or incident description: [Describe the failure]
  • Fixed code or patch: [Insert code]
  • System context: [Environment, data conditions, dependencies]

Testing & Test Strategy
Hot

Find Coverage Gaps & Missing Tests

Analyze existing tests to identify missing protection in high-risk, business-critical, and failure-prone code paths.

Act as a senior Test Engineer and Quality Architect responsible for evaluating the real effectiveness of a test suite. Your task is to identify coverage gaps that matter, expose false confidence, and recommend high-impact tests that reduce regression risk in production systems.

CORE PRINCIPLE: High coverage does not mean high safety. Tests must protect the right behavior, not just execute lines of code.

CONTEXT: The system has an existing automated test suite and reported coverage metrics, but production regressions or uncertainty remain. The goal is to assess whether critical behavior is truly protected.

PRIMARY OBJECTIVE: Identify untested or weakly tested behavior that represents unacceptable risk and recommend targeted tests that maximize regression protection.

SYSTEM & RISK ANALYSIS:

  1. Identify business-critical and user-facing flows
  2. Highlight complex logic, conditional branches, and edge-heavy code
  3. Identify recent changes, bug-prone areas, and historically unstable modules

COVERAGE INTERPRETATION:

  • Analyze line, branch, and path coverage in context
  • Identify areas with misleading or superficial coverage
  • Highlight code executed only by setup, mocks, or trivial assertions

GAP DETECTION:

  • Identify critical logic with no direct assertions
  • Find error paths, exception handling, and failure modes that are untested
  • Highlight integration points and data boundaries with weak coverage

PRIORITIZATION:

  • Rank missing tests by business impact and regression risk
  • Identify coverage gaps that could cause silent data corruption or revenue loss
  • Separate low-risk cosmetic gaps from high-risk behavioral gaps

WHAT NOT TO DO:

  • Do NOT chase coverage percentages blindly
  • Do NOT write tests solely to execute uncovered lines
  • Do NOT over-prioritize trivial getters, setters, or boilerplate
  • Do NOT ignore integration and state-based behavior

RECOMMENDED TEST DESIGN:

  • Suggest high-value unit, integration, or end-to-end tests
  • Propose edge-case, boundary, and failure-mode tests
  • Identify tests that should protect contracts and business invariants
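
One common high-risk gap is an error path that coverage reports as "executed" but that no test ever asserts. A minimal sketch, using a hypothetical `withdraw` function:

```python
# Sketch of a gap-filling test for failure modes. Coverage tools often mark
# error-handling lines as covered via incidental execution even when no test
# asserts the failure behavior directly.

def withdraw(balance: float, amount: float) -> float:
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def test_withdraw_failure_modes():
    # Error paths need direct assertions, not just execution.
    try:
        withdraw(100.0, 150.0)
        assert False, "expected insufficient funds error"
    except ValueError as e:
        assert "insufficient" in str(e)
    try:
        withdraw(100.0, 0)
        assert False, "expected positive-amount error"
    except ValueError as e:
        assert "positive" in str(e)
```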

OUTPUT EXPECTATIONS:

  • List of high-risk uncovered or weakly covered areas
  • Prioritized test recommendations with justification
  • Explanation of why each gap represents meaningful risk

VALIDATION:

  • Describe how new tests reduce regression probability
  • Suggest metrics or signals to verify improved protection

FINAL CHECK:

  • If this system regresses tomorrow, which missing test would have caught it?
  • Are the most valuable business rules truly protected by tests?

INPUT:

  • Codebase or modules: [Insert Code]
  • Existing tests and coverage report: [Insert or describe]
  • System context: [Criticality, users, business impact]

Testing & Test Strategy
Hot

Generate High-Quality Unit Tests

Design and generate high-quality unit tests that protect core behavior and edge cases while remaining maintainable and robust.

Act as a senior Test Engineer and Quality Architect responsible for designing high-quality unit tests for production systems. Your task is to generate unit tests that protect critical behavior, detect regressions early, and remain stable as the code evolves.

CORE PRINCIPLE: Unit tests exist to protect behavior and logic, not to mirror implementation or inflate coverage metrics.

CONTEXT: The code under test may be newly written, recently modified, or historically fragile. The goal is to design unit tests that provide strong behavioral guarantees with minimal brittleness.

PRIMARY OBJECTIVE: Generate unit tests that verify correct behavior across normal flows, edge cases, and failure conditions while remaining readable, deterministic, and maintainable.

TEST DESIGN ANALYSIS:

  1. Identify the public contract and intended behavior of the unit
  2. Enumerate valid inputs, invalid inputs, and boundary conditions
  3. Identify side effects, state changes, and error paths

BEHAVIOR COVERAGE:

  • Test normal and representative use cases
  • Cover boundary values, nulls, empties, and extreme inputs
  • Verify error handling, exceptions, and failure responses

MOCKING & ISOLATION STRATEGY:

  • Identify external dependencies that must be mocked or faked
  • Avoid over-mocking core business logic
  • Prefer testing real logic over internal interactions

ASSERTION QUALITY:

  • Assert outcomes and state, not internal implementation steps
  • Use precise, meaningful assertions
  • Ensure failures produce clear, actionable signals
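
A small sketch of outcome-based assertions, assuming a hypothetical `ShoppingCart`: a brittle test would inspect the internal item list and its ordering; a robust one asserts the observable total:

```python
# Sketch: assert the public contract (the total), not internal state.
# ShoppingCart is a hypothetical example class.

class ShoppingCart:
    def __init__(self):
        self._items = []

    def add(self, name: str, price: float, qty: int = 1):
        self._items.append((name, price, qty))

    def total(self) -> float:
        return round(sum(price * qty for _, price, qty in self._items), 2)

def test_total_reflects_items():
    cart = ShoppingCart()
    cart.add("book", 12.50, 2)
    cart.add("pen", 1.25)
    # Assert the outcome the caller depends on, not _items contents or order.
    assert cart.total() == 26.25
```

This test survives an internal refactor (say, switching `_items` to a dict) as long as the contract holds, which is exactly the stability property described above.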

WHAT NOT TO DO:

  • Do NOT write tests that simply mirror the code line by line
  • Do NOT over-mock and hide real integration bugs
  • Do NOT assert internal variables, call counts, or ordering unless required by contract
  • Do NOT generate large numbers of low-value or redundant tests

OUTPUT EXPECTATIONS:

  • A focused set of unit tests covering core behavior and edge cases
  • Explanation of what each test protects and why it matters
  • Notes on any risky or ambiguous behavior discovered during test design

QUALITY CHECK:

  • Ensure tests fail before fixes and pass after
  • Verify determinism and absence of flakiness
  • Confirm tests protect behavior, not implementation

FINAL CHECK:

  • If this logic changes incorrectly, will these tests catch it immediately?
  • Are the most important invariants and contracts protected?

INPUT:

  • Code or function under test: [Insert Code]
  • Expected behavior: [Describe intent]
  • Dependencies: [Describe external calls or state]

Testing & Test Strategy
Hot

Integration Test Design

Design high-value integration tests that validate real interactions between services, APIs, databases, and external dependencies.

Act as a senior Test Engineer and Quality Architect responsible for designing integration tests for a production system. Your task is to identify and design tests that validate real interactions between components, detect contract violations, and catch failures that unit tests cannot expose.

CORE PRINCIPLE: Most production failures happen at integration boundaries, not inside isolated functions. Integration tests exist to protect contracts, data flow, and system behavior across components.

CONTEXT: The system contains multiple interacting components such as services, APIs, databases, message queues, or external dependencies. Unit tests exist, but real-world failures still occur.

PRIMARY OBJECTIVE: Design integration tests that validate critical interactions, protect system contracts, and expose failures caused by configuration, data, timing, or dependency mismatches.

SYSTEM BOUNDARY ANALYSIS:

  1. Identify core integration points (API endpoints, service calls, database access, messaging)
  2. List external systems, third-party APIs, and shared infrastructure involved
  3. Identify data ownership and cross-component state transitions

CONTRACT & DATA FLOW PROTECTION:

  • Validate request and response schemas
  • Verify required fields, defaults, and backward compatibility
  • Protect serialization, deserialization, and data mapping logic

FAILURE MODE & ENVIRONMENT ANALYSIS:

  • Test timeouts, retries, partial failures, and network errors
  • Validate behavior under missing data, malformed responses, and stale state
  • Identify configuration-sensitive behavior across environments

TEST DESIGN STRATEGY:

  • Prefer realistic data and real dependencies where feasible
  • Use controlled test environments or containers for reproducibility
  • Isolate only truly external or unstable systems with fakes or simulators
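
One way to keep an integration test real yet reproducible is an in-process HTTP server instead of mocks. This sketch assumes a hypothetical `/health` endpoint and JSON contract:

```python
# Sketch: exercise a real HTTP boundary with the stdlib. In practice this
# would target a containerized service; the contract shown is illustrative.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ok", "version": 2}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output clean
        pass

def test_health_contract():
    server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: pick a free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        url = f"http://127.0.0.1:{server.server_port}/health"
        with urllib.request.urlopen(url, timeout=5) as resp:
            assert resp.status == 200
            payload = json.loads(resp.read())
        # Contract assertions: required fields and backward compatibility.
        assert payload["status"] == "ok"
        assert isinstance(payload["version"], int)
    finally:
        server.shutdown()
```

Binding to port 0 and shutting the server down in `finally` keeps the test deterministic and environment-independent, matching the stability goals above.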

PRIORITIZATION:

  • Focus first on business-critical flows and revenue-impacting paths
  • Prioritize integration points with high historical failure rates
  • Identify contracts that, if broken, cause cascading failures

WHAT NOT TO DO:

  • Do NOT over-mock and turn integration tests into unit tests
  • Do NOT test trivial wiring with heavy infrastructure
  • Do NOT create flaky tests without determinism guarantees
  • Do NOT ignore environment-specific behavior

OUTPUT EXPECTATIONS:

  • List of critical integration scenarios to test
  • Proposed test cases with inputs, setup, and expected outcomes
  • Identification of required test infrastructure and data

QUALITY & STABILITY CHECK:

  • Ensure tests are deterministic and environment-independent
  • Verify tests catch contract and configuration regressions
  • Confirm failures are actionable and clearly attributable

FINAL CHECK:

  • If this service misbehaves in production, which integration test would catch it?
  • Are the most fragile boundaries truly protected?

INPUT:

  • Components or services involved: [Insert description]
  • APIs / data contracts: [Insert schemas or interfaces]
  • Environment context: [Local, CI, staging, production-like]

Authentication & Identity
Hot

Authentication & Identity Master Prompt

Design, review, and secure authentication systems to protect identities, prevent account compromise, and ensure correct behavior.

Act as a senior Security Engineer and Identity Architect with extensive experience designing authentication systems for large-scale production environments. Your task is to analyze, design, or review an authentication system to ensure correctness, security, usability, and long-term maintainability.

CORE PRINCIPLE: Authentication systems are part of the security perimeter. A single mistake can lead to account takeover, data breaches, and systemic compromise.

CONTEXT: The system includes login, signup, session or token handling, third-party identity providers, and user identity management. The goal is to ensure identities are authenticated correctly and safely.

PRIMARY OBJECTIVE: Design or review an authentication system that correctly verifies identity, resists common attack vectors, and behaves predictably across environments.

AUTHENTICATION FLOW ANALYSIS:

  1. Identify all authentication entry points (login, signup, refresh, callback, recovery)
  2. Trace the full authentication lifecycle from credential input to identity establishment
  3. Identify where identity is created, verified, persisted, and invalidated

CREDENTIAL & SECRET HANDLING:

  • Evaluate password handling, hashing, salting, and storage
  • Identify hardcoded secrets, API keys, or leaked credentials
  • Assess secret rotation and revocation mechanisms
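
As a minimal sketch of salted hashing with a slow key-derivation function, using only the standard library (production systems should prefer a vetted library such as argon2 or bcrypt; the iteration count is illustrative):

```python
# Sketch: per-credential random salt + PBKDF2 + constant-time verification.
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative; follow current OWASP guidance for PBKDF2

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique per credential -- never reuse or hardcode
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```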

TOKEN & SESSION STRATEGY:

  • Determine session vs token usage and rationale
  • Analyze token lifetimes, refresh behavior, and rotation policies
  • Review session invalidation, logout behavior, and multi-device handling

THREAT & ATTACK SURFACE REVIEW:

  • Identify risks such as brute force, credential stuffing, replay, fixation, and bypass
  • Evaluate CSRF, XSS, open redirect, and callback manipulation risks
  • Assess protection against enumeration and timing attacks
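
Two of the defenses above, constant-time comparison and uniform handling of unknown users, can be sketched as follows. The user store is hypothetical, and plain SHA-256 is used only to keep the example short; real credential storage needs a slow KDF:

```python
# Sketch: enumeration- and timing-resistant login check. Naive code returns
# "unknown user" vs "wrong password" and skips hashing for unknown users,
# leaking account existence through responses and response times.
import hashlib
import hmac
import os

_USERS = {"alice": hashlib.sha256(b"salt" + b"correct horse").digest()}
_DUMMY = hashlib.sha256(b"salt" + os.urandom(16)).digest()  # decoy digest

def login(username: str, password: str) -> bool:
    stored = _USERS.get(username, _DUMMY)  # unknown user still hashes + compares
    candidate = hashlib.sha256(b"salt" + password.encode()).digest()
    ok = hmac.compare_digest(candidate, stored)  # constant-time comparison
    return ok and username in _USERS
```

Both branches do the same work and return the same generic failure, so attackers cannot distinguish "no such account" from "wrong password".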

THIRD-PARTY & FEDERATED IDENTITY:

  • Review OAuth / SSO flow correctness
  • Validate scopes, callbacks, and identity mapping
  • Assess trust boundaries with external providers

FAILURE MODE & EDGE CASE ANALYSIS:

  • Token expiry, clock skew, network failures
  • Partial logins, interrupted flows, inconsistent state
  • Recovery flows and fallback behavior

WHAT NOT TO DO:

  • Do NOT mix authentication and authorization responsibilities
  • Do NOT trust client-side validation for identity decisions
  • Do NOT store or log sensitive credentials in plaintext
  • Do NOT assume happy-path behavior covers security correctness

OUTPUT EXPECTATIONS:

  • A clear description of the authentication architecture
  • Identified risks, weaknesses, and incorrect assumptions
  • Recommended improvements for security, correctness, and usability
  • Guidance on token, session, and identity handling

VALIDATION & SAFETY CHECK:

  • Describe how authentication correctness is verified
  • Identify logging and monitoring needed for auth failures and attacks
  • Suggest tests and audits required for long-term safety

FINAL CHECK:

  • If an attacker targets this system, where is the weakest point?
  • If authentication fails silently, how quickly will it be detected?

INPUT:

  • Authentication flow or code: [Insert Code]
  • System context: [Web, mobile, API, SaaS, enterprise]
  • Identity providers (if any): [OAuth, SSO, custom]
  • Threat model assumptions: [Public, internal, regulated]

Authentication & Identity
Hot

Authentication Threat Model & Attack Surface Review

Identify attack vectors, security weaknesses, and trust boundary failures in authentication systems before they lead to compromise.

Act as a senior Security Engineer and Identity Architect performing a threat model and attack surface review of an authentication system. Your task is to identify how this system could be attacked, bypassed, or abused, and recommend concrete defenses.

CORE PRINCIPLE: Authentication is the primary attack surface of most systems. If identity is compromised, every downstream control becomes irrelevant.

CONTEXT: The system exposes login, signup, token refresh, session handling, recovery flows, and third-party authentication endpoints. The goal is to identify realistic attack paths before attackers do.

PRIMARY OBJECTIVE: Systematically enumerate attack vectors, identify weak trust boundaries, and propose defenses that eliminate or reduce the likelihood of identity compromise.

ATTACK SURFACE ENUMERATION:

  1. List all externally reachable authentication endpoints
  2. Identify all credential entry points and token issuance paths
  3. Map trust boundaries between client, API, identity provider, and storage

THREAT MODELING PHASE:

  • Identify attacker goals (account takeover, session hijack, privilege escalation)
  • Identify attacker capabilities (unauthenticated, authenticated, insider, automated)
  • Enumerate assets at risk (credentials, tokens, sessions, personal data)

COMMON ATTACK VECTORS TO ANALYZE:

  • Brute force and credential stuffing
  • Account enumeration and timing attacks
  • Session fixation and session hijacking
  • Token replay and token leakage
  • CSRF and XSS in authentication flows
  • Open redirects and OAuth callback manipulation
  • Privilege escalation via identity confusion

TOKEN & SESSION ABUSE SCENARIOS:

  • Stolen refresh token reuse
  • Long-lived token exposure
  • Missing rotation or revocation
  • Multi-device session inconsistencies

THIRD-PARTY & FEDERATED RISKS:

  • OAuth misconfiguration and scope abuse
  • Incorrect identity mapping
  • Trust boundary violations with external providers

DEFENSE & CONTROL REVIEW:

  • Rate limiting, lockouts, and bot protection
  • MFA and step-up authentication
  • CSRF tokens and origin validation
  • Secure cookie flags and transport security
  • Logging, alerting, and anomaly detection

WHAT NOT TO DO:

  • Do NOT assume TLS alone protects authentication
  • Do NOT trust client-side enforcement for identity decisions
  • Do NOT ignore low-frequency or "theoretical" attacks
  • Do NOT deploy auth flows without monitoring and alerting

OUTPUT EXPECTATIONS:

  • List of realistic attack scenarios with step-by-step paths
  • Ranked vulnerabilities by severity and likelihood
  • Trust boundary diagram and weak points
  • Concrete defensive controls and mitigations

VALIDATION & VERIFICATION:

  • Suggest security tests and penetration scenarios
  • Identify logs and metrics required to detect attacks
  • Recommend periodic audits and review cadence

FINAL CHECK:

  • If an attacker targets this system tomorrow, what is their easiest path in?
  • Which single flaw would cause the largest identity breach?

INPUT:

  • Authentication endpoints or flows: [Insert description or code]
  • Token / session design: [JWT, cookies, refresh, rotation]
  • Identity providers (if any): [OAuth, SSO]
  • Deployment context: [Public, internal, regulated]

Authentication & Identity
Hot

Token & Session Lifecycle Analysis

Analyze token and session lifecycles to detect expiry bugs, leakage risks, rotation failures, and invalidation issues.

Act as a senior Security Engineer and Identity Architect responsible for reviewing the full lifecycle of tokens and sessions in a production authentication system. Your task is to ensure tokens and sessions are issued, refreshed, rotated, and invalidated correctly without enabling account takeover.

CORE PRINCIPLE: Most authentication failures are lifecycle failures. Tokens that live too long, refresh incorrectly, or fail to invalidate are the primary cause of compromise.

CONTEXT: The system uses sessions, JWTs, refresh tokens, or a combination of these to represent authenticated identities across web, mobile, and API clients.

PRIMARY OBJECTIVE: Verify that identity tokens and sessions are issued safely, expire predictably, rotate securely, and are revoked correctly across all devices.

LIFECYCLE MAPPING:

  1. Trace how authentication tokens or sessions are created
  2. Identify where they are stored (cookies, memory, local storage, headers)
  3. Trace refresh, renewal, and rotation flows
  4. Identify all invalidation and logout paths

EXPIRY & ROTATION ANALYSIS:

  • Evaluate access token lifetime and refresh token lifetime
  • Verify rotation on refresh and reuse detection
  • Identify long-lived tokens or permanent sessions
  • Analyze clock skew and time synchronization risks

INVALIDATION & LOGOUT BEHAVIOR:

  • Verify logout invalidates tokens and sessions server-side
  • Analyze multi-device and multi-session consistency
  • Identify orphaned, leaked, or non-revocable tokens

STORAGE & TRANSPORT SAFETY:

  • Review cookie flags (HttpOnly, Secure, SameSite)
  • Analyze local storage and in-memory risks
  • Verify TLS usage and header exposure
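
Setting the cookie flags above can be sketched with the standard library; the cookie name and lifetime are illustrative, the flags are the point:

```python
# Sketch: a session cookie with the recommended hardening flags.
from http.cookies import SimpleCookie

def session_cookie_header(session_id: str) -> str:
    cookie = SimpleCookie()
    cookie["session"] = session_id
    cookie["session"]["httponly"] = True   # not readable by JavaScript (XSS)
    cookie["session"]["secure"] = True     # only sent over TLS
    cookie["session"]["samesite"] = "Lax"  # limits cross-site sending (CSRF)
    cookie["session"]["path"] = "/"
    cookie["session"]["max-age"] = 1800    # 30-minute session, illustrative
    return cookie["session"].OutputString()
```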

ABUSE & FAILURE SCENARIOS:

  • Refresh token replay
  • Token theft and reuse
  • Session fixation
  • Partial invalidation and ghost sessions
  • Environment-specific expiry behavior

WHAT NOT TO DO:

  • Do NOT use long-lived access tokens without rotation
  • Do NOT rely on client-side logout for invalidation
  • Do NOT store sensitive tokens in insecure storage
  • Do NOT assume expiry alone prevents abuse

OUTPUT EXPECTATIONS:

  • Full lifecycle diagram of tokens and sessions
  • Identified weaknesses in expiry, rotation, or invalidation
  • Ranked risks by likelihood and impact
  • Concrete recommendations for safer lifetimes and rotation

VALIDATION & MONITORING:

  • Suggest tests for expiry, rotation, and invalidation
  • Recommend logs and alerts for suspicious token behavior
  • Identify metrics for session anomalies and reuse

FINAL CHECK:

  • If a token leaks today, how long can an attacker use it?
  • Can all active sessions for a user be invalidated instantly?

INPUT:

  • Token and session design: [JWT, cookies, refresh, sessions]
  • Lifetimes and rotation rules: [Describe]
  • Storage method: [Cookies, headers, local storage]
  • Deployment context: [Web, mobile, API, multi-region]

Authentication & Identity
Hot

MFA & Account Recovery Review

Review MFA and account recovery flows to prevent bypasses, recovery attacks, and identity takeover in production systems.

Act as a senior Security Engineer and Identity Architect reviewing the design and implementation of multi-factor authentication (MFA) and account recovery flows. Your task is to ensure these flows strengthen security rather than becoming the easiest path to account takeover.

CORE PRINCIPLE: Most account takeovers do not break login. They bypass it through recovery, support flows, or weak MFA implementations.

CONTEXT: The system uses MFA, backup codes, password reset, account recovery, or support-assisted identity recovery to restore access. These flows operate under stress and are prime targets for attackers.

PRIMARY OBJECTIVE: Ensure MFA and recovery flows verify identity correctly, resist social engineering and automation, and do not allow attackers to bypass primary authentication controls.

MFA FLOW ANALYSIS:

  1. Identify all MFA methods supported (TOTP, SMS, email, push, WebAuthn)
  2. Trace MFA challenge issuance, verification, and failure handling
  3. Identify when MFA is enforced, skipped, or downgraded

RECOVERY & RESET PATHS:

  • Trace password reset, email recovery, and account unlock flows
  • Identify identity proofing requirements before recovery
  • Analyze recovery token generation, expiry, and reuse protection

BYPASS & DOWNGRADE RISKS:

  • Identify fallback paths that skip MFA
  • Analyze device trust, remember-me, and step-down behavior
  • Detect support or admin flows that override identity verification

ATTACK & ABUSE SCENARIOS:

  • SIM swap and SMS interception
  • Phishing of OTP and push fatigue attacks
  • Recovery token replay or brute force
  • Account enumeration via reset endpoints
  • Social engineering via support channels

RATE LIMITING & ANTI-AUTOMATION:

  • Verify throttling on OTP, reset, and recovery endpoints
  • Identify missing lockouts or CAPTCHA protections
  • Analyze detection of repeated failed recovery attempts

WHAT NOT TO DO:

  • Do NOT allow account recovery with weaker verification than login
  • Do NOT allow unlimited OTP or reset attempts
  • Do NOT reuse recovery tokens or allow long-lived reset links
  • Do NOT let support bypass identity verification informally

OUTPUT EXPECTATIONS:

  • Full MFA and recovery flow diagrams
  • Identified bypass paths and downgrade risks
  • Ranked vulnerabilities by likelihood and impact
  • Concrete recommendations for stronger verification and controls

VALIDATION & MONITORING:

  • Suggest tests for MFA enforcement and recovery correctness
  • Recommend logging for recovery attempts and MFA failures
  • Identify alerts for suspicious recovery and downgrade behavior

FINAL CHECK:

  • If an attacker cannot guess the password, can they still recover the account?
  • Is recovery harder than login, or accidentally easier?

INPUT:

  • MFA methods supported: [TOTP, SMS, email, push, WebAuthn]
  • Recovery flows: [Password reset, email recovery, support]
  • Policies: [Lockout rules, retries, device trust]
  • Threat model: [Public, regulated, high-value accounts]