Defect Life Cycle: Complete Guide to Bug Tracking and Resolution

Parul Dhingra - Senior Quality Analyst

Updated: 7/15/2025


The defect life cycle (also called bug life cycle) is the sequence of states a software defect passes through from initial discovery until final closure. Every defect follows a defined path: it gets reported, assigned, fixed, verified, and closed.

Understanding this cycle matters because it directly affects how quickly your team resolves issues and how reliably your software performs. Teams that manage defects well ship better products faster; teams that do not manage them well end up with confused developers, frustrated testers, and buggy releases.

This guide covers everything you need to manage defects effectively: the standard states and transitions, how to distinguish severity from priority, writing defect reports that developers can actually use, and selecting the right tools for your team.

Quick Reference: Defect Life Cycle States

| State | What It Means | Who Owns It | Next Possible States |
| --- | --- | --- | --- |
| New | Defect just reported, awaiting review | Tester/QA Lead | Open, Rejected, Duplicate |
| Open | Accepted and ready for assignment | QA Lead/Manager | Assigned, Deferred |
| Assigned | Developer working on fix | Developer | Fixed, Deferred, Not a Bug |
| Fixed | Code change complete | Developer | Verified, Reopened |
| Verified | Fix confirmed working | Tester | Closed, Reopened |
| Closed | Defect resolved permanently | Tester | Reopened |
| Reopened | Issue returned after fix | Tester | Assigned |
| Rejected | Not a valid defect | QA Lead | Closed |
| Duplicate | Already reported elsewhere | QA Lead | Closed |
| Deferred | Postponed to future release | Manager | Open |

What is the Defect Life Cycle?

The defect life cycle is a structured process that tracks every bug from the moment someone finds it until it is permanently resolved. Think of it as a workflow that ensures no defect gets lost, ignored, or fixed incorrectly.

Every software team deals with defects. The difference between high-performing teams and struggling ones often comes down to how systematically they handle those defects. A clear life cycle provides:

Accountability: Every defect has an owner at each stage. When a bug sits in "Assigned" status for two weeks, everyone knows who to ask about it.

Visibility: Managers can see how many defects are open, how long fixes take, and where bottlenecks occur.

Consistency: New team members can follow the same process as veterans. Defects do not slip through cracks when someone goes on vacation.

Traceability: Auditors, product owners, and stakeholders can track what was found, when it was fixed, and how it was verified.

The life cycle connects directly to the broader Software Testing Life Cycle (STLC). During test execution, testers identify defects. Those defects feed into the defect life cycle. When defects get fixed and verified, test execution can continue or conclude.

Standard Defect States Explained

Most defect tracking systems use similar states, though naming conventions vary. Here are the states you will encounter in most organizations:

New

A tester finds something wrong and logs it. The defect exists in the system but no one has reviewed it yet.

At this point, the defect report should contain:

  • Clear summary describing the issue
  • Steps to reproduce
  • Expected vs actual results
  • Environment details
  • Screenshots or logs if relevant

Open

A QA lead or triage team reviews the defect and confirms it is valid. The defect is real, reproducible, and worth fixing. It now enters the queue for assignment.

Some teams skip this state and go directly from New to Assigned. The choice depends on your team size and how much triage you need.

Assigned

The defect has an owner. A developer (or sometimes a team) is now responsible for investigating and fixing it.

The assigned developer should:

  • Review the defect report and reproduction steps
  • Investigate the root cause
  • Implement a fix
  • Write or update unit tests
  • Move the defect to Fixed status

Fixed

The developer has written code to address the issue and committed it to the codebase. This does not mean the defect is resolved; it means a fix exists and awaits verification.

Important: A defect in Fixed status has not been verified yet. The fix might not work, might introduce new problems, or might only partially address the issue.

Verified

A tester has confirmed the fix works. They followed the original reproduction steps and the defect no longer occurs. They also checked that the fix did not break anything else nearby.

Verification typically includes:

  • Retesting the exact scenario from the original report
  • Testing related functionality
  • Running relevant regression tests

Closed

The defect is done. The issue was fixed, verified, and no further action is needed. Closed defects remain in the system for historical reference and metrics.

Reopened

Sometimes a "fixed" defect comes back. Maybe the fix did not work in all environments. Maybe a subsequent code change reintroduced the bug. Maybe the tester found the fix was incomplete.

Reopening a defect signals that more work is needed. The defect goes back to Assigned status and the cycle continues.

Rejected

The triage team determined this is not actually a defect. Common reasons:

  • Working as designed (the behavior is intentional)
  • Cannot reproduce (no one else can make it happen)
  • Invalid report (missing information, unclear description)

Rejected defects should include an explanation so the reporter understands why.

Duplicate

This defect was already reported under a different ID. The duplicate gets closed with a reference to the original defect.

Good duplicate detection prevents wasted effort. Testers should search existing defects before logging new ones.
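
A lightweight way to support that search is fuzzy matching on defect summaries. Here is a minimal sketch using Python's standard difflib module; the similarity cutoff and the example summaries are assumptions to illustrate the idea, not a production-grade deduplicator.

```python
import difflib

def likely_duplicates(new_summary: str, existing_summaries: list[str],
                      cutoff: float = 0.6) -> list[str]:
    """Rank existing defect summaries by similarity to a newly drafted one."""
    return difflib.get_close_matches(new_summary, existing_summaries,
                                     n=5, cutoff=cutoff)

existing = [
    "Submit button on checkout page does not respond to clicks",
    "Search returns wrong results for quoted phrases",
]
# The first summary is close enough to be flagged as a likely duplicate.
print(likely_duplicates("Submit button on checkout page unresponsive to clicks",
                        existing))
```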

Deferred

The defect is valid but will not be fixed in the current release. Reasons include:

  • Low priority compared to other work
  • Requires significant architectural changes
  • Affects a feature being redesigned anyway

Deferred defects need a target release or review date. Without one, they become a graveyard of forgotten issues.

State Transitions: How Defects Move Through the Cycle

Understanding which transitions are valid helps teams maintain process discipline. Here are the standard transitions:

Primary Flow (Happy Path)

New → Open → Assigned → Fixed → Verified → Closed

This is the ideal path. A defect gets reported, triaged, assigned, fixed, verified, and closed. No complications.

Rejection Flow

New → Rejected → Closed

The triage team determines the report is not a valid defect. It gets rejected with an explanation and closed.

Duplicate Flow

New → Duplicate → Closed

The defect was already reported. Link to the original and close.

Deferral Flow

New → Open → Deferred

Then later:

Deferred → Open → Assigned → Fixed → Verified → Closed

The defect is acknowledged but postponed. When the team decides to address it, the defect moves back to Open.

Reopen Flow

Verified → Reopened → Assigned → Fixed → Verified → Closed

Or even:

Closed → Reopened → Assigned → Fixed → Verified → Closed

Something went wrong with the fix. The defect needs another round of work.

Invalid Transitions

Some transitions should never happen:

  • New directly to Fixed: Defects need review before developers work on them
  • Assigned directly to Closed: Every fix needs verification
  • Fixed directly to Closed: Testers must verify fixes before closure
  • Rejected to Assigned: Rejected defects need to go back to New if they are reopened for reconsideration

Defect tracking tools can enforce these rules through workflow configuration.
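
As a sketch of that kind of enforcement, the valid transitions can be written down as a lookup table and checked on every state change. The state names follow the quick-reference table above; the `move` helper is a hypothetical illustration, not any particular tool's API.

```python
# Allowed transitions per the state table above. States with no entry
# (e.g. "Not a Bug") are treated as terminal here.
VALID_TRANSITIONS = {
    "New": {"Open", "Rejected", "Duplicate"},
    "Open": {"Assigned", "Deferred"},
    "Assigned": {"Fixed", "Deferred", "Not a Bug"},
    "Fixed": {"Verified", "Reopened"},
    "Verified": {"Closed", "Reopened"},
    "Closed": {"Reopened"},
    "Reopened": {"Assigned"},
    "Rejected": {"Closed"},
    "Duplicate": {"Closed"},
    "Deferred": {"Open"},
}

def move(current: str, target: str) -> str:
    """Return the new state, or raise if the transition is invalid."""
    if target not in VALID_TRANSITIONS.get(current, set()):
        raise ValueError(f"Invalid transition: {current} -> {target}")
    return target

# The happy path goes through cleanly:
state = "New"
for nxt in ["Open", "Assigned", "Fixed", "Verified", "Closed"]:
    state = move(state, nxt)

# Invalid transitions are blocked:
# move("Fixed", "Closed")  # raises ValueError: fixes need verification first
```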

Severity vs Priority: Understanding the Difference

These two fields cause more confusion than any other part of defect management. They measure different things and should be set independently.

Severity: How Bad Is the Impact?

Severity describes the technical impact of the defect on the system. It answers: "How much damage does this bug cause?"

| Severity Level | Description | Examples |
| --- | --- | --- |
| Critical | System crash, data loss, security breach, complete feature failure | Application crashes on startup; user passwords exposed; payment processing fails |
| Major | Major feature broken, significant functionality lost, no workaround | Cannot save documents; search returns wrong results; export feature broken |
| Minor | Feature works but with problems, workaround exists | Slow performance on certain screens; awkward workflow; cosmetic calculation errors |
| Trivial | Cosmetic issues, typos, minor UI problems | Misspelled label; alignment off by a few pixels; inconsistent font |

Severity is typically set by the tester who discovers the defect. It requires understanding the technical impact but not business context.

Priority: How Soon Should We Fix It?

Priority describes the business urgency of fixing the defect. It answers: "How quickly do we need this fixed?"

| Priority Level | Description | Typical Timeline |
| --- | --- | --- |
| Urgent | Fix immediately, blocks release or critical business function | Within hours |
| High | Fix soon, significant business impact | Within current sprint |
| Medium | Should be fixed, but other work takes precedence | Within 1-2 sprints |
| Low | Fix when convenient, minimal business impact | When resources allow |

Priority is typically set or adjusted by product owners, managers, or triage teams. It requires understanding business context, release schedules, and resource constraints.

Why They Differ

Consider these examples:

High Severity, Low Priority: The application crashes when a user enters a name longer than 500 characters. Technically severe (crash), but practically rare (who has a 500-character name?). Fix it eventually, but not urgent.

Low Severity, High Priority: A typo in the company name on the main landing page. Technically trivial (cosmetic), but business critical (brand damage, customer perception). Fix it today.

High Severity, High Priority: Users cannot complete checkout. Both technically severe (core feature broken) and business critical (lost revenue). Drop everything.

Low Severity, Low Priority: An obscure settings screen has inconsistent button colors. Neither technically severe nor business urgent. Add to backlog.

Best Practice: Testers set severity based on technical impact. Product owners adjust priority based on business needs. Both fields inform scheduling decisions.
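
To make the separation concrete, here is a minimal sketch of a defect record that carries both fields independently; the `Defect` dataclass and enum names are illustrative, not any specific tracker's schema.

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):   # technical impact: set by the tester
    CRITICAL = 1
    MAJOR = 2
    MINOR = 3
    TRIVIAL = 4

class Priority(Enum):   # business urgency: set by the product owner
    URGENT = 1
    HIGH = 2
    MEDIUM = 3
    LOW = 4

@dataclass
class Defect:
    summary: str
    severity: Severity  # neither field is derived from the other
    priority: Priority

# The landing-page typo from the examples above: trivial impact, urgent fix.
typo = Defect("Company name misspelled on landing page",
              Severity.TRIVIAL, Priority.URGENT)
```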

Writing Effective Defect Reports

A good defect report helps developers fix issues quickly. A bad one wastes everyone's time with back-and-forth questions. Here is what makes the difference:

Essential Components

Summary/Title: One line that captures what is wrong. Be specific.

  • Bad: "Button does not work"
  • Good: "Submit button on checkout page does not respond to clicks after adding item to cart"

Steps to Reproduce: Numbered steps that anyone can follow to see the defect.

1. Log in as user "testuser@example.com"
2. Navigate to Products > Electronics
3. Add "Wireless Mouse" to cart
4. Click "Proceed to Checkout"
5. Click "Submit Order" button

Expected Result: What should happen when following those steps.

"Order confirmation page should display with order number"

Actual Result: What actually happens.

"Nothing happens. Button appears clicked (visual feedback) but page does not change. No error message appears."

Environment Details: Where you found the defect.

  • Browser/OS: Chrome 120 on Windows 11
  • Environment: Staging (staging.example.com)
  • Build/Version: v2.4.3
  • User role: Standard customer account

Evidence: Proof that supports your report.

  • Screenshots showing the issue
  • Console logs with error messages
  • Network tab showing failed requests
  • Video recording of the problem

Common Report Problems

Problem: Cannot reproduce

This usually means missing environment details or incomplete steps. Include:

  • Exact browser version, not just "Chrome"
  • Any browser extensions that might interfere
  • Network conditions (especially for mobile testing)
  • Specific test data used
  • User account permissions

Problem: Steps are unclear

Write steps so someone who has never seen your application could follow them. Avoid:

  • "Navigate to the settings page" (which settings page? how do I get there?)
  • "Enter some data" (what data? what format?)
  • "It breaks" (what specifically happens?)

Problem: Missing context

Include relevant background:

  • Did this work before? When did it stop?
  • Does it affect all users or specific accounts?
  • Is it consistent or intermittent?
  • Any recent changes that might relate?

Defect Report Template

Summary: [One-line description of the issue]

Severity: [Critical/Major/Minor/Trivial]
Priority: [Urgent/High/Medium/Low]

Environment:
- Application version:
- Browser/Device:
- Operating System:
- Test Environment:
- User Account:

Steps to Reproduce:
1.
2.
3.

Expected Result:
[What should happen]

Actual Result:
[What actually happens]

Additional Information:
- Frequency: [Always/Sometimes/Rarely]
- Related defects:
- Attachments: [Screenshots, logs, videos]
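
One way to keep reports complete is a simple field check before submission. The sketch below assumes reports are passed around as dictionaries whose keys mirror the template above; the key names are assumptions for illustration.

```python
REQUIRED_FIELDS = [
    "summary", "severity", "priority", "environment",
    "steps_to_reproduce", "expected_result", "actual_result",
]

def missing_fields(report: dict) -> list[str]:
    """Return template fields that are empty or absent."""
    return [field for field in REQUIRED_FIELDS if not report.get(field)]

report = {
    "summary": "Submit button unresponsive on checkout page",
    "severity": "Major",
    "priority": "High",
    "environment": "Chrome 120 / Windows 11 / staging / v2.4.3",
    "steps_to_reproduce": ["Log in", "Add item to cart", "Click Submit Order"],
    "expected_result": "Order confirmation page with order number",
    "actual_result": "",  # left blank: the check below catches it
}
print(missing_fields(report))  # ['actual_result']
```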

Defect Tracking Tools Comparison

Your choice of defect tracking tool affects how well your team can manage the defect life cycle. Here are the major options:

Jira

Best for: Teams already using Atlassian products, enterprise organizations, Agile teams

Jira dominates the market for good reason. It offers:

  • Highly customizable workflows
  • Integration with Confluence, Bitbucket, and other Atlassian tools
  • Robust reporting and dashboards
  • Plugins for nearly any need

Considerations: Can be complex to configure. Pricing adds up for larger teams. Some teams find it overkill for simple needs.
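
For teams automating defect intake, here is a minimal sketch of logging a bug through Jira's REST API (the v2 create-issue endpoint); the site URL, credentials, and project key are placeholders you would replace with your own.

```python
import requests

JIRA_URL = "https://your-site.atlassian.net"   # placeholder site
AUTH = ("you@example.com", "your-api-token")   # placeholder credentials

def create_bug(summary: str, description: str, project_key: str = "QA") -> str:
    """Create a Bug issue in Jira and return its key (e.g. 'QA-123')."""
    payload = {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Bug"},
        }
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue",
                         json=payload, auth=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()["key"]
```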

Azure DevOps

Best for: Microsoft-centric organizations, teams using Azure cloud services

Azure DevOps provides:

  • Tight integration with Visual Studio and VS Code
  • Built-in CI/CD pipelines
  • Work item tracking that covers defects, tasks, and user stories
  • Good balance of features and simplicity

Considerations: Less flexible than Jira for custom workflows. Strongest when used with other Microsoft tools.

Bugzilla

Best for: Open-source projects, organizations wanting free/self-hosted solutions

Bugzilla is:

  • Completely free and open source
  • Battle-tested (used by Mozilla, Apache, and others)
  • Lightweight and straightforward
  • Self-hosted, giving you full control

Considerations: UI feels dated. Fewer integrations than commercial options. Requires technical staff to maintain.

Linear

Best for: Startups, small teams wanting modern UX, keyboard-driven workflows

Linear offers:

  • Clean, fast interface
  • Opinionated workflows that reduce configuration
  • Good GitHub integration
  • Excellent keyboard shortcuts

Considerations: Less customizable than Jira. May not scale for very large organizations. Relatively new product.

GitHub Issues / GitLab Issues

Best for: Development teams wanting everything in one place, open-source projects

Using your code repository's built-in issue tracking provides:

  • Tight connection between code and issues
  • No additional tool to manage
  • Good for smaller projects

Considerations: Limited workflow customization. Reporting is basic. Not designed primarily for defect management.

Tool Selection Criteria

When choosing a tool, consider:

  1. Team size: Enterprise tools may overwhelm small teams. Lightweight tools may not scale.

  2. Existing ecosystem: Integration with your current tools reduces friction.

  3. Workflow needs: How customizable do your workflows need to be?

  4. Reporting requirements: What metrics do you need to track and report?

  5. Budget: Costs range from free to significant per-user fees.

  6. Self-hosted vs cloud: Do you need to host data internally?

Defect Metrics That Matter

Measuring defect data helps you understand team performance and process effectiveness. Focus on metrics that drive decisions, not vanity numbers.

Defect Discovery Rate

What it measures: How many defects are found over time

Why it matters: Spikes indicate problem areas. Declining rates might mean quality is improving or testing is less thorough.

How to use it: Track by module, release, or sprint. Investigate unusual patterns.

Mean Time to Resolution (MTTR)

What it measures: Average time from defect creation to closure

Why it matters: Long resolution times indicate bottlenecks. Short times might indicate defects are not being properly verified.

How to use it: Break down by severity and priority. High-priority defects should have lower MTTR.

Defect Aging

What it measures: How long defects sit in each state

Why it matters: Defects stuck in "Assigned" suggest developer overload. Defects stuck in "Fixed" suggest testing bottlenecks.

How to use it: Set thresholds for each state. Alert when defects exceed them.
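
A minimal sketch of such an alert, assuming each defect record carries its current state and the timestamp when it entered that state; the thresholds are illustrative placeholders, not recommendations.

```python
from datetime import datetime, timedelta

# Hypothetical per-state aging limits; tune these to your team's SLAs.
AGING_THRESHOLDS = {
    "Open": timedelta(days=5),
    "Assigned": timedelta(days=10),
    "Fixed": timedelta(days=3),   # stuck here suggests a testing bottleneck
}

def aging_violations(defects, now=None):
    """Yield (id, state, days) for defects that have sat in a state too long."""
    now = now or datetime.now()
    for d in defects:
        limit = AGING_THRESHOLDS.get(d["state"])
        if limit and now - d["state_entered"] > limit:
            yield d["id"], d["state"], (now - d["state_entered"]).days

defects = [{"id": "DEF-101", "state": "Fixed",
            "state_entered": datetime.now() - timedelta(days=7)}]
for defect_id, state, days in aging_violations(defects):
    print(f"{defect_id} has been in {state} for {days} days")
```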

Reopen Rate

What it measures: Percentage of defects that get reopened after being marked fixed

Why it matters: High reopen rates indicate incomplete fixes, inadequate testing, or unclear requirements.

How to use it: Track by developer and module. Investigate patterns.

Defect Density

What it measures: Defects per unit of code (often per thousand lines)

Why it matters: Helps compare quality across modules of different sizes. High-density modules need attention.

How to use it: Track over releases. Aim for decreasing density as code matures.
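
Since density is just a ratio, a quick sketch with hypothetical numbers: 18 defects in a 45,000-line module works out to 0.4 defects per KLOC.

```python
def defect_density(defect_count: int, lines_of_code: int) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defect_count / (lines_of_code / 1000)

print(defect_density(18, 45_000))  # 0.4
```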

Defect Leakage

What it measures: Defects found in production vs. defects found before release

Why it matters: Defects in production are expensive. This measures testing effectiveness.

Formula: (Production Defects / Total Defects) x 100

Target: Lower is better. Many teams aim for less than 10% leakage.
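
Leakage, MTTR, and reopen rate are all straightforward to compute from exported defect records. A minimal sketch, assuming each record carries creation and closure dates plus two boolean flags; the two sample records are made up for illustration.

```python
from datetime import datetime

defects = [
    {"id": "DEF-1", "created": datetime(2025, 6, 1),
     "closed": datetime(2025, 6, 4), "reopened": False, "in_production": False},
    {"id": "DEF-2", "created": datetime(2025, 6, 2),
     "closed": datetime(2025, 6, 10), "reopened": True, "in_production": True},
]

closed = [d for d in defects if d["closed"]]

# MTTR: average days from creation to closure.
mttr = sum((d["closed"] - d["created"]).days for d in closed) / len(closed)

# Reopen rate: share of closed defects that bounced back at least once.
reopen_rate = 100 * sum(d["reopened"] for d in closed) / len(closed)

# Leakage: (production defects / total defects) x 100.
leakage = 100 * sum(d["in_production"] for d in defects) / len(defects)

print(f"MTTR {mttr:.1f} days, reopen rate {reopen_rate:.0f}%, leakage {leakage:.0f}%")
# MTTR 5.5 days, reopen rate 50%, leakage 50%
```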

Warning: Do not use defect counts to evaluate individual performance. This creates perverse incentives: testers might log trivial issues to inflate numbers, developers might push back on valid defects to keep counts low.

Common Problems and How to Fix Them

Every team encounters defect management challenges. Here are the most common ones and practical solutions:

Problem: Defects Sit Unassigned

Symptoms: Growing backlog of Open defects. Testers frustrated that their findings are ignored.

Causes:

  • No clear owner for triage
  • Developers overloaded with feature work
  • Defects not considered important enough

Solutions:

  • Establish a daily or weekly triage meeting
  • Assign a rotation for defect triage duty
  • Set SLAs for how long defects can stay unassigned
  • Make defect resolution part of sprint planning

Problem: Defects Keep Getting Reopened

Symptoms: Same defects cycling between Fixed and Reopened. Developer frustration.

Causes:

  • Unclear defect descriptions
  • Incomplete fixes
  • Environment differences between development and test
  • Inadequate unit testing

Solutions:

  • Require reproducible steps in every defect report
  • Have developers verify their own fix before marking Fixed
  • Standardize environments
  • Include unit tests with every fix

Problem: "Cannot Reproduce" Responses

Symptoms: Developers close defects as "cannot reproduce." Testers know the issues are real.

Causes:

  • Missing environment details in reports
  • Intermittent issues
  • Test data differences
  • Environmental configuration differences

Solutions:

  • Require mandatory fields in defect reports
  • Have testers include video recordings
  • Pair testers with developers to reproduce together
  • Document test data and environment setup

Problem: Duplicate Defects

Symptoms: Multiple defects for the same issue. Wasted effort tracking and fixing.

Causes:

  • Poor searchability in defect tool
  • Testers not checking for existing defects
  • Defect summaries not descriptive enough

Solutions:

  • Train testers to search before logging
  • Establish naming conventions
  • Use defect categorization/labeling
  • Regular duplicate cleanup reviews

Problem: Defects Never Get Closed

Symptoms: Large numbers of ancient defects in Deferred or Open status.

Causes:

  • No process for reviewing old defects
  • Fear of closing defects without fixing them
  • Deferred defects forgotten

Solutions:

  • Quarterly review of old defects
  • Close defects that are no longer relevant
  • Set expiration policies for Deferred status
  • Include deferred defects in release planning

Problem: Severity/Priority Misuse

Symptoms: Everything is marked Critical/Urgent. Prioritization loses meaning.

Causes:

  • Confusion about what severity and priority mean
  • Fear that low-priority defects will never be fixed
  • No calibration of what each level means

Solutions:

  • Document clear definitions with examples
  • Separate who sets severity (tester) from who sets priority (product owner)
  • Regular calibration sessions with examples
  • Review and adjust priority mismatches in triage

Integrating Defect Management with Your Workflow

Defect management does not exist in isolation. It connects to your broader development and testing processes.

Agile Integration

In Agile teams, defects compete for attention with new features. Integrate them effectively:

Sprint Planning: Include defect fix time in capacity planning. Do not assume defects will fit in "spare time."

Backlog Grooming: Review and prioritize defects alongside user stories. Defects are work items too.

Definition of Done: Include "no critical defects" as a completion criterion for features.

Daily Standups: Mention defect work. "I'm investigating DEF-234" is just as valid as "I'm working on story 456."

CI/CD Integration

Modern pipelines can catch defects before they become defects:

Automated Testing: Tests in the pipeline catch regressions before code merges. A failed test is better than a defect report.

Static Analysis: Code scanning tools flag potential issues during code review.

Deployment Gates: Require defect counts below thresholds before deploying.
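
As a sketch, a gate can be a simple threshold check over the list of open defects; the limits here are placeholders, not recommendations.

```python
def release_gate(open_defects, max_critical=0, max_major=3) -> bool:
    """Return True when the build may be deployed."""
    critical = sum(d["severity"] == "Critical" for d in open_defects)
    major = sum(d["severity"] == "Major" for d in open_defects)
    return critical <= max_critical and major <= max_major

open_defects = [{"id": "DEF-501", "severity": "Critical"}]
print(release_gate(open_defects))  # False: one open critical blocks the release
```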

Link defects to the code changes that fix them. Most tools support referencing defect IDs in commit messages.
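
A minimal sketch of pulling those references back out of commit messages, assuming Jira-style IDs such as DEF-234:

```python
import re

DEFECT_ID = re.compile(r"\b[A-Z]+-\d+\b")   # e.g. DEF-234, QA-17

def defect_ids(commit_message: str) -> list[str]:
    """Extract referenced defect IDs from a commit message."""
    return DEFECT_ID.findall(commit_message)

msg = "Fix null check in checkout handler (DEF-234, DEF-301)"
print(defect_ids(msg))  # ['DEF-234', 'DEF-301']
```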

Connection to STLC

The defect life cycle intersects with the STLC at multiple points: defects originate during test execution, fixes flow back into retesting and regression runs, and open defect counts feed the exit criteria for test closure.

Conclusion

The defect life cycle is not bureaucracy; it is infrastructure. Just as code needs version control and deployments need pipelines, defects need structured management.

Effective defect management requires:

  1. Clear states and transitions that everyone understands
  2. Proper severity and priority classification to guide scheduling
  3. Quality defect reports that enable fast resolution
  4. Appropriate tooling that fits your team's needs
  5. Meaningful metrics that drive improvement
  6. Integration with your existing development workflow

Start with the basics: define your states, train your team on writing good reports, and choose a tool that does not get in the way. As your process matures, add metrics, automation, and optimization.

The goal is not a perfect process on paper. The goal is defects that get found early, fixed quickly, and verified thoroughly. Everything else serves that outcome.
