Test Execution Phase in STLC: The Complete Practical Guide

Parul Dhingra - Senior Quality Analyst, 13+ Years Experience

Updated: 1/22/2026


Test execution is where planning meets reality. All the test cases you designed, all the environments you configured, and all the strategies you documented - they all converge here. This phase determines whether your software actually works the way stakeholders expect.

The difference between effective and chaotic test execution often comes down to discipline. Teams that follow structured execution processes catch defects systematically and deliver reliable quality assessments. Teams that wing it miss critical issues and struggle to explain what was actually tested.

This guide provides practical strategies for executing tests effectively, tracking defects properly, and delivering meaningful test results. You'll learn how to handle the real challenges testers face: unstable environments, blocked test cases, shifting priorities, and tight deadlines.

Test execution is the fourth phase in the Software Testing Life Cycle, following requirements analysis, test planning, and test design. The work done in those phases directly impacts execution quality - solid preparation makes execution smooth, while gaps in earlier phases create execution chaos.

Quick Answer: Test Execution at a Glance

  • What: Running designed test cases against the software build to verify functionality and identify defects
  • When: After test design completion and environment setup; when entry criteria are met
  • Key Deliverables: Executed test cases with pass/fail status, defect reports, test execution logs, daily/weekly status reports, test summary report
  • Who: QA engineers, test leads, automation engineers; with coordination from developers and DevOps
  • Best For: Validating that software meets requirements before release to production or the next development phase

Understanding Test Execution in STLC

Test execution transforms test designs into quality evidence. During this phase, testers run test cases against the application, record results, log defects, and track progress toward release readiness.

What Test Execution Actually Involves

Test execution isn't just clicking through test cases. It encompasses:

Running Test Cases: Executing manual or automated tests according to documented steps and comparing actual results against expected outcomes.

Recording Results: Documenting pass/fail status, capturing evidence (screenshots, logs, recordings), and noting any observations.

Defect Logging: Creating detailed defect reports when actual results differ from expected results, with sufficient information for developers to reproduce and fix issues.

Retesting: Verifying that fixed defects are actually resolved and no longer reproduce.

Regression Testing: Ensuring that defect fixes and new changes haven't broken existing functionality.

Progress Tracking: Monitoring execution progress, test pass rates, defect trends, and coverage metrics.

Status Communication: Providing stakeholders with visibility into testing progress and quality status.

Position in STLC

Test execution receives inputs from all preceding phases:

  • Requirements Analysis: Identifies what the software should do
  • Test Planning: Defines scope, approach, resources, and schedule
  • Test Design: Creates the specific test cases to execute
  • Environment Setup: Prepares the infrastructure for testing

Test execution outputs feed into subsequent phases:

  • Test Reporting: Summarizes execution results and quality assessment
  • Test Closure: Documents lessons learned and archives artifacts

Key Insight: Test execution quality depends heavily on the quality of preceding phases. Poorly designed test cases create execution confusion. Inadequate environment setup causes test failures unrelated to software defects. Invest time upfront to make execution efficient.

Types of Test Execution

Different testing contexts require different execution approaches:

Smoke Testing: Quick validation that the build is stable enough for detailed testing. Run immediately after deployment to catch obvious breakage.

Functional Testing: Systematic validation of feature requirements. Verifies the software does what it's supposed to do.

Integration Testing: Validates that components work together correctly. Tests interfaces, data flows, and system interactions.

Regression Testing: Confirms that existing functionality still works after changes. Critical for preventing defect reintroduction.

User Acceptance Testing (UAT): Business stakeholders validate that the software meets their needs. Often the final gate before production release.

Each type serves a distinct purpose in the overall quality assurance approach.

Test Execution Process: Step by Step

Effective test execution follows a structured process. While specific steps vary by organization and project, the core activities remain consistent.

Step 1: Verify Entry Criteria

Before starting execution, confirm all prerequisites are met:

  • Test environment is deployed and configured
  • Test data is loaded and accessible
  • Test cases are reviewed and approved
  • Build is deployed to the test environment
  • Smoke tests pass (build is stable)
  • Testing team has access to required tools and environments
⚠️ Common Mistake: Starting execution before the build is stable wastes significant effort. If smoke tests fail, don't proceed with detailed testing - send the build back to development. Executing against an unstable build generates noise that obscures real defects.
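
Teams that script this gate catch missing prerequisites before any effort is spent. Below is a minimal Python sketch of such a gate; the individual checks are placeholder stubs, and the smoke-suite path is an assumption for illustration.

```python
import subprocess
import sys

def build_deployed() -> bool:
    # Placeholder: in practice, query the environment's health/version endpoint.
    return True

def test_data_loaded() -> bool:
    # Placeholder: in practice, verify seed records exist in the test database.
    return True

def smoke_suite_passes() -> bool:
    # Gate on the smoke suite's exit code (the suite path is an assumption).
    return subprocess.run(["pytest", "tests/smoke", "-q"]).returncode == 0

def verify_entry_criteria() -> bool:
    checks = {
        "Build deployed to test environment": build_deployed(),
        "Test data loaded": test_data_loaded(),
        "Smoke tests pass": smoke_suite_passes(),
    }
    failed = [name for name, ok in checks.items() if not ok]
    for name in failed:
        print(f"ENTRY CRITERION NOT MET: {name}")
    return not failed

if __name__ == "__main__":
    sys.exit(0 if verify_entry_criteria() else 1)
```

A non-zero exit code stops the cycle before detailed testing starts, making the entry gate explicit rather than a matter of memory.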

Step 2: Prepare for Execution

Set up everything needed for efficient test execution:

Review Test Cases: Ensure testers understand the test cases they'll execute. Clarify any ambiguous steps or expected results.

Organize Test Data: Verify test data is available and in the expected state. Reset data if previous test runs modified it.

Configure Tools: Ensure test management tools, defect trackers, and automation frameworks are accessible and properly configured.

Assign Test Cases: Distribute test cases among team members based on skills, domain knowledge, and availability.

Establish Communication Channels: Set up mechanisms for quick questions, blocker escalation, and status sharing.

Step 3: Execute Test Cases

Run test cases systematically, recording results as you go:

Follow Test Steps Precisely: Execute exactly what the test case specifies. Don't skip steps or take shortcuts that might mask defects.

Record Actual Results: Document what actually happened, not what you expected to happen. Capture details that help investigation.

Capture Evidence: Take screenshots, save logs, record videos - especially for failures. Evidence speeds defect investigation and verification.

Note Observations: Record anything unusual, even if the test passed. Observations might indicate latent issues or areas needing deeper testing.

Update Test Status: Mark each test case as passed, failed, blocked, or not run with appropriate notes.
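
For automated execution, the same disciplines apply. The sketch below uses pytest with Playwright's Python API to follow the steps, compare actual against expected results, and capture a screenshot as evidence on failure; the URL, selectors, and credentials are hypothetical.

```python
# Requires: pip install pytest playwright && playwright install
from playwright.sync_api import sync_playwright

def test_login_shows_dashboard():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        try:
            # Follow the documented steps exactly (URL and selectors are hypothetical).
            page.goto("https://staging.example.com/login")
            page.fill("#username", "qa_user")
            page.fill("#password", "qa_password")
            page.click("button[type=submit]")
            # Expected result: the dashboard heading is visible.
            assert page.is_visible("h1#dashboard"), "Dashboard did not load after login"
        except AssertionError:
            # Capture evidence on failure: a screenshot speeds defect investigation.
            page.screenshot(path="evidence/login_failure.png")
            raise
        finally:
            browser.close()
```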

Step 4: Log Defects

When tests fail, create quality defect reports:

Include Essential Information:

  • Clear, descriptive title
  • Steps to reproduce
  • Expected result vs. actual result
  • Environment details (browser, OS, build version)
  • Evidence (screenshots, logs, video)
  • Severity and priority assessment

Make Defects Reproducible: Developers need to replicate the issue to fix it. Include precise steps and any required preconditions.

Link to Test Cases: Connect defects to the test cases that found them for traceability.

Avoid Duplicates: Search existing defects before creating new ones. Multiple reports for the same issue waste everyone's time.
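
When automation finds failures, defect reports with these fields can also be created programmatically. Here is a hedged sketch against Jira Cloud's REST API (the v2 issue endpoint); the instance URL, project key, and credentials are placeholders.

```python
# Sketch: create a defect in Jira via its REST API.
# The URL, project key, and credentials below are placeholders.
import requests

def log_defect(summary: str, description: str, priority: str = "High") -> str:
    payload = {
        "fields": {
            "project": {"key": "QA"},
            "issuetype": {"name": "Bug"},
            "summary": summary,            # clear, descriptive title
            "description": description,    # steps, expected vs. actual, environment
            "priority": {"name": priority},
        }
    }
    resp = requests.post(
        "https://your-company.atlassian.net/rest/api/2/issue",
        json=payload,
        auth=("qa-bot@example.com", "API_TOKEN"),
    )
    resp.raise_for_status()
    return resp.json()["key"]  # e.g. "QA-123", for linking back to the test case

defect_key = log_defect(
    "Checkout fails with error 500 when applying expired coupon code",
    "Steps: 1. Add item to cart 2. Go to checkout 3. Enter coupon EXPIRED2024 "
    "4. Click Apply\nExpected: 'Invalid coupon code' message\n"
    "Actual: HTTP 500 error, checkout cannot continue\n"
    "Environment: Chrome 120, Windows 11, Build 2.3.45, Staging",
)
```

The returned issue key can be recorded against the failing test case, which keeps the defect-to-test traceability described above.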

Step 5: Retest Fixed Defects

When developers fix defects, verify the fixes work:

Reproduce the Original Issue First: If the unfixed build is still available, confirm the defect still reproduces there, so that a later pass is attributable to the fix rather than to an environment change.

Retest on the Fixed Build: Execute the same steps that originally revealed the defect. Verify the issue no longer occurs.

Check Related Functionality: Test closely related features that the fix might have affected.

Update Defect Status: Mark defects as verified/closed or reopen if fixes don't work.

Step 6: Perform Regression Testing

Validate that changes haven't broken existing functionality:

Prioritize Regression Scope: Focus on high-risk areas, recently modified features, and integration points. Complete regression isn't always feasible.

Use Automation: Automated regression suites provide fast, repeatable coverage. Run them frequently to catch regressions early.

Track Regression Failures: Distinguish between actual regressions (newly broken functionality) and existing defects or environmental issues.
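
With pytest, one common way to keep a regression suite selectable on every build is a custom marker; the marker name below is a team convention, not a pytest built-in.

```python
# Tag regression tests with a custom marker. Register it in pytest.ini
# to avoid warnings:
#   [pytest]
#   markers = regression: tests guarding existing functionality
import pytest

@pytest.mark.regression
def test_existing_checkout_total_still_correct():
    cart = [19.99, 5.00]
    # Previously working behavior must keep holding after changes.
    assert round(sum(cart), 2) == 24.99
```

Running `pytest -m regression` then executes only the tagged tests, which makes frequent regression runs cheap enough to trigger on every build.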

Step 7: Monitor and Report Progress

Keep stakeholders informed throughout execution:

Track Key Metrics:

  • Test cases executed vs. planned
  • Pass rate (passed tests / executed tests)
  • Defect count by severity
  • Blocked test percentage
  • Requirement coverage

Communicate Status Daily: Provide brief daily updates during active execution. Highlight blockers, risks, and decisions needed.

Escalate Issues Promptly: Don't wait to report problems. Early escalation allows faster resolution.

Entry and Exit Criteria for Test Execution

Entry and exit criteria establish quality gates that prevent premature activities and ensure completion standards.

Entry Criteria

These conditions must be met before test execution can begin effectively:

Build Readiness:

  • Code is complete for features in test scope
  • Build successfully compiles and deploys
  • Build is deployed to the test environment
  • Smoke tests pass (basic functionality works)
  • Known critical defects from previous cycles are resolved

Test Readiness:

  • Test cases are designed, reviewed, and approved
  • Test data is prepared and loaded
  • Test environment is configured and validated
  • Testing team is assigned and trained
  • Tools and access are configured

Documentation Readiness:

  • Test plan is approved
  • Requirements are baselined (changes controlled)
  • Traceability matrix links requirements to test cases
💡 Best Practice: Create an entry criteria checklist and review it before starting each test cycle. Skipping this review leads to wasted effort when you discover missing prerequisites mid-execution.

Exit Criteria

These standards determine when test execution is complete:

Execution Coverage:

  • All planned test cases executed (or documented exceptions for unexecuted tests)
  • Minimum pass rate achieved (e.g., 95% of tests pass)
  • All high-priority test cases executed
  • Requirement coverage targets met

Defect Resolution:

  • No critical-severity defects remain open
  • No high-severity defects remain open (or approved exceptions documented)
  • All identified defects are logged with appropriate status
  • Defect retest completed for all fixed issues

Documentation Complete:

  • Test execution logs captured
  • Defect reports include sufficient detail
  • Test summary report prepared
  • Traceability updated with execution results

Suspension and Resumption Criteria

Define when testing should pause and restart:

Suspension Triggers:

  • Critical defects block major functionality (>50% of tests blocked)
  • Environment instability causes unreliable test results
  • Build quality is so poor that most tests fail immediately
  • Required resources become unavailable
  • Major requirement changes invalidate existing tests

Resumption Conditions:

  • Blocking defects are fixed and verified
  • Environment stability is confirmed
  • New build passes smoke tests
  • Resources are available
  • Test cases are updated for requirement changes

Test Execution Strategies

Different situations call for different execution approaches. Choose strategies based on risk, timeline, and resource constraints.

Risk-Based Execution

Prioritize test execution based on risk assessment:

Execute High-Risk Tests First: Test critical functionality, complex features, and frequently used workflows before lower-risk areas.

Allocate More Time to Risky Areas: Spend proportionally more execution time on high-risk features with thorough testing.

Adjust Based on Results: If high-risk areas show many defects, extend testing there. If they're stable, move to other areas.

Risk-based execution ensures that if time runs short, you've tested what matters most.

Cycle-Based Execution

Structure testing into multiple cycles:

Cycle 1 - Initial Execution: Execute all test cases for the first time. Log defects. Establish baseline quality.

Cycle 2 - Defect Verification: Retest fixed defects. Run regression tests. Fill coverage gaps from Cycle 1.

Cycle 3 - Final Validation: Complete any remaining tests. Verify all critical defects are fixed. Conduct final regression.

Each cycle builds on the previous, progressively improving quality.

Priority-Based Execution

Execute test cases in priority order:

Priority 1 (Critical): Must-pass tests covering core functionality. Execute first, every cycle.

Priority 2 (High): Important feature validation. Execute after Priority 1, most cycles.

Priority 3 (Medium): Standard functionality testing. Execute when time allows.

Priority 4 (Low): Edge cases and nice-to-have scenarios. Execute if schedule permits.

If time runs short, you've guaranteed coverage of the most important tests.

Parallel Execution

Run tests simultaneously to compress timelines:

Multiple Testers: Assign different test areas to different team members executing concurrently.

Multiple Environments: Run tests against different configurations (browsers, devices) in parallel.

Automated Parallel Execution: Configure automation frameworks to run tests across multiple threads or machines.

Parallel execution reduces elapsed time but requires coordination to avoid conflicts and ensure comprehensive coverage.
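
For automated suites, the pytest-xdist plugin is one way to fan tests out across worker processes. The sketch below is illustrative only, with a stubbed helper standing in for real browser automation; parallel runs are safe only when tests are independent, as discussed under best practices later.

```python
# Parallel execution sketch with pytest-xdist (pip install pytest-xdist).
# Run with:  pytest -n auto   (one worker per CPU core)
# Safe only for independent tests: no shared mutable state, no order dependence.
import pytest

def load_homepage_title(browser: str) -> str:
    # Placeholder: in practice, launch the given browser and read the page title.
    return "Example Store"

@pytest.mark.parametrize("browser", ["chromium", "firefox", "webkit"])
def test_homepage_title(browser):
    # Each parametrized case can land on a different worker, so the three
    # browser configurations are exercised concurrently.
    assert load_homepage_title(browser) == "Example Store"
```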

Defect Management During Execution

Effective defect management transforms test findings into actionable information developers can use.

Writing Quality Defect Reports

A good defect report answers: "What happened, how can I reproduce it, and what should have happened instead?"

Essential Components:

  • Title: Clear, specific description. Example: "Checkout fails with error 500 when applying expired coupon code"
  • Steps to Reproduce: Numbered sequence to recreate the issue. Example: 1. Add item to cart, 2. Go to checkout, 3. Enter coupon "EXPIRED2024", 4. Click Apply
  • Expected Result: What should happen. Example: "System should display 'Invalid coupon code' message"
  • Actual Result: What actually happens. Example: "Page displays HTTP 500 error, checkout cannot continue"
  • Environment: Testing context. Example: "Chrome 120, Windows 11, Build 2.3.45, Staging environment"
  • Evidence: Supporting artifacts. Example: screenshot of the error, network log showing the 500 response
  • Severity: Impact on functionality. Example: "Critical - prevents checkout completion"
⚠️ Common Mistake: Writing vague defect titles like "Checkout doesn't work" or "Error on page." Specific titles help developers quickly understand issues and help testers avoid duplicate reports.

Defect Severity and Priority

Severity measures the defect's impact on the system:

  • Critical: System crash, data loss, security breach, complete feature failure
  • High: Major functionality broken, no workaround available
  • Medium: Feature partially works, workaround exists
  • Low: Minor issue, cosmetic problem, doesn't affect functionality

Priority indicates urgency of the fix:

  • Urgent: Fix immediately, blocks release
  • High: Fix before release
  • Medium: Fix if time allows, can defer
  • Low: Fix when convenient, often deferred

Severity and priority don't always align. A typo on the homepage is low severity (cosmetic) but might be high priority (brand impact). A crash in an obscure feature is high severity but might be medium priority if few users encounter it.

Defect Lifecycle

Track defects through their lifecycle:

New → Assigned → In Progress → Fixed → Ready for Retest → Verified → Closed

Alternative paths:

  • Rejected (not a defect, as designed, cannot reproduce)
  • Deferred (won't fix this release)
  • Reopened (fix didn't work)

Keep defects moving through the lifecycle. Stale defects indicate process problems.
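
One way to make the lifecycle concrete is to express the allowed transitions as data, which is also how most trackers enforce workflow rules. A minimal sketch mirroring the flow and alternative paths above:

```python
# Minimal defect-workflow sketch: each state maps to the states it may move to.
ALLOWED = {
    "New": {"Assigned", "Rejected", "Deferred"},
    "Assigned": {"In Progress", "Deferred"},
    "In Progress": {"Fixed"},
    "Fixed": {"Ready for Retest"},
    "Ready for Retest": {"Verified", "Reopened"},
    "Reopened": {"Assigned"},
    "Verified": {"Closed"},
    "Rejected": set(), "Deferred": set(), "Closed": set(),
}

def transition(current: str, target: str) -> str:
    if target not in ALLOWED[current]:
        raise ValueError(f"Illegal transition: {current} -> {target}")
    return target

state = transition("New", "Assigned")      # ok
state = transition(state, "In Progress")   # ok
# transition("New", "Closed") would raise: defects must move through the lifecycle.
```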

Defect Triage

Regular triage meetings keep defect management on track:

Review New Defects: Validate severity/priority, assign to developers, clarify any questions.

Track Resolution Progress: Ensure assigned defects are actively being worked on. Escalate any that are stuck.

Make Disposition Decisions: Decide which defects to fix, defer, or reject.

Adjust Priorities: Reprioritize based on release timeline and resource availability.

Holding triage daily, or every other day, during active testing prevents a defect backlog from accumulating.

Handling Blocked and Failed Tests

Not all tests run smoothly. Handling blocked and failed tests requires systematic approaches.

Blocked Tests

Tests are blocked when they cannot execute due to external factors:

Common Blocking Reasons:

  • Environment unavailable or unstable
  • Required test data missing
  • Dependent feature not yet implemented
  • Build defects prevent reaching the test scenario
  • Third-party system unavailable

Handling Blocked Tests:

  1. Document the Blocker: Record why the test cannot execute and what's needed to unblock it.

  2. Log Blocking Defects: If the blocker is a software defect, log it with appropriate priority.

  3. Escalate Blockers: Notify the test lead and relevant parties. Blockers need attention to unblock testing.

  4. Move to Other Tests: Don't sit idle. Execute unblocked tests while waiting for resolution.

  5. Retest When Unblocked: Once the blocker is resolved, return to execute the blocked tests.

Track blocked percentage as a metric. High blocked rates indicate environment or build quality problems.
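
In automated suites, blocked tests can be recorded explicitly rather than silently omitted, which keeps the blocked percentage visible in reports. A pytest sketch; the defect ID and reason are hypothetical:

```python
import pytest

# Mark the test blocked with the reason and the blocking defect's ID,
# so reports show why it did not run and what must be fixed to unblock it.
@pytest.mark.skip(reason="BLOCKED by QA-142: payment sandbox unavailable")
def test_refund_full_order():
    ...
```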

Failed Tests

Tests fail when actual results differ from expected results. Failed tests require investigation:

Verify the Failure:

  • Is the expected result correct (not an incorrect test case)?
  • Did you execute the steps correctly?
  • Is the environment in the expected state?

Log the Defect: If it's a genuine software defect, create a quality defect report.

Analyze Patterns: Multiple related failures might indicate a single underlying cause. One defect report for the root cause is better than separate reports for symptoms.

Distinguish Defect Types:

  • Functional Defect: The feature doesn't work as specified
  • Regression: Previously working functionality is now broken
  • Environment Issue: Test fails due to environment problems, not software defects

Test Execution Reporting

Execution reports communicate testing progress and quality status to stakeholders.

Daily Status Reports

Brief updates during active execution:

Key Information:

  • Test cases executed today vs. planned
  • Cumulative execution progress
  • Pass/fail/blocked counts
  • New defects found
  • Blockers and risks
  • Tomorrow's plan

Keep daily reports concise - stakeholders want quick updates, not lengthy documents.
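
Much of a daily report can be generated directly from execution counts exported by your test management tool. A minimal formatting sketch with placeholder numbers:

```python
# Sketch: format the day's execution counts into a concise status update.
def daily_status(executed, planned, passed, failed, blocked, new_defects, blockers):
    lines = [
        f"Executed today: {executed}/{planned} planned",
        f"Results: {passed} passed, {failed} failed, {blocked} blocked",
        f"New defects: {new_defects}",
        "Blockers: " + ("; ".join(blockers) if blockers else "none"),
    ]
    return "\n".join(lines)

print(daily_status(42, 50, 36, 4, 2, 3, ["Staging DB refresh pending (QA-151)"]))
```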

Test Execution Summary Report

Comprehensive report at execution completion:

Report Sections:

Executive Summary: High-level quality assessment and release recommendation.

Scope: What was tested, what was excluded, any scope changes.

Execution Statistics:

  • Total test cases: Planned vs. Executed
  • Results: Passed, Failed, Blocked, Not Run
  • Pass rate percentage
  • Coverage by requirement/feature

Defect Summary:

  • Total defects found
  • Defects by severity
  • Defects by status (open, fixed, deferred)
  • Defect trends over time

Quality Assessment: Evaluation of software readiness based on test results.

Risks and Issues: Outstanding problems, unresolved defects, coverage gaps.

Recommendations: Release decision, conditions, or additional testing needed.

Key Metrics to Track

  • Execution Progress: (Executed Tests / Planned Tests) × 100. Purpose: track completion against plan.
  • Pass Rate: (Passed Tests / Executed Tests) × 100. Purpose: measure overall quality.
  • Defect Density: Defects Found / Features Tested. Purpose: identify problematic areas.
  • Blocked Rate: (Blocked Tests / Total Tests) × 100. Purpose: indicate environment or build problems.
  • Defect Closure Rate: Closed Defects / Total Defects. Purpose: track resolution progress.
  • Requirement Coverage: (Covered Requirements / Total Requirements) × 100. Purpose: ensure comprehensive testing.
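
These calculations are simple enough to script against exported execution data. A sketch implementing the metrics above, with placeholder counts:

```python
# Sketch: compute the key execution metrics from raw counts.
def pct(part: int, whole: int) -> float:
    return round(100 * part / whole, 1) if whole else 0.0

counts = {"planned": 200, "executed": 160, "passed": 148, "blocked": 12,
          "defects": 25, "closed_defects": 18,
          "requirements": 40, "covered_requirements": 37}

print("Execution progress:", pct(counts["executed"], counts["planned"]), "%")
print("Pass rate:", pct(counts["passed"], counts["executed"]), "%")
print("Blocked rate:", pct(counts["blocked"], counts["planned"]), "%")
print("Defect closure rate:", pct(counts["closed_defects"], counts["defects"]), "%")
print("Requirement coverage:",
      pct(counts["covered_requirements"], counts["requirements"]), "%")
```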

Common Test Execution Challenges

Real-world test execution rarely proceeds without obstacles. Anticipating common challenges helps you handle them effectively.

Unstable Test Environments

The Problem: Test environment crashes, performs inconsistently, or differs from production configuration.

Solutions:

  • Establish environment stability requirements before execution begins
  • Implement environment health checks and monitoring
  • Maintain backup environments for critical testing
  • Document environment issues separately from software defects
  • Coordinate with DevOps for rapid environment fixes

Time Pressure

The Problem: Schedule constraints don't allow complete test execution.

Solutions:

  • Apply risk-based testing to prioritize critical tests
  • Automate repetitive tests to increase throughput
  • Negotiate scope reduction with stakeholders rather than superficial coverage
  • Communicate trade-offs transparently - what won't be tested and what risks that creates

Frequent Build Changes

The Problem: New builds arrive before testing of the previous build is complete.

Solutions:

  • Establish build stability requirements before accepting new builds
  • Implement code freeze periods for focused testing
  • Use continuous testing integrated with CI/CD for rapid feedback
  • Avoid context switching - complete critical tests before accepting changes

Incomplete or Changing Requirements

The Problem: Requirements aren't clear or change during test execution.

Solutions:

  • Clarify ambiguous requirements before executing affected tests
  • Document assumptions when proceeding despite uncertainty
  • Implement requirement change control to assess test impact
  • Update test cases promptly when requirements change - don't execute outdated tests

Communication Gaps

The Problem: Developers, testers, and stakeholders aren't aligned on priorities, progress, or issues.

Solutions:

  • Establish regular sync meetings during execution
  • Use shared dashboards for real-time visibility
  • Escalate blockers immediately, not in end-of-day reports
  • Over-communicate during critical execution phases

Key Insight: Most test execution problems stem from inadequate preparation in earlier STLC phases. Environment issues indicate incomplete environment planning. Requirement changes indicate insufficient requirements freeze. Address root causes, not just symptoms.

Test Execution Best Practices

These practices consistently improve test execution effectiveness.

Maintain Test Independence

Each test should be independently executable:

Don't Depend on Execution Order: Tests shouldn't rely on previous tests creating data or system state.

Set Up Required State: Each test should establish its own preconditions.

Clean Up After Tests: Return the system to a known state after each test.

Independent tests can run in any order, parallelize easily, and produce reliable results.
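
pytest fixtures are a natural way to give each test its own setup and cleanup so that no test depends on execution order. A minimal sketch; the user data is a stand-in for real create and delete calls:

```python
import pytest

@pytest.fixture
def fresh_user():
    # Set up required state: every test gets its own user, never one
    # left behind by an earlier test.
    user = {"id": 101, "name": "qa_user"}  # stand-in for a real create call
    yield user
    # Clean up after the test: return the system to a known state.
    user.clear()                           # stand-in for a real delete call

def test_profile_update(fresh_user):
    fresh_user["name"] = "renamed"
    assert fresh_user["name"] == "renamed"

def test_profile_read(fresh_user):
    # Passes regardless of whether test_profile_update ran first.
    assert fresh_user["name"] == "qa_user"
```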

Execute Tests Consistently

Consistent execution produces comparable results:

Follow Test Steps Exactly: Don't improvise or skip steps, even for tests you've run many times.

Use Consistent Test Data: Run tests with specified data values, not whatever data happens to be available.

Document Any Deviations: If you must deviate from the test case, record what you did differently and why.

Capture Comprehensive Evidence

Evidence supports defect investigation and verification:

Screenshot Failures: Visual evidence of what went wrong.

Capture Logs: Application logs, console output, network traffic for technical investigation.

Record Complex Scenarios: Screen recordings help reproduce timing-dependent or multi-step issues.

Document Environment State: Note relevant configuration, data state, and conditions at failure time.

Communicate Proactively

Don't wait for people to ask for updates:

Report Blockers Immediately: Hours waiting to report a blocker are hours of testing lost.

Share Daily Progress: Brief updates keep stakeholders informed without lengthy meetings.

Flag Risks Early: Identify problems trending toward risk early enough to address them.

Learn from Execution

Each execution cycle offers improvement opportunities:

Track What Works: Note efficient practices to repeat.

Identify Waste: Recognize activities that don't add value.

Capture Lessons Learned: Document insights for future projects.

Refine Test Cases: Update tests that proved unclear, incomplete, or ineffective.

Tools for Test Execution

The right tools streamline execution, tracking, and reporting.

Test Management Tools

Test management platforms organize test execution:

TestRail: Comprehensive test case management with execution tracking, reporting, and integrations with Jira and automation frameworks.

Zephyr: Test management integrated with Jira, providing seamless connection between development and testing workflows.

qTest: Enterprise test management with release planning, execution tracking, and advanced analytics.

Azure Test Plans: Microsoft's test management integrated with Azure DevOps for teams using Microsoft tooling.

Defect Tracking Tools

Track defects through their lifecycle:

Jira: Industry-standard issue tracking with customizable workflows, integrations, and reporting.

Azure DevOps: Microsoft's integrated work tracking for teams using Azure ecosystem.

GitHub Issues: Lightweight issue tracking integrated with GitHub repositories.

Bugzilla: Open-source defect tracking with extensive customization options.

Test Automation Frameworks

Automation increases execution throughput:

Selenium/WebDriver: Browser automation for web application testing.

Cypress: Modern web testing framework with fast execution and debugging.

Playwright: Cross-browser automation from Microsoft with modern features.

Appium: Mobile application automation for iOS and Android.

REST Assured / Postman: API testing for service validation.

Reporting and Dashboards

Communicate results effectively:

Test Management Reports: Built-in reporting from TestRail, Zephyr, qTest.

CI/CD Dashboards: Jenkins, GitLab, Azure DevOps pipeline reports.

Custom Dashboards: Grafana, Power BI, Tableau for customized quality metrics visualization.

Conclusion

Test execution transforms test designs into quality evidence. Success depends on disciplined execution processes, effective defect management, and clear communication with stakeholders.

Remember these key principles:

Verify Entry Criteria Before Starting: Don't waste effort executing against unstable builds or incomplete environments. Confirm prerequisites are met.

Execute Systematically: Follow test cases precisely, record results accurately, and capture evidence for failures. Consistency produces reliable results.

Write Quality Defect Reports: Include steps to reproduce, expected vs. actual results, and supporting evidence. Good reports get fixed faster.

Communicate Status Proactively: Keep stakeholders informed of progress, blockers, and risks. Don't wait to be asked.

Track and Report Meaningful Metrics: Focus on metrics that indicate quality and progress, not just activity counts.

Effective test execution requires preparation, discipline, and adaptability. When earlier STLC phases are solid, execution flows smoothly. When execution challenges arise, address root causes rather than just symptoms.

Build on the foundation from test planning and test design to execute effectively, and feed your results into comprehensive test reporting that enables informed release decisions.

