ISTQB CTFL Chapter 5: Managing Test Activities

Parul Dhingra, Senior Quality Analyst (13+ years of experience)

Updated: 1/23/2026

Managing test activities transforms testing from ad hoc effort into a controlled engineering discipline. This chapter covers the planning, monitoring, control, and reporting aspects that make testing predictable and effective. With approximately 8 questions (20% of the exam), Chapter 5 represents a significant portion of the CTFL certification.

Understanding how to plan tests, estimate effort, manage risks, and report progress distinguishes professional testers from those who simply execute test cases. These management skills become increasingly important as you advance in your testing career.

Test Planning

Test planning establishes the scope, approach, resources, and schedule of testing activities. A good test plan provides direction while remaining adaptable to change.

Purpose of Test Planning

Test planning serves multiple purposes:

  • Defines what testing will and won't cover
  • Identifies resources needed
  • Establishes schedules and milestones
  • Documents the testing approach
  • Provides a basis for monitoring progress

Test Plan Contents

A typical test plan addresses:

| Section | Contents |
| --- | --- |
| Context | Project background, test basis |
| Scope | What's in scope/out of scope |
| Objectives | What testing aims to achieve |
| Approach | Testing levels, types, techniques |
| Resources | People, tools, environments |
| Schedule | Milestones, dependencies |
| Risks | Testing risks and mitigations |
| Criteria | Entry, exit, suspension criteria |
| Deliverables | Reports, metrics, artifacts |

Planning Throughout the Lifecycle

Test planning isn't a one-time activity:

  • Initial planning: High-level scope and approach
  • Detailed planning: Test design and implementation details
  • Ongoing planning: Adjustments based on progress and findings

Exam Tip: Questions often distinguish between what belongs in a test plan versus a test strategy. The plan is project-specific; the strategy is organizational or program-level.

Test Strategy vs Test Plan

Understanding the relationship between strategy and plan is crucial for the exam.

Test Strategy

The test strategy is a high-level document describing the test approach for a program or organization.

Characteristics:

  • Developed at organization or program level
  • Provides generic direction
  • Longer lifespan than projects
  • May be part of organizational standards

Contents:

  • Testing levels to use
  • Types of testing to perform
  • Risk management approach
  • Test automation policy
  • Tool standards
  • Quality criteria

Test Plan

The test plan applies the strategy to a specific project.

Characteristics:

  • Project-specific document
  • Details how strategy applies to this project
  • Created for each testing effort
  • May exist at multiple levels (master plan, level-specific plans)

The Relationship

Organization Strategy → Project Test Plan → Test Design

The strategy provides principles; the plan applies them to specific circumstances.

Entry and Exit Criteria

Entry and exit criteria define when testing phases can begin and end.

Entry Criteria

Conditions that must be met before starting a test activity:

Examples:

  • Test environment available and verified
  • Test tools installed and configured
  • Test data prepared
  • Required documentation available
  • Previous test level completed
  • Code deployed to test environment

Exit Criteria (Completion Criteria)

Conditions that define when a test activity can be considered complete:

Examples:

  • All planned tests executed
  • Coverage targets achieved (e.g., 80% branch coverage)
  • No open critical or high-severity defects
  • All known defects documented
  • Test summary report approved
  • All exit criteria from test plan satisfied

Using Criteria Effectively

Entry and exit criteria should be:

  • Measurable: Can objectively determine if met
  • Documented: Written in the test plan
  • Agreed: Stakeholders accept them
  • Realistic: Achievable given constraints
⚠️ Common Exam Question: "Which is NOT a valid exit criterion?" Look for subjective or unmeasurable options; valid criteria must be objectively verifiable.
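
To make "measurable" concrete, here is a minimal Python sketch (all names and thresholds are illustrative, not from the syllabus) that expresses exit criteria as objective yes/no checks over collected metrics:

```python
from dataclasses import dataclass

@dataclass
class TestMetrics:
    """Measurable results collected at the end of a test level."""
    tests_planned: int
    tests_executed: int
    branch_coverage: float       # 0.0 to 1.0
    open_critical_defects: int

def exit_criteria_met(m: TestMetrics) -> bool:
    """Every criterion is an objective check - nothing subjective."""
    return all([
        m.tests_executed >= m.tests_planned,  # all planned tests executed
        m.branch_coverage >= 0.80,            # coverage target from the test plan
        m.open_critical_defects == 0,         # no open critical defects
    ])

print(exit_criteria_met(TestMetrics(120, 120, 0.83, 0)))  # True
```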

Test Estimation

Test estimation predicts the effort required for testing activities. Accurate estimates enable realistic planning.

Estimation Approaches

Metrics-Based Estimation

Uses historical data from similar projects:

  • Average defects per test case
  • Test cases per requirement
  • Execution time per test case
  • Defect fix and retest cycles

Expert-Based Estimation

Leverages the experience of knowledgeable individuals:

  • Subject matter experts
  • Experienced testers
  • Technical leads
  • Historical knowledge from similar work

Estimation Techniques

Estimation by Analogy

Compare with similar past projects, adjusting for differences.

Work Breakdown Structure (WBS)

Break testing into smaller tasks, estimate each task, and sum the estimates.

Wideband Delphi

Multiple experts estimate independently, discuss differences, and re-estimate until the estimates converge.

Three-Point Estimation

Calculate: (Optimistic + 4 × Most Likely + Pessimistic) / 6
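
As a worked illustration of the formula above, a small Python sketch (the effort values are made up):

```python
def three_point_estimate(optimistic: float, most_likely: float, pessimistic: float) -> float:
    """PERT-style weighted average: (O + 4M + P) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Example: estimating test execution effort in person-days.
print(three_point_estimate(10, 15, 26))  # (10 + 60 + 26) / 6 = 16.0 person-days
```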

Factors Affecting Test Effort

| Factor | Impact on Effort |
| --- | --- |
| Product complexity | Higher complexity → more effort |
| Team experience | Less experience → more effort |
| Tool support | Better tools → less effort |
| Test environment stability | Unstable → more effort |
| Documentation quality | Poor docs → more effort |
| Defect density | More defects → more retest effort |
| Time pressure | Compression → risk/quality tradeoff |

Risk-Based Testing

Risk-based testing prioritizes testing based on risk analysis, focusing effort where it matters most.

Types of Risk

Product Risk (Quality Risk)

Risks associated with the product itself:

  • Functionality not working correctly
  • Performance not meeting requirements
  • Security vulnerabilities
  • Usability problems
  • Compatibility issues

Project Risk

Risks affecting the testing project:

  • Schedule delays
  • Resource unavailability
  • Tool problems
  • Environment issues
  • Skill gaps

Risk Analysis

Risk level is determined by:

Risk Level = Likelihood × Impact

Likelihood factors:

  • Technical complexity
  • Team experience with technology
  • Similar past problems
  • External dependencies

Impact factors:

  • Business criticality
  • User visibility
  • Safety implications
  • Financial consequences

Risk-Based Test Prioritization

| Risk Level | Testing Priority |
| --- | --- |
| High likelihood, high impact | Test first and most thoroughly |
| High likelihood, low impact | Test early |
| Low likelihood, high impact | Test with care |
| Low likelihood, low impact | Test if time permits |
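
A minimal Python sketch (the scales and risk items are illustrative, not from the syllabus) showing how Likelihood × Impact can drive test prioritization:

```python
from dataclasses import dataclass

@dataclass
class RiskItem:
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)

    @property
    def risk_level(self) -> int:
        # Risk Level = Likelihood × Impact
        return self.likelihood * self.impact

items = [
    RiskItem("Payment processing", likelihood=3, impact=5),
    RiskItem("Report formatting", likelihood=4, impact=1),
    RiskItem("Login", likelihood=2, impact=5),
]

# Highest-risk items are tested first and most thoroughly.
for item in sorted(items, key=lambda i: i.risk_level, reverse=True):
    print(f"{item.risk_level:>2}  {item.name}")
```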

Benefits of Risk-Based Testing

  1. Focus: Testing effort aligned with business priorities
  2. Communication: Risk language stakeholders understand
  3. Justification: Defensible resource allocation
  4. Efficiency: Maximum value from available time
  5. Flexibility: Clear basis for scope adjustments

Exam Tip: Risk-based testing questions often ask about prioritization. Remember that high-risk items get tested first and most thoroughly.

Test Monitoring and Control

Monitoring tracks progress; control adjusts activities to meet objectives.

Test Monitoring

Monitoring involves gathering information about testing progress:

Metrics to monitor:

  • Test case execution progress
  • Defect discovery and resolution rates
  • Coverage achieved
  • Test environment availability
  • Resource utilization

Monitoring methods:

  • Daily stand-ups
  • Defect tracking systems
  • Test management tools
  • Dashboard reviews
  • Status meetings

Test Control

Control involves taking actions when monitoring reveals problems:

Control actions:

  • Reprioritize remaining tests
  • Add resources to critical areas
  • Adjust scope based on risks
  • Extend schedule if necessary
  • Escalate issues to management
  • Modify test strategy

Test Progress Metrics

| Metric | What It Measures |
| --- | --- |
| Test cases executed | Execution progress |
| Test cases passed/failed | Quality indication |
| Defects by severity | Product risk |
| Defects by status | Resolution progress |
| Coverage percentage | Test completeness |
| Blocked tests | Impediments |
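
A minimal sketch (result values assumed, not tied to any specific tool) of how such progress metrics might be derived from raw execution results:

```python
from collections import Counter

# Raw results, e.g. exported from a test management tool.
results = ["pass", "pass", "fail", "pass", "blocked", "pass", "fail"]

counts = Counter(results)
executed = counts["pass"] + counts["fail"]   # blocked tests were never run
pass_rate = counts["pass"] / executed * 100

print(f"Executed: {executed}/{len(results)}")  # Executed: 6/7
print(f"Pass rate: {pass_rate:.0f}%")          # Pass rate: 67%
print(f"Blocked: {counts['blocked']}")         # Blocked: 1
```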

The Test Pyramid and Testing Quadrants

Test Pyramid

A model suggesting the distribution of test types:

  • Base: Many unit tests (fast, cheap)
  • Middle: Fewer integration tests
  • Top: Even fewer UI/E2E tests (slow, expensive)

Testing Quadrants (Agile)

Categorize tests by purpose and focus:

  • Q1: Technology-facing, supporting the team (unit tests)
  • Q2: Business-facing, supporting the team (functional tests)
  • Q3: Business-facing, critiquing the product (exploratory)
  • Q4: Technology-facing, critiquing the product (performance)

Test Reporting

Test reports communicate testing status and results to stakeholders.

Test Progress Report

Communicates ongoing status during testing:

Contents:

  • Planned vs actual progress
  • Defect summary
  • Risks and issues
  • Coverage status
  • Upcoming activities
  • Blocking factors

Audience: Project team, management

Frequency: Regular intervals (daily, weekly)

Test Completion Report

Summarizes results at the end of a test phase:

Contents:

  • Summary of testing performed
  • Deviations from plan
  • Defect summary by status
  • Coverage achieved
  • Quality evaluation
  • Lessons learned
  • Recommendations

Audience: Stakeholders, project closure

Timing: End of test level or project

Tailoring Reports

Effective reports match their audience:

| Audience | Focus |
| --- | --- |
| Developers | Technical details, specific defects |
| Management | Summary status, risks, decisions needed |
| Business stakeholders | Quality assessment, release readiness |
| Regulators | Compliance evidence, coverage proof |

Configuration Management

Configuration management ensures test assets remain consistent and traceable.

Purpose of Configuration Management

  • Identify and control test items
  • Track versions of all test artifacts
  • Ensure tests run against correct software versions
  • Enable reproduction of test results
  • Support defect analysis and verification

Test Items Under Configuration Control

  • Test plans and designs
  • Test cases and procedures
  • Test scripts (manual and automated)
  • Test data
  • Test environments
  • Test tools and configurations
  • Test results and reports

Configuration Management Activities

  • Identification: Assign unique identifiers to items
  • Control: Manage changes through defined processes
  • Status accounting: Track item versions and changes
  • Audit: Verify integrity of configuration items

Supporting Testing

Configuration management enables:

  • Defect reproduction: Know exact versions tested
  • Regression testing: Return to known baseline
  • Audit trails: Evidence of what was tested
  • Parallel testing: Multiple versions simultaneously
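
As a small illustration of that traceability, a Python sketch (all field names and values are hypothetical) recording exactly which versions a test run used, so a result can be reproduced later:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TestRunRecord:
    """Traceability record: which configuration items a test run involved."""
    software_version: str   # build under test
    test_case_id: str       # configuration-controlled test case
    test_case_version: str
    environment: str
    result: str

run = TestRunRecord(
    software_version="2.4.1-build318",
    test_case_id="TC-LOGIN-007",
    test_case_version="v3",
    environment="staging-eu",
    result="fail",
)

# With this record, the exact tested combination can be restored
# to reproduce the defect or rerun the regression baseline.
print(run)
```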
⚠️ Exam Note: Configuration management questions often focus on traceability: knowing which version of the software was tested and with which test artifacts.

Defect Management

Defect management is the process of recognizing, documenting, classifying, and managing defects through resolution.

Defect Report Contents

A complete defect report includes:

| Field | Purpose |
| --- | --- |
| Unique identifier | Tracking and reference |
| Title | Brief description |
| Description | Detailed explanation |
| Steps to reproduce | Enable reproduction |
| Expected result | What should happen |
| Actual result | What did happen |
| Severity | Impact on system |
| Priority | Urgency of fix |
| Status | Current state |
| Environment | Where found |
| Version | Software version tested |
| Attachments | Screenshots, logs |
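
A minimal sketch, assuming nothing beyond the fields in the table above, of how a defect report could be modeled in Python:

```python
from dataclasses import dataclass, field
from enum import Enum

class Severity(Enum):
    CRITICAL = "critical"   # system crash, data loss
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"

class Priority(Enum):
    URGENT = "urgent"       # fix immediately
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"

@dataclass
class DefectReport:
    defect_id: str                  # unique identifier
    title: str
    description: str
    steps_to_reproduce: list[str]
    expected_result: str
    actual_result: str
    severity: Severity              # technical impact
    priority: Priority              # business urgency
    environment: str
    version: str                    # software version tested
    status: str = "new"
    attachments: list[str] = field(default_factory=list)
```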

Defect Severity vs Priority

Severity: Technical impact on the system

  • Critical: System crash, data loss
  • High: Major function doesn't work
  • Medium: Function works incorrectly
  • Low: Minor cosmetic issue

Priority: Business urgency for fix

  • Urgent: Fix immediately
  • High: Fix in current sprint
  • Medium: Fix in next release
  • Low: Fix when convenient

A defect can be high severity but low priority (e.g., a crash in a rarely used legacy feature) or low severity but high priority (e.g., a typo on a screen the CEO will demo tomorrow).

Defect Lifecycle

New → Open → Assigned → Fixed → Verified → Closed
                ↓                    ↓
            Deferred            Reopened

Key states:

  • New: Just reported
  • Open: Confirmed as valid defect
  • Assigned: Developer working on fix
  • Fixed: Developer completed fix
  • Verified: Tester confirmed fix works
  • Closed: Defect resolved
  • Deferred: Won't fix now
  • Reopened: Fix didn't work
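
The lifecycle can be treated as a small state machine. A minimal Python sketch (transition table simplified from the diagram above) that rejects invalid state changes:

```python
# Allowed transitions, simplified from the lifecycle diagram above.
TRANSITIONS = {
    "new":      {"open"},
    "open":     {"assigned", "deferred"},
    "assigned": {"fixed"},
    "fixed":    {"verified", "reopened"},   # reopened if the retest fails
    "verified": {"closed"},
    "reopened": {"assigned"},
    "deferred": {"open"},
    "closed":   set(),
}

def move(state: str, new_state: str) -> str:
    """Advance a defect to new_state, rejecting invalid transitions."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"Invalid transition: {state} -> {new_state}")
    return new_state

state = "new"
for step in ["open", "assigned", "fixed", "verified", "closed"]:
    state = move(state, step)
print(state)  # closed
```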

Defect Metrics

Useful defect metrics include:

  • Defects found vs fixed over time
  • Defects by severity and status
  • Average time to fix
  • Defect rejection rate
  • Defects by module or feature

Exam Preparation Tips

Chapter 5 tests management concepts that apply throughout a testing career.

High-Priority Topics

  1. Test plan vs test strategy

    • What each contains
    • How they relate
  2. Entry and exit criteria

    • Purpose of each
    • Examples of valid criteria
  3. Risk-based testing

    • Risk level calculation
    • How risks affect prioritization
  4. Test monitoring metrics

    • Common metrics and their meaning
    • Progress vs completion reporting
  5. Defect management

    • Report contents
    • Severity vs priority
    • Defect lifecycle states

Common Exam Question Patterns

"Which belongs in a test plan vs test strategy?" Plan = project-specific details; Strategy = organizational approach

"What determines risk level?" Likelihood × Impact

"Which is a valid exit criterion?" Look for measurable, objective conditions

"What's the difference between severity and priority?" Severity = technical impact; Priority = business urgency

Key Definitions to Know

  • Test plan: Project-specific testing approach document
  • Test strategy: Organization-level testing guidance
  • Entry criteria: Conditions to start testing
  • Exit criteria: Conditions to complete testing
  • Product risk: Risk to the quality of the product
  • Project risk: Risk to the success of the project

