
ISTQB CTFL Chapter 4: Test Analysis and Design
Test analysis and design form the intellectual core of professional software testing. This chapter teaches you how to systematically derive test cases from requirements and specifications using proven techniques. Chapter 4 typically contributes 11 questions (27.5% of the total), making it the most heavily weighted chapter on the CTFL exam.
Mastering these techniques transforms testing from random exploration into a disciplined engineering practice. You'll learn when to apply black-box techniques like equivalence partitioning and boundary value analysis, when white-box coverage criteria matter, and how experience-based approaches complement systematic methods.
The Test Process Context
Before diving into techniques, understand where test analysis and design fit in the overall test process.
From Test Basis to Test Cases
Test Basis → Test Analysis → Test Design → Test Implementation → Test Execution

Test basis includes any documentation used to derive test cases:
- Requirements specifications
- Design documents
- User stories
- Risk analysis results
- Code (for white-box testing)
Test analysis answers "What to test?" by identifying test conditions from the test basis.
Test design answers "How to test?" by creating test cases from test conditions.
Test Conditions and Test Cases
| Concept | Definition | Example |
|---|---|---|
| Test condition | An aspect of the test basis that can be verified | "Login with valid credentials" |
| Test case | A set of inputs, expected results, and preconditions | Username: "john", Password: "pass123", Expected: successful login |
| Coverage item | An attribute of a test basis element used to measure coverage | A requirement, a code branch, a state transition |
Exam Tip: Test analysis produces test conditions; test design produces test cases. This distinction appears frequently on exams.
Test Analysis vs Test Design
Understanding the boundary between analysis and design is crucial for the exam.
Test Analysis Activities
- Analyzing the test basis (requirements, designs, code)
- Identifying features and sets of features to be tested
- Defining and prioritizing test conditions
- Capturing bidirectional traceability between test conditions and test basis
Test Design Activities
- Designing test cases from test conditions
- Identifying necessary test data
- Designing the test environment
- Capturing traceability between test cases and test conditions
The Relationship
Test analysis identifies what needs testing; test design determines how to test it effectively. A single test condition might yield multiple test cases, and a single test case might cover multiple test conditions.
Black-Box Test Techniques
Black-box techniques derive test cases from external specifications without knowledge of internal code structure. These techniques apply at any test level.
Equivalence Partitioning (EP)
Equivalence partitioning divides input data into groups (partitions) where all values within a partition should be treated identically by the software.
The Principle: If one value from a partition works correctly, all values in that partition should work. If one value fails, all values should fail. Therefore, testing one representative value per partition is sufficient.
Example: Age Validation
For a service that accepts ages 18-65:
| Partition | Values | Type | Test Value |
|---|---|---|---|
| Below minimum | < 18 | Invalid | 15 |
| Valid range | 18-65 | Valid | 35 |
| Above maximum | > 65 | Invalid | 70 |
Coverage:
- Minimum: Each partition has at least one test case
- The formula: Coverage = (Partitions tested / Total partitions) × 100%
⚠️
Common Exam Question: "How many test cases are needed for 100% equivalence partition coverage?" Count the number of partitions - you need at least one test case per partition.
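The TypeScript sketch below applies equivalence partitioning to the 18-65 age example above. The classifyAge function and the representative test values are assumptions used only to illustrate the technique, not part of the syllabus.

```typescript
// Equivalence partitioning for a service that accepts ages 18-65.
// classifyAge is a hypothetical implementation used only for illustration.
function classifyAge(age: number): "below minimum" | "valid" | "above maximum" {
  if (age < 18) return "below minimum";
  if (age > 65) return "above maximum";
  return "valid";
}

// One representative value per partition gives 100% EP coverage (3 partitions → 3 tests).
const epTests: Array<{ value: number; expected: string }> = [
  { value: 15, expected: "below minimum" }, // invalid partition: < 18
  { value: 35, expected: "valid" },         // valid partition: 18-65
  { value: 70, expected: "above maximum" }, // invalid partition: > 65
];

for (const t of epTests) {
  console.assert(
    classifyAge(t.value) === t.expected,
    `age ${t.value} should fall in the "${t.expected}" partition`
  );
}
```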
Boundary Value Analysis (BVA)
Boundary value analysis focuses on values at the edges of equivalence partitions. Experience shows that defects cluster around boundaries.
Two-Value BVA (Standard): Tests the boundary value and its closest neighbor outside the partition.
For the age range 18-65:
- 17 (just below minimum)
- 18 (minimum boundary)
- 65 (maximum boundary)
- 66 (just above maximum)
Three-Value BVA (Extended): Adds one more value inside each boundary:
- 17, 18, 19 (around minimum)
- 64, 65, 66 (around maximum)
Coverage:
- Count the boundary values tested vs. total boundary values
- For a single range with a minimum and a maximum, 2-value BVA yields 4 test values (e.g., 17, 18, 65, 66)
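Here is a minimal sketch of the 2-value and 3-value boundary sets for the same 18-65 range; the isValidAge helper is an assumption used only to show the expected outcome for each value.

```typescript
// Boundary value analysis for the 18-65 age range.
const min = 18;
const max = 65;

// 2-value BVA: each boundary plus its closest neighbor outside the partition.
const twoValueBva = [min - 1, min, max, max + 1];                     // 17, 18, 65, 66

// 3-value BVA: each boundary plus its neighbors on both sides.
const threeValueBva = [min - 1, min, min + 1, max - 1, max, max + 1]; // 17, 18, 19, 64, 65, 66

// Hypothetical validity check for the range, used to show expected outcomes.
const isValidAge = (age: number): boolean => age >= min && age <= max;

for (const age of twoValueBva) {
  console.log(`age ${age} -> expected ${isValidAge(age) ? "accepted" : "rejected"}`);
}
```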
Decision Table Testing
Decision tables capture complex business rules with multiple conditions affecting multiple outcomes.
Structure:
| Conditions | Rule 1 | Rule 2 | Rule 3 | Rule 4 |
|---|---|---|---|---|
| Condition 1 | Y | Y | N | N |
| Condition 2 | Y | N | Y | N |
| Actions | | | | |
| Action 1 | X | X | | |
| Action 2 | | | X | X |
Example: Insurance Premium Calculation
| Conditions | R1 | R2 | R3 | R4 |
|---|---|---|---|---|
| Age < 25 | Y | Y | N | N |
| Claims history | Y | N | Y | N |
| Actions | | | | |
| High premium | X | | | |
| Medium premium | | X | X | |
| Low premium | | | | X |
Coverage: Each column (rule) represents one test case. 100% coverage means testing all rules.
Simplification: When a condition doesn't matter for certain rules, use "–" to indicate "don't care." This can collapse multiple rules into one.
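As a hedged sketch, the function below implements the insurance-premium table above and runs one test per rule; the function name, parameter names, and premium labels are assumptions for illustration only.

```typescript
// A sketch of the insurance-premium decision table above.
type Premium = "high" | "medium" | "low";

function premium(under25: boolean, hasClaimsHistory: boolean): Premium {
  if (under25 && hasClaimsHistory) return "high";    // Rule 1
  if (under25 || hasClaimsHistory) return "medium";  // Rules 2 and 3
  return "low";                                      // Rule 4
}

// One test case per rule (column) gives 100% decision-table coverage.
const rules: Array<[boolean, boolean, Premium]> = [
  [true,  true,  "high"],    // R1: age < 25, claims history
  [true,  false, "medium"],  // R2: age < 25, no claims history
  [false, true,  "medium"],  // R3: age >= 25, claims history
  [false, false, "low"],     // R4: age >= 25, no claims history
];

for (const [under25, claims, expected] of rules) {
  console.assert(
    premium(under25, claims) === expected,
    `rule (${under25}, ${claims}) should yield a "${expected}" premium`
  );
}
```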
State Transition Testing
State transition testing models software behavior as states and transitions between them.
Components:
- States: Conditions the system can be in
- Transitions: Changes from one state to another
- Events: Triggers that cause transitions
- Actions: Activities that occur during transitions
- Guards: Conditions that must be true for transitions
Example: ATM Card State Machine
States: Idle → Card Inserted → PIN Verified → Transaction Selected → Complete/Ejected (with Card Retained as an error state)
Transitions:
- Insert card → Card Inserted
- Enter valid PIN → PIN Verified
- Enter invalid PIN (1st or 2nd attempt) → back to Card Inserted
- Enter invalid PIN (3rd attempt) → Card Retained

State Table:
| Current State | Event | Guard | Action | Next State |
|---|---|---|---|---|
| Idle | Insert card | – | Display PIN prompt | Card Inserted |
| Card Inserted | Valid PIN | – | Show menu | PIN Verified |
| Card Inserted | Invalid PIN | Attempts < 3 | Display retry | Card Inserted |
| Card Inserted | Invalid PIN | Attempts = 3 | Retain card | Card Retained |
Coverage Criteria:
- All states covered
- All valid transitions covered
- All invalid transitions covered (negative testing)
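A hedged TypeScript sketch of the state table above is shown below. State and event names follow the table; the attempts parameter (the number of invalid PIN entries, including the current one) and the function name are assumptions.

```typescript
// ATM card state machine derived from the state table above.
type State = "Idle" | "Card Inserted" | "PIN Verified" | "Card Retained";
type AtmEvent = "insert card" | "valid PIN" | "invalid PIN";

function nextState(state: State, event: AtmEvent, attempts: number = 0): State {
  if (state === "Idle" && event === "insert card") return "Card Inserted";
  if (state === "Card Inserted" && event === "valid PIN") return "PIN Verified";
  if (state === "Card Inserted" && event === "invalid PIN") {
    return attempts < 3 ? "Card Inserted" : "Card Retained"; // guard on attempt count
  }
  return state; // invalid state/event pairs leave the state unchanged
}

// One test per row of the state table covers all valid transitions.
console.assert(nextState("Idle", "insert card") === "Card Inserted");
console.assert(nextState("Card Inserted", "valid PIN") === "PIN Verified");
console.assert(nextState("Card Inserted", "invalid PIN", 2) === "Card Inserted");
console.assert(nextState("Card Inserted", "invalid PIN", 3) === "Card Retained");
```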
Exam Tip: For state transition questions, draw the diagram if not provided. Questions often ask about the number of test cases needed to cover all transitions.
White-Box Test Techniques
White-box techniques derive test cases from the internal structure of code. The CTFL syllabus focuses on two coverage criteria.
Statement Coverage
Statement coverage measures the percentage of executable statements exercised by tests.
Formula:
Statement Coverage = (Statements executed / Total statements) × 100%

Example:

```javascript
function checkDiscount(age, member) { // Line 1
  let discount = 0;                   // Line 2
  if (age > 60) {                     // Line 3
    discount = 10;                    // Line 4
  }                                   // Line 5
  if (member) {                       // Line 6
    discount = discount + 5;          // Line 7
  }                                   // Line 8
  return discount;                    // Line 9
}                                     // Line 10
```

To achieve 100% statement coverage, tests must reach lines 4 and 7, the statements inside the two if blocks.
Test case: age=65, member=true
- Executes: lines 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ✓
One test case achieves 100% statement coverage here.
Branch Coverage (Decision Coverage)
Branch coverage measures the percentage of decision outcomes exercised by tests.
Formula:
Branch Coverage = (Branches executed / Total branches) × 100%

Branches in the Example:

- if (age > 60) → True branch, False branch
- if (member) → True branch, False branch
Total: 4 branches
To achieve 100% branch coverage:
- Test 1: age=65, member=true → Both true branches
- Test 2: age=50, member=false → Both false branches
Two test cases needed for 100% branch coverage.
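A minimal sketch of those two tests is shown below, restating checkDiscount in TypeScript so the block is self-contained.

```typescript
// The same checkDiscount logic as above, with the two tests that together
// exercise every branch outcome.
function checkDiscount(age: number, member: boolean): number {
  let discount = 0;
  if (age > 60) {
    discount = 10;
  }
  if (member) {
    discount = discount + 5;
  }
  return discount;
}

console.assert(checkDiscount(65, true) === 15, "both decisions true");   // Test 1
console.assert(checkDiscount(50, false) === 0, "both decisions false");  // Test 2
```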
Relationship Between Coverage Types
100% branch coverage implies 100% statement coverage (but not vice versa).
If you execute all branches, you must execute all statements. However, executing all statements doesn't guarantee all branches are tested.
⚠️
Critical Exam Point: Branch coverage is stronger than statement coverage. The syllabus requires understanding this relationship.
The Value of Coverage
Coverage metrics tell you what your tests exercise, not whether they're effective. High coverage doesn't guarantee good testing - it's possible to achieve 100% coverage without checking any outcomes.
Coverage identifies untested code, which represents risk. Use coverage to find gaps, not to prove completeness.
Experience-Based Test Techniques
Experience-based techniques leverage tester knowledge and intuition. They complement systematic techniques rather than replacing them.
Error Guessing
Testers use experience to anticipate likely defects and design tests targeting them.
Common error categories (see the sketch after this list):
- Division by zero
- Empty strings or null values
- Maximum/minimum values
- Special characters
- Concurrent operations
- Error handling paths
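As a hedged illustration, the sketch below turns some of these categories into concrete "attack" values for a hypothetical handleInput function; both the function and the chosen inputs are assumptions, not a real API.

```typescript
// Error-guessing inputs derived from the common error categories above.
// handleInput is a hypothetical function under test.
function handleInput(value: string | null): string {
  if (value === null || value.trim() === "") return "rejected: empty";
  if (value.length > 255) return "rejected: too long";
  return `accepted: ${value}`;
}

const guessedInputs: Array<string | null> = [
  null,                 // null value
  "",                   // empty string
  " ",                  // whitespace only
  "a".repeat(256),      // just past an assumed maximum length
  "'; DROP TABLE x;--", // special characters / injection-style input
];

for (const input of guessedInputs) {
  console.log(JSON.stringify(input), "->", handleInput(input));
}
```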
Error guessing works best when:
- Testers have domain expertise
- Similar systems have been tested before
- Defect history is available
Exploratory Testing
Exploratory testing combines learning, test design, and test execution simultaneously.
Characteristics:
- Tester decides what to test next based on results
- Minimal advance planning
- Heavy reliance on tester skills
- Often time-boxed into sessions
Session-Based Test Management:
- Charter: What to explore
- Time-box: How long to explore
- Debriefing: What was found
When to use exploratory testing:
- Limited documentation available
- Learning a new system
- Supplementing scripted tests
- Finding subtle defects
Checklist-Based Testing
Testers use predefined checklists as guides, with flexibility in how conditions are verified.
Characteristics:
- Checklists built from experience
- Less detailed than test cases
- Coverage consistent but not identical
- Can become stale without maintenance
Example checklist for login functionality:
- Valid credentials accepted
- Invalid username rejected
- Invalid password rejected
- Password masking works
- "Remember me" functions correctly
- Timeout after inactivity
- Account lockout after failed attempts
Collaboration-Based Test Approaches
Modern development practices emphasize collaboration between developers, testers, and business stakeholders.
User Story Testing
User stories follow a format that naturally supports testing:
As a [role]
I want [feature]
So that [benefit]

Acceptance criteria define when the story is complete and testable.
ATDD (Acceptance Test-Driven Development)
Tests are defined before development begins, based on acceptance criteria.
Process:
- Define acceptance criteria collaboratively
- Create acceptance tests from criteria
- Develop code to pass tests
- Verify all acceptance tests pass
BDD (Behavior-Driven Development)
BDD uses a specific format for acceptance tests:
Given [precondition]
When [action]
Then [expected outcome]

Example:
Given a registered user with valid credentials
When the user enters username and password
And clicks the login button
Then the user is redirected to the dashboard
And a welcome message is displayed

BDD tests serve as both specifications and automated tests.
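Below is a framework-agnostic sketch (no Cucumber or similar tool is assumed) of how the Given/When/Then steps above might map to automated checks; the app object, its login method, and the test data are all assumptions for illustration.

```typescript
// Automating the BDD scenario above without any particular framework.
interface LoginResult { page: string; message: string }

// Minimal stand-in for the system under test.
const app = {
  login(username: string, password: string): LoginResult {
    const ok = username === "registered-user" && password === "valid-pass";
    return ok
      ? { page: "dashboard", message: "Welcome back!" }
      : { page: "login", message: "Invalid credentials" };
  },
};

// Given a registered user with valid credentials
const username = "registered-user";
const password = "valid-pass";

// When the user enters username and password and clicks the login button
const result = app.login(username, password);

// Then the user is redirected to the dashboard
console.assert(result.page === "dashboard", "user should land on the dashboard");
// And a welcome message is displayed
console.assert(result.message.includes("Welcome"), "a welcome message should be shown");
```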
Choosing the Right Technique
Different techniques suit different situations. Here's guidance on selection.
When to Use Each Technique
| Technique | Best For |
|---|---|
| Equivalence Partitioning | Input validation, data processing |
| Boundary Value Analysis | Numeric ranges, date ranges |
| Decision Tables | Complex business rules |
| State Transition | Workflow, session management |
| Statement Coverage | Basic code coverage |
| Branch Coverage | Decision logic coverage |
| Error Guessing | Known problem areas |
| Exploratory Testing | New features, limited documentation |
Combining Techniques
Effective testing combines multiple techniques:
- Start with equivalence partitioning to identify value domains
- Add boundary value analysis for edges
- Use decision tables for complex rules
- Apply state transition for stateful behavior
- Measure coverage to find gaps
- Supplement with exploratory testing
Coverage Considerations
The "right" coverage depends on:
- Risk level of the feature
- Regulatory requirements
- Time and resource constraints
- Historical defect patterns
Exam Tip: Questions often present scenarios and ask which technique is most appropriate. Focus on matching technique characteristics to scenario needs.
Coverage and Test Completion
Coverage metrics help measure test progress and completeness.
Types of Coverage
Specification-based coverage:
- Requirements coverage
- Risk coverage
- Feature coverage
Structure-based coverage:
- Statement coverage
- Branch coverage
- Condition coverage (beyond CTFL scope)
Using Coverage Metrics
Coverage answers: "What portion of the test basis have tests exercised?"
Low coverage indicates:
- Missing tests
- Unreachable code (if coverage cannot improve)
- Incomplete test design
High coverage indicates:
- Tests exercise the test basis broadly
- Does NOT guarantee quality
- May still miss defects in tested code
Test Completion Criteria
Projects define exit criteria that may include:
- Coverage thresholds (e.g., 80% branch coverage)
- Defect thresholds (e.g., no critical defects open)
- Test execution percentages (e.g., 100% of planned tests)
Exam Preparation Tips
Chapter 4 is the most heavily weighted chapter. Focus preparation strategically.
High-Priority Topics
- Equivalence partitioning and boundary value analysis
  - Calculate the number of test cases needed
  - Identify partitions from requirements
- Decision table structure and coverage
  - Read and interpret decision tables
  - Determine the test cases needed
- State transition diagrams
  - Identify states and transitions
  - Calculate valid and invalid transitions
- Statement vs branch coverage
  - Know the relationship (branch coverage is stronger than statement coverage)
  - Calculate coverage from code examples
- Experience-based techniques
  - When to use each approach
  - Characteristics of exploratory testing
Common Exam Question Patterns
"How many test cases for 100% EP coverage of a field accepting 1-100?" Three partitions: less than 1, 1-100, greater than 100 → 3 test cases minimum
"What is the minimum number of test cases for 100% branch coverage?" Count decision points, determine minimum paths to cover all branches
"Which technique is best for testing complex business rules?" Decision table testing
"Which coverage criterion is stronger?" Branch coverage is stronger than statement coverage
Calculation Practice
Practice these calculations (a worked example follows the list):
- Counting equivalence partitions
- Identifying boundary values (2-value and 3-value)
- Counting rules in decision tables
- Calculating statement and branch coverage from code
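As a quick worked example, the sketch below prints the numbers you would be expected to produce for a field that accepts 1-100 (the range is an assumption chosen to match the exam pattern above).

```typescript
// Worked calculation for a field accepting values 1-100.
const min = 1;
const max = 100;

// Equivalence partitions: below 1, 1-100, above 100 → 3 partitions, 3 tests minimum.
const partitionCount = 3;

// 2-value BVA: each boundary plus its closest out-of-range neighbor.
const twoValueBva = [min - 1, min, max, max + 1];                     // 0, 1, 100, 101

// 3-value BVA: each boundary plus both of its neighbors.
const threeValueBva = [min - 1, min, min + 1, max - 1, max, max + 1]; // 0, 1, 2, 99, 100, 101

console.log(`Minimum EP test cases: ${partitionCount}`);
console.log(`2-value BVA values: ${twoValueBva.join(", ")}`);
console.log(`3-value BVA values: ${threeValueBva.join(", ")}`);
```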
Test Your Knowledge
Question: A text field accepts values from 1 to 999. Using equivalence partitioning, how many partitions exist? (Answer: three, namely below 1, 1-999, and above 999.)
Continue Your CTFL Preparation
Progress through the complete CTFL syllabus:
- Chapter 3: Static Testing
- Chapter 5: Managing Test Activities
- Chapter 6: Test Tools
- CTFL Practice Exam
Frequently Asked Questions
How many questions on the ISTQB CTFL exam come from Chapter 4?
What is the difference between equivalence partitioning and boundary value analysis?
When should I use decision table testing vs state transition testing?
Why is branch coverage considered stronger than statement coverage?
How do I calculate the number of test cases needed for equivalence partition coverage?
What's the difference between 2-value and 3-value boundary value analysis?
When is exploratory testing more appropriate than scripted testing?
How do BDD scenarios relate to test design techniques?