Test Case Design Techniques: Complete Practical Guide

Parul Dhingra - Senior Quality Analyst

Updated: 7/20/2025

Question | Quick Answer
What are test case design techniques? | Systematic methods for selecting test inputs that maximize defect detection while minimizing redundant tests.
Which technique is best for numeric inputs? | Boundary Value Analysis (BVA) combined with Equivalence Partitioning. Test at edges and one value from each partition.
When should I use decision tables? | When business logic involves multiple conditions that combine to produce different outcomes.
What is state transition testing for? | Systems where behavior depends on previous actions or states, like workflows, shopping carts, or user sessions.
How does error guessing work? | Uses tester experience to anticipate likely defects based on common bug patterns and past issues.
Should I use one technique or several? | Combine techniques. Each catches different defect types. Start with equivalence partitioning and BVA for inputs.

Test case design techniques are systematic approaches for selecting test inputs that find defects efficiently. Instead of testing random values or every possible input, these techniques help you choose the right test cases based on how software typically fails.

The six core techniques covered in this guide:

  1. Boundary Value Analysis - Tests at range edges where defects cluster
  2. Equivalence Partitioning - Groups similar inputs to reduce redundant tests
  3. Decision Table Testing - Handles complex business logic with multiple conditions
  4. State Transition Testing - Tests systems where past actions affect current behavior
  5. Use Case Testing - Validates end-to-end user scenarios
  6. Error Guessing - Applies experience to predict likely failure points

Each technique targets specific defect types. Using them together provides thorough coverage without wasting effort on redundant tests.

Why Test Case Design Techniques Matter

Testing every possible input is impossible. A simple form with three fields accepting numbers 1-100 has one million combinations. Add a fourth field and you have 100 million.

Test case design techniques solve this problem by identifying which inputs actually matter. They focus testing effort on:

  • Boundaries where comparison logic often fails
  • Representatives from groups of similar inputs
  • Combinations of conditions that trigger different behavior
  • Sequences that expose state-dependent bugs
  • Scenarios that reflect real user behavior
  • Weak spots based on common defect patterns

The goal is not more tests. The goal is better tests that find more defects with less effort.

Without systematic techniques, testers default to "happy path" testing with obvious values like 50, 100, or typical user data. These values rarely expose defects. Bugs hide at edges, in unusual combinations, and in sequences users stumble into accidentally.

Boundary Value Analysis

Boundary Value Analysis (BVA) targets the edges of input ranges. The principle: defects cluster at boundaries because programmers make off-by-one errors and use wrong comparison operators.

How BVA Works

For any input with a defined range, BVA tests:

  • The minimum valid value
  • The maximum valid value
  • Values just below minimum (invalid)
  • Values just above maximum (invalid)

Example: Age Field (Valid Range: 18-65)

Test Value | Expected Result | Purpose
17 | Reject | Just below minimum
18 | Accept | At minimum boundary
65 | Accept | At maximum boundary
66 | Reject | Just above maximum
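The table above maps directly onto executable checks. A minimal sketch, assuming a hypothetical `isValidAge` function implements the 18-65 rule:

```javascript
// Hypothetical validator for the 18-65 age rule.
const isValidAge = (age) => age >= 18 && age <= 65;

// Two-value BVA test set: each boundary plus one adjacent invalid value.
const bvaCases = [
  { age: 17, expected: false }, // just below minimum
  { age: 18, expected: true },  // at minimum boundary
  { age: 65, expected: true },  // at maximum boundary
  { age: 66, expected: false }, // just above maximum
];

for (const { age, expected } of bvaCases) {
  const result = isValidAge(age) === expected ? "PASS" : "FAIL";
  console.log(`age=${age} -> ${result}`);
}
```

A buggy comparison such as `age < 65` would fail exactly one of these four cases, which is the point of testing the boundary values themselves.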

Two-Value vs Three-Value BVA

Two-value BVA tests the boundary and one adjacent value. Four test cases for a simple range.

Three-value BVA adds one more value on each side. Six test cases total, catching defects two-value might miss.

For a 1-100 range:

Approach | Lower Boundary | Upper Boundary | Total Tests
Two-value | 0, 1 | 100, 101 | 4
Three-value | 0, 1, 2 | 99, 100, 101 | 6

Use three-value BVA for critical inputs like financial calculations. Use two-value when time is limited.

BVA Beyond Numbers

BVA applies to any bounded input:

String length (8-20 characters):

  • Test 7, 8, 20, 21 character strings

Dates (2024-01-01 to 2024-12-31):

  • Test December 31, 2023; January 1, 2024; December 31, 2024; January 1, 2025

File size (max 10 MB):

  • Test 10,485,759 bytes, 10,485,760 bytes, 10,485,761 bytes

Array size (max 100 items):

  • Test 99, 100, 101 items
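Because the pattern is the same for every bounded input, the two-value test set can be generated mechanically. A sketch (the `twoValueBVA` helper is illustrative, not a standard library function):

```javascript
// For a numeric range [min, max], return the two-value BVA test set:
// each boundary plus the adjacent invalid value on either side.
function twoValueBVA(min, max) {
  return [min - 1, min, max, max + 1];
}

console.log(twoValueBVA(1, 100)); // [0, 1, 100, 101] - quantity 1-100
console.log(twoValueBVA(8, 20));  // [7, 8, 20, 21]   - string length 8-20
console.log(twoValueBVA(1, 999)); // [0, 1, 999, 1000] - array size limits
```

The same idea applies to dates and file sizes once they are expressed as numbers (timestamps, byte counts).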

Why Boundaries Fail

The most common boundary bugs:

// Bug: rejects valid age 65
if (age >= 18 && age < 65) { ... }

// Bug: accepts invalid age 17
if (age > 17 && age <= 65) { ... }

// Correct
if (age >= 18 && age <= 65) { ... }

A single character difference (< vs <=) creates a bug visible only at the exact boundary value.

Equivalence Partitioning

Equivalence Partitioning (EP) divides the input domain into groups where all values within a group should produce the same behavior. You then test one representative value from each group instead of testing every value.

How EP Works

  1. Identify all possible inputs
  2. Divide them into valid and invalid partitions
  3. Select one value from each partition
  4. Test only those values

Example: Age Field (Valid Range: 18-65)

Partition | Range | Representative | Expected Result
Invalid (too young) | 0-17 | 10 | Reject
Valid | 18-65 | 40 | Accept
Invalid (too old) | 66+ | 80 | Reject

Instead of testing ages 0, 1, 2, 3... through 100, test three values. If the system handles age 40 correctly, it should handle 25, 35, 45, 50, and 60 the same way.
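The three representatives above can be checked in a few lines. A sketch, again assuming a hypothetical `isValidAge` validator for the 18-65 rule:

```javascript
// Hypothetical validator for the 18-65 age rule.
const isValidAge = (age) => age >= 18 && age <= 65;

// One representative value per equivalence partition.
const partitions = [
  { name: "invalid: too young (0-17)", representative: 10, expected: false },
  { name: "valid (18-65)",             representative: 40, expected: true  },
  { name: "invalid: too old (66+)",    representative: 80, expected: false },
];

for (const p of partitions) {
  const ok = isValidAge(p.representative) === p.expected;
  console.log(`${p.name}: ${ok ? "PASS" : "FAIL"}`);
}
```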

Partitioning Rules

Rule 1: Every input belongs to exactly one partition.

Do not create overlapping partitions. If one partition is "under 18" and another is "18-65", they do not overlap.

Rule 2: Test at least one value from every partition.

Including invalid partitions. A system might accept invalid ages if negative numbers are not checked.

Rule 3: All values in a partition should produce identical behavior.

If some values in your partition behave differently, split it into smaller partitions.

Types of Partitions

Valid partitions: Inputs the system should accept
Invalid partitions: Inputs the system should reject

Both need testing. Invalid partition tests verify error handling works correctly.

Example: Username Field

Partition Type | Description | Representative
Valid | 3-20 alphanumeric characters | "testuser1"
Invalid | Empty | ""
Invalid | Too short (1-2 chars) | "ab"
Invalid | Too long (21+ chars) | "abcdefghijklmnopqrstu"
Invalid | Contains spaces | "test user"
Invalid | Contains special characters | "test@user"

Multi-Input Partitioning

When a system has multiple inputs, partition each input independently, then consider combinations.

Example: Discount Calculator

Input 1: Customer Type

  • Partition A: Regular
  • Partition B: Premium
  • Partition C: VIP

Input 2: Order Amount

  • Partition X: $0-$99
  • Partition Y: $100-$499
  • Partition Z: $500+

Testing all combinations (A-X, A-Y, A-Z, B-X, B-Y, B-Z, C-X, C-Y, C-Z) gives 9 test cases. This covers interaction between inputs that single-input partitioning misses.
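Enumerating the combinations is a simple Cartesian product. A sketch using the partitions above (the partition labels are taken from the example, the code itself is illustrative):

```javascript
// Partitions for each input of the discount calculator example.
const customerTypes = ["Regular", "Premium", "VIP"];
const amountPartitions = ["$0-$99", "$100-$499", "$500+"];

// Cartesian product: one test case per combination of partitions.
const testCases = [];
for (const type of customerTypes) {
  for (const amount of amountPartitions) {
    testCases.push({ type, amount });
  }
}

console.log(testCases.length); // 9 combinations for 3 x 3 partitions
```

For more inputs the count multiplies quickly (3 x 3 x 3 = 27), which is why risk-based selection of combinations becomes necessary.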

Using BVA and Equivalence Partitioning Together

BVA and EP complement each other. EP identifies groups; BVA tests the edges between groups.

Combined Approach

  1. Apply equivalence partitioning to identify partitions
  2. Select one representative from each partition interior
  3. Apply BVA to test boundaries between partitions

Example: Quantity Field (Valid: 1-999)

Step 1: EP Partitions

  • Invalid: 0 and below
  • Valid: 1-999
  • Invalid: 1000 and above

Step 2: EP Representatives

  • Invalid low: -5
  • Valid: 500
  • Invalid high: 2000

Step 3: BVA Values

  • Lower boundary: 0, 1
  • Upper boundary: 999, 1000

Combined Test Set:

Value | From Technique | Tests
-5 | EP | Negative number handling
0 | BVA | At invalid/valid boundary
1 | BVA | Minimum valid
500 | EP | Middle of valid partition
999 | BVA | Maximum valid
1000 | BVA | At valid/invalid boundary
2000 | EP | Far above maximum

Seven tests cover both partition behavior and boundary handling.
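The combined set can be run as one pass over a validator. A minimal sketch, assuming a hypothetical `isValidQuantity` function implements the 1-999 rule:

```javascript
// Hypothetical validator for the 1-999 quantity rule.
const isValidQuantity = (q) => Number.isInteger(q) && q >= 1 && q <= 999;

// Combined EP + BVA test set from the table above.
const cases = [
  { value: -5,   expected: false }, // EP: negative number handling
  { value: 0,    expected: false }, // BVA: at invalid/valid boundary
  { value: 1,    expected: true  }, // BVA: minimum valid
  { value: 500,  expected: true  }, // EP: middle of valid partition
  { value: 999,  expected: true  }, // BVA: maximum valid
  { value: 1000, expected: false }, // BVA: at valid/invalid boundary
  { value: 2000, expected: false }, // EP: far above maximum
];

const failures = cases.filter((c) => isValidQuantity(c.value) !== c.expected);
console.log(failures.length === 0 ? "all 7 tests pass" : failures);
```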

Decision Table Testing

Decision table testing handles complex business logic where multiple conditions combine to determine outcomes. When requirements say "if A and B, then X; if A and not B, then Y," a decision table captures all combinations systematically.

When to Use Decision Tables

  • Business rules with multiple conditions
  • Complex validation logic
  • Pricing with conditional discounts
  • Access control based on multiple factors
  • Insurance premium calculations
  • Loan approval logic

Building a Decision Table

Step 1: Identify conditions (inputs)

List all conditions that affect the outcome. Each condition is typically true or false.

Step 2: Identify actions (outputs)

List all possible system responses or outcomes.

Step 3: Calculate combinations

For N true/false conditions, there are 2^N combinations. Three conditions = 8 rules.

Step 4: Fill in all combinations

Create a column for each combination. Mark which actions apply.
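The steps above can be partly automated: for N true/false conditions, the 2^N rule columns can be enumerated with a counter. A sketch (the `conditionCombinations` helper and its condition names are illustrative):

```javascript
// Enumerate all 2^N true/false combinations for a list of condition names.
// Counting down from 2^N - 1 yields the conventional ordering: rule 1 is
// all-Y, the last rule is all-N, matching typical decision table layouts.
function conditionCombinations(conditions) {
  const total = 2 ** conditions.length;
  const rules = [];
  for (let i = total - 1; i >= 0; i--) {
    const rule = {};
    conditions.forEach((name, bit) => {
      // Read one bit per condition, most significant bit first.
      rule[name] = Boolean((i >> (conditions.length - 1 - bit)) & 1);
    });
    rules.push(rule);
  }
  return rules;
}

const rules = conditionCombinations(["overFifty", "premium", "saleItems"]);
console.log(rules.length); // 8 rules for 3 conditions
```

Filling in the actions for each generated rule remains a manual, requirements-driven step.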

Example: Free Shipping Logic

Business Rules:

  • Orders over $50 get free shipping
  • Premium members always get free shipping
  • Sale items have a $5 flat shipping fee regardless of other factors

Conditions:

  1. Order over $50? (Y/N)
  2. Premium member? (Y/N)
  3. Contains sale items? (Y/N)

Actions:

  1. Free shipping
  2. Standard shipping rate
  3. Flat $5 fee

Decision Table:

Rule | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
Order > $50 | Y | Y | Y | Y | N | N | N | N
Premium member | Y | Y | N | N | Y | Y | N | N
Sale items | Y | N | Y | N | Y | N | Y | N
Actions
Free shipping | - | X | - | X | - | X | - | -
Standard rate | - | - | - | - | - | - | - | X
Flat $5 | X | - | X | - | X | - | X | -

When sale items are present, the flat $5 fee applies regardless of other conditions. This table reveals that Rule 8 (non-premium, under $50, no sale items) is the only case paying standard shipping.
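One plausible implementation of these rules makes the precedence explicit. A sketch, not the only way to code it:

```javascript
// Sketch of the free-shipping business rules from the decision table.
// Sale items override everything; otherwise premium membership or an
// order over $50 earns free shipping.
function shippingFee(orderOverFifty, premiumMember, hasSaleItems) {
  if (hasSaleItems) return "flat $5";
  if (premiumMember || orderOverFifty) return "free";
  return "standard";
}

console.log(shippingFee(false, false, false)); // rule 8: "standard"
console.log(shippingFee(true, true, true));    // rule 1: "flat $5"
```

Testing all eight rules against an implementation like this catches precedence mistakes, such as checking the $50 threshold before the sale-item override.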

Simplifying Decision Tables

Tables with many conditions become large. Simplify by:

Collapsing rules with same actions:

If Rules 1, 3, 5, 7 all have the same action and only differ in conditions that do not matter, combine them into one rule with a dash (-) for irrelevant conditions.

Simplified table:

Rule | 1 | 2 | 3
Order > $50 | - | Y (or Premium) | N
Premium member | - | Y (or > $50) | N
Sale items | Y | N | N
Action | Flat $5 | Free shipping | Standard rate

Creating Test Cases from Decision Tables

Each rule becomes at least one test case:

Test | Order Amount | Membership | Items | Expected Shipping
1 | $75 | Premium | Sale item | $5 flat
2 | $75 | Premium | Regular item | Free
3 | $75 | Regular | Sale item | $5 flat
4 | $75 | Regular | Regular item | Free
5 | $30 | Premium | Sale item | $5 flat
6 | $30 | Premium | Regular item | Free
7 | $30 | Regular | Sale item | $5 flat
8 | $30 | Regular | Regular item | Standard rate

State Transition Testing

State transition testing applies when system behavior depends on previous actions. The same input can produce different outputs depending on what state the system is in.

When to Use State Transition Testing

  • User authentication flows (login, logout, locked accounts)
  • Shopping cart states (empty, has items, checkout)
  • Order status workflows (pending, confirmed, shipped, delivered)
  • Document approval processes
  • Subscription lifecycle (trial, active, expired, cancelled)
  • Media player states (stopped, playing, paused)

State Transition Diagrams

A state transition diagram shows:

  • States: Circles representing system conditions
  • Transitions: Arrows showing state changes
  • Events: Labels on arrows showing what triggers transitions
  • Guards: Conditions that must be true for transition to occur

Example: Login System

States:

  • Logged Out
  • Logged In
  • Locked (after 3 failed attempts)

Events:

  • Valid login
  • Invalid login
  • Logout
  • Wait 30 minutes (unlocks account)

State Transition Table:

Current State | Event | Next State | Action
Logged Out | Valid login | Logged In | Grant access, reset attempts
Logged Out | Invalid login (attempts < 3) | Logged Out | Increment attempts, show error
Logged Out | Invalid login (attempts = 3) | Locked | Lock account, show lockout message
Logged In | Logout | Logged Out | End session
Logged In | Session timeout | Logged Out | End session, show timeout message
Locked | Any login attempt | Locked | Show "account locked"
Locked | 30 minutes pass | Logged Out | Unlock, reset attempts
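The transition table can be modeled as a small state machine, which is also a convenient test harness. A minimal sketch; state names, the `maxAttempts` default, and the `lockoutExpires` event (standing in for "30 minutes pass") are illustrative choices:

```javascript
// Minimal sketch of the login state machine described in the table.
function createLoginMachine(maxAttempts = 3) {
  let state = "loggedOut";
  let attempts = 0;
  return {
    get state() { return state; },
    login(valid) {
      if (state === "locked") return state; // any attempt while locked is ignored
      if (state === "loggedOut") {
        if (valid) { state = "loggedIn"; attempts = 0; }
        else if (++attempts >= maxAttempts) state = "locked";
      }
      return state;
    },
    logout() {
      if (state === "loggedIn") state = "loggedOut";
      return state;
    },
    lockoutExpires() { // models the "30 minutes pass" event
      if (state === "locked") { state = "loggedOut"; attempts = 0; }
      return state;
    },
  };
}

const m = createLoginMachine();
m.login(false); m.login(false); m.login(false);
console.log(m.state);            // "locked" after three failed attempts
console.log(m.login(true));      // still "locked": valid credentials ignored
console.log(m.lockoutExpires()); // "loggedOut" once the lockout expires
```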

Test Cases from State Transitions

Coverage levels:

  1. All states: Ensure every state is reached at least once
  2. All transitions: Test every arrow in the diagram
  3. All transition pairs: Test sequences of two transitions
  4. Invalid transitions: Attempt transitions that should not be possible

Test cases for login system:

Test | Start State | Action | Expected State
1 | Logged Out | Valid credentials | Logged In
2 | Logged Out | Invalid credentials x1 | Logged Out (1 attempt)
3 | Logged Out | Invalid credentials x3 | Locked
4 | Logged In | Click logout | Logged Out
5 | Locked | Valid credentials | Locked (still)
6 | Locked | Wait 30 min | Logged Out
7 | Logged In | Wait for timeout | Logged Out

Switch Coverage (N-Switch Testing)

Switch coverage tests sequences of transitions:

  • 0-switch: Each transition tested once (basic coverage)
  • 1-switch: Each pair of consecutive transitions tested
  • 2-switch: Each sequence of three transitions tested

Higher switch coverage finds bugs in state history handling but requires more tests.

1-switch test example:

  1. Logged Out -> Invalid login -> Logged Out -> Valid login -> Logged In

This tests what happens after a failed attempt followed by success.

Use Case Testing

Use case testing validates complete user scenarios from start to finish. Instead of testing individual functions, you test the paths users take to accomplish goals.

Anatomy of a Use Case

  • Actor: Who performs the action (user, admin, external system)
  • Preconditions: What must be true before starting
  • Main flow: The happy path steps
  • Alternative flows: Valid variations from the main flow
  • Exception flows: Error handling paths
  • Postconditions: System state after completion

Example: Online Purchase Use Case

Use Case: Purchase Item

Actor: Registered customer

Preconditions: Customer is logged in, item is in stock

Main Flow:

  1. Customer adds item to cart
  2. Customer proceeds to checkout
  3. System displays order summary
  4. Customer selects shipping address
  5. Customer selects payment method
  6. Customer confirms order
  7. System processes payment
  8. System confirms order and sends email

Alternative Flows:

A1. New shipping address (at step 4)

  • Customer clicks "Add new address"
  • Customer enters address details
  • System validates address
  • Continue at step 5

A2. Saved payment method (at step 5)

  • Customer selects saved card
  • Continue at step 6

Exception Flows:

E1. Payment declined (at step 7)

  • System shows error message
  • Customer tries different payment method
  • Continue at step 5

E2. Item goes out of stock (at step 7)

  • System shows out of stock message
  • System removes item from order
  • Return to step 3

Test Cases from Use Cases

Main flow test: Test the complete happy path with all default options.

Alternative flow tests: Test each alternative flow by following its path.

Exception flow tests: Force each exception condition and verify handling.

Combined path tests: Test realistic combinations of alternatives and exceptions.

Test | Path | Description
1 | Main | Standard purchase with existing address and card
2 | Main + A1 | Purchase with new shipping address
3 | Main + A2 | Purchase with saved payment method
4 | Main + E1 | Payment declined, retry with different card
5 | Main + E2 | Item out of stock during checkout
6 | Main + A1 + E1 | New address, payment declined

Benefits of Use Case Testing

  • Tests from user perspective, not technical perspective
  • Catches integration issues between components
  • Validates that features work together as intended
  • Reveals gaps in requirements (what happens when...?)
  • Creates tests stakeholders can understand

Error Guessing

Error guessing uses tester experience and intuition to identify likely failure points. Unlike systematic techniques, it relies on knowledge of common bug patterns, past defects, and how software typically fails.

When to Use Error Guessing

  • After applying systematic techniques to find additional edge cases
  • When testing areas with history of defects
  • When time is limited and you need high-value tests
  • For exploratory testing sessions
  • When testing new or unfamiliar functionality

Common Error Categories

Input-related errors:

  • Empty or null values
  • Extremely long inputs
  • Special characters (quotes, slashes, Unicode)
  • Leading/trailing spaces
  • SQL injection patterns
  • Script injection (XSS)
  • Negative numbers where not expected
  • Zero as a divisor

Timing and sequence errors:

  • Double-clicking submit buttons
  • Back button after form submission
  • Refreshing during processing
  • Concurrent modifications
  • Session timeout during action
  • Network interruption mid-operation

State-related errors:

  • Operating on deleted records
  • Actions on expired sessions
  • Using stale cached data
  • Race conditions with multiple users

Environment errors:

  • Different browsers
  • Different screen sizes
  • Slow network connections
  • Low disk space
  • Different time zones

Building an Error Guessing Checklist

Create a checklist based on your experience:

Form Inputs:

  • Empty required fields
  • Only spaces in text fields
  • Maximum length + 1 character
  • Copy-paste formatted text from Word
  • Email without @ symbol
  • Dates in wrong format
  • Future dates where not allowed
  • Negative quantities

User Actions:

  • Double-click submit
  • Click submit then navigate away
  • Use browser back button after action
  • Open same form in two tabs
  • Leave form idle until session expires

Data Conditions:

  • First user/record in system
  • Empty database tables
  • Maximum records in list
  • Special characters in names (O'Brien, comma, quotes)

Error Guessing vs Random Testing

Error guessing is not random testing. Random testing picks arbitrary values without logic. Error guessing deliberately targets values likely to cause problems based on experience.

Random: Test age = 47
Error guessing: Test age = 0, age = -1, age = 999999999

Random testing might find bugs by chance. Error guessing systematically targets known weak spots.
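In practice, error guessing often means maintaining a reusable list of hostile inputs and running them against each new field. A sketch, assuming a hypothetical username validator enforcing the 3-20 alphanumeric rule from earlier:

```javascript
// Hypothetical username validator: 3-20 alphanumeric characters.
const isValidUsername = (s) => /^[A-Za-z0-9]{3,20}$/.test(s);

// Error-guessing inputs deliberately target likely weak spots.
const hostileInputs = [
  "",                           // empty
  "   ",                        // only spaces
  "a".repeat(21),               // maximum length + 1
  "test user",                  // embedded space
  "O'Brien",                    // quote character
  "<script>alert(1)</script>",  // script injection pattern
];

// A well-behaved validator should reject every hostile input.
const accepted = hostileInputs.filter(isValidUsername);
console.log(accepted.length === 0 ? "all hostile inputs rejected" : accepted);
```

The same list can be reused across forms, growing as the team discovers new bug patterns.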

Choosing the Right Technique

Different techniques excel in different situations. Here is when to use each:

Situation | Best Technique
Numeric input with defined range | BVA + Equivalence Partitioning
Multiple input fields | Equivalence Partitioning
Complex business rules with conditions | Decision Table
Workflow or status-based systems | State Transition
End-to-end user scenarios | Use Case Testing
Supplementing systematic testing | Error Guessing
Time-limited testing | Error Guessing + BVA

Technique Selection by Input Type

  • Single field, numeric: BVA + EP
  • Single field, text: EP for categories, BVA for length
  • Multiple fields, independent: EP per field
  • Multiple fields, dependent: Decision Table
  • Sequential operations: State Transition
  • User journeys: Use Case Testing

Coverage Goals

Goal | Techniques to Use
Input validation coverage | BVA, EP
Business logic coverage | Decision Table
Workflow coverage | State Transition
User scenario coverage | Use Case
Edge case coverage | Error Guessing
Complete coverage | All techniques combined

Combining Techniques in Practice

Real testing combines multiple techniques. Here is a practical approach:

Step-by-Step Process

Step 1: Start with Use Cases

Identify the main user scenarios. This gives you end-to-end tests and reveals what needs detailed testing.

Step 2: Apply EP to Inputs

For each input in your use cases, identify equivalence partitions. This ensures you test valid and invalid categories.

Step 3: Add BVA for Bounded Inputs

For inputs with ranges (numbers, dates, string lengths), add boundary values.

Step 4: Build Decision Tables for Complex Rules

When multiple conditions affect outcomes, create decision tables to ensure all combinations are covered.

Step 5: Analyze State-Dependent Behavior

If the system has workflows or statuses, map states and transitions. Add tests for each transition.

Step 6: Apply Error Guessing

Review your test set and add likely error scenarios based on experience.

Example: Testing a Checkout Process

Use Case: Customer completes checkout

EP + BVA for Cart Quantity:

  • Empty cart (invalid)
  • 1 item (minimum valid)
  • 99 items (max per order)
  • 100 items (invalid)

Decision Table for Shipping:

  • Conditions: Order total > $50, Premium member, Express selected
  • Actions: Free standard, Paid standard, Free express, Paid express

State Transitions:

  • Empty -> Has Items -> Checkout -> Payment -> Confirmed
  • Also: Checkout -> Empty (cancel), Payment -> Checkout (payment failed)

Error Guessing Additions:

  • Add item, remove item, add same item (duplicate handling)
  • Change quantity to 0 vs clicking remove
  • Checkout with expired promo code
  • Submit order twice rapidly

Combined, these techniques create thorough coverage of the checkout process.

Common Mistakes to Avoid

Mistake 1: Using Only One Technique

Each technique catches different defect types. Using only BVA misses business rule defects. Using only use cases misses boundary bugs.

Fix: Combine techniques appropriate to what you are testing.

Mistake 2: Not Testing Invalid Partitions

Testers focus on valid inputs. But error handling has bugs too. Invalid input tests verify the system fails gracefully.

Fix: Include invalid equivalence partitions in every test set.

Mistake 3: Testing Boundaries but Not the Boundary

Testing 99 and 101 but not 100. The boundary value itself is where < vs <= bugs appear.

Fix: Always test the boundary value, not just adjacent values.

Mistake 4: Ignoring State Dependencies

Testing features in isolation when they depend on previous actions. A function that works after login might fail for logged-out users.

Fix: Test features in realistic sequences, not just isolation.

Mistake 5: Over-Relying on Error Guessing

Error guessing is valuable but not systematic. Important boundaries and combinations get missed.

Fix: Use error guessing to supplement systematic techniques, not replace them.

Mistake 6: Creating Too Many Tests

Decision tables with 10 conditions create 1024 combinations. Not all need testing.

Fix: Focus on realistic combinations and high-risk rules. Use risk-based prioritization.

Summary and Key Takeaways

Test case design techniques transform testing from random to systematic. Each technique targets specific defect types:

Boundary Value Analysis:

  • Tests at range edges where off-by-one errors hide
  • Apply to any bounded input: numbers, strings, dates, arrays
  • Combine with EP for complete input coverage

Equivalence Partitioning:

  • Groups similar inputs to reduce redundant tests
  • Always include invalid partitions
  • Foundation for all other input-based testing

Decision Table Testing:

  • Handles complex conditions systematically
  • Ensures all rule combinations are tested
  • Best for business logic with multiple factors

State Transition Testing:

  • Tests behavior that depends on history
  • Covers workflows, statuses, and session states
  • Use switch coverage for thorough sequence testing

Use Case Testing:

  • Validates complete user journeys
  • Tests integration between components
  • Catches issues systematic techniques miss

Error Guessing:

  • Applies experience to find likely bugs
  • Supplements systematic approaches
  • Targets common failure patterns

Best Practices:

  1. Combine techniques based on what you are testing
  2. Start with use cases to understand scope
  3. Apply EP and BVA to all inputs systematically
  4. Use decision tables for conditional logic
  5. Map states when behavior depends on history
  6. Add error guessing tests based on experience
  7. Include invalid inputs in every test set
  8. Test boundaries, not just adjacent values
  9. Consider realistic combinations, not all combinations

Mastering these techniques improves both defect detection and testing efficiency. Start with equivalence partitioning and boundary value analysis. Add decision tables and state transition testing as needed. Use error guessing throughout. The goal is finding defects before users do.

