
Test Case Design Techniques: Complete Practical Guide
| Question | Quick Answer |
|---|---|
| What are test case design techniques? | Systematic methods for selecting test inputs that maximize defect detection while minimizing redundant tests. |
| Which technique is best for numeric inputs? | Boundary Value Analysis (BVA) combined with Equivalence Partitioning. Test at edges and one value from each partition. |
| When should I use decision tables? | When business logic involves multiple conditions that combine to produce different outcomes. |
| What is state transition testing for? | Systems where behavior depends on previous actions or states, like workflows, shopping carts, or user sessions. |
| How does error guessing work? | Uses tester experience to anticipate likely defects based on common bug patterns and past issues. |
| Should I use one technique or several? | Combine techniques. Each catches different defect types. Start with equivalence partitioning and BVA for inputs. |
Test case design techniques are systematic approaches for selecting test inputs that find defects efficiently. Instead of testing random values or every possible input, these techniques help you choose the right test cases based on how software typically fails.
The six core techniques covered in this guide:
- Boundary Value Analysis - Tests at range edges where defects cluster
- Equivalence Partitioning - Groups similar inputs to reduce redundant tests
- Decision Table Testing - Handles complex business logic with multiple conditions
- State Transition Testing - Tests systems where past actions affect current behavior
- Use Case Testing - Validates end-to-end user scenarios
- Error Guessing - Applies experience to predict likely failure points
Each technique targets specific defect types. Using them together provides thorough coverage without wasting effort on redundant tests.
Table of Contents
- Why Test Case Design Techniques Matter
- Boundary Value Analysis
- Equivalence Partitioning
- Using BVA and Equivalence Partitioning Together
- Decision Table Testing
- State Transition Testing
- Use Case Testing
- Error Guessing
- Choosing the Right Technique
- Combining Techniques in Practice
- Common Mistakes to Avoid
- Summary and Key Takeaways
- Quiz
- Continue Reading
Why Test Case Design Techniques Matter
Testing every possible input is impossible. A simple form with three fields accepting numbers 1-100 has one million combinations. Add a fourth field and you have 100 million.
Test case design techniques solve this problem by identifying which inputs actually matter. They focus testing effort on:
- Boundaries where comparison logic often fails
- Representatives from groups of similar inputs
- Combinations of conditions that trigger different behavior
- Sequences that expose state-dependent bugs
- Scenarios that reflect real user behavior
- Weak spots based on common defect patterns
The goal is not more tests. The goal is better tests that find more defects with less effort.
Without systematic techniques, testers default to "happy path" testing with obvious values like 50, 100, or typical user data. These values rarely expose defects. Bugs hide at edges, in unusual combinations, and in sequences users stumble into accidentally.
Boundary Value Analysis
Boundary Value Analysis (BVA) targets the edges of input ranges. The principle: defects cluster at boundaries because programmers make off-by-one errors and use wrong comparison operators.
How BVA Works
For any input with a defined range, BVA tests:
- The minimum valid value
- The maximum valid value
- Values just below minimum (invalid)
- Values just above maximum (invalid)
Example: Age Field (Valid Range: 18-65)
| Test Value | Expected Result | Purpose |
|---|---|---|
| 17 | Reject | Just below minimum |
| 18 | Accept | At minimum boundary |
| 65 | Accept | At maximum boundary |
| 66 | Reject | Just above maximum |
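The four boundary tests above can be scripted directly. The sketch below assumes a hypothetical `isValidAge` validator; the function name and implementation are illustrative, not a real API:

```javascript
// Hypothetical validator for the 18-65 age range (illustrative only).
function isValidAge(age) {
  return age >= 18 && age <= 65;
}

// Two-value BVA: each boundary plus one invalid neighbor.
const bvaCases = [
  { value: 17, expected: false }, // just below minimum
  { value: 18, expected: true },  // at minimum boundary
  { value: 65, expected: true },  // at maximum boundary
  { value: 66, expected: false }, // just above maximum
];

for (const { value, expected } of bvaCases) {
  const actual = isValidAge(value);
  console.log(`age ${value}: ${actual ? "accept" : "reject"}`);
}
```

If the validator used `< 65` instead of `<= 65`, only the `age = 65` case would catch it, which is exactly why the boundary value itself must be tested.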
Two-Value vs Three-Value BVA
Two-value BVA tests the boundary and one adjacent value. Four test cases for a simple range.
Three-value BVA adds one more value on each side. Six test cases total, catching defects two-value might miss.
For a 1-100 range:
| Approach | Lower Boundary | Upper Boundary | Total Tests |
|---|---|---|---|
| Two-value | 0, 1 | 100, 101 | 4 |
| Three-value | 0, 1, 2 | 99, 100, 101 | 6 |
Use three-value BVA for critical inputs like financial calculations. Use two-value when time is limited.
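Both variants can be generated mechanically from a range. The helper below is a sketch (its name and return shape are illustrative, not a standard API):

```javascript
// Generate two-value or three-value BVA test sets for an inclusive range.
// (Helper name and return shape are illustrative, not a standard API.)
function bvaValues(min, max, threeValue = false) {
  const lower = threeValue ? [min - 1, min, min + 1] : [min - 1, min];
  const upper = threeValue ? [max - 1, max, max + 1] : [max, max + 1];
  return { lower, upper };
}

console.log(bvaValues(1, 100));       // lower: [0, 1], upper: [100, 101]
console.log(bvaValues(1, 100, true)); // lower: [0, 1, 2], upper: [99, 100, 101]
```

The two calls reproduce the 1-100 table above: four tests for two-value BVA, six for three-value.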
BVA Beyond Numbers
BVA applies to any bounded input:
String length (8-20 characters):
- Test 7, 8, 20, 21 character strings
Dates (2024-01-01 to 2024-12-31):
- Test December 31 2023, January 1 2024, December 31 2024, January 1 2025
File size (max 10 MB):
- Test 10,485,759 bytes, 10,485,760 bytes, 10,485,761 bytes
Array size (max 100 items):
- Test 99, 100, 101 items
Why Boundaries Fail
The most common boundary bugs:
```javascript
// Bug: rejects valid age 65
if (age >= 18 && age < 65) { ... }

// Bug: accepts invalid age 17
if (age > 17 && age <= 65) { ... }

// Correct
if (age >= 18 && age <= 65) { ... }
```

A single character difference (< vs <=) creates a bug visible only at the exact boundary value.
Equivalence Partitioning
Equivalence Partitioning (EP) divides the input domain into groups where all values within a group should produce the same behavior. You then test one representative value from each group instead of testing every value.
How EP Works
- Identify all possible inputs
- Divide them into valid and invalid partitions
- Select one value from each partition
- Test only those values
Example: Age Field (Valid Range: 18-65)
| Partition | Range | Representative | Expected Result |
|---|---|---|---|
| Invalid (too young) | 0-17 | 10 | Reject |
| Valid | 18-65 | 40 | Accept |
| Invalid (too old) | 66+ | 80 | Reject |
Instead of testing ages 0, 1, 2, 3... through 100, test three values. If the system handles age 40 correctly, it should handle 25, 35, 45, 50, and 60 the same way.
Partitioning Rules
Rule 1: Every input belongs to exactly one partition.
Do not create overlapping partitions. "Under 18" and "18-65" do not overlap, so they form valid partitions; "under 18" and "17-65" would overlap at 17, leaving it ambiguous which behavior to expect.
Rule 2: Test at least one value from every partition.
This includes invalid partitions: a system might silently accept an invalid age if, for example, negative numbers are never checked.
Rule 3: All values in a partition should produce identical behavior.
If some values in your partition behave differently, split it into smaller partitions.
Types of Partitions
- Valid partitions: Inputs the system should accept
- Invalid partitions: Inputs the system should reject
Both need testing. Invalid partition tests verify error handling works correctly.
Example: Username Field
| Partition Type | Description | Representative |
|---|---|---|
| Valid | 3-20 alphanumeric characters | "testuser1" |
| Invalid | Empty | "" |
| Invalid | Too short (1-2 chars) | "ab" |
| Invalid | Too long (21+ chars) | "abcdefghijklmnopqrstu" |
| Invalid | Contains spaces | "test user" |
| Invalid | Contains special characters | "test@user" |
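A validator covering these partitions can be sketched as follows. The exact rules (3-20 characters, letters and digits only) are read off the table above; a real system's rules may differ:

```javascript
// Sketch of a username validator matching the partitions in the table
// (rules assumed from the table, not from a real specification).
function isValidUsername(name) {
  return /^[A-Za-z0-9]{3,20}$/.test(name);
}

// One representative per partition, valid and invalid alike.
const partitionCases = [
  { value: "testuser1", expected: true },              // valid
  { value: "", expected: false },                      // empty
  { value: "ab", expected: false },                    // too short
  { value: "abcdefghijklmnopqrstu", expected: false }, // too long (21 chars)
  { value: "test user", expected: false },             // contains space
  { value: "test@user", expected: false },             // special character
];

for (const { value, expected } of partitionCases) {
  console.log(`"${value}" -> ${isValidUsername(value) ? "accept" : "reject"}`);
}
```

Six tests cover six partitions; testing more values from the same partitions would add effort without adding coverage.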
Multi-Input Partitioning
When a system has multiple inputs, partition each input independently, then consider combinations.
Example: Discount Calculator
Input 1: Customer Type
- Partition A: Regular
- Partition B: Premium
- Partition C: VIP
Input 2: Order Amount
- Partition X: $0-$99
- Partition Y: $100-$499
- Partition Z: $500+
Testing all combinations (A-X, A-Y, A-Z, B-X, B-Y, B-Z, C-X, C-Y, C-Z) gives 9 test cases. This covers interaction between inputs that single-input partitioning misses.
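The nine combinations can be enumerated with a simple cross product, which keeps the test list in sync if a partition is ever added:

```javascript
// Enumerate every combination of the two inputs' partitions
// (partition labels taken from the example above).
const customerTypes = ["Regular", "Premium", "VIP"];   // partitions A-C
const amountBands = ["$0-$99", "$100-$499", "$500+"];  // partitions X-Z

const combinations = [];
for (const type of customerTypes) {
  for (const band of amountBands) {
    combinations.push([type, band]);
  }
}

console.log(combinations.length); // 9 test cases
```

Adding a fourth customer type would automatically grow the set to 12 combinations, with no test cases forgotten.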
Using BVA and Equivalence Partitioning Together
BVA and EP complement each other. EP identifies groups; BVA tests the edges between groups.
Combined Approach
- Apply equivalence partitioning to identify partitions
- Select one representative from each partition interior
- Apply BVA to test boundaries between partitions
Example: Quantity Field (Valid: 1-999)
Step 1: EP Partitions
- Invalid: 0 and below
- Valid: 1-999
- Invalid: 1000 and above
Step 2: EP Representatives
- Invalid low: -5
- Valid: 500
- Invalid high: 2000
Step 3: BVA Values
- Lower boundary: 0, 1
- Upper boundary: 999, 1000
Combined Test Set:
| Value | From Technique | Tests |
|---|---|---|
| -5 | EP | Negative number handling |
| 0 | BVA | At invalid/valid boundary |
| 1 | BVA | Minimum valid |
| 500 | EP | Middle of valid partition |
| 999 | BVA | Maximum valid |
| 1000 | BVA | At valid/invalid boundary |
| 2000 | EP | Far above maximum |
Seven tests cover both partition behavior and boundary handling.
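The combined test set translates directly into a table-driven test. The `isValidQuantity` function below is a hypothetical stand-in for the system under test:

```javascript
// Hypothetical quantity validator for the 1-999 range (illustrative only).
function isValidQuantity(qty) {
  return Number.isInteger(qty) && qty >= 1 && qty <= 999;
}

// The combined EP + BVA test set from the table above.
const testSet = [
  { value: -5,   expected: false }, // EP: negative number handling
  { value: 0,    expected: false }, // BVA: at invalid/valid boundary
  { value: 1,    expected: true },  // BVA: minimum valid
  { value: 500,  expected: true },  // EP: middle of valid partition
  { value: 999,  expected: true },  // BVA: maximum valid
  { value: 1000, expected: false }, // BVA: at valid/invalid boundary
  { value: 2000, expected: false }, // EP: far above maximum
];

for (const { value, expected } of testSet) {
  console.log(`${value}: ${isValidQuantity(value) === expected ? "PASS" : "FAIL"}`);
}
```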
Decision Table Testing
Decision table testing handles complex business logic where multiple conditions combine to determine outcomes. When requirements say "if A and B, then X; if A and not B, then Y," a decision table captures all combinations systematically.
When to Use Decision Tables
- Business rules with multiple conditions
- Complex validation logic
- Pricing with conditional discounts
- Access control based on multiple factors
- Insurance premium calculations
- Loan approval logic
Building a Decision Table
Step 1: Identify conditions (inputs)
List all conditions that affect the outcome. Each condition is typically true or false.
Step 2: Identify actions (outputs)
List all possible system responses or outcomes.
Step 3: Calculate combinations
For N true/false conditions, there are 2^N combinations. Three conditions = 8 rules.
Step 4: Fill in all combinations
Create a column for each combination. Mark which actions apply.
Example: Free Shipping Logic
Business Rules:
- Orders over $50 get free shipping
- Premium members always get free shipping
- Sale items have a $5 flat shipping fee regardless of other factors
Conditions:
- Order over $50? (Y/N)
- Premium member? (Y/N)
- Contains sale items? (Y/N)
Actions:
- Free shipping
- Standard shipping rate
- Flat $5 fee
Decision Table:
| Rule | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| Order > $50 | Y | Y | Y | Y | N | N | N | N |
| Premium member | Y | Y | N | N | Y | Y | N | N |
| Sale items | Y | N | Y | N | Y | N | Y | N |
| Actions | | | | | | | | |
| Free shipping | - | X | - | X | - | X | - | - |
| Standard rate | - | - | - | - | - | - | - | X |
| Flat $5 | X | - | X | - | X | - | X | - |
When sale items are present, the flat $5 fee applies regardless of other conditions. This table reveals that Rule 8 (non-premium, under $50, no sale items) is the only case paying standard shipping.
Simplifying Decision Tables
Tables with many conditions become large. Simplify by:
Collapsing rules with same actions:
If Rules 1, 3, 5, 7 all have the same action and only differ in conditions that do not matter, combine them into one rule with a dash (-) for irrelevant conditions.
Simplified table:
| Rule | 1 | 2 | 3 | 4 |
|---|---|---|---|---|
| Order > $50 | - | Y | - | N |
| Premium member | - | - | Y | N |
| Sale items | Y | N | N | N |
| Action | Flat $5 | Free shipping | Free shipping | Standard rate |
Creating Test Cases from Decision Tables
Each rule becomes at least one test case:
| Test | Order Amount | Membership | Items | Expected Shipping |
|---|---|---|---|---|
| 1 | $75 | Premium | Sale item | $5 flat |
| 2 | $75 | Premium | Regular item | Free |
| 3 | $75 | Regular | Sale item | $5 flat |
| 4 | $75 | Regular | Regular item | Free |
| 5 | $30 | Premium | Sale item | $5 flat |
| 6 | $30 | Premium | Regular item | Free |
| 7 | $30 | Regular | Sale item | $5 flat |
| 8 | $30 | Regular | Regular item | Standard rate |
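The business rules behind this table can be sketched as a function, with the eight rules becoming a table-driven test. The function name and parameters are illustrative; the logic follows the stated rules (sale items take priority, then the free-shipping conditions):

```javascript
// Sketch of the free-shipping rules from the decision table above
// (function name and parameters are illustrative).
function shippingFee(orderTotal, isPremium, hasSaleItems, standardRate) {
  if (hasSaleItems) return 5;                  // flat $5 fee overrides everything
  if (orderTotal > 50 || isPremium) return 0;  // free shipping
  return standardRate;                         // standard rate otherwise
}

// One test per rule; 8.99 is an assumed standard rate for illustration.
const rules = [
  { total: 75, premium: true,  sale: true,  fee: 5 },    // Rule 1
  { total: 75, premium: true,  sale: false, fee: 0 },    // Rule 2
  { total: 75, premium: false, sale: true,  fee: 5 },    // Rule 3
  { total: 75, premium: false, sale: false, fee: 0 },    // Rule 4
  { total: 30, premium: true,  sale: true,  fee: 5 },    // Rule 5
  { total: 30, premium: true,  sale: false, fee: 0 },    // Rule 6
  { total: 30, premium: false, sale: true,  fee: 5 },    // Rule 7
  { total: 30, premium: false, sale: false, fee: 8.99 }, // Rule 8
];

for (const r of rules) {
  console.log(`$${r.total}: fee ${shippingFee(r.total, r.premium, r.sale, 8.99)}`);
}
```

Writing the rules out like this often exposes ambiguities the prose requirements left open, such as which condition wins when several apply.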
State Transition Testing
State transition testing applies when system behavior depends on previous actions. The same input can produce different outputs depending on what state the system is in.
When to Use State Transition Testing
- User authentication flows (login, logout, locked accounts)
- Shopping cart states (empty, has items, checkout)
- Order status workflows (pending, confirmed, shipped, delivered)
- Document approval processes
- Subscription lifecycle (trial, active, expired, cancelled)
- Media player states (stopped, playing, paused)
State Transition Diagrams
A state transition diagram shows:
- States: Circles representing system conditions
- Transitions: Arrows showing state changes
- Events: Labels on arrows showing what triggers transitions
- Guards: Conditions that must be true for transition to occur
Example: Login System
States:
- Logged Out
- Logged In
- Locked (after 3 failed attempts)
Events:
- Valid login
- Invalid login
- Logout
- Wait 30 minutes (unlocks account)
State Transition Table:
| Current State | Event | Next State | Action |
|---|---|---|---|
| Logged Out | Valid login | Logged In | Grant access, reset attempts |
| Logged Out | Invalid login (attempts < 3) | Logged Out | Increment attempts, show error |
| Logged Out | Invalid login (attempts = 3) | Locked | Lock account, show lockout message |
| Logged In | Logout | Logged Out | End session |
| Logged In | Session timeout | Logged Out | End session, show timeout message |
| Locked | Any login attempt | Locked | Show "account locked" |
| Locked | 30 minutes pass | Logged Out | Unlock, reset attempts |
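The transition table maps onto a small state machine. The sketch below follows the states and events in the table; everything else (names, the factory shape, modeling the 30-minute wait as a method call) is an illustrative assumption:

```javascript
// Minimal sketch of the login state machine from the table above.
// States and events follow the table; the implementation is illustrative.
function createLoginMachine(maxAttempts = 3) {
  let state = "LoggedOut";
  let attempts = 0;

  return {
    get state() { return state; },
    login(valid) {
      if (state === "Locked") return state;  // any attempt stays Locked
      if (state !== "LoggedOut") return state;
      if (valid) {
        state = "LoggedIn";
        attempts = 0;                        // grant access, reset attempts
      } else if (++attempts >= maxAttempts) {
        state = "Locked";                    // lock after 3 failed attempts
      }
      return state;
    },
    logout() {
      if (state === "LoggedIn") state = "LoggedOut";  // end session
      return state;
    },
    unlockAfterWait() {                      // models "30 minutes pass"
      if (state === "Locked") {
        state = "LoggedOut";
        attempts = 0;                        // unlock, reset attempts
      }
      return state;
    },
  };
}
```

A test driver then walks the machine through each row of the transition table and checks the resulting state, which gives all-transitions coverage.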
Test Cases from State Transitions
Coverage levels:
- All states: Ensure every state is reached at least once
- All transitions: Test every arrow in the diagram
- All transition pairs: Test sequences of two transitions
- Invalid transitions: Attempt transitions that should not be possible
Test cases for login system:
| Test | Start State | Action | Expected State |
|---|---|---|---|
| 1 | Logged Out | Valid credentials | Logged In |
| 2 | Logged Out | Invalid credentials x1 | Logged Out (1 attempt) |
| 3 | Logged Out | Invalid credentials x3 | Locked |
| 4 | Logged In | Click logout | Logged Out |
| 5 | Locked | Valid credentials | Locked (still) |
| 6 | Locked | Wait 30 min | Logged Out |
| 7 | Logged In | Wait for timeout | Logged Out |
Switch Coverage (N-Switch Testing)
Switch coverage tests sequences of transitions:
- 0-switch: Each transition tested once (basic coverage)
- 1-switch: Each pair of consecutive transitions tested
- 2-switch: Each sequence of three transitions tested
Higher switch coverage finds bugs in state history handling but requires more tests.
1-switch test example:
- Logged Out -> Invalid login -> Logged Out -> Valid login -> Logged In
This tests what happens after a failed attempt followed by success.
Use Case Testing
Use case testing validates complete user scenarios from start to finish. Instead of testing individual functions, you test the paths users take to accomplish goals.
Anatomy of a Use Case
- Actor: Who performs the action (user, admin, external system)
- Preconditions: What must be true before starting
- Main flow: The happy path steps
- Alternative flows: Valid variations from the main flow
- Exception flows: Error handling paths
- Postconditions: System state after completion
Example: Online Purchase Use Case
Use Case: Purchase Item
Actor: Registered customer
Preconditions: Customer is logged in, item is in stock
Main Flow:
- Customer adds item to cart
- Customer proceeds to checkout
- System displays order summary
- Customer selects shipping address
- Customer selects payment method
- Customer confirms order
- System processes payment
- System confirms order and sends email
Alternative Flows:
A1. New shipping address (at step 4)
- Customer clicks "Add new address"
- Customer enters address details
- System validates address
- Continue at step 5
A2. Saved payment method (at step 5)
- Customer selects saved card
- Continue at step 6
Exception Flows:
E1. Payment declined (at step 7)
- System shows error message
- Customer tries different payment method
- Continue at step 5
E2. Item goes out of stock (at step 7)
- System shows out of stock message
- System removes item from order
- Return to step 3
Test Cases from Use Cases
Main flow test: Test the complete happy path with all default options.
Alternative flow tests: Test each alternative flow by following its path.
Exception flow tests: Force each exception condition and verify handling.
Combined path tests: Test realistic combinations of alternatives and exceptions.
| Test | Path | Description |
|---|---|---|
| 1 | Main | Standard purchase with existing address and card |
| 2 | Main + A1 | Purchase with new shipping address |
| 3 | Main + A2 | Purchase with saved payment method |
| 4 | Main + E1 | Payment declined, retry with different card |
| 5 | Main + E2 | Item out of stock during checkout |
| 6 | Main + A1 + E1 | New address, payment declined |
Benefits of Use Case Testing
- Tests from user perspective, not technical perspective
- Catches integration issues between components
- Validates that features work together as intended
- Reveals gaps in requirements (what happens when...?)
- Creates tests stakeholders can understand
Error Guessing
Error guessing uses tester experience and intuition to identify likely failure points. Unlike systematic techniques, it relies on knowledge of common bug patterns, past defects, and how software typically fails.
When to Use Error Guessing
- After applying systematic techniques to find additional edge cases
- When testing areas with history of defects
- When time is limited and you need high-value tests
- For exploratory testing sessions
- When testing new or unfamiliar functionality
Common Error Categories
Input-related errors:
- Empty or null values
- Extremely long inputs
- Special characters (quotes, slashes, Unicode)
- Leading/trailing spaces
- SQL injection patterns
- Script injection (XSS)
- Negative numbers where not expected
- Zero as a divisor
Timing and sequence errors:
- Double-clicking submit buttons
- Back button after form submission
- Refreshing during processing
- Concurrent modifications
- Session timeout during action
- Network interruption mid-operation
State-related errors:
- Operating on deleted records
- Actions on expired sessions
- Using stale cached data
- Race conditions with multiple users
Environment errors:
- Different browsers
- Different screen sizes
- Slow network connections
- Low disk space
- Different time zones
Building an Error Guessing Checklist
Create a checklist based on your experience:
Form Inputs:
- Empty required fields
- Only spaces in text fields
- Maximum length + 1 character
- Copy-paste formatted text from Word
- Email without @ symbol
- Dates in wrong format
- Future dates where not allowed
- Negative quantities
User Actions:
- Double-click submit
- Click submit then navigate away
- Use browser back button after action
- Open same form in two tabs
- Leave form idle until session expires
Data Conditions:
- First user/record in system
- Empty database tables
- Maximum records in list
- Special characters in names (O'Brien, comma, quotes)
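A checklist like this can be turned into a reusable battery of hostile inputs. The validator below is a hypothetical stand-in; the point is that every input gets a clean accept/reject answer instead of a crash:

```javascript
// Hypothetical name validator (illustrative stand-in for the system
// under test): non-empty after trimming, at most 50 characters.
function isValidName(name) {
  if (typeof name !== "string") return false;
  const trimmed = name.trim();
  return trimmed.length >= 1 && trimmed.length <= 50;
}

// Error-guessing battery drawn from the checklist above.
const hostileInputs = [
  "",              // empty required field
  "   ",           // only spaces
  "a".repeat(51),  // maximum length + 1
  "O'Brien",       // apostrophe in name
  'He said "hi"',  // embedded quotes
  null,            // missing value entirely
];

// The assertion is not what each input returns, but that the validator
// answers true/false for every one without throwing.
for (const input of hostileInputs) {
  console.log(JSON.stringify(input), "->", isValidName(input));
}
```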
Error Guessing vs Random Testing
Error guessing is not random testing. Random testing picks arbitrary values without logic. Error guessing deliberately targets values likely to cause problems based on experience.
- Random: Test age = 47
- Error guessing: Test age = 0, age = -1, age = 999999999
Random testing might find bugs by chance. Error guessing systematically targets known weak spots.
Choosing the Right Technique
Different techniques excel in different situations. Here is when to use each:
| Situation | Best Technique |
|---|---|
| Numeric input with defined range | BVA + Equivalence Partitioning |
| Multiple input fields | Equivalence Partitioning |
| Complex business rules with conditions | Decision Table |
| Workflow or status-based systems | State Transition |
| End-to-end user scenarios | Use Case Testing |
| Supplementing systematic testing | Error Guessing |
| Time-limited testing | Error Guessing + BVA |
Technique Selection by Input Type
- Single field, numeric: BVA + EP
- Single field, text: EP for categories, BVA for length
- Multiple fields, independent: EP per field
- Multiple fields, dependent: Decision Table
- Sequential operations: State Transition
- User journeys: Use Case Testing
Coverage Goals
| Goal | Techniques to Use |
|---|---|
| Input validation coverage | BVA, EP |
| Business logic coverage | Decision Table |
| Workflow coverage | State Transition |
| User scenario coverage | Use Case |
| Edge case coverage | Error Guessing |
| Complete coverage | All techniques combined |
Combining Techniques in Practice
Real testing combines multiple techniques. Here is a practical approach:
Step-by-Step Process
Step 1: Start with Use Cases
Identify the main user scenarios. This gives you end-to-end tests and reveals what needs detailed testing.
Step 2: Apply EP to Inputs
For each input in your use cases, identify equivalence partitions. This ensures you test valid and invalid categories.
Step 3: Add BVA for Bounded Inputs
For inputs with ranges (numbers, dates, string lengths), add boundary values.
Step 4: Build Decision Tables for Complex Rules
When multiple conditions affect outcomes, create decision tables to ensure all combinations are covered.
Step 5: Analyze State-Dependent Behavior
If the system has workflows or statuses, map states and transitions. Add tests for each transition.
Step 6: Apply Error Guessing
Review your test set and add likely error scenarios based on experience.
Example: Testing a Checkout Process
Use Case: Customer completes checkout
EP + BVA for Cart Quantity:
- Empty cart (invalid)
- 1 item (minimum valid)
- 99 items (max per order)
- 100 items (invalid)
Decision Table for Shipping:
- Conditions: Order total > $50, Premium member, Express selected
- Actions: Free standard, Paid standard, Free express, Paid express
State Transitions:
- Empty -> Has Items -> Checkout -> Payment -> Confirmed
- Also: Checkout -> Empty (cancel), Payment -> Checkout (payment failed)
Error Guessing Additions:
- Add item, remove item, add same item (duplicate handling)
- Change quantity to 0 vs clicking remove
- Checkout with expired promo code
- Submit order twice rapidly
Combined, these techniques create thorough coverage of the checkout process.
Common Mistakes to Avoid
Mistake 1: Using Only One Technique
Each technique catches different defect types. Using only BVA misses business rule defects. Using only use cases misses boundary bugs.
Fix: Combine techniques appropriate to what you are testing.
Mistake 2: Not Testing Invalid Partitions
Testers focus on valid inputs. But error handling has bugs too. Invalid input tests verify the system fails gracefully.
Fix: Include invalid equivalence partitions in every test set.
Mistake 3: Testing Boundaries but Not the Boundary
Testing 99 and 101 but not 100. The boundary value itself is where < vs <= bugs appear.
Fix: Always test the boundary value, not just adjacent values.
Mistake 4: Ignoring State Dependencies
Testing features in isolation when they depend on previous actions. A function that works after login might fail for logged-out users.
Fix: Test features in realistic sequences, not just isolation.
Mistake 5: Over-Relying on Error Guessing
Error guessing is valuable but not systematic. Important boundaries and combinations get missed.
Fix: Use error guessing to supplement systematic techniques, not replace them.
Mistake 6: Creating Too Many Tests
Decision tables with 10 conditions create 1024 combinations. Not all need testing.
Fix: Focus on realistic combinations and high-risk rules. Use risk-based prioritization.
Summary and Key Takeaways
Test case design techniques transform testing from random to systematic. Each technique targets specific defect types:
Boundary Value Analysis:
- Tests at range edges where off-by-one errors hide
- Apply to any bounded input: numbers, strings, dates, arrays
- Combine with EP for complete input coverage
Equivalence Partitioning:
- Groups similar inputs to reduce redundant tests
- Always include invalid partitions
- Foundation for all other input-based testing
Decision Table Testing:
- Handles complex conditions systematically
- Ensures all rule combinations are tested
- Best for business logic with multiple factors
State Transition Testing:
- Tests behavior that depends on history
- Covers workflows, statuses, and session states
- Use switch coverage for thorough sequence testing
Use Case Testing:
- Validates complete user journeys
- Tests integration between components
- Catches issues systematic techniques miss
Error Guessing:
- Applies experience to find likely bugs
- Supplements systematic approaches
- Targets common failure patterns
Best Practices:
- Combine techniques based on what you are testing
- Start with use cases to understand scope
- Apply EP and BVA to all inputs systematically
- Use decision tables for conditional logic
- Map states when behavior depends on history
- Add error guessing tests based on experience
- Include invalid inputs in every test set
- Test boundaries, not just adjacent values
- Consider realistic combinations, not all combinations
Mastering these techniques improves both defect detection and testing efficiency. Start with equivalence partitioning and boundary value analysis. Add decision tables and state transition testing as needed. Use error guessing throughout. The goal is finding defects before users do.
Quiz on test case design techniques
Question: A form field accepts quantities from 1 to 99. Using two-value boundary value analysis, which test values should you use?
Continue Reading
- The Software Testing Lifecycle: An Overview. Dive into the crucial phase of Test Requirement Analysis in the Software Testing Lifecycle, understanding its purpose, activities, deliverables, and best practices to ensure a successful software testing process.
- How to Master Test Requirement Analysis? Learn how to master requirement analysis, an essential part of the Software Test Life Cycle (STLC), and improve the efficiency of your software testing process.
- Test Planning. Dive into the world of Kanban with this comprehensive introduction, covering its principles, benefits, and applications in various industries.
- Test Design. Learn the essential steps in the test design phase of the software testing lifecycle, its deliverables, entry and exit criteria, and effective tips for successful test design.
- Test Execution. Learn about the steps, deliverables, entry and exit criteria, risks and schedules in the Test Execution phase of the Software Testing Lifecycle, and tips for performing this phase effectively.
- Test Analysis Phase. Discover the steps, deliverables, entry and exit criteria, risks and schedules in the Test Analysis phase of the Software Testing Lifecycle, and tips for performing this phase effectively.
- Test Reporting Phase. Learn the essential steps, deliverables, entry and exit criteria, risks, schedules, and tips for effective Test Reporting in the Software Testing Lifecycle to improve application quality and testing processes.
- Fixing Phase. Explore the crucial steps, deliverables, entry and exit criteria, risks, schedules, and tips for effective Fixing in the Software Testing Lifecycle to boost application quality and streamline the testing process.
- Test Closure Phase. Discover the steps, deliverables, entry and exit criteria, risks, schedules, and tips for performing an effective Test Closure phase in the Software Testing Lifecycle, ensuring a successful and streamlined testing process.