
Error Guessing: Experience-Based Testing Technique
| Question | Quick Answer |
|---|---|
| What is error guessing? | A testing technique where testers use experience and intuition to predict where defects are likely to occur in software. |
| When should I use it? | After formal test design techniques. Use it to supplement systematic testing, not replace it. |
| What makes it effective? | Knowledge of common programming mistakes, past defect patterns, and understanding of error-prone areas. |
| Is it ad-hoc testing? | No. Error guessing is targeted and intentional, based on specific knowledge. Ad-hoc testing is random exploration. |
| How do I get better at it? | Study past defects, maintain fault taxonomies, learn common coding errors, and review production bug reports. |
Error guessing is an experience-based testing technique where testers anticipate defects based on their knowledge of common mistakes, past bugs, and typical failure modes. Unlike formal techniques that follow defined rules, error guessing relies on the tester's intuition, domain expertise, and understanding of how software typically fails.
This technique works because experienced testers recognize patterns. They know that null values cause crashes, that date calculations break at month boundaries, and that concurrent operations create race conditions. Error guessing channels this knowledge into targeted testing.
This guide covers practical error guessing implementation, common error categories, building fault taxonomies, and integrating experience-based testing into your test strategy.
Table of Contents
- What is Error Guessing
- Why Error Guessing Works
- Common Error Categories
- Building a Fault Taxonomy
- Error Guessing Techniques
- Step-by-Step Implementation
- Practical Examples
- Error Guessing vs Other Techniques
- Documenting Error Guesses
- Common Mistakes to Avoid
- Building Error Guessing Skills
- Summary and Key Takeaways
- Quiz
- Continue Reading
What is Error Guessing
Error guessing is a test design technique defined in the ISTQB syllabus as using experience to anticipate what defects might be present in the component or system being tested. Testers design tests specifically to expose these anticipated defects.
The technique works through a simple process:
- Identify potential errors based on experience
- Create test cases targeting those specific errors
- Execute tests to determine if the anticipated defects exist
Consider a login form. An experienced tester immediately thinks of:
- Empty username and password fields
- Username with only spaces
- Special characters in the password
- SQL injection attempts in the username field
- Extremely long inputs
- Case sensitivity in username
These are not random guesses. Each comes from knowledge of how login systems typically fail and what programmers commonly overlook.
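Such guesses can be captured as executable checks. Below is a minimal sketch using pytest; the `login` stub and its validation rules are assumptions standing in for the real system under test.

```python
import pytest

def login(username: str, password: str) -> bool:
    """Stand-in for the real system under test (assumed interface).
    In practice you would import your application's login function."""
    if not username.strip() or not password:
        raise ValueError("username and password are required")
    return username == "alice" and password == "s3cret"

# Each case encodes one anticipated error from the list above.
ERROR_GUESSES = [
    ("", "s3cret"),         # empty username
    ("alice", ""),          # empty password
    ("   ", "s3cret"),      # spaces-only username
    ("' OR '1'='1", "x"),   # SQL injection attempt
    ("a" * 10_000, "x"),    # extremely long input
]

@pytest.mark.parametrize("username,password", ERROR_GUESSES)
def test_login_rejects_suspect_input(username, password):
    # Every anticipated-error input should be rejected cleanly: either an
    # explicit validation error or a failed login, never a crash or bypass.
    try:
        assert login(username, password) is False
    except ValueError:
        pass  # explicit rejection is also acceptable
```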
Formal Definition
The IEEE 610.12 standard describes error guessing as a test case design technique where the experience of the tester is used to postulate what faults might occur and to design tests to expose them.
Key characteristics:
- Experience-driven: Based on what testers have learned from past projects
- Targeted: Tests are designed for specific anticipated defects
- Supplementary: Used alongside formal techniques, not instead of them
- Knowledge-based: Improves as testers accumulate more defect knowledge
What Error Guessing Is Not
Error guessing is sometimes confused with random or ad-hoc testing. The distinction matters:
| Error Guessing | Ad-hoc Testing |
|---|---|
| Based on specific anticipated errors | Random exploration without specific targets |
| Tests designed before execution | Tests created during execution |
| Can be documented and repeated | Difficult to reproduce |
| Targets known error patterns | May or may not find errors |
| Requires experience to be effective | Anyone can perform it |
Why Error Guessing Works
Error guessing is effective because software defects follow patterns. The same types of bugs appear repeatedly across different projects, teams, and technologies.
Defects Are Not Random
Research into software defects consistently shows that certain error types dominate:
- Null pointer references: Code assumes a value exists when it does not
- Off-by-one errors: Loop boundaries and array indices calculated incorrectly
- Missing error handling: Code fails to handle unexpected inputs or states
- Concurrency issues: Race conditions when multiple operations access shared resources
- Boundary violations: Input validation fails at edge cases
These patterns are universal. A tester who has seen null pointer exceptions in Java will anticipate similar issues in C#, Python, or any language with nullable references.
Experience Creates Pattern Recognition
Experienced testers develop mental catalogs of common failures:
"Every time I test a date picker, I check February 29 and year boundaries. I have found bugs at these points on at least five different projects."
This pattern recognition is not guesswork. It is applied knowledge from repeated observations. The tester is not randomly trying dates; they are targeting specific points where date handling commonly fails.
Error Guessing Finds What Formal Techniques Miss
Formal techniques like boundary value analysis and equivalence partitioning are systematic but limited. They test based on specified requirements.
Error guessing catches:
- Implementation-specific bugs: Errors in how requirements were coded
- Missing requirements: Features that should exist but are not documented
- Integration failures: Problems that emerge when components interact
- Unusual usage patterns: Ways users actually interact with software
Common Error Categories
Experienced testers maintain mental or written lists of common error categories. These categories guide error guessing across different systems.
Input Handling Errors
Input validation is one of the most error-prone areas in software.
Empty and null inputs:
- Empty strings where text is expected
- Null values passed to functions
- Missing required fields in forms
- Empty arrays or collections
Special characters:
- Single and double quotes in text fields
- Backslashes and forward slashes
- Control characters (tabs, newlines)
- Unicode characters outside ASCII range
- Emoji in text inputs
Numeric edge cases:
- Zero values in calculations
- Negative numbers where only positive expected
- Very large numbers approaching system limits
- Floating-point precision issues (0.1 + 0.2; see the sketch after this list)
- Division by zero scenarios
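Two of these numeric guesses are easy to demonstrate directly. A minimal sketch of the floating-point and division-by-zero cases; the `unit_price` helper is hypothetical:

```python
from decimal import Decimal

# Floating-point precision: binary floats cannot represent 0.1 exactly,
# a classic source of off-by-a-cent bugs in money calculations.
assert 0.1 + 0.2 != 0.3                                   # sum is 0.30000000000000004
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")  # exact with Decimal

# Division by zero: any user-controlled denominator is a candidate test.
def unit_price(total: float, quantity: float) -> float:
    if quantity == 0:
        raise ValueError("quantity must be non-zero")  # guard the guessed fault
    return total / quantity
```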
String length extremes:
- Single character inputs
- Maximum length inputs
- Inputs exceeding maximum length
- Strings with leading or trailing spaces
Boundary and Transition Errors
Systems often fail at transition points:
Date and time boundaries:
- Midnight rollover (23:59:59 to 00:00:00)
- Month boundaries (January 31, February 28/29)
- Year transitions (December 31 to January 1)
- Leap year calculations (see the sketch after this list)
- Time zone transitions
- Daylight saving time changes
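The leap-year guesses in particular translate directly into checks. A short sketch using only Python's standard library:

```python
import calendar
from datetime import date, timedelta

def test_february_29_handling():
    # 2024 is a leap year; adding one day to Feb 28 must land on Feb 29,
    # not Mar 1. Naive "February always has 28 days" logic fails here.
    assert date(2024, 2, 28) + timedelta(days=1) == date(2024, 2, 29)

def test_century_leap_year_rule():
    # Common bug: code checks "divisible by 4" only. 1900 is NOT a leap
    # year (divisible by 100 but not 400); 2000 IS one.
    assert not calendar.isleap(1900)
    assert calendar.isleap(2000)
```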
Numeric boundaries:
- Zero crossings (positive to negative)
- Integer overflow and underflow
- Decimal precision limits
- Currency rounding at small amounts
State transitions:
- First and last items in a list
- Empty state to single item
- Maximum capacity reached
- State machine invalid transitions
State and Sequence Errors
Software behavior depends on state, and state management is complex:
Order-dependent operations:
- Operations performed out of expected sequence
- Repeated operations (double-click, double-submit; see the sketch after this list)
- Operations interrupted and resumed
- Concurrent operations on the same data
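The double-submit race can be sketched with a deliberately unguarded stand-in for a real submit operation (all names here are assumptions). Because the outcome depends on thread timing, the duplicate may not appear on every run:

```python
import threading

orders: list[str] = []  # stand-in order store

def submit_order(order_id: str) -> None:
    # Check-then-act with no lock: the classic double-submit race window.
    if order_id not in orders:
        orders.append(order_id)

# Mimic a double-click: the same submission from two threads at once.
threads = [threading.Thread(target=submit_order, args=("ORD-1",))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Without locking this can intermittently produce a duplicate order,
# which is exactly the error guess "rapid repeated actions create duplicates".
print(orders)  # usually ['ORD-1'], occasionally ['ORD-1', 'ORD-1']
```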
Session and context:
- Session expiration during long operations
- Context loss after navigation
- Back button behavior
- Browser refresh during operations
Data persistence:
- Data not saved after changes
- Cached data overwriting new data
- Database transaction failures
- Partial saves leaving inconsistent data
Integration and Interface Errors
Where systems connect, errors occur:
API communication:
- Network timeouts
- Connection failures
- Malformed requests and responses
- Authentication token expiration
- Rate limiting behavior
Data format issues:
- Character encoding mismatches (UTF-8 vs ASCII)
- Date format differences (MM/DD vs DD/MM; see the sketch after this list)
- Numeric format variations (1,000.00 vs 1.000,00)
- Null vs empty string interpretation
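The date-format ambiguity is easy to reproduce with a minimal sketch:

```python
from datetime import datetime

# "01/02/2024" is January 2 in MM/DD systems but February 1 in DD/MM
# systems. When two integrating components assume different formats,
# the data silently shifts by up to eleven months.
raw = "01/02/2024"
us_style = datetime.strptime(raw, "%m/%d/%Y")  # January 2, 2024
eu_style = datetime.strptime(raw, "%d/%m/%Y")  # February 1, 2024
assert us_style != eu_style                    # same bytes, two different dates
```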
External system failures:
- Third-party service unavailable
- Dependency returning unexpected data
- Version incompatibilities
- Certificate expiration
Resource and Performance Errors
Systems fail under resource pressure:
Memory issues:
- Memory leaks over time
- Large data sets causing out-of-memory
- Unbounded cache growth
- Memory fragmentation
Storage problems:
- Disk full scenarios
- File system permission errors
- Path length limits
- Special characters in file names
Concurrency failures:
- Race conditions under load
- Deadlocks between competing operations
- Resource starvation
- Connection pool exhaustion
Building a Fault Taxonomy
A fault taxonomy is a structured classification of defect types. Building one for your domain improves error guessing effectiveness.
What is a Fault Taxonomy
A fault taxonomy organizes defect types into categories with specific examples. It serves as a reference when designing error guessing tests.
Structure of a taxonomy entry:
Category: Input Validation
Subcategory: Numeric Inputs
Error Type: Division by zero
Description: Application crashes or returns incorrect result
when user input causes division by zero
Test Approach: Input zero in any denominator field,
set quantities or rates to zero
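If your taxonomy lives alongside test code, an entry can be modeled directly. A minimal sketch with assumed field names:

```python
from dataclasses import dataclass

@dataclass
class TaxonomyEntry:
    """One fault-taxonomy record, mirroring the structure shown above.
    Field names are assumptions; adapt them to your own catalog."""
    category: str
    subcategory: str
    error_type: str
    description: str
    test_approach: str
    times_found: int = 0  # track detection rate to prioritize entries

entry = TaxonomyEntry(
    category="Input Validation",
    subcategory="Numeric Inputs",
    error_type="Division by zero",
    description="Application crashes or returns an incorrect result "
                "when user input causes division by zero",
    test_approach="Input zero in any denominator field; "
                  "set quantities or rates to zero",
)
```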
Creating Your Fault Taxonomy
Step 1: Gather Historical Defects
Collect bug reports from:
- Your project's defect tracking system
- Previous projects you have worked on
- Public bug databases for similar software
- Security vulnerability databases (CVE)
Step 2: Categorize Defects
Group defects by:
- Root cause (what went wrong in the code)
- Trigger condition (what input or action exposed it)
- Impact area (what functionality was affected)
- Detection method (how it was found)
Step 3: Create Category Structure
Organize into a hierarchy:
Fault Taxonomy
├── Input Handling
│ ├── Validation Failures
│ ├── Encoding Issues
│ └── Format Parsing Errors
├── Business Logic
│ ├── Calculation Errors
│ ├── State Machine Violations
│ └── Rule Processing Failures
├── Data Management
│ ├── Persistence Errors
│ ├── Cache Inconsistencies
│ └── Transaction Failures
├── Integration
│ ├── API Communication
│ ├── File System Operations
│ └── Database Operations
└── Security
  ├── Authentication Bypasses
  ├── Authorization Failures
  └── Data Exposure
Step 4: Document Test Approaches
For each category, document how to test for it:
| Category | Common Triggers | Test Approach |
|---|---|---|
| Null references | Missing data, optional fields | Leave fields empty, clear required values |
| Off-by-one | Lists, arrays, loops | Test first item, last item, empty list |
| Race conditions | Concurrent users, parallel operations | Rapid repeated actions, multiple sessions |
| Encoding issues | Non-ASCII input, copy-paste | Use special characters, Unicode, emoji |
Maintaining Your Taxonomy
Keep your fault taxonomy current:
- Add new categories when you find defect types not covered
- Remove obsolete entries when technologies change
- Update test approaches based on what finds bugs
- Track detection rates to prioritize high-value categories
Error Guessing Techniques
Several specific techniques fall under the error guessing umbrella.
Fault Attack
Fault attack is a systematic approach to error guessing where testers intentionally attack the software with inputs known to cause failures.
Process:
- Select a fault category from your taxonomy
- Identify all inputs that could trigger this fault type
- Create test cases for each input and fault combination
- Execute tests and document results
Example - SQL Injection Fault Attack:
Target: Login form username field
| Test | Input | Expected Result |
|---|---|---|
| FA-01 | ' OR '1'='1 | Rejected, no authentication bypass |
| FA-02 | '; DROP TABLE users;-- | Rejected, no database modification |
| FA-03 | admin'-- | Rejected, no comment bypass |
| FA-04 | " OR "1"="1 | Rejected, handles double quotes |
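A fault attack like this is naturally expressed as a loop over known-bad payloads from your taxonomy. A self-contained sketch; `attempt_login` is a stand-in for your application's real login call:

```python
SQL_INJECTION_PAYLOADS = [
    "' OR '1'='1",
    "'; DROP TABLE users;--",
    "admin'--",
    '" OR "1"="1',
]

def attempt_login(username: str, password: str) -> bool:
    # Stand-in implementation so the sketch is self-contained; in practice
    # this would call your application's login endpoint.
    return username == "admin" and password == "correct-password"

def test_login_resists_sql_injection_payloads():
    for payload in SQL_INJECTION_PAYLOADS:
        # No payload should ever authenticate.
        assert attempt_login(payload, "anything") is False
```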
Defect Seeding
Defect seeding involves intentionally introducing known defects into code, then testing to see if they are found. This technique evaluates test effectiveness.
Applications:
- Training testers in error guessing
- Measuring test coverage of error types
- Validating fault taxonomies
Error Checklist Testing
Create checklists of common errors for specific feature types:
Login Form Checklist:
- Empty username submitted
- Empty password submitted
- Spaces-only username
- Maximum length username
- Special characters in password
- SQL injection in username
- XSS payload in fields
- Rapid repeated login attempts
- Login with expired session cookie
- Case sensitivity tested
File Upload Checklist:
- Empty file (0 bytes)
- File exceeding size limit
- File at exact size limit
- Wrong file type with valid extension
- Executable disguised as document
- File with no extension
- File with double extension (.pdf.exe)
- Path traversal in filename (../../../; see the sketch after this list)
- Very long filename
- Filename with special characters
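Several of these checklist items target filename handling, which can also be checked at the unit level. A minimal sketch; the `safe_filename` helper is hypothetical, and real services typically also whitelist characters:

```python
import os

def safe_filename(name: str) -> str:
    """Reject or neutralize path traversal in uploaded filenames."""
    base = os.path.basename(name)  # strips any directory components
    if base in ("", ".", ".."):
        raise ValueError(f"invalid filename: {name!r}")
    return base

def test_path_traversal_is_neutralized():
    assert safe_filename("../../../etc/passwd") == "passwd"
    assert safe_filename("report.pdf") == "report.pdf"

def test_double_extension_is_visible_to_checks():
    # .pdf.exe should still end in .exe after sanitization,
    # so a later extension check can reject it.
    assert safe_filename("invoice.pdf.exe").endswith(".exe")
```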
Historical Bug Testing
Review past defects and test for similar issues:
- Get defect reports from the previous release
- Categorize defects by feature area and root cause
- Design tests targeting the same error patterns
- Execute on the current version
This is effective because:
- Same developers make similar mistakes
- Code patterns repeat across features
- Fixed bugs often reappear after refactoring
Step-by-Step Implementation
Here is a practical process for incorporating error guessing into your testing.
Phase 1: Preparation
1.1 Review Available Information
Before testing, gather:
- Feature requirements and specifications
- Design documents and architecture diagrams
- Past defect reports for this feature or similar features
- Known limitations and constraints
- Developer notes or comments
1.2 Identify High-Risk Areas
Focus error guessing on:
- New or recently modified code
- Complex calculations or business logic
- Integration points with external systems
- Features with history of defects
- Areas with unclear requirements
1.3 Select Error Categories
Choose relevant categories from your fault taxonomy based on:
- Feature type (forms, calculations, file handling)
- Technology stack (database, API, UI)
- Business domain (finance, healthcare, e-commerce)
Phase 2: Test Design
2.1 Create Error Hypotheses
For each high-risk area, document specific anticipated errors:
Feature: Shopping Cart
Hypothesis: Cart total calculation will fail with fractional quantities
Rationale: Past bug found where 1.5 x $10.00 showed as $10.00
Test: Add item with quantity 1.5 and verify total
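A hypothesis like this translates almost line for line into an automated check. A minimal sketch; `cart_total` is a stand-in for the real calculation:

```python
from decimal import Decimal

def cart_total(quantity: str, unit_price: str) -> Decimal:
    # Hypothetical calculation under test; Decimal avoids float rounding.
    return Decimal(quantity) * Decimal(unit_price)

def test_fractional_quantity_hypothesis():
    # Error guess: 1.5 x $10.00 once showed as $10.00 because the quantity
    # was silently truncated to an int. This test targets that exact bug.
    assert cart_total("1.5", "10.00") == Decimal("15.00")
```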
2.2 Design Targeted Tests
Create test cases for each hypothesis:
| Test ID | Hypothesis | Input | Expected Result |
|---|---|---|---|
| EG-001 | Fractional qty bug | Qty: 1.5, Price: $10.00 | Total: $15.00 |
| EG-002 | Zero qty handling | Qty: 0 | Remove item or show error |
| EG-003 | Negative qty | Qty: -1 | Show error, do not allow |
2.3 Prioritize Tests
Rank error guessing tests by:
- Likelihood of finding a defect
- Severity if defect exists
- Time required to execute
Phase 3: Execution
3.1 Execute Systematically
Run error guessing tests in priority order. Document:
- Test executed
- Actual result
- Pass/Fail determination
- Any unexpected behaviors
3.2 Investigate Failures Thoroughly
When a test finds a defect:
- Confirm the defect is reproducible
- Determine the root cause if possible
- Check if similar defects exist elsewhere
- Add new error hypotheses based on findings
3.3 Adjust Based on Results
If early tests find defects:
- Design additional tests for similar error types
- Expand testing in the affected area
- Look for the same error in related features
Phase 4: Documentation
4.1 Record What Worked
Document successful error guesses:
- Which hypotheses found defects
- What knowledge led to the hypothesis
- How the test was designed
4.2 Update Fault Taxonomy
Add new defect types discovered during testing:
- Categorize the defect
- Document how to test for it
- Add to future testing checklists
4.3 Share Knowledge
Distribute findings to the team:
- Include in defect reports
- Add to project retrospectives
- Update team testing checklists
Practical Examples
Example 1: E-Commerce Checkout
Feature: Payment processing form
Error Guessing Approach:
| Hypothesis | Test | Result |
|---|---|---|
| Credit card validation accepts invalid numbers | Enter 1234-5678-9012-3456 | Should reject |
| Expiry date accepts past dates | Enter last month | Should reject |
| CVV field accepts more than 4 digits | Enter 12345 | Should limit or reject |
| Order submits twice on double-click | Click submit rapidly | Should prevent duplicate order |
| Session expires during long checkout | Wait 30+ minutes then submit | Should handle gracefully |
| Price changes between cart and checkout | Modify prices in database during checkout | Should use cart price or notify user |
Example 2: File Upload Feature
Feature: Document upload for user profiles
Error Guessing Approach:
| Hypothesis | Test | Result |
|---|---|---|
| Accepts executable files | Upload .exe renamed to .pdf | Should reject based on content |
| Path traversal in filename | Name file "../../../etc/passwd" | Should sanitize filename |
| Handles corrupt files | Upload truncated PDF | Should show appropriate error |
| No size limit enforcement | Upload 1GB file | Should enforce stated limit |
| Concurrent uploads conflict | Upload two files simultaneously | Both should succeed |
| Unicode filename causes issues | Name file "document_résumé.pdf" (accented Unicode characters) | Should handle correctly |
Example 3: Search Functionality
Feature: Product search in catalog
Error Guessing Approach:
| Hypothesis | Test | Result |
|---|---|---|
| XSS in search results | Search for <script>alert(1)</script> | Should encode, not execute |
| SQL injection | Search for ' OR '1'='1 | Should treat as literal search |
| Empty search crashes | Submit empty search | Should show appropriate message |
| Very long search string | Enter 10,000 characters | Should truncate or limit |
| Special characters break search | Search for *?[]{} | Should escape or handle |
| No results page handles errors | Search for gibberish | Should show friendly message |
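The XSS hypothesis from this table can also be verified at the unit level wherever the query is echoed back. A minimal sketch using Python's standard `html.escape`; `render_search_echo` is hypothetical:

```python
import html

def render_search_echo(query: str) -> str:
    # Hypothetical view fragment that echoes the query back to the page.
    return f"<p>Results for: {html.escape(query)}</p>"

def test_search_echo_encodes_script_tags():
    page = render_search_echo("<script>alert(1)</script>")
    assert "<script>" not in page    # payload is not executable markup
    assert "&lt;script&gt;" in page  # it is shown as literal text
```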
Error Guessing vs Other Techniques
Error guessing complements rather than replaces other test design techniques.
Comparison with Formal Techniques
| Aspect | Error Guessing | Boundary Value Analysis | Equivalence Partitioning |
|---|---|---|---|
| Basis | Experience and intuition | Input domain boundaries | Input value groups |
| Structure | Flexible | Highly structured | Structured |
| Coverage | Targets specific errors | Targets boundary errors | Targets partition errors |
| Documentation | Optional, often informal | Formal test cases | Formal test cases |
| Reproducibility | Depends on documentation | Fully reproducible | Fully reproducible |
| Skill required | Significant experience | Basic training | Basic training |
When to Use Each Technique
Use formal techniques first to ensure systematic coverage:
- Equivalence partitioning for input domains
- Boundary value analysis for range limits
- Decision tables for complex business rules
Use error guessing to supplement formal techniques:
- Target errors formal techniques do not cover
- Test based on historical defect patterns
- Explore areas of suspected weakness
Combining Techniques
A complete test strategy uses multiple techniques:
1. Apply equivalence partitioning to identify valid/invalid partitions
2. Apply boundary value analysis to test partition boundaries
3. Apply error guessing to target likely defect areas
4. Apply exploratory testing to find unexpected issues
Documenting Error Guesses
While error guessing is flexible, documentation improves its value.
Why Document
- Knowledge retention: Captures what worked for future use
- Training: Helps others learn effective error guessing
- Traceability: Shows what was tested and why
- Improvement: Enables analysis of error guessing effectiveness
What to Document
Minimum documentation:
- Error hypothesis (what defect you anticipated)
- Test performed (how you tested for it)
- Result (pass, fail, or inconclusive)
Detailed documentation:
Error Guess: EG-042
Feature: Password reset
Hypothesis: Reset link remains valid after password is changed
Rationale: Found this bug in previous project; common oversight
Test Steps:
1. Request password reset
2. Open reset link in new tab
3. Change password using link
4. Attempt to use original link again
Expected: Link should be invalid after single use
Actual: Link could be used multiple times
Status: FAIL - Bug filed as BUG-1234
Error Guess Catalog
Maintain a catalog of effective error guesses:
| ID | Category | Hypothesis | Success Rate |
|---|---|---|---|
| EG-001 | Input | Null in required field causes crash | Found bugs in 3/5 projects |
| EG-002 | Date | Leap year calculation wrong | Found bugs in 2/5 projects |
| EG-003 | Concurrency | Double-submit creates duplicates | Found bugs in 4/5 projects |
Common Mistakes to Avoid
Mistake 1: Replacing Formal Testing
Problem: Using only error guessing without systematic test design.
Why it fails: Error guessing catches specific anticipated errors but misses systematic coverage. You find the bugs you expected but miss the ones you did not.
Solution: Always apply formal techniques first. Use error guessing to supplement, not replace.
Mistake 2: Not Documenting
Problem: Running error guessing tests without recording what was tested.
Why it fails: Knowledge is lost. The same tests may be repeated; successful approaches are not shared.
Solution: Document at least the hypothesis and result for each error guess.
Mistake 3: Guessing Without Experience
Problem: Junior testers trying to error guess without a knowledge base.
Why it fails: Without knowledge of common errors, guesses are random rather than targeted.
Solution: Train with fault taxonomies. Start by executing documented error guessing tests before creating new ones.
Mistake 4: Testing Only Happy Paths
Problem: Error guesses focus on unusual inputs but assume normal system state.
Why it fails: Many defects require specific system conditions, not just unusual inputs.
Solution: Include error guesses about state, timing, and resource conditions.
Mistake 5: Stopping After First Bug
Problem: Finding a defect and moving on without exploring related errors.
Why it fails: Defects cluster. One bug often indicates similar bugs nearby.
Solution: When a defect is found, design additional error guesses targeting similar issues.
Building Error Guessing Skills
Error guessing improves with deliberate practice.
Study Defects
Review bug databases: Examine how defects were found and what caused them.
Categorize what you find: Build your personal fault taxonomy from real bugs.
Identify patterns: Notice which defect types appear repeatedly.
Learn Common Errors
Programming language pitfalls: Every language has common mistakes:
- Java: NullPointerException, ConcurrentModificationException
- Python: IndentationError, mutable default arguments (see the sketch after this list)
- JavaScript: undefined vs null, async/await errors
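The mutable-default-argument pitfall is worth seeing once. A minimal demonstration:

```python
def add_tag_buggy(tag, tags=[]):
    # Bug: the default list is created once and shared across calls.
    tags.append(tag)
    return tags

def add_tag_fixed(tag, tags=None):
    if tags is None:
        tags = []  # fresh list per call
    tags.append(tag)
    return tags

assert add_tag_buggy("a") == ["a"]
assert add_tag_buggy("b") == ["a", "b"]  # surprise: "a" leaked in
assert add_tag_fixed("a") == ["a"]
assert add_tag_fixed("b") == ["b"]       # independent calls
```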
Framework-specific issues: Learn what goes wrong in the frameworks you test:
- React: stale closures, useEffect dependencies
- SQL: injection, N+1 queries, transaction isolation
Security vulnerabilities: Study OWASP Top 10 and common attack patterns.
Practice Systematically
Attack exercises: Practice fault attacks on test applications.
Bug bounty programs: Participate in programs that reward finding bugs.
Code review: Reviewing code builds understanding of where bugs hide.
Share Knowledge
Teach others: Explaining error guessing solidifies your understanding.
Document successes: Keep records of effective error guesses.
Build team resources: Contribute to shared fault taxonomies and checklists.
Summary and Key Takeaways
Error guessing is an experience-based testing technique that targets anticipated defects. It works because software defects follow patterns that experienced testers learn to recognize.
Core principles:
- Error guessing is targeted and intentional, not random
- It supplements formal techniques, not replaces them
- Effectiveness improves with experience and knowledge
Key components:
- Fault taxonomies classify common error types
- Error checklists guide systematic application
- Documentation captures knowledge for reuse
Implementation steps:
- Prepare by reviewing information and identifying high-risk areas
- Design tests based on specific error hypotheses
- Execute systematically and investigate failures
- Document results and update your fault taxonomy
Common error categories:
- Input handling (null, empty, special characters, length)
- Boundary and transition (dates, numeric limits, state changes)
- State and sequence (order, concurrency, persistence)
- Integration (APIs, encoding, external systems)
- Resources (memory, storage, connections)
Building skills:
- Study past defects and categorize them
- Learn common programming errors and security issues
- Practice on test applications and bug bounties
- Share knowledge and build team resources
Error guessing becomes more powerful as you gain experience. Start with established fault taxonomies, document what works, and continuously refine your approach based on what finds bugs.
Quiz on Error Guessing
Question: What is the primary characteristic that distinguishes error guessing from ad-hoc testing?
Continue Reading
- The Software Testing Lifecycle: An Overview. Dive into the crucial phase of Test Requirement Analysis in the Software Testing Lifecycle, understanding its purpose, activities, deliverables, and best practices to ensure a successful software testing process.
- How to Master Test Requirement Analysis? Learn how to master requirement analysis, an essential part of the Software Test Life Cycle (STLC), and improve the efficiency of your software testing process.
- Test Planning. Dive into the world of Kanban with this comprehensive introduction, covering its principles, benefits, and applications in various industries.
- Test Design. Learn the essential steps in the test design phase of the software testing lifecycle, its deliverables, entry and exit criteria, and effective tips for successful test design.
- Test Execution. Learn about the steps, deliverables, entry and exit criteria, risks and schedules in the Test Execution phase of the Software Testing Lifecycle, and tips for performing this phase effectively.
- Test Analysis Phase. Discover the steps, deliverables, entry and exit criteria, risks and schedules in the Test Analysis phase of the Software Testing Lifecycle, and tips for performing this phase effectively.
- Test Reporting Phase. Learn the essential steps, deliverables, entry and exit criteria, risks, schedules, and tips for effective Test Reporting in the Software Testing Lifecycle to improve application quality and testing processes.
- Fixing Phase. Explore the crucial steps, deliverables, entry and exit criteria, risks, schedules, and tips for effective Fixing in the Software Testing Lifecycle to boost application quality and streamline the testing process.
- Test Closure Phase. Discover the steps, deliverables, entry and exit criteria, risks, schedules, and tips for performing an effective Test Closure phase in the Software Testing Lifecycle, ensuring a successful and streamlined testing process.