
CTAL-TA Quality Characteristics: ISO 25010, Usability, Reliability, and Maintainability Testing
Chapter 4 of the ISTQB CTAL-TA syllabus focuses on testing software quality characteristics beyond functional correctness. Representing approximately 23% of the exam content, this chapter examines how Test Analysts evaluate software against quality attributes defined by international standards like ISO 25010.
This comprehensive guide covers the ISO 25010 quality model, specialized testing approaches for usability, reliability, maintainability, and other quality characteristics that distinguish excellent software from merely functional software.
Understanding Software Quality
Beyond Functional Correctness
While functional testing verifies that software performs its intended functions correctly, quality characteristics testing evaluates how well the software performs:
Functional Testing Questions:
- Does the login function work?
- Does the calculation produce correct results?
- Do the business rules execute properly?
Quality Characteristics Questions:
- Is the login process easy to use?
- Does the system remain stable under heavy load?
- Can the software be easily modified when requirements change?
- Will it work correctly when deployed to different environments?
The Business Value of Quality
Quality characteristics directly impact business outcomes:
| Quality Issue | Business Impact |
|---|---|
| Poor usability | Increased training costs, user abandonment |
| Low reliability | Customer dissatisfaction, revenue loss |
| Poor maintainability | High technical debt, slow feature delivery |
| Limited portability | Market restrictions, deployment costs |
CTAL-TA Focus: The exam emphasizes understanding when and how to test each quality characteristic, not just definitions. Be prepared to identify appropriate testing approaches for given scenarios.
ISO 25010 Quality Model
Product Quality Model Overview
ISO/IEC 25010:2011 (revised in 2023) defines a comprehensive framework for software product quality. The eight characteristics below follow the 2011 edition:
Eight Quality Characteristics:
- Functional Suitability - Does it do what users need?
- Performance Efficiency - How responsive and resource-efficient is it?
- Compatibility - Does it work with other systems?
- Usability - How easy is it to use?
- Reliability - How consistently does it perform?
- Security - How well does it protect information?
- Maintainability - How easy is it to modify?
- Portability - How easily can it be transferred?
Quality Characteristics and Sub-Characteristics
Each main characteristic has sub-characteristics:
Functional Suitability
├── Functional completeness
├── Functional correctness
└── Functional appropriateness
Performance Efficiency
├── Time behaviour
├── Resource utilization
└── Capacity
Compatibility
├── Co-existence
└── Interoperability
Usability
├── Appropriateness recognizability
├── Learnability
├── Operability
├── User error protection
├── User interface aesthetics
└── Accessibility
Reliability
├── Maturity
├── Availability
├── Fault tolerance
└── Recoverability
Security
├── Confidentiality
├── Integrity
├── Non-repudiation
├── Accountability
└── Authenticity
Maintainability
├── Modularity
├── Reusability
├── Analysability
├── Modifiability
└── Testability
Portability
├── Adaptability
├── Installability
└── Replaceability
Quality in Use Model
ISO 25010 also defines quality from the user's perspective:
Quality in Use Characteristics:
- Effectiveness - Users achieve goals accurately
- Efficiency - Resources expended relative to the accuracy and completeness of goals achieved
- Satisfaction - User attitude toward use
- Freedom from risk - Economic, health, environmental risk mitigation
- Context coverage - Works in specified contexts
Functional Suitability Testing
Functional Completeness
Verifying all required functions are implemented:
Testing Approach:
- Requirements traceability analysis
- Feature coverage verification
- User story acceptance criteria validation
Metrics:
- Percentage of requirements with test coverage
- Feature implementation completeness
- Acceptance criteria pass rate
Functional Correctness
Verifying functions produce correct results:
Testing Approach:
- Expected results verification
- Calculation accuracy testing
- Data transformation validation
Example Tests:
- Tax calculation accuracy across scenarios
- Currency conversion precision
- Date/time handling across time zones
Functional Appropriateness
Verifying functions facilitate user tasks:
Testing Approach:
- Workflow efficiency evaluation
- Task completion analysis
- User journey validation
Evaluation Criteria:
- Steps required to complete tasks
- Unnecessary function exposure
- Alignment with user mental models
Usability Testing
Appropriateness Recognizability
Can users recognize if software meets their needs?
Testing Methods:
- First impression testing
- Landing page evaluation
- Feature discoverability assessment
Evaluation Points:
- Clear communication of product purpose
- Obvious navigation to key features
- Transparent capability demonstration
Learnability
How easily can users learn to use the software?
Testing Approaches:
Cognitive Walkthrough:
- Define user goals
- Step through interface actions
- Evaluate whether users can determine correct actions
- Identify learning obstacles
Learnability Metrics:
- Time to complete first task
- Number of errors during learning
- Help system access frequency
- Task success rate over time
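As a minimal sketch of how these metrics can be aggregated, the snippet below computes them from hypothetical session records of a learnability study (the record fields and values are illustrative, not from any specific tool):

```python
from statistics import mean

# Hypothetical session records: one dict per participant attempt,
# in chronological order per participant.
sessions = [
    {"participant": "P1", "attempt": 1, "task_seconds": 310, "errors": 4, "help_opens": 2, "succeeded": False},
    {"participant": "P1", "attempt": 2, "task_seconds": 190, "errors": 1, "help_opens": 0, "succeeded": True},
    {"participant": "P2", "attempt": 1, "task_seconds": 240, "errors": 2, "help_opens": 1, "succeeded": True},
]

def success_rate(records):
    """Share of sessions in which the task was completed."""
    return sum(r["succeeded"] for r in records) / len(records)

first_attempts = [r for r in sessions if r["attempt"] == 1]
print(f"Mean time on first attempt: {mean(r['task_seconds'] for r in first_attempts):.0f}s")
print(f"Errors during learning:     {sum(r['errors'] for r in sessions)}")
print(f"Help accesses:              {sum(r['help_opens'] for r in sessions)}")
print(f"Overall task success rate:  {success_rate(sessions):.0%}")
```

Tracking the same metrics across repeated attempts (attempt 1 vs attempt 2 above) is what reveals the learning curve, not any single measurement.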
Operability
How easy is it to operate and control?
Evaluation Areas:
- Input mechanisms (keyboard, touch, voice)
- Error recovery options
- Undo/redo capabilities
- Customization options
Testing Methods:
- Task-based usability testing
- Heuristic evaluation
- Cognitive walkthrough
User Error Protection
How well does the system prevent and handle user errors?
Testing Scenarios:
- Invalid input handling
- Confirmation for destructive actions
- Undo capabilities for mistakes
- Clear error messages with resolution guidance
Example Tests:
| Scenario | Expected Protection |
|---|---|
| Delete important item | Confirmation dialog |
| Invalid email format | Immediate validation feedback |
| Unsaved changes on navigation | Warning with save option |
| Incorrect password | Clear message, no lockout on first attempt |
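To make the "invalid email format" row concrete, here is a minimal sketch of immediate validation with actionable feedback. The function name, regex, and message wording are illustrative assumptions, not from any particular framework:

```python
import re

# Validate on input and return specific guidance rather than a bare failure.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # deliberately simple check

def validate_email(value: str) -> tuple[bool, str]:
    if EMAIL_RE.match(value):
        return True, ""
    return False, "Enter an email address like name@example.com"

# Expected protection: immediate, specific feedback on invalid input.
ok, message = validate_email("alice@example")  # missing top-level domain
assert not ok and "example.com" in message
```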
Accessibility Testing
Ensuring software is usable by people with disabilities:
Standards:
- WCAG 2.1 (Web Content Accessibility Guidelines)
- Section 508 (US federal requirement)
- EN 301 549 (European standard)
Testing Areas:
| Area | Testing Approach |
|---|---|
| Visual | Screen reader compatibility, color contrast |
| Motor | Keyboard navigation, click target sizes |
| Auditory | Captions, visual alerts |
| Cognitive | Simple language, consistent navigation |
Automated Tools:
- WAVE (Web Accessibility Evaluation Tool)
- axe DevTools
- Lighthouse (Chrome)
- NVDA, JAWS (Screen readers for manual testing)
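To illustrate the kind of check these tools automate, the sketch below implements the WCAG 2.1 colour-contrast calculation (relative luminance of sRGB colours and the resulting contrast ratio); the example colours are arbitrary:

```python
def relative_luminance(rgb):
    """WCAG 2.1 relative luminance of an sRGB colour given as 0-255 ints."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# WCAG 2.1 AA requires >= 4.5:1 for normal text; mid-grey on white just fails.
ratio = contrast_ratio((119, 119, 119), (255, 255, 255))  # #777 on #fff
print(f"{ratio:.2f}:1 -> {'pass' if ratio >= 4.5 else 'fail'} (AA normal text)")
```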
⚠️ Important: Automated accessibility tools find approximately 30-40% of accessibility issues. Manual testing with assistive technologies is essential for comprehensive accessibility evaluation.
User Interface Aesthetics
Evaluating the visual appeal and design quality:
Evaluation Criteria:
- Visual consistency across screens
- Appropriate use of color and typography
- Professional appearance
- Brand alignment
Testing Methods:
- Expert design review
- User satisfaction surveys
- A/B testing of design alternatives
Reliability Testing
Maturity
How reliable is the software under normal operation?
Metrics:
- Mean Time Between Failures (MTBF)
- Defect density in production
- Failure rate over time
Testing Approach:
- Extended operation testing
- Stress testing within normal parameters
- Historical defect analysis
Availability
What proportion of time is the system operational?
Availability Formula:
Availability = MTBF / (MTBF + MTTR)
Where:
MTBF = Mean Time Between Failures
MTTR = Mean Time To Repair/Recover
Availability Targets:
| Level | Annual Downtime | Common Requirement |
|---|---|---|
| 99% | 3.65 days | Basic web applications |
| 99.9% | 8.76 hours | Business applications |
| 99.99% | 52.56 minutes | E-commerce, banking |
| 99.999% | 5.26 minutes | Critical infrastructure |
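A small sketch of the arithmetic behind the table and the formula above: converting an availability target into annual downtime, and deriving availability from measured MTBF/MTTR (the 30-day/45-minute figures are illustrative):

```python
# Annual downtime for a given availability target, using a 365-day year
# as the table above does.
HOURS_PER_YEAR = 24 * 365

def annual_downtime_minutes(availability: float) -> float:
    return (1 - availability) * HOURS_PER_YEAR * 60

def availability_from_mtbf(mtbf_h: float, mttr_h: float) -> float:
    return mtbf_h / (mtbf_h + mttr_h)

for target in (0.99, 0.999, 0.9999, 0.99999):
    print(f"{target:.3%}: {annual_downtime_minutes(target):8.1f} min/year")

# e.g. one failure every 30 days with a 45-minute recovery:
print(f"Measured availability: {availability_from_mtbf(30 * 24, 0.75):.4%}")
```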
Testing Considerations:
- Scheduled vs unscheduled downtime
- Maintenance window impacts
- Geographic availability differences
Fault Tolerance
How well does the system operate despite faults?
Testing Scenarios:
- Component failure simulation
- Network disruption handling
- Database connection loss
- External service unavailability
Fault Injection Testing:
- Identify critical failure points
- Design fault injection scenarios
- Execute while monitoring system behavior
- Verify graceful degradation
Example Fault Tolerance Tests:
| Fault Injected | Expected Behavior |
|---|---|
| Database primary failure | Failover to replica |
| API timeout | Retry with backoff, then graceful error |
| Memory exhaustion | Controlled shutdown, no data loss |
| Network partition | Continue with cached data, sync when restored |
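The "API timeout" row can be exercised with a small fault-injection harness. Below is a minimal sketch, assuming a hypothetical `call_api` dependency: retry with exponential backoff, then a graceful, user-facing error instead of an unhandled timeout:

```python
import time

class ServiceUnavailable(Exception):
    """Graceful, catchable failure raised after retries are exhausted."""

def call_with_backoff(call_api, retries=3, base_delay=0.5):
    for attempt in range(retries):
        try:
            return call_api()
        except TimeoutError:
            if attempt == retries - 1:
                break
            time.sleep(base_delay * 2 ** attempt)  # 0.5s, 1s, 2s, ...
    raise ServiceUnavailable("Upstream API did not respond; try again later")

# Fault injection: a fake API that times out twice, then succeeds.
attempts = iter([TimeoutError, TimeoutError, "ok"])
def flaky_api():
    result = next(attempts)
    if result is TimeoutError:
        raise TimeoutError
    return result

assert call_with_backoff(flaky_api, base_delay=0.01) == "ok"
```

The test verifies both halves of the expected behavior: transient faults are absorbed by retries, and persistent faults surface as a controlled error rather than a crash.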
Recoverability
How quickly and completely can the system recover from failures?
Recovery Testing:
- Backup restoration verification
- Crash recovery testing
- Data integrity verification post-recovery
- Recovery time measurement
Key Metrics:
- Recovery Time Objective (RTO): Maximum acceptable downtime
- Recovery Point Objective (RPO): Maximum acceptable data loss
Testing Process:
- Create known system state
- Simulate failure scenario
- Execute recovery procedures
- Verify data integrity and functionality
- Measure recovery time
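The five steps map directly onto an executable test skeleton. In this sketch the hooks (`seed_known_state`, `simulate_failure`, `run_recovery`, `data_checksum`) are hypothetical stand-ins for environment-specific procedures, and the RTO value is illustrative:

```python
import time

RTO_SECONDS = 300  # Recovery Time Objective for this system (illustrative)

def recovery_test(seed_known_state, simulate_failure, run_recovery, data_checksum):
    seed_known_state()                      # 1. create known system state
    expected = data_checksum()
    simulate_failure()                      # 2. simulate failure scenario
    started = time.monotonic()
    run_recovery()                          # 3. execute recovery procedures
    recovery_seconds = time.monotonic() - started
    assert data_checksum() == expected      # 4. verify data integrity
    assert recovery_seconds <= RTO_SECONDS  # 5. measure recovery time vs RTO
    return recovery_seconds
```

Measuring RPO follows the same pattern, except the assertion compares the last committed transaction before the failure against the data present after recovery.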
Maintainability Testing
Modularity
How well is the system divided into discrete components?
Evaluation Approach:
- Architecture review
- Coupling analysis
- Component independence assessment
Indicators of Good Modularity:
- Components can be modified independently
- Clear interfaces between modules
- Minimal ripple effects from changes
Reusability
Can components be used in other systems?
Assessment Areas:
- Component documentation quality
- Interface standardization
- External dependency minimization
- Configuration flexibility
Analysability
How easily can the system be diagnosed?
Testing Focus:
- Log quality and completeness
- Error message clarity
- Diagnostic tool availability
- Monitoring capability
Test Scenarios:
| Scenario | Analysability Test |
|---|---|
| Error occurs | Can root cause be determined from logs? |
| Performance degrades | Are metrics available to identify bottleneck? |
| Unexpected behavior | Can system state be inspected? |
| Security incident | Is audit trail sufficient for investigation? |
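The "Error occurs" row can be partially automated. As a minimal sketch, assuming JSON-lines structured logs, the check below verifies that every error entry carries the fields an investigator needs (the field names are illustrative, not a standard):

```python
import json

REQUIRED_FIELDS = {"timestamp", "level", "message", "request_id", "stack_trace"}

log_lines = [
    '{"timestamp": "2024-05-01T10:02:11Z", "level": "ERROR", '
    '"message": "payment declined", "request_id": "r-481"}',
]

for line in log_lines:
    entry = json.loads(line)
    if entry.get("level") == "ERROR":
        missing = REQUIRED_FIELDS - entry.keys()
        if missing:
            print(f"Analysability gap in {entry['request_id']}: missing {sorted(missing)}")
```

Here the sample entry lacks a stack trace, so root-cause analysis from logs alone would stall: exactly the kind of finding to report as an analysability defect.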
Modifiability
How easily can changes be made to the system?
Static Analysis Indicators:
- Cyclomatic complexity
- Code duplication percentage
- Dependency depth
- Method/class size metrics
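As a minimal sketch of the first indicator, cyclomatic complexity can be estimated by counting decision points in a function's AST. Full tools such as radon or SonarQube cover more constructs; the `classify` sample is only parsed, never executed:

```python
import ast

# Each decision point adds one to the base complexity of 1.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))

sample = """
def classify(x):
    if x < 0:
        return "negative"
    for d in divisors(x):
        if d > 1 and x % d == 0:
            return "composite"
    return "prime-ish"
"""
print(cyclomatic_complexity(sample))  # 1 + if + for + if + and = 5
```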
Dynamic Testing:
- Introduce controlled changes
- Measure effort required
- Assess regression impact
Testability
How effectively can the software be tested?
Testability Factors:
| Factor | Good Testability | Poor Testability |
|---|---|---|
| Observability | Clear outputs, logging | Hidden state, no logs |
| Controllability | Easy to set states | Hard to create conditions |
| Isolation | Components testable alone | Tight coupling |
| Documentation | Clear specifications | Ambiguous requirements |
Test Analyst Insight: Testability is a quality characteristic you can influence. When you encounter testability issues, raise them as defects and work with developers to improve the system's testability.
Portability Testing
Adaptability
How easily can the software be adapted to different environments?
Testing Scenarios:
- Different operating systems
- Various hardware configurations
- Different database systems
- Cloud vs on-premise deployment
Configuration Testing:
- Parameter-driven behavior changes
- Feature flags functionality
- Environment-specific settings
Installability
How easily can the software be installed?
Testing Areas:
- Installation procedure correctness
- Installation time measurement
- Rollback capability
- Upgrade path testing
- Uninstallation completeness
Installation Test Scenarios:
| Scenario | Verification |
|---|---|
| Clean install | Application functions correctly |
| Upgrade from previous version | Data migrated, settings preserved |
| Side-by-side installation | Both versions function independently |
| Silent/automated install | Completes without interaction |
| Installation failure | Graceful failure, no partial install |
Replaceability
How easily can software replace another product?
Testing Considerations:
- Data migration completeness
- Functional equivalence
- User transition experience
- Integration point compatibility
Compatibility Testing
Co-existence
Does the software operate alongside other software without adverse effects?
Testing Approach:
- Resource consumption analysis
- Shared resource conflict testing
- Side-effect identification
Common Issues:
- Port conflicts
- Shared file access
- Memory competition
- Registry/configuration conflicts
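Port conflicts, the first issue above, are cheap to probe before deployment. A minimal sketch (the port numbers are illustrative):

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """True if nothing is currently listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex((host, port)) != 0  # nonzero = connection refused

for port in (8080, 5432):
    state = "free" if port_is_free(port) else "IN USE - potential conflict"
    print(f"port {port}: {state}")
```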
Interoperability
Can the software exchange information with other systems?
Testing Areas:
- API compatibility
- Data format validation
- Protocol compliance
- Integration point verification
Interoperability Test Types:
| Type | Focus |
|---|---|
| Data Exchange | Format, encoding, completeness |
| API Integration | Request/response validation |
| Protocol Compliance | Standards adherence |
| Real-time Communication | Synchronization, timing |
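For the "Data Exchange" row, one common technique is validating a partner system's payload against the agreed contract. A minimal sketch using the jsonschema library (`pip install jsonschema`); the schema and payload are illustrative:

```python
from jsonschema import validate, ValidationError

ORDER_SCHEMA = {
    "type": "object",
    "required": ["order_id", "amount", "currency"],
    "properties": {
        "order_id": {"type": "string"},
        "amount": {"type": "number", "minimum": 0},
        "currency": {"type": "string", "pattern": "^[A-Z]{3}$"},  # ISO 4217 style
    },
}

payload = {"order_id": "A-1001", "amount": 49.9, "currency": "usd"}  # wrong case
try:
    validate(instance=payload, schema=ORDER_SCHEMA)
except ValidationError as err:
    print(f"Interoperability defect: {err.message}")
```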
Test Analyst Perspective on Quality
Prioritizing Quality Characteristics
Not all quality characteristics are equally important for every project:
Factors for Prioritization:
- Business domain requirements
- User expectations
- Regulatory compliance
- Competitive differentiation
Domain-Specific Priorities:
| Domain | Critical Characteristics |
|---|---|
| Healthcare | Reliability, Security, Accessibility |
| E-commerce | Usability, Performance, Availability |
| Banking | Security, Reliability, Compliance |
| Mobile Apps | Usability, Portability, Performance |
| Enterprise | Maintainability, Compatibility, Security |
Test Planning for Quality Characteristics
Planning Considerations:
- Identify relevant quality characteristics from requirements
- Define measurable quality targets
- Select appropriate testing techniques
- Plan required tools and environments
- Estimate effort for each characteristic
Example Quality Test Plan Entries:
| Characteristic | Target | Technique | Tools |
|---|---|---|---|
| Usability | SUS score > 80 | Usability lab testing | Eye tracking, surveys |
| Reliability | 99.9% availability | Soak testing | Monitoring, chaos tools |
| Performance | < 2s response | Load testing | JMeter, Gatling |
| Accessibility | WCAG 2.1 AA | Manual + automated | axe, screen readers |
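For the "SUS score > 80" target, the System Usability Scale uses a fixed scoring rule: ten items answered 1-5, odd-numbered items contribute (answer − 1), even-numbered items contribute (5 − answer), and the raw 0-40 total is scaled by 2.5 to 0-100. A sketch for one respondent (the answers are made up):

```python
def sus_score(answers):
    """Standard SUS scoring for one respondent's ten 1-5 answers."""
    assert len(answers) == 10
    total = sum(
        (a - 1) if i % 2 == 0 else (5 - a)  # odd-numbered vs even-numbered items
        for i, a in enumerate(answers)
    )
    return total * 2.5  # scale raw 0-40 points to 0-100

print(sus_score([5, 1, 4, 2, 5, 1, 4, 2, 5, 1]))  # 90.0
```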
Defect Reporting for Quality Issues
Quality characteristic defects require specific information:
Usability Defect Template:
- User task being performed
- User profile (experience level, accessibility needs)
- Specific difficulty encountered
- Severity based on user impact
- Suggested improvement
Reliability Defect Template:
- Conditions leading to failure
- Frequency of occurrence
- System state at failure
- Recovery behavior observed
- Data loss or corruption evidence
Quality Characteristics in Practice
Case Study 1: E-commerce Platform
Priority Characteristics:
- Usability - Critical for conversion rates
- Performance - Page load time impacts sales
- Reliability - Downtime = lost revenue
- Security - Payment data protection
Testing Approach:
- Usability lab sessions with target users
- Load testing for peak shopping periods
- Chaos engineering for reliability
- Penetration testing for security
Case Study 2: Healthcare Application
Priority Characteristics:
- Reliability - Patient safety depends on availability
- Accessibility - Diverse user population
- Security - Protected health information
- Usability - Time-critical clinical workflows
Testing Approach:
- Extended reliability testing (24/7 operation)
- Comprehensive accessibility testing (WCAG + assistive devices)
- Security audit and compliance verification
- Clinical workflow usability studies
Case Study 3: Mobile Banking App
Priority Characteristics:
- Security - Financial data protection
- Usability - Mobile UX expectations
- Portability - Multiple device/OS support
- Reliability - Transaction integrity
Testing Approach:
- Security testing (OWASP Mobile Top 10)
- Device lab testing across platforms
- Usability testing on various screen sizes
- Offline capability and sync testing
Metrics and Measurement
Quantitative Metrics:
| Characteristic | Sample Metrics |
|---|---|
| Usability | Task success rate, Time on task, Error rate |
| Reliability | MTBF, Failure rate, Availability percentage |
| Performance | Response time, Throughput, Resource utilization |
| Maintainability | Cyclomatic complexity, Technical debt ratio |
| Portability | Platforms supported, Migration effort |
Qualitative Assessment:
| Characteristic | Assessment Method |
|---|---|
| Usability | User satisfaction surveys (SUS, NPS) |
| Maintainability | Expert code review |
| Accessibility | Expert evaluation + user testing |
| User interface aesthetics | Design review, A/B testing |
Frequently Asked Questions
What are the eight quality characteristics in the ISO 25010 quality model?
How is system availability calculated and what does 99.9% availability mean?
What is the difference between RTO and RPO in reliability testing?
Why do automated accessibility tools only catch 30-40% of accessibility issues?
What are the sub-characteristics of Maintainability in ISO 25010?
How should Test Analysts prioritize quality characteristics for testing?
What is fault tolerance testing and how is it performed?
What usability sub-characteristics should Test Analysts evaluate?