
CTAL-TA Quality Characteristics: ISO 25010, Usability, Reliability, and Maintainability Testing

Parul Dhingra - Senior Quality Analyst

Updated: 1/25/2026

Chapter 4 of the ISTQB CTAL-TA syllabus focuses on testing software quality characteristics beyond functional correctness. Representing approximately 23% of the exam content, this chapter examines how Test Analysts evaluate software against quality attributes defined by international standards like ISO 25010.

This comprehensive guide covers the ISO 25010 quality model, specialized testing approaches for usability, reliability, maintainability, and other quality characteristics that distinguish excellent software from merely functional software.

Understanding Software Quality

Beyond Functional Correctness

While functional testing verifies that software performs its intended functions correctly, quality characteristics testing evaluates how well the software performs:

Functional Testing Questions:

  • Does the login function work?
  • Does the calculation produce correct results?
  • Do the business rules execute properly?

Quality Characteristics Questions:

  • Is the login process easy to use?
  • Does the system remain stable under heavy load?
  • Can the software be easily modified when requirements change?
  • Will it work correctly when deployed to different environments?

The Business Value of Quality

Quality characteristics directly impact business outcomes:

Quality Issue        | Business Impact
Poor usability       | Increased training costs, user abandonment
Low reliability      | Customer dissatisfaction, revenue loss
Poor maintainability | High technical debt, slow feature delivery
Limited portability  | Market restrictions, deployment costs

CTAL-TA Focus: The exam emphasizes understanding when and how to test each quality characteristic, not just definitions. Be prepared to identify appropriate testing approaches for given scenarios.

ISO 25010 Quality Model

Product Quality Model Overview

ISO/IEC 25010:2011 (revised in 2023) defines a comprehensive framework for software product quality. The eight characteristics below follow the 2011 edition, which the CTAL-TA syllabus references:

Eight Quality Characteristics:

  1. Functional Suitability - Does it do what users need?
  2. Performance Efficiency - How responsive and resource-efficient is it?
  3. Compatibility - Does it work with other systems?
  4. Usability - How easy is it to use?
  5. Reliability - How consistently does it perform?
  6. Security - How well does it protect information?
  7. Maintainability - How easy is it to modify?
  8. Portability - How easily can it be transferred?

Quality Characteristics and Sub-Characteristics

Each main characteristic has sub-characteristics:

Functional Suitability
├── Functional completeness
├── Functional correctness
└── Functional appropriateness

Performance Efficiency
├── Time behaviour
├── Resource utilization
└── Capacity

Compatibility
├── Co-existence
└── Interoperability

Usability
├── Appropriateness recognizability
├── Learnability
├── Operability
├── User error protection
├── User interface aesthetics
└── Accessibility

Reliability
├── Maturity
├── Availability
├── Fault tolerance
└── Recoverability

Security
├── Confidentiality
├── Integrity
├── Non-repudiation
├── Accountability
└── Authenticity

Maintainability
├── Modularity
├── Reusability
├── Analysability
├── Modifiability
└── Testability

Portability
├── Adaptability
├── Installability
└── Replaceability

Quality in Use Model

ISO 25010 also defines quality from the user's perspective:

Quality in Use Characteristics:

  • Effectiveness - Accuracy and completeness with which users achieve their goals
  • Efficiency - Resources expended in relation to that accuracy and completeness
  • Satisfaction - User attitude toward use of the product
  • Freedom from risk - Mitigation of economic, health, and environmental risks
  • Context coverage - Usability across all specified contexts of use

Functional Suitability Testing

Functional Completeness

Verifying all required functions are implemented:

Testing Approach:

  • Requirements traceability analysis
  • Feature coverage verification
  • User story acceptance criteria validation

Metrics:

  • Percentage of requirements with test coverage
  • Feature implementation completeness
  • Acceptance criteria pass rate
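
Where traceability data is available, the first metric can be computed mechanically. A minimal sketch in Python, assuming a hypothetical mapping from requirement IDs to the test cases that cover them:

# Hypothetical traceability data: requirement ID -> covering test case IDs
traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],          # no tests yet -> a completeness gap
    "REQ-004": ["TC-104"],
}

covered = [req for req, tests in traceability.items() if tests]
coverage_pct = len(covered) / len(traceability) * 100
print(f"Requirements with test coverage: {coverage_pct:.0f}%")  # 75%
print(f"Uncovered: {sorted(set(traceability) - set(covered))}")  # ['REQ-003']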

Functional Correctness

Verifying functions produce correct results:

Testing Approach:

  • Expected results verification
  • Calculation accuracy testing
  • Data transformation validation

Example Tests:

  • Tax calculation accuracy across scenarios
  • Currency conversion precision
  • Date/time handling across time zones

Functional Appropriateness

Verifying functions facilitate user tasks:

Testing Approach:

  • Workflow efficiency evaluation
  • Task completion analysis
  • User journey validation

Evaluation Criteria:

  • Steps required to complete tasks
  • Unnecessary function exposure
  • Alignment with user mental models

Usability Testing

Appropriateness Recognizability

Can users recognize if software meets their needs?

Testing Methods:

  • First impression testing
  • Landing page evaluation
  • Feature discoverability assessment

Evaluation Points:

  • Clear communication of product purpose
  • Obvious navigation to key features
  • Transparent capability demonstration

Learnability

How easily can users learn to use the software?

Testing Approaches:

Cognitive Walkthrough:

  1. Define user goals
  2. Step through interface actions
  3. Evaluate whether users can determine correct actions
  4. Identify learning obstacles

Learnability Metrics:

  • Time to complete first task
  • Number of errors during learning
  • Help system access frequency
  • Task success rate over time

Operability

How easy is it to operate and control?

Evaluation Areas:

  • Input mechanisms (keyboard, touch, voice)
  • Error recovery options
  • Undo/redo capabilities
  • Customization options

Testing Methods:

  • Task-based usability testing
  • Heuristic evaluation
  • Cognitive walkthrough

User Error Protection

How well does the system prevent and handle user errors?

Testing Scenarios:

  • Invalid input handling
  • Confirmation for destructive actions
  • Undo capabilities for mistakes
  • Clear error messages with resolution guidance

Example Tests:

Scenario                      | Expected Protection
Delete important item         | Confirmation dialog
Invalid email format          | Immediate validation feedback
Unsaved changes on navigation | Warning with save option
Incorrect password            | Clear message, no lockout on first attempt
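
To make the first two rows concrete, here is a small pytest-style sketch; validate_email and delete_item are hypothetical stand-ins for the functions under test:

import re

# Hypothetical implementations under test
def validate_email(value: str) -> bool:
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value) is not None

def delete_item(item_id: int, confirmed: bool = False) -> str:
    if not confirmed:
        return "confirmation_required"  # destructive action blocked
    return "deleted"

def test_invalid_email_gets_immediate_feedback():
    assert validate_email("not-an-email") is False

def test_delete_requires_confirmation():
    # Destructive action must not proceed without explicit confirmation
    assert delete_item(42) == "confirmation_required"
    assert delete_item(42, confirmed=True) == "deleted"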

Accessibility Testing

Ensuring software is usable by people with disabilities:

Standards:

  • WCAG 2.1 (Web Content Accessibility Guidelines)
  • Section 508 (US federal requirement)
  • EN 301 549 (European standard)

Testing Areas:

Area      | Testing Approach
Visual    | Screen reader compatibility, color contrast
Motor     | Keyboard navigation, click target sizes
Auditory  | Captions, visual alerts
Cognitive | Simple language, consistent navigation

Automated Tools:

  • WAVE (Web Accessibility Evaluation Tool)
  • axe DevTools
  • Lighthouse (Chrome)
  • NVDA, JAWS (Screen readers for manual testing)
⚠️ Important: Automated accessibility tools find only approximately 30-40% of accessibility issues. Manual testing with assistive technologies is essential for comprehensive accessibility evaluation.
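
One class of checks that does automate reliably is colour contrast. The sketch below implements the WCAG 2.1 relative-luminance and contrast-ratio formulas; level AA requires at least 4.5:1 for normal-size text:

def channel(c: int) -> float:
    # Convert an 8-bit sRGB channel to linear light (WCAG 2.1 definition)
    c = c / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))        # 21.0, the maximum
print(round(contrast_ratio((150, 150, 150), (255, 255, 255)), 2))  # ~2.96, fails AA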

User Interface Aesthetics

Evaluating the visual appeal and design quality:

Evaluation Criteria:

  • Visual consistency across screens
  • Appropriate use of color and typography
  • Professional appearance
  • Brand alignment

Testing Methods:

  • Expert design review
  • User satisfaction surveys
  • A/B testing of design alternatives

Reliability Testing

Maturity

How reliable is the software under normal operation?

Metrics:

  • Mean Time Between Failures (MTBF)
  • Defect density in production
  • Failure rate over time

Testing Approach:

  • Extended operation testing
  • Stress testing within normal parameters
  • Historical defect analysis

Availability

What proportion of time is the system operational?

Availability Formula:

Availability = MTBF / (MTBF + MTTR)

Where:
MTBF = Mean Time Between Failures
MTTR = Mean Time To Repair/Recover
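
A quick worked example, using illustrative figures of a 720-hour MTBF and a 45-minute MTTR:

mtbf_hours = 720.0   # illustrative: one failure per 30 days
mttr_hours = 0.75    # illustrative: 45 minutes to recover
availability = mtbf_hours / (mtbf_hours + mttr_hours)
print(f"{availability:.4%}")   # 99.8959% -- just under "three nines"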

Availability Targets:

Level   | Annual Downtime | Common Requirement
99%     | 3.65 days       | Basic web applications
99.9%   | 8.76 hours      | Business applications
99.99%  | 52.56 minutes   | E-commerce, banking
99.999% | 5.26 minutes    | Critical infrastructure

Testing Considerations:

  • Scheduled vs unscheduled downtime
  • Maintenance window impacts
  • Geographic availability differences

Fault Tolerance

How well does the system operate despite faults?

Testing Scenarios:

  • Component failure simulation
  • Network disruption handling
  • Database connection loss
  • External service unavailability

Fault Injection Testing:

  1. Identify critical failure points
  2. Design fault injection scenarios
  3. Execute while monitoring system behavior
  4. Verify graceful degradation

Example Fault Tolerance Tests:

Fault Injected           | Expected Behavior
Database primary failure | Failover to replica
API timeout              | Retry with backoff, then graceful error
Memory exhaustion        | Controlled shutdown, no data loss
Network partition        | Continue with cached data, sync when restored
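
As a concrete illustration of the API-timeout row, the following pytest-style sketch injects the fault with a mock; fetch_profile is a hypothetical wrapper that retries with exponential backoff and then degrades gracefully:

import time
from unittest.mock import Mock

class Timeout(Exception):
    """Stands in for an HTTP client's timeout error."""

def fetch_profile(client, user_id, retries=3, backoff=0.1):
    # Hypothetical wrapper: retry with exponential backoff, then degrade
    for attempt in range(retries):
        try:
            return client.get(f"/users/{user_id}")
        except Timeout:
            time.sleep(backoff * 2 ** attempt)
    return {"error": "profile temporarily unavailable"}  # graceful error

def test_api_timeout_degrades_gracefully():
    client = Mock()
    client.get.side_effect = Timeout()   # inject the fault on every call
    result = fetch_profile(client, 42, backoff=0)
    assert client.get.call_count == 3    # retried before giving up
    assert "error" in result             # no unhandled exception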

Recoverability

How quickly and completely can the system recover from failures?

Recovery Testing:

  • Backup restoration verification
  • Crash recovery testing
  • Data integrity verification post-recovery
  • Recovery time measurement

Key Metrics:

  • Recovery Time Objective (RTO): Maximum acceptable downtime
  • Recovery Point Objective (RPO): Maximum acceptable data loss

Testing Process:

  1. Create known system state
  2. Simulate failure scenario
  3. Execute recovery procedures
  4. Verify data integrity and functionality
  5. Measure recovery time
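
The following pytest-style sketch walks through these five steps against a toy in-memory stand-in; in practice, crash() and run_recovery() would hook into the real environment and backup tooling:

import time

RTO_SECONDS = 300   # illustrative target: recover within 5 minutes

class FakeDatabase:
    # Stand-in for the real system under test
    def __init__(self):
        self.rows = {"order-1": "paid"}
        self.up = True
    def checksum(self):
        return hash(frozenset(self.rows.items()))
    def crash(self):
        self.up = False
    def run_recovery(self):
        time.sleep(0.1)   # stands in for restore-from-backup work
        self.up = True
    def ping(self):
        return self.up

def test_crash_recovery_meets_rto():
    db = FakeDatabase()
    baseline = db.checksum()           # 1. known system state
    db.crash()                         # 2. simulate failure
    start = time.monotonic()
    db.run_recovery()                  # 3. execute recovery procedures
    elapsed = time.monotonic() - start
    assert db.checksum() == baseline   # 4. data integrity preserved...
    assert db.ping()                   #    ...and system is functional
    assert elapsed <= RTO_SECONDS      # 5. recovery time within the RTO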

Maintainability Testing

Modularity

How well is the system divided into discrete components?

Evaluation Approach:

  • Architecture review
  • Coupling analysis
  • Component independence assessment

Indicators of Good Modularity:

  • Components can be modified independently
  • Clear interfaces between modules
  • Minimal ripple effects from changes

Reusability

Can components be used in other systems?

Assessment Areas:

  • Component documentation quality
  • Interface standardization
  • External dependency minimization
  • Configuration flexibility

Analysability

How easily can the system be diagnosed?

Testing Focus:

  • Log quality and completeness
  • Error message clarity
  • Diagnostic tool availability
  • Monitoring capability

Test Scenarios:

Scenario             | Analysability Test
Error occurs         | Can root cause be determined from logs?
Performance degrades | Are metrics available to identify the bottleneck?
Unexpected behavior  | Can system state be inspected?
Security incident    | Is the audit trail sufficient for investigation?

Modifiability

How easily can changes be made to the system?

Static Analysis Indicators:

  • Cyclomatic complexity
  • Code duplication percentage
  • Dependency depth
  • Method/class size metrics
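
Cyclomatic complexity, the first indicator above, can be approximated directly from the syntax tree: start at 1 and add one for each decision point. A self-contained sketch using Python's standard ast module:

import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    # Approximation: 1 + number of branching constructs in the code
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES)
                   for node in ast.walk(tree))

sample = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(sample))   # 3: one base path + two decisions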

Dynamic Testing:

  • Introduce controlled changes
  • Measure effort required
  • Assess regression impact

Testability

How effectively can the software be tested?

Testability Factors:

Factor         | Good Testability         | Poor Testability
Observability  | Clear outputs, logging   | Hidden state, no logs
Controllability| Easy to set states       | Hard to create conditions
Isolation      | Components testable alone| Tight coupling
Documentation  | Clear specifications     | Ambiguous requirements

Test Analyst Insight: Testability is a quality characteristic you can influence. When you encounter testability issues, raise them as defects and work with developers to improve the system's testability.

Portability Testing

Adaptability

How easily can the software be adapted to different environments?

Testing Scenarios:

  • Different operating systems
  • Various hardware configurations
  • Different database systems
  • Cloud vs on-premise deployment

Configuration Testing:

  • Parameter-driven behavior changes
  • Feature flags functionality
  • Environment-specific settings

Installability

How easily can the software be installed?

Testing Areas:

  • Installation procedure correctness
  • Installation time measurement
  • Rollback capability
  • Upgrade path testing
  • Uninstallation completeness

Installation Test Scenarios:

Scenario                      | Verification
Clean install                 | Application functions correctly
Upgrade from previous version | Data migrated, settings preserved
Side-by-side installation     | Both versions function independently
Silent/automated install      | Completes without interaction
Installation failure          | Graceful failure, no partial install

Replaceability

How easily can software replace another product?

Testing Considerations:

  • Data migration completeness
  • Functional equivalence
  • User transition experience
  • Integration point compatibility

Compatibility Testing

Co-existence

Does the software operate alongside other software without adverse effects?

Testing Approach:

  • Resource consumption analysis
  • Shared resource conflict testing
  • Side-effect identification

Common Issues:

  • Port conflicts
  • Shared file access
  • Memory competition
  • Registry/configuration conflicts

Interoperability

Can the software exchange information with other systems?

Testing Areas:

  • API compatibility
  • Data format validation
  • Protocol compliance
  • Integration point verification

Interoperability Test Types:

Type                    | Focus
Data Exchange           | Format, encoding, completeness
API Integration         | Request/response validation
Protocol Compliance     | Standards adherence
Real-time Communication | Synchronization, timing
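
Data-exchange testing is often automated by validating messages against an agreed schema. A minimal sketch using the jsonschema library, with an illustrative payment-status contract:

from jsonschema import validate, ValidationError

# Illustrative contract agreed between the two systems
schema = {
    "type": "object",
    "required": ["transaction_id", "status", "amount"],
    "properties": {
        "transaction_id": {"type": "string"},
        "status": {"enum": ["pending", "settled", "failed"]},
        "amount": {"type": "number", "minimum": 0},
    },
}

response = {"transaction_id": "tx-123", "status": "settled", "amount": 19.99}

try:
    validate(instance=response, schema=schema)  # raises on contract violation
    print("response conforms to the agreed contract")
except ValidationError as err:
    print(f"interoperability defect: {err.message}")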

Test Analyst Perspective on Quality

Prioritizing Quality Characteristics

Not all quality characteristics are equally important for every project:

Factors for Prioritization:

  • Business domain requirements
  • User expectations
  • Regulatory compliance
  • Competitive differentiation

Domain-Specific Priorities:

Domain      | Critical Characteristics
Healthcare  | Reliability, Security, Accessibility
E-commerce  | Usability, Performance, Availability
Banking     | Security, Reliability, Compliance
Mobile Apps | Usability, Portability, Performance
Enterprise  | Maintainability, Compatibility, Security

Test Planning for Quality Characteristics

Planning Considerations:

  1. Identify relevant quality characteristics from requirements
  2. Define measurable quality targets
  3. Select appropriate testing techniques
  4. Plan required tools and environments
  5. Estimate effort for each characteristic

Example Quality Test Plan Entries:

Characteristic | Target             | Technique             | Tools
Usability      | SUS score > 80     | Usability lab testing | Eye tracking, surveys
Reliability    | 99.9% availability | Soak testing          | Monitoring, chaos tools
Performance    | < 2s response      | Load testing          | JMeter, Gatling
Accessibility  | WCAG 2.1 AA        | Manual + automated    | axe, screen readers
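
For reference, the SUS target in the first row comes from a ten-item questionnaire: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is multiplied by 2.5 to yield a 0-100 score. A minimal sketch:

def sus_score(responses: list[int]) -> float:
    # responses: ten answers on a 1-5 Likert scale, in questionnaire order
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum(r - 1 if i % 2 == 0 else 5 - r   # index 0,2,.. = odd items
                for i, r in enumerate(responses))
    return total * 2.5

# Illustrative respondent: strongly positive answers throughout
print(sus_score([5, 1, 5, 1, 5, 1, 5, 2, 4, 1]))   # 95.0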

Defect Reporting for Quality Issues

Quality characteristic defects require specific information:

Usability Defect Template:

  • User task being performed
  • User profile (experience level, accessibility needs)
  • Specific difficulty encountered
  • Severity based on user impact
  • Suggested improvement

Reliability Defect Template:

  • Conditions leading to failure
  • Frequency of occurrence
  • System state at failure
  • Recovery behavior observed
  • Data loss or corruption evidence

Quality Characteristics in Practice

Case Study 1: E-commerce Platform

Priority Characteristics:

  1. Usability - Critical for conversion rates
  2. Performance - Page load time impacts sales
  3. Reliability - Downtime = lost revenue
  4. Security - Payment data protection

Testing Approach:

  • Usability lab sessions with target users
  • Load testing for peak shopping periods
  • Chaos engineering for reliability
  • Penetration testing for security

Case Study 2: Healthcare Application

Priority Characteristics:

  1. Reliability - Patient safety depends on availability
  2. Accessibility - Diverse user population
  3. Security - Protected health information
  4. Usability - Time-critical clinical workflows

Testing Approach:

  • Extended reliability testing (24/7 operation)
  • Comprehensive accessibility testing (WCAG + assistive devices)
  • Security audit and compliance verification
  • Clinical workflow usability studies

Case Study 3: Mobile Banking App

Priority Characteristics:

  1. Security - Financial data protection
  2. Usability - Mobile UX expectations
  3. Portability - Multiple device/OS support
  4. Reliability - Transaction integrity

Testing Approach:

  • Security testing (OWASP Mobile Top 10)
  • Device lab testing across platforms
  • Usability testing on various screen sizes
  • Offline capability and sync testing

Metrics and Measurement

Quantitative Metrics:

Characteristic  | Sample Metrics
Usability       | Task success rate, time on task, error rate
Reliability     | MTBF, failure rate, availability percentage
Performance     | Response time, throughput, resource utilization
Maintainability | Cyclomatic complexity, technical debt ratio
Portability     | Platforms supported, migration effort

Qualitative Assessment:

Characteristic           | Assessment Method
Usability                | User satisfaction surveys (SUS, NPS)
Maintainability          | Expert code review
Accessibility            | Expert evaluation + user testing
User interface aesthetics| Design review, A/B testing

Test Your Knowledge

Quiz on CTAL-TA Quality Characteristics

Question: According to ISO 25010, which quality characteristic is concerned with how easily users can learn to use the software?



Frequently Asked Questions

What are the eight quality characteristics in the ISO 25010 quality model?

How is system availability calculated and what does 99.9% availability mean?

What is the difference between RTO and RPO in reliability testing?

Why do automated accessibility tools only catch 30-40% of accessibility issues?

What are the sub-characteristics of Maintainability in ISO 25010?

How should Test Analysts prioritize quality characteristics for testing?

What is fault tolerance testing and how is it performed?

What usability sub-characteristics should Test Analysts evaluate?