
GitHub Actions for Test Automation: Complete CI/CD Testing Pipeline Guide
GitHub Actions has emerged as the de facto CI/CD platform for teams developing on GitHub, transforming how we approach test automation. Instead of relying on external CI services, your testing pipeline lives right alongside your code, triggered automatically on every push, pull request, or schedule you define.
For QA engineers and test automation specialists, GitHub Actions provides a powerful orchestration layer that can run any test framework - from Selenium and Playwright to Cypress and API testing with Newman. The platform's matrix strategy enables parallel execution across multiple browsers, operating systems, and Node versions, dramatically reducing feedback time.
This comprehensive guide takes you from basic workflow setup to advanced patterns like reusable workflows, composite actions, and intelligent caching strategies. You'll learn how to implement complete testing pipelines that catch bugs early, provide rapid feedback to developers, and ultimately enable your team to deploy with confidence.
Whether you're setting up your first automated test workflow or optimizing an existing pipeline, this guide provides production-ready examples for Selenium WebDriver, Playwright, Cypress, and API testing frameworks, complete with reporting, artifact management, and troubleshooting strategies.
Table of Contents
- Why GitHub Actions for Test Automation?
- Core Concepts: Workflows, Jobs, Steps, and Runners
- Setting Up Your First Test Workflow
- Running Selenium Tests in CI
- Running Playwright Tests in CI
- Running Cypress Tests in CI
- API Test Automation with Newman and REST Assured
- Parallel Test Execution with Matrix Strategy
- Test Reporting and Artifacts Management
- Matrix Testing: Multi-Browser and Multi-OS
- Caching Dependencies for Speed Optimization
- Scheduled Test Runs and Monitoring
- Pull Request Quality Gates
- Secrets and Environment Management
- Advanced Patterns: Reusable Workflows and Composite Actions
- Troubleshooting Common CI Failures
Why GitHub Actions for Test Automation?
GitHub Actions offers compelling advantages for test automation that go beyond traditional CI/CD platforms. Understanding these benefits helps you architect effective testing pipelines.
Native GitHub Integration
GitHub Actions is deeply integrated with your repository. Workflows are defined as YAML files in the .github/workflows directory, versioned alongside your code. This means your CI/CD configuration evolves with your project, and you can review pipeline changes through the same pull request process as code changes.
The platform provides automatic access to repository context through the github context object, making it trivial to reference branch names, commit SHAs, pull request numbers, and author information without complex configuration.
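For instance, a step can echo these context values directly; this is a minimal sketch, and the values are populated automatically by GitHub Actions at run time:

```yaml
# Illustrative step showing the github and runner contexts.
- name: Print workflow context
  run: |
    echo "Ref: ${{ github.ref }}"
    echo "Commit: ${{ github.sha }}"
    echo "Actor: ${{ github.actor }}"
    echo "Event: ${{ github.event_name }}"
    echo "Runner OS: ${{ runner.os }}"
```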
Extensive Marketplace Ecosystem
The GitHub Marketplace hosts over 11,000 pre-built actions covering everything from browser setup to test reporting. Popular actions like actions/checkout, actions/setup-node, and actions/cache eliminate boilerplate configuration. Framework-specific actions like cypress-io/github-action and microsoft/playwright-github-action provide optimized setups maintained by the framework creators themselves.
This ecosystem promotes reusability and significantly reduces the time to develop new testing pipelines. Instead of scripting browser installations or configuring reporters from scratch, you leverage battle-tested community solutions.
Cost-Effective for Public Repositories
GitHub provides unlimited free minutes for public repositories, making it ideal for open-source projects. Private repositories receive 2,000 free minutes per month on the free tier, with Linux runners being the most cost-effective option.
For larger teams, self-hosted runners provide unlimited execution at no additional cost beyond your own infrastructure. This flexibility allows you to optimize costs based on your testing volume and infrastructure preferences.
Powerful Matrix Builds
Matrix strategy is GitHub Actions' killer feature for testing. A single job definition can automatically expand into dozens of parallel jobs testing different Node versions, browsers, operating systems, or custom dimensions like environment or test suites.
For example, testing Selenium across Chrome, Firefox, and Edge on both Ubuntu and Windows requires just a few lines of YAML configuration. GitHub Actions handles the combinatorial explosion, spinning up appropriate runners and executing tests in parallel.
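A sketch of such a matrix follows; the browser values and how they reach your test command are assumptions that depend on your framework's configuration:

```yaml
jobs:
  selenium:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest]
        browser: [chrome, firefox, edge]
    steps:
      - uses: actions/checkout@v4
      # Hypothetical test command that reads the browser name
      - run: npm test -- --browser=${{ matrix.browser }}
```

This single definition expands into six parallel jobs, one per OS/browser combination.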
Built-in Secret Management
Security is paramount when dealing with API keys, credentials, and environment variables. GitHub Actions provides encrypted secrets at repository, environment, and organization levels. Secrets are never exposed in logs, and you can implement approval requirements for sensitive environments like production.
Environment-specific secrets combined with environment protection rules create robust quality gates. For instance, you can require manual approval before deploying to production, while development deployments proceed automatically.
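A job opts into these rules by declaring its environment. The sketch below assumes an environment named production has been configured with required reviewers in the repository settings:

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production  # pauses here until a reviewer approves
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh     # placeholder deploy script
```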
Event-Driven Flexibility
Beyond basic push and pull request triggers, GitHub Actions supports sophisticated event patterns. You can trigger workflows on:
- Pull request events (opened, synchronized, labeled, review requested)
- Scheduled cron jobs for nightly regression suites
- Manual workflow dispatch with custom inputs
- External repository dispatch for cross-repo orchestration
- Release creation for deployment pipelines
- Issue and comment events for ChatOps-style testing
This event-driven model enables advanced scenarios like running the full regression suite only when a PR is labeled "ready-for-testing", or triggering cross-browser tests only on release branches.
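The label-gated regression scenario can be sketched like this; the label name "ready-for-testing" and the test script are illustrative:

```yaml
on:
  pull_request:
    types: [labeled]

jobs:
  regression:
    # Run only when this specific label was just applied
    if: github.event.label.name == 'ready-for-testing'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm run test:regression
```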
Debugging and Monitoring
The Actions UI provides excellent visibility into workflow execution. Each step shows timing, logs, and exit codes. Failed steps are clearly highlighted, and you can re-run individual jobs or entire workflows.
For deeper debugging, you can enable step debug logging by setting the ACTIONS_STEP_DEBUG secret to true, SSH into runners using action-upterm, or download job logs for offline analysis. This transparency makes troubleshooting CI failures significantly easier than with opaque external CI systems.
Core Concepts: Workflows, Jobs, Steps, and Runners
Understanding GitHub Actions' architecture is essential for designing effective test automation pipelines. The hierarchy of workflows, jobs, steps, and runners defines how your tests execute.
Workflows: The Top-Level Container
A workflow is a configurable automated process defined by a YAML file in .github/workflows. Each repository can have multiple workflows serving different purposes - one for unit tests, another for E2E tests, a third for deployment.
Workflows are triggered by events: push, pull request, schedule, workflow dispatch, or external events. You control when workflows run using the on key:
on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]
  schedule:
    - cron: '0 2 * * *' # 2 AM daily
  workflow_dispatch: # Manual trigger

Each workflow runs independently. Failed tests in your E2E workflow don't block your deployment workflow, though you can create dependencies between workflows if needed.
Jobs: Parallel Execution Units
Jobs are the execution containers within a workflow. By default, jobs run in parallel on separate runners, maximizing throughput. Each job defines:
- The runner environment (ubuntu-latest, windows-latest, macos-latest)
- Dependencies on other jobs via needs
- A strategy matrix for multiplicative parallelism
- Environment variables and secrets
- The steps to execute
Jobs can depend on each other using the needs keyword. This creates a directed acyclic graph (DAG) where dependent jobs wait for their prerequisites:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Build application
        run: npm run build
  test:
    needs: build # Waits for build to succeed
    runs-on: ubuntu-latest
    steps:
      - name: Run tests
        run: npm test
  deploy:
    needs: test # Waits for test to succeed
    runs-on: ubuntu-latest
    steps:
      - name: Deploy
        run: ./deploy.sh

This dependency chain ensures you don't waste time testing a failed build or deploying untested code.
Steps: Individual Commands
Steps are the atomic units of work within a job. Each step runs in the runner's shell environment and can:
- Execute shell commands using run
- Use pre-built actions via uses
- Set environment variables
- Conditionally execute based on previous step results
Steps within a job run sequentially. A failed step (non-zero exit code) fails the entire job unless you use continue-on-error: true.
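For example, a flaky or experimental check can be allowed to fail without failing the job; the lint command below is illustrative:

```yaml
- name: Run experimental lint rules
  continue-on-error: true  # job proceeds even if this step fails
  run: npm run lint:experimental
```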
steps:
  - name: Checkout code
    uses: actions/checkout@v4
  - name: Setup Node.js
    uses: actions/setup-node@v4
    with:
      node-version: '20'
  - name: Install dependencies
    run: npm ci
  - name: Run tests
    run: npm test

The working directory persists between steps within a job, but not between jobs. If Job B needs artifacts from Job A, you must explicitly upload and download them.
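Passing a build output from one job to the next therefore looks like this (the dist/ path and build script are illustrative):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build
      # Persist the build output for downstream jobs
      - uses: actions/upload-artifact@v4
        with:
          name: dist
          path: dist/
  test:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Retrieve the artifact produced by the build job
      - uses: actions/download-artifact@v4
        with:
          name: dist
          path: dist/
      - run: npm test
```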
Runners: The Execution Environment
Runners are the virtual machines that execute your workflows. GitHub provides hosted runners for Linux (Ubuntu), Windows, and macOS with pre-installed software including Node.js, Python, Java, browsers, and common build tools.
Hosted runner specifications:
- ubuntu-latest: 2 CPU cores, 7 GB RAM, 14 GB SSD
- windows-latest: 2 CPU cores, 7 GB RAM, 14 GB SSD
- macos-latest: 3 CPU cores (M1), 7 GB RAM, 14 GB SSD
For specialized requirements like GPU testing, proprietary software, or higher resource needs, you can register self-hosted runners. These execute on your own infrastructure and provide unlimited minutes.
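A job targets self-hosted runners through labels; the gpu label and test script below are assumptions, since labels are whatever you assign when registering the runner:

```yaml
jobs:
  gpu-tests:
    # Matches a registered runner carrying all three labels
    runs-on: [self-hosted, linux, gpu]
    steps:
      - uses: actions/checkout@v4
      - run: npm run test:gpu
```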
Workflow Context and Expressions
GitHub Actions provides context objects containing workflow metadata accessible via expressions syntax ${{ }}:
- github.event_name: the event that triggered the workflow
- github.ref: branch or tag ref that triggered the workflow
- github.sha: commit SHA
- github.actor: user who triggered the workflow
- runner.os: operating system of the runner
- secrets.API_KEY: access to encrypted secrets
These contexts enable dynamic behavior. For example, deploying only on main branch:
- name: Deploy to production
  if: github.ref == 'refs/heads/main'
  run: ./deploy.sh

Concurrency Control
The concurrency key prevents simultaneous workflow runs, useful for avoiding race conditions in deployments or expensive test suites:
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

This configuration cancels in-progress runs when a new commit is pushed, saving resources and providing faster feedback on the latest code.
Setting Up Your First Test Workflow
Let's create a foundational test workflow that demonstrates core concepts. This example runs a Node.js test suite with proper setup, caching, and error handling.
Basic Workflow Structure
Create .github/workflows/test.yml in your repository:
name: Test Suite
on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]
jobs:
  test:
    name: Run Tests
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run unit tests
        run: npm test
      - name: Run integration tests
        run: npm run test:integration

This workflow triggers on pushes to main/develop and all pull requests targeting main. It checks out code, sets up Node.js 20 with npm cache, installs dependencies using npm ci (faster and more reliable than npm install in CI), then runs tests.
Adding Test Coverage
Extend the workflow to generate and upload coverage reports:
- name: Run tests with coverage
  run: npm test -- --coverage
- name: Upload coverage to Codecov
  uses: codecov/codecov-action@v4
  with:
    file: ./coverage/coverage-final.json
    fail_ci_if_error: true
- name: Upload coverage reports
  uses: actions/upload-artifact@v4
  with:
    name: coverage-report
    path: coverage/
    retention-days: 30

The codecov-action uploads coverage to Codecov for tracking trends over time. The upload-artifact action stores the full coverage report for download from the Actions UI.
Environment Variables
Define environment variables at workflow, job, or step level:
env:
  NODE_ENV: test
  API_URL: https://staging-api.example.com
jobs:
  test:
    runs-on: ubuntu-latest
    env:
      DATABASE_URL: postgresql://localhost/testdb
    steps:
      - name: Run API tests
        env:
          API_KEY: ${{ secrets.API_KEY }}
        run: npm run test:api

Secrets like API_KEY are stored in repository settings under Secrets and variables → Actions. They're encrypted at rest and never exposed in logs.
Conditional Execution
Use if conditionals to control step execution:
- name: Run E2E tests (PR only)
  if: github.event_name == 'pull_request'
  run: npm run test:e2e
- name: Notify on failure
  if: failure()
  uses: actions/github-script@v7
  with:
    script: |
      github.rest.issues.createComment({
        issue_number: context.issue.number,
        owner: context.repo.owner,
        repo: context.repo.repo,
        body: 'Tests failed! Please check the logs.'
      })

The failure() function returns true if any previous step failed. Combined with the github-script action, this posts a comment on the pull request when tests fail.
Timeout Protection
Prevent jobs from hanging indefinitely:
jobs:
  test:
    runs-on: ubuntu-latest
    timeout-minutes: 30 # Fail if job exceeds 30 minutes
    steps:
      - name: Run tests
        timeout-minutes: 20 # Per-step timeout
        run: npm test

Timeouts protect against infinite loops, deadlocks, or forgotten debug code that waits for user input.
Complete Starter Workflow
Here's a production-ready starting point incorporating best practices:
name: CI
on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
jobs:
  test:
    name: Test on Node ${{ matrix.node-version }}
    runs-on: ubuntu-latest
    timeout-minutes: 30
    strategy:
      matrix:
        node-version: [18, 20]
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run linter
        run: npm run lint
      - name: Run tests
        run: npm test -- --coverage
      - name: Upload coverage
        uses: codecov/codecov-action@v4
        if: matrix.node-version == 20
        with:
          fail_ci_if_error: false
      - name: Upload test results
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: test-results-node-${{ matrix.node-version }}
          path: |
            coverage/
            test-results/
          retention-days: 14

This workflow tests on both Node 18 and 20, uses concurrency control to cancel outdated runs, includes comprehensive timeouts, and uploads artifacts even when tests fail (if: always()).
Running Selenium Tests in CI
Selenium WebDriver requires specific setup in CI environments including browser installation, WebDriver management, and headless execution. GitHub Actions provides pre-installed browsers on hosted runners, simplifying Selenium integration.
Java + Selenium + Maven Workflow
For Java-based Selenium projects using Maven:
name: Selenium Tests
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  selenium-tests:
    name: Selenium Tests
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Java
        uses: actions/setup-java@v4
        with:
          distribution: 'temurin'
          java-version: '17'
          cache: 'maven'
      - name: Install Chrome
        uses: browser-actions/setup-chrome@latest
        with:
          chrome-version: stable
      - name: Display Chrome version
        run: chrome --version
      - name: Install dependencies
        run: mvn clean install -DskipTests
      - name: Run Selenium tests
        run: mvn test -Dtest=SeleniumTestSuite
      - name: Upload test reports
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: selenium-test-reports
          path: target/surefire-reports/
          retention-days: 14
      - name: Upload screenshots
        uses: actions/upload-artifact@v4
        if: failure()
        with:
          name: failure-screenshots
          path: screenshots/
          retention-days: 7

Key aspects:
- setup-java with the Temurin distribution (OpenJDK)
- Maven cache for faster dependency resolution
- browser-actions/setup-chrome ensures the latest stable Chrome
- Test reports uploaded even on failure (if: always())
- Screenshots uploaded only on failure for debugging
Python + Selenium + Pytest Workflow
For Python-based Selenium projects:
name: Selenium Python Tests
on:
  push:
    branches: [main]
  pull_request:
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ['3.10', '3.11', '3.12']
    steps:
      - uses: actions/checkout@v4
      - name: Setup Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
          cache: 'pip'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Setup Chrome and ChromeDriver
        uses: nanasess/setup-chromedriver@v2
      - name: Run Selenium tests
        run: |
          export DISPLAY=:99
          chromedriver --url-base=/wd/hub &
          pytest tests/selenium/ -v --junitxml=test-results.xml
      - name: Publish test results
        uses: EnricoMi/publish-unit-test-result-action@v2
        if: always()
        with:
          files: test-results.xml

The nanasess/setup-chromedriver action installs ChromeDriver matching the pre-installed Chrome version. Tests run with an explicit DISPLAY variable for headless execution.
JavaScript + Selenium + WebDriverIO
For WebDriverIO projects:
name: WebDriverIO Tests
on:
  push:
    branches: [main, develop]
  pull_request:
jobs:
  wdio-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run WebDriverIO tests
        run: npm run test:wdio
      - name: Upload Allure results
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: allure-results
          path: allure-results/
      - name: Generate Allure report
        uses: simple-elf/allure-report-action@v1.7
        if: always()
        with:
          allure_results: allure-results
          allure_report: allure-report
          gh_pages: gh-pages
          allure_history: allure-history
      - name: Deploy Allure report to GitHub Pages
        uses: peaceiris/actions-gh-pages@v3
        if: github.ref == 'refs/heads/main'
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: allure-history

This workflow generates Allure reports and publishes them to GitHub Pages for persistent test result history.
Using Selenium Grid with Docker
For parallel execution with Selenium Grid:
name: Selenium Grid Tests
on:
  push:
    branches: [main]
jobs:
  grid-tests:
    runs-on: ubuntu-latest
    services:
      selenium-hub:
        image: selenium/hub:4.16.0
        ports:
          - 4444:4444
      chrome:
        image: selenium/node-chrome:4.16.0
        env:
          SE_EVENT_BUS_HOST: selenium-hub
          SE_EVENT_BUS_PUBLISH_PORT: 4442
          SE_EVENT_BUS_SUBSCRIBE_PORT: 4443
      firefox:
        image: selenium/node-firefox:4.16.0
        env:
          SE_EVENT_BUS_HOST: selenium-hub
          SE_EVENT_BUS_PUBLISH_PORT: 4442
          SE_EVENT_BUS_SUBSCRIBE_PORT: 4443
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Wait for Selenium Grid
        run: |
          timeout 60 bash -c 'until curl -s http://localhost:4444/wd/hub/status | grep "ready"; do sleep 2; done'
      - name: Run tests against Grid
        env:
          SELENIUM_REMOTE_URL: http://localhost:4444/wd/hub
        run: npm run test:grid
      - name: Upload test videos
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: test-videos
          path: videos/

The services section starts Selenium Hub and browser nodes as Docker containers. Tests connect to http://localhost:4444 to execute across multiple browsers in parallel.
Running Playwright Tests in CI
Playwright is straightforward to run on GitHub Actions: a single command installs browsers and their system dependencies, and this is the setup Microsoft's official documentation recommends (the older microsoft/playwright-github-action is deprecated in favor of running the install command directly in your workflow).
Basic Playwright Workflow
name: Playwright Tests
on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]
jobs:
  test:
    name: Playwright Tests
    timeout-minutes: 60
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Install Playwright browsers
        run: npx playwright install --with-deps
      - name: Run Playwright tests
        run: npx playwright test
      - name: Upload Playwright report
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: playwright-report
          path: playwright-report/
          retention-days: 30

The --with-deps flag installs browser binaries and the system dependencies needed for Chromium, Firefox, and WebKit.
Playwright with Sharding for Parallel Execution
Playwright supports native test sharding for distributing tests across multiple workers:
name: Playwright Sharded Tests
on:
  push:
    branches: [main]
  pull_request:
jobs:
  test:
    name: Playwright Tests (Shard ${{ matrix.shardIndex }} of ${{ matrix.shardTotal }})
    timeout-minutes: 60
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        shardIndex: [1, 2, 3, 4]
        shardTotal: [4]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Install Playwright browsers
        run: npx playwright install --with-deps
      - name: Run Playwright tests (shard ${{ matrix.shardIndex }}/${{ matrix.shardTotal }})
        run: npx playwright test --shard=${{ matrix.shardIndex }}/${{ matrix.shardTotal }}
      - name: Upload blob report to GitHub Actions Artifacts
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: blob-report-${{ matrix.shardIndex }}
          path: blob-report
          retention-days: 1
  merge-reports:
    name: Merge Playwright Reports
    if: always()
    needs: [test]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Download blob reports from GitHub Actions Artifacts
        uses: actions/download-artifact@v4
        with:
          path: all-blob-reports
          pattern: blob-report-*
          merge-multiple: true
      - name: Merge into HTML Report
        run: npx playwright merge-reports --reporter html ./all-blob-reports
      - name: Upload HTML report
        uses: actions/upload-artifact@v4
        with:
          name: playwright-report
          path: playwright-report
          retention-days: 14

This configuration splits tests into 4 parallel shards, then merges reports in a final job. Setting fail-fast: false ensures all shards complete even if one fails.
Playwright with Multiple Projects
Test across different browsers using Playwright projects:
name: Playwright Multi-Browser Tests
on:
  push:
    branches: [main]
jobs:
  test:
    name: Playwright Tests - ${{ matrix.project }}
    timeout-minutes: 60
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        project: [chromium, firefox, webkit]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Install Playwright browsers
        run: npx playwright install --with-deps ${{ matrix.project }}
      - name: Run Playwright tests
        run: npx playwright test --project=${{ matrix.project }}
      - name: Upload test results
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: playwright-results-${{ matrix.project }}
          path: test-results/

Each matrix job installs only the required browser, reducing setup time.
Playwright with Component Testing
For component testing workflows:
name: Playwright Component Tests
on:
  push:
    branches: [main]
  pull_request:
jobs:
  component-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Install Playwright browsers
        run: npx playwright install --with-deps chromium
      - name: Run component tests
        run: npx playwright test --config=playwright-ct.config.ts
      - name: Upload trace
        uses: actions/upload-artifact@v4
        if: failure()
        with:
          name: playwright-traces
          path: test-results/**/trace.zip

Component tests are typically faster than E2E tests, making them ideal for pull request validation.
Playwright with Visual Regression Testing
Integrate visual comparison:
name: Playwright Visual Tests
on:
  push:
    branches: [main]
  pull_request:
jobs:
  visual-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Install Playwright
        run: npx playwright install --with-deps chromium
      - name: Run visual tests
        run: npx playwright test --grep @visual
      - name: Upload actual snapshots
        uses: actions/upload-artifact@v4
        if: failure()
        with:
          name: visual-diff
          path: test-results/**/actual/
      - name: Update snapshots (main branch only)
        if: github.ref == 'refs/heads/main' && failure()
        run: npx playwright test --update-snapshots

Visual tests capture screenshots and compare them against baseline images. Failed comparisons upload the actual vs. expected images for review.
Running Cypress Tests in CI
Cypress provides the official cypress-io/github-action which handles installation, caching, and execution with minimal configuration.
Basic Cypress Workflow
name: Cypress Tests
on:
  push:
    branches: [main, develop]
  pull_request:
jobs:
  cypress-run:
    name: Cypress Tests
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Cypress run
        uses: cypress-io/github-action@v6
        with:
          build: npm run build
          start: npm start
          wait-on: 'http://localhost:3000'
          wait-on-timeout: 120
      - name: Upload screenshots
        uses: actions/upload-artifact@v4
        if: failure()
        with:
          name: cypress-screenshots
          path: cypress/screenshots
      - name: Upload videos
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: cypress-videos
          path: cypress/videos

The cypress-io/github-action automatically:
- Installs npm dependencies
- Caches Cypress binary
- Runs build command
- Starts development server
- Waits for server to be ready
- Executes Cypress tests
- Records videos
Cypress with Parallel Execution
Split tests across multiple containers for faster execution:
name: Cypress Parallel Tests
on:
  push:
    branches: [main]
jobs:
  cypress-run:
    name: Cypress Tests - Container ${{ matrix.containers }}
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        containers: [1, 2, 3, 4]
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Cypress run
        uses: cypress-io/github-action@v6
        with:
          record: true
          parallel: true
          group: 'UI Tests'
          ci-build-id: '${{ github.sha }}-${{ github.workflow }}-${{ github.event_name }}'
        env:
          CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

Parallel execution requires Cypress Cloud (formerly Dashboard) for test orchestration and load balancing.
Cypress Component Testing
For React/Vue/Angular component tests:
name: Cypress Component Tests
on:
  push:
    branches: [main]
  pull_request:
jobs:
  component-tests:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Cypress component tests
        uses: cypress-io/github-action@v6
        with:
          component: true
      - name: Upload component test videos
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: component-videos
          path: cypress/videos

Component tests run faster than E2E tests and don't require a running server.
Cypress with Custom Browser
Test on specific browsers:
name: Cypress Multi-Browser Tests
on:
  push:
    branches: [main]
jobs:
  cypress-run:
    name: Cypress - ${{ matrix.browser }}
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        browser: [chrome, firefox, edge]
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Install Firefox
        if: matrix.browser == 'firefox'
        uses: browser-actions/setup-firefox@latest
      - name: Install Edge
        if: matrix.browser == 'edge'
        uses: browser-actions/setup-edge@latest
      - name: Cypress run
        uses: cypress-io/github-action@v6
        with:
          browser: ${{ matrix.browser }}
          start: npm start
          wait-on: 'http://localhost:3000'
      - name: Upload artifacts
        uses: actions/upload-artifact@v4
        if: failure()
        with:
          name: cypress-${{ matrix.browser }}-artifacts
          path: |
            cypress/screenshots
            cypress/videos

Each browser runs in a separate job for true parallel execution.
Cypress with Environment Configuration
Test against different environments:
name: Cypress Environment Tests
on:
  push:
    branches: [main]
jobs:
  cypress-run:
    name: Cypress - ${{ matrix.environment }}
    runs-on: ubuntu-latest
    strategy:
      matrix:
        environment: [staging, production]
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Cypress run
        uses: cypress-io/github-action@v6
        with:
          config-file: cypress.${{ matrix.environment }}.config.js
          spec: cypress/e2e/${{ matrix.environment }}/**/*.cy.js
        env:
          CYPRESS_BASE_URL: ${{ secrets[format('BASE_URL_{0}', matrix.environment)] }}
          CYPRESS_API_KEY: ${{ secrets[format('API_KEY_{0}', matrix.environment)] }}

This pattern uses the matrix to test multiple environments with environment-specific configuration files and secrets.
API Test Automation with Newman and REST Assured
API testing is crucial for validating backend services, microservices, and integration points. GitHub Actions integrates seamlessly with Newman for Postman collections and REST Assured for Java-based API tests.
Newman (Postman Collections) Workflow
name: API Tests with Newman
on:
  push:
    branches: [main, develop]
  pull_request:
  schedule:
    - cron: '0 */6 * * *' # Every 6 hours
jobs:
  api-tests:
    name: Newman API Tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
      - name: Install Newman
        run: |
          npm install -g newman
          npm install -g newman-reporter-htmlextra
      - name: Create results directory
        run: mkdir -p newman-results
      - name: Run API tests
        env:
          API_KEY: ${{ secrets.API_KEY }}
          BASE_URL: ${{ secrets.BASE_URL }}
        run: |
          newman run postman/collections/api-tests.json \
            --environment postman/environments/ci.json \
            --env-var "apiKey=$API_KEY" \
            --env-var "baseUrl=$BASE_URL" \
            --reporters cli,htmlextra,junit \
            --reporter-htmlextra-export newman-results/report.html \
            --reporter-junit-export newman-results/junit.xml \
            --bail \
            --timeout-request 10000
      - name: Publish test results
        uses: EnricoMi/publish-unit-test-result-action@v2
        if: always()
        with:
          files: newman-results/junit.xml
      - name: Upload HTML report
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: newman-report
          path: newman-results/
          retention-days: 30
      - name: Notify on failure
        if: failure()
        uses: slackapi/slack-github-action@v1
        with:
          channel-id: 'test-alerts'
          slack-message: 'API tests failed! Check report at ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}'
        env:
          SLACK_BOT_TOKEN: ${{ secrets.SLACK_BOT_TOKEN }}

This workflow:
- Runs on pushes, pull requests, and every 6 hours
- Installs Newman and HTML reporter
- Executes Postman collection with environment variables
- Generates HTML and JUnit reports
- Publishes test results to PR
- Notifies Slack on failures
Newman with Multiple Environments
Test APIs across different environments:
name: Multi-Environment API Tests
on:
  workflow_dispatch:
    inputs:
      environment:
        description: 'Environment to test'
        required: true
        type: choice
        options:
          - dev
          - staging
          - production
jobs:
  api-tests:
    name: API Tests - ${{ github.event.inputs.environment }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
      - name: Install Newman
        run: npm install -g newman newman-reporter-htmlextra
      - name: Run API tests
        run: |
          newman run postman/collections/api-tests.json \
            --environment postman/environments/${{ github.event.inputs.environment }}.json \
            --env-var "apiKey=${{ secrets[format('API_KEY_{0}', github.event.inputs.environment)] }}" \
            --reporters cli,htmlextra \
            --reporter-htmlextra-export newman-results/report-${{ github.event.inputs.environment }}.html
      - name: Upload report
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: newman-report-${{ github.event.inputs.environment }}
          path: newman-results/

The workflow_dispatch trigger enables manual execution with environment selection through the Actions UI.
REST Assured (Java) Workflow
For Java-based API testing with REST Assured:
name: REST Assured API Tests
on:
push:
branches: [main]
pull_request:
schedule:
- cron: '0 2 * * *' # Daily at 2 AM
jobs:
api-tests:
name: REST Assured Tests
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup Java
uses: actions/setup-java@v4
with:
distribution: 'temurin'
java-version: '17'
cache: 'maven'
- name: Run API tests
env:
API_BASE_URL: ${{ secrets.API_BASE_URL }}
API_KEY: ${{ secrets.API_KEY }}
run: mvn test -Dtest=ApiTestSuite
- name: Generate Allure report
if: always()
run: mvn allure:report
- name: Upload Allure results
uses: actions/upload-artifact@v4
if: always()
with:
name: allure-results
path: target/allure-results/
- name: Upload Surefire reports
uses: actions/upload-artifact@v4
if: always()
with:
name: surefire-reports
path: target/surefire-reports/
- name: Publish test results
uses: dorny/test-reporter@v1
if: always()
with:
name: API Test Results
path: target/surefire-reports/*.xml
reporter: java-junit
REST Assured tests execute via Maven, with test results published to the PR checks.
Contract Testing with Pact
For consumer-driven contract testing:
name: Pact Contract Tests
on:
push:
branches: [main]
pull_request:
jobs:
contract-tests:
name: Pact Contract Tests
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- name: Install dependencies
run: npm ci
- name: Run consumer tests
run: npm run test:pact:consumer
- name: Publish pacts to Pact Broker
if: github.ref == 'refs/heads/main'
run: npm run pact:publish
env:
PACT_BROKER_BASE_URL: ${{ secrets.PACT_BROKER_BASE_URL }}
PACT_BROKER_TOKEN: ${{ secrets.PACT_BROKER_TOKEN }}
- name: Can-i-deploy check
if: github.event_name == 'pull_request'
run: npx pact-broker can-i-deploy --pacticipant=consumer --version=${{ github.sha }} --to-environment=production
env:
PACT_BROKER_BASE_URL: ${{ secrets.PACT_BROKER_BASE_URL }}
PACT_BROKER_TOKEN: ${{ secrets.PACT_BROKER_TOKEN }}
Contract tests verify API compatibility between consumers and providers before deployment.
Parallel Test Execution with Matrix Strategy
The matrix strategy is GitHub Actions' most powerful feature for test parallelization. It automatically creates multiple jobs from a single definition, distributing work across runners for dramatic speed improvements.
Basic Matrix Configuration
name: Matrix Tests
on:
push:
branches: [main]
jobs:
test:
name: Test on ${{ matrix.os }} - Node ${{ matrix.node }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest, windows-latest, macos-latest]
node: [18, 20]
steps:
- uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: ${{ matrix.node }}
cache: 'npm'
- name: Install and test
run: |
npm ci
npm test
This creates 6 jobs (3 OS × 2 Node versions) running in parallel. The total execution time is roughly the time of the slowest job, not the sum of all jobs.
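The combination count is easy to sanity-check locally; this bash sketch enumerates the same six jobs the matrix above expands to:

```shell
# Enumerate the job combinations the os x node matrix expands to
count=0
for os in ubuntu-latest windows-latest macos-latest; do
  for node in 18 20; do
    echo "Test on $os - Node $node"
    count=$((count + 1))
  done
done
echo "total jobs: $count"
```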
Test Suite Sharding
Distribute test files across multiple workers:
name: Sharded Tests
on:
push:
branches: [main]
jobs:
test:
name: Test Suite - Shard ${{ matrix.shard }}
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
shard: [1, 2, 3, 4, 5]
total-shards: [5]
steps:
- uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- name: Install dependencies
run: npm ci
- name: Run tests (shard ${{ matrix.shard }}/${{ matrix.total-shards }})
run: |
npm test -- --shard=${{ matrix.shard }}/${{ matrix.total-shards }} \
--maxWorkers=2
- name: Upload coverage
uses: actions/upload-artifact@v4
with:
name: coverage-${{ matrix.shard }}
path: coverage/
merge-coverage:
name: Merge Coverage Reports
needs: test
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Download all coverage reports
uses: actions/download-artifact@v4
with:
path: coverage-reports
pattern: coverage-*
merge-multiple: false
- name: Merge coverage
run: npx nyc merge coverage-reports coverage-final.json
- name: Generate report
run: npx nyc report --reporter=html --reporter=text
- name: Upload merged coverage
uses: codecov/codecov-action@v4
with:
files: coverage-final.json
The fail-fast: false setting ensures all shards complete even if one fails, providing complete test feedback.
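Conceptually, a flag like --shard=2/5 deterministically partitions the ordered test file list across workers. A bash sketch of one simple scheme — round-robin by position; an illustration, not Jest's exact algorithm:

```shell
# Round-robin sharding: shard i of n takes files whose position % n == i % n
shard=2
total=5
files=$(printf 'test%02d.spec.js\n' 1 2 3 4 5 6 7 8 9 10 11 12)
selected=$(echo "$files" | awk -v i="$shard" -v n="$total" 'NR % n == i % n')
echo "$selected"
```

Because every shard sees the same ordered list, the partition is stable across runners with no coordination needed.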
Browser Matrix
Test across multiple browser and viewport combinations:
name: Cross-Browser Tests
on:
push:
branches: [main]
jobs:
test:
name: ${{ matrix.browser }} - ${{ matrix.viewport }}
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
browser: [chromium, firefox, webkit]
viewport: [desktop, tablet, mobile]
include:
- viewport: desktop
width: 1920
height: 1080
- viewport: tablet
width: 768
height: 1024
- viewport: mobile
width: 375
height: 667
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- name: Install dependencies
run: npm ci
- name: Install Playwright
run: npx playwright install --with-deps ${{ matrix.browser }}
- name: Run tests
run: |
npx playwright test \
--project=${{ matrix.browser }} \
--viewport-size=${{ matrix.width }},${{ matrix.height }}
- name: Upload results
uses: actions/upload-artifact@v4
if: always()
with:
name: results-${{ matrix.browser }}-${{ matrix.viewport }}
path: test-results/
This creates 9 jobs (3 browsers × 3 viewports) with custom viewport sizes for each.
Exclude and Include Patterns
Fine-tune matrix combinations:
strategy:
matrix:
os: [ubuntu-latest, windows-latest, macos-latest]
node: [18, 20]
exclude:
# Exclude Node 18 on macOS
- os: macos-latest
node: 18
include:
# Add Node 21 only on Ubuntu
- os: ubuntu-latest
node: 21
experimental: true
Dynamic Matrix from File
Generate matrix values from test files:
jobs:
discover-tests:
runs-on: ubuntu-latest
outputs:
matrix: ${{ steps.set-matrix.outputs.matrix }}
steps:
- uses: actions/checkout@v4
- name: Generate matrix
id: set-matrix
run: |
TESTS=$(find tests -name '*.spec.js' -type f | jq -R -s -c 'split("\n")[:-1]')
echo "matrix={\"test\":$TESTS}" >> $GITHUB_OUTPUT
test:
needs: discover-tests
runs-on: ubuntu-latest
strategy:
matrix: ${{ fromJson(needs.discover-tests.outputs.matrix) }}
steps:
- uses: actions/checkout@v4
- name: Run test
run: npm test -- ${{ matrix.test }}
This dynamically creates one job per test file, ideal for long-running integration tests.
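The jq pipeline from the discover step can be exercised locally on a sample file list to see the JSON it hands to fromJson:

```shell
# Reproduce the discover-tests jq transformation on sample input:
# -R reads raw lines, -s slurps them into one string, -c emits compact JSON
TESTS=$(printf 'tests/login.spec.js\ntests/cart.spec.js\n' \
  | jq -R -s -c 'split("\n")[:-1]')
echo "matrix={\"test\":$TESTS}"
```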
Controlling Parallelism
Limit concurrent jobs to manage resource usage:
strategy:
max-parallel: 3 # Run max 3 jobs simultaneously
matrix:
node: [16, 18, 20]
os: [ubuntu-latest, windows-latest]
Without max-parallel, all 6 jobs run concurrently. With max-parallel: 3, only 3 run at once, reducing resource consumption at the cost of longer total time.
Test Reporting and Artifacts Management
Effective reporting transforms raw test results into actionable insights. GitHub Actions provides multiple mechanisms for capturing, storing, and presenting test outcomes.
JUnit XML Reporting
Most CI systems understand JUnit XML format. Configure your test framework to output JUnit reports:
name: Test with JUnit Reports
on:
push:
branches: [main]
pull_request:
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- name: Install dependencies
run: npm ci
- name: Run tests with JUnit output
run: npm test -- --reporters=jest-junit
env:
JEST_JUNIT_OUTPUT_DIR: ./test-results
JEST_JUNIT_OUTPUT_NAME: junit.xml
- name: Publish test results
uses: EnricoMi/publish-unit-test-result-action@v2
if: always()
with:
files: test-results/junit.xml
check_name: Test Results
comment_title: Test Results
fail_on: errors
- name: Upload test results
uses: actions/upload-artifact@v4
if: always()
with:
name: test-results
path: test-results/
The publish-unit-test-result-action creates a rich test summary in PR checks with pass/fail counts, duration, and trends.
HTML Test Reports
Generate and publish HTML reports:
- name: Generate HTML report
if: always()
run: npm run test:report
- name: Upload HTML report
uses: actions/upload-artifact@v4
if: always()
with:
name: html-test-report
path: html-report/
retention-days: 30
- name: Deploy report to GitHub Pages
if: github.ref == 'refs/heads/main'
uses: peaceiris/actions-gh-pages@v3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: ./html-report
destination_dir: test-reports/${{ github.run_number }}
This uploads the report as an artifact and publishes it to GitHub Pages for persistent access.
Allure Reporting
Allure provides rich, interactive test reports:
- name: Run tests with Allure
run: npm test -- --reporter=allure
- name: Generate Allure report
if: always()
uses: simple-elf/allure-report-action@v1
with:
allure_results: allure-results
allure_report: allure-report
gh_pages: gh-pages
allure_history: allure-history
- name: Deploy Allure report to GitHub Pages
if: always()
uses: peaceiris/actions-gh-pages@v3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: allure-history
Artifact Storage Strategies
Upload different artifact types with appropriate retention:
- name: Upload test artifacts
uses: actions/upload-artifact@v4
if: always()
with:
name: test-artifacts-${{ github.run_id }}
path: |
test-results/
coverage/
logs/
retention-days: 30
- name: Upload screenshots (failures only)
uses: actions/upload-artifact@v4
if: failure()
with:
name: failure-screenshots
path: screenshots/
retention-days: 7
- name: Upload videos (always)
uses: actions/upload-artifact@v4
if: always()
with:
name: test-videos
path: videos/
retention-days: 14
Use shorter retention periods for large artifacts like videos to manage storage costs.
Test Summary in PR Comments
Post test summaries directly to pull requests:
- name: Comment test results on PR
if: github.event_name == 'pull_request' && always()
uses: actions/github-script@v7
with:
script: |
const fs = require('fs');
const summary = fs.readFileSync('test-summary.md', 'utf8');
github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: summary
});
Mochawesome HTML Reports
For Mocha/Cypress tests:
- name: Run tests
run: npm test -- --reporter mochawesome
- name: Merge Mochawesome reports
if: always()
run: npx mochawesome-merge mochawesome-report/*.json > merged-report.json
- name: Generate HTML report
if: always()
run: npx marge merged-report.json --reportDir public --inline
- name: Upload report
uses: actions/upload-artifact@v4
if: always()
with:
name: mochawesome-report
path: public/
Test Failure Notifications
Integrate with external services for failure alerts:
- name: Send Slack notification on failure
if: failure()
uses: slackapi/slack-github-action@v1
with:
channel-id: 'test-failures'
payload: |
{
"text": "Tests failed in ${{ github.repository }}",
"blocks": [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": "*Tests Failed*\n*Repo:* ${{ github.repository }}\n*Branch:* ${{ github.ref_name }}\n*Run:* <${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}|View Details>"
}
}
]
}
env:
SLACK_BOT_TOKEN: ${{ secrets.SLACK_BOT_TOKEN }}
Matrix Testing: Multi-Browser and Multi-OS
Cross-platform testing ensures your application works across user environments. GitHub Actions' matrix strategy combined with pre-installed browsers makes this straightforward.
Multi-OS Testing
Test on Linux, Windows, and macOS:
name: Cross-Platform Tests
on:
push:
branches: [main]
jobs:
test:
name: Test on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, windows-latest, macos-latest]
steps:
- uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- name: Install dependencies
run: npm ci
- name: Run tests
run: npm test
- name: Upload results
uses: actions/upload-artifact@v4
if: always()
with:
name: test-results-${{ matrix.os }}
path: test-results/
Multi-Browser Selenium Tests
Test across Chrome, Firefox, and Edge:
name: Multi-Browser Selenium Tests
on:
push:
branches: [main]
jobs:
test:
name: Selenium - ${{ matrix.browser }}
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
browser:
- chrome
- firefox
- edge
steps:
- uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- name: Setup Chrome
if: matrix.browser == 'chrome'
uses: browser-actions/setup-chrome@latest
- name: Setup Firefox
if: matrix.browser == 'firefox'
uses: browser-actions/setup-firefox@latest
- name: Setup Edge
if: matrix.browser == 'edge'
uses: browser-actions/setup-edge@latest
- name: Install dependencies
run: npm ci
- name: Run Selenium tests
env:
BROWSER: ${{ matrix.browser }}
run: npm run test:selenium
- name: Upload screenshots
uses: actions/upload-artifact@v4
if: failure()
with:
name: screenshots-${{ matrix.browser }}
path: screenshots/
Combined OS and Browser Matrix
Test all combinations:
name: Full Matrix Tests
on:
push:
branches: [main]
jobs:
test:
name: ${{ matrix.os }} - ${{ matrix.browser }}
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, windows-latest, macos-latest]
browser: [chrome, firefox]
exclude:
# Firefox on macOS uses different setup
- os: macos-latest
browser: firefox
steps:
- uses: actions/checkout@v4
- name: Setup Chrome
if: matrix.browser == 'chrome'
uses: browser-actions/setup-chrome@latest
- name: Setup Firefox
if: matrix.browser == 'firefox'
uses: browser-actions/setup-firefox@latest
- name: Run tests
run: npm run test:e2e -- --browser=${{ matrix.browser }}
This creates 5 jobs (3 OS × 2 browsers, minus 1 excluded combination). Note that the uses: key cannot contain expressions, so conditional steps select the correct setup action per browser.
Mobile Browser Testing
Test mobile viewports and user agents:
name: Mobile Browser Tests
on:
push:
branches: [main]
jobs:
test:
name: Mobile - ${{ matrix.device }}
runs-on: ubuntu-latest
strategy:
matrix:
device:
- name: iPhone 14 Pro
width: 393
height: 852
userAgent: 'Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X)'
- name: Samsung Galaxy S23
width: 360
height: 800
userAgent: 'Mozilla/5.0 (Linux; Android 13; SM-S911B)'
- name: iPad Pro
width: 1024
height: 1366
userAgent: 'Mozilla/5.0 (iPad; CPU OS 16_0 like Mac OS X)'
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- name: Install dependencies
run: npm ci
- name: Install Playwright
run: npx playwright install --with-deps chromium
- name: Run mobile tests
run: |
npx playwright test \
--viewport-size=${{ matrix.device.width }},${{ matrix.device.height }} \
--user-agent="${{ matrix.device.userAgent }}"
- name: Upload results
uses: actions/upload-artifact@v4
if: always()
with:
name: mobile-results-${{ matrix.device.name }}
path: test-results/
Browser Version Matrix
Test against specific browser versions:
name: Browser Version Tests
on:
push:
branches: [main]
jobs:
test:
name: Chrome ${{ matrix.chrome-version }}
runs-on: ubuntu-latest
strategy:
matrix:
chrome-version: [stable, beta, dev]
steps:
- uses: actions/checkout@v4
- name: Setup Chrome
uses: browser-actions/setup-chrome@latest
with:
chrome-version: ${{ matrix.chrome-version }}
- name: Verify Chrome version
run: google-chrome --version
- name: Run tests
run: npm run test:chrome
Caching Dependencies for Speed Optimization
Dependency caching dramatically reduces workflow execution time by reusing installed packages across runs. Proper caching strategy can reduce build times by 60% or more.
NPM Package Caching
The setup-node action includes built-in caching:
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm' # Automatically caches npm packages
- name: Install dependencies
run: npm ci
This caches ~/.npm keyed on the package-lock.json hash. When the lock file changes, the cache is invalidated and rebuilt.
Manual Cache Configuration
For more control, use actions/cache directly:
- name: Cache node modules
uses: actions/cache@v4
id: npm-cache
with:
path: |
~/.npm
node_modules
key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
restore-keys: |
${{ runner.os }}-node-
- name: Install dependencies
if: steps.npm-cache.outputs.cache-hit != 'true'
run: npm ci
The restore-keys list provides fallback cache entries when an exact match isn't found.
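hashFiles() makes the key content-addressed: any change to the lock file yields a new key. A rough local approximation — the real function combines per-file SHA-256 digests across the glob, so a single sha256sum stands in here for illustration:

```shell
# Approximate a ${{ runner.os }}-node-${{ hashFiles('package-lock.json') }} key
lockfile=$(mktemp)
printf '{"lockfileVersion": 3}\n' > "$lockfile"
hash=$(sha256sum "$lockfile" | cut -d' ' -f1)
key="Linux-node-$hash"
echo "$key"
rm -f "$lockfile"
```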
Playwright Browser Caching
Cache Playwright browsers to avoid re-downloading:
- name: Get Playwright version
id: playwright-version
run: echo "version=$(npm list @playwright/test --depth=0 --json | jq -r '.dependencies["@playwright/test"].version')" >> $GITHUB_OUTPUT
- name: Cache Playwright browsers
uses: actions/cache@v4
id: playwright-cache
with:
path: ~/.cache/ms-playwright
key: ${{ runner.os }}-playwright-${{ steps.playwright-version.outputs.version }}
- name: Install Playwright browsers
if: steps.playwright-cache.outputs.cache-hit != 'true'
run: npx playwright install --with-deps
Cypress Binary Caching
Cache Cypress binary separately from npm packages:
- name: Cache Cypress binary
uses: actions/cache@v4
with:
path: ~/.cache/Cypress
key: ${{ runner.os }}-cypress-${{ hashFiles('**/package-lock.json') }}
restore-keys: |
${{ runner.os }}-cypress-
The cypress-io/github-action handles this automatically, but manual caching provides more control.
Maven Dependency Caching
For Java projects:
- name: Setup Java
uses: actions/setup-java@v4
with:
distribution: 'temurin'
java-version: '17'
cache: 'maven'
- name: Build with Maven
run: mvn clean install
Alternatively, use manual caching:
- name: Cache Maven packages
uses: actions/cache@v4
with:
path: ~/.m2/repository
key: ${{ runner.os }}-maven-${{ hashFiles('**/pom.xml') }}
restore-keys: |
${{ runner.os }}-maven-Python Pip Caching
- name: Setup Python
uses: actions/setup-python@v5
with:
python-version: '3.11'
cache: 'pip'
- name: Install dependencies
run: pip install -r requirements.txt
Cross-Job Artifact Sharing
For build outputs needed by test jobs:
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Build application
run: npm run build
- name: Upload build artifacts
uses: actions/upload-artifact@v4
with:
name: build-output
path: dist/
retention-days: 1
test:
needs: build
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Download build artifacts
uses: actions/download-artifact@v4
with:
name: build-output
path: dist/
- name: Run tests
run: npm test
Cache Scope and Limits
Understanding cache behavior:
- Scope: Caches are branch-scoped. A run can restore caches created on its own branch or on the default branch, but not caches from sibling feature branches — restore-keys matching a main-branch cache is the usual fallback.
- Size limit: 10 GB per repository. Exceeding it triggers automatic eviction of the oldest caches.
- Expiration: Caches unused for 7 days are automatically deleted.
Optimizing Cache Keys
Use granular cache keys for better hit rates:
- name: Cache dependencies
uses: actions/cache@v4
with:
path: node_modules
key: ${{ runner.os }}-${{ hashFiles('.nvmrc') }}-${{ hashFiles('package-lock.json') }}
restore-keys: |
${{ runner.os }}-${{ hashFiles('.nvmrc') }}-
${{ runner.os }}-
This creates a hierarchy: exact match → same Node version → any cache for this OS.
Scheduled Test Runs and Monitoring
Scheduled workflows enable continuous monitoring, nightly regression suites, and periodic health checks independent of code changes.
Cron-Based Scheduling
Schedule tests using cron syntax:
name: Nightly Regression Tests
on:
schedule:
- cron: '0 2 * * *' # Daily at 2 AM UTC
workflow_dispatch: # Allow manual trigger
jobs:
regression:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup environment
uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- name: Install dependencies
run: npm ci
- name: Run full regression suite
run: npm run test:regression
timeout-minutes: 120
- name: Upload results
uses: actions/upload-artifact@v4
if: always()
with:
name: regression-results-${{ github.run_number }}
path: test-results/
- name: Notify on failure
if: failure()
uses: slackapi/slack-github-action@v1
with:
channel-id: 'qa-alerts'
slack-message: 'Nightly regression tests failed! Check run ${{ github.run_number }}'
env:
SLACK_BOT_TOKEN: ${{ secrets.SLACK_BOT_TOKEN }}
Cron syntax examples:
- 0 */6 * * * - Every 6 hours
- 0 9 * * 1-5 - Weekdays at 9 AM
- 0 0 * * 0 - Sundays at midnight
⚠️ Scheduled workflows run on the default branch only. Verify your cron syntax with crontab.guru before committing.
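The five cron fields are positional: minute, hour, day of month, month, day of week. Splitting an expression in the shell shows them directly — set -f is needed so the * fields aren't glob-expanded:

```shell
# Split a schedule cron expression into its five positional fields
set -f                         # disable globbing so '*' survives word splitting
cron='0 2 * * *'
set -- $cron                   # unquoted on purpose: word-split into $1..$5
minute=$1; hour=$2; dom=$3; month=$4; dow=$5
echo "minute=$minute hour=$hour day-of-month=$dom month=$month day-of-week=$dow"
set +f
```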
Multi-Environment Monitoring
Schedule health checks across environments:
name: Environment Health Checks
on:
schedule:
- cron: '0 */4 * * *' # Every 4 hours
jobs:
health-check:
name: Health Check - ${{ matrix.environment }}
runs-on: ubuntu-latest
strategy:
matrix:
environment: [staging, production]
steps:
- uses: actions/checkout@v4
- name: Run health checks
run: npm run health-check
env:
ENVIRONMENT: ${{ matrix.environment }}
API_URL: ${{ secrets[format('API_URL_{0}', matrix.environment)] }}
- name: Create issue on failure
if: failure()
uses: actions/github-script@v7
with:
script: |
github.rest.issues.create({
owner: context.repo.owner,
repo: context.repo.repo,
title: `Health check failed: ${{ matrix.environment }}`,
body: `Health check failed for ${{ matrix.environment }} environment.\n\nRun: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}`,
labels: ['bug', 'production-incident', 'automated']
})
Performance Monitoring
Track performance metrics over time:
name: Performance Monitoring
on:
schedule:
- cron: '0 3 * * *' # Daily at 3 AM
jobs:
performance:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- name: Install dependencies
run: npm ci
- name: Run Lighthouse CI
run: npm run lighthouse:ci
- name: Upload Lighthouse results
uses: actions/upload-artifact@v4
with:
name: lighthouse-results-${{ github.run_number }}
path: .lighthouseci/
- name: Comment performance results
uses: treosh/lighthouse-ci-action@v10
with:
uploadArtifacts: true
temporaryPublicStorage: true
Dependency Update Checks
Schedule dependency audits:
name: Weekly Dependency Audit
on:
schedule:
- cron: '0 9 * * 1' # Mondays at 9 AM
jobs:
audit:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '20'
- name: Run npm audit
run: npm audit --production --audit-level=high
- name: Check outdated packages
run: npm outdated || true
- name: Create issue if vulnerabilities found
if: failure()
uses: actions/github-script@v7
with:
script: |
github.rest.issues.create({
owner: context.repo.owner,
repo: context.repo.repo,
title: 'Security vulnerabilities detected',
body: 'npm audit found vulnerabilities. Please review and update dependencies.',
labels: ['security', 'dependencies']
})
Pull Request Quality Gates
Quality gates prevent merging code that breaks tests, fails linting, or doesn't meet coverage thresholds. GitHub's branch protection rules combined with required status checks create robust gates.
Required Status Checks
Configure branch protection in repository settings:
- Settings → Branches → Add rule for main
- Enable "Require status checks to pass before merging"
- Select workflows: test, lint, build
- Enable "Require branches to be up to date before merging"
Comprehensive PR Workflow
name: Pull Request Validation
on:
pull_request:
branches: [main]
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number }}
cancel-in-progress: true
jobs:
lint:
name: Lint
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- run: npm ci
- name: Run ESLint
run: npm run lint
type-check:
name: Type Check
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- run: npm ci
- name: TypeScript check
run: npm run type-check
unit-tests:
name: Unit Tests
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- run: npm ci
- name: Run unit tests
run: npm test -- --coverage
- name: Check coverage threshold
run: |
COVERAGE=$(cat coverage/coverage-summary.json | jq '.total.lines.pct')
if (( $(echo "$COVERAGE < 80" | bc -l) )); then
echo "Coverage $COVERAGE% is below 80% threshold"
exit 1
fi
- name: Upload coverage
uses: codecov/codecov-action@v4
with:
fail_ci_if_error: true
integration-tests:
name: Integration Tests
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- run: npm ci
- name: Run integration tests
run: npm run test:integration
e2e-tests:
name: E2E Tests
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- run: npm ci
- name: Install Playwright
run: npx playwright install --with-deps
- name: Run E2E tests
run: npm run test:e2e
- name: Upload Playwright report
uses: actions/upload-artifact@v4
if: always()
with:
name: playwright-report
path: playwright-report/
build:
name: Build
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- run: npm ci
- name: Build application
run: npm run build
- name: Upload build artifacts
uses: actions/upload-artifact@v4
with:
name: build-output
path: dist/
all-checks-passed:
name: All Checks Passed
if: always()
needs: [lint, type-check, unit-tests, integration-tests, e2e-tests, build]
runs-on: ubuntu-latest
steps:
- name: Check all jobs
run: |
if [[ "${{ needs.lint.result }}" != "success" ]] || \
[[ "${{ needs.type-check.result }}" != "success" ]] || \
[[ "${{ needs.unit-tests.result }}" != "success" ]] || \
[[ "${{ needs.integration-tests.result }}" != "success" ]] || \
[[ "${{ needs.e2e-tests.result }}" != "success" ]] || \
[[ "${{ needs.build.result }}" != "success" ]]; then
echo "One or more checks failed"
exit 1
fi
echo "All checks passed!"
Make All Checks Passed a required status check. The if: always() guarantees the gate job runs even when upstream jobs fail, so every PR gets a single, definitive pass/fail signal with complete test feedback.
Label-Based Test Triggering
Run expensive tests only when labeled:
name: Full Test Suite
on:
pull_request:
types: [opened, synchronize, labeled]
jobs:
full-tests:
if: contains(github.event.pull_request.labels.*.name, 'run-full-tests')
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Run full test suite
run: npm run test:all
Code Review Automation
Auto-assign reviewers and add labels:
name: PR Automation
on:
pull_request:
types: [opened]
jobs:
automate:
runs-on: ubuntu-latest
steps:
- name: Auto-assign reviewers
uses: kentaro-m/auto-assign-action@v1
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- name: Label based on files changed
uses: actions/labeler@v5
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- name: Add PR size label
uses: codelytv/pr-size-labeler@v1
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
xs_label: 'size/xs'
xs_max_size: 10
s_label: 'size/s'
s_max_size: 100
m_label: 'size/m'
m_max_size: 500
l_label: 'size/l'
l_max_size: 1000
xl_label: 'size/xl'
Secrets and Environment Management
Secure secrets management is critical for CI/CD pipelines. GitHub provides encrypted secrets at multiple scopes with fine-grained access control.
Repository Secrets
Store secrets in Settings → Secrets and variables → Actions:
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Deploy to production
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
DATABASE_URL: ${{ secrets.DATABASE_URL }}
run: ./deploy.sh
Secrets are encrypted at rest and masked in logs. Never echo secrets or pass them to untrusted actions.
Environment Secrets
Create environments (Settings → Environments) with environment-specific secrets:
jobs:
deploy-staging:
runs-on: ubuntu-latest
environment: staging
steps:
- name: Deploy
env:
API_URL: ${{ secrets.API_URL }} # staging API_URL
run: ./deploy.sh
deploy-production:
runs-on: ubuntu-latest
environment: production # Can require approvals
steps:
- name: Deploy
env:
API_URL: ${{ secrets.API_URL }} # production API_URL
run: ./deploy.sh
Environment protection rules can require manual approval before running jobs.
Dynamic Secret Selection
Select secrets based on matrix or inputs:
jobs:
test:
runs-on: ubuntu-latest
strategy:
matrix:
environment: [dev, staging, production]
steps:
- name: Run tests
env:
API_KEY: ${{ secrets[format('API_KEY_{0}', matrix.environment)] }}
BASE_URL: ${{ secrets[format('BASE_URL_{0}', matrix.environment)] }}
run: npm test
This requires secrets named API_KEY_dev, API_KEY_staging, and API_KEY_production.
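The secrets[format(...)] indexing resolves a secret name at runtime from the matrix value. An equivalent name-assembly in plain bash — illustration only, since real secret values never leave the secrets context:

```shell
# Mirror ${{ secrets[format('API_KEY_{0}', matrix.environment)] }}:
# build the secret name from the matrix value for each environment leg
for environment in dev staging production; do
  secret_name="API_KEY_${environment}"
  echo "looking up secret: $secret_name"
done
```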
HashiCorp Vault Integration
For enterprise secret management:
jobs:
test:
runs-on: ubuntu-latest
steps:
- name: Import secrets from Vault
uses: hashicorp/vault-action@v2
with:
url: https://vault.company.com
token: ${{ secrets.VAULT_TOKEN }}
secrets: |
secret/data/ci/api API_KEY ;
secret/data/ci/database DB_URL
- name: Run tests
env:
API_KEY: ${{ env.API_KEY }}
DATABASE_URL: ${{ env.DB_URL }}
run: npm test
AWS Secrets Manager
Retrieve secrets from AWS:
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Get secrets from AWS Secrets Manager
uses: aws-actions/aws-secretsmanager-get-secrets@v1
with:
secret-ids: |
prod/api-key
prod/database-url
parse-json-secrets: true
- name: Use secrets
env:
API_KEY: ${{ env.PROD_API_KEY }}
run: npm test
Least Privilege Principle
Limit secret access using environments and required reviewers:
jobs:
test-staging:
runs-on: ubuntu-latest
environment: staging # No approvals required
steps:
- run: npm run test:staging
env:
API_KEY: ${{ secrets.API_KEY }}
test-production:
runs-on: ubuntu-latest
environment: production # Requires approval from security team
steps:
- run: npm run test:production
env:
API_KEY: ${{ secrets.API_KEY }}
Configure protection rules in the environment settings to require reviews before accessing production secrets.
Detecting Secret Leaks
Prevent accidental secret exposure:
- name: Check for secrets in code
uses: trufflesecurity/trufflehog@v3
with:
path: ./
base: ${{ github.event.repository.default_branch }}
head: HEAD
Advanced Patterns: Reusable Workflows and Composite Actions
As your testing infrastructure grows, reusable workflows and composite actions eliminate duplication and standardize practices across repositories.
Reusable Workflows
Create a reusable test workflow in .github/workflows/reusable-test.yml:
name: Reusable Test Workflow
on:
workflow_call:
inputs:
node-version:
required: false
type: string
default: '20'
test-command:
required: false
type: string
default: 'npm test'
secrets:
npm-token:
required: false
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: ${{ inputs.node-version }}
cache: 'npm'
- name: Configure npm authentication
if: secrets.npm-token != ''
run: echo "//registry.npmjs.org/:_authToken=${{ secrets.npm-token }}" > ~/.npmrc
- name: Install dependencies
run: npm ci
- name: Run tests
run: ${{ inputs.test-command }}
- name: Upload results
uses: actions/upload-artifact@v4
if: always()
with:
name: test-results
path: test-results/
Call this workflow from other repositories:
name: CI
on:
push:
branches: [main]
pull_request:
jobs:
test:
uses: org-name/shared-workflows/.github/workflows/reusable-test.yml@main
with:
node-version: '20'
test-command: 'npm run test:coverage'
secrets:
npm-token: ${{ secrets.NPM_TOKEN }}
Composite Actions
Create a composite action for common setup in .github/actions/setup-test-env/action.yml:
name: 'Setup Test Environment'
description: 'Sets up Node.js, caches dependencies, and installs packages'
inputs:
node-version:
description: 'Node.js version'
required: false
default: '20'
install-playwright:
description: 'Install Playwright browsers'
required: false
default: 'false'
runs:
using: 'composite'
steps:
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: ${{ inputs.node-version }}
cache: 'npm'
- name: Install dependencies
shell: bash
run: npm ci
- name: Install Playwright browsers
if: inputs.install-playwright == 'true'
shell: bash
run: npx playwright install --with-deps
Use the composite action:
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup environment
        uses: ./.github/actions/setup-test-env
        with:
          node-version: '20'
          install-playwright: 'true'

      - name: Run tests
        run: npm test

Matrix with Reusable Workflows
Combine matrix strategy with reusable workflows:
name: Multi-Version Tests

on:
  push:
    branches: [main]

jobs:
  test:
    strategy:
      matrix:
        node: ['18', '20']  # quoted so each value matches the workflow's string-typed input
    uses: ./.github/workflows/reusable-test.yml
    with:
      node-version: ${{ matrix.node }}

Workflow Templates
Create organization-wide workflow templates in a .github repository:
- Create .github/workflow-templates/node-ci.yml
- Add .github/workflow-templates/node-ci.properties.json:

{
  "name": "Node.js CI",
  "description": "Standard Node.js testing workflow",
  "iconName": "node",
  "categories": ["JavaScript"],
  "filePatterns": ["package.json"]
}

These templates appear in the "New workflow" UI for all organization repositories.
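The template file itself is an ordinary workflow, with one addition: it can use the $default-branch placeholder, which GitHub substitutes with the adopting repository's default branch. A minimal sketch of what node-ci.yml might contain (the job contents here are illustrative, not prescribed):

```yaml
# .github/workflow-templates/node-ci.yml (sketch)
name: Node.js CI

on:
  push:
    branches: [$default-branch]   # replaced with the repo's default branch on adoption
  pull_request:
    branches: [$default-branch]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - run: npm ci
      - run: npm test
```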
Conditional Reusable Workflows
Only run expensive workflows when needed:
jobs:
  check-changes:
    runs-on: ubuntu-latest
    outputs:
      frontend-changed: ${{ steps.filter.outputs.frontend }}
    steps:
      - uses: actions/checkout@v4
      - uses: dorny/paths-filter@v2
        id: filter
        with:
          filters: |
            frontend:
              - 'src/**'
              - 'package.json'

  frontend-tests:
    needs: check-changes
    if: needs.check-changes.outputs.frontend-changed == 'true'
    uses: ./.github/workflows/frontend-tests.yml

Shared Setup Actions
Publish common actions to the Marketplace or use from a shared repository:
- name: Setup test infrastructure
  uses: org-name/shared-actions/setup-testing@v1
  with:
    framework: 'playwright'
    browsers: 'chromium,firefox'

Troubleshooting Common CI Failures
Understanding common failure patterns and debugging techniques accelerates incident resolution and improves pipeline reliability.
Flaky Tests
Symptoms: Tests pass locally but fail intermittently in CI.
Common causes and solutions:
Timing Issues
// Problem: fixed sleeps don't account for CI performance variance
await page.click('#submit');
await sleep(1000);
await expect(page.locator('#result')).toBeVisible();

// Solution: use explicit waits
await page.click('#submit');
await page.locator('#result').waitFor({ state: 'visible', timeout: 10000 });
await expect(page.locator('#result')).toBeVisible();

Network Timeouts
Increase timeouts in CI environments:
- name: Run E2E tests
  run: npx playwright test
  env:
    # Custom variables: read them in playwright.config.js,
    # e.g. timeout: Number(process.env.PLAYWRIGHT_TIMEOUT)
    PLAYWRIGHT_TIMEOUT: 60000  # 60 seconds per test
    PLAYWRIGHT_NAVIGATION_TIMEOUT: 30000

Parallel Test Pollution
Tests interfere when run concurrently:
# Run tests serially for debugging
- name: Run tests serially
  run: npm test -- --workers=1   # Playwright flag; Jest uses --runInBand

Or isolate test data using unique identifiers:
// Generate unique test data per test
const testId = `test-${Date.now()}-${Math.random()}`;
const email = `user-${testId}@example.com`;

Headless vs Headed Differences
Some UI tests behave differently in headless mode. Run headed under a virtual display to reproduce:

- name: Run in headed mode for debugging
  run: xvfb-run --auto-servernum npx playwright test --headed

Retry Flaky Tests
Configure automatic retries:
- name: Run tests with retries
  run: npx playwright test --retries=2

Playwright configuration:
// playwright.config.js
export default {
  retries: process.env.CI ? 2 : 0,
};

Browser-Specific Failures
Symptoms: Tests pass in Chrome but fail in Firefox/Safari.
- name: Run tests on specific browser for debugging
  run: npx playwright test --project=firefox --debug

- name: Upload browser-specific screenshots
  uses: actions/upload-artifact@v4
  if: failure()
  with:
    name: firefox-failures
    path: test-results/**/firefox-*.png

Out of Memory Errors
Symptoms: JavaScript heap out of memory or similar.
- name: Run tests with increased memory
  run: NODE_OPTIONS="--max-old-space-size=4096" npm test

Or reduce parallelism:
- name: Run tests with fewer workers
  run: npm test -- --maxWorkers=2

Permission Denied Errors
Make scripts executable:
- name: Make scripts executable
  run: chmod +x ./scripts/*.sh

- name: Run setup script
  run: ./scripts/setup.sh

Cache Corruption
Symptoms: Tests fail after dependency updates.
- name: Clear cache and reinstall
  run: |
    rm -rf node_modules ~/.npm ~/.cache
    npm ci

Or invalidate caches manually in the Actions UI.
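A lighter-weight alternative is to version your cache key, so that bumping a single value abandons every stale entry without touching the UI. A sketch using actions/cache, where CACHE_VERSION is a hypothetical repository variable you would define yourself under Settings > Variables:

```yaml
- name: Cache npm downloads
  uses: actions/cache@v4
  with:
    path: ~/.npm
    # Bump CACHE_VERSION (e.g. v1 -> v2) to invalidate all existing cache entries
    key: ${{ runner.os }}-npm-${{ vars.CACHE_VERSION }}-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      ${{ runner.os }}-npm-${{ vars.CACHE_VERSION }}-
```

Because the version appears in both key and restore-keys, a bump prevents partial restores from the corrupted generation as well.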
Debugging Techniques
Enable Debug Logging
- name: Run tests with debug output
  run: npm test
  env:
    DEBUG: 'playwright:*'
    NODE_DEBUG: 'http,https'

SSH into Runner
Use action-upterm for interactive debugging:
- name: Setup upterm session
  uses: lhotari/action-upterm@v1
  if: failure()
  with:
    limit-access-to-actor: true

Conditional Debugging Steps
- name: Debug on failure
  if: failure()
  run: |
    echo "::group::Environment Info"
    node --version
    npm --version
    echo "::endgroup::"
    echo "::group::Installed Packages"
    npm list --depth=0
    echo "::endgroup::"
    echo "::group::Process List"
    ps aux
    echo "::endgroup::"

Artifacts for Debugging
Upload comprehensive debugging information:
- name: Upload debug artifacts
  uses: actions/upload-artifact@v4
  if: failure()
  with:
    name: debug-artifacts
    path: |
      test-results/
      screenshots/
      videos/
      logs/
      *.log

Job Summaries
Create custom job summaries with debugging info:
- name: Generate test summary
  if: always()
  run: |
    echo "## Test Results" >> $GITHUB_STEP_SUMMARY
    echo "- Total: $(jq '.numTotalTests' test-results.json)" >> $GITHUB_STEP_SUMMARY
    echo "- Passed: $(jq '.numPassedTests' test-results.json)" >> $GITHUB_STEP_SUMMARY
    echo "- Failed: $(jq '.numFailedTests' test-results.json)" >> $GITHUB_STEP_SUMMARY
    if [ -f failures.txt ]; then
      echo "### Failed Tests" >> $GITHUB_STEP_SUMMARY
      cat failures.txt >> $GITHUB_STEP_SUMMARY
    fi

Workflow Debugging
Use act to run workflows locally:
# Install act
brew install act

# Run workflow locally
act push -j test

# Run with secrets
act -s GITHUB_TOKEN=xxx push

Service Container Issues
Debug database/service connectivity:
services:
  postgres:
    image: postgres:15
    env:
      POSTGRES_PASSWORD: postgres
    options: >-
      --health-cmd pg_isready
      --health-interval 10s
      --health-timeout 5s
      --health-retries 5
    ports:
      - 5432:5432

steps:
  - name: Verify database connection
    run: |
      for i in {1..30}; do
        if pg_isready -h localhost -p 5432; then
          echo "Database ready"
          break
        fi
        echo "Waiting for database..."
        sleep 2
      done
Frequently Asked Questions (FAQs)

- How do I run GitHub Actions workflows locally for testing?
- What's the difference between actions/cache and the setup-node cache parameter?
- How do I prevent GitHub Actions from running on documentation-only changes?
- Why are my Playwright/Cypress tests slower in CI than locally?
- How do I run different tests on pull requests vs main branch pushes?
- What's the best way to handle flaky tests in GitHub Actions?
- How do I securely use secrets in pull requests from forks?
- Can I run GitHub Actions workflows on a schedule only if code has changed?