
PyTest Complete Guide: Python Testing Made Powerful and Simple
PyTest has become the de facto standard for Python testing, surpassing the built-in unittest module with its simpler syntax, powerful fixtures, and extensive plugin ecosystem. Whether you're testing a small script or a large application, PyTest scales elegantly while keeping your test code clean and maintainable.
This guide covers PyTest from basic tests to advanced patterns that will transform how you approach Python testing.
Why PyTest?
PyTest offers significant advantages over unittest:
- Simple syntax: No boilerplate classes or special methods
- Better assertions: Plain `assert` with detailed failure messages
- Powerful fixtures: Dependency injection for test setup
- Parametrization: Built-in data-driven testing
- Rich plugin ecosystem: 800+ plugins available
- Parallel execution: Via pytest-xdist plugin
```python
# unittest style
import unittest

class TestMath(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

# PyTest style - much simpler
def test_addition():
    assert 1 + 1 == 2
```
Installation and Setup
Installation
```bash
# Install PyTest
pip install pytest

# Verify installation
pytest --version

# Install common plugins
pip install pytest-cov pytest-xdist pytest-html
```
Project Structure
```text
project/
├── src/
│   └── myapp/
│       ├── __init__.py
│       └── calculator.py
├── tests/
│   ├── __init__.py
│   ├── conftest.py          # Shared fixtures
│   ├── test_calculator.py
│   └── integration/
│       └── test_api.py
├── pytest.ini               # PyTest configuration
└── requirements-dev.txt
```
Writing Your First Tests
Basic Test
```python
# tests/test_calculator.py
def test_addition():
    assert 1 + 1 == 2

def test_subtraction():
    assert 5 - 3 == 2

def test_string_concat():
    result = "Hello" + " " + "World"
    assert result == "Hello World"
```
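Running `pytest` against this file produces output along these lines (illustrative; the exact header and timings vary by version and platform):
```text
$ pytest tests/test_calculator.py
======================== test session starts ========================
collected 3 items

tests/test_calculator.py ...                                   [100%]

========================= 3 passed in 0.01s =========================
```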
Running Tests
```bash
# Run all tests
pytest

# Run specific file
pytest tests/test_calculator.py

# Run specific test
pytest tests/test_calculator.py::test_addition

# Verbose output
pytest -v

# Show print statements
pytest -s

# Stop on first failure
pytest -x

# Run last failed tests
pytest --lf
```
Test Discovery
PyTest automatically discovers tests based on naming conventions:
- Files: `test_*.py` or `*_test.py`
- Functions: `test_*`
- Classes: `Test*` (no `__init__` method)
- Methods: `test_*`
```python
# All these are discovered automatically
def test_function():
    pass

class TestCalculator:
    def test_add(self):
        pass

    def test_subtract(self):
        pass
```
Assertions
PyTest uses plain assert statements with introspection for detailed failure messages.
Basic Assertions
```python
def test_assertions():
    # Equality
    assert 1 + 1 == 2
    assert "hello" == "hello"

    # Truthiness
    assert True
    assert [1, 2, 3]  # Non-empty is truthy
    assert not []     # Empty is falsy

    # Membership
    assert 2 in [1, 2, 3]
    assert "hello" in "hello world"

    # Identity
    a = [1, 2, 3]
    b = a
    assert a is b

    # Comparison
    assert 5 > 3
    assert 3 <= 3
```
Exception Testing
```python
import pytest

def test_raises_exception():
    with pytest.raises(ValueError):
        int("not a number")

def test_exception_message():
    with pytest.raises(ValueError) as exc_info:
        raise ValueError("Invalid input")
    assert "Invalid" in str(exc_info.value)

def test_exception_match():
    with pytest.raises(ValueError, match=r"Invalid.*"):
        raise ValueError("Invalid input provided")
```
Approximate Comparisons
```python
import pytest

def test_floating_point():
    assert 0.1 + 0.2 == pytest.approx(0.3)
    assert [0.1, 0.2] == pytest.approx([0.1, 0.2], rel=1e-3)
```
PyTest rewrites assert statements to provide detailed failure information. You'll see the actual values that caused the failure, not just "assertion failed."
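For instance, if a test compared the result of a hypothetical `add()` helper against the wrong value, the report spells out the intermediate values (output is illustrative; formatting varies slightly across PyTest versions):
```text
    def test_addition():
>       assert add(2, 2) == 5
E       assert 4 == 5
E        +  where 4 = add(2, 2)

tests/test_calculator.py:7: AssertionError
```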
Fixtures
Fixtures provide a way to set up test preconditions and share resources across tests.
Basic Fixtures
```python
import pytest

@pytest.fixture
def sample_data():
    return {"name": "John", "age": 30}

def test_with_fixture(sample_data):
    assert sample_data["name"] == "John"
    assert sample_data["age"] == 30
```
Fixture Scopes
@pytest.fixture(scope="function") # Default: runs for each test
def function_fixture():
return create_resource()
@pytest.fixture(scope="class") # Once per test class
def class_fixture():
return create_resource()
@pytest.fixture(scope="module") # Once per module
def module_fixture():
return create_resource()
@pytest.fixture(scope="session") # Once per test session
def session_fixture():
return create_resource()Setup and Teardown
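A quick way to see the difference: a session-scoped fixture is built once and reused across tests, while a function-scoped one is rebuilt for every test. A minimal sketch (the counter-based fixture here is our own illustration, not from the original):
```python
import itertools

import pytest

_counter = itertools.count(1)

@pytest.fixture(scope="session")
def shared_resource():
    # Built only once per session, so every test sees the same value
    return next(_counter)

def test_first(shared_resource):
    assert shared_resource == 1

def test_second(shared_resource):
    # Still 1: the session-scoped fixture was not rebuilt
    assert shared_resource == 1
```
With `scope="function"` instead, the second test would see 2.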
Setup and Teardown
```python
@pytest.fixture
def database():
    # Setup
    db = Database()
    db.connect()

    yield db  # Provide the fixture value

    # Teardown (runs after test completes)
    db.disconnect()

def test_query(database):
    result = database.query("SELECT * FROM users")
    assert len(result) > 0
```
Fixture Dependencies
```python
@pytest.fixture
def app_config():
    return {"db_url": "sqlite:///:memory:"}

@pytest.fixture
def database(app_config):  # Depends on app_config
    db = Database(app_config["db_url"])
    db.connect()
    yield db
    db.disconnect()

@pytest.fixture
def user_service(database):  # Depends on database
    return UserService(database)

def test_user_creation(user_service):
    user = user_service.create_user("john@test.com")
    assert user.email == "john@test.com"
```
Shared Fixtures in conftest.py
```python
# tests/conftest.py - fixtures available to all tests
import pytest

@pytest.fixture
def api_client():
    client = APIClient()
    client.authenticate()
    yield client
    client.logout()

@pytest.fixture
def test_user():
    return {"email": "test@example.com", "name": "Test User"}
```
Auto-use Fixtures
```python
import logging

import pytest

@pytest.fixture(autouse=True)
def setup_logging():
    """Runs automatically before each test"""
    logging.basicConfig(level=logging.DEBUG)
    yield
    logging.shutdown()
```
Parametrization
Run the same test with different inputs.
Basic Parametrization
```python
import pytest

@pytest.mark.parametrize("input,expected", [
    (1, 2),
    (2, 4),
    (3, 6),
    (4, 8),
])
def test_double(input, expected):
    assert input * 2 == expected
```
Multiple Parameters
@pytest.mark.parametrize("a,b,expected", [
(1, 2, 3),
(5, 5, 10),
(-1, 1, 0),
(0, 0, 0),
])
def test_addition(a, b, expected):
assert a + b == expectedCombining Parametrize Decorators
@pytest.mark.parametrize("x", [1, 2])
@pytest.mark.parametrize("y", [10, 20])
def test_combinations(x, y):
# Runs 4 times: (1,10), (1,20), (2,10), (2,20)
assert x * y in [10, 20, 20, 40]Parametrizing Fixtures
```python
@pytest.fixture(params=["chrome", "firefox", "safari"])
def browser(request):
    driver = create_driver(request.param)
    yield driver
    driver.quit()

def test_login(browser):  # Runs 3 times, once per browser
    browser.get("https://example.com/login")
    assert "Login" in browser.title
```
IDs for Test Cases
@pytest.mark.parametrize("email,valid", [
pytest.param("user@example.com", True, id="valid-email"),
pytest.param("invalid-email", False, id="no-at-symbol"),
pytest.param("@nodomain.com", False, id="no-local-part"),
])
def test_email_validation(email, valid):
assert validate_email(email) == validMarkers
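The ids become part of each test's name, so `pytest -v` output reads like this (illustrative):
```text
test_email_validation[valid-email] PASSED
test_email_validation[no-at-symbol] PASSED
test_email_validation[no-local-part] PASSED
```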
Markers
Markers categorize and control test execution.
Built-in Markers
```python
import sys

import pytest

@pytest.mark.skip(reason="Not implemented yet")
def test_future_feature():
    pass

@pytest.mark.skipif(sys.version_info < (3, 10), reason="Requires Python 3.10+")
def test_new_python_feature():
    pass

@pytest.mark.xfail(reason="Known bug, fix in progress")
def test_known_bug():
    assert buggy_function() == expected

@pytest.mark.timeout(5)  # Requires pytest-timeout
def test_slow_operation():
    pass
```
Custom Markers
```ini
# pytest.ini
[pytest]
markers =
    slow: marks tests as slow
    integration: integration tests
    smoke: smoke tests
```
```python
# tests/test_example.py
import pytest

@pytest.mark.slow
def test_large_dataset():
    pass

@pytest.mark.integration
def test_database_connection():
    pass

@pytest.mark.smoke
def test_basic_functionality():
    pass
```
Running Marked Tests
```bash
# Run only smoke tests
pytest -m smoke

# Run slow and integration tests
pytest -m "slow or integration"

# Exclude slow tests
pytest -m "not slow"
```
Configuration
pytest.ini
```ini
[pytest]
# Test discovery
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*

# Output
addopts = -v --tb=short

# Markers
markers =
    slow: marks tests as slow
    integration: integration tests

# Warnings
filterwarnings =
    ignore::DeprecationWarning
```
pyproject.toml
```toml
[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py"]
addopts = "-v --cov=src"
markers = [
    "slow: marks tests as slow",
    "integration: integration tests",
]
```
Command Line Options
```bash
# Coverage report
pytest --cov=src --cov-report=html

# Parallel execution (requires pytest-xdist)
pytest -n auto  # Use all CPU cores

# Generate HTML report (requires pytest-html)
pytest --html=report.html

# Fail fast
pytest -x

# Run specific markers
pytest -m "not slow"

# Output to file
pytest --junitxml=results.xml
```
Plugins
Popular Plugins
| Plugin | Purpose |
|---|---|
| pytest-cov | Code coverage |
| pytest-xdist | Parallel execution |
| pytest-html | HTML reports |
| pytest-mock | Mocking utilities |
| pytest-asyncio | Async test support |
| pytest-django | Django integration |
| pytest-flask | Flask integration |
Using pytest-mock
```python
def test_with_mock(mocker):
    mock_api = mocker.patch("myapp.api.fetch_data")
    mock_api.return_value = {"status": "success"}

    result = process_data()

    mock_api.assert_called_once()
    assert result["status"] == "success"
```
Using pytest-cov
```bash
# Generate coverage report
pytest --cov=myapp --cov-report=term-missing

# HTML report
pytest --cov=myapp --cov-report=html

# Fail if coverage below threshold
pytest --cov=myapp --cov-fail-under=80
```
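Using pytest-asyncio
Async code needs the pytest-asyncio plugin from the table above. A minimal sketch, assuming the plugin is installed; `fetch_greeting` is a made-up stand-in for real async code:
```python
import asyncio

import pytest

async def fetch_greeting():
    # Stand-in for a real awaitable (e.g., an HTTP call)
    await asyncio.sleep(0)
    return "hello"

@pytest.mark.asyncio
async def test_fetch_greeting():
    assert await fetch_greeting() == "hello"
```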
Best Practices
Organize Tests by Feature
```text
tests/
├── conftest.py
├── unit/
│   ├── test_models.py
│   └── test_utils.py
├── integration/
│   ├── test_api.py
│   └── test_database.py
└── e2e/
    └── test_workflows.py
```
Keep Tests Independent
```python
# Bad - tests depend on each other
class TestUser:
    user = None

    def test_create_user(self):
        TestUser.user = create_user()

    def test_update_user(self):
        TestUser.user.update(name="New Name")  # Fails if run alone

# Good - each test is independent
def test_create_user(database):
    user = create_user()
    assert user.id is not None

def test_update_user(database):
    user = create_user()  # Own setup
    user.update(name="New Name")
    assert user.name == "New Name"
```
Use Descriptive Names
```python
# Good
def test_login_with_valid_credentials_returns_token():
    pass

def test_login_with_invalid_password_returns_401():
    pass

# Avoid
def test_login():
    pass

def test_login2():
    pass
```
Use Fixtures for Setup
```python
# Bad - setup in each test
def test_user_can_place_order():
    user = User.create(email="test@test.com")
    product = Product.create(name="Widget", price=9.99)
    order = Order.create(user=user, product=product)
    assert order.total == 9.99

# Good - fixtures handle setup
@pytest.fixture
def user():
    return User.create(email="test@test.com")

@pytest.fixture
def product():
    return Product.create(name="Widget", price=9.99)

def test_user_can_place_order(user, product):
    order = Order.create(user=user, product=product)
    assert order.total == 9.99
```
PyTest's simplicity and power make it the ideal choice for Python testing. Its fixture system, parametrization, and plugin ecosystem enable testing patterns that would be verbose or impossible with unittest. Start simple with basic tests and assertions, then gradually adopt fixtures and plugins as your testing needs grow.