API Reference - pytest documentation

This page contains the full reference to pytest’s API.

Constants

pytest.__version__

The current pytest version, as a string:

>>> import pytest
>>> pytest.__version__
'7.0.0'

pytest.version_tuple

Added in version 7.0.

The current pytest version, as a tuple:

>>> import pytest
>>> pytest.version_tuple
(7, 0, 0)

For pre-releases, the last component will be a string with the prerelease version:

>>> import pytest
>>> pytest.version_tuple
(7, 0, '0rc1')

Functions

pytest.approx

approx(expected, rel=None, abs=None, nan_ok=False)[source]

Assert that two numbers (or two ordered sequences of numbers) are equal to each other within some tolerance.

Due to the intricacies of floating-point arithmetic (see Floating-Point Arithmetic: Issues and Limitations), numbers that we would intuitively expect to be equal are not always so:

>>> 0.1 + 0.2 == 0.3
False

This problem is commonly encountered when writing tests, e.g. when making sure that floating-point values are what you expect them to be. One way to deal with this problem is to assert that two floating-point numbers are equal to within some appropriate tolerance:

>>> abs((0.1 + 0.2) - 0.3) < 1e-6
True

However, comparisons like this are tedious to write and difficult to understand. Furthermore, absolute comparisons like the one above are usually discouraged because there’s no tolerance that works well for all situations. 1e-6 is good for numbers around 1, but too small for very big numbers and too big for very small ones. It’s better to express the tolerance as a fraction of the expected value, but relative comparisons like that are even more difficult to write correctly and concisely.

The approx class performs floating-point comparisons using a syntax that’s as intuitive as possible:

>>> from pytest import approx
>>> 0.1 + 0.2 == approx(0.3)
True

The same syntax also works for ordered sequences of numbers:

>>> (0.1 + 0.2, 0.2 + 0.4) == approx((0.3, 0.6))
True

It also works for numpy arrays:

>>> import numpy as np
>>> np.array([0.1, 0.2]) + np.array([0.2, 0.4]) == approx(np.array([0.3, 0.6]))
True

And for a numpy array against a scalar:

>>> import numpy as np
>>> np.array([0.1, 0.2]) + np.array([0.2, 0.1]) == approx(0.3)
True

Only ordered sequences are supported, because approx needs to infer the relative position of the sequences without ambiguity. This means sets and other unordered sequences are not supported.

Finally, dictionary values can also be compared:

>>> {'a': 0.1 + 0.2, 'b': 0.2 + 0.4} == approx({'a': 0.3, 'b': 0.6})
True

The comparison will be true if both mappings have the same keys and their respective values match the expected tolerances.

Tolerances

By default, approx considers numbers within a relative tolerance of 1e-6 (i.e. one part in a million) of its expected value to be equal. This treatment would lead to surprising results if the expected value was 0.0, because nothing but 0.0 itself is relatively close to 0.0. To handle this case less surprisingly, approx also considers numbers within an absolute tolerance of 1e-12 of its expected value to be equal. Infinity and NaN are special cases. Infinity is only considered equal to itself, regardless of the relative tolerance. NaN is not considered equal to anything by default, but you can make it be equal to itself by setting the nan_ok argument to True. (This is meant to facilitate comparing arrays that use NaN to mean “no data”.)

Both the relative and absolute tolerances can be changed by passing arguments to the approx constructor:

>>> 1.0001 == approx(1)
False
>>> 1.0001 == approx(1, rel=1e-3)
True
>>> 1.0001 == approx(1, abs=1e-3)
True

If you specify abs but not rel, the comparison will not consider the relative tolerance at all. In other words, two numbers that are within the default relative tolerance of 1e-6 will still be considered unequal if they exceed the specified absolute tolerance. If you specify both abs and rel, the numbers will be considered equal if either tolerance is met:

>>> 1 + 1e-8 == approx(1)
True
>>> 1 + 1e-8 == approx(1, abs=1e-12)
False
>>> 1 + 1e-8 == approx(1, rel=1e-6, abs=1e-12)
True

You can also use approx to compare nonnumeric types, or dicts and sequences containing nonnumeric types, in which case it falls back to strict equality. This can be useful for comparing dicts and sequences that can contain optional values:

>>> {"required": 1.0000005, "optional": None} == approx({"required": 1, "optional": None})
True
>>> [None, 1.0000005] == approx([None, 1])
True
>>> ["foo", 1.0000005] == approx([None, 1])
False

If you’re thinking about using approx, then you might want to know how it compares to other good ways of comparing floating-point numbers. All of these algorithms are based on relative and absolute tolerances and should agree for the most part, but they do have meaningful differences.

Note

approx can handle numpy arrays, but we recommend the specialised test helpers in Test support (numpy.testing) if you need support for comparisons, NaNs, or ULP-based tolerances.

To match strings using regex, you can use Matches from the re_assert package.

Warning

Changed in version 3.2.

In order to avoid inconsistent behavior, TypeError is raised for >, >=, < and <= comparisons. The example below illustrates the problem:

assert approx(0.1) > 0.1 + 1e-10  # calls approx(0.1).__gt__(0.1 + 1e-10)
assert 0.1 + 1e-10 > approx(0.1)  # calls approx(0.1).__lt__(0.1 + 1e-10)

In the second example one expects approx(0.1).__le__(0.1 + 1e-10) to be called. But instead, approx(0.1).__lt__(0.1 + 1e-10) is used for the comparison. This is because the call hierarchy of rich comparisons follows a fixed behavior. More information: object.__ge__()

Changed in version 3.7.1: approx raises TypeError when it encounters a dict value or sequence element of nonnumeric type.

Changed in version 6.1.0: approx falls back to strict equality for nonnumeric types instead of raising TypeError.

pytest.fail

Tutorial: How to use skip and xfail to deal with tests that cannot succeed

fail(reason[, pytrace=True])[source]

Explicitly fail an executing test with the given message.

Parameters:

Raises:

pytest.fail.Exception – The exception that is raised.

class pytest.fail.Exception

The exception raised by pytest.fail().

pytest.skip

skip(reason[, allow_module_level=False])[source]

Skip an executing test with the given message.

This function should be called only during testing (setup, call or teardown) or during collection by using the allow_module_level flag. This function can be called in doctests as well.

Parameters:

Raises:

pytest.skip.Exception – The exception that is raised.

Note

It is better to use the pytest.mark.skipif marker when possible to declare a test to be skipped under certain conditions like mismatching platforms or dependencies. Similarly, use the # doctest: +SKIP directive (see doctest.SKIP) to skip a doctest statically.

class pytest.skip.Exception

The exception raised by pytest.skip().
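A sketch of an imperative skip based on a runtime check; the feature probe here is a stand-in:

```python
import pytest

def test_optional_feature():
    feature_enabled = False  # stand-in for a runtime feature probe
    if not feature_enabled:
        pytest.skip("feature disabled in this environment")
    # ... feature assertions would go here ...
```

skip() raises pytest.skip.Exception, so nothing after the call executes.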

pytest.importorskip

importorskip(modname, minversion=None, reason=None, *, exc_type=None)[source]

Import and return the requested module modname, or skip the current test if the module cannot be imported.

Parameters:

Returns:

The imported module. This should be assigned to its canonical name.

Raises:

pytest.skip.Exception – If the module cannot be imported.

Return type:

Any

Example:

docutils = pytest.importorskip("docutils")

Added in version 8.2: The exc_type parameter.
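A sketch of both outcomes, using a stdlib module that is always importable and a module name that is not (both names are illustrative):

```python
import pytest

# Imports and returns the module, assigned to its canonical name.
sqlite3 = pytest.importorskip("sqlite3")

def test_missing_dependency():
    # Skips the test if the (hypothetical) optional module is absent.
    mod = pytest.importorskip("some_optional_dependency")
```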

pytest.xfail

xfail(reason='')[source]

Imperatively xfail an executing test or setup function with the given reason.

This function should be called only during testing (setup, call or teardown).

No other code is executed after using xfail() (it is implemented internally by raising an exception).

Parameters:

reason (str) – The message to show the user as reason for the xfail.

Note

It is better to use the pytest.mark.xfail marker when possible to declare a test to be xfailed under certain conditions like known bugs or missing features.

Raises:

pytest.xfail.Exception – The exception that is raised.

class pytest.xfail.Exception

The exception raised by pytest.xfail().
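A sketch of an imperative xfail; the computation stands in for one with a known defect:

```python
import pytest

def test_rounding_behavior():
    result = 0.1 + 0.2  # stand-in for a computation with a known defect
    if result != 0.3:
        pytest.xfail("known floating-point representation issue")
```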

pytest.exit

exit(reason[, returncode=None])[source]

Exit testing process.

Parameters:

Raises:

pytest.exit.Exception – The exception that is raised.

class pytest.exit.Exception

The exception raised by pytest.exit().
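A sketch of aborting the whole session, e.g. from a hook or fixture, when a precondition fails; the helper and its check are hypothetical:

```python
import pytest

def abort_if_unreachable(db_ok: bool) -> None:
    # Aborts the entire test session; returncode sets the process exit status.
    if not db_ok:
        pytest.exit("database unreachable", returncode=2)
```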

pytest.main

Tutorial: Calling pytest from Python code

main(args=None, plugins=None)[source]

Perform an in-process test run.

Parameters:

Returns:

An exit code.

Return type:

int | ExitCode
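A sketch of an in-process run. Here pytest is pointed at an empty temporary directory, so nothing is collected and main() returns ExitCode.NO_TESTS_COLLECTED; a successful run would return ExitCode.OK:

```python
import tempfile
import pytest

# Run pytest in-process, equivalent to `pytest -q <dir>` on the command line.
with tempfile.TemporaryDirectory() as tmp:
    exit_code = pytest.main(["-q", tmp])
```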

pytest.param

param(*values[, id][, marks])[source]

Specify a parameter in pytest.mark.parametrize calls or parametrized fixtures.

@pytest.mark.parametrize(
    "test_input,expected",
    [
        ("3+5", 8),
        pytest.param("6*9", 42, marks=pytest.mark.xfail),
    ],
)
def test_eval(test_input, expected):
    assert eval(test_input) == expected

Parameters:
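The id and marks keywords can be sketched as follows; the test IDs chosen here are illustrative:

```python
import pytest

@pytest.mark.parametrize(
    "test_input,expected",
    [
        # id= gives the case a readable test ID; marks= attaches marks to it
        pytest.param("3+5", 8, id="addition"),
        pytest.param("6*9", 42, marks=pytest.mark.xfail, id="hitchhiker"),
    ],
)
def test_eval(test_input, expected):
    assert eval(test_input) == expected
```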

pytest.raises

Tutorial: Assertions about expected exceptions

with raises(expected_exception: type[E] | tuple[type[E], ...], *, match: str | Pattern[str] | None = ...) → RaisesContext[E] as excinfo[source]

with raises(expected_exception: type[E] | tuple[type[E], ...], func: Callable[[...], Any], *args: Any, **kwargs: Any) → ExceptionInfo[E] as excinfo

Assert that a code block/function call raises an exception type, or one of its subclasses.

Parameters:

Use pytest.raises as a context manager, which will capture the exception of the given type, or any of its subclasses:

>>> import pytest
>>> with pytest.raises(ZeroDivisionError):
...     1 / 0

If the code block does not raise the expected exception (ZeroDivisionError in the example above), or raises no exception at all, the check will fail instead.

You can also use the keyword argument match to assert that the exception matches a text or regex:

>>> with pytest.raises(ValueError, match='must be 0 or None'):
...     raise ValueError("value must be 0 or None")

>>> with pytest.raises(ValueError, match=r'must be \d+$'):
...     raise ValueError("value must be 42")

The match argument searches the formatted exception string, which includes any PEP 678 __notes__:

>>> with pytest.raises(ValueError, match=r"had a note added"):
...     e = ValueError("value must be 42")
...     e.add_note("had a note added")
...     raise e

The context manager produces an ExceptionInfo object which can be used to inspect the details of the captured exception:

>>> with pytest.raises(ValueError) as exc_info:
...     raise ValueError("value must be 42")
>>> assert exc_info.type is ValueError
>>> assert exc_info.value.args[0] == "value must be 42"

Warning

Given that pytest.raises matches subclasses, be wary of using it to match Exception like this:

with pytest.raises(Exception):  # Careful, this will catch ANY exception raised.
    some_function()

Because Exception is the base class of almost all exceptions, it is easy for this to hide real bugs, where the user wrote this expecting a specific exception, but some other exception is being raised due to a bug introduced during a refactoring.

Avoid using pytest.raises to catch Exception unless you are certain that you really want to catch any exception raised.

Note

When using pytest.raises as a context manager, it’s worthwhile to note that normal context manager rules apply and that the exception raised must be the final line in the scope of the context manager. Lines of code after that, within the scope of the context manager, will not be executed. For example:

>>> value = 15
>>> with pytest.raises(ValueError) as exc_info:
...     if value > 10:
...         raise ValueError("value must be <= 10")
...     assert exc_info.type is ValueError  # This will not execute.

Instead, the following approach must be taken (note the difference in scope):

>>> with pytest.raises(ValueError) as exc_info:
...     if value > 10:
...         raise ValueError("value must be <= 10")
>>> assert exc_info.type is ValueError

Using with pytest.mark.parametrize

When using pytest.mark.parametrize it is possible to parametrize tests such that some runs raise an exception and others do not.

See Parametrizing conditional raising for an example.

Legacy form

It is possible to specify a callable by passing a to-be-called lambda:

>>> raises(ZeroDivisionError, lambda: 1 / 0)
<ExceptionInfo ...>

or you can specify an arbitrary callable with arguments:

>>> def f(x):
...     return 1 / x
>>> raises(ZeroDivisionError, f, 0)
<ExceptionInfo ...>
>>> raises(ZeroDivisionError, f, x=0)
<ExceptionInfo ...>

The form above is fully supported but discouraged for new code because the context manager form is regarded as more readable and less error-prone.

Note

Similar to caught exception objects in Python, explicitly clearing local references to returned ExceptionInfo objects can help the Python interpreter speed up its garbage collection.

Clearing those references breaks a reference cycle (ExceptionInfo –> caught exception –> frame stack raising the exception –> current frame stack –> local variables –>ExceptionInfo) which makes Python keep all objects referenced from that cycle (including all local variables in the current frame) alive until the next cyclic garbage collection run. More detailed information can be found in the official Python documentation for the try statement.

pytest.deprecated_call

Tutorial: Ensuring code triggers a deprecation warning

with deprecated_call(*, match: str | Pattern[str] | None = ...) → WarningsRecorder[source]

with deprecated_call(func: Callable[[...], T], *args: Any, **kwargs: Any) → T

Assert that code produces a DeprecationWarning or PendingDeprecationWarning or FutureWarning.

This function can be used as a context manager:

>>> import warnings
>>> def api_call_v2():
...     warnings.warn('use v3 of this api', DeprecationWarning)
...     return 200

>>> import pytest
>>> with pytest.deprecated_call():
...     assert api_call_v2() == 200

It can also be used by passing a function and *args and **kwargs, in which case it will ensure calling func(*args, **kwargs) produces one of the warnings types above. The return value is the return value of the function.

In the context manager form you may use the keyword argument match to assert that the warning matches a text or regex.

The context manager produces a list of warnings.WarningMessage objects, one for each warning raised.
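The function form can be sketched as follows; api_call_v1 is a hypothetical deprecated API:

```python
import warnings
import pytest

def api_call_v1(x):
    # hypothetical deprecated API that warns and still computes a result
    warnings.warn("use api_call_v2 instead", DeprecationWarning)
    return x * 2

# deprecated_call invokes api_call_v1(21) itself, asserts that a deprecation
# warning was produced, and returns the function's return value.
result = pytest.deprecated_call(api_call_v1, 21)
```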

pytest.register_assert_rewrite

Tutorial: Assertion Rewriting

register_assert_rewrite(*names)[source]

Register one or more module names to be rewritten on import.

This function will make sure that this module or all modules inside the package will get their assert statements rewritten. Thus you should make sure to call this before the module is actually imported, usually in your __init__.py if you are a plugin using a package.

Parameters:

names (str) – The module names to register.
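A sketch of registering plugin modules for rewriting; the package and module names are hypothetical. This would typically live in the plugin package's __init__.py:

```python
import pytest

# Register before the modules are imported, so their assert statements
# get rewritten on import.
pytest.register_assert_rewrite("myplugin.helpers", "myplugin.checks")

# Only import the modules after registering them:
# from myplugin import helpers, checks
```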

pytest.warns

Tutorial: Asserting warnings with the warns function

with warns(expected_warning: type[Warning] | tuple[type[Warning], ...] = <class 'Warning'>, *, match: str | Pattern[str] | None = None) → WarningsChecker[source]

with warns(expected_warning: type[Warning] | tuple[type[Warning], ...], func: Callable[[...], T], *args: Any, **kwargs: Any) → T

Assert that code raises a particular class of warning.

Specifically, the parameter expected_warning can be a warning class or tuple of warning classes, and the code inside the with block must issue at least one warning of that class or classes.

This helper produces a list of warnings.WarningMessage objects, one for each warning emitted (regardless of whether it is an expected_warning or not). Since pytest 8.0, unmatched warnings are also re-emitted when the context closes.

This function can be used as a context manager:

>>> import pytest
>>> with pytest.warns(RuntimeWarning):
...     warnings.warn("my warning", RuntimeWarning)

In the context manager form you may use the keyword argument match to assert that the warning matches a text or regex:

>>> with pytest.warns(UserWarning, match='must be 0 or None'):
...     warnings.warn("value must be 0 or None", UserWarning)

>>> with pytest.warns(UserWarning, match=r'must be \d+$'):
...     warnings.warn("value must be 42", UserWarning)

>>> with pytest.warns(UserWarning):  # catch re-emitted warning
...     with pytest.warns(UserWarning, match=r'must be \d+$'):
...         warnings.warn("this is not here", UserWarning)
Traceback (most recent call last):
  ...
Failed: DID NOT WARN. No warnings of type ...UserWarning... were emitted...

Using with pytest.mark.parametrize

When using pytest.mark.parametrize it is possible to parametrize tests such that some runs raise a warning and others do not.

This can be achieved in the same way as with exceptions; see Parametrizing conditional raising for an example.

pytest.freeze_includes

Tutorial: Freezing pytest

freeze_includes()[source]

Return a list of module names used by pytest that should be included by cx_freeze.

Marks

Marks can be used to apply metadata to test functions (but not fixtures), which can then be accessed by fixtures or plugins.

pytest.mark.filterwarnings

Tutorial: @pytest.mark.filterwarnings

Add warning filters to marked test items.

pytest.mark.filterwarnings(filter)

Parameters:

filter (str) –

A warning specification string, which is composed of contents of the tuple (action, message, category, module, lineno) as specified in The Warnings Filter section of the Python documentation, separated by ":". Optional fields can be omitted. Module names passed for filtering are not regex-escaped.

For example:

@pytest.mark.filterwarnings("ignore:.*usage will be deprecated.*:DeprecationWarning")
def test_foo(): ...

pytest.mark.parametrize

Tutorial: How to parametrize fixtures and test functions

This mark has the same signature as pytest.Metafunc.parametrize(); see there.

pytest.mark.skip

Tutorial: Skipping test functions

Unconditionally skip a test function.

pytest.mark.skip(reason=None)

Parameters:

reason (str) – Reason why the test function is being skipped.

pytest.mark.skipif

Tutorial: Skipping test functions

Skip a test function if a condition is True.

pytest.mark.skipif(condition, *, reason=None)

Parameters:

pytest.mark.usefixtures

Tutorial: Use fixtures in classes and modules with usefixtures

Mark a test function as using the given fixture names.

pytest.mark.usefixtures(*names)

Parameters:

names (str) – The names of the fixtures to use, as strings.

Note

When using usefixtures in hooks, it can only load fixtures when applied to a test function before test setup (for example in the pytest_collection_modifyitems hook).

Also note that this mark has no effect when applied to fixtures.

pytest.mark.xfail

Tutorial: XFail: mark test functions as expected to fail

Marks a test function as expected to fail.

pytest.mark.xfail(condition=False, *, reason=None, raises=None, run=True, strict=xfail_strict)

Parameters:

Custom marks

Marks are created dynamically using the factory object pytest.mark and applied as a decorator.

For example:

@pytest.mark.timeout(10, "slow", method="thread")
def test_function(): ...

Will create and attach a Mark object to the collected Item, which can then be accessed by fixtures or hooks with Node.iter_markers. The mark object will have the following attributes:

mark.args == (10, "slow")
mark.kwargs == {"method": "thread"}

Example for using multiple custom markers:

@pytest.mark.timeout(10, "slow", method="thread")
@pytest.mark.slow
def test_function(): ...

When Node.iter_markers or Node.iter_markers_with_node is used with multiple markers, the marker closest to the function will be iterated over first. The above example will result in @pytest.mark.slow followed by @pytest.mark.timeout(...).

Fixtures

Tutorial: Fixtures reference

Fixtures are requested by test functions or other fixtures by declaring them as argument names.

Example of a test requiring a fixture:

def test_output(capsys):
    print("hello")
    out, err = capsys.readouterr()
    assert out == "hello\n"

Example of a fixture requiring another fixture:

@pytest.fixture
def db_session(tmp_path):
    fn = tmp_path / "db.file"
    return connect(fn)

For more details, consult the full fixtures docs.

@pytest.fixture

@fixture(fixture_function: FixtureFunction, *, scope: Literal['session', 'package', 'module', 'class', 'function'] | Callable[[str, Config], Literal['session', 'package', 'module', 'class', 'function']] = 'function', params: Iterable[object] | None = None, autouse: bool = False, ids: Sequence[object | None] | Callable[[Any], object | None] | None = None, name: str | None = None) → FixtureFunction[source]

@fixture(fixture_function: None = None, *, scope: Literal['session', 'package', 'module', 'class', 'function'] | Callable[[str, Config], Literal['session', 'package', 'module', 'class', 'function']] = 'function', params: Iterable[object] | None = None, autouse: bool = False, ids: Sequence[object | None] | Callable[[Any], object | None] | None = None, name: str | None = None) → FixtureFunctionMarker

Decorator to mark a fixture factory function.

This decorator can be used, with or without parameters, to define a fixture function.

The name of the fixture function can later be referenced to cause its invocation ahead of running tests: test modules or classes can use the pytest.mark.usefixtures(fixturename) marker.

Test functions can directly use fixture names as input arguments in which case the fixture instance returned from the fixture function will be injected.

Fixtures can provide their values to test functions using return or yield statements. When using yield, the code block after the yield statement is executed as teardown code regardless of the test outcome, and must yield exactly once.

Parameters:
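A sketch of a parametrized fixture with teardown; the backend names and ids are illustrative:

```python
import pytest

# Every test requesting `db_backend` runs once per param, with test IDs
# "lite" and "pg".
@pytest.fixture(params=["sqlite", "postgres"], ids=["lite", "pg"])
def db_backend(request):
    backend = request.param   # the current parameter
    yield backend             # value provided to the test
    # code after yield runs as teardown, regardless of the test outcome
```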

capfd

Tutorial: How to capture stdout/stderr output

capfd()[source]

Enable text capturing of writes to file descriptors 1 and 2.

The captured output is made available via capfd.readouterr() method calls, which return a (out, err) namedtuple. out and err will be text objects.

Returns an instance of CaptureFixture[str].

Example:

def test_system_echo(capfd):
    os.system('echo "hello"')
    captured = capfd.readouterr()
    assert captured.out == "hello\n"

capfdbinary

Tutorial: How to capture stdout/stderr output

capfdbinary()[source]

Enable bytes capturing of writes to file descriptors 1 and 2.

The captured output is made available via capfdbinary.readouterr() method calls, which return a (out, err) namedtuple. out and err will be bytes objects.

Returns an instance of CaptureFixture[bytes].

Example:

def test_system_echo(capfdbinary):
    os.system('echo "hello"')
    captured = capfdbinary.readouterr()
    assert captured.out == b"hello\n"

caplog

Tutorial: How to manage logging

caplog()[source]

Access and control log capturing.

Captured logs are available through the following properties/methods:

Returns a pytest.LogCaptureFixture instance.

final class LogCaptureFixture[source]

Provides access and control of log capturing.

property handler: LogCaptureHandler

Get the logging handler used by the fixture.

get_records(when)[source]

Get the logging records for one of the possible test phases.

Parameters:

when (Literal[ 'setup' , 'call' , 'teardown' ]) – Which test phase to obtain the records from. Valid values are: “setup”, “call” and “teardown”.

Returns:

The list of captured records at the given stage.

Return type:

list[LogRecord]

Added in version 3.4.

property text: str

The formatted log text.

property records: list[LogRecord]

The list of log records.

property record_tuples: list[tuple[str, int, str]]

A list of a stripped down version of log records intended for use in assertion comparison.

The format of the tuple is:

(logger_name, log_level, message)

property messages: list[str]

A list of format-interpolated log messages.

Unlike ‘records’, which contains the format string and parameters for interpolation, log messages in this list are all interpolated.

Unlike ‘text’, which contains the output from the handler, log messages in this list are unadorned with levels, timestamps, etc., making exact comparisons more reliable.

Note that traceback or stack info (from logging.exception() or the exc_info or stack_info arguments to the logging functions) is not included, as this is added by the formatter in the handler.

Added in version 3.7.

clear()[source]

Reset the list of log records and the captured log text.

set_level(level, logger=None)[source]

Set the threshold level of a logger for the duration of a test.

Logging messages which are less severe than this level will not be captured.

Changed in version 3.4: The levels of the loggers changed by this function will be restored to their initial values at the end of the test.

Will enable the requested logging level if it was disabled via logging.disable().

Parameters:

with at_level(level, logger=None)[source]

Context manager that sets the level for capturing of logs. After the end of the ‘with’ statement the level is restored to its original value.

Will enable the requested logging level if it was disabled via logging.disable().

Parameters:
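A sketch of at_level inside a test; the logger name is illustrative:

```python
import logging

def test_connection_logged(caplog):
    # Temporarily lower the capture threshold for one logger; the previous
    # level (and any logging.disable() state) is restored on block exit.
    with caplog.at_level(logging.INFO, logger="myapp.db"):
        logging.getLogger("myapp.db").info("connection opened")
    assert "connection opened" in caplog.text
```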

with filtering(filter)[source]

Context manager that temporarily adds the given filter to the caplog’s handler() for the ‘with’ statement block, and removes that filter at the end of the block.

Parameters:

filter – A custom logging.Filter object.

Added in version 7.5.

capsys

Tutorial: How to capture stdout/stderr output

capsys()[source]

Enable text capturing of writes to sys.stdout and sys.stderr.

The captured output is made available via capsys.readouterr() method calls, which return a (out, err) namedtuple. out and err will be text objects.

Returns an instance of CaptureFixture[str].

Example:

def test_output(capsys):
    print("hello")
    captured = capsys.readouterr()
    assert captured.out == "hello\n"

class CaptureFixture[source]

Object returned by the capsys, capsysbinary, capfd and capfdbinary fixtures.

readouterr()[source]

Read and return the captured output so far, resetting the internal buffer.

Returns:

The captured content as a namedtuple with out and err string attributes.

Return type:

CaptureResult

with disabled()[source]

Temporarily disable capturing while inside the with block.

capsysbinary

Tutorial: How to capture stdout/stderr output

capsysbinary()[source]

Enable bytes capturing of writes to sys.stdout and sys.stderr.

The captured output is made available via capsysbinary.readouterr() method calls, which return a (out, err) namedtuple. out and err will be bytes objects.

Returns an instance of CaptureFixture[bytes].

Example:

def test_output(capsysbinary):
    print("hello")
    captured = capsysbinary.readouterr()
    assert captured.out == b"hello\n"

config.cache

Tutorial: How to re-run failed tests and maintain state between test runs

The config.cache object allows other plugins and fixtures to store and retrieve values across test runs. To access it from fixtures request pytestconfig into your fixture and get it with pytestconfig.cache.

Under the hood, the cache plugin uses the simple dumps/loads API of the json stdlib module.

config.cache is an instance of pytest.Cache:

final class Cache[source]

Instance of the cache fixture.

mkdir(name)[source]

Return a directory path object with the given name.

If the directory does not yet exist, it will be created. You can use it to manage files to e.g. store/retrieve database dumps across test sessions.

Added in version 7.0.

Parameters:

name (str) – Must be a string not containing a / separator. Make sure the name contains your plugin or application identifiers to prevent clashes with other cache users.

get(key, default)[source]

Return the cached value for the given key.

If no value was yet cached or the value cannot be read, the specified default is returned.

Parameters:

set(key, value)[source]

Save value for the given key.

Parameters:
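A sketch of memoizing an expensive value across test runs via config.cache; the cache key and the computed value are illustrative, and keys should be namespaced to your plugin or application:

```python
import pytest

@pytest.fixture
def expensive_schema(pytestconfig):
    value = pytestconfig.cache.get("myplugin/schema", None)
    if value is None:
        value = {"version": 1}  # stand-in for an expensive computation
        pytestconfig.cache.set("myplugin/schema", value)
    return value
```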

doctest_namespace

Tutorial: How to run doctests

doctest_namespace()[source]

Fixture that returns a dict that will be injected into the namespace of doctests.

Usually this fixture is used in conjunction with another autouse fixture:

@pytest.fixture(autouse=True)
def add_np(doctest_namespace):
    doctest_namespace["np"] = numpy

For more details: ‘doctest_namespace’ fixture.

monkeypatch

Tutorial: How to monkeypatch/mock modules and environments

monkeypatch()[source]

A convenient fixture for monkey-patching.

The fixture provides these methods to modify objects, dictionaries, oros.environ:

All modifications will be undone after the requesting test function or fixture has finished. The raising parameter determines if a KeyError or AttributeError will be raised if the set/deletion operation does not have the specified target.

To undo modifications done by the fixture in a contained scope, use context().

Returns a MonkeyPatch instance.

final class MonkeyPatch[source]

Helper to conveniently monkeypatch attributes/items/environment variables/syspath.

Returned by the monkeypatch fixture.

Changed in version 6.2: Can now also be used directly as pytest.MonkeyPatch(), for when the fixture is not available. In this case, use with MonkeyPatch.context() as mp: or remember to call undo() explicitly.

classmethod with context()[source]

Context manager that returns a new MonkeyPatch object which undoes any patching done inside the with block upon exit.

Example:

import functools

def test_partial(monkeypatch):
    with monkeypatch.context() as m:
        m.setattr(functools, "partial", 3)

Useful in situations where it is desired to undo some patches before the test ends, such as mocking stdlib functions that might break pytest itself if mocked (for examples of this see #3290).

setattr(target: str, name: object, value: Notset = <notset>, raising: bool = True) → None[source]

setattr(target: object, name: str, value: object, raising: bool = True) → None

Set attribute value on target, memorizing the old value.

For example:

import os

monkeypatch.setattr(os, "getcwd", lambda: "/")

The code above replaces the os.getcwd() function by a lambda which always returns "/".

For convenience, you can specify a string as target which will be interpreted as a dotted import path, with the last part being the attribute name:

monkeypatch.setattr("os.getcwd", lambda: "/")

Raises AttributeError if the attribute does not exist, unless raising is set to False.

Where to patch

monkeypatch.setattr works by (temporarily) changing the object that a name points to with another one. There can be many names pointing to any individual object, so for patching to work you must ensure that you patch the name used by the system under test.

See the section Where to patch in the unittest.mock docs for a complete explanation, which is meant for unittest.mock.patch() but applies to monkeypatch.setattr as well.

delattr(target, name=<notset>, raising=True)[source]

Delete attribute name from target.

If no name is specified and target is a string it will be interpreted as a dotted import path with the last part being the attribute name.

Raises AttributeError if the attribute does not exist, unless raising is set to False.

setitem(dic, name, value)[source]

Set dictionary entry name to value.

delitem(dic, name, raising=True)[source]

Delete name from dict.

Raises KeyError if it doesn’t exist, unless raising is set to False.

setenv(name, value, prepend=None)[source]

Set environment variable name to value.

If prepend is a character, read the current environment variable value and prepend the value adjoined with the prepend character.
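A sketch of prepend semantics using MonkeyPatch directly (pytest ≥ 6.2); the variable name and paths are illustrative, and the variable is assumed not to be set beforehand:

```python
import os
from pytest import MonkeyPatch

with MonkeyPatch.context() as mp:
    mp.setenv("DEMO_PATH", "/opt/base")
    # prepend=":" joins the new value in front of the existing one, PATH-style
    mp.setenv("DEMO_PATH", "/opt/override", prepend=":")
    inside = os.environ["DEMO_PATH"]   # "/opt/override:/opt/base"

outside = os.environ.get("DEMO_PATH")  # restored (unset) on context exit
```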

delenv(name, raising=True)[source]

Delete name from the environment.

Raises KeyError if it does not exist, unless raising is set to False.

syspath_prepend(path)[source]

Prepend path to sys.path list of import locations.

chdir(path)[source]

Change the current working directory to the specified path.

Parameters:

path (str | PathLike_[_str]) – The path to change into.

undo()[source]

Undo previous changes.

This call consumes the undo stack. Calling it a second time has no effect unless you do more monkeypatching after the undo call.

There is generally no need to call undo(), since it is called automatically during tear-down.

Note

The same monkeypatch fixture is used across a single test function invocation. If monkeypatch is used both by the test function itself and one of the test fixtures, calling undo() will undo all of the changes made in both functions.

Prefer to use context() instead.
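Since pytest.MonkeyPatch is part of the public API, a scoped patch can be sketched as follows; PYTEST_DOC_DEMO_VAR is an arbitrary example variable name:

```python
import os

import pytest

# Changes made inside the context are undone automatically on exit.
with pytest.MonkeyPatch.context() as mp:
    mp.setenv("PYTEST_DOC_DEMO_VAR", "1")
    inside = os.environ.get("PYTEST_DOC_DEMO_VAR")

outside = os.environ.get("PYTEST_DOC_DEMO_VAR")
assert inside == "1"
assert outside is None
```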

pytestconfig

pytestconfig()[source]

Session-scoped fixture that returns the session’s pytest.Config object.

Example:

def test_foo(pytestconfig): if pytestconfig.get_verbosity() > 0: ...

pytester

Added in version 6.2.

Provides a Pytester instance that can be used to run and test pytest itself.

It provides an empty directory where pytest can be executed in isolation, and contains facilities to write tests, configuration files, and match against expected output.

To use it, include in your topmost conftest.py file:

pytest_plugins = "pytester"

final class Pytester[source]

Facilities to write tests/configuration files, execute pytest in isolation, and match against expected output, perfect for black-box testing of pytest plugins.

It attempts to isolate the test run from external factors as much as possible, modifying the current working directory to path and environment variables during initialization.

exception TimeoutExpired[source]

plugins_: list[str | object]_

A list of plugins to use with parseconfig() and runpytest(). Initially this is an empty list but plugins can be added to the list. The type of items to add to the list depends on the method using them so refer to them for details.

property path_: Path_

Temporary directory path used to create files/run tests from, etc.

make_hook_recorder(pluginmanager)[source]

Create a new HookRecorder for a PytestPluginManager.

chdir()[source]

Cd into the temporary directory.

This is done automatically upon instantiation.

makefile(ext, *args, **kwargs)[source]

Create new text file(s) in the test directory.

Parameters:

Returns:

The first created file.

Return type:

Path

Examples:

pytester.makefile(".txt", "line1", "line2")

pytester.makefile(".ini", pytest="[pytest]\naddopts=-rs\n")

To create binary files, use pathlib.Path.write_bytes() directly:

filename = pytester.path.joinpath("foo.bin") filename.write_bytes(b"...")

makeconftest(source)[source]

Write a conftest.py file.

Parameters:

source (str) – The contents.

Returns:

The conftest.py file.

Return type:

Path

makeini(source)[source]

Write a tox.ini file.

Parameters:

source (str) – The contents.

Returns:

The tox.ini file.

Return type:

Path

getinicfg(source)[source]

Return the pytest section from the tox.ini config file.

makepyprojecttoml(source)[source]

Write a pyproject.toml file.

Parameters:

source (str) – The contents.

Returns:

The pyproject.toml file.

Return type:

Path

Added in version 6.0.

makepyfile(*args, **kwargs)[source]

Shortcut for .makefile() with a .py extension.

Defaults to the test name with a ‘.py’ extension, e.g. test_foobar.py, overwriting existing files.

Examples:

def test_something(pytester): # Initial file is created test_something.py. pytester.makepyfile("foobar") # To create multiple files, pass kwargs accordingly. pytester.makepyfile(custom="foobar") # At this point, both 'test_something.py' & 'custom.py' exist in the test directory.

maketxtfile(*args, **kwargs)[source]

Shortcut for .makefile() with a .txt extension.

Defaults to the test name with a ‘.txt’ extension, e.g. test_foobar.txt, overwriting existing files.

Examples:

def test_something(pytester): # Initial file is created test_something.txt. pytester.maketxtfile("foobar") # To create multiple files, pass kwargs accordingly. pytester.maketxtfile(custom="foobar") # At this point, both 'test_something.txt' & 'custom.txt' exist in the test directory.

syspathinsert(path=None)[source]

Prepend a directory to sys.path, defaults to path.

This is undone automatically when this object dies at the end of each test.

Parameters:

path (str | PathLike_[_str] | None) – The path.

mkdir(name)[source]

Create a new (sub)directory.

Parameters:

name (str | PathLike_[_str]) – The name of the directory, relative to the pytester path.

Returns:

The created directory.

Return type:

pathlib.Path

mkpydir(name)[source]

Create a new python package.

This creates a (sub)directory with an empty __init__.py file so it gets recognised as a Python package.

copy_example(name=None)[source]

Copy file from project’s directory into the testdir.

Parameters:

name (str | None) – The name of the file to copy.

Returns:

Path to the copied directory (inside self.path).

Return type:

pathlib.Path

getnode(config, arg)[source]

Get the collection node of a file.

Parameters:

Returns:

The node.

Return type:

Collector | Item

getpathnode(path)[source]

Return the collection node of a file.

This is like getnode() but uses parseconfigure() to create the (configured) pytest Config instance.

Parameters:

path (str | PathLike_[_str]) – Path to the file.

Returns:

The node.

Return type:

Collector | Item

genitems(colitems)[source]

Generate all test items from a collection node.

This recurses into the collection node and returns a list of all the test items contained within.

Parameters:

colitems (Sequence_[_Item | Collector]) – The collection nodes.

Returns:

The collected items.

Return type:

list[Item]

runitem(source)[source]

Run the “test_func” Item.

The calling test instance (class containing the test method) must provide a .getrunner() method which should return a runner which can run the test protocol for a single item, e.g. _pytest.runner.runtestprotocol.

inline_runsource(source, *cmdlineargs)[source]

Run a test module in process using pytest.main().

This run writes “source” into a temporary file and runs pytest.main() on it, returning a HookRecorder instance for the result.

Parameters:

inline_genitems(*args)[source]

Run pytest.main(['--collect-only']) in-process.

Runs the pytest.main() function to run all of pytest inside the test process itself like inline_run(), but returns a tuple of the collected items and a HookRecorder instance.

inline_run(*args, plugins=(), no_reraise_ctrlc=False)[source]

Run pytest.main() in-process, returning a HookRecorder.

Runs the pytest.main() function to run all of pytest inside the test process itself. This means it can return a HookRecorder instance which gives more detailed results from that run than can be done by matching stdout/stderr from runpytest().

Parameters:

runpytest_inprocess(*args, **kwargs)[source]

Return result of running pytest in-process, providing a similar interface to what self.runpytest() provides.

runpytest(*args, **kwargs)[source]

Run pytest inline or in a subprocess, depending on the command line option --runpytest, and return a RunResult.

parseconfig(*args)[source]

Return a new pytest pytest.Config instance from given commandline args.

This invokes the pytest bootstrapping code in _pytest.config to create a new pytest.PytestPluginManager and call the pytest_cmdline_parse hook to create a new pytest.Config instance.

If plugins has been populated they should be plugin modules to be registered with the plugin manager.

parseconfigure(*args)[source]

Return a new pytest configured Config instance.

Returns a new pytest.Config instance like parseconfig(), but also calls the pytest_configure hook.

getitem(source, funcname='test_func')[source]

Return the test item for a test function.

Writes the source to a python file and runs pytest’s collection on the resulting module, returning the test item for the requested function name.

Parameters:

Returns:

The test item.

Return type:

Item

getitems(source)[source]

Return all test items collected from the module.

Writes the source to a Python file and runs pytest’s collection on the resulting module, returning all test items contained within.

getmodulecol(source, configargs=(), *, withinit=False)[source]

Return the module collection node for source.

Writes source to a file using makepyfile() and then runs the pytest collection on it, returning the collection node for the test module.

Parameters:

collect_by_name(modcol, name)[source]

Return the collection node for name from the module collection.

Searches a module collection node for a collection node matching the given name.

Parameters:

popen(cmdargs, stdout=-1, stderr=-1, stdin=NotSetType.token, **kw)[source]

Invoke subprocess.Popen.

Calls subprocess.Popen making sure the current working directory is in PYTHONPATH.

You probably want to use run() instead.

run(*cmdargs, timeout=None, stdin=NotSetType.token)[source]

Run a command with arguments.

Run a process using subprocess.Popen saving the stdout and stderr.

Parameters:

Returns:

The result.

Return type:

RunResult

runpython(script)[source]

Run a python script using sys.executable as interpreter.

runpython_c(command)[source]

Run python -c "command".

runpytest_subprocess(*args, timeout=None)[source]

Run pytest as a subprocess with given arguments.

Any plugins added to the plugins list will be added using the -p command line option. Additionally --basetemp is used to put any temporary files and directories in a numbered directory prefixed with “runpytest-” to not conflict with the normal numbered pytest location for temporary files and directories.

Parameters:

Returns:

The result.

Return type:

RunResult

spawn_pytest(string, expect_timeout=10.0)[source]

Run pytest using pexpect.

This makes sure to use the right pytest and sets up the temporary directory locations.

The pexpect child is returned.

spawn(cmd, expect_timeout=10.0)[source]

Run a command using pexpect.

The pexpect child is returned.

final class RunResult[source]

The result of running a command from Pytester.

ret_: int | ExitCode_

The return value.

outlines

List of lines captured from stdout.

errlines

List of lines captured from stderr.

stdout

LineMatcher of stdout.

Use e.g. str(stdout) to reconstruct stdout, or the commonly used stdout.fnmatch_lines() method.

stderr

LineMatcher of stderr.

duration

Duration in seconds.

parseoutcomes()[source]

Return a dictionary of outcome noun -> count from parsing the terminal output that the test process produced.

The returned nouns will always be in plural form:

======= 1 failed, 1 passed, 1 warning, 1 error in 0.13s ====

Will return {"failed": 1, "passed": 1, "warnings": 1, "errors": 1}.

classmethod parse_summary_nouns(lines)[source]

Extract the nouns from a pytest terminal summary line.

It always returns the plural noun for consistency:

======= 1 failed, 1 passed, 1 warning, 1 error in 0.13s ====

Will return {"failed": 1, "passed": 1, "warnings": 1, "errors": 1}.
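The parsing these methods perform can be sketched with the standard library; parse_summary_nouns below is a simplified stand-in for illustration, not the real implementation:

```python
import re

def parse_summary_nouns(line):
    # Pull "<count> <noun>" pairs from a terminal summary line and
    # normalize nouns to their plural form for consistency.
    outcomes = re.findall(r"(\d+) (\w+)", line)
    plurals = {"warning": "warnings", "error": "errors"}
    return {plurals.get(noun, noun): int(count) for count, noun in outcomes}

summary = "======= 1 failed, 1 passed, 1 warning, 1 error in 0.13s ===="
assert parse_summary_nouns(summary) == {
    "failed": 1, "passed": 1, "warnings": 1, "errors": 1,
}
```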

assert_outcomes(passed=0, skipped=0, failed=0, errors=0, xpassed=0, xfailed=0, warnings=None, deselected=None)[source]

Assert that the specified outcomes appear with the respective numbers (0 means it didn’t occur) in the text output from a test run.

warnings and deselected are only checked if not None.

class LineMatcher[source]

Flexible matching of text.

This is a convenience class to test large texts like the output of commands.

The constructor takes a list of lines without their trailing newlines, i.e. text.splitlines().

__str__()[source]

Return the entire original text.

Added in version 6.2: You can use str() in older versions.

fnmatch_lines_random(lines2)[source]

Check lines exist in the output in any order (using fnmatch.fnmatch()).

re_match_lines_random(lines2)[source]

Check lines exist in the output in any order (using re.match()).

get_lines_after(fnline)[source]

Return all lines following the given line in the text.

The given line can contain glob wildcards.

fnmatch_lines(lines2, *, consecutive=False)[source]

Check lines exist in the output (using fnmatch.fnmatch()).

The argument is a list of lines which have to match and can use glob wildcards. If they do not match a pytest.fail() is called. The matches and non-matches are also shown as part of the error message.

Parameters:
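A rough sketch of what non-consecutive glob matching means, using the standard fnmatch module rather than LineMatcher’s actual implementation:

```python
import fnmatch

lines = ["collected 3 items", "test_foo.py ...", "3 passed in 0.01s"]
patterns = ["collected * items", "* passed in *"]

# Each pattern must match some line, and the matches must occur in
# order: a later pattern only sees lines after the previous match.
it = iter(lines)
assert all(any(fnmatch.fnmatch(line, pat) for line in it) for pat in patterns)
```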

re_match_lines(lines2, *, consecutive=False)[source]

Check lines exist in the output (using re.match()).

The argument is a list of lines which have to match using re.match. If they do not match a pytest.fail() is called.

The matches and non-matches are also shown as part of the error message.

Parameters:

no_fnmatch_line(pat)[source]

Ensure captured lines do not match the given pattern, using fnmatch.fnmatch.

Parameters:

pat (str) – The pattern to match lines.

no_re_match_line(pat)[source]

Ensure captured lines do not match the given pattern, using re.match.

Parameters:

pat (str) – The regular expression to match lines.

str()[source]

Return the entire original text.

final class HookRecorder[source]

Record all hooks called in a plugin manager.

Hook recorders are created by Pytester.

This wraps all the hook calls in the plugin manager, recording each call before propagating the normal calls.

getcalls(names)[source]

Get all recorded calls to hooks with the given names (or name).

matchreport(inamepart='', names=('pytest_runtest_logreport', 'pytest_collectreport'), when=None)[source]

Return a testreport whose dotted import path matches.

final class RecordedHookCall[source]

A recorded call to a hook.

The arguments to the hook call are set as attributes. For example:

calls = hook_recorder.getcalls("pytest_runtest_setup")

Suppose pytest_runtest_setup was called once with item=an_item.

assert calls[0].item is an_item

record_property

Tutorial: record_property

record_property()[source]

Add extra properties to the calling test.

User properties become part of the test report and are available to the configured reporters, like JUnit XML.

The fixture is callable with name, value. The value is automatically XML-encoded.

Example:

def test_function(record_property): record_property("example_key", 1)

record_testsuite_property

Tutorial: record_testsuite_property

record_testsuite_property()[source]

Record a new <property> tag as child of the root <testsuite>.

This is suitable for writing global information regarding the entire test suite, and is compatible with the xunit2 JUnit family.

This is a session-scoped fixture which is called with (name, value). Example:

def test_foo(record_testsuite_property): record_testsuite_property("ARCH", "PPC") record_testsuite_property("STORAGE_TYPE", "CEPH")

Parameters:

Warning

Currently this fixture does not work with the pytest-xdist plugin. See #7767 for details.

recwarn

Tutorial: Recording warnings

recwarn()[source]

Return a WarningsRecorder instance that records all warnings emitted by test functions.

See How to capture warnings for information on warning categories.

class WarningsRecorder[source]

A context manager to record raised warnings.

Each recorded warning is an instance of warnings.WarningMessage.

Adapted from warnings.catch_warnings.
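The stdlib mechanism it adapts can be seen directly; each recorded entry is a warnings.WarningMessage:

```python
import warnings

# Record warnings instead of printing them; WarningsRecorder builds on
# this same mechanism.
with warnings.catch_warnings(record=True) as record:
    warnings.simplefilter("always")
    warnings.warn("deprecated", DeprecationWarning)

assert len(record) == 1
assert issubclass(record[0].category, DeprecationWarning)
assert str(record[0].message) == "deprecated"
```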

property list_: list[WarningMessage]_

The list of recorded warnings.

__getitem__(i)[source]

Get a recorded warning by index.

__iter__()[source]

Iterate through the recorded warnings.

__len__()[source]

The number of recorded warnings.

pop(cls=<class 'Warning'>)[source]

Pop the first recorded warning which is an instance of cls, but not an instance of a child class of any other match. Raises AssertionError if there is no match.

clear()[source]

Clear the list of recorded warnings.

request

Example: Pass different values to a test function, depending on command line options

The request fixture is a special fixture providing information of the requesting test function.

class FixtureRequest[source]

The type of the request fixture.

A request object gives access to the requesting test context and has a param attribute in case the fixture is parametrized.

fixturename_: Final_

Fixture for which this request is being performed.

property scope_: Literal['session', 'package', 'module', 'class', 'function']_

Scope string, one of “function”, “class”, “module”, “package”, “session”.

property fixturenames_: list[str]_

Names of all active fixtures in this request.

abstract property node

Underlying collection node (depends on current request scope).

property config_: Config_

The pytest config object associated with this request.

property function

Test function object if the request has a per-function scope.

property cls

Class (can be None) where the test function was collected.

property instance

Instance (can be None) on which test function was collected.

property module

Python module object where the test function was collected.

property path_: Path_

Path where the test function was collected.

property keywords_: MutableMapping[str, Any]_

Keywords/markers dictionary for the underlying node.

property session_: Session_

Pytest session object.

abstractmethod addfinalizer(finalizer)[source]

Add finalizer/teardown function to be called without arguments after the last test within the requesting test context finished execution.

applymarker(marker)[source]

Apply a marker to a single test function invocation.

This method is useful if you don’t want to have a keyword/marker on all function invocations.

Parameters:

marker (str | MarkDecorator) – An object created by a call to pytest.mark.NAME(...).

raiseerror(msg)[source]

Raise a FixtureLookupError exception.

Parameters:

msg (str | None) – An optional custom error message.

getfixturevalue(argname)[source]

Dynamically run a named fixture function.

Declaring fixtures via function argument is recommended where possible. But if you can only decide whether to use another fixture at test setup time, you may use this function to retrieve it inside a fixture or test function body.

This method can be used during the test setup phase or the test run phase, but during the test teardown phase a fixture’s value may not be available.

Parameters:

argname (str) – The fixture name.

Raises:

pytest.FixtureLookupError – If the given fixture could not be found.

testdir

Identical to pytester, but provides an instance whose methods return legacy py.path.local objects instead when applicable.

New code should avoid using testdir in favor of pytester.

final class Testdir[source]

Similar to Pytester, but this class works with legacy legacy_path objects instead.

All methods just forward to an internal Pytester instance, converting results to legacy_path objects as necessary.

exception TimeoutExpired

property tmpdir_: LocalPath_

Temporary directory where tests are executed.

make_hook_recorder(pluginmanager)[source]

See Pytester.make_hook_recorder().

chdir()[source]

See Pytester.chdir().

makefile(ext, *args, **kwargs)[source]

See Pytester.makefile().

makeconftest(source)[source]

See Pytester.makeconftest().

makeini(source)[source]

See Pytester.makeini().

getinicfg(source)[source]

See Pytester.getinicfg().

makepyprojecttoml(source)[source]

See Pytester.makepyprojecttoml().

makepyfile(*args, **kwargs)[source]

See Pytester.makepyfile().

maketxtfile(*args, **kwargs)[source]

See Pytester.maketxtfile().

syspathinsert(path=None)[source]

See Pytester.syspathinsert().

mkdir(name)[source]

See Pytester.mkdir().

mkpydir(name)[source]

See Pytester.mkpydir().

copy_example(name=None)[source]

See Pytester.copy_example().

getnode(config, arg)[source]

See Pytester.getnode().

getpathnode(path)[source]

See Pytester.getpathnode().

genitems(colitems)[source]

See Pytester.genitems().

runitem(source)[source]

See Pytester.runitem().

inline_runsource(source, *cmdlineargs)[source]

See Pytester.inline_runsource().

inline_genitems(*args)[source]

See Pytester.inline_genitems().

inline_run(*args, plugins=(), no_reraise_ctrlc=False)[source]

See Pytester.inline_run().

runpytest_inprocess(*args, **kwargs)[source]

See Pytester.runpytest_inprocess().

runpytest(*args, **kwargs)[source]

See Pytester.runpytest().

parseconfig(*args)[source]

See Pytester.parseconfig().

parseconfigure(*args)[source]

See Pytester.parseconfigure().

getitem(source, funcname='test_func')[source]

See Pytester.getitem().

getitems(source)[source]

See Pytester.getitems().

getmodulecol(source, configargs=(), withinit=False)[source]

See Pytester.getmodulecol().

collect_by_name(modcol, name)[source]

See Pytester.collect_by_name().

popen(cmdargs, stdout=-1, stderr=-1, stdin=NotSetType.token, **kw)[source]

See Pytester.popen().

run(*cmdargs, timeout=None, stdin=NotSetType.token)[source]

See Pytester.run().

runpython(script)[source]

See Pytester.runpython().

runpython_c(command)[source]

See Pytester.runpython_c().

runpytest_subprocess(*args, timeout=None)[source]

See Pytester.runpytest_subprocess().

spawn_pytest(string, expect_timeout=10.0)[source]

See Pytester.spawn_pytest().

spawn(cmd, expect_timeout=10.0)[source]

See Pytester.spawn().

tmp_path

Tutorial: How to use temporary directories and files in tests

tmp_path()[source]

Return a temporary directory (as pathlib.Path object) which is unique to each test function invocation. The temporary directory is created as a subdirectory of the base temporary directory, with configurable retention, as discussed in Temporary directory location and retention.
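A typical use looks like this; test_create_file and hello.txt are example names:

```python
def test_create_file(tmp_path):
    # tmp_path is a fresh pathlib.Path directory for this test alone.
    out = tmp_path / "hello.txt"
    out.write_text("content")
    assert out.read_text() == "content"
    assert len(list(tmp_path.iterdir())) == 1
```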

tmp_path_factory

Tutorial: The tmp_path_factory fixture

tmp_path_factory is an instance of TempPathFactory:

final class TempPathFactory[source]

Factory for temporary directories under the common base temp directory, as discussed at Temporary directory location and retention.

mktemp(basename, numbered=True)[source]

Create a new temporary directory managed by the factory.

Parameters:

Returns:

The path to the new directory.

Return type:

Path

getbasetemp()[source]

Return the base temporary directory, creating it if needed.

Returns:

The base temporary directory.

Return type:

Path

tmpdir

Tutorial: The tmpdir and tmpdir_factory fixtures

tmpdir()

Return a temporary directory (as legacy_path object) which is unique to each test function invocation. The temporary directory is created as a subdirectory of the base temporary directory, with configurable retention, as discussed in Temporary directory location and retention.

tmpdir_factory

Tutorial: The tmpdir and tmpdir_factory fixtures

tmpdir_factory is an instance of TempdirFactory:

final class TempdirFactory[source]

Backward compatibility wrapper that implements py.path.local for TempPathFactory.

mktemp(basename, numbered=True)[source]

Same as TempPathFactory.mktemp(), but returns a py.path.local object.

getbasetemp()[source]

Same as TempPathFactory.getbasetemp(), but returns a py.path.local object.

Hooks

Tutorial: Writing plugins

Reference to all hooks which can be implemented by conftest.py files and plugins.

@pytest.hookimpl

@pytest.hookimpl

pytest’s decorator for marking functions as hook implementations.

See Writing hook functions and pluggy.HookimplMarker().

@pytest.hookspec

@pytest.hookspec

pytest’s decorator for marking functions as hook specifications.

See Declaring new hooks and pluggy.HookspecMarker().

Bootstrapping hooks

Bootstrapping hooks called for plugins registered early enough (internal and third-party plugins).

pytest_load_initial_conftests(early_config, parser, args)[source]

Called to implement the loading of initial conftest files ahead of command line option parsing.

Parameters:

Use in conftest plugins

This hook is not called for conftest files.

pytest_cmdline_parse(pluginmanager, args)[source]

Return an initialized Config, parsing the specified args.

Stops at first non-None result, see firstresult: stop at first non-None result.

Note

This hook is only called for plugin classes passed to the plugins arg when using pytest.main to perform an in-process test run.

Parameters:

Returns:

A pytest config object.

Return type:

Config | None

Use in conftest plugins

This hook is not called for conftest files.

pytest_cmdline_main(config)[source]

Called for performing the main command line action.

The default implementation will invoke the configure hooks and pytest_runtestloop.

Stops at first non-None result, see firstresult: stop at first non-None result.

Parameters:

config (Config) – The pytest config object.

Returns:

The exit code.

Return type:

ExitCode | int | None

Use in conftest plugins

This hook is only called for initial conftests.

Initialization hooks

Initialization hooks called for plugins and conftest.py files.

pytest_addoption(parser, pluginmanager)[source]

Register argparse-style options and ini-style config values, called once at the beginning of a test run.

Parameters:

Options can later be accessed through the config object, respectively:

The config object is passed around on many internal objects via the .configattribute or can be retrieved as the pytestconfig fixture.

Note

This hook is incompatible with hook wrappers.

Use in conftest plugins

If a conftest plugin implements this hook, it will be called immediately when the conftest is registered.

This hook is only called for initial conftests.
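A minimal conftest.py sketch; --run-slow and timeout_seconds are example names, not built-in pytest options:

```python
# conftest.py (sketch)
def pytest_addoption(parser, pluginmanager):
    # Command-line flag, later read via config.getoption("--run-slow").
    parser.addoption(
        "--run-slow",
        action="store_true",
        default=False,
        help="also run tests marked as slow",
    )
    # Ini-style value, later read via config.getini("timeout_seconds").
    parser.addini("timeout_seconds", "per-test timeout in seconds", default="60")
```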

pytest_addhooks(pluginmanager)[source]

Called at plugin registration time to allow adding new hooks via a call to pluginmanager.add_hookspecs(module_or_class, prefix).

Parameters:

pluginmanager (PytestPluginManager) – The pytest plugin manager.

Note

This hook is incompatible with hook wrappers.

Use in conftest plugins

If a conftest plugin implements this hook, it will be called immediately when the conftest is registered.

pytest_configure(config)[source]

Allow plugins and conftest files to perform initial configuration.

Note

This hook is incompatible with hook wrappers.

Parameters:

config (Config) – The pytest config object.

Use in conftest plugins

This hook is called for every initial conftest file after command line options have been parsed. After that, the hook is called for other conftest files as they are registered.

pytest_unconfigure(config)[source]

Called before test process is exited.

Parameters:

config (Config) – The pytest config object.

Use in conftest plugins

Any conftest file can implement this hook.

pytest_sessionstart(session)[source]

Called after the Session object has been created and before performing collection and entering the run test loop.

Parameters:

session (Session) – The pytest session object.

Use in conftest plugins

This hook is only called for initial conftests.

pytest_sessionfinish(session, exitstatus)[source]

Called after whole test run finished, right before returning the exit status to the system.

Parameters:

Use in conftest plugins

Any conftest file can implement this hook.

pytest_plugin_registered(plugin, plugin_name, manager)[source]

A new pytest plugin got registered.

Parameters:

Note

This hook is incompatible with hook wrappers.

Use in conftest plugins

If a conftest plugin implements this hook, it will be called immediately when the conftest is registered, once for each plugin registered thus far (including itself!), and for all plugins thereafter when they are registered.

Collection hooks

pytest calls the following hooks for collecting files and directories:

pytest_collection(session)[source]

Perform the collection phase for the given session.

Stops at first non-None result, see firstresult: stop at first non-None result. The return value is not used, but only stops further processing.

The default collection phase is this (see individual hooks for full details):

  1. Starting from session as the initial collector:
     1. pytest_collectstart(collector)
     2. report = pytest_make_collect_report(collector)
     3. pytest_exception_interact(collector, call, report) if an interactive exception occurred
     4. For each collected node:
        1. If an item, pytest_itemcollected(item)
        2. If a collector, recurse into it.
     5. pytest_collectreport(report)
  2. pytest_collection_modifyitems(session, config, items)
     1. pytest_deselected(items) for any deselected items (may be called multiple times)
  3. pytest_collection_finish(session)
  4. Set session.items to the list of collected items
  5. Set session.testscollected to the number of collected items

You can implement this hook to only perform some action before collection, for example the terminal plugin uses it to start displaying the collection counter (and returns None).

Parameters:

session (Session) – The pytest session object.

Use in conftest plugins

This hook is only called for initial conftests.

pytest_ignore_collect(collection_path, path, config)[source]

Return True to ignore this path for collection.

Return None to let other plugins ignore the path for collection.

Returning False will forcefully not ignore this path for collection, without giving a chance for other plugins to ignore this path.

This hook is consulted for all files and directories prior to calling more specific hooks.

Stops at first non-None result, see firstresult: stop at first non-None result.

Parameters:

Changed in version 7.0.0: The collection_path parameter was added as a pathlib.Path equivalent of the path parameter. The path parameter has been deprecated.

Use in conftest plugins

Any conftest file can implement this hook. For a given collection path, only conftest files in parent directories of the collection path are consulted (if the path is a directory, its own conftest file is not consulted - a directory cannot ignore itself!).

pytest_collect_directory(path, parent)[source]

Create a Collector for the given directory, or None if not relevant.

Added in version 8.0.

For best results, the returned collector should be a subclass of Directory, but this is not required.

The new node needs to have the specified parent as a parent.

Stops at first non-None result, see firstresult: stop at first non-None result.

Parameters:

path (pathlib.Path) – The path to analyze.

See Using a custom directory collector for a simple example of use of this hook.

Use in conftest plugins

Any conftest file can implement this hook. For a given collection path, only conftest files in parent directories of the collection path are consulted (if the path is a directory, its own conftest file is not consulted - a directory cannot collect itself!).

pytest_collect_file(file_path, path, parent)[source]

Create a Collector for the given path, or None if not relevant.

For best results, the returned collector should be a subclass of File, but this is not required.

The new node needs to have the specified parent as a parent.

Parameters:

Changed in version 7.0.0: The file_path parameter was added as a pathlib.Path equivalent of the path parameter. The path parameter has been deprecated.

Use in conftest plugins

Any conftest file can implement this hook. For a given file path, only conftest files in parent directories of the file path are consulted.

pytest_pycollect_makemodule(module_path, path, parent)[source]

Return a pytest.Module collector or None for the given path.

This hook will be called for each matching test module path. The pytest_collect_file hook needs to be used if you want to create test modules for files that do not match as a test module.

Stops at first non-None result, see firstresult: stop at first non-None result.

Parameters:

Changed in version 7.0.0: The module_path parameter was added as a pathlib.Path equivalent of the path parameter.

The path parameter has been deprecated in favor of fspath.

Use in conftest plugins

Any conftest file can implement this hook. For a given parent collector, only conftest files in the collector’s directory and its parent directories are consulted.

For influencing the collection of objects in Python modules you can use the following hook:

pytest_pycollect_makeitem(collector, name, obj)[source]

Return a custom item/collector for a Python object in a module, or None.

Stops at first non-None result, see firstresult: stop at first non-None result.

Parameters:

Returns:

The created items/collectors.

Return type:

None | Item | Collector | list[Item | Collector]

Use in conftest plugins

Any conftest file can implement this hook. For a given collector, only conftest files in the collector’s directory and its parent directories are consulted.

pytest_generate_tests(metafunc)[source]

Generate (multiple) parametrized calls to a test function.

Parameters:

metafunc (Metafunc) – The Metafunc helper for the test function.

Use in conftest plugins

Any conftest file can implement this hook. For a given function definition, only conftest files in the function’s directory and its parent directories are consulted.
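A minimal sketch; stringinput is a hypothetical fixture name:

```python
# conftest.py (sketch)
def pytest_generate_tests(metafunc):
    if "stringinput" in metafunc.fixturenames:
        # One test call per value; equivalent to applying
        # @pytest.mark.parametrize on each matching test.
        metafunc.parametrize("stringinput", ["alpha", "beta"])
```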

pytest_make_parametrize_id(config, val, argname)[source]

Return a user-friendly string representation of the given val that will be used by @pytest.mark.parametrize calls, or None if the hook doesn’t know about val.

The parameter name is available as argname, if required.

Stops at first non-None result, see firstresult: stop at first non-None result.

Parameters:

Use in conftest plugins

Any conftest file can implement this hook.
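A minimal sketch of this hook: give one type of parameter value a readable ID and return None for everything else so pytest falls back to its own ID generation (the hook is firstresult). Handling datetime.date here is an assumption for illustration:

```python
# Hypothetical conftest.py: readable parametrize IDs for date values.
import datetime

def pytest_make_parametrize_id(config, val, argname):
    if isinstance(val, datetime.date):
        return f"{argname}={val.isoformat()}"
    return None  # defer to pytest's default ID generation
```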

Hooks for influencing test skipping:

pytest_markeval_namespace(config)[source]

Called when constructing the globals dictionary used for evaluating string conditions in xfail/skipif markers.

This is useful when the condition for a marker requires objects that are expensive or impossible to obtain at collection time, which plain boolean conditions would require.

Added in version 6.2.

Parameters:

config (Config) – The pytest config object.

Returns:

A dictionary of additional globals to add.

Return type:

dict[str, Any]

Use in conftest plugins

Any conftest file can implement this hook. For a given item, only conftest files in parent directories of the item are consulted.
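As a sketch, this hook can expose extra names to string conditions such as `@pytest.mark.skipif("not has_gpu")` without importing them in every test module. The name "has_gpu" and the `detect_gpu` helper are hypothetical:

```python
# Hypothetical conftest.py: extra names for skipif/xfail string conditions.

def detect_gpu():
    # placeholder for a real (possibly expensive) hardware probe
    return False

def pytest_markeval_namespace(config):
    return {"has_gpu": detect_gpu()}
```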

After collection is complete, you can modify the order of items, delete or otherwise amend the test items:

pytest_collection_modifyitems(session, config, items)[source]

Called after collection has been performed. May filter or re-order the items in-place.

When items are deselected (filtered out from items), the hook pytest_deselected must be called explicitly with the deselected items to properly notify other plugins, e.g. with config.hook.pytest_deselected(items=deselected_items).

Parameters:

Use in conftest plugins

Any conftest plugin can implement this hook.

Note

If this hook is implemented in conftest.py files, it always receives all collected items, not only those under the conftest.py where it is implemented.
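The deselection protocol described above can be sketched as follows; filtering on a hypothetical "wip" keyword is an assumption for illustration:

```python
# Hypothetical conftest.py: drop items carrying a "wip" keyword and notify
# other plugins via pytest_deselected, as the hook documentation requires.

def pytest_collection_modifyitems(session, config, items):
    deselected = [item for item in items if "wip" in item.keywords]
    if deselected:
        items[:] = [item for item in items if item not in deselected]
        config.hook.pytest_deselected(items=deselected)
```

Note the in-place assignment `items[:] = ...`: rebinding `items` to a new list would have no effect on the collection.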

pytest_collection_finish(session)[source]

Called after collection has been performed and modified.

Parameters:

session (Session) – The pytest session object.

Use in conftest plugins

Any conftest plugin can implement this hook.

Test running (runtest) hooks

All runtest related hooks receive a pytest.Item object.

pytest_runtestloop(session)[source]

Perform the main runtest loop (after collection finished).

The default hook implementation performs the runtest protocol for all items collected in the session (session.items), unless the collection failed or the collectonly pytest option is set.

If at any point pytest.exit() is called, the loop is terminated immediately.

If at any point session.shouldfail or session.shouldstop are set, the loop is terminated after the runtest protocol for the current item is finished.

Parameters:

session (Session) – The pytest session object.

Stops at first non-None result, see firstresult: stop at first non-None result. The return value is not used, but only stops further processing.

Use in conftest plugins

Any conftest file can implement this hook.

pytest_runtest_protocol(item, nextitem)[source]

Perform the runtest protocol for a single test item.

The default runtest protocol is this (see individual hooks for full details):

Parameters:

Stops at first non-None result, see firstresult: stop at first non-None result. The return value is not used, but only stops further processing.

Use in conftest plugins

Any conftest file can implement this hook.

pytest_runtest_logstart(nodeid, location)[source]

Called at the start of running the runtest protocol for a single item.

See pytest_runtest_protocol for a description of the runtest protocol.

Parameters:

Use in conftest plugins

Any conftest file can implement this hook. For a given item, only conftest files in the item’s directory and its parent directories are consulted.

pytest_runtest_logfinish(nodeid, location)[source]

Called at the end of running the runtest protocol for a single item.

See pytest_runtest_protocol for a description of the runtest protocol.

Parameters:

Use in conftest plugins

Any conftest file can implement this hook. For a given item, only conftest files in the item’s directory and its parent directories are consulted.

pytest_runtest_setup(item)[source]

Called to perform the setup phase for a test item.

The default implementation runs setup() on item and all of its parents (which haven’t been setup yet). This includes obtaining the values of fixtures required by the item (which haven’t been obtained yet).

Parameters:

item (Item) – The item.

Use in conftest plugins

Any conftest file can implement this hook. For a given item, only conftest files in the item’s directory and its parent directories are consulted.
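A common use of this hook is conditional skipping. In the sketch below, the `needs_network` marker and the `--run-network` option are hypothetical; the option would have to be registered separately via pytest_addoption:

```python
# Hypothetical conftest.py: skip network tests unless explicitly enabled.
import pytest

def pytest_runtest_setup(item):
    if item.get_closest_marker("needs_network") is not None:
        if not item.config.getoption("--run-network", default=False):
            pytest.skip("need --run-network option to run")
```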

pytest_runtest_call(item)[source]

Called to run the test for test item (the call phase).

The default implementation calls item.runtest().

Parameters:

item (Item) – The item.

Use in conftest plugins

Any conftest file can implement this hook. For a given item, only conftest files in the item’s directory and its parent directories are consulted.

pytest_runtest_teardown(item, nextitem)[source]

Called to perform the teardown phase for a test item.

The default implementation runs the finalizers and calls teardown()on item and all of its parents (which need to be torn down). This includes running the teardown phase of fixtures required by the item (if they go out of scope).

Parameters:

Use in conftest plugins

Any conftest file can implement this hook. For a given item, only conftest files in the item’s directory and its parent directories are consulted.

pytest_runtest_makereport(item, call)[source]

Called to create a TestReport for each of the setup, call and teardown runtest phases of a test item.

See pytest_runtest_protocol for a description of the runtest protocol.

Parameters:

Stops at first non-None result, see firstresult: stop at first non-None result.

Use in conftest plugins

Any conftest file can implement this hook. For a given item, only conftest files in the item’s directory and its parent directories are consulted.

For a deeper understanding you may look at the default implementation of these hooks in _pytest.runner, and perhaps also in _pytest.pdb, which interacts with _pytest.capture and its input/output capturing in order to drop into interactive debugging immediately when a test failure occurs.

pytest_pyfunc_call(pyfuncitem)[source]

Call underlying test function.

Stops at first non-None result, see firstresult: stop at first non-None result.

Parameters:

pyfuncitem (Function) – The function item.

Use in conftest plugins

Any conftest file can implement this hook. For a given item, only conftest files in the item’s directory and its parent directories are consulted.

Reporting hooks

Session related reporting hooks:

pytest_collectstart(collector)[source]

Collector starts collecting.

Parameters:

collector (Collector) – The collector.

Use in conftest plugins

Any conftest file can implement this hook. For a given collector, only conftest files in the collector’s directory and its parent directories are consulted.

pytest_make_collect_report(collector)[source]

Perform collector.collect() and return a CollectReport.

Stops at first non-None result, see firstresult: stop at first non-None result.

Parameters:

collector (Collector) – The collector.

Use in conftest plugins

Any conftest file can implement this hook. For a given collector, only conftest files in the collector’s directory and its parent directories are consulted.

pytest_itemcollected(item)[source]

We just collected a test item.

Parameters:

item (Item) – The item.

Use in conftest plugins

Any conftest file can implement this hook. For a given item, only conftest files in the item’s directory and its parent directories are consulted.

pytest_collectreport(report)[source]

Collector finished collecting.

Parameters:

report (CollectReport) – The collect report.

Use in conftest plugins

Any conftest file can implement this hook. For a given collector, only conftest files in the collector’s directory and its parent directories are consulted.

pytest_deselected(items)[source]

Called for deselected test items, e.g. by keyword.

Note that this hook has two integration aspects for plugins:

May be called multiple times.

Parameters:

items (Sequence[Item]) – The items.

Use in conftest plugins

Any conftest file can implement this hook.

pytest_report_collectionfinish(config, start_path, startdir, items)[source]

Return a string or list of strings to be displayed after collection has finished successfully.

These strings will be displayed after the standard “collected X items” message.

Added in version 3.2.

Parameters:

Note

Lines returned by a plugin are displayed before those of plugins which ran before it. If you want to have your line(s) displayed first, use trylast=True.

Changed in version 7.0.0: The start_path parameter was added as a pathlib.Path equivalent of the startdir parameter. The startdir parameter has been deprecated.

Use in conftest plugins

Any conftest plugin can implement this hook.

pytest_report_teststatus(report, config)[source]

Return result-category, shortletter and verbose word for status reporting.

The result-category is a category in which to count the result, for example “passed”, “skipped”, “error” or the empty string.

The shortletter is shown as testing progresses, for example “.”, “s”, “E” or the empty string.

The verbose word is shown as testing progresses in verbose mode, for example “PASSED”, “SKIPPED”, “ERROR” or the empty string.

pytest may style these implicitly according to the report outcome. To provide explicit styling, return a tuple for the verbose word, for example "rerun", "R", ("RERUN", {"yellow": True}).

Parameters:

Returns:

The test status.

Return type:

TestShortLogReport | tuple[str, str, str | tuple[str, Mapping[str, bool]]]

Stops at first non-None result, see firstresult: stop at first non-None result.

Use in conftest plugins

Any conftest plugin can implement this hook.
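A sketch of the styled-tuple form described above; the "rerun" outcome is an assumption (it would be produced by some rerun plugin, not by pytest itself), and returning None defers to pytest for all other reports:

```python
# Hypothetical conftest.py: report a custom "rerun" outcome with a yellow "R".

def pytest_report_teststatus(report, config):
    if report.when == "call" and getattr(report, "outcome", None) == "rerun":
        return "rerun", "R", ("RERUN", {"yellow": True})
    return None  # let pytest handle passed/failed/skipped as usual
```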

pytest_report_to_serializable(config, report)[source]

Serialize the given report object into a data structure suitable for sending over the wire, e.g. converted to JSON.

Parameters:

Use in conftest plugins

Any conftest file can implement this hook. The exact details may depend on the plugin which calls the hook.

pytest_report_from_serializable(config, data)[source]

Restore a report object previously serialized with pytest_report_to_serializable.

Parameters:

config (Config) – The pytest config object.

Use in conftest plugins

Any conftest file can implement this hook. The exact details may depend on the plugin which calls the hook.

pytest_terminal_summary(terminalreporter, exitstatus, config)[source]

Add a section to terminal summary reporting.

Parameters:

Added in version 4.2: The config parameter.

Use in conftest plugins

Any conftest plugin can implement this hook.
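A minimal sketch: append a custom section after the run. terminalreporter is a TerminalReporter, and write_sep/write_line are part of its writing API; the section title is an assumption:

```python
# Hypothetical conftest.py: add a custom section to the terminal summary.

def pytest_terminal_summary(terminalreporter, exitstatus, config):
    terminalreporter.write_sep("-", "my plugin summary")
    terminalreporter.write_line(f"exit status: {exitstatus}")
```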

pytest_fixture_setup(fixturedef, request)[source]

Perform fixture setup execution.

Parameters:

Returns:

The return value of the call to the fixture function.

Return type:

object | None

Stops at first non-None result, see firstresult: stop at first non-None result.

Use in conftest plugins

Any conftest file can implement this hook. For a given fixture, only conftest files in the fixture scope’s directory and its parent directories are consulted.

pytest_fixture_post_finalizer(fixturedef, request)[source]

Called after fixture teardown, but before the cache is cleared, so the fixture result fixturedef.cached_result is still available (not None).

Parameters:

Use in conftest plugins

Any conftest file can implement this hook. For a given fixture, only conftest files in the fixture scope’s directory and its parent directories are consulted.

pytest_warning_recorded(warning_message, when, nodeid, location)[source]

Process a warning captured by the internal pytest warnings plugin.

Parameters:

Added in version 6.0.

Use in conftest plugins

Any conftest file can implement this hook. If the warning is specific to a particular node, only conftest files in parent directories of the node are consulted.
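As a sketch, this hook can collect captured warnings (tagged with the phase they occurred in) for later display, e.g. from pytest_terminal_summary. warning_message is a warnings.WarningMessage, so the actual warning lives in its .message attribute:

```python
# Hypothetical conftest.py: accumulate captured warnings for later reporting.

captured_warnings = []

def pytest_warning_recorded(warning_message, when, nodeid, location):
    captured_warnings.append((when, nodeid, str(warning_message.message)))
```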

Central hook for reporting about test execution:

pytest_runtest_logreport(report)[source]

Process the TestReport produced for each of the setup, call and teardown runtest phases of an item.

See pytest_runtest_protocol for a description of the runtest protocol.

Use in conftest plugins

Any conftest file can implement this hook. For a given item, only conftest files in the item’s directory and its parent directories are consulted.

Assertion related hooks:

pytest_assertrepr_compare(config, op, left, right)[source]

Return explanation for comparisons in failing assert expressions.

Return None for no custom explanation, otherwise return a list of strings. The strings will be joined by newlines, but any newlines in a string will be escaped. Note that all but the first line will be indented slightly; the intention is for the first line to be a summary.

Parameters:

Use in conftest plugins

Any conftest file can implement this hook. For a given item, only conftest files in the item’s directory and its parent directories are consulted.
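A sketch of a custom comparison explanation; the Point class is hypothetical, standing in for any domain type whose default assertion output is unhelpful:

```python
# Hypothetical conftest.py: friendlier output when two Point objects
# compare unequal with ==.

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __eq__(self, other):
        return isinstance(other, Point) and (self.x, self.y) == (other.x, other.y)

def pytest_assertrepr_compare(config, op, left, right):
    if isinstance(left, Point) and isinstance(right, Point) and op == "==":
        return [
            "Comparing Point instances:",   # first line is the summary
            f"   x: {left.x} != {right.x}",
            f"   y: {left.y} != {right.y}",
        ]
    return None  # no custom explanation for anything else
```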

pytest_assertion_pass(item, lineno, orig, expl)[source]

Called whenever an assertion passes.

Added in version 5.0.

Use this hook to do some processing after a passing assertion. The original assertion information is available in the orig string and the pytest introspected assertion information is available in the expl string.

This hook must be explicitly enabled by the enable_assertion_pass_hook ini-file option:

[pytest]
enable_assertion_pass_hook=true

You need to clean the .pyc files in your project directory and interpreter libraries when enabling this option, as assertions will need to be rewritten.

Parameters:

Use in conftest plugins

Any conftest file can implement this hook. For a given item, only conftest files in the item’s directory and its parent directories are consulted.

Debugging/Interaction hooks

There are few hooks which can be used for special reporting or interaction with exceptions:

pytest_internalerror(excrepr, excinfo)[source]

Called for internal errors.

Return True to suppress the fallback handling of printing an INTERNALERROR message directly to sys.stderr.

Parameters:

Use in conftest plugins

Any conftest plugin can implement this hook.

pytest_keyboard_interrupt(excinfo)[source]

Called for keyboard interrupt.

Parameters:

excinfo (ExceptionInfo[KeyboardInterrupt | Exit]) – The exception info.

Use in conftest plugins

Any conftest plugin can implement this hook.

pytest_exception_interact(node, call, report)[source]

Called when an exception was raised which can potentially be interactively handled.

May be called during collection (see pytest_make_collect_report), in which case report is a CollectReport.

May be called during runtest of an item (see pytest_runtest_protocol), in which case report is a TestReport.

This hook is not called if the exception that was raised is an internal exception like skip.Exception.

Parameters:

Use in conftest plugins

Any conftest file can implement this hook. For a given node, only conftest files in parent directories of the node are consulted.
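As a sketch, this hook can record which nodes failed during the call phase; call.excinfo holds the captured exception info, and the list of failures could be consumed by another hook later:

```python
# Hypothetical conftest.py: remember failing node IDs and their exceptions.

failed_nodes = []

def pytest_exception_interact(node, call, report):
    if call.when == "call":
        failed_nodes.append((node.nodeid, repr(call.excinfo.value)))
```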

pytest_enter_pdb(config, pdb)[source]

Called upon pdb.set_trace().

Can be used by plugins to take special action just before the python debugger enters interactive mode.

Parameters:

Use in conftest plugins

Any conftest plugin can implement this hook.

pytest_leave_pdb(config, pdb)[source]

Called when leaving pdb (e.g. with continue after pdb.set_trace()).

Can be used by plugins to take special action just after the python debugger leaves interactive mode.

Parameters:

Use in conftest plugins

Any conftest plugin can implement this hook.

Collection tree objects

These are the collector and item classes (collectively called “nodes”) which make up the collection tree.

Node

class Node[source]

Bases: ABC

Base class of Collector and Item, the components of the test collection tree.

Collectors are the internal nodes of the tree, and Items are the leaf nodes.

fspath_: LEGACY_PATH_

A LEGACY_PATH copy of the path attribute. Intended for use by methods not yet migrated to pathlib.Path, such as Item.reportinfo. Will be deprecated in a future release; prefer using path instead.

name_: str_

A unique name within the scope of the parent node.

parent

The parent collector node.

config_: Config_

The pytest config object.

session_: Session_

The pytest session this node is part of.

path_: pathlib.Path_

Filesystem path where this node was collected from (can be None).

keywords_: MutableMapping[str, Any]_

Keywords/markers collected from all scopes.

own_markers_: list[Mark]_

The marker objects belonging to this node.

extra_keyword_matches_: set[str]_

Allow adding of extra keywords to use for matching.

stash_: Stash_

A place where plugins can store information on the node for their own use.

classmethod from_parent(parent, **kw)[source]

Public constructor for Nodes.

This indirection was introduced to make it possible to remove the fragile logic from the node constructors.

Subclasses can use super().from_parent(...) when overriding the construction.

Parameters:

parent (Node) – The parent node of this Node.

property ihook_: HookRelay_

fspath-sensitive hook proxy used to call pytest hooks.

warn(warning)[source]

Issue a warning for this Node.

Warnings will be displayed after the test session, unless explicitly suppressed.

Parameters:

warning (Warning) – The warning instance to issue.

Raises:

ValueError – If warning instance is not a subclass of Warning.

Example usage:

node.warn(PytestWarning("some message"))
node.warn(UserWarning("some message"))

Changed in version 6.2: Any subclass of Warning is now accepted, rather than only PytestWarning subclasses.

property nodeid_: str_

A ::-separated string denoting its collection tree address.

for ... in iter_parents()[source]

Iterate over all parent collectors starting from and including self up to the root of the collection tree.

Added in version 8.1.

listchain()[source]

Return a list of all parent collectors starting from the root of the collection tree down to and including self.

add_marker(marker, append=True)[source]

Dynamically add a marker object to the node.

Parameters:

iter_markers(name=None)[source]

Iterate over all markers of the node.

Parameters:

name (str | None) – If given, filter the results by the name attribute.

Returns:

An iterator of the markers of the node.

Return type:

Iterator[Mark]
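Because iter_markers yields markers from the closest scope outward, the first match wins. A sketch using a hypothetical @pytest.mark.timeout(seconds) marker inside a plugin helper:

```python
# Hypothetical helper: resolve the most specific timeout marker on an item.

def get_timeout(item, default=30):
    for mark in item.iter_markers(name="timeout"):
        # markers are yielded closest-first (function before class/module)
        if mark.args:
            return mark.args[0]
    return default
```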

for ... in iter_markers_with_node(name=None)[source]

Iterate over all markers of the node.

Parameters:

name (str | None) – If given, filter the results by the name attribute.

Returns:

An iterator of (node, mark) tuples.

Return type:

Iterator[tuple[Node, Mark]]

get_closest_marker(name: str) → Mark | None[source]

get_closest_marker(name: str, default: Mark) → Mark

Return the first marker matching the name, from closest (for example function) to farther level (for example module level).

Parameters:

listextrakeywords()[source]

Return a set of all extra keywords in self and any parents.

addfinalizer(fin)[source]

Register a function to be called without arguments when this node is finalized.

This method can only be called when this node is active in a setup chain, for example during self.setup().

getparent(cls)[source]

Get the closest parent node (including self) which is an instance of the given class.

Parameters:

cls (type[_NodeType]) – The node class to search for.

Returns:

The node, if found.

Return type:

_NodeType | None

repr_failure(excinfo, style=None)[source]

Return a representation of a collection or test failure.

Parameters:

excinfo (ExceptionInfo[BaseException]) – Exception information for the failure.

Collector

class Collector[source]

Bases: Node, ABC

Base class of all collectors.

Collectors create children through collect() and thus iteratively build the collection tree.

exception CollectError[source]

Bases: Exception

An error during collection, contains a custom message.

abstractmethod collect()[source]

Collect children (items and collectors) for this collector.

repr_failure(excinfo)[source]

Return a representation of a collection failure.

Parameters:

excinfo (ExceptionInfo[BaseException]) – Exception information for the failure.

name_: str_

A unique name within the scope of the parent node.

parent

The parent collector node.

config_: Config_

The pytest config object.

session_: Session_

The pytest session this node is part of.

path_: pathlib.Path_

Filesystem path where this node was collected from (can be None).

Item

class Item[source]

Bases: Node, ABC

Base class of all test invocation items.

Note that for a single function there might be multiple test invocation items.

user_properties_: list[tuple[str, object]]_

A list of tuples (name, value) that holds user defined properties for this test.

name_: str_

A unique name within the scope of the parent node.

parent

The parent collector node.

config_: Config_

The pytest config object.

session_: Session_

The pytest session this node is part of.

path_: pathlib.Path_

Filesystem path where this node was collected from (can be None).

abstractmethod runtest()[source]

Run the test case for this item.

Must be implemented by subclasses.

add_report_section(when, key, content)[source]

Add a new report section, similar to what’s done internally to add stdout and stderr captured output:

item.add_report_section("call", "stdout", "report section contents")

Parameters:

reportinfo()[source]

Get location information for this item for test reports.

Returns a tuple with three elements:

property location_: tuple[str, int | None, str]_

Returns a tuple of (relfspath, lineno, testname) for this item where relfspath is the file path relative to config.rootpath and lineno is a 0-based line number.

File

class File[source]

Bases: FSCollector, ABC

Base class for collecting tests from a file.

Working with non-python tests.

name_: str_

A unique name within the scope of the parent node.

parent

The parent collector node.

config_: Config_

The pytest config object.

session_: Session_

The pytest session this node is part of.

path_: pathlib.Path_

Filesystem path where this node was collected from (can be None).

FSCollector

class FSCollector[source]

Bases: Collector, ABC

Base class for filesystem collectors.

path_: pathlib.Path_

Filesystem path where this node was collected from (can be None).

classmethod from_parent(parent, *, fspath=None, path=None, **kw)[source]

The public constructor.

name_: str_

A unique name within the scope of the parent node.

parent

The parent collector node.

config_: Config_

The pytest config object.

session_: Session_

The pytest session this node is part of.

Session

final class Session[source]

Bases: Collector

The root of the collection tree.

Session collects the initial paths given as arguments to pytest.

exception Interrupted

Bases: KeyboardInterrupt

Signals that the test run was interrupted.

exception Failed

Bases: Exception

Signals that the test run should stop because of failed tests.

property startpath_: Path_

The path from which pytest was invoked.

Added in version 7.0.0.

isinitpath(path, *, with_parents=False)[source]

Is path an initial path?

An initial path is a path explicitly given to pytest on the command line.

Parameters:

with_parents (bool) – If set, also return True if the path is a parent of an initial path.

Changed in version 8.0: Added the with_parents parameter.

perform_collect(args: Sequence[str] | None = None, genitems: Literal[True] = True) → Sequence[Item][source]

perform_collect(args: Sequence[str] | None = None, genitems: bool = True) → Sequence[Item | Collector]

Perform the collection phase for this session.

This is called by the default pytest_collection hook implementation; see the documentation of this hook for more details. For testing purposes, it may also be called directly on a fresh Session.

This function normally recursively expands any collectors collected from the session to their items, and only items are returned. For testing purposes, this may be suppressed by passing genitems=False, in which case the return value contains these collectors unexpanded, and session.items is empty.

for ... in collect()[source]

Collect children (items and collectors) for this collector.

name_: str_

A unique name within the scope of the parent node.

parent

The parent collector node.

config_: Config_

The pytest config object.

session_: Session_

The pytest session this node is part of.

path_: pathlib.Path_

Filesystem path where this node was collected from (can be None).

Package

class Package[source]

Bases: Directory

Collector for files and directories in a Python package: a directory with an __init__.py file.

Note

Directories without an __init__.py file are instead collected by Dir by default. Both are Directory collectors.

Changed in version 8.0: Now inherits from Directory.

for ... in collect()[source]

Collect children (items and collectors) for this collector.

name_: str_

A unique name within the scope of the parent node.

parent

The parent collector node.

config_: Config_

The pytest config object.

session_: Session_

The pytest session this node is part of.

path_: pathlib.Path_

Filesystem path where this node was collected from (can be None).

Module

class Module[source]

Bases: File, PyCollector

Collector for test classes and functions in a Python module.

collect()[source]

Collect children (items and collectors) for this collector.

name_: str_

A unique name within the scope of the parent node.

parent

The parent collector node.

config_: Config_

The pytest config object.

session_: Session_

The pytest session this node is part of.

path_: pathlib.Path_

Filesystem path where this node was collected from (can be None).

Class

class Class[source]

Bases: PyCollector

Collector for test methods (and nested classes) in a Python class.

classmethod from_parent(parent, *, name, obj=None, **kw)[source]

The public constructor.

collect()[source]

Collect children (items and collectors) for this collector.

name_: str_

A unique name within the scope of the parent node.

parent

The parent collector node.

config_: Config_

The pytest config object.

session_: Session_

The pytest session this node is part of.

path_: pathlib.Path_

Filesystem path where this node was collected from (can be None).

Function

class Function[source]

Bases: PyobjMixin, Item

Item responsible for setting up and executing a Python test function.

Parameters:

originalname

Original function name, without any decorations (for example parametrization adds a "[...]" suffix to function names), used to access the underlying function object from parent (in case callobj is not given explicitly).

Added in version 3.0.

classmethod from_parent(parent, **kw)[source]

The public constructor.

property function

Underlying python ‘function’ object.

property instance

Python instance object the function is bound to.

Returns None if not a test method, e.g. for a standalone test function, a class or a module.

runtest()[source]

Execute the underlying test function.

repr_failure(excinfo)[source]

Return a representation of a collection or test failure.

Parameters:

excinfo (ExceptionInfo[BaseException]) – Exception information for the failure.

name_: str_

A unique name within the scope of the parent node.

parent

The parent collector node.

config_: Config_

The pytest config object.

session_: Session_

The pytest session this node is part of.

path_: pathlib.Path_

Filesystem path where this node was collected from (can be None).

FunctionDefinition

class FunctionDefinition[source]

Bases: Function

This class is a stop gap solution until we evolve to have actual function definition nodes and manage to get rid of metafunc.

runtest()[source]

Execute the underlying test function.

name_: str_

A unique name within the scope of the parent node.

parent

The parent collector node.

config_: Config_

The pytest config object.

session_: Session_

The pytest session this node is part of.

path_: pathlib.Path_

Filesystem path where this node was collected from (can be None).

setup()

Execute the underlying test function.

Objects

Objects accessible from fixtures or hooks, or importable from pytest.

CallInfo

final class CallInfo[source]

Result/Exception info of a function invocation.

excinfo_: ExceptionInfo[BaseException] | None_

The captured exception of the call, if it raised.

start_: float_

The system time when the call started, in seconds since the epoch.

stop_: float_

The system time when the call ended, in seconds since the epoch.

duration_: float_

The call duration, in seconds.

when_: Literal['collect', 'setup', 'call', 'teardown']_

The context of invocation: “collect”, “setup”, “call” or “teardown”.

property result_: TResult_

The return value of the call, if it didn’t raise.

Can only be accessed if excinfo is None.

classmethod from_call(func, when, reraise=None)[source]

Call func, wrapping the result in a CallInfo.

Parameters:

CollectReport

final class CollectReport[source]

Bases: BaseReport

Collection report object.

Reports can contain arbitrary extra attributes.

nodeid_: str_

Normalized collection nodeid.

outcome_: Literal['passed', 'failed', 'skipped']_

Test outcome, always one of “passed”, “failed”, “skipped”.

longrepr_: None | ExceptionInfo[BaseException] | tuple[str, int, str] | str | TerminalRepr_

None or a failure representation.

result

The collected items and collection nodes.

sections_: list[tuple[str, str]]_

Tuples of str (heading, content) with extra information for the test report. Used by pytest to add text captured from stdout, stderr, and intercepted logging events. May be used by other plugins to add arbitrary information to reports.

property caplog_: str_

Return captured log lines, if log capturing is enabled.

Added in version 3.5.

property capstderr_: str_

Return captured text from stderr, if capturing is enabled.

Added in version 3.0.

property capstdout_: str_

Return captured text from stdout, if capturing is enabled.

Added in version 3.0.

property count_towards_summary_: bool_

Experimental Whether this report should be counted towards the totals shown at the end of the test session: “1 passed, 1 failure, etc”.

Note

This function is considered experimental, so beware that it is subject to changes even in patch releases.

property failed_: bool_

Whether the outcome is failed.

property fspath_: str_

The path portion of the reported node, as a string.

property head_line_: str | None_

Experimental The head line shown with longrepr output for this report, more commonly during traceback representation during failures:

________ Test.foo ________

In the example above, the head_line is “Test.foo”.

Note

This function is considered experimental, so beware that it is subject to changes even in patch releases.

property longreprtext_: str_

Read-only property that returns the full string representation of longrepr.

Added in version 3.0.

property passed_: bool_

Whether the outcome is passed.

property skipped_: bool_

Whether the outcome is skipped.

Config

final class Config[source]

Access to configuration values, pluginmanager and plugin hooks.

Parameters:

final class InvocationParams(*, args, plugins, dir)[source]

Holds parameters passed during pytest.main().

The object attributes are read-only.

Added in version 5.1.

Note

Note that the environment variable PYTEST_ADDOPTS and the addopts ini option are handled by pytest and are not included in the args attribute.

Plugins accessing InvocationParams must be aware of that.

args_: tuple[str, ...]_

The command-line arguments as passed to pytest.main().

plugins_: Sequence[str | object] | None_

Extra plugins, might be None.

dir_: Path_

The directory from which pytest.main() was invoked.

class ArgsSource(*values)[source]

Indicates the source of the test arguments.

Added in version 7.2.

ARGS = 1

Command line arguments.

INVOCATION_DIR = 2

Invocation directory.

TESTPATHS = 3

‘testpaths’ configuration value.

option

Access to command line option as attributes.

Type:

argparse.Namespace

invocation_params

The parameters with which pytest was invoked.

Type:

InvocationParams

pluginmanager

The plugin manager handles plugin registration and hook invocation.

Type:

PytestPluginManager

stash

A place where plugins can store information on the config for their own use.

Type:

Stash

property rootpath_: Path_

The path to the rootdir.

Type:

pathlib.Path

Added in version 6.1.

property inipath_: Path | None_

The path to the configfile.

Added in version 6.1.

add_cleanup(func)[source]

Add a function to be called when the config object gets out of use (usually coinciding with pytest_unconfigure).
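For example, a plugin might use add_cleanup to tear down a resource it opened at configure time. A minimal sketch, assuming a hypothetical log file name:

```python
def pytest_configure(config):
    # Open a (hypothetical) resource for the whole session...
    log_file = open("plugin-audit.log", "w")
    # ...and ensure it is closed when the config object goes out of use,
    # around the time pytest_unconfigure runs.
    config.add_cleanup(log_file.close)
```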

classmethod fromdictargs(option_dict, args)[source]

Constructor usable for subprocesses.

issue_config_time_warning(warning, stacklevel)[source]

Issue and handle a warning during the “configure” stage.

During pytest_configure we can’t capture warnings using the catch_warnings_for_item function because it is not possible to have hook wrappers around pytest_configure.

This function is mainly intended for plugins that need to issue warnings during pytest_configure (or similar stages).

Parameters:

addinivalue_line(name, line)[source]

Add a line to an ini-file option. The option must have been declared but might not yet be set, in which case the line becomes the first line in its value.

getini(name)[source]

Return configuration value from an ini file.

If a configuration value is not defined in an ini file, then the default value provided while registering the configuration through parser.addini will be returned. Please note that you can even provide None as a valid default value.

If default is not provided while registering using parser.addini, then a default value based on the type parameter passed to parser.addini will be returned. The default values based on type are:

  * paths, pathlist, args and linelist: empty list []
  * bool: False
  * string: empty string ""

If neither the default nor the type parameter is passed while registering the configuration through parser.addini, then the configuration is treated as a string and a default empty string "" is returned.

If the specified name hasn’t been registered through a prior parser.addini call (usually from a plugin), a ValueError is raised.

getoption(name, default=, skip=False)[source]

Return command line option value.

Parameters:

getvalue(name, path=None)[source]

Deprecated, use getoption() instead.

getvalueorskip(name, path=None)[source]

Deprecated, use getoption(skip=True) instead.

VERBOSITY_ASSERTIONS_: Final_ = 'assertions'

Verbosity type for failed assertions (see verbosity_assertions).

VERBOSITY_TEST_CASES_: Final_ = 'test_cases'

Verbosity type for test case execution (see verbosity_test_cases).

get_verbosity(verbosity_type=None)[source]

Retrieve the verbosity level for a fine-grained verbosity type.

Parameters:

verbosity_type (str | None) – Verbosity type to get level for. If a level is configured for the given type, that value will be returned. If the given type is not a known verbosity type, the global verbosity level will be returned. If the given type is None (default), the global verbosity level will be returned.

To configure a level for a fine-grained verbosity type, the configuration file should have a setting for the configuration name and a numeric value for the verbosity level. A special value of “auto” can be used to explicitly use the global verbosity level.

Example:

content of pytest.ini

[pytest]
verbosity_assertions = 2

print(config.get_verbosity())  # 1
print(config.get_verbosity(Config.VERBOSITY_ASSERTIONS))  # 2

Dir

final class Dir[source]

Collector of files in a file system directory.

Added in version 8.0.

Note

Python directories with an __init__.py file are instead collected by Package by default. Both are Directory collectors.

classmethod from_parent(parent, *, path)[source]

The public constructor.

Parameters:

for ... in collect()[source]

Collect children (items and collectors) for this collector.

name_: str_

A unique name within the scope of the parent node.

parent

The parent collector node.

config_: Config_

The pytest config object.

session_: Session_

The pytest session this node is part of.

path_: pathlib.Path_

Filesystem path where this node was collected from (can be None).

Directory

class Directory[source]

Base class for collecting files from a directory.

A basic directory collector does the following: goes over the files and sub-directories in the directory and creates collectors for them by calling the hooks pytest_collect_directory and pytest_collect_file, after checking that they are not ignored using pytest_ignore_collect.

The default directory collectors are Dir and Package.

Added in version 8.0.

Using a custom directory collector.

name_: str_

A unique name within the scope of the parent node.

parent

The parent collector node.

config_: Config_

The pytest config object.

session_: Session_

The pytest session this node is part of.

path_: pathlib.Path_

Filesystem path where this node was collected from (can be None).

ExceptionInfo

final class ExceptionInfo[source]

Wraps sys.exc_info() objects and offers help for navigating the traceback.

classmethod from_exception(exception, exprinfo=None)[source]

Return an ExceptionInfo for an existing exception.

The exception must have a non-None __traceback__ attribute, otherwise this function fails with an assertion error. This means that the exception must have been raised, or have had a traceback added with the with_traceback() method.

Parameters:

exprinfo (str | None) – A text string helping to determine if we should strip AssertionError from the output. Defaults to the exception message/__str__().

Added in version 7.4.

classmethod from_exc_info(exc_info, exprinfo=None)[source]

Like from_exception(), but using old-style exc_info tuple.

classmethod from_current(exprinfo=None)[source]

Return an ExceptionInfo matching the current traceback.

Parameters:

exprinfo (str | None) – A text string helping to determine if we should strip AssertionError from the output. Defaults to the exception message/__str__().

classmethod for_later()[source]

Return an unfilled ExceptionInfo.

fill_unfilled(exc_info)[source]

Fill an unfilled ExceptionInfo created with for_later().

property type_: type[E]_

The exception class.

property value_: E_

The exception value.

property tb_: TracebackType_

The exception raw traceback.

property typename_: str_

The type name of the exception.

property traceback_: Traceback_

The traceback.

exconly(tryshort=False)[source]

Return the exception as a string.

When ‘tryshort’ resolves to True, and the exception is an AssertionError, only the actual exception part of the exception representation is returned (so ‘AssertionError: ’ is removed from the beginning).

errisinstance(exc)[source]

Return True if the exception is an instance of exc.

Consider using isinstance(excinfo.value, exc) instead.

getrepr(showlocals=False, style='long', abspath=False, tbfilter=True, funcargs=False, truncate_locals=True, truncate_args=True, chain=True)[source]

Return str()able representation of this exception info.

Parameters:

Changed in version 3.9: Added the chain parameter.

match(regexp)[source]

Check whether the regular expression regexp matches the string representation of the exception using re.search().

If it matches, True is returned; otherwise an AssertionError is raised.
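A short sketch: excinfo objects from pytest.raises support match() directly, running re.search() against the string form of the exception:

```python
import pytest

with pytest.raises(ValueError) as excinfo:
    int("forty-two")

# re.search() on str(excinfo.value); returns True on success,
# raises AssertionError on failure.
assert excinfo.match(r"invalid literal for int")
```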

group_contains(expected_exception, *, match=None, depth=None)[source]

Check whether a captured exception group contains a matching exception.

Parameters:

Added in version 8.0.

ExitCode

final class ExitCode(*values)[source]

Encodes the valid exit codes used by pytest.

Currently users and plugins may supply other exit codes as well.

Added in version 5.0.

OK = 0

Tests passed.

TESTS_FAILED = 1

Tests failed.

INTERRUPTED = 2

pytest was interrupted.

INTERNAL_ERROR = 3

An internal error got in the way.

USAGE_ERROR = 4

pytest was misused.

NO_TESTS_COLLECTED = 5

pytest couldn’t find tests.
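Because ExitCode is an integer enum, values compare equal to plain integers, such as the return value of pytest.main() or a process exit status. A sketch of how a (hypothetical) CI wrapper might use it:

```python
import pytest

# IntEnum members compare equal to plain ints.
assert pytest.ExitCode.OK == 0
assert pytest.ExitCode.NO_TESTS_COLLECTED == 5

def ci_ok(code: int) -> bool:
    # A wrapper that also treats "no tests collected" as success.
    return code in (pytest.ExitCode.OK, pytest.ExitCode.NO_TESTS_COLLECTED)

assert ci_ok(0) and ci_ok(5) and not ci_ok(pytest.ExitCode.TESTS_FAILED)
```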

FixtureDef

final class FixtureDef[source]

Bases: Generic[FixtureValue]

A container for a fixture definition.

Note: At this time, only explicitly documented fields and methods are considered public stable API.

property scope_: Literal['session', 'package', 'module', 'class', 'function']_

Scope string, one of “function”, “class”, “module”, “package”, “session”.

execute(request)[source]

Return the value of this fixture, executing it if not cached.

MarkDecorator

class MarkDecorator[source]

A decorator for applying a mark on test functions and classes.

MarkDecorators are created with pytest.mark:

mark1 = pytest.mark.NAME  # Simple MarkDecorator
mark2 = pytest.mark.NAME(name1=value)  # Parametrized MarkDecorator

and can then be applied as decorators to test functions:

@mark2
def test_function():
    pass

When a MarkDecorator is called, it does the following:

  1. If called with a single class as its only positional argument and no additional keyword arguments, it attaches the mark to the class so it gets applied automatically to all test cases found in that class.
  2. If called with a single function as its only positional argument and no additional keyword arguments, it attaches the mark to the function, containing all the arguments already stored internally in theMarkDecorator.
  3. When called in any other case, it returns a new MarkDecorator instance with the original MarkDecorator’s content updated with the arguments passed to this call.

Note: The rules above prevent a MarkDecorator from storing only a single function or class reference as its positional argument with no additional keyword or positional arguments. You can work around this by using with_args().

property name_: str_

Alias for mark.name.

property args_: tuple[Any, ...]_

Alias for mark.args.

property kwargs_: Mapping[str, Any]_

Alias for mark.kwargs.

with_args(*args, **kwargs)[source]

Return a MarkDecorator with extra arguments added.

Unlike calling the MarkDecorator, with_args() can be used even if the sole argument is a callable/class.
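A sketch of the workaround, using a hypothetical mark name: calling pytest.mark.providers(len) directly would apply the mark to len (rule 2 above), while with_args() stores the callable as an argument instead:

```python
import pytest

# "providers" is a hypothetical mark name; len stands in for any callable
# you want to pass as a mark argument rather than decorate.
deco = pytest.mark.providers.with_args(len)

assert deco.name == "providers"
assert deco.args == (len,)  # stored as an argument, not decorated
```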

MarkGenerator

final class MarkGenerator[source]

Factory for MarkDecorator objects - exposed as a pytest.mark singleton instance.

Example:

import pytest

@pytest.mark.slowtest
def test_function():
    pass

applies a ‘slowtest’ Mark on test_function.

Mark

final class Mark[source]

A pytest mark.

name_: str_

Name of the mark.

args_: tuple[Any, ...]_

Positional arguments of the mark decorator.

kwargs_: Mapping[str, Any]_

Keyword arguments of the mark decorator.

combined_with(other)[source]

Return a new Mark which is a combination of this Mark and another Mark.

Combines by appending args and merging kwargs.

Parameters:

other (Mark) – The mark to combine with.

Return type:

Mark

Metafunc

final class Metafunc[source]

Objects passed to the pytest_generate_tests hook.

They help to inspect a test function and to generate tests according to test configuration or values specified in the class or module where a test function is defined.

definition

Access to the underlying _pytest.python.FunctionDefinition.

config

Access to the pytest.Config object for the test session.

module

The module object where the test function is defined in.

function

Underlying Python test function.

fixturenames

Set of fixture names required by the test function.

cls

Class object where the test function is defined in or None.

parametrize(argnames, argvalues, indirect=False, ids=None, scope=None, *, _param_mark=None)[source]

Add new invocations to the underlying test function using the list of argvalues for the given argnames. Parametrization is performed during the collection phase. If you need to set up expensive resources, see about setting indirect so the work is done at test setup time rather than at collection time.

Can be called multiple times per test function (but only on different argument names), in which case each call parametrizes all previous parametrizations, e.g.

unparametrized: t
parametrize ["x", "y"]: t[x], t[y]
parametrize [1, 2]: t[x-1], t[x-2], t[y-1], t[y-2]

Parameters:
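A minimal sketch of calling parametrize from the pytest_generate_tests hook, which receives a Metafunc (the fixture name "n" and the ids are hypothetical):

```python
# content of conftest.py (sketch)
def pytest_generate_tests(metafunc):
    # Only parametrize tests that actually request an "n" argument.
    if "n" in metafunc.fixturenames:
        metafunc.parametrize("n", [1, 2, 3], ids=["one", "two", "three"])
```

A test like `def test_square(n): assert n * n >= n` would then run three times, as test_square[one], test_square[two] and test_square[three].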

Parser

final class Parser[source]

Parser for command line arguments and ini-file values.

Variables:

extra_info – Dict of generic param -> value to display in case there’s an error processing the command line arguments.

getgroup(name, description='', after=None)[source]

Get (or create) a named option Group.

Parameters:

Returns:

The option group.

Return type:

OptionGroup

The returned group object has an addoption method with the same signature as parser.addoption but will be shown in the respective group in the output of pytest --help.

addoption(*opts, **attrs)[source]

Register a command line option.

Parameters:

After command line parsing, options are available on the pytest config object via config.option.NAME where NAME is usually set by passing a dest attribute, for example addoption("--long", dest="NAME", ...).
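A sketch combining getgroup and addoption in a plugin's pytest_addoption hook (the flag, group name and dest are hypothetical):

```python
# content of conftest.py (sketch)
def pytest_addoption(parser):
    # Register --run-slow under its own section in `pytest --help`.
    group = parser.getgroup("slow", "slow test handling")
    group.addoption(
        "--run-slow",
        action="store_true",
        dest="run_slow",
        default=False,
        help="also run tests marked as slow",
    )
```

After parsing, the value is available as config.getoption("run_slow") or config.option.run_slow.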

parse_known_args(args, namespace=None)[source]

Parse the known arguments at this point.

Returns:

An argparse namespace object.

Return type:

Namespace

parse_known_and_unknown_args(args, namespace=None)[source]

Parse the known arguments at this point, and also return the remaining unknown arguments.

Returns:

A tuple containing an argparse namespace object for the known arguments, and a list of the unknown arguments.

Return type:

tuple[Namespace, list[str]]

addini(name, help, type=None, default=)[source]

Register an ini-file option.

Parameters:

The value of ini-variables can be retrieved via a call to config.getini(name).
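A sketch of registering an ini option and reading it back with config.getini (the option name and default are hypothetical):

```python
# content of conftest.py (sketch)
def pytest_addoption(parser):
    parser.addini(
        "api_base_url",
        help="base URL used by the API test helpers",
        type="string",
        default="http://localhost:8000",
    )

def pytest_configure(config):
    # Returns the value from the ini file, or the registered default.
    base_url = config.getini("api_base_url")
    # ...hand base_url to a helper, store it in config.stash, etc.
```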

OptionGroup

class OptionGroup[source]

A group of options shown in its own section.

addoption(*opts, **attrs)[source]

Add an option to this group.

If a shortened version of a long option is specified, it will be suppressed in the help. addoption('--twowords', '--two-words') results in help showing --two-words only, but --twowords gets accepted and the automatic destination is in args.twowords.

Parameters:

PytestPluginManager

final class PytestPluginManager[source]

Bases: PluginManager

A pluggy.PluginManager with additional pytest-specific functionality:

register(plugin, name=None)[source]

Register a plugin and return its name.

Parameters:

name (str | None) – The name under which to register the plugin. If not specified, a name is generated using get_canonical_name().

Returns:

The plugin name. If the name is blocked from registering, returns None.

Return type:

str | None

If the plugin is already registered, raises a ValueError.

getplugin(name)[source]

hasplugin(name)[source]

Return whether a plugin with the given name is registered.

import_plugin(modname, consider_entry_points=False)[source]

Import a plugin with modname.

If consider_entry_points is True, entry point names are also considered to find a plugin.

add_hookcall_monitoring(before, after)

Add before/after tracing functions for all hooks.

Returns an undo function which, when called, removes the added tracers.

before(hook_name, hook_impls, kwargs) will be called ahead of all hook calls and receive a hookcaller instance, a list of HookImpl instances and the keyword arguments for the hook call.

after(outcome, hook_name, hook_impls, kwargs) receives the same arguments as before but also a Result object which represents the result of the overall hook call.

add_hookspecs(module_or_class)

Add new hook specifications defined in the given module_or_class.

Functions are recognized as hook specifications if they have been decorated with a matching HookspecMarker.

check_pending()

Verify that all hooks which have not been verified against a hook specification are optional, otherwise raisePluginValidationError.

enable_tracing()

Enable tracing of hook calls.

Returns an undo function which, when called, removes the added tracing.

get_canonical_name(plugin)

Return a canonical name for a plugin object.

Note that a plugin may be registered under a different name specified by the caller of register(plugin, name). To obtain the name of a registered plugin use get_name(plugin) instead.

get_hookcallers(plugin)

Get all hook callers for the specified plugin.

Returns:

The hook callers, or None if plugin is not registered in this plugin manager.

Return type:

list[HookCaller] | None

get_name(plugin)

Return the name the plugin is registered under, or None if it isn’t.

get_plugin(name)

Return the plugin registered under the given name, if any.

get_plugins()

Return a set of all registered plugin objects.

has_plugin(name)

Return whether a plugin with the given name is registered.

is_blocked(name)

Return whether the given plugin name is blocked.

is_registered(plugin)

Return whether the plugin is already registered.

list_name_plugin()

Return a list of (name, plugin) pairs for all registered plugins.

list_plugin_distinfo()

Return a list of (plugin, distinfo) pairs for all setuptools-registered plugins.

load_setuptools_entrypoints(group, name=None)

Load modules from querying the specified setuptools group.

Parameters:

Returns:

The number of plugins loaded by this call.

Return type:

int

set_blocked(name)

Block registrations of the given name, unregister if already registered.

subset_hook_caller(name, remove_plugins)

Return a proxy HookCaller instance for the named method which manages calls to all registered plugins except the ones from remove_plugins.

unblock(name)

Unblocks a name.

Returns whether the name was actually blocked.

unregister(plugin=None, name=None)

Unregister a plugin and all of its hook implementations.

The plugin can be specified either by the plugin object or the plugin name. If both are specified, they must agree.

Returns the unregistered plugin, or None if not found.

project_name_: Final_

The project name.

hook_: Final_

The “hook relay”, used to call a hook on all registered plugins. See Calling hooks.

trace_: Final[_tracing.TagTracerSub]_

The tracing entry point. See Built-in tracing.

TestReport

final class TestReport[source]

Bases: BaseReport

Basic test report object (also used for setup and teardown calls if they fail).

Reports can contain arbitrary extra attributes.

nodeid_: str_

Normalized collection nodeid.

location_: tuple[str, int | None, str]_

A (filesystempath, lineno, domaininfo) tuple indicating the actual location of a test item - it might be different from the collected one e.g. if a method is inherited from a different module. The filesystempath may be relative to config.rootdir. The line number is 0-based.

keywords_: Mapping[str, Any]_

A name -> value dictionary containing all keywords and markers associated with a test invocation.

outcome_: Literal['passed', 'failed', 'skipped']_

Test outcome, always one of “passed”, “failed”, “skipped”.

longrepr_: None | ExceptionInfo[BaseException] | tuple[str, int, str] | str | TerminalRepr_

None or a failure representation.

when_: str | None_

One of ‘setup’, ‘call’, ‘teardown’ to indicate runtest phase.

user_properties

A list of (name, value) tuples holding user-defined properties of the test.

sections_: list[tuple[str, str]]_

Tuples of str (heading, content) with extra information for the test report. Used by pytest to add text captured from stdout, stderr, and intercepted logging events. May be used by other plugins to add arbitrary information to reports.

duration_: float_

Time it took to run just the test.

start_: float_

The system time when the call started, in seconds since the epoch.

stop_: float_

The system time when the call ended, in seconds since the epoch.

classmethod from_item_and_call(item, call)[source]

Create and fill a TestReport with standard item and call info.

Parameters:

property caplog_: str_

Return captured log lines, if log capturing is enabled.

Added in version 3.5.

property capstderr_: str_

Return captured text from stderr, if capturing is enabled.

Added in version 3.0.

property capstdout_: str_

Return captured text from stdout, if capturing is enabled.

Added in version 3.0.

property count_towards_summary_: bool_

Experimental Whether this report should be counted towards the totals shown at the end of the test session: “1 passed, 1 failure, etc”.

Note

This function is considered experimental, so beware that it is subject to changes even in patch releases.

property failed_: bool_

Whether the outcome is failed.

property fspath_: str_

The path portion of the reported node, as a string.

property head_line_: str | None_

Experimental The head line shown with longrepr output for this report, most commonly in the traceback representation of failures:

________ Test.foo ________

In the example above, the head_line is “Test.foo”.

Note

This function is considered experimental, so beware that it is subject to changes even in patch releases.

property longreprtext_: str_

Read-only property that returns the full string representation of longrepr.

Added in version 3.0.

property passed_: bool_

Whether the outcome is passed.

property skipped_: bool_

Whether the outcome is skipped.

TestShortLogReport

class TestShortLogReport[source]

Used to store the test status result category, shortletter and verbose word. For example "rerun", "R", ("RERUN", {"yellow": True}).

Variables:

category_: str_

Alias for field number 0

letter_: str_

Alias for field number 1

word_: str | tuple[str, Mapping[str, bool]]_

Alias for field number 2

Result

Result object used within hook wrappers, see Result in the pluggy documentation for more information.

Stash

class Stash[source]

Stash is a type-safe heterogeneous mutable mapping that allows keys and value types to be defined separately from where it (the Stash) is created.

Usually you will be given an object which has a Stash, for exampleConfig or a Node:

stash: Stash = some_object.stash

If a module or plugin wants to store data in this Stash, it creates StashKeys for its keys (at the module level):

At the top-level of the module

some_str_key = StashKey[str]()
some_bool_key = StashKey[bool]()

To store information:

Value type must match the key.

stash[some_str_key] = "value"
stash[some_bool_key] = True

To retrieve the information:

The static type of some_str is str.

some_str = stash[some_str_key]

The static type of some_bool is bool.

some_bool = stash[some_bool_key]

Added in version 7.0.

__setitem__(key, value)[source]

Set a value for key.

__getitem__(key)[source]

Get the value for key.

Raises KeyError if the key wasn’t set before.

get(key, default)[source]

Get the value for key, or return default if the key wasn’t set before.

setdefault(key, default)[source]

Return the value of key if already set, otherwise set the value of key to default and return default.

__delitem__(key)[source]

Delete the value for key.

Raises KeyError if the key wasn’t set before.

__contains__(key)[source]

Return whether key was set.

__len__()[source]

Return how many items exist in the stash.

class StashKey[source]

Bases: Generic[T]

StashKey is an object used as a key to a Stash.

A StashKey is associated with the type T of the value of the key.

A StashKey is unique and cannot conflict with another key.

Added in version 7.0.

Global Variables

pytest treats some global variables in a special manner when defined in a test module or conftest.py files.

collect_ignore

Tutorial: Customizing test collection

Can be declared in conftest.py files to exclude test directories or modules. Needs to be a list of paths (str, pathlib.Path or any os.PathLike).

collect_ignore = ["setup.py"]

collect_ignore_glob

Tutorial: Customizing test collection

Can be declared in conftest.py files to exclude test directories or modules with Unix shell-style wildcards. Needs to be list[str] where str can contain glob patterns.

collect_ignore_glob = ["*_ignore.py"]

pytest_plugins

Tutorial: Requiring/Loading plugins in a test module or conftest file

Can be declared at the global level in test modules and conftest.py files to register additional plugins. Can be either a str or Sequence[str].

pytest_plugins = "myapp.testsupport.myplugin"

pytest_plugins = ("myapp.testsupport.tools", "myapp.testsupport.regression")

pytestmark

Tutorial: Marking whole classes or modules

Can be declared at the global level in test modules to apply one or more marks to all test functions and methods. Can be either a single mark or a list of marks (applied in left-to-right order).

import pytest

pytestmark = pytest.mark.webtest

import pytest

pytestmark = [pytest.mark.integration, pytest.mark.slow]

Environment Variables

Environment variables that can be used to change pytest’s behavior.

CI

When set (regardless of value), pytest acknowledges that it is running in a CI process. Alternative to the BUILD_NUMBER variable. See also CI Pipelines.

BUILD_NUMBER

When set (regardless of value), pytest acknowledges that it is running in a CI process. Alternative to the CI variable. See also CI Pipelines.

PYTEST_ADDOPTS

This contains a command line (parsed by the shlex module) that will be prepended to the command line given by the user; see Builtin configuration file options for more information.

PYTEST_VERSION

This environment variable is defined at the start of the pytest session and is undefined afterwards. It contains the value of pytest.__version__, and among other things can be used to easily check whether code is running from within a pytest run.
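A sketch of such a check (assumes a pytest version recent enough to set PYTEST_VERSION; the helper name is hypothetical):

```python
import os

def running_under_pytest() -> bool:
    # PYTEST_VERSION is only defined while a pytest session is active,
    # so its presence signals "this code is running inside pytest".
    return "PYTEST_VERSION" in os.environ
```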

PYTEST_CURRENT_TEST

This is not meant to be set by users, but is set by pytest internally with the name of the current test so other processes can inspect it, see PYTEST_CURRENT_TEST environment variable for more information.

PYTEST_DEBUG

When set, pytest will print tracing and debug information.

PYTEST_DEBUG_TEMPROOT

Root for temporary directories produced by fixtures like tmp_path, as discussed in Temporary directory location and retention.

PYTEST_DISABLE_PLUGIN_AUTOLOAD

When set, disables plugin auto-loading through entry point packaging metadata. Only explicitly specified plugins will be loaded.

PYTEST_PLUGINS

Contains a comma-separated list of modules that should be loaded as plugins:

export PYTEST_PLUGINS=mymodule.plugin,xdist

PYTEST_THEME

Sets a pygments style to use for the code output.

PYTEST_THEME_MODE

Sets the PYTEST_THEME to be either dark or light.

PY_COLORS

When set to 1, pytest will use color in terminal output. When set to 0, pytest will not use color. PY_COLORS takes precedence over NO_COLOR and FORCE_COLOR.

NO_COLOR

When set to a non-empty string (regardless of value), pytest will not use color in terminal output. PY_COLORS takes precedence over NO_COLOR, which takes precedence over FORCE_COLOR. See no-color.org for other libraries supporting this community standard.

FORCE_COLOR

When set to a non-empty string (regardless of value), pytest will use color in terminal output. PY_COLORS and NO_COLOR take precedence over FORCE_COLOR.

Exceptions

final exception UsageError[source]

Bases: Exception

Error in pytest usage or invocation.

final exception FixtureLookupError[source]

Bases: LookupError

Could not return a requested fixture (missing or invalid).

Warnings

Custom warnings generated in some situations such as improper usage or deprecated features.

class PytestWarning

Bases: UserWarning

Base class for all warnings emitted by pytest.

class PytestAssertRewriteWarning

Bases: PytestWarning

Warning emitted by the pytest assert rewrite module.

class PytestCacheWarning

Bases: PytestWarning

Warning emitted by the cache plugin in various situations.

class PytestCollectionWarning

Bases: PytestWarning

Warning emitted when pytest is not able to collect a file or symbol in a module.

class PytestConfigWarning

Bases: PytestWarning

Warning emitted for configuration issues.

class PytestDeprecationWarning

Bases: PytestWarning, DeprecationWarning

Warning class for features that will be removed in a future version.

class PytestExperimentalApiWarning

Bases: PytestWarning, FutureWarning

Warning category used to denote experiments in pytest.

Use sparingly as the API might change or even be removed completely in a future version.

class PytestReturnNotNoneWarning

Bases: PytestWarning

Warning emitted when a test function returns a value other than None.

class PytestRemovedIn9Warning

Bases: PytestDeprecationWarning

Warning class for features that will be removed in pytest 9.

class PytestUnhandledCoroutineWarning

Bases: PytestReturnNotNoneWarning

Warning emitted for an unhandled coroutine.

A coroutine was encountered when collecting test functions, but was not handled by any async-aware plugin. Coroutine test functions are not natively supported.

class PytestUnknownMarkWarning

Bases: PytestWarning

Warning emitted on use of unknown markers.

See How to mark test functions with attributes for details.

class PytestUnraisableExceptionWarning

Bases: PytestWarning

An unraisable exception was reported.

Unraisable exceptions are exceptions raised in __del__ implementations and similar situations when the exception cannot be raised as normal.

class PytestUnhandledThreadExceptionWarning

Bases: PytestWarning

An unhandled exception occurred in a Thread.

Such exceptions don’t propagate normally.

Consult the Internal pytest warnings section in the documentation for more information.

Configuration Options

Here is a list of builtin configuration options that may be written in a pytest.ini (or .pytest.ini),pyproject.toml, tox.ini, or setup.cfg file, usually located at the root of your repository.

To see each file format in details, see Configuration file formats.

Warning

Usage of setup.cfg is not recommended except for very simple use cases. .cfg files use a different parser than pytest.ini and tox.ini, which might cause hard-to-track-down problems. When possible, it is recommended to use the latter files, or pyproject.toml, to hold your pytest configuration.

Configuration options may be overridden on the command line by using -o/--override-ini, which can also be passed multiple times. The expected format is name=value. For example:

pytest -o console_output_style=classic -o cache_dir=/tmp/mycache

addopts

Add the specified OPTS to the set of command line arguments as if they had been specified by the user. Example: if you have this ini file content:

content of pytest.ini

[pytest]
addopts = --maxfail=2 -rf  # exit after 2 failures, report fail info

issuing pytest test_hello.py actually means:

pytest --maxfail=2 -rf test_hello.py

Default is to add no options.

cache_dir

Sets the directory where the cache plugin’s content is stored. Default directory is.pytest_cache which is created in rootdir. Directory may be relative or absolute path. If setting relative path, then directory is created relative to rootdir. Additionally, a path may contain environment variables, that will be expanded. For more information about cache plugin please refer to How to re-run failed tests and maintain state between test runs.

consider_namespace_packages

Controls whether pytest should attempt to identify namespace packages when collecting Python modules. Default is False.

Set to True if the package you are testing is part of a namespace package.

Only native namespace packages are supported, with no plans to support legacy namespace packages.

Added in version 8.1.

console_output_style

Sets the console output style while running tests:

The default is progress, but you can fall back to classic if you prefer it or the new mode is causing unexpected problems:

content of pytest.ini

[pytest]
console_output_style = classic

doctest_encoding

Default encoding to use to decode text files with docstrings. See how pytest handles doctests.

doctest_optionflags

One or more doctest flag names from the standard doctest module. See how pytest handles doctests.

empty_parameter_set_mark

Allows picking the action for empty parameter sets during parametrization.

content of pytest.ini

[pytest]
empty_parameter_set_mark = xfail

Note

The default value of this option is planned to change to xfail in future releases as this is considered less error prone, see #3155 for more details.
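As a sketch of what an "empty parameter set" is: when the argument list given to parametrize turns out to be empty (e.g. it is computed and no cases apply), the action configured by this option is applied instead of running the test.

```python
import pytest

# When the parameter list is empty, empty_parameter_set_mark decides whether
# the test is skipped (the current default), xfailed, or treated as an error.
@pytest.mark.parametrize("value", [])  # empty parameter set
def test_nothing(value):
    assert value
```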

faulthandler_timeout

Dumps the tracebacks of all threads if a test takes longer than X seconds to run (including fixture setup and teardown). Implemented using the faulthandler.dump_traceback_later() function, so all caveats there apply.

content of pytest.ini

[pytest]
faulthandler_timeout=5

For more information please refer to Fault Handler.
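Under the hood this is plain faulthandler usage; a minimal sketch of the same mechanism (armed and then immediately disarmed, so it never fires here):

```python
import faulthandler

# Arm a traceback dump that would fire after 5 seconds, much like
# faulthandler_timeout does around each test...
faulthandler.dump_traceback_later(5, exit=False)

# ...and disarm it again, as pytest does when the test finishes in time.
faulthandler.cancel_dump_traceback_later()
```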

filterwarnings

Sets a list of filters and actions that should be taken for matched warnings. By default all warnings emitted during the test session will be displayed in a summary at the end of the test session.

content of pytest.ini

[pytest]
filterwarnings =
    error
    ignore::DeprecationWarning

This tells pytest to ignore deprecation warnings and turn all other warnings into errors. For more information please refer to How to capture warnings.
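These lines use the same syntax as Python’s own warning filters, and, like later ini lines, more recently added programmatic filters take precedence. A sketch of the equivalent using the warnings module directly:

```python
import warnings

# Equivalent of the ini example above, using the warnings module directly.
# Filters added later are consulted first, so the DeprecationWarning ignore
# takes precedence over the blanket "error" filter.
with warnings.catch_warnings():
    warnings.simplefilter("error")                       # error
    warnings.simplefilter("ignore", DeprecationWarning)  # ignore::DeprecationWarning

    warnings.warn("old API", DeprecationWarning)  # silently ignored

    raised = False
    try:
        warnings.warn("something odd", UserWarning)
    except UserWarning:
        raised = True  # any other warning becomes an error
print(raised)
```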

junit_duration_report

Added in version 4.1.

Configures how durations are recorded into the JUnit XML report:

total (the default): duration times reported include setup, call, and teardown times.
call: duration times reported include only call times, excluding setup and teardown.

[pytest]
junit_duration_report = call

junit_family

Added in version 4.2.

Changed in version 6.1: Default changed to xunit2.

Configures the format of the generated JUnit XML file. The possible options are:

xunit1 (or legacy): produces old-style output, compatible with the xunit 1.0 format.
xunit2: produces xunit 2.0-style output, which should be more compatible with recent Jenkins versions. This is the default.

[pytest]
junit_family = xunit2

junit_logging

Added in version 3.5.

Changed in version 5.4: log, all, out-err options added.

Configures if captured output should be written to the JUnit XML file. Valid values are:

log: write only logging captured output.
system-out: write captured stdout contents.
system-err: write captured stderr contents.
out-err: write both captured stdout and stderr contents.
all: write captured logging, stdout and stderr contents.
no (the default): no captured output is written.

[pytest]
junit_logging = system-out

junit_log_passing_tests

Added in version 4.6.

If junit_logging != "no", configures if the captured output should be written to the JUnit XML file for passing tests. Default is True.

[pytest]
junit_log_passing_tests = False

junit_suite_name

To set the name of the root test suite xml item, you can configure the junit_suite_name option in your config file:

[pytest]
junit_suite_name = my_suite

log_auto_indent

Allow selective auto-indentation of multiline log messages.

Supports the command line option --log-auto-indent [value] and the config option log_auto_indent = [value] to set the auto-indentation behavior for all logging.

[value] can be:

True or "On": dynamically auto-indent multiline log messages.
False or "Off" or 0: do not auto-indent multiline log messages (the default behavior).
A positive integer: auto-indent multiline log messages by [value] spaces.

[pytest]
log_auto_indent = False

Supports passing the kwarg extra={"auto_indent": [value]} to calls to logging.log() to specify auto-indentation behavior for a specific entry in the log. The extra kwarg overrides the value specified on the command line or in the config.

log_cli

Enable log display during test run (also known as “live logging”). The default is False.

log_cli_date_format

Sets a time.strftime()-compatible string that will be used when formatting dates for live logging.

[pytest]
log_cli_date_format = %Y-%m-%d %H:%M:%S

For more information, see Live Logs.

log_cli_format

Sets a logging-compatible string used to format live logging messages.

[pytest]
log_cli_format = %(asctime)s %(levelname)s %(message)s

For more information, see Live Logs.

log_cli_level

Sets the minimum log message level that should be captured for live logging. The integer value or the names of the levels can be used.

[pytest]
log_cli_level = INFO

For more information, see Live Logs.

log_date_format

Sets a time.strftime()-compatible string that will be used when formatting dates for logging capture.

[pytest]
log_date_format = %Y-%m-%d %H:%M:%S

For more information, see How to manage logging.
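The value is handed straight to time.strftime(); a quick sketch of what the format above produces (the epoch timestamp here is just an illustrative input):

```python
import time

# log_date_format (and its log_cli/log_file siblings) are passed to
# time.strftime(); the ini value above renders a timestamp like this:
stamp = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(0))
print(stamp)  # e.g. "1970-01-01 00:00:00" (exact value depends on timezone)
```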

log_file

Sets a file name relative to the current working directory where log messages should be written to, in addition to the other logging facilities that are active.

[pytest]
log_file = logs/pytest-logs.txt

For more information, see How to manage logging.

log_file_date_format

Sets a time.strftime()-compatible string that will be used when formatting dates for the logging file.

[pytest]
log_file_date_format = %Y-%m-%d %H:%M:%S

For more information, see How to manage logging.

log_file_format

Sets a logging-compatible string used to format logging messages redirected to the logging file.

[pytest]
log_file_format = %(asctime)s %(levelname)s %(message)s

For more information, see How to manage logging.

log_file_level

Sets the minimum log message level that should be captured for the logging file. The integer value or the names of the levels can be used.

[pytest]
log_file_level = INFO

For more information, see How to manage logging.

log_format

Sets a logging-compatible string used to format captured logging messages.

[pytest]
log_format = %(asctime)s %(levelname)s %(message)s

For more information, see How to manage logging.
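These are standard logging %-style format strings; a sketch of what the format above yields, using the logging module directly:

```python
import io
import logging

# log_format values are ordinary logging.Formatter strings; this shows what
# the ini value above produces for a captured record.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(message)s",
                      datefmt="%Y-%m-%d %H:%M:%S")
)
logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.warning("disk space low")

print(stream.getvalue())  # e.g. "2024-01-01 12:00:00 WARNING disk space low"
```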

log_level

Sets the minimum log message level that should be captured for logging capture. The integer value or the names of the levels can be used.

[pytest]
log_level = INFO

For more information, see How to manage logging.

markers

When the --strict-markers or --strict command-line arguments are used, only known markers - defined in code by core pytest or some plugin - are allowed.

You can list additional markers in this setting to add them to the whitelist, in which case you probably want to add --strict-markers to addopts to avoid future regressions:

[pytest]
addopts = --strict-markers
markers =
    slow
    serial

Note

The use of --strict-markers is highly preferred. --strict was kept for backward compatibility only and may be confusing for others as it only applies to markers and not to other options.

minversion

Specifies a minimal pytest version required for running tests.

content of pytest.ini

[pytest]
minversion = 3.0  # will fail if we run with pytest-2.8

norecursedirs

Set the directory basename patterns to avoid when recursing for test discovery. The individual (fnmatch-style) patterns are applied to the basename of a directory to decide whether to recurse into it. Pattern matching characters:

*       matches everything
?       matches any single character
[seq]   matches any character in seq
[!seq]  matches any char not in seq

Default patterns are '*.egg', '.*', '_darcs', 'build', 'CVS', 'dist', 'node_modules', 'venv', '{arch}'. Setting norecursedirs replaces the default. Here is an example of how to avoid certain directories:

[pytest]
norecursedirs = .svn _build tmp*

This would tell pytest to not look into typical subversion or sphinx-build directories or into any tmp prefixed directory.
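The pattern semantics are those of Python’s fnmatch module, so you can check a candidate pattern directly:

```python
from fnmatch import fnmatch

# norecursedirs patterns are matched against a directory's basename,
# not its full path.
assert fnmatch(".svn", ".*")        # leading-dot directories are excluded
assert fnmatch("_build", "_build")  # literal names match themselves
assert fnmatch("tmp_xyz", "tmp*")   # prefix wildcard
assert not fnmatch("src", "tmp*")   # non-matching dirs are still recursed into
```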

Additionally, pytest will attempt to intelligently identify and ignore a virtualenv. Any directory deemed to be the root of a virtual environment will not be considered during test collection unless --collect-in-virtualenv is given. Note also that norecursedirs takes precedence over --collect-in-virtualenv; e.g. if you intend to run tests in a virtualenv with a base directory that matches '.*' you must override norecursedirs in addition to using the --collect-in-virtualenv flag.

python_classes

One or more name prefixes or glob-style patterns determining which classes are considered for test collection. Search for multiple glob patterns by adding a space between patterns. By default, pytest will consider any class prefixed with Test as a test collection. Here is an example of how to collect tests from classes that end in Suite:

[pytest]
python_classes = *Suite

Note that unittest.TestCase derived classes are always collected regardless of this option, as unittest’s own collection framework is used to collect those tests.

python_files

One or more glob-style file patterns determining which python files are considered as test modules. Search for multiple glob patterns by adding a space between patterns:

[pytest]
python_files = test_*.py check_*.py example_*.py

Or one per line:

[pytest]
python_files =
    test_*.py
    check_*.py
    example_*.py

By default, files matching test_*.py and *_test.py will be considered test modules.

python_functions

One or more name prefixes or glob-patterns determining which test functions and methods are considered tests. Search for multiple glob patterns by adding a space between patterns. By default, pytest will consider any function prefixed with test as a test. Here is an example of how to collect test functions and methods that end in _test:

[pytest]
python_functions = *_test

Note that this has no effect on methods that live on a unittest.TestCasederived class, as unittest’s own collection framework is used to collect those tests.

See Changing naming conventions for more detailed examples.

pythonpath

Sets a list of directories that should be added to the python search path. Directories will be added to the head of sys.path. Similar to the PYTHONPATH environment variable, the directories will be included in where Python will look for imported modules. Paths are relative to the rootdir directory. Directories remain in the path for the duration of the test session.

[pytest]
pythonpath = src1 src2

Note

pythonpath does not affect some imports that happen very early, most notably plugins loaded using the -p command line option.

required_plugins

A space separated list of plugins that must be present for pytest to run. Plugins can be listed with or without version specifiers directly following their name. Whitespace between different version specifiers is not allowed. If any one of the plugins is not found, emit an error.

[pytest]
required_plugins = pytest-django>=3.0.0,<4.0.0 pytest-html pytest-xdist>=1.0.0

testpaths

Sets a list of directories that should be searched for tests when no specific directories, files or test ids are given on the command line when executing pytest from the rootdir directory. File system paths may use shell-style wildcards, including the recursive ** pattern.

Useful when all project tests are in a known location to speed up test collection and to avoid picking up undesired tests by accident.

[pytest]
testpaths = testing doc

This configuration means that executing:

pytest

has the same practical effects as executing:

pytest testing doc

tmp_path_retention_count

The number of sessions for which to keep the tmp_path directories, according to tmp_path_retention_policy.

[pytest]
tmp_path_retention_count = 3

Default: 3

tmp_path_retention_policy

Controls which directories created by the tmp_path fixture are kept around, based on test outcome:

all: retains directories for all tests, regardless of the outcome.
failed: retains directories only for tests with outcome error or failed.
none: directories are always removed after each test ends, regardless of the outcome.

[pytest]
tmp_path_retention_policy = all

Default: all

usefixtures

List of fixtures that will be applied to all test functions; this is semantically the same as applying the @pytest.mark.usefixtures marker to all test functions.

[pytest]
usefixtures = clean_db

verbosity_assertions

Set a verbosity level specifically for assertion-related output, overriding the application-wide level.

[pytest]
verbosity_assertions = 2

Defaults to the application-wide verbosity level (via the -v command-line option). A special value of “auto” can be used to explicitly use the global verbosity level.

verbosity_test_cases

Set a verbosity level specifically for test case execution related output, overriding the application-wide level.

[pytest]
verbosity_test_cases = 2

Defaults to the application-wide verbosity level (via the -v command-line option). A special value of “auto” can be used to explicitly use the global verbosity level.

xfail_strict

If set to True, tests marked with @pytest.mark.xfail that actually succeed will by default fail the test suite. For more information, see strict parameter.

[pytest]
xfail_strict = True

Command-line Flags

All the command-line flags can be obtained by running pytest --help:

$ pytest --help
usage: pytest [options] [file_or_dir] [file_or_dir] [...]

positional arguments:
  file_or_dir

general:
  -k EXPRESSION         Only run tests which match the given substring expression. An expression is a Python evaluable expression where all names are substring-matched against test names and their parent classes. Example: -k 'test_method or test_other' matches all test functions and classes whose name contains 'test_method' or 'test_other', while -k 'not test_method' matches those that don't contain 'test_method' in their names. -k 'not test_method and not test_other' will eliminate the matches. Additionally keywords are matched to classes and functions containing extra names in their 'extra_keyword_matches' set, as well as functions which have names assigned directly to them. The matching is case-insensitive.
  -m MARKEXPR           Only run tests matching given mark expression. For example: -m 'mark1 and not mark2'.
  --markers             show markers (builtin, plugin and per-project ones).
  -x, --exitfirst       Exit instantly on first error or failed test
  --fixtures, --funcargs
                        Show available fixtures, sorted by plugin appearance (fixtures with leading '_' are only shown with '-v')
  --fixtures-per-test   Show fixtures per test
  --pdb                 Start the interactive Python debugger on errors or KeyboardInterrupt
  --pdbcls=modulename:classname
                        Specify a custom interactive Python debugger for use with --pdb. For example: --pdbcls=IPython.terminal.debugger:TerminalPdb
  --trace               Immediately break when running each test
  --capture=method      Per-test capturing method: one of fd|sys|no|tee-sys
  -s                    Shortcut for --capture=no
  --runxfail            Report the results of xfail tests as if they were not marked
  --lf, --last-failed   Rerun only the tests that failed at the last run (or all if none failed)
  --ff, --failed-first  Run all tests, but run the last failures first. This may re-order tests and thus lead to repeated fixture setup/teardown.
  --nf, --new-first     Run tests from new files first, then the rest of the tests sorted by file mtime
  --cache-show=[CACHESHOW]
                        Show cache contents, don't perform collection or tests. Optional argument: glob (default: '*').
  --cache-clear         Remove all cache contents at start of test run
  --lfnf, --last-failed-no-failures={all,none}
                        With --lf, determines whether to execute tests when there are no previously (known) failures or when no cached lastfailed data was found. all (the default) runs the full test suite again. none just emits a message about no known failures and exits successfully.
  --sw, --stepwise      Exit on test failure and continue from last failing test next time
  --sw-skip, --stepwise-skip
                        Ignore the first failing test but stop on the next failing test. Implicitly enables --stepwise.

Reporting:
  --durations=N         Show N slowest setup/test durations (N=0 for all)
  --durations-min=N     Minimal duration in seconds for inclusion in slowest list. Default: 0.005.
  -v, --verbose         Increase verbosity
  --no-header           Disable header
  --no-summary          Disable summary
  --no-fold-skipped     Do not fold skipped tests in short summary.
  -q, --quiet           Decrease verbosity
  --verbosity=VERBOSE   Set verbosity. Default: 0.
  -r chars              Show extra test summary info as specified by chars: (f)ailed, (E)rror, (s)kipped, (x)failed, (X)passed, (p)assed, (P)assed with output, (a)ll except passed (p/P), or (A)ll. (w)arnings are enabled by default (see --disable-warnings), 'N' can be used to reset the list. (default: 'fE').
  --disable-warnings, --disable-pytest-warnings
                        Disable warnings summary
  -l, --showlocals      Show locals in tracebacks (disabled by default)
  --no-showlocals       Hide locals in tracebacks (negate --showlocals passed through addopts)
  --tb=style            Traceback print mode (auto/long/short/line/native/no)
  --xfail-tb            Show tracebacks for xfail (as long as --tb != no)
  --show-capture={no,stdout,stderr,log,all}
                        Controls how captured stdout/stderr/log is shown on failed tests. Default: all.
  --full-trace          Don't cut any tracebacks (default is to cut)
  --color=color         Color terminal output (yes/no/auto)
  --code-highlight={yes,no}
                        Whether code should be highlighted (only if --color is also enabled). Default: yes.
  --pastebin=mode       Send failed|all info to bpaste.net pastebin service
  --junitxml, --junit-xml=path
                        Create junit-xml style report file at given path
  --junitprefix, --junit-prefix=str
                        Prepend prefix to classnames in junit-xml output

pytest-warnings:
  -W, --pythonwarnings PYTHONWARNINGS
                        Set which warnings to report, see -W option of Python itself
  --maxfail=num         Exit after first num failures or errors
  --strict-config       Any warnings encountered while parsing the pytest section of the configuration file raise errors
  --strict-markers      Markers not registered in the markers section of the configuration file raise errors
  --strict              (Deprecated) alias to --strict-markers
  -c, --config-file FILE
                        Load configuration from FILE instead of trying to locate one of the implicit configuration files.
  --continue-on-collection-errors
                        Force test execution even if collection errors occur
  --rootdir=ROOTDIR     Define root directory for tests. Can be relative path: 'root_dir', './root_dir', 'root_dir/another_dir/'; absolute path: '/home/user/root_dir'; path with variables: '$HOME/root_dir'.

collection:
  --collect-only, --co  Only collect tests, don't execute them
  --pyargs              Try to interpret all arguments as Python packages
  --ignore=path         Ignore path during collection (multi-allowed)
  --ignore-glob=path    Ignore path pattern during collection (multi-allowed)
  --deselect=nodeid_prefix
                        Deselect item (via node id prefix) during collection (multi-allowed)
  --confcutdir=dir      Only load conftest.py's relative to specified dir
  --noconftest          Don't load any conftest.py files
  --keep-duplicates     Keep duplicate tests
  --collect-in-virtualenv
                        Don't ignore tests in a local virtualenv directory
  --import-mode={prepend,append,importlib}
                        Prepend/append to sys.path when importing test modules and conftest files. Default: prepend.
  --doctest-modules     Run doctests in all .py modules
  --doctest-report={none,cdiff,ndiff,udiff,only_first_failure}
                        Choose another output format for diffs on doctest failure
  --doctest-glob=pat    Doctests file matching pattern, default: test*.txt
  --doctest-ignore-import-errors
                        Ignore doctest collection errors
  --doctest-continue-on-failure
                        For a given doctest, continue to run after the first failure

test session debugging and configuration:
  --basetemp=dir        Base temporary directory for this test run. (Warning: this directory is removed if it exists.)
  -V, --version         Display pytest version and information about plugins. When given twice, also display information about plugins.
  -h, --help            Show help message and configuration info
  -p name               Early-load given plugin module name or entry point (multi-allowed). To avoid loading of plugins, use the no: prefix, e.g. no:doctest.
  --trace-config        Trace considerations of conftest.py files
  --debug=[DEBUG_FILE_NAME]
                        Store internal tracing debug information in this log file. This file is opened with 'w' and truncated as a result, care advised. Default: pytestdebug.log.
  -o, --override-ini OVERRIDE_INI
                        Override ini option with "option=value" style, e.g. -o xfail_strict=True -o cache_dir=cache.
  --assert=MODE         Control assertion debugging tools. 'plain' performs no assertion debugging. 'rewrite' (the default) rewrites assert statements in test modules on import to provide assert expression information.
  --setup-only          Only setup fixtures, do not execute tests
  --setup-show          Show setup of fixtures while executing tests
  --setup-plan          Show what fixtures and tests would be executed but don't execute anything

logging:
  --log-level=LEVEL     Level of messages to catch/display. Not set by default, so it depends on the root/parent log handler's effective level, where it is "WARNING" by default.
  --log-format=LOG_FORMAT
                        Log format used by the logging module
  --log-date-format=LOG_DATE_FORMAT
                        Log date format used by the logging module
  --log-cli-level=LOG_CLI_LEVEL
                        CLI logging level
  --log-cli-format=LOG_CLI_FORMAT
                        Log format used by the logging module
  --log-cli-date-format=LOG_CLI_DATE_FORMAT
                        Log date format used by the logging module
  --log-file=LOG_FILE   Path to a file when logging will be written to
  --log-file-mode={w,a}
                        Log file open mode
  --log-file-level=LOG_FILE_LEVEL
                        Log file logging level
  --log-file-format=LOG_FILE_FORMAT
                        Log format used by the logging module
  --log-file-date-format=LOG_FILE_DATE_FORMAT
                        Log date format used by the logging module
  --log-auto-indent=LOG_AUTO_INDENT
                        Auto-indent multiline messages passed to the logging module. Accepts true|on, false|off or an integer.
  --log-disable=LOGGER_DISABLE
                        Disable a logger by name. Can be passed multiple times.

[pytest] ini-options in the first pytest.ini|tox.ini|setup.cfg|pyproject.toml file found:

  markers (linelist): Register new markers for test functions
  empty_parameter_set_mark (string): Default marker for empty parametersets
  norecursedirs (args): Directory patterns to avoid for recursion
  testpaths (args): Directories to search for tests when no files or directories are given on the command line
  filterwarnings (linelist): Each line specifies a pattern for warnings.filterwarnings. Processed after -W/--pythonwarnings.
  consider_namespace_packages (bool): Consider namespace packages when resolving module names during import
  usefixtures (args): List of default fixtures to be used with this project
  python_files (args): Glob-style file patterns for Python test module discovery
  python_classes (args): Prefixes or glob names for Python test class discovery
  python_functions (args): Prefixes or glob names for Python test function and method discovery
  disable_test_id_escaping_and_forfeit_all_rights_to_community_support (bool): Disable string escape non-ASCII characters, might cause unwanted side effects (use at your own risk)
  console_output_style (string): Console output: "classic", or with additional progress information ("progress" (percentage) | "count" | "progress-even-when-capture-no" (forces progress even when capture=no))
  verbosity_test_cases (string): Specify a verbosity level for test case execution, overriding the main level. Higher levels will provide more detailed information about each test case executed.
  xfail_strict (bool): Default for the strict parameter of xfail markers when not given explicitly (default: False)
  tmp_path_retention_count (string): How many sessions should we keep the tmp_path directories, according to tmp_path_retention_policy.
  tmp_path_retention_policy (string): Controls which directories created by the tmp_path fixture are kept around, based on test outcome. (all/failed/none)
  enable_assertion_pass_hook (bool): Enables the pytest_assertion_pass hook. Make sure to delete any previously generated pyc cache files.
  verbosity_assertions (string): Specify a verbosity level for assertions, overriding the main level. Higher levels will provide more detailed explanation when an assertion fails.
  junit_suite_name (string): Test suite name for JUnit report
  junit_logging (string): Write captured log messages to JUnit report: one of no|log|system-out|system-err|out-err|all
  junit_log_passing_tests (bool): Capture log information for passing tests to JUnit report
  junit_duration_report (string): Duration time to report: one of total|call
  junit_family (string): Emit XML for schema: one of legacy|xunit1|xunit2
  doctest_optionflags (args): Option flags for doctests
  doctest_encoding (string): Encoding used for doctest files
  cache_dir (string): Cache directory path
  log_level (string): Default value for --log-level
  log_format (string): Default value for --log-format
  log_date_format (string): Default value for --log-date-format
  log_cli (bool): Enable log display during test run (also known as "live logging")
  log_cli_level (string): Default value for --log-cli-level
  log_cli_format (string): Default value for --log-cli-format
  log_cli_date_format (string): Default value for --log-cli-date-format
  log_file (string): Default value for --log-file
  log_file_mode (string): Default value for --log-file-mode
  log_file_level (string): Default value for --log-file-level
  log_file_format (string): Default value for --log-file-format
  log_file_date_format (string): Default value for --log-file-date-format
  log_auto_indent (string): Default value for --log-auto-indent
  pythonpath (paths): Add paths to sys.path
  faulthandler_timeout (string): Dump the traceback of all threads if a test takes more than TIMEOUT seconds to finish
  addopts (args): Extra command line options
  minversion (string): Minimally required pytest version
  required_plugins (args): Plugins that must be present for pytest to run

Environment variables:
  CI                    When set (regardless of value), pytest knows it is running in a CI process and does not truncate summary info
  BUILD_NUMBER          Equivalent to CI
  PYTEST_ADDOPTS        Extra command line options
  PYTEST_PLUGINS        Comma-separated plugins to load during startup
  PYTEST_DISABLE_PLUGIN_AUTOLOAD
                        Set to disable plugin auto-loading
  PYTEST_DEBUG          Set to enable debug tracing of pytest's internals

to see available markers type: pytest --markers
to see available fixtures type: pytest --fixtures
(shown according to specified file_or_dir or current dir if not specified; fixtures with leading '_' are only shown with the '-v' option)