API Reference¶
This page contains the full reference to pytest’s API.
Constants¶
pytest.__version__¶
The current pytest version, as a string:
>>> import pytest
>>> pytest.__version__
'7.0.0'
pytest.version_tuple¶
Added in version 7.0.
The current pytest version, as a tuple:
>>> import pytest
>>> pytest.version_tuple
(7, 0, 0)
For pre-releases, the last component will be a string with the prerelease version:
>>> import pytest
>>> pytest.version_tuple
(7, 0, '0rc1')
Functions¶
pytest.approx¶
approx(expected, rel=None, abs=None, nan_ok=False)[source]¶
Assert that two numbers (or two ordered sequences of numbers) are equal to each other within some tolerance.
Due to the Floating-Point Arithmetic: Issues and Limitations, numbers that we would intuitively expect to be equal are not always so:
>>> 0.1 + 0.2 == 0.3
False
This problem is commonly encountered when writing tests, e.g. when making sure that floating-point values are what you expect them to be. One way to deal with this problem is to assert that two floating-point numbers are equal to within some appropriate tolerance:
>>> abs((0.1 + 0.2) - 0.3) < 1e-6
True
However, comparisons like this are tedious to write and difficult to understand. Furthermore, absolute comparisons like the one above are usually discouraged because there’s no tolerance that works well for all situations. 1e-6 is good for numbers around 1, but too small for very big numbers and too big for very small ones. It’s better to express the tolerance as a fraction of the expected value, but relative comparisons like that are even more difficult to write correctly and concisely.
The approx class performs floating-point comparisons using a syntax that’s as intuitive as possible:
>>> from pytest import approx
>>> 0.1 + 0.2 == approx(0.3)
True
The same syntax also works for ordered sequences of numbers:
>>> (0.1 + 0.2, 0.2 + 0.4) == approx((0.3, 0.6))
True
And for numpy arrays:
>>> import numpy as np
>>> np.array([0.1, 0.2]) + np.array([0.2, 0.4]) == approx(np.array([0.3, 0.6]))
True
And for a numpy array against a scalar:
>>> import numpy as np
>>> np.array([0.1, 0.2]) + np.array([0.2, 0.1]) == approx(0.3)
True
Only ordered sequences are supported, because approx needs to infer the relative position of the sequences without ambiguity. This means sets and other unordered sequences are not supported.
Finally, dictionary values can also be compared:
>>> {'a': 0.1 + 0.2, 'b': 0.2 + 0.4} == approx({'a': 0.3, 'b': 0.6})
True
The comparison will be true if both mappings have the same keys and their respective values match the expected tolerances.
Tolerances
By default, approx considers numbers within a relative tolerance of 1e-6 (i.e. one part in a million) of its expected value to be equal. This treatment would lead to surprising results if the expected value was 0.0, because nothing but 0.0 itself is relatively close to 0.0. To handle this case less surprisingly, approx also considers numbers within an absolute tolerance of 1e-12 of its expected value to be equal. Infinity and NaN are special cases. Infinity is only considered equal to itself, regardless of the relative tolerance. NaN is not considered equal to anything by default, but you can make it be equal to itself by setting the nan_ok argument to True. (This is meant to facilitate comparing arrays that use NaN to mean “no data”.)
Both the relative and absolute tolerances can be changed by passing arguments to the approx constructor:
>>> 1.0001 == approx(1)
False
>>> 1.0001 == approx(1, rel=1e-3)
True
>>> 1.0001 == approx(1, abs=1e-3)
True
If you specify abs but not rel, the comparison will not consider the relative tolerance at all. In other words, two numbers that are within the default relative tolerance of 1e-6 will still be considered unequal if they exceed the specified absolute tolerance. If you specify both abs and rel, the numbers will be considered equal if either tolerance is met:
>>> 1 + 1e-8 == approx(1)
True
>>> 1 + 1e-8 == approx(1, abs=1e-12)
False
>>> 1 + 1e-8 == approx(1, rel=1e-6, abs=1e-12)
True
You can also use approx to compare nonnumeric types, or dicts and sequences containing nonnumeric types, in which case it falls back to strict equality. This can be useful for comparing dicts and sequences that can contain optional values:
>>> {"required": 1.0000005, "optional": None} == approx({"required": 1, "optional": None})
True
>>> [None, 1.0000005] == approx([None, 1])
True
>>> ["foo", 1.0000005] == approx([None, 1])
False
If you’re thinking about using approx, then you might want to know how it compares to other good ways of comparing floating-point numbers. All of these algorithms are based on relative and absolute tolerances and should agree for the most part, but they do have meaningful differences:
- math.isclose(a, b, rel_tol=1e-9, abs_tol=0.0): True if the relative tolerance is met w.r.t. either a or b or if the absolute tolerance is met. Because the relative tolerance is calculated w.r.t. both a and b, this test is symmetric (i.e. neither a nor b is a “reference value”). You have to specify an absolute tolerance if you want to compare to 0.0 because there is no tolerance by default. More information: math.isclose().
- numpy.isclose(a, b, rtol=1e-5, atol=1e-8): True if the difference between a and b is less than the sum of the relative tolerance w.r.t. b and the absolute tolerance. Because the relative tolerance is only calculated w.r.t. b, this test is asymmetric and you can think of b as the reference value. Support for comparing sequences is provided by numpy.allclose(). More information: numpy.isclose.
- unittest.TestCase.assertAlmostEqual(a, b): True if a and b are within an absolute tolerance of 1e-7. No relative tolerance is considered, so this function is not appropriate for very large or very small numbers. Also, it’s only available in subclasses of unittest.TestCase and it’s ugly because it doesn’t follow PEP8. More information: unittest.TestCase.assertAlmostEqual().
- a == pytest.approx(b, rel=1e-6, abs=1e-12): True if the relative tolerance is met w.r.t. b or if the absolute tolerance is met. Because the relative tolerance is only calculated w.r.t. b, this test is asymmetric and you can think of b as the reference value. In the special case that you explicitly specify an absolute tolerance but not a relative tolerance, only the absolute tolerance is considered.
Note
approx can handle numpy arrays, but we recommend the specialised test helpers in Test support (numpy.testing) if you need support for comparisons, NaNs, or ULP-based tolerances.
To match strings using regex, you can use Matches from the re_assert package.
Warning
Changed in version 3.2: In order to avoid inconsistent behavior, TypeError is raised for >, >=, < and <= comparisons. The example below illustrates the problem:
assert approx(0.1) > 0.1 + 1e-10  # calls approx(0.1).__gt__(0.1 + 1e-10)
assert 0.1 + 1e-10 > approx(0.1)  # calls approx(0.1).__lt__(0.1 + 1e-10)
In the second example one expects approx(0.1).__le__(0.1 + 1e-10) to be called. But instead, approx(0.1).__lt__(0.1 + 1e-10) is used for the comparison. This is because the call hierarchy of rich comparisons follows a fixed behavior. More information: object.__ge__()
Changed in version 3.7.1: approx raises TypeError when it encounters a dict value or sequence element of nonnumeric type.
Changed in version 6.1.0: approx falls back to strict equality for nonnumeric types instead of raising TypeError.
pytest.fail¶
Tutorial: How to use skip and xfail to deal with tests that cannot succeed
fail(reason[, pytrace=True])[source]¶
Explicitly fail an executing test with the given message.
Parameters:
- reason (str) – The message to show the user as reason for the failure.
- pytrace (bool) – If False, reason represents the full failure information and no Python traceback will be reported.
Raises:
pytest.fail.Exception – The exception that is raised.
class pytest.fail.Exception¶
The exception raised by pytest.fail().
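For illustration, a minimal sketch of an imperative failure check inside a test body (the config-file path is hypothetical):
import pytest

def test_reports_missing_config(tmp_path):
    cfg = tmp_path / "app.cfg"  # hypothetical config location
    if not cfg.exists():
        pytest.fail(f"expected config file at {cfg}", pytrace=False)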
pytest.skip¶
skip(reason[, allow_module_level=False])[source]¶
Skip an executing test with the given message.
This function should be called only during testing (setup, call or teardown) or during collection by using the allow_module_level flag. This function can be called in doctests as well.
Parameters:
- reason (str) – The message to show the user as reason for the skip.
- allow_module_level (bool) – Allows this function to be called at module level. Raising the skip exception at module level will stop the execution of the module and prevent the collection of all tests in the module, even those defined before the skip call. Defaults to False.
Raises:
pytest.skip.Exception – The exception that is raised.
Note
It is better to use the pytest.mark.skipif marker when possible to declare a test to be skipped under certain conditions like mismatching platforms or dependencies. Similarly, use the # doctest: +SKIP directive (see doctest.SKIP) to skip a doctest statically.
class pytest.skip.Exception¶
The exception raised by pytest.skip().
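As an illustration of the allow_module_level flag, a sketch that skips collection of an entire module on an unsupported platform:
import sys
import pytest

if sys.platform.startswith("win"):
    pytest.skip("these tests require a POSIX platform", allow_module_level=True)

def test_posix_only(): ...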
pytest.importorskip¶
importorskip(modname, minversion=None, reason=None, *, exc_type=None)[source]¶
Import and return the requested module modname, or skip the current test if the module cannot be imported.
Parameters:
- modname (str) – The name of the module to import.
- minversion (str | None) – If given, the imported module’s __version__ attribute must be at least this minimal version, otherwise the test is still skipped.
- reason (str | None) – If given, this reason is shown as the message when the module cannot be imported.
- exc_type (type[ImportError] | None) – The exception that should be captured in order to skip modules. Must be ImportError or a subclass. If the module can be imported but raises ImportError, pytest will issue a warning to the user, as often users expect the module not to be found (which would raise ModuleNotFoundError instead). This warning can be suppressed by passing exc_type=ImportError explicitly. See pytest.importorskip default behavior regarding ImportError for details.
Returns:
The imported module. This should be assigned to its canonical name.
Raises:
pytest.skip.Exception – If the module cannot be imported.
Return type:
Example:
docutils = pytest.importorskip("docutils")
Added in version 8.2: The exc_type parameter.
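A sketch combining the module name with minversion (the version number here is illustrative):
docutils = pytest.importorskip("docutils", minversion="0.18")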
pytest.xfail¶
Imperatively xfail an executing test or setup function with the given reason.
This function should be called only during testing (setup, call or teardown).
No other code is executed after using xfail() (it is implemented internally by raising an exception).
Parameters:
reason (str) – The message to show the user as reason for the xfail.
Note
It is better to use the pytest.mark.xfail marker when possible to declare a test to be xfailed under certain conditions like known bugs or missing features.
Raises:
pytest.xfail.Exception – The exception that is raised.
class pytest.xfail.Exception¶
The exception raised by pytest.xfail().
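A minimal sketch of imperative xfailing, where valid_config() is a hypothetical helper:
def test_function():
    if not valid_config():
        pytest.xfail("failing configuration (but should work)")
    ...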
pytest.exit¶
exit(reason[, returncode=None])[source]¶
Exit testing process.
Parameters:
- reason (str) – The message to show as the reason for exiting pytest. reason has a default value only because msg is deprecated.
- returncode (int | None) – Return code to be used when exiting pytest. None means the same as 0 (no error), same as sys.exit().
Raises:
pytest.exit.Exception – The exception that is raised.
class pytest.exit.Exception¶
The exception raised by pytest.exit().
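As a sketch, pytest.exit is typically called from a hook in conftest.py to abort the whole run early; external_service_available() is a hypothetical check:
import pytest

def pytest_sessionstart(session):
    if not external_service_available():
        pytest.exit("external service unavailable, aborting run", returncode=2)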
pytest.main¶
Tutorial: Calling pytest from Python code
main(args=None, plugins=None)[source]¶
Perform an in-process test run.
Parameters:
- args (list[str] | PathLike[str] | None) – List of command line arguments. If None or not given, defaults to reading arguments directly from the process command line (sys.argv).
- plugins (Sequence[str | object] | None) – List of plugin objects to be auto-registered during initialization.
Returns:
An exit code.
Return type:
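A minimal sketch of an in-process run that propagates pytest's exit code (the "tests" directory name is an assumption):
import sys
import pytest

if __name__ == "__main__":
    sys.exit(pytest.main(["-q", "tests"]))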
pytest.param¶
param(*values[, id][, marks])[source]¶
Specify a parameter in pytest.mark.parametrize calls or parametrized fixtures.
@pytest.mark.parametrize(
    "test_input,expected",
    [
        ("3+5", 8),
        pytest.param("6*9", 42, marks=pytest.mark.xfail),
    ],
)
def test_eval(test_input, expected):
    assert eval(test_input) == expected
Parameters:
- values (object) – Variable args of the values of the parameter set, in order.
- marks (MarkDecorator | Collection[MarkDecorator | Mark]) – A single mark or a list of marks to be applied to this parameter set.
- id (str | None) – The id to attribute to this parameter set.
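A sketch showing explicit ids and marks on parameter sets (pytest.mark.slow is a hypothetical custom marker):
import pytest

@pytest.mark.parametrize(
    "value",
    [
        pytest.param(2.0, id="float"),
        pytest.param(2, id="int", marks=pytest.mark.slow),
    ],
)
def test_square(value):
    assert value ** 2 == 4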
pytest.raises¶
Tutorial: Assertions about expected exceptions
with raises(expected_exception: type[E] | tuple[type[E], ...], *, match: str | Pattern[str] | None = ...) → RaisesContext[E] as excinfo[source]¶
with raises(expected_exception: type[E] | tuple[type[E], ...], func: Callable[[...], Any], *args: Any, **kwargs: Any) → ExceptionInfo[E] as excinfo
Assert that a code block/function call raises an exception type, or one of its subclasses.
Parameters:
- expected_exception – The expected exception type, or a tuple if one of multiple possible exception types are expected. Note that subclasses of the passed exceptions will also match.
- match (str | re.Pattern[str] | None) – If specified, a string containing a regular expression, or a regular expression object, that is tested against the string representation of the exception and its PEP 678 __notes__ using re.search(). To match a literal string that may contain special characters, the pattern can first be escaped with re.escape(). (This is only used when pytest.raises is used as a context manager, and passed through to the function otherwise. When using pytest.raises as a function, you can use: pytest.raises(Exc, func, match="passed on").match("my pattern").)
Use pytest.raises as a context manager, which will capture the exception of the given type, or any of its subclasses:
>>> import pytest
>>> with pytest.raises(ZeroDivisionError):
...     1/0
If the code block does not raise the expected exception (ZeroDivisionError in the example above), or no exception at all, the check will fail instead.
You can also use the keyword argument match to assert that the exception matches a text or regex:
>>> with pytest.raises(ValueError, match='must be 0 or None'):
...     raise ValueError("value must be 0 or None")

>>> with pytest.raises(ValueError, match=r'must be \d+$'):
...     raise ValueError("value must be 42")
The match argument searches the formatted exception string, which includes any PEP-678 __notes__:
>>> with pytest.raises(ValueError, match=r"had a note added"):
...     e = ValueError("value must be 42")
...     e.add_note("had a note added")
...     raise e
The context manager produces an ExceptionInfo object which can be used to inspect the details of the captured exception:
>>> with pytest.raises(ValueError) as exc_info:
...     raise ValueError("value must be 42")
>>> assert exc_info.type is ValueError
>>> assert exc_info.value.args[0] == "value must be 42"
Warning
Given that pytest.raises matches subclasses, be wary of using it to match Exception like this:
with pytest.raises(Exception):  # Careful, this will catch ANY exception raised.
    some_function()
Because Exception is the base class of almost all exceptions, it is easy for this to hide real bugs, where the user wrote this expecting a specific exception, but some other exception is being raised due to a bug introduced during a refactoring.
Avoid using pytest.raises to catch Exception unless certain that you really want to catch any exception raised.
Note
When using pytest.raises as a context manager, it’s worthwhile to note that normal context manager rules apply and that the exception raised must be the final line in the scope of the context manager. Lines of code after that, within the scope of the context manager will not be executed. For example:
>>> value = 15
>>> with pytest.raises(ValueError) as exc_info:
...     if value > 10:
...         raise ValueError("value must be <= 10")
...     assert exc_info.type is ValueError  # This will not execute.
Instead, the following approach must be taken (note the difference in scope):
>>> with pytest.raises(ValueError) as exc_info:
...     if value > 10:
...         raise ValueError("value must be <= 10")
...
>>> assert exc_info.type is ValueError
Using with pytest.mark.parametrize
When using pytest.mark.parametrize it is possible to parametrize tests such that some runs raise an exception and others do not.
See Parametrizing conditional raising for an example.
Legacy form
It is possible to specify a callable by passing a to-be-called lambda:
>>> raises(ZeroDivisionError, lambda: 1/0)
<ExceptionInfo ...>
or you can specify an arbitrary callable with arguments:
>>> def f(x):
...     return 1/x
...
>>> raises(ZeroDivisionError, f, 0)
<ExceptionInfo ...>
>>> raises(ZeroDivisionError, f, x=0)
<ExceptionInfo ...>
The form above is fully supported but discouraged for new code because the context manager form is regarded as more readable and less error-prone.
Note
Similar to caught exception objects in Python, explicitly clearing local references to returned ExceptionInfo objects can help the Python interpreter speed up its garbage collection.
Clearing those references breaks a reference cycle (ExceptionInfo –> caught exception –> frame stack raising the exception –> current frame stack –> local variables –> ExceptionInfo) which makes Python keep all objects referenced from that cycle (including all local variables in the current frame) alive until the next cyclic garbage collection run. More detailed information can be found in the official Python documentation for the try statement.
pytest.deprecated_call¶
Tutorial: Ensuring code triggers a deprecation warning
with deprecated_call(*, match: str | Pattern[str] | None = ...) → WarningsRecorder[source]¶
with deprecated_call(func: Callable[[...], T], *args: Any, **kwargs: Any) → T
Assert that code produces a DeprecationWarning or PendingDeprecationWarning or FutureWarning.
This function can be used as a context manager:
>>> import warnings
>>> def api_call_v2():
...     warnings.warn('use v3 of this api', DeprecationWarning)
...     return 200

>>> import pytest
>>> with pytest.deprecated_call():
...     assert api_call_v2() == 200
It can also be used by passing a function and *args and **kwargs, in which case it will ensure calling func(*args, **kwargs) produces one of the warnings types above. The return value is the return value of the function.
In the context manager form you may use the keyword argument match to assert that the warning matches a text or regex.
The context manager produces a list of warnings.WarningMessage objects, one for each warning raised.
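A sketch of the function form, reusing the api_call_v2 example above; the return value of the wrapped call is passed through:
import warnings
import pytest

def api_call_v2():
    warnings.warn("use v3 of this api", DeprecationWarning)
    return 200

def test_deprecation():
    assert pytest.deprecated_call(api_call_v2) == 200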
pytest.register_assert_rewrite¶
Tutorial: Assertion Rewriting
register_assert_rewrite(*names)[source]¶
Register one or more module names to be rewritten on import.
This function will make sure that this module or all modules inside the package will get their assert statements rewritten. Thus you should make sure to call this before the module is actually imported, usually in your __init__.py if you are a plugin using a package.
Parameters:
names (str) – The module names to register.
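A sketch for a hypothetical plugin package named myplugin, registering a helper module before it is imported:
# myplugin/__init__.py
import pytest

pytest.register_assert_rewrite("myplugin.helpers")

from myplugin import helpers  # imported only after registering the rewrite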
pytest.warns¶
Tutorial: Asserting warnings with the warns function
with warns(expected_warning: type[Warning] | tuple[type[Warning], ...] = <class 'Warning'>, *, match: str | ~typing.Pattern[str] | None = None) → WarningsChecker[source]¶
with warns(expected_warning: type[Warning] | tuple[type[Warning], ...], func: Callable[[...], T], *args: Any, **kwargs: Any) → T
Assert that code raises a particular class of warning.
Specifically, the parameter expected_warning can be a warning class or tuple of warning classes, and the code inside the with block must issue at least one warning of that class or classes.
This helper produces a list of warnings.WarningMessage objects, one for each warning emitted (regardless of whether it is an expected_warning or not). Since pytest 8.0, unmatched warnings are also re-emitted when the context closes.
This function can be used as a context manager:
>>> import pytest
>>> with pytest.warns(RuntimeWarning):
...     warnings.warn("my warning", RuntimeWarning)
In the context manager form you may use the keyword argument match to assert that the warning matches a text or regex:
>>> with pytest.warns(UserWarning, match='must be 0 or None'):
...     warnings.warn("value must be 0 or None", UserWarning)

>>> with pytest.warns(UserWarning, match=r'must be \d+$'):
...     warnings.warn("value must be 42", UserWarning)

>>> with pytest.warns(UserWarning):  # catch re-emitted warning
...     with pytest.warns(UserWarning, match=r'must be \d+$'):
...         warnings.warn("this is not here", UserWarning)
Traceback (most recent call last):
  ...
Failed: DID NOT WARN. No warnings of type ...UserWarning... were emitted...
Using with pytest.mark.parametrize
When using pytest.mark.parametrize it is possible to parametrize tests such that some runs raise a warning and others do not.
This could be achieved in the same way as with exceptions, see Parametrizing conditional raising for an example.
pytest.freeze_includes¶
Tutorial: Freezing pytest
Return a list of module names used by pytest that should be included by cx_freeze.
Marks¶
Marks can be used to apply metadata to test functions (but not fixtures), which can then be accessed by fixtures or plugins.
pytest.mark.filterwarnings¶
Tutorial: @pytest.mark.filterwarnings
Add warning filters to marked test items.
pytest.mark.filterwarnings(filter)¶
Parameters:
filter (str) –
A warning specification string, which is composed of contents of the tuple (action, message, category, module, lineno) as specified in The Warnings Filter section of the Python documentation, separated by ":". Optional fields can be omitted. Module names passed for filtering are not regex-escaped.
For example:
@pytest.mark.filterwarnings("ignore:.*usage will be deprecated.*:DeprecationWarning")
def test_foo(): ...
pytest.mark.parametrize¶
Tutorial: How to parametrize fixtures and test functions
This mark has the same signature as pytest.Metafunc.parametrize(); see there.
pytest.mark.skip¶
Tutorial: Skipping test functions
Unconditionally skip a test function.
pytest.mark.skip(reason=None)¶
Parameters:
reason (str) – Reason why the test function is being skipped.
pytest.mark.skipif¶
Tutorial: Skipping test functions
Skip a test function if a condition is True.
pytest.mark.skipif(condition, *, reason=None)¶
Parameters:
- condition (bool or str) – True/False if the condition should be skipped or a condition string.
- reason (str) – Reason why the test function is being skipped.
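A short sketch of a conditional skip based on the interpreter version:
import sys
import pytest

@pytest.mark.skipif(sys.version_info < (3, 11), reason="requires Python 3.11 or higher")
def test_new_syntax_feature(): ...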
pytest.mark.usefixtures¶
Tutorial: Use fixtures in classes and modules with usefixtures
Mark a test function as using the given fixture names.
pytest.mark.usefixtures(*names)¶
Parameters:
args – The names of the fixtures to use, as strings.
Note
When using usefixtures in hooks, it can only load fixtures when applied to a test function before test setup (for example in the pytest_collection_modifyitems hook).
Also note that this mark has no effect when applied to fixtures.
pytest.mark.xfail¶
Tutorial: XFail: mark test functions as expected to fail
Marks a test function as expected to fail.
pytest.mark.xfail(condition=False, *, reason=None, raises=None, run=True, strict=xfail_strict)¶
Parameters:
- condition (Union[bool, str]) – Condition for marking the test function as xfail (True/False or a condition string). If a bool, you also have to specify reason (see condition string).
- reason (str) – Reason why the test function is marked as xfail.
- raises (Type[Exception]) – Exception class (or tuple of classes) expected to be raised by the test function; other exceptions will fail the test. Note that subclasses of the classes passed will also result in a match (similar to how the except statement works).
- run (bool) – Whether the test function should actually be executed. If False, the function will always xfail and will not be executed (useful if a function is segfaulting).
- strict (bool) –
  - If False the function will be shown in the terminal output as xfailed if it fails and as xpass if it passes. In both cases this will not cause the test suite to fail as a whole. This is particularly useful to mark flaky tests (tests that fail at random) to be tackled later.
  - If True, the function will be shown in the terminal output as xfailed if it fails, but if it unexpectedly passes then it will fail the test suite. This is particularly useful to mark functions that are always failing and there should be a clear indication if they unexpectedly start to pass (for example a new release of a library fixes a known bug).
  Defaults to xfail_strict, which is False by default.
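A sketch combining raises and strict; compute_future_feature() is a hypothetical function expected to raise:
import pytest

@pytest.mark.xfail(raises=NotImplementedError, reason="feature not implemented yet", strict=True)
def test_future_feature():
    compute_future_feature()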
Custom marks¶
Marks are created dynamically using the factory object pytest.mark and applied as a decorator.
For example:
@pytest.mark.timeout(10, "slow", method="thread")
def test_function(): ...
Will create and attach a Mark object to the collected Item, which can then be accessed by fixtures or hooks with Node.iter_markers. The mark object will have the following attributes:
mark.args == (10, "slow")
mark.kwargs == {"method": "thread"}
Example for using multiple custom markers:
@pytest.mark.timeout(10, "slow", method="thread")
@pytest.mark.slow
def test_function(): ...
When Node.iter_markers or Node.iter_markers_with_node is used with multiple markers, the marker closest to the function will be iterated over first. The above example will result in @pytest.mark.slow followed by @pytest.mark.timeout(...).
Fixtures¶
Tutorial: Fixtures reference
Fixtures are requested by test functions or other fixtures by declaring them as argument names.
Example of a test requiring a fixture:
def test_output(capsys):
    print("hello")
    out, err = capsys.readouterr()
    assert out == "hello\n"
Example of a fixture requiring another fixture:
@pytest.fixture
def db_session(tmp_path):
    fn = tmp_path / "db.file"
    return connect(fn)
For more details, consult the full fixtures docs.
@pytest.fixture¶
@fixture(fixture_function: FixtureFunction, *, scope: Literal['session', 'package', 'module', 'class', 'function'] | Callable[[str, Config], Literal['session', 'package', 'module', 'class', 'function']] = 'function', params: Iterable[object] | None = None, autouse: bool = False, ids: Sequence[object | None] | Callable[[Any], object | None] | None = None, name: str | None = None) → FixtureFunction[source]¶
@fixture(fixture_function: None = None, *, scope: Literal['session', 'package', 'module', 'class', 'function'] | Callable[[str, Config], Literal['session', 'package', 'module', 'class', 'function']] = 'function', params: Iterable[object] | None = None, autouse: bool = False, ids: Sequence[object | None] | Callable[[Any], object | None] | None = None, name: str | None = None) → FixtureFunctionMarker
Decorator to mark a fixture factory function.
This decorator can be used, with or without parameters, to define a fixture function.
The name of the fixture function can later be referenced to cause its invocation ahead of running tests: test modules or classes can use the pytest.mark.usefixtures(fixturename) marker.
Test functions can directly use fixture names as input arguments in which case the fixture instance returned from the fixture function will be injected.
Fixtures can provide their values to test functions using return or yield statements. When using yield the code block after the yield statement is executed as teardown code regardless of the test outcome, and must yield exactly once.
Parameters:
- scope – The scope for which this fixture is shared; one of "function" (default), "class", "module", "package" or "session". This parameter may also be a callable which receives (fixture_name, config) as parameters, and must return a str with one of the values mentioned above. See Dynamic scope in the docs for more information.
- params – An optional list of parameters which will cause multiple invocations of the fixture function and all of the tests using it. The current parameter is available in request.param.
- autouse – If True, the fixture func is activated for all tests that can see it. If False (the default), an explicit reference is needed to activate the fixture.
- ids – Sequence of ids each corresponding to the params so that they are part of the test id. If no ids are provided they will be generated automatically from the params.
- name – The name of the fixture. This defaults to the name of the decorated function. If a fixture is used in the same module in which it is defined, the function name of the fixture will be shadowed by the function arg that requests the fixture; one way to resolve this is to name the decorated function fixture_<fixturename> and then use @pytest.fixture(name='<fixturename>').
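A sketch of a parametrized, module-scoped yield fixture; connect() is a hypothetical helper:
import pytest

@pytest.fixture(scope="module", params=["sqlite", "postgres"])
def db(request):
    conn = connect(request.param)  # set up once per parameter
    yield conn                     # value provided to the tests
    conn.close()                   # teardown, runs after the module's tests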
capfd¶
Tutorial: How to capture stdout/stderr output
Enable text capturing of writes to file descriptors 1 and 2.
The captured output is made available via capfd.readouterr() method calls, which return a (out, err) namedtuple. out and err will be text objects.
Returns an instance of CaptureFixture[str].
Example:
def test_system_echo(capfd):
    os.system('echo "hello"')
    captured = capfd.readouterr()
    assert captured.out == "hello\n"
capfdbinary¶
Tutorial: How to capture stdout/stderr output
Enable bytes capturing of writes to file descriptors 1 and 2.
The captured output is made available via capfdbinary.readouterr() method calls, which return a (out, err) namedtuple. out and err will be bytes objects.
Returns an instance of CaptureFixture[bytes].
Example:
def test_system_echo(capfdbinary):
    os.system('echo "hello"')
    captured = capfdbinary.readouterr()
    assert captured.out == b"hello\n"
caplog¶
Tutorial: How to manage logging
Access and control log capturing.
Captured logs are available through the following properties/methods:
- caplog.messages -> list of format-interpolated log messages
- caplog.text -> string containing formatted log output
- caplog.records -> list of logging.LogRecord instances
- caplog.record_tuples -> list of (logger_name, level, message) tuples
- caplog.clear() -> clear captured records and formatted log output string
Returns a pytest.LogCaptureFixture instance.
final class LogCaptureFixture[source]¶
Provides access and control of log capturing.
property handler: LogCaptureHandler¶
Get the logging handler used by the fixture.
Get the logging records for one of the possible test phases.
Parameters:
when (Literal[ 'setup' , 'call' , 'teardown' ]) – Which test phase to obtain the records from. Valid values are: “setup”, “call” and “teardown”.
Returns:
The list of captured records at the given stage.
Return type:
Added in version 3.4.
The formatted log text.
property records: list[LogRecord]¶
The list of log records.
property record_tuples: list[tuple[str, int, str]]¶
A list of a stripped down version of log records intended for use in assertion comparison.
The format of the tuple is:
(logger_name, log_level, message)
property messages: list[str]¶
A list of format-interpolated log messages.
Unlike ‘records’, which contains the format string and parameters for interpolation, log messages in this list are all interpolated.
Unlike ‘text’, which contains the output from the handler, log messages in this list are unadorned with levels, timestamps, etc, making exact comparisons more reliable.
Note that traceback or stack info (from logging.exception() or the exc_info or stack_info arguments to the logging functions) is not included, as this is added by the formatter in the handler.
Added in version 3.7.
Reset the list of log records and the captured log text.
set_level(level, logger=None)[source]¶
Set the threshold level of a logger for the duration of a test.
Logging messages which are less severe than this level will not be captured.
Changed in version 3.4: The levels of the loggers changed by this function will be restored to their initial values at the end of the test.
Will enable the requested logging level if it was disabled via logging.disable().
Parameters:
- level (int | str) – The level.
- logger (str | None) – The logger to update. If not given, the root logger.
with at_level(level, logger=None)[source]¶
Context manager that sets the level for capturing of logs. After the end of the ‘with’ statement the level is restored to its original value.
Will enable the requested logging level if it was disabled via logging.disable().
Parameters:
- level (int | str) – The level.
- logger (str | None) – The logger to update. If not given, the root logger.
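A sketch using at_level together with the caplog properties described above; "myapp" is a hypothetical logger name:
import logging

def test_logs_warning(caplog):
    with caplog.at_level(logging.INFO, logger="myapp"):
        logging.getLogger("myapp").warning("disk almost full")
    assert "disk almost full" in caplog.text
    assert caplog.record_tuples == [("myapp", logging.WARNING, "disk almost full")]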
with filtering(filter_)[source]¶
Context manager that temporarily adds the given filter to the caplog's handler for the 'with' statement block, and removes that filter at the end of the block.
Parameters:
filter – A custom logging.Filter object.
Added in version 7.5.
capsys¶
Tutorial: How to capture stdout/stderr output
Enable text capturing of writes to sys.stdout and sys.stderr.
The captured output is made available via capsys.readouterr() method calls, which return a (out, err) namedtuple. out and err will be text objects.
Returns an instance of CaptureFixture[str].
Example:
def test_output(capsys):
    print("hello")
    captured = capsys.readouterr()
    assert captured.out == "hello\n"
Object returned by the capsys, capsysbinary, capfd and capfdbinary fixtures.
Read and return the captured output so far, resetting the internal buffer.
Returns:
The captured content as a namedtuple with out and err string attributes.
Return type:
CaptureResult
Temporarily disable capturing while inside the with block.
capsysbinary¶
Tutorial: How to capture stdout/stderr output
Enable bytes capturing of writes to sys.stdout and sys.stderr.
The captured output is made available via capsysbinary.readouterr() method calls, which return a (out, err) namedtuple. out and err will be bytes objects.
Returns an instance of CaptureFixture[bytes].
Example:
def test_output(capsysbinary):
    print("hello")
    captured = capsysbinary.readouterr()
    assert captured.out == b"hello\n"
config.cache¶
Tutorial: How to re-run failed tests and maintain state between test runs
The config.cache object allows other plugins and fixtures to store and retrieve values across test runs. To access it from fixtures request pytestconfig into your fixture and get it with pytestconfig.cache.
Under the hood, the cache plugin uses the simple dumps/loads API of the json stdlib module.
config.cache is an instance of pytest.Cache:
Instance of the cache fixture.
Return a directory path object with the given name.
If the directory does not yet exist, it will be created. You can use it to manage files to e.g. store/retrieve database dumps across test sessions.
Added in version 7.0.
Parameters:
name (str) – Must be a string not containing a / separator. Make sure the name contains your plugin or application identifiers to prevent clashes with other cache users.
Return the cached value for the given key.
If no value was yet cached or the value cannot be read, the specified default is returned.
Parameters:
- key (str) – Must be a / separated value. Usually the first name is the name of your plugin or your application.
- default – The value to return in case of a cache-miss or invalid cache value.
Save value for the given key.
Parameters:
- key (str) – Must be a / separated value. Usually the first name is the name of your plugin or your application.
- value (object) – Must be of any combination of basic python types, including nested types like lists of dictionaries.
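A sketch of a fixture caching an expensive value across runs; the key and compute_expensive_data() are hypothetical:
import pytest

@pytest.fixture
def expensive_data(pytestconfig):
    key = "myplugin/expensive-data"
    data = pytestconfig.cache.get(key, None)
    if data is None:
        data = compute_expensive_data()
        pytestconfig.cache.set(key, data)  # persisted for the next run
    return data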
doctest_namespace¶
Tutorial: How to run doctests
Fixture that returns a dict that will be injected into the namespace of doctests.
Usually this fixture is used in conjunction with another autouse fixture:
@pytest.fixture(autouse=True)
def add_np(doctest_namespace):
    doctest_namespace["np"] = numpy
For more details: ‘doctest_namespace’ fixture.
monkeypatch¶
Tutorial: How to monkeypatch/mock modules and environments
A convenient fixture for monkey-patching.
The fixture provides these methods to modify objects, dictionaries, oros.environ:
- monkeypatch.setattr(obj, name, value, raising=True)
- monkeypatch.delattr(obj, name, raising=True)
- monkeypatch.setitem(mapping, name, value)
- monkeypatch.delitem(obj, name, raising=True)
- monkeypatch.setenv(name, value, prepend=None)
- monkeypatch.delenv(name, raising=True)
- monkeypatch.syspath_prepend(path)
- monkeypatch.chdir(path)
- monkeypatch.context()
All modifications will be undone after the requesting test function or fixture has finished. The raising parameter determines if a KeyError or AttributeError will be raised if the set/deletion operation does not have the specified target.
To undo modifications done by the fixture in a contained scope, use context().
Returns a MonkeyPatch instance.
final class MonkeyPatch[source]¶
Helper to conveniently monkeypatch attributes/items/environment variables/syspath.
Returned by the monkeypatch fixture.
Changed in version 6.2: Can now also be used directly as pytest.MonkeyPatch(), for when the fixture is not available. In this case, use with MonkeyPatch.context() as mp: or remember to call undo() explicitly.
classmethod with context()[source]¶
Context manager that returns a new MonkeyPatch object which undoes any patching done inside the with block upon exit.
Example:
import functools

def test_partial(monkeypatch):
    with monkeypatch.context() as m:
        m.setattr(functools, "partial", 3)
Useful in situations where it is desired to undo some patches before the test ends, such as mocking stdlib functions that might break pytest itself if mocked (for examples of this see #3290).
setattr(target: str, name: object, value: ~_pytest.monkeypatch.Notset = , raising: bool = True) → None[source]¶
setattr(target: object, name: str, value: object, raising: bool = True) → None
Set attribute value on target, memorizing the old value.
For example:
import os

monkeypatch.setattr(os, "getcwd", lambda: "/")

The code above replaces the os.getcwd() function by a lambda which always returns "/".
For convenience, you can specify a string as target which will be interpreted as a dotted import path, with the last part being the attribute name:
monkeypatch.setattr("os.getcwd", lambda: "/")
Raises AttributeError if the attribute does not exist, unless raising is set to False.
Where to patch
monkeypatch.setattr works by (temporarily) changing the object that a name points to with another one. There can be many names pointing to any individual object, so for patching to work you must ensure that you patch the name used by the system under test.
See the section Where to patch in the unittest.mock docs for a complete explanation, which is meant for unittest.mock.patch() but applies to monkeypatch.setattr as well.
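A sketch of this rule with a hypothetical module app that does from os import getcwd:
# app.py (hypothetical):
#     from os import getcwd
#     def current_dir():
#         return getcwd()

def test_current_dir(monkeypatch):
    # Patch the name the code under test looks up ("app.getcwd"),
    # not "os.getcwd", because app imported the function into its own namespace.
    monkeypatch.setattr("app.getcwd", lambda: "/patched")
    import app
    assert app.current_dir() == "/patched"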
delattr(target, name=<notset>, raising=True)[source]¶
Delete attribute name from target.
If no name is specified and target is a string it will be interpreted as a dotted import path with the last part being the attribute name.
Raises AttributeError if the attribute does not exist, unless raising is set to False.
setitem(dic, name, value)[source]¶
Set dictionary entry name to value.
delitem(dic, name, raising=True)[source]¶
Delete name from dict.
Raises KeyError if it doesn’t exist, unless raising is set to False.
setenv(name, value, prepend=None)[source]¶
Set environment variable name to value.
If prepend is a character, read the current environment variable value and prepend the value adjoined with the prepend character.
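A sketch of both forms; the variable names and paths are illustrative:
import os

def test_environment(monkeypatch):
    monkeypatch.setenv("APP_MODE", "testing")                          # plain replacement
    monkeypatch.setenv("PATH", "/opt/mytool/bin", prepend=os.pathsep)  # new value + separator + old value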
delenv(name, raising=True)[source]¶
Delete name from the environment.
Raises KeyError if it does not exist, unless raising is set to False.
syspath_prepend(path)[source]¶
Prepend path to sys.path list of import locations.
Change the current working directory to the specified path.
Parameters:
path (str | PathLike_[_str]) – The path to change into.
Undo previous changes.
This call consumes the undo stack. Calling it a second time has no effect unless you do more monkeypatching after the undo call.
There is generally no need to call undo(), since it is called automatically during tear-down.
Note
The same monkeypatch fixture is used across a single test function invocation. If monkeypatch is used both by the test function itself and one of the test fixtures, calling undo() will undo all of the changes made in both functions.
Prefer to use context() instead.
pytestconfig¶
Session-scoped fixture that returns the session’s pytest.Configobject.
Example:
def test_foo(pytestconfig):
    if pytestconfig.get_verbosity() > 0:
        ...
pytester¶
Added in version 6.2.
Provides a Pytester instance that can be used to run and test pytest itself.
It provides an empty directory where pytest can be executed in isolation, and contains facilities to write tests, configuration files, and match against expected output.
To use it, include in your topmost conftest.py file:
pytest_plugins = "pytester"
Facilities to write tests/configuration files, execute pytest in isolation, and match against expected output, perfect for black-box testing of pytest plugins.
It attempts to isolate the test run from external factors as much as possible, modifying the current working directory to path and environment variables during initialization.
exception TimeoutExpired[source]¶
plugins: list[str | object]¶
A list of plugins to use with parseconfig() andrunpytest(). Initially this is an empty list but plugins can be added to the list. The type of items to add to the list depends on the method using them so refer to them for details.
Temporary directory path used to create files/run tests from, etc.
make_hook_recorder(pluginmanager)[source]¶
Create a new HookRecorder for a PytestPluginManager.
Cd into the temporary directory.
This is done automatically upon instantiation.
makefile(ext, *args, **kwargs)[source]¶
Create new text file(s) in the test directory.
Parameters:
- ext (str) – The extension the file(s) should use, including the dot, e.g. .py.
- args (str) – All args are treated as strings and joined using newlines. The result is written as contents to the file. The name of the file is based on the test function requesting this fixture.
- kwargs (str) – Each keyword is the name of a file, while the value of it will be written as contents of the file.
Returns:
The first created file.
Return type:
Examples:
pytester.makefile(".txt", "line1", "line2")
pytester.makefile(".ini", pytest="[pytest]\naddopts=-rs\n")
To create binary files, use pathlib.Path.write_bytes() directly:
filename = pytester.path.joinpath("foo.bin")
filename.write_bytes(b"...")
Write a conftest.py file.
Parameters:
source (str) – The contents.
Returns:
The conftest.py file.
Return type:
Write a tox.ini file.
Parameters:
source (str) – The contents.
Returns:
The tox.ini file.
Return type:
Return the pytest section from the tox.ini config file.
makepyprojecttoml(source)[source]¶
Write a pyproject.toml file.
Parameters:
source (str) – The contents.
Returns:
The pyproject.toml file.
Return type:
Added in version 6.0.
makepyfile(*args, **kwargs)[source]¶
Shortcut for .makefile() with a .py extension.
Defaults to the test name with a ‘.py’ extension, e.g. test_foobar.py, overwriting existing files.
Examples:
def test_something(pytester):
    # Initial file is created test_something.py.
    pytester.makepyfile("foobar")
    # To create multiple files, pass kwargs accordingly.
    pytester.makepyfile(custom="foobar")
    # At this point, both 'test_something.py' & 'custom.py' exist in the test directory.
maketxtfile(*args, **kwargs)[source]¶
Shortcut for .makefile() with a .txt extension.
Defaults to the test name with a ‘.txt’ extension, e.g. test_foobar.txt, overwriting existing files.
Examples:
def test_something(pytester):
    # Initial file is created test_something.txt.
    pytester.maketxtfile("foobar")
    # To create multiple files, pass kwargs accordingly.
    pytester.maketxtfile(custom="foobar")
    # At this point, both 'test_something.txt' & 'custom.txt' exist in the test directory.
syspathinsert(path=None)[source]¶
Prepend a directory to sys.path, defaults to path.
This is undone automatically when this object dies at the end of each test.
Parameters:
path (str | PathLike_[_str] | None) – The path.
Create a new (sub)directory.
Parameters:
name (str | PathLike_[_str]) – The name of the directory, relative to the pytester path.
Returns:
The created directory.
Return type:
Create a new python package.
This creates a (sub)directory with an empty __init__.py file so it gets recognised as a Python package.
copy_example(name=None)[source]¶
Copy file from project’s directory into the testdir.
Parameters:
name (str | None) – The name of the file to copy.
Returns:
Path to the copied directory (inside self.path).
Return type:
Get the collection node of a file.
Parameters:
- config (Config) – A pytest config. See parseconfig() and parseconfigure() for creating it.
- arg (str | PathLike_[_str]) – Path to the file.
Returns:
The node.
Return type:
Return the collection node of a file.
This is like getnode() but uses parseconfigure() to create the (configured) pytest Config instance.
Parameters:
path (str | PathLike_[_str]) – Path to the file.
Returns:
The node.
Return type:
Generate all test items from a collection node.
This recurses into the collection node and returns a list of all the test items contained within.
Parameters:
colitems (Sequence_[_Item | Collector]) – The collection nodes.
Returns:
The collected items.
Return type:
Run the “test_func” Item.
The calling test instance (class containing the test method) must provide a .getrunner() method which should return a runner which can run the test protocol for a single item, e.g. _pytest.runner.runtestprotocol.
inline_runsource(source, *cmdlineargs)[source]¶
Run a test module in process using pytest.main().
This run writes “source” into a temporary file and runs pytest.main() on it, returning a HookRecorder instance for the result.
Parameters:
- source (str) – The source code of the test module.
- cmdlineargs – Any extra command line arguments to use.
inline_genitems(*args)[source]¶
Run pytest.main(['--collect-only']) in-process.
Runs the pytest.main() function to run all of pytest inside the test process itself like inline_run(), but returns a tuple of the collected items and a HookRecorder instance.
inline_run(*args, plugins=(), no_reraise_ctrlc=False)[source]¶
Run pytest.main() in-process, returning a HookRecorder.
Runs the pytest.main() function to run all of pytest inside the test process itself. This means it can return aHookRecorder instance which gives more detailed results from that run than can be done by matching stdout/stderr fromrunpytest().
Parameters:
- args (str | PathLike_[_str]) – Command line arguments to pass to pytest.main().
- plugins – Extra plugin instances the pytest.main() instance should use.
- no_reraise_ctrlc (bool) – Typically we reraise keyboard interrupts from the child run. If True, the KeyboardInterrupt exception is captured.
runpytest_inprocess(*args, **kwargs)[source]¶
Return result of running pytest in-process, providing a similar interface to what self.runpytest() provides.
runpytest(*args, **kwargs)[source]¶
Run pytest inline or in a subprocess, depending on the command line option “–runpytest” and return a RunResult.
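A sketch of a typical end-to-end use of the fixture, writing a test file and checking the outcome of the run:
def test_detects_failure(pytester):
    pytester.makepyfile(
        """
        def test_fails():
            assert 1 == 2
        """
    )
    result = pytester.runpytest("-q")
    result.assert_outcomes(failed=1)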
Return a new pytest pytest.Config instance from given commandline args.
This invokes the pytest bootstrapping code in _pytest.config to create a new pytest.PytestPluginManager and call thepytest_cmdline_parse hook to create a new pytest.Configinstance.
If plugins has been populated they should be plugin modules to be registered with the plugin manager.
parseconfigure(*args)[source]¶
Return a new pytest configured Config instance.
Returns a new pytest.Config instance likeparseconfig(), but also calls the pytest_configurehook.
getitem(source, funcname='test_func')[source]¶
Return the test item for a test function.
Writes the source to a python file and runs pytest’s collection on the resulting module, returning the test item for the requested function name.
Parameters:
- source (str | PathLike_[_str]) – The module source.
- funcname (str) – The name of the test function for which to return a test item.
Returns:
The test item.
Return type:
Return all test items collected from the module.
Writes the source to a Python file and runs pytest’s collection on the resulting module, returning all test items contained within.
getmodulecol(source, configargs=(), *, withinit=False)[source]¶
Return the module collection node for source.
Writes source to a file using makepyfile() and then runs the pytest collection on it, returning the collection node for the test module.
Parameters:
- source (str | PathLike_[_str]) – The source code of the module to collect.
- configargs – Any extra arguments to pass to parseconfigure().
- withinit (bool) – Whether to also write an __init__.py file to the same directory to ensure it is a package.
collect_by_name(modcol, name)[source]¶
Return the collection node for name from the module collection.
Searches a module collection node for a collection node matching the given name.
Parameters:
- modcol (Collector) – A module collection node; see getmodulecol().
- name (str) – The name of the node to return.
popen(cmdargs, stdout=-1, stderr=-1, stdin=NotSetType.token, **kw)[source]¶
Invoke subprocess.Popen.
Calls subprocess.Popen making sure the current working directory is in PYTHONPATH.
You probably want to use run() instead.
run(*cmdargs, timeout=None, stdin=NotSetType.token)[source]¶
Run a command with arguments.
Run a process using subprocess.Popen saving the stdout and stderr.
Parameters:
- cmdargs (str | PathLike[str]) – The sequence of arguments to pass to subprocess.Popen, with path-like objects being converted to str automatically.
- timeout (float | None) – The period in seconds after which to timeout and raise Pytester.TimeoutExpired.
- stdin (_pytest.compat.NotSetType | bytes | IO[Any] | int) – Optional standard input.
  - If it is CLOSE_STDIN (Default), then this method calls subprocess.Popen with stdin=subprocess.PIPE, and the standard input is closed immediately after the new command is started.
  - If it is of type bytes, these bytes are sent to the standard input of the command.
  - Otherwise, it is passed through to subprocess.Popen. For further information in this case, consult the document of the stdin parameter in subprocess.Popen.
Returns:
The result.
Return type:
Run a python script using sys.executable as interpreter.
Run python -c "command"
.
runpytest_subprocess(*args, timeout=None)[source]¶
Run pytest as a subprocess with given arguments.
Any plugins added to the plugins list will be added using the -p command line option. Additionally --basetemp is used to put any temporary files and directories in a numbered directory prefixed with “runpytest-” to not conflict with the normal numbered pytest location for temporary files and directories.
Parameters:
- args (str | PathLike[str]) – The sequence of arguments to pass to the pytest subprocess.
- timeout (float | None) – The period in seconds after which to timeout and raise Pytester.TimeoutExpired.
Returns:
The result.
Return type:
spawn_pytest(string, expect_timeout=10.0)[source]¶
Run pytest using pexpect.
This makes sure to use the right pytest and sets up the temporary directory locations.
The pexpect child is returned.
spawn(cmd, expect_timeout=10.0)[source]¶
Run a command using pexpect.
The pexpect child is returned.
final class RunResult[source]¶
The result of running a command from Pytester.
The return value.
outlines¶
List of lines captured from stdout.
errlines¶
List of lines captured from stderr.
stdout¶
LineMatcher of stdout.
Use e.g. str(stdout) to reconstruct stdout, or the commonly used stdout.fnmatch_lines() method.
stderr¶
LineMatcher of stderr.
duration¶
Duration in seconds.
Return a dictionary of outcome noun -> count from parsing the terminal output that the test process produced.
The returned nouns will always be in plural form:
======= 1 failed, 1 passed, 1 warning, 1 error in 0.13s ====
Will return {"failed": 1, "passed": 1, "warnings": 1, "errors": 1}.
classmethod parse_summary_nouns(lines)[source]¶
Extract the nouns from a pytest terminal summary line.
It always returns the plural noun for consistency:
======= 1 failed, 1 passed, 1 warning, 1 error in 0.13s ====
Will return {"failed": 1, "passed": 1, "warnings": 1, "errors": 1}.
assert_outcomes(passed=0, skipped=0, failed=0, errors=0, xpassed=0, xfailed=0, warnings=None, deselected=None)[source]¶
Assert that the specified outcomes appear with the respective numbers (0 means it didn’t occur) in the text output from a test run.
warnings and deselected are only checked if not None.
Flexible matching of text.
This is a convenience class to test large texts like the output of commands.
The constructor takes a list of lines without their trailing newlines, i.e. text.splitlines().
Return the entire original text.
Added in version 6.2: You can use str() in older versions.
fnmatch_lines_random(lines2)[source]¶
Check lines exist in the output in any order (using fnmatch.fnmatch()).
re_match_lines_random(lines2)[source]¶
Check lines exist in the output in any order (using re.match()).
get_lines_after(fnline)[source]¶
Return all lines following the given line in the text.
The given line can contain glob wildcards.
fnmatch_lines(lines2, *, consecutive=False)[source]¶
Check lines exist in the output (using fnmatch.fnmatch()).
The argument is a list of lines which have to match and can use glob wildcards. If they do not match a pytest.fail() is called. The matches and non-matches are also shown as part of the error message.
Parameters:
- lines2 (Sequence_[_str]) – String patterns to match.
- consecutive (bool) – Match lines consecutively?
re_match_lines(lines2, *, consecutive=False)[source]¶
Check lines exist in the output (using re.match()).
The argument is a list of lines which have to match using re.match. If they do not match a pytest.fail() is called.
The matches and non-matches are also shown as part of the error message.
Parameters:
- lines2 (Sequence_[_str]) – string patterns to match.
- consecutive (bool) – match lines consecutively?
Ensure captured lines do not match the given pattern, using fnmatch.fnmatch.
Parameters:
pat (str) – The pattern to match lines.
no_re_match_line(pat)[source]¶
Ensure captured lines do not match the given pattern, using re.match.
Parameters:
pat (str) – The regular expression to match lines.
Return the entire original text.
final class HookRecorder[source]¶
Record all hooks called in a plugin manager.
Hook recorders are created by Pytester.
This wraps all the hook calls in the plugin manager, recording each call before propagating the normal calls.
Get all recorded calls to hooks with the given names (or name).
matchreport(inamepart='', names=('pytest_runtest_logreport', 'pytest_collectreport'), when=None)[source]¶
Return a testreport whose dotted import path matches.
final class RecordedHookCall[source]¶
A recorded call to a hook.
The arguments to the hook call are set as attributes. For example:
calls = hook_recorder.getcalls("pytest_runtest_setup")
Suppose pytest_runtest_setup was called once with item=an_item.
assert calls[0].item is an_item
record_property¶
Tutorial: record_property
Add extra properties to the calling test.
User properties become part of the test report and are available to the configured reporters, like JUnit XML.
The fixture is callable with name, value. The value is automatically XML-encoded.
Example:
def test_function(record_property):
    record_property("example_key", 1)
record_testsuite_property¶
Tutorial: record_testsuite_property
record_testsuite_property()[source]¶
Record a new <property> tag as child of the root <testsuite>.
This is suitable for writing global information regarding the entire test suite, and is compatible with the xunit2 JUnit family.
This is a session-scoped fixture which is called with (name, value). Example:
def test_foo(record_testsuite_property):
    record_testsuite_property("ARCH", "PPC")
    record_testsuite_property("STORAGE_TYPE", "CEPH")
Parameters:
- name – The property name.
- value – The property value. Will be converted to a string.
Warning
Currently this fixture does not work with the pytest-xdist plugin. See #7767 for details.
recwarn¶
Tutorial: Recording warnings
Return a WarningsRecorder instance that records all warnings emitted by test functions.
See How to capture warnings for information on warning categories.
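A minimal sketch of inspecting the recorded warnings:
import warnings

def test_deprecation_recorded(recwarn):
    warnings.warn("this option is deprecated", UserWarning)
    assert len(recwarn) == 1
    w = recwarn.pop(UserWarning)
    assert "deprecated" in str(w.message)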
class WarningsRecorder[source]¶
A context manager to record raised warnings.
Each recorded warning is an instance of warnings.WarningMessage.
Adapted from warnings.catch_warnings.
property list: list[WarningMessage]¶
The list of recorded warnings.
Get a recorded warning by index.
Iterate through the recorded warnings.
The number of recorded warnings.
pop(cls=<class 'Warning'>)[source]¶
Pop the first recorded warning which is an instance of cls, but not an instance of a child class of any other match. Raises AssertionError if there is no match.
Clear the list of recorded warnings.
request¶
Example: Pass different values to a test function, depending on command line options
The request fixture is a special fixture providing information of the requesting test function.
The type of the request fixture.
A request object gives access to the requesting test context and has a param attribute in case the fixture is parametrized.
Fixture for which this request is being performed.
property scope_: Literal['session', 'package', 'module', 'class', 'function']_¶
Scope string, one of “function”, “class”, “module”, “package”, “session”.
property fixturenames_: list[str]_¶
Names of all active fixtures in this request.
abstract property node¶
Underlying collection node (depends on current request scope).
The pytest config object associated with this request.
property function¶
Test function object if the request has a per-function scope.
property cls¶
Class (can be None) where the test function was collected.
property instance¶
Instance (can be None) on which test function was collected.
property module¶
Python module object where the test function was collected.
Path where the test function was collected.
property keywords_: MutableMapping[str, Any]_¶
Keywords/markers dictionary for the underlying node.
Pytest session object.
abstractmethod addfinalizer(finalizer)[source]¶
Add finalizer/teardown function to be called without arguments after the last test within the requesting test context finished execution.
Apply a marker to a single test function invocation.
This method is useful if you don’t want to have a keyword/marker on all function invocations.
Parameters:
marker (str | MarkDecorator) – An object created by a call to pytest.mark.NAME(...).
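As an illustration, a fixture might apply a marker to the requesting test based on the environment; the fixture name and the DATABASE_URL variable below are assumptions, not part of pytest:
import os
import pytest

@pytest.fixture
def integration_db(request):
    # Mark tests using this fixture as expected failures when the
    # (hypothetical) DATABASE_URL environment variable is not configured.
    if not os.environ.get("DATABASE_URL"):
        request.applymarker(pytest.mark.xfail(reason="DATABASE_URL not set"))
    return os.environ.get("DATABASE_URL")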
Raise a FixtureLookupError exception.
Parameters:
msg (str | None) – An optional custom error message.
getfixturevalue(argname)[source]¶
Dynamically run a named fixture function.
Declaring fixtures via function argument is recommended where possible. But if you can only decide whether to use another fixture at test setup time, you may use this function to retrieve it inside a fixture or test function body.
This method can be used during the test setup phase or the test run phase, but during the test teardown phase a fixture’s value may not be available.
Parameters:
argname (str) – The fixture name.
Raises:
pytest.FixtureLookupError – If the given fixture could not be found.
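A sketch of deciding which fixture to use at setup time; the option and fixture names here are hypothetical:
import pytest

@pytest.fixture
def backend(request):
    # Pick a backend fixture by name based on a command line option
    # (assumed to be registered elsewhere via pytest_addoption).
    name = request.config.getoption("--backend", default="sqlite")
    return request.getfixturevalue(f"{name}_backend")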
testdir¶
Identical to pytester, but provides an instance whose methods return legacy py.path.local objects instead when applicable.
New code should avoid using testdir in favor of pytester.
final class Testdir[source]
Similar to Pytester, but this class works with legacy_path objects instead.
All methods just forward to an internal Pytester instance, converting results to legacy_path objects as necessary.
exception TimeoutExpired
property tmpdir_: LocalPath_
Temporary directory where tests are executed.
make_hook_recorder(pluginmanager)[source]
See Pytester.make_hook_recorder().
chdir()[source]
See Pytester.chdir().
makefile(ext, *args, **kwargs)[source]
See Pytester.makefile().
makeconftest(source)[source]
makeini(source)[source]
See Pytester.makeini().
getinicfg(source)[source]
See Pytester.getinicfg().
makepyprojecttoml(source)[source]
See Pytester.makepyprojecttoml().
makepyfile(*args, **kwargs)[source]
maketxtfile(*args, **kwargs)[source]
syspathinsert(path=None)[source]
mkdir(name)[source]
See Pytester.mkdir().
mkpydir(name)[source]
See Pytester.mkpydir().
copy_example(name=None)[source]
getnode(config, arg)[source]
See Pytester.getnode().
getpathnode(path)[source]
genitems(colitems)[source]
See Pytester.genitems().
runitem(source)[source]
See Pytester.runitem().
inline_runsource(source, *cmdlineargs)[source]
See Pytester.inline_runsource().
inline_genitems(*args)[source]
See Pytester.inline_genitems().
inline_run(*args, plugins=(), no_reraise_ctrlc=False)[source]
runpytest_inprocess(*args, **kwargs)[source]
See Pytester.runpytest_inprocess().
runpytest(*args, **kwargs)[source]
See Pytester.runpytest().
parseconfig(*args)[source]
parseconfigure(*args)[source]
See Pytester.parseconfigure().
getitem(source, funcname='test_func')[source]
See Pytester.getitem().
getitems(source)[source]
See Pytester.getitems().
getmodulecol(source, configargs=(), withinit=False)[source]
collect_by_name(modcol, name)[source]
See Pytester.collect_by_name().
popen(cmdargs, stdout=-1, stderr=-1, stdin=NotSetType.token, **kw)[source]
See Pytester.popen().
run(*cmdargs, timeout=None, stdin=NotSetType.token)[source]
See Pytester.run().
runpython(script)[source]
See Pytester.runpython().
runpython_c(command)[source]
runpytest_subprocess(*args, timeout=None)[source]
See Pytester.runpytest_subprocess().
spawn_pytest(string, expect_timeout=10.0)[source]
spawn(cmd, expect_timeout=10.0)[source]
See Pytester.spawn().
tmp_path¶
Tutorial: How to use temporary directories and files in tests
Return a temporary directory (as pathlib.Path object) which is unique to each test function invocation. The temporary directory is created as a subdirectory of the base temporary directory, with configurable retention, as discussed in Temporary directory location and retention.
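A typical test using the fixture:
def test_create_file(tmp_path):
    # tmp_path is a pathlib.Path unique to this test invocation.
    target = tmp_path / "hello.txt"
    target.write_text("content")
    assert target.read_text() == "content"
    assert len(list(tmp_path.iterdir())) == 1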
tmp_path_factory¶
Tutorial: The tmp_path_factory fixture
tmp_path_factory
is an instance of TempPathFactory:
final class TempPathFactory[source]¶
Factory for temporary directories under the common base temp directory, as discussed at Temporary directory location and retention.
mktemp(basename, numbered=True)[source]¶
Create a new temporary directory managed by the factory.
Parameters:
- basename (str) – Directory base name, must be a relative path.
- numbered (bool) – If True, ensure the directory is unique by adding a numbered suffix greater than any existing one: basename="foo-" and numbered=True means that this function will create directories named "foo-0", "foo-1", "foo-2" and so on.
Returns:
The path to the new directory.
Return type:
Path
getbasetemp()[source]¶
Return the base temporary directory, creating it if needed.
Returns:
The base temporary directory.
Return type:
Path
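A sketch of a session-scoped fixture built on the factory (the fixture and directory names are illustrative):
import pytest

@pytest.fixture(scope="session")
def shared_datadir(tmp_path_factory):
    # One directory shared by all tests in the session.
    path = tmp_path_factory.mktemp("shared-data")
    (path / "config.json").write_text("{}")
    return path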
tmpdir¶
Tutorial: The tmpdir and tmpdir_factory fixtures
tmpdir()¶
Return a temporary directory (as legacy_path object) which is unique to each test function invocation. The temporary directory is created as a subdirectory of the base temporary directory, with configurable retention, as discussed in Temporary directory location and retention.
tmpdir_factory¶
Tutorial: The tmpdir and tmpdir_factory fixtures
tmpdir_factory
is an instance of TempdirFactory:
final class TempdirFactory[source]¶
Backward compatibility wrapper that implements py.path.local for TempPathFactory.
mktemp(basename, numbered=True)[source]¶
Same as TempPathFactory.mktemp(), but returns a py.path.local object.
Same as TempPathFactory.getbasetemp(), but returns a py.path.local object.
Hooks¶
Tutorial: Writing plugins
Reference to all hooks which can be implemented by conftest.py files and plugins.
@pytest.hookimpl¶
@pytest.hookimpl¶
pytest’s decorator for marking functions as hook implementations.
See Writing hook functions and pluggy.HookimplMarker().
@pytest.hookspec¶
@pytest.hookspec¶
pytest’s decorator for marking functions as hook specifications.
See Declaring new hooks and pluggy.HookspecMarker().
Bootstrapping hooks¶
Bootstrapping hooks called for plugins registered early enough (internal and third-party plugins).
pytest_load_initial_conftests(early_config, parser, args)[source]¶
Called to implement the loading of initial conftest files ahead of command line option parsing.
Parameters:
- early_config (Config) – The pytest config object.
- args (list_[_str]) – Arguments passed on the command line.
- parser (Parser) – To add command line options.
Use in conftest plugins¶
This hook is not called for conftest files.
pytest_cmdline_parse(pluginmanager, args)[source]¶
Return an initialized Config, parsing the specified args.
Stops at first non-None result, see firstresult: stop at first non-None result.
Note
This hook is only called for plugin classes passed to the plugins arg when using pytest.main to perform an in-process test run.
Parameters:
- pluginmanager (PytestPluginManager) – The pytest plugin manager.
- args (list_[_str]) – List of arguments passed on the command line.
Returns:
A pytest config object.
Return type:
Config | None
Use in conftest plugins¶
This hook is not called for conftest files.
pytest_cmdline_main(config)[source]¶
Called for performing the main command line action.
The default implementation will invoke the configure hooks andpytest_runtestloop.
Stops at first non-None result, see firstresult: stop at first non-None result.
Parameters:
config (Config) – The pytest config object.
Returns:
The exit code.
Return type:
Use in conftest plugins¶
This hook is only called for initial conftests.
Initialization hooks¶
Initialization hooks called for plugins and conftest.py
files.
pytest_addoption(parser, pluginmanager)[source]¶
Register argparse-style options and ini-style config values, called once at the beginning of a test run.
Parameters:
- parser (Parser) – To add command line options, call parser.addoption(...). To add ini-file values call parser.addini(...).
- pluginmanager (PytestPluginManager) – The pytest plugin manager, which can be used to install hookspec()’s or hookimpl()’s and allow one plugin to call another plugin’s hooks to change how command line options are added.
Options can later be accessed through the config object, respectively:
- config.getoption(name) to retrieve the value of a command line option.
- config.getini(name) to retrieve a value read from an ini-style file.
The config object is passed around on many internal objects via the .config attribute or can be retrieved as the pytestconfig fixture.
Note
This hook is incompatible with hook wrappers.
Use in conftest plugins¶
If a conftest plugin implements this hook, it will be called immediately when the conftest is registered.
This hook is only called for initial conftests.
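For example, a conftest.py could register a command line flag and an ini value like this (a sketch; the option and ini names are illustrative):
# content of conftest.py
def pytest_addoption(parser):
    parser.addoption("--run-slow", action="store_true", default=False,
                     help="also run tests marked as slow")
    parser.addini("slow_timeout", help="timeout for slow tests", default="60")

def pytest_configure(config):
    # The values are later available through the config object.
    if config.getoption("--run-slow"):
        print("slow tests enabled, timeout:", config.getini("slow_timeout"))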
pytest_addhooks(pluginmanager)[source]¶
Called at plugin registration time to allow adding new hooks via a call topluginmanager.add_hookspecs(module_or_class, prefix).
Parameters:
pluginmanager (PytestPluginManager) – The pytest plugin manager.
Note
This hook is incompatible with hook wrappers.
Use in conftest plugins¶
If a conftest plugin implements this hook, it will be called immediately when the conftest is registered.
pytest_configure(config)[source]¶
Allow plugins and conftest files to perform initial configuration.
Note
This hook is incompatible with hook wrappers.
Parameters:
config (Config) – The pytest config object.
Use in conftest plugins¶
This hook is called for every initial conftest file after command line options have been parsed. After that, the hook is called for other conftest files as they are registered.
pytest_unconfigure(config)[source]¶
Called before test process is exited.
Parameters:
config (Config) – The pytest config object.
Use in conftest plugins¶
Any conftest file can implement this hook.
pytest_sessionstart(session)[source]¶
Called after the Session object has been created and before performing collection and entering the run test loop.
Parameters:
session (Session) – The pytest session object.
Use in conftest plugins¶
This hook is only called for initial conftests.
pytest_sessionfinish(session, exitstatus)[source]¶
Called after whole test run finished, right before returning the exit status to the system.
Parameters:
- session (Session) – The pytest session object.
- exitstatus (int | ExitCode) – The status which pytest will return to the system.
Use in conftest plugins¶
Any conftest file can implement this hook.
pytest_plugin_registered(plugin, plugin_name, manager)[source]¶
A new pytest plugin got registered.
Parameters:
- plugin (_PluggyPlugin) – The plugin module or instance.
- plugin_name (str) – The name by which the plugin is registered.
- manager (PytestPluginManager) – The pytest plugin manager.
Note
This hook is incompatible with hook wrappers.
Use in conftest plugins¶
If a conftest plugin implements this hook, it will be called immediately when the conftest is registered, once for each plugin registered thus far (including itself!), and for all plugins thereafter when they are registered.
Collection hooks¶
pytest calls the following hooks for collecting files and directories:
pytest_collection(session)[source]¶
Perform the collection phase for the given session.
Stops at first non-None result, see firstresult: stop at first non-None result. The return value is not used, but only stops further processing.
The default collection phase is this (see individual hooks for full details):
- Starting from session as the initial collector:
  - pytest_collectstart(collector)
  - report = pytest_make_collect_report(collector)
  - pytest_exception_interact(collector, call, report) if an interactive exception occurred
  - For each collected node:
    - If an item, pytest_itemcollected(item)
    - If a collector, recurse into it.
  - pytest_collectreport(report)
- pytest_collection_modifyitems(session, config, items)
  - pytest_deselected(items) for any deselected items (may be called multiple times)
- pytest_collection_finish(session)
- Set session.items to the list of collected items
- Set session.testscollected to the number of collected items
You can implement this hook to only perform some action before collection, for example the terminal plugin uses it to start displaying the collection counter (and returns None).
Parameters:
session (Session) – The pytest session object.
Use in conftest plugins¶
This hook is only called for initial conftests.
pytest_ignore_collect(collection_path, path, config)[source]¶
Return True to ignore this path for collection.
Return None to let other plugins ignore the path for collection.
Returning False will forcefully not ignore this path for collection, without giving a chance for other plugins to ignore this path.
This hook is consulted for all files and directories prior to calling more specific hooks.
Stops at first non-None result, see firstresult: stop at first non-None result.
Parameters:
- collection_path (pathlib.Path) – The path to analyze.
- path (LEGACY_PATH) – The path to analyze (deprecated).
- config (Config) – The pytest config object.
Changed in version 7.0.0: The collection_path parameter was added as a pathlib.Path equivalent of the path parameter. The path parameter has been deprecated.
Use in conftest plugins¶
Any conftest file can implement this hook. For a given collection path, only conftest files in parent directories of the collection path are consulted (if the path is a directory, its own conftest file is not consulted - a directory cannot ignore itself!).
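A minimal conftest.py sketch (the directory name is illustrative):
# content of conftest.py
def pytest_ignore_collect(collection_path, config):
    # Skip collecting anything inside directories named "fixtures_data".
    if collection_path.name == "fixtures_data":
        return True
    return None  # let other plugins decide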
pytest_collect_directory(path, parent)[source]¶
Create a Collector for the given directory, or None if not relevant.
Added in version 8.0.
For best results, the returned collector should be a subclass ofDirectory, but this is not required.
The new node needs to have the specified parent as a parent.
Stops at first non-None result, see firstresult: stop at first non-None result.
Parameters:
path (pathlib.Path) – The path to analyze.
See Using a custom directory collector for a simple example of use of this hook.
Use in conftest plugins¶
Any conftest file can implement this hook. For a given collection path, only conftest files in parent directories of the collection path are consulted (if the path is a directory, its own conftest file is not consulted - a directory cannot collect itself!).
pytest_collect_file(file_path, path, parent)[source]¶
Create a Collector for the given path, or None if not relevant.
For best results, the returned collector should be a subclass ofFile, but this is not required.
The new node needs to have the specified parent as a parent.
Parameters:
- file_path (pathlib.Path) – The path to analyze.
- path (LEGACY_PATH) – The path to collect (deprecated).
Changed in version 7.0.0: The file_path parameter was added as a pathlib.Path equivalent of the path parameter. The path parameter has been deprecated.
Use in conftest plugins¶
Any conftest file can implement this hook. For a given file path, only conftest files in parent directories of the file path are consulted.
pytest_pycollect_makemodule(module_path, path, parent)[source]¶
Return a pytest.Module collector or None for the given path.
This hook will be called for each matching test module path. The pytest_collect_file hook needs to be used if you want to create test modules for files that do not match as a test module.
Stops at first non-None result, see firstresult: stop at first non-None result.
Parameters:
- module_path (pathlib.Path) – The path of the module to collect.
- path (LEGACY_PATH) – The path of the module to collect (deprecated).
Changed in version 7.0.0: The module_path parameter was added as a pathlib.Path equivalent of the path parameter.
The path parameter has been deprecated in favor of fspath.
Use in conftest plugins¶
Any conftest file can implement this hook. For a given parent collector, only conftest files in the collector’s directory and its parent directories are consulted.
For influencing the collection of objects in Python modules you can use the following hook:
pytest_pycollect_makeitem(collector, name, obj)[source]¶
Return a custom item/collector for a Python object in a module, or None.
Stops at first non-None result, see firstresult: stop at first non-None result.
Parameters:
- collector (Module | Class) – The module/class collector.
- name (str) – The name of the object in the module/class.
- obj (object) – The object.
Returns:
The created items/collectors.
Return type:
None | Item | Collector | list[Item | Collector]
Use in conftest plugins¶
Any conftest file can implement this hook. For a given collector, only conftest files in the collector’s directory and its parent directories are consulted.
pytest_generate_tests(metafunc)[source]¶
Generate (multiple) parametrized calls to a test function.
Parameters:
metafunc (Metafunc) – The Metafunc helper for the test function.
Use in conftest plugins¶
Any conftest file can implement this hook. For a given function definition, only conftest files in the function's directory and its parent directories are consulted.
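A minimal conftest.py sketch; the db_url argument name and values are illustrative:
# content of conftest.py
def pytest_generate_tests(metafunc):
    # Parametrize every test that declares a "db_url" argument.
    if "db_url" in metafunc.fixturenames:
        metafunc.parametrize("db_url", ["sqlite://", "postgresql://localhost/test"])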
pytest_make_parametrize_id(config, val, argname)[source]¶
Return a user-friendly string representation of the given val that will be used by @pytest.mark.parametrize calls, or None if the hook doesn't know about val.
The parameter name is available as argname, if required.
Stops at first non-None result, see firstresult: stop at first non-None result.
Parameters:
- config (Config) – The pytest config object.
- val (object) – The parametrized value.
- argname (str) – The automatic parameter name produced by pytest.
Use in conftest plugins¶
Any conftest file can implement this hook.
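A sketch that builds readable IDs for complex numbers; any other type could be handled the same way:
# content of conftest.py
def pytest_make_parametrize_id(config, val, argname):
    if isinstance(val, complex):
        return f"{argname}={val!r}"
    return None  # fall back to pytest's default ID generation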
Hooks for influencing test skipping:
pytest_markeval_namespace(config)[source]¶
Called when constructing the globals dictionary used for evaluating string conditions in xfail/skipif markers.
This is useful when the condition for a marker requires objects that are expensive or impossible to obtain during collection time, which is required by normal boolean conditions.
Added in version 6.2.
Parameters:
config (Config) – The pytest config object.
Returns:
A dictionary of additional globals to add.
Return type:
Use in conftest plugins¶
Any conftest file can implement this hook. For a given item, only conftest files in parent directories of the item are consulted.
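A sketch exposing an extra name to string conditions such as @pytest.mark.skipif("not has_docker", ...); the name is our own:
# content of conftest.py
import shutil

def pytest_markeval_namespace(config):
    # "has_docker" becomes usable inside skipif/xfail string conditions.
    return {"has_docker": shutil.which("docker") is not None}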
After collection is complete, you can modify the order of items, delete or otherwise amend the test items:
pytest_collection_modifyitems(session, config, items)[source]¶
Called after collection has been performed. May filter or re-order the items in-place.
When items are deselected (filtered out from items), the hook pytest_deselected must be called explicitly with the deselected items to properly notify other plugins, e.g. with config.hook.pytest_deselected(items=deselected_items).
Parameters:
- session (Session) – The pytest session object.
- config (Config) – The pytest config object.
- items (list_[_Item]) – List of item objects.
Use in conftest plugins¶
Any conftest plugin can implement this hook.
Note
If this hook is implemented in conftest.py files, it always receives all collected items, not only those under the conftest.py where it is implemented.
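A sketch that deselects tests carrying a hypothetical "slow" marker unless an (assumed) --run-slow option was given, and notifies other plugins via pytest_deselected:
# content of conftest.py
def pytest_collection_modifyitems(session, config, items):
    if config.getoption("--run-slow", default=False):
        return
    selected, deselected = [], []
    for item in items:
        (deselected if item.get_closest_marker("slow") else selected).append(item)
    if deselected:
        config.hook.pytest_deselected(items=deselected)
        items[:] = selected  # modify the list in-place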
pytest_collection_finish(session)[source]¶
Called after collection has been performed and modified.
Parameters:
session (Session) – The pytest session object.
Use in conftest plugins¶
Any conftest plugin can implement this hook.
Test running (runtest) hooks¶
All runtest related hooks receive a pytest.Item object.
pytest_runtestloop(session)[source]¶
Perform the main runtest loop (after collection finished).
The default hook implementation performs the runtest protocol for all items collected in the session (session.items), unless the collection failed or the collectonly pytest option is set.
If at any point pytest.exit() is called, the loop is terminated immediately.
If at any point session.shouldfail or session.shouldstop are set, the loop is terminated after the runtest protocol for the current item is finished.
Parameters:
session (Session) – The pytest session object.
Stops at first non-None result, see firstresult: stop at first non-None result. The return value is not used, but only stops further processing.
Use in conftest plugins¶
Any conftest file can implement this hook.
pytest_runtest_protocol(item, nextitem)[source]¶
Perform the runtest protocol for a single test item.
The default runtest protocol is this (see individual hooks for full details):
- pytest_runtest_logstart(nodeid, location)
- Setup phase:
  - call = pytest_runtest_setup(item) (wrapped in CallInfo(when="setup"))
  - report = pytest_runtest_makereport(item, call)
  - pytest_runtest_logreport(report)
  - pytest_exception_interact(call, report) if an interactive exception occurred
- Call phase, if the setup passed and the setuponly pytest option is not set:
  - call = pytest_runtest_call(item) (wrapped in CallInfo(when="call"))
  - report = pytest_runtest_makereport(item, call)
  - pytest_runtest_logreport(report)
  - pytest_exception_interact(call, report) if an interactive exception occurred
- Teardown phase:
  - call = pytest_runtest_teardown(item, nextitem) (wrapped in CallInfo(when="teardown"))
  - report = pytest_runtest_makereport(item, call)
  - pytest_runtest_logreport(report)
  - pytest_exception_interact(call, report) if an interactive exception occurred
- pytest_runtest_logfinish(nodeid, location)
Parameters:
- item (Item) – Test item for which the runtest protocol is performed.
- nextitem (Item | None) – The scheduled-to-be-next test item (or None if this is the last item).
Stops at first non-None result, see firstresult: stop at first non-None result. The return value is not used, but only stops further processing.
Use in conftest plugins¶
Any conftest file can implement this hook.
pytest_runtest_logstart(nodeid, location)[source]¶
Called at the start of running the runtest protocol for a single item.
See pytest_runtest_protocol for a description of the runtest protocol.
Parameters:
- nodeid (str) – Full node ID of the item.
- location (tuple[str, int | None, str]) – A tuple of (filename, lineno, testname) where filename is a file path relative to config.rootpath and lineno is 0-based.
Use in conftest plugins¶
Any conftest file can implement this hook. For a given item, only conftest files in the item’s directory and its parent directories are consulted.
pytest_runtest_logfinish(nodeid, location)[source]¶
Called at the end of running the runtest protocol for a single item.
See pytest_runtest_protocol for a description of the runtest protocol.
Parameters:
- nodeid (str) – Full node ID of the item.
- location (tuple[str, int | None, str]) – A tuple of (filename, lineno, testname) where filename is a file path relative to config.rootpath and lineno is 0-based.
Use in conftest plugins¶
Any conftest file can implement this hook. For a given item, only conftest files in the item’s directory and its parent directories are consulted.
pytest_runtest_setup(item)[source]¶
Called to perform the setup phase for a test item.
The default implementation runs setup() on item and all of its parents (which haven't been set up yet). This includes obtaining the values of fixtures required by the item (which haven't been obtained yet).
Parameters:
item (Item) – The item.
Use in conftest plugins¶
Any conftest file can implement this hook. For a given item, only conftest files in the item’s directory and its parent directories are consulted.
pytest_runtest_call(item)[source]¶
Called to run the test for test item (the call phase).
The default implementation calls item.runtest().
Parameters:
item (Item) – The item.
Use in conftest plugins¶
Any conftest file can implement this hook. For a given item, only conftest files in the item’s directory and its parent directories are consulted.
pytest_runtest_teardown(item, nextitem)[source]¶
Called to perform the teardown phase for a test item.
The default implementation runs the finalizers and calls teardown() on item and all of its parents (which need to be torn down). This includes running the teardown phase of fixtures required by the item (if they go out of scope).
Parameters:
- item (Item) – The item.
- nextitem (Item | None) – The scheduled-to-be-next test item (None if no further test item is scheduled). This argument is used to perform exact teardowns, i.e. calling just enough finalizers so that nextitem only needs to call setup functions.
Use in conftest plugins¶
Any conftest file can implement this hook. For a given item, only conftest files in the item’s directory and its parent directories are consulted.
pytest_runtest_makereport(item, call)[source]¶
Called to create a TestReport for each of the setup, call and teardown runtest phases of a test item.
See pytest_runtest_protocol for a description of the runtest protocol.
Parameters:
- item (Item) – The item.
- call (CallInfo[ None ]) – The CallInfo for the phase.
Stops at first non-None result, see firstresult: stop at first non-None result.
Use in conftest plugins¶
Any conftest file can implement this hook. For a given item, only conftest files in the item’s directory and its parent directories are consulted.
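One common pattern is a hook wrapper that stores the report of each phase on the item so fixtures can later check whether the test failed; the rep_* attribute names are our own convention, not a pytest API (sketch assumes new-style wrappers, available in recent pytest):
# content of conftest.py
import pytest

@pytest.hookimpl(wrapper=True)
def pytest_runtest_makereport(item, call):
    report = yield
    # e.g. item.rep_setup, item.rep_call, item.rep_teardown
    setattr(item, "rep_" + report.when, report)
    return report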
For deeper understanding you may look at the default implementation of these hooks in _pytest.runner and maybe also in _pytest.pdb, which interacts with _pytest.capture and its input/output capturing in order to immediately drop into interactive debugging when a test failure occurs.
pytest_pyfunc_call(pyfuncitem)[source]¶
Call underlying test function.
Stops at first non-None result, see firstresult: stop at first non-None result.
Parameters:
pyfuncitem (Function) – The function item.
Use in conftest plugins¶
Any conftest file can implement this hook. For a given item, only conftest files in the item’s directory and its parent directories are consulted.
Reporting hooks¶
Session related reporting hooks:
pytest_collectstart(collector)[source]¶
Collector starts collecting.
Parameters:
collector (Collector) – The collector.
Use in conftest plugins¶
Any conftest file can implement this hook. For a given collector, only conftest files in the collector’s directory and its parent directories are consulted.
pytest_make_collect_report(collector)[source]¶
Perform collector.collect() and return a CollectReport.
Stops at first non-None result, see firstresult: stop at first non-None result.
Parameters:
collector (Collector) – The collector.
Use in conftest plugins¶
Any conftest file can implement this hook. For a given collector, only conftest files in the collector’s directory and its parent directories are consulted.
pytest_itemcollected(item)[source]¶
We just collected a test item.
Parameters:
item (Item) – The item.
Use in conftest plugins¶
Any conftest file can implement this hook. For a given item, only conftest files in the item’s directory and its parent directories are consulted.
pytest_collectreport(report)[source]¶
Collector finished collecting.
Parameters:
report (CollectReport) – The collect report.
Use in conftest plugins¶
Any conftest file can implement this hook. For a given collector, only conftest files in the collector’s directory and its parent directories are consulted.
pytest_deselected(items)[source]¶
Called for deselected test items, e.g. by keyword.
Note that this hook has two integration aspects for plugins:
- it can be implemented to be notified of deselected items
- it must be called from pytest_collection_modifyitemsimplementations when items are deselected (to properly notify other plugins).
May be called multiple times.
Parameters:
items (Sequence _[_Item]) – The items.
Use in conftest plugins¶
Any conftest file can implement this hook.
pytest_report_collectionfinish(config, start_path, startdir, items)[source]¶
Return a string or list of strings to be displayed after collection has finished successfully.
These strings will be displayed after the standard “collected X items” message.
Added in version 3.2.
Parameters:
- config (Config) – The pytest config object.
- start_path (pathlib.Path) – The starting dir.
- startdir (LEGACY_PATH) – The starting dir (deprecated).
- items (Sequence _[_Item]) – List of pytest items that are going to be executed; this list should not be modified.
Note
Lines returned by a plugin are displayed before those of plugins which ran before it. If you want to have your line(s) displayed first, use trylast=True.
Changed in version 7.0.0: The start_path parameter was added as a pathlib.Path equivalent of the startdir parameter. The startdir parameter has been deprecated.
Use in conftest plugins¶
Any conftest plugin can implement this hook.
pytest_report_teststatus(report, config)[source]¶
Return result-category, shortletter and verbose word for status reporting.
The result-category is a category in which to count the result, for example “passed”, “skipped”, “error” or the empty string.
The shortletter is shown as testing progresses, for example “.”, “s”, “E” or the empty string.
The verbose word is shown as testing progresses in verbose mode, for example “PASSED”, “SKIPPED”, “ERROR” or the empty string.
pytest may style these implicitly according to the report outcome. To provide explicit styling, return a tuple for the verbose word, for example "rerun", "R", ("RERUN", {"yellow": True}).
Parameters:
- report (CollectReport | TestReport) – The report object whose status is to be returned.
- config (Config) – The pytest config object.
Returns:
The test status.
Return type:
TestShortLogReport | tuple[str, str, str | tuple[str, Mapping[str, bool]]]
Stops at first non-None result, see firstresult: stop at first non-None result.
Use in conftest plugins¶
Any conftest plugin can implement this hook.
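A sketch that gives tests carrying a hypothetical "wip" keyword their own status category, short letter and styled verbose word:
# content of conftest.py
def pytest_report_teststatus(report, config):
    if report.when == "call" and report.passed and "wip" in report.keywords:
        return "wip", "W", ("WIP PASSED", {"yellow": True})
    return None  # defer to the default status handling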
pytest_report_to_serializable(config, report)[source]¶
Serialize the given report object into a data structure suitable for sending over the wire, e.g. converted to JSON.
Parameters:
- config (Config) – The pytest config object.
- report (CollectReport | TestReport) – The report.
Use in conftest plugins¶
Any conftest file can implement this hook. The exact details may depend on the plugin which calls the hook.
pytest_report_from_serializable(config, data)[source]¶
Restore a report object previously serialized withpytest_report_to_serializable.
Parameters:
config (Config) – The pytest config object.
Use in conftest plugins¶
Any conftest file can implement this hook. The exact details may depend on the plugin which calls the hook.
pytest_terminal_summary(terminalreporter, exitstatus, config)[source]¶
Add a section to terminal summary reporting.
Parameters:
- terminalreporter (TerminalReporter) – The internal terminal reporter object.
- exitstatus (ExitCode) – The exit status that will be reported back to the OS.
- config (Config) – The pytest config object.
Added in version 4.2: The config
parameter.
Use in conftest plugins¶
Any conftest plugin can implement this hook.
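A minimal sketch adding a custom section:
# content of conftest.py
def pytest_terminal_summary(terminalreporter, exitstatus, config):
    terminalreporter.section("example summary")
    terminalreporter.line(f"finished with exit status {exitstatus!r}")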
pytest_fixture_setup(fixturedef, request)[source]¶
Perform fixture setup execution.
Parameters:
- fixturedef (FixtureDef[ Any ]) – The fixture definition object.
- request (SubRequest) – The fixture request object.
Returns:
The return value of the call to the fixture function.
Return type:
object | None
Stops at first non-None result, see firstresult: stop at first non-None result.
Use in conftest plugins¶
Any conftest file can implement this hook. For a given fixture, only conftest files in the fixture scope’s directory and its parent directories are consulted.
pytest_fixture_post_finalizer(fixturedef, request)[source]¶
Called after fixture teardown, but before the cache is cleared, so the fixture result fixturedef.cached_result is still available (not None).
Parameters:
- fixturedef (FixtureDef[ Any ]) – The fixture definition object.
- request (SubRequest) – The fixture request object.
Use in conftest plugins¶
Any conftest file can implement this hook. For a given fixture, only conftest files in the fixture scope’s directory and its parent directories are consulted.
pytest_warning_recorded(warning_message, when, nodeid, location)[source]¶
Process a warning captured by the internal pytest warnings plugin.
Parameters:
- warning_message (warnings.WarningMessage) – The captured warning. This is the same object produced by warnings.catch_warnings, and contains the same attributes as the parameters of warnings.showwarning().
- when (Literal [ 'config' , 'collect' , 'runtest' ]) –
Indicates when the warning was captured. Possible values: "config": during the pytest configuration/initialization stage; "collect": during test collection; "runtest": during test execution.
- nodeid (str) – Full id of the item. Empty string for warnings that are not specific to a particular node.
- location (tuple[str, int, str] | None) – When available, holds information about the execution context of the captured warning (filename, linenumber, function). function evaluates to <module> when the execution context is at the module level.
Added in version 6.0.
Use in conftest plugins¶
Any conftest file can implement this hook. If the warning is specific to a particular node, only conftest files in parent directories of the node are consulted.
Central hook for reporting about test execution:
pytest_runtest_logreport(report)[source]¶
Process the TestReport produced for each of the setup, call and teardown runtest phases of an item.
See pytest_runtest_protocol for a description of the runtest protocol.
Use in conftest plugins¶
Any conftest file can implement this hook. For a given item, only conftest files in the item’s directory and its parent directories are consulted.
Assertion related hooks:
pytest_assertrepr_compare(config, op, left, right)[source]¶
Return explanation for comparisons in failing assert expressions.
Return None for no custom explanation, otherwise return a list of strings. The strings will be joined by newlines but any newlines in a string will be escaped. Note that all but the first line will be indented slightly; the intention is for the first line to be a summary.
Parameters:
- config (Config) – The pytest config object.
- op (str) – The operator, e.g. "==", "!=", "not in".
. - left (object) – The left operand.
- right (object) – The right operand.
Use in conftest plugins¶
Any conftest file can implement this hook. For a given item, only conftest files in the item’s directory and its parent directories are consulted.
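A sketch for a hypothetical domain type; the Money class below exists only for the example:
# content of conftest.py
from dataclasses import dataclass

@dataclass
class Money:  # illustrative domain type
    amount: int
    currency: str

def pytest_assertrepr_compare(config, op, left, right):
    if op == "==" and isinstance(left, Money) and isinstance(right, Money):
        return [
            "Comparing Money instances:",
            f"   amounts: {left.amount} != {right.amount}",
            f"   currencies: {left.currency} != {right.currency}",
        ]
    return None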
pytest_assertion_pass(item, lineno, orig, expl)[source]¶
Called whenever an assertion passes.
Added in version 5.0.
Use this hook to do some processing after a passing assertion. The original assertion information is available in the orig string and the pytest introspected assertion information is available in the expl string.
This hook must be explicitly enabled by the enable_assertion_pass_hook ini-file option:
[pytest]
enable_assertion_pass_hook=true
You need to clean the .pyc files in your project directory and interpreter libraries when enabling this option, as assertions need to be rewritten.
Parameters:
- item (Item) – pytest item object of current test.
- lineno (int) – Line number of the assert statement.
- orig (str) – String with the original assertion.
- expl (str) – String with the assert explanation.
Use in conftest plugins¶
Any conftest file can implement this hook. For a given item, only conftest files in the item’s directory and its parent directories are consulted.
Debugging/Interaction hooks¶
There are few hooks which can be used for special reporting or interaction with exceptions:
pytest_internalerror(excrepr, excinfo)[source]¶
Called for internal errors.
Return True to suppress the fallback handling of printing an INTERNALERROR message directly to sys.stderr.
Parameters:
- excrepr (ExceptionRepr) – The exception repr object.
- excinfo (ExceptionInfo_[_BaseException]) – The exception info.
Use in conftest plugins¶
Any conftest plugin can implement this hook.
pytest_keyboard_interrupt(excinfo)[source]¶
Called for keyboard interrupt.
Parameters:
excinfo (ExceptionInfo_[_KeyboardInterrupt | Exit ]) – The exception info.
Use in conftest plugins¶
Any conftest plugin can implement this hook.
pytest_exception_interact(node, call, report)[source]¶
Called when an exception was raised which can potentially be interactively handled.
May be called during collection (see pytest_make_collect_report), in which case report is a CollectReport.
May be called during runtest of an item (see pytest_runtest_protocol), in which case report is a TestReport.
This hook is not called if the exception that was raised is an internal exception like skip.Exception.
Parameters:
- node (Item | Collector) – The item or collector.
- call (CallInfo[ Any ]) – The call information. Contains the exception.
- report (CollectReport | TestReport) – The collection or test report.
Use in conftest plugins¶
Any conftest file can implement this hook. For a given node, only conftest files in parent directories of the node are consulted.
pytest_enter_pdb(config, pdb)[source]¶
Called upon pdb.set_trace().
Can be used by plugins to take special action just before the python debugger enters interactive mode.
Parameters:
- config (Config) – The pytest config object.
- pdb (pdb.Pdb) – The Pdb instance.
Use in conftest plugins¶
Any conftest plugin can implement this hook.
pytest_leave_pdb(config, pdb)[source]¶
Called when leaving pdb (e.g. with continue after pdb.set_trace()).
Can be used by plugins to take special action just after the python debugger leaves interactive mode.
Parameters:
- config (Config) – The pytest config object.
- pdb (pdb.Pdb) – The Pdb instance.
Use in conftest plugins¶
Any conftest plugin can implement this hook.
Collection tree objects¶
These are the collector and item classes (collectively called “nodes”) which make up the collection tree.
Node¶
Bases: ABC
Base class of Collector and Item, the components of the test collection tree.
Collectors are the internal nodes of the tree, and Items are the leaf nodes.
fspath_: LEGACY_PATH_¶
A LEGACY_PATH copy of the path attribute. Intended for usage by methods not yet migrated to pathlib.Path, such as Item.reportinfo. Will be deprecated in a future release; prefer using path instead.
A unique name within the scope of the parent node.
parent¶
The parent collector node.
The pytest config object.
The pytest session this node is part of.
path_: pathlib.Path_¶
Filesystem path where this node was collected from (can be None).
keywords_: MutableMapping[str, Any]_¶
Keywords/markers collected from all scopes.
The marker objects belonging to this node.
Allow adding of extra keywords to use for matching.
A place where plugins can store information on the node for their own use.
classmethod from_parent(parent, **kw)[source]¶
Public constructor for Nodes.
This indirection got introduced in order to enable removing the fragile logic from the node constructors.
Subclasses can use super().from_parent(...) when overriding the construction.
Parameters:
parent (Node) – The parent node of this Node.
fspath-sensitive hook proxy used to call pytest hooks.
Issue a warning for this Node.
Warnings will be displayed after the test session, unless explicitly suppressed.
Parameters:
warning (Warning) – The warning instance to issue.
Raises:
ValueError – If the warning instance is not a subclass of Warning.
Example usage:
node.warn(PytestWarning("some message"))
node.warn(UserWarning("some message"))
Changed in version 6.2: Any subclass of Warning is now accepted, rather than onlyPytestWarning subclasses.
A ::-separated string denoting its collection tree address.
for ... in iter_parents()[source]¶
Iterate over all parent collectors starting from and including self up to the root of the collection tree.
Added in version 8.1.
Return a list of all parent collectors starting from the root of the collection tree down to and including self.
add_marker(marker, append=True)[source]¶
Dynamically add a marker object to the node.
Parameters:
- marker (str | MarkDecorator) – The marker.
- append (bool) – Whether to append the marker, or prepend it.
iter_markers(name=None)[source]¶
Iterate over all markers of the node.
Parameters:
name (str | None) – If given, filter the results by the name attribute.
Returns:
An iterator of the markers of the node.
Return type:
for ... in iter_markers_with_node(name=None)[source]¶
Iterate over all markers of the node.
Parameters:
name (str | None) – If given, filter the results by the name attribute.
Returns:
An iterator of (node, mark) tuples.
Return type:
get_closest_marker(name: str) → Mark | None[source]¶
get_closest_marker(name: str, default: Mark) → Mark
Return the first marker matching the name, from closest (for example function) to farther level (for example module level).
Parameters:
- default – Fallback return value if no marker was found.
- name – Name to filter by.
Return a set of all extra keywords in self and any parents.
Register a function to be called without arguments when this node is finalized.
This method can only be called when this node is active in a setup chain, for example during self.setup().
Get the closest parent node (including self) which is an instance of the given class.
Parameters:
cls (type[ _NodeType ]) – The node class to search for.
Returns:
The node, if found.
Return type:
_NodeType | None
repr_failure(excinfo, style=None)[source]¶
Return a representation of a collection or test failure.
Parameters:
excinfo (ExceptionInfo_[_BaseException]) – Exception information for the failure.
Collector¶
Base class of all collectors.
Collectors create children through collect() and thus iteratively build the collection tree.
exception CollectError[source]¶
Bases: Exception
An error during collection, contains a custom message.
abstractmethod collect()[source]¶
Collect children (items and collectors) for this collector.
repr_failure(excinfo)[source]¶
Return a representation of a collection failure.
Parameters:
excinfo (ExceptionInfo_[_BaseException]) – Exception information for the failure.
A unique name within the scope of the parent node.
parent¶
The parent collector node.
The pytest config object.
The pytest session this node is part of.
path_: pathlib.Path_¶
Filesystem path where this node was collected from (can be None).
Item¶
Base class of all test invocation items.
Note that for a single function there might be multiple test invocation items.
user_properties_: list[tuple[str, object]]_¶
A list of tuples (name, value) that holds user defined properties for this test.
A unique name within the scope of the parent node.
parent¶
The parent collector node.
The pytest config object.
The pytest session this node is part of.
path_: pathlib.Path_¶
Filesystem path where this node was collected from (can be None).
abstractmethod runtest()[source]¶
Run the test case for this item.
Must be implemented by subclasses.
add_report_section(when, key, content)[source]¶
Add a new report section, similar to what’s done internally to add stdout and stderr captured output:
item.add_report_section("call", "stdout", "report section contents")
Parameters:
- when (str) – One of the possible capture states, "setup", "call", "teardown".
- key (str) – Name of the section, can be customized at will. Pytest uses "stdout" and "stderr" internally.
- content (str) – The full contents as a string.
Get location information for this item for test reports.
Returns a tuple with three elements:
- The path of the test (default self.path)
- The 0-based line number of the test (default None)
- A name of the test to be shown (default "")
property location_: tuple[str, int | None, str]_¶
Returns a tuple of (relfspath, lineno, testname) for this item, where relfspath is the file path relative to config.rootpath and lineno is a 0-based line number.
File¶
Bases: FSCollector, ABC
Base class for collecting tests from a file.
Working with non-python tests.
A unique name within the scope of the parent node.
parent¶
The parent collector node.
The pytest config object.
The pytest session this node is part of.
path_: pathlib.Path_¶
Filesystem path where this node was collected from (can be None).
FSCollector¶
Base class for filesystem collectors.
path_: pathlib.Path_¶
Filesystem path where this node was collected from (can be None).
classmethod from_parent(parent, *, fspath=None, path=None, **kw)[source]¶
The public constructor.
A unique name within the scope of the parent node.
parent¶
The parent collector node.
The pytest config object.
The pytest session this node is part of.
Session¶
Bases: Collector
The root of the collection tree.
Session collects the initial paths given as arguments to pytest.
exception Interrupted¶
Bases: KeyboardInterrupt
Signals that the test run was interrupted.
exception Failed¶
Bases: Exception
Signals a stop because the test run failed.
The path from which pytest was invoked.
Added in version 7.0.0.
isinitpath(path, *, with_parents=False)[source]¶
Is path an initial path?
An initial path is a path explicitly given to pytest on the command line.
Parameters:
with_parents (bool) – If set, also return True if the path is a parent of an initial path.
Changed in version 8.0: Added the with_parents parameter.
perform_collect(args: Sequence[str] | None = None, genitems: Literal[True] = True) → Sequence[Item][source]¶
perform_collect(args: Sequence[str] | None = None, genitems: bool = True) → Sequence[Item | Collector]
Perform the collection phase for this session.
This is called by the default pytest_collection hook implementation; see the documentation of this hook for more details. For testing purposes, it may also be called directly on a fresh Session.
This function normally recursively expands any collectors collected from the session to their items, and only items are returned. For testing purposes, this may be suppressed by passing genitems=False, in which case the return value contains these collectors unexpanded, and session.items is empty.
Collect children (items and collectors) for this collector.
A unique name within the scope of the parent node.
parent¶
The parent collector node.
The pytest config object.
The pytest session this node is part of.
path_: pathlib.Path_¶
Filesystem path where this node was collected from (can be None).
Package¶
Bases: Directory
Collector for files and directories in a Python package – a directory with an __init__.py file.
Note
Directories without an __init__.py file are instead collected by Dir by default. Both are Directory collectors.
Changed in version 8.0: Now inherits from Directory.
Collect children (items and collectors) for this collector.
A unique name within the scope of the parent node.
parent¶
The parent collector node.
The pytest config object.
The pytest session this node is part of.
path_: pathlib.Path_¶
Filesystem path where this node was collected from (can be None).
Module¶
Bases: File, PyCollector
Collector for test classes and functions in a Python module.
Collect children (items and collectors) for this collector.
A unique name within the scope of the parent node.
parent¶
The parent collector node.
The pytest config object.
The pytest session this node is part of.
path_: pathlib.Path_¶
Filesystem path where this node was collected from (can be None).
Class¶
Bases: PyCollector
Collector for test methods (and nested classes) in a Python class.
classmethod from_parent(parent, *, name, obj=None, **kw)[source]¶
The public constructor.
Collect children (items and collectors) for this collector.
A unique name within the scope of the parent node.
parent¶
The parent collector node.
The pytest config object.
The pytest session this node is part of.
path_: pathlib.Path_¶
Filesystem path where this node was collected from (can be None).
Function¶
Bases: PyobjMixin, Item
Item responsible for setting up and executing a Python test function.
Parameters:
- name – The full function name, including any decorations like those added by parametrization (my_func[my_param]).
- parent – The parent Node.
- config – The pytest Config object.
- callspec – If given, this function has been parametrized and the callspec contains meta information about the parametrization.
- callobj – If given, the object which will be called when the Function is invoked, otherwise the callobj will be obtained from parent using originalname.
- keywords – Keywords bound to the function object for “-k” matching.
- session – The pytest Session object.
- fixtureinfo – Fixture information already resolved at this fixture node.
- originalname – The attribute name to use for accessing the underlying function object. Defaults to name. Set this if the name is different from the original name, for example when it contains decorations like those added by parametrization (my_func[my_param]).
originalname¶
Original function name, without any decorations (for example parametrization adds a "[...]" suffix to function names), used to access the underlying function object from parent (in case callobj is not given explicitly).
Added in version 3.0.
classmethod from_parent(parent, **kw)[source]¶
The public constructor.
property function¶
Underlying python ‘function’ object.
property instance¶
Python instance object the function is bound to.
Returns None if not a test method, e.g. for a standalone test function, a class or a module.
Execute the underlying test function.
repr_failure(excinfo)[source]¶
Return a representation of a collection or test failure.
Parameters:
excinfo (ExceptionInfo_[_BaseException]) – Exception information for the failure.
A unique name within the scope of the parent node.
parent¶
The parent collector node.
The pytest config object.
The pytest session this node is part of.
path_: pathlib.Path_¶
Filesystem path where this node was collected from (can be None).
FunctionDefinition¶
class FunctionDefinition[source]¶
Bases: Function
This class is a stopgap solution until we evolve to have actual function definition nodes and manage to get rid of metafunc.
Execute the underlying test function.
A unique name within the scope of the parent node.
parent¶
The parent collector node.
The pytest config object.
The pytest session this node is part of.
path_: pathlib.Path_¶
Filesystem path where this node was collected from (can be None).
setup()¶
Execute the underlying test function.
Objects¶
Objects accessible from fixtures or hooks or importable from pytest.
CallInfo¶
Result/Exception info of a function invocation.
excinfo_: ExceptionInfo[BaseException] | None_¶
The captured exception of the call, if it raised.
The system time when the call started, in seconds since the epoch.
The system time when the call ended, in seconds since the epoch.
The call duration, in seconds.
when_: Literal['collect', 'setup', 'call', 'teardown']_¶
The context of invocation: “collect”, “setup”, “call” or “teardown”.
property result_: TResult_¶
The return value of the call, if it didn’t raise.
Can only be accessed if excinfo is None.
classmethod from_call(func, when, reraise=None)[source]¶
Call func, wrapping the result in a CallInfo.
Parameters:
- func (Callable [ [ ] , _pytest.runner.TResult ]) – The function to call. Called without arguments.
- when (Literal[ 'collect' , 'setup' , 'call' , 'teardown' ]) – The phase in which the function is called.
- reraise (type_[_BaseException] | tuple_[_type_[_BaseException] , ... ] | None) – Exception or exceptions that shall propagate if raised by the function, instead of being wrapped in the CallInfo.
CollectReport¶
final class CollectReport[source]¶
Bases: BaseReport
Collection report object.
Reports can contain arbitrary extra attributes.
Normalized collection nodeid.
outcome_: Literal['passed', 'failed', 'skipped']_¶
Test outcome, always one of “passed”, “failed”, “skipped”.
longrepr_: None | ExceptionInfo[BaseException] | tuple[str, int, str] | str | TerminalRepr_¶
None or a failure representation.
result¶
The collected items and collection nodes.
sections_: list[tuple[str, str]]_¶
Tuples of str (heading, content) with extra information for the test report. Used by pytest to add text captured from stdout, stderr, and intercepted logging events. May be used by other plugins to add arbitrary information to reports.
Return captured log lines, if log capturing is enabled.
Added in version 3.5.
Return captured text from stderr, if capturing is enabled.
Added in version 3.0.
Return captured text from stdout, if capturing is enabled.
Added in version 3.0.
property count_towards_summary_: bool_¶
Experimental Whether this report should be counted towards the totals shown at the end of the test session: “1 passed, 1 failure, etc”.
Note
This function is considered experimental, so beware that it is subject to changes even in patch releases.
Whether the outcome is failed.
The path portion of the reported node, as a string.
property head_line_: str | None_¶
Experimental The head line shown with longrepr output for this report, more commonly during traceback representation during failures:
________ Test.foo ________
In the example above, the head_line is “Test.foo”.
Note
This function is considered experimental, so beware that it is subject to changes even in patch releases.
Read-only property that returns the full string representation of longrepr.
Added in version 3.0.
Whether the outcome is passed.
Whether the outcome is skipped.
Config¶
Access to configuration values, pluginmanager and plugin hooks.
Parameters:
- pluginmanager (PytestPluginManager) – A pytest PluginManager.
- invocation_params (InvocationParams) – Object containing parameters regarding the pytest.main()invocation.
final class InvocationParams(*, args, plugins, dir)[source]¶
Holds parameters passed during pytest.main().
The object attributes are read-only.
Added in version 5.1.
Note
Note that the environment variable PYTEST_ADDOPTS and the addopts ini option are handled by pytest and are not included in the args attribute.
Plugins accessing InvocationParams must be aware of that.
The command-line arguments as passed to pytest.main().
plugins_: Sequence[str | object] | None_¶
Extra plugins, might be None.
The directory from which pytest.main() was invoked (a pathlib.Path).
class ArgsSource(*values)[source]¶
Indicates the source of the test arguments.
Added in version 7.2.
ARGS = 1¶
Command line arguments.
INVOCATION_DIR = 2¶
Invocation directory.
TESTPATHS = 3¶
‘testpaths’ configuration value.
option¶
Access to command line option as attributes.
Type:
invocation_params¶
The parameters with which pytest was invoked.
Type:
pluginmanager¶
The plugin manager handles plugin registration and hook invocation.
Type:
stash¶
A place where plugins can store information on the config for their own use.
Type:
The path to the rootdir.
Type:
Added in version 6.1.
property inipath_: Path | None_¶
The path to the configfile.
Added in version 6.1.
Add a function to be called when the config object gets out of use (usually coinciding with pytest_unconfigure).
classmethod fromdictargs(option_dict, args)[source]¶
Constructor usable for subprocesses.
issue_config_time_warning(warning, stacklevel)[source]¶
Issue and handle a warning during the “configure” stage.
During pytest_configure we can't capture warnings using the catch_warnings_for_item function because it is not possible to have hook wrappers around pytest_configure.
This function is mainly intended for plugins that need to issue warnings during pytest_configure (or similar stages).
Parameters:
addinivalue_line(name, line)[source]¶
Add a line to an ini-file option. The option must have been declared but might not yet be set, in which case the line becomes the first line in its value.
Return configuration value from an ini file.
If a configuration value is not defined in an ini file, then the default value provided while registering the configuration through parser.addini will be returned. Please note that you can even provide None as a valid default value.
If default is not provided while registering using parser.addini, then a default value based on the type parameter passed to parser.addini will be returned. The default values based on type are:
- paths, pathlist, args and linelist: empty list []
- bool: False
- string: empty string ""
If neither the default nor the type parameter is passed while registering the configuration through parser.addini, then the configuration is treated as a string and a default empty string '' is returned.
If the specified name hasn’t been registered through a priorparser.addini call (usually from a plugin), a ValueError is raised.
getoption(name, default=<notset>, skip=False)[source]¶
Return command line option value.
Parameters:
- name (str) – Name of the option. You may also specify the literal --OPT option instead of the “dest” option name.
- default – Fallback value if no option of that name is declared via pytest_addoption. Note this parameter will be ignored when the option is declared even if the option’s value is None.
- skip (bool) – If True, raise pytest.skip() if option is undeclared or has a None value. Note that even if True, if a default was specified it will be returned instead of a skip.
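For example, a conftest.py sketch; the --slow option and the slow_enabled fixture are illustrative names, not part of pytest:
# content of conftest.py -- illustrative only
import pytest

def pytest_addoption(parser):
    parser.addoption("--slow", action="store_true", default=False,
                     help="run tests marked as slow")

@pytest.fixture
def slow_enabled(request):
    # getoption() accepts either the dest name or the literal "--slow" spelling.
    return request.config.getoption("--slow")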
getvalue(name, path=None)[source]¶
Deprecated, use getoption() instead.
getvalueorskip(name, path=None)[source]¶
Deprecated, use getoption(skip=True) instead.
VERBOSITY_ASSERTIONS_: Final_ = 'assertions'¶
Verbosity type for failed assertions (see verbosity_assertions).
VERBOSITY_TEST_CASES_: Final_ = 'test_cases'¶
Verbosity type for test case execution (see verbosity_test_cases).
get_verbosity(verbosity_type=None)[source]¶
Retrieve the verbosity level for a fine-grained verbosity type.
Parameters:
verbosity_type (str | None) – Verbosity type to get level for. If a level is configured for the given type, that value will be returned. If the given type is not a known verbosity type, the global verbosity level will be returned. If the given type is None (default), the global verbosity level will be returned.
To configure a level for a fine-grained verbosity type, the configuration file should have a setting for the configuration name and a numeric value for the verbosity level. A special value of “auto” can be used to explicitly use the global verbosity level.
Example:
content of pytest.ini
[pytest]
verbosity_assertions = 2
print(config.get_verbosity())  # 1
print(config.get_verbosity(Config.VERBOSITY_ASSERTIONS))  # 2
Dir¶
Collector of files in a file system directory.
Added in version 8.0.
Note
Python directories with an __init__.py file are instead collected by Package by default. Both are Directory collectors.
classmethod from_parent(parent, *, path)[source]¶
The public constructor.
Parameters:
- parent (nodes.Collector) – The parent collector of this Dir.
- path (pathlib.Path) – The directory’s path.
Collect children (items and collectors) for this collector.
A unique name within the scope of the parent node.
parent¶
The parent collector node.
The pytest config object.
The pytest session this node is part of.
path_: pathlib.Path_¶
Filesystem path where this node was collected from (can be None).
Directory¶
Base class for collecting files from a directory.
A basic directory collector does the following: goes over the files and sub-directories in the directory and creates collectors for them by calling the hooks pytest_collect_directory and pytest_collect_file, after checking that they are not ignored usingpytest_ignore_collect.
The default directory collectors are Dir andPackage.
Added in version 8.0.
Using a custom directory collector.
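Below is a sketch of a custom directory collector, loosely modelled on the example from the pytest documentation; the manifest.txt file name and the ManifestDirectory class are illustrative:
# content of conftest.py -- illustrative only
import pytest

class ManifestDirectory(pytest.Directory):
    def collect(self):
        # Only collect the files listed in a manifest.txt in this directory.
        names = (self.path / "manifest.txt").read_text().splitlines()
        for name in names:
            yield from self.ihook.pytest_collect_file(
                file_path=self.path / name, parent=self
            )

@pytest.hookimpl
def pytest_collect_directory(path, parent):
    if path.joinpath("manifest.txt").is_file():
        return ManifestDirectory.from_parent(parent=parent, path=path)
    return None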
A unique name within the scope of the parent node.
parent¶
The parent collector node.
The pytest config object.
The pytest session this node is part of.
path_: pathlib.Path_¶
Filesystem path where this node was collected from (can be None).
ExceptionInfo¶
final class ExceptionInfo[source]¶
Wraps sys.exc_info() objects and offers help for navigating the traceback.
classmethod from_exception(exception, exprinfo=None)[source]¶
Return an ExceptionInfo for an existing exception.
The exception must have a non-None __traceback__ attribute, otherwise this function fails with an assertion error. This means that the exception must have been raised, or added a traceback with the with_traceback() method.
Parameters:
exprinfo (str | None) – A text string helping to determine if we should strip AssertionError from the output. Defaults to the exception message/__str__().
Added in version 7.4.
classmethod from_exc_info(exc_info, exprinfo=None)[source]¶
Like from_exception(), but using old-style exc_info tuple.
classmethod from_current(exprinfo=None)[source]¶
Return an ExceptionInfo matching the current traceback.
Parameters:
exprinfo (str | None) – A text string helping to determine if we should strip AssertionError from the output. Defaults to the exception message/__str__().
classmethod for_later()[source]¶
Return an unfilled ExceptionInfo.
fill_unfilled(exc_info)[source]¶
Fill an unfilled ExceptionInfo created with for_later().
The exception class.
property value_: E_¶
The exception value.
property tb_: TracebackType_¶
The exception raw traceback.
The type name of the exception.
property traceback_: Traceback_¶
The traceback.
exconly(tryshort=False)[source]¶
Return the exception as a string.
When ‘tryshort’ resolves to True, and the exception is an AssertionError, only the actual exception part of the exception representation is returned (so ‘AssertionError: ‘ is removed from the beginning).
Return True if the exception is an instance of exc. Consider using isinstance(excinfo.value, exc) instead.
getrepr(showlocals=False, style='long', abspath=False, tbfilter=True, funcargs=False, truncate_locals=True, truncate_args=True, chain=True)[source]¶
Return str()able representation of this exception info.
Parameters:
- showlocals (bool) – Show locals per traceback entry. Ignored if style=="native".
- style (str) – long|short|line|no|native|value traceback style.
- abspath (bool) – If paths should be changed to absolute or left unchanged.
- tbfilter (bool | Callable[[ExceptionInfo[BaseException]], Traceback]) – A filter for traceback entries.
  - If false, don’t hide any entries.
  - If true, hide internal entries and entries that contain a local variable __tracebackhide__ = True.
  - If a callable, delegates the filtering to the callable.
  Ignored if style is "native".
- funcargs (bool) – Show fixtures (“funcargs” for legacy purposes) per traceback entry.
- truncate_locals (bool) – With showlocals==True, make sure locals can be safely represented as strings.
- truncate_args (bool) – With showargs==True, make sure args can be safely represented as strings.
- chain (bool) – If chained exceptions in Python 3 should be shown.
Changed in version 3.9: Added the chain parameter.
match(regexp)[source]¶
Check whether the regular expression regexp matches the string representation of the exception using re.search(). If it matches, True is returned, otherwise an AssertionError is raised.
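For example, a sketch of the typical way an ExceptionInfo is obtained and inspected, via pytest.raises:
import pytest

def test_zero_division():
    with pytest.raises(ZeroDivisionError) as excinfo:
        1 / 0
    # excinfo is an ExceptionInfo, filled once the with-block exits.
    assert excinfo.type is ZeroDivisionError
    assert "division by zero" in str(excinfo.value)
    # match() applies re.search() to the string representation of the exception.
    excinfo.match(r"division by zero")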
group_contains(expected_exception, *, match=None, depth=None)[source]¶
Check whether a captured exception group contains a matching exception.
Parameters:
- expected_exception (Type[BaseException] | Tuple[Type[BaseException]]) – The expected exception type, or a tuple if one of multiple possible exception types are expected.
- match (str | Pattern[str] | None) – If specified, a string containing a regular expression, or a regular expression object, that is tested against the string representation of the exception and its PEP-678 __notes__ using re.search(). To match a literal string that may contain special characters, the pattern can first be escaped with re.escape().
- depth (Optional[int]) – If None, will search for a matching exception at any nesting depth. If >= 1, will only match an exception if it’s at the specified depth (depth = 1 being the exceptions contained within the topmost exception group).
Added in version 8.0.
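A sketch of group_contains(); it assumes Python 3.11+ for the built-in ExceptionGroup and pytest 8.0+:
import pytest

def test_group():
    with pytest.raises(ExceptionGroup) as excinfo:
        raise ExceptionGroup(
            "top-level group",
            [ValueError("bad value"), TypeError("bad type")],
        )
    # Matches the ValueError nested inside the captured group.
    assert excinfo.group_contains(ValueError, match="bad value")
    assert not excinfo.group_contains(KeyError)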
ExitCode¶
final class ExitCode(*values)[source]¶
Encodes the valid exit codes by pytest.
Currently users and plugins may supply other exit codes as well.
Added in version 5.0.
OK = 0¶
Tests passed.
TESTS_FAILED = 1¶
Tests failed.
INTERRUPTED = 2¶
pytest was interrupted.
INTERNAL_ERROR = 3¶
An internal error got in the way.
USAGE_ERROR = 4¶
pytest was misused.
NO_TESTS_COLLECTED = 5¶
pytest couldn’t find tests.
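For example, a sketch comparing the return value of pytest.main() against ExitCode members (the "tests" path is illustrative):
import pytest

exit_code = pytest.main(["-q", "tests"])
if exit_code == pytest.ExitCode.NO_TESTS_COLLECTED:
    print("no tests were found")
elif exit_code == pytest.ExitCode.OK:
    print("all tests passed")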
FixtureDef¶
final class FixtureDef[source]¶
Bases: Generic[FixtureValue]
A container for a fixture definition.
Note: At this time, only explicitly documented fields and methods are considered public stable API.
property scope_: Literal['session', 'package', 'module', 'class', 'function']_¶
Scope string, one of “function”, “class”, “module”, “package”, “session”.
Return the value of this fixture, executing it if not cached.
MarkDecorator¶
A decorator for applying a mark on test functions and classes.
MarkDecorators are created with pytest.mark:
mark1 = pytest.mark.NAME  # Simple MarkDecorator
mark2 = pytest.mark.NAME(name1=value)  # Parametrized MarkDecorator
and can then be applied as decorators to test functions:
@mark2
def test_function():
    pass
When a MarkDecorator is called, it does the following:
- If called with a single class as its only positional argument and no additional keyword arguments, it attaches the mark to the class so it gets applied automatically to all test cases found in that class.
- If called with a single function as its only positional argument and no additional keyword arguments, it attaches the mark to the function, containing all the arguments already stored internally in the MarkDecorator.
- When called in any other case, it returns a new MarkDecorator instance with the original MarkDecorator’s content updated with the arguments passed to this call.
Note: The rules above prevent a MarkDecorator from storing only a single function or class reference as its positional argument with no additional keyword or positional arguments. You can work around this by using with_args().
Alias for mark.name.
property args_: tuple[Any, ...]_¶
Alias for mark.args.
property kwargs_: Mapping[str, Any]_¶
Alias for mark.kwargs.
with_args(*args, **kwargs)[source]¶
Return a MarkDecorator with extra arguments added.
Unlike calling the MarkDecorator, with_args() can be used even if the sole argument is a callable/class.
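A sketch of the difference; the mark name handler and the callable passed to it are illustrative:
import pytest

def default_handler():
    pass

# pytest.mark.handler(default_handler) would attach the mark to default_handler
# itself; with_args() stores the callable as the mark's positional argument instead.
handler_mark = pytest.mark.handler.with_args(default_handler)

@handler_mark
def test_uses_handler():
    pass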
MarkGenerator¶
final class MarkGenerator[source]¶
Factory for MarkDecorator objects - exposed as a pytest.mark singleton instance.
Example:
import pytest
@pytest.mark.slowtest
def test_function():
    pass
applies a ‘slowtest’ Mark on test_function.
Mark¶
A pytest mark.
Name of the mark.
Positional arguments of the mark decorator.
Keyword arguments of the mark decorator.
Return a new Mark which is a combination of this Mark and another Mark.
Combines by appending args and merging kwargs.
Parameters:
other (Mark) – The mark to combine with.
Return type:
Mark
Metafunc¶
Objects passed to the pytest_generate_tests hook.
They help to inspect a test function and to generate tests according to test configuration or values specified in the class or module where a test function is defined.
definition¶
Access to the underlying _pytest.python.FunctionDefinition.
config¶
Access to the pytest.Config object for the test session.
module¶
The module object where the test function is defined in.
function¶
Underlying Python test function.
fixturenames¶
Set of fixture names required by the test function.
cls¶
Class object where the test function is defined in or None.
parametrize(argnames, argvalues, indirect=False, ids=None, scope=None, *, _param_mark=None)[source]¶
Add new invocations to the underlying test function using the list of argvalues for the given argnames. Parametrization is performed during the collection phase. If you need to setup expensive resources see about setting indirect to do it rather than at test setup time.
Can be called multiple times per test function (but only on different argument names), in which case each call parametrizes all previous parametrizations, e.g.
unparametrized:         t
parametrize ["x", "y"]: t[x], t[y]
parametrize [1, 2]:     t[x-1], t[x-2], t[y-1], t[y-2]
Parameters:
- argnames (str | Sequence_[_str]) – A comma-separated string denoting one or more argument names, or a list/tuple of argument strings.
- argvalues (Iterable _[_ __pytest.mark.structures.ParameterSet_ _|_ _Sequence_ _[_object] | object]) –
The list of argvalues determines how often a test is invoked with different argument values.
If only one argname was specified argvalues is a list of values. If N argnames were specified, argvalues must be a list of N-tuples, where each tuple-element specifies a value for its respective argname. - indirect (bool | Sequence_[_str]) – A list of arguments’ names (subset of argnames) or a boolean. If True the list contains all names from the argnames. Each argvalue corresponding to an argname in this list will be passed as request.param to its respective argname fixture function so that it can perform more expensive setups during the setup phase of a test rather than at collection time.
- ids (Iterable[object | None] | Callable[[Any], object | None] | None) – Sequence of (or generator for) ids for argvalues, or a callable to return part of the id for each argvalue.
  With sequences (and generators like itertools.count()) the returned ids should be of type string, int, float, bool, or None. They are mapped to the corresponding index in argvalues. None means to use the auto-generated id.
  If it is a callable it will be called for each entry in argvalues, and the return value is used as part of the auto-generated id for the whole set (where parts are joined with dashes (“-”)). This is useful to provide more specific ids for certain items, e.g. dates. Returning None will use an auto-generated id.
  If no ids are provided they will be generated automatically from the argvalues.
- scope (Literal['session', 'package', 'module', 'class', 'function'] | None) – If specified it denotes the scope of the parameters. The scope is used for grouping tests by parameter instances. It will also override any fixture-function defined scope, allowing to set a dynamic scope using test context or configuration.
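For example, a conftest.py sketch generating tests from the pytest_generate_tests hook; the param fixture name and its values are illustrative:
# content of conftest.py -- illustrative only
def pytest_generate_tests(metafunc):
    if "param" in metafunc.fixturenames:
        # One test invocation per value; ids make the generated test names readable.
        metafunc.parametrize("param", [1, 2, 3], ids=["one", "two", "three"])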
Parser¶
Parser for command line arguments and ini-file values.
Variables:
extra_info – Dict of generic param -> value to display in case there’s an error processing the command line arguments.
getgroup(name, description='', after=None)[source]¶
Get (or create) a named option Group.
Parameters:
- name (str) – Name of the option group.
- description (str) – Long description for --help output.
- after (str | None) – Name of another group, used for ordering --help output.
Returns:
The option group.
Return type:
OptionGroup
The returned group object has an addoption method with the same signature as parser.addoption but will be shown in the respective group in the output of pytest --help.
addoption(*opts, **attrs)[source]¶
Register a command line option.
Parameters:
- opts (str) – Option names, can be short or long options.
- attrs (Any) – Same attributes as the argparse library’s add_argument() function accepts.
After command line parsing, options are available on the pytest config object via config.option.NAME where NAME is usually set by passing a dest attribute, for example addoption("--long", dest="NAME", ...).
parse_known_args(args, namespace=None)[source]¶
Parse the known arguments at this point.
Returns:
An argparse namespace object.
Return type:
argparse.Namespace
parse_known_and_unknown_args(args, namespace=None)[source]¶
Parse the known arguments at this point, and also return the remaining unknown arguments.
Returns:
A tuple containing an argparse namespace object for the known arguments, and a list of the unknown arguments.
Return type:
addini(name, help, type=None, default=<notset>)[source]¶
Register an ini-file option.
Parameters:
- name (str) – Name of the ini-variable.
- type (Literal['string', 'paths', 'pathlist', 'args', 'linelist', 'bool'] | None) – Type of the variable. Can be:
  - string: a string
  - bool: a boolean
  - args: a list of strings, separated as in a shell
  - linelist: a list of strings, separated by line breaks
  - paths: a list of pathlib.Path, separated as in a shell
  - pathlist: a list of py.path, separated as in a shell
  For paths and pathlist types, they are considered relative to the ini-file. In case the execution is happening without an ini-file defined, they will be considered relative to the current working directory (for example with --override-ini).
  Added in version 7.0: The paths variable type.
  Added in version 8.1: Use the current working directory to resolve paths and pathlist in the absence of an ini-file.
  Defaults to string if None or not passed.
- default (Any) – Default value if no ini-file option exists but is queried.
The value of ini-variables can be retrieved via a call to config.getini(name).
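For example, a conftest.py sketch; the api_timeout ini name is illustrative:
# content of conftest.py -- illustrative only
def pytest_addoption(parser):
    parser.addini("api_timeout", help="timeout for API calls in seconds",
                  type="string", default="10")

def pytest_configure(config):
    # getini() falls back to the registered default ("10") when the option is unset.
    timeout = float(config.getini("api_timeout"))
    print("api_timeout:", timeout)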
OptionGroup¶
A group of options shown in its own section.
addoption(*opts, **attrs)[source]¶
Add an option to this group.
If a shortened version of a long option is specified, it will be suppressed in the help. addoption('--twowords', '--two-words') results in help showing --two-words only, but --twowords gets accepted and the automatic destination is in args.twowords.
Parameters:
- opts (str) – Option names, can be short or long options.
- attrs (Any) – Same attributes as the argparse library’s add_argument() function accepts.
PytestPluginManager¶
final class PytestPluginManager[source]¶
Bases: PluginManager
A pluggy.PluginManager with additional pytest-specific functionality:
- Loading plugins from the command line, PYTEST_PLUGINS env variable and pytest_plugins global variables found in plugins being loaded.
- conftest.py loading during start-up.
register(plugin, name=None)[source]¶
Register a plugin and return its name.
Parameters:
name (str | None) – The name under which to register the plugin. If not specified, a name is generated using get_canonical_name().
Returns:
The plugin name. If the name is blocked from registering, returns None.
Return type:
str | None
If the plugin is already registered, raises a ValueError.
Return whether a plugin with the given name is registered.
import_plugin(modname, consider_entry_points=False)[source]¶
Import a plugin with modname.
If consider_entry_points is True, entry point names are also considered to find a plugin.
add_hookcall_monitoring(before, after)¶
Add before/after tracing functions for all hooks.
Returns an undo function which, when called, removes the added tracers.
before(hook_name, hook_impls, kwargs)
will be called ahead of all hook calls and receive a hookcaller instance, a list of HookImpl instances and the keyword arguments for the hook call.
after(outcome, hook_name, hook_impls, kwargs)
receives the same arguments as before but also a Result object which represents the result of the overall hook call.
add_hookspecs(module_or_class)¶
Add new hook specifications defined in the given module_or_class.
Functions are recognized as hook specifications if they have been decorated with a matching HookspecMarker.
check_pending()¶
Verify that all hooks which have not been verified against a hook specification are optional, otherwise raise PluginValidationError.
enable_tracing()¶
Enable tracing of hook calls.
Returns an undo function which, when called, removes the added tracing.
get_canonical_name(plugin)¶
Return a canonical name for a plugin object.
Note that a plugin may be registered under a different name specified by the caller of register(plugin, name). To obtain the name of a registered plugin use get_name(plugin) instead.
get_hookcallers(plugin)¶
Get all hook callers for the specified plugin.
Returns:
The hook callers, or None if plugin is not registered in this plugin manager.
Return type:
list[HookCaller] | None
get_name(plugin)¶
Return the name the plugin is registered under, or None if it isn’t.
get_plugin(name)¶
Return the plugin registered under the given name, if any.
get_plugins()¶
Return a set of all registered plugin objects.
has_plugin(name)¶
Return whether a plugin with the given name is registered.
is_blocked(name)¶
Return whether the given plugin name is blocked.
is_registered(plugin)¶
Return whether the plugin is already registered.
list_name_plugin()¶
Return a list of (name, plugin) pairs for all registered plugins.
list_plugin_distinfo()¶
Return a list of (plugin, distinfo) pairs for all setuptools-registered plugins.
load_setuptools_entrypoints(group, name=None)¶
Load modules from querying the specified setuptools group.
Parameters:
- group (str) – Entry point group to load plugins.
- name (str | None) – If given, loads only plugins with the given name.
Returns:
The number of plugins loaded by this call.
Return type:
int
set_blocked(name)¶
Block registrations of the given name, unregister if already registered.
subset_hook_caller(name, remove_plugins)¶
Return a proxy HookCaller instance for the named method which manages calls to all registered plugins except the ones from remove_plugins.
unblock(name)¶
Unblocks a name.
Returns whether the name was actually blocked.
unregister(plugin=None, name=None)¶
Unregister a plugin and all of its hook implementations.
The plugin can be specified either by the plugin object or the plugin name. If both are specified, they must agree.
Returns the unregistered plugin, or None if not found.
project_name_: Final_¶
The project name.
hook_: Final_¶
The “hook relay”, used to call a hook on all registered plugins. See Calling hooks.
trace_: Final[_tracing.TagTracerSub]_¶
The tracing entry point. See Built-in tracing.
TestReport¶
final class TestReport[source]¶
Bases: BaseReport
Basic test report object (also used for setup and teardown calls if they fail).
Reports can contain arbitrary extra attributes.
Normalized collection nodeid.
location_: tuple[str, int | None, str]_¶
A (filesystempath, lineno, domaininfo) tuple indicating the actual location of a test item - it might be different from the collected one e.g. if a method is inherited from a different module. The filesystempath may be relative to config.rootdir. The line number is 0-based.
keywords_: Mapping[str, Any]_¶
A name -> value dictionary containing all keywords and markers associated with a test invocation.
outcome_: Literal['passed', 'failed', 'skipped']_¶
Test outcome, always one of “passed”, “failed”, “skipped”.
longrepr_: None | ExceptionInfo[BaseException] | tuple[str, int, str] | str | TerminalRepr_¶
None or a failure representation.
One of ‘setup’, ‘call’, ‘teardown’ to indicate runtest phase.
user_properties¶
User properties is a list of tuples (name, value) that holds user defined properties of the test.
sections_: list[tuple[str, str]]_¶
Tuples of str (heading, content) with extra information for the test report. Used by pytest to add text captured from stdout, stderr, and intercepted logging events. May be used by other plugins to add arbitrary information to reports.
Time it took to run just the test.
The system time when the call started, in seconds since the epoch.
The system time when the call ended, in seconds since the epoch.
classmethod from_item_and_call(item, call)[source]¶
Create and fill a TestReport with standard item and call info.
Parameters:
Return captured log lines, if log capturing is enabled.
Added in version 3.5.
Return captured text from stderr, if capturing is enabled.
Added in version 3.0.
Return captured text from stdout, if capturing is enabled.
Added in version 3.0.
property count_towards_summary_: bool_¶
Experimental Whether this report should be counted towards the totals shown at the end of the test session: “1 passed, 1 failure, etc”.
Note
This function is considered experimental, so beware that it is subject to changes even in patch releases.
Whether the outcome is failed.
The path portion of the reported node, as a string.
property head_line_: str | None_¶
Experimental The head line shown with longrepr output for this report, more commonly during traceback representation during failures:
________ Test.foo ________
In the example above, the head_line is “Test.foo”.
Note
This function is considered experimental, so beware that it is subject to changes even in patch releases.
Read-only property that returns the full string representation of longrepr.
Added in version 3.0.
Whether the outcome is passed.
Whether the outcome is skipped.
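For example, a conftest.py sketch inspecting TestReport objects as they are produced:
# content of conftest.py -- illustrative only
def pytest_runtest_logreport(report):
    # Each test produces one report per phase: "setup", "call", "teardown".
    if report.when == "call" and report.failed:
        path, lineno, domain = report.location
        print(f"FAILED {domain} ({path}:{(lineno or 0) + 1})")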
TestShortLogReport¶
class TestShortLogReport[source]¶
Used to store the test status result category, shortletter and verbose word. For example "rerun", "R", ("RERUN", {"yellow": True}).
Variables:
- category – The class of result, for example “passed”, “skipped”, “error”, or the empty string.
- letter – The short letter shown as testing progresses, for example ".", "s", "E", or the empty string.
- word – Verbose word is shown as testing progresses in verbose mode, for example "PASSED", "SKIPPED", "ERROR", or the empty string.
Alias for field number 0
Alias for field number 1
word_: str | tuple[str, Mapping[str, bool]]_¶
Alias for field number 2
Result¶
Result object used within hook wrappers, see Result in the pluggy documentation for more information.
Stash¶
Stash is a type-safe heterogeneous mutable mapping that allows keys and value types to be defined separately from where it (the Stash) is created.
Usually you will be given an object which has a Stash, for example Config or a Node:
stash: Stash = some_object.stash
If a module or plugin wants to store data in this Stash, it creates StashKeys for its keys (at the module level):
At the top-level of the module
some_str_key = StashKey[str]()
some_bool_key = StashKey[bool]()
To store information:
Value type must match the key.
stash[some_str_key] = "value"
stash[some_bool_key] = True
To retrieve the information:
The static type of some_str is str.
some_str = stash[some_str_key]
The static type of some_bool is bool.
some_bool = stash[some_bool_key]
Added in version 7.0.
__setitem__(key, value)[source]¶
Set a value for key.
Get the value for key.
Raises KeyError if the key wasn’t set before.
Get the value for key, or return default if the key wasn’t set before.
setdefault(key, default)[source]¶
Return the value of key if already set, otherwise set the value of key to default and return default.
Delete the value for key.
Raises KeyError if the key wasn’t set before.
Return whether key was set.
Return how many items exist in the stash.
StashKey¶
class StashKey[source]¶
Bases: Generic[T]
StashKey is an object used as a key to a Stash.
A StashKey is associated with the type T of the value of the key.
A StashKey is unique and cannot conflict with another key.
Added in version 7.0.
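A sketch of a plugin using a module-level StashKey to keep private state on config.stash; the key name and the timing logic are illustrative:
# content of a plugin module -- illustrative only
import time
import pytest

session_start_key = pytest.StashKey[float]()

def pytest_configure(config):
    config.stash[session_start_key] = time.time()

def pytest_unconfigure(config):
    elapsed = time.time() - config.stash[session_start_key]
    print(f"session lasted {elapsed:.2f}s")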
Global Variables¶
pytest treats some global variables in a special manner when defined in a test module or conftest.py files.
collect_ignore¶
Tutorial: Customizing test collection
Can be declared in conftest.py files to exclude test directories or modules. Needs to be a list of paths (str, pathlib.Path or any os.PathLike).
collect_ignore = ["setup.py"]
collect_ignore_glob¶
Tutorial: Customizing test collection
Can be declared in conftest.py files to exclude test directories or modules with Unix shell-style wildcards. Needs to be list[str] where str can contain glob patterns.
collect_ignore_glob = ["*_ignore.py"]
pytest_plugins¶
Tutorial: Requiring/Loading plugins in a test module or conftest file
Can be declared at the global level in test modules and conftest.py files to register additional plugins. Can be either a str or Sequence[str].
pytest_plugins = "myapp.testsupport.myplugin"
pytest_plugins = ("myapp.testsupport.tools", "myapp.testsupport.regression")
pytestmark¶
Tutorial: Marking whole classes or modules
Can be declared at the global level in test modules to apply one or more marks to all test functions and methods. Can be either a single mark or a list of marks (applied in left-to-right order).
import pytest
pytestmark = pytest.mark.webtest
import pytest
pytestmark = [pytest.mark.integration, pytest.mark.slow]
Environment Variables¶
Environment variables that can be used to change pytest’s behavior.
CI¶
When set (regardless of value), pytest acknowledges that it is running in a CI process. Alternative to the BUILD_NUMBER variable. See also CI Pipelines.
BUILD_NUMBER¶
When set (regardless of value), pytest acknowledges that it is running in a CI process. Alternative to the CI variable. See also CI Pipelines.
PYTEST_ADDOPTS¶
This contains a command-line (parsed by the shlex module) that will be prepended to the command line given by the user, see Builtin configuration file options for more information.
PYTEST_VERSION¶
This environment variable is defined at the start of the pytest session and is undefined afterwards. It contains the value of pytest.__version__, and among other things can be used to easily check if code is running from within a pytest run.
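For example, a sketch of detecting whether code is currently running under pytest:
import os

if os.environ.get("PYTEST_VERSION") is not None:
    print("running under pytest", os.environ["PYTEST_VERSION"])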
PYTEST_CURRENT_TEST¶
This is not meant to be set by users, but is set by pytest internally with the name of the current test so other processes can inspect it, see PYTEST_CURRENT_TEST environment variable for more information.
PYTEST_DEBUG¶
When set, pytest will print tracing and debug information.
PYTEST_DEBUG_TEMPROOT¶
Root for temporary directories produced by fixtures like tmp_path as discussed in Temporary directory location and retention.
PYTEST_DISABLE_PLUGIN_AUTOLOAD¶
When set, disables plugin auto-loading through entry point packaging metadata. Only explicitly specified plugins will be loaded.
PYTEST_PLUGINS¶
Contains comma-separated list of modules that should be loaded as plugins:
export PYTEST_PLUGINS=mymodule.plugin,xdist
PYTEST_THEME¶
Sets a pygments style to use for the code output.
PYTEST_THEME_MODE¶
Sets the PYTEST_THEME to be either dark or light.
PY_COLORS¶
When set to 1, pytest will use color in terminal output. When set to 0, pytest will not use color. PY_COLORS takes precedence over NO_COLOR and FORCE_COLOR.
NO_COLOR¶
When set to a non-empty string (regardless of value), pytest will not use color in terminal output. PY_COLORS takes precedence over NO_COLOR, which takes precedence over FORCE_COLOR. See no-color.org for other libraries supporting this community standard.
FORCE_COLOR¶
When set to a non-empty string (regardless of value), pytest will use color in terminal output. PY_COLORS and NO_COLOR take precedence over FORCE_COLOR.
Exceptions¶
final exception UsageError[source]¶
Bases: Exception
Error in pytest usage or invocation.
final exception FixtureLookupError[source]¶
Bases: LookupError
Could not return a requested fixture (missing or invalid).
Warnings¶
Custom warnings generated in some situations such as improper usage or deprecated features.
class PytestWarning¶
Bases: UserWarning
Base class for all warnings emitted by pytest.
class PytestAssertRewriteWarning¶
Bases: PytestWarning
Warning emitted by the pytest assert rewrite module.
class PytestCacheWarning¶
Bases: PytestWarning
Warning emitted by the cache plugin in various situations.
class PytestCollectionWarning¶
Bases: PytestWarning
Warning emitted when pytest is not able to collect a file or symbol in a module.
class PytestConfigWarning¶
Bases: PytestWarning
Warning emitted for configuration issues.
class PytestDeprecationWarning¶
Bases: PytestWarning, DeprecationWarning
Warning class for features that will be removed in a future version.
class PytestExperimentalApiWarning¶
Bases: PytestWarning, FutureWarning
Warning category used to denote experiments in pytest.
Use sparingly as the API might change or even be removed completely in a future version.
class PytestReturnNotNoneWarning¶
Bases: PytestWarning
Warning emitted when a test function is returning value other than None.
class PytestRemovedIn9Warning¶
Bases: PytestDeprecationWarning
Warning class for features that will be removed in pytest 9.
class PytestUnhandledCoroutineWarning¶
Bases: PytestReturnNotNoneWarning
Warning emitted for an unhandled coroutine.
A coroutine was encountered when collecting test functions, but was not handled by any async-aware plugin. Coroutine test functions are not natively supported.
class PytestUnknownMarkWarning¶
Bases: PytestWarning
Warning emitted on use of unknown markers.
See How to mark test functions with attributes for details.
class PytestUnraisableExceptionWarning¶
Bases: PytestWarning
An unraisable exception was reported.
Unraisable exceptions are exceptions raised in __del__implementations and similar situations when the exception cannot be raised as normal.
class PytestUnhandledThreadExceptionWarning¶
Bases: PytestWarning
An unhandled exception occurred in a Thread.
Such exceptions don’t propagate normally.
Consult the Internal pytest warnings section in the documentation for more information.
Configuration Options¶
Here is a list of builtin configuration options that may be written in a pytest.ini (or .pytest.ini), pyproject.toml, tox.ini, or setup.cfg file, usually located at the root of your repository.
To see each file format in details, see Configuration file formats.
Warning
Usage of setup.cfg is not recommended except for very simple use cases. .cfg files use a different parser than pytest.ini and tox.ini which might cause hard to track down problems. When possible, it is recommended to use the latter files, or pyproject.toml, to hold your pytest configuration.
Configuration options may be overwritten in the command-line by using -o/--override-ini, which can also be passed multiple times. The expected format is name=value. For example:
pytest -o console_output_style=classic -o cache_dir=/tmp/mycache
addopts¶
Add the specified OPTS to the set of command line arguments as if they had been specified by the user. Example: if you have this ini file content:
content of pytest.ini
[pytest]
addopts = --maxfail=2 -rf  # exit after 2 failures, report fail info
issuing pytest test_hello.py actually means:
pytest --maxfail=2 -rf test_hello.py
Default is to add no options.
cache_dir¶
Sets the directory where the cache plugin’s content is stored. Default directory is .pytest_cache which is created in rootdir. Directory may be relative or absolute path. If setting relative path, then directory is created relative to rootdir. Additionally, a path may contain environment variables, that will be expanded. For more information about cache plugin please refer to How to re-run failed tests and maintain state between test runs.
consider_namespace_packages¶
Controls if pytest should attempt to identify namespace packages when collecting Python modules. Default is False.
Set to True if the package you are testing is part of a namespace package.
Only native namespace packages are supported, with no plans to support legacy namespace packages.
Added in version 8.1.
console_output_style¶
Sets the console output style while running tests:
- classic: classic pytest output.
- progress: like classic pytest output, but with a progress indicator.
- progress-even-when-capture-no: allows the use of the progress indicator even when capture=no.
- count: like progress, but shows progress as the number of tests completed instead of a percent.
The default is progress, but you can fallback to classic if you prefer or the new mode is causing unexpected problems:
content of pytest.ini
[pytest]
console_output_style = classic
doctest_encoding¶
Default encoding to use to decode text files with docstrings. See how pytest handles doctests.
doctest_optionflags¶
One or more doctest flag names from the standard doctest module. See how pytest handles doctests.
empty_parameter_set_mark¶
Allows to pick the action for empty parametersets in parameterization:
- skip: skips tests with an empty parameterset (default)
- xfail: marks tests with an empty parameterset as xfail(run=False)
- fail_at_collect: raises an exception if parametrize collects an empty parameter set
content of pytest.ini
[pytest]
empty_parameter_set_mark = xfail
Note
The default value of this option is planned to change to xfail in future releases as this is considered less error prone, see #3155 for more details.
faulthandler_timeout¶
Dumps the tracebacks of all threads if a test takes longer than X seconds to run (including fixture setup and teardown). Implemented using the faulthandler.dump_traceback_later() function, so all caveats there apply.
content of pytest.ini
[pytest]
faulthandler_timeout = 5
For more information please refer to Fault Handler.
filterwarnings¶
Sets a list of filters and actions that should be taken for matched warnings. By default all warnings emitted during the test session will be displayed in a summary at the end of the test session.
content of pytest.ini
[pytest]
filterwarnings =
    error
    ignore::DeprecationWarning
This tells pytest to ignore deprecation warnings and turn all other warnings into errors. For more information please refer to How to capture warnings.
junit_duration_report¶
Added in version 4.1.
Configures how durations are recorded into the JUnit XML report:
- total (the default): duration times reported include setup, call, and teardown times.
- call: duration times reported include only call times, excluding setup and teardown.
[pytest] junit_duration_report = call
junit_family¶
Added in version 4.2.
Changed in version 6.1: Default changed to xunit2
.
Configures the format of the generated JUnit XML file. The possible options are:
xunit1
(orlegacy
): produces old style output, compatible with the xunit 1.0 format.xunit2
: produces xunit 2.0 style output, which should be more compatible with latest Jenkins versions. This is the default.
[pytest] junit_family = xunit2
junit_logging¶
Added in version 3.5.
Changed in version 5.4: log, all, out-err options added.
Configures if captured output should be written to the JUnit XML file. Valid values are:
- log: write only logging captured output.
- system-out: write captured stdout contents.
- system-err: write captured stderr contents.
- out-err: write both captured stdout and stderr contents.
- all: write captured logging, stdout and stderr contents.
- no (the default): no captured output is written.
[pytest] junit_logging = system-out
junit_log_passing_tests¶
Added in version 4.6.
If junit_logging != "no", configures if the captured output should be written to the JUnit XML file for passing tests. Default is True.
[pytest] junit_log_passing_tests = False
junit_suite_name¶
To set the name of the root test suite xml item, you can configure the junit_suite_name option in your config file:
[pytest] junit_suite_name = my_suite
log_auto_indent¶
Allow selective auto-indentation of multiline log messages.
Supports command line option --log-auto-indent [value] and config option log_auto_indent = [value] to set the auto-indentation behavior for all logging.
[value] can be:
- True or “On” - Dynamically auto-indent multiline log messages
- False or “Off” or 0 - Do not auto-indent multiline log messages (the default behavior)
- [positive integer] - auto-indent multiline log messages by [value] spaces
[pytest] log_auto_indent = False
Supports passing kwarg extra={"auto_indent": [value]} to calls to logging.log() to specify auto-indentation behavior for a specific entry in the log. The extra kwarg overrides the value specified on the command line or in the config.
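For example, a sketch overriding the configured behavior for a single log record:
import logging

logger = logging.getLogger(__name__)
logger.info(
    "line one\nline two is indented by 4 spaces",
    extra={"auto_indent": 4},
)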
log_cli¶
Enable log display during test run (also known as “live logging”). The default is False.
log_cli_date_format¶
Sets a time.strftime()-compatible string that will be used when formatting dates for live logging.
[pytest] log_cli_date_format = %Y-%m-%d %H:%M:%S
For more information, see Live Logs.
log_cli_format¶
Sets a logging-compatible string used to format live logging messages.
[pytest] log_cli_format = %(asctime)s %(levelname)s %(message)s
For more information, see Live Logs.
log_cli_level¶
Sets the minimum log message level that should be captured for live logging. The integer value or the names of the levels can be used.
[pytest] log_cli_level = INFO
For more information, see Live Logs.
log_date_format¶
Sets a time.strftime()-compatible string that will be used when formatting dates for logging capture.
[pytest] log_date_format = %Y-%m-%d %H:%M:%S
For more information, see How to manage logging.
log_file¶
Sets a file name relative to the current working directory where log messages should be written to, in addition to the other logging facilities that are active.
[pytest] log_file = logs/pytest-logs.txt
For more information, see How to manage logging.
log_file_date_format¶
Sets a time.strftime()-compatible string that will be used when formatting dates for the logging file.
[pytest] log_file_date_format = %Y-%m-%d %H:%M:%S
For more information, see How to manage logging.
log_file_format¶
Sets a logging-compatible string used to format logging messages redirected to the logging file.
[pytest] log_file_format = %(asctime)s %(levelname)s %(message)s
For more information, see How to manage logging.
log_file_level¶
Sets the minimum log message level that should be captured for the logging file. The integer value or the names of the levels can be used.
[pytest] log_file_level = INFO
For more information, see How to manage logging.
log_format¶
Sets a logging-compatible string used to format captured logging messages.
[pytest] log_format = %(asctime)s %(levelname)s %(message)s
For more information, see How to manage logging.
log_level¶
Sets the minimum log message level that should be captured for logging capture. The integer value or the names of the levels can be used.
[pytest] log_level = INFO
For more information, see How to manage logging.
markers¶
When the --strict-markers or --strict command-line arguments are used, only known markers - defined in code by core pytest or some plugin - are allowed.
You can list additional markers in this setting to add them to the whitelist, in which case you probably want to add --strict-markers to addopts to avoid future regressions:
[pytest]
addopts = --strict-markers
markers =
    slow
    serial
Note
The use of --strict-markers is highly preferred. --strict was kept for backward compatibility only and may be confusing for others as it only applies to markers and not to other options.
minversion¶
Specifies a minimal pytest version required for running tests.
content of pytest.ini
[pytest]
minversion = 3.0  # will fail if we run with pytest-2.8
norecursedirs¶
Set the directory basename patterns to avoid when recursing for test discovery. The individual (fnmatch-style) patterns are applied to the basename of a directory to decide if to recurse into it. Pattern matching characters:
*       matches everything
?       matches any single character
[seq]   matches any character in seq
[!seq]  matches any char not in seq
Default patterns are '*.egg', '.*', '_darcs', 'build', 'CVS', 'dist', 'node_modules', 'venv', '{arch}'. Setting a norecursedirs replaces the default. Here is an example of how to avoid certain directories:
[pytest]
norecursedirs = .svn _build tmp*
This would tell pytest to not look into typical subversion or sphinx-build directories or into any tmp prefixed directory.
Additionally, pytest will attempt to intelligently identify and ignore a virtualenv. Any directory deemed to be the root of a virtual environment will not be considered during test collection unless --collect-in-virtualenv is given. Note also that norecursedirs takes precedence over --collect-in-virtualenv; e.g. if you intend to run tests in a virtualenv with a base directory that matches '.*' you must override norecursedirs in addition to using the --collect-in-virtualenv flag.
python_classes¶
One or more name prefixes or glob-style patterns determining which classes are considered for test collection. Search for multiple glob patterns by adding a space between patterns. By default, pytest will consider any class prefixed with Test as a test collection. Here is an example of how to collect tests from classes that end in Suite:
[pytest] python_classes = *Suite
Note that unittest.TestCase derived classes are always collected regardless of this option, as unittest’s own collection framework is used to collect those tests.
python_files¶
One or more Glob-style file patterns determining which python files are considered as test modules. Search for multiple glob patterns by adding a space between patterns:
[pytest]
python_files = test_*.py check_*.py example_*.py
Or one per line:
[pytest]
python_files =
    test_*.py
    check_*.py
    example_*.py
By default, files matching test_*.py and *_test.py will be considered test modules.
python_functions¶
One or more name prefixes or glob-patterns determining which test functions and methods are considered tests. Search for multiple glob patterns by adding a space between patterns. By default, pytest will consider any function prefixed with test as a test. Here is an example of how to collect test functions and methods that end in _test:
[pytest]
python_functions = *_test
Note that this has no effect on methods that live on a unittest.TestCase derived class, as unittest’s own collection framework is used to collect those tests.
See Changing naming conventions for more detailed examples.
pythonpath¶
Sets list of directories that should be added to the python search path. Directories will be added to the head of sys.path. Similar to the PYTHONPATH environment variable, the directories will be included in where Python will look for imported modules. Paths are relative to the rootdir directory. Directories remain in path for the duration of the test session.
[pytest] pythonpath = src1 src2
Note
pythonpath does not affect some imports that happen very early, most notably plugins loaded using the -p command line option.
required_plugins¶
A space separated list of plugins that must be present for pytest to run. Plugins can be listed with or without version specifiers directly following their name. Whitespace between different version specifiers is not allowed. If any one of the plugins is not found, emit an error.
[pytest]
required_plugins = pytest-django>=3.0.0,<4.0.0 pytest-html pytest-xdist>=1.0.0
testpaths¶
Sets list of directories that should be searched for tests when no specific directories, files or test ids are given in the command line when executing pytest from the rootdir directory. File system paths may use shell-style wildcards, including the recursive ** pattern.
Useful when all project tests are in a known location to speed up test collection and to avoid picking up undesired tests by accident.
[pytest]
testpaths = testing doc
This configuration means that executing:
pytest
has the same practical effects as executing:
pytest testing doc
tmp_path_retention_count¶
How many sessions should we keep the tmp_path directories, according to tmp_path_retention_policy.
[pytest] tmp_path_retention_count = 3
Default: 3
tmp_path_retention_policy¶
Controls which directories created by the tmp_path fixture are kept around, based on test outcome.
- all: retains directories for all tests, regardless of the outcome.
- failed: retains directories only for tests with outcome error or failed.
- none: directories are always removed after each test ends, regardless of the outcome.
[pytest] tmp_path_retention_policy = all
Default: all
usefixtures¶
List of fixtures that will be applied to all test functions; this is semantically the same as applying the @pytest.mark.usefixtures marker to all test functions.
[pytest] usefixtures = clean_db
verbosity_assertions¶
Set a verbosity level specifically for assertion related output, overriding the application wide level.
[pytest] verbosity_assertions = 2
Defaults to application wide verbosity level (via the -v command-line option). A special value of “auto” can be used to explicitly use the global verbosity level.
verbosity_test_cases¶
Set a verbosity level specifically for test case execution related output, overriding the application wide level.
[pytest] verbosity_test_cases = 2
Defaults to application wide verbosity level (via the -v command-line option). A special value of “auto” can be used to explicitly use the global verbosity level.
xfail_strict¶
If set to True, tests marked with @pytest.mark.xfail that actually succeed will by default fail the test suite. For more information, see strict parameter.
[pytest] xfail_strict = True
Command-line Flags¶
All the command-line flags can be obtained by running pytest --help:
$ pytest --help usage: pytest [options] [file_or_dir] [file_or_dir] [...]
positional arguments: file_or_dir
general:
-k EXPRESSION Only run tests which match the given substring
expression. An expression is a Python evaluable
expression where all names are substring-matched
against test names and their parent classes.
Example: -k 'test_method or test_other' matches all
test functions and classes whose name contains
'test_method' or 'test_other', while -k 'not
test_method' matches those that don't contain
'test_method' in their names. -k 'not test_method
and not test_other' will eliminate the matches.
Additionally keywords are matched to classes and
functions containing extra names in their
'extra_keyword_matches' set, as well as functions
which have names assigned directly to them. The
matching is case-insensitive.
-m MARKEXPR Only run tests matching given mark expression. For
example: -m 'mark1 and not mark2'.
--markers show markers (builtin, plugin and per-project ones).
-x, --exitfirst Exit instantly on first error or failed test
--fixtures, --funcargs
Show available fixtures, sorted by plugin appearance
(fixtures with leading '_' are only shown with '-v')
--fixtures-per-test Show fixtures per test
--pdb Start the interactive Python debugger on errors or
KeyboardInterrupt
--pdbcls=modulename:classname
Specify a custom interactive Python debugger for use
with --pdb.For example:
--pdbcls=IPython.terminal.debugger:TerminalPdb
--trace Immediately break when running each test
--capture=method Per-test capturing method: one of fd|sys|no|tee-sys
-s Shortcut for --capture=no
--runxfail Report the results of xfail tests as if they were
not marked
--lf, --last-failed Rerun only the tests that failed at the last run (or
all if none failed)
--ff, --failed-first Run all tests, but run the last failures first. This
may re-order tests and thus lead to repeated fixture
setup/teardown.
--nf, --new-first Run tests from new files first, then the rest of the
tests sorted by file mtime
--cache-show=[CACHESHOW]
Show cache contents, don't perform collection or
tests. Optional argument: glob (default: '*').
--cache-clear Remove all cache contents at start of test run
--lfnf, --last-failed-no-failures={all,none}
                      With --lf, determines whether to execute tests
                      when there are no previously (known) failures or
                      when no cached lastfailed data was found.
                      all (the default) runs the full test suite again.
                      none just emits a message about no known
                      failures and exits successfully.
--sw, --stepwise Exit on test failure and continue from last failing
test next time
--sw-skip, --stepwise-skip
Ignore the first failing test but stop on the next
failing test. Implicitly enables --stepwise.
Reporting: --durations=N Show N slowest setup/test durations (N=0 for all) --durations-min=N Minimal duration in seconds for inclusion in slowest list. Default: 0.005. -v, --verbose Increase verbosity --no-header Disable header --no-summary Disable summary --no-fold-skipped Do not fold skipped tests in short summary. -q, --quiet Decrease verbosity --verbosity=VERBOSE Set verbosity. Default: 0. -r chars Show extra test summary info as specified by chars: (f)ailed, (E)rror, (s)kipped, (x)failed, (X)passed, (p)assed, (P)assed with output, (a)ll except passed (p/P), or (A)ll. (w)arnings are enabled by default (see --disable-warnings), 'N' can be used to reset the list. (default: 'fE'). --disable-warnings, --disable-pytest-warnings Disable warnings summary -l, --showlocals Show locals in tracebacks (disabled by default) --no-showlocals Hide locals in tracebacks (negate --showlocals passed through addopts) --tb=style Traceback print mode (auto/long/short/line/native/no) --xfail-tb Show tracebacks for xfail (as long as --tb != no) --show-capture={no,stdout,stderr,log,all} Controls how captured stdout/stderr/log is shown on failed tests. Default: all. --full-trace Don't cut any tracebacks (default is to cut) --color=color Color terminal output (yes/no/auto) --code-highlight={yes,no} Whether code should be highlighted (only if --color is also enabled). Default: yes. --pastebin=mode Send failed|all info to bpaste.net pastebin service --junitxml, --junit-xml=path Create junit-xml style report file at given path --junitprefix, --junit-prefix=str Prepend prefix to classnames in junit-xml output
pytest-warnings:
-W, --pythonwarnings PYTHONWARNINGS
Set which warnings to report, see -W option of
Python itself
--maxfail=num Exit after first num failures or errors
--strict-config Any warnings encountered while parsing the pytest
section of the configuration file raise errors
--strict-markers Markers not registered in the 'markers' section of
                      the configuration file raise errors
--strict (Deprecated) alias to --strict-markers
-c, --config-file FILE
                      Load configuration from FILE instead of trying to
                      locate one of the implicit configuration files.
--continue-on-collection-errors
Force test execution even if collection errors occur
--rootdir=ROOTDIR Define root directory for tests. Can be relative
path: 'root_dir', './root_dir',
'root_dir/another_dir/'; absolute path:
'/home/user/root_dir'; path with variables:
'$HOME/root_dir'.
collection: --collect-only, --co Only collect tests, don't execute them --pyargs Try to interpret all arguments as Python packages --ignore=path Ignore path during collection (multi-allowed) --ignore-glob=path Ignore path pattern during collection (multi- allowed) --deselect=nodeid_prefix Deselect item (via node id prefix) during collection (multi-allowed) --confcutdir=dir Only load conftest.py's relative to specified dir --noconftest Don't load any conftest.py files --keep-duplicates Keep duplicate tests --collect-in-virtualenv Don't ignore tests in a local virtualenv directory --import-mode={prepend,append,importlib} Prepend/append to sys.path when importing test modules and conftest files. Default: prepend. --doctest-modules Run doctests in all .py modules --doctest-report={none,cdiff,ndiff,udiff,only_first_failure} Choose another output format for diffs on doctest failure --doctest-glob=pat Doctests file matching pattern, default: test*.txt --doctest-ignore-import-errors Ignore doctest collection errors --doctest-continue-on-failure For a given doctest, continue to run after the first failure
test session debugging and configuration:
--basetemp=dir Base temporary directory for this test run.
(Warning: this directory is removed if it exists.)
-V, --version Display pytest version and information about
plugins. When given twice, also display information
about plugins.
-h, --help Show help message and configuration info
-p name Early-load given plugin module name or entry point
                      (multi-allowed). To avoid loading of plugins, use
                      the no: prefix, e.g. no:doctest.
--trace-config Trace considerations of conftest.py files
--debug=[DEBUG_FILE_NAME]
Store internal tracing debug information in this log
file. This file is opened with 'w' and truncated as
a result, care advised. Default: pytestdebug.log.
-o, --override-ini OVERRIDE_INI
                      Override ini option with "option=value" style, e.g.
                      -o xfail_strict=True -o cache_dir=cache.
--assert=MODE Control assertion debugging tools.
'plain' performs no assertion debugging.
'rewrite' (the default) rewrites assert statements
in test modules on import to provide assert
expression information.
--setup-only Only setup fixtures, do not execute tests
--setup-show Show setup of fixtures while executing tests
--setup-plan Show what fixtures and tests would be executed but
don't execute anything
logging: --log-level=LEVEL Level of messages to catch/display. Not set by default, so it depends on the root/parent log handler's effective level, where it is "WARNING" by default. --log-format=LOG_FORMAT Log format used by the logging module --log-date-format=LOG_DATE_FORMAT Log date format used by the logging module --log-cli-level=LOG_CLI_LEVEL CLI logging level --log-cli-format=LOG_CLI_FORMAT Log format used by the logging module --log-cli-date-format=LOG_CLI_DATE_FORMAT Log date format used by the logging module --log-file=LOG_FILE Path to a file when logging will be written to --log-file-mode={w,a} Log file open mode --log-file-level=LOG_FILE_LEVEL Log file logging level --log-file-format=LOG_FILE_FORMAT Log format used by the logging module --log-file-date-format=LOG_FILE_DATE_FORMAT Log date format used by the logging module --log-auto-indent=LOG_AUTO_INDENT Auto-indent multiline messages passed to the logging module. Accepts true|on, false|off or an integer. --log-disable=LOGGER_DISABLE Disable a logger by name. Can be passed multiple times.
[pytest] ini-options in the first pytest.ini|tox.ini|setup.cfg|pyproject.toml file found:
markers (linelist): Register new markers for test functions
empty_parameter_set_mark (string):
Default marker for empty parametersets
norecursedirs (args): Directory patterns to avoid for recursion
testpaths (args): Directories to search for tests when no files or
directories are given on the command line
filterwarnings (linelist):
Each line specifies a pattern for
warnings.filterwarnings. Processed after
-W/--pythonwarnings.
consider_namespace_packages (bool):
Consider namespace packages when resolving module
names during import
usefixtures (args): List of default fixtures to be used with this
project
python_files (args): Glob-style file patterns for Python test module
discovery
python_classes (args):
Prefixes or glob names for Python test class
discovery
python_functions (args):
Prefixes or glob names for Python test function and
method discovery
disable_test_id_escaping_and_forfeit_all_rights_to_community_support (bool):
Disable string escaping of non-ASCII characters; this
might cause unwanted side effects (use at your own
risk)
console_output_style (string):
Console output: "classic", or with additional
progress information ("progress" (percentage) |
"count" | "progress-even-when-capture-no" (forces
progress even when capture=no))
verbosity_test_cases (string):
Specify a verbosity level for test case execution,
overriding the main level. Higher levels will
provide more detailed information about each test
case executed.
xfail_strict (bool): Default for the strict parameter of xfail markers
when not given explicitly (default: False)
tmp_path_retention_count (string):
How many sessions' worth of tmp_path directories to
keep, according to tmp_path_retention_policy.
tmp_path_retention_policy (string):
Controls which directories created by the tmp_path
fixture are kept around, based on test outcome.
(all/failed/none)
enable_assertion_pass_hook (bool):
Enables the pytest_assertion_pass hook. Make sure to
delete any previously generated pyc cache files.
verbosity_assertions (string):
Specify a verbosity level for assertions, overriding
the main level. Higher levels will provide more
detailed explanation when an assertion fails.
junit_suite_name (string):
Test suite name for JUnit report
junit_logging (string):
Write captured log messages to JUnit report: one of
no|log|system-out|system-err|out-err|all
junit_log_passing_tests (bool):
Capture log information for passing tests to JUnit
report
junit_duration_report (string):
Duration time to report: one of total|call
junit_family (string):
Emit XML for schema: one of legacy|xunit1|xunit2
doctest_optionflags (args):
Option flags for doctests
doctest_encoding (string):
Encoding used for doctest files
cache_dir (string): Cache directory path
log_level (string): Default value for --log-level
log_format (string): Default value for --log-format
log_date_format (string):
Default value for --log-date-format
log_cli (bool): Enable log display during test run (also known as
"live logging")
log_cli_level (string):
Default value for --log-cli-level
log_cli_format (string):
Default value for --log-cli-format
log_cli_date_format (string):
Default value for --log-cli-date-format
log_file (string): Default value for --log-file
log_file_mode (string):
Default value for --log-file-mode
log_file_level (string):
Default value for --log-file-level
log_file_format (string):
Default value for --log-file-format
log_file_date_format (string):
Default value for --log-file-date-format
log_auto_indent (string):
Default value for --log-auto-indent
pythonpath (paths): Add paths to sys.path
faulthandler_timeout (string):
Dump the traceback of all threads if a test takes
more than TIMEOUT seconds to finish
addopts (args): Extra command line options
minversion (string): Minimally required pytest version
required_plugins (args):
Plugins that must be present for pytest to run
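Purely as a sketch, a handful of the ini options listed above could be set in a pytest.ini (or the [tool.pytest.ini_options] table of pyproject.toml); the values below are illustrative, not recommended defaults:

[pytest]
addopts = -ra
testpaths = tests
python_files = test_*.py
xfail_strict = true
log_cli = true
log_cli_level = INFO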
Environment variables:
CI When set (regardless of value), pytest knows it is
running in a CI process and does not truncate summary
info
BUILD_NUMBER Equivalent to CI
PYTEST_ADDOPTS Extra command line options
PYTEST_PLUGINS Comma-separated plugins to load during startup
PYTEST_DISABLE_PLUGIN_AUTOLOAD
Set to disable plugin auto-loading
PYTEST_DEBUG Set to enable debug tracing of pytest's internals
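For example, in a POSIX shell these variables might be set before a run (the values shown are illustrative):

export PYTEST_ADDOPTS="-ra -q"             # extra command line options applied to every run
export PYTEST_DISABLE_PLUGIN_AUTOLOAD=1    # any value disables plugin auto-loading
pytest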
to see available markers type: pytest --markers
to see available fixtures type: pytest --fixtures
(shown according to specified file_or_dir or current dir if not specified;
fixtures with leading '_' are only shown with the '-v' option)