py.test cheatsheet

Published: 2018-01-27
Tagged: python learning guide

I converted to the church of Pytest some time ago, and since then I've kept a small reference file so I can quickly use certain features without having to trawl the documentation. Now that it's grown, I thought I should share it with others.

Running tests:

# Run all tests with 'get_data' in the name
pytest -k "get_data"

# Exclude tests with 'timeout' in name
pytest -k "get_data and not timeout"

# Run a single method of a test class
pytest tests/unit/test_protocol.py::TestWriteProto::test_bad_block

# Run a single test class
pytest tests/unit/test_protocol.py::TestWriteProto

# Run a single test function
pytest tests/unit/test_protocol.py::test_setup_func

# Run tests without capturing stdout, so you can use pdb:
pytest -s tests/

# Run tests in verbose mode, useful for finding small differences in assertions:
pytest -vv tests/

# Only run tests that failed during the last run
pytest --lf

Marking and running marked tests:

# Contents of tests/unit/test_api.py
@pytest.mark.api
def test_api_get_remote():
    # test
    ...

@pytest.mark.api
@pytest.mark.post
def test_api_post_remote():
    # test
    ...

# Back on the CLI, this will run both of the tests above
pytest tests/unit -m "api"

# but this will only run the first one
pytest tests/unit -m "api and not post"
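
Marker names aren't validated by default, so a typo in a mark silently matches nothing. Registering markers in pytest.ini lets pytest reject unregistered marks when run with --strict; a minimal sketch (the file is assumed to live in the project root, and the descriptions are just labels):

# Contents of pytest.ini
[pytest]
markers =
    api: tests that touch the remote API
    post: tests that issue POST requests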

XFail

xfail tests are tests that we expect to fail. They are marked differently in the test output ("x" instead of ".") and reported as e.g. "3 xfailed" when everything runs as expected. If an xfail test passes instead of failing, it's marked with an uppercase "X" and reported as e.g. "3 xpassed".

collected 3 items

test_weird.py .xX                       [100%]

========== 1 passed, 1 xfailed, 1 xpassed in 0.04 seconds ==========

Marking tests as xfail:

@pytest.mark.xfail
def test_post_response_200():
    # test
    ...

xfail can also take a condition and a reason:

@pytest.mark.xfail(package.version < '1.0.0', reason='Pre-production release does not support this feature')
def test_advanced_function():
    # test
    ...

Fixtures

The following will enable verbose fixture setup/teardown messages when running tests:

pytest --setup-show

Simple fixture:

@pytest.fixture()
def example_data():
    return {'data': 'test'}
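
A test requests a fixture by naming it as an argument; pytest calls the fixture and injects its return value. A minimal (hypothetical) test for the fixture above:

def test_example_data(example_data):
    assert example_data['data'] == 'test'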

Setup/teardown fixture + nested fixtures:

@pytest.fixture()
def db_connection():
    conn = DbAdapter('db-url')
    yield conn
    conn.close()

@pytest.fixture()
def example_setup_teardown(db_connection):
    data = db_connection.query('id = 1')
    return data

Fixture scope is specified at import time. The default fixture scope is 'function'. A scoped fixture can only depend on other fixtures of the same or higher scope. Possible scopes, from lowest to highest: function, class, module, session.

Example:

@pytest.fixture(scope='function')
def example_fixture():
    # fixture code
    ...

@pytest.fixture(scope='class')
def example_class_fixture():
    # fixture code
    ...

@pytest.fixture(scope='session')
def example_session_fixture():
    # cannot re-use "example_fixture" because it's a "lower-scoped" fixture
    # fixture code
    ...

Fixtures can be applied to classes:

# using code from above example
@pytest.mark.usefixtures('example_class_fixture')
class TestThings:
    def test_thing(self):
        # example_class_fixture will get called
        # test code
        ...

Or, to use the return value of a fixture in a class:

@pytest.fixture(scope='class')
def example_data(request):
    data = {'data': 'test'}
    request.cls.data = data


# the class must request the fixture (e.g. via usefixtures),
# otherwise it never runs and self.data is never set
@pytest.mark.usefixtures('example_data')
class TestThings:
    def test_thing(self):
        test_data = self.data
        # remainder of test

Fixtures can be parametrized (more on this soon):

responses = [
    (200, 'OK'),
    (400, 'Bad Request')
]

@pytest.fixture(params=responses)
def example_response(request):
    rsp = Response(status=request.param[0], msg=request.param[1])
    return rsp

def test_requests(example_response):
    # this test will run twice, once for each item in "responses"
    ...

A fixture can be marked as "autouse", which makes every test within its scope use it automatically:

@pytest.fixture(scope='session', autouse=True)
def global_fixture():
    # fixture code
    ...

conftest.py can be used to share fixtures. If you want to share fixtures throughout all of your project's tests, create a conftest.py file in project/tests. Pytest detects this file automatically and makes its fixtures available to tests without any import. You can also add a conftest.py inside a subdirectory to share its fixtures only within that subdirectory; for example, creating project/tests/unit/login/conftest.py ensures that only tests under project/tests/unit/login have access to those fixtures.
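
A minimal sketch (fixture and file names are illustrative) showing that a fixture defined in conftest.py is usable in a test file with no import:

# Contents of project/tests/conftest.py
import pytest

@pytest.fixture()
def shared_data():
    return {'data': 'shared'}

# Contents of project/tests/unit/test_shared.py
def test_shared(shared_data):
    # "shared_data" resolves from conftest.py automatically
    assert shared_data['data'] == 'shared'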

Built-in fixtures

Pytest comes with a number of built-in fixtures. Among the most commonly used are tmpdir, capsys, and monkeypatch.
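
A quick sketch of all three in one (illustrative) test:

import os

def test_builtin_fixtures(tmpdir, capsys, monkeypatch):
    # tmpdir: a unique temporary directory, as a py.path.local object
    cfg = tmpdir.join('settings.ini')
    cfg.write('[main]\n')
    assert cfg.check(file=True)

    # monkeypatch: patch env vars/attributes, undone after the test
    monkeypatch.setenv('APP_ENV', 'test')
    assert os.environ['APP_ENV'] == 'test'

    # capsys: capture stdout/stderr written during the test
    print('hello')
    out, err = capsys.readouterr()
    assert out == 'hello\n'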

Other built-in fixtures can be found here: https://docs.pytest.org/en/latest/builtin.html#builtin-fixtures-function-arguments

Parametrized tests

Parametrized tests are tests that pytest runs once for each set of parameters. For example, the following test will run three times:

server_rsp_params = [
    ('/home', 200),
    ('/user', 403),
    ('/fake-url', 404)
]

@pytest.mark.parametrize('url, status_code', server_rsp_params)
def test_server_responses(url, status_code):
    base_url = 'http://127.0.0.1'
    rsp = requests.get(base_url + url)
    assert rsp.status_code == status_code
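
parametrize also accepts ids, which label each run in the test report; a sketch reusing the parameters above (the id strings are arbitrary):

@pytest.mark.parametrize('url, status_code', server_rsp_params,
                         ids=['home', 'user', 'missing'])
def test_server_responses_with_ids(url, status_code):
    # same test body as above
    ...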
