Django and Oscar
Mike Slinn

Django / PyTest Setup and Rationale

Published 2021-06-08.


In an earlier article I discussed a simple setup for Django unit testing using unittest, which requires tests to be written as classes. Unfortunately, the official Django documentation for unittest does not mention fixtures by name. Instead, the docs contain a poorly written description of how to use manage.py’s dumpdata and loaddata commands to manage test data. I found the class-based approach imposed by unittest limiting because it is incompatible with programmatically generated fixtures.
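For reference, the dumpdata / loaddata workflow the Django docs describe looks roughly like this; the app name (core) and fixture file name are placeholders:

Shell
(aw) $ python manage.py dumpdata core --indent 2 > core/fixtures/core_test_data.json
(aw) $ python manage.py loaddata core_test_data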

PyTest

This article focuses on unit testing with PyTest, which could be described as a loose superset of unittest with strong support for programmatically generated fixtures. PyTest does not require tests to be arranged in classes, although class-based tests are still collected. I installed pytest-django, which integrates PyTest with Django.

Shell
(aw) $ pip install pytest-django
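For contrast with unittest's class-based style, a PyTest test can be a bare function; this hypothetical example is all PyTest needs to collect and run it:

test_example.py (hypothetical)
def test_addition():
    # PyTest collects any function whose name starts with test_
    assert 1 + 1 == 2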

Below is the pytest help message. It is difficult to read, and would benefit from copy editing and formatting:

Shell
(aw) $ pytest -h
usage: pytest [options] [file_or_dir] [file_or_dir] [...]
positional arguments: file_or_dir
general: -k EXPRESSION only run tests which match the given substring expression. An expression is a python evaluatable expression where all names are substring-matched against test names and their parent classes. Example: -k 'test_method or test_other' matches all test functions and classes whose name contains 'test_method' or 'test_other', while -k 'not test_method' matches those that don't contain 'test_method' in their names. -k 'not test_method and not test_other' will eliminate the matches. Additionally keywords are matched to classes and functions containing extra names in their 'extra_keyword_matches' set, as well as functions which have names assigned directly to them. The matching is case-insensitive. -m MARKEXPR only run tests matching given mark expression. For example: -m 'mark1 and not mark2'. --markers show markers (builtin, plugin and per-project ones). -x, --exitfirst exit instantly on first error or failed test. --fixtures, --funcargs show available fixtures, sorted by plugin appearance (fixtures with leading '_' are only shown with '-v') --fixtures-per-test show fixtures per test --pdb start the interactive Python debugger on errors or KeyboardInterrupt. --pdbcls=modulename:classname start a custom interactive Python debugger on errors. For example: --pdbcls=IPython.terminal.debugger:TerminalPdb --trace Immediately break when running each test. --capture=method per-test capturing method: one of fd|sys|no|tee-sys. -s shortcut for --capture=no. --runxfail report the results of xfail tests as if they were not marked --lf, --last-failed rerun only the tests that failed at the last run (or all if none failed) --ff, --failed-first run all tests, but run the last failures first. This may re-order tests and thus lead to repeated fixture setup/teardown. --nf, --new-first run tests from new files first, then the rest of the tests sorted by file mtime --cache-show=[CACHESHOW] show cache contents, don't perform collection or tests. Optional argument: glob (default: '*'). --cache-clear remove all cache contents at start of test run. --lfnf={all,none}, --last-failed-no-failures={all,none} which tests to run with no previously (known) failures. --sw, --stepwise exit on test failure and continue from last failing test next time --sw-skip, --stepwise-skip ignore the first failing test but stop on the next failing test
reporting: --durations=N show N slowest setup/test durations (N=0 for all). --durations-min=N Minimal duration in seconds for inclusion in slowest list. Default 0.005 -v, --verbose increase verbosity. --no-header disable header --no-summary disable summary -q, --quiet decrease verbosity. --verbosity=VERBOSE set verbosity. Default is 0. -r chars show extra test summary info as specified by chars: (f)ailed, (E)rror, (s)kipped, (x)failed, (X)passed, (p)assed, (P)assed with output, (a)ll except passed (p/P), or (A)ll. (w)arnings are enabled by default (see --disable-warnings), 'N' can be used to reset the list. (default: 'fE'). --disable-warnings, --disable-pytest-warnings disable warnings summary -l, --showlocals show locals in tracebacks (disabled by default). --tb=style traceback print mode (auto/long/short/line/native/no). --show-capture={no,stdout,stderr,log,all} Controls how captured stdout/stderr/log is shown on failed tests. Default is 'all'. --full-trace don't cut any tracebacks (default is to cut). --color=color color terminal output (yes/no/auto). --code-highlight={yes,no} Whether code should be highlighted (only if --color is also enabled) --pastebin=mode send failed|all info to bpaste.net pastebin service. --junit-xml=path create junit-xml style report file at given path. --junit-prefix=str prepend prefix to classnames in junit-xml output
pytest-warnings: -W PYTHONWARNINGS, --pythonwarnings=PYTHONWARNINGS set which warnings to report, see -W option of python itself. --maxfail=num exit after first num failures or errors. --strict-config any warnings encountered while parsing the `pytest` section of the configuration file raise errors. --strict-markers markers not registered in the `markers` section of the configuration file raise errors. --strict (deprecated) alias to --strict-markers. -c file load configuration from `file` instead of trying to locate one of the implicit configuration files. --continue-on-collection-errors Force test execution even if collection errors occur. --rootdir=ROOTDIR Define root directory for tests. Can be relative path: 'root_dir', './root_dir', 'root_dir/another_dir/'; absolute path: '/home/user/root_dir'; path with variables: '$HOME/root_dir'.
collection: --collect-only, --co only collect tests, don't execute them. --pyargs try to interpret all arguments as python packages. --ignore=path ignore path during collection (multi-allowed). --ignore-glob=path ignore path pattern during collection (multi-allowed). --deselect=nodeid_prefix deselect item (via node id prefix) during collection (multi-allowed). --confcutdir=dir only load conftest.py's relative to specified dir. --noconftest Don't load any conftest.py files. --keep-duplicates Keep duplicate tests. --collect-in-virtualenv Don't ignore tests in a local virtualenv directory --import-mode={prepend,append,importlib} prepend/append to sys.path when importing test modules and conftest files, default is to prepend. --doctest-modules run doctests in all .py modules --doctest-report={none,cdiff,ndiff,udiff,only_first_failure} choose another output format for diffs on doctest failure --doctest-glob=pat doctests file matching pattern, default: test*.txt --doctest-ignore-import-errors ignore doctest ImportErrors --doctest-continue-on-failure for a given doctest, continue to run after the first failure
test session debugging and configuration: --basetemp=dir base temporary directory for this test run.(warning: this directory is removed if it exists) -V, --version display pytest version and information about plugins.When given twice, also display information about plugins. -h, --help show help message and configuration info -p name early-load given plugin module name or entry point (multi-allowed). To avoid loading of plugins, use the `no:` prefix, e.g. `no:doctest`. --trace-config trace considerations of conftest.py files. --debug store internal tracing debug information in 'pytestdebug.log'. -o OVERRIDE_INI, --override-ini=OVERRIDE_INI override ini option with "option=value" style, e.g. `-o xfail_strict=True -o cache_dir=cache`. --assert=MODE Control assertion debugging tools. 'plain' performs no assertion debugging. 'rewrite' (the default) rewrites assert statements in test modules on import to provide assert expression information. --setup-only only setup fixtures, do not execute tests. --setup-show show setup of fixtures while executing tests. --setup-plan show what fixtures and tests would be executed but don't execute anything.
logging: --log-level=LEVEL level of messages to catch/display. Not set by default, so it depends on the root/parent log handler's effective level, where it is "WARNING" by default. --log-format=LOG_FORMAT log format as used by the logging module. --log-date-format=LOG_DATE_FORMAT log date format as used by the logging module. --log-cli-level=LOG_CLI_LEVEL cli logging level. --log-cli-format=LOG_CLI_FORMAT log format as used by the logging module. --log-cli-date-format=LOG_CLI_DATE_FORMAT log date format as used by the logging module. --log-file=LOG_FILE path to a file when logging will be written to. --log-file-level=LOG_FILE_LEVEL log file logging level. --log-file-format=LOG_FILE_FORMAT log format as used by the logging module. --log-file-date-format=LOG_FILE_DATE_FORMAT log date format as used by the logging module. --log-auto-indent=LOG_AUTO_INDENT Auto-indent multiline messages passed to the logging module. Accepts true|on, false|off or an integer.
[pytest] ini-options in the first pytest.ini|tox.ini|setup.cfg file found:
markers (linelist): markers for test functions empty_parameter_set_mark (string): default marker for empty parametersets norecursedirs (args): directory patterns to avoid for recursion testpaths (args): directories to search for tests when no files or directories are given in the command line. filterwarnings (linelist): Each line specifies a pattern for warnings.filterwarnings. Processed after -W/--pythonwarnings. usefixtures (args): list of default fixtures to be used with this project python_files (args): glob-style file patterns for Python test module discovery python_classes (args): prefixes or glob names for Python test class discovery python_functions (args): prefixes or glob names for Python test function and method discovery disable_test_id_escaping_and_forfeit_all_rights_to_community_support (bool): disable string escape non-ascii characters, might cause unwanted side effects(use at your own risk) console_output_style (string): console output: "classic", or with additional progress information ("progress" (percentage) | "count"). xfail_strict (bool): default for the strict parameter of xfail markers when not given explicitly (default: False) enable_assertion_pass_hook (bool): Enables the pytest_assertion_pass hook.Make sure to delete any previously generated pyc cache files. junit_suite_name (string): Test suite name for JUnit report junit_logging (string): Write captured log messages to JUnit report: one of no|log|system-out|system-err|out-err|all junit_log_passing_tests (bool): Capture log information for passing tests to JUnit report: junit_duration_report (string): Duration time to report: one of total|call junit_family (string): Emit XML for schema: one of legacy|xunit1|xunit2 doctest_optionflags (args): option flags for doctests doctest_encoding (string): encoding used for doctest files cache_dir (string): cache directory path. log_level (string): default value for --log-level log_format (string): default value for --log-format log_date_format (string): default value for --log-date-format log_cli (bool): enable log display during test run (also known as "live logging"). log_cli_level (string): default value for --log-cli-level log_cli_format (string): default value for --log-cli-format log_cli_date_format (string): default value for --log-cli-date-format log_file (string): default value for --log-file log_file_level (string): default value for --log-file-level log_file_format (string): default value for --log-file-format log_file_date_format (string): default value for --log-file-date-format log_auto_indent (string): default value for --log-auto-indent faulthandler_timeout (string): Dump the traceback of all threads if a test takes more than TIMEOUT seconds to finish. addopts (args): extra command line options minversion (string): minimally required pytest version required_plugins (args): plugins that must be present for pytest to run
environment variables: PYTEST_ADDOPTS extra command line options PYTEST_PLUGINS comma-separated plugins to load during startup PYTEST_DISABLE_PLUGIN_AUTOLOAD set to disable plugin auto-loading PYTEST_DEBUG set to enable debug tracing of pytest's internals

to see available markers type: pytest --markers to see available fixtures type: pytest --fixtures (shown according to specified file_or_dir or current dir if not specified; fixtures with leading '_' are only shown with the '-v' option
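For example, the -k expression syntax described above selects tests by substring match against their names:

Shell
(aw) $ pytest -k 'partner and not address'  # runs test_partner, skips test_partner_address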

I found that PyTest ran extremely slowly when my tests accessed the database. Instead of completing in seconds, creating a few simple fixtures took over a minute. I installed pytest-timer[termcolor] to see what was slow. It showed that the tests themselves took only 0.0029 seconds in total; startup overhead took 66 seconds. When I disabled all the tests (by annotating them with @pytest.mark.skip()), pytest completed in 0.04 seconds. The problem is that PyTest creates a test database and performs all migrations before running tests. I experienced the same delay when using manage.py test, for the same reason.

Shell
(aw) $ pip install pytest-timer[termcolor]
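Assuming pytest-django is installed, one way to reduce the test-database startup cost is to keep the test database between runs with --reuse-db, forcing re-creation only when migrations change:

Shell
(aw) $ pytest --reuse-db   # skip creating the test database if one already exists
(aw) $ pytest --create-db  # re-create the test database after changing migrations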

pytest.ini

PyTest is configured by placing a file called pytest.ini in the root directory of a Django webapp. This is the one I created:

[pytest]
# Django-specific unit test settings
#
# See https://pytest-django.readthedocs.io/en/latest/usage.html#django-debug-mode-change-how-debug-is-set
# django_debug_mode = true
#
DJANGO_SETTINGS_MODULE = main.settings.test
FAIL_INVALID_TEMPLATE_VARS = True

# General Python unit test settings
filterwarnings = ignore::django.utils.deprecation.RemovedInDjango40Warning
python_files = tests.py test_*.py *_tests.py
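The DJANGO_SETTINGS_MODULE line above points at a dedicated test settings module, which is not shown in this article. A minimal sketch of what main/settings/test.py might contain, assuming the project keeps its common settings in a base module, is:

main/settings/test.py (hypothetical sketch)
from .base import *  # assumes common settings live in main/settings/base.py

# Speed up tests: throwaway SQLite database and a fast password hasher
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": ":memory:",
    }
}
PASSWORD_HASHERS = ["django.contrib.auth.hashers.MD5PasswordHasher"]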

Fixtures

PyTest fixtures can be created either by Python code, which is the subject of this article, or by the @pytest.mark.parametrize annotation.
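parametrize is not used further in this article, but for reference a parametrized test looks like this (a generic sketch, unrelated to the webapp's models):

Hypothetical example
import pytest

@pytest.mark.parametrize("iso_code, name", [
    ("CA", "Canada"),
    ("US", "United States"),
])
def test_country_codes(iso_code, name):
    # The test runs once per tuple in the list
    assert len(iso_code) == 2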

Fixtures that are shared between test modules are defined in a file called conftest.py. A conftest.py file is a Python module, loaded automatically by PyTest, and fixtures defined in it are available to test modules in the same directory and below. If unit tests for a Django app need fixtures, create a conftest.py file in the app directory. For a Django app called core within a webapp called myWebApp, the path is myWebApp/core/conftest.py.
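The resulting layout, showing only the test-related files discussed in this article, looks like this:

myWebApp/
├── core/
│   ├── conftest.py      # fixtures shared by the core app's tests
│   ├── models.py
│   ├── oscar_tests.py
│   └── tests.py
├── main/
│   └── settings/
│       └── test.py      # referenced by DJANGO_SETTINGS_MODULE in pytest.ini
├── manage.py
└── pytest.ini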

Fixtures are normally created when a test function requests them by declaring them in the function signature. By default, fixtures are destroyed when each test finishes. The scope parameter of the @pytest.fixture annotation controls a fixture’s lifespan. For example, session scope causes a fixture to be created only once and to survive until all tests complete. Session scope is declared by applying @pytest.fixture(scope="session") to the function that defines the fixture.
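For example, a session-scoped fixture that does not touch the database (a minimal sketch, not from the webapp) could be written as:

Hypothetical example
import pytest

@pytest.fixture(scope="session")
def default_currency():
    # Created once for the entire test session, then shared by every test that requests it
    return "CAD"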

Database-Backed Fixtures

The following code creates a fixture called designer for unit testing a Django-Oscar webapp. Later fixtures demonstrate how to create instances of Django-Oscar’s Country, Partner and PartnerAddress classes. To access the database from a test fixture, another special fixture called db must be injected.

The underdocumented db fixture is provided by the pytest-django plugin. When db is injected, the @pytest.mark.django_db annotation is not required. However, because db is a function-scoped fixture, it can only be injected into other function-scoped fixtures. This means that fixture definitions that inject db must use the plain @pytest.fixture annotation rather than @pytest.fixture(scope="session").

myWebApp/core/conftest.py
import pytest
from django.contrib.auth import get_user_model
from oscar.core.loading import get_model

from core.models import Designer


@pytest.fixture
def designer(db):
    user = get_user_model().objects.create(
        username = "testy",
        email = "test@ancientwarmth.com",
        is_staff = True,
        is_superuser = True,
    )
    return Designer.objects.create(
        mission_statement = "Test mission",
        user = user
    )


@pytest.fixture
def another_fixture(db, designer):
    # Do something with the injected fixtures (db and designer);
    # any number of fixtures can be injected.
    # Return whatever the dependent tests need; here the designer is passed through.
    return designer

The following fixtures could be helpful for testing Django-Oscar applications:

myWebApp/core/conftest.py (Continued)
@pytest.fixture
def country(db):
    OscarCountryMgr = get_model('address', 'Country')._default_manager
    OscarCountryMgr.all().delete()
    return OscarCountryMgr.create(iso_3166_1_a2='CA', name="Canada")
@pytest.fixture
def partner(db):
    OscarPartnerMgr = get_model('partner', 'Partner')._default_manager
    OscarPartnerMgr.all().delete()
    return OscarPartnerMgr.create(name="Test partner")

@pytest.fixture
def partner_address(country, db, partner):
    OscarPartnerAddressMgr = get_model('partner', 'PartnerAddress')._default_manager
    OscarPartnerAddressMgr.all().delete()
    return OscarPartnerAddressMgr.create(
        title = "Dr",
        first_name = "Soapy",
        last_name = "Sudsberger",
        country = country,
        postcode = "H2P 2J7",
        partner = partner
    )

PyTest fixtures can also set global state for the test suite by enabling the autouse parameter of @pytest.fixture. In the following code, the name of the set_global_state fixture is not important. The yield keyword causes the fixture to pause execution while tests run.

myWebApp/core/conftest.py (Continued)
@pytest.fixture(autouse=True)
def set_global_state():
    """Set environment variables, and other global state"""
    # Set global state here
    yield  # Tests run now
    # Restore global state here
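A concrete (hypothetical) version of that skeleton might set an environment variable before the tests run and restore it afterwards:

myWebApp/core/conftest.py (hypothetical variation)
import os
import pytest

@pytest.fixture(autouse=True)
def stub_payment_gateway():
    # Hypothetical: point every test at a fake payment gateway
    original = os.environ.get("PAYMENT_GATEWAY_URL")
    os.environ["PAYMENT_GATEWAY_URL"] = "http://localhost:9999/fake"
    yield  # Tests run now
    if original is None:
        os.environ.pop("PAYMENT_GATEWAY_URL", None)
    else:
        os.environ["PAYMENT_GATEWAY_URL"] = original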

Writing Unit Tests

The test_mission test accepts the designer fixture defined in core/conftest.py. Note that test_mission does not directly access the database (the fixture does that), so no annotation such as @pytest.mark.django_db is required on the test.

myWebApp/core/tests.py
import pytest
from core.conftest import *
def test_mission(designer):
    assert designer.mission_statement == "Test mission"

def test_another_fixture(another_fixture):
    assert another_fixture is not None

Django-Oscar applications can test Country, Partner and PartnerAddress like this:

myWebApp/core/oscar_tests.py
def test_country(country):
    assert country.name == "Canada"
    assert country.code == "CA"
    # assert country.is_shipping_country == True
    # assert country.printable_name != ""
    assert country.iso_3166_1_a2 == "CA"
    # assert country.iso_3166_1_a3 == "CAN"
def test_partner(partner):
    assert partner.code == 'test-partner'
    assert partner.display_name == 'Test partner'

    order_lines = partner.order_lines
    assert order_lines.count() == 0  # TODO what is this?

    addresses = partner.addresses
    assert addresses.count() == 0

    stockrecords = partner.stockrecords
    assert stockrecords.count() == 0

    users = partner.users
    assert users.count() == 0

def test_partner_address(partner_address):
    assert partner_address.city == ''
    assert partner_address.country_id == 'CA'
    assert partner_address.first_name == 'Soapy'
    assert partner_address.get_title_display() == 'Dr'
    assert partner_address.last_name == 'Sudsberger'
    assert partner_address.line1 == ''
    assert partner_address.line2 == ''
    assert partner_address.line3 == ''
    assert partner_address.line4 == ''
    assert partner_address.name == 'Soapy Sudsberger'
    assert partner_address.partner.display_name == 'Test partner'
    assert partner_address.postcode == 'H2P 2J7'
    assert partner_address.salutation == 'Dr Soapy Sudsberger'
    assert partner_address.state == ''
    assert partner_address.summary == 'Dr Soapy Sudsberger, H2P 2J7'
    assert partner_address.title == 'Dr'

Running Unit Tests From the Command Line

To run all tests from the command line, simply type:

Shell
(aw) $ pytest

Specific test files can be run by specifying the test file names:

Shell
(aw) $ pytest myapp/test/test_1.py

All tests in specific directories can be run by specifying the directory names:

Shell
(aw) $ pytest myapp1 myapp2

Debugging pytest issues can be done by using the --log-cli-level option:

Shell
(aw) $ pytest --log-cli-level=DEBUG core/tests.py

Running Unit Tests From Visual Studio Code

This is my VSCode launch configuration for running PyTest on all unit tests in the core app:

.vscode/launch.json
[ 
  {
    "justMyCode": true,
    "name": "AW PyTest",
    "type": "python",
    "request": "launch",
    "program": "${env:venv}/aw/bin/pytest",
    "python": "${env:venv}/aw/bin/python",
    "args": [
      "core/tests.py"
    ],
    "django": true,
    "env": {
      "DJANGO_SETTINGS_MODULE": "main.settings.test"
    },
    "presentation": {
      "group": "aw",
      "order": 6
    }
  },
] 
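Aside from this launch configuration, VS Code's Testing panel can also discover the same tests if PyTest is enabled in .vscode/settings.json; an optional sketch:

.vscode/settings.json (optional)
{
  "python.testing.pytestEnabled": true,
  "python.testing.pytestArgs": [ "core" ]
}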

