6 changes: 3 additions & 3 deletions .github/workflows/main.yaml
@@ -65,9 +65,9 @@ jobs:
          pip install -r requirements-test.txt
      - name: Test with Python=${{ matrix.python-version }} Loader=${{ matrix.loader }}
        run: |
-          python tests/runtests.py ./specifications/json-ld-api/tests -l ${{ matrix.loader }}
-          python tests/runtests.py ./specifications/json-ld-framing/tests -l ${{ matrix.loader }}
-          python tests/runtests.py ./specifications/normalization/tests -l ${{ matrix.loader }}
+          pytest --tests=./specifications/json-ld-api/tests --loader=${{ matrix.loader }}
+          pytest --tests=./specifications/json-ld-framing/tests --loader=${{ matrix.loader }}
+          pytest --tests=./specifications/normalization/tests --loader=${{ matrix.loader }}
        env:
          LOADER: ${{ matrix.loader }}
#coverage:
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -8,6 +8,7 @@
- Minimize async related changes to library code in this release.
- In sync environment use `asyncio.run`.
- In async environment use background thread.
+- Add ability to run the test suites with pytest and make pytest the default way of running (unit) tests.

## 2.0.4 - 2024-02-16

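The two async bullets in this changelog entry are compact; as a rough illustration, the pattern they describe looks like the sketch below. The helper name `run_coroutine_sync` is hypothetical and this is not pyld's actual internal code.

```python
import asyncio
import threading


def run_coroutine_sync(coro):
    """Run a coroutine to completion from synchronous code (hypothetical sketch)."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # Sync environment: no event loop is running, so asyncio.run() is safe.
        return asyncio.run(coro)

    # Async environment: this thread already runs a loop, so execute the
    # coroutine in a fresh loop on a background thread and wait for it.
    result = {}

    def _worker():
        result['value'] = asyncio.run(coro)

    thread = threading.Thread(target=_worker)
    thread.start()
    thread.join()
    return result['value']
```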
43 changes: 41 additions & 2 deletions README.rst
@@ -220,10 +220,41 @@ Note that you can clone these repositories into any location you wish; however,
if you do not clone them into the default ``specifications/`` folder, you will
need to provide the paths to the test runner as arguments when running the tests, as explained below.

-## Running the tests
+## Running the sample test suites and unit tests using pytest

If the test suite repositories are available in the ``specifications/`` folder of the PyLD
-source directory, then all the tests can be run with the following:
+source directory, then all unit tests, including the sample test suites, can be run with pytest_:

+.. code-block:: bash

+   pytest

+If you wish to store the test suites in a location other than the default
+``specifications/`` folder, or you want to test individual manifest ``.jsonld`` files or directories
+containing a ``manifest.jsonld``, you can supply these files or
+directories as arguments:

+.. code-block:: bash

+   # usage: pytest --tests=TEST_PATH [--tests=TEST_PATH...]
+   pytest --tests=./specifications/json-ld-api/tests

+The test runner supports different document loaders, selected with ``--loader requests``
+or ``--loader aiohttp``. The default document loader is Requests_.

+.. code-block:: bash

+   pytest --loader=requests --tests=./specifications/json-ld-api/tests

+An EARL (Evaluation and Report Language) report can be generated using the ``--earl`` option.

+.. code-block:: bash

+   pytest --earl=./earl-report.json

+## Running the sample test suites using the original test runner

+You can also run the JSON-LD test suites using the provided test runner script, ``tests/runtests.py``:

+.. code-block:: bash

@@ -241,8 +272,16 @@ directories as arguments:
The test runner supports different document loaders by setting ``-l requests``
or ``-l aiohttp``. The default document loader is set to Requests_.

+.. code-block:: bash

+   python tests/runtests.py -l requests ./specifications/json-ld-api/tests

An EARL report can be generated using the ``-e`` or ``--earl`` option.

+.. code-block:: bash

+   python tests/runtests.py -e ./earl-report.json


.. _Digital Bazaar: https://digitalbazaar.com/

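For context, the `--loader` option documented above maps onto pyld's built-in document loaders. A minimal sketch of selecting one programmatically through pyld's public `set_document_loader` API (the conftest.py below instead assigns the private `_default_document_loader` attribute directly):

```python
from pyld import jsonld

# Choose the loader used for fetching remote contexts and documents.
jsonld.set_document_loader(jsonld.requests_document_loader())  # default
# or, for asyncio-based fetching:
# jsonld.set_document_loader(jsonld.aiohttp_document_loader())
```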
1 change: 1 addition & 0 deletions requirements-test.txt
@@ -1 +1,2 @@
flake8
+pytest
Empty file added tests/__init__.py
153 changes: 153 additions & 0 deletions tests/conftest.py
@@ -0,0 +1,153 @@
import os
import unittest
import pytest

# Import the existing test runner module so we can reuse the Manifest/Test
# implementations with minimal changes.
from . import runtests


def pytest_addoption(parser):
    # Use only long options for the pytest integration; pytest reserves
    # single-letter short options for its own CLI flags.
    parser.addoption('--tests', action='append', default=[], help='A manifest or directory to test (repeatable)')
    parser.addoption('--earl', dest='earl', help='The filename to write an EARL report to')
    parser.addoption('--loader', dest='loader', default='requests', help='The remote URL document loader: requests, aiohttp')
    parser.addoption('--number', dest='number', help='Limit tests to those containing the specified test identifier')


def pytest_configure(config):
    # Apply the loader choice and selected test number globally so that the
    # existing `runtests` helpers behave the same as the CLI runner.
    loader = config.getoption('loader')
    if loader == 'requests':
        runtests.jsonld._default_document_loader = runtests.jsonld.requests_document_loader()
    elif loader == 'aiohttp':
        runtests.jsonld._default_document_loader = runtests.jsonld.aiohttp_document_loader()

    number = config.getoption('number')
    if number:
        runtests.ONLY_IDENTIFIER = number

    # If an EARL output file was requested, create a session-level
    # EarlReport instance that we will populate per-test.
    earl_fn = config.getoption('earl')
    if earl_fn:
        config._earl_report = runtests.EarlReport()
    else:
        config._earl_report = None


def _flatten_suite(suite):
    """Yield TestCase instances from a unittest TestSuite (recursively)."""
    if isinstance(suite, unittest.TestSuite):
        for s in suite:
            yield from _flatten_suite(s)
    elif isinstance(suite, unittest.TestCase):
        yield suite


def pytest_generate_tests(metafunc):
    # Parametrize tests using the existing manifest loader if the test
    # function needs a `manifest_test` argument.
    if 'manifest_test' not in metafunc.fixturenames:
        return

    config = metafunc.config
    tests_arg = config.getoption('tests') or []

    if tests_arg:
        test_targets = tests_arg
    else:
        # Default sibling directories used by the original runner. Keep the
        # original relative strings but resolve them relative to this
        # `conftest.py` so tests can be discovered regardless of cwd.
        base_path = os.path.abspath(os.path.dirname(__file__))

        test_targets = []
        for d in runtests.SIBLING_DIRS:
            d_path = os.path.abspath(os.path.join(base_path, d))
            if os.path.exists(d_path):
                test_targets.append(d_path)

    if not test_targets:
        pytest.skip('No test manifest or directory specified (use --tests)')

    # Build a root manifest structure with the target files and directories
    # (equivalent to the original runner).
    root_manifest = {
        '@context': 'https://w3c.github.io/tests/context.jsonld',
        '@id': '',
        '@type': 'mf:Manifest',
        'description': 'Top level PyLD test manifest',
        'name': 'PyLD',
        'sequence': [],
        'filename': '/'
    }

    for test in test_targets:
        if os.path.isfile(test):
            root, ext = os.path.splitext(test)
            if ext in ['.json', '.jsonld']:
                root_manifest['sequence'].append(os.path.abspath(test))
            else:
                raise Exception('Unknown test file ext', root, ext)
        elif os.path.isdir(test):
            filename = os.path.join(test, 'manifest.jsonld')
            if os.path.exists(filename):
                root_manifest['sequence'].append(os.path.abspath(filename))

    # Use the existing Manifest loader to create a TestSuite and flatten it.
    suite = runtests.Manifest(root_manifest, root_manifest['filename']).load()
    tests = list(_flatten_suite(suite))

    # Parametrize the test function with Test instances and use their
    # string representation as test ids for readability in pytest output.
    metafunc.parametrize('manifest_test', tests, ids=[str(t) for t in tests])


@pytest.hookimpl(hookwrapper=True, tryfirst=True)
def pytest_runtest_makereport(item):
    # The hookwrapper gives us the final test report via `outcome.get_result()`.
    outcome = yield
    rep = outcome.get_result()

    # Only handle the main call phase, to match the behaviour of the
    # original runner, which reported only passes and failures/errors.
    if rep.when != 'call':
        return

    # The parametrized pytest test attaches the original runtests.Test
    # instance as the `manifest_test` fixture; retrieve it here.
    manifest_test = item.funcargs.get('manifest_test')
    if manifest_test is None:
        return

    # If an EARL report was requested at configure time, add an assertion
    # for this test based on the pytest outcome.
    earl_report = getattr(item.config, '_earl_report', None)
    if earl_report is None:
        return

    # Map pytest outcomes to whether the test should be recorded as
    # succeeded or failed. Skip 'skipped' outcomes to avoid polluting
    # the EARL report with non-asserted tests.
    if rep.outcome == 'skipped':
        return

    success = (rep.outcome == 'passed')
    try:
        earl_report.add_assertion(manifest_test, success)
    except Exception:
        # Don't let EARL bookkeeping break test execution; stay quiet on error.
        pass


def pytest_sessionfinish(session, exitstatus):
    # If the user requested an EARL report, write it using the existing
    # `EarlReport` helper. The per-test assertions (if any) were appended
    # to config._earl_report during test execution; write the report now
    # if present.
    earl = session.config.getoption('earl')
    earl_report = getattr(session.config, '_earl_report', None)
    if earl and earl_report is not None:
        earl_report.write(os.path.abspath(earl))
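Note that the diff does not show the pytest test function that consumes the parametrized `manifest_test` fixture. A minimal sketch of what such a module might look like, assuming a hypothetical `tests/test_suites.py` (each `manifest_test` is a `unittest.TestCase` produced by `runtests.Manifest`):

```python
# Hypothetical tests/test_suites.py -- not part of this diff, a sketch only.
import unittest


def test_manifest(manifest_test):
    # `manifest_test` is parametrized in conftest.py with unittest.TestCase
    # instances; run each one and surface any failure or error to pytest.
    result = unittest.TestResult()
    manifest_test.run(result)
    for _, traceback_text in result.failures + result.errors:
        raise AssertionError(traceback_text)
```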