Conversation

@mielvds (Collaborator) commented Nov 19, 2025

This PR is an attempt to migrate the original test runner to pytest with minimal changes. It fixes #206.

Tests can be run with or without pytest; I've added notes in tests/runtests.py wherever functionality is now handled by pytest, and those parts can be removed after review or at some point in the future. When more unit tests arrive, we might want to rename runtests.py to something more accurate (or merge it with test_manifests.py). A hypothetical sketch of the pytest side follows below.
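
To illustrate the approach, here's a minimal, hypothetical sketch of how manifest-driven tests can look under pytest; the manifest path and helper names are assumptions for illustration, not code taken from test_manifests.py:

```python
import json

import pytest

# Assumed manifest location; the real path depends on the submodule layout.
MANIFEST = "../specifications/json-ld-api/tests/expand-manifest.jsonld"

def load_entries(path):
    """Read the 'sequence' list of test entries from a manifest file."""
    with open(path) as f:
        return json.load(f).get("sequence", [])

@pytest.mark.parametrize(
    "entry", load_entries(MANIFEST), ids=lambda e: e.get("@id", "?")
)
def test_manifest_entry(entry):
    # A real runner dispatches on entry["@type"] (e.g. PositiveEvaluationTest
    # vs. NegativeEvaluationTest) and compares against the expected output.
    assert "@id" in entry
```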

Copilot was used to explain the original code and to help add commentary to tests/runtests.py, which I checked for correctness. This helped a lot in understanding the existing code, but apologies if I misunderstood anything; it wasn't easy :) Next, I used Copilot to generate some pytest boilerplate and took it from there.

The EARL reports match up between the two test runners. I've also updated the GitHub Actions workflow to use pytest, and that seems to work as well.

@mielvds marked this pull request as ready for review on November 19, 2025, 13:03
@anatoly-scherbakov (Collaborator) left a comment

I tried running both the pytest and unittest implementations locally, and the results are close. I consider that a good enough reason to get this merged.

I have been thinking about creating a generic tool to run JSON-LD test suites against any toolkit in any language, but that's an endeavor of its own, completely out of scope of this project.

@mielvds (Collaborator, Author) commented Nov 26, 2025

@anatoly-scherbakov what do you mean by 'close'? :) As in, not equivalent? There should be no difference functionality-wise.

@BigBlueHat @davidlehn could you have a look at this and tell me what you think?

@anatoly-scherbakov (Collaborator)

@mielvds I was being overcautious. :)

I asked Cursor to rerun both test suites, and they match. I am not sure what the "pending" tests are, but I would not consider that a blocking issue.

[screenshot of matching test results]

@mielvds (Collaborator, Author) commented Nov 27, 2025

As far as I can tell, the 'pending' tests more or less correspond to pytest's 'xfail'. However, I can't find an easy hook to map them. It's also not clear whether 'pending' also implies 'skipped' in runtests.py, but I believe it does.

I took another simple stab at it, which classifies the pending tests as xfail, but this might not be the intended behavior.
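
For reference, here's one way the mapping could look in a conftest.py. This is a hedged sketch: PENDING_IDS and the node-id matching are assumptions for illustration, not what this PR actually does:

```python
import pytest

# Hypothetical set of manifest test ids known not to pass yet.
PENDING_IDS = {"tc034", "te122"}

def pytest_collection_modifyitems(config, items):
    """Mark 'pending' suite entries as xfail during collection, so pytest
    reports them as XFAIL while they fail and as XPASS once they start
    passing."""
    for item in items:
        if any(pending in item.nodeid for pending in PENDING_IDS):
            item.add_marker(
                pytest.mark.xfail(reason="pending in the JSON-LD test suite",
                                  strict=False)
            )
```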

@davidlehn (Member) left a comment

  • Why have both unittest and pytest supported vs switching to pytest? (If that's the chosen direction.)
  • Needs README update.
  • Needs CHANGELOG entry.
  • Looks like the submodule thing removed normalization tests. I don't know why. I think they should remain until the task of switching to rdf-canon algorithm and tests comes up.

@davidlehn (Member)

Quite a bit of work in here! Congrats on figuring the testing suite out enough to get this far. I think it grew a bit complex for various reasons while the early JSON-LD specs were in development and we were trying to keep the js and py code sort of in sync.

I'm also forgetting what the pending tests were for. I think it might have been a trick to not fail on them, but to let us know when the code had caught up enough for them to pass. I'm probably forgetting details.

I suspect as this moves forward, it would be fine to refactor how things work as long as all the main test suite tests keep running properly.

```python
]

SIBLING_DIRS = [
    '../specifications/json-ld-api/tests/',
```
Member:

Looks like this is expecting the specs in a new location now? Historically we had those spec dirs as siblings of the implementations (js, py, php), rather than in a specifications dir. It's of course opinionated how to set that up, but it did work well across those repos before.

Collaborator:

These are the submodules. @mielvds, thanks for configuring this! I intended to do that after the submodules PR was merged. I think the docs have to be updated; is that correct?

Collaborator:

@davidlehn one of the advantages of the specifications dir is that it can be used by an agent to help with pyld. An agent should have the definitive documentation right there. Configuring Claude Code or Cursor to look at a sibling dir can of course be done, but it requires some extra effort; here, it comes naturally.

Collaborator (Author):

> @davidlehn one of the advantages of the specifications dir is that it can be used by an agent to help with pyld. An agent should have the definitive documentation right there. Configuring Claude Code or Cursor to look at a sibling dir can of course be done, but it requires some extra effort; here, it comes naturally.

Oh, tbh I didn't realize this was the folder introduced by the submodules. I thought this was where they had always been stored and that I just hadn't gotten the tests to run :)
Submodules or not, a specifications folder to keep things tidy is probably a good idea.

Collaborator:

@mielvds yes, the spec directories were always supposed to sit beside the root directory of this repo.

Member:

As I mentioned in the docs PR, the "siblings" naming only makes sense when they are sibling dirs alongside the pyld dir, not in it. Fine to fix it to work as is for now, but that should change; otherwise it's confusing.
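
For illustration, a hedged sketch of how the runner could support both layouts; find_suite_dir is a hypothetical helper, not code from this PR:

```python
from pathlib import Path

def find_suite_dir(name: str) -> Path | None:
    """Return the first existing test-suite dir for a spec repo `name`,
    trying the in-repo 'specifications' submodule layout first, then the
    historical sibling-of-pyld layout."""
    here = Path(__file__).resolve().parent  # assumed to be the tests/ dir
    candidates = [
        here.parent / "specifications" / name / "tests",   # submodule layout
        here.parent.parent / name / "tests",               # sibling layout
    ]
    for candidate in candidates:
        if candidate.is_dir():
            return candidate
    return None
```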

Member:

> @davidlehn one of the advantages of the specifications dir is that it can be used by an agent to help with pyld. An agent should have the definitive documentation right there. Configuring Claude Code or Cursor to look at a sibling dir can of course be done, but it requires some extra effort; here, it comes naturally.

That is so far out of scope I can't even see it. If that wasn't a joke, then I'd suggest using better tools.

@mielvds (Collaborator, Author) commented Dec 5, 2025

  • Why have both unittest and pytest supported vs switching to pytest? (If that's the chosen direction.)

I deliberately left that open for discussion; I don't like to swoop in and make drastic changes. Also, immediately migrating off the unittest implementation felt like too much, given that I hadn't written the original code.

We could get rid of the custom test runners already, but only with your blessing :)

  • Needs README update.
  • Needs CHANGELOG entry.

I've added those.

  • Looks like the submodule thing removed normalization tests. I don't know why. I think they should remain until the task of switching to rdf-canon algorithm and tests comes up.

I agree, but let's take that up in #218.

@mielvds (Collaborator, Author) commented Dec 5, 2025

I'll wait for #218 before merging this one.

@mielvds changed the base branch from master to adjust-submodules on December 5, 2025, 10:33