base: adjust-submodules
Migrate to pytest #213
Conversation
anatoly-scherbakov
left a comment
I tried running both the pytest and unittest implementations locally, and the results are close. I consider that a good enough reason to get this merged.
I have been thinking about creating a generic tool to run JSON-LD test suites against any toolkit in any language, but that's an endeavor of its own, completely out of scope for this project.
(force-pushed from 2df6074 to bad733f)
@anatoly-scherbakov what do you mean by 'close'? :) As in not equivalent? There should be no difference functionality-wise. @BigBlueHat @davidlehn could you have a look at this and tell me what you think?
@mielvds I was being overcautious. :) I asked Cursor to rerun both test suites again, and they match. I am not sure what the "pending" tests are, but I would not consider that a breaking issue.
As far as I can tell, the 'pending' tests more or less correspond to 'xfail'. However, I can't really find an easy hook to map those, and it's also not clear whether 'pending' also means 'skipped'. I took another simple stab at it, which classifies the pending tests as xfail, but this might not be the intended behavior.
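For illustration, a minimal sketch of one way to map such tests onto pytest's xfail machinery, assuming the manifest parsing yields entries with a pending flag; the entry format and all names here are hypothetical, not this PR's actual code:

```python
import pytest

# Hypothetical manifest entries; a real suite would build these from the
# JSON-LD test-suite manifests.
MANIFEST_ENTRIES = [
    {"id": "tc001", "pending": False},
    {"id": "tc002", "pending": True},
]

def as_param(entry):
    # Run pending tests as expected failures instead of skipping them:
    # a pending test that starts passing shows up as XPASS.
    marks = [pytest.mark.xfail(reason="pending")] if entry["pending"] else []
    return pytest.param(entry, id=entry["id"], marks=marks)

@pytest.mark.parametrize("entry", [as_param(e) for e in MANIFEST_ENTRIES])
def test_manifest_entry(entry):
    # Stand-in for the real per-entry JSON-LD test logic.
    assert entry["id"].startswith("tc")
```

With a non-strict xfail like this, pending tests still run, and any that begin to pass are reported rather than silently skipped, which roughly matches the behavior described for the original runner below.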
davidlehn
left a comment
- Why keep both unittest and pytest supported vs. switching to pytest entirely? (If that's the chosen direction.)
- Needs README update.
- Needs CHANGELOG entry.
- Looks like the submodule change removed the normalization tests. I don't know why. I think they should remain until the task of switching to the rdf-canon algorithm and tests comes up.
Quite a bit of work in here! Congrats on figuring the testing suite out enough to get this far. I think it grew a bit complex for various reasons while the early JSON-LD specs were in development and we were trying to keep the js and py code roughly in sync. I'm also forgetting what the pending tests were for. I think it might have been a trick to not fail on them, but to let us know when the code caught up enough for them to pass. I'm probably forgetting details. I suspect that as this moves forward, it would be fine to refactor how things work, as long as all the main test suite tests keep running properly.
Review thread on this hunk:

```python
SIBLING_DIRS = [
    '../specifications/json-ld-api/tests/',
```
Looks like this is expecting specs in a new location now? Historically we had those spec dirs as siblings of implementations (js, py, php), vs in a specifications dir. It's of course opinionated how to set that up, but it did work well across those repos before.
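For illustration, a hedged sketch of how a runner could tolerate both layouts, preferring the in-repo specifications dir and falling back to the historical sibling layout; the exact paths are assumptions drawn from this thread, not the PR's actual code:

```python
from pathlib import Path

# Directory holding this file; assumed to be tests/ inside the pyld repo.
TESTS_DIR = Path(__file__).resolve().parent

# Candidate suite locations: the in-repo specifications/ submodules first,
# then the historical layout with spec repos as siblings of the pyld dir.
CANDIDATES = [
    TESTS_DIR.parent / 'specifications' / 'json-ld-api' / 'tests',
    TESTS_DIR.parent.parent / 'json-ld-api' / 'tests',
]

def find_suite_dir() -> Path:
    for candidate in CANDIDATES:
        if candidate.is_dir():
            return candidate
    raise FileNotFoundError(
        'json-ld-api test suite not found; initialize the submodules '
        'or clone the spec repos next to this one'
    )
```

Checking both locations would keep the runner working across repos that still use the sibling layout.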
These are the submodules. @mielvds thanks for configuring this! I intended to do that after the submodules PR was merged. I think the docs have to be updated; is that correct?
@davidlehn one of the advantages of the specifications dir is that it could be used by an agent to help with pyld. The agent would have definitive documentation right there. Configuring straightforward access for Claude Code or Cursor so that it would look at a sibling dir can of course be done, but requires certain extra effort; here, it comes naturally.
Oh, tbh I didn't realize this was the folder introduced by the submodules. I thought this was where they were always stored and I didn't get the tests to run :)
Submodules or not, a specifications folder to keep things tidy is probably a good idea
@mielvds yes, the spec directories were always supposed to sit beside the root directory of this repo.
As I mentioned in the docs PR, the "siblings" naming only makes sense when they are sibling dirs alongside the pyld dir, not inside it. Fine to fix it to work as-is for now, but that should change later; otherwise it's confusing.
> @davidlehn one of the advantages of the specifications dir is that it could be used by an agent to help with pyld. The agent would have definitive documentation right there. Configuring straightforward access for Claude Code or Cursor so that it would look at a sibling dir can of course be done, but requires certain extra effort; here, it comes naturally.
That is so far out of scope I can't even see it. If that wasn't a joke, then I'd suggest using better tools.
I deliberately left that open for discussion; I don't like to swoop in and make drastic changes. Also, immediately migrating the unittest implementation felt like too much, given that I hadn't written the original code. We could already get rid of the custom test runners, though, but only with your blessing :)
I've added those
I agree, but let's take that up in #218.

I'll wait for #218 before merging this one.
(force-pushed from 3713302 to c221bfe)

[Screenshot: pytest results]
This PR is an attempt to migrate the original test runner to pytest with minimal changes. It fixes #206.
Tests can be run with or without pytest; I've added notes in `tests/runtests.py` wherever functionality is handled by pytest and, if desired, can be removed after review or somewhere in the future. When more unit tests arrive, we might want to rename `runtests.py` to something more accurate (or merge it with `test_manifests.py`).

Copilot was used to explain the original code and to help add commentary to `tests/runtests.py`, which I checked for correctness. This helped a lot in understanding the existing code, but apologies if I misunderstood anything; it wasn't easy :) Next, I used Copilot to provide some pytest boilerplate and took it from there.

The EARL reports match up between the two test runners. I've also updated the GitHub Actions to use pytest, and that seems to work as well.
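On the "with or without pytest" point: pytest collects plain unittest.TestCase classes natively, so a transitional layout can keep test files runnable both ways. A minimal illustrative sketch, not this PR's code:

```python
import unittest

class ExpandTest(unittest.TestCase):
    """Placeholder for a JSON-LD expand test; illustrative only."""

    def test_expand_empty_document(self):
        # A real test would call jsonld.expand() and compare the result
        # against the expected output from the test-suite manifest.
        self.assertEqual([], [])

if __name__ == '__main__':
    unittest.main()  # `python test_expand.py` still works without pytest
```

The same file is picked up unchanged by `pytest tests/`, which is what allows retiring the custom runner gradually.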