Add ebm backend #914
Conversation
Codecov Report ✅ All modified and coverable lines are covered by tests.
Coverage diff against main for #914:
Coverage: 99.63% → 99.64%
Files: 103 → 105 (+2)
Lines: 8237 → 8402 (+165)
Hits: 8207 → 8372 (+165)
Misses: 30 → 30
Thanks, this is great! A couple of quick suggestions:
We have identified that the errors in the check are likely related to the download of the default sentenceTransformer model that we configured (BAAI/bge-m3), which fetches 8 GB of additional data. We will look into that.
The GitHub Actions job for testing on Python 3.11 fails because there is not enough disk space (see the job logs). I think we could use a larger GH Actions runner machine, but its setup is an organization-wide setting and it will be billed by usage, so I'll need to check this with our admins (takes less than a day, I hope). The specs for the larger GitHub-hosted machines are these:
Edit: The default runners have 14 GB of disk, and there are actually more options for the large runners.
Alternatively, we could remove some unnecessary stuff from the runner, or distribute the optional dependencies to be installed across separate jobs. But that could be done in another PR, to keep this one simple.
A larger runner is ready. This is how to make CI use it: Run job on GH hosted large-runner. An example run: https://github.com/NatLibFi/Annif/actions/runs/19363594070/job/55401126360
Edit: But as discussed with @osma, it would be better if installing did not require so much disk space and network traffic. Maybe the installation could somehow be slimmed down?
Thanks @juhoinkinen for expanding the GitHub runners. We have already tried to reduce traffic by skipping the download of the default sentenceTransformer model that we configured for ebm. It now only uses a mock model in the tests, so the Hugging Face cache should remain empty. I am not sure how we could trim down the installation. Installing sentenceTransformers is essential for the package, and that brings in all the other heavy libraries (transformers, torch, etc.). As we now only work with a mock model in the tests, maybe one could run the tests without actually installing sentenceTransformers, for example by installing ebm with the --no-deps option. However, isn't one reason for having this CI precisely that we want to test that the library can be properly installed, including all dependencies?
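As an illustration, skipping the heavy dependencies at install time might look like this (a sketch only; the package name is taken from the discussion, and whether --no-deps is acceptable for CI is exactly the open question above):

```bash
# Hypothetical sketch: install the backend package without its dependencies,
# so torch/transformers/sentence-transformers are not pulled in; the test
# suite would then rely on the mock embedding model only.
pip install --no-deps ebm4subjects
```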
Good that you were able to avoid downloading the sentence transformer model and replaced it with a mock implementation. I think that installing dependencies (software libraries) is essential for a CI pipeline like this. It's a bit sad that these libraries are so huge. Here are a couple of ideas for slimming down the installation:
The different PyTorch variants are described in the documentation. For CPU-only, you need to pass an extra index URL pointing to the CPU-only wheels. It looks like PyTorch is also developing new wheel variants that auto-detect the hardware; not sure if this is ready for this kind of use yet.
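For illustration, a CPU-only PyTorch install with pip typically points at the dedicated wheel index; a minimal sketch (the exact flag suggested in the comment above is not preserved here):

```bash
# Install CPU-only PyTorch wheels instead of the much larger default CUDA builds
pip install torch --index-url https://download.pytorch.org/whl/cpu
```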
Is there a way to figure out these problems in CI/CD in a local mode, e.g. using some Docker images, so that we don't need to burn through NatLibFi's resources? I must admit I am very inexperienced with GitHub Actions.
@mfakaehler there are ways to run GitHub Actions locally, for example https://github.com/nektos/act. Please don't worry about the costs. They are really peanuts, and we are very interested in getting the
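For example, a minimal local run with act could look like this (the job name is illustrative and would have to match one defined in the repository's workflow files):

```bash
# List the jobs act can see in .github/workflows/
act -l

# Run a single job locally inside a Docker container
act -j test
```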
I have been thinking about the CPU-only installation. Before we start to re-program our dependencies, I would like to discuss what we are aiming for. Would you like to deploy the EBM backend without GPU support entirely, to reduce the size of the installation? Or are we talking about ways to "cheat" the CI pipeline into only downloading the PyTorch CPU installation, while still enabling other users to use GPUs?
Just a thought, but how much work would it require, or is it possible at all, to have an option (probably in ebm4subjects) to generate the embeddings in an external service instead of on the local machine? Currently, in our deployment setup we are running Annif in an OpenShift cluster that has just CPUs, but we have GPUs available in a separate cluster.
@mfakaehler Excellent questions! This is a bit similar to the discussion in #804 about Docker image variants (mainly related to the XTransformer backend). There, the conclusion was that it doesn't make sense to include GPU support in Docker images (at least in the primary image variant), because it would increase their size a lot and still be difficult to run. But XTransformer is a bit different from EBM in that it only requires a GPU during training, not so much at inference time. In my understanding, EBM in practice needs a GPU at inference time as well. So we can't just apply the same logic directly. Here are some things that I think would be desirable:
1. It should be possible to use Annif (with EBM) without installing GPU dependencies (even if it's slow). This would be especially desirable in the case of GitHub Actions CI, because CI jobs run very often, so they should be as lightweight as possible and complete as fast as possible.
2. It should be possible to use different brands of GPUs, not just limited to NVIDIA/CUDA but also AMD/ROCm and possibly others (e.g. Vulkan) if PyTorch has support.
I realize that these may be difficult to achieve in our current way of managing dependencies. In my understanding, Poetry only has limited support for PyTorch variants, making it difficult to implement flexible choices. I think uv has better support - see e.g. https://docs.astral.sh/uv/guides/integration/pytorch/. So we could consider switching to uv if it helps in this area. Does the GPU inference to calculate embeddings have to happen in the same Annif process, or could it be in an external service accessible via an API? For example, commercial LLM providers (OpenAI, Anthropic/Claude etc.) have embedding APIs for RAG and similar applications, and locally run LLM engines such as llama.cpp and Ollama provide embedding APIs as well.
Thank you both for your thoughts. I think it should be possible to implement the embedding generation also as API calls to an external service like llama.cpp or Hugging Face TEI. I suspect that we might lose some efficiency compared to "offline inference" in terms of batch processing. Also, we add the complexity of setting up the inference engine to the user's burden. So we would have fewer Python dependencies, but more Docker dependencies, thinking of the additional container(s) that would need to run in production services. Offline inference only actually takes up GPU resources when called upon; online services need to run permanently and consume power and memory in idle mode.
Maybe this is a good point in time to make decisions for the future, e.g. when we also envision generative models in future backends like llmensemble.
If sourcing model inference to external services is the path that you prefer, I am sure we can accommodate that. I see benefits on both ends.
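For a rough idea of what such an external call involves, here is a sketch against a locally running Hugging Face Text Embeddings Inference (TEI) server (host, port and input text are illustrative; llama.cpp and Ollama offer comparable embedding endpoints):

```bash
# Request embeddings for a text snippet from a TEI instance
curl -s http://localhost:8080/embed \
  -H "Content-Type: application/json" \
  -d '{"inputs": "Annif is a tool for automated subject indexing"}'
```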
@mfakaehler Good points. I don't think supporting an external embedding API would solve our problems, even though it would be a nice feature for EBM (as Juho implied, sometimes it's easier to separate the GPU-dependent parts of a system into their own environment).
Just to let you know: there was a public holiday in our region of Germany, which is why I cannot currently discuss this with Clemens and Christoph. Meanwhile, I will try to find some example packages where PyTorch is implicitly imported with another high-level library like transformers, and see how others deal with the complexity of PyTorch installs. It feels a bit odd to handle that in the ebm4subjects package, as it never explicitly imports PyTorch. So I feel inclined to leave that piece of environment management to the user: if a user needs a particular PyTorch install, e.g. with AMD/ROCm support, they would need to install that on their own before installing everything else. However, if we don't find anything more elegant, we will provide optional dependencies for ebm4subjects in two or three different flavours, e.g.:
Annif could then offer equal flavours of ebm, and we could resolve to only installing ebm-bare in the CI pipeline.
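A rough sketch of how such flavours could look as pip extras (the [st] extra is hypothetical; the actual flavour names, e.g. ebm-bare, and the packaging are still to be decided):

```bash
# Bare flavour: base package only, no sentence-transformers/torch,
# e.g. for CI runs that use the mock embedding model
pip install ebm4subjects

# CPU flavour: an extra that adds sentence-transformers, combined with
# the CPU-only PyTorch wheel index to keep the environment small
pip install "ebm4subjects[st]" --extra-index-url https://download.pytorch.org/whl/cpu

# GPU flavour: the same extra with the default (CUDA-enabled) PyTorch wheels
pip install "ebm4subjects[st]"
```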
Thanks @mfakaehler, I think it's a good idea to look at how other packages handle this. I must say I'm tempted by the special support for PyTorch that uv provides. It would e.g. allow specifying the PyTorch variant at install time (a sketch of such commands follows below). If we want to make use of that for Annif itself, we would have to switch from Poetry to uv, which is probably not trivial but should be doable. We have already switched dependency management systems several times in Annif's history: I think we started with pipenv, then switched to plain pip+venv, and more recently have been using Poetry.
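The concrete commands are not preserved in the comment above; based on uv's PyTorch integration guide (linked earlier), variant selection might look roughly like this:

```bash
# Explicitly select the CPU-only PyTorch backend at install time
uv pip install torch --torch-backend=cpu

# Or let uv auto-detect a suitable backend (CUDA, ROCm, CPU) from the system
UV_TORCH_BACKEND=auto uv pip install torch
```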
We already experimented with uv for the
@mfakaehler you may want to check out PR #923, where I've experimented with switching to uv.
Thanks. That looks good. @RietdorfC meanwhile analyzed the size of the venv you get when installing our ebm4subjects standalone (without Annif):
We'll report on progress with the discussed changes to ebm at some other time. This is only to confirm that a switch to uv with the appropriate install flags would indeed help to control the environment size.
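(The actual figures are not reproduced above; one simple way to compare environment sizes between install variants, assuming a standard .venv directory, is for example:)

```bash
# Report the on-disk size of the virtual environment
du -sh .venv
```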
Dear Annif-Team,
In case b), Annif could offer two flavours of the backend during installation. Which option would you prefer: making the sentence-transformers support optional, or shipping it with the base package of ebm4subjects? And how would you like to have that tested in the CI pipeline? Testing the offline-inference variant will always lead to bundling pytorch and sentence-transformers in the installation, and thus increase the container size. Wishing you all the best for the holidays!



Dear Annif-Team,
As announced in issue #855, we would like to propose a new backend for Annif, Embedding Based Matching (EBM), which has been created by @RietdorfC and myself.
Here is a first draft for a readme article, to be added to the wiki:
Backend-EBM.md
Looking forward to your feedback!
Best,
Maximilian