
Conversation

@juhoinkinen (Member)

Closes #856.

@codecov

codecov bot commented Jul 8, 2025

Codecov Report

Attention: Patch coverage is 1.87500% with 157 lines in your changes missing coverage. Please review.

Project coverage is 97.56%. Comparing base (6bae2e5) to head (2013e9c).

Files with missing lines      | Patch % | Lines
annif/backend/llm_ensemble.py | 0.00%   | 155 Missing ⚠️
annif/backend/__init__.py     | 33.33%  | 2 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #859      +/-   ##
==========================================
- Coverage   99.64%   97.56%   -2.09%     
==========================================
  Files          99      100       +1     
  Lines        7349     7509     +160     
==========================================
+ Hits         7323     7326       +3     
- Misses         26      183     +157     

☔ View full report in Codecov by Sentry.

juhoinkinen and others added 4 commits July 8, 2025 15:57
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
@juhoinkinen juhoinkinen requested a review from Copilot July 10, 2025 12:28

Copilot AI left a comment


Pull Request Overview

This PR adds exponentiated weighted averaging to suggestions and implements an LLM-based ensemble backend for ranking and scoring.

  • Extend SuggestionBatch.from_averaged to accept an optional exponents parameter for score exponentiation.
  • Introduce BaseLLMBackend and LLMEnsembleBackend with OpenAI/AzureOpenAI integration and parallel prompt processing.
  • Register the new llm_ensemble backend in the backend factory.
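The exponentiated weighted averaging described in the first bullet can be sketched roughly as follows. This is an illustrative sketch only: the function and parameter names other than `exponents` are assumptions, not Annif's actual `SuggestionBatch.from_averaged` API.

```python
import numpy as np

def averaged_scores(score_arrays, weights, exponents=None):
    """Weighted average of per-source score arrays, with each source's
    scores optionally raised elementwise to an exponent first.

    With exponents=None every exponent defaults to 1, which reduces to
    plain weighted averaging. Illustrative sketch, not the code from
    the PR diff.
    """
    if exponents is None:
        exponents = [1] * len(score_arrays)
    weighted = [w * np.power(scores, e)
                for scores, w, e in zip(score_arrays, weights, exponents)]
    # Normalize by the total weight so scores stay in the original range.
    return np.sum(weighted, axis=0) / sum(weights)
```

Raising scores to an exponent greater than 1 before averaging sharpens the contribution of a source, pushing its mid-range scores toward zero, which is one plausible motivation for making `exponents` optional with a neutral default.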

Reviewed Changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.

File                          | Description
annif/suggestion.py           | Added exponents parameter and updated averaging logic/docstring.
annif/backend/llm_ensemble.py | New LLM ensemble backend: API calls, prompt handling, ensemble logic.
annif/backend/__init__.py     | Registered the llm_ensemble backend.
Comments suppressed due to low confidence (2)

annif/suggestion.py:125

  • [nitpick] Update the docstring for from_averaged to include a description of the new exponents parameter and its default behavior.
        """Create a new SuggestionBatch where the subject scores are the

annif/backend/llm_ensemble.py:263

  • [nitpick] Add a brief docstring to _get_labels_batch to clarify its behavior and inputs, improving code readability.
    def _get_labels_batch(self, suggestion_batch: SuggestionBatch) -> list[list[str]]:

juhoinkinen and others added 2 commits July 10, 2025 15:39
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
@sonarqubecloud



Successfully merging this pull request may close these issues.

LLM ranking/scoring backend
