@baranylcn baranylcn commented Dec 7, 2025

  • Added user-selectable target role input to the UI layer.
  • Refactored prompt generation to dynamically adapt all evaluation sections based on the selected target role.
  • Strengthened key strengths filtering so only role-relevant or transferable strengths are surfaced when a target role is provided.
  • Added strict score consistency rules:
    • overall_score now always reflects suitability for the target role.
    • role_suitability[0] is forced to match the target role and mirrors overall_score.
  • Improved guardrails to prevent irrelevant strengths (e.g., ML/LLM skills) from appearing when not aligned with the target role.
  • Ensured all narrative assessments remain consistent with numeric scores.

Summary by CodeRabbit

  • New Features
    • Added optional target role selection for resume analysis. When specified, the system now provides tailored feedback including role-specific competency evaluation, strategic insights, and recommendations aligned with your career objectives.


coderabbitai bot commented Dec 7, 2025

Walkthrough

This PR adds optional target role-based resume analysis. Users can now select a target role from a predefined list through a new UI flow. The selected role is passed through analyzecv_pdf_withllm to the prompt generator, which tailors the evaluation criteria, scoring rules, and guidance based on the specified role.

Changes

Resume analysis entry point: levelup/app.py
  • Added an optional target_role parameter to analyzecv_pdf_withllm(). Introduced a new UI flow with a dropdown menu for role selection (mapped to the selected_role variable, defaulting to None), and wired the selected role through the function call into the prompt generator.

Prompt generation: levelup/prompts.py
  • Extended the get_resume_analysis_prompt() signature with an optional target_role parameter. Added a conditional TARGET ROLE FOCUS block that activates when a target role is specified, introducing role-specific scoring rules, competency notes, and evaluation criteria. Updated the JSON output structure with a primary_role_for_json field that reflects the target role when provided. Reorganized and expanded multiple prompt sections (domain matching, competency evaluation, insights, recommendations, missing skills, benchmarking, summary) to incorporate role-conditioned guidance.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • Prompt logic density: The prompts.py changes introduce multiple conditional blocks and role-specific guidance sections that require careful verification for consistency and correctness.
  • Integration points: The wiring between app UI selection and prompt generator parameter passing should be validated for all code paths.
  • Prompt structure reorganization: Verify that reworded/expanded bullet items align with the conditional role logic and don't introduce redundancy or contradictions.

Suggested reviewers

  • MuhammedSenn

Poem

🐰 A role-aware resume now takes flight,
Users pick their target, shining bright,
The prompt adapts with skill and grace,
Tailored guidance finds its place,
Career paths bloom in morning light! ✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 33.33%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title 'feat: target role option' clearly summarizes the main change, adding a target role option to the resume analysis tool, and directly reflects the PR objectives, which focus on user-selectable target roles and role-specific evaluation logic.

coderabbitai bot left a comment
Actionable comments posted: 0

🧹 Nitpick comments (3)
levelup/app.py (2)

3-3: Minor style note: prefer str | None union syntax for consistency.

The codebase uses str | None syntax in prompts.py (line 2), but here Optional[str] is used. Consider using the union syntax consistently across files for Python 3.10+ codebases.

-from typing import Any, Optional, cast
+from typing import Any, cast

Then update the function signature and variable annotation:

 def analyzecv_pdf_withllm(
-    text: str, report_language: str, target_role: Optional[str] = None
+    text: str, report_language: str, target_role: str | None = None
 ) -> dict[str, Any] | None:
-        selected_role: Optional[str]
+        selected_role: str | None

339-379: Consider extracting role options to a constant or configuration.

The hardcoded list of 40+ role options is lengthy and embedded in the UI flow. Extracting this to a module-level constant (e.g., ROLE_OPTIONS) or a configuration file would improve maintainability and make it easier to update roles without modifying the UI logic.

+ROLE_OPTIONS = [
+    "No specific target role",
+    "Data Scientist",
+    "Data Analyst",
+    # ... remaining roles
+]
+
 # Then in the UI section:
-        role_options = [
-            "No specific target role",
-            ...
-        ]
+        role_options = ROLE_OPTIONS
levelup/prompts.py (1)

63-69: Inconsistent string formatting for target_role.

Line 64 uses {target_role} without quotes, while lines 65-68 use "{target_role}" with quotes. This inconsistency may confuse the LLM or produce slightly different parsing behavior.

         notes["strengths"] = f"""
--  Only list the person's strengths that align with the {target_role} role. Do not list strengths that are unrelated to the {target_role} role.
+-  Only list the person's strengths that align with the "{target_role}" role. Do not list strengths that are unrelated to the "{target_role}" role.
 - In all strength-related sections (including "overall_summary.key_strengths"), list strengths that directly support success as "{target_role}" or clearly demonstrate transferable potential toward becoming a stronger "{target_role}" candidate.
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1bf3605 and 7c8c7cd.

📒 Files selected for processing (2)
  • levelup/app.py (3 hunks)
  • levelup/prompts.py (4 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
levelup/app.py (1)
levelup/prompts.py (1)
  • get_resume_analysis_prompt (1-204)
🔇 Additional comments (10)
levelup/app.py (3)

44-47: LGTM!

The function signature update cleanly adds the optional target_role parameter with a sensible default of None, and correctly propagates it to get_resume_analysis_prompt. The change maintains backward compatibility.


381-391: LGTM!

The role selection logic correctly maps "No specific target role" to None and preserves the actual role string otherwise. The type annotation Optional[str] helps with clarity.


393-396: LGTM!

The selected_role is correctly passed to analyzecv_pdf_withllm, completing the data flow from UI selection to prompt generation.

levelup/prompts.py (7)

1-4: LGTM!

Clean function signature with modern union type syntax and a clear docstring describing the purpose.


6-15: LGTM!

The notes dictionary provides a clean, extensible structure for section-specific guidance. Initializing all keys to empty strings ensures the prompt template won't fail if a note is missing.


17-38: Well-structured target role conditioning with clear scoring rules.

The consistency requirements for scores are thorough and should help ensure the LLM produces coherent evaluations. The cap at 70 for candidates without direct evidence of core responsibilities is a good guardrail.


70-75: LGTM!

Good fallback for when no target role is specified—focuses on transferable strengths across plausible career paths rather than role-specific ones.


77-79: LGTM!

Clean conditional assignment that ensures the JSON template has a meaningful placeholder when no target role is specified.


195-198: LGTM!

The JSON template correctly uses primary_role_for_json to ensure the first role_suitability entry matches the target role (or defaults to "Primary Likely Role" when none specified).


102-103: Current implementation safely mitigates prompt injection risk.

The target_role value is interpolated directly into the prompt at line 102. As verified, app.py restricts input through a Streamlit selectbox with a predefined list of role options (lines 381-385), and there are no other code paths that allow arbitrary user input for target_role. The parameter is optional and defaults to None, maintaining safety if omitted. No changes needed.

@baranylcn baranylcn merged commit 6e0d324 into main Dec 7, 2025
6 checks passed