
Conversation

@brb-nv
Collaborator

@brb-nv brb-nv commented Dec 27, 2025

Description

This PR fixes a hang that occurs when pipeline parallelism (PP) and context (helix) parallelism (CP) are used together.

Background:

  • Helix parallelism splits the input sequence length (ISL) across CP ranks and is used only on generation servers in disaggregated serving.
  • On top-of-tree (ToT) main, the system hangs when the PP + helix CP combination is used.

Root cause:

  • In executor_request_queue.py, requests are broadcast only to TP groups, not to CP groups.

Fix:

  • Broadcast requests to CP groups as well (see the sketch below).
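
For illustration, a minimal sketch of the broadcast change, assuming the distributed handle exposes cp_size and cp_broadcast counterparts mirroring the existing TP helpers; the merged code may be structured differently (a later review thread consolidates this into a tp_cp_broadcast helper):

# Sketch only, not the exact PR code: extend the first-PP-stage broadcast so
# CP ranks also receive new requests. `dist` is assumed to expose tp_size,
# cp_size, is_first_pp_rank, tp_broadcast, and cp_broadcast.
def broadcast_new_requests(dist, payloads):
    """Fan request payloads out to every rank of the first PP stage."""
    if dist.is_first_pp_rank:
        # Existing behavior: broadcast within the TP group of the first PP stage.
        if dist.tp_size > 1:
            payloads = dist.tp_broadcast(payloads, root=0)
        # Fix: with helix (context) parallelism, CP ranks on the first PP stage
        # also need the payloads; without this broadcast they wait indefinitely,
        # which is the observed PP + CP hang.
        if dist.cp_size > 1:
            payloads = dist.cp_broadcast(payloads, root=0)
    return payloads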

Test Coverage

$ pytest tests/integration/defs/accuracy/test_disaggregated_serving.py::TestDeepSeekV3Lite::test_auto_dtype_with_helix[nccl-cudagraph:none-pp2tp1cp2] -s -v
$ pytest tests/integration/defs/accuracy/test_disaggregated_serving.py::TestDeepSeekV3Lite::test_auto_dtype_with_helix[nccl-cudagraph:with_padding-pp2tp1cp2] -s -v
$ pytest tests/integration/defs/accuracy/test_disaggregated_serving.py::TestDeepSeekV3Lite::test_auto_dtype_with_helix[fifo-cudagraph:none-pp2tp1cp2] -s -v
$ pytest tests/integration/defs/accuracy/test_disaggregated_serving.py::TestDeepSeekV3Lite::test_auto_dtype_with_helix[fifo-cudagraph:with_padding-pp2tp1cp2] -s -v

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

Summary by CodeRabbit

  • New Features

    • Added context parallelism (CP) group broadcasting support for improved distributed communication.
  • Tests

    • Expanded test coverage with new parameterization for disaggregated serving configurations.
    • Updated test suites to support enhanced parallelism strategies.


@brb-nv brb-nv requested review from a team as code owners December 27, 2025 21:21
@coderabbitai
Contributor

coderabbitai bot commented Dec 27, 2025

📝 Walkthrough

Walkthrough

The changes add context-parallel (CP) broadcast functionality to distributed communicator classes, extend request queue broadcasting logic to support CP groups, and parameterize integration tests with new generation parallelism configurations while updating test registry entries.

Changes

Cohort / File(s) Summary
Distributed Communication
tensorrt_llm/_torch/distributed/communicator.py
Added cp_broadcast(self, obj, root=0, chunk_size: int = 4 * 1024 * 1024) method to both MPIDist and TorchDist classes. Mirrors existing tp_broadcast logic to support context-parallel group broadcasting with chunked sending for large objects. A sketch of this helper follows the table.
Request Queue Broadcasting
tensorrt_llm/_torch/pyexecutor/executor_request_queue.py
Modified _broadcast_new_requests() control flow to perform intra-first-PP-stage broadcasts within both TP group (when tp_size > 1) and CP group (when cp_size > 1), ensuring all ranks in the first PP stage receive payloads.
Test Implementation
tests/integration/defs/accuracy/test_disaggregated_serving.py
Increased device skip threshold from 4 to 8; parameterized test_auto_dtype_with_helix with gen_pp, gen_tp, gen_cp parameters; introduced gen_ep variable; expanded cache configurations with max_tokens_in_buffer: 8192; updated tensor parallel size to 4 and adjusted topology settings.
Test Registry—QA Core
tests/integration/test_lists/qa/llm_function_core.txt
Updated test entry names to append -pp2tp1cp2 suffix for DeepSeekV3Lite test_auto_dtype_with_helix variants across all cudagraph configurations (6 entries in core QA, 6 entries in accuracy/disaggregated_serving).
Test Registry—GB200 Multi-GPU
tests/integration/test_lists/test-db/l0_gb200_multi_gpus.yml
Removed six test entries for auto_dtype helix tests (2 pre-merge, 4 post-merge).
Test Registry—GB200 Multi-Node
tests/integration/test_lists/test-db/l0_gb200_multi_nodes.yml
Added two new test entries with -pp2tp1cp2 suffix: fifo-cudagraph:with_padding (pre-merge) and nccl-cudagraph:with_padding (post-merge).
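
To make the communicator change concrete, here is a hedged sketch of a TorchDist-style cp_broadcast, assuming an initialized torch.distributed process group for the CP ranks; the name cp_group_pg follows the walkthrough above and is not verified against the final code, and the MPIDist variant instead delegates to a chunked safe_broadcast over cp_comm.

import torch
import torch.distributed as dist

# Sketch based on the summary above; the real method lives on TorchDist and
# uses the CP process group taken from the parallel mapping.
def cp_broadcast(obj, cp_group_pg, root: int = 0):
    """Broadcast a tensor or picklable object within the context-parallel group."""
    if isinstance(obj, torch.Tensor):
        # Tensors are broadcast in place within the CP process group.
        dist.broadcast(obj, src=root, group=cp_group_pg)
        return obj
    # Non-tensor payloads go through broadcast_object_list, which pickles the
    # object on the root rank and unpickles it on the other CP ranks.
    container = [obj]
    dist.broadcast_object_list(container, src=root, group=cp_group_pg)
    return container[0]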

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 11.11%, which is below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Title check (✅ Passed): The title clearly and concisely identifies the main issue being fixed: the PP+CP combination with helix parallelism.
  • Description check (✅ Passed): The description includes background context, root cause analysis, the fix applied, and specific test coverage examples following the template structure.

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ee07a7c and de89ed6.

📒 Files selected for processing (6)
  • tensorrt_llm/_torch/distributed/communicator.py
  • tensorrt_llm/_torch/pyexecutor/executor_request_queue.py
  • tests/integration/defs/accuracy/test_disaggregated_serving.py
  • tests/integration/test_lists/qa/llm_function_core.txt
  • tests/integration/test_lists/test-db/l0_gb200_multi_gpus.yml
  • tests/integration/test_lists/test-db/l0_gb200_multi_nodes.yml
💤 Files with no reviewable changes (1)
  • tests/integration/test_lists/test-db/l0_gb200_multi_gpus.yml
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces. Do not use tabs
Always maintain the namespace when importing in Python, even if only one class or function from a module is used
Python files should use snake_case naming: some_file.py
Python classes should use PascalCase naming: class SomeClass
Python functions and methods should use snake_case naming: def my_awesome_function():
Python local variables should use snake_case naming: my_variable = ...
Python variable names that start with a number should be prefixed with 'k': k_99th_percentile = ...
Python global variables should use upper snake_case with prefix 'G': G_MY_GLOBAL = ...
Python constants should use upper snake_case naming: MY_CONSTANT = ...
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Python comments should be reserved for code within a function, or interfaces that are local to a file
Use Google style docstrings in Python for classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with type and description
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except to the smallest set of errors possible
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible, using the else block for logic

Files:

  • tensorrt_llm/_torch/distributed/communicator.py
  • tests/integration/defs/accuracy/test_disaggregated_serving.py
  • tensorrt_llm/_torch/pyexecutor/executor_request_queue.py
**/*.{cpp,h,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

All TensorRT-LLM Open Source Software code should contain an NVIDIA copyright header that includes the year of its latest meaningful modification

Files:

  • tensorrt_llm/_torch/distributed/communicator.py
  • tests/integration/defs/accuracy/test_disaggregated_serving.py
  • tensorrt_llm/_torch/pyexecutor/executor_request_queue.py
🧠 Learnings (15)
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.

Applied to files:

  • tests/integration/test_lists/qa/llm_function_core.txt
  • tests/integration/test_lists/test-db/l0_gb200_multi_nodes.yml
📚 Learning: 2025-09-17T02:48:52.732Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 7781
File: tests/integration/test_lists/waives.txt:313-313
Timestamp: 2025-09-17T02:48:52.732Z
Learning: In TensorRT-LLM, `tests/integration/test_lists/waives.txt` is specifically for waiving/skipping tests, while other test list files like those in `test-db/` and `qa/` directories are for different test execution contexts (pre-merge, post-merge, QA tests). The same test appearing in both waives.txt and execution list files is intentional - the test is part of test suites but will be skipped due to the waiver.

Applied to files:

  • tests/integration/test_lists/qa/llm_function_core.txt
  • tests/integration/test_lists/test-db/l0_gb200_multi_nodes.yml
📚 Learning: 2025-08-06T13:58:07.506Z
Learnt from: galagam
Repo: NVIDIA/TensorRT-LLM PR: 6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.

Applied to files:

  • tests/integration/test_lists/qa/llm_function_core.txt
📚 Learning: 2025-11-27T09:23:18.742Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 9511
File: tests/integration/defs/examples/serve/test_serve.py:136-186
Timestamp: 2025-11-27T09:23:18.742Z
Learning: In TensorRT-LLM testing, when adding test cases based on RCCA commands, the command format should be copied exactly as it appears in the RCCA case, even if it differs from existing tests. For example, some RCCA commands for trtllm-serve may omit the "serve" subcommand while others include it.

Applied to files:

  • tests/integration/test_lists/qa/llm_function_core.txt
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • tests/integration/test_lists/qa/llm_function_core.txt
  • tests/integration/test_lists/test-db/l0_gb200_multi_nodes.yml
📚 Learning: 2025-10-22T06:53:47.017Z
Learnt from: xinhe-nv
Repo: NVIDIA/TensorRT-LLM PR: 8534
File: scripts/format_test_list.py:1-6
Timestamp: 2025-10-22T06:53:47.017Z
Learning: The file `scripts/format_test_list.py` in the TensorRT-LLM repository does not require the NVIDIA Apache-2.0 copyright header.

Applied to files:

  • tests/integration/test_lists/qa/llm_function_core.txt
📚 Learning: 2025-08-26T09:49:04.956Z
Learnt from: pengbowang-nv
Repo: NVIDIA/TensorRT-LLM PR: 7192
File: tests/integration/test_lists/test-db/l0_dgx_b200.yml:56-72
Timestamp: 2025-08-26T09:49:04.956Z
Learning: In TensorRT-LLM test configuration files, the test scheduling system handles wildcard matching with special rules that prevent duplicate test execution even when the same tests appear in multiple yaml files with overlapping GPU wildcards (e.g., "*b200*" and "*gb200*").

Applied to files:

  • tests/integration/test_lists/qa/llm_function_core.txt
  • tests/integration/test_lists/test-db/l0_gb200_multi_nodes.yml
📚 Learning: 2025-08-14T06:36:40.701Z
Learnt from: timlee0212
Repo: NVIDIA/TensorRT-LLM PR: 6886
File: tensorrt_llm/_torch/models/modeling_deepseekv3.py:0-0
Timestamp: 2025-08-14T06:36:40.701Z
Learning: In DeepSeek V3 model (tensorrt_llm/_torch/models/modeling_deepseekv3.py), the disagreement between AllReduce.__init__ guard and _compute_mlp_tp_size logic for MNNVL usage is expected by design. The AllReduce component and MLP TP-size computation intentionally use different criteria for MNNVL availability decisions.

Applied to files:

  • tests/integration/test_lists/qa/llm_function_core.txt
📚 Learning: 2025-08-13T11:07:11.772Z
Learnt from: Funatiq
Repo: NVIDIA/TensorRT-LLM PR: 6754
File: tests/integration/test_lists/test-db/l0_a30.yml:41-47
Timestamp: 2025-08-13T11:07:11.772Z
Learning: In TensorRT-LLM test configuration files like tests/integration/test_lists/test-db/l0_a30.yml, TIMEOUT values are specified in minutes, not seconds.

Applied to files:

  • tests/integration/test_lists/test-db/l0_gb200_multi_nodes.yml
📚 Learning: 2025-09-17T06:01:01.836Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7785
File: tests/integration/defs/perf/utils.py:321-333
Timestamp: 2025-09-17T06:01:01.836Z
Learning: In test infrastructure code for disaggregated serving tests, prefer logging errors and continuing execution rather than raising exceptions on timeout, to avoid disrupting test cleanup and causing cascading failures.

Applied to files:

  • tests/integration/test_lists/test-db/l0_gb200_multi_nodes.yml
📚 Learning: 2025-12-12T03:27:18.859Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 9655
File: tensorrt_llm/_torch/pyexecutor/sampler.py:3031-3031
Timestamp: 2025-12-12T03:27:18.859Z
Learning: In tensorrt_llm/_torch/pyexecutor/sampler.py, when reviewing code that iterates through requests, ensure it does not convert excessive data into Python lists. Instead, the code should use torch.gather or indexing to gather only the data that will be used in the for loop before converting to Python lists. This minimizes data movement and improves performance.

Applied to files:

  • tensorrt_llm/_torch/pyexecutor/executor_request_queue.py
📚 Learning: 2025-12-12T03:27:08.565Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 9655
File: tensorrt_llm/_torch/pyexecutor/sampler.py:3031-3031
Timestamp: 2025-12-12T03:27:08.565Z
Learning: In files under tensorrt_llm/_torch/pyexecutor, avoid accessing torch.Tensor objects inside for-loops when iterating over requests. Convert batched tensors to Python lists beforehand using tensor.tolist(), and then iterate over those lists. This improves performance by reducing tensor-bound operations inside hot loops. Apply this pattern to similar code paths that process batches to access simple Python data structures (lists) inside loops.

Applied to files:

  • tensorrt_llm/_torch/pyexecutor/executor_request_queue.py
📚 Learning: 2025-08-19T12:45:11.997Z
Learnt from: amitz-nv
Repo: NVIDIA/TensorRT-LLM PR: 7033
File: tensorrt_llm/_torch/pyexecutor/model_engine.py:0-0
Timestamp: 2025-08-19T12:45:11.997Z
Learning: In tensorrt_llm/_torch/pyexecutor/model_engine.py, DoRA (Delta Orthogonal Rank Adaptation) functionality was removed from the PyTorch flow to eliminate issues with inverted DoRA detection logic. The original is_dora condition was checking if scaling_vec_pointer == 0, which was potentially incorrect.

Applied to files:

  • tensorrt_llm/_torch/pyexecutor/executor_request_queue.py
📚 Learning: 2025-08-26T06:07:02.166Z
Learnt from: shaharmor98
Repo: NVIDIA/TensorRT-LLM PR: 7231
File: tensorrt_llm/_torch/pyexecutor/_util.py:504-509
Timestamp: 2025-08-26T06:07:02.166Z
Learning: In tensorrt_llm/_torch/pyexecutor/_util.py, when calling model_engine.set_lora_model_config(), pass model_binding_config.mlp_hidden_size directly without multiplying by mapping.tp_size, as the mlp_hidden_size from get_bindings_model_config() is already the per-TP rank value needed for LoRA weight packaging.

Applied to files:

  • tensorrt_llm/_torch/pyexecutor/executor_request_queue.py
📚 Learning: 2025-08-28T10:22:02.288Z
Learnt from: ixlmar
Repo: NVIDIA/TensorRT-LLM PR: 7294
File: tensorrt_llm/_torch/pyexecutor/sampler.py:1191-1197
Timestamp: 2025-08-28T10:22:02.288Z
Learning: In tensorrt_llm/_torch/pyexecutor/sampler.py, the object identity comparison `softmax_req_indices is not group_req_indices_cuda` on line ~1191 is intentional and used as an optimization to determine whether to reuse an existing indexer or create a new one, based on which code path was taken during tensor assignment.

Applied to files:

  • tensorrt_llm/_torch/pyexecutor/executor_request_queue.py
🧬 Code graph analysis (2)
tensorrt_llm/_torch/distributed/communicator.py (3)
tensorrt_llm/_torch/auto_deploy/distributed/common.py (2)
  • broadcast (51-55)
  • broadcast_object_list (58-62)
tensorrt_llm/_torch/device_mesh.py (1)
  • cp_group_pg (61-62)
tensorrt_llm/mapping.py (1)
  • cp_group_pg (530-531)
tensorrt_llm/_torch/pyexecutor/executor_request_queue.py (2)
tensorrt_llm/_torch/distributed/communicator.py (6)
  • is_first_pp_rank (88-89)
  • tp_size (64-65)
  • tp_broadcast (421-423)
  • tp_broadcast (717-728)
  • cp_broadcast (410-412)
  • cp_broadcast (731-742)
tensorrt_llm/mapping.py (1)
  • is_first_pp_rank (267-268)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (7)
tests/integration/test_lists/test-db/l0_gb200_multi_nodes.yml (1)

18-18: LGTM! Test coverage for PP+CP combination added appropriately.

The new test entries validate the PP=2, TP=1, CP=2 configuration with helix parallelism in both pre-merge (fifo backend with padding) and post-merge (nccl backend with padding) stages, providing good coverage for the PP+CP fix.

Also applies to: 40-40

tests/integration/test_lists/qa/llm_function_core.txt (1)

540-545: LGTM! Test list updates align with new parameterization.

The test entries have been updated to use the -pp2tp1cp2 suffix, consistent with the new gen_pp=2, gen_tp=1, gen_cp=2 parameterization in the test implementation. This provides comprehensive coverage across different communication backends (nccl, fifo) and CUDA graph configurations.

Based on learnings, it's common and intentional for tests to appear in multiple test list files when they serve different purposes.

tensorrt_llm/_torch/distributed/communicator.py (2)

410-412: LGTM! CP broadcast implementation consistent with TP broadcast.

The cp_broadcast method correctly mirrors the tp_broadcast implementation (lines 421-423), using self.cp_comm and delegating to safe_broadcast with the same chunking logic for handling large objects.


730-742: LGTM! CP broadcast implementation follows established patterns.

The cp_broadcast method correctly mirrors the tp_broadcast implementation (lines 716-728), properly handling both Tensor and non-Tensor objects, using the appropriate CP process group (self.mapping.cp_group_pg), and including the @log_op decorator for consistency with other distributed operations.

tensorrt_llm/_torch/pyexecutor/executor_request_queue.py (1)

590-598: LGTM! Critical fix for PP+CP combination hang.

This change addresses the root cause identified in the PR objectives: requests are now broadcast to both TP and CP groups within the first PP stage. The addition of CP broadcasting (lines 597-598) ensures that all CP ranks receive requests when helix parallelism is enabled, fixing the hang when PP and CP are used together.

The implementation correctly:

  • Guards CP broadcast with cp_size > 1 check
  • Uses the same pattern as TP broadcast
  • Includes clear comments explaining the cross-group broadcast requirement
tests/integration/defs/accuracy/test_disaggregated_serving.py (2)

874-896: LGTM! Test parameterization enables comprehensive PP+CP coverage.

The parameterization with gen_pp, gen_tp, gen_cp parameters and calculated gen_ep = gen_tp * gen_cp allows testing both pp1tp2cp2 and pp2tp1cp2 configurations. This provides good coverage for different combinations of pipeline and context parallelism with helix, directly validating the PP+CP fix.


903-934: LGTM! Server configurations properly parameterized.

The configuration updates correctly:

  • Use parameterized gen_tp, gen_pp, gen_cp for generation server (lines 917-920)
  • Set moe_expert_parallel_size to the calculated gen_ep (line 920)
  • Add max_tokens_in_buffer: 8192 for UCX backend in both context and generation servers (lines 913, 932)
  • Maintain asymmetric topology with context server at tp=4, pp=1, cp=1 (line 905)

These changes align with the broader test parameterization and PP+CP support.
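
For reference, a hedged sketch of what the new parameterization might look like in pytest; parameter names and ids follow the summary above, while the real test also parameterizes the comm backend and CUDA graph mode and launches the disaggregated servers:

import pytest

# Sketch only: the actual test in test_disaggregated_serving.py is far more
# involved (server launch, accuracy checks); this shows just the new axes.
@pytest.mark.parametrize(
    "gen_pp,gen_tp,gen_cp",
    [(1, 2, 2), (2, 1, 2)],
    ids=["pp1tp2cp2", "pp2tp1cp2"],
)
def test_auto_dtype_with_helix(gen_pp, gen_tp, gen_cp):
    # Expert parallelism for the generation server spans its TP x CP ranks.
    gen_ep = gen_tp * gen_cp
    # Both configurations use 4 generation ranks, which together with the
    # tp=4 context server matches the new 8-device skip threshold.
    assert gen_pp * gen_tp * gen_cp == 4
    assert gen_ep in (2, 4)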



@brb-nv
Collaborator Author

brb-nv commented Dec 28, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #30046 [ run ] triggered by Bot. Commit: de89ed6

@brb-nv brb-nv force-pushed the user/brb/fix-cp-pp-combination branch 2 times, most recently from 96e915c to db62a2b Compare December 28, 2025 07:13
@brb-nv
Collaborator Author

brb-nv commented Dec 28, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #30049 [ run ] triggered by Bot. Commit: db62a2b

@tensorrt-cicd
Collaborator

PR_Github #30049 [ run ] completed with state SUCCESS. Commit: db62a2b
/LLM/main/L0_MergeRequest_PR pipeline #23125 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@brb-nv
Collaborator Author

brb-nv commented Dec 28, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #30058 [ run ] triggered by Bot. Commit: db62a2b

@brb-nv brb-nv force-pushed the user/brb/fix-cp-pp-combination branch from 658663e to cc1bb16 Compare December 28, 2025 21:27
@brb-nv
Collaborator Author

brb-nv commented Dec 28, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #30059 [ run ] triggered by Bot. Commit: cc1bb16

@brb-nv brb-nv force-pushed the user/brb/fix-cp-pp-combination branch from 9ea59ed to b50bc7d Compare December 29, 2025 02:33
@brb-nv
Collaborator Author

brb-nv commented Dec 29, 2025

/bot run --disable-fail-fast

@brb-nv brb-nv enabled auto-merge (squash) December 29, 2025 03:31
@tensorrt-cicd
Collaborator

PR_Github #30069 [ run ] triggered by Bot. Commit: 86a7b81

@tensorrt-cicd
Collaborator

PR_Github #30069 [ run ] completed with state DISABLED
CI server is currently disabled for scheduled maintenance. Estimated completion time: 6 AM PST on 12/29.

@tensorrt-cicd
Collaborator

PR_Github #30059 [ run ] completed with state SUCCESS. Commit: cc1bb16
/LLM/main/L0_MergeRequest_PR pipeline #23135 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@brb-nv
Collaborator Author

brb-nv commented Dec 29, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #30090 [ run ] triggered by Bot. Commit: 86a7b81

@tensorrt-cicd
Collaborator

PR_Github #30090 [ run ] completed with state SUCCESS. Commit: 86a7b81
/LLM/main/L0_MergeRequest_PR pipeline #23152 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@brb-nv
Collaborator Author

brb-nv commented Dec 30, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #30101 [ run ] triggered by Bot. Commit: 86a7b81

@brb-nv brb-nv force-pushed the user/brb/fix-cp-pp-combination branch 3 times, most recently from 665a5e3 to fc12245 Compare December 30, 2025 03:00
Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>
@brb-nv brb-nv force-pushed the user/brb/fix-cp-pp-combination branch from fc12245 to 24ec122 Compare December 30, 2025 03:02
@brb-nv
Collaborator Author

brb-nv commented Dec 30, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #30116 [ run ] triggered by Bot. Commit: 24ec122


# Broadcast within first tp group before send/recv chain to other tp groups
if self.dist.tp_size > 1 and self.dist.is_first_pp_rank:
    payloads = self.dist.tp_broadcast(payloads, root=0)
Collaborator

The auto tuner also uses tp_broadcast; see:

cache_data = self._dist.tp_broadcast(obj=cache_data, root=root)

Please add a tp_cp_broadcast to Distributed and replace all existing usages of tp_broadcast with tp_cp_broadcast.

Collaborator Author

Updated. 4b406b9

Collaborator Author

I noticed Distributed doesn't have parallelism-specific broadcasts like tp_broadcast, for example. So I limited my change to updating MPIDist and TorchDist, and also updated the tests.
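
For illustration, a minimal sketch of what such a tp_cp_broadcast helper on MPIDist/TorchDist could look like, assuming it simply chains the existing per-group broadcasts; names follow this thread and are not verified against the merged commit:

class _DistWithTpCpBroadcast:
    # Sketch of the combined helper suggested above, intended as a method on
    # MPIDist/TorchDist, which already provide tp_size/cp_size and the
    # per-group tp_broadcast/cp_broadcast methods.
    def tp_cp_broadcast(self, obj, root=0):
        # Broadcast to all TP ranks, then to all CP ranks, so callers such as
        # the executor request queue and the auto tuner cover both groups in
        # one call.
        if self.tp_size > 1:
            obj = self.tp_broadcast(obj, root=root)
        if self.cp_size > 1:
            obj = self.cp_broadcast(obj, root=root)
        return obj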

@tensorrt-cicd
Collaborator

PR_Github #30116 [ run ] completed with state SUCCESS. Commit: 24ec122
/LLM/main/L0_MergeRequest_PR pipeline #23176 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

@brb-nv brb-nv requested a review from a team as a code owner December 30, 2025 19:09
@brb-nv brb-nv requested a review from hyukn December 30, 2025 19:09
Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>
@brb-nv brb-nv force-pushed the user/brb/fix-cp-pp-combination branch from 4b406b9 to 708dd1d Compare December 30, 2025 19:38
@brb-nv
Collaborator Author

brb-nv commented Dec 30, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #30200 [ run ] triggered by Bot. Commit: 708dd1d

@brb-nv brb-nv requested a review from yuxianq December 31, 2025 00:08
@tensorrt-cicd
Collaborator

PR_Github #30200 [ run ] completed with state SUCCESS. Commit: 708dd1d
/LLM/main/L0_MergeRequest_PR pipeline #23245 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@hyukn hyukn (Collaborator) left a comment

LGTM.

cache_data = self.profiling_cache.get_specific_custom_op(custom_op)
root = 0
cache_data = self._dist.tp_broadcast(obj=cache_data, root=root)
cache_data = self._dist.tp_cp_broadcast(obj=cache_data, root=root)
Collaborator

Thanks for the update. I think there should be no tunable ops during the context-parallel phase at this moment. However, I wonder whether all ranks share the same input tensor shapes, because if some ranks see shapes specific to them, the tuner cache from rank 0 might not be sufficient for them.

Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>
@brb-nv brb-nv force-pushed the user/brb/fix-cp-pp-combination branch from 2e8cfdf to 746512c Compare December 31, 2025 02:12
