
Conversation

@Jintao-Huang (Collaborator) commented on Dec 30, 2025

@gemini-code-assist (Contributor) commented:

Summary of Changes

Hello @Jintao-Huang, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request expands the Megatron framework with comprehensive support for the "MiniMaxAI/MiniMax-M2.1" large language model. The changes integrate the new model definition, implement specialized attention and bridge classes to handle its QK layernorm and Mixture of Experts (MoE) weight mapping, and update the documentation, ensuring that "MiniMax-M2.1" can be used and tested within the existing system.

Highlights

  • New Model Integration: Added full support for the "MiniMaxAI/MiniMax-M2.1" model, including its model definition and dependency requirements within the system.
  • Megatron Customization: Implemented a custom "MinimaxM2SelfAttention" and "MinimaxM2Bridge" to correctly handle the QK layernorm and Mixture of Experts (MoE) weight mapping of MiniMaxM2 models within the Megatron framework.
  • Documentation Update: Updated the supported-models documentation (English and Chinese) to list the newly added "MiniMax/MiniMax-M2.1" and to specify "transformers==4.57.1" as a requirement for "MiniMax/MiniMax-M2" models.
  • Configuration Adjustment: Modified the Megatron configuration utility to explicitly set "add_qkv_bias" to "False" for "MiniMaxM2ForCausalLM" architectures (see the sketch after this list).
  • Test Coverage: Added a test case that verifies alignment and functionality of the "MiniMax/MiniMax-M2.1" model within the Megatron testing suite.
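
As a rough illustration of the configuration adjustment above, the utility change might look like the following sketch. The function name convert_hf_config and its surroundings are assumptions for illustration; only the add_qkv_bias behavior for MiniMaxM2ForCausalLM comes from this pull request's description.

```python
def convert_hf_config(hf_config) -> dict:
    """Translate a Hugging Face config into Megatron arguments (sketch)."""
    megatron_config = {}
    architectures = getattr(hf_config, 'architectures', None) or []
    if 'MiniMaxM2ForCausalLM' in architectures:
        # MiniMax-M2's attention projections carry no bias, so state it
        # explicitly rather than relying on a default.
        megatron_config['add_qkv_bias'] = False
    return megatron_config
```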


@gemini-code-assist (Contributor) left a comment:

Code Review

This pull request adds support for the MiniMax-M2.1 model to Megatron. The changes include updating documentation, registering the new model, and implementing a custom bridge for state dictionary conversion. The implementation introduces a new MinimaxM2Bridge to handle model-specific logic, particularly for MoE layers and QK layernorm.

I've found a critical bug in gpt_bridge.py that could cause a NameError for models other than minimax_m2. I've also suggested a minor refactoring in minimax_m2.py to improve code conciseness. Overall, the changes are well-structured to support the new model.
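
For readers unfamiliar with the bridge pattern used here, a minimal sketch of the idea follows, assuming the GPTBridge base class and the get_hf_mlp_prefix hook that a later review in this thread mentions. The prefix string and the method signature below are illustrative placeholders, not the actual swift/megatron API.

```python
class MinimaxM2Bridge(GPTBridge):
    """Maps Megatron parameter names onto MiniMax-M2's HF checkpoint layout."""

    def get_hf_mlp_prefix(self, layer_idx: int) -> str:
        # Every MiniMax-M2 layer is an MoE block, so the HF-side prefix points
        # at the expert container rather than a dense MLP.
        # 'block_sparse_moe' is a placeholder key for illustration only.
        return f'model.layers.{layer_idx}.block_sparse_moe'
```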

@Jintao-Huang (Collaborator, Author) commented:

/gemini review

@gemini-code-assist (Contributor) left a comment:

Code Review

This pull request adds support for the MiniMaxAI/MiniMax-M2.1 model, including necessary code for its architecture, configuration, and documentation. The changes also feature some beneficial refactoring in gpt_bridge.py to better handle model-specific variations. My review focuses on improving dependency management by suggesting a less restrictive version for transformers and enhancing code clarity in the new model implementation with an additional comment.

Comment on lines +28 to +34
q_layernorm = submodules.q_layernorm
k_layernorm = submodules.k_layernorm
submodules.q_layernorm = IdentityOp
submodules.k_layernorm = IdentityOp
super().__init__(config, submodules, *args, **kwargs)
submodules.q_layernorm = q_layernorm
submodules.k_layernorm = k_layernorm
Severity: medium

The logic of temporarily setting q_layernorm and k_layernorm to IdentityOp is a clever way to work around the base class's initialization. For better maintainability, please add a comment explaining why this is done. It will help future developers understand the purpose of this code block more quickly.
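
For illustration, a commented version of that block might read as follows. This is a sketch only: the enclosing __init__ of MinimaxM2SelfAttention and its imports are assumed, and the rationale in the comment is inferred from the swap itself rather than taken from the PR.

```python
# Swap the q/k layernorm specs for IdentityOp so that the base class's
# __init__ skips constructing its own layernorm modules, then restore
# the original specs so this subclass can build MiniMax-M2's variant of
# QK layernorm itself.
q_layernorm = submodules.q_layernorm
k_layernorm = submodules.k_layernorm
submodules.q_layernorm = IdentityOp
submodules.k_layernorm = IdentityOp
super().__init__(config, submodules, *args, **kwargs)
submodules.q_layernorm = q_layernorm
submodules.k_layernorm = k_layernorm
```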

@Jintao-Huang (Collaborator, Author) commented:

/gemini review

@gemini-code-assist (Contributor) left a comment:

Code Review

This pull request adds support for the MiniMax-M2 and M2.1 models, including their conversion to Megatron format. The changes are well-structured, with new model-specific logic encapsulated in swift/megatron/model/gpt/minimax_m2.py. The refactoring in swift/megatron/model/gpt_bridge.py to introduce helper methods like get_hf_mlp_prefix and _set_qk_layernorm is a great improvement for extensibility and maintainability. The documentation and tests have also been updated accordingly. I have one suggestion to improve the design by moving a model-specific hack out of a generic testing function.

Comment on lines +204 to +207
# router to bfloat16
for n, m in mg_language_model.named_modules():
if n.endswith('router'):
m.to(hf_model.dtype)
Severity: medium

This hardcoded loop to change the dtype of 'router' modules is model-specific logic, likely for MiniMax-M2. Placing it inside the generic test_convert_precision function makes this utility less reusable and harder to maintain as more models with special requirements are added.

Consider moving this logic to a model-specific preparation step. For instance, you could introduce a prepare_for_test method in the GPTBridge class, which can be overridden by model-specific bridges like MinimaxM2Bridge to handle such preparations before precision testing.
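
A minimal sketch of that suggestion, assuming the method name prepare_for_test proposed above; the class layout and signatures are illustrative, not the actual swift/megatron API:

```python
class GPTBridge:
    def prepare_for_test(self, mg_model, hf_model) -> None:
        """Model-specific preparation before precision testing; no-op by default."""


class MinimaxM2Bridge(GPTBridge):
    def prepare_for_test(self, mg_model, hf_model) -> None:
        # Cast MoE router weights to the HF model's dtype so the
        # convert-precision comparison sees matching router logits.
        for name, module in mg_model.named_modules():
            if name.endswith('router'):
                module.to(hf_model.dtype)
```

test_convert_precision would then call bridge.prepare_for_test(mg_language_model, hf_model) instead of hard-coding the router loop.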

@Jintao-Huang merged commit c383dd4 into modelscope:main on Dec 30, 2025 (2 of 3 checks passed).