Conversation

@gobbleturk (Collaborator) commented Dec 29, 2025

Description

Fixes an issue where gradient accumulation combined with SFT would fail due to missing param sharding; a minimal sketch of the failure mode appears below.

FIXES: b/472309528
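
For context, a minimal, hedged sketch of the failure mode: when per-microbatch gradients are accumulated inside `jit`, the accumulated tree can end up with a layout that mismatches the params unless a sharding constraint is applied explicitly. The mesh, loss function, and PartitionSpecs below are illustrative assumptions, not MaxText's actual code.

```python
import jax
import jax.numpy as jnp
import numpy as np
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Illustrative single-axis mesh; real MaxText meshes are configured
# from base.yml and are more elaborate.
mesh = Mesh(np.array(jax.devices()), axis_names=("data",))
replicated = NamedSharding(mesh, P())  # toy choice: keep grads replicated

def loss_fn(params, batch):
  # Stand-in for the real SFT loss.
  preds = batch["x"] @ params["w"]
  return jnp.mean((preds - batch["y"]) ** 2)

@jax.jit
def accumulated_grads(params, microbatches):
  grad_fn = jax.grad(loss_fn)
  acc = jax.tree_util.tree_map(jnp.zeros_like, params)
  for mb in microbatches:  # unrolled at trace time
    acc = jax.tree_util.tree_map(jnp.add, acc, grad_fn(params, mb))
  # The fix being illustrated: pin the accumulated tree's sharding so the
  # compiler cannot pick a layout that mismatches the params at update time.
  return jax.tree_util.tree_map(
      lambda x: jax.lax.with_sharding_constraint(x, replicated), acc)
```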

Tests

Added an integration test covering SFT + gradient accumulation (a hedged sketch of such a test follows the command below).

Manually tested via:

```
python3 -m MaxText.sft_trainer MaxText/configs/base.yml run_name=mattdavidow-train-base base_output_directory=$output_dir dataset_path=$dataset steps=5 enable_checkpointing=False enable_goodput_recording=False use_sft=True gradient_accumulation_steps=5
```
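
For reference, a hedged sketch of what the added integration test might look like. The `sft_trainer.main(argv)` entry point and the exact flags are assumptions modeled on the manual command above, not the test actually committed in this PR.

```python
import pytest
from MaxText import sft_trainer

@pytest.mark.integration
def test_sft_with_gradient_accumulation(tmp_path):
  # Smoke test: a short SFT run with gradient_accumulation_steps > 1
  # should complete without raising a sharding error.
  argv = [
      "sft_trainer",
      "MaxText/configs/base.yml",
      f"base_output_directory={tmp_path}",
      "run_name=sft_ga_smoke",
      "steps=2",
      "enable_checkpointing=False",
      "use_sft=True",
      "gradient_accumulation_steps=2",
  ]
  sft_trainer.main(argv)
```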

Checklist

Before submitting this PR, please make sure (put X in square brackets):

- [ ] I have performed a self-review of my code. For an optional AI review, add the gemini-review label.
- [ ] I have added necessary comments in my code, particularly in hard-to-understand areas.
- [ ] I have run end-to-end tests and provided workload links above if applicable.
- [ ] I have made or will make corresponding changes to the doc if needed, including adding new documentation pages to the relevant Table of Contents (toctree directive) as explained in our documentation.

codecov bot commented Dec 29, 2025

Codecov Report

❌ Patch coverage is 0% with 2 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| src/MaxText/sft_trainer.py | 0.00% | 2 Missing ⚠️ |


@gobbleturk mentioned this pull request Dec 29, 2025