RuntimeError: Error(s) in loading state_dict for GPTModel #539
Labels: bug
Description
This issue occurs when I try to load the weights for MegatronGPTSPINModel. The checkpoint being loaded is a fine-tuned Llama 3.1 8B that was converted from Hugging Face to NeMo format.
Exact error log:
RuntimeError: Error(s) in loading state_dict for GPTModel:
Missing key(s) in state_dict: "decoder.layers.0.self_attention.linear_proj._extra_state", "decoder.layers.0.self_attention.linear_qkv._extra_state", "decoder.layers.0.mlp.linear_fc1._extra_state", "decoder.layers.0.mlp.linear_fc2._extra_state", "decoder.layers.1.self_attention.linear_proj._extra_state", "decoder.layers.1.self_attention.linear_qkv._extra_state", "decoder.layers.1.mlp.linear_fc1._extra_state", "decoder.layers.1.mlp.linear_fc2._extra_state", "decoder.layers.2.self_attention.linear_proj._extra_state", "decoder.layers.2.self_attention.linear_qkv._extra_state", "decoder.layers.2.mlp.linear_fc1._extra_state", "decoder.layers.2.mlp.linear_fc2._extra_state", "decoder.layers.3.self_attention.linear_proj._extra_state", "decoder.layers.3.self_attention.linear_qkv._extra_state", "decoder.layers.3.mlp.linear_fc1._extra_state", "decoder.layers.3.mlp.linear_fc2._extra_state", "decoder.layers.4.self_attention.linear_proj._extra_state", "decoder.layers.4.self_attention.linear_qkv._extra_state", "decoder.layers.4.mlp.linear_fc1._extra_state", "decoder.layers.4.mlp.linear_fc2._extra_state", "decoder.layers.5.self_attention.linear_proj._extra_state", "decoder.layers.5.self_attention.linear_qkv._extra_state", "decoder.layers.5.mlp.linear_fc1._extra_state", "decoder.layers.5.mlp.linear_fc2._extra_state", "decoder.layers.6.self_attention.linear_proj._extra_state", "decoder.layers.6.self_attention.linear_qkv._extra_state", "decoder.layers.6.mlp.linear_fc1._extra_state", "decoder.layers.6.mlp.linear_fc2._extra_state", "decoder.layers.7.self_attention.linear_proj._extra_state", "decoder.layers.7.self_attention.linear_qkv._extra_state", "decoder.layers.7.mlp.linear_fc1._extra_state", "decoder.layers.7.mlp.linear_fc2._extra_state", "decoder.layers.8.self_attention.linear_proj._extra_state", "decoder.layers.8.self_attention.linear_qkv._extra_state", "decoder.layers.8.mlp.linear_fc1._extra_state", "decoder.layers.8.mlp.linear_fc2._extra_state", "decoder.layers.9.self_attention.linear_proj._extra_state", "decoder.layers.9.self_attention.linear_qkv._extra_state", "decoder.layers.9.mlp.linear_fc1._extra_state", "decoder.layers.9.mlp.linear_fc2._extra_state", "decoder.layers.10.self_attention.linear_proj._extra_state", "decoder.layers.10.self_attention.linear_qkv._extra_state", "decoder.layers.10.mlp.linear_fc1._extra_state", "decoder.layers.10.mlp.linear_fc2._extra_state", "decoder.layers.11.self_attention.linear_proj._extra_state", "decoder.layers.11.self_attention.linear_qkv._extra_state", "decoder.layers.11.mlp.linear_fc1._extra_state", "decoder.layers.11.mlp.linear_fc2._extra_state", "decoder.layers.12.self_attention.linear_proj._extra_state", "decoder.layers.12.self_attention.linear_qkv._extra_state", "decoder.layers.12.mlp.linear_fc1._extra_state", "decoder.layers.12.mlp.linear_fc2._extra_state", "decoder.layers.13.self_attention.linear_proj._extra_state", "decoder.layers.13.self_attention.linear_qkv._extra_state", "decoder.layers.13.mlp.linear_fc1._extra_state", "decoder.layers.13.mlp.linear_fc2._extra_state", "decoder.layers.14.self_attention.linear_proj._extra_state", "decoder.layers.14.self_attention.linear_qkv._extra_state", "decoder.layers.14.mlp.linear_fc1._extra_state", "decoder.layers.14.mlp.linear_fc2._extra_state", "decoder.layers.15.self_attention.linear_proj._extra_state", "decoder.layers.15.self_attention.linear_qkv._extra_state", "decoder.layers.15.mlp.linear_fc1._extra_state", "decoder.layers.15.mlp.linear_fc2._extra_state",
"decoder.layers.16.self_attention.linear_proj._extra_state", "decoder.layers.16.self_attention.linear_qkv._extra_state", "decoder.layers.16.mlp.linear_fc1._extra_state", "decoder.layers.16.mlp.linear_fc2._extra_state", "decoder.layers.17.self_attention.linear_proj._extra_state", "decoder.layers.17.self_attention.linear_qkv._extra_state", "decoder.layers.17.mlp.linear_fc1._extra_state", "decoder.layers.17.mlp.linear_fc2._extra_state", "decoder.layers.18.self_attention.linear_proj._extra_state", "decoder.layers.18.self_attention.linear_qkv._extra_state", "decoder.layers.18.mlp.linear_fc1._extra_state", "decoder.layers.18.mlp.linear_fc2._extra_state", "decoder.layers.19.self_attention.linear_proj._extra_state", "decoder.layers.19.self_attention.linear_qkv._extra_state", "decoder.layers.19.mlp.linear_fc1._extra_state", "decoder.layers.19.mlp.linear_fc2._extra_state", "decoder.layers.20.self_attention.linear_proj._extra_state", "decoder.layers.20.self_attention.linear_qkv._extra_state", "decoder.layers.20.mlp.linear_fc1._extra_state", "decoder.layers.20.mlp.linear_fc2._extra_state", "decoder.layers.21.self_attention.linear_proj._extra_state", "decoder.layers.21.self_attention.linear_qkv._extra_state", "decoder.layers.21.mlp.linear_fc1._extra_state", "decoder.layers.21.mlp.linear_fc2._extra_state", "decoder.layers.22.self_attention.linear_proj._extra_state", "decoder.layers.22.self_attention.linear_qkv._extra_state", "decoder.layers.22.mlp.linear_fc1._extra_state", "decoder.layers.22.mlp.linear_fc2._extra_state", "decoder.layers.23.self_attention.linear_proj._extra_state", "decoder.layers.23.self_attention.linear_qkv._extra_state", "decoder.layers.23.mlp.linear_fc1._extra_state", "decoder.layers.23.mlp.linear_fc2._extra_state", "decoder.layers.24.self_attention.linear_proj._extra_state", "decoder.layers.24.self_attention.linear_qkv._extra_state", "decoder.layers.24.mlp.linear_fc1._extra_state", "decoder.layers.24.mlp.linear_fc2._extra_state", "decoder.layers.25.self_attention.linear_proj._extra_state", "decoder.layers.25.self_attention.linear_qkv._extra_state", "decoder.layers.25.mlp.linear_fc1._extra_state", "decoder.layers.25.mlp.linear_fc2._extra_state", "decoder.layers.26.self_attention.linear_proj._extra_state", "decoder.layers.26.self_attention.linear_qkv._extra_state", "decoder.layers.26.mlp.linear_fc1._extra_state", "decoder.layers.26.mlp.linear_fc2._extra_state", "decoder.layers.27.self_attention.linear_proj._extra_state", "decoder.layers.27.self_attention.linear_qkv._extra_state", "decoder.layers.27.mlp.linear_fc1._extra_state", "decoder.layers.27.mlp.linear_fc2._extra_state", "decoder.layers.28.self_attention.linear_proj._extra_state", "decoder.layers.28.self_attention.linear_qkv._extra_state", "decoder.layers.28.mlp.linear_fc1._extra_state", "decoder.layers.28.mlp.linear_fc2._extra_state", "decoder.layers.29.self_attention.linear_proj._extra_state", "decoder.layers.29.self_attention.linear_qkv._extra_state", "decoder.layers.29.mlp.linear_fc1._extra_state", "decoder.layers.29.mlp.linear_fc2._extra_state", "decoder.layers.30.self_attention.linear_proj._extra_state", "decoder.layers.30.self_attention.linear_qkv._extra_state", "decoder.layers.30.mlp.linear_fc1._extra_state", "decoder.layers.30.mlp.linear_fc2._extra_state", "decoder.layers.31.self_attention.linear_proj._extra_state", "decoder.layers.31.self_attention.linear_qkv._extra_state", "decoder.layers.31.mlp.linear_fc1._extra_state", "decoder.layers.31.mlp.linear_fc2._extra_state", "decoder.final_layernorm._extra_state".
My understanding is that errors about missing _extra_state keys should go away when strict=False is set. However, the MegatronGPTSPINModel class hard-codes the strict parameter to True when loading the checkpoint. Link to the line.
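For context, here is a minimal, plain-PyTorch sketch (not NeMo-specific; the module and checkpoint are purely illustrative) of the difference the strict flag makes when a key expected by the model is absent from the checkpoint:

```python
import torch
import torch.nn as nn

# Toy module standing in for GPTModel; the real model expects the
# *_extra_state keys listed above, here the analogous missing key is "bias".
model = nn.Linear(4, 4)
checkpoint = {"weight": torch.zeros(4, 4)}  # deliberately missing "bias"

try:
    model.load_state_dict(checkpoint, strict=True)  # raises RuntimeError
except RuntimeError as err:
    print("strict=True raises:", err)

# strict=False only reports the gap instead of raising.
result = model.load_state_dict(checkpoint, strict=False)
print("strict=False missing keys:", result.missing_keys)
```

This is why I expected passing strict=False to the model to be enough, but the hard-coded strict=True makes the flag ineffective.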
Steps/Code to reproduce bug
- Download the model weights from HF
- Convert the checkpoint to NeMo format using the Llama HF-to-NeMo checkpoint converter
- Launch the training using this example script, with strict=False when initializing the model (see the sketch after this list)
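For concreteness, the last step is roughly the following. This is only a hypothetical sketch: the import path, checkpoint path, and the use of restore_from are my assumptions, not the exact code path of the SPIN example script.

```python
# Hypothetical sketch: module path, checkpoint path, and restore_from usage
# are assumptions made for illustration only.
from nemo_aligner.models.nlp.gpt.megatron_gpt_spin_model import MegatronGPTSPINModel

model = MegatronGPTSPINModel.restore_from(
    restore_path="/path/to/llama31_8b_finetuned.nemo",  # converted checkpoint
    strict=False,  # intent: tolerate the missing *_extra_state keys
)
```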
Expected behavior
I expect that, with strict=False, the model weights load correctly and the missing _extra_state keys are reported rather than raising an error.
Environment overview (please complete the following information)
- Environment location: Docker
- Docker pull command:
docker pull nvcr.io/nvidia/nemo:25.04.rc1