
Fix MoE target_parameters module_count alignment (#3405, #3701) #499

Open
GoldenGrapeGentleman wants to merge 4 commits into unslothai:main from GoldenGrapeGentleman:fix/moe-lora-count-alignment

Conversation

@GoldenGrapeGentleman
Contributor

Summary

Resolves the "TODO: handle MoE target_parameters to align counts" at line 410 of saving_utils.py.

Problem

When using PEFT target_parameters for MoE expert LoRA (e.g., gpt-oss), the create_lora_statistics function reports a misleading diagnostic warning:

[Unsloth merge debug] LoRA count mismatch: modules=120, lora_A=144, lora_B=144, scaling=144

This happens because MoE ParamWrapper entries have lora_A/lora_B/scaling counted, but lack a .base_layer module, so module_count is never incremented for them.

Fix

After the LoRA collection loop, count MoE expert entries that have lora_A/lora_B but no .module, aligning module_count with the other counts. Only applies to .mlp.experts entries to avoid affecting non-MoE models.
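For illustration, a minimal sketch of that alignment pass, assuming lora_weights is the per-entry statistics dict built by the collection loop and that each entry exposes lora_A, lora_B, and module attributes (names follow the PR description, not necessarily the exact saving_utils.py internals):

def align_moe_module_count(lora_weights, module_count):
    """Hypothetical helper: count MoE ParamWrapper entries that the main
    collection loop skipped because they have no .base_layer child."""
    for name, stats in lora_weights.items():
        if (
            ".mlp.experts" in name           # restrict to MoE expert entries
            and stats.lora_A is not None
            and stats.lora_B is not None
            and stats.module is None         # no wrapped base layer was registered
        ):
            module_count += 1
    return module_count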

Changes

  • saving_utils.py: +12 lines / -3 lines (removes TODO comment)

Verified on 8× AMD Instinct MI355X (gfx950), ROCm 7.1

  • gpt-oss-20b BF16 + MoE expert LoRA: 46.2M trainable params (vs 2.0M attention-only)
  • save_pretrained_merged: ✅ success, no count mismatch warning
  • Before fix: modules=120, lora_A=144 warning printed
  • After fix: counts aligned, clean merge

Related: unslothai/unsloth#3405, unslothai/unsloth#3701

cc @danielhanchen @Datta0

MoE ParamWrapper entries (from PEFT target_parameters) have lora_A/B/scaling
but may lack a .base_layer module, leaving module_count short of lora_A_count.
This caused a misleading 'LoRA count mismatch' diagnostic warning during merge.

Fix: after the LoRA collection loop, count MoE expert entries that have
lora_A/B but no module, aligning module_count with lora_A/B/scaling counts.
Also removes the TODO comment at the resolved location.

Tested on: 8x AMD Instinct MI355X (gfx950), ROCm 7.1
- gpt-oss-20b BF16 + MoE expert LoRA (46.2M trainable)
- save_pretrained_merged: success, no count mismatch warning
@gemini-code-assist
Contributor

Summary of Changes

Hello @GoldenGrapeGentleman, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses and resolves a misleading "LoRA count mismatch" diagnostic warning that occurred when using PEFT target_parameters for Mixture-of-Experts (MoE) expert LoRA. The fix ensures that the module_count accurately reflects the number of LoRA modules, particularly for ParamWrapper entries within MoE experts, leading to clean merges and correct statistics without false warnings.

Highlights

  • MoE LoRA Count Mismatch Resolution: Resolved an issue in the create_lora_statistics function where Mixture-of-Experts (MoE) expert LoRA configurations would incorrectly trigger a "LoRA count mismatch" warning.
  • Accurate Module Counting for MoE: Implemented logic to correctly increment module_count for MoE ParamWrapper entries that possess lora_A and lora_B but lack a .base_layer module, specifically targeting .mlp.experts.
  • TODO Comment Addressed: Addressed and removed the TODO: handle MoE target_parameters to align counts comment at line 410 in saving_utils.py, indicating the completion of the planned fix.


Changelog
  • unsloth_zoo/saving_utils.py
    • Added a loop to iterate through lora_weights and increment module_count for MoE ParamWrapper entries that have lora_A and lora_B but no associated .module, specifically for keys containing ".mlp.experts".
    • Removed a TODO comment related to handling MoE target parameters for count alignment.

Contributor

gemini-code-assist (bot) left a comment


Code Review

This pull request provides a clean and effective fix for a LoRA module count mismatch that occurred with MoE models using target_parameters. The added logic correctly identifies uncounted MoE expert modules by checking for LoRA weights on modules without a registered base layer, and appropriately increments the module count. The change is well-targeted to .mlp.experts to avoid side effects. The removal of the corresponding TODO comment is also appropriate. The implementation is straightforward and directly resolves the issue described.

@Datta0
Collaborator

Datta0 commented Feb 14, 2026

Hey @GoldenGrapeGentleman, thanks for the contribution. I do remember that there's only one base_layer in the experts module (referring to gate_up_proj iirc?). This does affect transformers v5.
But if I also remember correctly, in transformers v4 the experts are nn.ModuleList(nn.Linear), which should play well with LoRA.

Can we handle this specifically for nn.Parameter instead of doing a blanket addition? Also, can you test your fix on the following models: unsloth/Qwen3-30B-A3B-Instruct-2507, unsloth/gpt-oss-20b-BF16, unsloth/GLM-4.7-Flash, imdatta0/tiny_qwen3_moe_2.8B_0.7B, unsloth/Qwen3-VL-30B-A3B-Instruct? If you can do that, it would be of great help and would ease the review process.

Thanks a lot :)

@GoldenGrapeGentleman
Contributor Author

GoldenGrapeGentleman commented Feb 14, 2026

Hi @Datta0, thanks for your quick response. I will do everything you mentioned, probably after the Chinese Spring Festival. BTW, happy new year to you and the unsloth team if anyone is from China (❁´◡`❁)

@Datta0
Collaborator

Datta0 commented Feb 14, 2026

Happy new year @GoldenGrapeGentleman. Wishing you a great time :)

Per review feedback, nn.Linear targets always get .base_layer set
(module != None), so stats.module is None reliably identifies
nn.Parameter (ParamWrapper) targets without path string matching.
Avoids incorrectly counting nn.ModuleList(nn.Linear) in transformers v4.
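A sketch of the revised condition described in this commit, with attribute names approximated from the review context later in the thread (not the exact committed code):

def count_param_wrapper_targets(lora_weights):
    """Count nn.Parameter (ParamWrapper) LoRA targets. nn.Linear targets always
    carry a wrapped .base_layer, so their stats.module is never None; only
    ParamWrapper entries fall through to this branch."""
    extra = 0
    for _stats in lora_weights.values():
        if (
            _stats.lora_A is not None
            and _stats.lora_B is not None
            and _stats.module is None
        ):
            extra += 1
    return extra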
@Datta0
Collaborator

Datta0 commented Feb 17, 2026

Hey @GoldenGrapeGentleman
I tried your patch with the Qwen3.5 family and I see that there are still some issues.
You don't need to test on the full model. I made a small dummy ~20B scale model for testing purposes. You can give it a try.
Also, did you verify that the merged model is the same as the intended model, i.e. that the training updates are intact after merging?

[FAIL] save_pretrained_merged: Unsloth: Saving LoRA finetune failed since # of LoRAs = 200 does not match # of saved modules = 0. Please file a bug report!
PS: LoRA is only applied to MoE params not on attn at all

…on_mapping

Models like Qwen3.5 store safetensors with 'model.language_model.layers.'
prefix but load into memory as 'model.layers.'. Without
_checkpoint_conversion_mapping, the merge function cannot match LoRA keys
to safetensor keys, causing n_saved_modules=0.

Fix: when no explicit mapping exists, infer the prefix by matching a LoRA
key suffix against safetensor keys. If a consistent extra prefix is found,
remap all LoRA keys accordingly.

Tested on imdatta0/small_qwen3_5_20b (MoE-only LoRA):
- Before: # of LoRAs = 200 does not match # of saved modules = 0
- After: merge succeeds cleanly
@GoldenGrapeGentleman
Contributor Author

GoldenGrapeGentleman commented Feb 24, 2026

Hey @Datta0! I'm back from the holidays and excited to dive back into this 🎉 Thanks so much for testing with Qwen3.5 and providing the dummy model~

What happened with Qwen3.5

Great catch. The n_saved_modules = 0 failure was NOT a count alignment issue — it was a key prefix mismatch:

  • Safetensor keys: model.language_model.layers.X.mlp.experts.N.gate_proj.weight
  • LoRA keys (in memory): model.layers.X.mlp.experts

Qwen3_5MoeForCausalLM has no _checkpoint_conversion_mapping, so _convert_lora_keys_to_safetensor_format returned LoRA keys unchanged, and the merge loop couldn't match anything.

Fix: prefix inference fallback

Added a fallback in _convert_lora_keys_to_safetensor_format: when no explicit mapping exists, infer the prefix by matching a LoRA key suffix against safetensor keys. For Qwen3.5, shared_expert.gate_proj.weight provides the match → prefix model.language_ is detected → all LoRA keys remapped correctly.
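Roughly, the fallback could be sketched like this; the helper name _infer_prefix_and_remap comes from the PR, but the body below is an approximation rather than the committed implementation:

def _infer_prefix_and_remap(lora_keys, safetensor_keys):
    """Infer a missing key prefix by suffix-matching a LoRA key against the
    safetensor keys, then remap every LoRA key with that prefix.
    Returns None when no consistent prefix is found (callers keep the
    original keys unchanged)."""
    for lora_key in lora_keys:
        for st_key in safetensor_keys:
            # Qwen3.5 example: "model.language_model.layers..." ends with the
            # in-memory key "model.layers...", yielding prefix "model.language_".
            if st_key != lora_key and st_key.endswith(lora_key):
                prefix = st_key[: -len(lora_key)]
                return {prefix + key: key for key in lora_keys}
    return None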

Verified on 8× MI355X (gfx950)

imdatta0/small_qwen3_5_20b (MoE-only LoRA):

  • Trainable: 99.7M
  • Training: ✅ (5 steps, lr=1e-3)
  • Merge: ✅ (was n_saved_modules=0, now succeeds)
  • Weight diff after merge: max=4.88e-4 ✅ LoRA changes preserved
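For reference, a hypothetical way to spot-check that a merged weight matches base + scaling * (B @ A); the helper name, tensors, and tolerance below are illustrative, not the script actually used:

import torch

def check_merged_expert(base_w, merged_w, lora_A, lora_B, scaling, atol=1e-3):
    # Expected merged weight under standard LoRA merging: W + scaling * B @ A
    expected = base_w + scaling * (lora_B @ lora_A)
    diff = (merged_w - expected).abs().max().item()
    print(f"max |merged - expected| = {diff:.3e}")
    return diff < atol

# Toy usage with random tensors standing in for real expert weights:
d_out, d_in, r = 64, 32, 8
base_w   = torch.randn(d_out, d_in)
lora_A   = torch.randn(r, d_in) * 0.01
lora_B   = torch.randn(d_out, r) * 0.01
merged_w = base_w + 2.0 * (lora_B @ lora_A)   # stands in for the merged checkpoint
assert check_merged_expert(base_w, merged_w, lora_A, lora_B, scaling=2.0)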

Previous models (on transformers 5.1.0):

Model                                | Train | Merge | Mismatch
unsloth/gpt-oss-20b-BF16             | None  | ✅    |
unsloth/Qwen3-30B-A3B-Instruct-2507  | None  | ✅    |
imdatta0/tiny_qwen3_moe_2.8B_0.7B    | None  | ✅    |
unsloth/GLM-4.7-Flash                | None  | ✅    |
unsloth/Qwen3-VL-30B-A3B-Instruct    | None  | ✅    |

Note: Qwen3.5 requires transformers 5.3.0.dev0, which breaks gpt-oss patches (gate_up_proj → gate_up_projs API change), so cross-testing in a single environment isn't possible. Each model family was tested on its compatible transformers version.

Let me know if you'd like me to test anything else!

Collaborator

@Datta0 left a comment


Hey @GoldenGrapeGentleman, thanks for the changes and for updating the fix for Qwen3.5.
Also, can you point me to the change you're referring to in transformers (gate_up_proj to gate_up_projs)?

if inferred_prefix is not None:
    break

if inferred_prefix is not None:
Collaborator


NIT: What would be the right course of action if we can't find something suitable?
For example, say gate_up_proj changes to gate_up_projs: should we error here, or do we want to handle it elsewhere?

Contributor Author

GoldenGrapeGentleman commented Feb 25, 2026


When no suitable prefix can be inferred (e.g. a gate_up_proj → gate_up_projs API change), _infer_prefix_and_remap returns None, and the caller falls back to returning keys unchanged — same behavior as before this PR.

I think silently falling back is the right call here rather than erroring, because:

  1. The downstream merge loop already handles mismatches explicitly (it counts n_saved_modules and raises a clear error if 0 modules matched)
  2. A hard error here would break models that genuinely have no prefix discrepancy

So the worst case is: prefix inference fails → keys pass through unchanged → merge loop catches the real mismatch and reports it. No silent data loss.
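For illustration, the caller-side fallback might look roughly like this (variable names assumed from the surrounding merge code, not the committed implementation):

remapped = _infer_prefix_and_remap(lora_keys, safetensor_keys)
if remapped is None:
    # No consistent prefix found (e.g. a rename breaks the suffix match):
    # keep keys unchanged and let the merge loop's n_saved_modules check
    # report any real mismatch.
    remapped = {key: key for key in lora_keys}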

# consistent prefix is found, remap all LoRA keys accordingly.
if safetensor_keys:
    inferred_prefix = None
    for lora_key in lora_weights:
Collaborator


I guess we can move this out as a function, something like sanitize_module_name perhaps?

Contributor Author


Done! ✅ Extracted it into _infer_prefix_and_remap() as a standalone function with a docstring. See commit 643f3d1.

I went with _infer_prefix_and_remap instead of sanitize_module_name since the function does two things: infer the missing prefix AND remap all keys. Happy to rename if you prefer a different name though!

    and _stats.lora_B is not None
    and _stats.module is None
):
    module_count += 1
Collaborator


So this pretty much fixes only the count part. I think my previous changes (in #450, perhaps) would automatically handle the tensor and file placement, I presume.

Contributor Author


Exactly right — this fix only addresses the module_count alignment so the mismatch warning no longer fires for nn.Parameter targets. The actual tensor placement and file writing is handled by your work in #450.

The two fixes are complementary: #450 handles the merge mechanics, this PR ensures the diagnostic counts are correct so users don't see a misleading warning during an otherwise successful merge.

Address review feedback from @Datta0:
- Extract prefix inference logic into _infer_prefix_and_remap() for clarity
- Return None when no prefix found (caller falls back to unchanged keys)
- Add docstring explaining the function's purpose and return contract
@Datta0
Collaborator

Datta0 commented Feb 25, 2026

Let me test on Qwen3.5 and I can approve later today :)
Thanks a lot for your contribution

danielhanchen added a commit that referenced this pull request Mar 17, 2026
…#555)

* Fix MoE module_count alignment and prefix inference for models without _checkpoint_conversion_mapping

Two fixes extracted from PR #499 (by @GoldenGrapeGentleman), without the
unrelated removals of vocab-resize and tied-embedding handling.

1. MoE module_count alignment (#3405, #3701): Custom MoE LoRA wrappers
   (e.g. GPT-OSS expert LoRA) that match the fallback branch have
   lora_A/B/scaling but no .base_layer child, leaving module_count short.
   Count entries with module=None after the main loop to suppress the
   false "[Unsloth merge debug] LoRA count mismatch" warning.

2. Prefix inference fallback (fixes #4294): Composite models like Qwen3.5
   store safetensors with prefix "model.language_model." but runtime LoRA
   keys use "model.". When no _checkpoint_conversion_mapping exists, infer
   the prefix by suffix-matching a LoRA key against safetensor keys and
   remap. Includes validation that at least one remapped key exists in the
   shard to prevent bad inferences from partial suffix matches.

* Style: use any() and dict comprehension in _infer_prefix_and_remap

Address review suggestions: replace explicit loop with any() generator
expression for validation, and use dict comprehension for remapping.
Remove trailing pass statement.

* Use per-key prefix matching with fallback for unmatched keys

Replace the global single-prefix rewrite with per-key matching:
- Keys that already match a safetensor key are preserved as-is
- Keys with a single unambiguous suffix match are remapped individually
- Keys that cannot be suffix-matched (e.g. MoE fused expert params
  with different naming) inherit the most common inferred prefix from
  successfully matched keys

This handles mixed-prefix shards correctly and avoids double-prefixing
keys that are already in the right format, while still supporting MoE
expert parameters that use different naming conventions.
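A rough sketch of that per-key strategy, with hypothetical helper and variable names (an approximation of the idea, not the committed code):

from collections import Counter

def remap_per_key(lora_keys, safetensor_keys):
    """Remap LoRA keys onto safetensor keys one key at a time."""
    st_set = set(safetensor_keys)
    remapped = {}          # new key -> original LoRA key
    inferred = []          # prefixes found from unambiguous suffix matches
    unmatched = []
    for key in lora_keys:
        if key in st_set:
            remapped[key] = key          # already in the right format; no double prefix
            continue
        candidates = [s for s in st_set if s.endswith(key)]
        if len(candidates) == 1:         # single unambiguous suffix match
            prefix = candidates[0][: -len(key)]
            remapped[prefix + key] = key
            inferred.append(prefix)
        else:
            unmatched.append(key)        # e.g. fused MoE expert params with different naming
    # Unmatched keys inherit the most common prefix seen on matched keys.
    fallback = Counter(inferred).most_common(1)[0][0] if inferred else ""
    for key in unmatched:
        remapped[fallback + key] = key
    return remapped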

---------

Co-authored-by: GoldenGrapeGentleman <yueyuan@amd.com>