Fix MoE target_parameters module_count alignment (#3405, #3701) #499
GoldenGrapeGentleman wants to merge 4 commits into unslothai:main
Conversation
MoE ParamWrapper entries (from PEFT target_parameters) have lora_A/B/scaling but may lack a .base_layer module, leaving module_count short of lora_A_count. This caused a misleading 'LoRA count mismatch' diagnostic warning during merge. Fix: after the LoRA collection loop, count MoE expert entries that have lora_A/B but no module, aligning module_count with the lora_A/B/scaling counts. Also removes the TODO comment at the resolved location.

Tested on 8x AMD Instinct MI355X (gfx950), ROCm 7.1:
- gpt-oss-20b BF16 + MoE expert LoRA (46.2M trainable)
- save_pretrained_merged: success, no count mismatch warning
gemini-code-assist
left a comment
Code Review
This pull request provides a clean and effective fix for a LoRA module count mismatch that occurred with MoE models using target_parameters. The added logic correctly identifies uncounted MoE expert modules by checking for LoRA weights on modules without a registered base layer, and appropriately increments the module count. The change is well-targeted to .mlp.experts to avoid side effects. The removal of the corresponding TODO comment is also appropriate. The implementation is straightforward and directly resolves the issue described.
Hey @GoldenGrapeGentleman thanks for the contribution. I do remember that there's only one base_layer in the experts module (referring to gate_up_proj iirc?). This does affect transformers v5. Can we handle this specifically for nn.Parameter instead of doing a blanket addition? Also, can you test your fix for the following models: Thanks a lot :)
Hi @Datta0 Thanks for your quick response, I will do everything you mentioned, maybe after the Chinese Spring Festival. BTW, happy new year to you and the unsloth team, if anyone is from China (❁´◡`❁)
Happy new year @GoldenGrapeGentleman. Wishing you a great time :)
Per review feedback, nn.Linear targets always get .base_layer set (module != None), so stats.module is None reliably identifies nn.Parameter (ParamWrapper) targets without path string matching. Avoids incorrectly counting nn.ModuleList(nn.Linear) in transformers v4.
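For reference, a minimal sketch of what this alignment pass could look like. `LoraStats` and `align_module_count` are hypothetical names standing in for the real structures used by `create_lora_statistics`, so the actual code in `saving_utils.py` may differ:

```python
from dataclasses import dataclass
from typing import Dict, Optional

import torch.nn as nn


@dataclass
class LoraStats:
    """Hypothetical stand-in for the per-target LoRA statistics entry."""
    lora_A: Optional[nn.Module] = None
    lora_B: Optional[nn.Module] = None
    module: Optional[nn.Module] = None  # .base_layer for nn.Linear targets; None for ParamWrapper


def align_module_count(stats: Dict[str, LoraStats], module_count: int) -> int:
    """After the main collection loop, count nn.Parameter (ParamWrapper) targets
    that carry lora_A/lora_B but never got a .base_layer, so module_count lines
    up with the lora_A/lora_B/scaling counts."""
    for _stats in stats.values():
        if (
            _stats.lora_A is not None
            and _stats.lora_B is not None
            and _stats.module is None  # nn.Linear targets always have .base_layer set
        ):
            module_count += 1
    return module_count
```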
Hey @GoldenGrapeGentleman
…on_mapping

Models like Qwen3.5 store safetensors with 'model.language_model.layers.' prefix but load into memory as 'model.layers.'. Without _checkpoint_conversion_mapping, the merge function cannot match LoRA keys to safetensor keys, causing n_saved_modules=0. Fix: when no explicit mapping exists, infer the prefix by matching a LoRA key suffix against safetensor keys. If a consistent extra prefix is found, remap all LoRA keys accordingly.

Tested on imdatta0/small_qwen3_5_20b (MoE-only LoRA):
- Before: # of LoRAs = 200 does not match # of saved modules = 0
- After: merge succeeds cleanly
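To illustrate the mechanism this commit describes, here is a rough sketch assuming a plain suffix match; the helper name and signature are hypothetical and the real implementation in `saving_utils.py` may differ:

```python
from typing import Dict, Iterable, Optional


def infer_prefix_and_remap(
    lora_keys: Iterable[str],
    safetensor_keys: Iterable[str],
) -> Optional[Dict[str, str]]:
    """Infer a missing checkpoint prefix (e.g. keys stored under
    'model.language_model.layers.' on disk but 'model.layers.' in memory) by
    suffix-matching one LoRA key against the safetensor keys, then remap every
    LoRA key. Returns None when no consistent prefix is found so the caller can
    fall back to the original keys unchanged."""
    lora_keys = list(lora_keys)
    st_keys = set(safetensor_keys)

    inferred_prefix: Optional[str] = None
    for lora_key in lora_keys:
        for st_key in st_keys:
            if st_key != lora_key and st_key.endswith(lora_key):
                inferred_prefix = st_key[: -len(lora_key)]
                break
        if inferred_prefix is not None:
            break

    if inferred_prefix is None:
        return None

    remapped = {key: inferred_prefix + key for key in lora_keys}
    # Validate: at least one remapped key must actually exist in the shard,
    # otherwise treat the inference as failed.
    if not any(new_key in st_keys for new_key in remapped.values()):
        return None
    return remapped
```

For a Qwen3.5-style checkpoint, a runtime key such as `model.layers.0...weight` suffix-matches the on-disk key `model.language_model.layers.0...weight`, so the inferred prefix remaps all LoRA keys onto the stored naming.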
Hey @Datta0! I'm back from the holidays and excited to dive back into this 🎉 Thanks so much for testing with Qwen3.5 and providing the dummy model~

What happened with Qwen3.5
Great catch. The

Fix: prefix inference fallback
Added a fallback in

Verified on 8× MI355X (gfx950)
Previous models (on transformers 5.1.0):

Note: Qwen3.5 requires transformers 5.3.0.dev0, which breaks gpt-oss patches. Let me know if you'd like me to test anything else!
Datta0
left a comment
Hey @GoldenGrapeGentleman thanks for the changes and updating to fix Qwen3.5
Also can you point me to the change you're referring to on transformers (gate_up_proj to gate_up_projs)?
unsloth_zoo/saving_utils.py
Outdated
        if inferred_prefix is not None:
            break

    if inferred_prefix is not None:
NIT: What would be the right course of action if we can't find something suitable?
For example, say gate_up_proj changes to gate_up_projs, should we error here? Or do we want to handle it elsewhere?
When no suitable prefix can be inferred (e.g. gate_up_proj → gate_up_projs API change), _infer_prefix_and_remap returns None, and the caller falls back to returning keys unchanged — same behavior as before this PR.
I think silently falling back is the right call here rather than erroring, because:
- The downstream merge loop already handles mismatches explicitly (it counts n_saved_modules and raises a clear error if 0 modules matched)
- A hard error here would break models that genuinely have no prefix discrepancy
So the worst case is: prefix inference fails → keys pass through unchanged → merge loop catches the real mismatch and reports it. No silent data loss.
unsloth_zoo/saving_utils.py
Outdated
# consistent prefix is found, remap all LoRA keys accordingly.
if safetensor_keys:
    inferred_prefix = None
    for lora_key in lora_weights:
I guess we can move this out as a function, something like sanitize_module_name perhaps?
Done! ✅ Extracted it into _infer_prefix_and_remap() as a standalone function with a docstring. See commit 643f3d1.
I went with _infer_prefix_and_remap instead of sanitize_module_name since the function does two things: infer the missing prefix AND remap all keys. Happy to rename if you prefer a different name though!
    and _stats.lora_B is not None
    and _stats.module is None
):
    module_count += 1
So this pretty much fixes only the count part. I think my previous changes (in #450, perhaps) would automatically handle the right tensor and file placement, I presume.
Exactly right — this fix only addresses the module_count alignment so the mismatch warning no longer fires for nn.Parameter targets. The actual tensor placement and file writing is handled by your work in #450.
The two fixes are complementary: #450 handles the merge mechanics, this PR ensures the diagnostic counts are correct so users don't see a misleading warning during an otherwise successful merge.
Address review feedback from @Datta0:
- Extract prefix inference logic into _infer_prefix_and_remap() for clarity
- Return None when no prefix found (caller falls back to unchanged keys)
- Add docstring explaining the function's purpose and return contract
Let me test on Qwen3.5 and I can approve later today :)
…#555)

* Fix MoE module_count alignment and prefix inference for models without _checkpoint_conversion_mapping

  Two fixes extracted from PR #499 (by @GoldenGrapeGentleman), without the unrelated removals of vocab-resize and tied-embedding handling.

  1. MoE module_count alignment (#3405, #3701): Custom MoE LoRA wrappers (e.g. GPT-OSS expert LoRA) that match the fallback branch have lora_A/B/scaling but no .base_layer child, leaving module_count short. Count entries with module=None after the main loop to suppress the false "[Unsloth merge debug] LoRA count mismatch" warning.

  2. Prefix inference fallback (fixes #4294): Composite models like Qwen3.5 store safetensors with prefix "model.language_model." but runtime LoRA keys use "model.". When no _checkpoint_conversion_mapping exists, infer the prefix by suffix-matching a LoRA key against safetensor keys and remap. Includes validation that at least one remapped key exists in the shard to prevent bad inferences from partial suffix matches.

* Style: use any() and dict comprehension in _infer_prefix_and_remap

  Address review suggestions: replace the explicit loop with an any() generator expression for validation, and use a dict comprehension for remapping. Remove trailing pass statement.

* Use per-key prefix matching with fallback for unmatched keys

  Replace the global single-prefix rewrite with per-key matching:
  - Keys that already match a safetensor key are preserved as-is
  - Keys with a single unambiguous suffix match are remapped individually
  - Keys that cannot be suffix-matched (e.g. MoE fused expert params with different naming) inherit the most common inferred prefix from successfully matched keys

  This handles mixed-prefix shards correctly and avoids double-prefixing keys that are already in the right format, while still supporting MoE expert parameters that use different naming conventions.

---------

Co-authored-by: GoldenGrapeGentleman <yueyuan@amd.com>
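A minimal sketch of the per-key matching described in the last commit above, assuming a hypothetical helper name (`remap_lora_keys_per_key`) rather than the merged code's actual structure:

```python
from collections import Counter
from typing import Dict, Iterable, List


def remap_lora_keys_per_key(
    lora_keys: Iterable[str],
    safetensor_keys: Iterable[str],
) -> Dict[str, str]:
    """Per-key remapping: keys already present in the shard are kept as-is,
    keys with exactly one suffix match are remapped individually, and any
    leftover keys inherit the most common prefix inferred from matched keys."""
    lora_keys = list(lora_keys)
    st_keys = set(safetensor_keys)

    remapped: Dict[str, str] = {}
    unmatched: List[str] = []
    prefix_votes: Counter = Counter()

    for key in lora_keys:
        if key in st_keys:
            remapped[key] = key  # already in the right format; avoid double-prefixing
            continue
        candidates = [s for s in st_keys if s.endswith(key)]
        if len(candidates) == 1:
            remapped[key] = candidates[0]
            prefix_votes[candidates[0][: -len(key)]] += 1
        else:
            # e.g. fused MoE expert params whose names differ from the shard layout
            unmatched.append(key)

    fallback_prefix = prefix_votes.most_common(1)[0][0] if prefix_votes else ""
    for key in unmatched:
        remapped[key] = fallback_prefix + key
    return remapped
```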
Summary
Resolves the TODO: handle MoE target_parameters to align counts at line 410 of saving_utils.py.

Problem

When using PEFT target_parameters for MoE expert LoRA (e.g., gpt-oss), the create_lora_statistics function reports a misleading diagnostic warning. This happens because MoE ParamWrapper entries have lora_A/lora_B/scaling counted, but lack a .base_layer module, so module_count is never incremented for them.

Fix

After the LoRA collection loop, count MoE expert entries that have lora_A/lora_B but no .module, aligning module_count with the other counts. Only applies to .mlp.experts entries to avoid affecting non-MoE models.

Changes

saving_utils.py: +12 lines / -3 lines (removes TODO comment)

Verified on 8× AMD Instinct MI355X (gfx950), ROCm 7.1:
- save_pretrained_merged: ✅ success, no count mismatch warning
- (Previously, the modules=120, lora_A=144 mismatch warning was printed)
cc @danielhanchen @Datta0