fix qwen3 vl gradient accumulation #3598
Conversation
Summary of Changes (gemini-code-assist[bot]): This pull request resolves a critical bug affecting gradient accumulation for specific models, such as Qwen3 VL, when used with Unsloth.
Code Review
This pull request addresses a double-scaling issue with gradient accumulation for models like Qwen3 VL and Gemma3. The fix involves patching Trainer.__init__ to prevent transformers from applying its own loss scaling when Unsloth has already done so. The implementation correctly identifies and modifies the hasattr(..., 'accepts_loss_kwargs') check using source code manipulation, which is consistent with the existing patching style in this file.
My review includes a couple of suggestions to improve code maintainability by reducing duplication. Overall, the changes look good and effectively solve the described problem.
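For context on that patching style, here is a minimal, hypothetical sketch of the general pattern: read the source of Trainer.__init__, rewrite the accepts_loss_kwargs check so transformers skips its own loss / gradient_accumulation_steps scaling, and re-exec the edited source. The replaced string and the namespace handling below are illustrative assumptions, not the exact edit made in this PR.

```python
import inspect
import textwrap

import transformers.trainer
from transformers.trainer import Trainer

# Fetch and dedent the source of Trainer.__init__ so it can be re-exec'd
# as a standalone function definition.
src = textwrap.dedent(inspect.getsource(Trainer.__init__))

# Assumed placeholder edit: neutralize the accepts_loss_kwargs capability
# check so the Trainer does not divide an already-scaled loss by
# gradient_accumulation_steps a second time. The real PR rewrites the
# actual hasattr(...) expression found in the installed transformers version.
src = src.replace('hasattr(unwrapped_model, "accepts_loss_kwargs")', "False")

# Re-exec the edited source in a namespace that contains everything
# transformers.trainer already defines, then rebind the patched function.
namespace = dict(vars(transformers.trainer))
exec(src, namespace)
Trainer.__init__ = namespace["__init__"]
```

Patching at the source level, rather than subclassing Trainer, matches how the rest of this file adjusts transformers internals: the check is rewritten in place instead of being shadowed by an override.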
```python
# Import all variables that need importing
import transformers.trainer
items_in_trainer = dir(transformers.trainer)
good_items = []
for item in items_in_trainer:
    if item in function:
        good_items.append(item)
exec(
    "from transformers.trainer import ("
    + ", ".join(x for x in good_items)
    + ")",
```
This block of code for dynamically importing dependencies is duplicated in the new patch for Trainer.__init__ (lines 1758-1771). To improve maintainability and reduce code duplication, consider extracting this logic into a helper function.
You could define a helper function like this:
```python
def _import_dependencies_from_source(source_code: str, global_namespace: dict):
    """Dynamically imports dependencies found in source_code from transformers.trainer."""
    import transformers.trainer

    items_in_trainer = dir(transformers.trainer)
    good_items = [item for item in items_in_trainer if item in source_code]
    if good_items:
        exec(
            f"from transformers.trainer import ({', '.join(good_items)})",
            global_namespace,
        )
```

Then you could replace this block and the one at lines 1758-1771 with a call to this helper, for instance:
```python
_import_dependencies_from_source(function, globals())
```
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Qwen3 VL and other models declare accepts_loss_kwargs, which can influence whether or not the final loss is loss / gradient_accumulation_steps. Qwen3 VL and Gemma3 set this to False in transformers 4.57, which means the loss is scaled down twice. Unsloth has already scaled the loss by this point, so this PR changes the behavior so that accepts_loss_kwargs no longer takes priority.
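A toy example of the double scaling (numbers assumed for illustration, not taken from the PR):

```python
raw_loss = 2.0
gradient_accumulation_steps = 4

# Unsloth has already divided the loss by the accumulation steps.
loss_after_unsloth = raw_loss / gradient_accumulation_steps       # 0.5

# With accepts_loss_kwargs effectively False, transformers' Trainer
# divided again before this fix, shrinking the loss a second time.
double_scaled = loss_after_unsloth / gradient_accumulation_steps  # 0.125

# After this PR the second division is skipped, so the loss used for
# backprop and logging stays at 0.5.
print(loss_after_unsloth, double_scaled)
```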
The Qwen3 VL notebook now shows eval and train loss in line:
https://colab.research.google.com/drive/1pd2Boa3p-aY1u-plHSMegsQ-7CfPv0Rw?usp=sharing