
Commit 3c658a6

Fix: prevent load_in_fp8 kwarg from reaching Qwen3MoeForCausalLM constructor (Fix #3649) (#3654)
* Fix: remove load_in_fp8 from kwargs to prevent Qwen3Moe init TypeError (Fix #3649)
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
1 parent 05c6f91 commit 3c658a6

File tree: 1 file changed, +2 -0 lines changed


unsloth/models/vision.py

Lines changed: 2 additions & 0 deletions
@@ -654,6 +654,8 @@ def from_pretrained(

     raise_handler = RaiseUninitialized()
     if not fast_inference:
+        # Prevent load_in_fp8 from being forwarded into HF internal model loading
+        load_in_fp8 = kwargs.pop("load_in_fp8", None)
         model = auto_model.from_pretrained(
             model_name,
             device_map = device_map,
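For context, the fix relies on the standard dict.pop pattern: consuming a keyword argument out of **kwargs before delegating, so it never reaches a downstream constructor that does not accept it. A minimal sketch of the failure mode and the fix is below; Qwen3MoeStandIn, from_pretrained_buggy, and from_pretrained_fixed are illustrative stand-ins, not unsloth's or transformers' actual code.

# Sketch of the bug this commit fixes, with hypothetical stand-in names.

class Qwen3MoeStandIn:
    # Like Qwen3MoeForCausalLM's constructor, this does not accept load_in_fp8.
    def __init__(self, model_name, device_map=None):
        self.model_name = model_name
        self.device_map = device_map

def from_pretrained_buggy(model_name, **kwargs):
    # load_in_fp8 is still in kwargs, so it is forwarded and raises:
    # TypeError: __init__() got an unexpected keyword argument 'load_in_fp8'
    return Qwen3MoeStandIn(model_name, **kwargs)

def from_pretrained_fixed(model_name, **kwargs):
    # The fix: consume the kwarg here so it is never forwarded downstream.
    load_in_fp8 = kwargs.pop("load_in_fp8", None)
    return Qwen3MoeStandIn(model_name, **kwargs)

model = from_pretrained_fixed("Qwen/Qwen3-30B-A3B", load_in_fp8=True)  # OK

Using pop with a default of None also keeps the call safe when the caller never passed load_in_fp8, and leaves the value available locally should the loader want to act on it later.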
