Labels
bug (Something isn't working), pending (This problem is yet to be addressed)
Description
Reminder
- I have read the above rules and searched the existing issues.
System Info
- LLaMA-Factory version: latest (commit 1857fbd)
- Platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35
- Python version: 3.11.14
- PyTorch version: (installed in deleted llama-factory env)
- Transformers version: (installed in deleted llama-factory env)
- CUDA version: 13.0
- GPU: NVIDIA GeForce RTX 5090 (32607 MiB)
- Driver version: 580.76.05
Reproduction
llamafactory-cli train \
--stage sft \
--do_train True \
--model_name_or_path /root/autodl-tmp/models/Ministral-3-3B-Instruct-2512 \
--preprocessing_num_workers 16 \
--finetuning_type lora \
--template ministral3 \
--flash_attn auto \
--dataset_dir data \
--dataset mydata \
--cutoff_len 18000 \
--learning_rate 8e-05 \
--num_train_epochs 3.0 \
--max_samples 100000 \
--per_device_train_batch_size 1 \
--gradient_accumulation_steps 16 \
--lr_scheduler_type cosine \
--max_grad_norm 1.0 \
--logging_steps 5 \
--save_steps 100 \
--warmup_steps 50 \
--packing False \
--enable_thinking True \
--report_to none \
--output_dir saves/Ministral-3-3B-Instruct-2512/lora/train_2025-12-30-19-09-22 \
--bf16 True \
--plot_loss True \
--trust_remote_code True \
--ddp_timeout 180000000 \
--include_num_input_tokens_seen True \
--optim adamw_torch \
--lora_rank 8 \
--lora_alpha 16 \
--lora_dropout 0 \
--loraplus_lr_ratio 16 \
--lora_target all \
--freeze_vision_tower True \
--freeze_multi_modal_projector True \
--image_max_pixels 589824 \
--image_min_pixels 1024 \
--video_max_pixels 65536 \
--video_min_pixels 256
The following error occurs:
[INFO|2025-12-30 19:45:34] llamafactory.hparams.parser:465 >> Process rank: 0, world size: 1, device: cuda:0, distributed training: False, compute dtype: torch.bfloat16
Traceback (most recent call last):
File "/root/autodl-tmp/LLaMA-Factory/src/llamafactory/model/loader.py", line 78, in load_tokenizer
tokenizer = AutoTokenizer.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/llama-factory/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 1137, in from_pretrained
raise ValueError(
ValueError: Tokenizer class TokenizersBackend does not exist or is not currently imported.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/root/miniconda3/envs/llama-factory/bin/llamafactory-cli", line 7, in <module>
sys.exit(main())
^^^^^^
File "/root/autodl-tmp/LLaMA-Factory/src/llamafactory/cli.py", line 24, in main
launcher.launch()
File "/root/autodl-tmp/LLaMA-Factory/src/llamafactory/launcher.py", line 157, in launch
run_exp()
File "/root/autodl-tmp/LLaMA-Factory/src/llamafactory/train/tuner.py", line 126, in run_exp
_training_function(config={"args": args, "callbacks": callbacks})
File "/root/autodl-tmp/LLaMA-Factory/src/llamafactory/train/tuner.py", line 88, in _training_function
run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
File "/root/autodl-tmp/LLaMA-Factory/src/llamafactory/train/sft/workflow.py", line 49, in run_sft
tokenizer_module = load_tokenizer(model_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/autodl-tmp/LLaMA-Factory/src/llamafactory/model/loader.py", line 86, in load_tokenizer
tokenizer = AutoTokenizer.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/llama-factory/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 1137, in from_pretrained
raise ValueError(
ValueError: Tokenizer class TokenizersBackend does not exist or is not currently imported.
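This ValueError typically means the checkpoint's tokenizer_config.json declares a tokenizer_class (here TokenizersBackend) that the installed transformers release does not ship, so AutoTokenizer cannot resolve it. A minimal diagnostic sketch, assuming the cause is the declared class name (the helper get_tokenizer_class and the sample directory below are illustrative, not part of LLaMA-Factory):

```python
import json
import os
import tempfile

def get_tokenizer_class(model_dir: str):
    """Return the tokenizer_class declared in tokenizer_config.json, if any."""
    cfg_path = os.path.join(model_dir, "tokenizer_config.json")
    with open(cfg_path, encoding="utf-8") as f:
        return json.load(f).get("tokenizer_class")

# Simulate a checkpoint directory that declares the unresolvable class;
# in practice, point get_tokenizer_class at the real model directory,
# e.g. /root/autodl-tmp/models/Ministral-3-3B-Instruct-2512.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "tokenizer_config.json"), "w", encoding="utf-8") as f:
        json.dump({"tokenizer_class": "TokenizersBackend"}, f)
    print(get_tokenizer_class(d))  # → TokenizersBackend
```

If the printed class is not importable from the installed transformers, upgrading transformers to a release that knows the class (or to the version the model card recommends) is the usual remedy.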
Others
No response