
Conversation

LZHgrla (Contributor) commented on Apr 23, 2024

No description provided.

# Scheduler & Optimizer
batch_size = 1  # per_device
- accumulative_counts = 16
+ accumulative_counts = 128
Collaborator commented:

That is quite an extreme number; have you actually run with it? Does the loss still go down?

LZHgrla (Contributor, Author) replied on Apr 29, 2024:

@hhaAndroid I tested it and the loss decreases normally. Since this is the single-GPU config, the value is set this way to keep the global batch size at 128.
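For readers skimming the thread: this follows the usual gradient-accumulation arithmetic, where the effective (global) batch size is the product of the number of GPUs, the per-device batch size, and accumulative_counts. The lines below are only a sketch of that arithmetic; the variable names mirror the config fields but are not code from this PR.

num_gpus = 1               # single-GPU (gpu1) config
batch_size = 1             # per-device batch size
accumulative_counts = 128  # gradient accumulation steps

# Effective batch size seen by each optimizer step.
global_batch_size = num_gpus * batch_size * accumulative_counts
assert global_batch_size == 128  # matches the intended global bs = 128

# With the previous value of 16, a single GPU would only reach
# a global batch size of 1 * 1 * 16 == 16.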

@pppppM pppppM merged commit 648d63a into InternLM:main May 10, 2024
llkn-2 pushed a commit to llkn-2/xtuner that referenced this pull request Jul 31, 2024
…nternLM#598)

* Update llava_llama3_8b_instruct_qlora_clip_vit_large_p14_336_e1_gpu1_finetune.py

* Update llava_llama3_8b_instruct_quant_clip_vit_large_p14_336_e1_gpu1_pretrain.py