♻️Continued Pretraining
Also known as Continued Finetuning. Unsloth allows you to continually pretrain a model so it can learn a new language.
What is Continued Pretraining?
Continued pretraining takes an already pretrained base model and keeps training it on raw text from a new domain, such as another language, so the model absorbs that knowledge before any instruction finetuning.
Advanced Features:
Loading LoRA adapters for continued finetuning
from unsloth import FastLanguageModel

max_seq_length = 2048   # example context length
dtype = None            # None auto-detects (float16 or bfloat16)
load_in_4bit = True     # 4-bit quantization to reduce memory use

# Pass the name or path of a saved LoRA adapter to keep training it
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "LORA_MODEL_NAME",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
trainer = Trainer(...)  # set up your trainer as usual, then resume training
trainer.train()
Continued Pretraining & Finetuning the lm_head and embed_tokens matrices
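Teaching a model a new language works best when the embedding matrix (embed_tokens) and the output head (lm_head) are trained alongside the usual attention and MLP LoRA modules, with a smaller learning rate for those two matrices. The sketch below shows one way this can be wired up; the base model name, the Korean Wikipedia dataset, and the hyperparameters are illustrative assumptions, and the UnslothTrainer / UnslothTrainingArguments classes with an embedding_learning_rate option, plus "embed_tokens" and "lm_head" entries in target_modules, should be checked against your installed Unsloth version.

from unsloth import FastLanguageModel, UnslothTrainer, UnslothTrainingArguments
from datasets import load_dataset

max_seq_length = 2048
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/mistral-7b-v0.3-bnb-4bit",  # illustrative base model
    max_seq_length = max_seq_length,
    dtype = None,
    load_in_4bit = True,
)

# Include embed_tokens and lm_head so the embedding and output matrices
# are trained alongside the usual LoRA modules (helps with new languages)
model = FastLanguageModel.get_peft_model(
    model,
    r = 128,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",
                      "embed_tokens", "lm_head"],
    lora_alpha = 32,
    use_gradient_checkpointing = "unsloth",
)

# Illustrative raw-text corpus in the target language (Korean Wikipedia)
dataset = load_dataset("wikimedia/wikipedia", "20231101.ko", split = "train[:1%]")

trainer = UnslothTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    args = UnslothTrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 8,
        max_steps = 120,
        learning_rate = 5e-5,             # learning rate for the LoRA modules
        embedding_learning_rate = 5e-6,   # smaller rate for embed_tokens / lm_head
        optim = "adamw_8bit",
        lr_scheduler_type = "cosine",
        output_dir = "outputs",
    ),
)
trainer.train()

Keeping embedding_learning_rate roughly 2 to 10 times smaller than learning_rate stops the large embedding and output matrices from drifting too quickly while the LoRA modules adapt.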