Fine-Tuning LLaMA 3: Step-by-Step Documentation

Overview

This document explains the process of fine-tuning the LLaMA 3 model using the unsloth library. The notebook follows a structured approach, from installing dependencies to training the model.

Step 1: Install Required Python Packages

To ensure all necessary libraries are available, the notebook installs the following dependencies:

!pip install -U xformers --index-url https://download.pytorch.org/whl/cu121
!pip install --no-deps packaging ninja einops flash-attn trl peft accelerate bitsandbytes
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
  • xformers: Memory-efficient attention and other transformer operations.
  • trl, peft, accelerate: Hugging Face libraries for supervised fine-tuning (transformers itself is installed as a dependency of unsloth).
  • flash-attn, ninja, einops, packaging: Fast attention kernels and build-time helpers.
  • bitsandbytes: Quantization support for memory-efficient training.
  • unsloth: Optimized framework for fine-tuning large language models.
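
After installation, a quick sanity check (a minimal sketch, not part of the original notebook) confirms that a CUDA-enabled build of PyTorch is active, since fine-tuning requires a GPU:

import torch
print(torch.__version__)          # should report a CUDA-enabled build
print(torch.cuda.is_available())  # must be True before training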

Step 2: Import Required Libraries

The notebook imports essential libraries for data handling, training, and authentication:

import torch
import os
import json
import pandas as pd
from datasets import Dataset, DatasetDict, load_dataset
from huggingface_hub import notebook_login
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

Step 3: Login to Hugging Face

To access datasets and model checkpoints, the notebook logs into the Hugging Face Hub:

notebook_login()

This prompts the user to enter their Hugging Face API token.
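
Outside of a notebook, the same login can be performed non-interactively. The sketch below assumes the token is stored in an HF_TOKEN environment variable (an assumption, not part of the original notebook):

from huggingface_hub import login
login(token=os.environ["HF_TOKEN"])  # HF_TOKEN is an assumed environment variable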

Step 4: Load Dataset

The notebook loads and processes a dataset for fine-tuning:

dataset = load_dataset("path/to/dataset")
  • The dataset must be formatted into the prompt structure the model is trained on; a common approach is to collapse each example into a single text column, as sketched below.
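
As an illustration, the snippet below collapses each record into a single "text" column that SFTTrainer can consume in Step 7. The column names instruction and response are assumptions about the dataset schema and should be adjusted to match the actual data:

def format_example(example):
    # Column names "instruction" and "response" are hypothetical.
    example["text"] = (
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Response:\n{example['response']}"
    )
    return example

dataset = dataset.map(format_example)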

Step 5: Prepare the Model for Fine-Tuning

The model is loaded using FastLanguageModel from unsloth:

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Meta-Llama-3-8B",  # LLaMA 3 ships as 8B/70B; there is no 7B
    max_seq_length=2048,
    load_in_4bit=True,
)

This loads LLaMA 3 8B in 4-bit precision for memory efficiency. Note that from_pretrained returns both the model and its tokenizer, which the later steps use.
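
Unsloth fine-tuning then attaches LoRA adapters so that only a small set of added weights is trained. The call below is a minimal sketch with commonly used values; the rank, alpha, and target module list are typical choices rather than settings taken from this notebook:

model = FastLanguageModel.get_peft_model(
    model,
    r=16,                    # LoRA rank (assumed value)
    lora_alpha=16,           # scaling factor (assumed value)
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)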

Step 6: Define Training Arguments

The fine-tuning process is configured with training arguments:

training_args = TrainingArguments(
    output_dir="./results",          # where checkpoints are written
    per_device_train_batch_size=4,   # batch size per GPU
    num_train_epochs=3,              # full passes over the training set
    save_steps=500,                  # save a checkpoint every 500 steps
    logging_dir="./logs",            # directory for training logs
)
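
If GPU memory is tight, a smaller per-device batch can be paired with gradient accumulation to keep the same effective batch size. This variant is a sketch using standard TrainingArguments parameters, not a configuration from the original notebook:

training_args = TrainingArguments(
    output_dir="./results",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=2,  # effective batch size remains 4
    num_train_epochs=3,
    save_steps=500,
    logging_dir="./logs",
    fp16=True,                      # mixed precision reduces memory use
)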

Step 7: Train the Model

Using SFTTrainer, the model is fine-tuned on the dataset. Since load_dataset returns a DatasetDict, the train split is selected explicitly, and the tokenizer and text column are passed through:

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset["train"],
    dataset_text_field="text",  # the column produced in Step 4
    max_seq_length=2048,
    args=training_args,
)
trainer.train()

Step 8: Save and Upload the Model

After training, the model is saved and optionally uploaded to Hugging Face Hub:

model.save_pretrained("./fine_tuned_model")
tokenizer.save_pretrained("./fine_tuned_model")
model.push_to_hub("my-fine-tuned-llama3")  # uses the Hugging Face login from Step 3
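
To use the saved model for generation, it can be reloaded through unsloth and switched into inference mode. The prompt string below is purely illustrative (a minimal sketch, assuming the Step 4 prompt format):

model, tokenizer = FastLanguageModel.from_pretrained("./fine_tuned_model")
FastLanguageModel.for_inference(model)  # enable unsloth's faster inference path

inputs = tokenizer(
    "### Instruction:\nSummarize this repository.\n\n### Response:\n",
    return_tensors="pt",
).to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))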

Conclusion

This process enables efficient fine-tuning of LLaMA 3 using unsloth and Hugging Face tools, combining 4-bit quantization with LoRA adapters and other memory-efficient training strategies.
