Concepts and examples on using and training LLMs
Advanced Retrieval-Augmented Generation (RAG) through practical notebooks, using the power of LangChain, OpenAI GPTs, Meta Llama 3, and agents.
Offline-first knowledge workspace for reading, chatting with, and organizing research papers.
This repository contains code for fine-tuning the Llama 3 8B model using Alpaca prompts to generate Java code. The code is based on a Google Colab notebook.
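As a rough illustration of what such Alpaca-style prompting looks like, here is a minimal sketch of the standard Alpaca template applied to a Java-generation example; the field names follow the common Alpaca format, and the example content is hypothetical rather than taken from the repository.

```python
# Sketch of an Alpaca-style prompt for instruction fine-tuning.
# The template fields (instruction, input, response) follow the standard
# Alpaca format; the example content below is hypothetical.
ALPACA_TEMPLATE = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Input:
{input}

### Response:
{response}"""

example = {
    "instruction": "Write a Java method that reverses a string.",
    "input": "",
    "response": "public static String reverse(String s) { return new StringBuilder(s).reverse().toString(); }",
}

prompt = ALPACA_TEMPLATE.format(**example)
print(prompt)
```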
This repository includes a variety of notebooks designed for tasks ranging from generative AI text models and image generation to model training, data analysis, and visualization.
Python notebook for a prototype that collects information from a website selected by the user; after collection, the data is sent to Llama3.
Repository for running LLMs efficiently on Apple silicon (M1, M2, M3). Features a Jupyter notebook for setting up Meta-Llama-3 with the MLX framework, along with an installation guide and performance tips. Aims to optimize LLM performance on Apple silicon for developers and researchers.
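For a sense of what that MLX workflow looks like, below is a minimal sketch using the `mlx-lm` package; the quantized model ID and the `generate` keyword arguments are assumptions and may differ from the repository's notebook.

```python
# Minimal MLX example (assumes `pip install mlx-lm` on an Apple-silicon Mac).
# The model ID and generate() arguments are assumptions, not taken from the repo.
from mlx_lm import load, generate

# Load a 4-bit quantized Llama 3 checkpoint from the mlx-community hub.
model, tokenizer = load("mlx-community/Meta-Llama-3-8B-Instruct-4bit")

response = generate(
    model,
    tokenizer,
    prompt="Explain what the MLX framework is in one sentence.",
    max_tokens=100,
)
print(response)
```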
Llama 3.1 8B fine-tuning using Unsloth, which offers up to 2x faster fine-tuning, in a Jupyter Notebook.
A demo Jupyter Notebook showcasing a simple local RAG (Retrieval Augmented Generation) pipeline to chat with your PDFs.
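A minimal sketch of the local PDF retrieval step such a pipeline relies on, assuming `pypdf` and `sentence-transformers` as the building blocks (the notebook itself may use different components and chunking):

```python
# Minimal local retrieval sketch for chatting with a PDF.
# Library choices (pypdf, sentence-transformers) and the file name are
# assumptions; the original notebook may use different components.
import numpy as np
from pypdf import PdfReader
from sentence_transformers import SentenceTransformer

# 1. Extract and chunk the PDF text into fixed-size pieces.
reader = PdfReader("paper.pdf")  # hypothetical input file
text = " ".join(page.extract_text() or "" for page in reader.pages)
chunks = [text[i:i + 500] for i in range(0, len(text), 500)]

# 2. Embed the chunks once.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

# 3. Retrieve the chunks most similar to a question (cosine similarity).
question = "What is the paper's main contribution?"
q_vec = embedder.encode([question], normalize_embeddings=True)[0]
scores = chunk_vecs @ q_vec
top_chunks = [chunks[i] for i in np.argsort(scores)[::-1][:3]]

# 4. The retrieved chunks would then be placed into the prompt of a
#    local LLM (e.g. Llama 3) to ground its answer.
print("\n---\n".join(top_chunks))
```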
This repository contains a Jupyter notebook that demonstrates how to build a retrieval-based question-answering system using LangChain and Hugging Face. The notebook walks through setting up the environment, loading and processing documents, generating embeddings, and querying the system to retrieve relevant information from documents.
The codebase for the "Procedural Quest Generation for Role-Playing Games using Large Language Models" project, including the datasets, notebooks, scripts, logs, outputs, and reports.
This document explains the process of fine-tuning the LLaMA 3 model using the unsloth library. The notebook follows a structured approach, from installing dependencies to training the model.
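A condensed sketch of that flow with Unsloth plus TRL's `SFTTrainer`; the model ID, dataset, and hyperparameters are placeholders rather than the notebook's actual configuration, and the `SFTTrainer` keyword arguments follow older TRL releases (newer releases move them into `SFTConfig`):

```python
# Sketch of LoRA fine-tuning with Unsloth + TRL's SFTTrainer.
# Model name, dataset, and hyperparameters are assumptions.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load a 4-bit base model (assumed checkpoint ID).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
)

def to_text(example):
    # Collapse Alpaca-style fields into a single training string.
    return {"text": f"### Instruction:\n{example['instruction']}\n\n### Response:\n{example['output']}"}

# Assumed dataset; any instruction dataset with a "text" column works.
dataset = load_dataset("yahma/alpaca-cleaned", split="train[:1%]").map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```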
Dive into the world of advanced language understanding with Advanced_RAG. These Python notebooks offer a guided tour of Retrieval-Augmented Generation (RAG) using the Langchain framework, perfect for enhancing Large Language Models (LLMs) with rich, contextual knowledge.
A powerful and adaptable chatbot built on Meta's Llama 3.3-70B model, enabling advanced conversational AI for Q&A, summarization, and chat tasks. Easily customizable and ready for integration in Python notebooks for rapid development and experimentation.
An LLM-powered augmented generation suite leveraging LangChain, Ollama, and vector databases to enhance response quality through caching, contextual memory, and retrieval-based methods. This collection of Jupyter notebooks showcases modular techniques for building intelligent, memory-efficient generative systems with real-time semantic awareness.
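To illustrate two of those techniques (response caching and contextual memory) in isolation, here is a small sketch on top of the `ollama` Python client; the model name and the exact-match cache policy are assumptions, and the actual notebooks also use vector databases for retrieval:

```python
# Sketch of a cached, memory-aware chat loop using the `ollama` client.
# Model name and cache policy are assumptions, not the notebooks' design.
import hashlib
import ollama

response_cache = {}   # exact-match cache keyed on the conversation so far
history = []          # contextual memory replayed on every turn

def chat(user_message: str, model: str = "llama3") -> str:
    history.append({"role": "user", "content": user_message})
    key = hashlib.sha256(repr(history).encode()).hexdigest()

    if key in response_cache:
        # Serve repeated requests from the cache instead of the model.
        answer = response_cache[key]
    else:
        reply = ollama.chat(model=model, messages=history)
        answer = reply["message"]["content"]
        response_cache[key] = answer

    history.append({"role": "assistant", "content": answer})
    return answer

print(chat("Summarize what retrieval-augmented generation is."))
print(chat("Now give a one-line example use case."))
```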