🎉 30-Day GenAI & LLM Marathon

Unleash the power of Generative AI and Large Language Models in just 30 days!


🚀 About This Marathon

Welcome to a transformative journey into the world of Generative AI and LLMs! Over the next 30 days, we'll dive deep into cutting-edge topics, from the nuts and bolts of Transformer architectures to building conversational agents and deploying production-ready AI systems.

What you'll get:

  • Daily Deep Dives: Each day features a focused topic, real-world projects, and hands-on coding challenges.
  • Interactive Notebooks: Detailed Jupyter notebooks with clear explanations, examples, and exercises.
  • Practical Projects: From prompt engineering to building recommendation systems, every project is designed to boost your practical skills.
  • Community & Collaboration: Share your progress, contribute ideas, and connect with like-minded enthusiasts.

πŸ—“οΈ Marathon Roadmap Overview

Here's a sneak peek at the daily topics along with their key highlights and deliverables:

| Day & Date | Topic | Snapshot Highlights |
| --- | --- | --- |
| Day 1 (Feb 3) | Intro to GenAI & LLMs | Understand the basics, discover applications & meet the pioneers (OpenAI, Hugging Face). |
| Day 2 (Feb 4) | Transformer Architecture | Learn about self-attention, encoder-decoder dynamics & positional encoding. |
| Day 3 (Feb 5) | Hugging Face Essentials | Work with pre-trained models and tokenizers & explore the Hugging Face Hub. |
| Day 4 (Feb 6) | Prompt Engineering 101 | Craft effective prompts and compare zero-shot vs. few-shot learning strategies. |
| Day 5 (Feb 7) | Fine-Tuning LLMs | Adapt models like GPT-2/T5 on custom datasets and explore fine-tuning strategies. |
| Day 6 (Feb 8) | Evaluation Metrics for LLMs | BLEU, ROUGE, perplexity, and model comparison. |
| Day 7 (Feb 9) | Introduction to LangChain | Build LLM-powered apps and modular workflows. |
| Day 8 (Feb 10) | Vector Databases | Store & retrieve embeddings with tools like Pinecone or Weaviate. |
| Day 9 (Feb 11) | Retrieval-Augmented Generation (RAG) | Combine retrieval and generation for Q&A tasks. |
| Day 10 (Feb 12) | Advanced Prompt Engineering | Advanced techniques: chain-of-thought, prompt chaining. |
| Day 11 (Feb 13) | Ethical AI in Generative Models | Address bias and fairness, and explore mitigation strategies. |
| Day 12 (Feb 14) | Model Distillation | Compress models via techniques like knowledge distillation. |
| Day 13 (Feb 15) | Multimodal Models | Explore models such as CLIP and DALL-E for image-text tasks. |
| Day 14 (Feb 16) | LLM Deployment Basics | Deploy models using FastAPI/Flask and containerization tools like Docker. |
| Day 15 (Feb 17) | Monitoring LLMs in Production | Use tools like MLflow for drift detection and performance monitoring. |
| Day 16 (Feb 18) | Building a Domain-Specific Chatbot | Fine-tune chatbots for specific domains. |
| Day 17 (Feb 19) | Advanced RAG Pipelines | Incorporate hybrid search and query rewriting for robust Q&A. |
| Day 18 (Feb 20) | LLM Security | Learn to defend against prompt injection and adversarial attacks. |
| Day 19 (Feb 21) | Low-Code LLM Tools | Rapid prototyping with tools like the OpenAI API, ChatGPT Plugins, and Hugging Face Spaces. |
| Day 20 (Feb 22) | LLM Explainability | Use SHAP, LIME, and attention maps to gain transparency into model decisions. |
| Day 21 (Feb 23) | Introduction to Document Summarization | Explore extractive vs. abstractive methods for summarizing text. |
| Day 22 (Feb 24) | Fine-Tuning a Summarization Model | Adapt pre-trained models like BART/T5 for domain-specific summarization tasks. |
| Day 23 (Feb 25) | Building a Summarization API | Deploy summarization models as REST APIs. |
| Day 24 (Feb 26) | Introduction to Recommendation Systems | Understand collaborative filtering and content-based approaches to recommendations. |
| Day 25 (Feb 27) | Fine-Tuning a Recommendation Model | Customize LLMs for personalized recommendations using domain-specific datasets. |
| Day 26 (Feb 28) | Building a Recommendation API | Deploy recommendation systems with FastAPI/Flask. |
| Day 27 (Mar 1) | Introduction to Conversational Agents | Explore the basics of dialogue systems and the importance of memory in conversations. |
| Day 28 (Mar 2) | Integrating Memory into Conversational Agents | Add short-term and long-term memory to enhance multi-turn conversations. |
| Day 29 (Mar 3) | Fine-Tuning a Conversational Agent | Customize LLMs on dialogue datasets for domain-specific interactions. |
| Day 30 (Mar 4) | Deploying a Conversational Agent | Make your conversational agent accessible as an API via FastAPI/Flask. |

Scroll down for full details on each day’s challenge!
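
To give a flavour of what the daily notebooks cover, here is a minimal sketch of the Day 4 idea of comparing zero-shot and few-shot prompting with the Hugging Face pipeline API. The model (gpt2) and the example prompts are illustrative placeholders, not material taken from the marathon's notebooks:

```python
# Hedged sketch: zero-shot vs. few-shot prompting with a small open model.
# "gpt2" and the example prompts are placeholders chosen for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Zero-shot: ask the model directly, with no worked examples in the prompt.
zero_shot = (
    "Classify the sentiment of this review as positive or negative.\n"
    "Review: 'The food was cold and the service was slow.'\nSentiment:"
)

# Few-shot: prepend a couple of labelled examples before the real query.
few_shot = (
    "Review: 'Absolutely loved the pasta!'\nSentiment: positive\n"
    "Review: 'The waiter ignored us for an hour.'\nSentiment: negative\n"
    "Review: 'The food was cold and the service was slow.'\nSentiment:"
)

for name, prompt in [("zero-shot", zero_shot), ("few-shot", few_shot)]:
    completion = generator(prompt, max_new_tokens=10, do_sample=False)[0]["generated_text"]
    print(f"--- {name} ---")
    print(completion[len(prompt):].strip())
```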


💡 Key Themes & Technologies

This marathon covers a wide spectrum of topics and tools, including:

  • Generative AI Fundamentals: From conceptual overviews to practical applications.
  • Transformer Models: Deep dives into architecture, self-attention, and real-world use cases.
  • Prompt Engineering: Techniques to optimize outputs from your LLMs.
  • Fine-Tuning & Evaluation: Adapt models for specific tasks and measure performance with metrics like BLEU, ROUGE, and perplexity (see the evaluation sketch after this list).
  • LangChain & RAG: Build dynamic systems that combine retrieval with generation.
  • Deployment & Monitoring: Transform models into production-grade APIs and monitor their performance.
  • Ethics & Security: Address bias and fairness, and ensure robust AI practices.
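
Since several of these themes are easiest to grasp in code, here is a hedged sketch of the evaluation metrics mentioned above: ROUGE via the Hugging Face `evaluate` library and perplexity derived from a causal language model's loss. The texts and the gpt2 model are placeholders, and the snippet assumes the `evaluate` and `rouge_score` packages are installed alongside `transformers` and `torch`:

```python
# Hedged sketch of two evaluation metrics: ROUGE (n-gram overlap with a
# reference) and perplexity (exp of the model's average cross-entropy).
# Assumes: pip install evaluate rouge_score transformers torch
import evaluate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# ROUGE compares a generated summary against a reference summary.
rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["the cat sat on the mat"],
    references=["a cat was sitting on the mat"],
)
print(scores)  # rouge1 / rouge2 / rougeL scores between 0 and 1

# Perplexity: feed text through a causal LM with itself as the labels;
# the returned loss is the mean cross-entropy, and exp(loss) is perplexity.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Generative AI is transforming how software gets built.", return_tensors="pt")
with torch.no_grad():
    loss = model(**inputs, labels=inputs["input_ids"]).loss
print("perplexity:", torch.exp(loss).item())
```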

Tech Stack Highlights:

  • Language: Python 🐍
  • Frameworks/Libraries: PyTorch, TensorFlow, Hugging Face Transformers, FastAPI, Flask (see the serving sketch below this list)
  • Visualization & Data: Matplotlib, Seaborn, NumPy, Pandas
  • Interactive Environment: Jupyter Notebook / Google Colab
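
To show how a few of these pieces fit together, here is a minimal, hedged sketch of serving a Hugging Face summarization pipeline behind a FastAPI endpoint. The model name, route, and request fields are illustrative assumptions, not taken from the repository; the Day 23 material covers a fuller version:

```python
# Hedged sketch: a Hugging Face summarization pipeline served with FastAPI.
# The model ("sshleifer/distilbart-cnn-12-6"), the /summarize route, and the
# request fields are illustrative choices, not taken from this repository.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI(title="Summarization API (sketch)")
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

class SummarizeRequest(BaseModel):
    text: str
    max_length: int = 60  # cap on the summary length, in tokens

@app.post("/summarize")
def summarize(req: SummarizeRequest):
    result = summarizer(req.text, max_length=req.max_length, min_length=10, do_sample=False)
    return {"summary": result[0]["summary_text"]}

# Run locally (assuming this file is saved as app.py):
#   uvicorn app:app --reload
```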

📂 Repository Structure

The repository is organized as follows:

GenAI_LLM_Marathon/
├── Day_1/
│   ├── Intro_to_Generative_AI.ipynb
│   └── README.md
├── Day_2/
│   ├── Transformer_Architecture.ipynb
│   └── README.md
├── Day_3/
│   ├── HuggingFace_Essentials.ipynb
│   └── README.md
├── ... (more daily folders)
├── requirements.txt
└── README.md  (this file)

Each day’s folder includes:

  • Jupyter Notebooks: Detailed explanations, code examples, and exercises.
  • Day-Specific README: Summaries of concepts, tasks, and key takeaways.
  • Datasets/Resources: Provided directly or linked for further exploration.

🔧 Getting Started

To kick off the marathon:

  1. Clone the Repository:

    git clone https://github.com/sayande01/GenAI_LLM_Marathon.git
    cd GenAI_LLM_Marathon
  2. Install Dependencies:

    pip install -r requirements.txt
  3. Launch Your Notebook Environment:

    • Open the Jupyter Notebooks in any daily folder using Jupyter Notebook or Google Colab.
    • Run the code, experiment, and learn!
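
Optionally, a quick smoke test can confirm that the core libraries imported cleanly before Day 1. This assumes torch and transformers appear in requirements.txt; adjust the imports to whatever the file actually pins:

```python
# Optional smoke test: run in a notebook cell or a Python shell.
# Assumes torch and transformers were installed via requirements.txt.
import torch
import transformers

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("transformers:", transformers.__version__)

# Pull a small default sentiment model and run one prediction end to end.
from transformers import pipeline
classifier = pipeline("sentiment-analysis")
print(classifier("This marathon is off to a great start!"))
```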

🤝 Join the Conversation

This marathon is all about collaboration and shared learning. You can:

  • Contribute: Fork the repository, add your ideas or improvements, and submit pull requests.
  • Share Feedback: Use the Issues section to discuss challenges, share insights, or ask for help.
  • Spread the Word: If you find this project valuable, give it a star ⭐ and share your progress on social media!

✨ Why I’m Running This Marathon

I created this challenge to:

  • Demystify Generative AI & LLMs: Provide an accessible, hands-on learning experience.
  • Bridge Theory & Practice: Equip you with both the knowledge and skills needed for real-world AI projects.
  • Foster Community: Build a collaborative environment where we can all learn from one another.

If you're passionate about AI and eager to explore the next frontier of technology, you're in the right place!


Thank you for joining the 30-Day GenAI & LLM Marathon. Let’s code, learn, and innovate together!

Happy Learning & Coding! 🚀

