From the course: Generative AI: Introduction to Diffusion Models for Text Generation
Challenge: Use a pretrained diffusion model for text generation - Gemini Tutorial
(upbeat music) - [Instructor] In this challenge, the task is to demonstrate how to use a pre-trained diffusion model to fill a masked token using the Hugging Face Transformers library. Fill-mask is a type of text generation that involves predicting missing words or tokens in a sentence. This is usually achieved using masked language models trained to reconstruct the original text from partially masked input. ModernBERT-Diffusion is a masked language model fine-tuned on a large corpus of text. It uses diffusion-based techniques to enhance performance in text denoising and token prediction tasks. So what is your task? You are to complete the following steps to predict the masked word in the sentence "The future of AI is [MASK]," where [MASK] is a placeholder for the masked token. To implement the code for this task, you start by importing the necessary libraries, and then you load the pre-trained model and tokenizer. Then you define the input: you create a string variable. You can name it input_text or whatever…
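The steps described above can be sketched with the Transformers `fill-mask` pipeline. Note this is a minimal illustration, not the instructor's solution: the exact Hugging Face model ID for ModernBERT-Diffusion is not given in the transcript, so `bert-base-uncased` is used here as a stand-in masked language model.

```python
from transformers import pipeline

# Load a fill-mask pipeline with a pre-trained masked language model.
# "bert-base-uncased" is a stand-in; the course's diffusion-fine-tuned
# model ID would go here instead.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Define the input sentence with the model's mask placeholder token.
input_text = "The future of AI is [MASK]."

# The pipeline returns the top candidate tokens with their scores.
predictions = fill_mask(input_text)

for p in predictions:
    print(f"{p['token_str']}: {p['score']:.4f}")
```

Running this prints the model's top-ranked completions for the masked position, each with a confidence score. Swapping in a different model only requires changing the `model` argument (and using that model's mask token if it differs from `[MASK]`).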