From the course: Generative AI: Introduction to Diffusion Models for Text Generation

Solution: Use a pretrained diffusion model for text generation

(upbeat music) - [Instructor] Now let's solve the challenge by first importing the necessary libraries: from transformers, import AutoTokenizer and AutoModelForMaskedLM. The next step is to load the tokenizer. Let's accept the suggestion. But the model we are using in this instance is ModernBERT-diffusion. This is what it looks like on the Hugging Face interface. Copy. Let's clear this. (keyboard clicking) This is our input_text, and we are placing just one mask because we want the model to try to predict whatever this mask is. Now, let's go ahead and tokenize the input. You can see what that looks like; it's always a good idea. And we can see that the input is now in tensors. Now, let's generate output by passing the input to the model that we defined above, and let's get the logits. Next, go ahead and identify the mask token: mask_token_index =. Let's verify this code: (inputs.input_ids == tokenizer.mask_token_id)[0]. Okay, that's correct. And then predicted_token_id = logits, list…
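The steps narrated above can be sketched in full as follows. This is a minimal masked-LM fill-in example, not the instructor's exact notebook: the checkpoint name `bert-base-uncased` and the sample sentence are stand-ins, since the exact Hub id for the course's ModernBERT-diffusion model is not shown in the transcript.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Stand-in checkpoint: any masked-LM model works here; the course uses a
# ModernBERT-diffusion model whose Hub id is not given in the transcript.
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Place exactly one mask token for the model to predict.
input_text = f"The capital of France is {tokenizer.mask_token}."

# return_tensors="pt" returns PyTorch tensors, as seen in the video.
inputs = tokenizer(input_text, return_tensors="pt")

# Forward pass; the logits score every vocabulary token at every position.
with torch.no_grad():
    logits = model(**inputs).logits

# Find the position of the mask token in the input ids,
# then take the highest-scoring vocabulary token at that position.
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
print(tokenizer.decode(predicted_token_id))
```

The `nonzero(as_tuple=True)` call turns the boolean mask-position comparison into an index tensor, which is what lets us slice the logits at just the masked position.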
