From the course: Programming Generative AI: From Variational Autoencoders to Stable Diffusion with PyTorch and Hugging Face
Inference with an autoencoder
- [Instructor] To show you why an autoencoder is not really a generative model in the sense we'll be using the term, and why it's often thought of more as a compression algorithm (it compresses its inputs into some latent space), we can take some random latent vector. This is analogous to the sampling process. Let's say we take a random latent vector of the right dimensionality; our latent space is 64. We can "sample" by just passing this random latent vector to our network. We're not using the encoder in this generation process. We just pass in the random vector and reshape the output into an image, because remember, the output is just a stretched-out vector, so we reshape it back into the 28 by 28 shape. We bring it onto the CPU, since we are doing our learning on the GPU, then detach and convert to NumPy. And if we run this code, we have a sample. If we inspect the sample, it's a bunch of numbers. Importantly, for this image, it's a float32. If we want…
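The steps described above can be sketched in PyTorch. The decoder here is a hypothetical stand-in for the course's trained autoencoder decoder (the actual architecture is defined in an earlier video); what matters is the inference flow: sample a random 64-dimensional latent vector, decode it without touching the encoder, reshape the flat output to 28 by 28, and move it to the CPU as a NumPy array.

```python
import torch
import torch.nn as nn

# Hypothetical decoder standing in for the trained autoencoder's decoder:
# it maps a 64-dimensional latent vector to a flattened 28x28 image.
latent_dim = 64
decoder = nn.Sequential(
    nn.Linear(latent_dim, 256),
    nn.ReLU(),
    nn.Linear(256, 28 * 28),
    nn.Sigmoid(),
)

# Use the GPU if one is available, mirroring the course setup.
device = "cuda" if torch.cuda.is_available() else "cpu"
decoder = decoder.to(device)

# Sample a random latent vector of the right dimensionality (64 here).
z = torch.randn(1, latent_dim, device=device)

# "Generate" by decoding the random vector -- the encoder is never used.
with torch.no_grad():
    sample = decoder(z)

# The output is a stretched-out vector; reshape it back into 28x28,
# bring it onto the CPU, detach from the graph, and convert to NumPy.
image = sample.reshape(28, 28).cpu().detach().numpy()
print(image.dtype)  # float32, as noted in the video
```

Because this decoder has random (untrained) weights in the sketch, the resulting image is noise; with the course's trained weights it would produce a plausible-looking, if blurry, sample, which is exactly the limitation the video is demonstrating.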
Contents

- Topics (54s)
- Representing images as tensors (7m 45s)
- Desiderata for computer vision (4m 57s)
- Features of convolutional neural networks (7m 56s)
- Working with images in Python (10m 20s)
- The Fashion-MNIST dataset (4m 48s)
- Convolutional neural networks in PyTorch (10m 43s)
- Components of a latent variable model (LVM) (8m 57s)
- The humble autoencoder (5m 29s)
- Defining an autoencoder with PyTorch (5m 42s)
- Setting up a training loop (9m 47s)
- Inference with an autoencoder (4m 16s)
- Look ma, no features! (8m 21s)
- Adding probability to autoencoders (VAE) (4m 49s)
- Variational inference: Not just for autoencoders (7m 20s)
- Transforming an autoencoder into a VAE (13m 26s)
- Training a VAE with PyTorch (13m 33s)
- Exploring latent space (11m 37s)
- Latent space interpolation and attribute vectors (12m 30s)