From the course: Programming Generative AI: From Variational Autoencoders to Stable Diffusion with PyTorch and Hugging Face

Inference with an autoencoder

- [Instructor] To show you why an autoencoder is not really a generative model in the sense we'll be using the term, and is often thought of more as a compression algorithm that compresses its inputs into some latent space, we can take a random latent vector. This stands in as an analog of the sampling process. Let's say our latent space has dimensionality 64, so we draw a random 64-dimensional vector. We can "sample" by simply passing this random latent vector into our network; we're not using the encoder at all in this generation process. We just pass in the random vector and reshape the result into an image, because, remember, the output is just a stretched-out vector, so we reshape it back into the 28 by 28 shape. We bring it onto the CPU, since we are doing our learning on the GPU, detach it, and convert it to NumPy. And if we run this code, we have a sample. If we inspect the sample, it's a bunch of numbers. Importantly, for this image, the dtype is float32. If we want…
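The steps narrated above can be sketched in PyTorch. The decoder below is a hypothetical stand-in (the course's actual trained autoencoder is not shown here); the specific layer sizes are assumptions, but the workflow — random 64-dimensional latent vector, decode, reshape to 28 by 28, move to CPU, detach, convert to NumPy — follows the transcript:

```python
import torch
import torch.nn as nn

# Hypothetical decoder standing in for the trained autoencoder's decoder:
# it maps a 64-dim latent vector to a flattened 28x28 image (784 values).
latent_dim = 64
decoder = nn.Sequential(
    nn.Linear(latent_dim, 256),
    nn.ReLU(),
    nn.Linear(256, 28 * 28),
    nn.Sigmoid(),
)

# "Sampling": draw a random latent vector of the right dimensionality.
# The encoder plays no part in this generation process.
z = torch.randn(1, latent_dim)
flat = decoder(z)  # output is a stretched-out vector of 784 values

# Reshape back into the 28x28 image shape, bring it onto the CPU
# (no-op here, but needed if training ran on the GPU), detach, and
# convert to NumPy. The resulting array's dtype is float32.
sample = flat.reshape(28, 28).cpu().detach().numpy()
print(sample.shape, sample.dtype)
```

Because the decoder was never trained to map arbitrary random latents to realistic digits, the resulting image is typically noise-like — which is the point the instructor is making about why a plain autoencoder is not a generative model.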
