From the course: Programming Generative AI: From Variational Autoencoders to Stable Diffusion with PyTorch and Hugging Face
Look ma, no features!
- One of the really powerful features of neural networks, and deep learning in general, is their ability to perform automatic feature learning. Gone are the days when you had to handcraft features capturing the important aspects of text, images, or whatever your input data is. This applies not only to specialized architectures like convolutional neural networks and transformers, but also to any standard multi-layer perceptron or feedforward network. When it comes to images, we can think of a natural image manifold that acts almost as the idealized generative process of images, the natural or universal thing that makes images. This would be a very complex, very high-dimensional space. At one point in it we might have artworks: the Mona Lisa, abstract art, stained glass. It's impossible to capture the complexity of the entire space of natural images, so you'll have to use your imagination to think of something much more…
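To make the "no handcrafted features" idea concrete, here is a minimal autoencoder sketch in PyTorch. The layer sizes and the 28×28 input (matching flattened Fashion-MNIST images) are illustrative assumptions, not the course's exact model; the point is that the encoder learns a compressed representation directly from raw pixels, with no manually designed features.

```python
import torch
from torch import nn

class Autoencoder(nn.Module):
    """Tiny feedforward autoencoder: pixels in, pixels out.

    The 32-dimensional latent code is learned end to end;
    nothing about edges, corners, or textures is specified by hand.
    """

    def __init__(self, input_dim=28 * 28, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256),
            nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, input_dim),
            nn.Sigmoid(),  # keep reconstructed pixel values in [0, 1]
        )

    def forward(self, x):
        z = self.encoder(x)  # learned features, no manual feature design
        return self.decoder(z)

model = Autoencoder()
batch = torch.rand(8, 28 * 28)  # stand-in for a batch of flattened images
reconstruction = model(batch)
print(reconstruction.shape)  # torch.Size([8, 784])
```

Training such a model to minimize reconstruction error (e.g. with `nn.MSELoss`) is what forces the latent code to capture the structure of the data, which is the subject of the autoencoder videos below.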
Contents

- Topics (54s)
- Representing images as tensors (7m 45s)
- Desiderata for computer vision (4m 57s)
- Features of convolutional neural networks (7m 56s)
- Working with images in Python (10m 20s)
- The Fashion-MNIST dataset (4m 48s)
- Convolutional neural networks in PyTorch (10m 43s)
- Components of a latent variable model (LVM) (8m 57s)
- The humble autoencoder (5m 29s)
- Defining an autoencoder with PyTorch (5m 42s)
- Setting up a training loop (9m 47s)
- Inference with an autoencoder (4m 16s)
- Look ma, no features! (8m 21s)
- Adding probability to autoencoders (VAE) (4m 49s)
- Variational inference: Not just for autoencoders (7m 20s)
- Transforming an autoencoder into a VAE (13m 26s)
- Training a VAE with PyTorch (13m 33s)
- Exploring latent space (11m 37s)
- Latent space interpolation and attribute vectors (12m 30s)