From the course: Programming Generative AI: From Variational Autoencoders to Stable Diffusion with PyTorch and Hugging Face
Convolutional neural networks in PyTorch
- [Instructor] And that being said, for our first convolutional neural network, we're going to copy what's maybe the most famous CNN architecture, often called LeNet, named after Yann LeCun, who published a research paper that used the LeNet architecture on MNIST to classify handwritten digits. This architecture was one of the first convolutional neural networks applied to a task like handwriting recognition, and it has become almost the standard "hello, world" of small, simple, but useful baseline CNN architectures. Using PyTorch, we have pretty much exactly the same setup that we saw in the last lesson: we're subclassing from Module and, in this case, calling super. The main difference with the LeNet, or really any convolutional, architecture comes down to the layers. So in this example, I'm going to show an alternative pattern to using one of the PyTorch containers, like Sequential…
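The pattern described above can be sketched as a LeNet-style model for MNIST. This is an illustrative reconstruction, not the course's exact code: layer sizes follow the classic LeNet-5 dimensions for 28x28 grayscale input, but it uses ReLU activations and max pooling rather than the original paper's tanh and average pooling, and layers are defined as individual attributes instead of being wrapped in an `nn.Sequential` container.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LeNet(nn.Module):
    """A LeNet-style CNN for 28x28 grayscale images (e.g. MNIST)."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Two convolutional blocks, each followed by 2x2 max pooling
        self.conv1 = nn.Conv2d(1, 6, kernel_size=5, padding=2)  # 1x28x28 -> 6x28x28
        self.conv2 = nn.Conv2d(6, 16, kernel_size=5)            # 6x14x14 -> 16x10x10
        self.pool = nn.MaxPool2d(2)                             # halves spatial dims
        # Fully connected classifier head (16 * 5 * 5 = 400 flattened features)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, num_classes)

    def forward(self, x):
        # Layers are called explicitly here, rather than chained
        # inside an nn.Sequential container
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = torch.flatten(x, start_dim=1)  # flatten all but the batch dim
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)  # raw class logits


model = LeNet()
logits = model(torch.randn(4, 1, 28, 28))  # batch of 4 fake images
print(logits.shape)  # torch.Size([4, 10])
```

Defining each layer as its own attribute, rather than one `nn.Sequential`, makes the forward pass explicit and lets you reuse a single `pool` module and insert functional calls like `torch.flatten` between layers.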
Contents
- Topics (54s)
- Representing images as tensors (7m 45s)
- Desiderata for computer vision (4m 57s)
- Features of convolutional neural networks (7m 56s)
- Working with images in Python (10m 20s)
- The Fashion-MNIST dataset (4m 48s)
- Convolutional neural networks in PyTorch (10m 43s)
- Components of a latent variable model (LVM) (8m 57s)
- The humble autoencoder (5m 29s)
- Defining an autoencoder with PyTorch (5m 42s)
- Setting up a training loop (9m 47s)
- Inference with an autoencoder (4m 16s)
- Look ma, no features! (8m 21s)
- Adding probability to autoencoders (VAE) (4m 49s)
- Variational inference: Not just for autoencoders (7m 20s)
- Transforming an autoencoder into a VAE (13m 26s)
- Training a VAE with PyTorch (13m 33s)
- Exploring latent space (11m 37s)
- Latent space interpolation and attribute vectors (12m 30s)