From the course: Up and Running with PyTorch by Pearson
What is PyTorch?
So, hopefully I haven't kept you waiting too long with Lesson 1, which was, I admit, a little concept-slide heavy, a little abstract. Lesson 2 is all about deep learning with PyTorch, writing Python code. If you're not familiar with the PyTorch framework, it is a Python library like any other Python library, but it is specially set up to make deep learning as easy as possible. The way I frame it, as a library that makes deep learning easy, is that its API is really architected to map to the mathematical side of building neural networks. So I like to think of it as neural network pseudocode. A lot of deep learning, and a lot of generative AI, is approached from the research community, and it makes sense for them to take a theory-first approach: there are ways of mathematically defining these networks, setting them up, optimizing them, learning these distributions. And PyTorch is really an API meant to map closely to those neural network architectures. But aside from that, it also provides very powerful computational advantages. First, it is a NumPy-like interface to what I have here as CUDA, but this really means GPUs and device accelerators in general. CUDA is the NVIDIA-specific library for their GPUs, and this is where PyTorch came out of: it began as a NumPy-like interface for NVIDIA GPUs. It has since broadened. You can use PyTorch with Apple Silicon GPUs, with AMD GPUs, even with exotic accelerators that are incredibly expensive, highly specialized, and often only used in large-scale industrial applications. So it has broadened well beyond the original CUDA interface. Secondly, it does automatic differentiation, a technique that automatically computes the derivatives and gradients used in the neural network learning process.
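To make those two advantages concrete, here is a minimal sketch: NumPy-like tensor math, the device abstraction (picking CUDA, Apple Silicon's MPS backend, or CPU, whichever is available), and automatic differentiation with `backward()`. The specific values are just illustrative.

```python
import torch

# NumPy-like interface: elementwise tensor math
a = torch.tensor([1.0, 2.0, 3.0])
b = torch.tensor([4.0, 5.0, 6.0])
print(a + b)  # tensor([5., 7., 9.])

# Device abstraction: CUDA (NVIDIA), MPS (Apple Silicon), or plain CPU
device = (
    "cuda" if torch.cuda.is_available()
    else "mps" if torch.backends.mps.is_available()
    else "cpu"
)
a = a.to(device)  # same code runs on whichever accelerator is present

# Automatic differentiation: for y = x**2 at x = 3, dy/dx = 2x = 6
x = torch.tensor(3.0, requires_grad=True)
y = x ** 2
y.backward()  # computes the gradient automatically
print(x.grad)  # tensor(6.)
```

The key point is that none of the gradient math was written by hand; PyTorch recorded the operations on `x` and differentiated them for us.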
A caveat to all of these interesting, powerful features of PyTorch is that you really only need it if you need accelerated deep learning on GPUs. If you're just doing data science or traditional machine learning, using something like scikit-learn or pandas, those somewhat simpler libraries and algorithms may be all you need. That's not to say they're any less or more powerful than deep learning; it's more an issue of using the right tool for the job. So don't feel like, if you're just doing linear regression, you need to use PyTorch and deal with all the complexities of getting it to work on whatever hardware, or dealing with GPUs, and all of those other nuances.
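As a quick illustration of "the right tool for the job," here is the linear regression case from above done in scikit-learn, with no devices or gradients to manage. The toy data (points on the line y = 2x + 1) is my own, purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data for illustration: points on the line y = 2x + 1
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])

# Fit an ordinary least-squares model in two lines
model = LinearRegression()
model.fit(X, y)
print(model.coef_, model.intercept_)  # recovers slope 2.0 and intercept 1.0
```

No tensors, no GPUs, no training loop; for a problem this simple, the simpler library is the better fit.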
Contents
- What is PyTorch? (4m 33s)
- The PyTorch layer cake (11m 26s)
- The deep learning software trilemma (7m 4s)
- What are tensors, really? (5m 24s)
- Tensors in PyTorch (10m 3s)
- Introduction to computational graphs (12m 45s)
- Backpropagation is just the chain rule (16m 31s)
- Effortless backpropagation with torch.autograd (13m 39s)
- PyTorch's device abstraction: GPUs (4m)
- Working with devices (10m 47s)
- Components of a learning algorithm (7m 9s)
- Introduction to gradient descent (6m 4s)
- Getting to stochastic gradient descent (SGD) (4m 8s)
- Comparing gradient descent and SGD (5m 50s)
- Linear regression with PyTorch (23m 56s)
- Perceptrons and neurons (7m 52s)
- Layers and activations with torch.nn (12m 41s)
- Multilayer feedforward neural networks (MLP) (8m 58s)