From the course: Introduction to Generative AI with GPT

GPT and Generative AI

- [Host] Today, AI is being used across all industries and has many uses, including fraud detection, search, autonomous driving, spam filtering, recommendation engines, and facial recognition. These and other applications belong to a variety of categories and types within the AI domain. GPT belongs to an AI category called generative AI. Other technologies and vendors are in this category as well, including those being developed and deployed by Google, Alibaba, Meta, and Nvidia. While I may occasionally allude to these other companies in this course, we're focusing in particular on GPT, a product of the OpenAI organization.

So what exactly is generative AI? Simply stated, generative AI is a way for software to create new content based on a prompt provided by a user. For example, a prompt such as "draw a picture of a mountain covered in snow" would result in the output of a unique, never-before-seen, AI-generated image. Prompts are questions, suggestions, or narratives that a user creates in plain language as input. In response to a prompt, generative AI applications then spit out new content in the form of text, images, audio, animation, software code, and more.

Given the recent buzz around generative AI, it may seem like a completely new category of AI. In fact, generative AI was first introduced in the 1960s in basic computer chatbots, so-called conversation simulators. Significant progress was made in the early 2010s with the emergence of GANs, or generative adversarial networks, a technology that enables the creation of images, video, and audio that appear to be authentic. I won't go deeper into GANs in this course, but it's an area I recommend you explore, depending on how technical you want to get.

Generative AI, like many categories within AI, also owes its success to neural networks. In simple terms, a neural network is software designed to learn by finding patterns in datasets.
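To make the prompt-in, content-out idea concrete, here is a minimal sketch of a generative model in Python: a toy character-level Markov chain that "learns" patterns from a small text sample and then extends a prompt with new text. This is not how GPT works internally (GPT is a far larger neural network), and the corpus, function names, and parameters below are purely illustrative assumptions, not from the course.

```python
import random


def build_model(text, order=3):
    """Learn the pattern: map each `order`-character context to the
    characters observed to follow it in the training text."""
    model = {}
    for i in range(len(text) - order):
        context = text[i:i + order]
        model.setdefault(context, []).append(text[i + order])
    return model


def generate(model, prompt, length=60, seed=0):
    """Extend `prompt` by repeatedly sampling a plausible next character."""
    rng = random.Random(seed)
    order = len(next(iter(model)))
    out = prompt
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:  # context never seen during "training": stop
            break
        out += rng.choice(choices)
    return out


# Illustrative training text (the "dataset" whose patterns are learned)
corpus = ("the model learns patterns in the data and the model then "
          "generates new text from the patterns it has learned ")

model = build_model(corpus)
print(generate(model, "the "))
```

The output continues the prompt with text that mimics the statistics of the training corpus; modern generative AI replaces this lookup table with a neural network trained on vastly more data.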
While the first neural nets appeared as early as the 1950s, it wasn't until the emergence of large volumes of data and high-performance computing in the early 2000s that content generation became practical. Additional progress was made through the use of parallel processing on graphics processing units, or GPUs, and through advances in software, including GANs, large language models (LLMs), and transformers. I briefly discuss LLMs and transformers in the next video.

Acknowledging that the capabilities of generative AI are emerging rapidly, today the most common uses include image and video creation, text generation, and audio output. In many instances, the new content is indistinguishable from the real thing, and it can be impossible to tell it was not created by a person. For example, these pictures of human faces aren't real. These people don't actually exist.

Clearly, generative AI has a lot of us excited by its potential to transform many aspects of how we work and live, areas that I explore in a later video. But it also raises many questions and implications in the realms of socioeconomics, ethics, legality, and a whole lot more. Before proceeding, take a few minutes to consider how generative AI might impact your job, organization, or industry.
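To illustrate the idea that a neural network "learns by finding patterns in datasets," here is a toy sketch: a single artificial neuron (one weight and one bias) trained by gradient descent to discover the pattern y = 2x + 1 hidden in a small dataset. Real networks like GPT stack millions of such units; all numbers and names here are illustrative assumptions, not from the course.

```python
# Dataset embodying a hidden pattern: y = 2x + 1 (illustrative example)
data = [(x, 2 * x + 1) for x in range(-5, 6)]

w, b = 0.0, 0.0   # the neuron starts knowing nothing
lr = 0.01         # learning rate: how big each corrective nudge is

for epoch in range(2000):
    for x, y in data:
        pred = w * x + b     # the neuron's current guess
        error = pred - y     # how wrong the guess is
        w -= lr * error * x  # nudge the weight to reduce the error
        b -= lr * error      # nudge the bias to reduce the error

print(f"learned w={w:.2f}, b={b:.2f}")  # approaches w=2, b=1
```

After enough passes over the data, the neuron has recovered the pattern (w near 2, b near 1) purely from examples; it was never told the formula. That is, in miniature, the learning principle behind the networks that power generative AI.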
