From the course: Introduction to Large Language Models (LLMs) and Prompt Engineering by Pearson

Intro to large language models (LLMs) and prompt engineering

I'm Sinan Ozdemir. I'm a tech entrepreneur focusing on applications in Natural Language Processing, or NLP, as well as artificial intelligence, and I have been working in the fields of deep learning, NLP, and generative AI for the last decade. I previously lectured at Johns Hopkins on mathematics, computer science, and machine learning, and I've written over half a dozen books on generative AI, data science, machine learning, and feature engineering.

We will begin with an overview of the history of NLP and language modeling, including the mechanisms that make the transformer model so powerful and versatile, and how language models learn to read and write from training data. In the next lesson, we'll dive into the powerful applications of LLMs by building a semantic search system to store and retrieve information in the blink of an eye. The next two lessons take this even further by applying simple yet powerful prompting techniques to build reliable Retrieval Augmented Generation, or RAG, conversational chatbots, as well as AI agents with access to external tools. We will also look at advanced prompting techniques and walk through a case study of creating customized text embeddings for specific task definitions.

Contents