From the course: Introduction to Large Language Models (LLMs) and Prompt Engineering by Pearson
Using open source models with RAG
Let's take a look at the rest of our notebook here to assess some open-source alternatives as the generator in our retrieval-augmented generation system. For our first open-source candidate, we can use Llama 3. We talked about Llama in a previous lesson as an open-weights LLM from Meta, and Llama 3, as of today at least, is Meta's most recent family of Llama models. I'm going to use their smaller version, the 8-billion-parameter model, to see if we can simply drop a different LLM into our RAG system and have everything still work. This is going to be a good test, because it will basically tell us how good our prompt is. As I mentioned in the last lesson, a good prompt doesn't just make your LLM more performant, although it should do that; it's also your safety net should you want to switch to a different LLM. Because as we're starting to understand, you and I at least, LLMs are in a lot of ways all the same. Some are bigger, some are smaller…
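To make the "drop-in" idea concrete, here is a minimal sketch of the pattern being described. Everything in it is illustrative, not from the course notebook: the prompt template, the toy retriever, and the two stub generators are assumptions. The point is that if the prompt is model-agnostic, the generator is just a function you can swap out.

```python
# Hypothetical sketch (not the course's notebook code): if the RAG prompt is
# model-agnostic, changing LLMs is a one-argument change, not a rewrite.

PROMPT_TEMPLATE = """Answer the question using only the context below.

Context:
{context}

Question: {question}
Answer:"""


def retrieve(question: str) -> list[str]:
    # Stand-in for a real vector-store lookup; returns canned context.
    return ["Paris is the capital of France."]


def rag_answer(question: str, generate) -> str:
    """Build the prompt from retrieved context, then call ANY generator."""
    context = "\n".join(retrieve(question))
    prompt = PROMPT_TEMPLATE.format(context=context, question=question)
    return generate(prompt)


# Two drop-in generators sharing the same interface (str -> str).
def closed_source_llm(prompt: str) -> str:
    return "Paris"  # stand-in for an API-hosted closed model


def llama3_8b(prompt: str) -> str:
    return "Paris"  # stand-in for the open-weights Llama 3 8B model


# Same pipeline, different generator -- nothing else changes.
print(rag_answer("What is the capital of France?", closed_source_llm))
print(rag_answer("What is the capital of France?", llama3_8b))
```

In a real notebook the stubs would wrap actual model calls, but the design choice is the same: the prompt and retrieval logic stay fixed, and only the `generate` callable changes.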