From the course: Introduction to AI Orchestration with LangChain and LlamaIndex

Running local LLMs

Maybe you've never thought about running an LLM on your local laptop or desktop. This used to be a complex affair that required a great deal of knowledge and often perseverance to get working. Happily, simpler approaches are now available. This topic is big enough for a course of its own, so we'll cover just enough to get you up and running. There are a number of ways to serve LLMs locally, using apps with memorable names like llama.cpp, Oobabooga, or H2O GPT, to name just a few. For this course, we'll use LM Studio, which is available as a desktop app for multiple platforms. In a minute, I'll give you a chance to download and install this app. Before LM Studio can run any LLMs, you first need to select and download a model. On the main screen here, I can search for a model and see what comes up. For some of these, different file formats are available. If so, the GGUF format works very well with this application. The list of available models is ever shifting, but…
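As a rough sketch of where this leads: once LM Studio has a downloaded model loaded and its built-in local server is enabled, it exposes an OpenAI-compatible endpoint (by default at http://localhost:1234/v1, though the port is configurable in the app). The model name and prompt below are placeholders for illustration; LM Studio responds with whichever model you currently have loaded.

# Minimal sketch: query a model served locally by LM Studio
# Assumes the local server is running on its default port (check the app for the actual address)
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's local, OpenAI-compatible endpoint (assumed default)
    api_key="not-needed",                 # the local server does not validate the API key
)

response = client.chat.completions.create(
    model="local-model",  # placeholder; the loaded model answers regardless of this name
    messages=[{"role": "user", "content": "In one sentence, what is a GGUF file?"}],
)
print(response.choices[0].message.content)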
