From the course: Developing with gpt-oss Models

Setting up local AI

- Let's get you set up with local AI. Now, for most of this course, we'll use Ollama, and Ollama is extremely flexible and pretty straightforward to get going with. You can download it for your operating system, and you can use it from the CLI as well as through a fun graphical interface. There's also LM Studio, and while it's not as flexible as Ollama, it's very approachable and somewhat beginner-friendly. You don't need coding skills in order to use it. So, once you download Ollama, you'll see this interface right here. LM Studio has a similar, though less minimal, interface. Now, it's important to note that not every computer is capable of running these models. From my experimentation, you would probably need about 16 gigabytes of video RAM to run the smaller gpt-oss model. Now, if you're a Mac user, you don't really need that dedicated video RAM, since most modern Mac computers have unified memory, which the GPU and CPU share. And in the next video, we'll look at how to set up the models we'll work with in these environments.
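If you'd like to try the CLI route described above, a minimal sketch looks something like this. These are standard Ollama commands, but the exact model tag (`gpt-oss:20b`) is an assumption about how the smaller gpt-oss model is published on Ollama, so check the model library for the current name:

```
# Confirm Ollama is installed and on your PATH
ollama --version

# Download the smaller gpt-oss model (tag is an assumption; verify in the Ollama library)
ollama pull gpt-oss:20b

# Start an interactive chat session in the terminal
ollama run gpt-oss:20b
```

The graphical interface mentioned in the video wraps these same operations, so either path gets you to the same place.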

Contents