From the course: Hands-On AI: Build a RAG Model from Scratch with Open Source
Setting up environment and installing Ollama
- [Instructor] Now that we understand the process, let's get started with the implementation. The first thing to do is set ourselves up in a Unix environment, simply because Unix is my environment of choice. That typically means Mac, Linux, or WSL on a Windows machine. You can do this on your personal machine or by logging into a remote server, but to keep all of our work easily reproducible so that every viewer can follow along with this course, we'll be working from GitHub Codespaces, as you see here, which provides us with a Linux environment. Once that's set up, the first thing we'll do is dive in and install Ollama. The installation process is explained on their download page, which you can find at ollama.com/download/linux. So let's go ahead and go there. They provide us with a one-line installation command. Let's copy that and go back to…
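As a sketch of the step described above: the one-line installer on the ollama.com Linux download page is, at the time of writing, a curl-pipe-to-shell command like the one below. Check the page yourself for the current command, and review any script before piping it into your shell.

```shell
# Download and run the official Ollama install script for Linux
# (command as shown on ollama.com/download/linux at the time of writing).
curl -fsSL https://ollama.com/install.sh | sh

# Confirm the installation succeeded by checking the installed version.
ollama --version
```

The same command works in a GitHub Codespaces terminal, since Codespaces provides a standard Linux environment.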
Contents
- Setting up a dev container (7m 56s)
- Setting up environment and installing Ollama (5m 40s)
- Creating a model file (8m 33s)
- Running Ollama programmatically through Python (7m 43s)
- Generating the corpus (10m 17s)
- Extract text from different local file formats with Docling (4m 43s)