From the course: Proxmox Virtual Environment Essential Training

Plan a Proxmox VE deployment

- [Instructor] Before we jump in and install Proxmox VE, we need to have a plan. We need to identify the hardware we'll be using, and we need to make choices about the configuration of the host and how it fits into our existing network. In this course, we'll start out with just a single Proxmox VE node and some basic settings. When we're deciding what our host hardware will be, we'll need to take into account the guests that will be running on it. It may be that we're speccing out new hardware, or it may be that we're reallocating previously used hardware to a new purpose. The host needs to be an x86-64 system with virtualization enabled. We'll need to consider how many guests we need to support and what their requirements are, and then we'll need to estimate the total resources that the host will need in order to accommodate the guests in a worst-case scenario. It's also important to remember that Proxmox VE itself needs resources in order to run, so we'll need to plan for a couple of CPU cores and a couple of gigabytes of memory on top of whatever our guests require. While we can overprovision a hypervisor, which means assigning guests more resources in the aggregate than are actually available, that's not usually a good idea, or at least it's not a great idea to overprovision by a lot. Depending on your requirements, you may have many guests sitting around idling most of the time, only occasionally using any meaningful amount of host resources. Or you might have a couple of guests that are almost always quite busy, or you might have a different usage pattern entirely. For most guests, we'll want to provide at least two CPU cores and probably 4-8 gigabytes of RAM at a minimum. Some guests, though, may need significantly more resources depending on what we expect them to be doing. My system has a 16-core Xeon processor and 64 gigabytes of RAM, and that will be fine for supporting a handful of moderately sized guests. 
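The sizing math above can be sketched in a few lines. This is only an illustration: the guest list and its per-guest figures are made-up examples, and the two-core, two-gigabyte reserve for the hypervisor is the rule of thumb from the narration, not an official requirement.

```python
# Rough worst-case capacity estimate for a single Proxmox VE host.
# The guest sizes below are illustrative assumptions, not recommendations.
guests = [
    {"name": "web", "cores": 2, "ram_gb": 4},
    {"name": "db",  "cores": 4, "ram_gb": 8},
    {"name": "dev", "cores": 2, "ram_gb": 4},
]

# Reserve a couple of cores and a couple of gigabytes for PVE itself.
PVE_CORES, PVE_RAM_GB = 2, 2

need_cores = sum(g["cores"] for g in guests) + PVE_CORES
need_ram = sum(g["ram_gb"] for g in guests) + PVE_RAM_GB

host_cores, host_ram = 16, 64  # e.g. a 16-core Xeon with 64 GB of RAM

print(f"worst case: {need_cores} cores, {need_ram} GB RAM")
print("fits without overprovisioning:",
      need_cores <= host_cores and need_ram <= host_ram)
```

If the totals exceed the host, you're overprovisioned by that margin, which may be acceptable for mostly idle guests but risky for busy ones.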
Guests also need a disk image from which they boot and run. We'll create these images when guests are created. Depending on the type of underlying storage we're using and the purpose of the virtual machines, we might use raw disk images, which take up all the space allotted to the machine at the time of creation, or we might use sparse images, which only take up space as it's used within the guest. We'll also want to consider whether any of our guests require dedicated hardware. We can pass through USB devices to a guest, and we can also take some extra steps and pass through PCI devices like GPUs to guests as well. If we do that, we might consider adding a second video card to the host, just something basic, so the host's only video card isn't passed through to a guest. The host will need storage where we'll install Proxmox VE as well, and this could be a single disk, or if you want additional reliability, you can create a ZFS or Btrfs volume with varying levels of redundancy using two or more disks. Here in my server for my home lab, I'm starting out with just one disk, a 512-gigabyte NVMe SSD. We'll add some other storage later. When we install Proxmox VE, as long as the disk or volume it's being installed on is more than a couple of gigabytes, the installer will create a few LVM logical volumes for us: the root file system from which PVE boots, a swap volume, and a volume called data, which can be used to store virtual machine disk images and container images. So we can have a PVE node with just a single disk, and that will work just fine as long as it's large enough to store what we need to store. We can also add more disks to the system to add more local storage should we need to do that. In production scenarios, and in lab settings where we're either experimenting with redundancy or hosting services that need to be extra reliable, we might use mirrored or RAID disks to provide that resiliency. 
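The raw-versus-sparse distinction can be demonstrated with an ordinary file. This sketch assumes a POSIX system and a filesystem that supports sparse files, as ext4 does: a sparse file reports a large apparent size while consuming almost no space until data is actually written, which is exactly the trade-off sparse guest images make.

```python
import os
import tempfile

# A 1 GiB "disk image" created sparsely: we set the length but
# write no data, so the filesystem allocates (almost) no blocks.
size = 1 << 30  # 1 GiB

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.truncate(size)  # set the apparent size without writing data
    path = f.name

st = os.stat(path)
apparent = st.st_size        # what the guest would see: 1 GiB
actual = st.st_blocks * 512  # what the host actually uses: near zero

print(f"apparent: {apparent} bytes, on disk: {actual} bytes")
os.remove(path)
```

A raw image, by contrast, would occupy the full gigabyte on the host from the moment of creation.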
We'll explore this more a little bit later, but for now, my node will use one disk and I'll format it as ext4. Our Proxmox VE host should have at least one IP network interface, ideally a physical connection using Ethernet or fiber. During installation, the software will create a network bridge device attached to the system's network interface, which the host and the guests will use to connect to the network. Depending on your requirements, you might consider using more than one network interface if you need to give guests dedicated network access or if you have various networks in your infrastructure, like a management network, a storage network, and so on. During installation, we'll just work with one interface, and we can make changes later on. We can also use software-defined networking once the node is set up to create VNets and subnets and things like that. But to start out, we'll just use one interface. That interface will get a DHCP address when the installer starts up, but during installation, we'll set a static address in the configuration. You can change the address to the one you plan for the system to have manually in the installer, or you might set up a DHCP reservation based on the network interface's MAC address to guarantee that the interface receives the IP address you expect to use. We also need to set a fully qualified domain name for our host. This can be anything we like, but if we're running DNS on our network, we'll want to make sure the appropriate records are set up there so the host is accessible by its name from other systems on the network. If you don't have a DNS record for your host, you can use the .internal top-level domain. Each host needs a different name, and it's usually best to stick to a fairly clear and extensible naming scheme if you're going to have more than one node. For example, my node will be called pve-01, and later ones that I add will be pve-02, pve-03, and so on. 
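For reference, the bridge and static address the installer sets up end up in Debian's /etc/network/interfaces file on the node. A typical result looks roughly like the fragment below, where the physical interface name eno1 and the addresses are placeholders that will differ on your hardware:

```
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```

The host's IP address lives on the bridge vmbr0, not on the physical interface, and guest virtual NICs attach to that same bridge.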
I certainly won't have more than a hundred PVE nodes in my home lab, so that numbering scheme won't be a problem. If you're working inside of established infrastructure, be sure to follow the requirements of that environment. Here in my lab, as I mentioned before, I'm using the domain homelab.internal, and that's maintained by my DNS server. You can set up your DNS records accordingly, or you can use the IP address to access your nodes if you aren't running DNS. We'll be asked to provide an email address to which PVE sends alerts. This sets the email address for the root user, and by default Proxmox VE will send any alert emails to root's email address. We can configure notifications further later, adding other mail destinations or configuring webhook endpoints and so on. One thing to note here is that many home ISPs block outgoing traffic on port 25, the default unauthenticated SMTP port, so you may need to configure your server with SMTP settings, including credentials, to get it to send mail. While we're just configuring a single host here, your deployment may require more than one node. If you're creating a cluster, you'll need three or more hosts, though they don't have to be identical hardware. We'll explore clusters more later in the course. It can be tempting to treat a Proxmox VE node as a regular Linux server and add other services and software onto it, but we should resist this temptation. Infrastructure nodes should have discrete functions and should be thought of as easily replaceable. While we might virtualize a server that performs many tasks, our PVE nodes should remain only PVE nodes. One exception to this is when we use the same hardware to build a Ceph cluster, as we'll explore at the end of the course. And if you're deploying a Proxmox VE host in production, it's a good idea to get a subscription that provides support, so you're not on your own in case there's a problem. 
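If port 25 is blocked, one common workaround is to relay outgoing mail through an authenticated smarthost on the submission port. The fragment below is a sketch for Postfix, which Proxmox VE uses by default; smtp.example.com, the port, and the credentials file contents are placeholders you'd replace with your provider's details:

```
# /etc/postfix/main.cf (excerpt) -- relay outgoing mail through an
# authenticated smarthost instead of direct port-25 delivery.
relayhost = [smtp.example.com]:587
smtp_tls_security_level = encrypt
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
```

The credentials go in /etc/postfix/sasl_passwd, which must be hashed with postmap before Postfix can read it.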
Pricing for that is done by feature set and by the number of CPU sockets your hardware has. I'm not signing up for a subscription for my host here in my home lab. Okay, I recommend that you pause here and spend some time thinking about what your PVE solution will look like. In the next video, we'll build installation media and set up a single node so we can start getting our hands on Proxmox VE.
