NSX-T Home Lab – Part 2: Configuring ESXi VMs
Welcome to Part 2 of my NSX-T Home Lab Series. In my previous post, I went over the installation and configuration of a Sophos XG firewall for my nested NSX-T Home Lab. In this post, I will cover the setup and configuration of the ESXi 6.7 VMs.
I recently wrote a post on how to Create an ESXi 6.7 VM Template, which is what I used to deploy my VMs from. After cloning the new VMs, I changed the disk sizes for my cache and capacity disks, increased the CPUs and RAM, and added 2 additional network adapters for a total of 4. I did this so that I can keep my management and other vmkernel ports on their own VDS and have two spare adapters to dedicate to NSX-T. I may do a follow-up post using only two adapters, where I'll migrate my vmkernel networks over to NSX-T, since in the real world many customers run dual 10Gb cards in their servers.
Now, I will not be covering how to actually install ESXi, as you can follow the documentation for that or reference my post mentioned above; there really isn't much to the installation. Instead, I'll quickly list the specs I used for my ESXi VMs from a resource perspective and give some additional pointers.
Single-Node Management Cluster VM
- CPUs: 8
- RAM: 32GB
- Disk1: Increased to 500GB (This will serve as a local VMFS6 datastore)
- Disk2: Removed (As I will not be running VSAN)
- Network Adapters: 2 (connected to the Nested VDS port group we created earlier)
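If you'd rather script this kind of VM surgery than click through the vSphere Client, the same changes can be sketched with govc (the govmomi CLI). This is only a sketch under my lab's assumptions: the template name, VM name, port group name, and disk label are placeholders, so substitute your own, and point `GOVC_URL` at your outer vCenter first.

```shell
# Assumes GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD are exported for the outer vCenter.

# Clone the management host from the ESXi template, leaving it powered off
govc vm.clone -vm ESXi-6.7-Template -on=false nested-esxi-mgmt-01

# Bump resources to match the single-node management host spec (8 vCPU / 32GB)
govc vm.change -vm nested-esxi-mgmt-01 -c 8 -m 32768

# Grow the data disk to 500GB for the local VMFS6 datastore
# (the disk label depends on how your template is laid out)
govc vm.disk.change -vm nested-esxi-mgmt-01 -disk.label "Hard disk 2" -size 500G

# Attach an adapter to the nested trunk port group created earlier
govc vm.network.add -vm nested-esxi-mgmt-01 -net "Nested-Trunk" -net.adapter vmxnet3
```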
On this host, I deployed a Windows Server 2019 Core OS to serve as my domain controller for the nested lab. I also deployed a VCSA to manage the environment.
2-Node VSAN Compute Cluster (with a Witness Appliance)
- CPUs: 8 on each host
- RAM: 16GB on each host
- Network Adapters: 4 on each host (connected to the Nested VDS port group we created earlier)
I used the new Quick Start feature to create and configure my VSAN cluster along with all the required networking, and it has quickly become one of my favorite new features in vSphere 6.7. I did hit one nuance, which was super simple to fix: during the creation of the VDS and the migration of vmkernel ports to it, my nested ESXi VMs would lose connectivity. Simply restarting the management network from the console fixed the issue, and I was able to proceed.
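For reference, the console's "Restart Management Network" action can also be performed from an SSH or ESXi Shell session by bouncing the management vmkernel interface with esxcli. I'm assuming vmk0 carries management traffic here, which is the default but worth checking first.

```shell
# Identify which vmkernel interface carries management traffic (usually vmk0)
esxcli network ip interface list

# Disable and re-enable it - the CLI equivalent of
# DCUI's "Restart Management Network"
esxcli network ip interface set --enabled=false --interface-name=vmk0
esxcli network ip interface set --enabled=true --interface-name=vmk0
```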
I then used VUM to update each host to the latest version (Build 11675023), released on 1/17/19. Once everything was configured, I had a nice little nested playground ready for NSX-T!
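As an aside, if you'd rather patch a nested host without VUM, esxcli can pull the update straight from the VMware online depot. The image profile name below is illustrative only; list the profiles available in the depot first and pick the one matching the build you're after.

```shell
# Put the host into maintenance mode before patching
esxcli system maintenanceMode set --enable true

# List the image profiles available in the online depot
esxcli software sources profile list \
  -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

# Apply the chosen profile (example name; use one from the list above)
esxcli software profile update \
  -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml \
  -p ESXi-6.7.0-20190104001-standard

# Reboot to complete the update, then exit maintenance mode
reboot
```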
In the next post, I will go over the deployment of the NSX-T appliances in the nested lab. Be sure to come back!