NSX-T Home Lab – Part 2: Configuring ESXi VMs


Welcome to Part 2 of my NSX-T Home Lab Series.  In my previous post, I went over the installation and configuration of a Sophos XG firewall for my nested NSX-T Home Lab.  In this post, I will cover the setup and configuration of the ESXi 6.7 VMs.

I recently wrote a post on how to Create an ESXi 6.7 VM Template, which is what I used to deploy my VMs from.  After cloning the new VMs, I changed the disk sizes for my cache and capacity disks, increased the CPUs and RAM, and added two additional network adapters for a total of four.  I did this so that I can keep my management and other VMkernel ports on their own VDS and have two free adapters to dedicate to NSX-T.  I may do a follow-up post using only two adapters, where I'll migrate my VMkernel networks over to NSX-T, since in the real world I'm sure many customers are using dual 10Gb cards in their servers.

Now, I will not be covering how to actually install ESXi, as you can follow the documentation for that or reference my post mentioned above.  There really isn't much to the installation; it's pretty trivial.  Instead, I'll quickly state the specs I used for my ESXi VMs from a resource perspective and give some additional pointers.

Single-Node Management Cluster VM

  • CPUs: 8
  • RAM: 32GB
  • Disk1: Increased to 500GB (This will serve as a local VMFS6 datastore)
  • Disk2: Removed (As I will not be running VSAN)
  • Network Adapters: 2 (connected to the Nested VDS port group we created earlier)

On this host, I deployed a Windows Server 2019 Core OS to serve as my domain controller for the nested lab.  I also deployed a VCSA to manage the environment.

2-Node VSAN Compute Cluster (with a Witness Appliance)

  • CPUs: 8 on each host
  • RAM: 16GB on each host
  • Network Adapters: 4 on each host (connected to the Nested VDS port group we created earlier)

I used the new Quick Start feature to create and configure my VSAN cluster along with all of the required networking, and it has quickly become one of my favorite new features in vSphere 6.7.  There were some nuances I had to work around, though they were super simple.  During the creation of the VDS and the migration of VMkernel ports to it, my nested ESXi VMs would lose connectivity.  Simply restarting the management network from the console fixed the issue and I was able to proceed.
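If you hit the same connectivity drop, the management network can also be bounced from the ESXi Shell rather than the DCUI menu.  This is a sketch assuming vmk0 is your management VMkernel port; run it from the host's console session, since disabling vmk0 will drop an active SSH session:

```shell
# Bounce the management VMkernel interface (vmk0 assumed here).
esxcli network ip interface set -e false -i vmk0
esxcli network ip interface set -e true -i vmk0
```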

I then used VUM to update each host to the latest version (Build 11675023), which was released on 1/17/19.  Once everything was configured, I had a nice little nested playground ready for NSX-T!

In the next post, I will go over the deployment of the NSX-T appliances in the nested lab.  Be sure to come back!


NSX-T Home Lab – Part 1: Configuring Sophos XG Firewall


Welcome to Part 1 of my NSX-T Home Lab Series.  In my previous post, I went over the gist of what I plan to do for my nested NSX-T Home Lab.  In this post, I will cover the setup and configuration of a Sophos XG firewall Home Edition, which will serve as the router for my nested lab environment.  My physical Home Lab is configured with vSphere Distributed Switches, or VDS (sometimes seen as DVS) for short.  Since this is a nested lab environment that will not have any physical uplinks connected, I will need to create a new VDS without physical uplinks, along with a port group for the nested environment, and then configure access to the environment from my LAN.  All traffic will flow through the virtual router/firewall to communicate to and from the nested lab.


  • VDS and portgroup without physical uplinks
    • Set the VLAN type for this portgroup to VLAN Trunking with the range of 0-4094 to allow all VLANs to trunk through
  • Static route to access the nested lab from my LAN
    • Once you determine the subnets you’d like to use for the nested lab, add a static route summary on your physical router
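As a quick sanity check for that route summary, you can compute the smallest single supernet that covers all of your nested subnets.  The sketch below uses Python's standard `ipaddress` module; the 10.254.14x.0/24 networks are hypothetical placeholders, so substitute your own plan:

```python
import ipaddress

# Hypothetical nested-lab subnets; substitute the networks you actually plan to use.
subnets = [
    ipaddress.ip_network("10.254.140.0/24"),
    ipaddress.ip_network("10.254.141.0/24"),
    ipaddress.ip_network("10.254.142.0/24"),
    ipaddress.ip_network("10.254.143.0/24"),
]

# Grow the first subnet's prefix until a single supernet covers them all.
summary = subnets[0]
while not all(s.subnet_of(summary) for s in subnets):
    summary = summary.supernet()

print(summary)  # -> 10.254.140.0/22
```

The printed network is the summary route to point at the firewall's WAN IP on your physical router.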

I have a bunch of VLANs created for my physical Home Lab, as I've yet to deploy NSX-T there; once I do, I'll remove the majority of those VLANs and keep only the ones required to run the lab.  With that said, one of my VLANs is for “Development” work such as this, so I'll connect one uplink from the router to that VLAN to serve as the WAN interface, while the other uplink will connect to the new nested port group to serve as the LAN for the nested lab.  I'll describe the basics of deploying the Sophos XG firewall, but will not go into full detail, as the process is pretty trivial and can be done using the following guide as a reference.

  • OS: Other Linux 3.x or higher
  • CPU: 1 (add more as needed – max supported is 4 in the home edition)
  • RAM: 2GB (add more as needed – max supported is 6GB in the home edition)
  • Disk: 40GB thin (you may make this smaller if you’d like)
  • Network Adapter 1: LAN portgroup (nested)
  • Network Adapter 2: WAN portgroup
  • Boot: BIOS (will not boot if you keep as EFI)

Once the VM has been deployed, the Sophos XG will be configured with a default IP address.  This will need to be changed to the subnet you're using for your nested LAN interface.  Log in to the console with the default credentials (admin / admin) and choose the Network Configuration option to change the IP of your nested LAN port.

Once this is done, you would normally navigate to that address on port 4444 to access the admin GUI.  Unfortunately, this will not work since the LAN side has no physical uplinks.  So what do we do?  We need to run a command to enable admin access on the WAN port.  To do so, choose option 4 to enter the device console and enter the following command:

system appliance_access enable

The WAN port is set to grab an address from DHCP so you’ll need to determine which IP address this is either by going into your physical router, or using a tool like Angry IP.  Once in the Admin GUI, navigate to Administration > Device Access and tick the box for WAN under the HTTPS column.  See this post for reference.
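If you don't have Angry IP handy, a quick ping sweep from any machine on the physical LAN will surface the lease as well.  This assumes nmap is installed and 192.168.1.0/24 is your physical LAN subnet; substitute your own:

```shell
# Ping-sweep the physical LAN to find the firewall's DHCP address.
nmap -sn 192.168.1.0/24
```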

Now, we can create our VLANs for our nested environment.  I’m using the following for my lab:

  • VLAN 140: 10.254.140.1/24 (VM Network)

Navigate to Networking and select Add Interface > VLAN to create each of your networks.

With our VLANs created, we'll need to create two firewall rules: one to allow traffic from the WAN port to access the LAN, and one to allow traffic from LAN to LAN.  Navigate to Firewall > Add firewall rule and create the rules, labeling them however makes sense to you.

This is where the static route comes in handy for accessing your nested lab.  I've configured the route summary to use the IP address of the WAN interface as the gateway, which also lets me reach the Admin UI.  I'll now also be able to access the ESXi UI and VCSA UI once they are stood up.

The final thing I will be doing is enabling the native MAC Learning functionality that is now built into vSphere 6.7 so that I do not need to enable Promiscuous Mode, which has normally been a requirement for the Nested portgroup and nested labs in general.  To learn more about how to do this, see this thread.  In my setup, I ran the following to enable this on my nested VDS portgroup:

Set-MacLearn -DVPortgroupName @("VDS1-254-NESTED") -EnableMacLearn $true -EnablePromiscuous $false -EnableForgedTransmit $true -EnableMacChange $false

To check that it was indeed set correctly, I ran the following:

Get-MacLearn -DVPortgroupName @("VDS1-254-NESTED")

And there you have it!  In the next post, I will go over configuring our ESXi VMs for our nested lab!

NSX-T Home Lab Series


I recently upgraded my Home Lab “Datacenter” to support all-flash VSAN and 10Gb networking with the plan to deploy NSX-T so that I can familiarize myself with the solution and use it to better prepare for the VMware VCP-NV certification exam.  Since this is all brand new to me, I've decided that I'll first deploy it in a nested lab environment in order to learn the deployment process as well as to minimize the risk of accidentally messing up my Home Lab environment.

Now, I know there are a few blogs out in the wild already that go over the installation and setup of NSX-T, but I wanted to write my own as it will better help me retain the information that I am learning.  Additionally, others may have a different setup than I have and/or may have deployed the solution differently than the way I intend to, which is by following the published documentation.  I'd like to take this time to first shout out some of my colleagues, William Lam, Keith Lee, Cormac Hogan, and Sam McGeown, as their own blogs are what inspired me to deploy the solution for myself and document the process.

This post will serve as the main page where I’ll post the hyperlinks to each post in the series.  I’ll be deploying a virtual router/firewall, 3x ESXi VMs, and a witness appliance so that I can configure a virtual 2-node VSAN compute cluster.  I’ll be managing the environment via a vCenter Server Appliance or VCSA, and a Windows Server 2019 Core OS Domain Controller or DC.  I won’t cover the installation and configuration of the DC as it’s out of scope for this series, nor will I go over the deployment of the VCSA or VSAN configuration as this can be done by following the documentation.  And, since this is just a small nested lab, the remaining host that isn’t a part of the VSAN cluster will serve as a single-node Management cluster host where the DC, VCSA, and NSX-T Appliances will reside.

I will cover the router setup, ESXi VM configuration, and NSX-T deployment.  For my setup, I am going to leverage a Sophos XG firewall Home Edition since I’ve always had an interest in learning more about these firewalls, but also because I typically see pfSense being used for virtual routers and I wanted to try something different.  If you are using this as a guide for your own deployment, feel free to use your router/firewall of choice as there are plenty out there like FreeSCO, Quagga, or VyOS, just to name a few.  So, with that said, I hope you all enjoy the content in this series!



Create an ESXi 6.7 VM Template

Disclaimer:  The following is not supported by VMware.

Nested virtualization is nothing new, and many of us use it for test or demonstration purposes since nested environments can quickly be stood up or torn down.  William Lam has an ESXi VM that can be downloaded from here, but I wanted to go ahead and create my own for use within my nested lab environments.

In this post, I am going to show you the steps I ran through to create an ESXi 6.7 VM that I can convert to a template for later use.  Props to William for his excellent content on nested virtualization, which I’ve used a ton and will be leveraging here as well.  So without further ado, let’s get to it!

For my ESXi VM, I will be configuring the following:

  • CPU: 2 (Expose hardware assisted virtualization to the guest OS – checked on)
  • RAM: 8GB
  • Disk0: 16GB (bound to the default SCSI controller; thin provisioned)
  • New virtual NVME Controller
  • Disk1: 10GB (for VSAN cache tier bound to NVME Controller; thin provisioned)
  • Disk2: 100GB (for VSAN capacity tier bound to NVME Controller; thin provisioned)
  • 2x Network Adapters (VMXNET3)
  • Some advanced configuration settings

Build the VM as follows:

Be sure you connect the ESXi installation media and power on the VM to begin the installation.

Once the VM powers back on, log in and enable SSH so that we can run some additional commands to update the OS and prepare it for cloning use.
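If you prefer the ESXi Shell over the Host Client UI (or want to script this later), SSH can also be enabled with vim-cmd; a minimal sketch:

```shell
# Enable the SSH service policy and start it on the host.
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh
```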

(Optional) To update ESXi to the latest version, connect to the host via SSH and run the following:

**At the time of this writing, the latest version is Build 11675023, per the profile used below; be sure to change the profile number as needed**

esxcli network firewall ruleset set -e true -r httpClient
esxcli software profile update -p ESXi-6.7.0-20190104001-standard \
  -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
esxcli network firewall ruleset set -e false -r httpClient

(Optional) To update the latest version of the ESXi Host client, run the following:

esxcli software vib install -v "<URL to the latest ESXi Host Client VIB>"

To prepare the VM for cloning use, run the following:

# Keep the vmk0 MAC address in sync with the virtual NIC's MAC after cloning
esxcli system settings advanced set -o /Net/FollowHardwareMac -i 1
# Clear the system UUID so each clone generates its own on first boot
sed -i 's#/system/uuid.*##' /etc/vmware/esx.conf

At this point, you can shutdown the VM and convert it to a template for cloning use.

After cloning a VM, if you plan on joining it to a vCenter Server you will need to run the following on each cloned instance via SSH.

esxcli storage vmfs snapshot resignature -l datastore1

Well, that about does it!  Hope you all enjoyed this post!



A Dream Come True!!

17/01/2019

First off, let me start by apologizing for my long hiatus.  I took some time away from blogging to focus on some personal matters as well as my career development and advancement.  But here we are: a brand new year, and a brand new me!

Now, this is an extremely overdue post but I figured why not get back into writing content by posting something that I am excited about, while at the same time, keeping it short and simple. 

Early last year, I decided to do something which has completely changed my life for the better.  I took time to reflect on what I have been through and the challenges I've overcome to get to where I am in my career.  Afterward, I decided that I wanted to try something new, so I put myself back on the market and searched for opportunities.  After getting some calls and going through the interview process, I am ecstatic to announce that I have landed the dream job I've been chasing for so many years.

Well, I am happy to announce that I have officially joined VMware as a Solutions Engineer for Commercial East as of June 2018!  The process and transition were not easy, but I am so grateful and thankful to now be part of such a great organization that has forever changed the IT industry.  I've learned so much already in such a short time, but there is still so much more for me to learn here.  I have such a great team with so many knowledgeable people at my disposal and I couldn't be happier!  Finally, I'm at a place that I am passionate about and really enjoy working for.  It literally is a dream come true for me and I'm looking forward to developing my skillset even more in 2019 and beyond!

Be on the lookout for some new material coming soon!