vRealize Suite 2019 – Part 1: Installing vRealize Lifecycle Manager
Welcome to Part 1 of my vRealize Suite 2019 Series. In my previous post, I went over the gist of what I plan to deploy in my nested Home Lab. In this post, I will cover the installation of vRealize Suite Lifecycle Manager using the new vRealize Easy Installer released with v8.0 of the solution.
With vRealize Easy Installer, you can:
- Install vRealize Suite Lifecycle Manager
- Install a new instance of vRealize Automation
- Register vRealize Automation with Workspace ONE Access
Please note that as of the time of this writing, the latest version of vRealize Suite Lifecycle Manager is v8.0.1. I will focus on deploying v8.0.0 and eventually cover the upgrade to v8.0.1. Let’s get right to it, shall we?
Obtain and Access the Easy Installer
The vRealize Easy Installer can be downloaded from the My VMware download page. The media comes in the form of an .iso file. Once the ISO has been downloaded, either mount it or extract its contents, then launch the Installer.exe file located in the \vrlcm-ui-installer\win32 directory.
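If you go the mount route on a Windows machine, here's a minimal PowerShell sketch; the ISO path and filename are placeholders for wherever you saved the download:

# Placeholder path/filename for the Easy Installer ISO downloaded from My VMware
$iso = Mount-DiskImage -ImagePath "C:\ISOs\vrlcm-easy-installer.iso" -PassThru
$drive = ($iso | Get-Volume).DriveLetter
Start-Process "$($drive):\vrlcm-ui-installer\win32\Installer.exe"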
Install vRealize Suite Lifecycle Manager
To deploy the appliance, you first specify the vCenter Server details and the location and resources where it will be deployed, and then you can access vRealize Suite Lifecycle Manager. The following steps are outlined in the official documentation.
- Click Install on the vRealize Easy Installer window.
- Click Next after reading the introduction.
- Accept the License Agreement and click Next.
- Read the Customer Experience Improvement Program and select the checkbox to join the program.
- Specify the vCenter Server details:
- Enter the vCenter Server Hostname.
- Enter the HTTPS Port number.
- Enter the vCenter Server Username, and Password.
- Click Next. When prompted with a Certificate Warning, click Yes to proceed.
- Specify a location to deploy the virtual appliances:
- Expand the vCenter Server tree.
- Expand to any data center and map your deployment to a specific VM folder.
- Specify a resource cluster:
- Expand the data center tree to an appropriate resource location and click Next.
- Select a datastore to store your deployment and click Next.
- Set up Network and Password configuration, enter the required fields, and click Next.
- Enter the NTP Server for the appliance and click Next. The network configuration you provide here is entered once and applies to all products. The password is also common to all products, so you will not need to enter it again while installing them.
- Set up vRealize Suite Lifecycle Manager configuration settings.
- Enter a Virtual Machine Name, IP Address, and Hostname.
- Click Next. With the Easy Installer, you can either import an existing VMware Identity Manager instance into vRealize Suite Lifecycle Manager or deploy a new one. For a new VMware Identity Manager installation through the Easy Installer, only VMware Identity Manager 3.3.1 is allowed. This is a mandatory step for a vRealize Suite Lifecycle Manager deployment. The vRealize Automation installation is optional, and I am choosing to skip it at this time.
- Review the summary page that contains the vRealize Suite Lifecycle Manager, VMware Identity Manager, and vRealize Automation installation details and click Submit.
The installation will now deploy vRealize Suite Lifecycle Manager followed by Workspace ONE Access, formerly known as VMware Identity Manager. This will take some time to complete, but once it's done, you can log in to both applications using the credentials specified in the Easy Installer.
Extending the Storage Volume
Now, before installing any additional solutions, we first need to increase the storage where vRealize Lifecycle Manager stores the binaries and then import the binaries for each of the solutions we’re going to deploy with vRSLCM. When first logging into vRealize Suite Lifecycle Manager, you’ll see the following dashboard.
- Click Lifecycle Operations, then click the gear icon on the left side to enter the Settings menu.
- Click System Details, and you can see that by default, the storage is set to 20GB.
- I’m going to add 40GB to it so I have enough storage space to house the other product binaries.
- Click Extend Volume.
- Enter the vCenter Server Host Name, select the correct Credential, enter the amount in GB that you'd like to add, and click Extend. Allow some time for the request to complete and refresh the page if necessary. Once it has completed, we can see that the volume has been increased.
Add Product Binaries
Before I can deploy any product other than VMware Identity Manager and vRealize Automation, I need to configure the binary mapping for those additional products. The two products I mentioned before are already mapped as they come with the Easy Installer.
- From the Settings menu, click Binary Mappings, then click Add Binaries.
- Select your Location Type. There are a few options to choose from here, and I'm going to select NFS since I've already placed the binaries in an NFS shared folder.
- Provide the Base Location path to the shared folder and click Discover.
- Once it has discovered the binaries, select the ones that you want to map and click Add. Allow some time for this to complete and, if you'd like, monitor the Request Status until you see it has completed.
- At this time, I’m not selecting any of the v8.0.1 upgrade binaries. I’ll add them at a later time.
In the next post, I'll quickly cover accessing the Workspace ONE Access (VMware Identity Manager) deployment and configuring it so that we can use an Identity Manager account to log in to vRealize Suite Lifecycle Manager and the other solutions I'll be deploying in this series.
Well, I hope that you’ve enjoyed this post and hopefully you’ll be back for more. Thanks for reading!
vRealize Suite 2019 Series
Hello, and thank you for visiting my blog! I decided to take some time away from writing in order to focus on my role as a Solutions Engineer at VMware and to get better acquainted with some of the solutions most utilized by VMware customers. Almost one full year has passed since I last wrote anything, and with the new year underway, what better time to get back into writing some material for myself and the vCommunity.
In this series, I’m going to cover how to easily deploy, and eventually update, each of the solutions that make up the vRealize Suite 2019 set of products. The products that will be covered are as follows:
- vRealize Suite Lifecycle Manager 8.x with VMware Identity Manager 3.3.1 (required for the new vRealize Automation 8.x release)
- vRealize Automation 8.x with embedded vRealize Orchestrator 8.x
- vRealize Operations Manager 8.x
- vRealize Log Insight 8.x
- vRealize Network Insight 5.x (Bonus)
If you followed my NSX-T Home Lab series from last year, you'll know that I will again be leveraging a nested lab environment to deploy each of these solutions, since I already have them installed and running in my physical lab infrastructure. I did, however, rebuild the nested lab environment after that series was written, and the only thing I've installed in it since then is Site Recovery Manager, which I used for a customer demonstration.
For the purposes of this series, my nested lab consists of the following VMs:
- Sophos XG (serves as my virtual router)
- Synology DS918+ (NFS Storage for the nested lab)
- 4 ESXi VMs
- 2 for Management
- 1 for Site A
- 1 for Site B
Once this series is finished up, I plan on revisiting my NSX-T series with a bunch of updated content since the entire deployment has changed since NSX-T 2.3.x (which is the version used in that series).
vRealize Suite 2019 Series:
I’ll continually add links in the series below as they’re published.
- vRealize Suite 2019 – Part 1: Installing vRealize Lifecycle Manager
- vRealize Suite 2019 – Part 2: Configuring VMware Identity Manager
- vRealize Suite 2019 – Part 3: Installing vRealize Automation
NSX-T Home Lab – Part 6: Upgrading NSX-T
Welcome to Part 6 of my NSX-T Home Lab series. In my previous post, I covered how to configure NSX-T networking to be able to start migrating and running workloads on the NSX-T fabric. In this post, I am going to cover the process of upgrading to the newly released version of NSX-T 2.4. Are you excited? Good!… So am I! Let’s jump right in!
Please see the documentation to follow best practices for upgrading NSX-T. You must follow the prescribed order and upgrade the hosts, NSX Edge cluster, NSX Controller cluster, and Management plane.
The duration for the NSX-T Data Center upgrade process depends on the number of components you have to upgrade in your infrastructure. It is important to understand the operational state of NSX-T Data Center components during an upgrade, such as when some hosts have been upgraded, or when NSX Edge nodes have not been upgraded.
The upgrade process is as follows:
Hosts > NSX Edge cluster > Management plane.
Please see the documentation for more information.
Supported Hypervisor Upgrade Path
The following are the supported hypervisor upgrade paths for the NSX-T Data Center product versions. Adhere to the appropriate upgrade path for your current NSX-T Data Center release:
- NSX-T Data Center 2.3 > NSX-T Data Center 2.4.
- NSX-T Data Center 2.2 > NSX-T Data Center 2.3 > NSX-T Data Center 2.4.
- NSX-T Data Center 2.1 > NSX-T Data Center 2.3 > NSX-T Data Center 2.4
Please see the documentation for more information.
Upgrade ESXi Hosts
If your ESXi host is running an unsupported version, manually upgrade it to a supported version. See the documentation for more information. Since my hosts are already running the latest version of ESXi 6.7 U1, I am good to continue.
Download the NSX-T Upgrade Bundle
The upgrade bundle contains all the files to upgrade the NSX-T Data Center infrastructure. Before you begin the upgrade process, you must download the correct upgrade bundle version.
You can also navigate to the upgrade bundle and save the URL. When you upgrade the upgrade coordinator, paste the URL so that the upgrade bundle is uploaded from the VMware download portal. See the documentation for more information.
Upgrading NSX-T Data Center
After you finish the prerequisites for upgrading, your next step is to update the upgrade coordinator to initiate the upgrade process.
After the upgrade coordinator has been updated, it upgrades the hosts, NSX Edge cluster, NSX Controller cluster, and Management plane based on your input.
You can use REST APIs to upgrade your NSX-T Data Center appliance. Identify the NSX-T Data Center version you are upgrading to. Refer to the API guide with your product version in code.vmware.com to find the latest upgrade-related APIs.
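As a quick illustration of the API route, here's a hedged PowerShell sketch that pulls the overall upgrade status; I'm assuming the /api/v1/upgrade/status-summary endpoint and a placeholder manager FQDN, so verify the exact calls against the API guide for your version (PowerShell 6+ is needed for -SkipCertificateCheck):

# Placeholder manager FQDN; prompts for the NSX admin credentials
$cred = Get-Credential
Invoke-RestMethod -Uri "https://nsxmgr.lab.local/api/v1/upgrade/status-summary" -Authentication Basic -Credential $cred -SkipCertificateCheck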
See the documentation for more information.
Update the Upgrade Coordinator
See the documentation for more information. From your browser, log in with admin privileges to your NSX Manager, navigate to System > Utilities > Upgrade, then click Proceed to Upgrade.
Browse to the upgrade bundle .mub file that was downloaded as part of the prerequisites steps and then click Upload.
The upload may take up to 20 minutes or more to complete. Once the upload has completed, click Begin Upgrade. Accept the EULA, then click Continue to begin the upgrade.
Once the upgrade coordinator update completes, click the Run Pre Checks link. All pre-checks should come back green, but since this is a nested demo lab, I am expecting to see some sort of warning, as evident in the image below. The reason for this warning is that I've configured my current NSX Manager VM with 4 vCPUs and 16GB RAM, which is sufficient for my lab and aligns with the "small" deployment model. I could have left it at 2 vCPUs and 8GB RAM, satisfying the "extra small" deployment model, had I wanted to leave it alone. In a production environment, you'd want to deploy this with at least the "medium" deployment model, which requires 6 vCPUs and 24GB RAM.
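If you want to bump the appliance resources the same way, here's a minimal PowerCLI sketch; the VM name is a placeholder for whatever you named your NSX Manager, and the appliance should be powered off (or have CPU/memory hot-add enabled) before resizing:

# Placeholder VM name; assumes an existing Connect-VIServer session and a powered-off appliance
Get-VM -Name "nsx-manager" | Set-VM -NumCpu 4 -MemoryGB 16 -Confirm:$false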
At this point, I’m ready to move on to the next part to begin upgrading the NSX-T VIBs on my ESXi hosts. Click Next to proceed.
This part is fairly simple. Just click the blue Start button to begin upgrading the hosts. See the documentation for more information.
Once complete, click Next to proceed to the next phase in the upgrade process…upgrading the Edges.
Upgrade NSX Edge Cluster
Here, it will be the same process as before. Click the blue Start button to begin upgrading the Edge Cluster. Once complete, click Next to move on to the next phase…upgrading the Controller Nodes. See the documentation for more information.
Upgrade NSX Controller Cluster
This is by far the easiest part of the upgrade because…well…there’s nothing to do! Beginning with NSX-T 2.4.0, the Controllers are migrated into the NSX Manager. But just for informational purposes, you should see the following screen. Click Next to continue to the final part of the upgrade process…upgrading the Management plane.
Upgrade Management Plane
As was done with each previous step, simply click the blue Start button to begin the upgrade. You'll then be presented with another window; go ahead and click Start again to kick off the upgrade. See the documentation for more information.
This process will take about 10-15 minutes as it upgrades the Manager appliance VM and then reboots it. It may look like the upgrade has failed, but sit tight, as the Manager UI will eventually become inaccessible. Let the VM reboot and allow about 10 minutes for services to start. Afterward, go ahead and log into the NSX Manager UI and you should now be presented with the new simplified NSX-T 2.4.0 interface.
Having a single Manager appliance is sufficient for my nested lab, but in a production environment, you will want to follow this documentation (Step 12) to deploy two more Manager appliances and cluster the three together. In the image above, you can see the highlighted warning about this. If you are not familiar with the process, you can also follow the official documentation.
Post Upgrade Tasks
Verify the Upgrade
After you upgrade NSX-T Data Center, you can verify whether the versions of the upgraded components have been updated. See the documentation for more information. Navigate to System > Upgrade and confirm that all components are running 2.4.x.
Delete NSX Controllers
After successfully upgrading to NSX-T Data Center 2.4, you can delete the NSX-T Data Center 2.3 NSX Controllers. See the documentation for more information. Simply locate the NSX Controller VM(s) inside your vCenter Server and power them off, then right-click on them and select Delete from Disk. Click Yes to confirm.
That about wraps up this post. In my next post, I will go over the process of completely uninstalling NSX-T so stay tuned!
NSX-T Home Lab – Part 5: Configuring NSX-T Networking
Welcome to Part 5 of my NSX-T Home Lab series. In my previous post, I went over the lengthy process of configuring the NSX-T fabric. In this post, I am going to cover the process of configuring the networking so we can get the logical routers and logical switches in place, ready to attach VMs to them, and begin running workloads on NSX. Let's get to it, shall we?
An NSX-T Data Center logical switch reproduces switching functionality, including the handling of broadcast, unknown unicast, and multicast (BUM) traffic, in a virtual environment completely decoupled from the underlying hardware.
Logical switches are similar to VLANs, in that they provide network connections to which you can attach virtual machines. For more information, please see the documentation.
I am going to start off by creating a Logical Switch to serve as my uplink from the external network to my Tier-0 router, which I’ll create afterward. To create a logical switch, select Networking > Switching > +ADD. Enter a Name, then from the Transport Zone drop-down menu select the VLAN uplink transport zone that was created in the previous post. Since I’ll be tagging VLANs at the port group level, enter a 0 (zero) for the VLAN ID and click ADD.
And that’s all there is to it! After a logical switch is created, we need to create a port for it to connect it to a logical router, but we first need a Tier-0 Logical Router.
Tier-0 Logical Router
An NSX-T Data Center logical router reproduces routing functionality in a virtual environment completely decoupled from the underlying hardware. The tier-0 logical router provides an on and off gateway service between the logical and physical network. Tier-0 logical routers have downlink ports to connect to NSX-T Data Center tier-1 logical routers and uplink ports to connect to external networks. For more information, please see the documentation.
To create a Tier-0 Logical Router, select Networking > Routers > +ADD and select Tier-0 Router from the drop-down menu. Provide a Name for the router and from the Edge Cluster drop-down menu, select the edge cluster that was created in the previous post then click ADD. Changing the High Availability setting is optional and I’m choosing to leave the default Active-Active setting.
With the Tier-0 logical router created, click on the router and from the Configuration drop-down menu, select Router Ports then click +ADD under Logical Router Ports.
Enter a Name and leave the Type as "Uplink". Optionally, change the MTU value to support Jumbo Frames if configured; otherwise, leave the default value of 1500 (I am using 9000 for Jumbo Frames in my environment). From the Transport Node drop-down menu, select the edge transport node created in the previous post. From the Logical Switch drop-down menu, select the logical switch that was created in the previous step, then provide a name for the Logical Switch Port, provide an address on the "Uplink" VLAN 160 for the router port, and click ADD.
Now, with the Tier-0 logical router created and attached to an uplink logical switch, I have the option of either setting up a static route or configuring Border Gateway Protocol, also known simply as BGP, to send and receive traffic. Until one of these is configured, I won't be able to ping my Tier-0 router. I am going to opt to configure BGP so that any network I add later on down the road will get advertised properly to the neighbor router (Sophos XG) on my external network instead of using a wide-open static route. I'll come back to the BGP configuration a little later on, but first, I'd like to set up a Tier-1 router to connect to my Tier-0. Any VLAN-based logical switches I create from this point on will be attached to the Tier-1 logical router.
Tier-1 Logical Router
Similar to Tier-0 logical routers, Tier-1 logical routers have downlink ports to connect to NSX-T Data Center logical switches and uplink ports to connect to NSX-T Data Center tier-0 logical routers. The tier-1 logical router must be connected to the tier-0 logical router to get northbound physical router access. For more information, please see the documentation.
As was done when creating the Tier-0 logical router, repeat the same process by selecting Networking > Routing > +ADD, but select Tier-1 Router from the drop-down menu instead. Provide a Name, then from the Tier-0 Router drop-down menu, select the Tier-0 router that was created in the previous step to attach the Tier-1 to it. Next, from the Edge Cluster drop-down menu, select the edge cluster that was created in the previous post, leave the default Failover Mode, then from the Edge Cluster Members drop-down menu, select the edge transport node that was created in the previous post and click ADD.
To take full advantage of the tier-0 logical router, the topology must be configured with redundancy and symmetry with BGP between the tier-0 routers and the external top-of-rack peers. To enable access between your VMs and the outside world, you can configure an external BGP (eBGP) connection between a tier-0 logical router and a router in your physical infrastructure. For more information, please see the documentation.
To configure BGP on a Tier-0 logical router, select Networking > Routing and select the Tier-0 router. From the Routing drop-down menu, select BGP and click +ADD under the Neighbors section. Enter the neighbor router address; in this case, since I am using a VLAN for my uplink network, I will specify the gateway address of VLAN 160 configured on my Sophos XG firewall/router. Next, select the Max Hop count needed to reach the neighbor router. In my case, my Tier-0 router is configured with the IP address 10.254.160.2 and is one hop away from the gateway at 10.254.160.1, so I'll leave the count set to 1. Finally, provide a Remote AS number, which will be configured on the neighbor router (Sophos XG), and click ADD.
Next, click EDIT next to BGP Configuration. Toggle the Status switch to Enabled and enter a Local AS number for the Tier-0 router. Optionally, toggle the Graceful Restart switch to Enabled only if the Edge Cluster has one member, which is the case in this nested lab environment, then click SAVE.
Pretty straightforward, right? But we're not done just yet. In order for routes to be advertised properly to the neighbor router, there are a few more things required, one of which is to enable Route Redistribution. To do so, select the Tier-0 Logical Router > Routing drop-down menu, and select Route Redistribution. Click EDIT, toggle the Status switch to Enabled, and click SAVE. Next, click +ADD and enter a name for the route redistribution configuration, then from the "Sources" choices, select NSX Static and click ADD. NSX treats any advertised routes as "dynamic" static routes; therefore, this setting needs to be enabled to properly advertise routes to the neighbor router.
With BGP now configured on the virtual NSX side, I also need to configure BGP on the physical side, meaning on my Sophos XG. Log into the Sophos XG firewall and navigate to Routing > BGP. Here, I will add my VLAN 160 gateway IP address as the Router ID, set the Local AS number for my router, and click Apply.
Next, I'll click Add under the Neighbors section and add the Tier-0 router IP address along with the AS number I configured on it, then click Save.
Lastly, I'll head over to Administration > Device Access and check the boxes for LAN & WAN under Dynamic Routing, then click Apply.
Now, it’s time to test our configurations. I’ll start off by running a ping test from my jump-box or desktop computer on an external network to the Tier-0 logical router.
Success!! Next, I will check to see that my Sophos XG can see it’s BGP neighbor but navigating to Routing > Information > BGP > Neighbors or Summary
Success again!! I am on a roll!! It’s also a good idea to check the same on the NSX Edge appliance. To do so, open an SSH connection to the NSX Edge appliance and run the following commands.
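First, list the logical routers (assuming the standard NSX-T CLI syntax here):

get logical-routers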
This will list the available Tier-0 and Tier-1 routers.
Copy the UUID of the Tier-0 router and run the following.
get logical-router <paste-UUID-here> bgp neighbor
Perfect! At this point, I can’t see any advertised routes in BGP because I have yet to create any. In order for the Tier-1 router to advertise any new routes, I need to enable Route Advertisements. To do so, navigate to Networking > Routing and select the Tier-1 Logical Router then from the Routing drop-down menu, select Route Advertisement. Click EDIT and toggle the Status switch to Enabled and the Advertise All NSX Connected Routes to Yes and click SAVE.
As you can see above, I’ve assigned the router port and IP address of 192.168.254.1/24 so now I should be able to see this route being advertised from the NSX Edge and received on the Sophos XG. I’ll first check the NSX Edge.
get logical-router <paste_UUID_here> route static
Excellent! Remember, NSX-T treats the advertised routes as static routes hence the reason that “static” was used in the command syntax on the NSX Edge. Next, I’ll check the routes from the Sophos XG.
BOOM!! Now we’re cooking with gas! BGP is up and running, advertising routes as expected. The only caveat at this point is that I cannot ping this network from my jump-box or local desktop on my external network (not connected to the Sophos XG) unless I create a static route on my physical router. But I should be able to ping from a VM connected to an external network on the Sophos XG (VLAN 140) since the route is advertised and the router knows about it. Let’s test it from the nested lab’s domain controller.
Nice! At this point, why not spin up a VM and attach it to the new Logical Switch and see if we can access the outside world, right? I’ll quickly deploy a Linux VM from a template that I have which is configured to obtain an IP address via DHCP. But wait! I’ll first need a DHCP server within NSX to handle the distribution of dynamic host IPs. While my VM is deploying from a template, let’s cover the process of creating a DHCP server in NSX-T.
DHCP (Dynamic Host Configuration Protocol) allows clients to automatically obtain network configuration, such as IP address, subnet mask, default gateway, and DNS configuration, from a DHCP server. For more information, please see the documentation.
To create a DHCP Server, I first need a DHCP Profile. I’ll create one by navigating to Networking > DHCP > Server Profiles > +ADD. Provide a Name, from the Edge Cluster drop-down menu select the edge cluster that was previously created, and from the Members drop-down menu select the NSX edge transport node that was previously created and click ADD.
Next, click Networking > DHCP > Servers > +ADD. Provide a Name, an IP Address and netmask for the DHCP server, select the DHCP profile that was just created, and click ADD. The Common Options, etc. are not required, as we can set these in the next step when creating the IP address pool, but you can set them now if you'd like.
Now, click on the newly created DHCP server and under the IP Pool section click +ADD. Provide a Name, an IP address range, a Gateway IP address, and fill out the Common Options then click ADD.
Lastly, I need to attach the DHCP server to the Logical Switch so that it can begin handing out addresses to VMs connected to it. Click the DHCP server and, from the Actions drop-down menu, select Attach to Logical Switch. Select the logical switch from the drop-down menu and click ATTACH. The DHCP server is now ready to hand out IP addresses!
Running Workloads on NSX-T
With the DHCP Server created and a new VM deployed, I’ll attach the VM to the Logical switch and power it on. Select the VM, right-click it and select Edit Settings. Change the network adapter port group to the new Logical Switch that was created and click OK.
Power on the VM and once it's up, log in and check that it has grabbed an IP address from the DHCP server's IP pool.
Just what I wanted to see! The VM successfully grabbed an IP address from the DHCP server! Woo-Hoo!! I can also ping the VM from the domain controller.
Now, the only thing left is to see if we can ping the outside world, like Google's DNS server. In order for this to work properly, the router would need to know how to NAT this VM's IP address to the outside; otherwise, this can be expected to fail, as seen below.
There are a couple of options. The recommended way would be to create a SNAT rule on the Sophos XG firewall/router so it knows how to route traffic out to the WAN. Another way would be to set up a SNAT rule in NSX. This can be a bit tricky in a nested lab setup like this one due to it basically being double-NAT'ed. I'd prefer to do the rule on the Sophos, but since I'm still learning my way around its interface and settings, it may be easier to simply create a SNAT rule in NSX until I figure out how to do it on the Sophos XG. The one caveat here is that when a SNAT rule is created in NSX, it will break access to the network from an external network, meaning I won't be able to ping the VM anymore unless I also set up a DNAT rule and ping its NAT address to reach the VM. Let me show you how to create the SNAT rule.
Source NAT (SNAT)
Source NAT (SNAT) changes the source address in the IP header of a packet. It can also change the source port in the TCP/UDP headers. The typical usage is to change a private (RFC 1918) address/port into a public address/port for packets leaving your network. For more information, please see the documentation.
To create a SNAT rule, navigate to Networking > Routing. Click the tier-1 logical router and from the Services drop-down menu, select NAT. Click +ADD. For the Source IP, add the network that you want to Source NAT. For the Translated IP, pick an IP address on the Uplink VLAN, in my case, this is VLAN 160. My Tier-0 router uses 10.254.160.2 so I will set this Translated IP to 10.254.160.3 then click ADD.
Next, from the Routing drop-down menu, select Route Advertisement. Click EDIT and toggle the Advertise All NAT Routes to Enabled and click SAVE.
Lastly, click the Tier-0 router and from the Routing drop-down menu, select Route Redistribution. Select the route redistribution criteria that was created earlier in this post and click EDIT. Click the checkbox for Tier-1 NAT and click SAVE.
In theory, this all should have allowed me to access the outside world, but I wasn't able to, and I'm thinking there are additional firewall rules required on the Sophos XG to allow it. Of course, things like this are to be expected when working in nested environments, but I'll continue to tinker with this until I get it to work in the nested lab and will update the post should I figure it out.
Well, that about does it for this one! In the next post, I’ll cover the process of upgrading NSX-T. It will be a while before I get to that since the version I just deployed is the latest release. Thanks as always for your support!
NSX-T Home Lab – Part 4: Configuring NSX-T Fabric
Welcome to Part 4 of my NSX-T Home Lab series. In my previous post, I covered the process of deploying the NSX-T appliances and joining them to the management plane to have the foundational components ready for us to continue the configuration. In this post, I will cover all the configurations required to get NSX-T Fabric ready for network configurations in order to run workloads on it. So sit back, buckle up, and get ready for a lengthy read!
A compute manager, for example, a vCenter Server, is an application that manages resources such as hosts and VMs. NSX-T polls compute managers to find out about changes such as the addition or removal of hosts or VMs and updates its inventory accordingly. I briefly touched on this in my previous post, stating that one is required if deploying an NSX Edge appliance directly from the NSX Manager. Otherwise, a compute manager is completely optional, but I find value in it for retrieving my lab's inventory, and it's the first thing I like to configure in a fresh NSX-T deployment.
To configure a Compute Manager, log into your NSX Manager UI and navigate to Fabric > Compute Managers > +ADD. Enter the required information, leaving the SHA-256 Thumbprint field empty, and click ADD. You should receive an error because a thumbprint was not provided, along with a prompt asking if you want to use the server-provided thumbprint, so click ADD. Allow a brief 30-60 seconds for the Compute Manager to connect to the vCenter Server, clicking refresh if needed, until you see that it's "Registered" and "Up".
Tunnel EndPoint (TEP) IP Pool
As stated in the official documentation, tunnel endpoints are the source and destination IP addresses used in the external IP header to uniquely identify the hypervisor hosts originating and terminating the NSX-T encapsulation of overlay frames. DHCP or IP pools can be configured for TEP IP addresses, so I'll create a dedicated pool to be used instead of DHCP; the addresses will reside in the overlay network on VLAN 150. For more information, please see the documentation.
To add a Tunnel EndPoint IP Pool, navigate to Inventory > Groups > IP Pools > +ADD. Provide a Name then click +ADD underneath the Subnets section and provide the required information for the IP pool then click ADD.
An uplink profile defines policies for the links from the hypervisor hosts to NSX-T logical switches or from NSX Edge nodes to top-of-rack switches. The settings defined by these profiles might include teaming policies, active/standby links, transport VLAN ID, and MTU setting. Uplink profiles allow you to consistently configure identical capabilities for network adapters across multiple hosts or nodes. By default, there are two uplink profiles already provided with NSX-T, but they cannot be edited; therefore, I am going to create new ones for the Edge uplink as well as for my hosts' uplinks. For more information, please see the documentation.
Edge Uplink Profile
To create an Edge uplink profile, navigate to Fabric > Profiles > Uplink Profiles > +ADD. Provide a Name and an optional description, then, under Teamings, set the Teaming Policy to Failover Order and the Active Uplinks to uplink-1. Set the Transport VLAN to 0, as we are tagging at the port group level for our Edge, and either leave the MTU at the default 1600 or set it to a higher value supported by your Jumbo Frames configuration. In my setup, I will set the MTU to 9000, then click ADD.
Host Uplink Profile
Next, I’ll repeat the process to create an uplink profile for my ESXi hosts. This time, I’ll keep the same settings for Teaming but will set the Standby Uplinks as uplink-2, the Transport VLAN will be my Overlay VLAN ID 150 since these uplinks are connected directly to the hosts and need to be tagged accordingly, and again I’ll set the MTU to 9000 and click ADD.
Transport zones dictate which hosts, and therefore which VMs, can participate in the use of a particular network. There are two types of transport zones: overlay and VLAN. The overlay transport zone is used by both host transport nodes and NSX Edges and is responsible for communication over the overlay network. The VLAN transport zone is used by the NSX Edge for its VLAN uplinks. Both types create an N-VDS on the host or Edge to allow for virtual-to-physical packet flow by binding logical router uplinks and downlinks to physical NICs. For more information, please see the documentation.
Overlay Transport Zone
To create an overlay transport zone, navigate to Fabric > Transport Zones > +ADD. Provide a Name, an N-VDS Name, select Standard or Enhanced Data Path for the N-VDS Mode, set the Traffic Type as Overlay and click ADD.
Enhanced Data Path is a networking stack mode which, when configured, provides superior network performance and is primarily targeted at NFV workloads. It requires you to install an additional VIB, specific to your physical NICs, onto your hosts and reboot them before it can be configured, but that is out of scope for this demo. For more information, please see the documentation.
I'll be choosing Standard for this demonstration since it's a nested lab and the hosts only see their physical NICs as traditional VMware VMXNET3 adapters.
VLAN Transport Zone
To create a VLAN uplink transport zone, I'll repeat the same process: provide a Name and an N-VDS Name, but this time set the Traffic Type to VLAN and click ADD.
There are two types of transport nodes that need to be configured in NSX-T: a host transport node and an edge transport node. A host transport node is a node that participates in NSX-T overlay or NSX-T VLAN networking, whereas an edge transport node is a node that is capable of participating in NSX-T overlay or NSX-T VLAN networking. In short, they simply facilitate communication in an NSX-T overlay and/or NSX-T VLAN network. For more information, please see the documentation here and here.
I'll be automatically creating the host transport nodes when I add them to the fabric, but I will manually create a single edge transport node with two N-VDSes for my overlay and VLAN uplink transport zones, respectively.
Host Transport Node
This part is pretty simple and is where having the Compute Manager that I configured earlier comes in handy. But this process can also be done manually. You’ll need to ensure you have at least 1 Physical NIC (pNIC) available for NSX-T, and you can see here that my hosts were configured with 4 pNICs and I have 2 available, vmnic2 and vmnic3.
Navigate to Fabric > Nodes > Hosts and from the "Managed By" dropdown menu, select the compute manager. Select a cluster and then click Configure Cluster. Toggle both switches to Enabled to Automatically Install NSX and Automatically Create Transport Node. From the Transport Zone dropdown menu, select the overlay transport zone created earlier; from the Uplink Profile dropdown menu, select the host uplink profile created earlier; and from the IP Pool dropdown menu, select the TEP IP Pool created earlier. Lastly, enter the pNIC names and choose their respective uplinks. If you have 2 pNICs, click Add PNIC and modify the information before clicking ADD to complete the process.
This will start the host preparation process so allow a few minutes for the NSX VIBs to be installed on the hosts and for the transport nodes to be configured, clicking the refresh button as needed until you see “NSX Installed” and the statuses are “Up”.
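If you'd like to verify from PowerCLI as well, here's a small sketch that lists the NSX VIBs on a prepared host; the host name is a placeholder and an existing Connect-VIServer session is assumed:

# Placeholder host name; lists installed VIBs and filters for the NSX ones
$esxcli = Get-EsxCli -VMHost "esxi-comp-01.lab.local" -V2
$esxcli.software.vib.list.Invoke() | Where-Object { $_.Name -like "nsx*" } | Select-Object Name, Version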
Click on the Transport Nodes tab, and we can see that the nodes have been automatically created and are ready to go. And if we look back at the physical NICs in vCenter, we can see that they have now been added to the N-VDS-Overlay.
Edge Transport Node
Begin by navigating to Fabric > Nodes > Transport Nodes > +ADD. Provide a Name, from the Node dropdown menu select the Edge that we deployed and joined to the management plane in the previous post, in the “Available” column select the two transport zones we previously created and click the > arrow to move them over to the “Selected” column, then click the N-VDS heading tab at the top of the window.
From the Edge Switch Name dropdown menu, select the N-VDS we created earlier for the Overlay. From the Uplink Profile dropdown menu, select the edge uplink profile we created earlier. From the IP Assignment dropdown menu, select Use IP Pool; from the IP Pool dropdown menu, select the TEP IP Pool created earlier; and finally, from the Virtual NICs dropdown menus, select the virtual NIC that maps to the Overlay port group (Network 1 from the Edge deployment).
But we're not done yet! Next, click +ADD N-VDS so that we can add the additional one needed for the remaining transport zone. From the Edge Switch Name dropdown menu, select the N-VDS we created earlier for the VLAN uplink. From the Uplink Profile dropdown menu, select the edge uplink profile again. And from the Virtual NICs dropdown menus, select the virtual NIC that maps to the Uplink port group (Network 2 from the Edge deployment), then click ADD.
Allow a few minutes for the configuration state and status to report “Success” and “Up” and click the refresh button as needed.
Having a multi-node cluster of NSX Edges helps ensure that at least one NSX Edge is always available. In order to create a tier-0 logical router, or a tier-1 router with stateful services such as NAT, load balancing, and so on, you must associate it with an NSX Edge cluster. Therefore, even if you have only one NSX Edge, it must still belong to an NSX Edge cluster to be useful. For more information, please see the documentation.
To create an edge cluster, navigate to Fabric > Nodes > Edge Clusters > +ADD. Enter a Name, and in the "Available" column, select the edge transport node created earlier, click the > arrow to move it to the "Selected" column, and click ADD. Well, that was easy!
Wow! That seemed like a lot of work but it was exciting to get the components configured and ready for setting up the Logical Routers and switches, which I’ll cover in the next post, so we can start running VMs in NSX-T. I hope you’ve found this post useful and I thank you for reading.
NSX-T Home Lab – Part 3: Deploying NSX-T Appliances
Welcome to Part 3 of my NSX-T Home Lab Series. In my previous post, I went over the process of setting up the Sophos XG firewall/router VM for my nested lab environment. In this post, we'll cover the process of deploying the required NSX-T appliances. There are 3 main appliances that need to be deployed: the first is the NSX-T Manager, followed by one or more Controllers, and lastly, one or more Edge appliances. For the purposes of this nested lab demo, I will only be deploying a single instance of each appliance, but please follow recommended best practices if you are leveraging this series for a production deployment. With all that said, let's get to it!
NSX-T Manager Appliance
Prior to deploying the appliance VMs, it's recommended to create DNS entries for each component. I've already done this on my Domain Controller. Additionally, if you need to obtain the OVAs, please download them from here. At the time of this writing, NSX-T 2.3.1 is the latest version.
We’ll begin by deploying the NSX Manager appliance to our Management Cluster using the vSphere (HTML5) Web Client and deploying the NSX Unified Appliance OVA. You can also deploy via the command line and/or PowerCLI, but for the purposes of this demo, I am going to leverage the GUI. Please use the following installation instructions to deploy the NSX Manager Appliance. I’ve used the following configuration options for my deployment:
- “Small” configuration – consumes 2 vCPU and 8GB RAM.
- The default “Medium” configuration consumes 4 vCPU and 16GB RAM
- Thin Provision for storage
- Management Network port group on VLAN 110
- Role –
- Enable SSH
- Allow root SSH Logins
Once the appliance has been deployed, edit its settings and remove the CPU and Memory reservations by setting the values to 0 (zero). Normally, these would be left in place to guarantee those resources for the appliance, but since this is a resource-constrained nested lab, I'm choosing to remove them.
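If you'd rather do this from PowerCLI than through the UI, here's a minimal sketch; the VM name is a placeholder for whatever you named the appliance, and an existing Connect-VIServer session is assumed:

# Placeholder VM name; zeroes out both the CPU and memory reservations
Get-VM -Name "nsx-manager" | Get-VMResourceConfiguration | Set-VMResourceConfiguration -CpuReservationMhz 0 -MemReservationGB 0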
At this point, power on the appliance and wait a few minutes for the Web UI to be ready, then open a browser to the IP or FQDN of the NSX Manager and log in using the credentials provided during the deployment (admin/the_configured_passwd). Accept the EULA and choose whether or not to participate in the CEIP.
NSX-T Controller Appliance
Next up is the NSX Controller. There are several ways that this appliance can be deployed, such as via the command line, PowerCLI, the vSphere Web Client, or directly from the NSX Manager. For this demo, I am opting to continue my deployment via the vSphere Web Client for the sake of simplicity and familiarity. As was done with the NSX Manager, deploy the NSX Controller OVA by following these instructions. Again, since this is a resource-constrained nested lab, I am going to choose the following configuration options and remove the CPU and Memory reservations after the deployment has completed.
- “Small” configuration – consumes 2 vCPU and 8GB RAM.
- The default “Medium” configuration consumes 4 vCPU and 16GB RAM
- Thin Provision for storage
- Management Network port group on VLAN 110
- Enable SSH
- Allow root SSH Logins
- Ignore the “Internal Properties” section as it’s optional
With this part of the deployment complete, go ahead and power on the appliance. The next step is to join the controller to the management plane (NSX Manager) since we deployed the controller manually. Follow these instructions to perform this step, but I'll also post screenshots below. These steps can be done either from the appliance console or via SSH. I'll opt for the latter since we chose the option to allow SSH during the previous deployments. Be mindful of which appliance I am running commands on in the screenshots below, as some are done on the Manager while others are done on the Controller.
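For reference, the gist of the join looks like the sketch below; the Manager FQDN and thumbprint are placeholders, so treat the linked instructions as the source of truth for the exact syntax in your version. First, on the NSX Manager, grab the API thumbprint:

get certificate api thumbprint

Then, on the Controller, join it to the management plane (you'll be prompted for the admin password):

join management-plane nsxmgr.lab.local username admin thumbprint <manager-thumbprint>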
In the last image above, we can see that the Control cluster status reports “UNSTABLE“. This is to be expected in this deployment scenario as we only deployed a single controller instance. Nothing for us to worry about here.
Now that we’ve joined the Controller to the Manager (management plane), there is one last thing to do which is to initialize the Control Cluster to create a Control Cluster Master. To do so, follow these instructions.
Now, if we log in to the NSX Manager again and click the Dashboard view, we can see that we now have both a Manager and a Controller Node configured.
NSX-T Edge Appliance
Are you still with me? Good! The final component to deploy is the NSX Edge. As we've done with the other appliances, I will continue deploying the Edge via the vSphere GUI instead of leveraging the NSX Manager, and I'll tell you why. Deploying via the NSX Manager is the easiest method, but there is a caveat: it will only allow you to select a "Medium" or "Large" configuration deployment and will automatically power on the appliance post-deployment, thus keeping the set CPU and Memory reservations. I've heard there is a way to trick the UI into deploying a "Small" configuration, but I've yet to confirm this, nor have I seen it done. Additionally, there is a prerequisite for deploying via the NSX Manager, which is to configure a Compute Manager, something we've yet to cover and which will be covered in the next post of this series. Follow these instructions to deploy the appliance. By deploying via the vSphere Web Client, I have the ability to select "Small" for the deployment and then remove the reservations before powering on the appliance, but feel free to use your method of choice.
As we’ve done with the other deployments, use the following options.
- “Small” configuration – consumes 2 vCPU and 4GB RAM.
- The default “Medium” configuration consumes 4 vCPU and 8GB RAM
- Thin Provision for storage
- Management Network port group on VLAN 110
- Enable SSH
- Allow root SSH Logins
- Ignore the “Internal Properties” section as it’s optional and any other non-relevant fields
When we get to the networking section, select the Management port group for “Network 0”, select the Overlay port group for “Network 1”, and select the Uplink portgroup for “Network 2” and “Network 3”.
Power on the appliance, and when the console is ready, either log in to the console or connect via SSH, as we'll need to also join this appliance to the management plane as we did previously with the controller. Follow these instructions to complete this process. Again, be mindful of which appliances the commands are executed against.
Again, if we log into the NSX Manager and click the Dashboard view, we’ll now see that we also have an Edge Node configured. Woo-hoo!!
Well, this completes this post and I hope you had fun following along. In the next post, I’ll cover adding hosts to our NSX-T fabric along with creating all the required transport nodes, zones, edge clusters, profiles, and routing stuff so that we can get everything ready to have workloads run in NSX-T!
NSX-T Home Lab – Part 2: Configuring ESXi VMs
Welcome to Part 2 of my NSX-T Home Lab Series. In my previous post, I went over the installation and configuration of a Sophos XG firewall for my nested NSX-T Home Lab. In this post, I will cover the setup and configuration of the ESXi 6.7 VMs.
I recently wrote a post on how to Create an ESXi 6.7 VM Template, which is what I used to deploy my VMs from. After cloning to new VMs, I changed the disk sizes for my cache and capacity disks, increased the CPUs and RAM, and added 2 additional network adapters to give me a total of 4. The reason I did this is so that I can keep my management and other vmkernel ports on their VDS and have two new adapters to use for NSX-T. I may do a follow-up post using only two adapters, where I'll migrate my vmkernel networks over to NSX-T, since in the real world I'm sure there are many customers using dual 10Gb cards in their servers.
Now, I will not be covering how to actually install ESXi as you can follow the documentation for that, or you can reference my post mentioned above. There really isn’t much to that installation…it’s pretty trivial. Instead, I am just going to quickly state the specs used for my ESXi VMs from a resource perspective, and give some additional pointers.
Single-Node Management Cluster VM
- CPUs: 8
- RAM: 32GB
- Disk1: Increased to 500GB (This will serve as a local VMFS6 datastore)
- Disk2: Removed (As I will not be running VSAN)
- Network Adapters: 2 (connected to the Nested VDS port group we created earlier)
On this host, I deployed a Windows Server 2019 Core OS to serve as my domain controller for the nested lab. I also deployed a VCSA to manage the environment.
2-Node VSAN Compute Cluster (with a Witness Appliance)
- CPUs: 8 on each host
- RAM: 16GB on each host
- Network Adapters: 4 on each host (connected to the Nested VDS port group we created earlier)
I used the new Quick Start feature to create and configure my VSAN cluster along with all the networking required, and this has now become one of my favorite new features in vSphere 6.7. There were some nuances I had to work through, but they were super simple: during the creation of the VDS and the migration of vmkernel ports to it, my nested ESXi VMs would lose connectivity. Simply restarting the management network from the console fixed the issue and I was able to proceed.
I then used VUM to update each host to the latest version (Build 11675023) that was released on 1/17/19. Once everything was configured, I had a nice little, nested playground ready for NSX-T!
In the next post, I will go over the deployment of the NSX-T appliances in the nested lab. Be sure to come back!
NSX-T Home Lab – Part 1: Configuring Sophos XG Firewall
Welcome to Part 1 of my NSX-T Home Lab Series. In my previous post, I went over the gist of what I plan to do for my nested NSX-T Home Lab. In this post, I will cover the setup and configuration of a Sophos XG firewall Home Edition, which will serve as the router for my nested lab environment. My physical Home Lab is configured with Virtual Distributed Switches, or VDS (sometimes seen as DVS) for short, and since this is a nested lab environment that will not have any physical uplinks connected, I will need to create a new VDS without physical uplinks along with a portgroup for the nested environment, and then configure access to the environment from my LAN. All traffic to and from the nested lab will flow through the virtual router/firewall.
- VDS and portgroup without physical uplinks
- Set the VLAN type for this portgroup to VLAN Trunking with the range of 0-4094 to allow all VLANs to trunk through
- Static route to access the nested lab from my LAN
- Once you determine the subnets you’d like to use for the nested lab, add a static route summary on your physical router
I have a bunch of VLANs created for my physical Home Lab as I've yet to deploy NSX-T in there, but once I do, I'll be removing the majority of said VLANs and only keeping the ones required to run the lab. With that said, one of the VLANs I have is for "Development" work such as this, so I'll be connecting one uplink from the router to this VLAN, which will serve as the WAN interface, while the other uplink will be connected to the new nested portgroup to serve as the LAN for the nested lab. I'll describe the basics for deploying the Sophos XG firewall, but will not go into full detail as this is pretty trivial and can be done using the following guide as a reference.
- OS: Other Linux 3.x or higher
- CPU: 1 (add more as needed – max supported is 4 in the home edition)
- RAM: 2GB (add more as needed – max supported is 6GB in the home edition)
- Disk: 40GB thin (you may make this smaller if you’d like)
- Network Adapter 1: LAN portgroup (nested)
- Network Adapter 2: WAN portgroup
- Boot: BIOS (will not boot if you keep as EFI)
Once the VM has been deployed, the Sophos XG will be configured with a 172.16.1.1 address by default. This will need to be changed to the subnet you're using for your nested LAN interface. Log in to the console with the default (admin/admin) credentials and choose the option for Network Configuration to change the IP for your nested LAN port.
Once this is done, you would normally navigate to that address on port 4444 to access the admin GUI. Unfortunately, this will not work since the LAN side has no physical uplinks. So what do we do? We need to run a command to enable admin access on the WAN port. To do so, choose option 4 to enter the device console and enter the following command:
system appliance_access enable
The WAN port is set to grab an address from DHCP so you’ll need to determine which IP address this is either by going into your physical router, or using a tool like Angry IP. Once in the Admin GUI, navigate to Administration > Device Access and tick the box for WAN under the HTTPS column. See this post for reference.
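If you don't have a tool like Angry IP handy, a quick-and-dirty PowerShell sweep of the WAN-side subnet works too; the subnet below is just a placeholder for whatever your "Development" VLAN uses:

# Placeholder subnet; prints any address that responds to a single ping
1..254 | ForEach-Object { $ip = "10.0.140.$_"; if (Test-Connection -ComputerName $ip -Count 1 -Quiet) { $ip } }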
Now, we can create our VLANs for our nested environment. I’m using the following for my lab:
Navigate to Networking and select Add Interface > VLAN to create each of your networks.
With our VLANs created, we’ll need to create two firewall rules to allow traffic from the WAN port to access the LAN, as well as to allow traffic from LAN to LAN. Navigate to Firewall > Add firewall rule and create the following rules. Choose something easy to label them as which makes sense to you:
This is where the static route will now be useful to access your nested lab. I’ve configured a route summary of 10.254.0.0/16 to go through the IP address of the WAN interface as the gateway so that I can access the Admin UI at https://10.254.1.1:4444 as well. I’ll now also be able to access the ESXi UI and VCSA UI, once they are stood up.
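If you can't (or would rather not) add the route on your physical router, a simple alternative is to add it directly on the Windows jump-box or desktop you'll be connecting from. Here's a hedged PowerShell sketch; the interface alias and next-hop are placeholders, with the next hop being the WAN interface IP the Sophos XG picked up:

# Run from an elevated prompt; adjust the interface alias and next-hop placeholders to your environment
New-NetRoute -DestinationPrefix "10.254.0.0/16" -InterfaceAlias "Ethernet0" -NextHop 192.168.1.50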
The final thing I will be doing is enabling the native MAC Learning functionality that is now built into vSphere 6.7 so that I do not need to enable Promiscuous Mode, which has normally been a requirement for the Nested portgroup and nested labs in general. To learn more about how to do this, see this thread. In my setup, I ran the following to enable this on my nested VDS portgroup:
Set-MacLearn -DVPortgroupName @("VDS1-254-NESTED") -EnableMacLearn $true -EnablePromiscuous $false -EnableForgedTransmit $true -EnableMacChange $false
To check that it was indeed set correctly, I ran the following:
Get-MacLearn -DVPortgroupName @("VDS1-254-NESTED")
And there you have it! In the next post, I will go over configuring our ESXi VMs for our nested lab!
NSX-T Home Lab Series
I recently upgraded my Home Lab "Datacenter" to support all-flash VSAN and 10Gb networking, with the plan to deploy NSX-T so that I can familiarize myself with the solution and use it to better prepare for the VMware VCP-NV certification exam. Since this is all brand new to me, I've decided that I'll first deploy it in a nested lab environment in order to learn the deployment process as well as to minimize the risk of accidentally messing up my Home Lab environment.
Now, I know there are a few blogs out in the wild already that go over the installation and setup of NSX-T, but I wanted to write my own as it will better help me retain the information that I am learning. Additionally, others may have a different setup than I have and/or may have deployed the solution differently than the way I intend to, which is by following the published documentation. I'd like to take this time to first shout out some of my colleagues, William Lam, Keith Lee, Cormac Hogan, and Sam McGeown, as their own blogs are what inspired me to deploy the solution for myself and document the process.
This post will serve as the main page where I’ll post the hyperlinks to each post in the series. I’ll be deploying a virtual router/firewall, 3x ESXi VMs, and a witness appliance so that I can configure a virtual 2-node VSAN compute cluster. I’ll be managing the environment via a vCenter Server Appliance or VCSA, and a Windows Server 2019 Core OS Domain Controller or DC. I won’t cover the installation and configuration of the DC as it’s out of scope for this series, nor will I go over the deployment of the VCSA or VSAN configuration as this can be done by following the documentation. And, since this is just a small nested lab, the remaining host that isn’t a part of the VSAN cluster will serve as a single-node Management cluster host where the DC, VCSA, and NSX-T Appliances will reside.
I will cover the router setup, ESXi VM configuration, and NSX-T deployment. For my setup, I am going to leverage a Sophos XG firewall Home Edition since I’ve always had an interest in learning more about these firewalls, but also because I typically see pfSense being used for virtual routers and I wanted to try something different. If you are using this as a guide for your own deployment, feel free to use your router/firewall of choice as there are plenty out there like FreeSCO, Quagga, or VyOS, just to name a few. So, with that said, I hope you all enjoy the content in this series!
NSX-T Home Lab Series
- NSX-T Home Lab – Part 1: Configuring Sophos XG firewall
- NSX-T Home Lab – Part 2: Configuring ESXi VMs
- NSX-T Home Lab – Part 3: Deploying NSX-T Appliances
- NSX-T Home Lab – Part 4: Configuring NSX-T Fabric
- NSX-T Home Lab – Part 5: Configuring NSX-T Networking
- NSX-T Home Lab – Part 6: Upgrading NSX-T
- NSX-T Home Lab – Part 7: Uninstalling NSX-T
Create an ESXi 6.7 VM Template
Disclaimer: The following is not supported by VMware.
Nested virtualization is nothing new, and many of us use it for test or demonstration purposes since nested environments can quickly be stood up or torn down. William Lam has an ESXi VM which can be downloaded from here, but I wanted to go ahead and create my own for use within my nested lab environments.
In this post, I am going to show you the steps I ran through to create an ESXi 6.7 VM that I can convert to a template for later use. Props to William for his excellent content on nested virtualization, which I’ve used a ton and will be leveraging here as well. So without further ado, let’s get to it!
For my ESXi VM, I will be configuring the following:
- CPU: 2 (Expose hardware assisted virtualization to the guest OS – checked on)
- RAM: 8GB
- Disk0: 16GB (bound to the default SCSI controller; thin provisioned)
- New virtual NVME Controller
- Disk1: 10GB (for VSAN cache tier bound to NVME Controller; thin provisioned)
- Disk2: 100GB (for VSAN capacity tier bound to NVME Controller; thin provisioned)
- 2x Network Adapters (VMXNET3)
- Some advanced configuration settings
Build the VM as follows:
Be sure you connect the ESXi installation media and power on the VM to begin the installation.
Once the VM powers back on, log in and enable SSH so that we can run some additional commands to update the OS and prepare it for cloning use.
(Optional) To update ESXi to the latest version, connect to the host via SSH and run the following:
**At the time of this writing, the latest version is Build 11675023, as per the profile used below; be sure to change the profile name to the latest available version**
esxcli network firewall ruleset set -e true -r httpClient
esxcli software profile update -p ESXi-6.7.0-20190104001-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
esxcli network firewall ruleset set -e false -r httpClient
(Optional) To update the latest version of the ESXi Host client, run the following:
esxcli software vib install -v "http://download3.vmware.com/software/vmw-tools/esxui/esxui-signed-latest.vib"
To prepare the VM for cloning use, run the following:
esxcli system settings advanced set -o /Net/FollowHardwareMac -i 1
sed -i 's#/system/uuid.*##' /etc/vmware/esx.conf
/sbin/auto-backup.sh
At this point, you can shutdown the VM and convert it to a template for cloning use.
After cloning a VM, if you plan on joining it to a vCenter Server you will need to run the following on each cloned instance via SSH.
esxcli storage vmfs snapshot resignature -l datastore1
Well, that about does it! Hope you all enjoyed this post!