iThinkVirtual™

NSX-T Home Lab – Part 5: Configuring NSX-T Networking

Intro

Welcome to Part 5 of my NSX-T Home Lab series. In my previous post, I went over the lengthy process of configuring the NSX-T fabric. In this post, I am going to cover the process of configuring the networking so we can get the logical routers and logical switches in place, ready to attach VMs and begin running workloads on NSX. Let's get to it, shall we?

Logical Switch

An NSX-T Data Center logical switch reproduces switching functionality, including broadcast, unknown unicast, and multicast (BUM) traffic, in a virtual environment completely decoupled from the underlying hardware.
Logical switches are similar to VLANs, in that they provide network connections to which you can attach virtual machines. For more information, please see the documentation.

I am going to start off by creating a Logical Switch to serve as my uplink from the external network to my Tier-0 router, which I’ll create afterward. To create a logical switch, select Networking > Switching > +ADD. Enter a Name, then from the Transport Zone drop-down menu select the VLAN uplink transport zone that was created in the previous post. Since I’ll be tagging VLANs at the port group level, enter a 0 (zero) for the VLAN ID and click ADD.
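
If you prefer to script this instead of clicking through the UI, the same switch can be created with a single call to the NSX-T 2.x management API. This is only a rough sketch: the manager FQDN, credentials, switch name, and transport zone UUID are placeholders, and the endpoint and field names are based on my reading of the API guide, so verify them against your version before using it.

# Hypothetical example: create the VLAN-backed uplink logical switch via the REST API.
curl -k -u admin:'<nsx-admin-password>' \
  -H 'Content-Type: application/json' \
  -X POST https://nsxmgr.lab.local/api/v1/logical-switches \
  -d '{
        "display_name": "LS-Uplink",
        "transport_zone_id": "<vlan-uplink-tz-uuid>",
        "admin_state": "UP",
        "vlan": 0
      }'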

And that’s all there is to it! After the logical switch is created, we need to create a port on it to connect it to a logical router, but first we need a Tier-0 Logical Router.

Tier-0 Logical Router

An NSX-T Data Center logical router reproduces routing functionality in a virtual environment completely decoupled from the underlying hardware. The tier-0 logical router provides an on and off gateway service between the logical and physical network. Tier-0 logical routers have downlink ports to connect to NSX-T Data Center tier-1 logical routers and uplink ports to connect to external networks. For more information, please see the documentation.

To create a Tier-0 Logical Router, select Networking > Routing > +ADD and select Tier-0 Router from the drop-down menu. Provide a Name for the router, select the edge cluster that was created in the previous post from the Edge Cluster drop-down menu, and click ADD. Changing the High Availability setting is optional, and I’m choosing to leave the default Active-Active setting.
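
For reference, here is roughly the same Tier-0 creation via the API. Again, treat this as a sketch: the manager FQDN, credentials, display name, and edge cluster UUID are placeholders, and the body fields should be checked against the API guide for your release.

# Hypothetical example: create the Tier-0 logical router via the REST API.
curl -k -u admin:'<nsx-admin-password>' \
  -H 'Content-Type: application/json' \
  -X POST https://nsxmgr.lab.local/api/v1/logical-routers \
  -d '{
        "display_name": "T0-Router",
        "router_type": "TIER0",
        "high_availability_mode": "ACTIVE_ACTIVE",
        "edge_cluster_id": "<edge-cluster-uuid>"
      }'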

With the Tier-0 logical router created, click on the router and from the Configuration drop-down menu, select Router Ports then click +ADD under Logical Router Ports.

Enter a Name, leave the Type as “Uplink”, and optionally change the MTU value to match your Jumbo Frames configuration; otherwise, leave the default value of 1500 (I am using 9000 for Jumbo Frames in my environment). From the Transport Node drop-down menu, select the edge transport node created in the previous post. From the Logical Switch drop-down menu, select the logical switch that was created in the previous step, then provide a name for the Logical Switch Port, assign an address on the “Uplink” VLAN 160 to the router port, and click ADD.
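
Under the hood, the UI is creating a port on the logical switch and then an uplink router port on the Tier-0 that links to it. If you wanted to do the same through the API, it would look something like the sketch below; all UUIDs are placeholders and the resource/field names are my recollection of the NSX-T 2.x schema, so double-check them before relying on this.

# Hypothetical sketch: create a logical switch port, then a Tier-0 uplink router port linked to it.
curl -k -u admin:'<nsx-admin-password>' -H 'Content-Type: application/json' \
  -X POST https://nsxmgr.lab.local/api/v1/logical-ports \
  -d '{"display_name": "LS-Uplink-Port", "logical_switch_id": "<uplink-ls-uuid>", "admin_state": "UP"}'

curl -k -u admin:'<nsx-admin-password>' -H 'Content-Type: application/json' \
  -X POST https://nsxmgr.lab.local/api/v1/logical-router-ports \
  -d '{
        "display_name": "T0-Uplink-Port",
        "resource_type": "LogicalRouterUpLinkPort",
        "logical_router_id": "<t0-router-uuid>",
        "linked_logical_switch_port_id": {"target_type": "LogicalPort", "target_id": "<ls-port-uuid>"},
        "subnets": [{"ip_addresses": ["10.254.160.2"], "prefix_length": 24}],
        "edge_cluster_member_index": [0]
      }'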

Now, with the Tier-0 Logical Router created and attached to an uplink Logical Switch, I have the option of either setting up a static route for traffic to and from the external network or configuring Border Gateway Protocol, known simply as BGP. Until one of these is configured, I won’t be able to ping my Tier-0 router. I am going to opt to configure BGP so that any network I add later on down the road will get advertised properly to the neighbor router (Sophos XG) on my external network instead of relying on a wide-open static route. I’ll come back to the BGP configuration a little later on, but first, I’d like to set up a Tier-1 router to connect to my Tier-0. Any overlay-backed logical switches I create from this point on will be attached to the Tier-1 logical router.

Tier-1 Logical Router

Similar to Tier-0 Logical Routers, Tier-1 logical routers have downlink ports to connect to NSX-T Data Center logical switches and uplink ports to connect to NSX-T Data Center tier-0 logical routers. The tier-1 logical router must be connected to the tier-0 logical router to get the northbound physical router access. For more information, please see the documentation.

As was done when creating the Tier-0 logical router, repeat the same process by selecting Networking > Routing > +ADD, but select Tier-1 Router from the drop-down menu instead. Provide a Name, then from the Tier-0 Router drop-down menu, select the Tier-0 router that was created in the previous step to attach the Tier-1 to it. Next, from the Edge Cluster drop-down menu, select the edge cluster that was created in the previous post, leave the default Failover Mode, then from the Edge Cluster Members drop-down menu, select the edge transport node that was created in the previous post and click ADD.

BGP Configuration

To take full advantage of the tier-0 logical router, the topology must be configured with redundancy and symmetry with BGP between the tier-0 routers and the external top-of-rack peers. To enable access between your VMs and the outside world, you can configure an external BGP (eBGP) connection between a tier-0 logical router and a router in your physical infrastructure. For more information, please see the documentation.

To configure BGP on a Tier-0 logical router, select Networking > Routing and select the Tier-0 router. From the Routing drop-down menu, select BGP and click +ADD under the Neighbors section. Enter the neighbor router address; in this case, since I am using a VLAN for my Uplink network, I will specify the gateway address of VLAN 160 configured on my Sophos XG firewall/router. Next, select the Max Hop count needed to reach the neighbor router. In my case, my Tier-0 router is configured with the IP address 10.254.160.2 and is one hop away from the gateway at 10.254.160.1, so I’ll leave the count set to 1. Finally, provide the Remote AS number that will be configured on the neighbor router (Sophos XG) and click ADD.

Next, click EDIT next to BGP Configuration. Toggle the Status switch to Enabled and enter a Local AS number for the Tier-0 router. Optionally, toggle the Graceful Restart switch to Enabled only if the Edge Cluster has one member, which is the case in this nested lab environment, then click SAVE.
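
For anyone curious what this looks like through the API, the BGP configuration lives under the Tier-0 router's routing section. The sketch below is heavily hedged: the AS numbers shown are made-up examples, the BGP object is a read-modify-write resource (fetch it first and reuse its "_revision" value), and field names such as "as_num" and "remote_as_num" have changed between releases, so confirm them in the API guide before use.

# Hypothetical sketch: fetch the current BGP config, enable BGP, then add the neighbor.
curl -k -u admin:'<nsx-admin-password>' \
  https://nsxmgr.lab.local/api/v1/logical-routers/<t0-router-uuid>/routing/bgp

curl -k -u admin:'<nsx-admin-password>' -H 'Content-Type: application/json' \
  -X PUT https://nsxmgr.lab.local/api/v1/logical-routers/<t0-router-uuid>/routing/bgp \
  -d '{"enabled": true, "graceful_restart": true, "as_num": "65001", "_revision": 0}'

curl -k -u admin:'<nsx-admin-password>' -H 'Content-Type: application/json' \
  -X POST https://nsxmgr.lab.local/api/v1/logical-routers/<t0-router-uuid>/routing/bgp/neighbors \
  -d '{"neighbor_address": "10.254.160.1", "remote_as_num": "65000", "maximum_hop_limit": 1}'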

Pretty straightforward, right? But we’re not done just yet. In order for routes to be advertised properly to the neighbor router, there are a few more things required, one of which is to enable Route Redistribution. To do so, select the Tier-0 Logical Router, open the Routing drop-down menu, and select Route Redistribution. Click EDIT, toggle the Status switch to Enabled, and click SAVE. Next, click +ADD, enter a name for the route redistribution configuration, then from the “Sources” choices, select NSX Static and click ADD. NSX treats any advertised route as a “dynamic” static route, so this source needs to be enabled for routes to be properly advertised to the neighbor router.

With BGP now configured on the virtual NSX side, I also need to configure BGP on the physical side, meaning on my Sophos XG. Log into the Sophos XG firewall and navigate to Routing > BGP. Here, I will add my VLAN 160 gateway IP address as the Router ID, set the Local AS number for my router, and click Apply.

Next, I’ll click Add under the Neighbors section, add the Tier-0 router IP address along with the AS number I configured it with, and click Save.

Lastly, I’ll head over to Administration > Device Access and check the boxes for LAN & WAN under Dynamic Routing then click Apply. I believe that only WAN is required to mimic a true production environment but since this is a lab, it won’t cause any harm to enable both.

Now, it’s time to test our configurations. I’ll start off by running a ping test from my jump-box or desktop computer on an external network to the Tier-0 logical router.

Success!! Next, I will check that my Sophos XG can see its BGP neighbor by navigating to Routing > Information > BGP > Neighbors or Summary.

Success again!! I am on a roll!! It’s also a good idea to check the same on the NSX Edge appliance. To do so, open an SSH connection to the NSX Edge appliance and run the following commands.

get logical-routers

This will list the available Tier-0 and Tier-1 routers.

Copy the UUID of the Tier-0 router and run the following.

get logical-router <paste-UUID-here> bgp neighbor

Perfect! At this point, I can’t see any advertised routes in BGP because I have yet to create any. In order for the Tier-1 router to advertise any new routes, I need to enable Route Advertisement. To do so, navigate to Networking > Routing and select the Tier-1 Logical Router, then from the Routing drop-down menu, select Route Advertisement. Click EDIT, toggle the Status switch to Enabled and Advertise All NSX Connected Routes to Yes, and click SAVE.
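
The API equivalent of that toggle is a small update to the Tier-1 router's advertisement config. As before, this is only a sketch: fetch the object first so you can reuse its "_revision" value, and verify the field names against the API guide for your version.

# Hypothetical sketch: enable route advertisement of NSX connected routes on the Tier-1.
curl -k -u admin:'<nsx-admin-password>' \
  https://nsxmgr.lab.local/api/v1/logical-routers/<t1-router-uuid>/routing/advertisement

curl -k -u admin:'<nsx-admin-password>' -H 'Content-Type: application/json' \
  -X PUT https://nsxmgr.lab.local/api/v1/logical-routers/<t1-router-uuid>/routing/advertisement \
  -d '{"enabled": true, "advertise_nsx_connected_routes": true, "_revision": 0}'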

Now, I’ll go ahead and create a new Logical Switch and will attach it to a Router Port on my Tier-1 Logical Router.

As you can see above, I’ve assigned the router port an IP address of 192.168.254.1/24, so I should now be able to see this route being advertised from the NSX Edge and received on the Sophos XG. I’ll first check the NSX Edge.

get logical-router <paste_UUID_here> route static

Excellent! Remember, NSX-T treats the advertised routes as static routes hence the reason that “static” was used in the command syntax on the NSX Edge. Next, I’ll check the routes from the Sophos XG.

BOOM!! Now we’re cooking with gas! BGP is up and running, advertising routes as expected. The only caveat at this point is that I cannot ping this network from my jump-box or local desktop on my external network (not connected to the Sophos XG) unless I create a static route on my physical router. But I should be able to ping from a VM connected to an external network on the Sophos XG (VLAN 140) since the route is advertised and the router knows about it. Let’s test it from the nested lab’s domain controller.

Nice! At this point, why not spin up a VM and attach it to the new Logical Switch and see if we can access the outside world, right? I’ll quickly deploy a Linux VM from a template that I have which is configured to obtain an IP address via DHCP. But wait! I’ll first need a DHCP server within NSX to handle the distribution of dynamic host IPs. While my VM is deploying from a template, let’s cover the process of creating a DHCP server in NSX-T.

DHCP Server

DHCP (Dynamic Host Configuration Protocol) allows clients to automatically obtain network configuration, such as IP address, subnet mask, default gateway, and DNS configuration, from a DHCP server. For more information, please see the documentation.

To create a DHCP Server, I first need a DHCP Profile. I’ll create one by navigating to Networking > DHCP > Server Profiles > +ADD. Provide a Name, from the Edge Cluster drop-down menu select the edge cluster that was previously created, and from the Members drop-down menu select the NSX edge transport node that was previously created and click ADD.

Next, click Networking > DHCP > Servers > +ADD. Provide a Name, an IP address and netmask for the DHCP server, select the DHCP profile that was just created, and click ADD. The Common Options, etc. are not required as we can set these in the next step when creating the IP address pool, but you can set them now if you’d like.

Now, click on the newly created DHCP server and under the IP Pool section click +ADD. Provide a Name, an IP address range, a Gateway IP address, and fill out the Common Options then click ADD.
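
For completeness, the same three objects (profile, server, and pool) can be created via the API with something like the calls below. The server IP of 192.168.254.2/24 and the pool range are example values I made up for this sketch (the gateway 192.168.254.1 matches the Tier-1 downlink from earlier), and the endpoint paths and field nesting should be checked against the API guide.

# Hypothetical sketch: DHCP profile, DHCP server, and IP pool via the REST API.
curl -k -u admin:'<nsx-admin-password>' -H 'Content-Type: application/json' \
  -X POST https://nsxmgr.lab.local/api/v1/dhcp/server-profiles \
  -d '{"display_name": "dhcp-profile", "edge_cluster_id": "<edge-cluster-uuid>"}'

curl -k -u admin:'<nsx-admin-password>' -H 'Content-Type: application/json' \
  -X POST https://nsxmgr.lab.local/api/v1/dhcp/servers \
  -d '{"display_name": "dhcp-server", "dhcp_profile_id": "<dhcp-profile-uuid>",
       "ipv4_dhcp_server": {"dhcp_server_ip": "192.168.254.2/24", "gateway_ip": "192.168.254.1"}}'

curl -k -u admin:'<nsx-admin-password>' -H 'Content-Type: application/json' \
  -X POST https://nsxmgr.lab.local/api/v1/dhcp/servers/<dhcp-server-uuid>/ip-pools \
  -d '{"display_name": "dhcp-pool", "gateway_ip": "192.168.254.1",
       "allocation_ranges": [{"start": "192.168.254.100", "end": "192.168.254.200"}]}'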

Lastly, I need to attach the DHCP server to the Logical Switch so that it can begin handing out addresses to VMs connected to it. Click the DHCP server and from the “Actions” drop-down menu, select Attach to Logical Switch. Select the logical switch from the drop-down menu and click ATTACH. The DHCP server is now ready to hand out IP addresses!

Running Workloads on NSX-T

With the DHCP Server created and a new VM deployed, I’ll attach the VM to the Logical switch and power it on. Select the VM, right-click it and select Edit Settings. Change the network adapter port group to the new Logical Switch that was created and click OK.

Power on the VM and once it’s up, log in and check that it has grabbed an IP address from the DHCP server’s IP pool.
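
On a typical Linux guest, the quick checks from the VM console look something like this (using the 192.168.254.0/24 network and the 192.168.254.1 gateway configured earlier):

# Verify the DHCP lease, the default route, and gateway reachability.
ip addr show                 # should show a 192.168.254.x/24 lease on the vNIC
ip route show                # default route should point at 192.168.254.1
ping -c 3 192.168.254.1      # the Tier-1 downlink/gateway should answer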

Just what I wanted to see! The VM successfully grabbed an IP address from the DHCP server! Woo-Hoo!! I can also ping the VM from the domain controller.

Now, the only thing left is to see if we can ping the outside world, like Google’s DNS server. In order for this to work properly, the router would need to know how to NAT this VM’s IP address to the outside; otherwise, this can be expected to fail, as seen below.

There are a couple of options. The recommended way would be to create a SNAT rule on the Sophos XG firewall/router so it knows how to route traffic out to the WAN. Another way would be to set up a SNAT rule in NSX. This can be a bit tricky in a nested lab setup like this one because it is essentially “double NAT’ed”. I’d prefer to do the rule on the Sophos, but as I’m still learning my way around its interface and settings, it may be easier to simply create a SNAT rule in NSX until I figure out how to do it on the Sophos XG. The one caveat here is that when a SNAT rule is created in NSX, it will break access to the network from an external network, meaning I won’t be able to ping the VM anymore unless I also set up a DNAT rule and ping its NAT address to reach the VM. Let me show you how to create the SNAT rule.

Source NAT (SNAT)

Source NAT (SNAT) changes the source address in the IP header of a packet. It can also change the source port in the TCP/UDP headers. The typical usage is to change a private (RFC 1918) address/port into a public address/port for packets leaving your network. For more information, please see the documentation.

To create a SNAT rule, navigate to Networking > Routing. Click the tier-1 logical router and from the Services drop-down menu, select NAT. Click +ADD. For the Source IP, add the network that you want to Source NAT. For the Translated IP, pick an IP address on the Uplink VLAN, in my case, this is VLAN 160. My Tier-0 router uses 10.254.160.2 so I will set this Translated IP to 10.254.160.3 then click ADD.
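
The API version of this rule is short. The 192.168.254.0/24 source network and the 10.254.160.3 translated address come straight from this post; the router UUID is a placeholder and the field names are my recollection of the NatRule schema, so verify them before use.

# Hypothetical sketch: create the SNAT rule on the Tier-1 router via the REST API.
curl -k -u admin:'<nsx-admin-password>' -H 'Content-Type: application/json' \
  -X POST https://nsxmgr.lab.local/api/v1/logical-routers/<t1-router-uuid>/nat/rules \
  -d '{
        "action": "SNAT",
        "match_source_network": "192.168.254.0/24",
        "translated_network": "10.254.160.3",
        "enabled": true
      }'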

Next, from the Routing drop-down menu, select Route Advertisement. Click EDIT and toggle the Advertise All NAT Routes to Enabled and click SAVE.

Lastly, click the Tier-0 router and from the Routing drop-down menu, select Route Redistribution. Select the route redistribution criteria that was created earlier in this post and click EDIT. Click the checkbox for Tier-1 NAT and click SAVE.

In theory, this all should’ve allowed me to access the outside world, but I wasn’t able to do so, and I’m thinking additional firewall rules are required on the Sophos XG to allow it. Of course, things like this are to be expected when working in nested environments, but I’ll continue to tinker with this until I get it to work in the nested lab and will update the post should I figure it out.

Well, that about does it for this one! In the next post, I’ll cover the process of upgrading NSX-T. It will be a while before I get to that since the version I just deployed is the latest release. Thanks as always for your support!

NSX-T Home Lab – Part 4: Configuring NSX-T Fabric

Intro

Welcome to Part 4 of my NSX-T Home Lab series. In my previous post, I covered the process of deploying the NSX-T appliances and joining them to the management plane to have the foundational components ready for us to continue the configuration. In this post, I will cover all the configurations required to get NSX-T Fabric ready for network configurations in order to run workloads on it. So sit back, buckle up, and get ready for a lengthy read!

Compute Manager

A compute manager, for example, a vCenter Server, is an application that manages resources such as hosts and VMs. NSX-T polls compute managers to find out about changes such as the addition or removal of hosts or VMs and updates its inventory accordingly. I briefly touched on this in my previous post, stating that one is required if deploying an NSX Edge appliance directly from the NSX Manager. Otherwise, a compute manager is completely optional, but I find value in it for retrieving my lab’s inventory, and registering one is the first thing I like to do in my deployments. For more information, please see the documentation.

To configure a Compute Manager, log into your NSX Manager UI and navigate to Fabric > Compute Managers > +ADD. Enter the required information, leaving the SHA-256 Thumbprint field empty, and click ADD. You will receive an error because a thumbprint was not provided, along with a prompt asking if you want to use the server-provided thumbprint, so click ADD. Allow a brief 30-60 seconds for the Compute Manager to connect to the vCenter server, clicking refresh if needed, until you see that it’s “Registered” and “Up“.
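
If you'd rather register the compute manager via the API, it looks roughly like the call below. The vCenter FQDN, credentials, and thumbprint are placeholders, and the body layout is from my reading of the NSX-T 2.x API guide, so verify it for your version.

# Hypothetical example: register vCenter as a compute manager via the REST API.
curl -k -u admin:'<nsx-admin-password>' -H 'Content-Type: application/json' \
  -X POST https://nsxmgr.lab.local/api/v1/fabric/compute-managers \
  -d '{
        "display_name": "vcsa",
        "server": "vcsa.lab.local",
        "origin_type": "vCenter",
        "credential": {
          "credential_type": "UsernamePasswordLoginCredential",
          "username": "administrator@vsphere.local",
          "password": "<vcenter-password>",
          "thumbprint": "<vcenter-sha256-thumbprint>"
        }
      }'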

Tunnel EndPoint (TEP) IP Pool

As stated in the official documentation, Tunnel endpoints are the source and destination IP address used in the external IP header to uniquely identify the hypervisor hosts originating and terminating the NSX-T encapsulations of overlay frames. DHCP or IP pools can be configured for TEP IP addresses, so I’ll create a dedicated pool to be used instead of using DHCP and they’ll reside in the Overlay network VLAN 150. For more information, please see the documentation.

To add a Tunnel EndPoint IP Pool, navigate to Inventory > Groups > IP Pools > +ADD. Provide a Name then click +ADD underneath the Subnets section and provide the required information for the IP pool then click ADD.
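
Here is a rough API equivalent. The allocation range is an example I picked for this sketch; the subnet and gateway follow the Overlay VLAN 150 addressing from Part 1, and the endpoint/field names should be verified against the API guide.

# Hypothetical example: create the TEP IP pool via the REST API.
curl -k -u admin:'<nsx-admin-password>' -H 'Content-Type: application/json' \
  -X POST https://nsxmgr.lab.local/api/v1/pools/ip-pools \
  -d '{
        "display_name": "TEP-Pool",
        "subnets": [{
          "cidr": "10.254.150.0/24",
          "gateway_ip": "10.254.150.1",
          "allocation_ranges": [{"start": "10.254.150.50", "end": "10.254.150.100"}]
        }]
      }'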

Uplink Profiles

An uplink profile defines policies for the links from the hypervisor hosts to NSX-T logical switches or from NSX Edge nodes to top-of-rack switches. The settings defined by these profiles might include teaming policies, active/standby links, transport VLAN ID, and MTU setting. Uplink profiles allow you to consistently configure identical capabilities for network adapters across multiple hosts or nodes. By default, there are two uplink profiles already provided with NSX-T, but they cannot be edited, so I am going to create new ones for the Edge uplink as well as for my hosts’ uplinks. For more information, please see the documentation.

Edge Uplink Profile

To create an Edge uplink profile, navigate to Fabric > Profiles > Uplink Profiles > +ADD. Provide a Name, optional description then, under Teamings, set the Teaming Policy to Failover Order, set the Active Uplinks to uplink-1. Set the Transport VLAN to 0 as we are tagging at the port group level for our Edge, and either leave the MTU at the default 1600 or set it to a higher value supported for your Jumbo Frames configuration. In my setup, I will set the MTU to 9000, then click ADD.

Host Uplink Profile

Next, I’ll repeat the process to create an uplink profile for my ESXi hosts. This time, I’ll keep the same settings for Teaming but will set the Standby Uplinks as uplink-2, the Transport VLAN will be my Overlay VLAN ID 150 since these uplinks are connected directly to the hosts and need to be tagged accordingly, and again I’ll set the MTU to 9000 and click ADD.
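
As a reference for automation, the host uplink profile described above maps to an API call along these lines (failover order teaming, uplink-1 active, uplink-2 standby, transport VLAN 150, MTU 9000). The resource and field names are my recollection of the UplinkHostSwitchProfile schema, so check them against the API guide before use.

# Hypothetical sketch: create the host uplink profile via the REST API.
curl -k -u admin:'<nsx-admin-password>' -H 'Content-Type: application/json' \
  -X POST https://nsxmgr.lab.local/api/v1/host-switch-profiles \
  -d '{
        "resource_type": "UplinkHostSwitchProfile",
        "display_name": "host-uplink-profile",
        "teaming": {
          "policy": "FAILOVER_ORDER",
          "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
          "standby_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}]
        },
        "transport_vlan": 150,
        "mtu": 9000
      }'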

Transport Zones

Transport Zones dictate which hosts, and therefore which VMs, can participate in the use of a particular network. There are two types of transport zones, an overlay and a VLAN. The overlay transport zone is used by both host transport nodes and NSX Edges and is responsible for communication over the overlay network. The VLAN transport zone is used by the NSX Edge for its VLAN uplinks. Both types create an N-VDS on the host or Edge to allow for virtual-to-physical packet flow by binding logical router uplinks and downlinks to physical NICs. For more information, please see the documentation.

Overlay Transport Zone

To create an overlay transport zone, navigate to Fabric > Transport Zones > +ADD. Provide a Name, an N-VDS Name, select Standard or Enhanced Data Path for the N-VDS Mode, set the Traffic Type as Overlay and click ADD.

Enhanced Data Path is a networking stack mode which, when configured, provides superior network performance and is primarily targeted at NFV workloads. It requires you to install an additional VIB, specific to your physical NICs, onto your hosts and reboot them before it can be configured, but this is out of scope for this demo. For more information, please see the documentation.

I’ll be choosing Standard for this demonstration as it’s a nested lab and the physical NICs are presented as traditional VMware VMXNET3 adapters.

VLAN Transport Zone

To create a VLAN uplink transport zone, I’ll repeat the same process as above by providing a Name, an N-VDS Name, but will change the Traffic Type to VLAN before clicking ADD.
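
Both transport zones can also be created via the API with calls like the following. The zone and N-VDS names other than N-VDS-Overlay (which appears later in this post) are placeholders, and the endpoint is from my reading of the NSX-T 2.x API guide, so verify it for your release.

# Hypothetical example: create the overlay and VLAN transport zones via the REST API.
curl -k -u admin:'<nsx-admin-password>' -H 'Content-Type: application/json' \
  -X POST https://nsxmgr.lab.local/api/v1/transport-zones \
  -d '{"display_name": "TZ-Overlay", "host_switch_name": "N-VDS-Overlay", "transport_type": "OVERLAY"}'

curl -k -u admin:'<nsx-admin-password>' -H 'Content-Type: application/json' \
  -X POST https://nsxmgr.lab.local/api/v1/transport-zones \
  -d '{"display_name": "TZ-VLAN", "host_switch_name": "N-VDS-VLAN", "transport_type": "VLAN"}'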

Transport Node

There are two types of transport nodes that need to be configured in NSX-T: a host transport node and an edge transport node. A host transport node is a node that participates in NSX-T overlay or NSX-T VLAN networking, whereas an edge transport node is a node that is capable of participating in NSX-T overlay or NSX-T VLAN networking. In short, they simply facilitate communication in an NSX-T Overlay and/or NSX-T VLAN network. For more information, please see the documentation here and here.

I’ll be automatically creating the host transport nodes when I add them to the fabric, but will be manually creating a single edge transport node with two N-VDS instances for my overlay and VLAN uplink transport zones respectively.

Host Transport Node

This part is pretty simple and is where having the Compute Manager that I configured earlier comes in handy. But this process can also be done manually. You’ll need to ensure you have at least 1 Physical NIC (pNIC) available for NSX-T, and you can see here that my hosts were configured with 4 pNICs and I have 2 available, vmnic2 and vmnic3.

Navigate to Fabric > Nodes > Hosts and from the “Managed By” drop-down menu, select the compute manager. Select a cluster and then click Configure Cluster. Toggle both switches to Enabled to Automatically Install NSX and Automatically Create Transport Node. From the Transport Zone drop-down menu, select the Overlay transport zone created earlier; from the Uplink Profile drop-down menu, select the host uplink profile created earlier; and from the IP Pool drop-down menu, select the TEP IP Pool created earlier. Lastly, enter the pNIC names and choose their respective uplinks. If you have 2 pNICs, click ADD PNIC and then modify the information before clicking ADD to complete the process.

This will start the host preparation process so allow a few minutes for the NSX VIBs to be installed on the hosts and for the transport nodes to be configured, clicking the refresh button as needed until you see “NSX Installed” and the statuses are “Up”.
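
If you want to verify the host prep from the ESXi side, a few quick commands over SSH do the trick. The vmkernel interface name (commonly vmk10 for the TEP) and the remote TEP address below are assumptions for this lab, and the "vxlan" netstack name is what NSX-T uses for its TEP interfaces as I recall, so adjust as needed.

# Optional verification from an ESXi host over SSH.
esxcli software vib list | grep -i nsx                        # NSX VIBs present after host prep
esxcli network ip interface ipv4 get                          # look for the TEP vmk and its pool address
vmkping ++netstack=vxlan -I vmk10 -s 1572 -d 10.254.150.51    # test TEP-to-TEP reachability with a large, non-fragmented packet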

Click on the Transport Nodes tab, and we can see that the nodes have been automatically created and are ready to go. And if we look back at the physical NICs in vCenter, we can see that they have now been added to the N-VDS-Overlay.

Edge Transport Node

Begin by navigating to Fabric > Nodes > Transport Nodes > +ADD. Provide a Name, from the Node dropdown menu select the Edge that we deployed and joined to the management plane in the previous post, in the “Available” column select the two transport zones we previously created and click the > arrow to move them over to the “Selected” column, then click the N-VDS heading tab at the top of the window.

From the Edge Switch Name dropdown menu, select the N-VDS we created earlier for the Overlay. From the Uplink Profile dropdown menu, select the edge uplink profile we created earlier. From the IP Assignment dropdown menu, select Use IP Pool, from the IP Pool dropdown menu, select the TEP IP Pool created earlier, and finally, from the Virtual NICs dropdown menus, select fp-eth0 which is the 2nd NIC on the Edge VM and then select uplink-1.

But we’re not done yet! Next, click +ADD N-VDS so that we can add the additional one needed for the remaining transport zone. From the Edge Switch Name dropdown menu, select the N-VDS we created earlier for the VLAN Uplink. From the Uplink Profile dropdown menu, select the edge uplink profile again. And from the Virtual NICs dropdown menus, select fp-eth1 which is the 3rd NIC on the Edge VM and then select uplink-1. Now we can finally click ADD.

Allow a few minutes for the configuration state and status to report “Success” and “Up” and click the refresh button as needed.

Edge Cluster

Having a multi-node cluster of NSX Edges helps ensure that at least one NSX Edge is always available. In order to create a tier-0 logical router or a tier-1 router with stateful services such as NAT, load balancer, and so on, you must associate it with an NSX Edge cluster. Therefore, even if you have only one NSX Edge, it must still belong to an NSX Edge cluster to be useful. For more information, please see the documentation.

To create an edge cluster, navigate to Fabric > Nodes > Edge Clusters > +ADD. Enter a Name, and in the “Available” column, select the edge transport node created earlier, click the > arrow to move it to the “Selected” column, and click ADD. Well, that was easy!

Wow! That seemed like a lot of work but it was exciting to get the components configured and ready for setting up the Logical Routers and switches, which I’ll cover in the next post, so we can start running VMs in NSX-T. I hope you’ve found this post useful and I thank you for reading.

NSX-T Home Lab – Part 3: Deploying NSX-T Appliances

Intro


Welcome to Part 3 of my NSX-T Home Lab Series. In my previous post, I went over the process of setting up the Sophos XG firewall/router VM for my nested lab environment. In this post, we’ll cover the process of deploying the required NSX-T appliances. There are 3 main appliances that need to be deployed: the NSX-T Manager, followed by one or more Controllers, and lastly, one or more Edge appliances. For the purposes of this nested lab demo, I will only be deploying a single instance of each appliance, but please follow recommended best practices if you are leveraging this series for a production deployment. With all that said, let’s get to it!

NSX-T Manager Appliance

Prior to deploying the appliance VMs, it’s recommended to create DNS entries for each component. I’ve already done this on my Domain Controller. Additionally, if you need to obtain the OVAs, please download them from here. At the time of this writing, NSX-T 2.3.1 is the latest version.

We’ll begin by deploying the NSX Manager appliance to our Management Cluster using the vSphere (HTML5) Web Client and deploying the NSX Unified Appliance OVA. You can also deploy via the command line and/or PowerCLI, but for the purposes of this demo, I am going to leverage the GUI. Please use the following installation instructions to deploy the NSX Manager Appliance. I’ve used the following configuration options for my deployment:

  • “Small” configuration – consumes 2 vCPU and 8GB RAM.
    • The default “Medium” configuration consumes 4 vCPU and 16GB RAM
  • Thin Provision for storage
  • Management Network port group on VLAN 110
  • Role – nsx-manager
  • Enable SSH
  • Allow root SSH Logins

Once the appliance has been deployed, edit its settings and remove the CPU and Memory reservations by setting the values to 0 (zero). Normally, these would be left in place to guarantee those resources for the appliance, but since this is a resource-constrained nested lab, I’m choosing to remove them.

At this point, power on the appliance and wait a few minutes for the Web UI to be ready, then open a browser to the IP or FQDN of the NSX Manager and log in using the credentials provided during the deployment (admin/the_configured_passwd). Accept the EULA and choose whether or not to participate in the CEIP.

NSX-T Controller Appliance

Next up is the NSX Controller. There are several ways that this appliance can be deployed, such as via the command line, PowerCLI, the vSphere Web Client, or directly from the NSX Manager. For this demo, I am opting to continue my deployment via the vSphere Web Client for the sake of simplicity and familiarity. As was done with the NSX Manager, deploy the NSX Controller OVA by following these instructions. Again, since this is a resource-constrained nested lab, I am going to choose the following configuration options and remove the CPU and Memory reservations after the deployment has completed.

  • “Small” configuration – consumes 2 vCPU and 8GB RAM.
    • The default “Medium” configuration consumes 4 vCPU and 16GB RAM
  • Thin Provision for storage
  • Management Network port group on VLAN 110
  • Enable SSH
  • Allow root SSH Logins
  • Ignore the “Internal Properties” section as it’s optional

With this part of the deployment complete, go ahead and power on the appliance. The next step is to join the controller to the management plane (NSX Manager) since we deployed the controller manually. Follow these instructions to perform this step, but I’ll also post screenshots below. These steps can either be done from the appliance console or via SSH. I’ll opt for the latter since we chose the option to allow SSH during the previous deployments. Be mindful of the appliances I am running commands on in the screenshots below as some are done on the manager while others are done on the controller.
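
For reference, the join itself boils down to a handful of NSX CLI commands, shown below as I recall them from the install guide (the IP, thumbprint, and credentials are placeholders, and you'll be prompted for the admin password during the join).

# On the NSX Manager - grab the API certificate thumbprint:
get certificate api thumbprint

# On the NSX Controller - join it to the management plane, then verify:
join management-plane <nsx-manager-ip> username admin thumbprint <manager-thumbprint>
get managers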

In the last image above, we can see that the Control cluster status reports “UNSTABLE“. This is to be expected in this deployment scenario as we only deployed a single controller instance. Nothing for us to worry about here.

Now that we’ve joined the Controller to the Manager (management plane), there is one last thing to do which is to initialize the Control Cluster to create a Control Cluster Master. To do so, follow these instructions.
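
In short, the initialization is another few NSX CLI commands on the controller; the shared-secret value below is a placeholder, and the exact syntax should be checked against the documentation for your release.

# On the NSX Controller - set a shared secret, promote it to cluster master, and verify:
set control-cluster security-model shared-secret secret <shared-secret>
initialize control-cluster
get control-cluster status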

Now, if we log in to the NSX Manager again and click the Dashboard view, we can see that we now have both a Manager and a Controller Node configured.

NSX-T Edge Appliance

Are you still with me? Good! The final component to deploy is the NSX Edge. As we’ve done with the other appliances, I will continue deploying the Edge via the vSphere GUI instead of leveraging the NSX Manager, and I’ll tell you why. Deploying via the NSX Manager is the easiest method, but there is a caveat: deploying from the NSX Manager will only allow you to select a “Medium” or “Large” configuration and will automatically power on the appliance post-deployment, thus keeping the set CPU and Memory reservations. I’ve heard there is a way to trick the UI into deploying a “Small” configuration, but I’ve yet to confirm this nor have I seen it done. Additionally, there is a prerequisite for deploying via the NSX Manager, which is to configure a Compute Manager, something we’ve yet to cover and will be covered in the next post of this series. Follow these instructions to deploy the appliance. By deploying via the vSphere Web Client, I have the ability to select “Small” for the deployment and then remove the reservations before powering on the appliance, but feel free to use your method of choice.

As we’ve done with the other deployments, use the following options.

  • “Small” configuration – consumes 2 vCPU and 4GB RAM.
    • The default “Medium” configuration consumes 4 vCPU and 8GB RAM
  • Thin Provision for storage
  • Management Network port group on VLAN 110
  • Enable SSH
  • Allow root SSH Logins
  • Ignore the “Internal Properties” section as it’s optional and any other non-relevant fields

When we get to the networking section, select the Management port group for “Network 0”, select the Overlay port group for “Network 1”, and select the Uplink portgroup for “Network 2” and “Network 3”.

Power on the appliance, and when the console is ready, either log in to the console or connect via SSH, as we’ll need to join this appliance to the management plane as we did previously with the controller. Follow these instructions to complete this process. Again, be mindful of which appliances the commands are executed against.
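
The Edge join uses the same NSX CLI command as the controller did; the sketch below uses placeholder values and the verification commands are as I recall them, so confirm them against the documentation.

# On the NSX Edge - join the management plane using the manager's thumbprint, then verify:
join management-plane <nsx-manager-ip> username admin thumbprint <manager-thumbprint>
get managers

# On the NSX Manager - confirm the node shows up:
get management-cluster status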

Again, if we log into the NSX Manager and click the Dashboard view, we’ll now see that we also have an Edge Node configured. Woo-hoo!!

Well, this completes this post and I hope you had fun following along. In the next post, I’ll cover adding hosts to our NSX-T fabric along with creating all the required transport nodes, zones, edge clusters, profiles, and routing stuff so that we can get everything ready to have workloads run in NSX-T!

NSX-T Home Lab – Part 2: Configuring ESXi VMs

Intro

Welcome to Part 2 of my NSX-T Home Lab Series.  In my previous post, I went over the installation and configuration of a Sophos XG firewall for my nested NSX-T Home Lab.  In this post, I will cover the setup and configuration of the ESXi 6.7 VMs.

I recently wrote a post on how to Create an ESXi 6.7 VM Template, which is what I used to deploy my VMs from.  After cloning to new VMs, I changed the disk sizes for my cache and capacity disks, increased the CPUs and RAM, and added 2 additional network adapters to give me a total of 4 adapters.  The reason I did this is so that I can keep my management and other vmkernel ports on their VDS and have two new ones to use for NSX-T.  I may do a follow-up post using only two adapters where I’ll migrate my vmkernel networks over to NSX-T since, in the real world, I’m sure there are many customers using dual 10Gb cards in their servers.

Now, I will not be covering how to actually install ESXi as you can follow the documentation for that, or you can reference my post mentioned above.  There really isn’t much to that installation…it’s pretty trivial.  Instead, I am just going to quickly state the specs used for my ESXi VMs from a resource perspective, and give some additional pointers.

Single-Node Management Cluster VM

  • CPUs: 8
  • RAM: 32GB
  • Disk1: Increased to 500GB (This will serve as a local VMFS6 datastore)
  • Disk2: Removed (As I will not be running VSAN)
  • Network Adapters: 2 (connected to the Nested VDS port group we created earlier)

On this host, I deployed a Windows Server 2019 Core OS to serve as my domain controller for the nested lab.  I also deployed a VCSA to manage the environment.

2-Node VSAN Compute Cluster (with a Witness Appliance)

  • CPUs: 8 on each host
  • RAM: 16GB on each host
  • Network Adapters: 4 on each host (connected to the Nested VDS port group we created earlier)

I used the new Quick Start feature to create and configure my VSAN cluster along with all the networking required, and this has now become one of my favorite new features in vSphere 6.7.  There were some nuances I had to fix which were super simple.  During the creation of the VDS and process of migrating vmkernel ports to the VDS, my nested ESXi VMs would lose connectivity.  Simply restarting the management network from the console proved to fix the issue and I was able to proceed.

I then used VUM to update each host to the latest version (Build 11675023) that was released on 1/17/19.  Once everything was configured, I had a nice little, nested playground ready for NSX-T!

In the next post, I will go over the deployment of the NSX-T appliances in the nested lab.  Be sure to come back!

-virtualex-

NSX-T Home Lab – Part 1: Configuring Sophos XG Firewall

Intro

Welcome to Part 1 of my NSX-T Home Lab Series.  In my previous post, I went over the gist of what I plan to do for my nested NSX-T Home Lab.  In this post, I will cover the setup and configuration of a Sophos XG firewall Home Edition, which will serve as the router for my nested lab environment.  My physical Home Lab is configured with Virtual Distributed Switches, or VDS (sometimes seen as DVS) for short, and since this is a nested lab environment that will not have any physical uplinks connected, I will need to create a new VDS without physical uplinks along with a portgroup for the nested environment, and then configure access to the environment from my LAN.  All traffic will flow through the virtual router/firewall to communicate to and from the nested lab.

Prerequisites:

  • VDS and portgroup without physical uplinks
    • Set the VLAN type for this portgroup to VLAN Trunking with the range of 0-4094 to allow all VLANs to trunk through
  • Static route to access the nested lab from my LAN
    • Once you determine the subnets you’d like to use for the nested lab, add a static route summary on your physical router

I have a bunch of VLANs created for my physical Home Lab as I’ve yet to deploy NSX-T in there, but once I do, I’ll be removing the majority of said VLANs and only keeping the required ones needed to run the lab.  With that said, one of the VLANs I have is for “Development” work such as this, so I’ll be connecting one uplink from the router to this VLAN, which will serve as the WAN interface, while the other uplink will be connected to the new nested portgroup to serve as the LAN for the nested lab.  I’ll describe the basics for deploying the Sophos XG firewall, but will not go into full detail as this is pretty trivial and can be done using the following guide as a reference.

  • OS: Other Linux 3.x or higher
  • CPU: 1 (add more as needed – max supported is 4 in the home edition)
  • RAM: 2GB (add more as needed – max supported is 6GB in the home edition)
  • Disk: 40GB thin (you may make this smaller if you’d like)
  • Network Adapter 1: LAN portgroup (nested)
  • Network Adapter 2: WAN portgroup
  • Boot: BIOS (will not boot if you keep as EFI)

Once the VM has been deployed, the Sophos XG will be configured with a 172.16.1.1 address by default.  This will need to be changed to the subnet you’re using for your nested LAN interface.  Log in to the console with the default (admin – admin) credentials, and choose the option for Network Configuration to change the IP for your nested LAN port.

Once this is done, you would normally navigate to that address on port 4444 to access the admin GUI.  Unfortunately, this will not work since the LAN side has no physical uplinks.  So what do we do?  We need to run a command to enable admin access on the WAN port.  To do so, choose option 4 to enter the device console and enter the following command:

system appliance_access enable

The WAN port is set to grab an address from DHCP so you’ll need to determine which IP address this is either by going into your physical router, or using a tool like Angry IP.  Once in the Admin GUI, navigate to Administration > Device Access and tick the box for WAN under the HTTPS column.  See this post for reference.

Now, we can create our VLANs for our nested environment.  I’m using the following for my lab:

VLAN    Subnet             Purpose
110     10.254.110.1/24    Management
120     10.254.120.1/24    vMotion
130     10.254.130.1/24    VSAN
140     10.254.140.1/24    VM Network
150     10.254.150.1/24    Overlay
160     10.254.160.1/24    Uplink

Navigate to Networking and select Add Interface > VLAN to create each of your networks.

With our VLANs created, we’ll need to create two firewall rules: one to allow traffic from the WAN port to access the LAN, and one to allow traffic from LAN to LAN. Navigate to Firewall > Add firewall rule and create the following rules, labeling them in a way that makes sense to you:

This is where the static route will now be useful to access your nested lab.  I’ve configured a route summary of 10.254.0.0/16 to go through the IP address of the WAN interface as the gateway so that I can access the Admin UI at https://10.254.1.1:4444 as well.  I’ll now also be able to access the ESXi UI and VCSA UI, once they are stood up.
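
Just to make the static-route step concrete: if your upstream router happens to be Linux-based (or you simply want to test from an iproute2-capable jump host), the equivalent summary route would look like the line below, where the next-hop is the DHCP address your Sophos XG WAN interface picked up (shown as a placeholder here).

# Example only: summary route for the nested lab pointing at the Sophos XG WAN address.
ip route add 10.254.0.0/16 via <sophos-wan-ip>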

The final thing I will be doing is enabling the native MAC Learning functionality that is now built into vSphere 6.7 so that I do not need to enable Promiscuous Mode, which has normally been a requirement for the Nested portgroup and nested labs in general.  To learn more about how to do this, see this thread.  In my setup, I ran the following to enable this on my nested VDS portgroup:

Set-MacLearn -DVPortgroupName @("VDS1-254-NESTED") -EnableMacLearn $true -EnablePromiscuous $false -EnableForgedTransmit $true -EnableMacChange $false

To check that it was indeed set correctly, I ran the following:

Get-MacLearn -DVPortgroupName @("VDS1-254-NESTED")

And there you have it!  In the next post, I will go over configuring our ESXi VMs for our nested lab!

NSX-T Home Lab Series

Intro

I recently upgraded my Home Lab “Datacenter” to support all-flash VSAN and 10Gb networking with the plan to deploy NSX-T so that I can familiarize myself with the solution and use it to better prepare for the VMware VCP-NV certification exam.  Since this is all brand new to me, I’ve decided that I’ll first deploy it in a nested lab environment in order to learn the deployment process as well as to minimize the risk of accidentally messing up my Home Lab environment.

Now, I know there are a few blogs out in the wild already that go over the installation and setup of NSX-T, but I wanted to write my own as it will better help me retain the information that I am learning.  Additionally, others may have a different setup than I have and/or may have deployed the solution differently than the way I intend to, which is by following the published documentation.  I’d like to take this time to first shout out some of my colleagues, William Lam, Keith Lee, Cormac Hogan, and Sam McGeown, as their own blogs are what inspired me to deploy the solution for myself and document the process.

This post will serve as the main page where I’ll post the hyperlinks to each post in the series.  I’ll be deploying a virtual router/firewall, 3x ESXi VMs, and a witness appliance so that I can configure a virtual 2-node VSAN compute cluster.  I’ll be managing the environment via a vCenter Server Appliance or VCSA, and a Windows Server 2019 Core OS Domain Controller or DC.  I won’t cover the installation and configuration of the DC as it’s out of scope for this series, nor will I go over the deployment of the VCSA or VSAN configuration as this can be done by following the documentation.  And, since this is just a small nested lab, the remaining host that isn’t a part of the VSAN cluster will serve as a single-node Management cluster host where the DC, VCSA, and NSX-T Appliances will reside.

I will cover the router setup, ESXi VM configuration, and NSX-T deployment.  For my setup, I am going to leverage a Sophos XG firewall Home Edition since I’ve always had an interest in learning more about these firewalls, but also because I typically see pfSense being used for virtual routers and I wanted to try something different.  If you are using this as a guide for your own deployment, feel free to use your router/firewall of choice as there are plenty out there like FreeSCO, Quagga, or VyOS, just to name a few.  So, with that said, I hope you all enjoy the content in this series!

NSX-T Home Lab Series

Create an ESXi 6.7 VM Template

Disclaimer:  The following is not supported by VMware.

Nested virtualization is nothing new, and many of us use it for test or demonstration purposes since nested environments can quickly be stood up or torn down.  William Lam has an ESXi VM which can be downloaded from here, but I wanted to go ahead and create my own for use within my nested lab environments.

In this post, I am going to show you the steps I ran through to create an ESXi 6.7 VM that I can convert to a template for later use.  Props to William for his excellent content on nested virtualization, which I’ve used a ton and will be leveraging here as well.  So without further ado, let’s get to it!

For my ESXi VM, I will be configuring the following:

  • CPU: 2 (Expose hardware assisted virtualization to the guest OS – checked on)
  • RAM: 8GB
  • Disk0: 16GB (bound to the default SCSI controller; thin provisioned)
  • New virtual NVME Controller
  • Disk1: 10GB (for VSAN cache tier bound to NVME Controller; thin provisioned)
  • Disk2: 100GB (for VSAN capacity tier bound to NVME Controller; thin provisioned)
  • 2x Network Adapters (VMXNET3)
  • Some advanced configuration settings

Build the VM as follows:

Be sure you connect the ESXi installation media and power on the VM to begin the installation.

Once the VM powers back on, log in and enable SSH so that we can run some additional commands to update the OS and prepare it for cloning use.

(Optional) To update ESXi to the latest version, connect to the host via SSH and run the following:

**At the time of this writing, the latest version is Build 11675023 as per the profile used below; be sure to change the profile number as needed**

esxcli network firewall ruleset set -e true -r httpClient
esxcli software profile update -p ESXi-6.7.0-20190104001-standard \
-d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
esxcli network firewall ruleset set -e false -r httpClient

(Optional) To update the latest version of the ESXi Host client, run the following:

esxcli software vib install -v "http://download3.vmware.com/software/vmw-tools/esxui/esxui-signed-latest.vib"

To prepare the VM for cloning use, run the following:

esxcli system settings advanced set -o /Net/FollowHardwareMac -i 1
sed -i 's#/system/uuid.*##' /etc/vmware/esx.conf
/sbin/auto-backup.sh

At this point, you can shutdown the VM and convert it to a template for cloning use.

After cloning a VM, if you plan on joining it to a vCenter Server you will need to run the following on each cloned instance via SSH.

esxcli storage vmfs snapshot resignature -l datastore1

Well, that about does it!  Hope you all enjoyed this post!

-virtualex-

Homelab Makeover 2.0


Hello and first off, thanks so much for visiting my blog!  If you have followed any part of my “Homelab” series, you will be familiar with the components that make up my home “Datacenter”.  If not, take some time to catch up on those posts!

In this post, I am quickly going to cover my lab makeover as I decided to get some new equipment and redo a bunch of my networking.  So without any further hesitation, let’s get to it!

Beginning with my networking equipment, I wanted to move my Cisco SG300-10 out of my home network enclosure cabinet and into my Navepoint rack enclosure.  But then I realized I would have to replace that switch with another to feed the rest of my home’s connections.  Currently, I am using Ubiquiti’s UniFi equipment for my home networking, and since I’m already running Ubiquiti gear, I figured I would purchase a few more of their 8-port switches to do the job so that I can manage those devices from a “single-pane-of-glass” via the controller.  So I went ahead and purchased 2 US-8 switches, of which 1 will feed the home networking and the other will extend to the lab, primarily serving as a trunk for my VLANs to reach the lab’s Cisco switches.

So now, my UniFi network consists of:

On to the lab network…

The US-8-LAB switch connects to my SG300-10, on which I’ve configured 2 ports as a LAG “Trunk” between the switches for VLAN traffic, 2 ports as another LAG “Trunk” connection to the SG300-52 switch, and the others as “Access” ports which connect to the IPMI interfaces of my servers.  The IPMI connections were previously on my SG300-52 switch.  On the SG300-52 switch, I have configured all of my ESXi management ports, vMotion ports, iSCSI & NFS ports, VSAN ports, and data ports for my servers, along with a few LAG connections which connect to my storage devices, and a few which connect my UPS and ATS/PDU units.  I also configured an additional LAG “Trunk” which connects to a Netgear Prosafe GS108T that I had laying around.  I’ve dedicated that switch and its ports to my ex-gaming PC turned “DEV” ESXi host.  Eventually, that host will be decommissioned when I add a new host to my rack enclosure.

So now, my lab network consists of:

Now for the storage devices.  Previously, I was running my lab VMs on a Synology DS415+ storage unit via NFS mounts.  This was all fine and dandy, except for the fact that it would randomly shut itself down for no apparent reason, leading to eventual corruption of my VMs.  I got tired of spending hours trying to recover my machines and eventually discovered that my device was plagued by the Intel ATOM C2000 CPU issue described here.  I then reached out to Synology and they quickly responded and issued an immediate RMA of the device.  Again, this was fine, but where was I going to move my VMs and data to?  I didn’t have another storage device with an ample amount of free space to accommodate all my data, so I decided to bite the bullet and pick up a brand new Synology RS815+ which I could mount in my rack enclosure.  I also scooped up some 1TB SSDs from their compatibility matrix to populate the drive bays.  The difference here is that with the new RackStation, I opted to configure my LUNs via iSCSI instead of NFS like I had previously done with the DiskStation.  Once set up and connected, I vMotion’d all of my machines to the new device and disconnected the DS415+ while I waited for the replacement device to arrive.  That replacement unit eventually came, so I swapped my SSDs from the old unit into the new unit and fired it back up.  I will eventually recreate some NFS mounts and reconnect them to the vSphere environment.

Now, my lab storage consists of:

Finally, the cabinet.  I became rather displeased with the amount of space I had in my Navepoint 9U 450mm enclosure.  The case itself was great, but I just needed some more room in the event I needed to un-rack a server or do anything else in there.  Also, I started to do some “forward-thinking” about eventual future expansion, and the current 9U enclosure was no longer going to suffice.  I decided to upgrade to a new Navepoint 18U 600mm enclosure, and now I have plenty of room for all of my equipment and future expansion.  After relocating my servers to the new rack enclosure, I now have the following equipment mounted in the rack and still have room for growth.

  • 2 x Cat6 keystone patch panels
  • 2 x Cisco SG300 switches
  • 4 x Supermicro servers
  • 1 x Synology storage unit
  • 1 x UPS
  • 1 x ATS/PDU
  • 1 x CyberPower Surge power strip (in the event I need to plug in some other stuff)

Thanks for stopping by!  Please do leave some comments as feedback is always appreciated!  Until next time!

-virtualex-

Fixing A Corrupt Domain Controller – Stop Code 0x00002e2

Yesterday morning I discovered that my Synology NAS had an unexpected shutdown in the middle of the night while my homelab VMs/workloads were still running.  This caused both of my Domain Controllers’ databases to become corrupt, resulting in being unable to boot those machines.  When attempting to boot them, they would get stuck in a BSOD boot-loop and would display a Stop Error Code of 0x00002e2.


After some research, I was able to figure out how to recover my VMs and get them to boot up again.  This had happened to me once before sometime earlier this year, and luckily I remembered that I had taken some notes on how to fix it, so I figured this time I would put together a formal “How To:” guide to have it documented for myself and hopefully to help others facing this issue as well.  So without further ado…let’s get to it!


To start, you will need to power on the machine and then keep pressing the F8 key to bring up the “Advanced Boot Options” menu.  Navigate down to Directory Services Repair Mode and press Enter to boot into Safe Mode.

When you reach the login screen, log in with the Local Administrator account since Active Directory Domain Services are obviously unavailable.

Once at the Desktop, open an elevated Command Prompt window.

As a best practice, I feel it is always wise and important to make a backup of the files before doing any modifications.  Since we will be backing up the NTDS directory, create a directory at your preferred location to store the backup files.  I chose to make a backup folder on the root of “C:\” and a sub-directory with the name/date of the directory I am backing up.

md C:\Backup\NTDS_11122017

Then copy everything from the “C:\Windows\NTDS” directory to this new location using xcopy.

xcopy C:\Windows\NTDS\* C:\Backup\NTDS_11122017 /E /Y /V /C /I

Once done, let’s rename any .log file extensions in the NTDS directory to .log.old

cd C:\Windows\NTDS

ren *.log *.log.old

Now, this is where we get to the good stuff!

First, run the following command to repair the database.

esentutl /p "C:\Windows\NTDS\ntds.dit"

This will display the following warning message; click “OK”.

Let it do its thing and you will see the following once complete.

Now we need to use the NTDS Utility (ntdsutil.exe) to activate the NTDS instance and compact the DB to a new file which will then be used to overwrite the original.  I will be compacting it to a new TEMP directory within the NTDS directory.  The command will automatically create the new directory if it’s not already present.  Enter the following commands.

ntdsutil

activate instance ntds

files

compact to C:\Windows\NTDS\TEMP

If successful, you will be presented with instructions to copy the newly compacted file to the NTDS directory, overwriting the original, and also to delete any .log files in the NTDS directory.  Before we can do that we need to exit out of the ntdsutil.  Type quit twice to exit.

quit

quit

Let’s follow those instructions and also delete the *.log.old files we renamed earlier.

copy "C:\Windows\NTDS\TEMP\ntds.dit" "C:\Windows\NTDS\ntds.dit"

Yes

Ensure you are still in the NTDS directory before entering the following delete commands.

del *.log

del *.log.old

The hard part is now over!  Let’s go ahead and reboot the machine normally.

If all goes well and as expected, your machine will boot successfully and you can log in again with an Active Directory Domain Admin account.

Awesome!  Well, I hope you’ve found this guide useful.  Please feel free to share this and provide me some feedback/comments below.  Thanks for reading!

 

-virtualex-

Home Lab 2017 – Part 1 (Network and Lab Overhaul)


For the last 6+ months, I haven’t had much time to dedicate to my home lab and overall home network.  Between holidays, transitioning to a new employer/role, and everyday life getting in the way, I found that I had to put everything on the back burner for a bit…so I inevitably shut down my home lab. Well, now I am back and am looking forward to writing up some new material that I have been meaning to do for a while.  I will start by saying this is a continuation of my Home Lab 2016 Series, now being dubbed “Home Lab 2017“!

So first and foremost, I powered up my home lab once again and I intend to leave it up and running at 100% uptime.  While doing so, my Synology NAS decided to reboot itself for an auto-update, right in the middle of a VM’s (my domain controller to be exact) boot process.  This would eventually cause my VMDK file to become corrupted and I could no longer boot my DC and reconnect my home lab.  I also had not yet backed anything up since the environment was still fairly new so I figured why not take this opportunity to rebuild everything and get some new components.

I decided to add a few more (3 per host to be exact) extremely quiet Noctua NF-A4x10 FLX 40mm fans.  This will help to keep my ATOM CPUs cool as well as exhaust any hot air out of each case.  I had also been contemplating a network equipment overhaul.  Last year I upgraded my ASUS RT-AC68U SOHO Router with a Ubiquiti ERLite-3 EdgeRouter and turned the ASUS into a wireless AP only.  I do not have a single complaint about the performance and overall stability of that setup.  But I recently began looking at the Ubiquiti UniFi gear and noticed that the Unified Security Gateway basically runs the same EdgeOS found in the ERLite-3, just with a different web interface.  Realizing that we are in this new wave of cloud-managed networking, and seeing that the USG-3P was basically on par with the ERLite-3, I bit the bullet and ordered my new Ubiquiti UniFi gear to replace my current setup.  The feature set in the EdgeRouter series of routers still has the edge over the UniFi’s features, but it’s only a matter of time before they are equal, or UniFi surpasses the EdgeRouters.

I decided on the following products:

After getting everything connected, I will say that I was extremely impressed with the ease of setup, current feature set, and the presentation of the Web UI.  I am not going to go into the specifics of how to set it all up, etc. as this is not a UniFi tutorial, but I will say that the little quick start guides tell you everything you need to know.  One can also consult “Mr. Google” for more information.  

My only gripe with the current feature set of the USG-3P is that there is no support for Jumbo Frames…yet!…but hopefully that will come in a future firmware release.  The US-8-60W does indeed support Jumbo Frames, so I enabled it on there, at least for now.  Additionally, the VOIP LAN port on the USG-3P is there for a future release to add support for it.  I have also read some threads where feature requests have been submitted to allow said port to be used as a secondary LAN/WAN port instead of just for VOIP.  This is currently in beta, but once these settings are added, I feel it would bring the device closer to the capabilities of the ERLite-3 in terms of features. Only time will tell…

Now that I had my basic home network configured, LAN & WiFi-LAN, I powered on my Cisco lab switches and began migrating all of my VLANs over to the new USG-3P, thus removing the need for the static routing I relied on with my previous setup.  Next, I powered on all of my hosts and began upgrading them to ESXi 6.5.  I was finally on my way to getting up to the latest release of vSphere!  Once all of my hosts were upgraded, with the exception of my dev-host as its CPU is not supported in ESXi 6.5, I began spinning up a few new VMs.  I took this time to install Windows Server 2016 for my Domain Controllers and decided to ditch the Windows-based vCenter server in favor of the vCenter Server Appliance (vCSA) since it now has vSphere Update Manager (vUM) integration and the appliance runs on VMware’s Photon OS.

Once my vSphere environment was minimally set up, I started to deploy some more VMs with the vSphere Web Client, and I must say the speed and performance of the Web Client in 6.5 is “night-and-day” compared to the Web Client in 6.0!  No more need for the Client Integration Plugin, as the newer version for 6.5 runs as a service.  This is the way the web client should have been designed from the very beginning instead of making us all suffer because of how slow the Flash-based version previously was.  Although I always preferred to use the Web Client because of the features within it, I can see why so many users still used the C# “fat client” instead.  Who wants to wait forever and a year just for the Hosts and Clusters view, or VMs and Templates view, to load?!?!?  I know that I dreaded the loading times.  Currently, my vSphere lab consists of the following machines…for now.

  • 2 – Domain Controllers (I’ve learned my lesson and the consequences of only having one DC…)
  • 1 – vCenter Server Appliance
  • 1 – vSphere Data Protection Appliance
  • 1 – Windows 10 Management Jumpbox
  • 1 – IP Address Management Server (phpIPAM)
  • 1 – Mail Server (hMailServer)
  • 1 – WSUS Server
  • 1 – SCCM Server ( I am currently teaching this to myself and may eventually leverage SUP, thus replacing/repurposing my current WSUS server)
  • 1 – vRealize Configuration Manager (vCM) Server ( I am also teaching this to myself as to become more familiar with the product and its capabilities)
  • 1 – OpenVPN Appliance

So now that my Home Lab has been upgraded and completely rebuilt, I look forward to spending more time tinkering with it and putting it to good use for exam studies and personal knowledge.  I am dedicating my Sundays as “Home Lab Fun-days”!  Thanks for stopping by and I hope you enjoyed the read! Please comment below and subscribe!