
NSX-T Home Lab – Part 5: Configuring NSX-T Networking

Intro

Welcome to Part 5 of my NSX-T Home Lab series. In my previous post, I went over the lengthy process of configuring the NSX-T fabric. In this post, I am going to cover the process of configuring the networking so we can get the logical routers and logical switches in place, ready to attach VMs to them and begin running workloads on NSX. Let's get to it, shall we?

Logical Switch

An NSX-T Data Center logical switch reproduces switching functionality, including the handling of broadcast, unknown unicast, and multicast (BUM) traffic, in a virtual environment completely decoupled from the underlying hardware.
Logical switches are similar to VLANs, in that they provide network connections to which you can attach virtual machines. For more information, please see the documentation.

I am going to start off by creating a Logical Switch to serve as my uplink from the external network to my Tier-0 router, which I’ll create afterward. To create a logical switch, select Networking > Switching > +ADD. Enter a Name, then from the Transport Zone drop-down menu select the VLAN uplink transport zone that was created in the previous post. Since I’ll be tagging VLANs at the port group level, enter a 0 (zero) for the VLAN ID and click ADD.
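
If you'd rather script this step, the same object can be created through the NSX-T REST API. Treat the following as a rough sketch only: the manager FQDN (nsx-manager.lab.local), the password, the switch name, and the transport zone UUID are placeholders for my lab, and the UUID of the VLAN uplink transport zone comes from a GET on /api/v1/transport-zones first.

# Find the UUID of the VLAN uplink transport zone
curl -k -u admin:'<password>' https://nsx-manager.lab.local/api/v1/transport-zones

# Create the VLAN-backed uplink logical switch (VLAN 0, since tagging is done at the port group)
curl -k -u admin:'<password>' -H "Content-Type: application/json" \
  -X POST https://nsx-manager.lab.local/api/v1/logical-switches \
  -d '{"display_name":"LS-Uplink","transport_zone_id":"<vlan-tz-uuid>","admin_state":"UP","vlan":0}'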

And that’s all there is to it! After the logical switch is created, we need to create a port on it to connect it to a logical router, but first we need a Tier-0 Logical Router.

Tier-0 Logical Router

An NSX-T Data Center logical router reproduces routing functionality in a virtual environment completely decoupled from the underlying hardware. The tier-0 logical router provides an on and off gateway service between the logical and physical network. Tier-0 logical routers have downlink ports to connect to NSX-T Data Center tier-1 logical routers and uplink ports to connect to external networks. For more information, please see the documentation.

To create a Tier-0 Logical Router, select Networking > Routing > +ADD and select Tier-0 Router from the drop-down menu. Provide a Name for the router and, from the Edge Cluster drop-down menu, select the edge cluster that was created in the previous post, then click ADD. Changing the High Availability setting is optional and I’m choosing to leave the default Active-Active setting.
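
For anyone automating this, the router itself maps to a single REST call. Again, just a sketch with placeholder values; the edge cluster UUID can be pulled from GET /api/v1/edge-clusters.

# Create an Active-Active Tier-0 logical router on the edge cluster
curl -k -u admin:'<password>' -H "Content-Type: application/json" \
  -X POST https://nsx-manager.lab.local/api/v1/logical-routers \
  -d '{"display_name":"T0-LR","router_type":"TIER0","high_availability_mode":"ACTIVE_ACTIVE","edge_cluster_id":"<edge-cluster-uuid>"}'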

With the Tier-0 logical router created, click on the router and from the Configuration drop-down menu, select Router Ports then click +ADD under Logical Router Ports.

Enter a Name, leave the Type as “Uplink”, and optionally change the MTU value to match a configured Jumbo Frame size; otherwise, leave the default value of 1500 (I am using 9000 for Jumbo Frames in my environment). From the Transport Node drop-down menu, select the edge transport node created in the previous post. From the Logical Switch drop-down menu, select the logical switch that was created in the previous step, then provide a name for the Logical Switch Port, provide an address on the “Uplink” VLAN 160 for the router port, and click ADD.
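
As far as I can tell, the API equivalent of this wizard is two calls: one that creates a port on the uplink logical switch and one that creates the Tier-0 uplink router port linked to it. The sketch below uses placeholder UUIDs, and edge_cluster_member_index 0 assumes the single edge node in my cluster; verify the payloads against the API guide for your build before using them.

# 1. Create a port on the uplink logical switch for the router to attach to
curl -k -u admin:'<password>' -H "Content-Type: application/json" \
  -X POST https://nsx-manager.lab.local/api/v1/logical-ports \
  -d '{"display_name":"LS-Uplink-Port","logical_switch_id":"<uplink-ls-uuid>","admin_state":"UP"}'

# 2. Create the Tier-0 uplink router port (10.254.160.2/24) linked to that switch port
curl -k -u admin:'<password>' -H "Content-Type: application/json" \
  -X POST https://nsx-manager.lab.local/api/v1/logical-router-ports \
  -d '{"resource_type":"LogicalRouterUpLinkPort","display_name":"T0-Uplink","logical_router_id":"<t0-uuid>","linked_logical_switch_port_id":{"target_type":"LogicalPort","target_id":"<ls-port-uuid>"},"edge_cluster_member_index":[0],"subnets":[{"ip_addresses":["10.254.160.2"],"prefix_length":24}]}'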

Now, with the Tier-0 Logical Router created and attached to an uplink Logical Switch, I have the option of either setting up a Static Route for traffic in and out of NSX, or configuring Border Gateway Protocol, also known simply as BGP. Until one of these is configured, I won’t be able to ping my Tier-0 router. I am going to opt to configure BGP so that any network I add later on down the road will get advertised properly to the neighbor router (Sophos XG) on my external network instead of relying on a wide-open static route. I’ll come back to the BGP configuration a little later on, but first, I’d like to set up a Tier-1 router to connect to my Tier-0. Any VLAN-based logical switches I create from this point on will be attached to the Tier-1 logical router.

Tier-1 Logical Router

Similar to Tier-0 Logical Routers, Tier-1 logical routers have downlink ports to connect to NSX-T Data Center logical switches and uplink ports to connect to NSX-T Data Center tier-0 logical routers. The tier-1 logical router must be connected to the tier-0 logical router to get the northbound physical router access. For more information, please see the documentation.

As was done when creating the Tier-0 logical router, repeat the same process by selecting Networking > Routing > +ADD, but select Tier-1 Router from the drop-down menu instead. Provide a Name, then from the Tier-0 Router drop-down menu select the Tier-0 router that was created in the previous step to attach the Tier-1 to it. Next, from the Edge Cluster drop-down menu select the edge cluster that was created in the previous post, leave the default Failover Mode, then from the Edge Cluster Members drop-down menu select the edge transport node that was created in the previous post and click ADD.

BGP Configuration

To take full advantage of the tier-0 logical router, the topology must be configured with redundancy and symmetry with BGP between the tier-0 routers and the external top-of-rack peers. To enable access between your VMs and the outside world, you can configure an external BGP (eBGP) connection between a tier-0 logical router and a router in your physical infrastructure. For more information, please see the documentation.

To configure BGP on a Tier-0 logical router, select Networking > Routing and select the Tier-0 router. From the Routing drop-down menu, select BGP and click +ADD under the Neighbors section. Enter the neighbor router address; in this case, since I am using a VLAN for my Uplink network, I will specify the gateway address of VLAN 160 configured on my Sophos XG firewall/router. Next, select the Max Hop count needed to reach the neighbor router. In my case, my Tier-0 router is configured with the IP address 10.254.160.2 and is one hop away from the gateway at 10.254.160.1, so I’ll leave the count set to 1. Finally, provide a Remote AS number, which will be configured on the neighbor router (Sophos XG), and click ADD.

Next, click EDIT next to BGP Configuration. Toggle the Status switch to Enabled and enter a Local AS number for the Tier-0 router. Optionally, toggle the Graceful Restart switch to Enabled only if the Edge Cluster has one member, which is the case in this nested lab environment, then click SAVE.
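
The same BGP settings live under the Tier-0 router's routing section in the API. A word of caution on this sketch: the exact property names (remote_as vs. remote_as_num, as_number vs. as_num) have changed between NSX-T releases, so do a GET first and mirror whatever your version returns, including the _revision value the PUT requires. All UUIDs and AS numbers below are placeholders.

# Add the BGP neighbor (the Sophos XG gateway at 10.254.160.1)
curl -k -u admin:'<password>' -H "Content-Type: application/json" \
  -X POST https://nsx-manager.lab.local/api/v1/logical-routers/<t0-uuid>/routing/bgp/neighbors \
  -d '{"neighbor_address":"10.254.160.1","remote_as":<remote-as>}'

# Read the current BGP config to get the field names and _revision for this build
curl -k -u admin:'<password>' https://nsx-manager.lab.local/api/v1/logical-routers/<t0-uuid>/routing/bgp

# PUT it back with BGP enabled and the local AS number set
curl -k -u admin:'<password>' -H "Content-Type: application/json" \
  -X PUT https://nsx-manager.lab.local/api/v1/logical-routers/<t0-uuid>/routing/bgp \
  -d '{"enabled":true,"as_number":<local-as>,"graceful_restart":true,"_revision":<rev>}'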

Pretty straightforward, right? But we’re not done just yet. In order for routes to be advertised properly to the neighbor router, there are a few more things required, one of which is to enable Route Redistribution. To do so, select the Tier-0 Logical Router, open the Routing drop-down menu, and select Route Redistribution. Click EDIT, toggle the Status switch to Enabled, and click SAVE. Next, click +ADD and enter a name for the route redistribution configuration, then from the “Sources” choices, select NSX Static and click ADD. NSX sees any advertised routes as a “dynamic” static route; therefore, this setting needs to be enabled to properly advertise routes to the neighbor router.
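
Route redistribution has its own sub-resources on the Tier-0 router as well. In the sketch below I'm assuming the source enum value for these "dynamic" static routes is NSX_STATIC; check the accepted values with a GET on the rules endpoint before PUTting anything, and carry the returned _revision forward.

# Enable redistribution on the Tier-0 (GET this object, set bgp_enabled to true, PUT it back)
curl -k -u admin:'<password>' https://nsx-manager.lab.local/api/v1/logical-routers/<t0-uuid>/routing/redistribution

# Replace the redistribution rule list with a single rule sourcing NSX static routes
curl -k -u admin:'<password>' -H "Content-Type: application/json" \
  -X PUT https://nsx-manager.lab.local/api/v1/logical-routers/<t0-uuid>/routing/redistribution/rules \
  -d '{"rules":[{"display_name":"nsx-static","sources":["NSX_STATIC"]}],"_revision":<rev>}'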

With BGP now configured on the virtual NSX side, I also need to configure BGP on the physical side, meaning on my Sophos XG. Log into the Sophos XG firewall and navigate to Routing > BGP. Here, I will add my VLAN 160 gateway IP address as the Router ID, set the Local AS number for my router, and click Apply.

Next, I’ll click Add under the Neighbors section and add the Tier-0 router IP address along with the AS number I configured it with, then click Save.

Lastly, I’ll head over to Administration > Device Access and check the boxes for LAN & WAN under Dynamic Routing then click Apply. I believe that only WAN is required to mimic a true production environment but since this is a lab, it won’t cause any harm to enable both.

Now, it’s time to test our configurations. I’ll start off by running a ping test from my jump-box or desktop computer on an external network to the Tier-0 logical router.

Success!! Next, I will check that my Sophos XG can see its BGP neighbor by navigating to Routing > Information > BGP > Neighbors or Summary.

Success again!! I am on a roll!! It’s also a good idea to check the same on the NSX Edge appliance. To do so, open an SSH connection to the NSX Edge appliance and run the following commands.

get logical-routers

This will list the available Tier-0 and Tier-1 routers.

Copy the UUID of the Tier-0 router and run the following.

get logical-router <paste-UUID-here> bgp neighbor

Perfect! At this point, I can’t see any advertised routes in BGP because I have yet to create any. In order for the Tier-1 router to advertise any new routes, I need to enable Route Advertisements. To do so, navigate to Networking > Routing and select the Tier-1 Logical Router, then from the Routing drop-down menu, select Route Advertisement. Click EDIT, toggle the Status switch to Enabled and Advertise All NSX Connected Routes to Yes, and click SAVE.
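
This toggle corresponds to the Tier-1 router's advertisement resource in the API. A sketch, with the Tier-1 UUID and _revision taken from a prior GET:

# Read the current advertisement config (note the _revision in the response)
curl -k -u admin:'<password>' https://nsx-manager.lab.local/api/v1/logical-routers/<t1-uuid>/routing/advertisement

# Enable advertisement of NSX-connected routes
curl -k -u admin:'<password>' -H "Content-Type: application/json" \
  -X PUT https://nsx-manager.lab.local/api/v1/logical-routers/<t1-uuid>/routing/advertisement \
  -d '{"enabled":true,"advertise_nsx_connected_routes":true,"_revision":<rev>}'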

Now, I’ll go ahead and create a new Logical Switch and will attach it to a Router Port on my Tier-1 Logical Router.

As you can see above, I’ve assigned the router port an IP address of 192.168.254.1/24, so now I should be able to see this route being advertised from the NSX Edge and received on the Sophos XG. I’ll first check the NSX Edge.

get logical-router <paste_UUID_here> route static

Excellent! Remember, NSX-T treats the advertised routes as static routes hence the reason that “static” was used in the command syntax on the NSX Edge. Next, I’ll check the routes from the Sophos XG.

BOOM!! Now we’re cooking with gas! BGP is up and running, advertising routes as expected. The only caveat at this point is that I cannot ping this network from my jump-box or local desktop on my external network (not connected to the Sophos XG) unless I create a static route on my physical router. But I should be able to ping from a VM connected to an external network on the Sophos XG (VLAN 140) since the route is advertised and the router knows about it. Let’s test it from the nested lab’s domain controller.

Nice! At this point, why not spin up a VM and attach it to the new Logical Switch and see if we can access the outside world, right? I’ll quickly deploy a Linux VM from a template that I have which is configured to obtain an IP address via DHCP. But wait! I’ll first need a DHCP server within NSX to handle the distribution of dynamic host IPs. While my VM is deploying from a template, let’s cover the process of creating a DHCP server in NSX-T.

DHCP Server

DHCP (Dynamic Host Configuration Protocol) allows clients to automatically obtain network configuration, such as IP address, subnet mask, default gateway, and DNS configuration, from a DHCP server. For more information, please see the documentation.

To create a DHCP Server, I first need a DHCP Profile. I’ll create one by navigating to Networking > DHCP > Server Profiles > +ADD. Provide a Name, from the Edge Cluster drop-down menu select the edge cluster that was previously created, and from the Members drop-down menu select the NSX edge transport node that was previously created and click ADD.

Next, click Networking > DHCP > Servers > +ADD. Provide a Name, an IP Address and netmask for the DHCP server, select the DHCP profile that was just created, and click ADD. The Common Options, etc. are not required as we can set these in the next step when creating the IP address pool, but you can set them now if you’d like.

Now, click on the newly created DHCP server and under the IP Pool section click +ADD. Provide a Name, an IP address range, a Gateway IP address, and fill out the Common Options then click ADD.

Lastly, I need to attach the DHCP server to the Logical Switch so that it can begin handing out addresses to VMs connected to it. Click the DHCP server and, from the “Actions” drop-down menu, select Attach to Logical Switch. Select the logical switch from the drop-down menu and click ATTACH. The DHCP server is now ready to hand out IP addresses!
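
For completeness, the DHCP objects are exposed under /api/v1/dhcp as well. Everything in this sketch is a placeholder (names, addresses, UUIDs), and the last call reflects my understanding that the "attach" action is modeled as a logical port on the switch carrying a DHCP_SERVICE attachment, so double-check that against the API guide before relying on it.

# DHCP server profile pinned to the edge cluster
curl -k -u admin:'<password>' -H "Content-Type: application/json" \
  -X POST https://nsx-manager.lab.local/api/v1/dhcp/server-profiles \
  -d '{"display_name":"dhcp-profile","edge_cluster_id":"<edge-cluster-uuid>"}'

# Logical DHCP server using that profile
curl -k -u admin:'<password>' -H "Content-Type: application/json" \
  -X POST https://nsx-manager.lab.local/api/v1/dhcp/servers \
  -d '{"display_name":"dhcp-server","dhcp_profile_id":"<profile-uuid>","ipv4_dhcp_server":{"dhcp_server_ip":"192.168.254.2/24"}}'

# IP pool handed out by the server
curl -k -u admin:'<password>' -H "Content-Type: application/json" \
  -X POST https://nsx-manager.lab.local/api/v1/dhcp/servers/<server-uuid>/ip-pools \
  -d '{"display_name":"dhcp-pool","allocation_ranges":[{"start":"192.168.254.100","end":"192.168.254.200"}],"gateway_ip":"192.168.254.1"}'

# Attach: a logical port on the target switch whose attachment points at the DHCP server
curl -k -u admin:'<password>' -H "Content-Type: application/json" \
  -X POST https://nsx-manager.lab.local/api/v1/logical-ports \
  -d '{"display_name":"dhcp-attach","logical_switch_id":"<ls-uuid>","admin_state":"UP","attachment":{"attachment_type":"DHCP_SERVICE","id":"<server-uuid>"}}'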

Running Workloads on NSX-T

With the DHCP Server created and a new VM deployed, I’ll attach the VM to the Logical switch and power it on. Select the VM, right-click it and select Edit Settings. Change the network adapter port group to the new Logical Switch that was created and click OK.

Power on the VM and once it’s up, log in and check that it has grabbed an IP address from the DHCP server’s IP pool.

Just what I wanted to see! The VM successfully grabbed an IP address from the DHCP server! Woo-Hoo!! I can also ping the VM from the domain controller.

Now, the only thing left is to see if we can ping the outside world, like Google’s DNS server. In order for this to work properly, the router would need to know how to NAT this VM’s IP address to the outside; otherwise, this can be expected to fail, as seen below.

There are a couple of options. The recommended way would be to create a SNAT rule on the Sophos XG firewall/router so it knows how to route traffic out to the WAN. Another way would be to set up a SNAT rule in NSX. This can be a bit tricky in a nested lab setup like this one due to it basically being “double NAT’ed”. I’d prefer to do the rule on the Sophos, but as I’m still learning my way around its interface and settings, it may be easier to simply create a SNAT rule in NSX until I figure out how to do it on the Sophos XG. The one caveat here is that when a SNAT rule is created in NSX, it will break access to the network from an external network, meaning I won’t be able to ping the VM anymore unless I also set up a DNAT rule and ping its NAT address to reach the VM. Let me show you how to create the SNAT rule.

Source NAT (SNAT)

Source NAT (SNAT) changes the source address in the IP header of a packet. It can also change the source port in the TCP/UDP headers. The typical usage is to change a private (rfc1918) address/port into a public address/port for packets leaving your network. For more information, please see the documentation.

To create a SNAT rule, navigate to Networking > Routing. Click the tier-1 logical router and from the Services drop-down menu, select NAT. Click +ADD. For the Source IP, add the network that you want to Source NAT. For the Translated IP, pick an IP address on the Uplink VLAN, in my case, this is VLAN 160. My Tier-0 router uses 10.254.160.2 so I will set this Translated IP to 10.254.160.3 then click ADD.
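
The same rule as a single API call, sketched with my lab's addresses and a placeholder Tier-1 UUID:

# SNAT the 192.168.254.0/24 logical network behind 10.254.160.3 on the uplink VLAN
curl -k -u admin:'<password>' -H "Content-Type: application/json" \
  -X POST https://nsx-manager.lab.local/api/v1/logical-routers/<t1-uuid>/nat/rules \
  -d '{"action":"SNAT","match_source_network":"192.168.254.0/24","translated_network":"10.254.160.3","enabled":true}'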

Next, from the Routing drop-down menu, select Route Advertisement. Click EDIT and toggle the Advertise All NAT Routes to Enabled and click SAVE.

Lastly, click the Tier-0 router and from the Routing drop-down menu, select Route Redistribution. Select the route redistribution criteria that was created earlier in this post and click EDIT. Click the checkbox for Tier-1 NAT and click SAVE.

In theory, this should have allowed me to access the outside world, but I wasn’t able to do so, and I’m thinking there are additional firewall rules required on the Sophos XG to allow it. Of course, things like this are to be expected when working in nested environments, but I’ll continue to tinker with it until I get it to work in the nested lab and will update the post should I figure it out.

Well, that about does it for this one! In the next post, I’ll cover the process of upgrading NSX-T. It will be a while before I get to that since the version I just deployed is the latest release. Thanks as always for your support!

NSX-T Home Lab – Part 4: Configuring NSX-T Fabric

Intro

Welcome to Part 4 of my NSX-T Home Lab series. In my previous post, I covered the process of deploying the NSX-T appliances and joining them to the management plane to have the foundational components ready for us to continue the configuration. In this post, I will cover all the configuration required to get the NSX-T fabric ready for the networking setup so that we can run workloads on it. So sit back, buckle up, and get ready for a lengthy read!

Compute Manager

A compute manager, for example, a vCenter Server, is an application that manages resources such as hosts and VMs. NSX-T polls compute managers to find out about changes such as the addition or removal of hosts or VMs and updates its inventory accordingly. I briefly touched on this in my previous post, stating that one is required if deploying an NSX Edge appliance directly from the NSX Manager. Otherwise, a compute manager is completely optional, but I find value in it for retrieving my lab’s inventory, and it’s the first thing I like to configure in my deployments. For more information, please see the documentation.

To configure a Compute Manager, log into your NSX Manager UI and navigate to Fabric > Compute Managers > +ADD. Enter the required information, leaving the SHA-256 Thumbprint field empty, and click ADD. You should receive an error because a thumbprint was not provided, but you will be asked whether you want to use the server-provided thumbprint, so click ADD. Allow a brief 30-60 seconds for the Compute Manager to connect to the vCenter server, clicking refresh if needed, until you see that it’s “Registered” and “Up“.
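
If you want to register the compute manager programmatically instead, the call looks roughly like the sketch below; the FQDNs, credentials, and SHA-256 thumbprint are placeholders, and just like the UI, omitting the thumbprint should return an error containing the server-provided one, which you can then resubmit.

# Register vCenter as a compute manager
curl -k -u admin:'<password>' -H "Content-Type: application/json" \
  -X POST https://nsx-manager.lab.local/api/v1/fabric/compute-managers \
  -d '{"display_name":"vcsa","server":"vcsa.lab.local","origin_type":"vCenter","credential":{"credential_type":"UsernamePasswordLoginCredential","username":"administrator@vsphere.local","password":"<password>","thumbprint":"<vcenter-sha256-thumbprint>"}}'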

Tunnel EndPoint (TEP) IP Pool

As stated in the official documentation, tunnel endpoints are the source and destination IP addresses used in the external IP header to uniquely identify the hypervisor hosts originating and terminating the NSX-T encapsulation of overlay frames. DHCP or IP pools can be configured for TEP IP addresses, so I’ll create a dedicated pool to use instead of DHCP; these addresses will reside in the Overlay network, VLAN 150. For more information, please see the documentation.

To add a Tunnel EndPoint IP Pool, navigate to Inventory > Groups > IP Pools > +ADD. Provide a Name then click +ADD underneath the Subnets section and provide the required information for the IP pool then click ADD.
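
The same pool via the API, sketched with placeholder addresses on my Overlay VLAN 150 (substitute your own range and gateway):

# Create the TEP IP pool
curl -k -u admin:'<password>' -H "Content-Type: application/json" \
  -X POST https://nsx-manager.lab.local/api/v1/pools/ip-pools \
  -d '{"display_name":"TEP-Pool","subnets":[{"cidr":"10.254.150.0/24","gateway_ip":"10.254.150.1","allocation_ranges":[{"start":"10.254.150.11","end":"10.254.150.50"}]}]}'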

Uplink Profiles

An uplink profile defines policies for the links from the hypervisor hosts to NSX-T logical switches or from NSX Edge nodes to top-of-rack switches. The settings defined by these profiles might include teaming policies, active/standby links, the transport VLAN ID, and the MTU setting. Uplink profiles allow you to consistently configure identical capabilities for network adapters across multiple hosts or nodes. By default, there are two uplink profiles already provided with NSX-T, but they cannot be edited; therefore, I am going to create new ones for the Edge uplink as well as for my hosts’ uplinks. For more information, please see the documentation.

Edge Uplink Profile

To create an Edge uplink profile, navigate to Fabric > Profiles > Uplink Profiles > +ADD. Provide a Name and an optional description, then, under Teamings, set the Teaming Policy to Failover Order and set the Active Uplinks to uplink-1. Set the Transport VLAN to 0, as we are tagging at the port group level for our Edge, and either leave the MTU at the default 1600 or set it to a higher value supported by your Jumbo Frames configuration. In my setup, I will set the MTU to 9000, then click ADD.
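
In the API, uplink profiles are "host switch profiles". Here's a sketch of the edge profile described above (failover order, uplink-1 active, transport VLAN 0, MTU 9000); the host profile in the next step would differ only in its standby uplink and transport VLAN 150. As always, verify the field names against the API guide for your release.

# Create the edge uplink profile
curl -k -u admin:'<password>' -H "Content-Type: application/json" \
  -X POST https://nsx-manager.lab.local/api/v1/host-switch-profiles \
  -d '{"resource_type":"UplinkHostSwitchProfile","display_name":"edge-uplink-profile","mtu":9000,"transport_vlan":0,"teaming":{"policy":"FAILOVER_ORDER","active_list":[{"uplink_name":"uplink-1","uplink_type":"PNIC"}]}}'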

Host Uplink Profile

Next, I’ll repeat the process to create an uplink profile for my ESXi hosts. This time, I’ll keep the same settings for Teaming but will set the Standby Uplinks as uplink-2, the Transport VLAN will be my Overlay VLAN ID 150 since these uplinks are connected directly to the hosts and need to be tagged accordingly, and again I’ll set the MTU to 9000 and click ADD.

Transport Zones

Transport Zones dictate which hosts, and therefore which VMs, can participate in the use of a particular network. There are two types of transport zones: overlay and VLAN. The overlay transport zone is used by both host transport nodes and NSX Edges and is responsible for communication over the overlay network. The VLAN transport zone is used by the NSX Edge for its VLAN uplinks. Both types create an N-VDS on the host or Edge to allow for virtual-to-physical packet flow by binding logical router uplinks and downlinks to physical NICs. For more information, please see the documentation.

Overlay Transport Zone

To create an overlay transport zone, navigate to Fabric > Transport Zones > +ADD. Provide a Name, an N-VDS Name, select Standard or Enhanced Data Path for the N-VDS Mode, set the Traffic Type as Overlay and click ADD.

Enhanced Data Path is a networking stack mode which, when configured, provides superior network performance and is primarily targeted at NFV workloads. It requires you to install an additional VIB, specific to your physical NICs, onto your hosts and reboot them before it can be configured, but this is out of scope for this demo. For more information, please see the documentation.

I’ll be choosing Standard for this demonstration since it’s a nested lab and the hosts’ “physical” NICs are really just VMware VMXNET3 adapters.

VLAN Transport Zone

To create a VLAN uplink transport zone, I’ll repeat the same process as above by providing a Name, an N-VDS Name, but will change the Traffic Type to VLAN before clicking ADD.
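
Both transport zones can also be created via the API; the display and N-VDS names below are placeholders for the ones used in this lab.

# Overlay transport zone backed by the overlay N-VDS
curl -k -u admin:'<password>' -H "Content-Type: application/json" \
  -X POST https://nsx-manager.lab.local/api/v1/transport-zones \
  -d '{"display_name":"TZ-Overlay","host_switch_name":"N-VDS-Overlay","transport_type":"OVERLAY"}'

# VLAN uplink transport zone backed by the uplink N-VDS
curl -k -u admin:'<password>' -H "Content-Type: application/json" \
  -X POST https://nsx-manager.lab.local/api/v1/transport-zones \
  -d '{"display_name":"TZ-VLAN","host_switch_name":"N-VDS-Uplink","transport_type":"VLAN"}'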

Transport Node

There are two types of transport nodes that need to be configured in NSX-T: a host transport node and an edge transport node. A host transport node is a node that participates in NSX-T overlay or NSX-T VLAN networking, whereas an edge transport node is a node that is capable of participating in NSX-T overlay or NSX-T VLAN networking. In short, they simply facilitate communication in an NSX-T overlay and/or NSX-T VLAN network. For more information, please see the documentation here and here.

I’ll be automatically creating the host transport nodes when I add them to the fabric, but will be manually creating a single edge transport node with two N-VDS instances for my overlay and VLAN uplink transport zones, respectively.

Host Transport Node

This part is pretty simple and is where having the Compute Manager that I configured earlier comes in handy. But this process can also be done manually. You’ll need to ensure you have at least 1 Physical NIC (pNIC) available for NSX-T, and you can see here that my hosts were configured with 4 pNICs and I have 2 available, vmnic2 and vmnic3.

Navigate to Fabric > Nodes > Hosts and, from the “Managed By” dropdown menu, select the compute manager. Select a cluster and then click Configure Cluster. Toggle both switches to Enabled to Automatically Install NSX and Automatically Create Transport Node. From the Transport Zone dropdown menu, select the Overlay transport zone created earlier; from the Uplink Profile dropdown menu, select the host uplink profile created earlier; and from the IP Pool dropdown menu, select the TEP IP Pool created earlier. Lastly, enter the pNIC names and choose their respective uplinks. If you have 2 pNICs, click ADD PNIC and then modify the information before clicking ADD to complete the process.

This will start the host preparation process so allow a few minutes for the NSX VIBs to be installed on the hosts and for the transport nodes to be configured, clicking the refresh button as needed until you see “NSX Installed” and the statuses are “Up”.

Click on the Transport Nodes tab, and we can see that the nodes have been automatically created and are ready to go. And if we look back at the physical NICs in vCenter, we can see that they have now been added to the N-VDS-Overlay.

Edge Transport Node

Begin by navigating to Fabric > Nodes > Transport Nodes > +ADD. Provide a Name; from the Node dropdown menu, select the Edge that we deployed and joined to the management plane in the previous post; in the “Available” column, select the two transport zones we previously created and click the > arrow to move them over to the “Selected” column; then click the N-VDS heading tab at the top of the window.

From the Edge Switch Name dropdown menu, select the N-VDS we created earlier for the Overlay. From the Uplink Profile dropdown menu, select the edge uplink profile we created earlier. From the IP Assignment dropdown menu, select Use IP Pool, from the IP Pool dropdown menu, select the TEP IP Pool created earlier, and finally, from the Virtual NICs dropdown menus, select fp-eth0 which is the 2nd NIC on the Edge VM and then select uplink-1.

But we’re not done yet! Next, click +ADD N-VDS so that we can add the additional one needed for the remaining transport zone. From the Edge Switch Name dropdown menu, select the N-VDS we created earlier for the VLAN Uplink. From the Uplink Profile dropdown menu, select the edge uplink profile again. And from the Virtual NICs dropdown menus, select fp-eth1 which is the 3rd NIC on the Edge VM and then select uplink-1. Now we can finally click ADD.

Allow a few minutes for the configuration state and status to report “Success” and “Up” and click the refresh button as needed.

Edge Cluster

Having a multi-node cluster of NSX Edges helps ensure that at least one NSX Edge is always available. In order to create a tier-0 logical router or a tier-1 router with stateful services such as NAT, load balancing, and so on, you must associate it with an NSX Edge cluster. Therefore, even if you have only one NSX Edge, it must still belong to an NSX Edge cluster to be useful. For more information, please see the documentation.

To create an edge cluster, navigate to Fabric > Nodes > Edge Clusters > +ADD. Enter a Name, then in the “Available” column select the edge transport node created earlier, click the > arrow to move it to the “Selected” column, and click ADD. Well, that was easy!
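
For the API-minded, an edge cluster is essentially just a list of edge transport node IDs. A sketch, with a placeholder UUID for the single edge transport node created above:

# Create the edge cluster with one member
curl -k -u admin:'<password>' -H "Content-Type: application/json" \
  -X POST https://nsx-manager.lab.local/api/v1/edge-clusters \
  -d '{"display_name":"Edge-Cluster-01","members":[{"transport_node_id":"<edge-transport-node-uuid>"}]}'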

Wow! That seemed like a lot of work but it was exciting to get the components configured and ready for setting up the Logical Routers and switches, which I’ll cover in the next post, so we can start running VMs in NSX-T. I hope you’ve found this post useful and I thank you for reading.

NSX-T Home Lab – Part 3: Deploying NSX-T Appliances

Intro


Welcome to Part 3 of my NSX-T Home Lab Series. In my previous post, I went over the process of setting up the Sophos XG firewall/router VM for my nested lab environment. In this post, we’ll cover the process of deploying the required NSX-T appliances. There are 3 main appliances that need to be deployed: the first is the NSX-T Manager, followed by a single or multiple Controllers, and lastly, a single or multiple Edge appliances. For the purposes of this nested lab demo, I will only be deploying a single instance of each appliance, but please follow recommended best practices if you are leveraging this series for a production deployment. With all that said, let’s get to it!

NSX-T Manager Appliance

Prior to deploying the appliance VMs, it’s recommended to create DNS entries for each component. I’ve already done this on my Domain Controller. Additionally, if you need to obtain the OVAs, please download them from here. At the time of this writing, NSX-T 2.3.1 is the latest version.

We’ll begin by deploying the NSX Manager appliance to our Management Cluster using the vSphere (HTML5) Web Client and deploying the NSX Unified Appliance OVA. You can also deploy via the command line and/or PowerCLI, but for the purposes of this demo, I am going to leverage the GUI. Please use the following installation instructions to deploy the NSX Manager Appliance. I’ve used the following configuration options for my deployment:

  • “Small” configuration – consumes 2 vCPU and 8GB RAM.
    • The default “Medium” configuration consumes 4 vCPU and 16GB RAM
  • Thin Provision for storage
  • Management Network port group on VLAN 110
  • Role – nsx-manager
  • Enable SSH
  • Allow root SSH Logins

Once the appliance has been deployed, edit its settings and remove the CPU and Memory reservations by setting the values to 0 (zero). Normally, these would be left in place to guarantee those resources for the appliance, but since this is a resource-constrained nested lab, I’m choosing to remove them.

At this point, power on the appliance and wait a few minutes for the Web UI to be ready, then open a browser to the IP or FQDN of the NSX Manager and log in using the credentials provided during the deployment (admin/the_configured_passwd). Accept the EULA and choose whether or not to participate in the CEIP.

NSX-T Controller Appliance

Next up is the NSX Controller. There are several ways this appliance can be deployed, such as via the command line, PowerCLI, the vSphere Web Client, or directly from the NSX Manager. For this demo, I am opting to continue my deployment via the vSphere Web Client for the sake of simplicity and familiarity. As was done with the NSX Manager, deploy the NSX Controller OVA by following these instructions. Again, since this is a resource-constrained nested lab, I am going to choose the following configuration options and remove the CPU and Memory reservations after the deployment has completed.

  • “Small” configuration – consumes 2 vCPU and 8GB RAM.
    • The default “Medium” configuration consumes 4 vCPU and 16GB RAM
  • Thin Provision for storage
  • Management Network port group on VLAN 110
  • Enable SSH
  • Allow root SSH Logins
  • Ignore the “Internal Properties” section as it’s optional

With this part of the deployment complete, go ahead and power on the appliance. The next step is to join the controller to the management plane (NSX Manager) since we deployed the controller manually. Follow these instructions to perform this step, but I’ll also post screenshots below. These steps can either be done from the appliance console or via SSH. I’ll opt for the latter since we chose the option to allow SSH during the previous deployments. Be mindful of the appliances I am running commands on in the screenshots below as some are done on the manager while others are done on the controller.

In the last image above, we can see that the Control cluster status reports “UNSTABLE“. This is to be expected in this deployment scenario as we only deployed a single controller instance. Nothing for us to worry about here.

Now that we’ve joined the Controller to the Manager (management plane), there is one last thing to do which is to initialize the Control Cluster to create a Control Cluster Master. To do so, follow these instructions.
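
For reference, the commands behind those screenshots look roughly like the following; I've pulled them from my notes against the 2.3 documentation, so confirm the exact syntax for your version, and note that the IP, thumbprint, and shared secret are placeholders (the join command prompts for the admin password). On the NSX Manager:

get certificate api thumbprint

Then on the NSX Controller:

join management-plane <nsx-manager-ip> username admin thumbprint <manager-api-thumbprint>
set control-cluster security-model shared-secret secret <your-shared-secret>
initialize control-cluster
get control-cluster status verbose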

Now, if we log in to the NSX Manager again and click the Dashboard view, we can see that we now have both a Manager and a Controller Node configured.

NSX-T Edge Appliance

Are you still with me? Good! The final component to deploy is the NSX Edge. As we’ve done with the other appliances, I will continue deploying the Edge via the vSphere GUI instead of leveraging the NSX Manager, and I’ll tell you why. Deploying via the NSX Manager is the easiest method, but there is a caveat: it will only allow you to select a “Medium” or “Large” configuration deployment and will automatically power on the appliance post-deployment, thus keeping the set CPU and Memory reservations. I’ve heard there is a way to trick the UI into deploying a “Small” configuration, but I’ve yet to confirm this nor have I seen it done. Additionally, there is a prerequisite for deploying via the NSX Manager, which is to configure a Compute Manager, something we’ve yet to cover and will be covered in the next post of this series. Follow these instructions to deploy the appliance. By deploying via the vSphere Web Client, I have the ability to select “Small” for the deployment and then remove the reservations before powering on the appliance, but feel free to use your method of choice.

As we’ve done with the other deployments, use the following options.

  • “Small” configuration – consumes 2 vCPU and 4GB RAM.
    • The default “Medium” configuration consumes 4 vCPU and 8GB RAM
  • Thin Provision for storage
  • Management Network port group on VLAN 110
  • Enable SSH
  • Allow root SSH Logins
  • Ignore the “Internal Properties” section as it’s optional and any other non-relevant fields

When we get to the networking section, select the Management port group for “Network 0”, select the Overlay port group for “Network 1”, and select the Uplink portgroup for “Network 2” and “Network 3”.

Power on the appliance and, when the console is ready, either log in to the console or connect via SSH, as we’ll need to join this appliance to the management plane just as we did previously with the controller. Follow these instructions to complete this process. Again, be mindful of which appliance the commands are executed against.
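
The edge-side commands mirror what was done on the controller; roughly (placeholders again, with the admin password prompted for after the join command):

join management-plane <nsx-manager-ip> username admin thumbprint <manager-api-thumbprint>
get managers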

Again, if we log into the NSX Manager and click the Dashboard view, we’ll now see that we also have an Edge Node configured. Woo-hoo!!

Well, this completes this post and I hope you had fun following along. In the next post, I’ll cover adding hosts to our NSX-T fabric along with creating all the required transport nodes, zones, edge clusters, profiles, and routing stuff so that we can get everything ready to have workloads run in NSX-T!

NSX-T Home Lab – Part 2: Configuring ESXi VMs

Intro

Welcome to Part 2 of my NSX-T Home Lab Series.  In my previous post, I went over the installation and configuration of a Sophos XG firewall for my nested NSX-T Home Lab.  In this post, I will cover the setup and configuration of the ESXi 6.7 VMs.

I recently wrote a post on how to Create an ESXi 6.7 VM Template, which is what I used to deploy my VMs from.  After cloning to new VMs, I changed the disk sizes for my cache and capacity disks, increased the CPUs and RAM, and added 2 additional network adapters to give me a total of 4 adapters.  The reason I did this is so that I can keep my management and other vmkernel ports on their VDS and have two new adapters to use for NSX-T.  I may do a follow-up post using only two adapters, where I’ll migrate my vmkernel networks over to NSX-T, as in the real world I’m sure there are many customers using dual 10Gb cards in their servers.

Now, I will not be covering how to actually install ESXi as you can follow the documentation for that, or you can reference my post mentioned above.  There really isn’t much to that installation…it’s pretty trivial.  Instead, I am just going to quickly state the specs used for my ESXi VMs from a resource perspective, and give some additional pointers.

Single-Node Management Cluster VM

  • CPUs: 8
  • RAM: 32GB
  • Disk1: Increased to 500GB (This will serve as a local VMFS6 datastore)
  • Disk2: Removed (As I will not be running VSAN)
  • Network Adapters: 2 (connected to the Nested VDS port group we created earlier)

On this host, I deployed a Windows Server 2019 Core OS to serve as my domain controller for the nested lab.  I also deployed a VCSA to manage the environment.

2-Node VSAN Compute Cluster (with a Witness Appliance)

  • CPUs: 8 on each host
  • RAM: 16GB on each host
  • Network Adapters: 4 on each host (connected to the Nested VDS port group we created earlier)

I used the new Quick Start feature to create and configure my VSAN cluster along with all the networking required, and this has now become one of my favorite new features in vSphere 6.7.  There were some nuances I had to fix which were super simple.  During the creation of the VDS and process of migrating vmkernel ports to the VDS, my nested ESXi VMs would lose connectivity.  Simply restarting the management network from the console proved to fix the issue and I was able to proceed.

I then used VUM to update each host to the latest version (Build 11675023) that was released on 1/17/19.  Once everything was configured, I had a nice little, nested playground ready for NSX-T!

In the next post, I will go over the deployment of the NSX-T appliances in the nested lab.  Be sure to come back!

-virtualex-

NSX-T Home Lab – Part 1: Configuring Sophos XG Firewall

Intro

Welcome to Part 1 of my NSX-T Home Lab Series.  In my previous post, I went over the gist of what I plan to do for my nested NSX-T Home Lab.  In this post, I will cover the setup and configuration of a Sophos XG firewall Home Edition, which will serve as the router for my nested lab environment.  My physical Home Lab is configured with Virtual Distributed Switches, or VDS (sometimes seen as DVS) for short, and since this is a nested lab environment that will not have any physical uplinks connected, I will need to create a new VDS without physical uplinks, along with a portgroup for the nested environment, and then configure access to the environment from my LAN.  All traffic will flow through the virtual router/firewall to communicate to and from the nested lab.

Prerequisites:

  • VDS and portgroup without physical uplinks
    • Set the VLAN type for this portgroup to VLAN Trunking with the range of 0-4094 to allow all VLANs to trunk through
  • Static route to access the nested lab from my LAN
    • Once you determine the subnets you’d like to use for the nested lab, add a static route summary on your physical router

I have a bunch of VLANs created for my physical Home Lab as I’ve yet to deploy NSX-T in there, but once I do, I’ll be removing the majority of said VLANs and only keeping the ones required to run the lab.  With that said, one of the VLANs I have is for “Development” work such as this, so I’ll be connecting one uplink from the router to this VLAN, which will serve as the WAN interface, while the other uplink will be connected to the new nested portgroup to serve as the LAN for the nested lab.  I’ll describe the basics of deploying the Sophos XG firewall, but will not go into full detail as this is pretty trivial and can be done using the following guide as a reference.

  • OS: Other Linux 3.x or higher
  • CPU: 1 (add more as needed – max supported is 4 in the home edition)
  • RAM: 2GB (add more as needed – max supported is 6GB in the home edition)
  • Disk: 40GB thin (you may make this smaller if you’d like)
  • Network Adapter 1: LAN portgroup (nested)
  • Network Adapter 2: WAN portgroup
  • Boot: BIOS (will not boot if you keep as EFI)

Once the VM has been deployed, the Sophos XG will be configured with a 172.16.1.1 address by default.  This will need to be changed to the subnet you’re using for your nested LAN interface.  Login to the console with the default (admin – admin) credentials, and choose the option for Network Configuration to change the IP for your nested LAN port.

Once this is done, you would normally navigate to that address on port 4444 to access the admin GUI.  Unfortunately, this will not work since the LAN side has no physical uplinks.  So what do we do?  We need to run a command to enable admin access on the WAN port.  To do so, choose option 4 to enter the device console and enter the following command:

system appliance_access enable

The WAN port is set to grab an address from DHCP so you’ll need to determine which IP address this is either by going into your physical router, or using a tool like Angry IP.  Once in the Admin GUI, navigate to Administration > Device Access and tick the box for WAN under the HTTPS column.  See this post for reference.

Now, we can create our VLANs for our nested environment.  I’m using the following for my lab:

VLAN    Subnet              Purpose
110     10.254.110.1/24     Management
120     10.254.120.1/24     vMotion
130     10.254.130.1/24     VSAN
140     10.254.140.1/24     VM Network
150     10.254.150.1/24     Overlay
160     10.254.160.1/24     Uplink

Navigate to Networking and select Add Interface > VLAN to create each of your networks.

With our VLANs created, we’ll need to create two firewall rules: one to allow traffic from the WAN port to access the LAN, and one to allow traffic from LAN to LAN.  Navigate to Firewall > Add firewall rule and create the following rules, labeling them in whatever way makes sense to you:

This is where the static route will now be useful to access your nested lab.  I’ve configured a route summary of 10.254.0.0/16 to go through the IP address of the WAN interface as the gateway so that I can access the Admin UI at https://10.254.1.1:4444 as well.  I’ll now also be able to access the ESXi UI and VCSA UI, once they are stood up.

The final thing I will be doing is enabling the native MAC Learning functionality that is now built into vSphere 6.7 so that I do not need to enable Promiscuous Mode, which has normally been a requirement for the Nested portgroup and nested labs in general.  To learn more about how to do this, see this thread.  In my setup, I ran the following to enable this on my nested VDS portgroup:

Set-MacLearn -DVPortgroupName @("VDS1-254-NESTED") -EnableMacLearn $true -EnablePromiscuous $false -EnableForgedTransmit $true -EnableMacChange $false

To check that it was indeed set correctly, I ran the following:

Get-MacLearn -DVPortgroupName @("VDS1-254-NESTED")

And there you have it!  In the next post, I will go over configuring our ESXi VMs for our nested lab!

NSX-T Home Lab Series

Intro

I recently upgraded my Home Lab “Datacenter” to support all-flash VSAN and 10Gb networking with the plan to deploy NSX-T so that I can familiarize myself with the solution and use it to better prepare for the VMware VCP-NV certification exam.  Since this is all brand new to me, I’ve decided that I’ll first deploy it in a nested lab environment in order to learn the deployment process as well as to minimize the risk of accidentally messing up my Home Lab environment.

Now, I know there are a few blogs out in the wild already that go over the installation and setup of NSX-T, but I wanted to write my own as it will better help me retain the information that I am learning.  Additionally, others may have a different setup than I have and/or may have deployed the solution differently than the way I intend to, which is by following the published documentation.  I’d like to take this time to first shout out some of my colleagues, William Lam, Keith Lee, Cormac Hogan, and Sam McGeown, as their own blogs are what inspired me to deploy the solution for myself and document the process.

This post will serve as the main page where I’ll post the hyperlinks to each post in the series.  I’ll be deploying a virtual router/firewall, 3x ESXi VMs, and a witness appliance so that I can configure a virtual 2-node VSAN compute cluster.  I’ll be managing the environment via a vCenter Server Appliance or VCSA, and a Windows Server 2019 Core OS Domain Controller or DC.  I won’t cover the installation and configuration of the DC as it’s out of scope for this series, nor will I go over the deployment of the VCSA or VSAN configuration as this can be done by following the documentation.  And, since this is just a small nested lab, the remaining host that isn’t a part of the VSAN cluster will serve as a single-node Management cluster host where the DC, VCSA, and NSX-T Appliances will reside.

I will cover the router setup, ESXi VM configuration, and NSX-T deployment.  For my setup, I am going to leverage a Sophos XG firewall Home Edition since I’ve always had an interest in learning more about these firewalls, but also because I typically see pfSense being used for virtual routers and I wanted to try something different.  If you are using this as a guide for your own deployment, feel free to use your router/firewall of choice as there are plenty out there like FreeSCO, Quagga, or VyOS, just to name a few.  So, with that said, I hope you all enjoy the content in this series!
