{"id":1487,"date":"2019-02-15T10:15:18","date_gmt":"2019-02-15T15:15:18","guid":{"rendered":"https:\/\/ithinkvirtual.com\/?p=1487"},"modified":"2019-02-18T13:09:18","modified_gmt":"2019-02-18T18:09:18","slug":"nsx-t-home-lab-part-4-configuring-nsx-t-fabric","status":"publish","type":"post","link":"https:\/\/ithinkvirtual.com\/2019\/02\/15\/nsx-t-home-lab-part-4-configuring-nsx-t-fabric\/","title":{"rendered":"NSX-T Home Lab – Part 4: Configuring NSX-T Fabric"},"content":{"rendered":"\n

<h2>Intro</h2>

<p>Welcome to Part 4 of my NSX-T Home Lab series. In my previous post, I covered deploying the NSX-T appliances and joining them to the management plane so that the foundational components are ready for us to continue the configuration. In this post, I will cover all of the configuration required to get the NSX-T fabric ready for logical networking so that we can run workloads on it. So sit back, buckle up, and get ready for a lengthy read!</p>

<h2>Compute Manager</h2>

<p>A compute manager, for example a vCenter Server, is an application that manages resources such as hosts and VMs. NSX-T polls compute managers to find out about changes, such as the addition or removal of hosts or VMs, and updates its inventory accordingly. I briefly touched on this in my previous post, stating that one is required if deploying an NSX Edge appliance directly from the NSX Manager. Otherwise, a compute manager is completely optional, but I find value in it for retrieving my lab's inventory, and registering one is the first thing I like to do in my deployments. For more information, please see the documentation.</p>

<p>To configure a compute manager, log into your NSX Manager UI and navigate to <strong>Fabric > Compute Managers > +ADD</strong>. Enter the required information, leaving the SHA-256 Thumbprint field empty, and click <strong>ADD</strong>. You should receive an error because a thumbprint was not provided, but you will be asked whether you want to use the server-provided thumbprint, so click <strong>ADD</strong>. Allow a brief 30-60 seconds for the compute manager to connect to the vCenter Server, clicking refresh if needed, until you see that it is "<strong>Registered</strong>" and "<strong>Up</strong>".</p>

\"\"<\/figure>\n\n\n\n
\"\"<\/figure>\n\n\n\n
\"\"<\/figure>\n\n\n\n

<h2>Tunnel Endpoint (TEP) IP Pool</h2>

<p>As stated in the official documentation, tunnel endpoints are the source and destination IP addresses used in the external IP header to uniquely identify the hypervisor hosts originating and terminating the NSX-T encapsulation of overlay frames. Either DHCP or IP pools can be configured for TEP IP addresses, so I'll create a dedicated pool to be used instead of DHCP, and the addresses will reside in the overlay network, VLAN <strong>150</strong>. For more information, please see the documentation.</p>

<p>To add a tunnel endpoint IP pool, navigate to <strong>Inventory > Groups > IP Pools > +ADD</strong>. Provide a Name, then click <strong>+ADD</strong> underneath the Subnets section, provide the required information for the IP pool, and click <strong>ADD</strong>.</p>

\"\"<\/figure>\n\n\n\n

<h2>Uplink Profiles</h2>

<p>An uplink profile defines policies for the links from the hypervisor hosts to NSX-T logical switches, or from NSX Edge nodes to top-of-rack switches. The settings defined by these profiles may include teaming policies, active/standby links, the transport VLAN ID, and the MTU setting. Uplink profiles allow you to consistently configure identical capabilities for network adapters across multiple hosts or nodes. By default, there are two uplink profiles already provided with NSX-T, but they cannot be edited, so I am going to create new ones for the Edge uplink as well as for my hosts' uplinks. For more information, please see the documentation.</p>

<h3>Edge Uplink Profile</h3>

<p>To create an Edge uplink profile, navigate to <strong>Fabric > Profiles > Uplink Profiles > +ADD</strong>. Provide a Name and an optional description, then, under Teaming, set the Teaming Policy to <strong>Failover Order</strong> and set the Active Uplinks to <strong>uplink-1</strong>. Set the Transport VLAN to <strong>0</strong>, as we are tagging at the port group level for our Edge, and either leave the MTU at the default 1600 or set it to a higher value supported by your jumbo frames configuration. In my setup, I will set the MTU to <strong>9000</strong>, then click <strong>ADD</strong>.</p>

\"\"<\/figure>\n\n\n\n

<h3>Host Uplink Profile</h3>

<p>Next, I'll repeat the process to create an uplink profile for my ESXi hosts. This time, I'll keep the same settings for Teaming but will set the Standby Uplinks to <strong>uplink-2</strong>; the Transport VLAN will be my overlay VLAN ID <strong>150</strong>, since these uplinks are connected directly to the hosts and need to be tagged accordingly; and again I'll set the MTU to <strong>9000</strong> and click <strong>ADD</strong>.</p>

\"\"<\/figure>\n\n\n\n

<h2>Transport Zones</h2>

<p>Transport zones dictate which hosts, and therefore which VMs, can participate in the use of a particular network. There are two types of transport zones: overlay and VLAN. The overlay transport zone is used by both host transport nodes and NSX Edges and is responsible for communication over the overlay network. The VLAN transport zone is used by the NSX Edge for its VLAN uplinks. Both types create an N-VDS on the host or Edge to allow for virtual-to-physical packet flow by binding logical router uplinks and downlinks to physical NICs. For more information, please see the documentation.</p>

<h3>Overlay Transport Zone</h3>

<p>To create an overlay transport zone, navigate to <strong>Fabric > Transport Zones > +ADD</strong>. Provide a Name and an N-VDS Name, select <strong>Standard</strong> or <strong>Enhanced Data Path</strong> for the N-VDS Mode, set the Traffic Type to <strong>Overlay</strong>, and click <strong>ADD</strong>.</p>

<p>Enhanced Data Path is a networking stack mode which, when configured, provides superior network performance and is primarily targeted at NFV workloads. It requires you to install an additional VIB, specific to your physical NICs, onto your hosts and reboot them before it can be configured, but that is out of scope for this demo. For more information, please see the documentation.</p>

<p>I'll be choosing <strong>Standard</strong> for this demonstration, since this is a nested lab and the hosts only see their physical NICs as traditional VMware VMXNET3 adapters.</p>

\"\"<\/figure>\n\n\n\n

<h3>VLAN Transport Zone</h3>

<p>To create a VLAN uplink transport zone, I'll repeat the same process as above, providing a Name and an N-VDS Name, but will change the Traffic Type to <strong>VLAN</strong> before clicking <strong>ADD</strong>.</p>

\"\"<\/figure>\n\n\n\n

<h2>Transport Node</h2>

<p>There are two types of transport nodes that need to be configured in NSX-T: a host transport node and an edge transport node. A host transport node is a node that participates in NSX-T overlay or NSX-T VLAN networking, whereas an edge transport node is a node that is capable of participating in NSX-T overlay or NSX-T VLAN networking. In short, they simply facilitate communication on an NSX-T overlay and/or NSX-T VLAN network. For more information, please see the documentation here and here.</p>

<p>I'll be creating the host transport nodes automatically when I add the hosts to the fabric, but will manually create a single edge transport node with two N-VDS instances for my overlay and VLAN uplink transport zones respectively.</p>

<h3>Host Transport Node</h3>

<p>This part is pretty simple and is where having the compute manager that I configured earlier comes in handy, though the process can also be done manually. You'll need to ensure you have at least one physical NIC (pNIC) available for NSX-T; you can see here that my hosts were configured with four pNICs and I have two available, <strong>vmnic2</strong> and <strong>vmnic3</strong>.</p>

\"\"<\/figure>\n\n\n\n

<p>Navigate to <strong>Fabric > Nodes > Hosts</strong> and, from the "Managed By" dropdown menu, select the compute manager. Select a cluster and then click <strong>Configure Cluster</strong>. Toggle both switches to <strong>Enabled</strong> to <strong>Automatically Install NSX</strong> and <strong>Automatically Create Transport Node</strong>. From the Transport Zone dropdown menu, select the overlay transport zone created earlier; from the Uplink Profile dropdown menu, select the host uplink profile created earlier; and from the IP Pool dropdown menu, select the TEP IP pool created earlier. Lastly, enter the pNIC names and choose their respective uplinks. If you have two pNICs, click <strong>ADD PNIC</strong> and then modify the information before clicking <strong>ADD</strong> to complete the process.</p>

\"\"<\/figure>\n\n\n\n

<p>This will start the host preparation process, so allow a few minutes for the NSX VIBs to be installed on the hosts and for the transport nodes to be configured, clicking the refresh button as needed until you see "NSX Installed" and the statuses are "Up".</p>

\"\"<\/figure>\n\n\n\n
\"\"<\/figure>\n\n\n\n

<p>Click on the Transport Nodes tab, and we can see that the nodes have been automatically created and are ready to go. And if we look back at the physical NICs in vCenter, we can see that they have now been added to the N-VDS-Overlay.</p>

\"\"<\/figure>\n\n\n\n
\"\"<\/figure>\n\n\n\n

<h3>Edge Transport Node</h3>

<p>Begin by navigating to <strong>Fabric > Nodes > Transport Nodes > +ADD</strong>. Provide a Name; from the Node dropdown menu, select the Edge that we deployed and joined to the management plane in the previous post; in the "Available" column, select the two transport zones we previously created and click the <strong>></strong> arrow to move them over to the "Selected" column; then click the <strong>N-VDS</strong> heading tab at the top of the window.</p>

\"\"<\/figure>\n\n\n\n

<p>From the Edge Switch Name dropdown menu, select the N-VDS we created earlier for the <strong>Overlay</strong>. From the Uplink Profile dropdown menu, select the <strong>edge uplink profile</strong> we created earlier. From the IP Assignment dropdown menu, select <strong>Use IP Pool</strong>; from the IP Pool dropdown menu, select the <strong>TEP IP Pool</strong> created earlier; and finally, from the Virtual NICs dropdown menus, select <strong>fp-eth0</strong>, which is the 2nd NIC on the Edge VM, and then select <strong>uplink-1</strong>.</p>

\"\"<\/figure>\n\n\n\n

<p>But we're not done yet! Next, click <strong>+ADD N-VDS</strong> so that we can add the additional one needed for the remaining transport zone. From the Edge Switch Name dropdown menu, select the N-VDS we created earlier for the <strong>VLAN Uplink</strong>. From the Uplink Profile dropdown menu, select the <strong>edge uplink profile</strong> again. And from the Virtual NICs dropdown menus, select <strong>fp-eth1</strong>, which is the 3rd NIC on the Edge VM, and then select <strong>uplink-1</strong>. Now we can finally click <strong>ADD</strong>.</p>

\"\"<\/figure>\n\n\n\n

<p>Allow a few minutes for the configuration state and status to report "Success" and "Up", clicking the refresh button as needed.</p>

\"\"<\/figure>\n\n\n\n

<h2>Edge Cluster</h2>

<p>Having a multi-node cluster of NSX Edges helps ensure that at least one NSX Edge is always available. In order to create a tier-0 logical router, or a tier-1 router with stateful services such as NAT, load balancing, and so on, you must associate it with an NSX Edge cluster. Therefore, even if you have only one NSX Edge, it must still belong to an NSX Edge cluster to be useful. For more information, please see the documentation.</p>

<p>To create an edge cluster, navigate to <strong>Fabric > Nodes > Edge Clusters > +ADD</strong>. Enter a Name, then, in the "Available" column, select the <strong>edge transport node</strong> created earlier, click the > arrow to move it to the "Selected" column, and click <strong>ADD</strong>. Well, that was easy!</p>

\"\"<\/figure>\n\n\n\n

<p>Wow! That seemed like a lot of work, but it was exciting to get the components configured and ready for setting up the logical routers and switches, which I'll cover in the next post, so we can start running VMs on NSX-T. I hope you've found this post useful, and I thank you for reading.</p>
