iThinkVirtual™

NSX-T Home Lab – Part 1: Configuring Sophos XG Firewall

Intro

Welcome to Part 1 of my NSX-T Home Lab Series.  In my previous post, I went over the gist of what I plan to do for my nested NSX-T Home Lab.  In this post, I will cover the setup and configuration of a Sophos XG Firewall Home Edition, which will serve as the router for my nested lab environment.  My physical Home Lab is configured with Virtual Distributed Switches, or VDS (sometimes seen as DVS) for short.  Since the nested lab environment will not have any physical uplinks connected, I will need to create a new VDS without physical uplinks, along with a portgroup for the nested environment, and then configure access to the environment from my LAN.  All traffic to and from the nested lab will flow through the virtual router/firewall.

Prerequisites:

  • VDS and portgroup without physical uplinks (see the PowerCLI sketch after this list)
    • Set the VLAN type for this portgroup to VLAN Trunking with the range of 0-4094 to allow all VLANs to trunk through
  • Static route to access the nested lab from my LAN
    • Once you determine the subnets you’d like to use for the nested lab, add a static route summary on your physical router
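For the first prerequisite, a minimal PowerCLI sketch of creating the uplink-less VDS and the trunked portgroup might look like the following (the switch, datacenter, and host names are placeholders for my environment; only the portgroup name matches what I use later on):

# Create a new VDS for the nested lab; no physical NICs will ever be assigned to its uplinks
$vds = New-VDSwitch -Name "VDS1-NESTED" -Location (Get-Datacenter -Name "HomeLab")
# Add the physical host that will run the nested VMs, without migrating any physical adapters
Add-VDSwitchVMHost -VDSwitch $vds -VMHost (Get-VMHost -Name "esxi01.lab.local")
# Create the nested portgroup and trunk the full VLAN range through it
New-VDPortgroup -Name "VDS1-254-NESTED" -VDSwitch $vds -VlanTrunkRange "0-4094"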

I have a bunch of VLANs created for my physical Home Lab since I've yet to deploy NSX-T there; once I do, I'll remove the majority of those VLANs and keep only the ones required to run the lab.  With that said, one of my VLANs is for "Development" work such as this, so I'll connect one uplink of the router to that VLAN to serve as the WAN interface, while the other uplink will connect to the new nested portgroup to serve as the LAN for the nested lab.  I'll describe the basics of deploying the Sophos XG firewall but won't go into full detail, as the process is pretty trivial and can be done using the following guide as a reference; a rough PowerCLI sketch also follows the settings list below.

  • OS: Other Linux 3.x or higher
  • CPU: 1 (add more as needed – max supported is 4 in the home edition)
  • RAM: 2GB (add more as needed – max supported is 6GB in the home edition)
  • Disk: 40GB thin (you may make this smaller if you’d like)
  • Network Adapter 1: LAN portgroup (nested)
  • Network Adapter 2: WAN portgroup
  • Boot: BIOS (it will not boot if left set to EFI)
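If you'd rather not click through the New VM wizard, a rough PowerCLI sketch along these lines should build an equivalent VM shell (the VM, host, WAN portgroup, and ISO names are all placeholders; adjust them to your environment):

# 1 vCPU, 2GB RAM, 40GB thin disk; NIC1 on the nested LAN portgroup, NIC2 on the WAN (Development) portgroup
$vm = New-VM -Name "SophosXG" -VMHost (Get-VMHost -Name "esxi01.lab.local") `
      -NumCpu 1 -MemoryGB 2 -DiskGB 40 -DiskStorageFormat Thin `
      -NetworkName "VDS1-254-NESTED","<WAN_portgroup>" -GuestId "other3xLinux64Guest"
# Force BIOS firmware, since the appliance will not boot under EFI
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.Firmware = "bios"
$vm.ExtensionData.ReconfigVM($spec)
# Attach the Sophos XG Home Edition installer ISO
New-CDDrive -VM $vm -IsoPath "<datastore_path_to_sophos_iso>" -StartConnected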

Once the VM has been deployed, the Sophos XG will be configured with a 172.16.1.1 address by default.  This will need to be changed to the subnet you're using for your nested LAN interface.  Log in to the console with the default credentials (admin / admin) and choose the Network Configuration option to change the IP of your nested LAN port.

Once this is done, you would normally navigate to that address on port 4444 to access the admin GUI.  Unfortunately, this will not work since the LAN side has no physical uplinks.  So what do we do?  We need to run a command to enable admin access on the WAN port.  To do so, choose option 4 to enter the device console and enter the following command:

system appliance_access enable

The WAN port is set to grab an address from DHCP, so you'll need to determine which IP address it received, either by checking your physical router or by using a tool like Angry IP.  Once in the Admin GUI, navigate to Administration > Device Access and tick the box for WAN under the HTTPS column.  See this post for reference.

Now, we can create our VLANs for our nested environment.  I’m using the following for my lab:

VLAN   Subnet             Purpose
110    10.254.110.1/24    Management
120    10.254.120.1/24    vMotion
130    10.254.130.1/24    VSAN
140    10.254.140.1/24    VM Network
150    10.254.150.1/24    Overlay
160    10.254.160.1/24    Uplink

Navigate to Networking and select Add Interface > VLAN to create each of your networks.

With our VLANs created, we’ll need to create two firewall rules to allow traffic from the WAN port to access the LAN, as well as to allow traffic from LAN to LAN. Navigate to Firewall > Add firewall rule and create the following rules.  Choose something easy to label them as which makes sense to you:

This is where the static route will now be useful to access your nested lab.  I’ve configured a route summary of 10.254.0.0/16 to go through the IP address of the WAN interface as the gateway so that I can access the Admin UI at https://10.254.1.1:4444 as well.  I’ll now also be able to access the ESXi UI and VCSA UI, once they are stood up.

The final thing I will be doing is enabling the native MAC Learning functionality that is now built into vSphere 6.7 so that I do not need to enable Promiscuous Mode, which has normally been a requirement for the Nested portgroup and nested labs in general.  To learn more about how to do this, see this thread.  In my setup, I ran the following to enable this on my nested VDS portgroup:

Set-MacLearn -DVPortgroupName @("VDS1-254-NESTED") -EnableMacLearn $true -EnablePromiscuous $false -EnableForgedTransmit $true -EnableMacChange $false

To check that it was indeed set correctly, I ran the following:

Get-MacLearn -DVPortgroupName @("VDS1-254-NESTED")

And there you have it!  In the next post, I will go over configuring our ESXi VMs for our nested lab!

NSX-T Home Lab Series

Intro

I recently upgraded my Home Lab "Datacenter" to support all-flash VSAN and 10Gb networking with the plan to deploy NSX-T so that I can familiarize myself with the solution and use it to better prepare for the VMware VCP-NV certification exam.  Since this is all brand new to me, I've decided that I'll first deploy it in a nested lab environment in order to learn the deployment process as well as to minimize the risk of accidentally messing up my Home Lab environment.

Now, I know there are a few blogs out in the wild already that go over the installation and setup of NSX-T, but I wanted to write my own as it will better help me retain the information that I am learning.  Additionally, others may have a different setup than I have and/or may have deployed the solution differently than the way I intend to, which is by following the published documentation.  I'd like to take this time to first shout out some of my colleagues, William Lam, Keith Lee, Cormac Hogan, and Sam McGeown, as their blogs are what inspired me to deploy the solution for myself and document the process.

This post will serve as the main page where I’ll post the hyperlinks to each post in the series.  I’ll be deploying a virtual router/firewall, 3x ESXi VMs, and a witness appliance so that I can configure a virtual 2-node VSAN compute cluster.  I’ll be managing the environment via a vCenter Server Appliance or VCSA, and a Windows Server 2019 Core OS Domain Controller or DC.  I won’t cover the installation and configuration of the DC as it’s out of scope for this series, nor will I go over the deployment of the VCSA or VSAN configuration as this can be done by following the documentation.  And, since this is just a small nested lab, the remaining host that isn’t a part of the VSAN cluster will serve as a single-node Management cluster host where the DC, VCSA, and NSX-T Appliances will reside.

I will cover the router setup, ESXi VM configuration, and NSX-T deployment.  For my setup, I am going to leverage a Sophos XG firewall Home Edition since I’ve always had an interest in learning more about these firewalls, but also because I typically see pfSense being used for virtual routers and I wanted to try something different.  If you are using this as a guide for your own deployment, feel free to use your router/firewall of choice as there are plenty out there like FreeSCO, Quagga, or VyOS, just to name a few.  So, with that said, I hope you all enjoy the content in this series!

NSX-T Home Lab Series


Create an ESXi 6.7 VM Template

Disclaimer:  The following is not supported by VMware.

Nested virtualization is nothing new, and many of us use it for test or demonstration purposes since nested environments can quickly be stood up or torn down.  William Lam has an ESXi VM which can be downloaded from here, but I wanted to go ahead and create my own for use within my nested lab environments.

In this post, I am going to show you the steps I ran through to create an ESXi 6.7 VM that I can convert to a template for later use.  Props to William for his excellent content on nested virtualization, which I’ve used a ton and will be leveraging here as well.  So without further ado, let’s get to it!

For my ESXi VM, I will be configuring the following:

  • CPU: 2 (Expose hardware assisted virtualization to the guest OS – checked on)
  • RAM: 8GB
  • Disk0: 16GB (bound to the default SCSI controller; thin provisioned)
  • New virtual NVME Controller
  • Disk1: 10GB (for VSAN cache tier bound to NVME Controller; thin provisioned)
  • Disk2: 100GB (for VSAN capacity tier bound to NVME Controller; thin provisioned)
  • 2x Network Adapters (VMXNET3)
  • Some advanced configuration settings

Build the VM with the settings above.
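If you prefer scripting the VM shell over clicking through the wizard, a rough PowerCLI sketch like the one below should get you most of the way there (all names are placeholders, and the NVMe controller plus the two VSAN disks still need to be added afterwards, either in the UI or with additional cmdlets):

# 2 vCPU, 8GB RAM, 16GB thin boot disk, and two NICs on the nested trunk portgroup
$vm = New-VM -Name "ESXi67-Template" -VMHost (Get-VMHost -Name "esxi01.lab.local") `
      -NumCpu 2 -MemoryGB 8 -DiskGB 16 -DiskStorageFormat Thin `
      -NetworkName "VDS1-254-NESTED","VDS1-254-NESTED" -GuestId "vmkernel65Guest"
# Expose hardware-assisted virtualization to the guest OS so the nested ESXi can run its own VMs
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.NestedHVEnabled = $true
$vm.ExtensionData.ReconfigVM($spec)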

Be sure you connect the ESXi installation media and power on the VM to begin the installation.

Once the VM powers back on, log in and enable SSH so that we can run some additional commands to update the OS and prepare it for cloning use.
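SSH can be enabled from the DCUI or the Host Client, but if you'd rather do it with PowerCLI by connecting straight to the new host, something like this should also work (the address and credentials are placeholders):

Connect-VIServer -Server "<host_ip>" -User "root" -Password "<root_password>"
# Start the SSH service (TSM-SSH) on the host
Get-VMHost | Get-VMHostService | Where-Object { $_.Key -eq "TSM-SSH" } | Start-VMHostService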

(Optional) To update ESXi to the latest version, connect to the host via SSH and run the following:

Note: At the time of this writing, the latest version is Build 11675023, as per the profile used below; be sure to change the profile name if a newer one is available.

esxcli network firewall ruleset set -e true -r httpClient
esxcli software profile update -p ESXi-6.7.0-20190104001-standard \
-d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
esxcli network firewall ruleset set -e false -r httpClient

(Optional) To update the latest version of the ESXi Host client, run the following:

esxcli software vib install -v "http://download3.vmware.com/software/vmw-tools/esxui/esxui-signed-latest.vib"

To prepare the VM for cloning use, run the following:

esxcli system settings advanced set -o /Net/FollowHardwareMac -i 1
sed -i 's#/system/uuid.*##' /etc/vmware/esx.conf
/sbin/auto-backup.sh

At this point, you can shutdown the VM and convert it to a template for cloning use.

After cloning a VM, if you plan on joining it to a vCenter Server, you will need to run the following on each cloned instance via SSH to resignature the local VMFS datastore (otherwise every clone comes up with the same VMFS UUID).

esxcli storage vmfs snapshot resignature -l datastore1

Well, that about does it!  Hope you all enjoyed this post!

-virtualex-


Achievement Unlocked!…vExpert 2018!

13/03/2018

I am extremely humbled to have been awarded the VMware vExpert 2018 award for the 3rd consecutive time and am honored to be amongst some of the smartest and most passionate individuals in the vCommunity!  I'm vExpert #1301 as per my public directory page.

As I mentioned in last year's post, it takes effort, passion, and dedication to elevate your personal skill set and knowledge base, and I am thankful to have had a great experience working with VMware's product line.  I look forward to keeping this going for years to come!  Congratulations to all my fellow vExperts, past and present!!

And special thanks go to VMware, Corey Romero, and the entire VMTN Community for all their efforts in making the vExpert program such a success!


VMware Software Manager: The Good…The Bad…The Alternative!

In this post, I am going to discuss a little, “not-so-well-known” utility, called VMware Software Manager.  This little “beast” was first released as v1.0 back on 2015-03-12, and its most current release, v1.5, came out on 2016-08-25.  So as you can see, it’s been quite a while since this tool has seen a new update release.  The problem now is that this utility seems to have been forgotten and/or neglected by VMware, but I will get into more of that a little later.  Let’s start off with the positive stuff.

The Good:

The VMware Software Manager gave valid VMware account holders the ability to download various software, such as ESXi or vCenter Server, quickly and easily.  The installation was a breeze, the interface was clean, and downloading software was effortless.  It basically just worked flawlessly!  New software was readily available for download shortly after it was announced/released, since the utility would read configuration files to see what software was available from the VMware repository and then provide that software for download.

So, having said that, what could possibly be wrong with this thing?  Let’s continue…

The Bad:

My main gripe with this utility was that support was only community-based, so if you had issues, you could forget about raising an official SR with support.  You had to rely patiently on the VMTN community and hope that users were knowledgeable enough and willing to help out.  Not saying anything bad about the community, though; it's a great forum full of some really smart people, and it provides a wealth of information.

As I mentioned earlier, when this thing worked…it just plain WORKED!  Then, vSphere 6.0U3A was released with some bad/corrupt or missing files, and upon launching the utility, it would simply hang at login or error out due to the missing files not being available! (Shhhh!..someone never updated the configuration files… ¯\_(ツ)_/¯  )  There is a thread over on the VMTN community that I and others have contributed to, and another describing this issue and a workaround to ultimately get you logged in again.  The workaround involved looking at log files to find which missing file was causing the error and then editing the configuration file to remove the missing entries, and it was a major pain!

Lastly, it’s become extremely out of date and it seems as if VMware has completed forgotten about and/or decided to neglect and give up on it as there has not been any new software available for download in at least 6 months.  It’s also possible that the original responsible for this thing disbanded or moved onto other roles…who really knows?  Luckily, a friend and fellow community/vExpert member has provided a solution!

The Alternative:

Fellow vExpert Edward Haletky, aka Texiwill, has created a Linux-based port of the utility, titled "VSM", which he updates almost daily to add new software and improve the appliance.  It runs on an RHEL-type distro such as CentOS, and you can hit the ground running with it in about 30 minutes or less.  I have been fortunate enough to serve as a beta tester for him since he released v0.95, shortly after the initial public launch.  At the time of this writing, the most recent release was v3.7.7, which has since been updated to the newest v4.5.3.  Screenshots may reflect previous versions.

Well…How do you get it?

In this post, I am going to cover how to install this bad-boy on a CentOS 7 minimal installation on ESXi using NFS, and on VMware Workstation leveraging the “Shared Folders” feature using VMHGFS.  Another vExpert, Michael White, has a similar post on setting up this appliance and using SMB/CIFS for storing the downloaded software.  Let’s get to it, shall we!

Prerequisites:

  • CentOS 7 x64 (Minimal) w/ open-vm-tools installed
  • a non-root user account
  • NFS storage or Local Storage (if running this on VMware Workstation/Fusion)

I will assume that you already have a basic installation of CentOS 7 running, or know how to set up a minimal installation, so that process is out of scope for this post.  

If you plan on running this on VMware Workstation/Fusion, run the following command to install the prerequisites needed for installing VMware Tools to get the vmhgfs driver for Shared Folders support.

sudo yum install -y perl gcc binutils make fuse kernel-headers kernel-devel net-tools policycoreutils-python

Otherwise, if you plan on using NFS to store the software, run the following command to install the NFS utilities.

sudo yum install -y nfs-utils

VMware Tools Installation:

Note: Enables vmhgfs driver support for Shared Folders on VMware Workstation or VMware Fusion

I will be installing the latest version of VMware Tools, version 10.2.0.  This can be obtained here.  Extract the .zip and attach the linux.iso to your CentOS VM in Workstation or Fusion.  Once connected, do the following.

Make a directory to mount the cdrom to.

sudo mkdir /mnt/cdrom

Mount the cdrom to this new directory.

sudo mount /dev/cdrom /mnt/cdrom

With the cdrom now mounted, let's take a look at what's on the .iso:

sudo ls /mnt/cdrom

We can see the compressed file that we will need to extract

Extract/Uncompress this file by running the following.  This is going to extract the file to my current working directory, $HOME

sudo tar zxpf /mnt/cdrom/VMwareTools-10.2.0-7253323.tar.gz

When that completes, we can see what was extracted by running

ls

At this point, we are done with the .iso and can unmount it by running.

sudo umount /dev/cdrom

Now, let’s navigate to the extracted folder and see what we’ve got.

cd vmware-tools-distrib/
ls

The Perl script is what we’re looking for here to install VMware Tools.  This will uninstall “open-vm-tools” if that is already installed on this machine.  Let’s go ahead and run it. 

sudo ./vmware-install.pl

Proceed to answer all the questions asked and select all of the defaults until the installation completes.

(Optional:) If you’d prefer to bypass the questions and force-install with all defaults run the following instead

sudo ./vmware-install.pl --force-install --default

This completes the VMware Tools installation, and the vmhgfs driver is now installed.  On to the good stuff!

VSM Installation:

Texiwill has provided an installation script on his GitHub page which will take care of the installation and its prerequisites.  This is the preferred installation method, so I'd advise that you download the script and run it.  You'll first have to make the script executable by running

chmod +x install.sh

For those who prefer the manual approach, you’ll first need to install “wget“, then run the following commands to install the utility.

Note: please change timezone location to your respective location

sudo yum install -y wget
mkdir aac-base
cd aac-base
wget -O aac-base.install https://raw.githubusercontent.com/Texiwill/aac-lib/master/base/aac-base.install
chmod +x aac-base.install
./aac-base.install -u America/New_York
sudo ./aac-base.install -i vsm America/New_York

Congrats!  You’ve successfully installed the utility.  Now to configure it for use.

By default, the VSM script will save the config file (.vsmrc) to "$HOME".  It will also save the index.html and credstore files in "/tmp/vsm".  I highly recommend you create an alternate directory to store these files in if you are using a system that has multiple users, or change it to save them in the user's $HOME directory.  Again, this is optional and can be skipped, but if you'd like to do so, run the following

mkdir /tmp/my_vsm

VSM Initial Configuration:

The last step before you can use the VSM script utility is to configure it.  To list all available parameters, execute the script with the help parameter.

vsm.sh -h

Based on the help info, the required parameters to set are:

  • -v | --vsmdir VSMDirectory (this will be the "my_vsm" directory we created earlier)
  • --repo repopath (this will be the path to the directory we'll create to save our downloads)

When first executing the script utility, you will be required to enter valid MyVMware credentials which will be saved to the configuration file.

Let's go ahead and create our repo directories first.  Feel free to use my examples or anything else you'd like.  We will also use the same steps mentioned above to take ownership of these directories and then mount our physical storage to the newly created mount point directory.

Note: Be sure to enable “Shared Folders” on Workstation or Fusion and select the shared folder to mount it to the VM.  Commands may be different from screenshots as I’ve updated some commands.

For VMHGFS:

sudo mkdir -pvm 755 /mnt/vmhgfs/depot/content
sudo chown -R <non-root_username>.<non-root_username> /mnt/vmhgfs

For NFS:

sudo mkdir -pvm 755 /mnt/nfs/depot/content
sudo chown -R <non-root_username>.<non-root_username> /mnt/nfs

With our directories ready, let’s go ahead and mount our storage to them!

For VMHGFS:

Note: I have mounted the following shared folder to the VM to use for this command.

sudo mount -t fuse.vmhgfs-fuse .host:/content /mnt/vmhgfs/depot/content -o allow_other

For NFS:

sudo mount -t nfs <IP_address>:/<volume#>/<path_to_share> /mnt/nfs/depot/content

For NFS 4.1 (optional):

sudo mount -t nfs4 -o vers=4,minorversion=1 <IP_address>:/<volume#>/<path_to_share> /mnt/nfs/depot/content

Now, with our storage mounted we can run the VSM script and configure for use.

Note: You will be prompted to enter your VMware credentials unless you supply the “-u” and “-p” parameters

For VMHGFS:

vsm.sh -y -v /tmp/my_vsm --repo /mnt/vmhgfs/depot/content --save

For NFS:

vsm.sh -y -v /tmp/my_vsm --repo /mnt/nfs/depot/content --save

That about does it!  The appliance is ready to use and you can navigate through the menus to find your desired software. 

Extras:

To make your mount points persistent through reboots, edit the “fstab” file with your editor of choice.  I prefer to use vi or vim, but many may choose to use nano instead.

sudo vi /etc/fstab

Then, add the following line and save the file.

For VMHGFS:

.host:/content /mnt/vmhgfs/depot/content    fuse.vmhgfs-fuse        allow_other     0 0

For NFS:

<IP_address>:/<volume#>/<path_to_share> /mnt/nfs/depot/content    nfs      soft,bg,rsize=8192,wsize=8192       0 0

For NFS 4.1 (optional):

<IP_address>:/<volume#>/<path_to_share> /mnt/nfs/depot/content    nfs      nfsvers=4.1,soft,bg,rsize=8192,wsize=8192       0 0

Updates are released quite frequently.  To update VSM to the latest version, you can run the following commands manually or add them to a shell script within “/etc/cron.daily” which will run around 3 AM.

Note: please change timezone location to your respective location

cd /home/<user_name>/aac-base; ./aac-base.install -u; ./aac-base.install -i vsm America/New_York

Additionally, if you’d like to keep your favorite download repository up-to-date, edit crontab

Note:  This requires that you first “mark” a repository by using that menu option

crontab -e

Then add the following line to it and save.  This will run daily at 6 AM.

0 6 * * * /usr/local/bin/vsm.sh -y -mr -c --favorite

My Preference:

I tend to use the following command when running VSM to get missing suites and packages from MyVMware.  If you do not use the "-m" parameter, it will only pull the same software available in the original VMware Software Manager tool from VMware.  The "-mr" parameter resets the "MyVMware" software info and implies "-m", which is why it is also used in the crontab line above.  The "-c" parameter tells VSM to generate sha256sum checks against each downloaded file.

vsm.sh -y -mr -c

Note: Depending on what you downloaded, there may be certain .txt files that will fail the checksum.  This is expected and can be safely ignored.

Well, I hope you’ve found this post useful and I thank you for reading!  Special thanks again to Texiwill for making this awesome utility, as well as Mike White for posting his similar article using SMB/CIFS.

-virtualex-


Updates:

  • 03.23.18 – Updated to reflect changes in manual install commands for VSM v4.0.2, and cron.daily entry since cron runs as root so no need to use sudo
  • 04.12.18 – Added TZ (Time Zone) setting to manual install commands and modified /etc/fstab command for NFS mounts
  • 04.15.18 – Added “-c” parameter reference
  • 01.17.19 – Added some command syntaxes for NFS 4.1

Install PowerShell and VMware PowerCLI on Ubuntu

04/03/2018

Just a few days ago, PowerShell Core v6.0 was released for Windows, Linux, and macOS systems.  Alongside this release came the release of VMware PowerCLI 10.0.0.78953 which is VMware’s own “PowerShell-like” utility.  

In my previous posts (here and here), I covered how to install those on to a macOS 10.13.x “High Sierra” system and a CentOS 7 system.  In this post, I am going to show how to install both on to an Ubuntu 17.10 system as this is another common distro which I also use in my environments.  Let’s get to it!

Note: If you’re interested in installing this on other Linux distros, please consult the following link.

There are a couple of prerequisites before PowerShell can be installed on Ubuntu: you need to install "curl" and then add the PowerShell Core repository (recommended) to your system.

To install curl, enter the following.

sudo apt-get install -y curl

To import the Microsoft repository signing key, run the following command.  Enter your password if prompted.

curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

To register the repository, enter the following command.  Again, enter your password if prompted.

curl https://packages.microsoft.com/config/ubuntu/17.04/prod.list | sudo tee /etc/apt/sources.list.d/microsoft.list

Great!  With the repository registered, refresh the package lists by running "sudo apt-get update", and then it's time to install PowerShell Core 6.0.1.  Run the following command to do so and enter your password when prompted.

sudo apt-get install -y powershell

Awesome!  Now, to launch a PowerShell session in Ubuntu, enter the following.

pwsh

Within a PowerShell session, you can check the version of PowerShell by running the following.

$PSVersionTable.PSVersion

As new versions of PowerShell are released, simply update by running the following command.

sudo apt-get upgrade -y powershell

While leveraging the PowerShell Core repository is the recommended installation method, there are alternate methods as well.  For more information on that along with uninstallation commands, please see the following link.

Congratulations!  You’ve successfully installed PowerShell Core 6.0.1 onto Ubuntu!  Next comes the fun stuff for us VMware enthusiasts, installing VMware PowerCLI from the “PSGallery”.  Let’s continue!

Since VMware PowerCLI has moved from being its own native installer to the PSGallery, the PSGallery needs to be "Trusted" before anything from it can be installed.  To trust the PSGallery, enter the following command in the PowerShell session.

Note: This is optional and if it is skipped, you will be prompted to trust the gallery when entering the PowerCLI module install command

Set-PSRepository -Name "PSGallery" -InstallationPolicy "Trusted"

Next, run the following command to install the VMware.PowerCLI module.  This will find and install the latest version of the module available in the PSGallery

Find-Module "VMware.PowerCLI" | Install-Module -Scope "CurrentUser" -AllowClobber

Note: Alternatively, you could set the “-Scope” parameter to “AllUsers” and if you wanted to install a different version you could use the “-RequiredVersion” parameter and specify the version number.

Once this finishes, we can check to make sure the module is installed by running the following command.

Get-Module "VMware.PowerCLI" -ListAvailable | FT -Autosize

And if you’d like to see all of the VMware installed modules, run the following.

Get-Module "VMware.*" -ListAvailable | FT -Autosize

As new versions of VMware.PowerCLI are released, you can run the following command to update it.

Update-Module "VMware.PowerCLI"

With VMware.PowerCLI now installed, you can connect to your vCenter Server or ESXi host and begin using its cmdlets to obtain information or automate tasks!

I went ahead and ran the following to ensure the module was imported.  

Import-Module "VMware.PowerCLI"

I noticed one caveat, the SRM module does not seem to be supported in PowerShell Core, so I hope that gets resolved soon.

Let’s test connecting to vCenter server…

Connect-VIServer -Server "<Server_Name>"

I also noticed an error when running the above command stating that the “InvalidCertificateAction” setting was “Unset” and not supported.

To bypass this, enter the following command and then enter “Y” when prompted.  This will set the parameter for the current user.

Set-PowerCLIConfiguration -InvalidCertificateAction "Ignore"

Note: Alternatively, you can also use the “-Scope” parameter and enter “Session”, “User”, or “AllUsers” to apply the setting to those options respectively.

Now, if we try to connect to vCenter again, we should be successful.
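Once connected, a quick cmdlet or two makes for an easy sanity check, for example:

# List the connected hosts with their state and version
Get-VMHost | Select-Object Name, ConnectionState, Version
# List the VMs and their power state
Get-VM | Select-Object Name, PowerState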

Well, that about does it!  I hope that you have found this post useful and I thank you for stopping by and reading my content.  Until next time!

-virtualex-


Install PowerShell and VMware PowerCLI on CentOS

Just a few days ago, PowerShell Core v6.0 was released for Windows, Linux, and macOS systems.  Alongside this release came the release of VMware PowerCLI 10.0.0.78953 which is VMware’s own “PowerShell-like” utility.  

In my previous post, I covered how to install those on to a macOS 10.13.x “High Sierra” system.  In this post, I am going to show how to install both on to a CentOS 7 system as this is the distro I mostly use in my environments.  I may follow this up with an Ubuntu install version.  Anyway, let’s get to it!

Note: If you’re interested in installing this on other Linux distros, please consult the following link.

There is a prerequisite needed before PowerShell can be installed on CentOS and that is to add the PowerShell Core repository (recommended) to your CentOS system.

sudo curl https://packages.microsoft.com/config/rhel/7/prod.repo | sudo tee /etc/yum.repos.d/microsoft.repo

This will prompt you to enter your password since "sudo" is being used.  Great!  With the prerequisite complete, it's time to install PowerShell Core 6.0.1.  Run the following command to do so and enter your password when prompted.

sudo yum install -y powershell

Awesome!  Now, to launch a PowerShell session in CentOS, enter the following.

pwsh

Within a PowerShell session, you can check the version of PowerShell by running the following.

$PSVersionTable.PSVersion

As new versions of PowerShell are released, simply update PowerShell by running the following command.

sudo yum update -y powershell

While leveraging the PowerShell Core repository is the recommended installation method, there are alternate methods as well.  For more information on that along with uninstallation commands, please see the following link.

Congratulations!  You’ve successfully installed PowerShell Core 6.0.1 onto CentOS!  Next comes the fun stuff for us VMware enthusiasts, installing VMware PowerCLI from the “PSGallery”.  Let’s continue!

Since VMware PowerCLI has moved from being its own native installer to the PSGallery, the PSGallery needs to be "Trusted" before anything from it can be installed.  To trust the PSGallery, enter the following command in the PowerShell session.

Note: This is optional and if it is skipped, you will be prompted to trust the gallery when entering the PowerCLI module install command

Set-PSRepository -Name "PSGallery" -InstallationPolicy "Trusted"

Next, run the following command to install the VMware.PowerCLI module.  This will find and install the latest version of the module available in the PSGallery

Find-Module "VMware.PowerCLI" | Install-Module -Scope "CurrentUser" -AllowClobber

Note: Alternatively, you could set the “-Scope” parameter to “AllUsers” and if you wanted to install a different version you could use the “-RequiredVersion” parameter and specify the version number.

Once this finishes, we can check to make sure the module is installed by running the following command.

Get-Module "VMware.PowerCLI" -ListAvailable | FT -Autosize

And if you’d like to see all of the VMware installed modules, run the following.

Get-Module "VMware.*" -ListAvailable | FT -Autosize

As new versions of VMware.PowerCLI are released, you can run the following command to update it.

Update-Module "VMware.PowerCLI"

With VMware.PowerCLI now installed, you can connect to your vCenter Server or ESXi host and begin using its cmdlets to obtain information or automate tasks!

I went ahead and ran the following to ensure the module was imported.  

Import-Module "VMware.PowerCLI"

I noticed one caveat, the SRM module does not seem to be supported in PowerShell Core, so I hope that gets resolved soon.

Let’s test connecting to vCenter server…

Connect-VIServer -Server "<Server_Name>"

I also noticed an error when running the above command stating that the “InvalidCertificateAction” setting was “Unset” and not supported.

To bypass this, enter the following command and then enter “Y” when prompted.  This will set the parameter for the current user.

Set-PowerCLIConfiguration -InvalidCertificateAction "Ignore"

Note: Alternatively, you can also use the “-Scope” parameter and enter “Session”, “User”, or “AllUsers” to apply the setting to those options respectively.

Now, if we try to connect to vCenter again, we should be successful.

Well, that about does it!  I hope that you have found this post useful and I thank you for stopping by and reading my content.  I’d like to give a shoutout to Jim Jones for his post on the same topic.  Until next time!

-virtualex-

Pingbacks: From Zero to PowerCLI: CentOS Edition

Install PowerShell and VMware PowerCLI on macOS

Just a few days ago, PowerShell Core v6.0 was released for Windows, Linux, and macOS systems.  Alongside this release came the release of VMware PowerCLI 10.0.0.78953 which is VMware’s own “PowerShell-like” utility.  In this post, I am going to show how to install both on to a macOS system.  Let’s get to it!

There are a few prerequisites needed before PowerShell can be installed on macOS which I will cover, and they are as follows:

  • Homebrew – Homebrew installs the stuff you need that Apple didn’t.
  • Homebrew-Cask – extends Homebrew and brings its elegance, simplicity, and speed to macOS applications and large binaries alike.
  • Xcode Command Line Tools

Per the instructions on the Homebrew site, copy and paste the following command into a terminal window to install Homebrew.

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

This will prompt you to press Enter to continue and then prompt you to enter your password.  It will also check if the Xcode command line tools are installed, and if not, it will download and install them for you before completing the installation of Homebrew.  Nice!

Now, the next step is to install Homebrew-Cask and, per its site's installation notes, copy and paste the following command into a terminal window.

brew tap caskroom/cask

Great!  With the prerequisites complete, it’s time to install PowerShell Core 6.0.1.  Run the following command to do so and enter your password when prompted.

brew cask install powershell

Awesome!  Now, to launch a PowerShell session in macOS, enter the following in terminal.

pwsh

Within a PowerShell session, you can check the version of PowerShell by running the following.

$PSVersionTable.PSVersion

As new versions of PowerShell are released, simply update the Homebrew formulae and update PowerShell by running the following commands in terminal.

brew update
brew cask upgrade powershell

While leveraging Homebrew is the recommended installation method, there are alternate methods as well.  For more information on that along with uninstallation commands, please see the following link.

Congratulations!  You’ve successfully installed PowerShell Core 6.0.1 onto macOS!  Next comes the fun stuff for us VMware enthusiasts, installing VMware PowerCLI from the “PSGallery”.  Let’s continue!

Since VMware PowerCLI has moved from being its own native installer to the PSGallery, the PSGallery needs to be "Trusted" before anything from it can be installed.  To trust the PSGallery, enter the following command in the PowerShell session.

Note: This is optional and if it is skipped, you will be prompted to trust the gallery when entering the PowerCLI module install command

Set-PSRepository -Name "PSGallery" -InstallationPolicy "Trusted"

Next, run the following command to install the VMware.PowerCLI module.  This will find and install the latest version of the module available in the PSGallery

Find-Module "VMware.PowerCLI" | Install-Module -Scope "CurrentUser" -AllowClobber

Note: Alternatively, you could set the “-Scope” parameter to “AllUsers” and if you wanted to install a different version you could use the “-RequiredVersion” parameter and specify the version number.

Once this finishes, we can check to make sure the module is installed by running the following command.

Get-Module "VMware.PowerCLI" -ListAvailable | FT -Autosize

And if you’d like to see all of the VMware installed modules, run the following.

Get-Module "VMware.*" -ListAvailable | FT -Autosize

As new versions of VMware.PowerCLI are released, you can run the following command to update it.

Update-Module "VMware.PowerCLI"

With VMware.PowerCLI now installed, you can connect to your vCenter Server or ESXi host and begin using its cmdlets to obtain information or automate tasks!

I went ahead and ran the following to ensure the module was imported.  

Import-Module "VMware.PowerCLI"

I noticed one caveat, the SRM module does not seem to be supported in PowerShell Core, so I hope that gets resolved soon.

Let’s test connecting to vCenter server…

Connect-VIServer -Server "<Server_Name>"

I also noticed an error when running the above command stating that the “InvalidCertificateAction” setting was “Unset” and not supported.

To bypass this, enter the following command and then enter “Y” when prompted.  This will set the parameter for the current user.

Set-PowerCLIConfiguration -InvalidCertificateAction "Ignore"

Note: Alternatively, you can also use the “-Scope” parameter and enter “Session”, “User”, or “AllUsers” to apply the setting to those options respectively.

Now, if we try to connect to vCenter again, we should be successful.

Well, that about does it!  I hope that you have found this post useful and I thank you for stopping by and reading my content.  I’d like to give a shoutout to Mike White for his post on the same topic.  Until next time!

-virtualex-

Pingbacks: Installing PowerShell/PowerCLI on a Mac

Deploy A Virtual Appliance Using PowerCLI

04/02/2018

Hello all and thank you for visiting my blog!  In today's post, I am going to cover how to deploy a VMware virtual appliance (.ova) using PowerCLI.  "Why?" you ask?  Well, because scripting and automation via PowerCLI is fun and awesome!  Sure, it's simple enough to deploy an appliance natively within the vSphere Web Client by selecting the .ova that you'd like to import, making a few mouse clicks, entering some info, and off you go!  But who wants to do stuff the easy way?  It takes the fun away!

In my opinion, scripting this out is just as easy since you can pre-populate your information into variables, and then run a simple “one-liner” command to kick off the deployment.  Pretty neat right? 

Now, I do understand that initially doing a deployment this way is a bit time-consuming.  But once you have the method and process down, you can create a simple PowerShell script with all of your information embedded, then simply tweak/adjust it as needed per appliance.  The only time-consuming part is identifying the proper variables for each appliance.  Please keep in mind that while most appliances have the same initial setup variables, some may have more and some may have less so it is always best to follow the initial steps I’ll cover below for each appliance to ensure you have all the correct information for your deployment and/or script.

Well, let’s get to it, shall we!

In this example, I will be using the latest version of PowerCLI which, at the time of this writing, is 6.5.4.7155375, to deploy the VMware Support Assistant appliance for vSphere 6.5.

To kick things off, we will be using two important variables, $ovfPath, and $ovfConfig.  The latter will use the $ovfPath variable to discover the correct variable properties to build out our $ovfConfig.  I’ll assume that you’re already connected to your vCenter server in the PowerCLI session but, if not, please go ahead and do so using the “Connect-VIServer” cmdlet.

Let’s define our $ovfPath first:

$ovfPath = "<path_to_ova_file>"

Next, let’s define $ovfConfig

$ovfConfig = Get-OvfConfiguration -Ovf $ovfPath

Great!  Now if we simply type $ovfConfig, it will check our $ovfPath for the file and list the setup properties.

We can see that it has identified “Common“, “IpAssignment“, “NetworkMapping“, and “vami” as the starting base properties.  Next, we will have to drill down into each of these properties to determine the full property “Value” to define our variable with.

So starting with “Common“, let’s drill down more by typing:

$ovfConfig.Common

This now identifies the next “Common” property which is “varoot_password“.  Let’s drill into that to see what it finds.

$ovfConfig.Common.varoot_password

This gives us more information about the property and the key one here is “Value“. 

This means that we have reached the last property entry needed to define our first config variable with.  Great! With this information, our first defined configuration variable will look like this:

$ovfConfig.Common.varoot_password.Value = "<some_password>"

Let’s now move to the next property, “IpAssignment“.  Following the same logic as before and drilling into this property identifies “IpProtocol” which requires a “string” value of “IPv4 or IPv6“. 

This means our next defined variable will look like this.

$ovfConfig.IpAssignment.IpProtocol.Value = "IPv4"

Now for the next property, “NetworkMapping“.  Drilling down into this property identifies “Network_1” which again will be a “string” value for our variable.  This string is the VM PortGroup that you want to attach to the appliance, whether it be on a virtual Standard Switch (vSS) or Distributed Switch (vDS).

This defined variable will look like this.

$ovfConfig.NetworkMapping.Network_1.Value = "<some_vm_portgroup>"

Getting the gist of this yet?  Let’s move onto the final property, “vami“.  Again, following the same logic we’ve been doing and drilling into the “vami” property, we can see that it has identified the “appliance” as “VMware_vCenter_Support_Assistant_Appliance“.  Drilling down further, there are multiple properties discovered in this one and they are “gateway“, “domain“, “searchpath“, “DNS“, “ip0“, and “netmask0“.

Whoa! Quite a few huh?  Yet again, drilling into each of these lets us know that “string” values are required for each property.

These config variables will be defined like this.

$ovfConfig.vami.VMware_vCenter_Support_Assistant_Appliance.gateway.Value = "<gateway_ip>"
$ovfConfig.vami.VMware_vCenter_Support_Assistant_Appliance.domain.Value = "<domain_name>"
$ovfConfig.vami.VMware_vCenter_Support_Assistant_Appliance.searchpath.Value = "<dns_searchpath>"
$ovfConfig.vami.VMware_vCenter_Support_Assistant_Appliance.DNS.Value = "<dns_ip(s)>"
$ovfConfig.vami.VMware_vCenter_Support_Assistant_Appliance.ip0.Value = "<appliance_ip>"
$ovfConfig.vami.VMware_vCenter_Support_Assistant_Appliance.netmask0.Value = "<netmask_ip>"

From what we’ve obtained so far, all of our defined variables look like this.
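Pulled together in one place (with placeholder values), that works out to the following:

$ovfPath = "<path_to_ova_file>"
$ovfConfig = Get-OvfConfiguration -Ovf $ovfPath
# Appliance setup properties discovered above
$ovfConfig.Common.varoot_password.Value = "<some_password>"
$ovfConfig.IpAssignment.IpProtocol.Value = "IPv4"
$ovfConfig.NetworkMapping.Network_1.Value = "<some_vm_portgroup>"
$ovfConfig.vami.VMware_vCenter_Support_Assistant_Appliance.gateway.Value = "<gateway_ip>"
$ovfConfig.vami.VMware_vCenter_Support_Assistant_Appliance.domain.Value = "<domain_name>"
$ovfConfig.vami.VMware_vCenter_Support_Assistant_Appliance.searchpath.Value = "<dns_searchpath>"
$ovfConfig.vami.VMware_vCenter_Support_Assistant_Appliance.DNS.Value = "<dns_ip(s)>"
$ovfConfig.vami.VMware_vCenter_Support_Assistant_Appliance.ip0.Value = "<appliance_ip>"
$ovfConfig.vami.VMware_vCenter_Support_Assistant_Appliance.netmask0.Value = "<netmask_ip>"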

Ready to deploy the appliance?  Not quite yet!  There are some additional variables we can create/define to make the syntax of our command shorter, neater, and nicer.  First, let's see what the cmdlet requires.  The cmdlet used to deploy an appliance is "Import-VApp".  Clicking that link and reviewing the parameter table shows us which parameters are required and which are optional.

From this table, I am going to define some more variables for the following parameters.

  • Source
  • Name
  • VMHost
  • Datastore
  • DiskStorageFormat
  • Location
  • OvfConfiguration
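As a sketch, those could be defined along these lines (all names are placeholders for your own environment; Source and OvfConfiguration are covered by the $ovfPath and $ovfConfig variables we already have):

$VMName     = "<appliance_vm_name>"                    # display name for the new appliance VM
$VMHost     = Get-VMHost -Name "<esxi_host>"           # host to deploy to
$Cluster    = Get-Cluster -Name "<cluster_name>"       # used for the -Location parameter
$Datastore  = Get-Datastore -Name "<datastore_name>"
$DiskFormat = "Thin"                                   # or "Thick" / "EagerZeroedThick"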

Now, with all of the variables defined, we are ready to enter the “Import-VApp” command with the required parameters.

Import-VApp -Source $ovfpath -OvfConfiguration $ovfConfig -Name $VMName -VMHost $VMHost -Location $Cluster -Datastore $Datastore -DiskStorageFormat $DiskFormat -Confirm:$false

A progress bar will load in your session showing that the deployment has kicked off, and a short while later it will end, meaning that the appliance has been successfully deployed.

A quick look at the vSphere Client, and we can see that the appliance is indeed there and configured as per the settings we defined earlier in our configuration.

At this point, you can safely power-on the appliance and proceed with normal setup processes.  Also, as I noted earlier, you can take all of these variables and create a PowerShell script to deploy your appliances with and just add/remove/change variables as needed per appliance!  Automation at its finest!

Well, I’d like to thank you for stopping by and supporting my page.  I do hope that you have found this information useful, and hope you’ll return again.  Thanks much and I’ll catch you on the next one!

-virtualex-

vSphere…Synology…NFS v4.1

31/12/2017

Welcome, and thanks for visiting my blog!

In this post, I am going to cover how to enable NFS v4.1 on a Synology device and then mount an NFS v4.1 datastore in VMware vSphere 6.5.  By default, Synology devices support NFS v4 natively, and although they can also support NFS v4.1, it is not enabled.  Well, not to worry, because I am going to show you just how to enable the feature on your device.

NFS v4 and v4.1 have been around for quite a few years, but they have not taken off the way NFS v3 did way back when.  There were some major flaws pointed out with NFSv4, so NFSv4.1 was created to rectify those flaws, and VMware was one of the first major companies to adopt and support the new Network File System.  But unless your storage device supported the newer NFS versions, you would be stuck mounting NFSv3 volumes by default.

In this demo, I will be using my new replacement Synology DiskStation DS415+ and my homelab "datacenter" running the latest version of vSphere 6.5.  So let's jump right in!

Using a terminal application like PuTTY, connect to your Synology device via SSH using an admin user account.  This can be the default "admin" account or any new user account with Administrator privileges.  Once connected, enter the following command to change the directory:

cd /usr/syno/etc/rc.sysv

Once in this directory, run the following command (enter the account password if prompted):

sudo cat /proc/fs/nfsd/versions

This will show us the NFS versions currently enabled and supported by the Synology device.

We can see that all versions prior to 4.1 have a “+” sign next to them and 4.1 has a “-” sign next to it.  Let’s change that!

In order to change this, we will need to edit a shell (S83nfsd.sh) file using “vi”.  Run the following command to open the file with VI Editor:

sudo vi S83nfsd.sh

This will open the shell file and will place the cursor at Line 1, Character 1 as depicted in the following screenshot.  

Navigate down to line 90 using the down arrow and you will see the following line of text.

This is where the magic happens!  To edit the file, press the "i" key to enter Insert mode, move the cursor to the end of that line, and append the following:

-V 4.1

To commit and save this change, first press the Esc key.  Next type the following command and hit “Enter” to write and then quit vi editor.

:wq


Next, we need to restart the NFS service.  To do so enter the following command:

sudo ./S83nfsd.sh restart

If we again run the following command, we will see that there is now a “+” sign next to 4.1.  Hooray!

sudo cat /proc/fs/nfsd/versions

Now that we have enabled NFSv4.1 functionality on your storage device, let’s go ahead and mount an NFS volume to our hosts in vSphere.

I have enabled NFS and NFS v4 support then created the following shares with assigned permissions on my device, and am going to mount the ISOs share first in this example by issuing a command via PowerCLI.  We can also see that I do not have any NFS mounts currently in my environment

I’ve launched PowerCLI and connected to my vCenter Server using the Connect-VIServer cmdlet then issued the following command:

Get-VMHost | New-Datastore -Nfs -FileSystemVersion '4.1' -Name SYN-NFS04-ISOs -Path "/volume3/NFS04-ISOs" -NfsHost DS415 -ReadOnly

*Note:* an important argument here is "-FileSystemVersion".  If I do not specify the version, it will assume version 3.0 by default.

If I go back and look at my datastores via the Web Client, I can see that my new NFS 4.1 datastore has been mounted to each one of my ESXi hosts. Nice!

*Bonus:* If I’d like to easily remove this datastore from all of my hosts, I can issue the following command via PowerCLI.

Get-VMHost | Remove-Datastore -Datastore SYN-NFS04-ISOs -Confirm:$false

Now I can see that the datastore has been removed from all of my hosts successfully!

Well, that about wraps this one up.  I hope that this has been useful and informative for you and I’d like to thank you for reading!  Until next time!

-virtualex-