iThinkVirtual™

vSphere…Synology…NFS v4.1

31/12/2017

Welcome, and thanks for visiting my blog!

In this post, I am going to cover how to enable NFS v4.1 on a Synology device and then mount an NFS v4.1 datastore in VMware vSphere 6.5.  By default, Synology devices support NFS v4 natively, and although they can also support NFS v4.1, it is not enabled.  Well, not to worry, because I am going to show you just how to enable the feature on your device.

NFS v4 and v4.1 have been around for quite a few years, but they have not taken off the way NFS v3 did way back when.  There were some major flaws pointed out in NFS v4, so NFS v4.1 was created to rectify those flaws, and VMware was one of the first major companies to adopt and support the new version of the Network File System.  But unless your storage device supports the newer NFS versions, you are stuck mounting NFS v3 volumes by default.

In this demo, I will be using my new replacement Synology DiskStation DS415+ and my homelab “datacenter” running the latest version of vSphere 6.5.  So let’s jump right in!

Using a terminal application like PuTTY, connect to your Synology device via SSH using an admin user account.  This can be the default “admin” account or any other user account with Administrator privileges.  Once connected, enter the following command to change the directory:

cd /usr/syno/etc/rc.sysv

Once in this directory, run the following command (enter the account password if prompted):

sudo cat /proc/fs/nfsd/versions

This will show us the NFS versions currently enabled and supported by the Synology device.
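On my DS415+, the output looked something like this before the change (the exact list may vary depending on your DSM version and kernel):

+2 +3 +4 -4.1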

We can see that all versions prior to 4.1 have a “+” sign next to them and 4.1 has a “-” sign next to it.  Let’s change that!

In order to change this, we will need to edit a shell script (S83nfsd.sh) using “vi”.  Run the following command to open the file in the vi editor:

sudo vi S83nfsd.sh

This will open the shell script and place the cursor at Line 1, Character 1 as depicted in the following screenshot.

Navigate down to line 90 using the down arrow and you will see the following line of text.

This is where the magic happens!  To edit the file, press the “i” key on your keyboard to enter “Insert” mode, then add the following to the end of the line so that it looks like the following screenshot.

-V 4.1
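To be clear, you are appending “ -V 4.1” to the end of the existing line that starts the nfsd daemon, not adding a new line.  Purely as a hypothetical illustration (the exact contents of line 90 differ between DSM versions, so go by what you see on your own device), the edited line would end up looking something like this:

/usr/sbin/nfsd $N_THREAD -V 4.1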

To commit and save this change, first press the Esc key.  Next, type the following command and hit “Enter” to write the file and quit the vi editor.

:wq


Next, we need to restart the NFS service.  To do so, enter the following command:

sudo ./S83nfsd.sh restart


If we again run the following command, we will see that there is now a “+” sign next to 4.1.  Hooray!

sudo cat /proc/fs/nfsd/versions
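On my device, the output now looks something like this, with all versions including 4.1 enabled:

+2 +3 +4 +4.1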


Now that we have enabled NFS v4.1 functionality on our storage device, let’s go ahead and mount an NFS volume to our hosts in vSphere.

I have enabled NFS and NFS v4 support, then created the following shares with assigned permissions on my device, and I am going to mount the ISOs share first in this example by issuing a command via PowerCLI.  We can also see that I do not have any NFS mounts currently in my environment.

I’ve launched PowerCLI and connected to my vCenter Server using the Connect-VIServer cmdlet, then issued the following command:

Get-VMHost | New-Datastore -Nfs -FileSystemVersion '4.1' -Name SYN-NFS04-ISOs -Path "/volume3/NFS04-ISOs" -NfsHost DS415 -ReadOnly

*Note:* An important argument here is “-FileSystemVersion”.  If I do not specify the version, it will assume version 3.0 by default.
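If you would rather verify the mount from PowerCLI than from the Web Client, something along these lines should do the trick, since the datastore objects returned by Get-Datastore expose a FileSystemVersion property in recent PowerCLI releases:

Get-Datastore -Name SYN-NFS04-ISOs | Select-Object Name, Type, FileSystemVersion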

If I go back and look at my datastores via the Web Client, I can see that my new NFS 4.1 datastore has been mounted to each one of my ESXi hosts. Nice!

*Bonus:* If I’d like to easily remove this datastore from all of my hosts, I can issue the following command via PowerCLI.

Get-VMHost | Remove-Datastore -Datastore SYN-NFS04-ISOs -Confirm:$false

Now I can see that the datastore has been removed from my hosts successfully!

Well, that about wraps this one up.  I hope that this has been useful and informative for you and I’d like to thank you for reading!  Until next time!

-virtualex-

Homelab Makeover 2.0

Hello and, first off, thanks so much for visiting my blog!  If you have followed any part of my “Homelab” series, you will be familiar with the components that make up my home “Datacenter”.  If not, take some time to catch up on those posts!

In this post, I am quickly going to cover my lab makeover as I decided to get some new equipment and redo a bunch of my networking.  So without any further hesitation, let’s get to it!

Beginning with my networking equipment, I wanted to move my Cisco SG300-10 out of my home network enclosure cabinet and into my Navepoint rack enclosure.  But then I realized I would have to replace that switch with another one to feed the rest of my home’s connections.  Currently, I am using Ubiquiti’s UniFi equipment for my home networking, and since I’m already running Ubiquiti gear, I figured I would purchase a few more of their 8-port switches to do the job so that I can manage those devices from a “single pane of glass” via the controller.  So I went ahead and purchased two US-8 switches, of which one will feed the home networking and the other will extend to the lab, primarily serving as a trunk for my VLANs to reach the lab’s Cisco switches.

So now, my UniFi network consists of:

On to the lab network…

The US-8-LAB switch connects to my SG300-10, on which I’ve configured 2 ports as a LAG “Trunk” between the switches for VLAN traffic, 2 ports as another LAG “Trunk” connection to the SG300-52 switch, and the rest as “Access” ports which connect to the IPMI interfaces of my servers.  The IPMI connections were previously on my SG300-52 switch.  On the SG300-52 switch, I have configured all of my ESXi management ports, vMotion ports, iSCSI & NFS ports, VSAN ports, and data ports for my servers, along with a few LAG connections which connect to my storage devices, and a few which connect my UPS and ATS/PDU units.  I also configured an additional LAG “Trunk” which connects to a Netgear ProSafe GS108T that I had lying around.  I’ve dedicated that switch and its ports to my ex-gaming PC turned “DEV” ESXi host.  Eventually, that host will be decommissioned when I add a new host to my rack enclosure.
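For anyone unfamiliar with LAGs on these switches, the following is a rough, purely illustrative sketch of what a 2-port LACP LAG carrying tagged VLANs can look like in the SG300 CLI; the port numbers and VLAN IDs are made up and are not my actual configuration:

interface range gi1-2
 channel-group 1 mode auto
exit
interface Port-Channel1
 switchport mode trunk
 switchport trunk allowed vlan add 10,20,30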

So now, my lab network consists of:

Now for the storage devices.  Previously, I was running my lab VMs on a Synology DS415+ storage unit via NFS mounts.  This was all fine and dandy, except for the fact that it would randomly shut itself down for no apparent reason, leading to eventual corruption of my VMs.  I got tired of spending hours trying to recover my machines and eventually discovered that my device was plagued by the Intel Atom C2000 CPU issue described here.  I then reached out to Synology, and they quickly responded and issued an immediate RMA of the device.  Again, this was fine, but where was I going to move my VMs and data to?  I didn’t have another storage device with an ample amount of free space to accommodate all my data, so I decided to bite the bullet and pick up a brand new Synology RS815+ which I could mount in my rack enclosure.  I also scooped up some 1TB SSDs from their compatibility matrix to populate the drive bays.  The difference here is that with the new RackStation, I opted to configure my LUNs via iSCSI instead of NFS as I had previously done with the DiskStation.  Once it was set up and connected, I vMotioned all of my machines to the new device and disconnected the DS415+ while I waited for the replacement device to arrive.  That replacement unit eventually came, so I swapped my SSDs from the old unit into the new unit and fired it back up.  I will eventually recreate some NFS mounts and reconnect them to the vSphere environment.

Now, my lab storage consists of:

Finally, the cabinet.  I became rather displeased with the amount of space I had in my Navepoint 9U 450mm enclosure.  The case itself was great, but I just needed some more room in the event I needed to un-rack a server or do anything else in there.  Also, I started to do some “forward thinking” about eventual future expansion, and the current 9U enclosure was no longer going to suffice.  I decided to upgrade to a new Navepoint 18U 600mm enclosure, and now I have plenty of room for all of my equipment and future expansion.  After relocating my servers to the new rack enclosure, I now have the following equipment mounted in the rack and still have room for growth.

  • 2 x Cat6 keystone patch panels
  • 2 x Cisco SG300 switches
  • 4 x Supermicro servers
  • 1 x Synology storage unit
  • 1 x UPS
  • 1 x ATS/PDU
  • 1 x CyberPower Surge power strip (in the event I need to plug in some other stuff)

Thanks for stopping by!  Please do leave some comments as feedback is always appreciated!  Until next time!

-virtualex-


macOS 10.13 High Sierra on ESXi 6.5

**NOTE: This is completely for experimental purposes and is unsupported by both Apple and VMware**

Hello all!  This is just a quick follow-up to my previous guide on running macOS 10.12 Sierra on ESXi 6.x, where I have now successfully updated the VM to macOS 10.13 High Sierra.

If you simply try to run the upgrade via a self-made ISO, or via the Mac App Store, the final image will fail to boot.  The reason is that, starting with macOS 10.13, Apple has converted the file system from Hierarchical File System Plus (HFS Plus or HFS+) to the new Apple File System (APFS).  During the upgrade process, HFS+ will be converted to APFS, and the unlocker utility which allows us to even run a macOS VM on ESXi doesn’t support APFS.  In fact, support for ESXi in general is no longer available in the latest Unlocker 2.1.1, so I am still using Unlocker 2.1.0 for ESXi, and Unlocker 2.1.1 for VMware Workstation 14.

For this quick tutorial, I am using the latest VMware ESXi 6.5 Update 1 Build 7388607 and I started by simply cloning my macOS 10.12 VM to a new virtual machine.

Once powered on, go to the Mac App Store and download the macOS High Sierra installer.  When the download is complete, DO NOT run the installer; quit it instead.  You will now have the installer application available in your Applications folder.

Now, open a Terminal session and enter the following command as one line.  Depending on the account you are logged in with, sudo may or may not be needed.

sudo /Applications/Install\ macOS\ High\ Sierra.app/Contents/Resources/startosinstall --converttoapfs NO --agreetolicense --nointeraction

The key argument here is “--converttoapfs NO”, which prevents the OS from converting the drive’s file system format from HFS+ to APFS.  Additionally, the “--nointeraction” argument is optional.

Now sit back, relax, and let the upgrade do its thing.  When the upgrade is complete, the VM should boot up successfully and you will be running macOS High Sierra.
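If you want to confirm that the boot volume is indeed still HFS+ after the upgrade, a quick check from Terminal along these lines should do it (it will report APFS instead if the conversion happened anyway):

diskutil info / | grep "File System Personality"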

-virtualex


Achievement Unlocked! VCP6-DCV!


This is an extremely long overdue post, but I have had such a busy year that I really haven’t been able to find much time to write and publish the material that I have been meaning to get out.  So, as I attempt to get back into the groove, I wanted to publish this quick post about my most recent accomplishment: obtaining the VCP6-DCV certification!

I actually completed and obtained this certification back on August 31st, 2017, just a few hours before heading out to take part in and witness the wedding of my beautiful sister.  Talk about a stressful day, not to mention the stressful weeks, weekends, and nights spent trying to cram in as much studying as possible to ensure I was well prepared for the exam.

I felt very confident in my knowledge and that I would pass the exam on the first attempt, and since I was already a VCP5 holder, I chose to take the delta (2V0-621D) exam, which I completed in about 90 minutes.  I took my time, read every question and diagram thoroughly, and ended up with an impressive 447/500 score!  Not too bad, if I do say so myself.

Anyhow, I have now renewed my VCP5, obtained my VCP6, and get to begin my journey toward VCP6.5 and VCAP6.x!  Can’t wait!!!

I’d also like to take a moment to send out congratulations to anyone who has taken and passed a VMware certification this year and best of luck to anyone planning on taking an exam in 2018!