Configure OpenDNS on Ubiquiti EdgeRouter Lite
I recently picked up a new router/firewall for my home and chose the Ubiquiti EdgeRouter Lite (ERLite-3). This device comes with a lot of bells and whistles, and if you would like more information on it, please see here.
I am a huge fan of speed and security, and for this reason I always configure my home network to use OpenDNS name servers rather than my ISP's (Internet Service Provider's). OpenDNS has some great setup guides available for configuring your devices; you can view them here and choose the solution that best suits your needs. Now, I prefer to configure OpenDNS right on the router so that it applies to any and all devices on my network that use the internet. Unfortunately, I did not find a setup guide for Ubiquiti devices, but since I was already familiar with the process, I tinkered around in EdgeOS a bit and managed to figure it out.
In this post, I am going to cover how to set up your EdgeRouter to use OpenDNS name servers. By default, the router is configured to forward DNS queries to the name server IP addresses obtained from your ISP via DHCP. So let’s go ahead and change that!
First, let’s head over to the following OpenDNS page to test our settings and ensure we are not somehow already configured to use OpenDNS’s name servers (this is just for good measure). On the right-hand side there is a link that reads:
“click here to test your settings”
Go ahead and click it, and it should return the following message:
Next, browse to your EdgeOS Web GUI (by default it is reachable at 192.168.1.1) and log in using the default user credentials (ubnt/ubnt).
In the bottom-left corner, click the System tab to open it up.
In the Name Server area, add the OpenDNS name server IP addresses (208.67.222.222 and 208.67.220.220), then scroll to the bottom and click Save before closing this window:
Great, Step 1 is done! Now on to Step 2…this can be done via the CLI or the Config Tree. (I chose the latter.)
Click on the Config Tree button and navigate to the following from the left-side Configuration pane:
service / dns / forwarding / system
Click the triangle-like icon directly to the left of the word system, then click the “+” icon to the right of it. If done correctly, you should see the following header at the top of the page and two buttons, Discard & Preview, at the bottom of the page.
Next, click the Preview button at the bottom of the page. This will show you the command that would be used in the CLI to apply the setting. The great thing about this is that it also gives you the option to Apply it right from the GUI! I chose to hit Apply.
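For reference, here is what the whole configuration looks like if you do it entirely from the CLI instead. This is just a sketch based on standard EdgeOS configuration-mode syntax (the name-server lines assume OpenDNS's public resolver addresses):

configure
set system name-server 208.67.222.222
set system name-server 208.67.220.220
set service dns forwarding system
commit
save
exit

The “set service dns forwarding system” line is the same command the Preview window displays; it tells the DNS forwarder to use the name servers defined at the system level.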
Once completed, there should be some green text at the bottom reading:
“The configuration has been applied successfully”
Now, the final step is to go back to the OpenDNS page and test the settings again. If everything is applied correctly, you will be presented with the following:
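If you would rather verify from a client machine’s command line, OpenDNS also publishes a diagnostic TXT record you can query (this assumes your client has already picked up the new DNS settings and has the nslookup utility available):

nslookup -type=txt debug.opendns.com

If your queries are flowing through OpenDNS, the TXT answer identifies the OpenDNS server that handled the lookup; if you get a “can’t find” response instead, your traffic is not going through OpenDNS resolvers yet.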
Now your router is fully configured to use OpenDNS!
I hope that you have found this informative, so please be sure to comment below and share this post.
Intel NIC not detected by ESXi
In this post I am going to cover a random issue I encountered after installing ESXi 6.0 Update 2 on one of my new Home Lab 2016 hosts. The actual installation of ESXi was extremely easy and painless (I may cover that in another post). After I had completed the installation, I was attempting to configure my Management network interfaces and suddenly noticed that only 4 network interfaces were being detected!
Just as I was asking myself, “I wonder what is going on here?”, I noticed that I had been getting a message during POST regarding the initialization of the Intel Boot Agent for PXE booting. The message stated:
“…PXE-E05: The LAN adapter’s NVM configuration is corrupted or has not been initialized. The Boot Agent cannot continue.”
Immediately, I began to consult “Mr. Google” to see if there was anything I could find related to this particular problem. After reading a few threads, I found that many users had mentioned and/or suggested that the NIC’s firmware was corrupted and needed to be “re-flashed”. I quickly got to work and researched a bit further to understand the process of flashing the NIC firmware. I downloaded the latest version of Intel’s PREBOOT package (at the time of this writing, version 20.7), which contains the BootUtil utility needed to perform the flash.
Next, I prepared a DOS-bootable USB drive using Rufus. I then extracted the PREBOOT.exe file using 7-Zip and placed the contents on the newly created USB drive. This would allow me to either boot into the USB and access DOS, or boot into the UEFI Built-in Shell on Supermicro motherboards and access the necessary files. Once I had my drive ready, I plugged it into my server and initiated a reboot. During POST, I invoked the boot menu and chose the option to boot into the Built-in Shell.
Once in the shell, I determined that my USB was mounted at fs4:
I navigated through the directories so I could see the contents of each folder, until I found the BootIMG.FLB file, which is the new flash image I wanted to apply. I then navigated to the location of BootUtil. Since I was using the built-in shell, I needed to ensure that I used the x64 EFI version of BootUtil, so I navigated to the following location:
Running the BOOTUTIL64E.EFI file with no parameters simply lists your network interfaces, and I could then see the current firmware version for all of my interfaces, although for some reason the ones in question displayed “Not Present”. Adding the “-?” parameter brings up the help and lists all the parameters needed to execute the commands properly. I found a great reference article here which made it easier for me to see what parameters I needed in my command.
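Concretely, the two invocations described above look like this (run from the directory containing the utility; in the extracted PREBOOT archive this should be the APPS\BootUtil\EFIx64 folder, though your layout may differ):

BOOTUTIL64E.EFI
BOOTUTIL64E.EFI -?

The first lists all detected Intel NICs along with their firmware versions and flash status; the second prints the full parameter help.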
To begin, I entered the following command since I wanted to enable flash on all of my NICs.
BOOTUTIL64E.EFI -ALL -FLASHENABLE
Or, if you want to do each one individually, you can specify the NIC number (referenced as X below) manually.
BOOTUTIL64E.EFI -NIC=X -FLASHENABLE
A reboot is required after successful completion of this command before proceeding, so I went ahead and rebooted my system and then booted back into my USB via the built-in shell.
Afterwards, simply running the utility again showed that NIC ports 1-4 were all PXE ready.
Now it was time to run the flash command. Note: specifying the -FILE parameter is optional; without it, the utility assumes the BOOTIMG.FLB file is in the same location you are executing the command from. Since I left the file in its original location, I had to specify the path manually.
BOOTUTIL64E.EFI -UP=PXE -ALL -FILE=\SuperMicro\PREBOOT\APPS\BootUtil\BOOTIMG.FLB
You will then be prompted to create a restore image in case something goes awry, but I chose not to create one.
Upon successful completion, I could see that my firmware had been upgraded from version 1.3.98 to version 1.5.78!
Now that the firmware was upgraded, I rebooted the host and accessed the Management Network settings screen, and to my delight, ESXi was now detecting all of my network interfaces! Woohoo!!
I am still trying to figure out why flash was not present on the NICs that previously were not detected; my assumption is that it’s because these are relatively cheap $80 network cards instead of full-priced (~$300) ones. In any case, they still work and I am quite happy with them. I hope you have found this information useful. Thanks for reading!
Home Lab 2016 – Part 2
Welcome back for Part 2 of my Home Lab 2016 series. I hope that you enjoyed my previous post, Part 1 from last week, where I covered the basics of my home lab and presented the Bill of Materials (BOM) for my mini-datacenter environment.
Today I am bringing you Part 2 and will cover the actual physical build process, putting together the components to build each ESXi host server. I hope you’re as excited as I am!
Beginning with the case, I chose the Supermicro CSE-504-203B, which has the motherboard backplane and all connections at the rear of the case, instead of the CSE-505-203B, which has everything at the front. I wanted a cleaner look for my rack enclosure, and the best thing about these cases is that they come with a 200W high-efficiency 80 PLUS Gold certified power supply!
The next component to go into this case is the motherboard. I chose the Supermicro A1SAi-2750 with an Intel ATOM “System on a Chip” (SoC) CPU. This is a 20W, 8-core processor that is compatible with the “Westmere” VMware Enhanced vMotion Compatibility mode and supports a maximum of 64GB of DDR3 RAM across (4) DIMM sockets! I went ahead and maxed the RAM on each board with (4) 16GB Micron MEM-DR316L-CL02-ES16 DDR3 1600MHz ECC 204-pin 1.35V SO-DIMM modules.
Since I wanted to have redundancy for all my network connections, as per “best practices”, I decided to install an Intel I350-T4 quad-port NIC. Unfortunately, even with the low-profile mounting brackets that come with the cards, they simply would not fit in a small 1U case, as they are designed to be installed horizontally. I picked up a couple of Supermicro RSC-RR1u-E8 PCI-E x8 riser cards which would allow me to insert the NICs properly.
Next came the disk drives, both to run ESXi and to host VMs in a VSAN cluster for my management machines, in case I wanted to move them off of my shared storage device. I also wanted the ability to create a VSAN environment for testing and educational purposes (i.e., VCP/VCAP certifications). I decided to utilize the onboard USB 3.0 socket and installed a SanDisk Ultra Fit 16GB USB 3.0 flash drive to run ESXi; after all, this is a lab, right? For my VSAN drives, I paired a Kingston SSDNow V300 series 120GB SATA III SSD with an HGST Travelstar Z7K500 500GB 7200RPM HDD.
In order to stack them together, I picked up a Supermicro MCP-220-00044-0N HDD Converter bracket.
Here is the end result of the insides after all the components above were installed.
Once I had the first server built, I powered it on to ensure it was in working order before building the remaining (3) hosts. Afterwards, I tidied things up a bit further, zip-tying cables, etc. for a cleaner look, before closing up the cases and placing them in my rack enclosure.
Please stay tuned for Part 3, where I will quickly cover my networking and storage solutions! Thanks for stopping by!
Home Lab 2016 – Part 1
Having a home lab is every IT enthusiast’s dream come true, and now I can finally say that I have fulfilled that dream! I previously was (and currently still am…) using a 1-node “white box” system built from a spare gaming machine I had lying around, running on an open-air tech bench from TopDeck. It consists of the following:
- 1 x ASUS Maximus Gene V motherboard w/ Intel 82579 LOM NIC
- 1 x Intel Core i5-3570K
- 1 x ADATA SP600 32GB SSD
- 1 x Samsung Evo 840 1TB SSD
- 1 x Seagate 1TB HDD
- 1 x Intel 82574 (single-port) NIC
- 1 x Intel 82576 (dual-port) NIC
- 1 x Corsair HX650 PSU
And even though it runs great, I simply felt it wasn’t enough as I basically wanted to replicate a mini-datacenter for my lab which would help tremendously with my VMware studies and overall VMware knowledge.
So I quickly got to work and embarked on the adventure of creating my new lab. I started by opening a Feedly account and subscribing to numerous user and community blogs, reading what others did to create and set up their home labs, and also checked out some YouTube channels.
Lots of good reads out there…
Just to name a few…
I also spent the last year-plus researching, planning, designing, and purchasing the equipment for my new lab. Since I wanted somewhat of a low-power solution (so as to not incur outrageous electric bills), I settled on Supermicro’s A1SAi-2750 ATOM SoC (System-on-a-Chip) Mini-ITX motherboards. Boy, do these things boast a boatload of features (I won’t get into specifics, as I’m sure you all know how to use Google)! Since I also wanted to have them in a rack to replicate a mini-datacenter, I went with a NavePoint 9U rack enclosure. I bought some SanDisk USB drives, some SSDs and HDDs (for an eventual VSAN setup), extra NICs (for redundancy and best practices), 1U cases, and some Synology NAS devices. Here’s my entire parts list…
- 1 x Navepoint 9U Rack Enclosure
- 1 x ICC 48-port feed-thru Cat6 Patch Panel – 1U
- 1 x Cyberpower PR1000LCDRT2U UPS
- 1 x Cyberpower PDU15SW10ATNET ATS/PDU
- 4 x SuperMicro A1SAi-2750
- 4 x SuperMicro 504-203B 1U rackmount cases (each contains a 200W PSU)
- 4 x Sandisk UltraFit 16GB USB 3.0
- 4 x Kingston 120GB SSD
- 4 x HGST 500GB 7200RPM 2.5” HDD
- 4 x SuperMicro 1×3.5” to 2×2.5” Converter Brackets
- 4 x SuperMicro PCI-E x8 L-shape riser cards
- 4 x Intel I350-T4 (quad-port) NIC
- 16 x 16GB Micron 1600MHz DDR3 204-pin SO-DIMM RAM
And for NAS storage…..
- 1 x Synology DS415+
- 1 x Synology DX213 Expansion Unit
- 2 x Micron M500DC 800GB SSD
- 2 x Micron M500 480GB SSD
- 4 x Sabrent 2.5” to 3.5” Bay Converter
- 2 x HGST 6TB 7200RPM 3.5” HDD
The Networking components…
- 1 x Ubiquiti EdgeRouter ERLite-3
- 1 x ASUS RT-AC68U wifi router
- 1 x Cisco SG300-10 SMB L2/L3 switch
- 1 x Cisco SG300-52 SMB L2/L3 switch
And last, but definitely not least…. a slew of Monoprice Cat6 24AWG Flexboot cables (various lengths)
Phew!…what a list! Wait!…am I missing anything??
The end result…my new mini-datacenter homelab 2016!! (with previous Dev “white box” system to the side)
Stay tuned for Part 2 (I hope), where I plan on “Putting it all together”!
Feel free to comment and let me know your thoughts/feedback…and words of encouragement so I can continue on this new blogging adventure!