{"id":185,"date":"2016-03-15T19:36:15","date_gmt":"2016-03-15T23:36:15","guid":{"rendered":"https:\/\/ithinkvirtual.com\/?p=185"},"modified":"2018-02-10T11:19:48","modified_gmt":"2018-02-10T16:19:48","slug":"homelab-pt2","status":"publish","type":"post","link":"https:\/\/ithinkvirtual.com\/2016\/03\/15\/homelab-pt2\/","title":{"rendered":"Home Lab 2016 – Part 2"},"content":{"rendered":"
Home Lab 2016 – Part 2<\/b><\/span><\/p>\r\n Welcome back for Part 2 of my Home Lab 2016 series. \u00a0I hope that you enjoyed last week\u2019s post, Part 1<\/a>, where I covered the basics of my home lab and presented the Bill of Materials (BOM) for my mini-datacenter\u00a0environment.<\/span><\/p>\r\n Today I am bringing you Part 2, which covers the actual physical build process: putting together the components of each ESXi host server. \u00a0I hope you\u2019re as excited as I am!<\/span><\/p>\r\n Beginning with the case, I chose the Supermicro CSE-504-203B<\/span><\/a>, which has the motherboard backplane and all connections at the rear of the case, instead of the CSE-505-203B<\/span><\/a>, which has everything at the front. \u00a0I wanted a cleaner look for my rack enclosure, and the best thing about these cases is that they come with a 200W high-efficiency 80 PLUS Gold certified power supply!<\/span><\/p>\r\n The next component to go into the case is the motherboard. \u00a0I chose the Supermicro A1SAi-2750<\/span><\/a> with an Intel Atom \u201cSystem on a Chip\u201d (SoC) CPU. \u00a0This 20W, 8-core processor is compatible with VMware\u2019s Enhanced vMotion Compatibility (EVC) \u201cWestmere\u201d mode and supports a maximum of 64GB of DDR3 RAM across (4) DIMM sockets! \u00a0I went ahead and maxed out the RAM on each board with (4) 16GB Micron MEM-DR316L-CL02-ES16<\/span><\/a> DDR3 1600MHz ECC 204-pin 1.35V SO-DIMM modules.<\/span><\/p>\r\n Since I wanted redundancy for all of my network connections, as per \u201cbest practices\u201d, I decided to install an Intel I350-T4<\/span><\/a> quad-port NIC in each host. \u00a0Unfortunately, even with the low-profile mounting brackets that come with the cards, they simply would not fit in a small 1U case, as they are designed to be installed vertically, perpendicular to the motherboard. 
\u00a0To work around this, I picked up a couple of Supermicro RSC-RR1u-E8<\/span><\/a> PCI-E x8 riser cards, which let me mount the NICs flat in the chassis.<\/span><\/p>\r\n Next came the disk drives: one to boot ESXi and a pair to back a VSAN cluster, giving me somewhere to run my management VMs if I ever wanted to move them off of my shared storage device. \u00a0I also wanted the ability to create a VSAN environment for testing and educational purposes (e.g., VCP\/VCAP certifications). \u00a0For ESXi itself, I decided to utilize the onboard USB 3.0 socket and installed a SanDisk Ultra Fit 16GB USB 3.0<\/span><\/a> flash drive; after all\u2026this is a lab, right? \u00a0For my VSAN drives, I decided to pair a Kingston SSDNow V300 series 120GB SATA III SSD<\/span><\/a> (the cache tier) with an HGST Travelstar Z7K500 500GB 7200RPM HDD<\/span><\/a> (the capacity tier).<\/span><\/p>\r\n In order to stack the SSD and HDD together in a single drive bay, I picked up a Supermicro MCP-220-00044-0N HDD Converter bracket<\/span><\/a>.<\/p>\r\n
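As a small teaser for the software side of the build: in a hybrid VSAN setup, each SSD\/HDD pair like the one above becomes one disk group, with the SSD acting as the cache tier and the HDD as capacity. A rough, hypothetical sketch of what claiming that pair looks like from the ESXi shell (the naa.* device IDs below are placeholders, not from my hosts):

```shell
# Hypothetical sketch of claiming a hybrid VSAN disk group from the ESXi shell.
# Device IDs are placeholders; find your own with the list command and check
# the "Is SSD" field to tell the SSD apart from the HDD.
esxcli storage core device list

# Pair the SSD (cache tier) with the HDD (capacity tier) into one disk group
esxcli vsan storage add --ssd naa.0000000000000001 --disks naa.0000000000000002
```

I'll cover the actual software configuration in a later part of this series; this is just to show where the SSD\/HDD pairing is headed.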