Sweating your (Hardware) Assets

A long time ago now I invested in a modest VMware home lab, which over time has grown to the point where I no longer think I can call it a home lab anymore; colleagues describe it as a VMware home data center.

My original purchase was a single DELL PowerEdge R610 with no CPUs, no memory and the PERC6 integrated RAID card, which I then set about upgrading and customizing to meet my needs.

I bought a pair of INTEL Xeon X5675 6 Core Hyper-Threaded 3.06GHz CPUs and 6 sticks of 16GB DDR3 1333MHz ECC memory.

I then purchased some USB memory sticks, which I assumed were fairly decent, on which I was planning to install VMware ESXi (v5.0 at the time). I also purchased the TCP/IP Offload Engine 4 Port Key, the iDRAC 6 Express Module and the iDRAC 6 Enterprise Module (back when the hardware was the license, unlike today).

I threw away the DELL PERC6 Integrated RAID Card and the cables that came with it, and purchased a DELL PERC H200i SAS RAID Card, which works in the integrated slot in the DELL PowerEdge R610, along with a pair of new SAS cables to go with it.

After some googling I managed to find the DELL Bootable Firmware Upgrade ISO download, which I burnt to a DVD and booted the server from. The network interface on the iDRAC 6 is only 100Mbit/s, which gives a transfer rate of about 10MBytes/s, and I didn’t want to wait for a DVD image to be streamed at that speed just to upgrade the firmware.

Firmware upgrade complete, I installed ESXi and everything worked beautifully for a year or so, until the USB memory sticks that ESXi was installed on started to fail. I then purchased the SD Card module and a SanDisk Extreme PLUS 90MB/s 8GB SD Card to install ESXi onto.

I was using the 4 x 1GbE onboard network ports and found they were a bottleneck for the scenarios I was trying to test, so I purchased an INTEL I350-T4 Quad Port 1GbE network card to double my network bandwidth, plus 4 more network cables to get it all connected up.

I was using a mixture of iSCSI and NFS storage from a Synology DS1813+ NAS, which had 8 x 3TB WD Red HDDs in it, to run all of the VMs in my home lab. This quickly became the bottleneck as I needed more IO from the storage, and I had no simple options available to me at the time.

So I invested in a DELL PowerEdge R510 and upgraded it to suit my needs as a FreeNAS appliance, with a pair of INTEL Xeon X5675 6 Core Hyper-Threaded 3.06GHz CPUs and 8 sticks of 16GB DDR3 1333MHz ECC memory.

I moved all my data off my Synology DS1813+ and onto random portable HDDs that I had, and onto several of the PCs in my house, so that I could move its 8 x 3TB WD Red HDDs over to the DELL PowerEdge R510, which had 12 x 3.5″ bays on the front. I also purchased 4 more 3TB WD Red HDDs to complete the solution, and installed a pair of 2.5″ Samsung 840 PRO 512GB SSDs to use as the cache drives.

This gave me all of the IO that I needed for my growing home lab, but I was still bandwidth constrained to 1Gbit/s, which is roughly 100MBytes/s in practice. The next upgrade was some INTEL X520-SR2 Dual Port 10GbE network cards, and all was good again. There was also a big cable exodus as I removed 8 x 1GbE cables and replaced them with 2 x LC-LC OM3 fibre cables.
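The link-speed-to-throughput arithmetic above can be sketched quickly; this is a hypothetical helper, and the ~20% protocol/overhead factor is my assumption chosen to match the real-world figures quoted in the post, not a measured number:

```python
def usable_mbytes_per_sec(link_gbits: float, overhead: float = 0.2) -> float:
    """Convert a link speed in Gbit/s to an approximate usable MBytes/s.

    line rate / 8 gives the theoretical byte rate; the overhead factor
    (assumed here) knocks it down to something closer to what you see
    on the wire with TCP/IP, storage protocols, etc.
    """
    theoretical = link_gbits * 1000 / 8  # Gbit/s -> MBytes/s (decimal units)
    return theoretical * (1 - overhead)

print(usable_mbytes_per_sec(1))   # 1GbE:  ~100 MBytes/s
print(usable_mbytes_per_sec(10))  # 10GbE: ~1000 MBytes/s
```

With those assumptions, 1GbE lands at ~100MBytes/s and 10GbE at ~1000MBytes/s, which is why the X520-SR2 cards were such a big jump.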

I was then working on vCAC and vRA solutions, which required a LOT more memory, so I purchased 6 more sticks of 16GB DDR3 1333MHz ECC memory. This took my home lab server to a spectacular 192GB of memory, 12 cores, 24 threads and dual 10GbE of network bandwidth.

I also started to run Plex Media Server, and I had a requirement to store copies of my blu-rays, which consume between 20GB and 40GB each, so I decided to build a new storage solution based on the DELL PowerEdge R810 which I had inherited. I upgraded this with 4 x INTEL Xeon E7-4870 10 Core 2.4GHz Hyper-Threaded CPUs and 32 sticks of 16GB DDR3 1333MHz ECC memory. I added a pair of INTEL X520-SR2 Dual Port 10GbE network cards and a pair of external LSI SAS HBAs so that I could attach some external DELL SAS shelves: I purchased a DELL PowerVault MD3200, which is a 12 bay 3.5″ 2U shelf, and also a DELL PowerVault MD3220, which is a 24 bay 2.5″ 2U shelf. I then hunted around for the largest capacity 2.5″ SATA HDD that I could find, which turned out to be the WD 5TB. It could be found in retail packaging for £200 or in a portable caddy for £100; I’ll give you 1 guess which option I chose.

After this was all set up I migrated all of my data over to the new FreeNAS solution running on the DELL PowerEdge R810, and I turned the old DELL PowerEdge R510 into my Veeam backup target.

But then vSAN became a thing and I wanted to experiment, so I bought a PCIe to M.2 adapter card from StarTech.com, a Samsung 950 PRO 512GB M.2 SSD for the vSAN cache, and 2 x Samsung 840 PRO 512GB SATA SSDs for the data drives.

I then discovered that the DELL PERC H200i SAS RAID Card, which works in the integrated slot in the DELL PowerEdge R610, could be flashed with the LSI SAS HBA firmware and still be made to work in the integrated slot by changing the PCIe ID to match the original DELL card. So after a very nervous firmware flashing session using a bootable CentOS DVD I had success: vSAN had direct access to the 2 x Samsung 840 PRO 512GB SATA SSDs, as the DELL PERC H200i SAS RAID Card was now functioning as an LSI SAS HBA instead.

I had now upgraded my home lab server as much as possible, and the next logical step was to buy another server. So I once again bought all of the same parts, this time making sure I wasn’t paying for parts that I was going to throw away, e.g. old generation CPUs, small memory modules, SAS drives etc.

This went on for quite a few years, and it got to the point that I had accumulated 4 fully upgraded DELL PowerEdge R610s, all running vSAN and all with access to each other and my FreeNAS storage over dual 10Gb Ethernet links.

I then moved into designing and deploying metro stretched clusters, which required my home lab to become 2 sites, but I had essentially used up all of the memory of 2 and a half servers with all my solutions. So I invested in 4 more DELL PowerEdge R610s, all the same specification except for the CPUs: I ended up buying pairs of INTEL Xeon X5670 6 Core Hyper-Threaded 2.93GHz CPUs because there weren’t any X5675s available at the time.

I then found myself in need of more storage, so I purchased 4 more Samsung 840 PRO 512GB SATA SSDs for the vSAN data drives. This gave me 3TB per server and 9TB per site of raw storage with an FTT value of 1 for the vSAN. You would be surprised how quickly this disappears when you have vSphere Replication and Site Recovery Manager protecting both sites.
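As a sanity check on those numbers, here is a back-of-the-envelope sketch. The 6 x 512GB data drives per host follow from the post (2 original + 4 new Samsung 840 PROs); the 3 capacity-contributing hosts per site and the standard FTT=1 mirroring factor of 2 are my assumptions:

```python
def vsan_capacity_tb(drives_per_host: int, drive_gb: int,
                     hosts: int, ftt: int = 1) -> tuple:
    """Rough raw and usable vSAN capacity in TB (decimal units).

    With the default RAID-1 mirroring policy, vSAN stores ftt + 1
    copies of every object, so usable capacity is raw / (ftt + 1).
    """
    raw_tb = drives_per_host * drive_gb * hosts / 1000  # GB -> TB
    usable_tb = raw_tb / (ftt + 1)
    return raw_tb, usable_tb

raw, usable = vsan_capacity_tb(drives_per_host=6, drive_gb=512, hosts=3)
print(raw, usable)  # ~9.2TB raw per site, ~4.6TB usable at FTT=1
```

So the "9TB per site" raw figure shrinks to roughly 4.6TB of usable space once FTT=1 mirroring is accounted for, before replication traffic and Site Recovery Manager placeholders eat into it further.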

These pretty amazingly powerful servers have served me very well over the last 8 years. I say this because when VMware announced that ESXi v6.5 would be the last supported version to work with my CPUs, I started looking at what I would replace the home data center with. It turned out that I was able to get VMware ESXi v6.7 installed on them with no problems, and all subsequent patches too.

Well, now I am truly stuck: if I want VMware ESXi v7 and the new vSAN in my home data center, I have to buy new hardware. But I don’t want to just go up a generation to the DELL PowerEdge Rx20 Series; I want to jump over it and go to the DELL PowerEdge Rx30 Series, so that my home data center investment lasts for at least another 8 years.

I decided that the place to start the new home data center infrastructure was with an upgrade to the storage solution.

Stay tuned for the next installment…