Intel Optane… the start of #LabWars
So I have been meaning to get this blog series up for a number of weeks, but as many of you know, life gets in the way or the bugs come at you like there's no tomorrow…
In short, this series has been made possible thanks to Intel and their sample hardware, so a massive shout out to Mr VSAN and Corey for getting this arranged within Europe.
Super exciting day today! A massive thanks to @MR__vSAN @vCommunityGuy and @intel for the sample hardware via the @vExpert program making this happen!
— Gareth Edwards (@GarethEdwards86) April 12, 2023
Time for #LabWars with @jameskilbynet to commence @VMware @vmwarevsan ESA testing #HomeLab #vCommunity #Intel #IntelOptane pic.twitter.com/ZcXAetgPHD
Before I go off on a massive tangent, I also wanted to thank James Kilby for giving me a great idea and going in on this together to create #LabWars.
Each of our respective labs has its pros and cons, and I will be sure to do a BOM for mine soon, but I highly encourage heading over to his blog so you can check out what he is doing. Also, his explanation of TrueNAS and ZFS is next level.
The main objectives of this series were to compare a few different storage types within a lab scenario. I need to emphasise that despite the kit I have, it does come with its constraints… more on that later in the series. The other objective was to push boundaries and ultimately have some fun in doing so. I also wanted to start questioning many of those "what if" statements: what happens if you game on something like this? How would this apply to AI inference, or to day-to-day tasks such as copying a VM? Where do the bottlenecks ultimately end up occurring?
A few of the things I wanted to try from a storage perspective were the following:
- TrueNAS SCALE enhanced with Optane… does this make a difference? (see the sketch after this list)
- vSAN OSA
- vSAN ESA
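For context, the "enhanced with Optane" part boils down to handing the Optane device(s) to ZFS as a separate log (SLOG) and/or cache (L2ARC) vdev. TrueNAS does this through the UI, but the underlying ZFS operation is roughly the following (the pool name and device paths are placeholders for illustration, not my actual layout):

```bash
# Add an Optane device as a dedicated SLOG (accelerates synchronous writes)
zpool add tank log /dev/nvme0n1

# Optionally add a second device as an L2ARC read cache
zpool add tank cache /dev/nvme1n1

# Confirm the new vdevs are attached to the pool
zpool status tank
```

Worth noting: a SLOG only helps synchronous writes (think NFS or iSCSI with sync enabled), so whether it "makes a difference" depends heavily on the workload being thrown at it.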
From here I also wanted to explore the things mentioned above: can I make a cloud gaming server with these? How does Horizon behave with so many IOPS, and does this now present a different bottleneck? What about day-to-day tasks such as creating VMs from scratch or cloning templates, and where do design considerations come in to enhance the usability of the platform?
First of all, let's level set with a basic performance test on a run-of-the-mill SSD. Below are the results.
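If you want to run a comparable baseline in your own lab, a fio job along these lines will do the trick when run from inside a test VM sitting on the datastore in question (the parameters and paths here are illustrative, not the exact settings behind my results):

```bash
# Sequential write test for raw bandwidth (point the filename at the disk under test)
fio --name=seq-write --filename=/mnt/testdisk/fio-test --rw=write --bs=1m \
    --size=4g --ioengine=libaio --direct=1 --iodepth=32 \
    --runtime=60 --time_based --group_reporting

# Random 4k 70/30 read/write mix to get an IOPS figure rather than throughput
fio --name=rand-rw --filename=/mnt/testdisk/fio-test --rw=randrw --rwmixread=70 \
    --bs=4k --size=4g --ioengine=libaio --direct=1 --iodepth=32 \
    --runtime=60 --time_based --group_reporting
```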
Next, what happens with TrueNAS…
Well, that is an improvement for sure, but I hit a limit on my first pass, and it was all down to my networking being 10GbE (which caps out at roughly 1.25 GB/s before overheads).
Now let's try that with vSAN ESA. I started with ESA as it was so simple for me to set up! It took me longer to rack the servers, and I didn't even have time to be a British stereotype and finish my cup of tea.
Well, this is a massive improvement. Remember, I am using fewer than the recommended number of drives per server and only two active 10GbE network paths.
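If you want to sanity-check a freshly built cluster from the host shell, these two standard esxcli commands confirm the node has joined the vSAN cluster and which vmkernel interfaces vSAN is actually using for traffic (handy for checking that both 10GbE paths are in play):

```bash
# Confirm the host is a member of the vSAN cluster and see its role/UUID
esxcli vsan cluster get

# List the vmkernel interfaces carrying vSAN traffic
esxcli vsan network list
```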
What was also interesting was running the same tests but with a 4GB file.
TrueNAS
vSAN
The thing that has blown me away so far is those high sustained writes all the way up to the larger block sizes. I do wonder what else I can throw at this in the coming weeks.
So I had better cover why it took me so long to get to this stage… quite simply, networking bugs!
It would appear that when using the 10GbE NICs in the DL380 G8, whenever I tried to enable jumbo frames the change would always roll back.
After much trial and error I decided to set the jumbo frame size on the other path prior to plugging it in and wait… WHAT?! It synced as expected. I then tinkered around and was able to recreate the issue: it appears that if the NIC is up when you apply the change, for some reason the renegotiation just does not happen in time, so the change rolls back. I am fairly sure this is a bug, and I do not have any other GBICs or another switch at this time to rule that out. I doubt that is the cause, though, as the same operation within VMware ESXi works as expected.
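For anyone fighting the same thing, this is the ESXi side of the equation that behaved itself for me. The vSwitch and vmkernel names below are placeholders for whatever carries your storage traffic, and the vmkping at the end is the quickest way to prove jumbo frames actually work end to end (8972 bytes of payload plus 28 bytes of headers fills a 9000-byte frame):

```bash
# Raise the MTU on the standard vSwitch carrying storage traffic
esxcli network vswitch standard set -v vSwitch1 -m 9000

# Raise the MTU on the vmkernel interface used for vSAN/NFS
esxcli network ip interface set -i vmk1 -m 9000

# Verify end to end: -d sets "don't fragment", -s 8972 fills a full 9000-byte frame
# (replace the IP with the far-end storage/vSAN interface)
vmkping -I vmk1 -d -s 8972 192.168.10.20
```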
More to come soon!