So we started the new year with a bang. I have a whole post coming about the first LonVMUG anyway, but I thought I would get this up online. I did my first ever '45 minute' session and I am sure I now have the bug… I am already thinking about what I can do next, if the committee will let me back.
It was a great session, and it was nice that it was interactive, as I have already fed some of the ideas back, but here are a few of the top ones. The main thing to stress is that this was done on an HP Mixed Reality headset, so these are far more affordable than the Oculus and the Vive and anyone can give it a go. Also, thanks to the app, it runs on quite a few machines.
- Try to make it AR compatible, maybe so you can see the hosts in front of you
- Read a QR code from a device to get an exploded view
- View stats or performance metrics within the experience
- One great one from the community was to use this for training, with the ability to dive into devices, make it interactive, or even allow for remote training.
This tweet did also amuse me but I don’t think Pat has anything to worry about
I may re-upload this but here is the video of me looking silly anyway!
So I am at it again… I was very, very lucky to get an HP Mixed Reality headset for Christmas, mainly down to the fact that I like being purchase savvy: the laptop I purchased (sorry, my amazing wife, as a present) wasn't only on offer but still qualified for the free headset! I thought this was going to be another very drawn-out process, but thanks to the work I had already done with the Oculus it took me all of 20 minutes to get up and going! The nice thing is it's easy enough to set up if I want to demo this anywhere.
There are only a few steps, some hopefully obvious enough to skip and others you need to do, but go open my other guide in another tab here! (just for reference)
Step 1. Get all the required files for VRDCEX by cloning/downloading the GitHub repo and grabbing the build from here
Step 2. Extract both of these files to a common location. I decided to put them under the folder below, but it's fine to choose your own
Step 3. Go setup your Windows Mixed Reality Headset as you usually would if you haven’t already done so
Step 4. Go grab the latest copy of Steam, and then SteamVR once Steam is set up. This essentially works as our translator/interpreter for the Mixed Reality headset, so we don't need any coding. Before we start to configure VRDCEX we need to launch SteamVR just to ensure it can see the Mixed Reality headset. You will also need the following plugin from here via Steam. I found I had to reboot once or twice, and to play around with having the Mixed Reality Portal open or closed
For any demos I usually try to use Standing Only, as space is usually limited. You may also get the very cool Portal-inspired intro to help you configure the headset… I am not sure if you will see all of this below, or whether it was because I also have the Oculus available on my machine.
If you have got this far, hopefully you will now see the below. Put on the headset and just make sure that SteamVR does load before continuing
You can see Steam sees both the Mixed Reality headset and the Oculus with their respective components
Step 5. Let's get down to the main bit and install/configure VRDCEX. First, go back to Steam and select add game at the bottom of your games list.
From here select browse, navigate to where you extracted your downloads earlier, and select the executable. Hit add selected programs and you should now see it appear in your Steam library.
Step 6 (Optional). Go to your extracted files and find the assets folder and WireMock. Create a desktop shortcut for On-Prem_Endpoint.bat. You will also need the latest version of the Java JRE. The nice thing is that, until you are ready, this allows you to play with the app by emulating vCenter.
I used JRE 8U151 for my configuration
Double-click your shortcut; you will need to override Windows 10 protection, as the file was downloaded
Check that everything is running by browsing to https://localhost:8082 before proceeding
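If you would rather script that health check than keep refreshing a browser tab, a small polling loop does the job. This is just my own Python sketch: it assumes the emulator's default port of 8082 from above, and it skips certificate verification on the assumption that the local endpoint is using a self-signed certificate.

```python
import ssl
import time
import urllib.request


def wait_for_endpoint(probe, attempts=5, delay=2):
    """Retry a probe callable until it succeeds or we run out of attempts."""
    for i in range(attempts):
        try:
            probe()
            return True
        except Exception:
            if i < attempts - 1:
                time.sleep(delay)
    return False


def probe_wiremock(url="https://localhost:8082"):
    """Hit the emulator endpoint; raises if it is not up yet.
    Cert checks are disabled (assumed self-signed local cert)."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    urllib.request.urlopen(url, context=ctx, timeout=5)
```

Then `wait_for_endpoint(probe_wiremock)` returns `True` once the emulator answers, which is handy if you are wiring this into a demo-prep script.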
Step 7. It sounds odd, but go and launch the exe without worrying about your headset. You need to do this as it generates a configuration file under your AppData folder that you need to amend. Close the VRDCEX app once it has opened and then open the config file. In mine I have pointed it at the WireMock emulator from Step 6, but you can put in your own vCenter here at any point.
Step 8. Enjoy… Go back to Steam and launch your app! Put on your headset and enjoy your virtual datacenter
I have done a little demo video of it running just to show the subtlety of the way Steam pulls in the Windows controllers vs the Vive or Oculus; it's really quite clever
So last week I was fortunate enough to attend a blogger briefing with Eric Wright prior to VMworld 2016. I hadn't heard too many of the rumours prior to the event, and to be fair I am still on the come-down from vDM, which I had already been working on with Eric.
I recently had a great demo of the solution prior to this announcement, and I loved the way it could monitor my environment, be this on-premise or cloud. So when Eric revealed the name I was a little confused at first, but the more I think about it, the more I think they have got it right… Their product doesn't just monitor a VM; it covers the whole application lifecycle, with the ability to make intelligent decisions on trends and stress that are unique to your environment.
If you break down the name, it all then makes sense
turbo– They have retained the faithful green circle from the VMTurbo era, which can aid brand loyalty. I have seen plenty of rebrands where there is a major switch, and this can lead to confusion. Turbo also relates to the real-time performance of the product, which I believe is key to the organisation, and they have not lost sight of this.
nomic– For this I had to wait for Eric's explanation, and also Chris Bradshaw's great blog post. It appears to come from the autonomic control their product offers: the ability for systems to manage themselves in real time while being application aware, something many of the virtual platform systems out there have no intelligence on. They just see RAM, CPU or disk usage and don't look back at historic trends for comparison. The second part is the economy you are getting out of your current system: have you over-provisioned systems, wasting space and even money on cloud resources?
With all these controls it allows you to better manage your environment and maybe even prove the requirement for growth in areas that are not usually spotted.
I am yet to try the product in my own environment, but I am hoping to as part of a new blog series I am working on. From all of this and what I have been told, I can have a more intelligent environment where I don't have to balance loads all over the place, with that just becoming a thing of the past, allowing me more time to get on with better and more interesting things. I believe it will also give me the opportunity to better forecast when I will need new hardware in any of my environments. I can hopefully also right-size my lab environment, as I am sure I have over-provisioned VMs, and this will let me know where I can pull back some resources.
If I can get all the stuff I saw in the demo working, all I can say is I will be a very happy sysadmin, and I can only see this helping more sysadmins out there. So I wish Turbonomic the best of luck and hope their rebrand does what they want it to. I have also heard that they should be giving away some nice goodies at their stand at VMworld, if you are fortunate enough to make it there!
I thought I would write a quick blog post about PernixData and their Freedom product, as I have used the full version in a POC and was blown away by the performance gains I got; safe to say my expectations were high. The main thing that prompted me to use this was not only the performance gain but also the graphing, which helps with day-to-day performance monitoring and can assist you in isolating any issues.
Firstly, I know the two storage appliances in the environment we were linking this to should not have been having the issues we were experiencing, but I wanted to get to the bottom of it. I was sure this was a networking or profile issue, but our users were most upset and frustrated as their VDIs were slow. I know the storage appliance being used to accelerate this workload should be able to deliver sub-2ms latency, but this was more in the region of 45ms!
The installation process was painless as always: upload a few package files to the local datastore on your ESX hosts and enable SSH. From there, follow the user manual for your version of ESX to install the VIB; one note here is to make sure your host is in maintenance mode, as I always forget. Give it a reboot and you're ready for step 2. Installing the management server was again easy enough, but the one hint I would give is that if you are installing a separate SQL Express instance, pop into the SQL configuration tool and enable named pipes and the TCP ports, ensuring they both start automatically and are running. This isn't configured as standard and usually causes a bit of head-scratching over why the setup can't see the SQL instance, slowing you down on your way to RAM-accelerated performance.
Once you have done all your installs and rebooted, hop into the PernixData console; you will of course need to make sure you have pointed it at the right VMware cluster if you have a few. Now get ready to screenshot some baseline stats! I will elaborate on this later, but the logs are only kept for 10 minutes, so ideally you want to grab the stats a few minutes after the product is in. Then try to capture them during peak times such as log-ons and log-offs, and repeat after a day or two. This gives you a solid baseline and some good ammunition for your board/steering group on why you would upgrade to the full version if you need write caching too. I say this from experience, as I really wish I knew the magic number of how much IO and latency this saved us, because the product gets to work straight away, within reason.
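Because the console only holds around 10 minutes of history, I ended up grabbing stats on a timer. If you wanted to script that habit, the shape is just a polling loop; `collect` below is a hypothetical callable standing in for however you actually read the latency figures (a screenshot prompt, an API, whatever), not a PernixData interface.

```python
import time


def capture_baseline(collect, samples=5, interval_s=60, sleep=time.sleep):
    """Poll a stats source on a fixed interval and keep every reading,
    since the console itself only retains ~10 minutes of history.
    `collect` is a caller-supplied callable returning one reading."""
    history = []
    for i in range(samples):
        history.append(collect())
        if i < samples - 1:  # no need to wait after the final sample
            sleep(interval_s)
    return history
```

Run it once just after install, then again during a log-on storm, and you have two comparable baselines instead of relying on whatever the rolling window still shows.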
As you can see below, this is me logging into my non-persistent desktop for the first time, bar the fact I got pushed to another VM as my old one was being rebuilt, resulting in the storage spike
At this point I thought I would log on and off a few times to see if the performance got better and well yes it did. To try out my theory I tried a good old IOMeter test and well the results were more than surprising with the fact this product is FREE!!
As you can see, I was pushing close to 4195 IOPS from one VM, and this is what the VM observed too. The best bit is that the datastore never really peaked above 500 IOPS, and latency was well within tolerance.
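To put those two figures another way, the offload the cache delivered works out with a little arithmetic. The numbers are the ones from my IOMeter run above; the function itself is just my own illustration, not anything from the product.

```python
def cache_offload_ratio(vm_iops, datastore_iops):
    """Fraction of the VM's I/O served from the host-side cache
    rather than hitting the backing array."""
    return (vm_iops - datastore_iops) / vm_iops


# ~4195 IOPS seen by the VM, datastore peaking at ~500 IOPS
ratio = cache_offload_ratio(4195, 500)
print(f"{ratio:.0%} of the I/O never touched the datastore")  # prints "88% ..."
```

Roughly 88% of the test load never touched the array, which is why the datastore graph stayed so flat.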
From here, just don't tell your users and wait until they see you making a coffee, or better yet they may come to you with one, asking how it's got better. Once they have disappeared, pop back to the dashboard and have a look at the savings! By day two we were averaging 2.6ms latency on our very busy VDIs, and we saved nearly a TB of bandwidth in two weeks
What happens if this isn't enough? Well, all I can say is I suggest you also take advantage of the Architect trial available while you are setting this all up. Why? Easy: you get all the stats you need to justify ever upgrading, if you need to. Within 8 hours it starts to provide recommendations, including whether you need to upgrade to the full version. As you can see from my screenshot, I would benefit from this on two VMs as they have write-heavy workloads. It also shows me what is going on with the block sizes, just in case I have set some of these wrong.
It also takes a lot of the guesswork out of sizing SSDs to get the best bang for your buck. As you can see, if I am not too fussed about resiliency, which I wouldn't be in this environment as they are non-persistent VDIs, I should be able to get away with a 256GB SSD
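The sizing logic behind that is roughly working set × VM count, with extra copies only if you want write-back resiliency. Here is a back-of-the-envelope sketch of that maths; the VM count, per-VM working set and headroom figures are made-up assumptions for illustration, not Architect's actual model or measurements from my environment.

```python
def ssd_size_gb(vms, working_set_gb_per_vm, replicas=0, headroom=0.2):
    """Rough cache-device sizing: hot working set x VM count, multiplied
    up for any peer replica copies, plus a safety-headroom percentage."""
    base = vms * working_set_gb_per_vm * (1 + replicas)
    return base * (1 + headroom)


# 50 non-persistent VDIs with an assumed ~4GB hot working set each
print(ssd_size_gb(50, 4))              # 240.0 -> a 256GB SSD fits
print(ssd_size_gb(50, 4, replicas=1))  # 480.0 -> resiliency doubles the need
```

Which is exactly why dropping the resiliency requirement on non-persistent desktops lets you get away with the smaller drive.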
So back to how this product helped me isolate my issue. On day 2, the 19th, once the product started to get used in anger, I monitored the busy periods along with a few I created myself. As you can see, around 11:08 the standard latency looks good, all being sub-2ms
I then decided to log on and off to rebuild my image
I also saw a little spike at lunch, but this appeared to then go into the cache; maybe a bit of lunchtime surfing caused this.
The next major hits were around 15:30, 16:30 and 17:30, when some of our users leave. Worryingly, I saw 580+ms latency, at which point this has to be something to do with the network or our profiles copying across. The nice thing is you can do this at a per-VM level to isolate the issue even further; I have removed the names from my screenshots, of course. Maybe you just have one troublesome VM being a noisy neighbour
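That kind of eyeballing for spikes is easy to automate once you have readings exported. A trivial sketch of the threshold filtering I was doing by eye (the timestamps match my day, but the mid-range latency values are illustrative stand-ins, not exact readings from my screenshots):

```python
def flag_spikes(samples, threshold_ms=10.0):
    """Return (timestamp, latency_ms) pairs above the threshold,
    worst offender first."""
    hits = [(t, ms) for t, ms in samples if ms > threshold_ms]
    return sorted(hits, key=lambda p: p[1], reverse=True)


day = [("11:08", 1.8), ("12:30", 6.2), ("15:30", 120.0),
       ("16:30", 340.0), ("17:30", 580.0)]
for t, ms in flag_spikes(day):
    print(f"{t}: {ms:.0f}ms")  # worst first: 17:30, 16:30, 15:30
```

Run the same filter per VM and the noisy neighbour falls out of the list almost immediately.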
On a positive note, the read cache limited the impact on two of our busiest persistent desktops: sub-2ms again!
During this time I got a little excited by the savings and decided to run some tests at the same time, and all I can say is WOW: no impact to the other users, and look at the performance I got.
Another massive plus: when I came to log off my VDI, other log-off processes had already been put into the cache and the latency had dropped. Nowhere near the 445ms seen from some VMs earlier
By this point I appreciate this is a dry read and you are probably bored and just want the facts
What we are running this on:-
HP Bl460 Gen 9 – 256GB RAM in each host
HP 4730 Lefthand 2 x Nodes (on 10GB)
Mix of Windows 7,8,10 clients to try and diagnose performance issues
Pros:-
- It's free!!! Not many things are in the IT world
- You can use up spare RAM if you have it
- Users get a better experience
- The stats help you isolate peak issue times to see if it’s a data store/networking issue or even just one VM
Cons:-
- The logs don't run for long enough; 10 minutes just isn't enough. Users never tell you when an issue occurs unless they are very frustrated
- Can't use SSD for now
- The overall limit of 128GB of cache really makes this product a sweet spot for 3 hosts only
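That 128GB cap is worth doing the maths on before you commit. Spread across the three blades in my list above, each with 256GB of RAM, it looks like this (my own arithmetic, just splitting the cap evenly):

```python
def per_host_cache_gb(total_cache_gb=128, hosts=3):
    """Even split of Freedom's cluster-wide cache cap across hosts."""
    return total_cache_gb / hosts


# 128GB cap over 3 hosts, each blade carrying 256GB of RAM
share = per_host_cache_gb()
print(f"{share:.1f}GB cache per host ({share / 256:.0%} of host RAM)")
```

Roughly 42.7GB per host, or about a sixth of each blade's RAM, which is plenty for read caching a VDI cluster of this size but explains why the product stops making sense much beyond three hosts.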
So would I implement this product again? Put simply: yes, yes I would. I'd question why you wouldn't. It's free, when not many things in the IT world are, and your users will get some form of better experience. This could be a DBA report running a bit better, VDIs being accelerated on login, or even mailboxes being a little more responsive if you put this on Exchange.
Top Tip Check List:-
- Just get on and install it!
- Work with your DBA and get the database configured in advance. Tell them they may get better performance too
- Install the Architect trial from day 1 and get those stats! This may make it an easy sell to get the full product from the board/steering group
- Get a cup of tea and enjoy the latency drops = happier users and a few less support calls
I am still looking at the back end and verifying the gold build and network, as I now know it's not the storage directly causing this issue. I am sure it is something to do with the roaming profiles, as I know both of the storage appliances we are using can perform much better than they are. Basically, PernixData Freedom has kept my users happy, kept me happy, and given me more time to find the root cause without any impact.
I am hoping to compare this to their full product over the coming weeks once I can get a separate lab up and running to cross compare results.
Update:- Just thought I would add a video as I still can’t believe it myself. I know this isn’t real world data but if it can do this it should be able to cope with a fair amount of users all logging in and reading data from an image!
Disclaimer:- These views are my own personal views and do not represent the views of my current or past employers and/or partners
Want a way to bust those January blues? Want to game from your DC? Well, if you have a spare server kicking around, or at least one you don't mind shutting down, and it has VT-d extensions enabled, I would have a chat with 10Zig and grab a VMware Horizon demo. Why? Well, here is why.
You of course will also need a compatible GPU but I found that a Quadro K2000 I had access to did more than an ample job to prove this in concept.
I ran this very brief POC on an HP DL380 G7 with VMware 5.5 and a small iSCSI RAID 5 LUN and got these results. I would be amazed to know what I could have got with more tuning and an SSD or two. As you can see, it hit around 120FPS at points, so pretty good going.
I am going to do a full post on the 10Zig clients as well, as they, put simply, rock and I would like to test them further. I would love to do a true showdown of RDP, PCoIP and also Citrix. From that it would be good to see how Citrix XenDesktop compares against VMware Horizon in a head-to-head app demo.
I did try a few other older games, but I had issues getting them going; either way, it's an interesting notion that you can get full 3D graphics from your data centre. Maybe this is also a good way at last to get some of your Adobe fans (also Mac users) into VDI, as you can finally provide them a true GPU with support. It can hopefully also satisfy your DR plans, as who has a spare Mac or a high-end graphics machine for when disaster strikes!
*These views are my own and may not be the same as my employer.