I’ve been using Hyper-V for a couple of years now, and there are a few things I’ve seen that just plain work; go a different path and you are either taking your chances or wasting your time. Let me point out first that I am not the true authority on Hyper-V, Aidan Finn is, and you will see me reference his blog from time to time. You can visit Aidan’s blog here: www.aidanfinn.com (it’s also in my links area to the right). I’m pretty confident, however, that Aidan would agree with me on most of what I’m about to say. Here goes:
- Do not use a dynamically expanding disk with any relational database. This includes, but is not limited to: SQL Server, Oracle, DB2, and Microsoft Exchange (yes, Exchange stores your email in a relational database built on Microsoft’s Jet engine), and so forth.
- Always provision a minimum of 4 GB of RAM for the host operating system.
- Always provide the host operating system with its own NIC. Do not let it share a connection with a VM.
- If you are in a small environment (e.g. 5 or fewer host operating systems), do not join the host to the domain. The benefits do not outweigh the hassle. For a few host operating systems, it’s easy enough to log on to them individually to update them or check on things.
- Always disable time synchronization if your guest VMs are Windows XP or higher (I cannot speak for older Windows guests or non-Windows guests) and have Internet connectivity. I cannot think of a single reason to leave that feature on in Hyper-V, especially if the guest is a member of a Windows domain.
- Unless you have a solid reason to do otherwise, always set the Hyper-V host to properly shut down the guest operating system when the host is shut down (the default is to perform a save). This is especially true for guests that run a relational database such as SQL Server, Oracle, Exchange, etc.
- Do not go out of your way to use SCSI virtual disks for your VMs. The IDE and SCSI virtual disk controllers have almost no difference in performance.
- Unless you have a bleeding need for speed (e.g. you run the New York Stock Exchange), do not go out of your way to use pass-through disks.
- If you are putting your virtual machines on a physical RAID 5 array, your controller should have a minimum of 512 MB of RAM on board, and more if all or most of your VMs are doing heavy writes. From what I can see, 512 MB is pretty much the minimum these days, but there are still some used/cheap controllers out there with less.
- If you are going to virtualize a Terminal Server, stop what you are doing and read the free white papers here: http://www.projectvrc.com/
- If your machine is to be a Hyper-V host, then that is the only role it should serve. Install no other roles or features.
- If your machine is to be a Hyper-V host, then the backup software agent you are using should be the only software you install on the machine. Furthermore, you should not install the entire backup suite (e.g. BackupExec), just the agent needed. If your only physical machine is a Hyper-V host and you need backup software that isn’t some big suite like BackupExec, check out www.backupassist.com.
- Never leave your host machine logged on. Once you are done administering the machine, log off.
- Guest VM’s on the same machine that need to communicate with one another often should be on the same virtual NIC when possible.
- Aidan’s going to kill me for this one, but: It is OK to install a VM on the same partition as the host operating system as long as the VM is low impact, meaning it does no heavy reads, no heavy writes, and does not consume much CPU. For example, we use Team Foundation Server 2010 as our source code repository. There are only TWO developers. How much work do you think that TFS guest does? Barely noticeable.
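Several of the tips above map directly onto the Hyper-V PowerShell module. A rough sketch of the fixed-disk, time-sync, and shutdown-action settings (the VM name “SQL01” and the VHDX path are just placeholders; cmdlet names are from the Hyper-V module on 2012 R2 and later — verify against your version):

```powershell
# Create a fixed-size VHDX for a database VM instead of a dynamically expanding one
New-VHD -Path 'D:\VHDs\SQL01-data.vhdx' -SizeBytes 200GB -Fixed

# Disable the Time Synchronization integration service for a domain-joined guest
Disable-VMIntegrationService -VMName 'SQL01' -Name 'Time Synchronization'

# Shut the guest down cleanly when the host shuts down (the default is Save)
Set-VM -Name 'SQL01' -AutomaticStopAction ShutDown
```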
That’s pretty much all I have for now. I may add more to this list as I learn more or read more about Hyper-V. Of course, comments either confirming or refuting what I say here are MORE than welcome.
4 thoughts on “Hyper-V Tips That I HIGHLY Recommend”
[…] afraid I’m going to have to disagree. As I posted earlier, I recommend FOUR GB of RAM for the host at an absolute minimum. My reasoning is very […]
[…] I wrote a post about my recommendations for Hyper-V virtualization. One of the key factors I spoke of was […]
Question from a sysadmin who is going to be doing a domain upgrade and likely a move to Hyper-V. Using SATA 6Gb/s enterprise SSDs in either RAID 1 or RAID 10, would you be in favor of having the host (2012 Server Core) and 2 VMs together on that array? Data such as an Exchange VM and/or file storage would be kept on standard HDDs in a separate local RAID 10 array. I’m just trying to get a handle on how much writing is done by a standard server with no significant high-write functions (basically DC/DNS/DHCP/print/several programs and VPN access).
Nah, come on. Even though this is quite old, some of these recommendations are old-school at best. It’s pretty much useless to recommend anything without a proper explanation of WHY you should or should not do things.
True when using VHD (2008 R2); I don’t agree when using VHDX on 2012 or higher.
We’ve been doing it all the time for years now without any negative result or impact. Why not? I would only agree with this for a really heavy-use SQL Server. Exchange 2010, for example, is no problem at all, since it allocates huge chunks of space when it needs them; aside from that, best practice says you should host databases on 64K-formatted disks anyhow. In other words, the resulting host fragmentation, normally the main reason for NOT using dynamic disks in these cases, is totally negligible. This gets even better on Exchange 2013, and better still on Exchange 2016. Have a look at the VHDX holding the database, and then at the VHDX fragmentation from the host: you’ll find that the Exchange database disks actually have the least fragmentation of all disks. This recommendation is superseded anyhow when using a tiered Storage Space on the host, as that results in major fragmentation anyway, even with fixed disks. So the combination with tiered host storage in particular is ideal for using dynamic disks on the VMs; the fragmentation gets mostly “negated” by the SSDs. Unless you’re going to store hundreds of VHDXs on there, in which case it might become a problem later on. Even then, though, the default tiered Storage Space optimization that runs every night does a very decent job of defragging the HDD part while leaving the SSD part alone!
Point 16 is what you should aim for, because that’s going to have a much, much bigger impact on your performance than any fragmentation (inside the VM or outside on the host), unless we’re talking millions of files; then NTFS itself becomes a bottleneck, even on SSDs.
Bare minimum for an OS is 4 GB for 2008 (R2)/2012; 2012 R2 can do with 2 GB minimum (without any additional heavy software, that is). The reason 2008/2012 needs double is that Windows Update caches its entire database (~1 GB) in RAM; 2012 R2 and higher resolved this “issue” with a mere ~200 MB in RAM. That’s for the VMs. As for the Hyper-V host, the same actually applies: reserve a minimum of 2 GB free for 2012 R2 and 4 GB for 2008/2008 R2/2012, for the same reason: the WU database cached in RAM. If you do not install any software on the host, per point 11, you need ~1 GB for the WU cache, leaving around 3 GB free on 2008/R2/2012 and 1 GB on 2012 R2. For 2012 R2 you don’t have to reserve anything; Hyper-V does it automatically.
Don’t agree. Again, why? I assume you mean having a separate physical NIC for the management OS? You would normally create a team of some flavour, put a vSwitch on it, and create a VM NIC for the management OS, meaning that, physically speaking, the management OS is connected over the same physical NICs. Whether this is okay to do or not just depends on the situation and the networking requirements you’re facing.
What exactly is the hassle you’re referring to? The fact that Group Policy would initially fail?
Not domain-joining means you lose the prospect of one unified environment, set up and maintained through Group Policy. Not having your host in the target domain also presents another problem: time. The next point relates to that. If your host is not domain-joined, it won’t be able to keep correct domain time, and as a consequence your VMs’ time will be anything but the domain time. So, points 4 and 5 here go hand in hand: your host should be domain-joined, your domain server should have proper time setup with VMIC disabled in the registry, and any member VM should have time sync enabled and set up to get time from the domain.
Disabling VMIC in the registry on a virtualized PDC is not necessary anymore since 2016, by the way.
So wrong. The Time Synchronization integration service is what keeps a VM perfectly on time. Without it, a VM’s clock can fluctuate heavily depending on its load; Hyper-V corrects that on the fly. It also corrects the time if you save and later restore, and it auto-adjusts when working with snapshots/checkpoints. Having the VM rely solely on a domain for time is wrong. You can have perfect domain time sync with this left on (the default) by setting a registry entry in the VM that just disables the VMIC provider, which you then only have to do on the PDC. Turning VMIC off means your VMs’ clocks will fluctuate heavily in between the default domain correction interval of 15 min.!
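The registry entry this comment describes is, on standard installs, the VMICTimeProvider key under the Windows Time service. A hedged sketch of applying it on the virtualized PDC emulator only, so the DC keeps authoritative domain time while every other VM keeps the integration service enabled (verify the key on your OS version before relying on it):

```powershell
# Run inside the virtualized PDC emulator only:
# stop w32time from treating the Hyper-V host as a time source
Set-ItemProperty `
    -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider' `
    -Name 'Enabled' -Value 0

# Restart the time service so the change takes effect
Restart-Service w32time
```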
From my experience, true, for the reason that it is usually faster than a save 🙂 Especially for big VMs. For any serious database, also yes, just to prevent any form of data loss from even being possible in the first place.
For normal work, true, but a VM with very heavy I/O on multiple virtual disks would benefit from SCSI controllers, since their I/O is scheduled per LP; each controller uses a different host LP for I/O interrupts, meaning 2 SCSI disks on 2 controllers can do true parallel I/O, while on IDE it would get serialized (inside the VM) on the same host LP. So, across a large number of VMs, using SCSI increases your parallel I/O (as seen from the host). SCSI disks also have the benefit of supporting online resize.
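Spreading a heavy-I/O VM’s data disks across separate virtual SCSI controllers, as this comment suggests, can be sketched like this (the VM name “BigIO” and the VHDX paths are placeholders; a generation-appropriate VM is assumed):

```powershell
# Add a second virtual SCSI controller to the VM
Add-VMScsiController -VMName 'BigIO'

# Attach one data disk to each controller so their interrupts land on different host LPs
Add-VMHardDiskDrive -VMName 'BigIO' -ControllerType SCSI -ControllerNumber 0 -Path 'D:\VHDs\BigIO-data1.vhdx'
Add-VMHardDiskDrive -VMName 'BigIO' -ControllerType SCSI -ControllerNumber 1 -Path 'D:\VHDs\BigIO-data2.vhdx'
```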
Pass-through is old-school and a big no-no nowadays, for the simple fact that it makes the VM completely immobile among multiple hosts, aside from a number of other things you lose, like Replica and host-level backup, to name the most influential ones.
9. You should only put VMs on RAID 10. RAID 5 is just not up to the job of handling the random I/O thrown at it by the host. The caching rule is simple: the more the better, but write-back only if backed by a battery! RAID is a no-no when using (tiered) Storage Spaces (although it can work safely with the cache in write-through mode; using SSDs with power-loss protection is a must).
Huh? That links to a site mainly discussing VDI, which is not Terminal Server, or the newer Remote Desktop Services, anyhow. An RDS server is one of the best candidates for virtualization, just because of its usage profile alone. It’s our core business and allows for one of the best consolidation ratios of all server types.
Correct. I’d add to that: no software of any other kind either. For one, you lose any support from MS if you do otherwise, aside from the possible negative performance impact on your hypervisor.
Correct. Even better, if available, have a Hyper-V Replica for limited manual failover and backup, which means zero impact on your production VMs during backup!
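Enabling Replica for a VM, as suggested here, looks roughly like the following (the VM name “FS01” and replica server “hv-replica01” are placeholders; this assumes Kerberos over HTTP between domain-joined hosts and a replica server already configured to accept replication):

```powershell
# Start replicating the VM to a second Hyper-V host
Enable-VMReplication -VMName 'FS01' `
    -ReplicaServerName 'hv-replica01' `
    -ReplicaServerPort 80 `
    -AuthenticationType Kerberos

# Kick off the initial copy over the network
Start-VMInitialReplication -VMName 'FS01'
```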
14. Yup. You mean on the same virtual switch, that is. I can’t even think of a situation where you would have it any different than that…?
Haha. Yes, it’s totally okay, as long as you don’t overprovision (dynamic or fixed). Better said, it’s okay as long as you make sure you underprovision the available storage capacity. I’d even turn that around: it’s okay as long as the host itself doesn’t generate any serious load, which, if you apply point 11, is true by default. Create a hypervisor and leave it idle while measuring what it does in terms of CPU, memory, and I/O: it’s nothing. Meaning your VMs would still get 99% of their I/O done from that drive. Most things Hyper-V does through the management OS happen in memory, not on disk, anyhow. The only reason NOT to do this would be potential growth of your VMs eating up the available capacity, which in turn would put VMs into a paused state or, worse, crash your hypervisor. Vastly underprovisioning storage capacity makes sure you don’t run into that scenario.
Make a rough estimate of the required host storage performance (OS drive or not). Your VM performance scales almost linearly with the host’s storage performance. Before starting work, benchmark your storage using diskspd, not a regular benchmark like ATTO, etc. If your storage comes in at 8,000 4K random IOPS, it means you can deliver roughly 8 VMs getting 1,000 IOPS each, or 4 VMs getting 2,000 IOPS each, etc. So if you need to put 16 VMs on it, each VM would get 500 IOPS, which is VERY, VERY marginal. I myself consider 2,000 IOPS PER VM the standard minimum for a decent user experience; anything less will just “feel sluggish”. I’m talking write IOPS throughout; reads are less critical as they get largely cached. Random writes are the bottleneck in 99% of cases. Not CPU, not memory, not read IOPS.
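The benchmark and the capacity math above can be sketched as follows (the test file path and size are placeholders; confirm the flags against your diskspd version’s documentation before trusting the numbers):

```powershell
# 60-second 4K random-write test, caching disabled (-Sh), 8 threads x 32 outstanding I/Os,
# with latency statistics (-L), against a 50 GB test file
.\diskspd.exe -c50G -b4K -r -w100 -d60 -o32 -t8 -Sh -L D:\testfile.dat

# The comment's rough capacity math: total measured random-write IOPS / VM count
$totalIops = 8000
$vmCount   = 16
$perVmIops = $totalIops / $vmCount   # 500 IOPS per VM, well below the ~2000 the commenter wants
```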