When creating virtual disks in ESXi you are presented with a number of options that most people skip past. Ninety percent of the time you can safely ignore them, but a few can really help performance.
Device Type
- Create a new virtual disk
- Use an existing virtual disk – used to attach a shared disk or reattach a disconnected one
- Raw Device Mapping (RDM) – direct access to a physical LUN; used for Microsoft clustering, performance-sensitive workloads, etc.
Disk Provisioning
- Thick Provision Lazy Zeroed – Space required for the virtual disk is allocated at creation time, but each block is zeroed on demand the first time the guest operating system writes to it (like a fast format in Windows). Fast creation, fully allocated blocks on the datastore, high chance of contiguous file blocks.
- Thick Provision Eager Zeroed – Space required for the virtual disk is allocated at creation time and every sector of the disk is zeroed during creation. Slow creation, fully allocated blocks on the datastore, highest chance of contiguous file blocks.
- Thin Provision – The disk only uses as much space as it initially needs. Fastest creation; space is allocated and zeroed on demand; low chance of contiguous file blocks; uses less disk space.
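From the ESXi shell, the same three formats map to the vmkfstools disk types zeroedthick (lazy), eagerzeroedthick, and thin. A sketch — the 10G size and the datastore/VM paths are placeholders:

```sh
# Thick Provision Lazy Zeroed (zeroedthick is also the default if -d is omitted)
vmkfstools -c 10G -d zeroedthick /vmfs/volumes/datastore1/myvm/lazy.vmdk

# Thick Provision Eager Zeroed – every sector zeroed up front, so creation is slow
vmkfstools -c 10G -d eagerzeroedthick /vmfs/volumes/datastore1/myvm/eager.vmdk

# Thin Provision – allocated and zeroed on demand
vmkfstools -c 10G -d thin /vmfs/volumes/datastore1/myvm/thin.vmdk
```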
Which one do you choose?
Well, that depends on your needs. If performance is your critical issue, then thick provisioned is the only choice. If you need to save disk space, or doubt that your customer will really use the 24 TB of disk space they have requested, then thin provisioned is the choice. Lazy zeroed sits somewhere between the two.
How do I switch?
As of ESXi 5 you have two choices: Storage vMotion and inflate. When initiating a Storage vMotion you can choose any of the three formats above and convert the disk in the process. You can also turn a thin disk into thick by locating the flat file in the datastore browser and selecting Inflate.
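The inflate operation is also available from the ESXi shell via vmkfstools; a sketch, assuming the thin disk lives under /vmfs/volumes/datastore1/myvm (the path is a placeholder):

```sh
# Inflate a thin disk to thick; point at the descriptor .vmdk, not the -flat file
vmkfstools -j /vmfs/volumes/datastore1/myvm/myvm.vmdk
```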
SCSI Controller Type (set when the first disk on a controller is added):
Much like disk type there are many choices:
- BusLogic Parallel
- LSI Logic Parallel
- LSI Logic SAS – Requires virtual hardware version 7 or later
- VMware Paravirtual – Requires virtual hardware version 7 or later
Paravirtual (PVSCSI) is a virtualization-aware adapter that requires the VMware Tools drivers in order to work. Paravirtual adapters provide the best performance but can only be used with newer guest operating systems, and they cannot be used on boot devices. Normally your OS selection picks the best SCSI type for you.
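For reference, the controller type is recorded in the VM's .vmx file as the virtualDev setting on each SCSI bus. A sketch — to my knowledge the accepted values are buslogic, lsilogic, lsisas1068, and pvscsi:

```ini
scsi0.present = "TRUE"
scsi0.virtualDev = "pvscsi"
```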
SCSI Bus Sharing:
When you add a new SCSI bus you can choose the controller type, but it also presents the following sharing options (these can only be changed when the bus is added or while the VM is powered off):
- None – Virtual disks cannot be shared
- Virtual – Virtual disks can be shared between virtual machines on the same ESXi host
- Physical – Virtual disks can be shared between virtual machines on any host
Of course you still need a cluster-aware file system on top, but if you plan on sharing disks across hosts, select Physical.
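In the VM's .vmx file this maps to the sharedBus option on the controller; the values mirror the list above (none, virtual, physical). A sketch for a second bus shared across hosts — the lsilogic controller type is just an example:

```ini
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "physical"
```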
SCSI Bus Location:
Each virtual machine can have up to 4 SCSI buses, each with its own controller. Lots of people have questioned the advantage of multiple buses in VMware. In traditional hardware you have multiple buses to provide redundancy in case of a bus failure; that does not apply to virtual hardware. But multiple buses still give the guest operating system multiple channels for I/O, which is always a good thing.
Mode:
- Independent (Not affected by snapshots)
- Virtual (Default)
Independent Mode:
- Persistent (Changes are written permanently to disk) – great for databases and other data where including the disk in a snapshot does not make sense.
- Nonpersistent (Changes to this disk are discarded when you power off or revert to a snapshot) – used on lab computers, kiosks, etc.
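The disk mode also appears per disk in the .vmx file. A sketch for an independent-persistent disk at SCSI 0:1 — the file name is a placeholder, and the mode values I know of are persistent, independent-persistent, and independent-nonpersistent:

```ini
scsi0:1.fileName = "data.vmdk"
scsi0:1.mode = "independent-persistent"
```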
“Paravirtual adapters provide the best performance but can be only used in new operating systems. Also they cannot be used on boot devices.”
This is not exactly true. There are cases where paravirtual can actually be detrimental to performance if a VM's I/O is not above a particular threshold. Furthermore, PVSCSI adapters can be used on boot devices; it is done all the time. The only caveat I have run into personally is that Windows boot media (I haven't tested Server 2012 yet) does not contain the PVSCSI driver by default, which makes the controller and any disks unavailable when trying to install the OS. However, this can be overcome by copying the PVSCSI driver from a booted VM running VMware Tools and injecting that driver into the Windows install process through the F6 driver load.