VMware port group port binding change

So you just read my post on how you will want to change your vCenter port groups to ephemeral port binding… but how is it done in the 5.1 web client?  Well here you go:

  1. Click vCenter
  2. Click Distributed Switches
  3. Select your dvSwitch
  4. Select your port group
  5. Click Manage and then Settings
  6. Select Edit
  7. Select the drop down on Port binding
  8. Select Ephemeral
  9. Select OK

That’s it and you’re all set.
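If you would rather script it, here is a minimal pyVmomi sketch of the same change. The hostname, credentials and port group name are all placeholders, and it assumes pyVmomi is installed (pip install pyvmomi):

  import ssl
  from pyVim.connect import SmartConnect, Disconnect
  from pyVmomi import vim

  # Placeholder credentials; skipping certificate checks is for lab use only.
  ctx = ssl._create_unverified_context()
  si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                    pwd='secret', sslContext=ctx)
  content = si.RetrieveContent()

  # Find the distributed port group by name ('portgroupname' is a placeholder).
  view = content.viewManager.CreateContainerView(
      content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
  pg = next(p for p in view.view if p.name == 'portgroupname')
  view.Destroy()

  # Flip the binding type to ephemeral ('earlyBinding' is static binding).
  spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
  spec.configVersion = pg.config.configVersion  # guards against concurrent edits
  spec.type = 'ephemeral'
  pg.ReconfigureDVPortgroup_Task(spec)

Call Disconnect(si) when you are done. The later sketches in this post reuse this same connection (si, content) and port group lookup (pg) instead of repeating the boilerplate.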

Ran out of ports on a VMware switch, now what?

This is a common issue… you have run out of ports on your dVS or virtual switch port group.   You are running a static-binding port group, which means you have a set number of total ports and no more.  How do you fix it? There are two ways:

  • Increase the number of ports
  • Enable auto-expand

Increase the number of ports:

Remember that static binding assigns a port per virtual machine… it does not matter if it is powered on or not; as long as it’s on the port group it gets a static port.  To increase the number of ports in the web client (5.1) do the following:

  • Click vCenter
  • Click Distributed Switches
  • Select your dvSwitch
  • Select your port group
  • Click Manage and then Settings
  • Select Edit

Adjust the number of ports and click OK.
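The same edit in pyVmomi, reusing the connection and port group lookup from the earlier sketch (the port count here is an example value):

  spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
  spec.configVersion = pg.config.configVersion
  spec.numPorts = 256  # new total port count for the group
  pg.ReconfigureDVPortgroup_Task(spec)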

Enable auto-expand:

To enable auto-expand in the web client (5.1) do the following:

  • Click vCenter
  • Click Distributed Switches
  • Select your dvSwitch
  • Select your port group
  • Click Manage and then Settings
  • Select Edit

This time set Port allocation to Elastic and click OK.

Just remember that new ports on a dVS are allocated by vCenter, so vCenter has to be available to provide them.
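In the API this is a single boolean on the port group; a pyVmomi sketch (connection and pg as before):

  spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
  spec.configVersion = pg.config.configVersion
  spec.autoExpand = True  # corresponds to the Elastic port allocation setting
  pg.ReconfigureDVPortgroup_Task(spec)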

You can also automate this with vMA using the following command:

updatedvPortgroupAutoExpand.pl --operation enable --dvportgroup portgroupname

Building a custom ISO for VMware

This has been a pain of mine for a while now… I have to use the customized HP image but I need some additional storage drivers to support VAAI.  So I have to install HP’s ESXi then install the drivers later… with this tool on a Windows box I can bundle them together without much pain:

ESXi-Customizer -> http://www.v-front.de/p/esxi-customizer.html

Free VMware Training

Right now VMware is offering a whole bunch of free training courses.  Just head over to http://mylearn.vmware.com, log in or create an account, and select the free eLearning tab to see the full list.

VMware virtual disk types

VMware supports three different types of disks at this point (5.1):

  • Eager-zeroed thick
  • Lazy-zeroed thick
  • Thin

Eager-zeroed thick:

Disk space is allocated and zeroed out at creation time.   It takes the longest time to create but provides the best possible performance on first use. Mostly used for MSCS and FT virtual machines.

Lazy-zeroed thick:

Disk space is allocated but not zeroed at creation time.  The first time the operating system writes to a new block, the block is zeroed out.   Performance is a little less than eager-zeroed on first write, then equal on each additional write to the same block.

Thin:

Disk space is allocated and zeroed on demand, as the guest writes new blocks.

Which one do you choose?

Well that depends on your needs.  If performance is your critical issue then eager-zeroed thick is the only choice.  If you need to save disk space, or doubt that your customer will really use the 24 TB of disk space they have requested, then thin provisioned is the choice.  Lazy-zeroed is something between the two.  At this point VMware recommends lazy-zeroed.
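Under the covers the three types are just two flags on the disk backing.  Here is a rough pyVmomi sketch of adding a 10 GB disk with an explicit type; it reuses the earlier connection and assumes you already have a vm object and controller_key (the key of an existing SCSI controller), both placeholders:

  disk = vim.vm.device.VirtualDisk()
  disk.capacityInKB = 10 * 1024 * 1024          # 10 GB
  disk.controllerKey = controller_key           # existing SCSI controller (placeholder)
  disk.unitNumber = 1                           # free slot on that controller
  disk.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
  disk.backing.diskMode = 'persistent'
  disk.backing.thinProvisioned = False          # True -> thin
  disk.backing.eagerlyScrub = True              # True here -> eager-zeroed thick;
                                                # both False -> lazy-zeroed thick
  dev_spec = vim.vm.device.VirtualDeviceConfigSpec()
  dev_spec.operation = vim.vm.device.VirtualDeviceConfigSpec.Operation.add
  dev_spec.fileOperation = vim.vm.device.VirtualDeviceConfigSpec.FileOperation.create
  dev_spec.device = disk
  vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[dev_spec]))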

How do I switch?

As of ESXi 5 you have two choices: Storage vMotion and inflate.  When initiating a Storage vMotion you have the option to choose any of the three formats above and convert to it.  You can also turn a thin disk into thick by finding the flat file in the datastore browser and selecting Inflate.
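The inflate operation is also exposed through the API; a small sketch against the virtualDiskManager (the datastore path and datacenter name are placeholders):

  dc = content.searchIndex.FindByInventoryPath('MyDatacenter')  # placeholder
  content.virtualDiskManager.InflateVirtualDisk_Task(
      name='[datastore1] myvm/myvm.vmdk', datacenter=dc)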

SCSI controller type (set when the first disk on a bus is added):

Much like disk type there are many choices:

  • BusLogic Parallel
  • LSI Logic Parallel
  • LSI Logic SAS – requires virtual hardware version 7 or later
  • VMware Paravirtual – requires virtual hardware version 7 or later

Paravirtual is a virtualization-aware adapter with no physical counterpart and requires the VMware Tools driver in order to work.  Paravirtual adapters provide the best performance but can only be used in newer operating systems.  Also they cannot be used on boot devices.   Normally your OS selection picks the best SCSI type for you.

SCSI Bus Sharing:

When you add a new SCSI bus you have options on the SCSI type, but it also gives you the following sharing options (these can only be changed when the bus is added or the VM is powered down):

  • None – Virtual disks cannot be shared
  • Virtual – Virtual disks can be shared between virtual machines on the same server
  • Physical – Virtual disks can be shared between virtual machines on any server

Of course you still need a cluster-aware file system, but if you plan on sharing disks across hosts then select Physical.
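A pyVmomi sketch of adding a paravirtual SCSI controller with physical bus sharing (connection and vm as in the earlier sketches; the bus number is an example):

  ctrl = vim.vm.device.ParaVirtualSCSIController()
  ctrl.busNumber = 1  # buses 0-3 are available per VM
  ctrl.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.physicalSharing
  # other values: noSharing (the default) and virtualSharing
  dev_spec = vim.vm.device.VirtualDeviceConfigSpec()
  dev_spec.operation = vim.vm.device.VirtualDeviceConfigSpec.Operation.add
  dev_spec.device = ctrl
  vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[dev_spec]))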

SCSI bus location:

Each virtual machine can have up to four SCSI buses, each with its own controller.  Lots of people have questioned the advantage of multiple buses in VMware.  In traditional hardware you have multiple buses to provide redundancy in case of a bus failure.  This does not apply to virtual hardware.  But it still provides the guest operating system multiple channels to handle I/O, which is always a good thing.

Mode:

  • Independent (not affected by snapshots)
  • Dependent (default, included in snapshots)

Independent Mode:

  • Persistent (changes are written to disk) – great for databases and other data where a snapshot does not make sense.
  • Nonpersistent (changes to this disk are discarded when you power off or revert to a snapshot) – used on lab computers, kiosks, etc.

VMware virtual network interface cards: which do I choose?

Classic question: there are all these virtual network adapter types, which one do I choose?  99% of the people you talk to will tell you they let VMware choose when they select the operating system.  This picks a compatible network adapter type but not always the best type.  Each generation of virtual adapter brings better performance and features.  As a rule of thumb you want the highest vmxnet adapter your system supports.  As of ESXi 5 the following adapters are available, listed in order of preference (worst to best):

  • Flexible – can function as either a vlance or a vmxnet adapter (it will be vlance unless VMware Tools is installed).  vlance is an emulated 10 Mbps NIC available on almost all operating systems; vmxnet is the first generation of virtualization-only network cards and requires VMware Tools to be installed.
  • e1000 – an emulated Intel 82545EM Gigabit Ethernet NIC with support in most operating systems.  It is the default adapter for all 64-bit operating systems and is required for guest VLAN tagging.
  • vmxnet2 – updated version of vmxnet that adds VLAN tagging, jumbo frames and hardware offload along with additional high-performance features.
  • vmxnet3 – not really related to vmxnet2, but it does represent the next generation of NIC drivers.  It includes all features of vmxnet2 plus multiqueue support, IPv6 offloads and MSI/MSI-X interrupt delivery.  This driver has limited OS support, requires VMware Tools like all vmxnet adapters, and requires virtual hardware version 7 (ESX/ESXi 4.0 or later).

How do I choose?  The best answer is to consult VMware’s knowledge base:

http://kb.vmware.com/kb/1001805
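If you are adding NICs in bulk, the adapter type is just the device class you instantiate; a hedged pyVmomi sketch (connection and vm as before, and the port group name is a placeholder):

  nic = vim.vm.device.VirtualVmxnet3()
  nic.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo()
  nic.backing.deviceName = 'VM Network'  # placeholder standard port group
  nic.addressType = 'generated'          # let vCenter assign the MAC
  dev_spec = vim.vm.device.VirtualDeviceConfigSpec()
  dev_spec.operation = vim.vm.device.VirtualDeviceConfigSpec.Operation.add
  dev_spec.device = nic
  vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[dev_spec]))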

Files that make up a VM in ESXi

For the longest time I wondered what exactly all those files inside a VM’s directory are for, so here is a handy guide:

Configuration File -> VM_name.vmx

Swap File -> VM_name.vswp or vmx-VM_NAME.vswp

BIOS File -> VM_name.nvram

Log files -> vmware.log

Disk descriptor file -> VM_name.vmdk

Disk data file -> VM_name-flat.vmdk

Suspended state file -> VM_name.vmss

Snapshot data file -> VM_name.vmsd

Snapshot state file -> VM_name.vmsn

Template file -> VM_name.vmtx

Snapshot disk file -> VM_name-delta.vmdk

Raw Device map file -> VM_name-rdm.vmdk

.vmx – contains all the configuration information and hardware settings for the virtual machine; it is stored in plain text.

.vswp – a file that is always created for a virtual machine during power on.  It is equal to the size of allocated RAM minus any memory reservation at boot time; for example, a VM with 8 GB of RAM and a 2 GB reservation gets a 6 GB .vswp.  This host swap file is used when the physical host exhausts its memory and has to swap guest pages to disk.

.nvram – a binary-formatted file that contains BIOS information, much like a BIOS chip.   If deleted it is automatically recreated when the virtual machine is powered back on.

.log – log files are created when the machine is power cycled; the current log is always called vmware.log.

Issues with multiple CPUs in VMware

For the longest time I would not allow people to power on virtual machines with more than one vCPU without proving to me they needed it.  This is due to strict co-scheduling in VMware.   Over the years they have relaxed this a lot, making multi-vCPU virtual machines much more practical.  To understand the issue take a look at the following diagram:

The assumption of this diagram is rather silly but it helps me explain the problem.  It is important to take into account that this diagram assumes that both virtual processors require 100% of the physical CPU; as such they have to share it.   This problem is not too bad.  This is known as co-scheduling of the virtual CPUs.  In vSphere 4 co-scheduling was done at the virtual machine level, meaning this 2 vCPU VM could not run on a single-core, single-socket physical system, because both vCPUs had to be scheduled to run at the same time.   Since vSphere 5 schedules per vCPU we have a lot more flexibility. The co-stop mechanism allows us to get reasonable performance on each vCPU.

It is critical to understand that adding more vCPUs may not always provide better performance.  If there is any contention for CPUs then co-stop kicks in, causing slowdowns.  You can monitor this in esxtop with the following settings:

  • Press c for the CPU view
  • Press upper case V to see per-virtual-machine stats

The key fields here are:

  • %USED – (CPU used time) % of CPU used at the current time.  The scale is 100 × the number of vCPUs, so if you have 4 vCPUs and %USED shows 100 then you are using 100% of one CPU, or 25% of your four vCPUs.
  • %RDY – (Ready) % of time a vCPU was ready to be scheduled on a physical processor but could not be due to contention.  You do not want this above 10% and should look into anything above 5%.
  • %CSTP – (Co-Stop) % of time a vCPU is stopped waiting for access to a physical CPU; high numbers here represent problems.  You do not want this above 5%.
  • %MLMTD – (Max Limited) % of time the VM was ready to run but was not scheduled because of a CPU limit setting.
  • %SWPWT – (Swap Wait) % of time the VM waits on pages that have been swapped out.

VMware scenarios: CPU-sensitive application

VMware scenarios are hypothetical situations and solutions.

This came from a post on the official VMware forums:

– We have an ESXi server with one socket and a 4-core CPU with Hyper-Threading.  We currently have two VMs with 2 vCPUs each and the performance is great.  We want to add a third VM and split up the cores among the three VMs.  The only issue is the third VM will be running a really old application on Windows XP.  This application is very CPU sensitive; a lack of resources will cause it to crash, and it is single threaded.   How do I solve this issue?

This is a fine scenario that takes into account the issues discussed in this article.   There are a few questions left unanswered.

First: are the first two VMs using over 75% of their two vCPUs?

– Why do I ask this question?  Well, most of the time servers are not using much of their CPU.  It seems to be the most underutilized resource in most ESXi setups.  Unless we have resource contention, processor co-stop does not figure into the equation.

So we are going to assume that the processors are not at 75% or more all the time, which means co-stop is not an issue.  Given that assumption the solution might just be as simple as a reservation:

Take your new virtual machine and use a CPU reservation to ensure it always gets 100% of its single CPU. Your single-threaded application should never starve, and since you’re not using 75% or more of the resources on your other VMs you should never even notice.
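Setting that reservation via pyVmomi is a one-field config change (vm as in the earlier sketches; the MHz value is an example and should match a full physical core on your host):

  alloc = vim.ResourceAllocationInfo()
  alloc.reservation = 2400  # MHz to guarantee, e.g. one full 2.4 GHz core
  vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(cpuAllocation=alloc))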

What about hyperthreading?

Well there are lots of theories on this matter and it really depends on your situation.  Unless you have oversubscribed your CPU then none of the theories really come into play.

ESXi Memory Management

ESXi has some interesting techniques for saving memory.   These techniques allow for memory overcommitment and better utilization.

Transparent Page Sharing

This is a process of identifying common items in memory.  For example, each operating system has a number of files it loads into memory to operate.  These files are never changed but allow the system to run.  Windows, for example, has a lot of DLL files that are loaded and never changed unless Microsoft patches them.   If you have two guest virtual machines with the same operating system these files can be shared in memory, allowing for less memory allocation.   For this reason it’s good to run as much of the same type of operating system / application together as possible.   Creating a standard build and standard operating system allows for a huge gain with transparent page sharing.  TPS is based on pages, not specific files, so the pages have to be exactly the same.

Memory Ballooning

Lots of people have discussed memory ballooning and many of them do a better job explaining it than I do.   In order to explain ballooning it is important to have a few common terms:

  • Host – the ESXi server
  • Guest – a virtualized server running an operating system (in this example we will use Linux)
  • Reserved memory – memory that is guaranteed by ESXi to the guest; it can never be swapped

In addition there are three types of memory:

  • Active memory – Memory actively in use
  • Idle memory – Allocated memory not currently in use
  • Free memory – memory available

Guest Swapping

Each guest has memory and swap or paging capacity.  When a process requests more memory than is free, the operating system swaps the oldest idle memory out to disk.  This is a very costly operation because of the speed of the disk.   The guest operating system is in the best position to choose which pages should be swapped due to its knowledge of active processes. This is normal system swapping for all operating systems.

Host Swapping

The ESXi host also has a .vswp file that is created when a guest is powered on.  This file is stored with the virtual machine and is equal to allocated memory minus reserved memory.   It is there for when memory becomes so overused that VMware has to swap.  This is the worst type of swapping: ESXi has no knowledge of which pages on the guest are active, idle or free, so it just guesses.  This can really affect performance.

Remember that only TPS is in operation unless there is contention for memory.

Memory Ballooning

In order to avoid host swapping VMware implemented the balloon driver (known as memctl in esxtop).  It is included with VMware Tools.  The driver works within the guest operating system and requests memory pages.  Since it is a driver it has high priority and does not have to return the memory.  This forces the guest operating system to swap to its own page files.  Since guest swap is better than host swap this is the preferred operation.  This can happen up to 65% of the guest’s allocated memory.   In effect we are tricking the operating system into using less memory than it has been allocated by having the balloon driver steal some RAM.   The reclaimed memory is then handed to other virtual machines, allowing for overcommitment of RAM.   The problem with ballooning is that it will eat into active processes if the need for RAM is too high, thus forcing the host swap.   Ballooning can only be active if you’re running VMware Tools and if your guest has been up for a little while.

OK, so what steps does ESXi take, and in what order?

This only happens when there is memory contention:

  1. (State: High) If there is no contention then it does nothing beyond TPS.
  2. (State: Soft) ESXi starts using the balloon driver to reclaim memory from the guest OS, up to 65% of allocated memory.
  3. (State: Hard) VMware tries to compress memory pages; if it can get 50% compression the page goes into a special memory location known as the compression cache (which can also be on SSD).
  4. (State: Low) If compression is not possible then the memory is sent to the host swap .vswp file.

You can tell what state your host is in by using esxtop.

  • Log into esxtop
  • Press m for memory
  • Find the state at the top

In my case it’s the high state, so no memory reclamation is going on.   The state is determined by a sliding scale configured by ESXi, stored in the advanced variable Mem.MinFreePct.  Once this limit has been crossed the state will change until the host is back under the limit.

We can also use esxtop to see how much memory is ballooning, on the same page as before:

  • Log into esxtop
  • Press m for memory
  • Press V for virtual machines only
  • Find the MEMCTL/MB column to see how much ballooning we are doing.

You can see I am ballooning 633 MB.
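You can pull the same counters over the API instead of esxtop; a sketch using the pyVmomi connection from earlier (quickStats values are reported in MB):

  view = content.viewManager.CreateContainerView(
      content.rootFolder, [vim.VirtualMachine], True)
  for vm in view.view:
      qs = vm.summary.quickStats
      print(vm.name, 'ballooned:', qs.balloonedMemory, 'swapped:', qs.swappedMemory)
  view.Destroy()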

How do I create an artificial situation to simulate ballooning and swapping?

Just put your VMs in a resource pool with a memory limit and choke them.
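A pyVmomi sketch of building that choke pool (the cluster path, pool name and 1 GB limit are all placeholders; CreateResourcePool requires both the CPU and memory allocations to be filled in):

  def alloc(limit=-1):
      a = vim.ResourceAllocationInfo()
      a.reservation = 0
      a.expandableReservation = True
      a.limit = limit  # -1 means unlimited
      a.shares = vim.SharesInfo(level=vim.SharesInfo.Level.normal, shares=4000)
      return a

  cluster = content.searchIndex.FindByInventoryPath('MyDatacenter/host/MyCluster')
  spec = vim.ResourceConfigSpec(cpuAllocation=alloc(),
                                memoryAllocation=alloc(limit=1024))  # cap at 1 GB
  pool = cluster.resourcePool.CreateResourcePool(name='choke-pool', spec=spec)

Move the test VMs into the pool and watch MEMCTL/MB climb in esxtop.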

Memory Usage:

What is the difference between Consumed host memory and active guest memory?

  • Consumed host memory – the amount of physical host memory currently allocated to the virtual machine
  • Active guest memory – the amount of memory actively in use by the guest and its applications