VMware port group port binding change

So you just read my post on why you will want to change your vCenter port groups to ephemeral port binding… but how is it done in the 5.1 web client?  Well, here you go:

  1. Click vCenter
  2. Click Distributed Switches
  3. Select your dvSwitch
  4. Select your port group
  5. Click Manage and then Settings
  6. Select Edit
  7. Select the drop-down on Port binding
  8. Select Ephemeral
  9. Select OK

That’s it and you’re all set.

Ran out of ports on a VMware switch, now what?

This is a common issue… you have run out of ports on your dVS or virtual switch port group.   You are using static port binding, which means you have a set number of total ports and no more.  How do you fix it? There are two ways:

  • Increase the number of ports
  • Enable auto-expand

Increase the number of ports:

Remember that static binding assigns a port per virtual machine… it does not matter if the VM is powered on or not; as long as it’s on the port group it gets a static port.  To increase the number of ports in the web client (5.1) do the following:

  • Click vCenter
  • Click Distributed Switches
  • Select your dvSwitch
  • Select your port group
  • Click Manage and then Settings
  • Select Edit

Adjust the number of ports and click OK.

Enable auto-expand:

To enable auto-expand in the web client (5.1) do the following:

  • Click vCenter
  • Click Distributed Switches
  • Select your dvSwitch
  • Select your port group
  • Click Manage and then Settings
  • Select Edit

This time choose Port allocation, select Elastic, and click OK.

Just remember that new ports on a dvSwitch are allocated by vCenter, so vCenter has to be available to provide them.

You can also automate this with vMA using the following command:

updatedvPortgroupAutoExpand.pl --operation enable --dvportgroup portgroupname
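
To turn it back off later, the same script presumably supports a disable operation; check the script’s usage output, since this flag is an assumption on my part:

updatedvPortgroupAutoExpand.pl --operation disable --dvportgroup portgroupname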

Building a custom ISO for VMware

This has been a pain of mine for a while now… I have to use the customized HP image, but I need some additional storage drivers to support VAAI.  So I have to install HP’s ESXi and then install the drivers later… with this tool on a Windows box I can bundle them together without much pain:

ESXi-Customizer -> http://www.v-front.de/p/esxi-customizer.html

Free VMware Training

Right now VMware is offering a whole bunch of free training courses that you should be taking advantage of.

Just head over to http://mylearn.vmware.com, log in or create an account, and select the free eLearning tab to see the full list.

VMware virtual disk types

VMware supports three different types of disks at this point (5.1):

  • Eager-zeroed thick
  • Lazy-zeroed thick
  • Thin

Eager-zeroed thick:

Disk space is allocated and zeroed out at creation time.   It takes the longest time to create but provides the best possible performance at first use. Mostly used for MSCS and FT virtual machines.

Lazy-zeroed thick:

Disk space is allocated but not zeroed at creation time.  The first time your operating system writes to a new block, it is zeroed out first.   Performance is a little lower than eager-zeroed on the first write to each block, then equal on every additional write to the same block.

Thin:

Disk space is allocated and zeroed on demand, as the guest writes to new blocks.
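
If you ever create disks by hand from the ESXi shell, vmkfstools exposes these same three types through its disk format flag.  A quick sketch (the 10G size and datastore path are just examples):

vmkfstools -c 10G -d eagerzeroedthick /vmfs/volumes/datastore1/vm1/vm1_ezt.vmdk
vmkfstools -c 10G -d zeroedthick /vmfs/volumes/datastore1/vm1/vm1_lzt.vmdk
vmkfstools -c 10G -d thin /vmfs/volumes/datastore1/vm1/vm1_thin.vmdk

Here zeroedthick is what the GUI calls lazy-zeroed thick.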

Which one do you choose?

Well, that depends on your needs.  If performance is your critical issue then eager-zeroed thick is the only choice.  If you need to save disk space, or you doubt that your customer will really use the 24 TB of disk space they have requested, then thin provisioning is the choice.  Lazy-zeroed is something in between the two.  At this point VMware recommends lazy-zeroed.

How do I switch?

As of ESXi 5 you have two choices: Storage vMotion and inflate.  When initiating a Storage vMotion you have the option to choose any of the three formats above and convert the disk.  You can also turn a thin disk into thick by finding the flat file using the datastore browser and selecting Inflate.
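
If you prefer the command line to the datastore browser, vmkfstools can also inflate a thin disk in place.  A minimal sketch, assuming the VM is powered off and using an example path (point it at the descriptor .vmdk, not the -flat file):

vmkfstools -j /vmfs/volumes/datastore1/vm1/vm1.vmdk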

SCSI controller type (chosen when you add the first disk):

Much like disk type there are many choices:

  • BusLogic Parallel
  • LSI Logic Parallel
  • LSI Logic SAS – Requires Hardware 7 or later
  • VMware Paravirtual – Requires Hardware 7 or later

Paravirtual is a virtualization-optimized (non-emulated) adapter that requires VMware Tools drivers in order to work.  Paravirtual adapters provide the best performance but can only be used in newer operating systems.  Also, they cannot be used on boot devices.   Normally your OS selection handles the best SCSI type for you.

SCSI Bus Sharing:

When you add a new SCSI bus you have options for the SCSI type, but it also gives you the following sharing options (these can only be changed when the bus is added or when the VM is powered down):

  • None – Virtual disks cannot be shared
  • Virtual – Virtual disks can be shared between virtual machines on the same server
  • Physical – Virtual disks can be shared between virtual machines on any server

Of course you still need a clustered file system, but if you plan on using this setup then select Physical.

SCSI bus location:

Each virtual machine can have up to 4 SCSI buses, each with its own controller.  Lots of people have questioned the advantage of multiple buses in VMware.  In traditional hardware you have multiple buses to provide redundancy in case of a bus failure.  This does not apply to virtual hardware, but it still provides the guest operating system multiple channels to handle I/O, which is always a good thing.

Mode:

  • Independent (Not affected by snapshots)
  • Virtual (Default)

Independent Mode:

  • Persistent (Changes are written to disk) – great for databases and other data where a snapshot does not make sense.
  • Nonpersistent (Changes to this disk are discarded when you power off or revert to a snapshot) – Used on lab computers, kiosks, etc.

VMware virtual network interface cards: which do I choose?

Classic question: there are all these virtual network adapter types, which one do I choose?  99% of the people you talk to will tell you they let VMware choose when they select the operating system.  This picks a compatible network adapter type, but not always the best one.  Each generation of virtual adapter brings better performance and features.  As a rule of thumb you want the highest vmxnet adapter your system supports.  As of ESXi 5 the following adapters are available, listed in order of preference (worst to best):

  • Flexible – Can function as either vlance or vmxnet (it will be vlance unless VMware Tools is installed).  vlance is an emulated 10 Mbps NIC available on almost all operating systems; vmxnet is the first generation of virtualization-only network card and requires VMware Tools to be installed.
  • e1000 – An emulated Intel 82545EM Gigabit Ethernet NIC with support in most operating systems.  It is the default adapter for all 64-bit operating systems and is required for guest VLAN tagging.
  • vmxnet2 – Updated version of vmxnet that adds VLAN tagging, jumbo frames, and hardware offload along with additional high-performance features.
  • vmxnet3 – Not really related to vmxnet2, but it represents the next generation of NIC drivers.  It includes all the features of vmxnet2 plus multiqueue support, IPv6 offloads, and MSI/MSI-X interrupt delivery.  This driver has limited OS support, requires VMware Tools like all vmxnet adapters, and requires virtual hardware version 7 (ESXi 4 at least).

How do I choose?  The best answer is to consult VMware’s knowledge base:

http://kb.vmware.com/kb/1001805
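
If you want to double-check which driver a Linux guest actually ended up with, ethtool will tell you (the interface name will vary by guest):

ethtool -i eth0

The driver field in the output will read vmxnet3, e1000, and so on.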

Issues with multiple CPUs in VMware

For the longest time I would not allow people to power on virtual machines with more than one CPU without proving to me that they needed more than one virtual CPU.  This is due to strict co-scheduling in VMware.   Over the years they have relaxed this a lot, making multi-CPU virtual machines a lot more practical.  To understand the issue, take a look at the following diagram:

The assumption in this diagram is rather silly, but it helps me explain the problem.  It is important to take into account that this diagram assumes both virtual processors require 100% of the physical CPU, so they have to share it.   This problem is not too bad.  This is known as co-scheduling of the virtual CPUs.  In vSphere 4 co-scheduling was done at the virtual machine level, meaning this 2 vCPU VM could not run on a single-core, single-socket physical system, because both vCPUs had to be scheduled to run at the same time.   Since vSphere 5 now schedules per vCPU, we have a lot more flexibility.  The co-stop mechanism allows us to get reasonable performance on each vCPU.

It is critical to understand that adding more vCPUs may not always provide better performance.  If there is any contention for CPUs then co-stop kicks in, causing slowdowns.  You can monitor this in esxtop with the following settings:

  • Press c for the CPU view
  • Press uppercase V to show only virtual machines

The key fields here are:

  • %USED – (CPU used time) % of CPU used at the current time.  The scale is 100 × the number of vCPUs, so if you have 4 vCPUs and %USED shows 100, you are using 100% of one CPU, or 25% of the four vCPUs.
  • %RDY – (Ready) % of time a vCPU was ready to be scheduled on a physical processor but could not be due to contention.  You do not want this above 10% and should look into anything above 5%.
  • %CSTP – (Co-stop) % of time a vCPU was stopped waiting for access to a physical CPU; high numbers here represent problems.  You do not want this above 5%.
  • %MLMTD – (Max limited) % of time the vCPU was ready to run but was not scheduled because of a CPU limit (you have a limit setting).
  • %SWPWT – (Swap wait) % of time the VM spent waiting for memory pages that have been swapped out to be read back in.
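
If you would rather capture these counters over time than watch them live, resxtop on the vMA can do the same job in batch mode.  A sketch, assuming the host was already added with vifp and using an example host name:

resxtop --server esxi01.example.com -b -d 5 -n 12 > cpu-stats.csv

That collects 12 samples at 5 second intervals into a CSV you can open in perfmon or a spreadsheet to check the %RDY and %CSTP columns.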

VMware Scenarios: CPU-sensitive application

VMware Scenarios are hypothetical situations and solutions.

This came from a post on the official VMware forums:

– We have an ESXi server with a single-socket, 4-core CPU with Hyper-Threading.  We currently have 2 VMs with 2 vCPUs each and the performance is great.  We want to add a third VM and split up the cores among the 3 VMs.  The only issue is that the third VM will be running a really old application on Windows XP.  This application is very CPU sensitive, a lack of resources will cause it to crash, and it’s single threaded.   How do I solve this issue?

This is a fine scenario that ties directly into the issues discussed in this article.   There are a few questions left unanswered.

First, are the first two VMs using over 75% of their two vCPUs?

– Why do I ask this question?  Well, most of the time servers are not using much of their CPU.  It seems to be the most underutilized resource in most ESXi setups.  Unless we have resource contention, processor co-stop does not figure into the equation.

So we are going to assume that the processors are not 75% or more utilized all the time, which means co-stop is not an issue.  Given that assumption, the solution might be as simple as a reservation:

Take your new virtual machine and use a CPU reservation to ensure it always gets 100% of its single vCPU.  Your single-threaded application will never fail, and since you’re not using 75% or more of the resources on your other VMs, you should never even notice.

What about Hyper-Threading?

Well, there are lots of theories on this matter and it really depends on your situation.  Unless you have oversubscribed your CPUs, none of the theories really come into play.

VMware vMA basics

What is the vMA?

vMA stands for vSphere Management Assistant.  It is a virtual appliance provided by VMware, running SUSE Linux 11, with some custom VMware commands for scripting and authentication that allow you to manage a VMware infrastructure.   You can download it from VMware for free from this location:

http://www.vmware.com/support/developer/vima/

To get it into your infrastructure you deploy it onto an ESXi host like any other virtual appliance:

From the web client

  • vCenter -> Hosts -> select your ESXi host
  • Actions -> Deploy OVF Template
  • Go through the prompts to deploy

Once the OVF has been deployed you can power it on.  It will ask you for IP information.  Once it’s set up with an IP you can manage it via the web interface at https://your_ip:5480.  From here you can configure common settings and update the machine.   You just need to log in as vi-admin with the default password of vmware.  Make sure to change this default password in the web interface.

Command Line

The vMA’s real power is in the command line, which you can access via SSH by logging in as vi-admin.    VMware has included all of their common commands, but one command is especially powerful: vifp (vi-fastpass).  It allows you to store login information for ESXi hosts so you don’t have to type credentials in every script.   You can use it like this:

To add a server (you should do this for vCenter and each ESXi host):

vifp addserver vcenter_or_esxi_host

To see what servers you have in vifp type the following:

vifp listservers
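
And if you ever retire a host, you can drop it from fastpass again:

vifp removeserver vcenter_or_esxi_host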

There are times when you want to manage only one host at a time; you can select your target server with this command:

vifptarget -s server_to_manage_name_from_listservers

Once you have selected a target you will notice it as part of your command line prompt.

Once you have added hosts and selected a target, any VMware commands you run will be executed against the target host.

You can also add the vMA to Active Directory so you don’t have to store the passwords in vifp.  This is done via:

sudo domainjoin-cli join domain_name domain_admin_user

Commands:

Since the vMA is running SUSE Linux, a full-featured Linux shell is available to us, which means there are a lot of things we can do to make our lives easier.   You can use aliases and variables.  For example, if I wanted to execute the command:

esxcli hardware cpu list

I could use variables to help:

variable_name="data for variable"
For example:

cmd="hardware cpu list"

Then I could create an alias for the esxcli command:

alias e="esxcli"

Then I could run the following command:

e $cmd

And that would execute

esxcli hardware cpu list

If you want to keep these aliases or variables, you can add them to your environment by editing your profile:

vi ~/.profile

Add your variables and aliases and they will be available to you when you log in.  One word of caution: these variables are not available to your scripts unless you set them there yourself.  Any variables you need in scripts should be defined in the scripts.

Some common aliases I use are the following:

alias e='esxcli'
alias v='vmware-cmd'
conn="Your connection information for older commands"
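
As noted above, anything a script needs has to be defined inside the script itself, since ~/.profile is only read for interactive logins.  A minimal sketch (it assumes you have already targeted a host with vifptarget in the session that launches it):

#!/bin/bash
# define the variable here rather than relying on ~/.profile
cmd="hardware cpu list"
esxcli $cmd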

eth0 missing after cloning a Linux virtual machine

This is a fun one.  I never used to have this problem with RHEL 5, but RHEL 6 and Debian-based distros have had this issue all along.   You clone from a Linux template and you get a new MAC address, but the new interface comes up as eth1, or if you have been through multiple generations you might have eth6 or eth7.  How do you clear this up?  Well, it’s all down to the fact that the operating system keeps track of the MAC address in its udev rules.  So open up /etc/udev/rules.d/70-persistent-net.rules and delete all the MAC address entries.  Once you reboot the machine your interface should come up as eth0 again.

In Debian it is normally named: /etc/udev/rules.d/z25_persistent-net.rules.
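
If you would rather script it than edit the file by hand, something like this does the trick as root on RHEL 6 (adjust the filename for Debian as noted above):

cp /etc/udev/rules.d/70-persistent-net.rules /root/70-persistent-net.rules.bak
> /etc/udev/rules.d/70-persistent-net.rules
reboot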

Enjoy!