Migrating off a distributed virtual switch to a standard switch: Article 2

Normally people want to migrate from virtual standard switches to distributed switches. I am a huge fan of the distributed switch and feel it should be used everywhere. The distributed switch becomes a challenge, however, when you want to migrate hosts to a new vCenter. I have seen a lot of migrations to new vCenters done by detaching the ESXi hosts and connecting them to the new vCenter. This process works great as long as you are not using the distributed switch. Removing or working with VMs on a ghosted VDS is a real challenge, so remove the VDS before you migrate to a new vCenter.

In this multi-article series I'll provide the steps to migrate off a VDS to a VSS.

Article 2: Migrating the host off the VDS. In the previous article we moved all the virtual machines off the VDS to a VSS. We now need to migrate the vMotion and management interfaces off the VDS to a VSS. This step will interrupt the management of the ESXi host. Virtual machines will not be interrupted, but management traffic will be. You must have console access to the ESXi host for this to work. Steps at a glance:

  1. Confirm that port groups exist for management and vMotion
  2. Remove vMotion, FT, etc. from the VDS and add to the VSS
  3. Remove management from VDS and add to VSS
  4. Confirm settings

Confirm that port groups exist for management and vMotion

Before you begin, examine the VSS to confirm that the management and vMotion port groups were created correctly by Article 1's script. Once you are sure the VLAN settings for the port groups are correct, you can move to the next step. You may also want to confirm your host isolation settings: it's possible these steps will trigger an HA isolation response if you take too long to switch over and don't have independent datastore heartbeat networking. Best practice would be to disable HA, or to switch the isolation response to "Leave powered on", for the duration of the change.
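A quick way to eyeball the port groups across the whole cluster is PowerCLI; here is a minimal sketch reusing the variables from Article 1's script (assumes you are already connected to vCenter):

    #List every port group and VLAN on the new standard switch for each host in the cluster
    foreach ($vmhost in (Get-Cluster -Name $cluster | Get-VMHost))
    {
        $vmhost | Get-VirtualSwitch -Name $standardSwitchName | Get-VirtualPortGroup |
            Select-Object @{N="Host";E={$vmhost.Name}}, Name, VLanId
    }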

Remove vMotion, FT, etc. from the VDS and add to the VSS

Log in to the ESXi host via console or SSH. (Comments are preceded with #.)

#use the following command to identify virtual adapters on your dvs

esxcfg-vswitch -l

# sample output from my home lab

DVS Name         Num Ports   Used Ports  Configured Ports  MTU     Uplinks

dvSwitch         1792        7           512               1600    vmnic1

 

  DVPort ID           In Use      Client

  675                 0

  676                 1           vmnic1

  677                 0

  678                 0

  679                 1           vmk0

  268                 1           vmk1

  139                 1           vmk2

 

# We can see we have three VMkernel adapters on our host. Use the following command to identify their use and IP addresses

esxcfg-vmknic -l

# Sample output from my home lab (some details cut out to make it more readable)

Interface  Port Group/DVPort   IP Family IP Address     

vmk0       679                 IPv4      192.168.10.16                

vmk1       268                 IPv4      192.168.10.26                   

vmk2       139                 IPv4      192.168.10.22     

 

Align your vmk# numbers with vCenter to identify which adapter provides which function (in my case vmk0 = management, vmk1 = vMotion, vmk2 = FT).
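If you prefer to confirm this from PowerCLI rather than clicking through vCenter, a quick sketch (the host name is an example; assumes an active Connect-VIServer session):

    #List each VMkernel adapter with its IP, port group, and enabled services
    Get-VMHost -Name "esxi01.lab.local" | Get-VMHostNetworkAdapter -VMKernel |
        Select-Object Name, IP, PortGroupName, ManagementTrafficEnabled, VMotionEnabled, FaultToleranceLoggingEnabled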

 

# We can now move all adapters other than management (which in my case is vmk0). We will start with vmk1 on dvSwitch port 268

esxcfg-vmknic -d -v 268 -s "dvSwitch"

 

# Then re-create vmk1 on the standard switch vMotion port group

esxcfg-vmknic -a -i 192.168.10.26 -n 255.255.255.0 -p PG-vMotion

 

Remove FT and add it to the VSS:

esxcfg-vmknic -d -v 139 -s "dvSwitch"

 

esxcfg-vmknic -a -i 192.168.10.22 -n 255.255.255.0 -p PG-FT

 

Remove management from VDS and add to VSS

Remove management (this stage will interrupt management access to the ESXi host, so make sure you have console access). You might want to pre-type the add command in the console before you execute the remove. If you are having trouble getting a shell on an ESXi host, do the following:

  • Log in to the console and go to Troubleshooting Options -> Enable ESXi Shell

  • Press Ctrl-Alt-F1 to enter the shell and log in

 

Remove management:

esxcfg-vmknic -d -v 679 -s "dvSwitch"

 

Add management to VSS:

esxcfg-vmknic -a -i 192.168.10.16 -n 255.255.255.0 -p PG-Mgmt

 

Confirm settings

Ping the host to ensure management networking has returned. Wait a couple of minutes and confirm the host reconnects in vCenter. After you move the host to the new vCenter you can remove the leftover VDS:

  • Go to the host in vCenter, select the DVS under the host's networking configuration, and it should provide a remove option.
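If the host is still attached to the original vCenter, you can also remove it from the distributed switch with PowerCLI instead of the GUI; a sketch (the host name is an example, and the host must no longer have any VMkernel or virtual machine networking on the DVS):

    #Remove the host from the distributed switch once nothing depends on it
    Remove-VDSwitchVMHost -VDSwitch "dvSwitch" -VMHost "esxi01.lab.local" -Confirm:$false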

 

 

 

Migrating off a distributed virtual switch to a standard switch: Article 1

Normally people want to migrate from virtual standard switches to distributed switches. I am a huge fan of the distributed switch and feel it should be used everywhere. The distributed switch becomes a challenge, however, when you want to migrate hosts to a new vCenter. I have seen a lot of migrations to new vCenters done by detaching the ESXi hosts and connecting them to the new vCenter. This process works great as long as you are not using the distributed switch. Removing or working with VMs on a ghosted VDS is a real challenge, so remove the VDS before you migrate to a new vCenter.

In this multi-article series I'll provide the steps to migrate off a VDS to a VSS.

It's important to understand that, assuming the networking is correct, this process should not interrupt customer virtual machines. The move from a distributed switch to a standard switch will at most lose a single ping: when you assign the new network adapter, a gratuitous ARP is sent out of the new adapter. If you only have two physical network adapters, be aware that this process does remove network adapter redundancy while you are moving.

Step 1: Create a VSS with the same port groups

You need to create a standard switch with port groups on the correct VLAN IDs. You can do this manually, but one of the challenges of the standard switch is that the port group names must be exactly the same, including case, to avoid vMotion errors. (One great reason for the VDS.) So we need to use a script to create the standard switch and port groups. I used PowerCLI (sorry Orchestrator friends, I didn't do it in Orchestrator this time).

Code:

#Import modules for PowerCLI

    Import-Module -Name VMware.VimAutomation.Core

    Import-Module -Name VMware.VimAutomation.Vds

 

  #Variables to change

    $standardSwitchName = "StandardSwitch"

    $dvSwitchName = "dvSwitch"

    $cluster = "Basement"

    $vCenter = "192.168.10.14"

 

    #Connect to vCenter

    connect-viserver -server $vCenter

 

 

 

  $dvsPGs = Get-VirtualSwitch -Name $dvSwitchName | Get-VirtualPortGroup | Select Name, @{N="VLANId";E={$_.Extensiondata.Config.DefaultPortConfig.Vlan.VlanId}}, NumPorts

 

  #Get all ESXi hosts in a cluster

  $vmhosts = get-cluster -Name $cluster | get-vmhost

 

    #Loop ESXi hosts

    foreach ($vmhost in $vmhosts)

    {

        #Create new VSS

        $vswitch = New-VirtualSwitch -VMHost $vmhost -Name $standardSwitchName -Confirm:$false

 

        #Loop through the port groups and create them on the VSS

        foreach ($dvsPG in $dvsPGs)

        {

            #Validate the VLAN ID is a number; the DVUplinks port group returns an array

            if ($dvsPg.VLANId -is [int] )

            {

                New-VirtualPortGroup -Name $dvsPG.Name -VirtualSwitch $vswitch -VlanId $dvsPG.VLANId -Confirm:$false

            }

 

        }

 

    } 

 

Explained:  

  • Provide variables

  • Connect to vCenter

  • Get all port groups into $dvsPGs

  • Get all ESXi hosts

  • Loop through ESXi hosts one at a time

  • Create the new standard switch

  • Loop through port groups and create them with same name as DVS and VLAN ID

 

This will create a virtual standard switch with the same VLAN and port group configuration as your DVS.    

 

I like to be able to validate that the source and destination are configured the same, so this PowerCLI script provides the check:

Code:

#Validation check DVS vs VSS for differences

 

    $dvsPGs = Get-VirtualSwitch -Name $dvSwitchName | Get-VirtualPortGroup | Select Name, @{N="VLANId";E={$_.Extensiondata.Config.DefaultPortConfig.Vlan.VlanId}}, NumPorts

    #Get all ESXi hosts in a cluster

    $vmhosts = get-cluster -Name $cluster | get-vmhost

 

    #Loop ESXi hosts

    foreach ($vmhost in $vmhosts)

    {

        #Write-Host "Host: "$vmhost.Name "VSS: "$standardSwitchName

 

        #Get VSSPortgroups for this host

        $VSSPortGroups = $vmhost | Get-VirtualSwitch -Name $standardSwitchName | Get-VirtualPortGroup

            #Loop through the DVS port groups

            foreach ($dvsPG in $dvsPGs)

            {

                if ($dvsPg.VLANId -is [int] )

                {

                #Write "VSSPortGroup: " $VSSPortGroup.Name

                #Loop on DVS

                $match = $FALSE

                foreach ($VSSPortGroup in $VSSPortGroups)

                {

                    if ($dvsPG.Name -eq $VSSPortGroup.Name)

                    {

                        #Write-Host "Found a Match vss: "$VSSPortGroup.Name" to DVS: "$dvsPG.Name" Host: " $vmhost.name

                        $match = $TRUE


                    

                    }

 

                }

                if ($match -eq $FALSE)

                {

                    Write-Host "Did not find a match for DVS: "$missing " on "$vmhost.name

 

                }

 

            }

            }

 

    } 

 

Explained:

  • Get the VDS

  • Get all ESXi hosts

  • Loop through VM hosts

  • Get port groups on standard switch

  • Loop through the DVS port groups and look for a match on the standard switch

  • If missing then output missing element

 

 

Now we need to give the standard switch an uplink (this is critical; otherwise VMs will fail when moved).
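You can add the uplink in the vSphere client, or with PowerCLI; a minimal sketch reusing the $vmhosts and $standardSwitchName variables from the script above (it assumes each host has a spare physical NIC, vmnic2 in this example, that is safe to move to the new switch):

    #Give the new standard switch a physical uplink on every host in the cluster
    foreach ($vmhost in $vmhosts)
    {
        $vss = Get-VirtualSwitch -VMHost $vmhost -Name $standardSwitchName
        $pnic = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name "vmnic2"
        Add-VirtualSwitchPhysicalNetworkAdapter -VirtualSwitch $vss -VMHostPhysicalNic $pnic -Confirm:$false
    }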

 

Once it has an uplink you can use the following script to move all virtual machines:

 

Code:

#Move Virtual machines to new Adapters

 

    $vms = get-vm 

 

    foreach ($vm in $vms)

      {

        #Grab the standard switch on the VM's host

        $vss = Get-VirtualSwitch -Name $standardswitchname -VMHost $vm.VMHost

        #check that the virtual switch has at least one physical adapter

        if ($vss.ExtensionData.Pnic.Count -gt 0)

        {

        #Get the network adapters on this VM

        $adapters = $vm | Get-NetworkAdapter 

 

        #Loop through adapters

        foreach ($adapter in $adapters)

        {

            #Get the VSS port group of the same name (this returns the port group from every host)

            $VSSPortGroups = Get-VirtualPortGroup -Name $adapter.NetworkName -VirtualSwitch $standardSwitchName

   

            #Loop the hosts

            foreach ($VSSPortGroup in $VSSPortGroups)

            {

                #Search for the PortGroup on our host

                if ([string]$VSSPortGroup.VMHostId -eq [string]$vm.VMHost.Id)

                {

                    #Change network Adapter to standard switch

                    Set-NetworkAdapter -NetworkAdapter $adapter -Portgroup $VSSPortGroup -Confirm:$false

                }

            }

        }

        }

    } 

 

Explained:  

  • Used same variables from previous script

  • Get all virtual machines (you could use get-vm "name-of-vm" to test a single VM)

  • Loop through all virtual machines one at a time

  • Get the VSS for the VM (host specific)

  • Check for at least one physical uplink to switch (gut / sanity check)

  • Loop through the adapters on a virtual machine

  • For each adapter, find the VSS port group with the same name and switch the adapter to it

 

 

 

 

 

VMkernel types updated with design guidance for multi-site

Holy crap, what do all these VMware VMkernel types mean? I started this article and realized I had already written one here. It's sad when Google leads you to something you wrote… looks like I don't remember too well… perhaps I should just go yell for the kids to get off my lawn now. I wanted to take a minute to revise my post with some new things I have learned and some guidance.


From my previous post:

  • vMotion traffic – Required for vMotion – Moves the state of virtual machines (active data disk svMotion, active memory and execution state) during a vMotion
  • Provisioning traffic – Not required; uses the management network if not set up – cold migration, cloning and snapshot creation (powered-off virtual machines = cold)
  • Fault tolerance traffic (FT) – Required for FT – Enables fault tolerance traffic on the host; only a single adapter may be used for FT per host
  • Management traffic – Required – Management of the host and communication with vCenter Server
  • vSphere Replication traffic – Only needed if using vSphere Replication – outgoing replication data from the ESXi host to the vSphere Replication server
  • vSphere Replication NFC traffic – Only needed if using vSphere Replication – handles incoming replication data on the target replication site
  • Virtual SAN – Required for VSAN – Virtual SAN traffic on the host
  • VXLAN – Used for NSX; not controlled from the add VMkernel interface wizard

I wanted to provide a little better explanation around the design choices for some of these interfaces. Specifically I want to focus on vMotion and Provisioning traffic. Let's create a few scenarios and see which interface is used, assuming I have all the VMkernel interfaces listed above:

  1. VM1 is running and we want to migrate from host1 to host2 at datacenter1 – vMotion
  2. VM1 is running with a snapshot and we want to migrate from host1 to host2 at datacenter1 – Provisioning traffic (if it does not exist management network is used)
  3. VM1 is running with a snapshot and we want to storage migrate from host1 DC1 to host4 DC3 – storage vMotion – Provisioning traffic (if it does not exist management network is used)
  4. VM1 is not running and we want to migrate from host1 to host2 at datacenter1 – Provisioning traffic (very low bandwidth used)
  5. VM1 is not running has a snapshot and we want to migrate from host1 to host2 at datacenter1 – Provisioning traffic (very low bandwidth used)
  6. VM2 is being created at datacenter1 – Provisioning traffic

 

So, design guidance: in a multi-site implementation you should have the following interfaces if you wish to separate traffic onto its own TCP/IP stack or use Network I/O Control to avoid noisy-neighbor situations. (Or you could just assign it all to the management vmk and go nuts on that one interface = bad idea.)

  • Management
  • vMotion
  • Provisioning

Use of the other VMkernel interfaces depends on whether you are using replication, vSAN, or NSX.

Should you have multi-nic vMotion? 

Multi-NIC vMotion enables faster vMotion of multiple virtual machines off a host (as long as they don't have snapshots). It is still a good idea if you have large VMs or lots of VMs on a host.
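If you do set it up, here is a minimal PowerCLI sketch of multi-NIC vMotion (the host, switch, port group names, and IPs are examples; each vMotion port group should be pinned to a different active uplink in its NIC teaming policy):

    $vmhost = Get-VMHost -Name "esxi01.lab.local"
    $vss = Get-VirtualSwitch -VMHost $vmhost -Name "vSwitch0"
    #First vMotion VMkernel interface
    New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vss -PortGroup "vMotion-1" -IP 192.168.20.16 -SubnetMask 255.255.255.0 -VMotionEnabled:$true
    #Second vMotion VMkernel interface on its own port group
    New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vss -PortGroup "vMotion-2" -IP 192.168.21.16 -SubnetMask 255.255.255.0 -VMotionEnabled:$true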

Should you have multi-nic Provisioning?

I have no idea if it's even supported or a good idea. The Provisioning network is used for long-distance vMotion, so the idea might be sound… I would not use it today.

Configuring an NSX load balancer from the API

A customer asked me this week if there were any examples of customers configuring the NSX load balancer via vRealize Automation. I was surprised when Google didn't turn up any examples. The NSX API guide (which is one of the best guides around) provides the details for how to call each element. You can download it here. Once you have the PDF you can navigate to page 200, which is the start of the load balancer section.

Too many Edge devices

NSX load balancers are Edge services gateways. A normal NSX environment may have a few Edges while others may have hundreds, but not all of them are load balancers. A quick API lookup of all Edges provides this information (my NSX Manager is 192.168.10.28, hence its use in all examples):
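If you want to reproduce that lookup yourself, here is a hedged PowerShell sketch (the /api/4.0/edges path comes from the NSX API guide, so verify it against your version; -SkipCertificateCheck needs PowerShell 6+, so handle certificate trust differently on Windows PowerShell 5.x):

    #Query the NSX Manager for every deployed Edge (returns XML, authenticated as the NSX admin account)
    $cred = Get-Credential
    $allEdges = Invoke-RestMethod -Uri "https://192.168.10.28/api/4.0/edges" -Method Get -Credential $cred -SkipCertificateCheck
    #Each Edge in the response carries an id element (for example edge-57) that the later calls need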

 

This output is for a single Edge gateway; in my case I have had 57 Edges deployed over the life of my NSX environment and 15 are active right now, but only edge-57 is a load balancer. The report does not provide anything that can be used to distinguish an Edge acting as a load balancer from an Edge acting as a firewall. In order to identify whether it's a load balancer I have to query its load balancer configuration using:
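The same sketch style for that query (again, check the path against your NSX version):

    #Pull the load balancer configuration for one Edge by adding its id to the path
    Invoke-RestMethod -Uri "https://192.168.10.28/api/4.0/edges/edge-57/loadbalancer/config" -Method Get -Credential $cred -SkipCertificateCheck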

Notice the addition of the edge-57 id to the query. The response shows that this edge has the load balancer feature enabled (enabled is true) along with some default monitors. By comparison, an edge without the feature returns the same default monitors but with enabled set to false. So now we know how to identify which edges are load balancers:

  • Get the list of all Edges via the API and pull out the id element
  • Query each id for its load balancer configuration and match on enabled = true

 

 

Adding virtual servers

You can add virtual servers, assuming the application profile and pools are already in place, with a POST command and an XML body payload (the virtual server IP must already be assigned to the Edge as an interface):

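As a hedged sketch of that call (the names, IDs, and IP address are examples, and the element names are taken from the NSX API guide, so check them against your version):

    #XML body describing the new virtual server (example values)
    $body = '<virtualServer><name>web-vip</name><ipAddress>192.168.10.80</ipAddress>' +
            '<protocol>http</protocol><port>80</port><applicationProfileId>applicationProfile-1</applicationProfileId>' +
            '<defaultPoolId>pool-1</defaultPoolId><enabled>true</enabled></virtualServer>'
    #POST the virtual server to the Edge load balancer configuration
    Invoke-RestMethod -Uri "https://192.168.10.28/api/4.0/edges/edge-57/loadbalancer/config/virtualservers" -Method Post -Body $body -ContentType "application/xml" -Credential $cred -SkipCertificateCheck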

A quick query of the Edge's virtual servers shows it's been created. To delete one, just take its virtualServerId and pass it to a DELETE call against the same path.

 

Pool Members

For pools you have to update the full pool configuration to add a backend member (or, for that matter, to remove one). So you first query the existing pool configuration.

Then you form your PUT with the data elements you need (taken from the API guide).
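Here is a hedged sketch of that round trip (the pool id pool-1, member values, and element names are examples based on the API guide; verify them against your NSX version):

    #Get the current pool configuration as XML
    $pool = Invoke-RestMethod -Uri "https://192.168.10.28/api/4.0/edges/edge-57/loadbalancer/config/pools/pool-1" -Method Get -Credential $cred -SkipCertificateCheck
    #Append a new member element to the pool definition (example values)
    $member = $pool.CreateElement("member")
    $member.InnerXml = "<name>web02</name><ipAddress>192.168.10.32</ipAddress><port>80</port><weight>1</weight>"
    $pool.pool.AppendChild($member) | Out-Null
    #PUT the full pool definition back; the API replaces the whole pool configuration
    Invoke-RestMethod -Uri "https://192.168.10.28/api/4.0/edges/edge-57/loadbalancer/config/pools/pool-1" -Method Put -Body $pool.OuterXml -ContentType "application/xml" -Credential $cred -SkipCertificateCheck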

In the vSphere Web Client you can then see the member added to the pool.


Tie it all together

Each of these actions has corresponding query, update, and delete calls. The real challenge is taking the API inputs and presenting them as user-friendly inputs in vRealize. NSX continues to amaze me as a great product with a very powerful and well-documented API. I have run into very few issues trying to figure out how to do anything in NSX with the API. In a future post I may provide some vRealize Orchestrator actions to speed up configuration of load balancers.

 

 

 

 

 

 

 

 

 

VMkernel interfaces in vSphere 6

Not everyone has noticed the new types of VMkernel interfaces in vSphere 6. Here is a quick note to identify the types of interfaces available:

  • vMotion traffic – Required for vMotion – Moves the state of virtual machines (active data disk svMotion, active memory and execution state) during a vMotion
  • Provisioning traffic – Not required; uses the management network if not set up – cold migration, cloning and snapshot creation (powered-off virtual machines = cold)
  • Fault tolerance traffic (FT) – Required for FT – Enables fault tolerance traffic on the host; only a single adapter may be used for FT per host
  • Management traffic – Required – Management of the host and communication with vCenter Server
  • vSphere Replication traffic – Only needed if using vSphere Replication – outgoing replication data from the ESXi host to the vSphere Replication server
  • vSphere Replication NFC traffic – Only needed if using vSphere Replication – handles incoming replication data on the target replication site
  • Virtual SAN – Required for VSAN – Virtual SAN traffic on the host

The purpose of the multiple interface types is that you are now allowed to route all of these traffic types in vSphere 6, allowing you to segment the traffic even more. (In ESXi 5.x only management had its own TCP/IP stack.) I recommend that you create unique subnets for each of these traffic types you use. In addition, many of them support multiple concurrent NICs (like multi-NIC vMotion), which can improve performance. When possible, set up multi-NIC.

VCIX-NV VMware Certified Implementation Expert Network Virtualization Exam Experience

While attending VMworld I have made a habit of taking advantage of the discounted certifications. Each year I have pushed into a new certification and given it a try. This year I had my sights set on VCIX-NV. I normally like to schedule these tests during the general session to avoid the impact of a 3-hour test on VMworld sessions. (I can always watch the general sessions on YouTube later.) This year they closed my secret loophole, only allowing me to take it during Thursday's general session. That had a profound impact on my plans for Wednesday night: Wednesday night is the VMworld party, which is always really awesome. I am sad to say I skipped it 100% this year and spent the time studying and going to bed early. I am happy to announce I passed the exam and earned the VCP-NV and VCIX-NV in the same day. Here are some details on what I studied:

  • The VMware HOL labs
  • The blueprint documents (yes, I read all the documentation provided by VMware except the CLI guide… it's really more of a reference guide)
  • I was lucky to spend some time with Elver Sosa and Ron Flax (both VCDX-NVs) last year, which helped me understand the technology
  • The Pluralsight course on NSX by Jason Nash (these are really good)
  • Time in the HOL doing things not on the list (like more firewall rules)

 

This test requires that you do a series of live exercises that can build upon each other.  Some time management tips are:

  • Skip ahead if it's allowed and see how the tasks fit together
  • Read carefully what is expected; there are a lot of tips
  • Do what you can; partial credit counts (at least I think it does)
  • Spend time beforehand understanding how to troubleshoot NSX and verify your settings
  • Don't be afraid to skip a question if you really don't know it; time is not your friend

 

It was, like the VCAP-DCA test, something I really enjoyed doing… I really wish it didn't have the time crunch, but it was a fun exam. The best advice I can give you is to read the blueprint and documents and use the HOL from VMware to gain experience.

VCIX-NV Study Guide Objective 1.2

To see other posts in this series go here.

This section deals with upgrading from older versions of vShield to NSX. The simple answer is that there is a specific order that must be followed. Upgrades from vShield require version 5.5. Most of it is done in the GUI via vCenter, except for the vShield Manager, which is replaced by NSX Manager. Most of these processes roughly follow the documented process in this document.

Product name translation:

Roughly, here are the old names mapped to their new names or the new service providing the function:

vShield Manager -> NSX Manager

Virtual Wires -> NSX Logical Switch

vShield App -> NSX Firewall

vShield Edge -> NSX Edge

vShield Endpoint -> vShield Endpoint

vShield Data Security -> NSX Data Security

 

Practicing this process:

Unless you want to take a few hours configuring all of the vShield products, it's hard to practice. You can, however, do the upgrade from vShield Manager to NSX Manager really quickly. Just download the vShield Manager and set it up with the following:

  • Deploy OVF
  • Power on
  • Console login as admin with password of default
  • type enable with password of default
  • type setup
  • Setup your IP settings
  • Wait 5 minutes
  • Login via IP with web browser and do upgrade

The rest of the upgrade requires that you understand the vShield products, which is not required for NV, so I vote you skip it and just be familiar with the process, order, and requirements.

 

Objective 1.2 denotes the following items:

Upgrade vShield Manager 5.5 to NSX Manager 6.x.

Upgrading vShield Manager to NSX Manager can only be done from version 5.5 of vShield.  It also requires the following things:

  • vCenter 5.5
  • vShield Data Security uninstalled
  • vShield Edges be upgraded to 5.5

 

Process:

  1. Download the NSX upgrade bundle called vCNS to NSX for vSphere xxx Upgrade Bundle
  2. Login to vShield Manager and click Settings & Reports
  3. Click Updates tab and click upload upgrade bundle
  4. Click Choose file Browse to the vCNS to NSX for vSphere xxx Upgrade Bundle  and click open
  5. Click Upload file – this process will take some time
  6. Click Install to begin the upgrade process
  7. Click confirm install – this will reboot the vShield manager – none of the other components are rebooted
  8. After upgrade visit the ip address of your vShield manager again via https
  9. Login and look at summary page to confirm you are running NSX Manager
  10. Log off all windows and close your browser to clear cache
  11. Login to vSphere Web client
  12. Shut down your NSX Manager VM and increase its memory to 12 GB and vCPUs to 4

Upgrade NSX Manager 6.0 to NSX Manager 6.0.x
Upgrade Virtual Wires to Logical Switches

Virtual wires must be upgraded to NSX logical switches to use NSX features. The process is required even if you don't use virtual wires. In order for this to work you need to upgrade your vShield Manager to NSX Manager and make sure it's connected to vSphere.

Process

  • Login to Web client
  • Networking and Security Tab click install
  • Click host prepare
  • Virtual wires will show as Legacy
  • Click update on each wire
  • Wait for them to show green and no longer legacy

Upgrade vShield App to NSX Firewall

You can only upgrade vShield App 5.5 to NSX. It requires that vShield Manager be upgraded to NSX Manager and that virtual wires be upgraded to NSX logical switches.

  • A pop up window should ask if you want to upgrade
  • Click update and wait
  • Done

Upgrade vShield 5.5 to NSX Edge 6.x

This upgrade requires the following:

  • vShield 5.5
  • NSX Manager
  • Virtual wires upgraded to NSX logical switches

Processes:

  • Login to web client
  • Networking & Security tab
  • NSX Edges button
  • Select upgrade version from actions menu on each edge
  • After it completes, check the version number tab

Upgrade vShield Endpoint 5.x to vShield Endpoint 6.x

This upgrade requires the following:

  • vShield Manager upgraded to NSX Manager
  • Virtual wires upgraded to NSX Logical switches

Process:

  • Login to web client
  • Networking & Security tab
  • Click Installation
  • Click Service deployments tab
  • Click on upgrade available
  • Select datastore (must be shared) and network and ok

Upgrade to NSX Data Security

There is no clean upgrade path: you have to remove vShield Data Security before the NSX Manager upgrade and install NSX Data Security afterwards. You then have to re-register the solution with NSX if applicable.