VMkernel interfaces in vSphere 6

Not everyone has noticed the new types of VMkernel interfaces in vSphere 6. Here is a quick note to identify the types of interfaces available:

  • vMotion traffic – Required for vMotion – moves the state of virtual machines (active data disk for svMotion, active memory, and execution state) during a vMotion
  • Provisioning traffic – Not required; uses the management network if not set up – cold migration, cloning, and snapshot creation (powered-off virtual machines = cold)
  • Fault tolerance (FT) traffic – Required for FT – enables fault tolerance traffic on the host; only a single adapter may be used for FT per host
  • Management traffic – Required – management of the host and communication with vCenter Server
  • vSphere Replication traffic – Only needed if using vSphere Replication – outgoing replication data from the ESXi host to the vSphere Replication server
  • vSphere Replication NFC traffic – Only needed if using vSphere Replication – handles incoming replication data on the target replication site
  • Virtual SAN – Required for Virtual SAN – Virtual SAN traffic on the host

The purpose of the multiple interface types is that in vSphere 6 you can now route each of these traffic types, allowing you to segment the traffic even further. (In ESXi 5.x only management had its own TCP/IP stack.) I recommend creating a unique subnet for each of these traffic types that you use. In addition, many of them support multiple concurrent NICs (like multi-NIC vMotion), which can improve performance. When possible, set up multi-NIC.
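
To make this concrete, here is a minimal sketch (from the ESXi 6.0 shell) of listing the available TCP/IP stacks and tagging a VMkernel interface for a specific traffic type. The interface name, port group, and addresses are assumptions for illustration, not a prescription:

  # List the TCP/IP stacks on the host (vSphere 6 adds dedicated stacks
  # such as "vmotion" and "vSphereProvisioning" alongside the default stack)
  esxcli network ip netstack list

  # Create a VMkernel interface on an existing port group (names assumed)
  esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion-PG

  # Give it a static address in its own subnet
  esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.40.1 --netmask=255.255.255.0 --type=static

  # Tag the interface so the host uses it for vMotion traffic
  esxcli network ip interface tag add --interface-name=vmk1 --tagname=VMotion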

8 Replies to “VMkernel interfaces in vSphere 6”

  1. Hi Joseph,

    In your opinion, is it correct to state that at the moment there are 9 types of VMkernel interfaces?

    Management
    vMotion
    Fault Tolerance
    Provisioning
    vSphere Replication
    vSphere Replication NFC
    Virtual SAN
    NFS
    iSCSI

    1. To be technical, there are exactly seven types of VMkernel interfaces available in 6.0:
      vMotion, Provisioning, Fault Tolerance, Management, vSphere Replication, vSphere Replication NFC, Virtual SAN

      The last two, NFS and iSCSI, are not VMkernel interface types:
      – iSCSI is a storage adapter bound to a VMkernel interface (of vMotion, VSAN, or Management type – Management is recommended)
      – NFS is a storage protocol that uses the management network interface and its default gateway, or the interface that is directly on the same subnet as the NFS storage (best practice: put a management port group in the same subnet as the NFS storage if possible)
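
      As a small illustration of the iSCSI point, the software iSCSI adapter is bound to an existing VMkernel port rather than being a VMkernel type of its own. A rough sketch from the ESXi shell (the adapter and interface names are assumptions):

        # Identify the software iSCSI adapter (often vmhba3x)
        esxcli iscsi adapter list

        # Bind an existing VMkernel interface (e.g. vmk2) to the software iSCSI adapter
        esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

        # Verify the binding
        esxcli iscsi networkportal list --adapter=vmhba33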

    1. Happy to help. Chris does a great job of explaining this in more depth than I have; I’ll try to simplify. There is no NFS VMkernel type – NFS traffic goes over a VMkernel interface of the management type. You cannot designate which VMkernel interface is used for NFS. ESXi chooses which interface to use based upon the following tree:
      – Is there a management interface in the same subnet as the NFS storage? If yes, that interface is used
      – If not, the default gateway of the management network is used

      If you look at Chris’s article you will notice that the VMkernel interface for NFS is on the same subnet and on the same VLAN (VLAN 200 and 10.0.200.0/24). He has another article that clarifies this even more: http://wahlnetwork.com/2012/04/19/nfs-on-vsphere-a-few-misconceptions/. It is a best practice to create a management VMkernel interface in the same subnet as your NFS storage so you can designate which port is for management and which is for storage. Does that help?
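
      If you want to verify which VMkernel interface the host would actually use to reach your NFS array, here is a quick sketch from the ESXi shell (the array address 10.0.200.50 is an assumption; substitute your own):

        # Show each vmknic and its IPv4 configuration / subnet
        esxcli network ip interface ipv4 get

        # Show the routing table (connected subnets and the default gateway)
        esxcli network ip route ipv4 list

        # Test reachability while forcing the traffic out of a specific vmknic
        vmkping -I vmk0 10.0.200.50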

  2. Hi Joseph,

    Now I think I have understood, but I still have a doubt.
    Suppose we have a simplified schema like this:

    ESXi host with 4 x 1 Gbit physical NICs and NFS storage

    esxhost—L2switch—L3switch—L2switch—storage(192.168.10.0)
    |
    |__NIC1- vswitch0 (vmk0 192.168.30.1 – mgmt)
    |
    |__NIC2 -vswitch1 (vmk1 192.168.40.1 – vmotion)
    |
    |__NIC3 – vswitch2 (192.168.50.0 – vmnet)
    |
    |__NIC4 free

    In this configuration, the connection between the ESXi host and the NFS storage goes through the
    management interface, physical NIC1. Basically this means two bad things:

    - routed connection toward the NFS storage
    - same pNIC for both management and NFS traffic

    If I have understood correctly, in this configuration the best approach would be:

    - take advantage of NIC4 (dedicating it to the storage traffic)
    - move the routed connection to a switched network, reducing hops as much as possible

    This means:

    creating a new vSwitch (vSwitch3), assigning NIC4 to it, creating a new VMkernel interface (vmk3) and assigning it an IP address in the range of the NFS storage (192.168.10.X).

    The connection should look like this:

    esxhost—L2switch—storage(192.168.10.0)
    |
    |__NIC1- vswitch0 (vmk0 192.168.30.1 – mgmt)
    |
    |__NIC2 -vswitch1 (vmk1 192.168.40.1 – vmotion)
    |
    |__NIC3 – vswitch2 (192.168.50.0 – vmnet)
    |
    |__NIC4 – vswitch3 (vmk3 192.168.10.1 – nfs)

    Questions:

    1) Is my assumption correct?
    2) Supposing we have 4 x 10 Gbit network interfaces, would it be OK to have storage traffic + management on NIC1 using different VLANs?
    3) You wrote: “There is no NFS VMkernel type – NFS traffic goes over a VMkernel interface of the management type.”
    My doubt is:
    Are all seven VMkernel types (vMotion, Provisioning, Fault Tolerance, Management, vSphere Replication, vSphere Replication NFC, Virtual SAN) classified as “management type”? If yes, what is the difference with a NON-management type? I mean: when you create a VMkernel interface, you can enable one of those services or not. To my knowledge, enabling a service (e.g. vMotion) means that on this VMkernel interface ONLY vMotion traffic will be allowed, and this is a management type. Instead, if I create a VMkernel interface for NFS, obviously I shouldn’t tick any of those services and, in this case, it is not a management type. Correct?

    1. Thanks again for reading. I’ll answer your questions directly first:
      1) Is my assumption correct? – Yes, NFS will go over the management VMkernel interface.
      2) Supposing we have 4 x 10 Gbit network interfaces, would it be OK to have storage traffic + management on NIC1 using different VLANs? – That is a bandwidth question; I’ll address it in my closing comments.
      3) You wrote: “There is no NFS VMkernel type – NFS traffic goes over a VMkernel interface of the management type.”
      My doubt is:
      Are all seven VMkernel types (vMotion, Provisioning, Fault Tolerance, Management, vSphere Replication, vSphere Replication NFC, Virtual SAN) classified as “management type”? If yes, what is the difference with a NON-management type? I mean: when you create a VMkernel interface, you can enable one of those services or not. To my knowledge, enabling a service (e.g. vMotion) means that on this VMkernel interface ONLY vMotion traffic will be allowed, and this is a management type. Instead, if I create a VMkernel interface for NFS, obviously I shouldn’t tick any of those services and, in this case, it is not a management type. Correct? – Yes, so here is the oddity: for NFS, the host will choose the first vmknic with a vMotion, Management, or VSAN designation. So, for example:
      vmk0 – vMotion
      vmk1 – Management
      vmk2 – VSAN

      vmk0 will be used for NFS

      vmk0 – Virtual machine only
      vmk1 – VSAN

      vmk1 will be used for NFS

      In reality it’s actually really rare for vmk0 to be anything but management, so the most common case is management on vmk0 being used for NFS.
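
      If you want to check which designations your vmknics currently carry (and therefore which one the NFS selection would land on), here is a small sketch from the ESXi shell; the interface numbers are assumptions:

        # Show the tags (Management, VMotion, VSAN, ...) on each interface
        esxcli network ip interface tag get --interface-name=vmk0
        esxcli network ip interface tag get --interface-name=vmk1

        # Move a designation if needed, e.g. make vmk1 the Management interface
        esxcli network ip interface tag add --interface-name=vmk1 --tagname=Management
        esxcli network ip interface tag remove --interface-name=vmk0 --tagname=Management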

      To answer the bandwidth/redundancy question: remember that multiple types of traffic can share an uplink, and multiple vmknics can share an uplink. Every traffic type should have more than one uplink. So if you have the following traffic types:
      – Management
      – vMotion
      – VSAN
      – Virtual machine
      – NFS storage

      You would want to have at least two uplinks assigned to each of these traffic types. Since ESXi without storage is useless, we want to make sure storage gets bandwidth. On the other side, virtual machines without a network are also useless. If you have Enterprise Plus you can use load-based teaming and Network I/O Control to manage traffic and sharing. If you don’t, then I would suggest the following, assuming that your current usage of any one type of traffic does not exceed 2GB:

      – Uplink1, Uplink2 – Management, vMotion, Virtual machine
      – Uplink3, Uplink4 – VSAN, NFS

      On the VLAN side it’s critical that Uplink3 and Uplink4 have a VLAN in 192.168.10.X so NFS uses those links. If it’s not possible for Uplink3 and Uplink4 to be in the same subnet via a VLAN, then I would do the following:

      – Uplink1, Uplink2 – Management, VSAN, NFS
      – Uplink3, Uplink4 – Virtual machine, vMotion

      I hope this helps; let me know if you have additional questions. (There are a few articles on network load balancing and Network I/O Control on my blog as well.)
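
      For reference, your proposed vSwitch3/vmk3 layout could be built from the ESXi shell roughly as sketched below. The port group name, the uplink name (vmnic3 for NIC4, assuming zero-based naming), the array address 192.168.10.50, and the export path are assumptions; adjust them to your environment:

        # New standard vSwitch with NIC4 as its uplink
        esxcli network vswitch standard add --vswitch-name=vSwitch3
        esxcli network vswitch standard uplink add --vswitch-name=vSwitch3 --uplink-name=vmnic3

        # Port group and VMkernel interface in the NFS subnet
        esxcli network vswitch standard portgroup add --vswitch-name=vSwitch3 --portgroup-name=NFS
        esxcli network ip interface add --interface-name=vmk3 --portgroup-name=NFS
        esxcli network ip interface ipv4 set --interface-name=vmk3 --ipv4=192.168.10.1 --netmask=255.255.255.0 --type=static

        # Mount the NFS export; because vmk3 is in the array’s subnet, ESXi will use it
        esxcli storage nfs add --host=192.168.10.50 --share=/vol/datastore1 --volume-name=NFS01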

  3. Joseph, I’m really “moved” by your clarity 🙂

    I really thank you, because I’m working on a big project and had some doubts that I had never completely figured out, but now, thanks to you, I have.
    I often read your blog because, as I have told you in the past (this is not the first time I comment on your posts), apart from your skills, you have the ability to explain complex things in a simple way.

    Many many thanks

    bye
    Marco
    Italy
