Network I/O Control has been around since vSphere 4.1, but I mostly ignored it. We would typically use HP's Virtual Connect to divide 10Gb connections into smaller NICs. As I spend more time designing solutions, I have found Network I/O Control to be my best friend. It allows you to get the most out of a 10Gb connection at all times. The concept is simple: each type of network traffic is allocated a share (a number between 1 and 100). Network I/O Control comes with some predefined traffic classes that are automatically assigned by vSphere:
- vMotion
- iSCSI
- FT Logging
- Management
- NFS
- Virtual machine traffic
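To see how shares turn into bandwidth when the uplink is saturated, here is a minimal Python sketch. The share numbers are hypothetical examples I picked for illustration, not vSphere defaults:

```python
def bandwidth_per_class(link_gbps, shares):
    """Split an uplink's bandwidth proportionally to each class's shares.

    Under contention, each traffic class gets
    (its shares / total shares) of the physical uplink.
    """
    total = sum(shares.values())
    return {name: link_gbps * s / total for name, s in shares.items()}

# Hypothetical share values on a single 10Gb uplink
shares = {"vMotion": 50, "Virtual machine": 100, "NFS": 30, "Management": 20}
alloc = bandwidth_per_class(10, shares)
for name, gbps in alloc.items():
    print(f"{name}: {gbps:.2f} Gb/s")
# Virtual machine: 5.00 Gb/s, vMotion: 2.50 Gb/s, NFS: 1.50 Gb/s, Management: 1.00 Gb/s
```

Note that the share values are relative, not percentages: doubling every class's shares changes nothing.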
In addition, you can create your own user-defined classes of traffic. User-defined classes can be assigned at the port group level. You need to keep a few things in mind when working with Network I/O Control:
- Network I/O Control is evaluated at the dvUplink level (shares are per network uplink)
- Network I/O Control requires a vNetwork Distributed Switch (vDS)
- Network I/O Control shares are only used when contention is present (in other words, each type of traffic gets 100% of its requested bandwidth unless there is contention)
- When evaluating shares, only active traffic is taken into account (for example, if you have NFS at 30, virtual machine traffic at 100, and vMotion at 50, but you don't use NFS, then only 150 shares divide your 10Gb)
- Network I/O Control only applies to outbound flow – it cannot do anything about shaping inbound flow
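The active-traffic rule can be sketched by filtering out idle classes before dividing the shares. The numbers below come from the NFS/VM/vMotion example above; the helper function is my own illustration:

```python
def effective_allocation(link_gbps, shares, active):
    """Only classes with active traffic participate in the share split."""
    live = {name: s for name, s in shares.items() if name in active}
    total = sum(live.values())
    return {name: link_gbps * s / total for name, s in live.items()}

shares = {"NFS": 30, "Virtual machine": 100, "vMotion": 50}
# NFS is configured but idle, so only 150 shares divide the 10Gb uplink
alloc = effective_allocation(10, shares, active={"Virtual machine", "vMotion"})
print(alloc)  # VM gets 100/150 of 10Gb, vMotion gets 50/150
```

In other words, an idle class doesn't "reserve" anything: its shares simply drop out of the denominator until it sends traffic again.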
In addition, Network I/O Control offers two additional features:
- Limits – just like CPU or memory limits (allows you to make the customer think they have 10Gb for their virtual machine while never allowing them more than 1Gb) – I would avoid limits unless you have an odd use case
- Load-Based Teaming – this new feature allows you to move a traffic flow to an additional uplink once an uplink reaches 75% of capacity over a 30-second period. This is hands down the best load balancing option with VMware.
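As a rough sketch of the Load-Based Teaming decision: the 75% threshold and 30-second window come from the text above, but the sampling logic (averaging utilization samples over the window) is my simplified illustration, not VMware's actual implementation:

```python
def should_rebalance(samples_pct, threshold=75.0, window=30, interval=5):
    """Return True if mean uplink utilization over the window exceeds threshold.

    samples_pct: utilization samples (percent), one per `interval` seconds.
    Assumes evenly spaced samples; a real implementation would track time.
    """
    needed = window // interval
    if len(samples_pct) < needed:
        return False  # not enough history to cover the window yet
    recent = samples_pct[-needed:]
    return sum(recent) / len(recent) > threshold

# Six 5-second samples covering a 30-second window
print(should_rebalance([80, 78, 90, 76, 85, 88]))  # sustained load above 75% -> True
print(should_rebalance([40, 50, 45, 60, 55, 50]))  # below threshold -> False
```

The window matters: a brief spike above 75% doesn't trigger a move, only sustained load does, which keeps flows from bouncing between uplinks.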
Network I/O Control provides some awesome features that you should play with. You can read an older but still valid white paper here.