Design Scenario: Gigabit networking with 10GbE for storage in an SMB setup

Yesterday I got a comment on an older blog article asking for some help.

Caution

While it would be a bad idea personally and professionally for me to give specific advice without a design engagement, I thought I might provide some thoughts about the scenario here.  This lets me explain the design choices I might make in this situation.  In no way should this be taken as law.  In reality, everyone's situation is different, and small requirements can really change a design.  Please do not blindly build this infrastructure; these are only guidelines.  It also does not take specific vendor best practices into account (because I am too lazy to look them up).

 

Information provided:

We are an SMB that’s starting to cross over into the world of virtualization. I could really use your help with our network design. This is the current equipment we have:

 

2 ESXi hosts: Dell R630, each with 512GB RAM, 2 x quad-port 1GbE NICs (8 x 1GbE ports per host) and 2 x dual-port 10GbE NICs (4 x 10GbE ports per host)

 

EqualLogic PS6210XS SAN with dual 10GbE controllers

 

2 x Dell N4032F 10GbE switches

 

We are planning to use the 10GbE for the SAN (isolated) and the remaining 8 x 1GbE ports for management/vMotion and our server network.

 

How would you go about designing the network for our environment?

 

Requirements

  • Must use the current hardware

 

Constraints

  • The 10GbE network adapters are for the isolated SAN only

 

Assumptions

  • Since this customer is an SMB, I doubt they will buy Enterprise Plus licenses, so we will design around standard switches
  • The virtual machine / management network ports are distributed across two different upstream switches
  • Your storage solution supports some type of multipathing across two switches

 

The question was related to networking so here we go:

Virtual machine and vSphere networking

It’s hard to make a determination here without knowing the number of virtual machines and their network bandwidth needs.  It is really tempting to use two of the four 10GbE NICs for vSphere and virtual machine networking, but due to the constraints we will avoid that temptation.

Management Network

Management is easy: vCenter and console access, I assume.  If that’s true, I would assign two network adapters to management, one active and the other standby.  You really want two, both to keep management up and because HA uses this network for host isolation detection.

vMotion network

Our hosts are large (512GB of RAM), which leads me to believe we are going to have a lot of virtual machines on each host.  With only two hosts, I am very concerned about taking one host down to patch it and how long it will take to move virtual machines between hosts over a single 1GbE network adapter (as a rough sense of scale, evacuating even a few hundred GB of active memory at roughly 1Gb/s takes the better part of an hour).  You might want to consider multi-NIC vMotion, but that adds complexity to the vSphere design and its manageability.  You should weigh how often you will schedule downtime on a host against that complexity; my guess is that an SMB will not patch all that often.  So I would assign two network adapters to vMotion, one active and the other standby.  You can use the same pair of adapters as management, just with the roles reversed (nic0 active / nic1 standby for management, nic1 active / nic0 standby for vMotion).
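To make the “opposite adapters” idea concrete, here is a minimal PowerCLI sketch.  The vCenter and host names, the vMotion IP address, and the port group names (PG-Management and PG-vMotion, matching the layout later in this post) are placeholders, and it assumes nic0/nic1 show up on ESXi as vmnic0/vmnic1.

Connect-VIServer -Server vcenter.example.local      # hypothetical vCenter
$vmhost = Get-VMHost -Name esx01.example.local      # hypothetical host

# Create the vMotion VMkernel adapter (this also creates the PG-vMotion port group on vSwitch0)
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch vSwitch0 -PortGroup 'PG-vMotion' `
    -IP 192.168.50.11 -SubnetMask 255.255.255.0 -VMotionEnabled $true

# Management: nic0 active, nic1 standby
Get-VirtualPortGroup -VMHost $vmhost -Name 'PG-Management' |
    Get-NicTeamingPolicy |
    Set-NicTeamingPolicy -MakeNicActive vmnic0 -MakeNicStandby vmnic1

# vMotion: nic1 active, nic0 standby (the reverse of management)
Get-VirtualPortGroup -VMHost $vmhost -Name 'PG-vMotion' |
    Get-NicTeamingPolicy |
    Set-NicTeamingPolicy -MakeNicActive vmnic1 -MakeNicStandby vmnic0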

Virtual machine networks

At this point we have six adapters left for virtual machines.  Assign them all to virtual machines.  What really matters is the load-balancing method we use on these adapters.  Let’s be clear: you cannot provide more than 1Gb/s of bandwidth to an individual virtual machine with this configuration without using port channel or LACP configurations.  I assume you don’t want to mess with port channels or virtual port channels across two switches, so we need to look at the remaining options for balancing across these NICs:

Options (taken from here), with Route based on IP hash removed due to the lack of a port channel, and Route based on physical NIC load removed due to the lack of Enterprise Plus:

  • Route based on originating virtual port ID: choose an uplink based on the virtual port where the traffic entered the virtual switch.
  • Route based on source MAC hash: choose an uplink based on a hash of the source Ethernet MAC address.
  • Use explicit failover order: always use the highest-order uplink from the list of active adapters which passes failover detection criteria.

There is a holy war between factions of VMware admins about which one to choose.  None of them will balance traffic perfectly.  Personally, I would go with the default load-balancing method, Route based on originating virtual port ID.
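If you want that choice recorded explicitly rather than relying on the default, something like this PowerCLI one-liner does it on a VM port group (‘PG-VM-Servers’ is a placeholder name and $vmhost a host already retrieved with Get-VMHost):

Get-VirtualPortGroup -VMHost $vmhost -Name 'PG-VM-Servers' |
    Get-NicTeamingPolicy |
    Set-NicTeamingPolicy -LoadBalancingPolicy LoadBalanceSrcId    # Route based on originating virtual port ID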

How many VLANs

If possible, please use a different VLAN for at least the following: management, vMotion, and virtual machines.  Multiple virtual machine VLANs are wonderful.  It is critical from a security perspective that the vMotion VLAN not be shared with anything else.

How many virtual switches

Now to the question of virtual switches.  Remember, no Enterprise Plus, so we are using standard switches.  These have to have the same configuration on each host, and port group names are case sensitive (a good thing we only have two hosts).  You might want to consider configuring them via a script, as sketched below (I have an older blog post on that somewhere).  You have two sets of network adapters: vMotion/management and virtual machine.  I would connect them all to the same virtual switch just for ease of management.  So your setup would look like this, assuming your 1GbE NICs come into ESXi as nic0 through nic7:

vSwitch0 (Port Group = PG)

  • PG-vMotion: Active nic1, Standby nic0
  • PG-Management: Active nic0, Standby nic1
  • Port groups for virtual machines (one port group per VLAN): Active nic2 – nic7
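Since scripting this keeps both hosts identical, here is a minimal PowerCLI sketch of the layout above, run once per host.  It assumes a vCenter session is already open, that the eight 1GbE ports enumerate as vmnic0 through vmnic7, and that the host name and VLAN IDs are placeholders; the flipped management/vMotion teaming is set as in the earlier sketch.

$vmhost = Get-VMHost -Name esx01.example.local     # hypothetical host name
$oneGigNics = 'vmnic0','vmnic1','vmnic2','vmnic3','vmnic4','vmnic5','vmnic6','vmnic7'

# vSwitch0 already exists on a freshly installed host (with vmnic0 attached),
# so attach the remaining 1GbE uplinks to it instead of creating a new switch
$vswitch = Get-VirtualSwitch -VMHost $vmhost -Name 'vSwitch0'
Set-VirtualSwitch -VirtualSwitch $vswitch -Nic $oneGigNics -Confirm:$false

# One VM port group per VLAN (VLAN IDs 100 and 101 are placeholders).
# nic2-nic7 active; nic0/nic1 marked unused here so VM traffic stays off the
# management/vMotion uplinks (the layout above only specifies active nic2-nic7).
foreach ($vlan in 100, 101) {
    $pg = New-VirtualPortGroup -VirtualSwitch $vswitch -Name "PG-VM-VLAN$vlan" -VLanId $vlan
    Get-NicTeamingPolicy -VirtualPortGroup $pg |
        Set-NicTeamingPolicy -MakeNicActive 'vmnic2','vmnic3','vmnic4','vmnic5','vmnic6','vmnic7' `
                             -MakeNicUnused 'vmnic0','vmnic1'
}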

Storage networking

This choice is driven by the vendor’s best practices.  It’s been a while since I worked with EqualLogic, so use Dell’s documentation 100% before doing anything.  Let me say that again: consult Dell’s documentation before doing this and make sure your design aligns with it.  Any EqualLogic master is welcome to add to this in the comments.  I assume you will be using the software iSCSI adapter for these connections.  You have four 10GbE NICs total and two switches.  I would create another standard virtual switch for these connections (does it have to be another switch? No, but I would, for ease of management).  So it’s pretty cut and dried with two dual-port NICs, like this:

Card 1 Port 1  – we will call it nic8

Card 1 Port 2 – we will call it nic9

Card 2 Port 1 – we will call it nic10

Card 2 Port 2 – we will call it nic11

We have the following switches

SwitchA

SwitchB

I would do the following physical connections:

SwitchA – nic8, nic10

SwitchB – nic9, nic11

 

Normally, software iSCSI has you set up one port group per uplink, all on the same VLAN (or the native VLAN if your switches are only doing iSCSI).  So I would create the following port groups:

PG-iSCSI-Nic8-SwitchA

PG-iSCSI-Nic9-SwitchB

PG-iSCSI-Nic10-SwitchA

PG-iSCSI-Nic11-SwitchB

 

Assign the NICs so that each one is active only on its designated port group (nic8 active on PG-iSCSI-Nic8-SwitchA and unused on all the others), then set up the iSCSI storage.  The teaming on each of these port groups is effectively an explicit failover order with a single active uplink; failover and balancing across the four paths is then handled by the iSCSI multipathing layer, so follow Dell’s recommended path selection policy.  A sketch of the whole storage build follows below.
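To tie the storage side together, here is a per-host PowerCLI sketch under the same assumptions as the earlier ones: the four 10GbE ports enumerate as vmnic8 through vmnic11, the VMkernel IP addresses and the EqualLogic group address are placeholders, and jumbo frames only belong here if they are enabled end to end.  Verify every setting against Dell’s documentation before using anything like this.

$vmhost  = Get-VMHost -Name esx01.example.local
$tenGig  = 'vmnic8','vmnic9','vmnic10','vmnic11'
$iscsiSw = New-VirtualSwitch -VMHost $vmhost -Name 'vSwitch1' -Nic $tenGig -Mtu 9000

$bindings = @(
    @{ PG = 'PG-iSCSI-Nic8-SwitchA';  Nic = 'vmnic8';  IP = '10.10.10.11' },
    @{ PG = 'PG-iSCSI-Nic9-SwitchB';  Nic = 'vmnic9';  IP = '10.10.10.12' },
    @{ PG = 'PG-iSCSI-Nic10-SwitchA'; Nic = 'vmnic10'; IP = '10.10.10.13' },
    @{ PG = 'PG-iSCSI-Nic11-SwitchB'; Nic = 'vmnic11'; IP = '10.10.10.14' }
)

foreach ($b in $bindings) {
    # VMkernel adapter for iSCSI (this also creates the port group on vSwitch1)
    New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $iscsiSw -PortGroup $b.PG `
        -IP $b.IP -SubnetMask 255.255.255.0 | Out-Null
    # One active uplink per port group, the other three 10GbE uplinks unused
    Get-VirtualPortGroup -VMHost $vmhost -Name $b.PG |
        Get-NicTeamingPolicy |
        Set-NicTeamingPolicy -MakeNicActive $b.Nic `
                             -MakeNicUnused ($tenGig | Where-Object { $_ -ne $b.Nic })
}

# Enable the software iSCSI adapter and point it at the EqualLogic group IP (placeholder)
Get-VMHostStorage -VMHost $vmhost | Set-VMHostStorage -SoftwareIScsiEnabled $true
$hba = Get-VMHostHba -VMHost $vmhost -Type iScsi | Where-Object { $_.Model -match 'Software' }
New-IScsiHbaTarget -IScsiHba $hba -Address '10.10.10.100' -Type Send

# The vmk-to-adapter port binding itself is done with esxcli, for example:
#   esxcli iscsi networkportal add --adapter=<software iSCSI vmhba> --nic=<iSCSI vmk>
# Bind each of the four iSCSI VMkernel ports, then rescan storage.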

 

Last Thoughts

With limited information it’s hard to comment on additional options.  I would carefully consider and implement percentage-based admission control (think 50% or more reserved on each host, since with only two hosts the survivor has to carry everything if the other fails).  If possible, monitor your network bandwidth usage to make sure your virtual machines are getting the traffic they need.
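For what it’s worth, here is one way to sketch that in PowerCLI.  The cluster name is a placeholder, and newer vSphere releases expose admission control settings differently, so treat this only as an illustration of reserving 50% CPU and memory with the percentage-based policy.

$cluster = Get-Cluster -Name 'SMB-Cluster'          # hypothetical cluster name
Set-Cluster -Cluster $cluster -HAEnabled:$true -HAAdmissionControlEnabled:$true -Confirm:$false

$spec = New-Object VMware.Vim.ClusterConfigSpecEx
$spec.DasConfig = New-Object VMware.Vim.ClusterDasConfigInfo
$spec.DasConfig.AdmissionControlPolicy = New-Object VMware.Vim.ClusterFailoverResourcesAdmissionControlPolicy
$spec.DasConfig.AdmissionControlPolicy.CpuFailoverResourcesPercent    = 50
$spec.DasConfig.AdmissionControlPolicy.MemoryFailoverResourcesPercent = 50
($cluster | Get-View).ReconfigureComputeResource_Task($spec, $true) | Out-Null

I hope this rant is useful to someone.  Leave me your thoughts or questions.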
