Deep Dive: Configuration Maximums for dVS

Recently I have been thinking about the configuration maximums of the current virtual distributed switches. The configuration maximums document for vSphere 5.5 states the following:

– Total virtual network switch ports per host (VDS and VSS ports) – 4096
– Maximum active ports per host (VDS and VSS) – 1016
– Hosts per distributed switch – 1000
– Static/Dynamic port groups per distributed switch – 6500
– Ephemeral port groups per distributed switch – 1016
– Distributed virtual network switch ports per vCenter – 60000

The first question concerns the relationship between these two numbers:

– Total virtual network switch ports per host (VDS and VSS ports) – 4096
– Maximum active ports per host (VDS and VSS) – 1016

In order to explain these numbers you need some context on how a vDS and a VSS allocate ports:

  • virtual standard switch (VSS) – allocates ports statically when a port group is created on the local ESXi host, so if you allocate 24 ports to a port group then 24 ports are taken.
  • virtual distributed switch (dVS) – allocates ports to the port group in vCenter, but each individual ESXi host only allocates ports for currently powered-on machines (assuming dynamic or static port binding). So if you create a dVS port group with 24 ports but there is only one virtual machine in the port group, it takes only one port on its assigned ESXi host.

Ephemeral ports on a dVS work just like a VSS, so each local ESXi host allocates all ports in the port group.
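
To make the accounting concrete, here is a quick back-of-the-envelope sketch in Python; the port groups and counts are invented for illustration:

```python
# Toy port-accounting model for one ESXi host (numbers are hypothetical).
# Static/dynamic dVS binding: only powered-on VMs consume a host port.
# VSS and ephemeral dVS: every port in the port group is allocated locally.

port_groups = [
    # (name, configured_ports, powered_on_vms_on_this_host, binding)
    ("vss-prod",  24,  3, "vss"),
    ("dvs-web",  128, 10, "static"),
    ("dvs-mgmt",  24,  2, "ephemeral"),
]

active = 0
for name, configured, powered_on, binding in port_groups:
    if binding in ("vss", "ephemeral"):
        used = configured   # all ports allocated on the host
    else:
        used = powered_on   # only in-use ports allocated
    active += used
    print(f"{name:10s} {binding:9s} -> {used} host ports")

print(f"total active ports on host: {active} (limit: 1016)")
```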

 

What is a proxy switch?

Proxy switch (or ghost switch) is a term you may see used for the local copy of the vDS on each host.  The proxy switch contains only the information relevant to its virtual machines.  When you vMotion a virtual machine to a new host, vCenter allocates a new port on that ESXi host and syncs a new proxy configuration to that switch alone.

What is the difference between an active port and total ports?

An active port is defined differently for each switch type:

  • VSS – every port in a port group is considered active on each ESXi host
  • dVS (static or dynamic binding) – only ports in use on the ESXi host are active
  • dVS (ephemeral binding) – all ports in the port group are allocated on all ESXi hosts

 

So in order to hit the 4096 total ports you would need a combination of VSS and dVS ports.  When using a single dVS you will hit the 1016 active-port limit long before you ever reach the 4096 total ports.

Let's look at some dVS switch maximums:

– Static/Dynamic port groups per distributed switch – 6500
– Ephemeral port groups per distributed switch – 1016

These are software limits: static and dynamic port groups are enforced by the dVS at vCenter and have no relationship to the ESXi hosts.  Ephemeral port groups have a hard limit of 1016, which aligns with the maximum number of active ports per host; the limit assumes 1016 port groups each with a single port.

How about the last set of numbers:

– Hosts per distributed switch – 1000
– Distributed virtual network switch ports per vCenter – 60000

Not much to say here.  The 60,000-port limit creates a boundary that may keep you from allocating 1,000 ports to every port group.  It is per vCenter, not per dVS, so the limit spans multiple vDS's.
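
A quick sanity check of that boundary (hypothetical port-group sizes):

```python
# If every port group carried 1,000 ports, the per-vCenter budget of
# 60,000 distributed ports would cap you at 60 port groups across ALL
# vDS's in that vCenter -- far below the 6,500 port-group limit.
VC_PORT_LIMIT = 60_000

for ports_per_group in (1_000, 128, 24):
    print(ports_per_group, "->", VC_PORT_LIMIT // ports_per_group, "port groups max")
```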

Best practices and design considerations:

Given that only active ports take memory on an ESXi host, there is no reason not to allocate larger port groups; then again, since port groups can be grown dynamically, there is no reason not to keep them small.  I vote for something in between: it provides the best manageability without getting close to the maximums.

VMware NSX how to firewall between IP’s and issues

The first thing everyone does with NSX is try to create firewall rules between IP addresses.  I consider this a mistake because the DFW can key off far better markers than IP addresses.  Either way, at some point you will want to use IP addresses in your rules.  This post will describe how to set up firewall rules between IP addresses.

 

Setup:

I have two Linux machines each on their own subnet:

Linux1 – 172.16.1.10 – 172.16.1.0/24 network

Linux3 – 172.16.10.10 – 172.16.10.0/24 network

Routing is set up between the hosts so they can connect to each other.  I would like to block all traffic except SSH between these subnets.  We are going to assume that both of these networks exist in NSX.

NSX Setup:

First we have to set up an IP set in NSX Manager.  This is surprisingly a set of IP addresses.

  • Login to the vSphere web client
  • Click networking and security
  • Select your NSX Manager and expand it
  • Select Manage -> grouping objects
  • On the lower pane select IP Sets
  • Press the green plus button to add a new set
  • Set up each set as shown below:

[Screenshots: creating the two IP sets]
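
If you would rather script this than click through the wizard, NSX Manager also exposes IP sets over its REST API.  The sketch below (Python with the requests library) shows the general shape of the call as I understand the NSX-v API; the manager address, credentials, and set names are placeholders, and you should verify the endpoint and XML fields against the API guide for your version:

```python
# Sketch: create the two IP sets via the NSX Manager REST API.
# Hostname, credentials, and names are hypothetical placeholders.
import requests

NSX_MGR = "https://nsxmgr.example.local"
AUTH = ("admin", "password")

def create_ip_set(name, cidr):
    body = f"<ipset><name>{name}</name><value>{cidr}</value></ipset>"
    r = requests.post(
        f"{NSX_MGR}/api/2.0/services/ipset/globalroot-0",
        data=body,
        headers={"Content-Type": "application/xml"},
        auth=AUTH,
        verify=False,  # lab only -- use proper certificates in production
    )
    r.raise_for_status()
    return r.text  # NSX returns the new object ID

create_ip_set("Net-172.16.1.0", "172.16.1.0/24")
create_ip_set("Net-172.16.10.0", "172.16.10.0/24")
```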

Tale of multiple cities:

Here is where NSX gets interesting: you have multiple ways to block access.  First, a little understanding of firewall constructs in NSX:

  • Security Groups – these are groups of machines or constructs; they can include IP sets, MAC sets, and dynamic name-based wildcard matches.  They can contain whole datacenters or a single virtual machine, and membership can be very dynamic with boolean conditions.
  • Security Policies – these are groups of firewall rules and introspection services, applied to security groups.  Each firewall policy assumes it is assigned to one or more security groups, so your source or destination needs to be the policy's assigned security group.  The opposite side (source/destination) needs to be either a security group or any.

Remember we want the following rules:

  • SSH between 172.16.1.0/24 and 172.16.10.0/24 should be allowed bi-directionally
  • Everything else between them should be blocked

Within these constructs there are a number of possible options for the firewalls:

  • Option 1 – rules in this order
    • Firewall rule allowing ssh between source: assigned policy group and destination: 172.16.10.0/24
    • Firewall rule allowing ssh between source: 172.16.10.0/24 and destination: assigned policy group
    • Firewall rule blocking any between source: assigned policy group and destination: 172.16.10.0/24
    • Firewall rule blocking any between source: 172.16.10.0/24 and destination: assigned policy group
    • Assign the security policy to 172.16.1.0/24
  • Option 2 – Security Groups
    • Firewall rule allowing ssh between source: assigned policy group and destination: assigned policy group
    • Firewall rule blocking any between source: assigned policy group and destination: assigned policy group
    • Assign the security policy to 172.16.1.0/24 and 172.16.10.0/24
  • Option 3 – Two policies
    • Policy 1
      • Firewall rule allowing ssh between source: assigned policy group and destination: 172.16.10.0/24
      • Firewall rule blocking any between source: assigned policy group and destination: 172.16.10.0/24
      • Assign the policy to 172.16.1.0/24
    • Policy 2
      • Firewall rule allowing ssh between source: assigned policy group and destination: 172.16.1.0/24
      • Firewall rule blocking any between source: assigned policy group and destination: 172.16.1.0/24
      • Assign the policy to 172.16.10.0/24

The first question anyone will ask is: why would I not use option 2?  It's smaller, easier to read, and accomplishes the same goal.  But it lacks granularity in design.  What if you had a third subnet, 172.16.20.0/24, and you only wanted it to access 172.16.1.0/24?  Option 1 could easily do this, while option 2 would mistakenly open up access to 172.16.10.0/24.  This is the heart of firewall design: layer rules to create granularity.  I am not a master of the firewall but I do have a few suggestions:

  • Outbound firewall rules sound great but will quickly kill you with complexity
  • Protect the endpoints: apply rules to the destination (think: apply rules to the web server instead of every PC).  If you need to apply source rules, do it on the destination
  • Use naming conventions that describe the purpose of the rule, e.g. Allow-SSH-Into-Production
  • Consider using a DROP all on your default rule and then applying only allow rules in security groups
  • Rules that are part of the default section and not created in Service Composer don't show up in the Service Composer GUI, so don't use them beyond the default DROP; apply everything else as a security policy

 

Let’s do Option 1

  • Return to networking and security and select service composer
  • Select security groups and create a security group for each IP Set

[Screenshots: creating a security group for each IP set]

  • Repeat for the other subnet
  • Click on security policies
  • Create a new policy as shown below

[Screenshots: security policy creation wizard steps]

  • Now that you have it built, you just need to apply it to a security group
  • Click on the text of your Security Policy
  • Select Manage -> Security Groups
  • Click edit and add 172.16.1.0/24

Now your rules should work.  You can test with ping and SSH; a scripted check follows below.  Using the same dialogs you can create option 2 or 3.  The same rules you use for firewalls on physical entities apply to the DFW: you need to think before you create, or you will end up in firewall sprawl.
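
If you want to script that verification, a minimal sketch (plain Python, using the lab addresses above) that confirms SSH connects while another port is dropped might look like this:

```python
# Quick reachability check from Linux1 toward Linux3 (lab addresses above).
# SSH (22) should connect; anything else should time out because the DFW
# drops it before it ever reaches the wire.
import socket

def port_open(host, port, timeout=3):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

target = "172.16.10.10"
print("ssh  (22):", port_open(target, 22))   # expect True
print("http (80):", port_open(target, 80))   # expect False (dropped)
```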

Deep Dive: How does NSX Distributed Firewall work

This is a continuation of my posts on NSX features; you can find other posts on the Deep Dive page.   My favorite feature of VMware NSX is the distributed firewall.   It provides some long overdue security features.  At one time I worked in an environment where we wanted to ensure that every type of traffic was filtered with a firewall.   This was an attempt to increase security.  We wanted to ensure that there was no east <-> west traffic between hosts, so every host was in its own subnet.  Each virtual machine was deployed alone inside a /27 subnet.   Every communication required a trip to the firewall, which was also serving as a router.

This model worked but made us very firewall centric.  Everything required multiple firewall changes.  Basic provisioning took weeks because of the constant need for more firewall changes.   In addition we wanted secondary controls, so each host ran its own host-based firewall as well.   This model caused a few major design constraints: you had to buy larger firewalls to handle all the routing, and you had to take your firewall guys to lunch all the time to avoid mega rage.

Enter the distributed firewall

The distributed firewall applies firewall rules in the hypervisor kernel at the virtual machine's network interface, right above the guest OS.  This has a few advantages:

  • No one on the OS can change firewall rules
  • Only traffic that should be on the network is on the network; everything else gets blocked before leaving the virtual machine (think mega cost savings and less garbage traffic)
  • You can inspect each packet before it gets to the network and take action (lots of third-party plugins will be able to do this)
  • You can scale out your firewall's capacity by adding more hosts, in a modular fashion that matches your server growth

The firewall has an API for third-party solutions like virus scanners or IDS.   This allows them to be part of the data stream in real time.

Components of Distributed firewall (DFW)

The DFW has a management plane, a control plane, and a data plane, which should be familiar to network admins.

  • Management Plane – done via the vCenter plugin or API access to the NSX Manager.  This allows you to use any vCenter object as the source or destination (datacenter, VM name, vNIC, etc.).  It also allows you to define IP ranges for more traditional firewalling between IPs
  • Control Plane – handled by the NSX Manager.  It takes changes from vCenter, stores them in a central database, and pushes the rules down to each ESXi host.  (The database is /etc/vmware/vsfwd/vsipfw_ruleset.dat on each ESXi host)
  • Data Plane – the ESXi hosts are the data plane, doing the actual work of the firewall.  All firewall functions take place in kernel modules on the ESXi hosts.  Remember that enforcement is done locally and at the destination, reducing the traffic on the wire.

Each vNIC gets its own instance of DFW, put into place and managed by a daemon on each host called vsfwd.

How does it work?

Each firewall rule is created and applied via the NSX Manager GUI or API.   When published, the NSX Manager pushes all rules down to each ESXi host, and each host creates a file on disk which holds all the firewall rules.   The ESXi host applies rules to each instance of DFW; when a change happens in vCenter (remember the management plane: a new vNIC, a VLAN change, etc.), the firewall rules are re-consulted.  IP-based rules require VMware Tools to identify the IP address or addresses of the server.

How about vMotion?

Since the rules are applied to the virtual machine's container, they move with the virtual machine when vMotion is used: no effect.

How about HA events?

Rules are loaded off disk and applied to virtual machines.

What if NSX Manager is not available?

Rules are loaded off disk.  New systems will get the rule sets that apply to them; for example, if my new server is called Web-Machine12 and I have rules that are applied to all VMs named Web-*, then it will get them from disk.  This encourages the use of naming standards.

What if I create a new virtual machine and it does not have any rules?

At the bottom is a default rule (some vote for allow all, others for deny all; I vote deny all), so your machine will have deny all.

Groups and Policies

DFW has the concept of security groups (yep, like it sounds): groups of similar systems.  These can be hard-coded to specific entities or dynamic, using regular expressions on any vCenter entity.   There are also security policies: groups of like-minded rules to be processed in order.   So you define the scope of the rules in security groups and define what is done in security policies.  It can be a one-to-many reference on both sides (a security group can have many policies, and a policy can have many groups), providing the ability to layer rules.

How do I track my firewall drops / accepts?

This is the first thing your firewall guys are going to ask for…  and I don't like the answer right now.  Drops and accepts are logged to each ESXi host's syslog.   So you need to centralize your host logs and do some searches to gather the firewall entries into one place.   If you search your host-based logs for "vsip_pkt" (in 6.1 this changed to dfwpktlogs:) you will find the firewall drops / accepts.
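
Until you have a proper log-aggregation tool, a throwaway filter like the following can pull the DFW hits out of collected host logs; the log path is a placeholder for wherever you centralize your syslog:

```python
# Pull DFW packet-log lines out of a collected ESXi syslog file.
# "vsip_pkt" is the pre-6.1 marker; "dfwpktlogs" replaced it in 6.1.
# The path below is a placeholder for wherever you centralize host logs.
MARKERS = ("vsip_pkt", "dfwpktlogs")

with open("/var/log/esxi/collected-syslog.log") as f:  # hypothetical path
    for line in f:
        if any(m in line for m in MARKERS):
            print(line.rstrip())
```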

 

Deep Dive: How does NSX Distributed routing work

As a continuation of my previous post How does the NSX vSwitch work, I am now writing about how the routing works.   I should thank Ron Flax and Elver Sena for walking through this process with me.   Ron is learning what I mean by knowledge transfer and being very patient with it.   This post will detail how routing works in NSX.  The more I learn about NSX the more it makes sense; it is really a great product.  My posts are 100% about VMware NSX, not multi-hypervisor NSX.

How does routing work anyways

Much like my last post, I think it's important to understand how routing in a physical environment works.   Simply put, when you want to go from one subnet to another subnet (layer 2 segment) you have to use a router.   The router receives a packet and has two choices (some routers have more, but this is generic):

  • I know the destination network lives down this path, so I send the packet that way
  • I don't know the destination network, so I forward the packet out my default gateway

IP packets continue to be forwarded upstream until a router knows how to deliver them.  It all sounds pretty simple.

Standardized Exterior Gateway protocols

It would be simple if someone placed an IP subnet at a location and never changed it.  We could all learn a static route to the location and never have it change.   Think of the internet like a freeway: every so often we need to do road construction, which may cause your journey to take longer via an alternate route, but you will still get there.  I am afraid that the world of IT is constant change.  So protocols were created to dynamically update routers of these changes, and standardized exterior gateway protocols were born (BGP, OSPF, etc.).  I will not go into these protocols because I have a limited understanding and because they are not relevant for the topic today (they will be relevant later).   It's important to understand that routes can change and there is an orderly way of updating them (think DNS for routing… sorta).

IP

A key component of routing is the Internet Protocol address.  This is the unique address that we all use every single day.   There are public IP addresses and internal IP addresses; NSX can use either and has solutions for bridging both.  This article will use two subnets, 192.168.10.0/24 and 192.168.20.0/24.   The /24 after the IP address denotes the size of the range in CIDR notation.  For this article it's enough to note that these ranges are on different layer 2 segments and normally cannot talk to each other without a router.
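
If CIDR notation is new to you, Python's standard ipaddress module makes the idea easy to poke at:

```python
import ipaddress

a = ipaddress.ip_network("192.168.10.0/24")
b = ipaddress.ip_network("192.168.20.0/24")

print(a.num_addresses)                             # 256 addresses in a /24
print(a.overlaps(b))                               # False: distinct segments
print(ipaddress.ip_address("192.168.10.11") in a)  # True: host lives in a
```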

Setup

We are going to place these two networks on different VXLAN-backed networks (VNIs) as shown below:

[Diagram: two VMs on separate VXLAN segments, VNI 5000 and VNI 5001]

If you are struggling with the term VNI, just replace it with VLAN; it's about the same thing (read more about the differences in my last post).   In this diagram we see that each virtual machine is connected to its own layer 2 segment and will not be able to talk to the other or anything else.  We could deploy another host into VNI 5000 with the address 192.168.10.11 and they would be able to talk using NSX switching, but no crossing from VNI 5000 to VNI 5001 will be allowed.  In a physical environment a router would be required to allow this communication; in NSX this is also true:

[Diagram: distributed router connecting VNI 5000 and VNI 5001]

Both of the networks shown would set their default gateway to be the router.  Notice the use of a distributed router: in a physical environment this would be a single router or a cluster, but NSX uses a distributed router.  Its capacity scales as you scale your environment; each time you add a server you get more routing capacity.

Where does the distributed router live?

This was a challenge for me when I first started working with NSX; I thought everything was a virtual machine.   The NSX vSwitch is really just a code extension of the dVS, and this is also true of the router: the hypervisor kernel does the routing with minimal overhead.  This provides an optimal path for data; if the source and destination are on the same machine, communication never leaves the machine (much like switching in a normal VSS).  The data plane for the router lives on the dVS.    There are a number of components to consider:

  • Distributed router – code that lives on each ESXi host as part of the dVS and handles routing.
  • NSX routing control VM – a virtual machine that controls aspects of routing (such as BGP peering).  It sits in the control plane, not the data plane; in other words, it is not required for routing to happen.  (Design tip: you can make it highly available by clicking the HA button at any time; this creates a second VM with an anti-affinity rule.)
  • NSX control cluster – the control cluster mentioned in my last post.   It syncs configuration from the management and control plane elements down to the data plane.

How does NSX routing work?

Here is the really neat part.  A router's job is to deliver IP packets; it is not concerned with whether the packets should be delivered, it just flings IPs to their destination.   So let's go through a basic routing situation in NSX.  Assume the Windows virtual machine wants to talk to the Linux virtual machine.

[Diagram: Windows VM on ESXi1 routing to Linux VM on ESXi2]

 

The process is like this:

  1. The local DLR (Distributed Logical Router) becomes aware of each virtual machine as it talks, and updates the control cluster with the ARP entry, including the VNI and ESXi node
  2. The control cluster updates all members of the same transport zone so everyone knows the ARP entries
  3. The Windows virtual machine wants to visit the website on Linux, so it ARPs
  4. ESXi1's DLR returns its own MAC address
  5. Windows sends a packet to ESXi1's DLR
  6. The local DLR knows that Linux is on VNI 5001, so it routes the packet to the local VNI 5001 on ESXi1
  7. The switch on ESXi1 knows that Linux lives on ESXi2, so it sends the packet to VTEP1
  8. VTEP1 sends the packet to VTEP2
  9. VTEP2 drops the packet into VNI 5001 and Linux gets the message

It really makes sense if you think about it: it works just like any router or switch you have ever used.  You just have to get used to the distributed nature.   The greatest strength of NSX is the ability to handle everything locally.   If Linux were on the same ESXi host, the packet would never leave ESXi1 to get to Linux.
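
A toy model of that forwarding decision may help; this is invented purely for illustration (the real DLR is kernel code, and the tables and addresses below are made up), but it shows why the lookup is local to each host:

```python
# Toy model of a DLR forwarding decision. The real DLR is kernel code;
# these tables and addresses are invented purely to illustrate the logic.
import ipaddress

routes = {                 # which VNI each subnet lives on (controller-synced)
    "192.168.10.0/24": 5000,
    "192.168.20.0/24": 5001,
}
vm_host = {                # which ESXi host holds the destination VM
    "192.168.20.10": "esxi2",
}

def route(local_host, dst_ip):
    ip = ipaddress.ip_address(dst_ip)
    for subnet, vni in routes.items():
        if ip in ipaddress.ip_network(subnet):
            dest = vm_host.get(dst_ip, "unknown")
            if dest == local_host:
                return f"deliver on VNI {vni} without leaving {local_host}"
            return f"route onto VNI {vni}, then VTEP tunnel {local_host} -> {dest}"
    return "unknown network: forward out the default gateway"

# Windows VM on esxi1 (VNI 5000) talking to the Linux VM on esxi2 (VNI 5001)
print(route("esxi1", "192.168.20.10"))
```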

What is the MAC address and IP address of the DLR?

Here is where the fun begins.   It is the same on each host:

[Diagram: each host's DLR instance shares the same gateway IP and MAC per VNI]

Yep, it's not a typo: each router instance is seen as the default gateway for each VNI.  Since the layer 2 networking is done over VXLAN (via VTEPs), each local router can have the same IP address and MAC address.  The kernel code knows to route locally and it all works.   This does present one problem: external access.

External Access

In order for your network to be accessible from external networks, the DLR has to present the default gateway outside.  But if each instance has the same IP / MAC, who responds to requests to route traffic?   One instance gets elected as the designated instance (DI) and answers all such requests.   If a message needs to be sent to an ESXi host other than the one running the DI, then it routes like above.   It's a simple but great process that works.

Network Isolation

What if your designated instance becomes isolated?  There is an internal heartbeat; if it is not answered, a new DI election happens.   What if networking fails on one ESXi host?  Every other instance will continue to communicate with everyone else; only packets destined for the failed host will fail.

Failure of the control cluster

What if the control cluster fails?   Since all the routing is distributed and held locally, everything will continue to operate.  Adding new elements in the virtual world may fail, but everything else will be fine.  It's a good idea to ensure that you have enough control cluster nodes and redundancy, as they are a critical component of the control plane.

Deep Dive: How does the NSX vSwitch Work

Edit: Thanks to Ron Flax and Todd Craw for helping me correct some errors.

I have been blessed of late to be involved in some VMware NSX deployments and I am really excited about the technology.   I am by no means a master of NSX, but I will post about my understanding as a method to spread information and assist my personal learning.   In this post I will be covering only the switch capabilities of NSX.

 

Traditional Switches

The key element of a layer 2 Ethernet switch is the MAC address.  This is a unique (perhaps) identifier on a network card; each network adapter should have a unique address.   A traditional physical switch learns the MAC addresses connected on each port when the network device first tries to communicate.  For example:

[Diagram: Windows and Linux physical servers on switch ports 1 and 2]

When you power on the Windows physical server, the physical switch learns that MAC 00:00:00:00:01:01 is connected to port 1.  Any message destined for 00:00:00:00:01:01 should be sent to port 1.   This allows the switch to create logical connections between ports and limit the amount of wasted traffic.   This entry in the switch's MAC table (sometimes called a CAM table) stays present for 5 minutes (user configurable) and is refreshed whenever the server uses its network card.   The Linux server on port 2 is discovered exactly the same way, by physically talking on the port, and the table is updated for port 2.   If Windows wants to talk to Linux, their communication never leaves the switch as long as they are in the same subnet.   If the MAC address is unknown by the switch, it will broadcast the request out all ports with hopes that something will respond.

Address Resolution Protocol (ARP)

ARP is a protocol used to resolve IP addresses to their MAC addresses.  It is critical to understand that ARP does not return the MAC address of the final destination; it only returns the MAC address of the next hop, unless the final destination is on the same subnet.  This is because Ethernet is only concerned with the next hop by MAC, not the end destination.

[Diagram: ARP resolution at each hop between subnets]

You can follow the communication with ARPs between each layer of the diagram.  The key component is that if the IP is not local, the router returns its own MAC and forwards the packet out the default gateway.

Traditional Virtual Switches

In order to understand the NSX vSwitch it is critical that you understand how the traditional virtual switch works.  A traditional virtual switch (VSS or dVS) learns the MAC addresses of virtual machines when they are powered on.  As soon as a virtual machine is assigned a switch port, it becomes hard-coded in the MAC table for that virtual switch.   Anything local to that switch in the same VLAN or segment will be delivered locally.    Otherwise the virtual switch just forwards the message out its uplink and allows the physical switches to resolve the connection.

NSX Virtual Switch

The NSX virtual switch adds functionality to the traditional virtual switch.  The key feature is the ability to use VXLAN to span layer 2 segments between hosts without stretching multiple VLANs.   VXLAN also allows stretching layer 2 to distant datacenters, and supports up to 16 million segments versus the current limit of 4096 VLANs.  There are some common components that need to be understood:

  • VTEP (VXLAN Tunnel End Point) – an ESXi virtual adapter that has its own VLAN and IP address, including a gateway.  This interface must be set for 1600 MTU, and all physical switches/routers that handle this traffic must allow at least 1600 MTU (a quick audit sketch follows this list).
  • NSX virtual switch (also called logical switch) – a software, kernel-based construct that does the heavy lifting.  It is deployed to a dVS switch and works as an extension of the dVS.
  • NSX Manager – the management plane for NSX.  It acts as a central point for communication, scripting and control, and is only required when making changes as part of the management plane.
  • NSX control cluster – a series of virtual machines that are clustered via software.  Each node (there should be an odd number of them, and at least three) contains all required information, and load is distributed between them.  (Best practice: create a DRS rule to keep these on separate hosts; future releases may do this for you.)
  • VNI (VXLAN Network Identifier) – an identifier used by VXLAN to separate networks (think VLAN tag).  They start at 5000 and go up to roughly 16 million.  It is easiest to think of VLAN tags when working with VNIs.
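
Because the 1600-MTU requirement on the VTEP path is the classic tripwire, a quick audit helps.  Here is a sketch using the open-source pyVmomi library; the vCenter address and credentials are placeholders, and it simply flags any vmkernel interface below 1600 MTU:

```python
# Sketch: flag vmkernel NICs with MTU below 1600 (VXLAN needs the headroom).
# vCenter address and credentials are hypothetical placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    for vnic in host.config.network.vnic:
        mtu = vnic.spec.mtu or 1500  # unset MTU defaults to 1500
        status = "ok" if mtu >= 1600 else "TOO SMALL for VXLAN"
        print(f"{host.name} {vnic.device}: MTU {mtu} ({status})")

view.Destroy()
Disconnect(si)
```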

With all the terminology out of the way, it's time to get down to the path.   The NSX virtual switch includes one key capability: the ability to switch packets between nodes or clusters without having layer 2 stretched between the clusters.  For my networking friends this means a reduction in spanning tree issues.

So let me lay it out below:

[Diagram: two ESXi hosts with VTEPs, a three-node control cluster, and VNI 5000]

We have deployed a three-node NSX control cluster.  We have two ESXi hosts running dVS's with the NSX virtual switch.  VXLAN has been enabled and a virtual network, VNI 5000, has been created.   The VTEPs have been configured.   We have created two virtual machines as shown in green; neither has been connected to the VNI network yet.

 

Time to learn our first MAC:

  • We connect the Windows server to VNI 5000 as shown below
  • The MAC table on our local switch is updated (learns), and the switch then passes its learned information to the control cluster
  • The control cluster passes it to all members of the logical switch (there are three methods to pass the information, which I will cover in another post: unicast, multicast and hybrid)

[Diagram: Windows server's MAC entry synced to all hosts on VNI 5000]

 

This syncing of the MAC table ensures that each member of the VNI knows how to handle switching, creating a distributed switch (like a switch stack where multiple switches act as one).

When we power on the Linux server the same method is used:

  • We connect the Linux server to VNI 5000 as shown below
  • The MAC table on our local switch is updated (learns), and the switch then passes its learned information to the control cluster
  • The control cluster passes it to all members of the logical switch

[Diagram: Linux server's MAC entry synced to all hosts on VNI 5000]

Now we have a synced MAC table available on each switch, and that works great.   Let's follow the flow of communication (a toy sketch follows the list).  Assume the Windows server wants to open a web page on the Linux server on port 80:

  • The user on the Windows server brings up Internet Explorer and types in 192.168.10.11
  • The Windows server sends out an ARP request for 192.168.10.11
  • ESXi1's virtual switch returns the MAC address 00:00:00:00:02:02
  • The Windows server sends out an IP packet with the destination MAC address 00:00:00:00:02:02
  • ESXi1's virtual switch forwards the packet out VTEP1 by encapsulating it, destined for the IP of VTEP2
  • VTEP2 opens the packet, removes the VTEP encapsulation, and forwards the packet to ESXi2's virtual switch on VNI 5000
  • The switch on ESXi2 sends the packet to the virtual port that the Linux server's network card is connected on
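
Here is the toy sketch promised above: a made-up MAC-to-VTEP table (the same shape as the synced table described earlier) and the decision the switch makes with it:

```python
# Toy model of the NSX switching decision on VNI 5000. The synced MAC
# table maps each VM MAC to the VTEP (host) behind it; entries invented.
mac_table = {
    ("00:00:00:00:01:01", 5000): "vtep1",  # Windows VM on ESXi1
    ("00:00:00:00:02:02", 5000): "vtep2",  # Linux VM on ESXi2
}

def forward(local_vtep, dst_mac, vni):
    dst_vtep = mac_table.get((dst_mac, vni))
    if dst_vtep is None:
        return "unknown MAC: flood per replication mode (unicast/multicast/hybrid)"
    if dst_vtep == local_vtep:
        return "deliver locally; frame never leaves the host"
    return f"encapsulate in VXLAN and send {local_vtep} -> {dst_vtep}"

print(forward("vtep1", "00:00:00:00:02:02", 5000))
```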

 

This is how an NSX virtual switch handles switching.  At first you may say this makes no sense at all… wouldn't a VLAN just be easier?   There are a number of benefits this brings:

  • Limits your spanning tree to potentially top-of-rack switches, if architected correctly
  • Allows you to expand past the 4096 VLAN limit
  • Opens the door for other NSX services (which I will post about in the future)

 

As I mentioned, this is just my understanding; I do not have inside knowledge.  If I have made a mistake let me know, and I'll test and then correct it.

Central Ohio VMware Lunch and Learn

I have been toying with the idea of starting a community series of lunch and learn sessions to assist people in learning about VMware technology.   I am happy to announce that the first session will be Sep. 25th at Noon at:

 

OARnet – Bale Conference Room

1224 Kinnear Road

Columbus, OH 43212

 

It was very kind of my previous employer to be willing to host us for these sessions.   I am excited to announce that VMware Education has also provided some certification discount codes for me to pass out.   The format will be a bit loose: I will be focusing on VCP content but it will be open to discussion.   I want it to be a forum.   I have also invited others to present in the future and hope to make it a monthly occurrence.   The topic for this month will be vSphere networking.  It will be a great refresher course for anyone looking to study for the VCP-NV.    The event is 100% open to the public and we have seating for about 60 people, with standing room for about 40 more.   Bring your lunch and join us.  Feel free to contact me via comments or twitter if you have questions or would like to present a future topic.  The one request I have is that this is a technical conversation, not a sales pitch.  I want it to be a discussion between technical people.

 

Looking forward to seeing you there.

How do I explain virtualization to my Mother

As I have progressed in my career it's been increasingly hard to explain my job to people both inside and outside IT.    There used to be a time when people in IT understood what I did… at this point most people really don't understand what I do or why.   I have given up explaining it; I just say I work with computers.    Two years ago while at VMworld the crew from VMware TV stopped me on the street and asked how I would explain virtualization to my mother.   They totally stumped me.   I am lucky my mother has some technology in her life.   She recently got a Nook and has discovered she can get books without leaving the house.  For a woman in her 70's she is about as technically savvy as I can expect.    My religious studies have taught me that analogies can be a great way to teach, so I present my analogy to explain virtualization.

The Apartment building

Imagine with me that I have just bought a 30,000 square foot housing space.   As the owner I could rent out this space to a single four-person family.   They would be very happy and have more space than they could ever use.   It does present some critical problems, though.  The family would have to be very rich in order to pay for my whole building.   There is no way they could possibly use all the space, so there would be lots of wasted space.   If the one family moved out, I would have a huge expense to shoulder until I found another rich family who wanted 30,000 square feet.   I have other issues: unless I was very handy I would have to hire someone to fix and repair the apartment when things broke, an expense that is wasted when no one is living in the apartment.   The cost of heating, cooling and powering the apartment would be a huge expense that I would pass on to my single family.  At this point the power bill alone might force the family to move out, once again leaving me to shoulder the whole bill.   In reality, running a 30,000 square foot apartment building with a single tenant is a huge risk.  In some neighborhoods it's totally possible to rent out a space like this to a single family and make a huge profit, either because money is no object or they have some requirement that offsets the costs (like a home office).

The subdivided apartment

I prefer investments with less risk.    After some examination I have discovered that in the neighborhood there is a demand for one, two and three bedroom apartments.   Each type of apartment has some common components: a bathroom, a living room and a kitchen.   I create three standard configurations and start to subdivide my building into separate living spaces.  Some of my living space is lost to overhead, like hallways and doors.  There are some shared areas which represent a space saver, for example stairs, elevators and laundry rooms.   Making some areas shared reduces the space lost to overhead.    I may even consider putting in a pool on the roof to increase each apartment's individual rent and increase my profit.   Each of the apartments has its own plumbing with sinks, toilets and showers.   Once these components leave your individual space, they join the building's plumbing and water and utilize shared resources.   It's important that I take into account the total amount of possible shared utilization at the same time to avoid loss of individual services.   After all, if everyone flushes their toilet at 5:00 PM I cannot have the pipes get stuck.   I have to be careful that the individual actions of a single tenant cannot create a failure for all other tenants.   This is one of the key reasons why each apartment has its own water heater: we never want the actions of a single bad neighbor to affect everyone else's experience.

What does the apartment have to do with virtualization

Virtualization is very much like the apartment.   I have a large computer.  Most of the time its 30,000 square feet is about 2% utilized.   If I engineer the correct solution I can utilize the other 98% of wasted space.   Much like humans, my applications don't like to live in the same space.   Virtualization creates separate apartments for each service; these virtual apartments have some shared components and some individual components.   For example I may have shared network connections, power, even portions of memory (hallways) and shared storage (laundry room), while I have my own water heater (reservation/allocation of resources).  I may have a flash cache on my server (pool on the roof) to improve the amenities and encourage higher rent.    All of this is done in a fashion that protects the security of individual families and homes (hypervisor security).   Virtualization has to take into account peak usage to avoid having the pipes filled with you-know-what at 5:00 PM.   Much like my apartment, I need to hire system administrators to provide care and feeding for my virtualization; the more apartments I deploy, the better my cost savings in theory (yes, I know there are diminishing returns when I need more workers).

What does virtualization not have to do with an apartment building

Virtualization brings a few key differences to the table over my apartment building.   It is very costly for me to reconfigure my available space into larger or smaller apartments to meet demand; virtualization can do this on demand.   If my apartment burns to the ground due to faulty wiring, my families cannot be moved within minutes to another apartment nearby with their furniture and home goods intact.  Virtualization can do that.

Key elements

  • Virtualization is like an apartment building created to make efficient use of large wasted space
  • Virtualization has overhead due to shared components but the overhead uses what would be wasted space so it’s a net gain in most situations
  • Virtualization has limits on shared components and should be sized correctly (no full pipes at 5:00PM)
  • Virtualization is better than single homes in almost every way except one: It is still a shared resource and bad neighbors can still make it unlivable