Does Cloud + REST API spell the end of the GUI?

Fun question:  Does the API spell the end of the GUI?

I started my career as a Solaris and Linux administrator, mostly because I felt that working in Windows Server took away most of my control.  I loved configuring a web server in text and having full control.   I loved having to understand what each variable did so I could tune my web server to meet my needs.    It was a great job which led into configuration management with Puppet.   Full control and text once again…

This evening I was working with the REST API for NSX on a side project, and to confirm the results of my query I just used REST… I got my answer in a millisecond… I could not have refreshed the GUI that quickly.   It was so easy, and it reminded me of the good old Linux days long forgotten as an architect.

Make no mistake, it’s a coder’s world out there; infrastructure folks need to get comfortable with APIs and code.   The future is a process of automating different units together using APIs.   Working with REST has taught me so much about the platform.   You start to understand how the solution was built.   It exposes workflows that help you build efficiency…

I suggest that if you really want to understand your product, you need to learn its API.  If it does not have an API, consider a different product.   I know GUIs will be around, but I do believe they will continue to have less value in enterprise deployments.  Strap on your code and join the power users.
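To make the point concrete, here is a minimal sketch of querying a REST API directly instead of refreshing a GUI. The NSX Manager hostname, credentials, and endpoint path below are illustrative assumptions, not details from my actual side project:

```python
# Build (but do not send) a GET against an NSX-style REST endpoint.
# Hostname, credentials, and the /api/2.0/vdn/scopes path are placeholders.
import requests


def build_nsx_query(manager: str, path: str, user: str, password: str) -> requests.PreparedRequest:
    """Prepare a basic-auth GET request the way a quick REST query would."""
    req = requests.Request(
        "GET",
        f"https://{manager}{path}",
        auth=(user, password),
        headers={"Accept": "application/xml"},  # NSX-v endpoints speak XML
    )
    return req.prepare()


prepared = build_nsx_query("nsx-mgr.lab.local", "/api/2.0/vdn/scopes", "admin", "secret")
print(prepared.url)
```

Sending `prepared` through a `requests.Session` returns the answer in one round trip, with no GUI refresh in the loop.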

PowerShell Functions for tags

Some quick PowerShell functions for tags in ESXi. Enjoy:

Operational aspects of HCI video

While attending VMworld 2017 I presented on some of the operational aspects of hyper-converged infrastructure.   I believe the key takeaways were:

  • To gain the real benefits, hyper-converged must be more than just storage
  • Hyper-converged has a different scalability model (more linear)
  • Hyper-converged requires a different organizational structure to be successful
  • Hyper-converged performance and availability are policy based instead of location based

You can watch the video here:


One question I was asked after the presentation was about scalability.   It was a really good question so I wanted to answer it here.   Let’s assume that you start with a 3-node cluster.  After three years you add 12 nodes for a total of 15 nodes.   At some point newer hardware types become available; does that mean you now need to buy 15 brand-new nodes, turning incremental growth into a major investment?

The answer is yes and no.   At some point you do have to make the new three-node investment, but you should make it long before the end of life of your current cluster, so you can organically grow onto the new hardware before the old hardware ages out.   This should be taken into account in your growth models.

Thanks to all who attended.

How to operate IT with full Velocity

I was honored to be able to present this last week at VMworld 2017.   I have always been a huge supporter of vBrownBag and was really happy to present for them at VMworld again this year.   One of my presentations was on how to operate IT with full velocity, as a follow-up to my post on how to make IT Agile.   The session was recorded and posted on YouTube.   You can view the brief (12-minute) talk here:

Please let me know if you have any feedback or thoughts.

Upgrading to vSphere 6.5 FAQ

I was recently involved in recording a series of webinars to help customers understand how to upgrade to vSphere 6.5.   You can see the on-demand recordings here:

A number of live questions were asked and I figured I would highlight some frequently asked questions from the series:


Q. Is having three Platform Services Controllers and three vCenters, with each vCenter pointing to its own PSC, supported?

A. Yes, 100% supported: up to 10 PSCs and 10 vCenters total, pointing in any combination you want. If you want Enhanced Linked Mode, the PSCs will have to be external.

Q. Is there a manual step to make the load balancer switch to the secondary PSC?

A. Both PSCs are active, but only one PSC at a time can service requests. So assume we have two PSCs, PSC1 and PSC2, and the load balancer points to PSC1. If PSC1 fails, the load balancer points all traffic to PSC2 and traffic resumes.

Q. What is the link for the decision tree to choose platform services controller topologies?


Q. Do you need an external PSC if using products such as Site Recovery Manager?

A. The *only* reason you need an external PSC in v6.5 is if you want to use Enhanced Linked Mode (ELM).

Q. Why should we use the vCenter appliance on 6.5 instead of Windows?

A. There are a number of features only available in the appliance, including native vCenter HA, native backup and restore, single-click upgrade, and simplified support models.


Predictive DRS

Q. What are the added requirements on the vCenter server for predictive DRS?

A. You will need to install vROps – at least the Standard edition.



Q. What happens if the PSC is ‘down’? What functionality do you lose?

A. If a PSC is not functioning, new authentication attempts to vCenter will not work. Already-authenticated sessions will remain connected.

Q. When using VCHA how many vCenter licenses are required for the three machines?

A. A single vCenter license covers a VCHA setup of three machines.

Q. Can the vCenter appliance backups be scheduled to run on a regular basis?

A. Yes, you can set the tool up to run once or on a schedule.



Q. Is there a hardening guide for vSphere 6.5?

A. Absolutely, we just released the hardening guide for vSphere 6.5 at

Q. Can you still encrypt VMs with 3rd party vendors?

A. Of course – those APIs are still available to those vendors.

Q. Will vMotion encryption slow down the vMotion?

A. Less than 5% but yes. You’ll have to account for time to encrypt / decrypt.
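As a rough back-of-the-envelope sketch of what accounting for that overhead looks like (the VM size and link speed below are made-up inputs; only the ~5% figure comes from the answer above):

```python
# Estimate vMotion transfer time: memory size over link speed, plus encryption overhead.
# ram_gb and link_gbps are hypothetical inputs; 0.05 is the ~5% figure quoted above.
def vmotion_seconds(ram_gb: float, link_gbps: float, overhead: float = 0.05) -> float:
    return ram_gb * 8 / link_gbps * (1 + overhead)


# A 64 GB VM over a 10 Gbps link: ~51.2 s unencrypted, ~53.8 s with the 5% overhead.
print(round(vmotion_seconds(64, 10), 2))
```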

Q. What KMS servers are supported?

A. We support any KMIP 1.1 compliant key management server.

Q. Where are the keys stored for VM encryption?

A. Encryption keys are stored in whatever KMIP 1.1-compliant KMS you decided to deploy. The keys never persist in vCenter; they simply pass through to the cluster hosting the workload. The actual key encrypting the VM is stored, itself encrypted with the KMIP key, inside the .vmx file.   Should you lose your vCenter, you would simply reconnect to your KMS infrastructure.


Q. Is there any way to change SSO domain in 6.5 after initial installation?

A. Unfortunately, no. If you need to change your SSO domain, you must do it in v5.5 before you upgrade (it is also not possible in v6.0).

Q. If you are upgrading from 6.0 to 6.5 with multiple PSCs and VCSAs on the same SSO domain across two sites, can you upgrade the PSCs over multiple days/weeks and then the VCSAs over days/weeks? Or does it all need to be done in one window?

A. Our official answer is: Mixed-version environments are not supported for production. Use these environments only during the period when an environment is in transition between vCenter Server versions.

Q. Does the upgrade from 6.0 to 6.5 keep your root certificate store?

A. Yes it does – the upgrade does not affect your certificate store.

Q. Do we have a vCenter 6.0 Windows with MSSQL to Appliance 6.5 converter?

A. The migration tool from 6.0 Windows vCenter to 6.5 vCenter Server Appliance is included as part of the vCenter 6.5 Appliance ISO.

Q. If we want to move vCenter from embedded to external SSO what is the best path?

A. I’d recommend you perform your upgrade to the vCenter appliance using the migration wizard, and then post-migration deploy a new PSC appliance joined to the embedded one and repoint your vCenter to this new PSC.


Let me know if you have additional questions.

How to migrate current workloads into NSX

I get this question all the time.

How do I migrate my existing VLAN backed workloads into NSX?

The answer is pretty simple but it has some design concerns.   In order to explain the process let’s make some assumptions:

  • You have two virtual machines (VM1, VM2)
  • They are both on the same subnet, which is backed by VLAN 310
  • The subnet assigned to the VLAN is
  • The subnet is routed by physical_router1


The environment is shown below:

Let’s assume that our NSX network is also built out at this time as follows:

  • Edge Services Gateway ESG_1 provides routing between physical and virtual, using OSPF area 10 to peer with physical_router1
  • ESG_1 connects to a distributed logical router (DLR_1)
  • Virtual networks backed by VXLAN operate behind DLR_1
  • The ESG is advertising for the subnet running on VNI5000

The setup is visualized below:

OK, so how do we get the virtual machines on VLAN 310 behind DLR_1 so we can take advantage of all of the NSX routing advantages?

#1 Create a new destination network

Create a new logical switch, which will be VNI5001 (the number is assigned by NSX). At this point, don’t assign it a gateway on DLR_1.
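As a sketch, this step can also be driven through the NSX-v REST API rather than the GUI. The manager hostname, transport-zone ID (`vdnscope-1`), and switch name below are illustrative assumptions, not values from this walkthrough:

```python
# Prepare (but do not send) the POST that creates a logical switch (virtual wire).
# Hostname, scope ID, and switch name are placeholders.
import requests


def build_create_logical_switch(manager: str, scope_id: str, name: str) -> requests.PreparedRequest:
    """Build the NSX-v virtual-wire creation request for a given transport zone."""
    body = (
        "<virtualWireCreateSpec>"
        f"<name>{name}</name>"
        "<tenantId>default</tenantId>"
        "</virtualWireCreateSpec>"
    )
    req = requests.Request(
        "POST",
        f"https://{manager}/api/2.0/vdn/scopes/{scope_id}/virtualwires",
        data=body,
        headers={"Content-Type": "application/xml"},
    )
    return req.prepare()


prepared = build_create_logical_switch("nsx-mgr.lab.local", "vdnscope-1", "migration-ls")
print(prepared.url)
```

On success, NSX returns the new virtual wire ID, which maps to the VNI it assigned.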

#2 Deploy a new Edge gateway

Deploy a new ESG, which we will call ESG_2, with just a management interface.

#3 Create a bridge

Use ESG_2 to bridge VLAN 310 and VNI5001.  There are a number of constraints with bridges, which I will mention after the process steps.

#4 Test VNI5001

Put a test virtual machine on VNI5001 and test connectivity to VM1 and VM2.


#5 Move virtual machines into VNI5001

Switch the network interface for VM1 and VM2 to VNI5001.   They will see a single-ping interruption and should continue to work.

#6 Change routing

Here is the interruptive part.  Currently, routing to VM1 goes from physical_router1 to the switch on VLAN 310, through ESG_2, and into VNI5001, which is not an ideal path.    We need the subnet to be advertised by ESG_1 instead.   We can do this by removing ESG_2 (which interrupts the network to VM1 and VM2) and adding a gateway for the subnet on DLR_1 for VNI5001.   ESG_1 will then advertise the new subnet to physical_router1; assuming it’s accepted because the old route has been removed, traffic will resume.

Bridge mode allows you to migrate into virtual networking without IP address changes.  It does cause an interruption.   One might wonder whether you could just run bridge mode forever.   There are performance and latency concerns to consider with that plan.

Design considerations for bridge mode:

  • An ESG used to provide an L2 bridge maps to a single VLAN, so each bridge requires a new ESG
  • If the ESG fails, anything on the virtual-networking side will fail, because the ESG is the single point of bridging
  • Performance can be impacted: all traffic crossing the bridge has to route into the ESG bridge and then to the destination VM
  • If redundancy beyond VMware HA is a concern, active/passive ESGs are supported
  • The L2 VLAN must be present on all ESXi hosts that may run the ESG with the bridge


With those design considerations in mind, this still did not address VLANs that contain both physical and virtual machines.   A bridge can provide communication between physical and virtual machines.   This may seem like a good solution, but it requires careful design and performance consideration.   Single points of failure or configuration challenges on the physical network can cause the whole solution to fail.

You can read more about bridges on VMware’s documentation here.

Cross-site vMotion requires VMware switch technology

Cross-site vMotion is a feature that really shows the power of the VMware platform.   When combined with NSX, you can move live, running virtual machines across long distances.    It’s a huge advantage for customers looking to balance workloads or avoid potential disasters.   I learned today that this feature requires VMware’s virtual standard switch or distributed switch; it will not work on any third-party switches today.

VSS = Virtual Standard Switch

VDS = Virtual distributed switch

There are only certain supported migration paths:

  • VSS -> VSS: supported
  • VSS -> VDS: supported
  • VDS -> VDS: supported
  • VDS -> VSS: not supported
Notice that VDS -> VSS is not supported.
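The matrix is small enough to encode directly. Here is a tiny sketch that captures the rule; the supported set below reflects VMware's commonly documented paths, with VDS -> VSS excluded:

```python
# Cross-site vMotion switch-migration matrix.
# VSS = virtual standard switch, VDS = virtual distributed switch.
SUPPORTED_PATHS = {("VSS", "VSS"), ("VSS", "VDS"), ("VDS", "VDS")}


def migration_supported(src: str, dst: str) -> bool:
    """True if a cross-site vMotion between the given switch types is supported."""
    return (src, dst) in SUPPORTED_PATHS


print(migration_supported("VDS", "VSS"))  # False: you cannot move back from VDS to VSS
```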