Public Speaking Help

I recently asked on Twitter who needed help with public speaking and was surprised by the number of people who reached out.   I wanted to give some personal tips that work for me with the hope that they might help you, and to avoid a pile of generic public speaking tips…   Last year was my first year presenting at VMworld, which was a little scary.  The day before, I spent some time reading public speaking tips and ran across an old one: if you are nervous, imagine everyone in their underwear.   I'll be 100% honest, imagining 150 CIOs in their undies was about the scariest thing I have ever done in my life… it's an image I cannot burn out of my mind.   Which brings me to my first tip:

Start with something funny, unique or thought-provoking:

You need to start with a hook or attention activity.   Last year at VMworld my co-presenter took off his mic and played AC/DC's "Back in Black" over the sound system before he would begin the session.   It was a great way to grab everyone's attention.   I personally like to ask a question to grab attention and get people thinking.   Of late I have been using "why are space suits white?"…  Others can do funny… I learned at an early age that what I find funny others do not… they mostly think I am weird…   Whatever you choose, you need to tie your opening into the total story of your presentation.

A presentation is a story with a moral or point

Everyone has so much to say in a presentation.  Technical presentations have so many details.  The best presentations invite the listener to go on a journey and have a moral or point.   I now start my presentations with the question: what is the point?   If I cannot define what I want you to learn from my presentation in 15 words or less, I should not be presenting.   Allow me to illustrate:  I gave a presentation on three things that need to change in IT.    I used the example of a road trip and the elements required to make it successful.  (https://www.youtube.com/watch?v=6AHkcS3PzVg&t=2s)  The example was woven throughout the whole presentation and used to provide grounding.   The story allows you to follow my insane train of thought.

Remove the extra

When working on your story, anything that does not prove the point should be eliminated.   It's really simple yet hard in practice… we have so many good ideas, but they don't always fit the specific point.   You have to cut the extra to be successful.

What?! This does not help me at all!!!!

Almost universally people tell me creating a presentation is not the problem; it's delivery.   I personally struggle delivering other people's presentations…  I suck at it.    When I create the presentation it's a personal journey, a story that I am sharing, which right away makes it easier.   I can speak from experience and knowledge instead of slideware…   Am I afraid to present in front of others? YES!  How do I get better at it?  By doing it… I started with church and went to VMUGs…. now I speak anywhere they will let me…   Give me a topic and I'll talk on it.

Practice

Yes… Yes… Yes. Record yourself and practice your talk track many times; it makes a huge difference.  Start making YouTube videos of yourself presenting on a topic…  It will be hard but will pay off in the end.

VRO to delete lots of virtual switches in NSX

I created a workflow to create 5,000 virtual switches in NSX… which was cool.  But it did create 5,000 virtual switches which now needed to be deleted.  So I created a new workflow to delete them.   In my case I created all 5,000 switches on the same distributed logical router (DLR).   I had to delete the DLR before starting this workflow.   I used two REST API calls to complete this work, as shown below:

[Screenshot: the GET REST operation]

And

[Screenshot: the DELETE REST operation]

Notice the URL templates on both, as they are critical.  Otherwise it's just a REST API call to NSX Manager.
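
If you are recreating these REST operations yourself, the URL templates would look something like the sketch below (assuming the NSX-V virtual wires API; verify the exact paths against the API guide for your NSX version):

// Assumed NSX-V URL templates (check your NSX API guide)
// GET operation - retrieves all virtual wires (no template parameter):
//     /api/2.0/vdn/virtualwires?pagesize=5000
// DELETE operation - one template parameter for the wire to remove:
//     /api/2.0/vdn/virtualwires/{virtualWireId}

The {virtualWireId} placeholder in the DELETE template is what the workflow fills in from the parameter array in the code further down.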

These two REST operations, along with the RESTHost, get passed into the workflow as shown:

[Screenshot: the workflow with the two REST operations and the RESTHost as inputs]

Otherwise it’s just a scriptable task.  I ran into an issue with it running too fast for NSX manager so I put a 1 second wait in between each call.  Here is the code:

function sleep(milliseconds) {
    // vRO JavaScript has no setTimeout, so busy-wait until the time elapses.
    // (System.sleep(milliseconds) is a built-in alternative in vRO.)
    var start = new Date().getTime();
    while ((new Date().getTime() - start) < milliseconds) {
        // spin
    }
}

// Set up the GET request for the virtual wires
var inParametersValues = [];
var request = restOperation.createRequest(inParametersValues, null);
request.contentType = "";
var response = request.execute();

// Prepare output parameters
statusCode = response.statusCode;
statusCodeAttribute = statusCode;
contentLength = response.contentLength;
headers = response.getAllHeaders();
contentAsString = response.contentAsString;

System.log(response);

// Parse the response twice: E4X for easy element access,
// DOM just to count the vdnId elements
var xmlObj = new XML(contentAsString);
var document = XMLManager.fromString(contentAsString);
var count = document.getElementsByTagName("vdnId");

System.log("Count : " + count.length);

var j = 0;

for (var i = 0; i < count.length; i++) {
    // Only delete wires above a chosen vdnId; adjust 5010 to your range
    if (xmlObj.dataPage.virtualWire[i].vdnId > 5010) {
        System.log("Scope: " + xmlObj.dataPage.virtualWire[i].objectId + " vdnId: " + xmlObj.dataPage.virtualWire[i].vdnId);
        var virtualWireID = xmlObj.dataPage.virtualWire[i].objectId;
        var deleteParametersValues = [virtualWireID];
        var deleteRequest = restOperationDelete.createRequest(deleteParametersValues, null);
        deleteRequest.contentType = "";
        var deleteResponse = deleteRequest.execute();
        statusCode = deleteResponse.statusCode;
        System.log("Response : " + statusCode + " Rest request : " + virtualWireID);
        j++;
        // NSX Manager could not keep up with back-to-back deletes,
        // so wait one second between calls
        sleep(1000);
    }
}
System.log("j : " + j + " Total : " + count.length);

It should work great.  It's currently set to delete every virtual wire with a vdnId above 5010 (mine started at 5000); you can adjust this number to whatever you want…


Your IT shop is Ugly! Part 3

This is part 3 of a multi-part article series; see the other articles here:

 

Perfect requires …. 

If you are still reading, allow me to reward you with some measure of answers:

 

The first real challenge is that change happens and you will not have the funding to remove the old and replace it with the new.

The second real challenge is that innovation and agility demand change.

The third challenge is that we continue to focus on initial state instead of the life of a service as a constant source of change.

 

So, the challenge is change.

 

Perfection is the process of refinement until only desirable elements, qualities or characteristics remain.

I have illustrated that change is both the problem and the solution.   How can we resolve these two opposites: I love change, but I hate its effects?

 

There are two IT approaches to this challenge:

  • Take on day-two operations (standardize, quantify, change management, etc.)
  • Move to micro-service architecture

 

Many organizations have embraced change management as the way to approach change.   Every single change has to be approved by subject matter experts, thus reducing risk.  In practice this only serves to slow down innovation by forcing it to filter through a committee.   Management of change is the enemy of innovation; it is truly at the root of IT's failure today.   Change management rarely stops failures, because of the complexity of the systems involved.      While I am a huge fan of configuration management as a method of maintaining initial state, it's only a band-aid, not the real solution.  It's a reactive approach which rarely takes the master plan into account.

The allure of micro-service architecture is easy to understand, but in reality many applications, both COTS and in-house developed, struggle to achieve it.  Many customers have a one-sided strategy favoring stateless and micro-service architectures while pretending traditional applications don't exist.   A quick survey of your application portfolio might show that 80% of your business revenue is generated by the least stateless architecture you support.   It's a rip-and-replace plan that rarely takes current realities into account.

 

So how can we embrace change and innovation?

I believe this is where we combine the best of both worlds with a clear understanding of reality.   We need the replaceability of micro-service architecture with the compatibility of legacy servers.   We need the speed and agility of constant change with the stability of configuration management.   For me it comes down to application as code.  Can you define your application as code?  Architects have been defining their applications in Visio for years… would it not be easier to define them as code?   This code can then be versioned and updated.   Envision everything unique about your application, from network, security and storage to servers, configuration and the application itself, defined in a code construct.   You could then check that construct into a code repository.  When change is required, the complete environment including the change can be deployed using the code construct.   The deployed infrastructure can be tested automatically for required functionality and then be used to replace current production.   If it fails the functionality tests, it returns to the drawing board.   This type of infrastructure as code can be deployed 100 times a day, driving innovation speeds.   If failures become an issue, the application and infrastructure are rolled back to the last known good state.  I am not suggesting we adopt 100% stateless infrastructure, containers or magic fairy dust… I am suggesting we tighten our belts and do the hard work to truly define applications as code (see the sketch after the list below).   In order to define things as code we need to have three things:

  1. Software-based constructs for everything: if your solution requires physical hardware it cannot be automated or replicated without time and cost; no one has hardware on demand for every dynamic situation
  2. Coordination between siloed teams (break down the silos and form one team; no more separate infrastructure, application, network, security and operations teams)
  3. Development skills
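
To make "application as code" concrete, here is a minimal sketch of what such a construct might look like.  Every name and property in it is hypothetical; the point is simply that the unique elements of an application become a versionable artifact:

// Hypothetical "application as code" construct; all names are illustrative
var appAsCode = {
    name: "order-service",
    version: "2.4.1",
    network: { segments: ["web-tier", "app-tier"], loadBalancer: true },
    security: { firewallRules: [{ from: "web-tier", to: "app-tier", port: 8443 }] },
    storage: { tier: "gold", capacityGB: 200 },
    servers: [{ role: "app", count: 3, template: "rhel7-base" }],
    configuration: { javaHeapMB: 4096 },
    application: { artifact: "order-service-2.4.1.war" }
};
// Checked into a repository, this construct can be deployed as a complete
// environment, functionally tested, and promoted to replace production;
// on failure it is rolled back to the last known good version.

The construct becomes the contract between the teams: every change, whether to a firewall rule or a server count, goes through the same versioned artifact.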

 

Combining all these elements provides the basis for successful application as code.   You will have to orchestrate many different methods into a cohesive approach and use iterative software development.   In order to solve the problem you will have to approach a new project with this team, not try to redesign or replatform an old one.    These basic blocks can provide the basis for immutable applications, thus making infrastructure just plumbing.

 

(everything unique about an application is replaceable) 

Your IT shop is Ugly! Part 2

This is part 2 of a multi-part article series; see the other articles here:

 

Innovation is your chaos monkey! (Bob’s your uncle)

Innovation and agility are buzzwords I hear a lot in IT.   Innovation is more about culture than capabilities.   Innovation is inherently a proactive activity: see a problem and choose to solve it in a new way.   Agility is the ability to embrace change quickly; it is inherently reactive.   I had some exposure to an IT environment where everyone was compensated based upon not being the cause of outages.   After an outage, a root cause analysis would be done to determine which group's compensation was negatively affected by the outage. As you can imagine, this policy was created to reduce outages.   In fact, it had a direct negative effect on mean time to resolution: during an outage everyone was focused on making sure they didn't get any of the blame.   Innovation did not exist in this company because it had the potential of creating outages, which were unacceptable.   No one would work together on anything; they had a culture of blame instead of innovation.   Innovation requires organizations to be willing to endure acceptable downtime.   Acceptable downtime was defined by Google as part of its site reliability engineering: the idea that we can continue to innovate until we have passed the threshold of acceptable downtime for the month; once the month has passed, innovation can continue.   Site reliability engineers focus 50% of their time on operations and 50% on automating operations.   Using acceptable (or allowed) downtime has turned the traditional SLA model upside down and allowed Google's IT to innovate at a much faster pace.   Increased proactive innovation has a direct effect on reducing the amount of reactive work being done.

 

The second real challenge is that innovation and agility demand change.

 

We are focused on the initial state

Consider manufacturing: it is concerned with the initial state.   Auto manufacturing has optimized every portion of the process.   They have the supply chain whipped, huge buildings full of robots, and they produce tens of thousands of cars a day.   All of these efforts optimize the end deliverable product of a car to the consumer.    Once the consumer takes ownership, all the optimized automation ends.   Once you reach 5,000 miles you have to take the car to a shop where a human changes the oil.   If something breaks, humans change the parts and immediately start to break all the standardization and quality created by the initial automation.   End-to-end creation of a car takes roughly 17 hours.   That same car is likely to be in the wild for 87,600 hours (10 years), yet everything is focused on optimizing the 17 hours of initial state.  There are a number of parallels between cars and IT.   Most IT shops seem to be focused on delivering initial state quickly (day 1); a lot less thought is given to day-two operations, which will persist for the next five to ten years.   The major difference is the customer's expected outcome.   With a car you expect a drivable product with some level of quality.   With a server you expect it to operate in its fifth year the same as at initial delivery.

 

The third challenge is that we continue to focus on initial state instead of the life of a service as a constant source of change.

Your IT shop is Ugly! Part 1

This is part 1 of a multi-part article series; see the other articles here:

Big news everyone!  Your IT shop is ugly!  Good news: everyone's IT shop is ugly.   As an 'older' IT professional (in IT that means you are past 30) I have seen my fair share of IT shops.  As a solutions architect for VMware I have seen even more IT practices.   The truth is everyone's IT shop is ugly; there are no pretty IT shops.   In this article I will explain why it's ugly and provide some prescriptive steps for solving the issues you may face.

Master Planned community

I recently moved to Texas with my job (it's been great, btw).   I had to sell my beloved first home and move into another house.   This home happened to be in a master planned community.   My community has hundreds of homes that have all been built in the last five years.   Before a single home was built, a developer got together with an architect and planned out the whole community.  Every inch of the community was planned, from the placement of houses down to the location of bushes.   It's a beautiful place to live and very orderly.   To preserve the experience envisioned by the architect, a 250-page HOA document exists to maintain the master plan.    I learned quickly that my home was missing a bush in front of the air conditioner and that I could not leave my garbage cans out overnight.  As I drive out of my community, the center island of the road is lined with trees.   I noticed the other day that one tree had been replaced with a new tree due to the death of the previous one.   This has upset the balance of my community: the master plan now has a tree that is ten feet shorter than the rest.   Chaos has happened, and the master plan could not account for it exactly.

I don’t care about your master planned community why do you bring it up? 

Honestly, I don’t really care about my master planned community either.   It is a great and safe place to live which were my requirements I could care less about the tree, but I believe it’s a perfect way to explain why your IT shop is ugly.   Your IT environment is as old as your company (in most cases) which means it was master planned a while ago.   Since the original plan you may have expanded, contracted, taken on new building techniques and changed contractors.   Your original architect has retired, moved to a new company, been promoted, continued, or stayed put and not updated skills.   New architects have come and gone each looking to improve the master design with their own unique knowledge and skill set.   Some organizations even have architects for each element who rarely coordinate with each other.   Each of these architects understood the technical debt created and left by previous architects.  Older architectures, applications, methods each with their aging deterioration and mounting cost.  Some of your architects have suggested solving these technical debt monsters in two potential ways:

  • Wipe it out and start over (bi-modal)
  • Update the homes where they stand (Upgrade)

Each of these methods provides the simple benefit of reducing the total cost of ownership of these legacy platforms.

The wipe it out method requires some costly steps:

  • Build new homes that could be used to turn a profit if they were not part of the re-platform
  • Move everyone into the new homes
  • Ensure everyone is happy with their new home (which turns into a line-by-line comparison: my kitchen used to be on the right, not the left…)
  • Switch the owners into the new homes
  • Plow down older homes
  • Build new homes on the land to turn a profit (or get cost savings from the re-platform to improve bottom line)

Updating the homes where they stand seems like a good plan, but it requires some steps:

  • Buy new materials to replace sections of the home
  • Move owners into temporary housing
  • Update their homes
  • Move them back
  • It’s a long process

Both methods are costly, and removing technical debt rarely makes the business's radar as critical to its health, so these projects get ignored.

So, the first set of things that made your IT shop ugly are:

  • Many different architects over time each with a different vision
  • Legacy IT, with Legacy Legacy IT, with Legacy Legacy Legacy IT, with Mainframe (Technical Debt)
  • Business does not want to spend money on technical debt projects because they don’t provide revenue

The first real challenge is that change happens and you will not have the funding to remove the old and replace it with the new.

Will FaaS mean the end of servers?

A few years ago there were many articles about how containers would mean the end of servers.   From a technical standpoint Function as a Service (FaaS) and containers both run on servers, so the simple answer is no, it does not mean the end of servers.   I have seen a lot of rumbling around FaaS of late.   Those who have heard me speak on automation know I am all about functions, modular blocks and FaaS. We do need to break code down to its simplest terms to encourage innovation and re-use.   FaaS has a place in your overall design.   Application design continues to pivot away from monolithic design toward more micro-service models.   FaaS is part of that pie.   When considering any of these strategies the same overall design challenges exist:

  • Data persistence
  • Data gravity
  • Security

Data persistence:

No matter how stateless your environment, sooner or later data is involved.  There are some exceptions but they are really rare.   The internet runs on data.  The real value is identifying you as a user and selling that data in bulk, not the $0.99 you paid for the app.    Applications exist to do something and then keep state… or record your reactions; either way the data needs to be stored.  FaaS is stateless.  So somewhere in the pie we need state: something to orchestrate the next step and provide the value to the user and the developer.  Where you store this data depends on the application; from a simple text file to a shared-nothing database, someone is keeping the data.   Let's just be honest that 90% of the world still lives on a relational database (Oracle, MS-SQL, MySQL) with a small portion using a shared-nothing database (Cassandra, etc.).  This persistence layer has all the same concerns as any other non-immutable infrastructure: if you lose all your copies, you lose data.   Even with every function of an application as a FaaS you still need a database.   The challenge of persistence means you have to live in both worlds, persistent and non-persistent.  It's important to consider the manageability of both these worlds when you consider implementing new technologies.
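
As a toy illustration of where state sneaks in, here is a sketch of a generic FaaS-style handler.  The handler shape and the db client are hypothetical, not any specific provider's API; the point is that the function itself holds nothing between invocations, so anything worth keeping must land in a persistent store:

// Hypothetical FaaS handler; event shape and db client are illustrative only
function handleOrder(event, db) {
    // The function is stateless: nothing computed here survives after it returns
    var total = 0;
    for (var i = 0; i < event.items.length; i++) {
        total += event.items[i].price * event.items[i].quantity;
    }
    // ...so any state worth keeping has to be written somewhere persistent
    db.save("orders", { id: event.orderId, total: total, status: "received" });
    return { orderId: event.orderId, total: total };
}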

 

Data gravity:

The idea of FaaS or stateless is that I can deploy anywhere… while this is technically true, you want your application and functions to be close to the persistent data to ensure performance.   That means you either need to replicate data in real time between everywhere you want to operate, or operate in the same locality as your stateless functions.   Shared-nothing databases have massive concerns with write amplification: confirming a write across long distances introduces unacceptable latency into every write.   Sharding these databases is touted as the solution, using synchronous writes in the same location for redundancy.  Sharding is possible, but it's complex and you still have latency when the data needed is not local.   Now we have created an M.C. Escher puzzle with our application architecture.   Gravity of data will continue to drive location more than the features or functionality of a location.   It's an instant world and no one is going to wait for anything anymore.
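
To put rough numbers on the locality argument, here is a back-of-the-napkin sketch; both latency figures are assumptions for illustration, not measurements:

// Illustrative arithmetic only; latency figures are assumed, not measured
var localCommitMs = 1;       // write confirmed within the same datacenter
var crossCountryRttMs = 70;  // extra round trip to confirm a remote replica
var writesPerTransaction = 10;
System.log("local : " + (writesPerTransaction * localCommitMs) + " ms");                         // ~10 ms
System.log("remote: " + (writesPerTransaction * (localCommitMs + crossCountryRttMs)) + " ms");   // ~710 ms

A transaction that completes in roughly 10 ms against local data takes closer to 710 ms when every write must be confirmed across the country; that is the gravity.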

 

Security

While not as interesting as the bling of FaaS, security is a real concern.  Unless you plan on running your FaaS inside your private datacenter, it's a concern.   Your functions have data in memory to do their work, and the function is running on a server.  Like all multi-tenant situations, how do we avoid having a bad or untrusted actor access our data in flight?   Anyone who has worked at a multi-tenant provider understands this challenge.   Cloud providers have long wrapped containers in lightweight isolation layers (instead of shared worker nodes) to ensure isolation is present.  I personally don't know what measures providers have taken to isolate FaaS offerings, but you do have to consider how you will ensure there is not a hacker running a buffer overflow and reading your FaaS memory.

 

At the end of the day, what is old is new and what is new is old.   FaaS, containers, virtual machines, physical servers, laptops, phones: all have the same fundamental application challenges.  These all provide options.   You may be considering a FaaS strategy for many reasons.  My point is don't ignore good design principles just because it's new technology.

vRealize Orchestrator scaling with 4K displays

I ran into this issue this week.   vRealize Orchestrator with Windows 10 on 4K displays makes the UI so small not even my seven-year-old could read it.   For those lucky enough to have 4K displays it's a real challenge.  It's a problem with Java and DPI scaling, not Orchestrator, but it's a magical challenge.   As much as I want to return to the days of using magnifying glasses to read computer screens… here is a simple fix.

 

Download the client.jnlp and run it from an administrative command line with the following command:

 

javaws -J-Dsun.java2d.dpiaware=false client.jnlp

 

This should fix your vision issues, and it's cheaper than a new pair of glasses.

vRO Action to return virtual machines with a specific tag

I am a huge fan of tags in vSphere.   Metadata is king for modular control and policy-based administration.   I wrote an action for a lab I presented at VMUG sessions.   It takes the name of a tag as a string and returns the names of all virtual machines with that tag as an array of strings.  It does require a VAPI endpoint setup as explained here: (http://www.thevirtualist.org/vsphere-6-automation-vro-7-tags-vapi-part-i/) Here it is:

 

Return Type: Array/String (could be Array/VC:VirtualMachine)

Parameter: tagNameToMatch string

Code: return_vm_with_tag

// Array to hold the VM names
var vmsWithSpecificTag = new Array();

// VAPI connection: use the first registered endpoint to gather information
var endpoints = VAPIManager.getAllEndpoints();
var client = endpoints[0].client();

// Tag association and tag services
var tagging = new com_vmware_cis_tagging_tag__association(client);
var tagMgr = new com_vmware_cis_tagging_tag(client);

// Object used to identify each VM to the tagging service
var objId = new com_vmware_vapi_std_dynamic__ID();

// Get all virtual machines
var vms = System.getModule("com.vmware.library.vc.vm").getAllVMs();

// Loop through the virtual machines
for each (var vm in vms) {
    // Assign the VM's identity to the lookup object
    objId.id = vm.id;
    objId.type = vm.vimType;
    // Get the tags assigned to this VM
    var tagList = tagging.list_attached_tags(objId);
    // Loop through the VM's assigned tags
    for each (var tagId in tagList) {
        // Resolve the tag object and compare its name to the requested tag
        var theTag = tagMgr.get(tagId);
        var tagName = theTag.name;
        if (tagName == tagNameToMatch) {
            System.log("VM : " + vm.name + " has the tag : " + tagName);
            // Add to the result array
            vmsWithSpecificTag.push(vm.name);
        }
    }
}

return vmsWithSpecificTag;
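
For reference, here is how you might call the action from a workflow scriptable task.  The module path com.example.vmug is a made-up placeholder for wherever you saved the action:

// Hypothetical usage; module path and tag name are placeholders
var taggedVms = System.getModule("com.example.vmug").return_vm_with_tag("Production");
for each (var name in taggedVms) {
    System.log("Tagged VM: " + name);
}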

Learning vRealize Orchestrator

Recently I presented a lab on learning vRealize Orchestrator at the Austin VMUG.   It had been too long since I attended a VMUG meeting, due to moving to Dallas.   It was great to meet new peers in the Austin area.    The purpose of the lab was to give people hands-on experience using Orchestrator with some real-world, helpful examples.   I was present to answer questions and help the learning process.

I am working to bring the same lab to Dallas and Houston in the next few months but wanted to share the labs here.  It's mostly possible to do the labs in the hands-on lab environment of HOL-1821-05-CMP, partially made by my friend Ed Bontempo.  You will have to type a lot of the code examples since HOL does not support cut and paste.   You can also do it all in your home lab.    Contact me if you want to know about the next session we are presenting at a live VMUG with some instructor help.

Code: code_text

Lab Manual: VMUG_VRO_LAB_Manual

Enjoy!