Public Speaking Help

I recently asked on Twitter who needed help with public speaking and was surprised by the number of people who reached out.  I wanted to share some personal tips that work for me in the hope that they might help you, while avoiding a pile of generic public speaking advice.  Last year was my first year presenting at VMworld, which was a little scary.  The day before, I spent some time reading public speaking tips and ran across an old one: if you are nervous, imagine everyone in their underwear.  I'll be 100% honest, imagining 150 CIOs in their undies was about the scariest thing I have ever done in my life… it's an image I cannot burn out of my mind.  Which brings me to my first tip:

Start with something funny, unique or thought provoking:

You need to start with a hook or attention activity.  Last year at VMworld my co-presenter took off his mic and played AC/DC's Back in Black over the sound system before he would begin the session.  It was a great way to grab the audience's attention.  I personally like to ask a question to grab attention and get people thinking; of late I have been using "Why are space suits white?"  Others can do funny… I learned at an early age that what I find funny, others do not… they mostly think I am weird.  Whatever you choose, tie it into the total story of your presentation.

A presentation is a story with a moral or point

Everyone has so much to say in a presentation, and technical presentations have so many details.  The best presentations invite the listener to go on a journey and have a moral or point.  I now start my presentations by asking what the point is.  If I cannot define what I want you to learn from my presentation in 15 words or less, I should not be presenting.  Allow me to illustrate: I gave a presentation on three things that need to change in IT.  I used the example of a road trip and the elements required to make it successful (https://www.youtube.com/watch?v=6AHkcS3PzVg&t=2s).  The example was woven throughout the whole presentation and used to provide grounding.  The story allows you to follow my insane train of thought.

Remove the extra

When working on your story, anything that does not prove the point should be eliminated.  It's really simple yet hard in practice… we have so many good ideas, but they don't always fit the specific point.  You have to cut the extra to be successful.

What? This does not help me at all!!!!

Almost universally people tell me creating a presentation is not the problem, it's the delivery.  I personally struggle delivering other people's presentations… I suck at it.  When I create the presentation it's a personal journey that I am sharing, which right away makes it easier.  I can speak from experience and knowledge instead of slideware.  Am I afraid to present in front of others?  YES!  How do I get better at it?  By doing it… I started at church and moved on to VMUGs… now I speak anywhere they will let me.  Give me a topic and I'll talk on it.

Practice

Yes… yes… yes, record yourself and practice your talk track many times; it makes a huge difference.  Start making YouTube videos of yourself presenting on a topic.  It will be hard but will pay off in the end.

Your IT shop is Ugly! Part 3

This is part 3 of a multi-part article series; see the other articles here:

 

Perfect requires …. 

If you are still reading, allow me to reward you with some measure of answers:

 

The first real challenge is that change happens and you will not have the funding to remove the old and replace it with the new.

The second real challenge is that innovation and agility demand change.

The third challenge is that we continue to focus on initial state instead of the life of a service as a constant source of change.

 

So, the challenge is change.

 

Perfection is the process of refinement until only desirable elements, qualities or characteristics remain.

I have illustrated that change is both the problem and the solution.   How can we resolve these two opposites: I love change, but I hate its effects?

 

There are two IT approaches to this challenge:

  • Take on day two operations (standardize, quantify, change management, etc.)
  • Move to micro-service architecture

 

Many organizations have embraced change management as the way to approach change.  Every single change has to be approved by subject matter experts, thus reducing risk.  In practice this only serves to slow down innovation by forcing it to filter through a committee.  Management of change is the enemy of innovation, and it underlies much of IT's failure today.  Change management rarely stops failures because of the complexity of the systems involved.  While I am a huge fan of configuration management as a method of maintaining initial state, it's only a band-aid, not the real solution.  It's a reactive approach that rarely takes the master plan into account.

The allure of micro-service architecture is easy to understand, but many applications, both COTS and in-house developed, struggle to achieve it in reality.  Many customers have a one-sided strategy favoring stateless and micro-service architectures while pretending traditional applications don't exist.  A quick survey of your application portfolio might show that 80% of your business revenue is generated by the least stateless architecture you support.  It's a rip-and-replace plan that rarely takes current realities into account.

 

So how can we embrace change and innovation?

I believe this is where we combine the best of both worlds with a clear understanding of reality.  We need the replaceability of micro-service architecture with the compatibility of legacy servers.  We need the speed and agility of constant change with the stability of configuration management.  For me it comes down to application as code.  Can you define your application as code?  Architects have been defining their applications in Visio for years… would it not be easier to define them as code?  That code can then be versioned and updated.  Envision everything unique about your application, from network, security, storage, servers and configuration to the applications themselves, defined in a code construct.  You could then check that construct into a code repository.  When change is required, the complete environment including the change can be deployed from the code construct.  The deployed infrastructure can be tested automatically for required functionality and then be used to replace current production.  If it fails the functionality tests, it returns to the drawing board.  This type of infrastructure as code can be deployed 100 times a day, driving innovation speeds.  If failures become an issue, the application and infrastructure are rolled back to the last known good state.  I am not suggesting we adopt 100% stateless infrastructure, containers or magic fairy dust… I am suggesting we tighten our belts and do the hard work to truly define applications as code.  In order to define things as code we need three things (a minimal sketch of such a construct follows the list below):

  1. Software based constructs for everything – if your solution requires physical hardware it cannot be automated or replicated without time and cost – no one has hardware on demand for every dynamic situation
  2. Coordination between siloed teams (break down the silos and form one team; no more separate infrastructure, application, network, security and operations teams)
  3. Development skills
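To make that concrete, here is a minimal sketch (in PowerShell, with made-up names and values) of what an application-as-code construct could look like.  The point is not this specific schema; it's that everything unique about the application lives in one versionable definition:

# Hypothetical application-as-code construct: every unique aspect of the
# application (network, security, storage, compute, configuration) captured
# in one definition.  All property names and values are illustrative only.
$application = @{
    Name     = 'order-service'
    Version  = '1.4.2'
    Network  = @{ Segment = 'app-tier'; Ports = @(443, 8443) }
    Security = @{ FirewallPolicy = 'web-to-app-only' }
    Storage  = @{ Tier = 'gold'; CapacityGB = 200 }
    Compute  = @{ Instances = 3; CpuCount = 4; MemoryGB = 16 }
    Config   = @{ JavaHeapMB = 4096; FeatureFlags = @('new-checkout') }
}

# Serialize the construct so it can be checked into a code repository and
# consumed by whatever deployment and test tooling you use.
$application | ConvertTo-Json -Depth 5 | Set-Content -Path 'order-service.json'

When a change is required you update the definition, deploy a complete environment from it, run your functionality tests against the result and only then replace production; if the tests fail it goes back to the drawing board.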

 

Combining all these elements provides the basis for successful application as code.  You will have to orchestrate many different methods into a cohesive approach and use iterative software development.  In order to solve the problem you will have to approach a new project with this team, not try to redesign or replatform an old one.  These basic blocks can provide the basis for immutable applications, making infrastructure just plumbing.

 

(everything unique about an application is replaceable) 

Your IT shop is Ugly! Part 2

This is part 2 of a multi-part article series; see the other articles here:

 

Innovation is your chaos monkey! (Bob’s your uncle)

Innovation and agility are buzzwords I hear a lot in IT.   Innovation is more about culture than capabilities.   Innovation is inherently a proactive activity: see a problem and choose to solve it in a new way.   Agility is the ability to embrace change quickly; it is inherently reactive.   I had some exposure to an IT environment where everyone was compensated based upon not being the cause of outages.   After an outage a root cause analysis would be done to determine which group's compensation was negatively affected by the outage.  As you can imagine, this policy was created to reduce outages.   In fact, it had a direct negative effect on mean time to resolution.   During an outage everyone was focused on making sure they didn't get any of the blame.   Innovation did not exist in this company because it had the potential of creating outages, which were unacceptable.   No one would work together on anything; they had a culture of blame instead of innovation.   Innovation requires organizations to be willing to endure acceptable downtime.   Acceptable downtime was defined by Google as part of its site reliability engineering.   It is focused on the idea that we can continue to innovate until we have passed the threshold for acceptable downtime for the month; once the month has passed, innovation can continue.   Site reliability engineers focus 50% of their time on operations and 50% on automating operations.   Using the acceptable or allowed downtime has turned the traditional SLA model upside down and allowed Google's IT to innovate at a much faster pace.   Increased proactive innovation has a direct effect on reducing the amount of reactive work being done.
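To make the acceptable-downtime idea concrete, here is a quick sketch that turns an availability target into a monthly allowance of downtime.  The 99.9% target and the downtime figure are made-up values for illustration, not Google's actual numbers:

# Turn an availability target into a monthly "acceptable downtime" budget.
# The 99.9% target, 30-day month and downtime consumed are illustrative.
$availabilityTarget = 0.999
$minutesInMonth     = 30 * 24 * 60

$errorBudgetMinutes = $minutesInMonth * (1 - $availabilityTarget)
$downtimeSoFar      = 25   # minutes of downtime already consumed this month

"Monthly downtime budget : {0:N0} minutes" -f $errorBudgetMinutes
"Budget remaining        : {0:N0} minutes" -f ($errorBudgetMinutes - $downtimeSoFar)

While budget remains the teams keep shipping changes; once it is spent, the focus shifts to stability until the next month.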

 

The second real challenge is that innovation and agility demand change.

 

We are focused on the initial state

Consider manufacturing: it is concerned with the initial state.   Auto manufacturing has optimized every portion of the process.   They have the supply chain whipped, huge buildings full of robots, and they produce tens of thousands of cars a day.   All of these efforts optimize the end deliverable: a car for the consumer.    Once the consumer takes ownership, all of that optimized automation ends.   Once you reach 5,000 miles you have to take the car to a shop where a human changes the oil.   If something breaks, humans change the parts and immediately start to break all the standardization and quality created by the initial automation.   End-to-end creation of a car takes roughly 17 hours.   That same car is likely to be in the wild for 87,600 hours (10 years), yet everything is focused on optimizing the 17 hours of initial state.  There are a number of parallels between cars and IT.   Most IT shops seem to be focused on delivering initial state quickly (day 1), while a lot less thought is given to day two operations, which will persist for the next five to ten years.   The major difference is the customer's expected outcome.   With a car you expect a drivable product with some level of quality.   With a server you expect it to operate in its fifth year the same as at initial delivery.

 

The third challenge is that we continue to focus on initial state instead of the life of a service as a constant source of change.

Your IT shop is Ugly! Part 1

This is part 1 of a multi-part article series; see the other articles here:

Big news everyone!  Your IT shop is ugly!  Good news: everyone's IT shop is ugly.   As an 'older' IT professional (in IT that means you are past 30) I have seen my fair share of IT shops.  As a solutions architect for VMware I have seen even more IT practices.   The truth is everyone's IT shop is ugly; there are no pretty IT shops.   In this article I will explain why it's ugly and provide some prescriptive steps to solving the issues you may face.

Master Planned community

I recently moved to Texas with my job (it's been great, btw).   I had to sell my beloved first home and move into another house.   This home happened to be in a master planned community.   My community has hundreds of homes that have all been built in the last five years.   Before a single home was built, a developer got together with an architect and planned out the whole community.  Every inch of the community was planned, from the placement of houses down to the location of bushes.   It's a beautiful place to live and very orderly.   To preserve the experience envisioned by the architect, a 250-page HOA document exists to maintain the master plan.    I learned quickly that my home was missing a bush in front of the air conditioner and that I could not leave my garbage cans out overnight.  As I drive out of my community, the center island of the road is lined with trees.   I noticed the other day that one tree had been replaced with a new tree due to the death of the previous one.   This has upset the balance of my community: the master plan now has a tree that is ten feet shorter than the rest.   Chaos has happened, and the master plan could not account for it exactly.

I don't care about your master planned community, so why do you bring it up?

Honestly, I don't really care about my master planned community either.   It is a great and safe place to live, which were my requirements; I couldn't care less about the tree.  But I believe it's a perfect way to explain why your IT shop is ugly.   Your IT environment is as old as your company (in most cases), which means it was master planned a while ago.   Since the original plan you may have expanded, contracted, taken on new building techniques and changed contractors.   Your original architect has retired, moved to a new company, been promoted, or stayed put and not updated their skills.   New architects have come and gone, each looking to improve the master design with their own unique knowledge and skill set.   Some organizations even have architects for each element who rarely coordinate with each other.   Each of these architects understood the technical debt created and left by previous architects: older architectures, applications and methods, each with their aging deterioration and mounting cost.  Some of your architects have suggested solving these technical debt monsters in two potential ways:

  • Wipe it out and start over (bi-modal)
  • Update the homes where they stand (Upgrade)

Each of these methods provides the simple benefit of reducing the total cost of ownership of these legacy platforms.

The wipe it out method requires some costly steps:

  • Build new homes that could be used to turn a profit if they were not part of the re-platform
  • Move everyone into the new homes
  • Ensure everyone is happy with their new home (which turns into a line by line comparison – my kitchen used to be on the right not the left…)
  • Switch the owners into the new homes
  • Plow down older homes
  • Build new homes on the land to turn a profit (or get cost savings from the re-platform to improve bottom line)

The update-homes-where-they-stand method seems like a good plan but requires some steps:

  • Buy new materials to replace sections of the home
  • Move owners into temporary housing
  • Update their homes
  • Move them back
  • It’s a long process

Both methods are costly, and removing technical debt rarely makes the business's radar as critical to the health of the business, so these projects get ignored.

So, the first set of things that made your IT shop ugly are:

  • Many different architects over time each with a different vision
  • Legacy IT, with Legacy Legacy IT, with Legacy Legacy Legacy IT, with Mainframe (Technical Debt)
  • The business does not want to spend money on technical debt projects because they don't provide revenue

The first real challenge is that change happens and you will not have the funding to remove the old and replace it with the new.

vRO Action to return virtual machines with a specific tag

I am a huge fan of tags in vSphere.   Metadata is king for modular control and policy-based administration.   I wrote an action for a lab I presented at VMUG sessions.   It takes the name of a tag as a string and returns the names of all virtual machines with that tag as an array of strings.  It does require a VAPI endpoint setup as explained here: (http://www.thevirtualist.org/vsphere-6-automation-vro-7-tags-vapi-part-i/)  Here it is:

 

Return Type: Array/String (Could be Array VC:VirtualMachine)

Parameter: tagNameToMatch string

Code: return_vm_with_tag

// array to hold the matching VM names
var vmsWithSpecificTag = new Array();

// get all configured VAPI endpoints and use the first one
var endpoints = VAPIManager.getAllEndpoints();
var client = endpoints[0].client();

// tag association and tag services
var tagging = new com_vmware_cis_tagging_tag__association(client);
var tagMgr = new com_vmware_cis_tagging_tag(client);

// dynamic ID object used to reference each VM
var objId = new com_vmware_vapi_std_dynamic__ID();

// get all virtual machines known to vCenter
var vms = System.getModule("com.vmware.library.vc.vm").getAllVMs();

// loop through the virtual machines
for each (var vm in vms) {
    // reference the VM by its managed object ID and type
    objId.id = vm.id;
    objId.type = vm.vimType;

    // get the tags assigned to this VM
    var tagList = tagging.list_attached_tags(objId);

    // loop through the VM's assigned tags
    for each (var tagId in tagList) {
        // look up the tag object to get its name
        var theTag = tagMgr.get(tagId);
        var tagName = theTag.name;

        // compare to our requested tag
        if (tagName == tagNameToMatch) {
            System.log("VM : " + vm.name + " has the tag : " + tagName);
            // add to the result array
            vmsWithSpecificTag.push(vm.name);
        }
    }
}

return vmsWithSpecificTag;

Two simple things to improve your life

Warning: this is an end-of-year personal post and has very little to do with technology, so if you are looking for a technology post feel free to skip it.   Two things have been rattling around in my head of late and I wanted to share them as an end-of-year post.   These two things have proven to improve my life many times over.

Perception determines reality

I do not wish to diminish the real-life challenges you each face.  I have lived long enough to understand that each of us faces monstrous mountains throughout our lives.   Some of us face challenges with family, friends, relationships, actions of others, health and many other things that are real.


When we face these mountains, our outlook truly can change the outcomes.   I learned this in a simple way many years ago.   I was a young missionary serving a dedicated two years spreading the gospel.  I was 22 years old and in Michigan, 2,000 miles away from any family.   I had been a missionary for over 14 months and was well experienced in the constant negative response we received from our work.   I was assigned to train a new missionary and it was Christmas time.   It's a particularly hard time to be away from family and virtually cut off (we were only allowed to communicate via mail once a week).  We had a particularly hard area full of rich people (they are generally not receptive to our message).  There were many days it poured freezing rain or snow while we traveled by bike.  It was cold and dark.  A week before Christmas we discussed getting a Christmas tree and determined that our monthly food budget of $115 could not afford one.   So we pressed on with the long days of work (9 am – 8 pm knocking on doors, six days a week).   My new missionary friend always had a great attitude; nothing fazed him.   We joked and talked every day while we seemed to be accomplishing nothing.   One night I was preparing for bed and came out into the front room to discover my companion coloring boxes green.   When I asked him what he was doing he simply stated he was making a Christmas tree.   Later that night he stacked the boxes of various sizes on top of each other to roughly resemble a tree.   He then took a red marker and drew balls on the tree.  Satisfied with the results, he went to bed.  He and I spent three months together knocking on doors for 11 hours a day and didn't get the opportunity to teach a single lesson.   By all measures we had an epic fail.   Looking back 16 years later I can clearly say I learned one of life's most important lessons: it is not the results that count, it is how you face them.  I have had many failures in my life since then, but I have been able to remember the lessons he taught me: make the best of what you have and don't let any external event get you down.   You might have to make a Christmas tree out of boxes, or lower your expectations and consider getting out of bed the crowning achievement of the day, but your perception determines your reality, not any external event.

 

By small and simple things great things are brought to pass

When I was young I was convinced that I needed to find some great event to prove my nobility, that these great things happen in a single moment.   It is simply not true.  While people are noble and at times do great things, I suggest this is just an extension of the many small things they have done for many years.  A friend once called it healthy life habits practiced regularly.  I have learned that I do my best work in small bursts practiced regularly.  Simple things like choosing to go to bed on time make me a better father the next day… practiced consistently for a lifetime, they make me a better father for life.   Choosing to allocate time to service makes me less selfish for a day; practiced each day, it makes me a less selfish person.   Other examples may include: reading a book to improve myself, spending one-on-one time with my children, driving a little slower and letting people merge, choosing to do the dishes, reading religious words, turning off my phone, etc.  I am convinced that by doing all these small, simple things consistently I will find I have become a great and noble man at the end of my life.

 

 

Operational aspects of HCI

Hyperconverged infrastructure (HCI) is natively software defined, providing a shift of operations away from the traditional storage management paradigm.  Many of my customers have struggled with this paradigm shift when adopting HCI.  HCI has been very successful in addressing specific use cases.  Many of these use cases have been successful because they represent workloads that have not traditionally been managed by storage teams, for example VDI.  Adoption of HCI beyond these use cases requires large organizations to implement people and process transformation to be successful.  Discussions with customers have shown that fear about the operational transition has created a lack of adoption.  The net gain of HCI in the datacenter is a significant reduction in the total cost of ownership for storage.

 

What is your storage strategy?

When looking at your storage strategy you are likely to see a mixture of solutions to meet your needs.  I have found that the following questions help people identify their requirements, which ultimately lead to strategy:

  • What storage requirements do your applications have and how are they measured?

  • How is storage involved in your business continuity, disaster recovery, backup and availability strategy?

  • What data security requirements does your organization have?

  • What is your storage strategy for the cloud?

 

Once you identify your storage requirements, the strategy can be aligned with functional needs.  Functions that may be important to your organization around storage may include:

  • Capacity

  • Performance

  • Redundancy

  • Data security

  • Ease of management

  • Cost

  • Replication capabilities

Assigning measurements to these functions allows you to identify the correct storage "profiles" to be used in your organization.  These profiles can then be aligned with your storage strategy.

 

Differences in HCI

HCI does present some differences from many traditional storage arrays.  Four common elements of difference are capacity management, scalability, policy-based management and roles.

 

Capacity Management

Capacity management in most traditional systems requires a measurement based on historical usage metrics.  Historical data is taken into account and then a "bet" is made about required capacity for the next X years.  This large "bet" on storage array capacity and performance does not allow IT to be agile to business changes.  Growth beyond the initial implementation is possible by adding additional storage shelves or buying new arrays.  HCI, by contrast, takes a linear model.  You can scale up and out incrementally.  You add capacity by adding additional drives to your current servers, or add additional servers to increase available controllers and drive bays.  I find that customers who adopt HCI are:

  • Able to procure storage in incremental blocks instead of via large capital expense "bets"

  • Able to have a predictable outcome on capacity management

  • Able to adopt new technology faster

  • Able to utilize storage resources without depreciation "bets"

 

Once storage teams align with HCI-based capacity management, they find that storage capacity growth is no longer a "flak jacket" exercise.  The business can accept that a new project requires some incremental increase in cost instead of requiring a large CapEx spend.  The integrated nature of HCI means that compute capacity sizing is integrated in part with storage capacity.  This simplified capacity management allows the IT budget to stretch farther.  Best practices for HCI include:

  • Design for scale, but build incrementally

  • The overall capacity management process is the same as with traditional arrays, but lead times are shorter and purchases potentially more frequent

  • Choose servers with maximum available drive bays

 

Traditional storage capacity management requires procurement at roughly 60% usage to allow for growth.  In large environments this means that large amounts of capacity will never be used, increasing the total cost per GB of storage.  HCI's lower capacity expansion cost should allow large organizations to utilize 80% or more of capacity before buying expansions.
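As a rough illustration of why that matters, the sketch below compares effective cost per usable GB when an array can only be run safely to 60% versus HCI run to 80%.  The prices and capacities are made-up numbers; only the ratio is the point:

# Illustrative comparison of effective cost per usable GB at different
# safe-utilization ceilings.  Prices and capacities are made-up numbers.
function Get-CostPerUsableGB {
    param(
        [double]$PurchasePrice,   # total cost of the storage purchase
        [double]$RawCapacityGB,   # capacity purchased
        [double]$SafeUtilization  # fraction you can consume before expanding
    )
    return $PurchasePrice / ($RawCapacityGB * $SafeUtilization)
}

$traditional = Get-CostPerUsableGB -PurchasePrice 100000 -RawCapacityGB 100000 -SafeUtilization 0.60
$hci         = Get-CostPerUsableGB -PurchasePrice 100000 -RawCapacityGB 100000 -SafeUtilization 0.80

"Traditional array : {0:C2} per usable GB" -f $traditional
"HCI               : {0:C2} per usable GB" -f $hci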

 

Some capacity metrics that you should monitor include:

  • Total available space

  • Used space

  • Used capacity breakdown (VMs, swap, snapshots, etc.)

  • Dedupe and compression savings

 

Scalability

A common concern with HCI is scalability.  Independent scalability is touted as one of the primary benefits of traditional three-tier infrastructure: compute, storage, and networking.  When considering the scalability of traditional storage systems, the following are considered:

  • Capacity in TBs

  • Required IOPS

  • Throughput of storage systems (link speed)

  • Throughput of controllers

 

The adoption of flash drives has changed the scalability pain point; IOPS are no longer a concern for most enterprises.  Flash drives have increased the pressure on link speed and controller throughput, forcing architecture changes in traditional arrays.  When adopting HCI, controllers and link speed become distributed, removing both bottlenecks and leaving only capacity to be considered (assuming all flash).  HCI addresses capacity scalability in two ways: adding additional drives and increasing the capacity of existing drives.  It is considered a best practice when implementing HCI to get servers with as many drive bays as possible.  This allows you to increase capacity across the cluster by adding drives.  The explosive adoption of HCI and flash has driven manufacturers to provide increasingly larger capacity drives.  With VMware vSAN you can replace existing drives with larger drives without interrupting operations.  Customers can double storage capacity without adding additional compute nodes.  HCI scales in a distributed fashion for linear growth.  Some best practices to consider around scalability are:

  • Consider using traditional servers instead of blades to increase the available drive bays

  • Consider using all flash drives to remove all potential performance concerns

  • HCI does implement a flash cache which greatly improves performance without having to implement all flash

 

Policy Based Management

In many traditional arrays, availability and performance are tied to the logical unit number (LUN).  These capabilities are set in stone at the time of creation; changing them requires moving the data.  This type of allocation creates challenges for capacity management and increases the number of day two operations required to meet business needs.  HCI takes a policy-based approach and removes the constraints of LUNs.  There is a single datastore provided by HCI, radically simplifying traditional storage management.  Policies define availability and performance requirements and the HCI system enforces the policies.  To increase the performance of a specific workload, a new policy is defined and assigned to the workload.  The HCI system works to ensure policy compliance without interruption to the workload.  Policy-based management provides large operational efficiencies; an IDC study has shown that HCI can lower the OpEx cost of storage by 50% or more.  In vSAN there are two key elements in a policy: stripe count and failures to tolerate (FTT).  Stripe count denotes the number of drives an object's data is spread across, improving performance.  Failures to tolerate denotes the number of node failures that can occur before data access is affected.  An FTT setting of 1 is essentially a mirror: each object has one duplicate copy on another node.  An FTT of 2 keeps an additional copy, three in total, spread across more nodes.  FTT has a direct effect on the amount of storage used in the HCI implementation (a sizing sketch follows the list below).  Policies should be designed to meet the business needs of the application.  A few best practices to consider:

  • Do not use FTT of 0 unless you truly don’t care about loss of the data (stateless services)

  • Depending on the type of disks backing the HCI solution additional stripes may not provide performance boosts
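The capacity effect of FTT is easy to sketch out: with mirroring, FTT = n keeps n + 1 copies of each object, so raw capacity is roughly usable capacity multiplied by (FTT + 1) before dedupe and compression.  The numbers below are illustrative only:

# Rough raw-capacity estimate for mirroring (RAID-1) storage policies.
# FTT = n keeps (n + 1) copies of each object.  Ignores witness components,
# slack space, dedupe and compression.
function Get-RawCapacityGB {
    param(
        [double]$UsableGB,  # capacity the workloads actually need
        [int]$FTT           # failures to tolerate in the storage policy
    )
    return $UsableGB * ($FTT + 1)
}

"FTT=1 for 10 TB usable needs ~{0:N0} GB raw" -f (Get-RawCapacityGB -UsableGB 10240 -FTT 1)
"FTT=2 for 10 TB usable needs ~{0:N0} GB raw" -f (Get-RawCapacityGB -UsableGB 10240 -FTT 2)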

Some general vSAN performance guidance is provided below:

Some general vSAN availability guidance is provided below:

The policies should align with organizational application requirements.  Management by policy provides the greatest flexibility and reduces management cost.

 

Roles

 

Many organizations have struggled to adopt HCI because of the change in skills and process required to be successful.  The best case scenario for HCI bridges the worlds of compute, storage, networking and security together into a single platform.  This single platform provides operational synergy and encourages standards.  Organizations that have been successful in adopting HCI have learned that it requires a cross-functional skill set.  The current reality of siloed teams makes HCI hard to adopt; creating cross-functional teams with blended skills accelerates adoption.

 

Some best practices for successful HCI adoption include:

  • Cross functional training

  • Blended teams

  • Rotating subject matter experts who are expected to own a product but train others

  • Outcome-oriented teams and compensation instead of activity-oriented

 

Many of my customers have adopted a plan, build, run methodology; in these organizations it is recommended that teams at each tier be blended.  I also recommend that members of each silo rotate through plan, build and run to better understand each role.

 

Benefits of HCI

HCI can provide many benefits required by modern datacenters.  I have observed that customers successfully adopting HCI achieve the following outcomes:

  • Hyper Scalability

  • Operational agility

  • Operational efficiency

  • Simplified operations and support

  • Improved availability and performance

 

I truly believe it's time to adopt HCI in your datacenter and realize the operational and cost benefits.

 

 

Basic NSX Setup using RESTAPI

In a previous article I used the GUI to deploy a basic NSX setup; I wanted to do the same thing using the REST API.   Remember, the manual process took me about 20 minutes to complete; the REST API calls took one minute.   I am defining the network setup via code.  Please review that article for the specifics of the design.

 

To publish new NSX configurations on Edges you need to do POSTs against https://nsx-manager-address/api/4.0/edges with a body type of application/xml.
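As a sketch of what that call can look like from PowerShell (any REST client works the same way), assuming you have saved the edge body shown below to an XML file and substituting your own NSX Manager address and credentials:

# Illustrative POST of an edge definition to NSX Manager.
# Replace the manager address, credential and body file with your own;
# depending on your PowerShell version you may also need to allow the
# manager's self-signed certificate before this call will succeed.
$nsxManager = 'nsx-manager-address'
$credential = Get-Credential
$body       = Get-Content -Path '.\esg-3.xml' -Raw

Invoke-RestMethod -Uri "https://$nsxManager/api/4.0/edges" `
                  -Method Post `
                  -Credential $credential `
                  -ContentType 'application/xml' `
                  -Body $body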

Inside the body you need to modify a few things (if you start from a dump of a current edge):

  • Add a cli password section
  • Modify the name of the Edge if it’s a duplicate name (or remove the old)

 

Add the cliSettings password section, as shown below, with your password (you can change it via the GUI once deployed).

Then use the following body to deploy ESG-3 (the resource pool, datastore, etc. are unique to my environment, so you will have to change those too):

<?xml version="1.0" encoding="UTF-8"?>
<edge>
<id>edge-73</id>
<version>5</version>
<status>deployed</status>
<datacenterMoid>datacenter-21</datacenterMoid>
<datacenterName>Home</datacenterName>
<tenant>default</tenant>
<name>ESG-3</name>
<fqdn>esg3</fqdn>
<enableAesni>true</enableAesni>
<enableFips>false</enableFips>
<vseLogLevel>emergency</vseLogLevel>
<vnics>
<vnic>
<label>vNic_0</label>
<name>Uplink</name>
<addressGroups>
<addressGroup>
<primaryAddress>192.168.10.223</primaryAddress>
<subnetMask>255.255.255.0</subnetMask>
<subnetPrefixLength>24</subnetPrefixLength>
</addressGroup>
</addressGroups>
<mtu>1500</mtu>
<type>uplink</type>
<isConnected>true</isConnected>
<index>0</index>
<portgroupId>dvportgroup-106</portgroupId>
<portgroupName>DV-VM</portgroupName>
<enableProxyArp>false</enableProxyArp>
<enableSendRedirects>false</enableSendRedirects>
</vnic>
<vnic>
<label>vNic_1</label>
<name>vnic1</name>
<addressGroups>
<addressGroup>
<primaryAddress>10.0.0.1</primaryAddress>
<subnetMask>255.255.255.0</subnetMask>
<subnetPrefixLength>24</subnetPrefixLength>
</addressGroup>
</addressGroups>
<mtu>1500</mtu>
<type>internal</type>
<isConnected>true</isConnected>
<index>1</index>
<portgroupId>virtualwire-45</portgroupId>
<portgroupName>Transport-10.0.0.0</portgroupName>
<enableProxyArp>false</enableProxyArp>
<enableSendRedirects>true</enableSendRedirects>
</vnic>
<vnic>
<label>vNic_2</label>
<name>vnic2</name>
<addressGroups />
<mtu>1500</mtu>
<type>internal</type>
<isConnected>false</isConnected>
<index>2</index>
<enableProxyArp>false</enableProxyArp>
<enableSendRedirects>true</enableSendRedirects>
</vnic>
<vnic>
<label>vNic_3</label>
<name>vnic3</name>
<addressGroups />
<mtu>1500</mtu>
<type>internal</type>
<isConnected>false</isConnected>
<index>3</index>
<enableProxyArp>false</enableProxyArp>
<enableSendRedirects>true</enableSendRedirects>
</vnic>
<vnic>
<label>vNic_4</label>
<name>vnic4</name>
<addressGroups />
<mtu>1500</mtu>
<type>internal</type>
<isConnected>false</isConnected>
<index>4</index>
<enableProxyArp>false</enableProxyArp>
<enableSendRedirects>true</enableSendRedirects>
</vnic>
<vnic>
<label>vNic_5</label>
<name>vnic5</name>
<addressGroups />
<mtu>1500</mtu>
<type>internal</type>
<isConnected>false</isConnected>
<index>5</index>
<enableProxyArp>false</enableProxyArp>
<enableSendRedirects>true</enableSendRedirects>
</vnic>
<vnic>
<label>vNic_6</label>
<name>vnic6</name>
<addressGroups />
<mtu>1500</mtu>
<type>internal</type>
<isConnected>false</isConnected>
<index>6</index>
<enableProxyArp>false</enableProxyArp>
<enableSendRedirects>true</enableSendRedirects>
</vnic>
<vnic>
<label>vNic_7</label>
<name>vnic7</name>
<addressGroups />
<mtu>1500</mtu>
<type>internal</type>
<isConnected>false</isConnected>
<index>7</index>
<enableProxyArp>false</enableProxyArp>
<enableSendRedirects>true</enableSendRedirects>
</vnic>
<vnic>
<label>vNic_8</label>
<name>vnic8</name>
<addressGroups />
<mtu>1500</mtu>
<type>internal</type>
<isConnected>false</isConnected>
<index>8</index>
<enableProxyArp>false</enableProxyArp>
<enableSendRedirects>true</enableSendRedirects>
</vnic>
<vnic>
<label>vNic_9</label>
<name>vnic9</name>
<addressGroups />
<mtu>1500</mtu>
<type>internal</type>
<isConnected>false</isConnected>
<index>9</index>
<enableProxyArp>false</enableProxyArp>
<enableSendRedirects>true</enableSendRedirects>
</vnic>
</vnics>
<appliances>
<applianceSize>compact</applianceSize>
<appliance>
<highAvailabilityIndex>0</highAvailabilityIndex>
<vcUuid>500cf09e-2945-2df1-ca4f-1accbf151185</vcUuid>
<vmId>vm-1033</vmId>
<resourcePoolId>domain-c861</resourcePoolId>
<resourcePoolName>Office</resourcePoolName>
<datastoreId>datastore-998</datastoreId>
<datastoreName>SYN9-NFS-GEN-VOL1</datastoreName>
<hostId>host-863</hostId>
<hostName>esx1.griffiths.local</hostName>
<vmFolderId>group-v22</vmFolderId>
<vmFolderName>vm</vmFolderName>
<vmHostname>esg3-0</vmHostname>
<vmName>ESG-3-0</vmName>
<deployed>true</deployed>
<cpuReservation>
<limit>-1</limit>
<reservation>0</reservation>
</cpuReservation>
<memoryReservation>
<limit>-1</limit>
<reservation>0</reservation>
</memoryReservation>
<edgeId>edge-73</edgeId>
<configuredResourcePool>
<id>domain-c861</id>
<name>Office</name>
<isValid>true</isValid>
</configuredResourcePool>
<configuredDataStore>
<id>datastore-998</id>
<name>SYN9-NFS-GEN-VOL1</name>
<isValid>true</isValid>
</configuredDataStore>
<configuredHost>
<id>host-882</id>
<name>esx3.griffiths.local</name>
<isValid>true</isValid>
</configuredHost>
</appliance>
<deployAppliances>true</deployAppliances>
</appliances>
<cliSettings>
<remoteAccess>true</remoteAccess>
<userName>admin</userName>
<password>yourpassword</password>
<sshLoginBannerText>*************************************************************************** NOTICE TO USERS This computer system is the private property of its owner, whether individual, corporate or government. It is for authorized use only. Users (authorized or unauthorized) have no explicit or implicit expectation of privacy. Any or all uses of this system and all files on this system may be intercepted, monitored, recorded, copied, audited, inspected, and disclosed to your employer, to authorized site, government, and law enforcement personnel, as well as authorized officials of government agencies, both domestic and foreign. By using this system, the user consents to such interception, monitoring, recording, copying, auditing, inspection, and disclosure at the discretion of such personnel or officials. Unauthorized or improper use of this system may result in civil and criminal penalties and administrative or disciplinary action, as appropriate. By continuing to use this system you indicate your awareness of and consent to these terms and conditions of use. LOG OFF IMMEDIATELY if you do not agree to the conditions stated in this warning. ****************************************************************************</sshLoginBannerText>
<passwordExpiry>99999</passwordExpiry>
</cliSettings>
<features>
<l2Vpn>
<version>2</version>
<enabled>false</enabled>
<logging>
<enable>true</enable>
<logLevel>notice</logLevel>
</logging>
</l2Vpn>
<featureConfig />
<firewall>
<version>3</version>
<enabled>false</enabled>
<globalConfig>
<tcpPickOngoingConnections>false</tcpPickOngoingConnections>
<tcpAllowOutOfWindowPackets>false</tcpAllowOutOfWindowPackets>
<tcpSendResetForClosedVsePorts>true</tcpSendResetForClosedVsePorts>
<dropInvalidTraffic>true</dropInvalidTraffic>
<logInvalidTraffic>false</logInvalidTraffic>
<tcpTimeoutOpen>30</tcpTimeoutOpen>
<tcpTimeoutEstablished>21600</tcpTimeoutEstablished>
<tcpTimeoutClose>30</tcpTimeoutClose>
<udpTimeout>60</udpTimeout>
<icmpTimeout>10</icmpTimeout>
<icmp6Timeout>10</icmp6Timeout>
<ipGenericTimeout>120</ipGenericTimeout>
<enableSynFloodProtection>false</enableSynFloodProtection>
<logIcmpErrors>false</logIcmpErrors>
<dropIcmpReplays>false</dropIcmpReplays>
</globalConfig>
<defaultPolicy>
<action>deny</action>
<loggingEnabled>false</loggingEnabled>
</defaultPolicy>
<firewallRules>
<firewallRule>
<id>131075</id>
<ruleTag>131075</ruleTag>
<name>routing</name>
<ruleType>internal_high</ruleType>
<enabled>true</enabled>
<loggingEnabled>false</loggingEnabled>
<description>routing</description>
<action>accept</action>
<application>
<service>
<protocol>ospf</protocol>
<port>any</port>
<sourcePort>any</sourcePort>
</service>
</application>
</firewallRule>
<firewallRule>
<id>131073</id>
<ruleTag>131073</ruleTag>
<name>default rule for ingress traffic</name>
<ruleType>default_policy</ruleType>
<enabled>true</enabled>
<loggingEnabled>false</loggingEnabled>
<description>default rule for ingress traffic</description>
<action>deny</action>
</firewallRule>
</firewallRules>
</firewall>
<dns>
<version>2</version>
<enabled>false</enabled>
<cacheSize>16</cacheSize>
<listeners>
<vnic>any</vnic>
</listeners>
<dnsViews>
<dnsView>
<viewId>view-0</viewId>
<name>vsm-default-view</name>
<enabled>true</enabled>
<viewMatch>
<ipAddress>any</ipAddress>
<vnic>any</vnic>
</viewMatch>
<recursion>false</recursion>
</dnsView>
</dnsViews>
<logging>
<enable>false</enable>
<logLevel>info</logLevel>
</logging>
</dns>
<sslvpnConfig>
<version>2</version>
<enabled>false</enabled>
<logging>
<enable>true</enable>
<logLevel>notice</logLevel>
</logging>
<advancedConfig>
<enableCompression>false</enableCompression>
<forceVirtualKeyboard>false</forceVirtualKeyboard>
<randomizeVirtualkeys>false</randomizeVirtualkeys>
<preventMultipleLogon>false</preventMultipleLogon>
<clientNotification />
<enablePublicUrlAccess>false</enablePublicUrlAccess>
<timeout>
<forcedTimeout>0</forcedTimeout>
<sessionIdleTimeout>10</sessionIdleTimeout>
</timeout>
</advancedConfig>
<clientConfiguration>
<autoReconnect>true</autoReconnect>
<upgradeNotification>false</upgradeNotification>
</clientConfiguration>
<layoutConfiguration>
<portalTitle>VMware</portalTitle>
<companyName>VMware</companyName>
<logoExtention>jpg</logoExtention>
<logoUri>/api/4.0/edges/edge-73/sslvpn/config/layout/images/portallogo</logoUri>
<logoBackgroundColor>56A2D4</logoBackgroundColor>
<titleColor>996600</titleColor>
<topFrameColor>000000</topFrameColor>
<menuBarColor>999999</menuBarColor>
<rowAlternativeColor>FFFFFF</rowAlternativeColor>
<bodyColor>FFFFFF</bodyColor>
<rowColor>F5F5F5</rowColor>
</layoutConfiguration>
<authenticationConfiguration>
<passwordAuthentication>
<authenticationTimeout>1</authenticationTimeout>
<primaryAuthServers />
<secondaryAuthServer />
</passwordAuthentication>
</authenticationConfiguration>
</sslvpnConfig>
<routing>
<version>4</version>
<enabled>true</enabled>
<routingGlobalConfig>
<routerId>192.168.10.223</routerId>
<ecmp>false</ecmp>
<logging>
<enable>false</enable>
<logLevel>info</logLevel>
</logging>
</routingGlobalConfig>
<staticRouting>
<defaultRoute>
<vnic>0</vnic>
<mtu>1500</mtu>
<gatewayAddress>192.168.10.1</gatewayAddress>
<adminDistance>1</adminDistance>
</defaultRoute>
<staticRoutes />
</staticRouting>
<ospf>
<enabled>true</enabled>
<ospfAreas>
<ospfArea>
<areaId>2</areaId>
<type>normal</type>
<authentication>
<type>none</type>
</authentication>
</ospfArea>
</ospfAreas>
<ospfInterfaces>
<ospfInterface>
<vnic>0</vnic>
<areaId>2</areaId>
<helloInterval>10</helloInterval>
<deadInterval>40</deadInterval>
<priority>128</priority>
<cost>1</cost>
<mtuIgnore>false</mtuIgnore>
</ospfInterface>
</ospfInterfaces>
<redistribution>
<enabled>false</enabled>
<rules />
</redistribution>
<gracefulRestart>true</gracefulRestart>
<defaultOriginate>false</defaultOriginate>
</ospf>
</routing>
<highAvailability>
<version>2</version>
<enabled>false</enabled>
<declareDeadTime>15</declareDeadTime>
<logging>
<enable>false</enable>
<logLevel>info</logLevel>
</logging>
<security>
<enabled>false</enabled>
</security>
</highAvailability>
<syslog>
<version>1</version>
<enabled>false</enabled>
</syslog>
<featureConfig />
<loadBalancer>
<version>1</version>
<enabled>false</enabled>
<enableServiceInsertion>false</enableServiceInsertion>
<accelerationEnabled>false</accelerationEnabled>
<monitor>
<monitorId>monitor-1</monitorId>
<type>tcp</type>
<interval>5</interval>
<timeout>15</timeout>
<maxRetries>3</maxRetries>
<name>default_tcp_monitor</name>
</monitor>
<monitor>
<monitorId>monitor-2</monitorId>
<type>http</type>
<interval>5</interval>
<timeout>15</timeout>
<maxRetries>3</maxRetries>
<method>GET</method>
<url>/</url>
<name>default_http_monitor</name>
</monitor>
<monitor>
<monitorId>monitor-3</monitorId>
<type>https</type>
<interval>5</interval>
<timeout>15</timeout>
<maxRetries>3</maxRetries>
<method>GET</method>
<url>/</url>
<name>default_https_monitor</name>
</monitor>
<logging>
<enable>false</enable>
<logLevel>info</logLevel>
</logging>
</loadBalancer>
<gslb>
<version>1</version>
<enabled>false</enabled>
<logging>
<enable>false</enable>
<logLevel>info</logLevel>
</logging>
</gslb>
<ipsec>
<version>1</version>
<enabled>false</enabled>
<logging>
<enable>true</enable>
<logLevel>warning</logLevel>
</logging>
<sites />
<global>
<psk>******</psk>
<caCertificates />
<crlCertificates />
</global>
</ipsec>
<dhcp>
<version>2</version>
<enabled>false</enabled>
<staticBindings />
<ipPools />
<logging>
<enable>false</enable>
<logLevel>info</logLevel>
</logging>
</dhcp>
<nat>
<version>2</version>
<enabled>true</enabled>
<natRules />
</nat>
<bridges>
<version>2</version>
<enabled>false</enabled>
</bridges>
<featureConfig />
</features>
<autoConfiguration>
<enabled>true</enabled>
<rulePriority>high</rulePriority>
</autoConfiguration>
<type>gatewayServices</type>
<isUniversal>false</isUniversal>
<hypervisorAssist>false</hypervisorAssist>
<queryDaemon>
<enabled>false</enabled>
<port>5666</port>
</queryDaemon>
</edge>

 

And the same thing for the LDR:

<?xml version="1.0" encoding="UTF-8"?>
<edge>
<id>edge-72</id>
<version>6</version>
<status>deployed</status>
<datacenterMoid>datacenter-21</datacenterMoid>
<datacenterName>Home</datacenterName>
<tenant>default</tenant>
<name>LDR-3</name>
<fqdn>ldr3</fqdn>
<enableAesni>false</enableAesni>
<enableFips>false</enableFips>
<vseLogLevel>emergency</vseLogLevel>
<appliances>
<applianceSize>compact</applianceSize>
<appliance>
<highAvailabilityIndex>0</highAvailabilityIndex>
<vcUuid>500c4666-b908-cf53-a9f5-322d2fac48d3</vcUuid>
<vmId>vm-1032</vmId>
<resourcePoolId>domain-c861</resourcePoolId>
<resourcePoolName>Office</resourcePoolName>
<datastoreId>datastore-998</datastoreId>
<datastoreName>SYN9-NFS-GEN-VOL1</datastoreName>
<hostId>host-882</hostId>
<hostName>esx3.griffiths.local</hostName>
<vmFolderId>group-v22</vmFolderId>
<vmFolderName>vm</vmFolderName>
<vmHostname>ldr3-0</vmHostname>
<vmName>LDR-3-0</vmName>
<deployed>true</deployed>
<cpuReservation>
<limit>-1</limit>
<reservation>1000</reservation>
</cpuReservation>
<memoryReservation>
<limit>-1</limit>
<reservation>512</reservation>
</memoryReservation>
<edgeId>edge-72</edgeId>
<configuredResourcePool>
<id>domain-c861</id>
<name>Office</name>
<isValid>true</isValid>
</configuredResourcePool>
<configuredDataStore>
<id>datastore-998</id>
<name>SYN9-NFS-GEN-VOL1</name>
<isValid>true</isValid>
</configuredDataStore>
<configuredHost>
<id>host-882</id>
<name>esx3.griffiths.local</name>
<isValid>true</isValid>
</configuredHost>
</appliance>
<deployAppliances>true</deployAppliances>
</appliances>
<cliSettings>
<remoteAccess>true</remoteAccess>
<userName>admin</userName>
<sshLoginBannerText>*************************************************************************** NOTICE TO USERS This computer system is the private property of its owner, whether individual, corporate or government. It is for authorized use only. Users (authorized or unauthorized) have no explicit or implicit expectation of privacy. Any or all uses of this system and all files on this system may be intercepted, monitored, recorded, copied, audited, inspected, and disclosed to your employer, to authorized site, government, and law enforcement personnel, as well as authorized officials of government agencies, both domestic and foreign. By using this system, the user consents to such interception, monitoring, recording, copying, auditing, inspection, and disclosure at the discretion of such personnel or officials. Unauthorized or improper use of this system may result in civil and criminal penalties and administrative or disciplinary action, as appropriate. By continuing to use this system you indicate your awareness of and consent to these terms and conditions of use. LOG OFF IMMEDIATELY if you do not agree to the conditions stated in this warning. ****************************************************************************</sshLoginBannerText>
<passwordExpiry>99999</passwordExpiry>
</cliSettings>
<features>
<syslog>
<version>1</version>
<enabled>false</enabled>
</syslog>
<featureConfig/>
<firewall>
<version>4</version>
<enabled>false</enabled>
<globalConfig>
<tcpPickOngoingConnections>false</tcpPickOngoingConnections>
<tcpAllowOutOfWindowPackets>false</tcpAllowOutOfWindowPackets>
<tcpSendResetForClosedVsePorts>true</tcpSendResetForClosedVsePorts>
<dropInvalidTraffic>true</dropInvalidTraffic>
<logInvalidTraffic>false</logInvalidTraffic>
<tcpTimeoutOpen>30</tcpTimeoutOpen>
<tcpTimeoutEstablished>21600</tcpTimeoutEstablished>
<tcpTimeoutClose>30</tcpTimeoutClose>
<udpTimeout>60</udpTimeout>
<icmpTimeout>10</icmpTimeout>
<icmp6Timeout>10</icmp6Timeout>
<ipGenericTimeout>120</ipGenericTimeout>
<enableSynFloodProtection>false</enableSynFloodProtection>
<logIcmpErrors>false</logIcmpErrors>
<dropIcmpReplays>false</dropIcmpReplays>
</globalConfig>
<defaultPolicy>
<action>deny</action>
<loggingEnabled>false</loggingEnabled>
</defaultPolicy>
<firewallRules>
<firewallRule>
<id>131075</id>
<ruleTag>131075</ruleTag>
<name>routing</name>
<ruleType>internal_high</ruleType>
<enabled>true</enabled>
<loggingEnabled>false</loggingEnabled>
<description>routing</description>
<action>accept</action>
<application>
<service>
<protocol>ospf</protocol>
<port>any</port>
<sourcePort>any</sourcePort>
</service>
</application>
</firewallRule>
<firewallRule>
<id>131073</id>
<ruleTag>131073</ruleTag>
<name>default rule for ingress traffic</name>
<ruleType>default_policy</ruleType>
<enabled>true</enabled>
<loggingEnabled>false</loggingEnabled>
<description>default rule for ingress traffic</description>
<action>deny</action>
</firewallRule>
</firewallRules>
</firewall>
<routing>
<version>4</version>
<enabled>true</enabled>
<routingGlobalConfig>
<routerId>10.0.0.2</routerId>
<ecmp>false</ecmp>
<logging>
<enable>false</enable>
<logLevel>info</logLevel>
</logging>
</routingGlobalConfig>
<staticRouting>
<defaultRoute>
<vnic>2</vnic>
<mtu>1500</mtu>
<description></description>
<gatewayAddress>10.0.0.1</gatewayAddress>
<adminDistance>1</adminDistance>
</defaultRoute>
<staticRoutes/>
</staticRouting>
<ospf>
<enabled>true</enabled>
<protocolAddress>10.0.0.3</protocolAddress>
<forwardingAddress>10.0.0.2</forwardingAddress>
<ospfAreas>
<ospfArea>
<areaId>2</areaId>
<type>normal</type>
<authentication>
<type>none</type>
</authentication>
</ospfArea>
</ospfAreas>
<ospfInterfaces>
<ospfInterface>
<vnic>2</vnic>
<areaId>2</areaId>
<helloInterval>10</helloInterval>
<deadInterval>40</deadInterval>
<priority>128</priority>
<cost>1</cost>
<mtuIgnore>false</mtuIgnore>
</ospfInterface>
</ospfInterfaces>
<redistribution>
<enabled>true</enabled>
<rules>
<rule>
<id>0</id>
<from>
<ospf>false</ospf>
<bgp>false</bgp>
<static>false</static>
<connected>true</connected>
</from>
<action>permit</action>
</rule>
</rules>
</redistribution>
<gracefulRestart>true</gracefulRestart>
</ospf>
</routing>
<dhcp>
<version>2</version>
<enabled>false</enabled>
<staticBindings/>
<ipPools/>
<logging>
<enable>false</enable>
<logLevel>info</logLevel>
</logging>
</dhcp>
<bridges>
<version>2</version>
<enabled>false</enabled>
</bridges>
<highAvailability>
<version>2</version>
<enabled>false</enabled>
<declareDeadTime>15</declareDeadTime>
<logging>
<enable>false</enable>
<logLevel>info</logLevel>
</logging>
<security>
<enabled>false</enabled>
</security>
</highAvailability>
</features>
<autoConfiguration>
<enabled>true</enabled>
<rulePriority>high</rulePriority>
</autoConfiguration>
<type>distributedRouter</type>
<isUniversal>false</isUniversal>
<mgmtInterface>
<label>vNic_0</label>
<name>mgmtInterface</name>
<addressGroups>
<addressGroup>
<primaryAddress>192.168.10.224</primaryAddress>
<subnetMask>255.255.255.0</subnetMask>
<subnetPrefixLength>24</subnetPrefixLength>
</addressGroup>
</addressGroups>
<mtu>1500</mtu>
<index>0</index>
<connectedToId>dvportgroup-106</connectedToId>
<connectedToName>DV-VM</connectedToName>
</mgmtInterface>
<interfaces>
<interface>
<label>138900000002/vNic_2</label>
<name>UpLink</name>
<addressGroups>
<addressGroup>
<primaryAddress>10.0.0.2</primaryAddress>
<subnetMask>255.255.255.0</subnetMask>
<subnetPrefixLength>24</subnetPrefixLength>
</addressGroup>
</addressGroups>
<mtu>1500</mtu>
<type>uplink</type>
<isConnected>true</isConnected>
<isSharedNetwork>false</isSharedNetwork>
<connectedToId>virtualwire-45</connectedToId>
<connectedToName>Transport-10.0.0.0</connectedToName>
</interface>
<interface>
<label>13890000000a</label>
<name>GW-10.0.1</name>
<addressGroups>
<addressGroup>
<primaryAddress>10.0.1.1</primaryAddress>
<subnetMask>255.255.255.0</subnetMask>
<subnetPrefixLength>24</subnetPrefixLength>
</addressGroup>
</addressGroups>
<mtu>1500</mtu>
<type>internal</type>
<isConnected>true</isConnected>
<isSharedNetwork>false</isSharedNetwork>
<connectedToId>virtualwire-46</connectedToId>
<connectedToName>LS-10.0.1</connectedToName>
</interface>
</interfaces>
<edgeAssistId>5001</edgeAssistId>
<lrouterUuid>3914608b-a1a9-41e2-8251-7da1557c38e1</lrouterUuid>
<queryDaemon>
<enabled>false</enabled>
<port>5666</port>
</queryDaemon>
</edge>

 

PowerShell get the latest VI events

From time to time it's nice to search the VIEvent log for something specific.   I have found that PowerShell lets you do this very quickly.   If, for example, I was looking for all HA events, I might use the following code:

 

# Load the PowerCLI core module and connect to vCenter (adjust the address)
Import-Module -Name VMware.VimAutomation.Core
Connect-VIServer 192.168.10.14

# Grab recent events and keep only those whose message mentions HA
# (add -MaxSamples to Get-VIEvent to search further back than the default)
$ViEvents = Get-VIEvent | Where-Object { $_.FullFormattedMessage -like "*HA*" } | Select-Object *

# How many matches, then browse them interactively
$ViEvents.Count
$ViEvents | Out-GridView

Does Cloud + REST API spell the end of GUI

Fun question:  Does API spell the end of the GUI?

I started my career as a Solaris and Linux administrator, mostly because I felt that working in Windows Server took away most of my control.  I loved configuring a web server in text and having full control.   I loved having to understand what each variable did so I could tune my web server to meet my needs.    It was a great job which led into configuration management with Puppet.   Full control and text once again…

This evening I was working with the REST API for NSX on a side project, and to confirm the results of my query I just used REST… I got my answer in a millisecond… I could not have refreshed the GUI that quickly.   It was so easy, and it reminded me of the good old Linux days long forgotten as an architect.

Make no mistake, it's a coder's world out there; infrastructure folks need to get comfortable with APIs and code.   The future is a process of automating different units together using APIs.   Working with REST has taught me so much about the platform.   You start to understand how the solution was built.   It exposes workflows that help you build efficiency…

I suggest that if you really want to understand your product you need to learn its API.  If it does not have an API, consider a different product.   I know GUIs will be around, but I do believe they will continue to have less value in enterprise deployments.  Strap on your code and join the power users.