VRO – XML work

Sooner or later you are going to have to work with XML in Orchestrator.  Orchestrator can be challenging with XML because its scripting engine is based upon E4X rather than more current libraries.   You can find some specific details here.

I have been doing a lot of XML work with NSX, so let me explain with a real-world example.  I used the NSX REST API to return the following XML:

<edge>
	<datacenterMoid>datacenter-21</datacenterMoid>
	<type>distributedRouter</type>
		<appliances>
			<appliance>
				<resourcePoolId>domain-c861</resourcePoolId>
				<datastoreId>datastore-998</datastoreId>
			</appliance>
		</appliances>
		<mgmtInterface>
			<connectedToId>virtualwire-1045</connectedToId>
				<addressGroups>
					<addressGroup>
						<primaryAddress>192.168.10.222</primaryAddress>
						<subnetMask>255.255.255.0</subnetMask>
					</addressGroup>
				</addressGroups>
		</mgmtInterface>
		<interfaces>
			<interface>
				<type>uplink</type>
				<mtu>1500</mtu>
				<isConnected>true</isConnected>
				<addressGroups>
					<addressGroup>
						<primaryAddress>172.16.1.2</primaryAddress>
						<subnetMask>255.255.255.0</subnetMask>
					</addressGroup>
				</addressGroups>
				<connectedToId>virtualwire-1036</connectedToId>
			</interface>
			<interface>
				<type>internal</type>
				<mtu>1500</mtu>
				<isConnected>true</isConnected>
				<addressGroups>
					<addressGroup>
						<primaryAddress>172.16.0.1</primaryAddress>
						<subnetMask>255.255.255.0</subnetMask>
					</addressGroup>
				</addressGroups>
				<connectedToId>virtualwire-1033</connectedToId>
			</interface>
			<interface>
				<type>internal</type>
				<mtu>1500</mtu>
				<isConnected>true</isConnected>
				<addressGroups>
					<addressGroup>
						<primaryAddress>172.16.20.1</primaryAddress>
						<subnetMask>255.255.255.0</subnetMask>
					</addressGroup>
				</addressGroups>
				<connectedToId>virtualwire-1035</connectedToId>
			</interface>
			<interface>
				<type>internal</type>
				<mtu>1500</mtu>
				<isConnected>true</isConnected>
				<addressGroups>
					<addressGroup>
						<primaryAddress>172.16.10.1</primaryAddress>
						<subnetMask>255.255.255.0</subnetMask>
					</addressGroup>
				</addressGroups>
				<connectedToId>virtualwire-1034</connectedToId>
			</interface>
		</interfaces>
</edge>

As you can see, it contains a lot of information.  This is the basic build for a distributed logical router (DLR).  Let's assume that I want to get all the IP addresses in use on this DLR.   First I need to convert the REST API return into an XML object.  (It's already XML-formatted text, but we need it as an object so we can interact with it.)  Let's assume all the content above is in a variable called mydata.

//Convert into XML object
var myXML = new XML(mydata);

 

Reading Data

Now that it's an XML object, we can interact with specific nodes with ease.   If I wanted to return the management interface node (including its primary address), I could use the following code:

System.log(myXML.mgmtInterface);

 

It's a common mistake to include the root element (edge) in the node path, which would fail to return results because myXML already represents the edge element.   Let's do something more complex, like returning the first interface's IP address:

System.log(myXML.interfaces.interface[0].addressGroups.addressGroup[0].primaryAddress);

 

As you can see, I know that both interface and addressGroup might have multiple entries, so I use [0] to designate the first entry.  In reality it would be better to use a loop so we can get all interface IP addresses, like this:

for each (a in myXML.interfaces.interface)
{
     for each (b in a.addressGroups.addressGroup)
     {
        System.log(b.primaryAddress);
     }

}

As you can see, this iterates through all interface entries (a), then through all addressGroup entries (b) on each interface, and prints out the primaryAddress.
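
E4X also supports filter predicates, which can save you a loop when you only want nodes matching a condition.  Here is a minimal sketch (using the same myXML variable from above) that should log the primary address of every internal interface:

//Filter the interface list down to internal interfaces only
var internalInterfaces = myXML.interfaces.interface.(type == "internal");
for each (iface in internalInterfaces)
{
     System.log(iface.addressGroups.addressGroup[0].primaryAddress);
}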

 

Deleting Data

Removing nodes from XML is really easy.  For example, if I wanted to remove the first interface, I would do:

delete myXML.interfaces.interface[0];
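
Once you have finished editing, you will usually want the XML back as a string, for example to PUT the modified edge configuration back to NSX.  A minimal sketch (the REST call itself is omitted here):

//Serialize the modified XML object back into a string
var updatedBody = myXML.toXMLString();
System.log(updatedBody);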

I hope this helps you a little on your journey.

VRO Enable lockdown mode

I have been reading the VMware Validated Design documents of late.  I cannot recommend them enough; they are awesome documents and really worth a deep read.   I noticed one of the design choices is to enable lockdown mode on all ESXi hosts.   This is common due to security needs, but the documents also note that host profiles don't capture lockdown mode settings, so you have to set it manually.   I have used PowerCLI to turn lockdown mode on and off (during issues) for years, and VMware posted a KB article that includes the PowerCLI code here.

 

I wanted to write a piece of Orchestrator code that would lock down ESXi hosts on a daily basis if they are not in lockdown mode.  Consider it a desired-end-state tool.

If you wanted to enable Normal lockdown mode on all ESXi hosts you would use the following code:

 

//Get all hosts
hosts = System.getModule("com.vmware.library.vc.host").getAllHostSystems();

for each (host in hosts)
{
    // Compare lockdown modes
    if (host.config.lockdownMode.value === "lockdownDisabled")
    {
        host.enterLockdownMode();
        System.log(host.name + " is being locked down");
    }
    else if (host.config.lockdownMode.value === "lockdownNormal")
    {
        System.log(host.name + " is already locked down");
    }
}

 

Now if you wanted to disable lockdown you would just run the following code:

//Get all hosts
hosts = System.getModule("com.vmware.library.vc.host").getAllHostSystems();

for each (host in hosts)
{
    // Compare lockdown modes
    if (host.config.lockdownMode.value === "lockdownDisabled")
    {
        System.log(host.name + " is already not in lock down mode");
    }
    else if (host.config.lockdownMode.value === "lockdownNormal")
    {
        host.exitLockdownMode();
        System.log(host.name + " is now not in lock down mode.");
    }
}

 

You can enable / disable strict mode using lockdownStrict as well.  I hope it helps… now all you need to do is create a scheduled task and perhaps run it cluster by cluster.
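
As a minimal sketch of the desired-end-state idea, the check and the remediation can be combined so a scheduled task only touches hosts that are out of compliance.  This assumes normal lockdown is the target and leaves hosts already in strict mode alone:

//Get all hosts
var hosts = System.getModule("com.vmware.library.vc.host").getAllHostSystems();

for each (host in hosts)
{
    var mode = host.config.lockdownMode.value;
    if (mode === "lockdownDisabled")
    {
        // Out of compliance: enforce normal lockdown mode
        host.enterLockdownMode();
        System.log(host.name + " was not locked down - enabling normal lockdown mode");
    }
    else
    {
        // Already lockdownNormal or lockdownStrict: leave the host alone
        System.log(host.name + " is already compliant (" + mode + ")");
    }
}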

 

 

Why modernize your datacenter if you are cloud first?

There has been a growing trend in the enterprise to move all IT into the cloud.   Many executives have been drinking this Kool-Aid as the best way to solve their agility issues.   Gartner surveys have shown that 66% of IT shops move to the cloud for agility.  (Only 5% do it for cost – trust me, unless your business has really huge bursts it's not cheaper.)  When I examine this choice with customers, the details start to create challenges.   A good friend always used to say the devil is in the details…

Applications rule the world

The destination is determined by the application.  I like to divide application stacks into three tiers:

  • Cloud Native or SaaS – These are services born in the cloud and specific to the cloud – examples would be Lambda (cloud native) or Office 365 (SaaS)
  • Micro-services – Containers or applications with each function broken into atomic units, using APIs to orchestrate outcomes
  • 3-tier architecture – traditional web, app, database architecture and COTS applications

While some newer organizations may only have micro-services or cloud native applications, the lion's share of enterprise customers have a mixture of all three, including a healthy portion of vendor-provided COTS applications.   As you examine these applications, you discover that public cloud may not be supported.  Replatforming COTS applications is the role of the provider, not the consumer.   When you approach traditional architecture and COTS applications, the only agility that the public cloud can provide is very fast IaaS (infrastructure orchestration).   Many IT leaders today are considering replatforming all applications using a mixture of SaaS for COTS and a move to micro-services.   It's critical to realize that these replatforming efforts may be seen as providing no value to the business as a whole without a compelling business case.

Application limits

Some of the most common application-driven limits to public cloud adoption are:

  • Regulatory Compliance
  • Data gravity/Latency – your data exists outside the public cloud and communication introduces latency
  • COTS or lack of support for public cloud
  • Performance requirements

Public Cloud considerations

When moving to a public cloud you should consider:

  • Application refactoring and dependency mapping
  • Exit strategy
  • Cost
  • Performance control in multi-tenant world
  • Configuration flexibility limits
  • Disparate networking and security
  • Disparate management tools

What is cloud first?

Given that cloud adoption is driven by the need to be more agile, one can determine that cloud first is really a deep posture of automation across architectures.  It is essentially the automation in the public cloud that makes it agile.

What makes a public cloud agile?

The key element of a public cloud's agility is the fact that it is software defined instead of hardware defined.   Many enterprises have adopted compute software definition in the form of virtualization while continuing to define storage and networking in hardware.   Agility cannot be achieved while waiting on people to rack and stack elements.   Hardware economies of scale are possible in a public cloud but not within the reach of most enterprise environments.   So the first rule of public cloud is hardware abstraction into software.   The second rule is software-defined abstraction in the form of a customer consumption layer.  These two layers provide the critical agility and speed.

As you can see from the picture, the ultimate end of public cloud is to provide an increasing number of services to be consumed via the UI and API.   Most enterprise shops continue to be defined in hardware, with compute virtualization only.  They are working very hard to layer a consumption layer in the form of ITSM tools in front of their IT, but they find it hard to provide agility because of their lack of adoption of a software-defined datacenter.   One cannot simply skip required pieces of the puzzle and expect the same results.

Wait, what does this have to do with modernizing the datacenter?

Simple: let us assume you cannot move everything to the cloud due to constraints (let's be honest, because of compliance and data gravity).  Then whatever lives in your private datacenter will have to use your private cloud -> is it software defined?   Does it provide your required agility?  While your private datacenter footprint may shrink over time, you still need a private cloud that provides agility.   It's likely that the elements staying in your private datacenter generate the most income for your company.

 

Thoughts or hate mail is welcome

VRO to delete lots of virtual switches in NSX

I created a workflow to create 5000 virtual switches in NSX… which was cool.  But it did create 5000 virtual switches which now needed to be deleted.  So I created a new workflow to delete them.   In my case I created all 5000 switches on the same distributed logical router, and I had to delete the DLR before starting this workflow.   I used two REST API calls to complete this work, as shown below:

And

 

Notice the URL templates on both, as they are critical.  Otherwise each is just a REST API call to NSX Manager.
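
For reference, the URL templates for these two operations would be roughly the following (the exact paths are an assumption and may differ by NSX version; the GET returns the paged virtual wire list the script parses, and the DELETE takes the virtual wire ID as a template parameter):

GET    /api/2.0/vdn/virtualwires
DELETE /api/2.0/vdn/virtualwires/{virtualWireID}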

These two REST operations, along with the RESTHost, get passed into the workflow as shown:

 

Otherwise it's just a scriptable task.  I ran into an issue with it running too fast for NSX Manager, so I put a one-second wait between each call.  Here is the code:

//Busy-wait for the given number of milliseconds
//(System.sleep(milliseconds) is a simpler built-in alternative in vRO)
function sleep(milliseconds) {
    var start = new Date().getTime();
    for (var i = 0; i < 1e7; i++) {
        if ((new Date().getTime() - start) > milliseconds) {
            break;
        }
    }
}

//Setup the GET request for virtual wires
var inParametersValues = [];
var request = restOperation.createRequest(inParametersValues, null);
request.contentType = "";
var response = request.execute();

//Prepare output parameters
//System.log("Response: " + response);
statusCode = response.statusCode;
statusCodeAttribute = statusCode;
//System.log("Status code: " + statusCode);
contentLength = response.contentLength;
headers = response.getAllHeaders();
contentAsString = response.contentAsString;

System.log(response);

//Parse the response both as an E4X object and as a DOM document
var xmlObj = new XML(contentAsString);
var document = XMLManager.fromString(contentAsString);

//Count the virtual wires returned
var count = document.getElementsByTagName("vdnId");
System.log("Count : " + count.length);

j = 0;

for (i = 0; i < count.length; i++)
{
    //System.log("Scope: " + xmlObj.dataPage.virtualWire[i].objectId + " vdnId: " + xmlObj.dataPage.virtualWire[i].vdnId);
    if (xmlObj.dataPage.virtualWire[i].vdnId > 5010)
    {
        System.log("Scope: " + xmlObj.dataPage.virtualWire[i].objectId + " vdnId: " + xmlObj.dataPage.virtualWire[i].vdnId);
        var virtualWireID = xmlObj.dataPage.virtualWire[i].objectId;
        var inParametersValues = [virtualWireID];
        var request = restOperationDelete.createRequest(inParametersValues, null);
        request.contentType = "";
        var response = request.execute();
        statusCode = response.statusCode;
        System.log("Response : " + statusCode + " Rest request : " + virtualWireID);
        j++;
        sleep(1000);
    }
}
System.log("j : " + j + " Total : " + count.length);

It should work great.  It's currently set to delete every virtual wire with a vdnId above 5010 (mine start at 5000); you can adjust this number to whatever you want…


vRealize Orchestrator scaling with 4K displays

I ran into this issue this week.   vRealize Orchestrator with Windows 10 on 4K displays makes the UI so small not even my seven-year-old could read it.   For those lucky enough to have 4K displays it's a real challenge.  It's a problem with Java and DPI scaling, not Orchestrator, but it's a magical challenge.   As much as I want to return to the days of using magnifying glasses to read computer screens… here is a simple fix.

 

Download the client.jnlp and run it from an administrative command line with the following command:

 

javaws -J-Dsun.java2d.dpiaware=false client.jnlp

 

This should fix your vision issues and it's cheaper than a new pair of glasses.

vRO Action to return virtual machines with a specific tag

I am a huge fan of tags in vSphere.   Metadata is king for modular control and policy-based administration.   I wrote an action for a lab I presented at VMUG sessions.   It takes the name of a tag as a string and returns the names of all virtual machines with that tag as an array of strings.  It does require a vAPI endpoint, set up as explained here: (http://www.thevirtualist.org/vsphere-6-automation-vro-7-tags-vapi-part-i/) Here it is:

 

Return Type: Array/String (Could be Array VC:VirtualMachine)

Parameter: tagNameToMatch string

Code: return_vm_with_tag

// Array to hold the VM names
var vmsWithSpecificTag = new Array();

// VAPI connection
var endpoints = VAPIManager.getAllEndpoints();
// Use the first returned endpoint to gather information
var client = endpoints[0].client();

// Tag association and tag managers
var tagging = new com_vmware_cis_tagging_tag__association(client);
var tagMgr = new com_vmware_cis_tagging_tag(client);
// Create object to hold the VM reference
var objId = new com_vmware_vapi_std_dynamic__ID();

// Get all virtual machines
vms = System.getModule("com.vmware.library.vc.vm").getAllVMs();
// Loop through virtual machines
for each (vm in vms)
{
    // Assign VM data to the dynamic ID object
    objId.id = vm.id;
    objId.type = vm.vimType;
    // Get tags assigned to the VM
    var tagList = tagging.list_attached_tags(objId);
    // Loop through the VM's assigned tags
    for each (var tagId in tagList)
    {
        // Get the tag object
        var theTag = tagMgr.get(tagId);
        // Assign the name to compare
        var tagName = theTag.name;
        // Compare to our requested tag
        if (tagName == tagNameToMatch)
        {
            System.log("VM : " + vm.name + " has the tag : " + tagName);
            // Add to the result array
            vmsWithSpecificTag.push(vm.name);
        }
    }
}

return vmsWithSpecificTag;
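
As a usage sketch, calling the action from a workflow scriptable task might look like the following.  The module path com.mycompany.vmug and the tag name Production are just placeholders for wherever the action lives and whatever tag you care about:

// Hypothetical module path - adjust to match where you stored the action
var taggedVMs = System.getModule("com.mycompany.vmug").return_vm_with_tag("Production");
for each (vmName in taggedVMs)
{
    System.log("Tagged VM: " + vmName);
}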

Learning vRealize Orchestrator

Recently I presented a lab on learning vRealize Orchestrator to the Austin VMUG.   It had been too long since I attended a VMUG meeting, due to moving to Dallas, and it was great to meet new peers in the Austin area.    The purpose of the lab was to give people hands-on experience using Orchestrator with some helpful real-world examples.   I was present to answer questions and help the learning process.

I am working to bring the same lab to Dallas and Houston in the next few months, but I wanted to share the labs here.  It's mostly possible to do the labs in the hands-on lab environment HOL-1821-05-CMP, partially made by my friend Ed Bontempo.  You will have to type a lot of the code examples since the HOL does not support cut and paste.   You can also do it all in your home lab.    Contact me if you want some instructor help or to know about the next session we are presenting to a live VMUG.

Code: code_text

Lab Manual: VMUG_VRO_LAB_Manual

Enjoy!

Basic NSX Network virtualization setup

This post will go over the basic setup for network virtualization in NSX.  This is nothing new or exciting, but I figured I would share it since more users are deploying NSX in their home labs these days.   I will assume that you have already deployed the NSX Manager and controllers and that all your ESXi hosts are prepared.

We are going to set up the subnet of 10.0.0.0/17 to be virtually routed as shown below:

This requires the following:

  • Static route on Linksys EA6200 router to point 10.0.0.0/17 to 192.168.10.223 (because my Linksys does not support any dynamic routing protocols)
  •  A logical switch called Transport-10.0.0.0 between the border ESG and the Logical distributed router
  • OSPF configured between ESG-3 and LDR-3

 

Creation of the LDR-3 (pictures to follow steps)

  1. First we need to create a logical switch: choose Logical Switches, select the green + button, input the name (Transport-10.0.0.0) and a description, and click OK
  2. Select NSX Edges in the Navigator pane, select the green + button
  3. In the Name and description pane: Install Type: Logical (distributed) router, Name: LDR-3, Hostname: ldr3, leave Deploy NSX Edge selected, Next
  4. In settings, type your password, I like to enable ssh, click next
  5. In configure deployment: Press the green + to deploy a NSX Edge Appliance, Select correct resource pool, datastore, host, and folder, click ok
  6. Click Next
  7. In Configure interfaces
  8. Select connected to for HA interface: Port group DV-VM, press + below HA and add 192.168.10.224
  9. press the green + button under interfaces
  10. In Add NSX Edge Interface: Name Uplink, Connected to: Transport-10.0.0.0, Press green + to add IP: 10.0.0.2 subnet 24, Click ok
  11. Click Next
  12. In Default gateway settings:  Set the gateway IP as 10.0.0.1 and click next
  13. Ignore the Firewall and HA settings click next
  14. Click finish to deploy LDR

 

 

Creation of the ESG-3 (pictures to follow steps)

  1. Back at the NSX Edge section in Navigator
  2. Press the Green + sign
  3. In Name and description: Choose Edge Services gateway, Name: ESG-3, Hostname esg3 and select Next (in Production you might want high availability or ECMP)
  4. In Settings:  Type Admin password and enable ssh, Next
  5. In Configure deployment: Press Green + sign, Select resource pool, datastore and host then ok and Next
  6. In Configure Interfaces: press the green + sign
  7. Name: Uplink, Connected To: DV-VM, Press green + to add interface: 192.168.10.223 subnet 24, click ok
  8. Click Next
  9. In Default gateway settings insert default gateway of 192.168.10.1 then next
  10. Ignore firewall and HA settings and next
  11. Click Finish to deploy appliance

Configure Physical router

This is unique per router in mine I added a static route for the subnet:

 

Configure LDR

We need to add at least one inside network and configure OSPF.

  1. In the Logical Switch section we are going to add a switch for 10.0.1.0/24 called LS-10.0.1
  2. In Logical Switch Section: Green + button, Name LS-10.0.1 then OK
  3. Go to NSX Edges in Navigator
  4. Double click on LDR-3
  5. We need to add an interface for the new network: select Manage, Settings, Interfaces
  6. Select Green +
  7. Name: GW-10.0.1, Connected To: LS-10.0.1, green + button to add interface 10.0.1.1 subnet 24
  8. Select Routing tab, global Configuration
  9. Go to Dynamic Routing configuration and click edit
  10. Make sure the uplink interface is chosen then click ok
  11. Press Publish Changes button
  12. Click on OSPF button
  13. Remove all current area definitions (51 ) with red X then publish changes
  14. Click green + on area definitions and add area 2 (just type 2 in area button leave rest default)
  15. Press green + in area to interface mapping button
  16. Make sure Uplink is selected and area 2 and press OK
  17. Press Edit button next to OSPF configuration and enable OSPF, For protocol address choose a free IP 10.0.0.3, forwarding is 10.0.0.2
  18. Publish Changes
  19. Go to firewall section
  20. Disable firewall
  21. Publish changes

 

Configure ESG-3

  1. Return to Networking & Security main section
  2. Select NSX Edges and double click on ESG-3
  3. Select Manage, Settings, Interfaces
  4. We need to add an interface for the transport between the LDR and ESG
  5. Select vnic1 and press Edit button
  6. Connected to: Transport-10.0.0.0, IP: 10.0.0.1 subnet 24
  7. Select Routing
  8. In global configuration: Select edit next to dynamic routing configuration, ensure uplink is selected and press ok
  9. Publish changes
  10. Click on OSPF
  11. Remove current area definitions with red X and publish changes
  12. Add a new area for area 2 leaving everything else default
  13. In the area to interface mapping make sure you choose vnic1 (internal link) and area 2
  14. Select OSPF Configuration and Enable OSPF
  15. Publish Changes
  16. Select Firewall section and disable firewall and publish changes

 

Validate Configuration

Let's validate the configuration three ways: confirming OSPF settings on ESG-3, adding a new subnet, and a ping test.

Confirming on ESG-3

  1. Login to ESG-3 via SSH (username admin password set during deployment)
  2. Type the following to see current routes (show ip route) and ensure that the E2 learned route is showing:

Adding a new subnet

  1. Stay logged into the ESG-3
  2. Switch to the Networking and security console, navigate to Logical switches
  3. Press green + to add a switch for LS-10.0.2
  4. select NSX Edges, Double click on LDR-3
  5. Go to Manage and settings
  6. Select Interfaces and press green +
  7. Name: GW-10.0.2, Internal, Connected to:  LS-10.0.2, IP 10.0.2.1 subnet 24
  8. Return to the ESG-3 ssh session and run the command show ip route to see 10.0.2.0/24

Test Via ping

  1. Attempt to ping either gateway on the LDR (10.0.1.1 or 10.0.2.1)

 

Additional commands on ESG-3

Here are some commands that will help you in troubleshooting OSPF:

show ip ospf neighbors – show other members of the areas

show ip ospf database – display the current OSPF database

 

Advice to VCDX candidates from a Double VCDX

“Sometimes it’s the journey that teaches you a lot about your destination.”  – Drake

Update: I have updated the wording in the constraints section to reflect a Twitter comment from  – thanks for the fix to the wording and readability.

The VMware Certified Design Expert (VCDX) certification represents the highest tier of VMware's certifications.   I recently contributed to a panel of VCDXs at VMworld.  Candidates considering the VCDX certification had the opportunity to ask the panel questions.   The questions illustrated that candidates were concerned about the Herculean effort required to achieve the certification.   I wanted to take this opportunity to provide some guidance I have learned as a mentor.   I believe anyone can become a VCDX.   It does require some hard work, but it is very achievable.

 

Requirements, Constraints, Assumptions and Risks

Becoming a VMware Certified Design Expert does not mean you have to be the most technical person in the room.   It does mean you have to know how to align technology to business needs.    My experience has taught me that I can tell right away if a VCDX submission will be successful based upon its requirements, constraints, assumptions and risks.   The ability to gather business and technical requirements is a key skill for any design expert.   Your technical requirements should be aligned to the business requirements.  It's important to understand the difference between business and technical requirements:

  • Business Requirements – Defines how the delivered product provides value. Other words often used are outcomes, or expected benefits.  For example, the solution must meet regulatory compliance.
  • Technical Requirements – Defines the technical “must haves” to achieve the outcome. For example, the solution must be able to fail over and fail back from a disaster and support an RTO of four hours.

Many VCDX documents are solely focused on technical requirements and miss the “why” that drives the design.   Understanding the difference between requirements and constraints is another challenge for many candidates:

  • Requirements – Things the design must meet, such as: establish an RTO of four hours or provide capacity for twenty percent growth over the next three years.
  • Constraints – Things that form limits or boundaries that apply to the design.  For example, a specific vendor relationship or reuse of current hardware.  Constraints should be met by the design unless they are resolved via conflict resolution.

Once you have established your requirements and constraints you are left with assumptions and risks:

  • Assumptions – things you believe to be true but cannot verify. For example, storage usage will grow at the same rate as compute usage or the sample data provided represents reality.
  • Risks – are simply risks to the project meeting business requirements. If you identify risks they should be provided in this section.   Every project has risks.   For example, staff skills or timelines.

 

Correctly creating requirements and constraints that align with the elements of design is critical to a successful submission.    Identification of assumptions and risks provides important protection to the architect.  The goal of a VCDX design is to align technology to meet the requirements and constraints, not to provide the best technology mix.

 

Elements of Design

When working with infrastructure, VMware has designated five elements that should be considered in each design choice.  Each design choice should be evaluated against the elements of design for impact.  I personally like to use the acronym RAMPS to help me remember these elements:

  • Recoverability – the choice's effect on disaster recovery
  • Availability – the choice's effect on SLAs
  • Manageability – the choice's effect on management cost
  • Performance – the choice's effect on performance
  • Security – the choice's effect on security

It is not uncommon for availability, recoverability, security or performance choices to have a negative impact on manageability.   Not all choices can have a net benefit to all elements of design.   The tie breaker in these conflicts should be the requirements.   Conflicts between design elements may exist even after evaluating the requirements, which is where a conflict resolution section comes in.   Conflict resolution is where the customer of the solution acknowledges the conflict and mitigates it in some form.   Make sure your design identifies its conflicts.   Each requirement and constraint should be aligned to an element of design.  When gathering business requirements, consider the RAMPS impact of each requirement to help build a full list of requirements and constraints.    Each technical requirement or constraint should be aligned to a single element of RAMPS.

 

Fun with Formats

Every single candidate struggles with document format.    The VCDX requires far more detail than most enterprise designs.    Format paralysis has slowed, if not stopped, many candidates.   My suggestion is to identify an outline that aligns with the blueprint, for example:

  • Overview
  • Requirements, constraints, assumptions and risks
  • Conceptual architecture
  • Logical architecture
  • Physical architecture
  • Security
  • Appendix

 

Each of the different layers of architecture should address the sub-elements: compute, storage, networking, applications, recovery, virtual machines, management, etc.   You cannot just pay lip service to conceptual and logical architecture.   They must be developed just like the physical architecture.    Design choices should be justified against RAMPS, with conflicts identified.   The secret is to determine a format and start writing; don't get stuck on format.   In the end, the format is not as important as the content, assuming the reviewer can locate the items required in the blueprint.

 

Time Management

Every candidate struggles with time.  We have family, friends, hobbies, faith and work conflicting with the VCDX goal.    My advice is to set a goal with a timeline.   Agree upon a set time each day.  Exercise discipline to work on the VCDX during that time and you will achieve your goal.   For me it was 8:00 – 9:00 PM each night, after my kids' bedtime and before spending time with my wife.   I had to sacrifice computer game time, my guaranteed wins from p4rgaming.com, social media time and blogging time, but after six months I was done.   This model has worked for me to achieve two VCDX certifications and has put me on the path to my third.   I'd like to end where I began: I believe everyone can achieve this certification with hard work.   To start, get a mentor by visiting vcdx.vmware.com and searching for a mentor, including me.