I used to be a huge PC gamer. For a solid portion of my life Steam has been the gaming platform for PC gamers; I personally have a small collection of 701 games on Steam. In 2014 Steam released the Steam Link, a small device running a special flavor of Linux that let you stream your PC games to your TV. The device was plagued by a weak WiFi radio, which effectively required it to be plugged into your Ethernet network. Once plugged in, though, it was awesome and worked really well.
It let you use Valve's customized controller or a PlayStation or Xbox controller, which worked great for many games. Last year around Christmas they announced they were discontinuing the device and started selling it for $2.50 (the original price was $39.99). Since then Steam has released the Steam Link software for Raspberry Pi 3, Android, iOS, and Samsung TVs. This departure from dedicated hardware is perhaps a realignment to the business model that best fits Steam (software), or perhaps it comes down to something else:
There are lots of devices fighting for TV and HDMI slots; why add one more?
Hardware support is expensive and painful; in-warranty replacements and customer satisfaction issues were a problem
Going all software lets them be a lot more agile with new features, without being locked into a specific hardware capability (like the crappy WiFi they put into the device)
I don't have any insight into the real reason they moved from hardware to software, but I suspect it's a realignment to their core business of software combined with a need to move faster. The hardware does matter (see the WiFi issue), but it's also a limiting factor for new features.
I wanted to populate a vRA drop-down with all VMs that have a specific security tag, so I set off to create a vRO action. This particular action requires the UUID of your REST API connection to NSX Manager; I have included mine as a reference, and you can locate yours via the vRO console. The action returns an array of strings.
// The UUID below is a placeholder; swap in your own NSX Manager REST API connection UUID from the vRO console
// (the NSX:Connection inventory type name is an assumption; check your NSX plugin version)
var connection = Server.findForType("NSX:Connection", "00000000-0000-0000-0000-000000000000");
// Look up the security tag by name (placeholder), then the VMs carrying that tag
var tag = NSXSecurityTagManager.getSecurityTag(connection, "my-security-tag");
var list = NSXSecurityTagManager.getTaggedVms(connection, tag);
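If getTaggedVms hands back VM objects rather than plain names (I have not pinned down the plugin's return type), a small loop along these lines converts the result into the array of strings the drop-down expects; the name property is an assumption:
var names = new Array();
// Assumption: each entry in the tagged-VM list exposes a "name" property
for each (var vm in list) {
    names.push(vm.name);
}
return names;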
Every so often I need to populate a drop-down in vRA with a list of virtual machine names so the customer can select from the list. This is useful for things like my previous posts on adding and removing security tags, which take a VM name as input. It is created as a vRO action that returns an array of strings; once completed, just choose it as the action that populates the drop-down in vRA. A sketch of such an action follows.
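Here is a minimal sketch of that action, assuming the vCenter plugin is configured in vRO; it simply walks every VM the plugin knows about and returns the names as an array of strings:
// Return the names of all virtual machines known to the vCenter plugin
var vms = VcPlugin.getAllVirtualMachines();
var names = new Array();
for each (var vm in vms) {
    names.push(vm.name);
}
return names;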
I recently created an environment that had a vRA XaaS blueprint to apply a security tag to individual virtual machines, and I wanted to share the code I wrote to speed up your adoption. In this case a scriptable task does the work, and we have one parameter: the name of the virtual machine to tag. A rough sketch of the task is below.
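This is a hedged sketch of what that scriptable task can look like. The connection UUID and tag name are placeholders, the vmName input is assumed to be the parameter mentioned above, and the applySecurityTagOnVM call is an assumption on my part; verify the exact method name and argument types in the NSX plugin's API explorer before relying on it:
// vmName (string) is the assumed input parameter: the virtual machine to tag
// Placeholder UUID: replace with your own NSX Manager REST API connection UUID
var connection = Server.findForType("NSX:Connection", "00000000-0000-0000-0000-000000000000");
var tag = NSXSecurityTagManager.getSecurityTag(connection, "my-security-tag");
// Find the VM object by name via the vCenter plugin
var vm = null;
for each (var candidate in VcPlugin.getAllVirtualMachines()) {
    if (candidate.name == vmName) {
        vm = candidate;
    }
}
// Assumption: the apply method name and its arguments may differ by NSX plugin version
NSXSecurityTagManager.applySecurityTagOnVM(connection, tag, vm);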
Pivotal Container Service (PKS) provides desired-state management for Kubernetes clusters. It radically simplifies many operational aspects of running Kubernetes (K8s) in production. Out of the box, K8s struggles to provide secure multi-tenant ingress to clusters; with PKS this gap is filled by tight integration with NSX-T. A single command creates the K8s cluster API and worker nodes with all the required networking. I wanted to provide a deeper dive into the networks that are created when you issue the following command in PKS:
pks create-cluster my-cluster.corp.local -e my-cluster.corp.local -p small
This command tells PKS to create a new K8s cluster named my-cluster.corp.local, with that same external hostname, using the small plan. Plans are defined as part of the PKS install and can be resized / adjusted at any time. The plan denotes the following:
How many Master/ETCD nodes and sizing
How many worker nodes and sizing
My command produces the following details:
Once you issue the command, the master/ETCD and worker nodes are deployed along with all required networking. I'll take a deeper dive into NSX-T PKS routing in another post, but simply put, several networks are created during cluster creation. All of the networks include the cluster's UUID, so they are simple to track. Searching in NSX-T for the UUID provided the following information:
As you can see, the operation has created several logical routers to handle PKS traffic, including:
A T1 router for the K8s master node
A T1 router for the load balancer
Four T1 routers, one per namespace (found using: kubectl get ns -o wide)
To see what is running inside each namespace you can run kubectl get pods --all-namespaces
When you add namespaces to the K8s cluster, additional T1 routers are deployed. All of this is manual with traditional K8s clusters, but with PKS it's automatically handled and integrated.