Steam’s pivot from hardware to software

I used to be a huge PC gamer, and for a solid portion of my life Steam has been the gaming platform for PC gamers. I personally have a small collection of 701 games on Steam. In 2015 Steam released the Steam Link, a small device running a special flavor of Linux that let you stream your PC games to your TV. The device was plagued by a weak Wi-Fi radio, which effectively required it to be plugged into your Ethernet network. Once plugged in, the device was awesome and worked really well.

Steam Link

It allowed you to use their customized Steam Controller or a PlayStation or Xbox controller, which worked great for many games. Last year around Christmas they announced they were discontinuing the device and started selling it for $2.50 (the original price was $39.99). Since then Steam has released the Steam Link software for the Raspberry Pi 3, Android, iOS, and Samsung TVs. This departure from dedicated hardware is perhaps a realignment to the business model that best fits Steam (software), or perhaps it's something else:

  • There are lots of devices fighting for TV HDMI ports; why add another?
  • Hardware support is expensive and painful; in-warranty device replacements and customer satisfaction issues were a problem
  • Going all software makes them a lot more agile with new features, since they are not locked into a specific hardware capability (like the weak Wi-Fi they put into the device)

I don't have any insight into the real reason they moved from hardware to software, but I suspect it's a realignment to their core business of software combined with a need to move faster. The hardware does matter (see the Wi-Fi issue), but it's also a limiting factor for new features.

Create a VRO action to list all VMs with a specific security tag

I wanted to populate a VRA drop-down with all VMs that have a specific security tag, so I set off to create an action. This particular action requires the UUID of your REST API connection to NSX Manager; I have included mine as a reference, and you can locate yours via the VRO console. The action returns an array of strings.

// Look up the REST API connection to NSX Manager by its UUID (yours will differ)
var connection = NSXConnectionManager.findConnectionById("a497d03f-b45c-494d-a9a2-3d8d8a3b8fe1");

// Resolve the security tag, then fetch every VM carrying it
var tag = NSXSecurityTagManager.getSecurityTag(connection, "securitytag-12");
var list = NSXSecurityTagManager.getTaggedVms(connection, tag);

var machines = new Array();

if (list != null) {
    for (var i = 0; i < list.length; i++) {
        System.log(list[i].name);
        machines.push(list[i].name);
    }
}

return machines;
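As a side note, if you ever want to reuse this lookup from a workflow script instead of binding it directly to a drop-down, VRO actions can be called through System.getModule. A minimal sketch, assuming the action above was saved as getTaggedVmNames in a hypothetical com.example.nsx module (substitute your own module and action names):

// Hypothetical module/action names; substitute your own folder and action
var names = System.getModule("com.example.nsx").getTaggedVmNames();
System.log("Found " + names.length + " tagged VMs");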

Create a VRO action to display a drop-down of all VM names

Every so often I need to populate a drop-down in VRA with a list of virtual machine names so the customer can select from the list. This is useful for things like my previous posts on adding and removing security tags, which take a VM name as input. This is created as an action and returns an array of strings. Once completed, just choose it as the action that populates a drop-down in VRA.

Here is the action code:

var machines = new Array();

// Ask the vCenter plug-in for every VM it knows about and collect the names
var vms = VcPlugin.getAllVirtualMachines();
for each (var vm in vms) {
    machines.push(vm.name);
}

return machines;
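If your VM inventory is large, the drop-down reads better alphabetized. A small optional tweak (plain JavaScript, nothing VRO-specific) just before the return:

// Sort case-insensitively so the drop-down is in a natural order
machines.sort(function (a, b) {
    return a.toLowerCase().localeCompare(b.toLowerCase());
});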

VRO code to remove an NSX security tag from a VM

My previous post showed how to add an NSX security tag using VRO; this one is similar but removes it:

Parameters:

  • name (just the VM name)

Attributes:

  • tags (an array of security tag names)
  • connection (the REST API connection to NSX)

The work is done in a scriptable task. Here is the code for cut and paste:

//name = 'dev-214';

var machineMOID = null;

// Find the managed object ID (MOID) of the VM matching the name parameter
var vms = VcPlugin.getAllVirtualMachines();
for each (var vm in vms) {
    if (vm.name == name) {
        System.log("VM name: " + vm.name + " MOID: " + vm.id);
        machineMOID = vm.id;
    }
}

// Detach the security tags from the VM
NSXSecurityTagManager.detachSecurityTagsOnVm(connection, tags, machineMOID);
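One thing to watch: if no VM matches the name, machineMOID stays null and the detach call fails with an unhelpful error. An optional guard you could place just before the NSXSecurityTagManager call:

// Optional guard (my addition): fail fast with a clear message
if (machineMOID == null) {
    throw "No VM found with name: " + name;
}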

VRO code to apply an NSX security tag

I recently created an environment with a VRA XaaS blueprint to apply a security tag to individual virtual machines, and I wanted to share the code I wrote to speed up your adoption. In this case a scriptable task does the work.

Parameters:

  • name (the string name of the server)

Attributes:

  • tag (an array of security tag names, selectable because VRO is integrated with the NSX endpoint)
  • connection (the REST API endpoint for NSX)

Here is the code for cut and paste usage:

//name = 'dev-214';

var machineMOID = null;

// Find the managed object ID (MOID) of the VM matching the name parameter
var vms = VcPlugin.getAllVirtualMachines();
for each (var vm in vms) {
    if (vm.name == name) {
        System.log("VM name: " + vm.name + " MOID: " + vm.id);
        machineMOID = vm.id;
    }
}

// Apply the tag
NSXSecurityTagManager.applySecurityTagOnVMs(connection, machineMOID, tag);

What networks does PKS create inside each K8s cluster?

Pivotal Container Service (PKS) provides desired state management for Kubernetes clusters. It radically simplifies many operational aspects of running K8s in production. Out of the box, K8s struggles to provide secure multi-tenant ingress to clusters; with PKS this gap is filled by tight integration with NSX-T. A single command creates the K8s cluster API and worker nodes with all required networking. I wanted to provide a deeper dive into the networks that are created when you issue the following command in PKS:

pks create-cluster my-cluster.corp.local -e my-cluster.corp.local -p small

This command tells PKS to create a new K8s cluster named my-cluster.corp.local with an external hostname of my-cluster.corp.local using the small plan. My plans are defined as part of the PKS install and are resizable/adjustable at any time. The plan denotes the following things:

  • How many master/etcd nodes and their sizing
  • How many worker nodes and their sizing

My command produces the following details:

[Screenshot: output of pks cluster my-cluster showing Name: my-cluster, Plan Name: small, UUID: 2cea3b0a-b176-43c4-8718-995017170347, Last Action: succeeded with description "Instance provisioning completed", Host: my-cluster.corp.local, Port: 8443, Worker Nodes: 3, and master IP 10.40.14.34]

Once you issue the command, the etcd and worker nodes are deployed along with all required networking. I'll do a deeper dive into NSX-T PKS routing in another post, but simply put, several networks are created during cluster creation. All of the networks include the cluster's UUID, so they are simple to track. Searching in NSX-T for the UUID provided the following information:

[Screenshot: NSX-T logical router list filtered on the cluster UUID, showing six Tier-1 routers, including lb-pks-2cea3b0a-b176-43c4-8718-995017170347-cluster-router, pks-2cea3b0a-b176-43c4-8718-995017170347-cluster-router, and per-namespace routers such as pks-<UUID>-kube-public, pks-<UUID>-kube-system, and pks-<UUID>-pks-system, all connected to the t0-pks Tier-0 router, in active-standby HA mode on transport zone overlay-tz and edge cluster edge-cluster-1]

As you can see, the operation created several logical routers to handle PKS traffic, including:

  • A T1 router for the K8s master node
  • A T1 router for the load balancer
  • Four T1 routers, one per namespace (found using: kubectl get ns -o wide)

To see what is running inside each namespace, you can run kubectl get pods --all-namespaces:

Namespace      What it is used for
default        Default namespace for containers
kube-public    Used for cluster communications
kube-system    heapster, kube-dns, kubernetes-dashboard, metrics-server, monitoring-influxdb, telemetry-agent
pks-system     fluent, sink-controller

When you add additional namespaces to the K8s cluster, additional T1 routers are deployed. All of this is manual with traditional K8s clusters, but with PKS it's automatically handled and integrated.
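A quick way to see this for yourself: create a throwaway namespace (for example, kubectl create namespace demo) and a new Tier-1 router following the pks-<cluster UUID>-<namespace> naming pattern above should appear in NSX-T shortly afterward. The demo namespace here is just a hypothetical example.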