What networks does PKS create inside each K8s cluster?

Pivotal Container Service (PKS) provides desired-state management for Kubernetes clusters. It radically simplifies many operational aspects of running Kubernetes (K8s) in production. Out of the box, Kubernetes struggles to provide secure, multi-tenant ingress to clusters; PKS fills this gap through tight integration with NSX-T. A single command creates the K8s master and worker nodes along with all required networking. I wanted to provide a deeper dive into the networks that are created when you issue the following command in PKS:

pks create-cluster my-cluster -e my-cluster.corp.local -p small

This command tells PKS to create a new K8s cluster named my-cluster, with an external hostname of my-cluster.corp.local, using the small plan. Plans are defined as part of the PKS install and can be adjusted at any time; a cluster can also be resized after creation (see the example below the list). The plan denotes the following things:

  • How many master/etcd nodes to deploy and their sizing

  • How many worker nodes to deploy and their sizing
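
Plans control the default node counts, but a running cluster can be resized without recreating it. The following is a minimal sketch (the target worker count of 5 is just an example value), assuming you are already logged in with the PKS CLI:

# Scale my-cluster to 5 worker nodes (example value)
pks resize my-cluster --num-nodes 5

# Watch the resize action complete
pks cluster my-cluster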

 

Checking the cluster with pks cluster my-cluster shows the following details:

[Screenshot: output of "pks cluster my-cluster" showing Name: my-cluster, UUID: 2cea3b0a-b176-43c4-8718-995017170347, Last Action State: succeeded, Last Action Description: Instance provisioning completed, Host: my-cluster.corp.local, Port: 8443, Worker Nodes: 3, and master IP 10.40.14.34]
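
Once the cluster reports a succeeded state, you can pull its credentials and point kubectl at it. A minimal sketch, assuming my-cluster.corp.local resolves to the master address shown above and you are logged in with the PKS CLI:

# Merge the cluster's kubeconfig into ~/.kube/config
pks get-credentials my-cluster

# Verify the API endpoint and the three worker nodes
kubectl cluster-info
kubectl get nodes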

Once you issue the command, the master/etcd and worker nodes are deployed along with all required networking. I'll go into a deeper dive on NSX-T PKS routing in another post, but simply put, several networks are created during cluster creation. All of the networks include the cluster's UUID, so they are simple to track. Searching in NSX-T for the UUID provided the following information:

[Screenshot: NSX-T Logical Routers view filtered by the cluster UUID, showing six Tier-1 routers, including lb-pks-2cea3b0a-b176-43c4-8718-995017170347-cluster-router, pks-2cea3b0a-b176-43c4-8718-995017170347-cluster-router, pks-2cea3b0a-b176-43c4-8718-995017170347-kube-public, pks-2cea3b0a-b176-43c4-8718-995017170347-kube-system, and pks-2cea3b0a-b176-43c4-8718-995017170347-pks-system. All are connected to the t0-pks Tier-0 router, in Active-Standby mode, on the overlay-tz transport zone, edge cluster edge-cluster-1]

 

As you can see, the operation has created several logical routers to handle PKS traffic, including:

  • A T1 router for the K8s master node(s)

  • A T1 router for the load balancer

  • Four T1 routers, one per namespace (namespaces can be listed with kubectl get ns -o wide)
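
If you prefer scripting over the NSX-T UI, the same UUID search can be run against the NSX-T Manager API. This is only a sketch with a few assumptions: nsx-mgr.corp.local is a hypothetical manager hostname, you authenticate as admin, and jq is installed:

# List all logical routers, then keep the ones whose name contains the cluster UUID
curl -sk -u admin 'https://nsx-mgr.corp.local/api/v1/logical-routers' \
  | jq -r '.results[]
           | select(.display_name | contains("2cea3b0a-b176-43c4-8718-995017170347"))
           | [.display_name, .router_type] | @tsv'

Run against the cluster above, this should return the six Tier-1 routers shown in the screenshot.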

 

To see what is running inside each namespace you can run kubectl get pods --all-namespaces; the table below summarizes what each namespace is used for, and per-namespace examples follow it.

Namespace     What it is used for
default       Default namespace for containers/pods that are not assigned a namespace
kube-public   Cluster-wide, publicly readable resources used for cluster communication (for example, cluster-info)
kube-system   heapster, kube-dns, kubernetes-dashboard, metrics-server, monitoring-influxdb, telemetry-agent
pks-system    PKS components: fluent, sink-controller
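
To confirm what sits in an individual namespace from the table, scope kubectl to it:

# System components (DNS, dashboard, metrics, etc.)
kubectl get pods -n kube-system

# PKS-specific components (fluent, sink-controller)
kubectl get pods -n pks-system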

 

When you add namespaces to the K8s cluster, additional T1 routers are deployed. With a traditionally built K8s cluster all of this plumbing is manual, but with PKS it is handled and integrated automatically.
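
For example, creating a new namespace results in a matching Tier-1 router a short time later. The namespace name demo-app is just an illustration, and the expected router name is inferred from the naming pattern in the screenshot above:

# Create a new namespace in the PKS-managed cluster
kubectl create namespace demo-app

# Shortly afterwards, NSX-T should show an additional Tier-1 router named roughly:
#   pks-2cea3b0a-b176-43c4-8718-995017170347-demo-app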

 

 
