
VMware PKS 1.3 What’s New

VMware PKS 1.3 has just been released, and it contains plenty of new features and functionality. In this post, I would like to describe and highlight some of the new capabilities.

Azure Support

The release of PKS 1.3 is a major milestone for VMware and Pivotal as it includes support for all major public cloud providers. In addition to GCP and AWS, PKS 1.3 can now deploy and manage Kubernetes clusters on Azure. Here is a great video from Pivotal’s Dan Baskett demonstrating PKS on Microsoft Azure.

Kubernetes v1.12

Alongside tons of new features, PKS 1.3 also comes with an updated Kubernetes version. The Kubernetes version shipped as part of PKS 1.3 is v1.12.4. If you want to learn more about Kubernetes 1.12, check out the video below from the Cloud Native Computing Foundation.

You can find a complete list of the Kubernetes 1.12 features on GitHub here.

New Features and Components

In the table below, you can see which components and versions are included in or compatible with PKS 1.3.

[Table: components and versions included in or compatible with PKS 1.3]

Besides Azure support and an updated Kubernetes version, I want to highlight the following new features:

  • BOSH Backup and Restore (BBR) for single-master clusters.
  • Custom and Routable pod networks on NSX-T.
  • Large size NSX-T load balancers with Bare Metal NSX-T edge nodes.
  • Creating sink resources with the PKS CLI (see the short example below the list).
  • Multiple NSX-T Tier-0 (T0) logical routers for use with PKS multi-tenant environments.
  • Multiple PKS foundations on the same NSX-T.
  • Scaling down the number of worker nodes.
  • Defining the CIDR range for Kubernetes pods and services on Flannel networks.
  • Harbor 1.7.1 support.
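
Sink resources forward log entries from a cluster to a syslog destination. As a rough sketch of what creating one with the PKS CLI looks like (the cluster name and endpoint are placeholders, and the exact argument syntax may differ slightly in your version, so check the sink resources documentation):

# Log in to the PKS API first (API endpoint and credentials are placeholders)
pks login -a api.pks.example.com -u admin -p 'PASSWORD' --ca-cert ./pks-ca.crt

# Attach a syslog sink to an existing cluster (example destination)
pks create-sink my-cluster syslog-tls://logs.example.com:6514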

A complete list of the new features in PKS 1.3 can be found here.

Let’s have a look at some of those features in more detail. The one feature that caught my eye immediately was BOSH Backup and Restore (BBR).

BOSH Backup and Restore

Besides backing up the PKS control plane, we can now back up and restore single-master Kubernetes clusters with stateless workloads. This is facilitated by the BBR toolset, which can be downloaded here.
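
To give an impression of what a cluster backup with the standalone bbr CLI looks like, here is a minimal sketch; the BOSH Director address, credentials, CA certificate path, and the service-instance deployment name are all placeholders, and the full procedure is described in the PKS and BBR documentation:

# Back up a single-master Kubernetes cluster deployment through the BOSH Director
# (all values below are placeholders)
BOSH_CLIENT_SECRET='DIRECTOR-CLIENT-SECRET' \
bbr deployment \
  --target 10.0.0.3 \
  --username bbr_client \
  --ca-cert ./director_root_ca.crt \
  --deployment service-instance_aaaa-bbbb-cccc \
  backup

# Restore the backup artifact to the same deployment
BOSH_CLIENT_SECRET='DIRECTOR-CLIENT-SECRET' \
bbr deployment \
  --target 10.0.0.3 \
  --username bbr_client \
  --ca-cert ./director_root_ca.crt \
  --deployment service-instance_aaaa-bbbb-cccc \
  restore --artifact-path ./service-instance_aaaa-bbbb-cccc_TIMESTAMP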

As this is a very important and interesting topic for a lot of customers, I have published a separate blog post about it. Please have a look at VMware PKS 1.3 Backup and Recovery.

Custom and Routable pod networks on NSX-T

Routable pod IP addresses can be specified during cluster creation with network profiles. This enables specialized workloads that need direct access to the pods and provides better traceability. For more details, have a look at this VMware blog post here.

Additionally, you can override the default PKS pod network and specify a custom IP address range with a custom subnet size. This comes in very handy if you are running out of capacity on the pod network or need to specify smaller subnets.
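
To give an idea of what this looks like in practice, here is a minimal sketch of a network profile; the parameter names (pod_ip_block_ids, pod_subnet_prefix) reflect the PKS network profile schema as I understand it, and the UUID, names, and hostname are placeholders:

# Hypothetical network profile pointing at a routable NSX-T IP block
# and requesting a custom /26 pod subnet size
cat > np-routable-pods.json <<'EOF'
{
  "name": "np-routable-pods",
  "description": "Routable pods with a custom subnet size",
  "parameters": {
    "pod_ip_block_ids": ["ebe78a74-a5d5-4dde-ba76-9cf4067eee55"],
    "pod_subnet_prefix": 26
  }
}
EOF

# Register the profile and reference it at cluster creation time
pks create-network-profile np-routable-pods.json
pks create-cluster k8s-routable --external-hostname k8s-routable.example.com \
  --plan small --network-profile np-routable-pods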

Large size NSX-T Load Balancer

With PKS 1.3, you can now specify large NSX-T load balancers for increased throughput and scale. The “large” load balancer needs to be backed by a bare-metal NSX-T Edge node, and the load balancer size is specified via network profiles. Small and medium load balancers backed by NSX-T Edge VMs were already available in PKS 1.2.
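
As far as I can tell, this boils down to a single parameter in the network profile; a minimal sketch (the profile name and description are arbitrary):

# Network profile requesting a large NSX-T load balancer
# ("lb_size" accepts small, medium, or large)
cat > np-large-lb.json <<'EOF'
{
  "name": "np-large-lb",
  "description": "Large NSX-T load balancer for high-throughput clusters",
  "parameters": { "lb_size": "large" }
}
EOF
pks create-network-profile np-large-lb.json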

Multiple NSX-T Tier-0 logical routers

We can now make use of multiple NSX-T T0 routers, which is especially useful for organizations or service providers that need network isolation between tenants. VMware’s Merlin Glynn explains and demonstrates the multiple-T0 support in the following video.

Multiple PKS Foundations on the same NSX-T

With this release, we can deploy multiple PKS foundations on a single NSX-T installation, each using its own NSX-T T0 router. This can be useful in many scenarios; one example is separate environments such as Dev/Test, Pre-Prod, and Production.

[Diagram: multiple PKS foundations on a single NSX-T installation]

Scaling Down

Scaling down Kubernetes clusters is a very useful feature as it allows for efficient resource utilization by giving back unused resources. Scaling up was always possible with PKS, but the option to scale down was missing until now. Platform Reliability Engineers can now manage infrastructure capacity for their Kubernetes clusters much more efficiently by scaling worker nodes both up and down.
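
Resizing is driven through the PKS CLI; a quick sketch, with the cluster name and node count as examples:

# Check the current worker node count of a cluster
pks cluster my-cluster

# Scale the cluster down to three worker nodes
pks resize my-cluster --num-nodes 3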

If you want to learn more about scaling in PKS 1.3, have a look at my VMware PKS 1.3 Scale K8s Clusters blog post.

Conclusion

VMware PKS 1.3 offers a lot of new possibilities for Platform Reliability Engineers and new capabilities for Developers/Application Owners. PKS 1.3 can manage Kubernetes clusters on all major public cloud providers (GCP, AWS, and Azure) as well as on vSphere, the leading virtualization platform. Additionally, it comes with new network design features that allow for more choice and flexibility for custom requirements. All in all, it is a great release, and I would like to encourage everyone to try it out. If you want to learn about the upgrade to PKS 1.3, please have a look at my VMware PKS 1.3 Upgrade blog post.
