Symantec ZTNA

Managing Private Kubernetes Clusters with Secure Access Cloud 

Sep 19, 2019 01:10 PM

Kubernetes (commonly stylized as k8s) is an open-source container-orchestration system for automating application deployment, scaling, and management. It was originally designed by Google, and is now maintained by the Cloud Native Computing Foundation. It aims to provide a "platform for automating deployment, scaling, and operations of application containers across clusters of hosts".

Developers and DevOps engineers managing Kubernetes clusters, and/or applications deployed within these clusters, require access to the Kubernetes controllers and nodes using various methods. While such access can be achieved either via Virtual Private Networks or by exposing the management endpoints of the Kubernetes cluster to the internet, providing Zero Trust Network Access (ZTNA, also known as Software-Defined Perimeter) to these critical management interfaces is the most secure approach. This article explains the basic architecture and the steps required to provide access for managing Kubernetes clusters, without assuming that the accessing party has any network access to the deployed cluster.

While the method supports many additional possibilities for management access, this article focuses on the following three methods:

  1. Management using kubectl utility 
  2. Management using Web Dashboards and REST APIs
  3. Management using SSH connections to the Kubernetes Cluster machines

The diagram below demonstrates an architecture for managing Kubernetes clusters via Secure Access Cloud without bastion servers:

When using this approach, there is no need to manage any bastion hosts or other additional components in the private network. On the other hand, the developers / administrators will need the ability to authenticate directly to the K8S Cluster API server and additional components from the accessing endpoint. Naturally, the accessing parties will first need to be authenticated and authorized via Secure Access Cloud.

This approach allows Kubernetes administrators to execute management commands using interfaces, such as kubectl, locally on their endpoint devices.

This alternative diagram demonstrates an architecture for managing Kubernetes clusters via Secure Access Cloud with bastion servers / utility hosts located in the private networks hosting the clusters:

When considering this approach, the authentication tokens (certificates, user/role tokens, etc.) can reside only within the bastion hosts, and the accessing parties will need to authenticate (and be authorized) to the bastion environment in order to access the internal components. In this approach, the administrators can only run kubectl and similar tools on the bastion hosts.

As a precondition to the following steps, Secure Access Cloud connectors need to be deployed in the private network hosting the Kubernetes clusters. The following guide describes the deployment of the connectors: Deploying Secure Access Cloud Connector as Docker Container

 

Connecting with kubectl

There are two possible approaches when dealing with kubectl connections: connecting via a bastion host or connecting directly from the user's workstation.

 

Connecting via Bastion host

In this configuration, the kubectl utility, as well as the relevant authentication / authorization environment, exists only at the bastion host level. In order to access this environment and perform management operations, the accessing user will need to create an interactive SSH session to the bastion host. To access the bastion host, it needs to be configured as an "SSH Application" in the Secure Access Cloud admin portal, according to the following document: Access to SSH Servers via Luminate
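
Once the bastion host is published as an SSH Application, the session is opened with a standard SSH client and kubectl is executed on the bastion itself. The following is a minimal sketch; the address and user name are hypothetical placeholders, and the actual connection string should be copied from the Secure Access Cloud portal:

    # Open an interactive session to the bastion host through Secure Access Cloud
    # (hypothetical address and user - copy the exact values from the portal)
    ssh ubuntu@bastion-k8s.example.luminatesec.com

    # On the bastion, kubectl uses the credentials that reside only on that host
    kubectl get nodes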

 

Connecting directly from the user's workstation

In order to connect directly from a kubectl utility running on the end user's workstation, the following steps should be taken:

1. The Kubernetes API endpoint port needs to be configured in Secure Access Cloud as a TCP Tunnel, according to the following article. This step needs to be repeated for every Kubernetes cluster being administered. If Kubernetes clusters are being defined dynamically, using the Secure Access Cloud Terraform provider is recommended.

2. The KubeConfig file (usually located at ~/.kube/config) should be modified for each cluster in the following manner (see the sketch after this list):

   i. In the clusters section of the configuration file, the "server" key should be replaced with the local address and port of the tunnel (see the steps below regarding suggestions on local port configuration)
   ii. If the HTTPS certificate issued to the Kubernetes API HTTPS endpoint cannot be changed, modify the hosts file to point the API server's domain to the localhost IP (it doesn't necessarily have to be 127.0.0.1, as long as it is an IP address that the TCP Tunnel is being opened on). Alternatively, modify the KubeConfig file or the kubectl command line to contain the insecure-skip-tls-verify flag. Using the flag in this case is less of a security concern than usual, because Secure Access Cloud performs end-to-end authorization when the TCP Tunnel is created.

3. The TCP Tunnel should be established, as described in the following article (a sketch is shown below). Multiple ports can be selected in case multiple Kubernetes clusters should be accessible. Optionally, the KubeConfig file can be modified using the exec command to open the relevant ports automatically, using the SSH key from the Secure Access Cloud portal.
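
As an illustration of step 2, the cluster entry can be re-pointed at the local end of the tunnel with kubectl config commands. This is a minimal sketch; the cluster name, API server domain and local port below are hypothetical placeholders:

    # Point the cluster entry at the local end of the TCP Tunnel
    # (assumes the tunnel will listen on 127.0.0.1:8443 - adjust to your setup)
    kubectl config set-cluster my-private-cluster --server=https://127.0.0.1:8443

    # Option A: keep certificate validation by mapping the API server's DNS name
    # to the loopback address the tunnel listens on (hypothetical domain)
    echo "127.0.0.1 k8s-api.internal.example.com" | sudo tee -a /etc/hosts
    kubectl config set-cluster my-private-cluster --server=https://k8s-api.internal.example.com:8443

    # Option B: skip certificate validation for this cluster entry instead
    # (remove any certificate-authority entry for the cluster, otherwise kubectl
    # will refuse to combine it with the insecure flag)
    kubectl config set-cluster my-private-cluster --insecure-skip-tls-verify=true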
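
For step 3, the tunnel is a local port-forwarding session opened with the SSH key downloaded from the Secure Access Cloud portal. The exact host, user and key path are shown in the portal for each TCP Tunnel definition; the values below are placeholders only:

    # Open the TCP Tunnel and keep it in the foreground (-N: no remote command)
    # The local port (8443) must match the port used in the KubeConfig file above
    ssh -i ~/.ssh/sac-tunnel-key -N \
        -L 8443:k8s-api.internal.example.com:6443 \
        tunnel-user@tcp.example.luminatesec.com

    # In a second terminal, kubectl now reaches the private API server through the tunnel
    kubectl get nodes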

 

Connecting with Web Dashboards

 

 

Connecting with SSH

 

 
