Create and Manage Kubernetes Clusters on Premises and Connect any Kubernetes Cluster to AWS

Amazon EKS Anywhere & EKS Connector

Gokul Chandra

--

Amazon EKS Anywhere is a new deployment option for Amazon EKS that enables users to easily create and operate Kubernetes clusters on-premises. EKS Anywhere is built on EKS Distro, the Kubernetes distribution used by Amazon Elastic Kubernetes Service, which was announced back in 2020.

EKS Anywhere provides an installable software package for creating and operating Kubernetes clusters on-premises, along with automation tooling for cluster lifecycle support. EKS already provides several deployment options for Kubernetes clusters managed and hosted by AWS: EKS on AWS, EKS on AWS Wavelength (an infrastructure offering optimized for mobile edge computing applications), and EKS on AWS Local Zones (an extension of an AWS Region in geographic proximity to end-users).

EKS on AWS Outposts was one option that let users run Kubernetes in their own data centers, but on AWS infrastructure. With EKS Anywhere, users can deploy Kubernetes on their own infrastructure, managed by the customer, with a consistent AWS management experience in their data center.

AWS EKS Deployment Options

EKS Anywhere can create and manage Kubernetes clusters on multiple providers. Currently, creating development clusters locally with Docker (for testing only) and production clusters using VMware vSphere are supported. As per the documentation, other deployment targets will be added in the future, including bare metal support in 2022 (users can deploy EKS Anywhere on any supported OS).

EKS Anywhere clusters are created and managed by the eksctl CLI, with the EKS Anywhere plugin added. While a cluster is running, most EKS Anywhere administration can be done using kubectl or other native Kubernetes tools.

eksctl — anywhere extension

The EKS Anywhere vSphere provider is implemented based on the Kubernetes Cluster API Provider vSphere (CAPV) specification.

Cluster Creation — Stages

Cluster creation involves two steps: generating a configuration file template using the eksctl anywhere generate command and adding the required information to the template, followed by the eksctl anywhere create cluster command, which runs a sequence of steps starting with validating the vSphere assets and ending with a functional workload cluster. Users can run these commands from a macOS or Linux host with Docker installed on it; this host is termed the admin machine.
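A minimal sketch of these two steps (the cluster name and file name here are illustrative):

    # Generate a cluster configuration template for the vSphere provider
    eksctl anywhere generate clusterconfig prod-cluster --provider vsphere > eksa-cluster.yaml

    # After filling in the vSphere details in the template, create the cluster
    eksctl anywhere create cluster -f eksa-cluster.yaml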

Cluster creation starts with authenticating to vSphere and validating the assets there, then creates a temporary bootstrap cluster to direct the workload cluster creation. Once the workload cluster is created, the cluster management resources are moved to the workload cluster and the local bootstrap cluster is deleted.

With a fully functional workload cluster, users can use the eksa-controller and the specific CRDs provisioned on it to manage the cluster itself.

Steps involved in a successful cluster creation process:

EKS Anywhere Cluster Creation Steps

A bootstrap cluster is created using the kind command to build a single-node Kubernetes bootstrap cluster on the admin machine. This includes pulling the kind node image, preparing the node, writing the configuration, starting the control plane, installing the CNI, and installing the StorageClass on the kind (Kubernetes in Docker) based single-node cluster.

The sequence of steps from creating a configuration file template to a functional bootstrap cluster on the admin machine:

EKS Anywhere — Bootstrap

Once the temporary bootstrap cluster is functional on the admin machine, it starts creating a workload cluster (creating virtual machines for the control plane, etcd, and nodes) using the vSphere provider and EKS-D as the Kubernetes distro. Once infrastructure provisioning is done, the CNI (Cilium is the only supported CNI today) and storage (vSphere CSI) are added to the workload cluster. The bootstrap cluster then moves the CAPI objects over to the workload cluster so it can take over the management of itself, followed by deletion of the bootstrap cluster on the admin machine.

EKS Anywhere — Bootstrap

At this point, the workload cluster is ready to use, both to run workloads and to accept requests to change, update, or upgrade the cluster itself. Users can continue to use eksctl anywhere to manage the cluster, with EKS Anywhere handling the fact that CAPI management is now being fulfilled from the workload cluster instead of the bootstrap cluster.

Configuration

Each EKS Anywhere cluster is built from a cluster specification file, with the structure of the configuration file based on the target provider for the cluster. Currently, VMware vSphere is the recommended provider for supported EKS Anywhere clusters in production. EKS Anywhere uses EKS-D (EKS Distro) for bootstrapping the Kubernetes cluster.

As in the CAPV and CAPI specifications, there is a global cluster configuration with references to machine configs. With EKS Anywhere, this configuration is condensed and simplified: users provide a base configuration and EKS Anywhere translates it into the required specs.

General CAPV — CAPI objects and relation:

CAPV and CAPI Resource Relation and ClusterCtl

In EKS Anywhere, instead of using CAPI directly with the clusterctl command to manage the workload cluster, users can use the eksctl anywhere command which abstracts that process, including calling clusterctl under the covers.

Cluster configuration includes: CNI parameters (pod subnet and service subnet CIDRs); controlPlaneConfiguration (number of control plane nodes, endpoint VIP for HA) with a reference to a machine group for the control plane machine configuration; DatacenterConfig, which includes the vSphere URL, the network in which the machines should be deployed, a thumbprint for self-signed certs, etc.; the Kubernetes version; and workerNodeGroupConfiguration (number of Kubernetes nodes) with a reference to a machine group for the worker node configuration.

There are two types of etcd topologies supported for configuring a cluster: stacked, where etcd members and control plane components are colocated (run on the same nodes/machines), and unstacked/external, where etcd members have dedicated machines and are not colocated with control plane components.

There are other optional configurations that can be included: OIDC, etcd, proxy, and GitOps. A sketch of the overall spec follows the figure below.

EKS Anywhere — Configuration
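The sketch below shows the general shape of such a cluster spec; all values are illustrative, and the field names follow the EKS Anywhere v1alpha1 API at the time of writing:

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: Cluster
    metadata:
      name: prod-cluster               # illustrative cluster name
    spec:
      clusterNetwork:
        cni: cilium
        pods:
          cidrBlocks: ["192.168.0.0/16"]
        services:
          cidrBlocks: ["10.96.0.0/12"]
      controlPlaneConfiguration:
        count: 3
        endpoint:
          host: "10.0.0.10"            # control plane VIP
        machineGroupRef:
          kind: VSphereMachineConfig
          name: prod-cluster-cp
      externalEtcdConfiguration:       # optional: unstacked/external etcd
        count: 3
        machineGroupRef:
          kind: VSphereMachineConfig
          name: prod-cluster-etcd
      datacenterRef:
        kind: VSphereDatacenterConfig
        name: prod-cluster
      kubernetesVersion: "1.21"
      workerNodeGroupConfigurations:
      - count: 3
        machineGroupRef:
          kind: VSphereMachineConfig
          name: prod-cluster-worker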

Each VSphereMachineConfig refers to the host configuration of the control plane, worker node, and etcd hosts. Apart from the resource parameters (CPU, memory, disk), other configuration sections include the operating system on the virtual machines (permitted values: ubuntu, bottlerocket; default: bottlerocket), vSphere resource pools for the VMs, the folder to group the machines under, and SSH public keys. A minimal sketch is shown after the figures below.

EKS Anywhere — Configuration
EKS Anywhere — Configuration
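A minimal VSphereMachineConfig sketch (the paths, sizes, and names here are illustrative):

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: VSphereMachineConfig
    metadata:
      name: prod-cluster-cp
    spec:
      numCPUs: 2
      memoryMiB: 8192
      diskGiB: 25
      osFamily: bottlerocket                  # or ubuntu
      datastore: "/Datacenter/datastore/ds1"  # illustrative vSphere paths
      resourcePool: "/Datacenter/host/cluster-01/Resources"
      folder: "/Datacenter/vm/eks-a"
      users:
      - name: ec2-user
        sshAuthorizedKeys:
        - "ssh-rsa AAAA..."                   # user-provided public key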

Operating System — OVAs and Management

EKS Anywhere today supports two operating system families: Ubuntu and Bottlerocket (the default). Users can use the released OVAs from the artifacts page or build them with a custom base image.

OVA files must be imported into vSphere before they can be used as OVF templates in an EKS Anywhere cluster. Users can import and deploy the OVA files using the vSphere console or the govc CLI.
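For example, with govc (assuming GOVC_URL and credentials are exported in the environment; the datastore, folder, and OVA location are illustrative):

    govc import.ova -ds=ds1 -folder=/Datacenter/vm/Templates <path-or-url-to-ova>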

OVA — Tagging

The installer identifies which OVA to use for installation based on the Kubernetes specification in the cluster configuration and two tags on the template: os (os:bottlerocket, os:ubuntu) and eksdRelease (eksdRelease:kubernetes-1-20-eks-*).

Users can use the vSphere console or the govc CLI to create tag categories (govc tags.category.create -t VirtualMachine os) and associate tags with the templates (govc tags.attach os:ubuntu <Template Path>).
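A sketch of the full tagging flow with govc (the category names follow the pattern above; the release tag value and template path are illustrative):

    # Create tag categories applicable to virtual machines
    govc tags.category.create -t VirtualMachine os
    govc tags.category.create -t VirtualMachine eksdRelease

    # Create the tags and attach them to the imported template
    govc tags.create -c os os:ubuntu
    govc tags.create -c eksdRelease eksdRelease:kubernetes-1-20-eks-4
    govc tags.attach os:ubuntu /Datacenter/vm/Templates/ubuntu-k8s-1.20
    govc tags.attach eksdRelease:kubernetes-1-20-eks-4 /Datacenter/vm/Templates/ubuntu-k8s-1.20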

OVA — Tagging
OVA — Tagging

Sample bottlerocket template with associated tags:

OVA — Tags

If a template is not defined in the cluster spec file, EKS Anywhere will use the proper default one for the Kubernetes minor version and OS family specified in the spec file. If the template doesn't exist in the vSphere environment, an appropriate OVA will be imported into vSphere and the necessary tags added.

Cluster Creation Process — In Detail

Like Cluster API, EKS Anywhere runs a kind cluster on the local admin machine to act as a bootstrap cluster. A CLI utility container is deployed on the admin machine, which installs a single-node kind cluster and performs other actions such as installing the CNI, StorageClass, etc. on the bootstrap cluster. The bootstrap cluster also uses EKS-D for Kubernetes (all images come from EKS-D).

EKS Anywhere — Bootstrap Cluster — Creation

The bootstrap cluster includes all CAPI and CAPV components.

EKS Anywhere — Bootstrap Cluster — Pods

CRDs (custom resource definitions) on the bootstrap cluster:

EKS Anywhere — Bootstrap Cluster — CRDs

EKS Anywhere formulates the cluster configuration and creates all the CAPI & CAPV objects on the bootstrap cluster. For example, the output of vsphereclusters, clusters, etcdadmclusters, machines, vspheremachines, and vspherevms is shown below:

EKS Anywhere — Bootstrap Cluster — CAPI & CAPV Custom Objects
EKS Anywhere — Bootstrap Cluster — CAPI & CAPV Custom Objects
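These objects can be listed on the bootstrap cluster with standard kubectl queries; a sketch, assuming the bootstrap kubeconfig path shown here (the path is illustrative):

    # Point kubectl at the temporary bootstrap (kind) cluster
    export KUBECONFIG=./prod-cluster/generated/prod-cluster.kind.kubeconfig
    kubectl get clusters,vsphereclusters,etcdadmclusters -A
    kubectl get machines,vspheremachines,vspherevms -A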

With all the configuration above and using the vSphere provider, all the virtual machines required to host the control plane, etcd, and worker nodes are created in the target vSphere environment. The kubeadm bootstrap provider uses EKS Distro to bootstrap the Kubernetes cluster.

vSphere — Creation of Virtual Machines

All actions such as configuring the network, storage and other aspects of the virtual machine are handled by the automation.

vSphere — Creation of Virtual Machines

Once cluster creation is complete and a few validations have passed, users can access the workload cluster using the kubeconfig written to a folder named after the cluster on the admin machine.

EKS Anywhere — Workload Cluster
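For example, assuming a cluster named prod-cluster (the file name pattern follows the EKS Anywhere documentation):

    export KUBECONFIG=${PWD}/prod-cluster/prod-cluster-eks-a-cluster.kubeconfig
    kubectl get nodes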

The bootstrap cluster then pushes all the CAPI & CAPV components to the workload cluster; with this, users can use the CRDs and the eksa-controller running in the workload cluster to manage the cluster itself.

GitOps — Cluster Management and Deploying Applications

GitOps can be enabled during cluster creation: EKS Anywhere will automatically commit the cluster configuration to the provided GitHub repository and install a GitOps toolkit (Flux) on the cluster, which watches that committed configuration file.
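GitOps is enabled by referencing a GitOpsConfig object from the cluster spec; a minimal sketch, with the repository owner and name as placeholders:

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: Cluster
    metadata:
      name: prod-cluster
    spec:
      # ...provider, control plane, and worker configs as shown earlier...
      gitOpsRef:
        kind: GitOpsConfig
        name: prod-gitops
    ---
    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: GitOpsConfig
    metadata:
      name: prod-gitops
    spec:
      flux:
        github:
          personal: true
          owner: <github-username>       # placeholder
          repository: <repository-name>  # placeholder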

Users can then manage the cluster by making changes to the version-controlled cluster configuration file and committing them. Once a change is detected by the GitOps controller running in the cluster, the cluster is adjusted to match the committed configuration file.

Flux components are deployed in the flux-system namespace; the Flux component configuration files are also committed to the repository and can be managed using GitOps.

EKS Anywhere — GitOps — Flux

Once the cluster creation is complete, the git repository is cloned to the cluster folder on the admin machine.

EKS Anywhere — GitOps

Cluster and flux-system configuration committed to the git repository:

EKS Anywhere — GitOps

The version-controlled cluster configuration file committed to the git repository:

EKS Anywhere — GitOps

Flux component configuration files committed to the git repository:

EKS Anywhere — GitOps

Users can deploy applications using the same repository by adding folders with Kubernetes manifests and a Kustomization file, as sketched after the figure below.

EKS Anywhere — GitOps
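A minimal sketch of such a layout (the folder and application names are illustrative):

    clusters/prod-cluster/...       # cluster configuration committed by EKS Anywhere
    apps/hello/deployment.yaml      # user-managed application manifests
    apps/hello/kustomization.yaml

    # apps/hello/kustomization.yaml
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
    - deployment.yaml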

EKS Connector

Amazon Elastic Kubernetes Service (Amazon EKS) now allows users to connect any conformant Kubernetes cluster to AWS and visualize it in the Amazon EKS console. EKS Connector is a new capability that allows users to securely connect any Kubernetes cluster to the EKS console (not limited to EKS Anywhere); an eks-connector agent installed on the connected cluster communicates with AWS through a secure data channel using Session Manager.

The cluster can be registered in multiple ways: using the AWS CLI, SDK, eksctl, or the console. When registering a cluster through eksctl or the console, a manifest file is auto-populated, whereas additional manual steps are required when using the AWS CLI and SDK.
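For example, with eksctl or the AWS CLI (the cluster name, region, and role ARN here are placeholders):

    # eksctl auto-populates the connector manifest for the cluster
    eksctl register cluster --name my-onprem-cluster --provider OTHER --region us-west-2

    # The AWS CLI route requires applying the connector manifest manually afterwards
    aws eks register-cluster --name my-onprem-cluster \
      --connector-config roleArn=arn:aws:iam::111122223333:role/eks-connector-agent,provider=OTHER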

EKS Connector — Cluster Registration

Regardless of where the cluster is running, users can use the Amazon EKS console to view all connected clusters and the Kubernetes resources running on them.

The EKS console now includes a 'Register' option alongside 'Create cluster'.

EKS Connector — Cluster Registration

The cluster registration process involves two steps: registering the cluster with Amazon EKS and applying a connector YAML manifest file in the target cluster to enable connectivity. The cluster configuration requires a unique name for the cluster:

EKS Connector — Cluster Registration

Users can choose a specific Kubernetes provider from the list of available providers or choose ‘Other’ for a generalized configuration.

EKS Connector — Cluster Registration

An eks-connector-agent IAM role with permission to access Systems Manager ssmmessages is required. This role is used by the EKS Connector agent on the Kubernetes cluster to connect to the Systems Manager service on AWS; it can be scoped down further to include only the EKS cluster name if needed.

EKS Connector — IAM
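A sketch of the agent role's permissions policy; the exact resource scoping should follow the AWS documentation, and as noted above the Resource can be narrowed (for example, to a specific EKS cluster ARN):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "ssmmessages:CreateControlChannel",
            "ssmmessages:CreateDataChannel",
            "ssmmessages:OpenControlChannel",
            "ssmmessages:OpenDataChannel"
          ],
          "Resource": "*"
        }
      ]
    }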

Amazon EKS uses the service-linked role named AWSServiceRoleForAmazonEKSConnector; this role allows Amazon EKS to connect Kubernetes clusters.

The Connector service-linked role allows Amazon EKS to call other services to set up the resources required for connecting a cluster to Amazon EKS. This includes managing activation in Systems Manager for the EKS Connector agent and receiving events from EventBridge whenever the EKS Connector agent is registered or deregistered with the Systems Manager service.

EKS Connector — IAM

Users can download the YAML manifest by following the console instructions, or fetch it directly from the link in the documentation when using the CLI. This manifest contains the configuration for the EKS Connector and a proxy agent, which are deployed as a StatefulSet on the target Kubernetes cluster.

EKS Connector — YAML

When the manifest is applied to a Kubernetes cluster, the EKS Connector agent connects to the Systems Manager service, which sends notifications to EKS through Amazon EventBridge.

EKS Connector — Sequence of Registration Steps

The event emitted on EventBridge when the EKS Connector agent is registered or deregistered with the Systems Manager service:

EKS Connector — Event Bridge

EKS Connector acts as a proxy and forwards the EKS console requests to the Kubernetes API server on the connected cluster, so the connector's service account needs to be associated with an EKS Connector role that gives permission to impersonate AWS IAM entities.

EKS Connector — IAM

Here users can grant full access (all resources) or restricted access (specific namespaces or resources) by replacing references to %IAM_ARN% with the Amazon Resource Name (ARN) of a specific IAM user or role, as sketched after the figure below.

EKS Connector — RBAC — Kubernetes
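A minimal sketch of the read-only RBAC pattern (the role names are illustrative; the subject name is where %IAM_ARN% is substituted):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: eks-connector-console-read    # illustrative name
    rules:
    - apiGroups: ["", "apps"]
      resources: ["nodes", "namespaces", "pods", "deployments", "daemonsets", "statefulsets"]
      verbs: ["get", "list"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: eks-connector-console-read
    subjects:
    - kind: User
      name: "arn:aws:iam::111122223333:user/console-user"  # replaces %IAM_ARN%
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: ClusterRole
      name: eks-connector-console-read
      apiGroup: rbac.authorization.k8s.io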

With the above service account and RBAC in place on the target cluster, users can visualize the workloads from the EKS console. AWS Systems Manager invokes a proxy to the agent container (ssm-agent), and the proxy container uses the service account to impersonate the user.

EKS Connector — View Request from EKS Console to Kubernetes Cluster

The ‘Overview’ section shows all the nodes in the cluster. As of now, all the objects are read-only and the user cannot edit or delete an object in the registered cluster.

EKS Console — Cluster Visualization

The ‘Workloads’ section displays all objects of type Deployment, DaemonSet, and StatefulSet. Users can select these objects to get a pod-level overview. As of today, other objects in the cluster (like services, ingresses, secrets, etc.) are not available for visualization.

EKS Console — Cluster Visualization

The eks-connector is deployed as a StatefulSet and consists of two containers — amazon-ssm-agent (connector-agent) and eks-connector (connector-proxy).

EKS Console — Cluster Visualization

eks-connector on a connected cluster:

EKS Connector on Kubernetes Cluster

The EKS Connector agent enables connectivity to AWS, while the proxy agent interacts with Kubernetes to serve AWS requests.

EKS Connector on Kubernetes Cluster — SSM
EKS Connector on Kubernetes Cluster

connector-agent container — eks-connector pod:

EKS Connector on Kubernetes Cluster — connector-agent

connector-proxy container — eks-connector pod:

EKS Connector on Kubernetes Cluster — connector-proxy

Amazon EKS leverages the AWS Systems Manager agent to connect to AWS services. The SSM configuration (registration key, fingerprint, instance-id, etc.) is stored as a secret in the eks-connector namespace.

EKS Connector on Kubernetes Cluster — SSM Secret

Systems Manager uses the ssmmessages endpoint to make calls from the SSM Agent (agent container) to Session Manager. The connector configuration file includes the SSM agent parameters and logging configuration.

EKS Connector — Cluster Configuration

AWS Systems Manager hybrid activations are created by the eks-connector agent; this provides secure access to the AWS Systems Manager service from the target cluster.

AWS Systems Manager — Hybrid Activations
AWS Systems Manager — Hybrid Activations

The service-linked-role (service-linked roles are predefined by the service and include all the permissions that the service requires to call other AWS services on the user’s behalf) is used to create sessions and command executions on the ssm-agent on the target cluster.

EKS Connector — Service Linked Role

Sessions in Session Manager:

AWS Session Manager

Local Cluster for Testing

EKS Anywhere supports a Docker provider for development and testing use cases only. This allows users to try EKS Anywhere on a local system before deploying to a supported provider. It is implemented using CAPD (Cluster API Provider Docker), which uses Docker installed on the host to bootstrap a multi-node kind (Kubernetes in Docker) cluster with EKS Distro.

Similar to vSphere, users can run the eksctl anywhere generate clusterconfig command with the provider flag set to docker to create a config file:

EKS Anywhere — Cluster Configuration — Docker Provider
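A sketch of the Docker provider flow (the cluster name is illustrative):

    eksctl anywhere generate clusterconfig dev-cluster --provider docker > dev-cluster.yaml
    eksctl anywhere create cluster -f dev-cluster.yaml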

Here, based on the capacity of the underlying host, users can edit the node count and even create an HA kind cluster.

As with the other provider, a temporary bootstrap cluster is created first, which then creates the Docker-based workload cluster.

EKS Anywhere — Docker

The bootstrap cluster now includes all the components and CRDs of CAPI and CAPD.

EKS Anywhere — Docker

Once the workload cluster is created, the bootstrap kind cluster is deleted from the host, enabling users to operate the workload cluster using the kubeconfig file written to the host.

EKS Anywhere — Docker

Cluster Integrations and Partners

EKS Anywhere offers AWS support for certain third-party vendor components, namely Ubuntu LTS, Cilium, and Flux. It also provides flexibility for users to integrate with their choice of tools in other areas, as they do today with any Kubernetes environment.

Some of the tools in this list are shown below; these are not covered by the EKS Anywhere support subscription, but users can use the documentation to get an idea of how to implement them on EKS Anywhere clusters.

EKS Anywhere — Integrations

Partner integrations include various products under different categories like Monitoring & Logging, Cost Management, DevOps, Security & Networking, etc.

EKS Anywhere and EKS Connector are part of a clear play for businesses embracing hybrid cloud and private infrastructure setups. Beyond hybrid cloud, they can play a significant role in building a concrete telco cloud portfolio for 5G enablement and edge computing. Both EKS Anywhere and EKS Connector are still new (initial releases), and based on the documentation, users can expect many more capabilities in future releases.
