Automate Cluster Autoscaler in EKS

This article walks through automating the creation and configuration of the Cluster Autoscaler and the Horizontal Pod Autoscaler (HPA) in Amazon Elastic Kubernetes Service (EKS).

Amazon Elastic Kubernetes Service (EKS) simplifies the deployment of Kubernetes clusters on AWS by automating the configuration and management of the Kubernetes control plane. This managed service facilitates the seamless execution of containerized applications in a scalable and resilient manner. Among its notable features, EKS boasts the Cluster Autoscaler, a tool designed to dynamically adjust cluster size based on workload demands.

A high-level overview of the Cluster Autoscaler in EKS

In Amazon Elastic Kubernetes Service (EKS), the Cluster Autoscaler plays a pivotal role in ensuring optimal resource utilization within Kubernetes clusters. Operating seamlessly in the background, it dynamically adjusts the cluster size based on real-time workload demands. The process begins with the Cluster Autoscaler continuously monitoring the resource utilization of nodes within the cluster. When it detects that additional capacity is needed, it automatically triggers the scaling process by interacting with the underlying Auto Scaling groups. This involves adding new nodes to the cluster to accommodate increased workload. Conversely, during periods of low demand, the Cluster Autoscaler scales down the cluster by removing underutilized nodes, thereby optimizing costs and improving efficiency.
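For the Cluster Autoscaler to manage a node group, the underlying Auto Scaling group is typically tagged so the autoscaler can discover it (the same tags appear later in this article's --node-group-auto-discovery flag). As a hedged sketch, where <asg-name> and <cluster-name> are placeholders for your own resources, the tags can be added with the AWS CLI:

# Tag the node group's Auto Scaling group so the Cluster Autoscaler can auto-discover it
# (<asg-name> and <cluster-name> are placeholders, not values from this article)
aws autoscaling create-or-update-tags --tags \
  "ResourceId=<asg-name>,ResourceType=auto-scaling-group,Key=k8s.io/cluster-autoscaler/enabled,Value=true,PropagateAtLaunch=false" \
  "ResourceId=<asg-name>,ResourceType=auto-scaling-group,Key=k8s.io/cluster-autoscaler/<cluster-name>,Value=owned,PropagateAtLaunch=false"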

HorizontalPodAutoscaler (HPA)

In conjunction with the Cluster Autoscaler, the Horizontal Pod Autoscaler (HPA) further enhances the scalability of applications in EKS. The HPA monitors the resource utilization of individual pods and dynamically adjusts the number of pod replicas to meet specified performance metrics. As demand increases, the HPA triggers the deployment of additional pod replicas to distribute the workload effectively. Conversely, during periods of reduced demand, it scales down the number of replicas to conserve resources. Together, the Cluster Autoscaler and HPA create a dynamic and responsive environment, ensuring that EKS clusters adapt to changing demands, optimizing resource utilization, and providing a seamless and cost-effective Kubernetes experience.

Below is a sample Kubernetes configuration file for the HPA:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-ingress-controller
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-ingress-controller
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80

The HPA will then scale the number of pods out or in, between 1 and 10 replicas, to target an average CPU utilization of 80% across all pods.
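As a quick check, assuming the manifest above is saved as hpa.yaml (a hypothetical file name), it can be applied and inspected with kubectl:

# Apply the HPA manifest (the file name is an assumption for this example)
kubectl apply -f hpa.yaml

# Show the HPA's current and target CPU utilization and replica count
kubectl get hpa nginx-ingress-controller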

The combination of the Cluster Autoscaler and the Horizontal Pod Autoscaler is an effective way to keep EC2 instance hours tied as closely as possible to the actual utilization of the workloads running in the cluster.

High-level overview of HPA and Cluster Autoscaler in EKS

Let's discuss each step outlined in the above diagram.

Horizontal Pod Autoscaler

1. Metrics Server: The Metrics Server collects vital metrics from individual pods, providing real-time insight into their resource utilization.

2. Horizontal Pod Autoscaler (HPA): Informed by the Metrics Server, the HPA continuously evaluates these metrics against the defined thresholds. If a threshold is exceeded, the HPA triggers a change in the number of pod replicas.

3. Adjusting Pod Replicas: In response to the HPA's decision, the number of pod replicas is scaled up or down to match the current workload; new pods are created or existing ones are terminated accordingly (example commands for observing this follow the list).
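A minimal way to observe this flow, assuming the Metrics Server is installed and the HPA above is in place, is:

# Pod-level CPU and memory as reported by the Metrics Server
kubectl top pods

# Current metrics, thresholds, and recent scaling events for the HPA
kubectl describe hpa nginx-ingress-controller

# Watch the replica count change as load rises or falls
kubectl get deployment nginx-ingress-controller --watch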

Cluster Autoscaler

1. Cluster Autoscaler: If the cluster lacks sufficient resources to schedule the required replicas, the new pods enter a Pending state, which the Cluster Autoscaler detects.

2. Cluster Autoscaler Actions: Recognizing the resource shortfall, the Cluster Autoscaler adjusts the desired capacity on the associated Auto Scaling group.

3. AWS Auto Scaling: AWS Auto Scaling, in turn, responds to the Cluster Autoscaler's directive by provisioning new nodes, ensuring the cluster has the resources needed to schedule the pending pods. This orchestrated process lets the EKS cluster scale efficiently to meet changing workloads (verification commands follow this list).
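A rough way to verify this end-to-end behavior, once the Cluster Autoscaler is deployed (see the manifests later in this article), is to watch pending pods, the Auto Scaling group, and the node count; <asg-name> below is a placeholder:

# Pods stuck in Pending indicate a capacity shortfall the autoscaler should resolve
kubectl get pods --all-namespaces --field-selector=status.phase=Pending

# Desired and current capacity of the node group's Auto Scaling group
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names <asg-name>

# New nodes join the cluster as the Auto Scaling group grows
kubectl get nodes --watch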

Note: This article does not cover Metrics Server deployment. Deploying the Metrics Server is a prerequisite for autoscaling in EKS and should be completed before deploying the Cluster Autoscaler. Refer to the information below for details on the Metrics Server and how to deploy it.

The Kubernetes Metrics Server is an aggregator of resource usage data in your cluster, and it isn't deployed by default in Amazon EKS clusters. The Metrics Server is commonly used by other Kubernetes add-ons, such as the Horizontal Pod Autoscaler or the Kubernetes Dashboard.

Refer to the AWS documentation.
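As a sketch, the upstream Metrics Server manifest is commonly installed and verified as follows; check the AWS documentation above for the version recommended for your cluster:

# Install the upstream Metrics Server manifest
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Confirm the deployment is available and serving metrics
kubectl get deployment metrics-server -n kube-system
kubectl top nodes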

The following CloudFormation snippet defines the IAM policy and role used by the EKS Cluster Autoscaler:

  # Cluster Auto Scaling policy
  ClusterAutoscalingPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      Description: Cluster autoscaling controller policy
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Sid: PermitAutoScaling
            Action:
              - autoscaling:DescribeLaunchConfigurations
              - autoscaling:DescribeAutoScalingGroups
              - autoscaling:DescribeAutoScalingInstances
              - autoscaling:DescribeTags
              - autoscaling:TerminateInstanceInAutoScalingGroup
              - ec2:DescribeLaunchTemplateVersions
              - autoscaling:SetDesiredCapacity
            Resource: '*'

  # EKS Cluster Autoscaler IAM Role
  ClusterAutoscalerRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: "ClusterAutoscaler-Role"
      AssumeRolePolicyDocument:
        Fn::Sub:
          - |
            {
              "Version": "2012-10-17",
              "Statement": [
              {
                "Effect": "Allow",
                "Principal": {
                  "Service": "ec2.amazonaws.com"
                },
                "Action": "sts:AssumeRole"
              },
              {
                "Effect": "Allow",
                "Principal": {
                  "Federated": "${providerarn}"
                },
                "Action": "sts:AssumeRoleWithWebIdentity",
                "Condition": {
                  "StringEquals": {
                    "${clusterid}": "system:serviceaccount:kube-system:cluster-autoscaler"
                  }
                }
              }
              ]
            }
          - clusterid: <>
            providerarn: <>
      Path: /
      ManagedPolicyArns:
        - !Ref ClusterAutoscalingPolicy
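The <> placeholders above are left unfilled: clusterid is typically the cluster's OIDC issuer (without the https:// prefix and with a :sub suffix) and providerarn is the ARN of the matching IAM OIDC identity provider. Assuming the AWS CLI is configured, they can be looked up like this, where <cluster-name> is a placeholder:

# OIDC issuer URL for the cluster; drop "https://" and append ":sub" for the condition key
aws eks describe-cluster --name <cluster-name> --query "cluster.identity.oidc.issuer" --output text

# ARNs of the IAM OIDC identity providers in the account
aws iam list-open-id-connect-providers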

Below is a sample Kubernetes deployment configuration file for the Cluster Autoscaler:

For more information about the Cluster Autoscaler, refer to the Kubernetes autoscaler repository on GitHub.

---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
  name: cluster-autoscaler
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: <>
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
rules:
  - apiGroups: [""]
    resources: ["events", "endpoints"]
    verbs: ["create", "patch"]
  - apiGroups: [""]
    resources: ["pods/eviction"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["pods/status"]
    verbs: ["update"]
  - apiGroups: [""]
    resources: ["endpoints"]
    resourceNames: ["cluster-autoscaler"]
    verbs: ["get", "update"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["watch", "list", "get", "update"]
  - apiGroups: [""]
    resources:
      - "namespaces"
      - "pods"
      - "services"
      - "replicationcontrollers"
      - "persistentvolumeclaims"
      - "persistentvolumes"
    verbs: ["watch", "list", "get"]
  - apiGroups: ["extensions"]
    resources: ["replicasets", "daemonsets"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["policy"]
    resources: ["poddisruptionbudgets"]
    verbs: ["watch", "list"]
  - apiGroups: ["apps"]
    resources: ["statefulsets", "replicasets", "daemonsets"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses", "csinodes", "csidrivers", "csistoragecapacities"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["batch", "extensions"]
    resources: ["jobs"]
    verbs: ["get", "list", "watch", "patch"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["create"]
  - apiGroups: ["coordination.k8s.io"]
    resourceNames: ["cluster-autoscaler"]
    resources: ["leases"]
    verbs: ["get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create", "list", "watch"]
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["cluster-autoscaler-status", "cluster-autoscaler-priority-expander"]
    verbs: ["delete", "get", "update", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-autoscaler
subjects:
  - kind: ServiceAccount
    name: cluster-autoscaler
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cluster-autoscaler
subjects:
  - kind: ServiceAccount
    name: cluster-autoscaler
    namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    app: cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '8085'
    spec:
      priorityClassName: system-cluster-critical
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534
        fsGroup: 65534
      serviceAccountName: cluster-autoscaler
      containers:
        - image: <>  # e.g. registry.k8s.io/autoscaling/cluster-autoscaler:<version matching your cluster>
          name: cluster-autoscaler
          resources:
            limits:
              cpu: 400m
              memory: 800Mi
            requests:
              cpu: 400m
              memory: 800Mi
          command:
            - ./cluster-autoscaler
            - --v=4
            - --stderrthreshold=info
            - --cloud-provider=aws
            - --skip-nodes-with-local-storage=false
            - --expander=least-waste
            - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/<>
          env:
            - name: AWS_REGION
              value: us-east-1
          volumeMounts:
            - name: ssl-certs
              mountPath: /etc/ssl/certs/ca-certificates.crt
              readOnly: true
          imagePullPolicy: "Always"
      volumes:
        - name: ssl-certs
          hostPath:
            path: "/etc/ssl/certs/ca-bundle.crt"

 

Conclusion

The Cluster Autoscaler in Amazon EKS emerges as a vital tool for dynamically optimizing resource utilization and ensuring scalability within Kubernetes clusters. With its ability to seamlessly adjust the cluster size based on workload demands, it enhances efficiency and responsiveness. The collaborative efforts of the Horizontal Pod Autoscaler and the Cluster Autoscaler create a robust ecosystem that adapts to changing requirements. This article has provided insights into its functionality and deployment steps, empowering users to harness the full potential of EKS autoscaling capabilities. As organizations navigate dynamic workloads, the Cluster Autoscaler proves indispensable for maintaining a resilient and cost-effective Kubernetes environment.

We provide consulting, implementation, and management services for DevOps, DevSecOps, DataOps, Cloud, Automated Ops, Microservices, Infrastructure, and Security.

 

Services offered by us: https://www.zippyops.com/services

Our Products: https://www.zippyops.com/products

Our Solutions: https://www.zippyops.com/solutions

For Demo, videos check out YouTube Playlist: https://www.youtube.com/watch?v=4FYvPooN_Tg&list=PLCJ3JpanNyCfXlHahZhYgJH9-rV6ouPro

 

If this seems interesting, please email us at [email protected] for a call.


