
EKS Cluster Requirements

Tetrate Service Express is operated using the Management Plane component, which is installed in an EKS cluster.

Services and applications are deployed in Workload Clusters which are onboarded to the Management Plane.

Tip: You can use an EKS cluster that is dedicated to the Management Plane alone, or you can deploy into a cluster that also functions as a Workload Cluster.

Required Resources

Installation Tools

  • aws: the AWS CLI client, used to manage AWS settings
  • eksctl: a recent build of eksctl, used to manage the EKS cluster
  • kubectl and helm: used to manage services in the cluster and to install components
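
As a quick sanity check, you can confirm that each tool is installed and on your PATH (a minimal sketch; version output will vary with your environment):

  aws --version
  eksctl version
  kubectl version --client
  helm version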

Amazon EKS Cluster Resources

  • Kubernetes: version 1.23 - 1.26
  • Resources: 3 nodes, minimum of 6 vCPUs total, 12 GB memory total

For testing purposes, a minimum of 3 x m5.large nodes is sufficient for a cluster that operates as both management plane and workload cluster. For production environments, more capacity should be provided. Refer to the sizing guidelines for TSB and seek assistance from Tetrate technical support.
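
If you are assessing an existing cluster against these requirements, you can check the node count and allocatable capacity directly (a minimal sketch; the column paths follow the standard Kubernetes node status fields):

  kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory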

Additionally:

  • EBS Container Storage Interface driver: The TSE Management Plane requires the Amazon Elastic Block Store (Amazon EBS) Container Storage Interface (CSI) driver in order to support the persistent volume stores used by PostgreSQL and Elasticsearch. The script below installs this add-on.

  • AWS Load Balancer Controller: Management Plane and Workload Clusters should be configured with the AWS Load Balancer Controller, which replaces the legacy in-tree Kubernetes load balancer controller and integrates better with AWS services. If you are reusing an existing cluster, you can check for both components as shown below.
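
To check whether an existing cluster already has these components (each command returns nothing, or a NotFound error, if the component is missing):

  kubectl get pods -n kube-system | grep ebs-csi
  kubectl get deployment -n kube-system aws-load-balancer-controller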

Create an EKS Cluster

Creating a cluster involves two steps:

  1. Create an EKS Cluster

    You can adapt the following script to create an EKS cluster for test purposes. The test cluster has sufficient capacity to function as both the Management Plane cluster and a Workload Cluster.

    The create-cluster operation can take 10 minutes or more to complete, depending on resource availability in your chosen AWS region and zone.

    create_cluster.sh
    EKS_CLUSTER_NAME=$(whoami)-tse
    REGION=${AWS_DEFAULT_REGION:-us-east-1}
    OWNER=$(whoami)
    ACCOUNT=123456789012  # Replace with your AWS account ID
    K8S_VERSION=1.26

    # Add further tags if necessary
    TAGS=owner=${OWNER},purpose=tse-test-cluster,created=$(date -Iseconds)

    # Create cluster
    # https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html
    eksctl create cluster \
    --name "${EKS_CLUSTER_NAME}" \
    --region "${REGION}" \
    --version "${K8S_VERSION}" \
    --without-nodegroup \
    --tags "${TAGS}"

    # Create node group
    # https://docs.aws.amazon.com/eks/latest/userguide/create-managed-node-group.html
    eksctl create nodegroup \
    --cluster "${EKS_CLUSTER_NAME}" \
    --name "${EKS_CLUSTER_NAME}-ng" \
    --region "${REGION}" \
    --node-type m5.large \
    --nodes 3 \
    --nodes-min 3 \
    --nodes-max 5 \
    --tags "${TAGS}"

    # This addon is required for the persistent volumes used by postgres and elastic
    # https://docs.aws.amazon.com/eks/latest/userguide/managing-ebs-csi.html
    eksctl utils associate-iam-oidc-provider \
    --cluster "${EKS_CLUSTER_NAME}" \
    --region "${REGION}" \
    --approve

    eksctl create addon \
    --cluster "${EKS_CLUSTER_NAME}" \
    --name aws-ebs-csi-driver \
    --region "${REGION}" \
    --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
    --force

    # Export the kube context if needed
    aws eks update-kubeconfig --region "${REGION}" --name "${EKS_CLUSTER_NAME}"

    # The following steps are needed if you choose to install TSE from Tetrate's registry
    HUB=${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com/${EKS_CLUSTER_NAME}
    echo Logging in to ${HUB} ...
    aws ecr get-login-password --region ${REGION} | docker login --username AWS --password-stdin ${HUB}
    echo HUB=${HUB}
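
    After the script completes, you can confirm that the cluster is healthy and the EBS CSI add-on is active (a minimal check, assuming the variables from the script above are still set):

    kubectl get nodes
    eksctl get addon --cluster "${EKS_CLUSTER_NAME}" --region "${REGION}" --name aws-ebs-csi-driver
    kubectl get pods -n kube-system | grep ebs-csi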

    Troubleshooting

    When using this script, the cluster create operation may fail with an error similar to:

    CREATE_FAILED – "Resource handler returned message: \"Cannot create cluster 'NAME' because us-east-1e, the targeted availability zone, does not currently have sufficient capacity to support the cluster

    In this situation, the CloudFormation stacks are not cleaned up after rollback, and subsequent attempts will fail. You can delete the stack from the AWS Console, or via the CLI:

    # List stacks in the ROLLBACK_COMPLETE status
    aws cloudformation list-stacks --stack-status-filter ROLLBACK_COMPLETE

    You should see a stack named eksctl-EKS_CLUSTER_NAME-cluster, where EKS_CLUSTER_NAME has the value declared above. Delete this stack as follows:

    # Delete the eksctl-EKS_CLUSTER_NAME-cluster stack
    aws cloudformation delete-stack --stack-name eksctl-EKS_CLUSTER_NAME-cluster
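
    Stack deletion is asynchronous. If you want to block until it completes before retrying the cluster creation, you can wait on the same stack name (a minimal sketch):

    aws cloudformation wait stack-delete-complete --stack-name eksctl-EKS_CLUSTER_NAME-cluster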

    Make a note of the $HUB value, which identifies a private Amazon ECR registry that can be used by the installation process. You will use this when you copy the TSE images, if you install directly from the Tetrate repository:

    export HUB=${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com/${EKS_CLUSTER_NAME}
  2. Install AWS Load Balancer Controller

    Make sure to set $EKS_CLUSTER_NAME, $REGION and $ACCOUNT correctly.

    curl -O -s https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.7/docs/install/iam_policy.json

    aws iam create-policy \
    --policy-name AWSLoadBalancerControllerIAMPolicy \
    --policy-document file://iam_policy.json

    eksctl create iamserviceaccount \
    --cluster=${EKS_CLUSTER_NAME} \
    --namespace=kube-system \
    --name=aws-load-balancer-controller \
    --region=${REGION} \
    --attach-policy-arn=arn:aws:iam::${ACCOUNT}:policy/AWSLoadBalancerControllerIAMPolicy \
    --approve

    sleep 5  # Allow the new service account to propagate before installing the controller

    helm repo add eks https://aws.github.io/eks-charts
    helm repo update

    helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
    -n kube-system \
    --set clusterName=${EKS_CLUSTER_NAME} \
    --set serviceAccount.create=false \
    --set serviceAccount.name=aws-load-balancer-controller

    Verify that the software is installed and running:

    kubectl get deployment -n kube-system aws-load-balancer-controller
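
    The output should show the controller deployment with its replicas ready, similar to the following (values will vary):

    NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
    aws-load-balancer-controller   2/2     2            2           2m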

    For full details, refer to the AWS Load Balancer Controller documentation.