


Update an AWS EKS Cluster for Windows Nodes and Cloudmersive Private Cloud CLI Only
11/30/2025 - Cloudmersive Support


0. Prereqs (one‑time on your machine)

You need:

  • AWS CloudShell (or a Linux/macOS shell) with:

    • aws CLI v2
    • kubectl
  • IAM permissions to:

    • eks:DescribeCluster, eks:ListAddons, eks:UpdateAddon, eks:CreateNodegroup, eks:ListNodegroups
    • iam:GetRole, iam:CreateRole, iam:AttachRolePolicy, iam:ListAttachedRolePolicies
    • eks:UpdateClusterConfig (only if you plan to change auth mode / access entries)

You do not need eksctl or the AWS console.
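
Before starting, a quick optional sanity check (assumes a Bash-like shell):

# Optional: confirm the tools and credentials are in place
aws --version                 # expect aws-cli/2.x
kubectl version --client      # any recent client version is fine
aws sts get-caller-identity   # confirms your AWS credentials resolve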

Important Auto Mode note
Official EKS docs say: “EKS Auto Mode does not support Windows nodes”.
That means Auto Mode can’t manage Windows nodes itself. This guide adds standard EKS managed node groups for Windows that happily coexist with Auto Mode (or a fully standard cluster). Managed node groups and Karpenter both support Windows even when Auto Mode is enabled.


1. Set fixed variables (names, region, sizes)

Run this once in your shell and adjust only the obvious bits (region, existing cluster name, sizes):

# Region + existing cluster
export AWS_REGION=us-east-1
export CLUSTER_NAME=my-existing-eks-cluster

# New Windows node IAM role + nodegroup
export WINDOWS_NODE_ROLE_NAME=eksWindowsNodeRole
export WINDOWS_NODEGROUP_NAME=eks-windows-ng

# (Optional) new Linux nodegroup + IAM role
# Only used if you *don’t* already have Linux nodes or Fargate
export LINUX_NODE_ROLE_NAME=eksLinuxNodeRole
export LINUX_NODEGROUP_NAME=eks-linux-ng

# Instance types & sizes
export LINUX_INSTANCE_TYPE=t3.medium
export WINDOWS_INSTANCE_TYPE=m5.large

export LINUX_DESIRED_SIZE=2
export WINDOWS_DESIRED_SIZE=2

# Simple max sizes
export LINUX_MAX_SIZE=4
export WINDOWS_MAX_SIZE=4

We’ll refer only to these variables from now on.


2. Discover cluster details & basic checks

2.1 Confirm cluster exists and is ACTIVE

aws eks describe-cluster \
  --region $AWS_REGION \
  --name $CLUSTER_NAME \
  --query "cluster.{Status:status,Version:version,Endpoint:endpoint}" \
  --output table

You should see Status = ACTIVE. If not, fix that before continuing.
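
If the cluster is mid-update, you can optionally block until it reports ACTIVE instead of polling by hand:

# Optional: wait for the cluster to become ACTIVE
aws eks wait cluster-active \
  --region $AWS_REGION \
  --name $CLUSTER_NAME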

2.2 Capture cluster role + subnets + caller info

# Cluster IAM role ARN/name
export CLUSTER_ROLE_ARN=$(aws eks describe-cluster \
  --region $AWS_REGION \
  --name $CLUSTER_NAME \
  --query "cluster.roleArn" \
  --output text)

export CLUSTER_ROLE_NAME=${CLUSTER_ROLE_ARN##*/}

# All subnets the cluster is using (space‑separated)
export CLUSTER_SUBNET_IDS=$(aws eks describe-cluster \
  --region $AWS_REGION \
  --name $CLUSTER_NAME \
  --query "cluster.resourcesVpcConfig.subnetIds" \
  --output text)

# Account + caller ARNs
export ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
export CALLER_ARN=$(aws sts get-caller-identity --query "Arn" --output text)

2.3 Check whether EKS Auto Mode compute is enabled

export EKS_AUTOMODE=$(aws eks describe-cluster \
  --region $AWS_REGION \
  --name $CLUSTER_NAME \
  --query "cluster.computeConfig.enabled" \
  --output text 2>/dev/null || echo "None")

echo "EKS Auto Mode compute enabled? $EKS_AUTOMODE"

Interpretation:

  • True → cluster has EKS Auto Mode compute in addition to anything else.
  • False / None → cluster is effectively standard mode only.

Either way this guide still works, but:

  • Auto Mode nodes will stay Linux‑only.
  • Windows nodes you add below are standard EKS managed node groups, not Auto Mode nodes.

3. Hook up kubectl to this cluster

aws eks update-kubeconfig \
  --region $AWS_REGION \
  --name $CLUSTER_NAME

kubectl get svc

You should at least see the kubernetes service.


4. Check authentication mode (aws‑auth vs access entries)

EKS can authenticate via:

  • CONFIG_MAP – legacy aws-auth only
  • API_AND_CONFIG_MAP – both access entries and aws-auth
  • API – access entries only; aws-auth is ignored

Get the mode:

export AUTH_MODE=$(aws eks describe-cluster \
  --region $AWS_REGION \
  --name $CLUSTER_NAME \
  --query "cluster.accessConfig.authenticationMode" \
  --output text 2>/dev/null || echo "CONFIG_MAP")

echo "Cluster authentication mode: $AUTH_MODE"

Guidance:

  • CONFIG_MAP / API_AND_CONFIG_MAP → you have an aws-auth ConfigMap you can edit.
  • API → no aws-auth; EKS uses access entries only.

We’ll account for both paths later.


5. Pre‑req: ensure Linux or Fargate data plane is present

The EKS Windows support documentation requires at least one Linux node or Fargate Pod in the cluster to run CoreDNS and other Linux-only system pods.

5.1 Check Linux nodes

kubectl get nodes -o wide

# Just Linux nodes:
kubectl get nodes -o wide \
  --selector=kubernetes.io/os=linux

If you see at least one Linux node, good.

5.2 Check Fargate profiles (optional)

aws eks list-fargate-profiles \
  --cluster-name $CLUSTER_NAME \
  --region $AWS_REGION

If:

  • No Linux nodes AND no Fargate profiles, you must either:

    • create a Linux managed node group (see Step 8, optional), or
    • create a Fargate profile.

For most clusters you’ll already have Linux nodes; in that case you’ll skip Step 8. A combined check is sketched below.
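
If you’d rather run a single check than eyeball the two commands above, here is a minimal sketch (assumes the Step 1 variables are exported and kubectl is already pointed at the cluster):

# Count Linux nodes and Fargate profiles, then report whether Step 8 is needed
LINUX_NODE_COUNT=$(kubectl get nodes --selector=kubernetes.io/os=linux --no-headers 2>/dev/null | wc -l)
FARGATE_PROFILE_COUNT=$(aws eks list-fargate-profiles \
  --region $AWS_REGION \
  --cluster-name $CLUSTER_NAME \
  --query "length(fargateProfileNames)" \
  --output text)

if [ "$LINUX_NODE_COUNT" -eq 0 ] && [ "$FARGATE_PROFILE_COUNT" -eq 0 ]; then
  echo "No Linux nodes or Fargate profiles found - do Step 8 before adding Windows nodes."
else
  echo "Linux/Fargate data plane present - you can skip Step 8."
fi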


6. Enable Windows support in the control plane

We now implement the official “Enable Windows support” steps for an existing cluster.

6.1 Ensure AmazonEKSVPCResourceController policy is attached to the cluster role

Docs require this policy on the cluster IAM role for Windows IPAM.

Show what’s attached:

aws iam list-attached-role-policies \
  --role-name $CLUSTER_ROLE_NAME \
  --query "AttachedPolicies[].PolicyName" \
  --output table

If AmazonEKSVPCResourceController is missing, attach it:

aws iam attach-role-policy \
  --role-name $CLUSTER_ROLE_NAME \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSVPCResourceController

(Attaching it twice is harmless; the call is idempotent and simply succeeds if the policy is already attached.)
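
If you prefer to attach only when the policy is actually missing, a small sketch:

# Attach AmazonEKSVPCResourceController only if it isn't already on the cluster role
if ! aws iam list-attached-role-policies \
      --role-name $CLUSTER_ROLE_NAME \
      --query "AttachedPolicies[].PolicyName" \
      --output text | grep -qw AmazonEKSVPCResourceController; then
  aws iam attach-role-policy \
    --role-name $CLUSTER_ROLE_NAME \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSVPCResourceController
fi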


6.2 Enable Windows IPAM in Amazon VPC CNI

Officially you must set enable-windows-ipam for the VPC CNI so the controller can allocate IPs for Windows pods.

There are two possible setups:

  • VPC CNI as an EKS managed add‑on (common today).
  • Self‑managed VPC CNI (older clusters / custom setup).

If you have the add‑on and try to edit the ConfigMap directly, EKS will periodically overwrite your changes. The correct way is via update-addon configuration.

6.2.1 Detect whether VPC CNI is an EKS add‑on

export HAS_VPC_CNI_ADDON=$(aws eks list-addons \
  --region $AWS_REGION \
  --cluster-name $CLUSTER_NAME \
  --query "length(addons[?@=='vpc-cni'])" \
  --output text)

echo "VPC CNI EKS add-on present? $HAS_VPC_CNI_ADDON"

If this prints 1 → the VPC CNI is a managed add‑on.
If it prints 0 → the VPC CNI is self‑managed.


6.2.2 If HAS_VPC_CNI_ADDON=1 (managed add‑on)

Update the add‑on config to enable Windows IPAM:

aws eks update-addon \
  --region $AWS_REGION \
  --cluster-name $CLUSTER_NAME \
  --addon-name vpc-cni \
  --resolve-conflicts OVERWRITE \
  --configuration-values '{"enableWindowsIpam":"true"}'

This is the recommended way to set the flag when using the add‑on.
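
update-addon is asynchronous. If you want to wait for it and confirm the configuration landed, an optional sketch:

# Optional: wait for the add-on update to finish, then show status and configuration
aws eks wait addon-active \
  --region $AWS_REGION \
  --cluster-name $CLUSTER_NAME \
  --addon-name vpc-cni

aws eks describe-addon \
  --region $AWS_REGION \
  --cluster-name $CLUSTER_NAME \
  --addon-name vpc-cni \
  --query "addon.{Status:status,Config:configurationValues}" \
  --output table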


6.2.3 If HAS_VPC_CNI_ADDON=0 (self‑managed CNI)

Use the ConfigMap approach from the Windows docs:

cat > vpc-resource-controller-configmap.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: amazon-vpc-cni
  namespace: kube-system
data:
  enable-windows-ipam: "true"
EOF

kubectl apply -f vpc-resource-controller-configmap.yaml
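
You can confirm the flag is set:

# Should print "true"
kubectl get configmap amazon-vpc-cni -n kube-system \
  -o jsonpath='{.data.enable-windows-ipam}{"\n"}'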

6.3 (Optional) Enable prefix delegation for higher Windows pod density

If you want more pods per Windows node, you can also set enable-windows-prefix-delegation: "true" in the same ConfigMap/add‑on configuration.

Example (ConfigMap‑based):

cat > vpc-resource-controller-configmap.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: amazon-vpc-cni
  namespace: kube-system
data:
  enable-windows-ipam: "true"
  enable-windows-prefix-delegation: "true"
EOF

kubectl apply -f vpc-resource-controller-configmap.yaml

For the add‑on case, you’d instead include it in --configuration-values, e.g.:

aws eks update-addon \
  --region $AWS_REGION \
  --cluster-name $CLUSTER_NAME \
  --addon-name vpc-cni \
  --resolve-conflicts OVERWRITE \
  --configuration-values '{"enableWindowsIpam":"true","enableWindowsPrefixDelegation":"true"}'

7. Create IAM role for Windows nodes

We’ll create a dedicated IAM role for the Windows managed node group (you could reuse an existing worker role, but keeping Windows separate is usually cleaner).

7.1 Common node trust policy

cat > eks-node-role-trust-policy.json <<EOF
{
  "Version":"2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      }
    }
  ]
}
EOF

7.2 Create Windows node IAM role

aws iam create-role \
  --role-name $WINDOWS_NODE_ROLE_NAME \
  --assume-role-policy-document file://eks-node-role-trust-policy.json

aws iam attach-role-policy \
  --role-name $WINDOWS_NODE_ROLE_NAME \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy

aws iam attach-role-policy \
  --role-name $WINDOWS_NODE_ROLE_NAME \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPullOnly

aws iam attach-role-policy \
  --role-name $WINDOWS_NODE_ROLE_NAME \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy

(Optionally you can also attach AmazonSSMManagedInstanceCore if you want SSM Session Manager access.)
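
If you do want SSM access, the attach looks the same as the others:

# Optional: allow SSM Session Manager access to the Windows nodes
aws iam attach-role-policy \
  --role-name $WINDOWS_NODE_ROLE_NAME \
  --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore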

Grab the ARN:

export WINDOWS_NODE_ROLE_ARN=$(aws iam get-role \
  --role-name $WINDOWS_NODE_ROLE_NAME \
  --query "Role.Arn" \
  --output text)

8. (Optional but important) Create a Linux node group if you don’t already have Linux/Fargate

If in Step 5 you saw no Linux nodes and no Fargate profiles, you must create a small Linux node group so CoreDNS and other Linux‑only system pods have somewhere to run.

If you already have Linux nodes, skip this entire step.

8.1 Linux node IAM role

aws iam create-role \
  --role-name $LINUX_NODE_ROLE_NAME \
  --assume-role-policy-document file://eks-node-role-trust-policy.json

aws iam attach-role-policy \
  --role-name $LINUX_NODE_ROLE_NAME \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy

aws iam attach-role-policy \
  --role-name $LINUX_NODE_ROLE_NAME \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPullOnly

aws iam attach-role-policy \
  --role-name $LINUX_NODE_ROLE_NAME \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy

Grab the ARN:

export LINUX_NODE_ROLE_ARN=$(aws iam get-role \
  --role-name $LINUX_NODE_ROLE_NAME \
  --query "Role.Arn" \
  --output text)

8.2 Linux managed node group

Use the cluster’s existing subnets:

aws eks create-nodegroup \
  --region $AWS_REGION \
  --cluster-name $CLUSTER_NAME \
  --nodegroup-name $LINUX_NODEGROUP_NAME \
  --node-role $LINUX_NODE_ROLE_ARN \
  --subnets $CLUSTER_SUBNET_IDS \
  --scaling-config minSize=1,maxSize=$LINUX_MAX_SIZE,desiredSize=$LINUX_DESIRED_SIZE \
  --instance-types $LINUX_INSTANCE_TYPE \
  --ami-type AL2023_x86_64_STANDARD \
  --disk-size 20 \
  --capacity-type ON_DEMAND

Wait for it to become active:

aws eks wait nodegroup-active \
  --region $AWS_REGION \
  --cluster-name $CLUSTER_NAME \
  --nodegroup-name $LINUX_NODEGROUP_NAME

Check that Linux nodes joined:

kubectl get nodes -o wide \
  --selector=kubernetes.io/os=linux

9. Kubernetes auth model for Windows nodes (aws‑auth vs Access Entries)

This step is mostly verification; for managed node groups, EKS automatically handles most of the mapping for you.

  • If you’re using CONFIG_MAP / API_AND_CONFIG_MAP, EKS still maintains an aws-auth ConfigMap.
  • If you’re using API, EKS uses access entries only; for Windows managed node groups it will automatically create an EC2_WINDOWS access entry when the node group is created.

We’ll come back and verify after the node group exists (Step 11). No action required right now.


10. Create the Windows managed node group

Now we actually add Windows capacity to the existing cluster.

10.1 Create Windows managed node group

aws eks create-nodegroup \
  --region $AWS_REGION \
  --cluster-name $CLUSTER_NAME \
  --nodegroup-name $WINDOWS_NODEGROUP_NAME \
  --node-role $WINDOWS_NODE_ROLE_ARN \
  --subnets $CLUSTER_SUBNET_IDS \
  --scaling-config minSize=$WINDOWS_DESIRED_SIZE,maxSize=$WINDOWS_MAX_SIZE,desiredSize=$WINDOWS_DESIRED_SIZE \
  --instance-types $WINDOWS_INSTANCE_TYPE \
  --ami-type WINDOWS_CORE_2022_x86_64 \
  --disk-size 80 \
  --capacity-type ON_DEMAND

(You can swap WINDOWS_CORE_2019_x86_64 if you specifically need Server 2019.)

Wait until it’s active:

aws eks wait nodegroup-active \
  --region $AWS_REGION \
  --cluster-name $CLUSTER_NAME \
  --nodegroup-name $WINDOWS_NODEGROUP_NAME

10.2 Verify Windows nodes are registered

kubectl get nodes -o wide
kubectl get nodes -o wide \
  --selector=kubernetes.io/os=windows

You should see your new nodes with OS = windows, ARCH = amd64.
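
If the nodes show up but aren’t Ready yet, you can optionally wait for them:

# Optional: block until all Windows nodes report Ready (up to 10 minutes)
kubectl wait --for=condition=Ready node \
  --selector=kubernetes.io/os=windows \
  --timeout=600s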


11. Verify auth mappings for Windows nodes

11.1 If AUTH_MODE is CONFIG_MAP or API_AND_CONFIG_MAP

Check aws-auth:

kubectl get configmap aws-auth -n kube-system -o yaml

Look for an entry under data.mapRoles for your Windows node role ($WINDOWS_NODE_ROLE_ARN) that includes the eks:kube-proxy-windows group, as required for DNS resolution on Windows.

It should look like this (example):

mapRoles: |
  - groups:
    - system:bootstrappers
    - system:nodes
    - eks:kube-proxy-windows  # required for Windows kube-proxy
    rolearn: arn:aws:iam::111122223333:role/eksWindowsNodeRole
    username: system:node:{{EC2PrivateDNSName}}
  ...

If the Windows role is missing or the eks:kube-proxy-windows group isn’t there, you can:

kubectl edit configmap aws-auth -n kube-system

…and add a block like the snippet above for $WINDOWS_NODE_ROLE_ARN (this uses the editor inside CloudShell; still CLI‑only).

Note: For managed node groups, EKS often populates this mapping for you automatically; the docs just require you to verify and fix if needed.


11.2 If AUTH_MODE is API (no aws-auth)

In this mode, EKS uses access entries only. For Windows managed node groups, the docs say:

“Create a new node role for use with Windows instances, and EKS will automatically create an access entry of type EC2_WINDOWS.”

You can confirm the access entry exists:

aws eks list-access-entries \
  --region $AWS_REGION \
  --cluster-name $CLUSTER_NAME \
  --query "accessEntries[?@=='$WINDOWS_NODE_ROLE_ARN']" \
  --output text

This should print the Windows node role ARN (list-access-entries returns principal ARNs only).
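
To confirm the entry’s type, describe it; the type should be EC2_WINDOWS:

aws eks describe-access-entry \
  --region $AWS_REGION \
  --cluster-name $CLUSTER_NAME \
  --principal-arn $WINDOWS_NODE_ROLE_ARN \
  --query "accessEntry.type" \
  --output text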

If the entry is missing for some reason, you can create it manually:

aws eks create-access-entry \
  --region $AWS_REGION \
  --cluster-name $CLUSTER_NAME \
  --principal-arn $WINDOWS_NODE_ROLE_ARN \
  --type EC2_WINDOWS

12. (Optional) Windows test workload

Same idea as the companion new‑cluster guide: deploy a simple IIS pod pinned to Windows nodes.

cat > windows-iis-demo.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: windows-iis-demo
  labels:
    app: windows-iis-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: windows-iis-demo
  template:
    metadata:
      labels:
        app: windows-iis-demo
    spec:
      nodeSelector:
        kubernetes.io/os: windows
        kubernetes.io/arch: amd64
      containers:
      - name: iis
        image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: windows-iis-demo-svc
spec:
  type: LoadBalancer
  selector:
    app: windows-iis-demo
  ports:
  - port: 80
    targetPort: 80
EOF

kubectl apply -f windows-iis-demo.yaml

Then:

kubectl get pods -o wide
kubectl get svc windows-iis-demo-svc

Once the service has an external hostname/IP, curl it or open it in a browser and confirm you’re hitting IIS running in a Windows container.
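
When you’re done testing, you can remove the demo (deleting the Service also removes the load balancer it created):

kubectl delete -f windows-iis-demo.yaml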


13. Next Steps

You are now ready to install Cloudmersive Private Cloud into this cluster.
