Start your LocalStack container using your preferred method.
We will demonstrate how you can auto-install an embedded Kubernetes cluster, configure ingress, and deploy a sample service with ECR.

### Deploy the necessary networking components

First, we need to create a VPC for the EKS cluster. You can create a new VPC using the [`CreateVpc` API](https://docs.aws.amazon.com/vpc/latest/APIReference/API_CreateVpc.html).

Run the following command:

```bash title="Create VPC"
awslocal ec2 create-vpc --cidr-block 10.0.0.0/16
```

```bash title="Output"
{
    "Vpc": {
        ...
        "CidrBlock": "10.0.0.0/16",
        "VpcId": "vpc-12345678",
        ...
    }
}
```
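
If you are scripting these steps, you will want to capture the `VpcId` from the response for the later subnet commands. A small parsing sketch (the JSON here is the trimmed example output above, not a live API call):

```python
import json

# Trimmed example response from `awslocal ec2 create-vpc` (illustrative values)
response = json.loads("""
{
    "Vpc": {
        "CidrBlock": "10.0.0.0/16",
        "VpcId": "vpc-12345678"
    }
}
""")

# Extract the VPC ID for use in subsequent create-subnet calls
vpc_id = response["Vpc"]["VpcId"]
print(vpc_id)
```

With the AWS CLI itself, the equivalent is passing `--query 'Vpc.VpcId' --output text` to the `create-vpc` call.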

Next, we need to create subnets in the VPC. You can create two subnets using the [`CreateSubnet` API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CreateSubnet.html). Note that some controllers require extra tags on the subnets to work properly; refer to their documentation for details.

Run the following command:

```bash title="Create Subnet 1"
awslocal ec2 create-subnet \
--vpc-id vpc-12345678 \
--cidr-block 10.0.1.0/24 \
--tag-specifications 'ResourceType=subnet,Tags=[{Key=kubernetes.io/cluster/cluster1,Value=Owned},{Key=kubernetes.io/role/internal-elb,Value=1}]' \
--availability-zone us-east-1a
```

```bash title="Output"
{
    "Subnet": {
        ...
        "SubnetId": "subnet-1",
        "VpcId": "vpc-12345678",
        "CidrBlock": "10.0.1.0/24",
        ...
    }
}
```

```bash title="Create Subnet 2"
awslocal ec2 create-subnet \
--vpc-id vpc-12345678 \
--cidr-block 10.0.2.0/24 \
--tag-specifications 'ResourceType=subnet,Tags=[{Key=kubernetes.io/cluster/cluster1,Value=Owned},{Key=kubernetes.io/role/internal-elb,Value=1}]' \
--availability-zone us-east-1b
```

```bash title="Output"
{
    "Subnet": {
        ...
        "SubnetId": "subnet-2",
        "VpcId": "vpc-12345678",
        "CidrBlock": "10.0.2.0/24",
        ...
    }
}
```
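
The two subnet CIDRs must be non-overlapping sub-ranges of the VPC's `10.0.0.0/16` block. You can sanity-check a subnet plan with Python's `ipaddress` module before issuing the API calls (a standalone sketch, no LocalStack required):

```python
import ipaddress

# The VPC CIDR and the two subnet CIDRs used in the commands above
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = [
    ipaddress.ip_network("10.0.1.0/24"),
    ipaddress.ip_network("10.0.2.0/24"),
]

# Each subnet must fall entirely within the VPC's address range
for subnet in subnets:
    assert subnet.subnet_of(vpc), f"{subnet} is outside {vpc}"

# Subnets within one VPC must not overlap each other
assert not subnets[0].overlaps(subnets[1]), "subnet CIDRs overlap"

print("subnet plan is valid:", [str(s) for s in subnets])
```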

### Create an embedded Kubernetes cluster

The default approach for creating Kubernetes clusters with the local EKS API is to set up an embedded [k3d](https://k3d.io/) cluster within Docker.
:::note
To expose the k3d cluster's load balancer on the host and enable ingress, start LocalStack with the following configuration variable:

```bash
EKS_START_K3D_LB_INGRESS=1
```
:::

You can create a new cluster using the [`CreateCluster` API](https://docs.aws.amazon.com/eks/latest/APIReference/API_CreateCluster.html).

Run the following command:

```bash title="Create Cluster"
awslocal eks create-cluster \
--name cluster1 \
--role-arn "arn:aws:iam::000000000000:role/eks-role" \
--resources-vpc-config '{"subnetIds":["subnet-1", "subnet-2"]}'
```

```bash title="Output"
{
    "cluster": {
        "name": "cluster1",
        "arn": "arn:aws:eks:us-east-1:000000000000:cluster/cluster1",
        "createdAt": "2022-04-13T16:38:24.850000+02:00",
        "roleArn": "arn:aws:iam::000000000000:role/eks-role",
        "resourcesVpcConfig": {
            "subnetIds": [
                "subnet-1",
                "subnet-2"
            ]
        },
        "identity": {
            "oidc": {
                "issuer": "https://localhost.localstack.cloud/eks-oidc"
            }
        },
        ...
    }
}
```

The cluster creation process may take a few moments as LocalStack sets up the necessary components. Avoid attempting to access the cluster until the status changes to `ACTIVE`.

Run the following command to wait for the cluster status to become `ACTIVE`:

```bash title="Wait for Cluster"
awslocal eks wait cluster-active --name cluster1
```
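
Under the hood, the waiter simply polls `DescribeCluster` until the status flips. If you need the same behavior in your own tooling, the polling logic looks roughly like this (a sketch with simulated statuses standing in for real API calls):

```python
import time

def wait_until_active(get_status, attempts=30, delay=1.0):
    """Poll get_status() until it returns ACTIVE, mirroring `eks wait cluster-active`."""
    for _ in range(attempts):
        status = get_status()
        if status == "ACTIVE":
            return status
        if status == "FAILED":
            raise RuntimeError("cluster entered FAILED state")
        time.sleep(delay)
    raise TimeoutError("cluster did not become ACTIVE in time")

# Simulated DescribeCluster statuses (hypothetical; a real check would call the API)
statuses = iter(["CREATING", "CREATING", "ACTIVE"])
print(wait_until_active(lambda: next(statuses), delay=0.0))
```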

:::note
When setting up a local EKS cluster, if you encounter a `"status": "FAILED"` in the command output and see `Unable to start EKS cluster` in LocalStack logs, remove or rename the `~/.kube/config` file on your machine and retry.
For CLI versions before `3.7`, the CLI mounts this file automatically, which leads EKS to assume you intend to use the cluster specified there, a feature with specific requirements.
:::

Once the cluster is up and running, you can see the underlying k3d containers in Docker:

```bash title="Output"
CONTAINER ID   IMAGE                      COMMAND                  CREATED
f05770ec8523   rancher/k3s:v1.21.5-k3s2   "/bin/k3s server --t…"   1 minute ago
...
```

After successfully creating and initializing the cluster, we can find the server endpoint using the [`DescribeCluster` API](https://docs.aws.amazon.com/eks/latest/APIReference/API_DescribeCluster.html).

Run the following command:

```bash title="Describe Cluster"
awslocal eks describe-cluster --name cluster1
```

```bash title="Output"
{
    "cluster": {
        "name": "cluster1",
        "arn": "arn:aws:eks:us-east-1:000000000000:cluster/cluster1",
        "createdAt": "2022-04-13T17:12:39.738000+02:00",
        "endpoint": "https://localhost.localstack.cloud:4511",
        "roleArn": "arn:aws:iam::000000000000:role/eks-role",
        "resourcesVpcConfig": {
            "subnetIds": [
                "subnet-1",
                "subnet-2"
            ]
        },
        "identity": {
            "oidc": {
                "issuer": "https://localhost.localstack.cloud/eks-oidc"
            }
        },
        ...
    }
}
```

### Creating a managed node group

The EKS cluster created in the previous step does not include any worker nodes by default. While you can inspect the server node, it is [tainted](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/), and workloads cannot be scheduled on it. To run workloads on the cluster, you must add at least one worker node. One way to do this is by creating a managed node group. When you create a managed node group, LocalStack automatically provisions a Docker container, joins it to the cluster as a worker node, and creates a mocked EC2 instance to represent it.

You can create a managed node group for your EKS cluster using the [`CreateNodegroup` API](https://docs.aws.amazon.com/eks/latest/APIReference/API_CreateNodegroup.html).

Run the following command:

```bash title="Create Node Group"
awslocal eks create-nodegroup \
--cluster-name cluster1 \
--nodegroup-name nodegroup1 \
--node-role arn:aws:iam::000000000000:role/eks-nodegroup-role \
--subnets subnet-1 subnet-2 \
--scaling-config desiredSize=1
```

```bash title="Output"
{
    "nodegroup": {
        "nodegroupName": "nodegroup1",
        "nodegroupArn": "arn:aws:eks:us-east-1:000000000000:nodegroup/cluster1/nodegroup1/xxx",
        "clusterName": "cluster1",
        "version": "1.21",
        "releaseVersion": "1.21.7-20220114",
        "createdAt": "2022-04-13T17:25:45.821000+02:00",
        "status": "CREATING",
        "capacityType": "ON_DEMAND",
        "scalingConfig": {
            "desiredSize": 1
        },
        "subnets": [
            "subnet-1",
            "subnet-2"
        ],
        "nodeRole": "arn:aws:iam::000000000000:role/eks-nodegroup-role",
        "labels": {},
        "health": {
            "issues": []
        },
        "updateConfig": {
            "maxUnavailable": 1
        }
    }
}
```

The node group creation process may take a few moments as LocalStack sets up the necessary components.

You can wait for the node group status to become `ACTIVE` by running the following command:

```bash title="Wait for Node Group"
awslocal eks wait nodegroup-active --cluster-name cluster1 --nodegroup-name nodegroup1
```

Once the node group is ready, you can list the nodes in your cluster using `kubectl`:
```bash title="Get Nodes"
kubectl get nodes
```

You should see output similar to the following:

```bash title="Output"
NAME                                    STATUS   ROLES                  AGE     VERSION
k3d-cluster1-xxx-agent-nodegroup1-0-0   Ready    <none>                 28s     v1.33.2+k3s1
k3d-cluster1-xxx-server-0               Ready    control-plane,master   2m12s   v1.33.2+k3s1
```

At this point, your EKS cluster is fully operational and ready to deploy workloads.
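
As a quick smoke test, you could deploy a simple workload. A minimal Deployment manifest for that purpose might look like this (the name and image are illustrative, not part of the guide above):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-smoke-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-smoke-test
  template:
    metadata:
      labels:
        app: nginx-smoke-test
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          ports:
            - containerPort: 80
```

Save it as `deployment.yaml`, apply it with `kubectl apply -f deployment.yaml`, and confirm the pod is scheduled on the node group with `kubectl get pods -o wide`.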

### Utilizing ECR Images within EKS

You can now use ECR (Elastic Container Registry) images within your EKS environment.
To showcase this behavior, let's walk through a short step-by-step guide that ends with successfully pulling an image from the local ECR.
For the purpose of this guide, we will retag the `nginx` image, push it to a local ECR repository under a different name, and then use it in a pod configuration.

You can create a new ECR repository using the [`CreateRepository` API](https://docs.aws.amazon.com/AmazonECR/latest/APIReference/API_CreateRepository.html).
Run the following command:

```bash title="Create Repository"
awslocal ecr create-repository --repository-name <repository-name>
```

Now, let us set up the EKS cluster using the image pushed to local ECR.

Next, we can configure `kubectl` to use the EKS cluster with the [`update-kubeconfig`](https://docs.aws.amazon.com/cli/latest/reference/eks/update-kubeconfig.html) command.
Run the following command:

```bash title="Update Kubeconfig"
awslocal eks update-kubeconfig --name cluster1
```