# AWS - EKS

This is a POC of deploying an EKS stack on AWS, and some apps in it.

It uses Terraform to build the EKS cluster (a single node only, to keep costs down), and a second Terraform configuration to deploy a couple of nginx pods in the cluster.

## How?

### Before anything

Make sure to have a valid AWS account with the right permissions & policies.

Permissions required:

* AmazonEC2FullAccess
* IAMFullAccess
* AmazonEKSClusterPolicy
* AmazonVPCFullAccess
* AmazonEKSServicePolicy

Required policy:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "eks:*",
            "Resource": "*"
        }
    ]
}
```
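
If some of these are missing, they can also be attached from the CLI; a quick sketch, assuming an IAM user named `infra-test`, the JSON above saved as `eks-policy.json`, and a placeholder account id (all three are illustrative):

```sh
# Custom EKS policy from the JSON above
$ aws iam create-policy --policy-name eks-admin --policy-document file://eks-policy.json
$ aws iam attach-user-policy --user-name infra-test \
    --policy-arn arn:aws:iam::123456789012:policy/eks-admin

# AWS managed policies listed above
$ aws iam attach-user-policy --user-name infra-test \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
# ... repeat for IAMFullAccess, AmazonEKSClusterPolicy, AmazonVPCFullAccess, AmazonEKSServicePolicy
```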

Once your IAM user is created, create the profile accordingly:

```sh
$ aws configure --profile infra-test
AWS Access Key ID [None]: AKxxxxxxxxxxxxxxxx
AWS Secret Access Key [None]: zWVxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: eu-west-3
Default output format [None]: json
```

For all the next commands, make sure the `AWS_PROFILE` environment variable is set to your profile name:

```sh
$ export AWS_PROFILE=infra-test
```
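
To quickly check that the profile is picked up, `aws sts get-caller-identity` should return your user's ARN (output elided here):

```sh
$ aws sts get-caller-identity
...
```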

### State space initialization

This section can be skipped; if so, make sure to disable the other projects' `init.tf`, for instance as shown below.
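
One simple way to do that, assuming each project keeps its backend definition in an `init.tf` at its root (paths are illustrative):

```sh
# Rename the backend definitions so terraform falls back to local state
$ mv eks/init.tf eks/init.tf.disabled
$ mv k8s/init.tf k8s/init.tf.disabled
```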

The `state` infra will create an S3 bucket & a DynamoDB table used to store the terraform state (and its locks).

```sh
$ cd state
$ terraform init
$ terraform plan -var "aws_profile=$AWS_PROFILE" -out tf.plan
$ terraform apply tf.plan
...
```
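
To double-check the backend resources exist before moving on (the bucket name matches the listing further down; the DynamoDB table name depends on the `state` configuration):

```sh
$ aws s3 ls | grep terraform-state
$ aws dynamodb list-tables
...
```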

### First: EKS

Like any terraform deployment:

```sh
$ cd eks
$ terraform init
$ terraform plan -var "aws_profile=$AWS_PROFILE" -out tf.plan
$ terraform apply tf.plan
...
aws_eks_cluster.eks_cluster: Creation complete after 9m33s [id=eks-cluster-prod]
...
Apply complete! Resources: 4 added, 0 changed, 2 destroyed.

Outputs:

cluster_name = "eks-cluster-prod"
region = "eu-west-3"

$
```

Note that creating the initial EKS cluster will take up to 20 minutes in total (~10 minutes for the EKS cluster itself, ~10 minutes to provision the nodes).

Once the cluster is built, make sure to configure your `.kube/config`:

```sh
$ terraform output
cluster_name = "eks-cluster-prod"
region = "eu-west-3"

$ aws eks --region $(terraform output -raw region) update-kubeconfig --name $(terraform output -raw cluster_name)
Added new context arn:aws:eks:eu-west-3:123456789012:cluster/eks-cluster-prod to /home/mycroft/.kube/config

$ kubectl get pods -A
NAMESPACE     NAME                      READY   STATUS    RESTARTS   AGE
kube-system   aws-node-689lb            1/1     Running   0          111s
kube-system   coredns-9b5d74bfb-b652h   1/1     Running   0          5m20s
kube-system   coredns-9b5d74bfb-z6p6v   1/1     Running   0          5m20s
kube-system   kube-proxy-xg5cp          1/1     Running   0          111s
```

### Second: Apps.

Once EKS is deployed and kubectl is correctly configured, we can continue by deploying our app.

```sh
$ cd ../k8s
$ terraform init
# By default, it will install nginx; To disable it, use prod's workspace:
# $ terraform workspace new prod
$ terraform plan -out tf.plan
$ terraform apply tf.plan
...
Apply complete! Resources: 3 added, 0 changed, 1 destroyed.
```

As a result, let's verify our stuff got deployed:

```sh
$ kubectl get pods --namespace testaroo-default
NAME                    READY   STATUS    RESTARTS   AGE
alpine                  1/1     Running   0          5m3s
nginx-98cf9b965-l785s   1/1     Running   0          5m3s
nginx-98cf9b965-smpkr   1/1     Running   0          5m3s

$ kubectl get deploy -n testaroo-default nginx -o wide
NAME    READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS        IMAGES   SELECTOR
nginx   2/2     2            2           5m46s   nginx-container   nginx    app=Nginx

$ kubectl get svc -n testaroo-default -o wide
NAME       TYPE           CLUSTER-IP     EXTERNAL-IP                                                               PORT(S)        AGE   SELECTOR
nginx-lb   LoadBalancer   172.20.0.75    a3a176133964a463db33dafb6c6e06a1-480398782.eu-west-3.elb.amazonaws.com   80:30512/TCP   57s   app=Nginx
nginx-np   NodePort       172.20.227.6   <none>                                                                    80:31234/TCP   57s   app=Nginx
```

And now, as the `default` workspace was deployed, it is possible to switch to prod's:

```sh
$ terraform workspace new prod
$ terraform plan -out tf-prod.plan
$ terraform apply tf-prod.plan
$ kubectl get ns
NAME               STATUS   AGE
default            Active   18m
kube-node-lease    Active   18m
kube-public        Active   18m
kube-system        Active   18m
testaroo-default   Active   3m10s
testaroo-prod      Active   14s

$ kubectl get pods -n testaroo-prod
NAME     READY   STATUS    RESTARTS   AGE
alpine   1/1     Running   0          39s
```

No `nginx` for `prod`'s workspace, as it was disabled!
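
To switch back and forth between the two workspaces afterwards, the usual terraform workspace commands apply:

```sh
$ terraform workspace list
  default
* prod

$ terraform workspace select default
Switched to workspace "default".
```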

After using workspaces, it is possible to check the state files in S3:

```sh
$ aws s3 ls terraform-state-infra-aws-eks
                           PRE env:/
                           PRE global/

$ aws s3 ls terraform-state-infra-aws-eks/global/s3/
2022-02-19 16:29:43      33800 terraform.eks.tfstate
2022-02-19 16:40:25      18754 terraform.k8s.tfstate

$ aws s3 ls terraform-state-infra-aws-eks/env:/prod/global/s3/
2022-02-19 16:43:03       8392 terraform.k8s.tfstate
```

### Reaching the app.

#### Using the NodePort

The node addresses are not exposed through `terraform output`. However, it is possible to retrieve the nodes' public IPs using the AWS CLI:

```sh
$ CLUSTER_IP=$(aws ec2 describe-instances \
    --filters "Name=tag:k8s.io/cluster-autoscaler/eks-cluster-prod,Values=owned" \
              "Name=instance-state-name,Values=running" \
    --query "Reservations[*].Instances[*].PublicIpAddress" \
    --output text | head -1)
$ echo ${CLUSTER_IP}
52.47.91.179

$ curl http://$CLUSTER_IP:31234/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
```

#### Using the LoadBalancer

This approach is simpler, as we only need to retrieve the created LoadBalancer's external address, either using `kubectl` or `terraform output`:

```sh
$ kubectl get svc -n testaroo nginx-lb
NAME       TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)        AGE
nginx-lb   LoadBalancer   172.20.149.132   a34059e68106b41a292730b5defe734b-581837320.eu-west-3.elb.amazonaws.com   80:31698/TCP   3m50s

$ terraform output
lb-address = "a34059e68106b41a292730b5defe734b-581837320.eu-west-3.elb.amazonaws.com"
```

The service should be reachable directly using it:

```sh
$ curl http://$(terraform output -raw lb-address):80/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
```

### Reaching a node's ssh port

Still using the node IP retrieved earlier with the AWS CLI (`$CLUSTER_IP`), just:

```sh
$ ssh -i ~/.ssh/ec2-terraform.pem -l ec2-user $CLUSTER_IP
Last login: Fri Feb 11 13:21:00 2022 from xxxx.wanadoo.fr

       __|  __|_  )
       _|  (     /   Amazon Linux 2 AMI
      ___|\___|___|

# docker ps | grep nginx
cc3aafd1a6ec   nginx                                                                   "/docker-entrypoint.…"   25 minutes ago   Up 25 minutes   k8s_nginx-container_nginx-98cf9b965-l785s_testaroo_e5ebf304-e156-4f6d-b00f-0f5dad0a9445_0
f4b998b0558e   nginx                                                                   "/docker-entrypoint.…"   25 minutes ago   Up 25 minutes   k8s_nginx-container_nginx-98cf9b965-smpkr_testaroo_eebe1868-fc5e-425e-948a-ce2cc2f2633e_0
14113cac359b   602401143452.dkr.ecr.eu-west-3.amazonaws.com/eks/pause:3.1-eksbuild.1   "/pause"                 25 minutes ago   Up 25 minutes   k8s_POD_nginx-98cf9b965-l785s_testaroo_e5ebf304-e156-4f6d-b00f-0f5dad0a9445_0
c8c252673fbb   602401143452.dkr.ecr.eu-west-3.amazonaws.com/eks/pause:3.1-eksbuild.1   "/pause"                 25 minutes ago   Up 25 minutes   k8s_POD_nginx-98cf9b965-smpkr_testaroo_eebe1868-fc5e-425e-948a-ce2cc2f2633e_0
```

### Going into a container

```sh
$ kubectl get pods -n testaroo alpine
NAME     READY   STATUS    RESTARTS   AGE
alpine   1/1     Running   0          29m

$ kubectl exec -ti -n testaroo alpine -- ps auxw
PID   USER     TIME  COMMAND
    1 root      0:00 sh -c while true; do sleep 3600; done
    7 root      0:00 sleep 3600
    8 root      0:00 ps auxw

$ kubectl exec -ti -n testaroo alpine -- sh
/ # echo "hello world"
hello world
/ #
```

## Todo:

* Move roles in the dedicated env;

## Notes

### Got an AWS error?

Decode it using `aws sts decode-authorization-message --encoded-message`.
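
For instance, feeding it the encoded blob from the error and pretty-printing the result (the `--query`/`jq` part is optional and assumes `jq` is installed):

```sh
$ aws sts decode-authorization-message \
    --encoded-message "<encoded blob copied from the error>" \
    --query DecodedMessage --output text | jq .
```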