[AWS] AWS Load Balancer Controller and Ingress are Installed by Terraform Helm Provider on EKS.

Posted on September 13, 2022 (updated May 20, 2024) by nim

Contents

  • 1) Introduction to all Ingress
  • 2) AWS Load Balancer Controller
    • 2.1) Introduction
    • 2.2) Installing AWS Load Balancer Controller with Terraform
  • 3) Ingress Basics
    • 3.1) Introduction
  • 4) Ingress Context Path based Routing
    • 4.1) Introduction
  • 5) How to Route Traffic From ALB to Pod IPs directly.
    • Direct to Pod IPs (IP Mode)
    • Node-Level Networking (NodePort)
  • 6) Create ALB, then create service on K8S (EKS)
      • Summary
  • failed calling webhook “vingress.elbv2.k8s.aws”.
    • context deadline exceeded
    • tls: failed to verify certificate: x509: certificate signed by unknown authority
  • The health check of Target Group is Unhealthy
  • AWS Load Balancer Controller find my subnet in Amazon EKS

1) Introduction to all Ingress

2) AWS Load Balancer Controller

2.1) Introduction

This is the structure when you use the AWS Load Balancer Controller.
Note that the Service you create must be of type NodePort for this case to work.

2.2) Installing AWS Load Balancer Controller with Terraform

We have the file:
c4-01-lbc-datasources.tf

# Datasource: AWS Load Balancer Controller IAM Policy get from aws-load-balancer-controller/ GIT Repo (latest)
data "http" "lbc_iam_policy" {
  url = "https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json"

  # Optional request headers
  request_headers = {
    Accept = "application/json"
  }
}

output "lbc_iam_policy" {
  value = data.http.lbc_iam_policy.body
}

First, it downloads iam_policy.json via the `http` data source.

Next, you create the IAM Policy, create a role with an assume-role (trust) policy, and attach the Policy to the Role.

c4-02-lbc-iam-policy-and-role.tf


# Resource: Create AWS Load Balancer Controller IAM Policy 
resource "aws_iam_policy" "lbc_iam_policy" {
  name        = "${local.name}-AWSLoadBalancerControllerIAMPolicy"
  path        = "/"
  description = "AWS Load Balancer Controller IAM Policy"
  policy = data.http.lbc_iam_policy.body
}

output "lbc_iam_policy_arn" {
  value = aws_iam_policy.lbc_iam_policy.arn 
}

# Resource: Create IAM Role 
resource "aws_iam_role" "lbc_iam_role" {
  name = "${local.name}-lbc-iam-role"

  # Terraform's "jsonencode" function converts a Terraform expression result to valid JSON syntax.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRoleWithWebIdentity"
        Effect = "Allow"
        Sid    = ""
        Principal = {
          Federated = "${data.terraform_remote_state.eks.outputs.aws_iam_openid_connect_provider_arn}"
        }
        Condition = {
          StringEquals = {
            "${data.terraform_remote_state.eks.outputs.aws_iam_openid_connect_provider_extract_from_arn}:aud": "sts.amazonaws.com",            
            "${data.terraform_remote_state.eks.outputs.aws_iam_openid_connect_provider_extract_from_arn}:sub": "system:serviceaccount:kube-system:aws-load-balancer-controller"
          }
        }        
      },
    ]
  })

  tags = {
    tag-key = "AWSLoadBalancerControllerIAMPolicy"
  }
}

# Associate Load Balanacer Controller IAM Policy to  IAM Role
resource "aws_iam_role_policy_attachment" "lbc_iam_role_policy_attach" {
  policy_arn = aws_iam_policy.lbc_iam_policy.arn 
  role       = aws_iam_role.lbc_iam_role.name
}

output "lbc_iam_role_arn" {
  description = "AWS Load Balancer Controller IAM Role ARN"
  value = aws_iam_role.lbc_iam_role.arn
}

In the resource "aws_iam_policy" "lbc_iam_policy", the Policy is created from the JSON file downloaded in the previous step.

This is the role configuration created with Terraform.
Next are the trust relationships inside the role:
only workloads using the service account aws-load-balancer-controller can assume this role and access AWS resources on behalf of the AWS Load Balancer Controller.

Now create the Helm provider to connect to EKS.

c4-03-lbc-helm-provider.tf

# Datasource: EKS Cluster Auth 
data "aws_eks_cluster_auth" "cluster" {
  name = data.terraform_remote_state.eks.outputs.cluster_id
}

# HELM Provider
provider "helm" {
  kubernetes {
    host                   = data.terraform_remote_state.eks.outputs.cluster_endpoint
    cluster_ca_certificate = base64decode(data.terraform_remote_state.eks.outputs.cluster_certificate_authority_data)
    token                  = data.aws_eks_cluster_auth.cluster.token
  }
}

Now comes the step of installing the AWS Load Balancer Controller using Helm.

c4-04-lbc-install.tf

# Install AWS Load Balancer Controller using HELM

# Resource: Helm Release 
resource "helm_release" "loadbalancer_controller" {
  depends_on = [aws_iam_role.lbc_iam_role]            
  name       = "aws-load-balancer-controller"

  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-load-balancer-controller"

  namespace = "kube-system"     

  # Value changes based on your Region (Below is for us-east-1)
  set {
    name = "image.repository"
    value = "602401143452.dkr.ecr.us-east-1.amazonaws.com/amazon/aws-load-balancer-controller" 
    # Changes based on Region - This is for us-east-1 Additional Reference: https://docs.aws.amazon.com/eks/latest/userguide/add-ons-images.html
  }       

  set {
    name  = "serviceAccount.create"
    value = "true"
  }

  set {
    name  = "serviceAccount.name"
    value = "aws-load-balancer-controller"
  }

  set {
    name  = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
    value = "${aws_iam_role.lbc_iam_role.arn}"
  }

  set {
    name  = "vpcId"
    value = "${data.terraform_remote_state.eks.outputs.vpc_id}"
  }  

  set {
    name  = "region"
    value = "${var.aws_region}"
  }    

  set {
    name  = "clusterName"
    value = "${data.terraform_remote_state.eks.outputs.cluster_id}"
  }    
    
}

In the file above you will notice the following:

repository = "https://aws.github.io/eks-charts"
  chart      = "aws-load-balancer-controller"

===> It points to the public Helm chart.

Next, you set several parameters: image.repository, serviceAccount.create (create the service account), …
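As an aside (a sketch, not from the original post), the same settings could be expressed with a single `values` entry instead of many `set {}` blocks; this assumes the same remote-state outputs and IAM role used above:

```hcl
# Sketch: equivalent helm_release configuration via one YAML-encoded values entry.
resource "helm_release" "loadbalancer_controller" {
  name       = "aws-load-balancer-controller"
  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-load-balancer-controller"
  namespace  = "kube-system"

  values = [yamlencode({
    clusterName = data.terraform_remote_state.eks.outputs.cluster_id
    region      = var.aws_region
    vpcId       = data.terraform_remote_state.eks.outputs.vpc_id
    serviceAccount = {
      create = true
      name   = "aws-load-balancer-controller"
      annotations = {
        "eks.amazonaws.com/role-arn" = aws_iam_role.lbc_iam_role.arn
      }
    }
  })]
}
```

One advantage of this form is that you do not need to escape dots in keys (compare the `serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn` set block above).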

For Terraform to interact with your Kubernetes cluster, we configure a `kubernetes` provider.

And here is the output I show for the namespace kube-system:

root@work-space-u20:~# kubectl get all -n kube-system
NAME                                                READY   STATUS    RESTARTS   AGE
pod/aws-load-balancer-controller-747df8b87c-rzxxw   1/1     Running   0          6m26s
pod/aws-load-balancer-controller-747df8b87c-skpdh   1/1     Running   0          6m26s
pod/aws-node-ntwlx                                  1/1     Running   0          40m
pod/coredns-66cb55d4f4-65wrb                        1/1     Running   0          44m
pod/coredns-66cb55d4f4-zxjrw                        1/1     Running   0          44m
pod/kube-proxy-rvp6x                                1/1     Running   0          40m

NAME                                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
service/aws-load-balancer-webhook-service   ClusterIP   172.20.34.214   <none>        443/TCP         6m28s
service/kube-dns                            ClusterIP   172.20.0.10     <none>        53/UDP,53/TCP   44m

NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/aws-node     1         1         1       1            1           <none>          44m
daemonset.apps/kube-proxy   1         1         1       1            1           <none>          44m

NAME                                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/aws-load-balancer-controller   2/2     2            2           6m27s
deployment.apps/coredns                        2/2     2            2           44m

NAME                                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/aws-load-balancer-controller-747df8b87c   2         2         2       6m28s
replicaset.apps/coredns-66cb55d4f4                        2         2         2       44m
c5-01-kubernetes-provider.tf

# Terraform Kubernetes Provider
provider "kubernetes" {
  host = data.terraform_remote_state.eks.outputs.cluster_endpoint 
  cluster_ca_certificate = base64decode(data.terraform_remote_state.eks.outputs.cluster_certificate_authority_data)
  token = data.aws_eks_cluster_auth.cluster.token
}

After this step you have a new ingress class:

root@work-space-u20:~# kubectl get ingressclass
NAME                   CONTROLLER            PARAMETERS   AGE
alb                    ingress.k8s.aws/alb   <none>       13m
my-aws-ingress-class   ingress.k8s.aws/alb   <none>       4m4s
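The manifest behind `my-aws-ingress-class` is not shown in this post; it presumably looks like the following sketch (the `is-default-class` annotation is optional and only needed if you want this class to be the cluster default):

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: my-aws-ingress-class
  annotations:
    # Optional: make this the default ingress class on the cluster
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: ingress.k8s.aws/alb   # matches the CONTROLLER column above
```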

Exploring the deployment:

kubectl describe deployment.apps/aws-load-balancer-controller -n kube-system

We get some quite interesting information related to the EKS VPC;
after cross-checking, it is quite accurate.

3) Ingress Basics

3.1) Introduction

https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/

Note that the load balancer sends traffic in through the NodePort.

Now let's apply the manifest files.

deployment-service.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app3-nginx-deployment
  labels:
    app: app3-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app3-nginx
  template:
    metadata:
      labels:
        app: app3-nginx
    spec:
      containers:
        - name: app3-nginx
          image: stacksimplify/kubenginx:1.0.0
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app3-nginx-nodeport-service
  labels:
    app: app3-nginx
  annotations:
#Important Note:  Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer    
#    alb.ingress.kubernetes.io/healthcheck-path: /index.html
spec:
  type: NodePort
  selector:
    app: app3-nginx
  ports:
    - port: 80
      targetPort: 80

Now we have an Ingress file:

# Annotations Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-basics
  labels:
    app: app3-nginx
  annotations:
    # Load Balancer Name
    alb.ingress.kubernetes.io/load-balancer-name: ingress-basics
    #kubernetes.io/ingress.class: "alb" (OLD INGRESS CLASS NOTATION - STILL WORKS BUT RECOMMENDED TO USE IngressClass Resource) # Additional Notes: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/guide/ingress/ingress_class/#deprecated-kubernetesioingressclass-annotation
    # Ingress Core Settings
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Health Check Settings
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP 
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-path: /index.html    
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
    alb.ingress.kubernetes.io/success-codes: '200'
    alb.ingress.kubernetes.io/healthy-threshold-count: '2'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
spec:
  ingressClassName: my-aws-ingress-class # Ingress Class
  defaultBackend:
    service:
      name: app3-nginx-nodeport-service
      port:
        number: 80                  
      

# 1. If  "spec.ingressClassName: my-aws-ingress-class" not specified, will reference default ingress class on this kubernetes cluster
# 2. Default Ingress class is nothing but for which ingress class we have the annotation `ingressclass.kubernetes.io/is-default-class: "true"`

After you apply it, you will see an Ingress with a domain in its ADDRESS column.

root@work-space-u20:~/eks# kubectl get ingress
NAME             CLASS                  HOSTS   ADDRESS                                                PORTS   AGE
ingress-basics   my-aws-ingress-class   *       ingress-basics-228990159.us-east-1.elb.amazonaws.com   80      24s

Remember to open a browser and visit http://ingress-basics-228990159.us-east-1.elb.amazonaws.com

You can clearly see the load balancer pointing at the node port of the server/VM.

Now delete the deployment, service, and ingress.

root@work-space-u20:~/eks# kubectl delete -f deployment-service.yaml
deployment.apps "app3-nginx-deployment" deleted
service "app3-nginx-nodeport-service" deleted

root@work-space-u20:~/eks# kubectl delete -f ingress.yaml
ingress.networking.k8s.io "ingress-basics" deleted

4) Ingress Context Path based Routing

4.1) Introduction

# Annotations Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-cpr
  annotations:
    # Load Balancer Name
    alb.ingress.kubernetes.io/load-balancer-name: ingress-cpr
    # Ingress Core Settings
    #kubernetes.io/ingress.class: "alb" (OLD INGRESS CLASS NOTATION - STILL WORKS BUT RECOMMENDED TO USE IngressClass Resource)
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Health Check Settings
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP 
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    #Important Note:  Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer    
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
    alb.ingress.kubernetes.io/success-codes: '200'
    alb.ingress.kubernetes.io/healthy-threshold-count: '2'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'   
spec:
  ingressClassName: my-aws-ingress-class   # Ingress Class 
  defaultBackend:
    service:
      name: app3-nginx-nodeport-service
      port:
        number: 80                            
  rules:
    - http:
        paths:           
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1-nginx-nodeport-service
                port: 
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2-nginx-nodeport-service
                port: 
                  number: 80
#          - path: /
#            pathType: Prefix
#            backend:
#              service:
#                name: app3-nginx-nodeport-service
#                port: 
#                  number: 80                     
             

# Important Note-1: In path based routing order is very important, if we are going to use  "/*" (Root Context), try to use it at the end of all rules.                                        
                        
# 1. If  "spec.ingressClassName: my-aws-ingress-class" not specified, will reference default ingress class on this kubernetes cluster
# 2. Default Ingress class is nothing but for which ingress class we have the annotation `ingressclass.kubernetes.io/is-default-class: "true"`
      
    

You can reference the Terraform code here:
https://github.com/mrnim94/terraform-aws/tree/master/eks-ingress

You can also check out my module:

https://registry.terraform.io/modules/mrnim94/eks-alb-ingress/aws/latest

5) How to Route Traffic From ALB to Pod IPs directly.

Now let's revisit the two ways an ALB routes traffic to Pods.

Direct to Pod IPs (IP Mode)

In IP Mode, the ALB routes traffic directly to the pod IPs, bypassing the node-level networking layer. This mode is enabled by setting alb.ingress.kubernetes.io/target-type: ip in your Ingress annotations.

[Internet]
    |
    v
[ALB - Application Load Balancer]
    | (1) Selects pod based on rules
    |                  
    |    +---------------------------------------------------+
    |    | VPC                                               |
    |    |    +-----------------+     +-----------------+    |
    |    |    |     Node 1      |     |     Node 2      |    |
    |    |    |                 |     |                 |    |
    |    +----> [Pod IP 1.1]    |     |  [Pod IP 2.1]   +---->(2) Routes to Pod IP
    |         |                 |     |                 |    |
    |         +-----------------+     +-----------------+    |
    |    |                                                   |
    +----+---------------------------------------------------+
  • Traffic from the internet reaches the ALB.
  • The ALB routes traffic directly to the pod IPs across different nodes, bypassing the node-level networking.
  • Each pod processes the traffic independently.

In this mode the Service does not need to be NodePort; ClusterIP is enough.

Since there is only one pod here, you will see a single IP, and port 8088 confirms the traffic goes straight to the Pod IP.
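A minimal sketch of IP mode (not from the original post; the names are hypothetical): a plain ClusterIP Service plus the `target-type: ip` annotation on the Ingress:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app3-nginx-clusterip-service   # hypothetical name for illustration
spec:
  type: ClusterIP          # NodePort is not required in IP mode
  selector:
    app: app3-nginx
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-ip-mode    # hypothetical name
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip   # ALB registers Pod IPs directly
spec:
  ingressClassName: my-aws-ingress-class
  defaultBackend:
    service:
      name: app3-nginx-clusterip-service
      port:
        number: 80
```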

Node-Level Networking (NodePort)

In NodePort mode, the ALB sends traffic to a specific port (the NodePort) on the nodes. Kubernetes then routes this traffic to the appropriate pods based on service definitions.

[Internet]
    |
    v
[ALB - Application Load Balancer]
    | (1) Routes to a NodePort on any node
    |                  
    |    +---------------------------------------------------+
    |    | VPC                                               |
    |    |    +-----------------+     +-----------------+    |
    |    |    |     Node 1      |     |     Node 2      |    |
    |    |    |  [NodePort 30000]     |  [NodePort 30000]    |
    +----+---->       |               |       |         +---->(2) kube-proxy routes
    |    |    |       v               |       v         |    |   to the appropriate pod
    |    |    |  [Service selects]    |  [Service selects]   |
    |    |    |       |               |       |         |    |
    |    |    |     [Pod 1.1]         |     [Pod 2.1]   |    |
    |    |    +-----------------+     +-----------------+    |
    |    |                                                   |
    +----+---------------------------------------------------+
  1. NodePort Routing: The ALB routes traffic to the designated NodePort on any node within the cluster. This NodePort is exposed on every node by Kubernetes and is associated with a specific service.
  2. Internal Routing: Kubernetes, via kube-proxy, routes the traffic from the NodePort to the appropriate pod based on the service definition. This may involve routing traffic to a pod on the same node or a different node, depending on where the selected pod is running.

When you see port 31774, that is definitely a NodePort.

6) Create ALB, then create service on K8S (EKS)

In some cases you want to create an AWS load balancer first and configure a lot of things on it.
Only afterwards do you point traffic at an arbitrary service on K8s.
You are not going the usual route, so what do you do?

When you create the ALB first, you will have its ARN (and the ARN of its target group).

On K8s, you create a TargetGroupBinding resource (apiVersion: elbv2.k8s.aws/v1beta1):

apiVersion: v1
kind: Service
metadata:
  name: "nim-mtls-v4"
  namespace: "prod-nim"
spec:
  type: NodePort
  selector:
    app: nim-gateway
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9000
      nodePort: 30056

---
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: "nim-mtls-v4"
  namespace: "prod-nim"
spec:
  serviceRef:
    name: "nim-mtls-v4"
    port: 80
  targetGroupARN: arn:aws:elasticloadbalancing:us-east-2:XXXXXXXXX:targetgroup/prod-mtls-v4-use2/42b46xxxxx4629
  targetType: instance

Summary

  • Service: Creates a NodePort service named nim-mtls-v4 in the prod-nim namespace, exposing the application running on port 9000 of the pods with label app: nim-gateway on port 80 and mapping it to node port 30056.
  • TargetGroupBinding: Binds the service nim-mtls-v4 to an AWS target group, enabling integration with an AWS ELB for routing external traffic to the Kubernetes service.

failed calling webhook “vingress.elbv2.k8s.aws”.

context deadline exceeded

Error from server (InternalError): error when creating "ingress-alb.yaml": Internal error occurred: failed calling webhook "vingress.elbv2.k8s.aws": failed to call webhook: Post "https://aws-load-balancer-webhook-service.kube-system.svc:443/validate-networking-v1-ingress?timeout=10s": context deadline exceeded

https://stackoverflow.com/questions/70681534/failed-calling-webhook-vingress-elbv2-k8s-aws

 node_security_group_additional_rules = {
    ingress_allow_access_from_control_plane = {
      type                          = "ingress"
      protocol                      = "tcp"
      from_port                     = 9443
      to_port                       = 9443
      source_cluster_security_group = true
      description                   = "Allow access from control plane to webhook port of AWS load balancer controller"
    }
    #https://stackoverflow.com/questions/70681534/failed-calling-webhook-vingress-elbv2-k8s-aws
  }

tls: failed to verify certificate: x509: certificate signed by unknown authority

Failed deploy model due to Internal error occurred: failed calling webhook "mtargetgroupbinding.elbv2.k8s.aws": failed to call webhook: Post "https://aws-load-balancer-webhook-service.kube-system.svc:443/mutate-elbv2-k8s-aws-v1beta1-targetgroupbinding?timeout=10s": tls: failed to verify certificate: x509: certificate signed by unknown authority

At this point I saw the certificate aws-load-balancer-serving-cert, and the event reported:
CannotRegenerateKey: User intervention required: existing private key in Secret “aws-load-balancer-webhook-tls” does not match requirements on Certificate resource, mismatching fields: [spec.privateKey.size], but cert-manager cannot create new private key as the Certificate’s .spec.privateKey.rotationPolicy is unset or set to Never. To allow cert-manager to create a new private key you can set .spec.privateKey.rotationPolicy to ‘Always’ (this will result in the private key being regenerated every time a cert is renewed)

Then delete that secret and restart the aws-load-balancer-controller pods.
Next, re-sync the application's ingress.
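Per the event message, an alternative to repeatedly deleting the secret is to let cert-manager rotate the key itself. A sketch of the relevant part of the Certificate spec (only the `privateKey` field is shown; keep the rest of your existing spec intact):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: aws-load-balancer-serving-cert
  namespace: kube-system
spec:
  privateKey:
    rotationPolicy: Always   # lets cert-manager regenerate the private key on each renewal
```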

The health check of Target Group is Unhealthy

If you see the Target Group is Unhealthy as in the picture,
then check what the health check monitor is watching.

Are the path and protocol correct?
You can also adjust the expected status codes.

You can refer to:
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.1/guide/ingress/annotations/#health-check
to create a suitable health check.
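For example (a sketch; `/healthz` is a placeholder for your app's real endpoint), the health check can be tuned with Ingress annotations like:

```yaml
metadata:
  annotations:
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-path: /healthz   # placeholder; use your app's real path
    alb.ingress.kubernetes.io/success-codes: '200-399'     # widen accepted status codes if needed
```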

AWS Load Balancer Controller find my subnet in Amazon EKS

https://repost.aws/knowledge-center/eks-load-balancer-controller-subnets

https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.5/deploy/subnet_discovery/
