
[Terraform] – Terraform Beginner – Lesson 9: Terraform with AWS – part 2

Posted on May 24, 2022 (updated July 13, 2022) by nim

For a better understanding of VPCs on AWS, you can refer to the following video:
Lesson 31: AWS VPC 1 – Designing a secure private network for businesses on AWS/Cloud

Contents

  • 1) Introduction to VPCs
    • 1.1) Creating the VPC
    • 1.2) Private Subnets
    • 1.3) Subnet masks
  • 2) Demo VPCs and Nat
    • 2.1) VPCs
    • 2.2) Subnet
    • 2.3) Internet Gateway and Route Table.
    • 2.4) NAT
  • 3) Launching EC2 instances in the VPC
    • 3.1) Introduction
    • 3.2) Demo Launching instances in a VPC
      • 3.2.1) Variable
      • 3.2.2) Public key(keypair)
      • 3.2.3) Security Group
      • 3.2.4) Instance (VMs)
  • 4) EBS Volumes
    • 4.1) Introduction
    • 4.2) Demo EBS with terraform.
  • 5) Userdata
    • 5.1) Introduction to Userdata in AWS
    • 5.2) Demo Userdata.
  • 6) Static IPs, EIPs, and Route53
    • 6.1) Static IPs
    • 6.2) EIPs (Public IP)
    • 6.3) Route53
    • 6.4) Demo Route53
  • 7) RDS – Relational Databases
    • 7.1) What is RDS?
    • 7.2) Demo RDS.
  • 8) IAM –  Identity and Access Management
    • 8.1) Overview IAM
    • 8.2) IAM role
      • 8.2.1) Attach role to instance.
    • 8.3) IAM Group and policy
    • 8.4) Demo IAM users and groups
    • 8.5) Demo IAM Roles
  • 9) Autoscaling
    • 9.1) Autoscaling instances in AWS
    • 9.2) Demo Autoscaling
  • 10) Elastic Load Balancers (ELB)
    • 10.1) Introduction to Elastic Load Balancers (ELB)
    • 10.2) ELBs in terraform
      • 10.2.1) ELB + AutoScaling
    • 10.3) Demo ELB with autoscaling
    • 10.4) Application Load Balancer (ALB).
      • 10.4.1) Rule based load balancing.
  • 11) Elastic Beanstalk
    • 11.1) Introduction
    • 11.2) Demo Elastic Beanstalk.

1) Introduction to VPCs

1.1) Creating the VPC

On Amazon AWS, you have a default VPC (Virtual Private Cloud) created for you by AWS to launch instances in
Up until now we used this default VPC

VPC isolates the instances on a network level
– It’s like your own private network in the cloud

Best practice is to always launch your instances in a VPC
– the default VPC
– or one you create yourself (managed by terraform)

There’s also EC2-Classic, which is basically one big network where all AWS customers could launch their instances in
For smaller to medium setups, one VPC (per region) will be suitable for your needs
An instance launched in one VPC can never communicate with an instance in another VPC using their private IP addresses
– They could communicate still, but using their public IP (not recommended)
– You could also link 2 VPCs, called peering
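As a side note, a minimal sketch of what peering two VPCs could look like (the second VPC here is purely hypothetical; aws_vpc.main is the VPC created in the demo below):

# hypothetical second VPC to peer with aws_vpc.main
resource "aws_vpc" "other" {
  cidr_block = "10.1.0.0/16"
}

# peering connection between the two VPCs
resource "aws_vpc_peering_connection" "main-to-other" {
  vpc_id      = aws_vpc.main.id
  peer_vpc_id = aws_vpc.other.id
  auto_accept = true # both VPCs live in the same account and region
}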

1.2) Private Subnets

1.3) Subnet masks
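As a quick worked example using the CIDR ranges from the demo below: a /16 mask such as 10.0.0.0/16 leaves 16 bits for host addresses, i.e. 2^16 = 65,536 addresses for the whole VPC, while a /24 subnet such as 10.0.1.0/24 leaves 8 bits, i.e. 2^8 = 256 addresses per subnet.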

2) Demo VPCs and Nat

2.1) VPCs

First, we create a VPC.

# Internet VPC
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  instance_tenancy     = "default"
  enable_dns_support   = "true"
  enable_dns_hostnames = "true"
  enable_classiclink   = "false"
  tags = {
    Name = "main"
  }
}

reference link:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc

cidr_block: the network range (CIDR block) you choose for the VPC.
instance_tenancy: the tenancy option for instances (EC2) launched into this VPC.
enable_dns_support:  (Optional) A boolean flag to enable/disable DNS support in the VPC. Defaults true.
enable_dns_hostnames: (Optional) A boolean flag to enable/disable DNS hostnames in the VPC. Defaults false.
enable_classiclink: (Optional) A boolean flag to enable/disable ClassicLink for the VPC. Only valid in regions and accounts that support EC2 Classic. See the ClassicLink documentation for more information. Defaults false.
tags: simply attach key/value tags to the VPC so it is easy to identify.

Now run terraform init and terraform apply.

The VPC has been created; let's verify it.
Remember to check in the same region where you created it.

2.2) Subnet

At this step we have only created the network range => for example, this is the nimtechnology company: 10.0.0.0/16 (main).
A company has many departments:
– HR Department
– IT Department

=> The next step is to split main into multiple subnets: public, private, ... -> depending on security requirements and needs.

# Subnets
resource "aws_subnet" "main-public-1" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = "true"
  availability_zone       = "eu-west-1a"

  tags = {
    Name = "main-public-1"
  }
}

The config above creates a public subnet because it has: map_public_ip_on_launch = "true"
reference link:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/subnet

vpc_id = aws_vpc.main.id: this links the subnet to the VPC.
availability_zone = "eu-west-1a": this zone must be within the VPC's region.
The other settings are fairly self-explanatory.

resource "aws_subnet" "main-private-1" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.4.0/24"
  map_public_ip_on_launch = "false"
  availability_zone       = "eu-west-1a"

  tags = {
    Name = "main-private-1"
  }
}

The config above creates a private subnet because map_public_ip_on_launch = "false"

The terraform apply has finished.

2.3) Internet Gateway and Route Table.

If our instances sit inside a subnet and VPC, they still need a way out to the internet.
Now we need to create an Internet Gateway (GW).
Internet Gateway: a component of the VPC that allows communication between the VPC and the internet. Put simply, a server inside the VPC needs an Internet Gateway in order to talk to the internet.

# Internet GW
resource "aws_internet_gateway" "main-gw" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "main"
  }
}

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/internet_gateway
see the link above.

vpc_id = aws_vpc.main.id links the gateway to the VPC.

Now create a default route so that all traffic destined for the internet is sent to the gateway.
We create it inside a route table.
Route table: a routing table, i.e. a set of rules (called routes) used to determine where packets from a subnet or gateway are directed.

# route tables
resource "aws_route_table" "main-public" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main-gw.id
  }

  tags = {
    Name = "main-public-1"
  }
}

The config above creates a default route, meaning traffic from the associated subnets that wants to reach the internet is sent to the Internet Gateway.

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route_table

vpc_id = aws_vpc.main.id links the route table to the VPC.
gateway_id = aws_internet_gateway.main-gw.id links the route to the Internet Gateway.

Next, create an aws_route_table_association to associate the subnet main-public-1 with the route table main-public.
-> Instances in any subnet associated like below will be able to reach the internet.

# route associations public
resource "aws_route_table_association" "main-public-1-a" {
  subnet_id      = aws_subnet.main-public-1.id
  route_table_id = aws_route_table.main-public.id
}

Many of you may wonder about the difference:

aws_route_table_association: Provides a resource to create an association between a route table and a subnet or a route table and an internet gateway or virtual private gateway.
aws_route_table: Provides a resource to create a VPC routing table

2.4) NAT

This config lets instances in the private subnets reach out to the internet (for updates, downloads, etc.) while still not being directly reachable from the internet.

# nat gw
resource "aws_eip" "nat" {
  vpc = true
}

resource "aws_nat_gateway" "nat-gw" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.main-public-1.id
  depends_on    = [aws_internet_gateway.main-gw]
}

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/nat_gateway

aws_nat_gateway: Provides a resource to create a VPC NAT Gateway.
subnet_id – (Required) The Subnet ID of the subnet in which to place the gateway.
depends_on: this waits for the Internet Gateway to be created first.

Next, create a private route table.

# VPC setup for NAT
resource "aws_route_table" "main-private" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat-gw.id
  }

  tags = {
    Name = "main-private-1"
  }
}

You can see that it sends traffic for all destinations (0.0.0.0/0) to the NAT Gateway.

# route associations private
resource "aws_route_table_association" "main-private-1-a" {
  subnet_id      = aws_subnet.main-private-1.id
  route_table_id = aws_route_table.main-private.id
}

resource "aws_route_table_association" "main-private-2-a" {
  subnet_id      = aws_subnet.main-private-2.id
  route_table_id = aws_route_table.main-private.id
}

resource "aws_route_table_association" "main-private-3-a" {
  subnet_id      = aws_subnet.main-private-3.id
  route_table_id = aws_route_table.main-private.id
}
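Note that the last two associations reference aws_subnet.main-private-2 and aws_subnet.main-private-3, which were not created in section 2.2 above; they follow exactly the same pattern as main-private-1 (the full vpc.tf listing in section 11.2 contains them), for example:

resource "aws_subnet" "main-private-2" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.5.0/24"
  map_public_ip_on_launch = "false"
  availability_zone       = "eu-west-1b"

  tags = {
    Name = "main-private-2"
  }
}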


3) Launching EC2 instances in the VPC

3.1) Introduction

You can easily create an EC2 instance
from the example files below!

Now you can also specify which subnet (and which VPC) the instance lives in.
You can add a Security Group and a key pair as well.
-> Of course, we will be doing most of this with terraform!

3.2) Demo Launching instances in a VPC

3.2.1) Variable

First we have a variables file.

vars.tf
###########


  
variable "AWS_REGION" {
  default = "eu-west-1"
}

## Below codes are added to create EC2
variable "PATH_TO_PRIVATE_KEY" {
  default = "mykey"
}

variable "PATH_TO_PUBLIC_KEY" {
  default = "mykey.pub"
}

variable "AMIS" {
  type = map(string)
  default = {
    us-east-1 = "ami-13be557e"
    us-west-2 = "ami-06b94666"
    eu-west-1 = "ami-844e0bf7"
  }
}

3.2.2) Public key(keypair)

Next we create a public key (for example generated with ssh-keygen) and configure it with terraform.

key.tf
############

resource "aws_key_pair" "mykeypair" {
  key_name   = "mykeypair"
  public_key = file(var.PATH_TO_PUBLIC_KEY)
}

public_key = file(var.PATH_TO_PUBLIC_KEY)
-> You can see this refers to the variable defined in vars.tf.

3.2.3) Security Group

This part is very familiar: creating a Security Group!

securitygroup.tf
#######################

resource "aws_security_group" "allow-ssh" {
  vpc_id      = aws_vpc.main.id
  name        = "allow-ssh"
  description = "security group that allows ssh and all egress traffic"
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "allow-ssh"
  }
}

The config above allows the VM to send outbound traffic on any port and any protocol.
Inbound traffic: only SSH into the VM is allowed!

3.2.4) Instance (VMs)

Finally, the file that creates the VM and wires it up to the key pair, Security Group, etc.

instance.tf
##################


  
resource "aws_instance" "example" {
  ami           = var.AMIS[var.AWS_REGION]
  instance_type = "t2.micro"

  # the VPC subnet
  subnet_id = aws_subnet.main-public-1.id

  # the security group
  vpc_security_group_ids = [aws_security_group.allow-ssh.id]

  # the public SSH key
  key_name = aws_key_pair.mykeypair.key_name
}

There is an interesting point in the instance configuration.
ami = var.AMIS[var.AWS_REGION]
Our AMIS variable is a map, and the key of the map depends on the region.

For the security group, note that its value is a list!
vpc_security_group_ids = [aws_security_group.allow-ssh.id]

Run terraform apply and then check that you can SSH into the VM in the cloud.

ssh ubuntu@54.78.151.80

When you are done, run terraform destroy.

4) EBS Volumes

4.1) Introduction

The t2.micro instance with this particular AMI automatically adds 8 GB of EBS storage (= Elastic Block Storage)
Some instance types have local storage on the instance itself
– This is called ephemeral storage
– This type of storage is always lost when the instance terminates
The 8 GB EBS root volume that comes with the instance is also set to be automatically removed when the instance is terminated
You could still instruct AWS not to do so, but that would be counter-intuitive (an anti-pattern)
In most cases the 8 GB for the OS (root block device) suffices

In our next example I'm adding an extra EBS storage volume
Extra volumes can be used for log files or any real data that is put on the instance
That data will be persisted until you instruct AWS to remove it
EBS storage can be added using a terraform resource and then attached to our instance

A full terraform example file for this is shown in the demo below (section 4.2).


In the previous example we added an extra volume
The root volume of 8 GB still exists
If you want to increase the size or change the type of the root volume, you can use root_block_device within the aws_instance resource

This is how you increase the size of the root volume.
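A minimal sketch of what that could look like (the size and type here are just examples, not taken from the original post):

resource "aws_instance" "example" {
  ami           = var.AMIS[var.AWS_REGION]
  instance_type = "t2.micro"

  # grow the root volume to 16 GB of gp2 storage
  # (illustrative values; adjust to your needs)
  root_block_device {
    volume_size           = 16
    volume_type           = "gp2"
    delete_on_termination = true
  }
}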

4.2) Demo EBS with terraform.

Add the following config to instance.tf:

resource "aws_instance" "example" {
  ami           = var.AMIS[var.AWS_REGION]
  instance_type = "t2.micro"

  # the VPC subnet
  subnet_id = aws_subnet.main-public-1.id

  # the security group
  vpc_security_group_ids = [aws_security_group.allow-ssh.id]

  # the public SSH key
  key_name = aws_key_pair.mykeypair.key_name
}

resource "aws_ebs_volume" "ebs-volume-1" {
  availability_zone = "eu-west-1a"
  size              = 20
  type              = "gp2"
  tags = {
    Name = "extra volume data"
  }
}

resource "aws_volume_attachment" "ebs-volume-1-attachment" {
  device_name = "/dev/xvdh"
  volume_id   = aws_ebs_volume.ebs-volume-1.id
  instance_id = aws_instance.example.id
}

A few parameters worth understanding:
type: (Optional) The type of EBS volume. Can be standard, gp2, gp3, io1, io2, sc1 or st1 (Default: gp2).
aws_volume_attachment: this maps the EBS volume onto a specific EC2 instance.

After you run terraform apply,

let's first check things on the AWS console.

Search for the keyword EBS.
You can see the EBS volume has been created for the instance.
Now check the instance:
the volume (EBS) has been attached to the instance.

5) Userdata

5.1) Introduction to Userdata in AWS

Userdata in AWS can be used to do any customization at launch:
You can install extra software
Prepare the instance to join a cluster
e.g. consul cluster, ECS cluster (docker orchestration)
Execute commands / scripts
Mount volumes
Userdata is only executed at the creation of the instance, not when the instance reboots

Terraform allows you to add userdata to the aws_instance resource
Just as a string (for simple commands)
Using templates (for more complex instructions)

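For the simple string case, a minimal sketch could look like this (the commands are just an illustration, similar to the nginx user_data used later in the ELB demo):

resource "aws_instance" "example" {
  ami           = var.AMIS[var.AWS_REGION]
  instance_type = "t2.micro"

  # userdata as a plain string: runs once, at first boot of the instance
  user_data = "#!/bin/bash\napt-get update\napt-get install -y nginx"
}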

Another better example is to use the template system of terraform:

5.2) Demo Userdata.

Add user_data as shown below;
it runs cloud-init.
resource "aws_instance" "example" {
  ami           = var.AMIS[var.AWS_REGION]
  instance_type = "t2.micro"

  # the VPC subnet
  subnet_id = aws_subnet.main-public-1.id

  # the security group
  vpc_security_group_ids = [aws_security_group.allow-ssh.id]

  # the public SSH key
  key_name = aws_key_pair.mykeypair.key_name

  # user data
  user_data = data.cloudinit_config.cloudinit-example.rendered
}

resource "aws_ebs_volume" "ebs-volume-1" {
  availability_zone = "eu-west-1a"
  size              = 20
  type              = "gp2"
  tags = {
    Name = "extra volume data"
  }
}

resource "aws_volume_attachment" "ebs-volume-1-attachment" {
  device_name = "/dev/xvdh"
  volume_id   = aws_ebs_volume.ebs-volume-1.id
  instance_id = aws_instance.example.id
}

file: cloudinit.tf
Inside the cloud-init config there are two kinds of scripts:
– init.cfg: a cloud-config declaration, the format AWS/cloud-init understands.
– scripts/volumes.sh: a normal bash shell script -> mounts the disk.
– scripts/docker.sh: a normal bash shell script -> installs docker.

# note: the previous template_file datasources have been replaced by the templatefile() function

data "cloudinit_config" "cloudinit-example" {
  gzip          = false
  base64_encode = false

  part {
    filename     = "init.cfg"
    content_type = "text/cloud-config"
    content      = templatefile("scripts/init.cfg", {
      REGION = var.AWS_REGION
    })
  }

  part {
    content_type = "text/x-shellscript"
    content      = templatefile("scripts/docker.sh", {
      DEVICE = var.INSTANCE_DEVICE_NAME
    })
  }

  part {
    content_type = "text/x-shellscript"
    content      = templatefile("scripts/volumes.sh", {
      DEVICE = var.INSTANCE_DEVICE_NAME
    })
  }
}
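The templates above reference var.INSTANCE_DEVICE_NAME, which is not shown in the vars.tf of section 3.2.1; presumably it is declared there with the device name used in the volume attachment, something like:

variable "INSTANCE_DEVICE_NAME" {
  default = "/dev/xvdh"
}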

Next, create a scripts directory containing the files init.cfg, volumes.sh and docker.sh.

init.cfg

#cloud-config

repo_update: true
repo_upgrade: all

packages:
  - lvm2

output:
  all: '| tee -a /var/log/cloud-init-output.log'

At a glance, init.cfg updates the packages, installs lvm2, and sends the console output to the file /var/log/cloud-init-output.log.
You can read more here:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html#user-data-cloud-init

volumes.sh

#!/bin/bash

set -ex 

vgchange -ay

DEVICE_FS=`blkid -o value -s TYPE ${DEVICE} || echo ""`
if [ "`echo -n $DEVICE_FS`" == "" ] ; then 
  # wait for the device to be attached
  DEVICENAME=`echo "${DEVICE}" | awk -F '/' '{print $3}'`
  DEVICEEXISTS=''
  while [[ -z $DEVICEEXISTS ]]; do
    echo "checking $DEVICENAME"
    DEVICEEXISTS=`lsblk |grep "$DEVICENAME" |wc -l`
    if [[ $DEVICEEXISTS != "1" ]]; then
      sleep 15
    fi
  done
  # make sure the device file in /dev/ exists
  count=0
  until [[ -e ${DEVICE} || "$count" == "60" ]]; do
   sleep 5
   count=$(expr $count + 1)
  done
  pvcreate ${DEVICE}
  vgcreate data ${DEVICE}
  lvcreate --name volume1 -l 100%FREE data
  mkfs.ext4 /dev/data/volume1
fi
mkdir -p /data
echo '/dev/data/volume1 /data ext4 defaults 0 0' >> /etc/fstab
mount /data

file docker.sh

#!/bin/bash

# install docker
sudo apt update -y
sudo apt install apt-transport-https ca-certificates curl software-properties-common -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
sudo apt update -y
apt-cache policy docker-ce
sudo apt install docker-ce -y
sudo systemctl enable docker
sudo systemctl restart docker
sudo curl -L "https://github.com/docker/compose/releases/download/1.27.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

In short, you can simply think of cloud-init as running these scripts right after the VM has been created and started for the first time.

OK! Now run terraform init and terraform apply and recheck the result!

SSH into the VM's public IP with the ubuntu user and your SSH key.
The mount configuration has taken effect.
Docker is already installed.

6) Static IPs, EIPs, and Route53

6.1) Static IPs

Private IP addresses will be auto-assigned to EC2 instances
Every subnet within the VPC has its own range (e.g. 10.0.1.0 – 10.0.1.255)
By specifying the private IP, you can make sure the EC2 instance always uses the same IP address:

We assign a static private IP to the EC2 instance.
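A minimal sketch of pinning the private IP (the address is just an example from the 10.0.1.0/24 public subnet used earlier):

resource "aws_instance" "example" {
  ami           = var.AMIS[var.AWS_REGION]
  instance_type = "t2.micro"
  subnet_id     = aws_subnet.main-public-1.id

  # always use this private IP within the subnet's range
  private_ip = "10.0.1.4"
}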

6.2) EIPs (Public IP)

To use a public IP address, you can use EIPs (Elastic IP addresses)
This is a public, static IP address that you can attach to your instance

Tip: You can use aws_eip.example-eip.public_ip attribute with the output resource to show the IP address after terraform apply
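A sketch of what that could look like, reusing the example instance and the aws_eip.example-eip name from the tip above:

resource "aws_eip" "example-eip" {
  instance = aws_instance.example.id
  vpc      = true
}

# show the public IP after terraform apply
output "example-eip-public-ip" {
  value = aws_eip.example-eip.public_ip
}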

6.3) Route53

Typically, you’ll not use IP addresses, but hostnames.
This is where route53 comes in
You can host a domain name on AWS using Route53
You first need to register a domain name using AWS or any accredited registrar
You can then create a zone in route53 (e.g. example.com) and add DNS records (e.g. server1.example.com)

Tip: When you register your domain name, you need to add the AWS nameservers to that
domain

6.4) Demo Route53

route53.tf

resource "aws_route53_zone" "newtech-academy" {
  name = "newtech.academy"
}

resource "aws_route53_record" "server1-record" {
  zone_id = aws_route53_zone.newtech-academy.zone_id
  name    = "server1.newtech.academy"
  type    = "A"
  ttl     = "300"
  records = ["104.236.247.8"]
}

resource "aws_route53_record" "www-record" {
  zone_id = aws_route53_zone.newtech-academy.zone_id
  name    = "www.newtech.academy"
  type    = "A"
  ttl     = "300"
  records = ["104.236.247.8"]
}

resource "aws_route53_record" "mail1-record" {
  zone_id = aws_route53_zone.newtech-academy.zone_id
  name    = "newtech.academy"
  type    = "MX"
  ttl     = "300"
  records = [
    "1 aspmx.l.google.com.",
    "5 alt1.aspmx.l.google.com.",
    "5 alt2.aspmx.l.google.com.",
    "10 aspmx2.googlemail.com.",
    "10 aspmx3.googlemail.com.",
  ]
}

output "ns-servers" {
  value = aws_route53_zone.newtech-academy.name_servers
}

Go ahead, apply it yourself and experiment!

And here is the result!

7) RDS – Relational Databases

7.1) What is RDS?

RDS stands for Relational Database Services
It’s a managed database solution:
You can easily set up replication (high availability)
Automated snapshots (for backups)
Automated security updates
Easy instance replacement (for vertical scaling)

Supported databases are:
MySQL
MariaDB
PostgreSQL
Microsoft SQL
Oracle

Steps to create an RDS instance:
Create a subnet group
Allows you to specify in what subnets the database will be in (e.g. eu-west-1a and eu-west-1b)
Create a Parameter group
Allows you to specify parameters to change settings in the database
Create a security group that allows incoming traffic to the RDS instance
Create the RDS instance(s) itself

This subnet group specifies that the RDS will be put in the private subnets
The RDS will only be accessible from other instances within the same subnet, not from the internet
The RDS instance will also be placed either in private-1 or private-2, not in the private-3 subnet
when you enable High Availability you will have an instance in both subnets

7.2) Demo RDS.

Reading the theory can give you a headache, but in practice it is all quite simple.

First you need to design the Security Groups:
securitygroup.tf

resource "aws_security_group" "allow-ssh" {
  vpc_id      = aws_vpc.main.id
  name        = "allow-ssh"
  description = "security group that allows ssh and all egress traffic"
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "allow-ssh"
  }
}

resource "aws_security_group" "allow-mariadb" {
  vpc_id      = aws_vpc.main.id
  name        = "allow-mariadb"
  description = "allow-mariadb"
  ingress {
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    security_groups = [aws_security_group.allow-ssh.id] # allowing access from our example instance
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
    self        = true
  }
  tags = {
    Name = "allow-mariadb"
  }
}

The configuration is fairly explicit and easy to understand; one thing to note:

Notice that we replaced cidr_blocks with security_groups => so the DB can be accessed from the EC2 VM.
Put differently, only instances belonging to the allow-ssh security group can connect to the DB.

Here is the rds.tf file:

resource "aws_db_subnet_group" "mariadb-subnet" {
  name        = "mariadb-subnet"
  description = "RDS subnet group"
  subnet_ids  = [aws_subnet.main-private-1.id, aws_subnet.main-private-2.id]
}

resource "aws_db_parameter_group" "mariadb-parameters" {
  name        = "mariadb-parameters"
  family      = "mariadb10.6"
  description = "MariaDB parameter group"

  parameter {
    name  = "max_allowed_packet"
    value = "16777216"
  }
}

resource "aws_db_instance" "mariadb" {
  allocated_storage       = 100 # 100 GB of storage, gives us more IOPS than a lower number
  engine                  = "mariadb"
  engine_version          = "10.6.7"
  instance_class          = "db.t2.small" # use micro if you want to use the free tier
  identifier              = "mariadb"
  db_name                    = "mariadb"
  username                = "root"           # username
  password                = var.RDS_PASSWORD # password
  db_subnet_group_name    = aws_db_subnet_group.mariadb-subnet.name
  parameter_group_name    = aws_db_parameter_group.mariadb-parameters.name
  multi_az                = "false" # set to true to have high availability: 2 instances synchronized with each other
  vpc_security_group_ids  = [aws_security_group.allow-mariadb.id]
  storage_type            = "gp2"
  backup_retention_period = 30                                          # how long you’re going to keep your backups
  availability_zone       = aws_subnet.main-private-1.availability_zone # prefered AZ
  skip_final_snapshot     = true                                        # skip final snapshot when doing terraform destroy
  tags = {
    Name = "mariadb-instance"
  }
}

Pay attention to the password here:
password = var.RDS_PASSWORD # password

In the variables file it looks like this:
vars.tf

variable "RDS_PASSWORD" {
}

Here we do not declare a default value because we will supply the password on the command line:

terraform apply -var RDS_PASSWORD=ahihi-this-is-passwd

I ran into an error:

│ Error: Error creating DB Instance: InvalidParameterCombination: Cannot find version 10.4.13 for mariadb
│       status code: 400, request id: 9cb30796-6750-483e-b418-ded9819f6217

Get a valid engine version from this link:
https://docs.amazonaws.cn/en_us/AmazonRDS/latest/UserGuide/MariaDB.Concepts.VersionMgmt.html

Now add outputs so it is easy to grab the VM's IP and the DB endpoint.

output.tf

output "instance" {
  value = aws_instance.example.public_ip
}

output "rds" {
  value = aws_db_instance.mariadb.endpoint
}
After the apply succeeds!

Now SSH into the VM, install the mysql-client tool, and test it!

ssh ubuntu@<IP_Public_Ec2>
apt update -y
apt install mysql-client -y
mysql -h mariadb.cynto9nitd8p.eu-west-1.rds.amazonaws.com -u root -p'ahihi-this-is-passwd'

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| innodb             |
| mariadb            |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
6 rows in set (0.02 sec)

8) IAM –  Identity and Access Management

8.1) Overview IAM

IAM is AWS’ Identity & Access Management
It’s a service that helps you control access to your AWS resources
In AWS you can create:
– Groups
– Users
– Roles

Users can belong to groups
– for instance an “Administrators” group can give admin privileges to users
Users can authenticate
– Using a login / password
– Optionally using a token: multifactor Authentication (MFA) using Google Authenticator compatible software
– an access key and secret key (the API keys)

8.2) IAM role

Roles can give users / services (temporary) access that they normally wouldn’t have
The roles can be for instance attached to EC2 instances
– From that instance, a user or service can obtain access credentials
– Using those access credentials the user or service can assume the role, which gives them permission to do something

An example:
– You create a role mybucket-access and assign the role to an EC2 instance at boot time
– You give the role the permissions to read and write items in “mybucket”
– When you log in, you can now assume this mybucket-access role, without using your own credentials – you will be given temporary access credentials which just look like normal user credentials
– You can now read and write items in “mybucket”

– Instead of a user using aws-cli, a service can also assume a role
– The service needs to implement the AWS SDK
– When trying to access the S3 bucket, an API call to AWS will occur
– If roles are configured for this EC2 instance, the AWS API will give temporary access keys which can be used to assume this role
– After that, the SDK can be used just like when you would have normal credentials
– This really happens in the background and you don’t see much of it

IAM Roles only work on EC2 instances, and not for instances outside AWS
The temporary access credentials also need to be renewed, they’re only valid for a predefined amount of time
– This is also something the AWS SDK will take care of

8.2.1) Attach role to instance.

Let’s create a role now that we want to attach to an EC2 instance:

Attaching this role to an EC2 instance now is pretty easy:

Creating the bucket is just another resource:

Now we need to add some permissions using a policy document (the full terraform code for all of these steps is shown in the demo in section 8.5 below):

8.3) IAM Group and policy

To create an IAM administrators group in AWS, you can create the group and attach the AWS managed Administrator policy to it

You can also create your own custom policy. This one does the same:
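A sketch of such a custom policy (it grants the same full access as the managed AdministratorAccess policy; the resource name is just an example):

resource "aws_iam_policy" "my-custom-admin-policy" {
  name        = "my-custom-admin-policy"
  description = "Custom policy granting the same access as AdministratorAccess"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    }
  ]
}
EOF
}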

Next, create a user and attach it to a group (this is exactly what the demo below does):

8.4) Demo IAM users and groups

here is the manifest file:

# group definition
resource "aws_iam_group" "administrators" {
  name = "administrators"
}

resource "aws_iam_policy_attachment" "administrators-attach" {
  name       = "administrators-attach"
  groups     = [aws_iam_group.administrators.name]
  policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
}

# user
resource "aws_iam_user" "admin1" {
  name = "admin1"
}

resource "aws_iam_user" "admin2" {
  name = "admin2"
}

resource "aws_iam_group_membership" "administrators-users" {
  name = "administrators-users"
  users = [
    aws_iam_user.admin1.name,
    aws_iam_user.admin2.name,
  ]
  group = aws_iam_group.administrators.name
}

output "warning" {
  value = "WARNING: make sure you're not using the AdministratorAccess policy for other users/groups/roles. If this is the case, don't run terraform destroy, but manually unlink the created resources"
}

8.5) Demo IAM Roles

instance.tf

resource "aws_instance" "example" {
  ami           = var.AMIS[var.AWS_REGION]
  instance_type = "t2.micro"

  # the VPC subnet
  subnet_id = aws_subnet.main-public-1.id

  # the security group
  vpc_security_group_ids = [aws_security_group.example-instance.id]

  # the public SSH key
  key_name = aws_key_pair.mykeypair.key_name

  # role:
  iam_instance_profile = aws_iam_instance_profile.s3-mybucket-role-instanceprofile.name
}

Notice the iam_instance_profile line.

s3.tf
==> we create an S3 bucket named mybucket-c29df1

resource "aws_s3_bucket" "b" {
  bucket = "mybucket-c29df1"

  tags = {
    Name = "mybucket-c29df1"
  }
}

iam.tf

# create an assume role
#source is Ec2
resource "aws_iam_role" "s3-mybucket-role" {
  name               = "s3-mybucket-role"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF

}

# create an IAM instance profile that is attached into Ec2
resource "aws_iam_instance_profile" "s3-mybucket-role-instanceprofile" {
  name = "s3-mybucket-role"
  role = aws_iam_role.s3-mybucket-role.name
}


#destination is S3_bucket
resource "aws_iam_role_policy" "s3-mybucket-role-policy" {
  name = "s3-mybucket-role-policy"
  role = aws_iam_role.s3-mybucket-role.id
  policy = <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
              "s3:*"
            ],
            "Resource": [
              "arn:aws:s3:::mybucket-c29df1",
              "arn:aws:s3:::mybucket-c29df1/*"
            ]
        }
    ]
}
EOF

}

A quick summary:
step 1: Create an S3 bucket with aws_s3_bucket.
step 2: Create an assume role with aws_iam_role, declaring that EC2 is the source that will need access to something.
step 3: Create an instance profile with aws_iam_instance_profile and attach the aws_iam_role to it (we will declare this instance profile on the EC2 instance in a moment).
step 4: To allow the EC2 instance using this aws_iam_role to access S3, create an aws_iam_role_policy.
step 5: Declare the aws_iam_instance_profile instance profile inside the aws_instance.

Once terraform apply has finished,
SSH into the VM through its public IP with the ubuntu user.

ssh ubuntu@<ip_public>
>>>Install awscli
>>>>https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

>>> create a file and upload it to S3
echo "test-nimtechnology" > text.txt
aws s3 cp text.txt s3://mybucket-c29df1/text.txt
The file has been uploaded to S3.

To dig a bit deeper, curl the instance metadata as follows:

curl http://169.254.169.254/latest/meta-data/iam/security-credentials/s3-mybucket-role
You can see there is an access key, secret key, and token for accessing S3.

9) Autoscaling

9.1) Autoscaling instances in AWS

In AWS autoscaling groups can be created to automatically add/remove instances when certain thresholds are reached
e.g. your application layer can be scaled out when you have more visitors
To set up autoscaling in AWS you need to setup at least 2 resources:
An AWS launch configuration
– Specifies the properties of the instance to be launched (AMI ID, security group, etc)
An autoscaling group
– Specifies the scaling properties (min instances, max instances, health checks)

Once the autoscaling group is setup, you can create autoscaling policies
A policy is triggered based on a threshold (CloudWatch Alarm)
An adjustment will be executed
– e.g. if the average CPU utilization is more than 20%, then scale up by +1 instances
– e.g. if the average CPU utilization is less than 5%, then scale down by -1 instances

First the launch configuration and the autoscaling group needs to be created:

To create a policy, you need a aws_autoscaling_policy:

Then, you can create a CloudWatch alarm which will trigger the autoscaling policy

If you want to receive an alert (e.g. an email) when autoscaling is invoked, you need to create an SNS topic (Simple Notification Service):

That SNS topic needs to be attached to the autoscaling group:

9.2) Demo Autoscaling

autoscaling.tf

resource "aws_launch_configuration" "example-launchconfig" {
  name_prefix     = "example-launchconfig"
  image_id        = var.AMIS[var.AWS_REGION]
  instance_type   = "t2.micro"
  key_name        = aws_key_pair.mykeypair.key_name
  security_groups = [aws_security_group.allow-ssh.id]
}

resource "aws_autoscaling_group" "example-autoscaling" {
  name                      = "example-autoscaling"
  vpc_zone_identifier       = [aws_subnet.main-public-1.id, aws_subnet.main-public-2.id]
  launch_configuration      = aws_launch_configuration.example-launchconfig.name
  min_size                  = 1
  max_size                  = 2
  health_check_grace_period = 300
  health_check_type         = "EC2"
  force_delete              = true

  tag {
    key                 = "Name"
    value               = "ec2 instance"
    propagate_at_launch = true
  }
}

aws_launch_configuration: Provides a resource to create a new launch configuration, used for autoscaling groups.
==> Imagine that when a new EC2 instance is created automatically, it has to be based on some image: that is the image_id.
aws_autoscaling_group: Provides an Auto Scaling Group resource.
– vpc_zone_identifier: we declare a list of subnets here so that when autoscaling creates new instances, it creates them in these subnets.

autoscalingpolicy.tf

# scale up alarm

resource "aws_autoscaling_policy" "example-cpu-policy" {
  name                   = "example-cpu-policy"
  autoscaling_group_name = aws_autoscaling_group.example-autoscaling.name
  adjustment_type        = "ChangeInCapacity"
  scaling_adjustment     = "1"
  cooldown               = "300"
  policy_type            = "SimpleScaling"
}

resource "aws_cloudwatch_metric_alarm" "example-cpu-alarm" {
  alarm_name          = "example-cpu-alarm"
  alarm_description   = "example-cpu-alarm"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  evaluation_periods  = "2"
  metric_name         = "CPUUtilization"
  namespace           = "AWS/EC2"
  period              = "120"
  statistic           = "Average"
  threshold           = "30"

  dimensions = {
    "AutoScalingGroupName" = aws_autoscaling_group.example-autoscaling.name
  }

  actions_enabled = true
  alarm_actions   = [aws_autoscaling_policy.example-cpu-policy.arn]
}

# scale down alarm
resource "aws_autoscaling_policy" "example-cpu-policy-scaledown" {
  name                   = "example-cpu-policy-scaledown"
  autoscaling_group_name = aws_autoscaling_group.example-autoscaling.name
  adjustment_type        = "ChangeInCapacity"
  scaling_adjustment     = "-1"
  cooldown               = "300"
  policy_type            = "SimpleScaling"
}

resource "aws_cloudwatch_metric_alarm" "example-cpu-alarm-scaledown" {
  alarm_name          = "example-cpu-alarm-scaledown"
  alarm_description   = "example-cpu-alarm-scaledown"
  comparison_operator = "LessThanOrEqualToThreshold"
  evaluation_periods  = "2"
  metric_name         = "CPUUtilization"
  namespace           = "AWS/EC2"
  period              = "120"
  statistic           = "Average"
  threshold           = "5"

  dimensions = {
    "AutoScalingGroupName" = aws_autoscaling_group.example-autoscaling.name
  }

  actions_enabled = true
  alarm_actions   = [aws_autoscaling_policy.example-cpu-policy-scaledown.arn]
}

aws_autoscaling_policy: Provides an AutoScaling Scaling Policy resource.
– adjustment_type: the adjustment type; we choose ChangeInCapacity, which changes the capacity by an absolute number of instances.
– scaling_adjustment: set to 1 so that each scale-up action adds one instance; set to -1 for the scale-down policy.
– cooldown: the amount of time, in seconds, between the completion of one scaling action and the start of the next.

aws_cloudwatch_metric_alarm: Provides a CloudWatch Metric Alarm resource.
==> it uses CloudWatch metrics to detect the thresholds that trigger auto scaling.

Here we have 2 policies: scale up and scale down.

You can go over to CloudWatch to see some useful information.

OK, now we have to push the CPU up to test it.

SSH into one of the autoscaled instances.

sudo -i
apt install stress -y
stress --cpu 2 --timeout 300
You can see CloudWatch starts raising an alarm.
The CPU has exceeded the 30% threshold,
so a new instance is created by auto scaling.
Once the CPU returns to normal, the scale-down policy kicks in
and one instance is terminated!

sns.tf

# Uncomment if you want to have autoscaling notifications
#resource "aws_sns_topic" "example-sns" {
#  name         = "sg-sns"
#  display_name = "example ASG SNS topic"
#} # email subscription is currently unsupported in terraform and can be done using the AWS Web Console
#
#resource "aws_autoscaling_notification" "example-notify" {
#  group_names = ["${aws_autoscaling_group.example-autoscaling.name}"]
#  topic_arn     = "${aws_sns_topic.example-sns.arn}"
#  notifications  = [
#    "autoscaling:EC2_INSTANCE_LAUNCH",
#    "autoscaling:EC2_INSTANCE_TERMINATE",
#    "autoscaling:EC2_INSTANCE_LAUNCH_ERROR"
#  ]
#}

OK, apply it and explore the configuration values yourself!

10) Elastic Load Balancers (ELB)

10.1) Introduction to Elastic Load Balancers (ELB)

Now that you’ve autoscaled instances, you might want to put a loadbalancer in front of it
The AWS Elastic Load Balancer (ELB) automatically distributes incoming traffic across multiple EC2 instances
– The ELB itself scales when you receive more traffic
– The ELB will healthcheck your instances
– If an instance fails its healthcheck, no traffic will be sent to it
– If a new instance is added by the autoscaling group, the ELB will automatically add the new instance and start healthchecking it

The ELB can also be used as SSL terminator
– It can offload the encryption away from the EC2 instances
– AWS can even manage the SSL certificates for you
ELBs can be spread over multiple Availability Zones for higher fault tolerance
You will in general achieve higher levels of fault tolerance with an ELB routing the traffic for your application
ELB is comparable to a nginx / haproxy, but then provided as a service

AWS provides 2 different types of load balancers:
The Classic Load Balancer (ELB)
– Routes traffic based on network information
e.g. forwards all traffic from port 80 (HTTP) to port 8080 (application)
The Application Load Balancer (ALB)
– Routes traffic based on application level information
e.g. can route /api and /website to different EC2 instances

10.2) ELBs in terraform

10.2.1) ELB + AutoScaling

You can attach the ELB to an autoscaling group, as shown in the demo below with the load_balancers attribute:

10.3) Demo ELB with autoscaling

resource "aws_launch_configuration" "example-launchconfig" {
  name_prefix     = "example-launchconfig"
  image_id        = var.AMIS[var.AWS_REGION]
  instance_type   = "t2.micro"
  key_name        = aws_key_pair.mykeypair.key_name
  security_groups = [aws_security_group.myinstance.id]
  user_data       = "#!/bin/bash\napt-get update\napt-get -y install net-tools nginx\nMYIP=`ifconfig | grep -E '(inet 10)|(addr:10)' | awk '{ print $2 }' | cut -d ':' -f2`\necho 'this is: '$MYIP > /var/www/html/index.html"
  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "example-autoscaling" {
  name                      = "example-autoscaling"
  vpc_zone_identifier       = [aws_subnet.main-public-1.id, aws_subnet.main-public-2.id]
  launch_configuration      = aws_launch_configuration.example-launchconfig.name
  min_size                  = 2
  max_size                  = 2
  health_check_grace_period = 300
  health_check_type         = "ELB"
  load_balancers            = [aws_elb.my-elb.name]
  force_delete              = true

  tag {
    key                 = "Name"
    value               = "ec2 instance"
    propagate_at_launch = true
  }
}

autoscaling.tf

aws_launch_configuration:
– user_data: here we pre-install a small web page so we have something to test.

aws_autoscaling_group:
– health_check_type: we set it to ELB.
– load_balancers: references the aws_elb.

elb.tf

resource "aws_elb" "my-elb" {
  name            = "my-elb"
  subnets         = [aws_subnet.main-public-1.id, aws_subnet.main-public-2.id]
  security_groups = [aws_security_group.elb-securitygroup.id]
  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }
  health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 2
    timeout             = 3
    target              = "HTTP:80/"
    interval            = 30
  }

  cross_zone_load_balancing   = true
  connection_draining         = true
  connection_draining_timeout = 400
  tags = {
    Name = "my-elb"
  }
}

We also configure the security groups:

securitygroup.tf

resource "aws_security_group" "allow-ssh" {
  vpc_id      = aws_vpc.main.id
  name        = "allow-ssh"
  description = "security group that allows ssh and all egress traffic"
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.elb-securitygroup.id]
  }
  tags = {
    Name = "allow-ssh"
  }
}

resource "aws_security_group" "elb-securitygroup" {
  vpc_id      = aws_vpc.main.id
  name        = "elb"
  description = "security group for load balancer"
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "elb"
  }
}

output.tf

output "ELB" {
  value = aws_elb.my-elb.dns_name
}
OK, after terraform apply, try accessing the ELB DNS name from the output.
Check the LB in the AWS console: it is connected to 2 instances.
Curl it and you will see responses coming back from 2 different IPs.

10.4) Application Load Balancer (ALB).

10.4.1) Rule based load balancing.

For an application load balancer, you first define the general settings:

Then, you specify a target group:

You can attach instances to targets:

You also need to specify the listeners separately:

The default action always matches if no other rule has matched

With ALBs, you can specify multiple rules to send traffic to another target:
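As a rough sketch of those pieces, reusing the VPC, public subnets, example instance and ELB security group from the earlier demos (the target group name and the /api path rule are just illustrative):

# general settings of the application load balancer
resource "aws_lb" "my-alb" {
  name               = "my-alb"
  load_balancer_type = "application"
  subnets            = [aws_subnet.main-public-1.id, aws_subnet.main-public-2.id]
  security_groups    = [aws_security_group.elb-securitygroup.id]
}

# a target group the ALB forwards traffic to
resource "aws_lb_target_group" "frontend" {
  name     = "frontend"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id
}

# attach an instance to the target group
resource "aws_lb_target_group_attachment" "frontend-attach" {
  target_group_arn = aws_lb_target_group.frontend.arn
  target_id        = aws_instance.example.id
  port             = 80
}

# the listener with its default action
resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.my-alb.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.frontend.arn
  }
}

# a rule that routes /api traffic (in a real setup this would
# typically forward to a different target group)
resource "aws_lb_listener_rule" "api" {
  listener_arn = aws_lb_listener.http.arn
  priority     = 100

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.frontend.arn
  }

  condition {
    path_pattern {
      values = ["/api/*"]
    }
  }
}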

11) Elastic Beanstalk

11.1) Introduction

Elastic Beanstalk is AWS’s Platform as a Service (PaaS) solution
It’s a platform where you launch your app on without having to maintain the underlying infrastructure
– You are still responsible for the EC2 instances, but AWS will provide you with updates you can apply
– Updates can be applied manually or automatically
– The EC2 instances run Amazon Linux

Elastic Beanstalk can handle application scaling for you
– Underlying it uses a Load Balancer and an Autoscaling group to achieve this
– You can schedule scaling events or enable autoscaling based on a metric
It’s similar to Heroku (another PaaS solution)
You can have an application running just in a few clicks using the AWS Console
– Or using the elasticbeanstalk resources in Terraform

The supported Platforms are:
– PHP
– Java SE, Java with Tomcat
– .NET on Windows with IIS
– Node.js
– Python
– Ruby
– Go
– Docker (single container + multi-container, using ECS)

11.2) Demo Elastic Beanstalk.

securitygroup.tf
First you need to create the security groups.

resource "aws_security_group" "app-prod" {
  vpc_id      = aws_vpc.main.id
  name        = "application - production"
  description = "security group for my app"
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "myinstance"
  }
}

resource "aws_security_group" "allow-mariadb" {
  vpc_id      = aws_vpc.main.id
  name        = "allow-mariadb"
  description = "allow-mariadb"
  ingress {
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    security_groups = [aws_security_group.app-prod.id] # allowing access from our example instance
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
    self        = true
  }
  tags = {
    Name = "allow-mariadb"
  }
}

From the security groups you can see that we will also have an RDS instance.
rds.tf

resource "aws_db_subnet_group" "mariadb-subnet" {
  name        = "mariadb-subnet"
  description = "RDS subnet group"
  subnet_ids  = [aws_subnet.main-private-1.id, aws_subnet.main-private-2.id]
}

resource "aws_db_parameter_group" "mariadb-parameters" {
  name        = "mariadb-params"
  family      = "mariadb10.6"
  description = "MariaDB parameter group"

  parameter {
    name  = "max_allowed_packet"
    value = "16777216"
  }
}

resource "aws_db_instance" "mariadb" {
  allocated_storage         = 100 # 100 GB of storage, gives us more IOPS than a lower number
  engine                    = "mariadb"
  engine_version            = "10.6.7"
  instance_class            = "db.t2.small" # use micro if you want to use the free tier
  identifier                = "mariadb"
  db_name                   = "mydatabase"     # database name
  username                  = "root"           # username
  password                  = var.RDS_PASSWORD # password
  db_subnet_group_name      = aws_db_subnet_group.mariadb-subnet.name
  parameter_group_name      = aws_db_parameter_group.mariadb-parameters.name
  multi_az                  = "false" # set to true to have high availability: 2 instances synchronized with each other
  vpc_security_group_ids    = [aws_security_group.allow-mariadb.id]
  storage_type              = "gp2"
  backup_retention_period   = 30                                          # how long you’re going to keep your backups
  availability_zone         = aws_subnet.main-private-1.availability_zone # prefered AZ
  final_snapshot_identifier = "mariadb-final-snapshot"                    # final snapshot when executing terraform destroy
  tags = {
    Name = "mariadb-instance"
  }
}

elasticbeanstalk.tf

resource "aws_elastic_beanstalk_application" "app" {
  name        = "app"
  description = "app"
}

resource "aws_elastic_beanstalk_environment" "app-prod" {
  name                = "app-prod"
  application         = aws_elastic_beanstalk_application.app.name
  solution_stack_name = "64bit Amazon Linux 2018.03 v2.9.6 running PHP 7.3"
  setting {
    namespace = "aws:ec2:vpc"
    name      = "VPCId"
    value     = aws_vpc.main.id
  }
  setting {
    namespace = "aws:ec2:vpc"
    name      = "Subnets"
    value     = "${aws_subnet.main-private-1.id},${aws_subnet.main-private-2.id}"
  }
  setting {
    namespace = "aws:ec2:vpc"
    name      = "AssociatePublicIpAddress"
    value     = "false"
  }
  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "IamInstanceProfile"
    value     = aws_iam_instance_profile.app-ec2-role.name
  }
  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "SecurityGroups"
    value     = aws_security_group.app-prod.id
  }
  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "EC2KeyName"
    value     = aws_key_pair.mykeypair.id
  }
  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "InstanceType"
    value     = "t2.micro"
  }
  setting {
    namespace = "aws:elasticbeanstalk:environment"
    name      = "ServiceRole"
    value     = aws_iam_role.elasticbeanstalk-service-role.name
  }
  setting {
    namespace = "aws:ec2:vpc"
    name      = "ELBScheme"
    value     = "public"
  }
  setting {
    namespace = "aws:ec2:vpc"
    name      = "ELBSubnets"
    value     = "${aws_subnet.main-public-1.id},${aws_subnet.main-public-2.id}"
  }
  setting {
    namespace = "aws:elb:loadbalancer"
    name      = "CrossZone"
    value     = "true"
  }
  setting {
    namespace = "aws:elasticbeanstalk:command"
    name      = "BatchSize"
    value     = "30"
  }
  setting {
    namespace = "aws:elasticbeanstalk:command"
    name      = "BatchSizeType"
    value     = "Percentage"
  }
  setting {
    namespace = "aws:autoscaling:asg"
    name      = "Availability Zones"
    value     = "Any 2"
  }
  setting {
    namespace = "aws:autoscaling:asg"
    name      = "MinSize"
    value     = "1"
  }
  setting {
    namespace = "aws:autoscaling:updatepolicy:rollingupdate"
    name      = "RollingUpdateType"
    value     = "Health"
  }
  setting {
    namespace = "aws:elasticbeanstalk:application:environment"
    name      = "RDS_USERNAME"
    value     = aws_db_instance.mariadb.username
  }
  setting {
    namespace = "aws:elasticbeanstalk:application:environment"
    name      = "RDS_PASSWORD"
    value     = aws_db_instance.mariadb.password
  }
  setting {
    namespace = "aws:elasticbeanstalk:application:environment"
    name      = "RDS_DATABASE"
    value     = aws_db_instance.mariadb.name
  }
  setting {
    namespace = "aws:elasticbeanstalk:application:environment"
    name      = "RDS_HOSTNAME"
    value     = aws_db_instance.mariadb.endpoint
  }
}

In the aws_elastic_beanstalk_environment config:
– solution_stack_name: where do we find the right value?
At this link: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.platforms.html

https://docs.aws.amazon.com/elasticbeanstalk/latest/platforms/platform-history-php.html

vpc.tf

# Internet VPC
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  instance_tenancy     = "default"
  enable_dns_support   = "true"
  enable_dns_hostnames = "true"
  enable_classiclink   = "false"
  tags = {
    Name = "main"
  }
}

# Subnets
resource "aws_subnet" "main-public-1" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = "true"
  availability_zone       = "eu-west-1a"

  tags = {
    Name = "main-public-1"
  }
}

resource "aws_subnet" "main-public-2" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.2.0/24"
  map_public_ip_on_launch = "true"
  availability_zone       = "eu-west-1b"

  tags = {
    Name = "main-public-2"
  }
}

resource "aws_subnet" "main-public-3" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.3.0/24"
  map_public_ip_on_launch = "true"
  availability_zone       = "eu-west-1c"

  tags = {
    Name = "main-public-3"
  }
}

resource "aws_subnet" "main-private-1" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.4.0/24"
  map_public_ip_on_launch = "false"
  availability_zone       = "eu-west-1a"

  tags = {
    Name = "main-private-1"
  }
}

resource "aws_subnet" "main-private-2" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.5.0/24"
  map_public_ip_on_launch = "false"
  availability_zone       = "eu-west-1b"

  tags = {
    Name = "main-private-2"
  }
}

resource "aws_subnet" "main-private-3" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.6.0/24"
  map_public_ip_on_launch = "false"
  availability_zone       = "eu-west-1c"

  tags = {
    Name = "main-private-3"
  }
}

# Internet GW
resource "aws_internet_gateway" "main-gw" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "main"
  }
}

# route tables
resource "aws_route_table" "main-public" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main-gw.id
  }

  tags = {
    Name = "main-public-1"
  }
}

# route associations public
resource "aws_route_table_association" "main-public-1-a" {
  subnet_id      = aws_subnet.main-public-1.id
  route_table_id = aws_route_table.main-public.id
}

resource "aws_route_table_association" "main-public-2-a" {
  subnet_id      = aws_subnet.main-public-2.id
  route_table_id = aws_route_table.main-public.id
}

resource "aws_route_table_association" "main-public-3-a" {
  subnet_id      = aws_subnet.main-public-3.id
  route_table_id = aws_route_table.main-public.id
}

resource "aws_route_table" "main-private" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat-gw.id
  }

  tags = {
    Name = "main-private-1"
  }
}

# route associations private
resource "aws_route_table_association" "main-private-1-a" {
  subnet_id      = aws_subnet.main-private-1.id
  route_table_id = aws_route_table.main-private.id
}

resource "aws_route_table_association" "main-private-2-a" {
  subnet_id      = aws_subnet.main-private-2.id
  route_table_id = aws_route_table.main-private.id
}

resource "aws_route_table_association" "main-private-3-a" {
  subnet_id      = aws_subnet.main-private-3.id
  route_table_id = aws_route_table.main-private.id
}

# nat gw
resource "aws_eip" "nat" {
  vpc = true
}

resource "aws_nat_gateway" "nat-gw" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.main-public-1.id
  depends_on    = [aws_internet_gateway.main-gw]
}

iam.tf

# iam roles
resource "aws_iam_role" "app-ec2-role" {
  name               = "app-ec2-role"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF

}

resource "aws_iam_instance_profile" "app-ec2-role" {
  name = "app-ec2-role"
  role = aws_iam_role.app-ec2-role.name
}

# service
resource "aws_iam_role" "elasticbeanstalk-service-role" {
  name = "elasticbeanstalk-service-role"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "elasticbeanstalk.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF

}

# policies
resource "aws_iam_policy_attachment" "app-attach1" {
name       = "app-attach1"
roles      = [aws_iam_role.app-ec2-role.name]
policy_arn = "arn:aws:iam::aws:policy/AWSElasticBeanstalkWebTier"
}

resource "aws_iam_policy_attachment" "app-attach2" {
name       = "app-attach2"
roles      = [aws_iam_role.app-ec2-role.name]
policy_arn = "arn:aws:iam::aws:policy/AWSElasticBeanstalkMulticontainerDocker"
}

resource "aws_iam_policy_attachment" "app-attach3" {
name       = "app-attach3"
roles      = [aws_iam_role.app-ec2-role.name]
policy_arn = "arn:aws:iam::aws:policy/AWSElasticBeanstalkWorkerTier"
}

resource "aws_iam_policy_attachment" "app-attach4" {
name       = "app-attach4"
roles      = [aws_iam_role.elasticbeanstalk-service-role.name]
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSElasticBeanstalkEnhancedHealth"
}

After terraform apply finishes, you will see a link in the output.

A shiny UI! Keep exploring it, and help Nim out!

