Infrastructure as Code: Deployment to AWS using Terraform Part 3

Welcome to Part 3 of our Infrastructure as Code using Terraform series. All our scenarios so far execute on AWS. This series aims to be a gentle walk-through of Terraform, with the full code available on Git: https://github.com/kdjomeda/intro-to-terraform.

In Part 1 of the series, I evolved from a proof of concept to a real EC2 instance running outside of an AWS VPC.

In Part 2, I moved from an EC2-Classic instance into the world of AWS VPC: creating subnets, internet and NAT gateways, routes and route tables, and launching the instance in one of the availability zones.

In this part of the series, I would like to introduce the notion of reusability with the help of variables. Variables allow us to share this very code across multiple projects and serve different configurations. We will also create a database service intended for use by the application instance. At the completion of our Terraform run, we need to retrieve the DB’s endpoint and possibly pass it to other tools in our automation chain, or use it directly with a MySQL client. Retrieving information about the resources created is done using outputs, which are essentially like return values from a method in programming. The corresponding code for this article can be found here: https://github.com/kdjomeda/intro-to-terraform/tree/ec2-and-rds-in-vpc

There will be a change to the project file structure we have so far: we will be adding 3 types of files:

  • variables.tf (Where variables are declared)
  • outputs.tf (Where outputs from execution are published)
  • terraform.tfvars (Which holds the default values of the variables)

I will also introduce a folder called files, where I will keep the userdata shell script. Our new project structure will look like the one below:

I will be using an Ubuntu 20.04 AMI and MySQL 8.0 as our RDS engine. Again, to create an AWS service via the API, you need to understand how its parameters and arguments are organized. To do so with RDS, you might want to have a look at this link: https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html

Creating our input variable file

Let’s start with our variables.tf file, declaring the variables to be used by the resources’ arguments for the VPC and EC2 instance from Part 2 of the tutorial. We will also declare the variables to be used for the RDS instance in it.

Using defaults and datatypes in variables

On lines 14-22 and 84-87 of the variable file above, we are using the default attribute of a variable. This means that when no value is assigned to the variable in question in the tfvars file (which we will use later), Terraform will use the value specified in the default attribute.

On lines 28-31 and 129-132, I used the type attribute to specify the datatypes that the variables expect.
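The full file is in the linked repo; to illustrate the two patterns just mentioned, a minimal sketch (variable names and values here are illustrative, not the repo’s exact ones) could look like this:

```hcl
# Sketch of the "default" and "type" attributes discussed above

variable "environment" {
  description = "Name of the deployment environment"
  default     = "staging" # used when no value is set in the tfvars file
}

variable "subnet_cidrs" {
  description = "CIDR blocks of the subnets, keyed by subnet name"
  type        = map(string) # Terraform rejects any value that is not a map of strings
}

variable "db_instance_password" {
  description = "Master password of the RDS instance (no default: prompted at apply time)"
}
```

Note that a variable with neither a default nor a tfvars entry, like db_instance_password above, is exactly what triggers the interactive prompt we will rely on later.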

Creating the RDS instance

We now need to define our Terraform instructions to create our DB instance. It is going to be based on the smallest instance class available (db.t2.micro) and on MySQL version 8.0.
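A sketch of such an RDS instance and its component dependencies is shown below. The resource names, variable names, and subnet references are illustrative assumptions, not the repo’s exact ones; see the linked branch for the real code.

```hcl
# Subnet group: tells RDS which (private) subnets it may live in
resource "aws_db_subnet_group" "db_subnet_group" {
  name       = "tuto-db-subnet-group"
  subnet_ids = [aws_subnet.private_subnet_a.id, aws_subnet.private_subnet_b.id]
}

# Parameter group: MySQL 8.0 engine settings
resource "aws_db_parameter_group" "db_parameter_group" {
  name   = "tuto-mysql8-params"
  family = "mysql8.0"
}

# The RDS instance itself, on the smallest class available
resource "aws_db_instance" "tuto_db" {
  identifier             = "tuto-db"
  engine                 = "mysql"
  engine_version         = "8.0"
  instance_class         = "db.t2.micro"
  allocated_storage      = 20
  username               = var.db_instance_username
  password               = var.db_instance_password
  db_subnet_group_name   = aws_db_subnet_group.db_subnet_group.name
  parameter_group_name   = aws_db_parameter_group.db_parameter_group.name
  vpc_security_group_ids = [aws_security_group.db_security_group.id]
  skip_final_snapshot    = true
}
```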

In the code above, we have a parameter group, a subnet group, and a security group as component dependencies of the RDS instance.

Creating our variable definition file (.tfvars)

The variable definition file can actually be called anything.tfvars, e.g. tuto_part2.tfvars. For it to be picked up by Terraform, we would then have to pass the option -var-file anything.tfvars when terraform apply is executed. However, if we name the file terraform.tfvars, or give it any name ending in .auto.tfvars, Terraform will pick it up automatically. Without this file, the terraform apply command would prompt us for the value of each and every variable from our input variable file (variables.tf). Below is our terraform.tfvars
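A shortened sketch of such a terraform.tfvars is shown below. The variable names and CIDR values are illustrative; the real file in the repo is longer.

```hcl
# terraform.tfvars -- sketch with illustrative values
vpc_cidr = "10.0.0.0/16"

# One map groups the CIDRs of all subnets instead of one variable per subnet
subnet_cidrs = {
  public_a  = "10.0.1.0/24"
  public_b  = "10.0.2.0/24"
  private_a = "10.0.3.0/24"
  private_b = "10.0.4.0/24"
}

instance_type        = "t2.micro"
db_instance_class    = "db.t2.micro"
db_instance_username = "dbadmin"
# db_instance_password is intentionally left out so Terraform prompts for it
```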

In the file above, I would like to highlight that on lines 8-14 we are using the map type defined in the input variable file to group the CIDRs of all the subnets, instead of using a different variable for each subnet. On line 37, I commented out the DB superuser password variable because I would like it to be prompted for; it is thus the only variable that requires input at a prompt during our code execution.

Creating an EC2 instance with userdata

We have to slightly modify the instructions of our EC2 resource created in Part 2 so that it uses the variables defined in the input variable file.
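A sketch of the modified resource is below; the resource names, variable names, and volume settings are illustrative assumptions, but the root_block_device and user_data attributes are the real AWS provider arguments being discussed.

```hcl
resource "aws_instance" "app_instance" {
  ami                    = var.ubuntu_ami
  instance_type          = var.instance_type
  subnet_id              = aws_subnet.public_subnet_a.id
  key_name               = var.key_name
  vpc_security_group_ids = [aws_security_group.app_security_group.id]

  # How the EBS root device is configured: type, size, encryption
  root_block_device {
    volume_type = "gp2"
    volume_size = 10
    encrypted   = false
  }

  # Shell script run at first boot to "pre-install" our packages
  user_data = file("files/ubuntu_userdata.sh")
}
```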

The resource block for creating the instance above has 2 noticeable new elements: one on lines 14-17 and the other on line 18. With the root_block_device attribute, we define how our EBS root device is to be configured, from the volume type to its size and other settings such as whether it should be encrypted or not. With user_data, we use a shell script to “pre-install” some packages so that they are on the node by the time it is ready to be used. That file is located at files/ubuntu_userdata.sh

The shell script above will install the following packages for us:

  • htop: To check resource usage on the node
  • fail2ban: A security utility that needs to be configured
  • ec2-instance-connect: To make the node reachable using different instance connect methodology of AWS
  • mysql-client-core: To make client requests to our RDS instance
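The actual files/ubuntu_userdata.sh is in the repo; a minimal sketch covering the four packages listed above (the exact package pins may differ) would be:

```shell
#!/bin/bash
# files/ubuntu_userdata.sh -- sketch of the bootstrap script described above
apt-get update -y
apt-get install -y htop fail2ban ec2-instance-connect mysql-client-core-8.0
```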

Creating our output file

In all the previous posts, after creating our resources we had to open the AWS console to see the IPs (both internal and external) of the EC2 instance and the endpoint URL assigned to the RDS instance. Below is our attempt at returning useful information from the stack we have just created.
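A sketch of such an outputs.tf follows. The output names and resource labels are illustrative; the attributes (public_ip, private_ip, endpoint) are the real AWS provider attributes for these resource types.

```hcl
# outputs.tf -- return values of our stack, printed after terraform apply
output "app_instance_public_ip" {
  value = aws_instance.app_instance.public_ip
}

output "app_instance_private_ip" {
  value = aws_instance.app_instance.private_ip
}

# host:port of the RDS instance, to pass to other tools or a MySQL client
output "db_endpoint" {
  value = aws_db_instance.tuto_db.endpoint
}
```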

Now our code is ready to be executed. Because there is no value for the db_instance_password variable in the tfvars file, we will have to enter our password at the prompt.

Inputting the db password at the prompt

At the end of the execution, our outputted values should look similar to the image below:

Output values after execution
Displaying the properties of our instance

Let’s try to connect to our node and, from it, connect to our MySQL instance.
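An illustrative session might look like the following, with the placeholders replaced by the public IP and DB endpoint printed in the outputs above (the username dbadmin is an assumption; use whatever you set for the DB superuser):

```shell
# SSH into the node using its public IP from the outputs
ssh ubuntu@<instance-public-ip>

# From the node, connect to RDS with the mysql client installed by userdata
# (strip the :3306 suffix from the endpoint output and pass the port via -P)
mysql -h <db-endpoint-host> -P 3306 -u dbadmin -p
```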
