# Terraform Basics Training Course

## Introduction

### Course Introduction

### Course Resources

- [[Terraform-Associate-1-1-1.pdf]]

## Introduction To Infrastructure as Code

### Challenges with Traditional IT Infrastructure

- An overview of the modern needs of managing IT infrastructure.

### Types of IAC Tools

- Various types of tools used for IaC.

### Why Terraform?

- A discussion of why to use Terraform.

## Getting Started with Terraform

### Installing Terraform

- https://developer.hashicorp.com/terraform/install?product_intent=terraform#next-steps
- Installed using the Ubuntu/Debian method on the local system.
- Terraform files are written in HCL, a declarative language.
- A **Resource** is any object supported by Terraform that can be provisioned.

### HashiCorp Configuration Language (HCL) Basics

- HCL consists of **blocks** and **arguments**. For example:
```
<block> <parameters> {
    key1 = value1
    key2 = value2
}
```
- An example of creating a file locally.
```
resource "local_file" "pet" {
    filename = "/home/terrific/TerraformTraining/pets.txt"
    content = "We love pets!"
}
```
- Example breakdown:
    - **resource** is the block type defining the task to perform.
    - **Resource Type**
        - **local** in *local_file* is the provider.
        - **file** in *local_file* is the resource.
    - **Resource Name**
        - *pet* is the logical name used to identify the resource and can be named anything.
    - **Resource Arguments** - are specific to the resource being created.
        - **filename** - the file to create.
        - **content** - the contents of the file.
- An AWS EC2 example.
```
resource "aws_instance" "webserver" {
    ami = "ami-$AMIUUID"
    instance_type = "t2.micro"
}
```
- An AWS S3 bucket example.
```
resource "aws_s3_bucket" "data" {
    bucket = "webserver-bucket-org-1234"
    acl = "private"
}
```
- Terraform workflow:
    - Create the Terraform configuration file.
    - Then execute the `terraform init` command to initialize the directory & check the configuration file.
        - It will install the plugins declared in the `tf` file.
    - Then execute the `terraform plan` command to review the actions that Terraform will perform.
        - Some default options will be applied if not specifically set in the `tf` file.
    - Finally, execute the `terraform apply` command to deploy the changes described in the `tf` file.
    - There is also the ability to run `terraform show` in the configuration directory to show what was performed.
- Reference the Terraform documentation for further information on the providers available & how to use them.
    - https://developer.hashicorp.com/terraform/docs

### Update and Destroy Infrastructure

- When `tf` configurations are updated, running `terraform plan` will show what changes will be performed on the existing objects.
- To remove the infrastructure resources that have been applied, execute `terraform destroy`. This will delete the objects referenced in the `tf` file.

### Lab: HCL Basics

- Remember you have to run `terraform init` before `terraform plan` will work.
- Demonstrated the use of the `local_sensitive_file` resource to keep file contents from being displayed on the console.

## Terraform Basics

### Using Terraform Providers

- registry.terraform.io
    - **Official** - owned & maintained by HashiCorp.
    - **Partner** - owned & maintained by a 3rd-party technology provider.
    - **Community** - owned & maintained by users of the Terraform community.
- Plugins are downloaded into a hidden directory in the working configuration directory, `.terraform/plugins`. A provider requirement can also be declared explicitly, as in the sketch below.
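A minimal sketch of pinning a provider requirement in configuration; the version constraint here is an assumption, not from the course:

```hcl
terraform {
  required_providers {
    local = {
      source  = "hashicorp/local" # short for registry.terraform.io/hashicorp/local
      version = "~> 2.4"          # optional version constraint (assumed)
    }
  }
}
```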
- The source address is what is used to download a specific plugin,
    - e.g. **hashicorp/local**.
    - Full URLs can also be used here, e.g. **`registry.terraform.io/hashicorp/local`**.

### Configuration Directory

- Common naming conventions:
    - **`main.tf`** - Main configuration file containing resource definitions.
    - **`variables.tf`** - Contains variable declarations.
    - **`outputs.tf`** - Contains outputs from resources.
    - **`provider.tf`** - Contains the provider definition.

### Lab: Terraform Providers

- **`.terraform/providers`** - Location for providers.

### Multiple Providers

- Multiple providers can be used within a single `tf` file.

### Lab: Multiple Providers

- When changes to a `tf` configuration file add another **new** resource provider, you must run `terraform init`, otherwise `terraform apply` won't work, because the new provider needs to be installed/enabled.

### Using Input Variables

- Of course, hardcoding input variables isn't a best practice.
- This section introduces the **`variables.tf`** configuration file.
- An example **`variables.tf`** configuration file:
```
variable "filename" {
    default = "/root/pets.txt"
}
variable "content" {
    default = "We love pets!"
}
```
- A variable can be named anything, but using the name of the argument it supplies is recommended.
- An example of using the **variables** defined in the **`variables.tf`** configuration file:
```
resource "local_file" "pet" {
    filename = var.filename
    content = var.content
}
```
- Another example for AWS EC2 using **variables**.
- **`main.tf`** for the AWS EC2 example:
```
resource "aws_instance" "webserver" {
    ami = var.ami
    instance_type = var.instance_type
}
```
- **`variables.tf`** for the AWS EC2 example:
```
variable "ami" {
    default = "ami-123456789098765421"
}
variable "instance_type" {
    default = "t2.micro"
}
```

### Understanding the Variable Block

- The **`variable`** block has 3 parameters that can be used.
    - **default** - the value to be used for the variable when none is supplied.
    - **type** - optional; enforces the type of the variable. The following types can be set for this parameter:
        - **string** - alphanumeric
        - **number** - a numeric value
        - **bool** - true or false
        - **any** - the default when no type is set
        - **list** - a numbered collection of values, **`["cat", "dog"]`**
        - **map** - keys mapped to values, **pet1 = cat**, **pet2 = dog**
        - **object** - a complex data structure
        - **tuple** - a complex data structure
    - **description** - an optional parameter to describe what the variable is used for. Using this is a best practice.

#### **1. Lists**

A **list** is an ordered collection of values of the same type.

##### Example:
```hcl
variable "instance_names" {
  type    = list(string)
  default = ["web-1", "web-2", "web-3"]
}
```
In this example, `instance_names` is a list of strings.

#### **2. Maps**

A **map** is a collection of key-value pairs, where keys are unique, and values can be of a specific type.

##### Example:
```hcl
variable "instance_amis" {
  type = map(string)
  default = {
    us-east-1 = "ami-123456"
    us-west-1 = "ami-654321"
  }
}
```
Here, `instance_amis` maps AWS regions to AMI IDs.

#### **3. Objects**

An **object** is a complex type that groups multiple attributes with different types.

##### Example:
```hcl
variable "instance_config" {
  type = object({
    name   = string
    cpu    = number
    memory = number
  })
  default = {
    name   = "web-server"
    cpu    = 2
    memory = 4
  }
}
```
This object stores multiple attributes of an instance.
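Before moving on to tuples, a quick sketch of how these types are referenced in expressions; the output names below are hypothetical:

```hcl
output "first_instance" {
  value = var.instance_names[0] # list index -> "web-1"
}

output "east_ami" {
  value = var.instance_amis["us-east-1"] # map key lookup -> "ami-123456"
}

output "server_cpu" {
  value = var.instance_config.cpu # object attribute -> 2
}
```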
#### **4. Tuples**

A **tuple** is a sequence with a fixed number of elements, where each element can have a different type.

##### Example:
```hcl
variable "mixed_tuple" {
  type    = tuple([string, number, bool])
  default = ["example", 42, true]
}
```
Here, `mixed_tuple` consists of a string, a number, and a boolean.

#### **Example `main.tf` using these variables**

```hcl
terraform {
  required_version = ">= 1.0"
}

variable "instance_names" {
  type    = list(string)
  default = ["web-1", "web-2", "web-3"]
}

variable "instance_amis" {
  type = map(string)
  default = {
    us-east-1 = "ami-123456"
    us-west-1 = "ami-654321"
  }
}

variable "instance_config" {
  type = object({
    name   = string
    cpu    = number
    memory = number
  })
  default = {
    name   = "web-server"
    cpu    = 2
    memory = 4
  }
}

variable "mixed_tuple" {
  type    = tuple([string, number, bool])
  default = ["example", 42, true]
}

resource "aws_instance" "example" {
  count         = length(var.instance_names)
  ami           = lookup(var.instance_amis, "us-east-1", "ami-default")
  instance_type = "t2.micro"

  tags = {
    Name = var.instance_names[count.index]
  }
}

output "instance_details" {
  value = var.instance_config
}

output "tuple_values" {
  value = var.mixed_tuple
}
```

##### **Explanation:**
- **List:** Used in `instance_names` to create multiple EC2 instances.
- **Map:** Used in `instance_amis` to get AMI IDs based on region.
- **Object:** Used in `instance_config` to store multiple instance attributes.
- **Tuple:** Demonstrates a fixed-structure variable for mixed data types.

### Lab: Variables

- Testing out using variables for resource blocks.

### Using Variables in Terraform

- When the variables file has empty variables defined, you can set them via:
    - Interactive prompts when using the `apply` command.
    - Command-line arguments via the `-var` flag.
    - Environment variables using `export TF_VAR_$varname`, where `$varname` is the specific variable to set.
        - e.g. `TF_VAR_filename`
    - Variable definition files, e.g. `terraform.tfvars`, that contain the variables to set. Files can be named anything as long as the extension is **`.tfvars`** or **`.tfvars.json`**.
        - Files named **`*.auto.tfvars`** or **`*.auto.tfvars.json`** are automatically loaded.
        - Files named otherwise can be loaded by being passed in using the **`-var-file`** flag, `terraform apply -var-file variables.tfvars`

Example **`terraform.tfvars`**
```hcl
filename = "/root/pets.txt"
content = "We love pets!"
```

- Variable definition precedence order:
    1. Environment variables
    2. The `terraform.tfvars` file
    3. `*.auto.tfvars` files, in alphabetical order
    4. `-var` or `-var-file` command-line flags
- Later sources override earlier ones, so the 4th option has the highest precedence.

### Lab: Using Variables in Terraform

- Experimented with undefined variables & properly setting them using variables files.
- Had to create a `variables.tf` file with a single entry to reference variable definition files.

### Resource Attributes

- For a "local_file" resource these are attributes such as:
    - filename & content
- Essentially, an attribute is a value a resource exposes once it is created, which other resources can reference.
- Resource attributes can be shared between resources.
    - References follow the format **`${resource_type.resource_name.attribute}`**.
    - So, for example, with random_pet it would be **`${random_pet.my-pet.id}`**, and this would be expanded into **Mr. Bull** (or whatever the value would be for that resource).

Example:
```hcl
resource "local_file" "pet" {
  filename = var.filename
  content  = "My favorite pet is ${random_pet.my-pet.id}" # references the "random_pet" resource
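  # The ${...} reference above is an implicit dependency, so Terraform
  # creates random_pet.my-pet before rendering this file's content.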
}

resource "random_pet" "my-pet" {
  prefix    = var.prefix
  separator = var.separator
  length    = var.length
}
```

### Lab: Resource Attributes

- Created a resource that used the **time_static** resource to have the value interpolated into a **local_file** resource.
```hcl
resource "time_static" "time_update" {
}

resource "local_file" "time" {
  filename = "/root/time.txt"
  content  = "Time stamp of this file is ${time_static.time_update.id}"
}
```
- You can use **`terraform show`** to obtain information such as the **id** that was interpolated into the resource.

### Resource Dependencies

- **Implicit Dependency** - Terraform figures out the order in which to create resources based on the references between them in the file. Known as a **reference expression**.
    - Reference the example in [[Terraform Basics Training Course#Resource Attributes|Resource Attributes]]
- **Explicit Dependency** - you specifically tell Terraform that one resource relies on another using the **`depends_on`** argument.
- Example of an explicit dependency:
```hcl
resource "local_file" "pet" {
  filename = var.filename
  content  = "My favorite pet is Mr. Cat"

  depends_on = [
    random_pet.my-pet
  ]
}

resource "random_pet" "my-pet" {
  prefix    = var.prefix
  separator = var.separator
  length    = var.length
}
```

### Lab: Resource Dependencies

- Resource values that aren't saved/written to a file are still recorded in the **Terraform state**, which is accessed via **`terraform show`**.
- In this lab the **tls_private_key** resource was used.
    - https://registry.terraform.io/providers/hashicorp/tls/latest/docs/resources/private_key

### Output Variables

In Terraform, **output variables** allow you to extract and display useful information after resource creation. These variables are often used to expose values for other modules, debugging, or sharing essential details.

1. **Expose Values** – They return values such as instance IPs, IDs, or DNS names.
2. **Reference in Other Modules** – Outputs from one module can be used in another.
3. **Debugging & Logging** – Useful for troubleshooting and verifying configurations.

#### **Basic Syntax**
```hcl
output "instance_ip" {
  value = aws_instance.example.public_ip
}
```
- `output` → Declares an output variable.
- `instance_ip` → Name of the output.
- `value` → Specifies the value to output.

#### **Example: Multiple Outputs**
```hcl
resource "aws_instance" "example" {
  ami           = "ami-123456"
  instance_type = "t2.micro"
}

output "instance_id" {
  value = aws_instance.example.id
}

output "public_ip" {
  value = aws_instance.example.public_ip
}

output "instance_arn" {
  value = aws_instance.example.arn
}
```

#### **Optional Attributes**
- **`description`** (adds documentation)
- **`sensitive`** (hides output in logs)

```hcl
output "db_password" {
  value     = aws_db_instance.example.password
  sensitive = true
}
```

---

- Use the command **`terraform output`** to return all output variables.
- Use the command **`terraform output pet-name`** to get the specified output variable. In this example, we're getting the **pet-name** output variable, which is **Mrs. gibbon**.
- These variables can be fed to other tools such as Ansible or shell scripts (see the sketch after the lab note below).
- Reference [[Terraform - Examples#**Real-World Example Terraform Output Variables**|Output Variables Example]]

### Lab: Output Variables

- Simple example of using output variables.
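A small sketch of consuming outputs from a shell script; `pet-name` is the output variable mentioned above:

```sh
# -raw prints just the value, handy for scripting
PET_NAME=$(terraform output -raw pet-name)
echo "Provisioned pet: ${PET_NAME}"

# -json emits all outputs as JSON for tools like Ansible or jq
terraform output -json > outputs.json
```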
## Terraform State

### Introduction to Terraform State

#### **What is Terraform State?**

Terraform **state** is a JSON-based file that records the **current infrastructure configuration** managed by Terraform. It helps Terraform **track resource changes, dependencies, and mappings** between configuration and real-world infrastructure.

- **`terraform.tfstate`**

#### **Why is Terraform State Important?**

1. **Tracks Resources** – Maps Terraform resources to real cloud infrastructure.
2. **Detects Changes** – Identifies differences between the desired and actual state.
3. **Enables Collaboration** – Allows teams to share infrastructure state using remote backends.
4. **Speeds Up Performance** – Prevents querying cloud providers for every Terraform command.

#### **Where is Terraform State Stored?**

- **Local State (default)** → Stored in a file called `terraform.tfstate`
- **Remote State** (recommended for teams) → Stored in **S3, Azure Blob, Google Cloud Storage, Terraform Cloud, etc.**

#### **Example: Local vs Remote State**

##### **Local State (Default)**
```sh
terraform apply
```
Terraform automatically creates **`terraform.tfstate`** in the working directory.

##### **Remote State (Recommended)**

Example: Store state in an **S3 bucket** with **DynamoDB locking**
```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-lock"
    encrypt        = true
  }
}
```
**Why?**
- **S3 stores the state file securely.**
- **DynamoDB prevents multiple users from making changes at the same time (locking).**

#### **Common Terraform State Commands**

| Command | Description |
| --- | --- |
| `terraform state list` | Lists all managed resources in the state file. |
| `terraform state show <resource>` | Displays details of a specific resource. |
| `terraform state pull` | Fetches the latest state from remote storage. |
| `terraform state mv <source> <destination>` | Moves a resource to a new name. |
| `terraform state rm <resource>` | Removes a resource from the state (but not the actual infrastructure). |

#### **Key Best Practices**

✅ **Use Remote State for Collaboration** – Helps teams work on infrastructure safely.
✅ **Enable State Locking** – Prevents conflicts when multiple users run Terraform.
✅ **Never Manually Edit `terraform.tfstate`** – Corrupting the file can cause infrastructure drift.
✅ **Use `terraform refresh` to Sync State** – Ensures Terraform recognizes manual changes in cloud infrastructure.

---

### Purpose of State

#### **What is State Drift?**

**Terraform state drift** occurs when the **actual infrastructure** changes outside of Terraform's control, causing it to differ from the state stored in the `terraform.tfstate` file.

This can happen due to:
- **Manual changes** in the cloud provider (e.g., modifying an EC2 instance via AWS Console).
- **External automation** or scripts updating resources.
- **Policy changes** or infrastructure failures causing unintended modifications.

##### **How to Detect State Drift?**

Terraform provides commands to identify drift:
```sh
terraform plan
```
- Shows differences between the **desired state** (code) and the **actual state** (real-world infrastructure).

```sh
terraform refresh
```
- Updates the state file to reflect the latest infrastructure configuration.
- Does **not** modify resources, only updates the local state file.

##### **How to Fix State Drift?**

1. **Reapply Terraform Configuration:**
    - Run `terraform apply` to bring the infrastructure back to the declared state.
2. **Manually Update State:**
    - If changes are intentional, use `terraform import` to sync the resource with Terraform state (see the sketch after this list).
3. **Remove Resources from State:**
    - If a resource no longer exists but is still in state, remove it using:
```sh
terraform state rm <resource>
```
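For option 2 above, a hedged sketch of importing a manually created instance into state; the resource address and instance ID are placeholders:

```sh
# A matching `resource "aws_instance" "example"` block must already exist
terraform import aws_instance.example i-0abcd1234ef567890
```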
##### **Preventing State Drift**

✅ **Use Remote State & State Locking** – Prevents multiple users from making unintended changes.
✅ **Implement Infrastructure as Code (IaC) Policies** – Restrict manual changes via cloud IAM policies.
✅ **Regularly Run `terraform plan`** – Helps detect drift early before issues arise.

---

##### **`terraform plan -refresh=false` - Explanation**

```sh
terraform plan -refresh=false
```
- Generates an execution plan **without updating the Terraform state file** from the actual infrastructure.
- It **only compares** the **current state file (`terraform.tfstate`)** with the **Terraform configuration (`.tf` files)**.
- It **does NOT check for changes** made manually in the cloud provider.

#### **Why Use `-refresh=false`?**

1. **Faster Execution** – Skips querying the cloud provider, making `terraform plan` quicker.
2. **Avoid Accidental Changes** – Useful when you want to preview changes based on your `.tf` files **without incorporating real-world drift**.
3. **Debugging & Testing** – Helps determine whether changes exist **only in the configuration**, ignoring external modifications.

#### **When NOT to Use It?**

- If you suspect **state drift**, because it won't detect manual changes in infrastructure.
- When working in a **team environment** with shared remote state (e.g., S3, Terraform Cloud), as outdated state may cause unintended consequences.

#### **Alternative Approach**

If you want to **ensure Terraform has the latest state**, use:
```sh
terraform refresh
terraform plan
```
or simply:
```sh
terraform plan # (default behavior, refreshes state)
```

### Lab: Terraform State

- A brief demonstration of using the **`terraform show`** command and the **`terraform.tfstate`** file.

### Terraform State Considerations

- The state file contains sensitive information by default.
    - For example, IP addresses of EC2 instances, SSH keys, or DB passwords.
    - The file is stored as plain-text JSON.
- Store configuration files in GitHub or a similar version control system.
- Use remote state backends for storing Terraform state files/data.
- Don't manually/directly edit the state file.

## Working with Terraform

### Terraform Commands

- **`terraform validate`** - checks whether the configuration file is valid/correct.
- **`terraform fmt`** - updates configuration files to adjust their formatting.
- **`terraform show`** - prints out the current state of the infrastructure as seen by Terraform.
    - **`terraform show -json`** - same as above but in JSON format.
- **`terraform providers`** - view the list of providers in the current configuration.
    - **`terraform providers mirror $DEST_DIR`** - copies the provider plugins to another directory.
- **`terraform output`** - prints the output variables in the current configuration.
    - **`terraform output $VAR_NAME`** - prints a specific output variable in the current configuration.
- **`terraform refresh`** - updates the state file to reflect the latest infrastructure configuration.
    - Does **not** modify resources, only updates the local state file.
- **`terraform graph`** - creates a visual representation of configuration dependencies.
    - Used in conjunction with the **`dot`** command from the **graphviz** package.
    - For example: **`terraform graph | dot -Tsvg > graph.svg`**
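A short sketch combining a few of these commands; the `jq` filter and its JSON path are assumptions about the `show -json` layout:

```sh
# Normalize formatting, then check syntax
terraform fmt
terraform validate

# Dump state as JSON and list resource addresses (requires jq)
terraform show -json | jq -r '.values.root_module.resources[].address'
```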
### Lab: Terraform Commands

- **`terraform validate`** only validates syntax, not argument values for resources.
- Tested out the various commands from [[Terraform Basics Training Course#Terraform Commands|Terraform Commands]]

### Mutable vs Immutable Infrastructure

#### **Mutable Infrastructure**

- **Definition:** Infrastructure that can be **modified or updated in place** without replacing the entire resource.
- **Example:**
    - Changing an EC2 instance type without recreating it.
    - Updating configurations on an existing server.
- **Pros:**
    ✅ Faster updates (no need to recreate resources).
    ✅ Retains state and data (useful for databases, long-running servers).
- **Cons:**
    ❌ Risk of configuration drift if manual changes occur.
    ❌ Harder to roll back changes cleanly.

#### **Immutable Infrastructure**

- **Definition:** Infrastructure where **changes result in creating a new resource** instead of modifying the existing one.
- **Example:**
    - Deploying a new AMI for updates instead of modifying an existing instance.
    - Replacing a Kubernetes pod instead of modifying it in place.
- **Pros:**
    ✅ Ensures consistency (no drift or unpredictable changes).
    ✅ Safer rollbacks (just revert to a previous version).
- **Cons:**
    ❌ Slower updates (requires creating and replacing resources).
    ❌ May cause downtime if not managed properly.

#### **Terraform & Infrastructure Approach**

- **Mutable Approach:**
    - Updating resources in place using Terraform (`terraform apply`).
    - Example: Updating a security group rule.
- **Immutable Approach:**
    - Using `create_before_destroy` to **replace resources before deleting the old one**.
    - Example:
```hcl
resource "aws_launch_template" "example" {
  name_prefix = "example"
  image_id    = "ami-123456"
}

resource "aws_autoscaling_group" "example" {
  launch_template {
    id      = aws_launch_template.example.id
    version = "$Latest"
  }
}
```
    - This replaces the launch template without modifying existing instances.

#### **Which One to Use?**

✅ **Use Mutable Infrastructure** for databases, persistent storage, and quick changes.
✅ **Use Immutable Infrastructure** for stateless applications, containers, and CI/CD pipelines.

---

- Configuration drift occurs when the actual infrastructure diverges from the Terraform configuration due to manual changes, external automation, or system failures. This results in inconsistencies between what Terraform expects and what exists in reality.

### Lifecycle Rules

Terraform **lifecycle rules** allow you to control how resources are **created, updated, and destroyed**, helping to manage infrastructure changes more predictably.

##### **Key Lifecycle Arguments**

Terraform provides three main lifecycle arguments:

1. **`create_before_destroy`** – Ensures a new resource is created **before** destroying the old one.
```hcl
resource "aws_instance" "example" {
  ami           = "ami-123456"
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true
  }
}
```
✅ Avoids downtime during resource replacement.
❌ Not all resources support this behavior (e.g., S3 buckets).

2. **`prevent_destroy`** – Prevents accidental resource deletion.
```hcl
resource "aws_s3_bucket" "example" {
  bucket = "my-important-bucket"

  lifecycle {
    prevent_destroy = true
  }
}
```
✅ Protects critical resources (e.g., databases, production S3 buckets).
❌ Requires **manual override** to delete (`terraform apply` will fail if destruction is attempted).

3. **`ignore_changes`** – Prevents Terraform from updating specific resource attributes.
```hcl
resource "aws_instance" "example" {
  ami           = "ami-123456"
  instance_type = "t2.micro"

  lifecycle {
    ignore_changes = [instance_type]
  }
}
```
✅ Useful when external systems modify attributes (e.g., Auto Scaling Groups changing instance types).
❌ Can lead to **configuration drift** if not used carefully.

##### **Best Practices for Lifecycle Rules**

✅ Use **`prevent_destroy`** for critical resources to prevent accidental deletion.
✅ Apply **`create_before_destroy`** for zero-downtime updates when replacing resources.
✅ Be cautious with **`ignore_changes`** to avoid unintended drift.
❌ This doesn't prevent resources from being destroyed using **`terraform destroy`**

### Lab: Lifecycle Rules

- This is related to the `create_before_destroy` lifecycle rule.
- In certain instances, you will see that the lifecycle rule we applied caused the new local file to be created first and the old file to be destroyed afterward during the recreate operation.

### Datasources

- Data sources allow Terraform to read attributes of resources provisioned outside its management, such as a file created by another process.
- This introduces the keyword **`data`**.
```hcl
data "local_file" "dog" {
  filename = "/root/dog.txt"
}
```
- This can then be used as follows:
```hcl
resource "local_file" "pet" {
  filename = "/root/pets.txt"
  content  = data.local_file.dog.content
}
```

### Lab: Datasources

- Simple examples of working with the **`data`** block keyword.

### Meta-Arguments

Terraform **meta-arguments** are special configuration options that modify how resources and modules behave **without changing their core attributes**. They help with **resource management, scaling, and modularization**.

#### **Common Meta-Arguments**

##### **1. `count` (Creates Multiple Instances)**

- Used to **create multiple copies** of a resource dynamically.
- Example:
```hcl
resource "aws_instance" "example" {
  count         = 3
  ami           = "ami-123456"
  instance_type = "t2.micro"

  tags = {
    Name = "instance-${count.index}"
  }
}
```
✅ Useful for creating multiple resources dynamically.
❌ `count.index` can cause issues if elements are removed (use `for_each` instead).

##### **2. `for_each` (Iterates Over Maps & Sets)**

- More flexible than `count`, allowing iteration over **maps or sets**.
- Example:
```hcl
resource "aws_s3_bucket" "example" {
  for_each = toset(["dev-bucket", "prod-bucket"])
  bucket   = each.key
}
```
✅ Prevents index shifting issues that occur with `count`.
❌ Only works with maps and sets, not lists.

##### **3. `depends_on` (Explicit Dependencies)**

- Ensures a resource is created **after** another resource.
- Example:
```hcl
resource "aws_instance" "example" {
  ami           = "ami-123456"
  instance_type = "t2.micro"

  depends_on = [aws_security_group.example]
}
```
✅ Helps manage resource dependencies explicitly.
❌ Usually not needed if Terraform automatically determines dependencies.

##### **4. `provider` (Overrides Default Provider)**

- Specifies which **Terraform provider configuration** to use.
- Example:
```hcl
resource "aws_instance" "example" {
  provider = aws.us_east_1
  ami      = "ami-123456"
}
```
✅ Useful for **multi-region** or **multi-cloud** deployments.

#### **Why Use Meta-Arguments?**

✅ **Optimize Resource Management** – Dynamically scale infrastructure using `count` and `for_each`.
✅ **Improve Dependency Handling** – Use `depends_on` to ensure correct execution order.
✅ **Enable Multi-Provider Configurations** – Use `provider` to specify different cloud regions.
### Count

- Reference [[Terraform Basics Training Course#Meta-Arguments#**1. `count` (Creates Multiple Instances)**|Meta Arguments - Count]]
- Example of creating 3 files using the **`count`** meta argument.
```hcl
# main.tf
resource "local_file" "pet" {
  filename = var.filename[count.index]
  content  = "We love pets!" # content is required for local_file
  count    = length(var.filename)
}
```
```hcl
# variables.tf
variable "filename" {
  default = [
    "/root/pets.txt",
    "/root/dogs.txt",
    "/root/cat.txt"
  ]
}
```

### for-each

- Reference [[Terraform Basics Training Course#Meta-Arguments#**2. `for_each` (Iterates Over Maps & Sets)**|Meta Arguments - for-each]]
- Example of creating multiple files using the **`for_each`** meta argument.
```hcl
# main.tf
resource "local_file" "pet" {
  filename = each.value
  content  = "We love pets!" # content is required for local_file
  for_each = toset(var.filename)
}

output "pets" {
  value = local_file.pet
}
```
```hcl
# variables.tf
variable "filename" {
  type = list(string)
  default = [
    "/root/pets.txt",
    "/root/dogs.txt",
    "/root/cat.txt"
  ]
}
```

### Lab: Count and for-each

- Simple examples of using the **`for_each`** & **`count`** meta arguments.

### Version Constraints

- Specify the provider plugin version to be used; reference the docs for the versions of a plugin available.
- A provider version constraint is specified in the **`main.tf`** configuration file.

### Lab: Version Constraints

- Simple exercise of using version constraints.

## Terraform with AWS

### Getting Started with AWS

- Overview of AWS.

### Demo Setup an AWS Account

- I already have an AWS account.

### Introduction to IAM

- A brief overview of AWS IAM.
- AWS resources need IAM policies applied to them to interact with each other.
    - e.g. attach the **`AmazonS3FullAccess`** policy for EC2 instances to access S3.

### Demo IAM

- A brief demo of using the AWS IAM service.
- Create a test user & group for training purposes.
- Create a role for applying to resources.

### Programmatic Access

- Usage & configuration of the **`aws`** CLI.
- Example of creating an IAM user:
    - `aws iam create-user --user-name $NAME`

### Lab: AWS CLI and IAM

- Uses a mock local AWS stack. Use the following for the endpoint argument: `--endpoint http://aws:4566`
- Listing users in the mock environment:
    - `aws --endpoint http://aws:4566 iam list-users`
- Creating a user:
```bash
aws iam create-user --user-name mary --endpoint http://aws:4566
```
- Attaching a policy to a user:
```bash
aws iam attach-user-policy --policy-arn arn:aws:iam::aws:policy/AdministratorAccess --user-name mary --endpoint http://aws:4566
```
- Create a user group:
```bash
aws iam create-group --group-name project-sapphire-developers --endpoint http://aws:4566
```
- Add users to the group:
```bash
aws iam add-user-to-group --user-name jack --group-name project-sapphire-developers --endpoint http://aws:4566
aws iam add-user-to-group --user-name jill --group-name project-sapphire-developers --endpoint http://aws:4566
```
- List policies attached to a group:
```bash
aws --endpoint http://aws:4566 iam list-attached-group-policies --group-name project-sapphire-developers
```
- List policies attached to a user:
```bash
aws --endpoint http://aws:4566 iam list-attached-user-policies --user-name jack
```
- Attach a policy to a group:
```bash
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name project-sapphire-developers
```

### AWS IAM with Terraform

- Reference the Terraform AWS provider documentation.
- Instead of storing AWS credentials in the Terraform config files, use one of the following:
    - the AWS credentials file in **`~/.aws/credentials`**
    - the shell variables **`export AWS_ACCESS_KEY_ID`** & **`export AWS_SECRET_ACCESS_KEY`**

### IAM Policies with Terraform

- You can also use a heredoc within the config files instead of `jsonencode` (see the sketch below).
- The **`file("$FILE")`** function can be used as well in the resource.
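A minimal sketch of the heredoc style mentioned above; the user resource and policy contents are assumptions:

```hcl
resource "aws_iam_user_policy" "s3_read" {
  name = "s3-read"
  user = aws_iam_user.example_user.name # assumes this user is defined elsewhere

  # Raw JSON via heredoc instead of jsonencode()
  policy = <<-EOF
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["s3:GetObject", "s3:ListBucket"],
          "Resource": "*"
        }
      ]
    }
  EOF
}
```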
##### **Using IAM with Terraform**

###### **Scenario**

You want to **create an IAM user**, **attach a policy**, and **assign it to a group** using Terraform.

###### **Terraform IAM Configuration (`main.tf`)**
```hcl
provider "aws" {
  region = "us-east-1"
}

# Create an IAM User
resource "aws_iam_user" "example_user" {
  name = "terraform-user"
}

# Create an IAM Policy
resource "aws_iam_policy" "example_policy" {
  name        = "S3ReadOnlyAccess"
  description = "Allows read-only access to S3"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action   = ["s3:ListBucket", "s3:GetObject"]
      Effect   = "Allow"
      Resource = "*"
    }]
  })
}

# Attach Policy to User
resource "aws_iam_user_policy_attachment" "example_attachment" {
  user       = aws_iam_user.example_user.name
  policy_arn = aws_iam_policy.example_policy.arn
}

# Create an IAM Group
resource "aws_iam_group" "example_group" {
  name = "developers"
}

# Attach the User to the Group
resource "aws_iam_group_membership" "example_membership" {
  name  = "developer-group-membership"
  group = aws_iam_group.example_group.name
  users = [aws_iam_user.example_user.name]
}
```

###### **Explanation**

- **Creates an IAM User (`terraform-user`).**
- **Defines an IAM Policy** allowing read-only S3 access.
- **Attaches the policy** to the user.
- **Creates an IAM Group (`developers`).**
- **Adds the user to the group.**

---

#### **Using IAM Roles & Instance Profiles with Terraform**

##### **Scenario**

You want to:
1. **Create an IAM Role** for EC2 instances.
2. **Attach a policy that grants S3 read access.**
3. **Create an Instance Profile to link the role to an EC2 instance.**

###### **Terraform IAM Role & Instance Profile Configuration (`main.tf`)**
```hcl
provider "aws" {
  region = "us-east-1"
}

# IAM Role for EC2
resource "aws_iam_role" "ec2_role" {
  name = "EC2_S3_ReadOnly_Role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

# IAM Policy for S3 Read Access
resource "aws_iam_policy" "s3_readonly_policy" {
  name        = "S3ReadOnlyPolicy"
  description = "Allows EC2 instances to read from S3"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:ListBucket", "s3:GetObject"]
      Resource = "*"
    }]
  })
}

# Attach Policy to Role
resource "aws_iam_role_policy_attachment" "attach_s3_policy" {
  role       = aws_iam_role.ec2_role.name
  policy_arn = aws_iam_policy.s3_readonly_policy.arn
}

# IAM Instance Profile (Required for EC2 to Use IAM Role)
resource "aws_iam_instance_profile" "ec2_instance_profile" {
  name = "EC2InstanceProfile"
  role = aws_iam_role.ec2_role.name
}

# Launch an EC2 Instance with IAM Role
resource "aws_instance" "example" {
  ami                  = "ami-123456" # Replace with a valid AMI ID
  instance_type        = "t2.micro"
  iam_instance_profile = aws_iam_instance_profile.ec2_instance_profile.name

  tags = {
    Name = "EC2WithIAMRole"
  }
}
```

###### **Explanation**

1. **Creates an IAM Role (`EC2_S3_ReadOnly_Role`)**
    - Allows EC2 to assume the role using the **`sts:AssumeRole`** permission.
2.
**Defines an IAM Policy (`S3ReadOnlyPolicy`)** - Grants **S3 read access** to resources using this role. 3. **Attaches the Policy to the Role** - Ensures the EC2 role has the necessary permissions. 4. **Creates an IAM Instance Profile** - AWS **requires** an instance profile to assign IAM roles to EC2. 5. **Launches an EC2 Instance with the IAM Role** - The EC2 instance will inherit the **S3 read-only permissions**. --- #### **Restricting IAM Policy to a Specific S3 Bucket** ##### **Scenario** You want to: ✅ **Limit EC2 access to only a specific S3 bucket** instead of allowing access to all S3 resources (`*`). ✅ **Ensure the EC2 instance can only read (list & get) objects but not delete or modify them.** ###### **Updated Terraform Configuration (`main.tf`)** ```hcl provider "aws" { region = "us-east-1" } # Define the S3 Bucket Name (Modify this to your bucket name) variable "s3_bucket_name" { default = "my-secure-bucket" } # IAM Role for EC2 resource "aws_iam_role" "ec2_role" { name = "EC2_S3_Restricted_Role" assume_role_policy = jsonencode({ Version = "2012-10-17" Statement = [{ Effect = "Allow" Principal = { Service = "ec2.amazonaws.com" } Action = "sts:AssumeRole" }] }) } # IAM Policy - Restrict Access to a Specific S3 Bucket resource "aws_iam_policy" "s3_restricted_policy" { name = "S3RestrictedReadOnlyPolicy" description = "Allows EC2 to read only from a specific S3 bucket" policy = jsonencode({ Version = "2012-10-17" Statement = [ { Effect = "Allow" Action = ["s3:ListBucket"] Resource = "arn:aws:s3:::${var.s3_bucket_name}" }, { Effect = "Allow" Action = ["s3:GetObject"] Resource = "arn:aws:s3:::${var.s3_bucket_name}/*" } ] }) } # Attach Policy to Role resource "aws_iam_role_policy_attachment" "attach_s3_policy" { role = aws_iam_role.ec2_role.name policy_arn = aws_iam_policy.s3_restricted_policy.arn } # IAM Instance Profile resource "aws_iam_instance_profile" "ec2_instance_profile" { name = "EC2InstanceProfileRestricted" role = aws_iam_role.ec2_role.name } # Launch an EC2 Instance with IAM Role resource "aws_instance" "example" { ami = "ami-123456" # Replace with a valid AMI ID instance_type = "t2.micro" iam_instance_profile = aws_iam_instance_profile.ec2_instance_profile.name tags = { Name = "EC2WithRestrictedIAMRole" } } ``` ###### **Explanation** 1. **Restricts the EC2 instance to a specific S3 bucket** (`my-secure-bucket`). 2. **Grants only these actions:** - `s3:ListBucket` → Allows the instance to list objects **only in that bucket**. - `s3:GetObject` → Allows the instance to download objects but **not modify or delete them**. 3. **Uses Terraform variable (`var.s3_bucket_name`)** to make the bucket name dynamic. --- ### Lab: IAM with Terraform - Practice working with AWS IAM resources & terraform. ### Introduction to AWS S3 - Already familiar with S3 due to [[AWS Cloud Practitioner]] & [[AWS SAA - Services - Storage#S3 - Overview| AWS SAA - S3]] ### S3 with Terraform - If you don't specify a bucket name, terraform will create a random one. #### **Terraform & S3 - Example** ##### **Scenario** You want to use Terraform to **create an S3 bucket** with: ✅ **Versioning enabled** (for data recovery). ✅ **Server-side encryption** (for security). ✅ **Public access blocked** (to prevent unauthorized access). 
###### **Terraform S3 Configuration (`main.tf`)** ```hcl provider "aws" { region = "us-east-1" } # Create an S3 Bucket resource "aws_s3_bucket" "example_bucket" { bucket = "my-terraform-bucket-12345" # Change to a globally unique name } # Enable Versioning resource "aws_s3_bucket_versioning" "versioning" { bucket = aws_s3_bucket.example_bucket.id versioning_configuration { status = "Enabled" } } # Enable Server-Side Encryption resource "aws_s3_bucket_server_side_encryption_configuration" "encryption" { bucket = aws_s3_bucket.example_bucket.id rule { apply_server_side_encryption_by_default { sse_algorithm = "AES256" } } } # Block Public Access resource "aws_s3_bucket_public_access_block" "public_access" { bucket = aws_s3_bucket.example_bucket.id block_public_acls = true block_public_policy = true ignore_public_acls = true restrict_public_buckets = true } # Output the S3 Bucket Name output "s3_bucket_name" { description = "The name of the created S3 bucket" value = aws_s3_bucket.example_bucket.id } ``` ###### **Explanation** 1. **Creates an S3 bucket** with a globally unique name. 2. **Enables versioning** for object recovery. 3. **Applies server-side encryption** to secure stored objects. 4. **Blocks public access** to prevent unauthorized access. 5. **Outputs the bucket name** after creation. --- #### **Terraform & S3 with Lifecycle Policies & Logging** ##### **Scenario** This Terraform configuration will: ✅ **Create an S3 bucket** ✅ **Enable versioning** (for data recovery) ✅ **Enforce server-side encryption** (AES-256 for security) ✅ **Block public access** (to prevent unauthorized access) ✅ **Enable logging** (to track access & modifications) ✅ **Add a lifecycle policy** (to move older files to Glacier and delete old versions) ###### **Terraform S3 Configuration (`main.tf`)** ```hcl provider "aws" { region = "us-east-1" } # Define a unique bucket name variable "s3_bucket_name" { default = "my-terraform-secure-bucket-12345" # Change to a globally unique name } # Define a logging bucket resource "aws_s3_bucket" "logging_bucket" { bucket = "${var.s3_bucket_name}-logs" } # Enable Logging on the Logging Bucket resource "aws_s3_bucket_acl" "logging_bucket_acl" { bucket = aws_s3_bucket.logging_bucket.id acl = "log-delivery-write" } # Create the Main S3 Bucket resource "aws_s3_bucket" "example_bucket" { bucket = var.s3_bucket_name } # Enable Server-Side Encryption resource "aws_s3_bucket_server_side_encryption_configuration" "encryption" { bucket = aws_s3_bucket.example_bucket.id rule { apply_server_side_encryption_by_default { sse_algorithm = "AES256" } } } # Block Public Access resource "aws_s3_bucket_public_access_block" "public_access" { bucket = aws_s3_bucket.example_bucket.id block_public_acls = true block_public_policy = true ignore_public_acls = true restrict_public_buckets = true } # Enable Logging resource "aws_s3_bucket_logging" "logging" { bucket = aws_s3_bucket.example_bucket.id target_bucket = aws_s3_bucket.logging_bucket.id target_prefix = "log/" } # Enable Versioning resource "aws_s3_bucket_versioning" "versioning" { bucket = aws_s3_bucket.example_bucket.id versioning_configuration { status = "Enabled" } } # S3 Lifecycle Policy - Move Old Files to Glacier & Delete Old Versions resource "aws_s3_bucket_lifecycle_configuration" "lifecycle" { bucket = aws_s3_bucket.example_bucket.id rule { id = "move-to-glacier" status = "Enabled" filter {} transition { days = 30 storage_class = "GLACIER" } } rule { id = "delete-old-versions" status = "Enabled" noncurrent_version_expiration { 
      noncurrent_days = 90
    }
  }
}

# Output the S3 Bucket Name
output "s3_bucket_name" {
  description = "The name of the created S3 bucket"
  value       = aws_s3_bucket.example_bucket.id
}

# Output the Logging Bucket Name
output "logging_bucket_name" {
  description = "The name of the S3 logging bucket"
  value       = aws_s3_bucket.logging_bucket.id
}
```

###### **Explanation**

1. **Creates a logging bucket** (`<bucket-name>-logs`) to store access logs.
2. **Enables logging on the main bucket** (logs go to the logging bucket under the `log/` prefix).
3. **Applies a lifecycle policy** to:
    - **Move files older than 30 days** to **Glacier storage** (cheaper, long-term storage).
    - **Delete non-current object versions** after **90 days** (to save storage costs).

#### **Adding an IAM Policy to Control S3 Bucket Access**

##### **Scenario**

You want to:
✅ Restrict access to the **S3 bucket** to specific IAM users or roles.
✅ Allow only **read & write access** while **blocking delete permissions**.
✅ Ensure **logging bucket access is restricted** (so only logging services can write to it).

###### **Updated Terraform Configuration (`iam.tf`)**
```hcl
# IAM Policy to Restrict S3 Access
resource "aws_iam_policy" "s3_access_policy" {
  name        = "S3BucketAccessPolicy"
  description = "Grants limited access to the S3 bucket without delete permissions"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      # Allow listing bucket contents
      {
        Effect   = "Allow"
        Action   = ["s3:ListBucket"]
        Resource = "arn:aws:s3:::${var.s3_bucket_name}"
      },
      # Allow reading and writing objects (but NOT deleting)
      {
        Effect   = "Allow"
        Action   = ["s3:GetObject", "s3:PutObject"]
        Resource = "arn:aws:s3:::${var.s3_bucket_name}/*"
      },
      # Explicitly deny delete actions
      {
        Effect   = "Deny"
        Action   = ["s3:DeleteObject", "s3:DeleteBucket"]
        Resource = "arn:aws:s3:::${var.s3_bucket_name}/*"
      }
    ]
  })
}

# Attach IAM Policy to a User (Replace "example_user" with actual user name)
resource "aws_iam_user" "example_user" {
  name = "restricted-s3-user"
}

resource "aws_iam_user_policy_attachment" "user_s3_attach" {
  user       = aws_iam_user.example_user.name
  policy_arn = aws_iam_policy.s3_access_policy.arn
}

# Restrict Logging Bucket Access (Only AWS Logging Service can write logs)
resource "aws_s3_bucket_policy" "logging_policy" {
  bucket = aws_s3_bucket.logging_bucket.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "logging.s3.amazonaws.com" }
      Action    = ["s3:PutObject"]
      Resource  = "arn:aws:s3:::${aws_s3_bucket.logging_bucket.id}/*"
      Condition = {
        StringEquals = {
          "s3:x-amz-acl" = "bucket-owner-full-control"
        }
      }
    }]
  })
}
```

###### **Explanation**

1. **IAM Policy for S3 Bucket (`s3_access_policy`)**
    - Grants `ListBucket` permission to list objects.
    - Allows `GetObject` and `PutObject` (read & write) permissions.
    - **Denies delete permissions** to prevent accidental data loss.
2. **Attaches the Policy to an IAM User (`restricted-s3-user`)**
    - This user can **upload & read files** but **not delete them**.
3. **Restricts Logging Bucket Access (`logging_policy`)**
    - Ensures **only AWS logging services** can write logs to the bucket.

✅ The IAM user **can read and upload** but **cannot delete** S3 objects.
✅ The logging bucket **only accepts logs from AWS** (prevents unauthorized writes).

---

### Lab: S3

- The lab demonstrated basic bucket & file upload concepts for S3 & terraform.
- Below is an example I wanted for myself.
#### **Uploading a File to S3 Using Terraform**

##### **Scenario**

You want to **upload a file** (e.g., `example.txt`) to an **S3 bucket** using Terraform.

##### **Terraform Configuration (`main.tf`)**
```hcl
provider "aws" {
  region = "us-east-1"
}

# Define an S3 Bucket
resource "aws_s3_bucket" "example_bucket" {
  bucket = "my-terraform-upload-bucket-12345" # Change to a globally unique name
}

# Upload a File to S3
resource "aws_s3_object" "uploaded_file" {
  bucket = aws_s3_bucket.example_bucket.id
  key    = "uploads/example.txt" # File path in S3
  source = "example.txt"         # Path to the local file to upload
  acl    = "private"             # Set file permissions
}

# Output S3 File URL
output "file_url" {
  description = "S3 URL of the uploaded file"
  value       = "https://${aws_s3_bucket.example_bucket.id}.s3.amazonaws.com/uploads/example.txt"
}
```

##### **Explanation**

1. **Creates an S3 bucket** (`my-terraform-upload-bucket-12345`).
2. **Uploads a local file (`example.txt`)** to the S3 bucket inside the `uploads/` folder.
3. **Sets file permissions** (`private`, meaning only the bucket owner can access it).
4. **Outputs the S3 file URL** after the upload.

##### **Deploying the Configuration**

1. **Create `example.txt`** in the same directory as your Terraform script:
```sh
echo "Hello, Terraform S3 Upload!" > example.txt
```
2. **Initialize Terraform**
```sh
terraform init
```
3. **Apply the Configuration**
```sh
terraform apply
```
4. **Retrieve the File URL**
```sh
terraform output file_url
```
Example Output:
```
file_url = "https://my-terraform-upload-bucket-12345.s3.amazonaws.com/uploads/example.txt"
```

#### **Optional Enhancements**

- **Make the file public** by setting `acl = "public-read"`.
- **Use `content_type = "text/plain"`** to specify the file type.
- **Enable versioning** on the bucket for file history tracking.

### Introduction to DynamoDB

- Already familiar with this from [[AWS Cloud Practitioner#Core AWS Services - Database|AWS CP - Core Services - Databases]] & [[AWS SAA - Services - Databases#DynamoDB|AWS SAA - DynamoDB]]

### Demo DynamoDB

- Already familiar with this from [[AWS SAA - Services - Databases#DynamoDB - Demo|AWS SAA - DynamoDB Demo]]

### DynamoDB with Terraform

##### **Scenario**

You want to **create a DynamoDB table** using Terraform with:
✅ A **primary key** (partition key).
✅ **Provisioned capacity mode** (Read & Write capacity units).

###### **Terraform Configuration (`main.tf`)**
```hcl
provider "aws" {
  region = "us-east-1"
}

# Create a DynamoDB Table
resource "aws_dynamodb_table" "example_table" {
  name           = "UsersTable"
  billing_mode   = "PROVISIONED" # Can be "PAY_PER_REQUEST" for on-demand mode
  read_capacity  = 5
  write_capacity = 5
  hash_key       = "UserID" # Partition key

  attribute {
    name = "UserID"
    type = "S" # S = String, N = Number, B = Binary
  }
}

# Output the Table Name
output "dynamodb_table_name" {
  description = "The name of the DynamoDB table"
  value       = aws_dynamodb_table.example_table.name
}
```

###### **Explanation**

1. **Defines a DynamoDB table (`UsersTable`)**
2. **Uses `UserID` as the partition key**
3. **Provisioned mode with 5 read & write capacity units**
4. **Outputs the table name** after creation

##### **Optional Enhancements**

- **Use `billing_mode = "PAY_PER_REQUEST"`** for automatic scaling.
- **Add a Sort Key** for composite primary keys.
- **Enable Point-in-Time Recovery** for backups.

### Lab: DynamoDB

- Simple lab for interacting with DynamoDB using terraform.
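A sketch of seeding an item into the table defined above via `aws_dynamodb_table_item`; the item's attributes are made up:

```hcl
resource "aws_dynamodb_table_item" "example_item" {
  table_name = aws_dynamodb_table.example_table.name
  hash_key   = aws_dynamodb_table.example_table.hash_key

  # item is a JSON document in DynamoDB's attribute-value format
  item = jsonencode({
    UserID = { S = "user-001" } # hypothetical partition key value
    Name   = { S = "Mary" }
  })
}
```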
## Remote State

### What is Remote State and State Locking?

- Reference [[Terraform Basics Training Course#Terraform State|Terraform State]]

---

##### **1. Terraform Remote State**

**Remote state** stores the Terraform state file (`terraform.tfstate`) in a **centralized location** instead of the local filesystem.

✅ Helps teams **collaborate** by sharing state.
✅ Prevents **state file corruption** when multiple users apply changes.
✅ Enables **secure storage** in cloud backends like S3, Azure Blob, or Terraform Cloud.

**Example: Storing State in an S3 Bucket**
```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-lock"
    encrypt        = true
  }
}
```
- This stores the state in **S3** and enables **state locking** using DynamoDB.

##### **2. Terraform State Locking**

**State locking** prevents multiple users or processes from modifying the Terraform state at the same time, avoiding conflicts.

✅ Prevents **concurrent state modifications**.
✅ Uses **DynamoDB, Terraform Cloud, or Azure Blob** for locking.

**Example: Enabling State Locking with DynamoDB**
```hcl
resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-lock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```
- Terraform will check this table before modifying the state, preventing conflicts.

### Remote Backends with S3

#### **Using S3 for Terraform Remote State & DynamoDB for State Locking**

- Save these contents into a file called **`terraform.tf`**.

##### **1. Create an S3 Bucket for Terraform State**
```hcl
resource "aws_s3_bucket" "terraform_state" {
  bucket = "my-terraform-state-bucket-12345" # Change to a unique bucket name

  lifecycle {
    prevent_destroy = true # Prevent accidental deletion
  }

  versioning {
    enabled = true # Enable versioning for rollback
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }

  tags = {
    Name        = "Terraform State Bucket"
    Environment = "Production"
  }
}
```

##### **2. Create a DynamoDB Table for State Locking**
```hcl
resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-lock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }

  tags = {
    Name        = "Terraform State Lock Table"
    Environment = "Production"
  }
}
```

##### **3. Configure Terraform Backend to Use S3 & DynamoDB**

Create a `backend.tf` file or place this inside your `main.tf`:
```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket-12345"
    key            = "global/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-lock"
  }
}
```

##### **4. Initialize Terraform with Remote State**
```sh
terraform init
```
- This configures Terraform to store its state in **S3** and use **DynamoDB for locking**.

##### **5. Applying Terraform Configuration**
```sh
terraform apply
```
- After applying, the Terraform state file will be stored in **S3**, and DynamoDB will prevent conflicts with state locking.

#### **Key Benefits**

✅ **Remote State Storage** – State file is securely stored in S3 with encryption.
✅ **Versioning Enabled** – Allows rollback of previous state versions.
✅ **State Locking** – Prevents multiple users from modifying state at the same time.

### Lab: Remote State

- Practicing using remote states with `minio`.
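A hedged sketch of pointing the S3 backend at a MinIO server for this kind of lab; the endpoint, bucket, and keys are assumptions (credentials supplied via `AWS_ACCESS_KEY_ID`/`AWS_SECRET_ACCESS_KEY`), and the argument names below are the pre-Terraform-1.6 style:

```hcl
terraform {
  backend "s3" {
    bucket = "terraform-state"         # assumed MinIO bucket
    key    = "lab/terraform.tfstate"
    region = "us-east-1"               # ignored by MinIO but required by the backend

    endpoint                    = "http://minio:9000" # assumed MinIO address
    force_path_style            = true
    skip_credentials_validation = true
    skip_metadata_api_check     = true
  }
}
```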
### Terraform State Commands - Reference [[Terraform Basics Training Course#Terraform State#**Common Terraform State Commands**|Common Terraform State Commands]] ### Lab: Terraform State Commands - Practicing using the **`terraform state`** commands. ## Terraform Provisioners ### Introduction to AWS EC2 (optional) - Already familiar with this due to [[AWS Cloud Practitioner#Core AWS Services - Compute EC2|AWS CP - Core Services - EC2]] & [[AWS SAA - Services - Compute#EC2|AWS SAA - EC2]] ### Demo: Deploying an EC2 Instance (optional) - Already familiar with this due to [[AWS SAA - Services - Compute#EC2|AWS SAA - EC2]] & [[AWS SAA - Services - Compute#EC2 - Demo]] ### AWS EC2 with Terraform #### **Terraform EC2 with SSH Key Pair, Security Group, and Nginx Deployment** ##### **1. Terraform Configuration (`main.tf`)** ```hcl provider "aws" { region = "us-east-1" } # Create an SSH Key Pair resource "aws_key_pair" "ec2_key" { key_name = "terraform-key" public_key = file("~/.ssh/id_rsa.pub") # Ensure you have this key generated } # Security Group for EC2 (Allows SSH & HTTP) resource "aws_security_group" "ec2_sg" { name = "ec2-security-group" description = "Allow SSH and HTTP access" # Allow SSH (port 22) from anywhere ingress { from_port = 22 to_port = 22 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } # Allow HTTP (port 80) from anywhere ingress { from_port = 80 to_port = 80 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } # Allow all outgoing traffic egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } } # EC2 Instance with Nginx Installation resource "aws_instance" "example" { ami = "ami-12345678" # Replace with a valid AMI ID instance_type = "t2.micro" key_name = aws_key_pair.ec2_key.key_name security_groups = [aws_security_group.ec2_sg.name] user_data = <<-EOF #!/bin/bash sudo yum update -y sudo amazon-linux-extras enable nginx1 sudo yum install -y nginx sudo systemctl start nginx sudo systemctl enable nginx EOF tags = { Name = "TerraformEC2" } } # Output the EC2 Public IP output "ec2_public_ip" { description = "The public IP of the EC2 instance" value = aws_instance.example.public_ip } ``` ##### **2. Initialize Terraform** ```sh terraform init ``` 🔹 Prepares Terraform by downloading required providers. ##### **3. Plan the Deployment** ```sh terraform plan ``` 🔹 Shows the changes Terraform will make. ##### **4. Apply the Configuration** ```sh terraform apply ``` 🔹 Deploys the EC2 instance with the SSH key, security group, and Nginx installation. ##### **5. Retrieve EC2 Public IP** ```sh terraform output ec2_public_ip ``` 🔹 Displays the instance’s public IP. ##### **6. Connect to the EC2 Instance via SSH** ```sh ssh -i ~/.ssh/id_rsa ec2-user@<EC2_PUBLIC_IP> ``` 🔹 Replace `<EC2_PUBLIC_IP>` with the actual IP from the Terraform output. ##### **7. Verify Nginx is Running** ```sh curl http://<EC2_PUBLIC_IP> ``` 🔹 Should return the Nginx welcome page. ##### **8. Access Nginx from Your Browser** 🔹 Open `http://<EC2_PUBLIC_IP>` in your browser to see the Nginx default page. ### Terraform Provisioners - Reference the **`remote-exec`** documentation for details on defining SSH connections. --- ### **Terraform Provisioners - Overview** #### **What Are Terraform Provisioners?** Terraform **provisioners** execute scripts or commands on a resource **after creation**. They are typically used for **bootstrapping instances, configuring software, or executing scripts**. ### **Types of Provisioners** #### **1. 
`local-exec` Provisioner** ✅ Runs a command **on the machine running Terraform** (local system). ✅ Useful for sending notifications, triggering scripts, or running API calls. **Example:** ```hcl resource "aws_instance" "example" { ami = "ami-12345678" instance_type = "t2.micro" provisioner "local-exec" { command = "echo Instance Created: ${self.public_ip} >> instances.log" } } ``` #### **2. `remote-exec` Provisioner** ✅ Runs commands **on the remote resource** after provisioning. ✅ Requires SSH or WinRM connection to the instance. **Example:** ```hcl resource "aws_instance" "example" { ami = "ami-12345678" instance_type = "t2.micro" connection { type = "ssh" user = "ec2-user" private_key = file("~/.ssh/id_rsa") host = self.public_ip } provisioner "remote-exec" { inline = [ "sudo yum update -y", "sudo yum install -y nginx", "sudo systemctl start nginx" ] } } ``` #### **3. `file` Provisioner** ✅ Uploads a file from the **local system to the remote resource**. **Example:** ```hcl resource "aws_instance" "example" { ami = "ami-12345678" instance_type = "t2.micro" connection { type = "ssh" user = "ec2-user" private_key = file("~/.ssh/id_rsa") host = self.public_ip } provisioner "file" { source = "local_script.sh" destination = "/home/ec2-user/remote_script.sh" } } ``` ### **Best Practices** ✅ **Use provisioners only when necessary** – Avoid them if cloud-init or configuration management tools (e.g., Ansible, Chef) can do the job. ✅ **Ensure connectivity** – `remote-exec` requires a working SSH or WinRM connection. ✅ **Use `null_resource`** – If a provisioner isn't tied to a resource, consider using a `null_resource`. --- #### **Using a `null_resource` with a Provisioner** ##### **What is a `null_resource`?** A `null_resource` in Terraform **does not create an actual resource**, but it allows running provisioners independently. It is useful for executing scripts, running commands, or triggering external automation. --- ##### **Example: Running a Local Script with a `null_resource`** ```hcl provider "aws" { region = "us-east-1" } # Define a null_resource with a local-exec provisioner resource "null_resource" "run_script" { provisioner "local-exec" { command = "echo 'Terraform deployment completed' >> deployment.log" } } ``` ##### **What This Does** ✅ Runs the **local script** on the Terraform machine after applying the configuration. ✅ Creates a **log file** (`deployment.log`) recording the deployment completion. --- ##### **Example: Using `null_resource` to Upload a File to an EC2 Instance** ```hcl resource "aws_instance" "example" { ami = "ami-12345678" instance_type = "t2.micro" key_name = "my-key" tags = { Name = "ProvisionedInstance" } } # Use a null_resource to copy a file after EC2 is created resource "null_resource" "copy_file" { depends_on = [aws_instance.example] connection { type = "ssh" user = "ec2-user" private_key = file("~/.ssh/id_rsa") host = aws_instance.example.public_ip } provisioner "file" { source = "local_script.sh" destination = "/home/ec2-user/remote_script.sh" } } ``` ##### **What This Does** ✅ **Ensures EC2 is created first** using `depends_on`. ✅ **Uploads a script (`local_script.sh`)** to the instance. ##### **Why Use `null_resource`?** ✅ Allows **running provisioners independently** of a specific resource. ✅ Useful for **triggering external automation** (e.g., scripts, notifications). ✅ Can be combined with **`triggers`** for conditional execution. 
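Building on that last point, a sketch that uses `triggers` to re-run a provisioner whenever a script changes; the script name is assumed:

```hcl
resource "null_resource" "rerun_on_change" {
  # Re-create (and re-provision) whenever the script's hash changes
  triggers = {
    script_hash = filemd5("local_script.sh") # hypothetical local file
  }

  provisioner "local-exec" {
    command = "echo 'script changed, re-provisioning' >> deployment.log"
  }
}
```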
### Provisioner Behaviour

#### **How Terraform Provisioners Work**

Terraform **provisioners** execute scripts or commands on a resource **after it is created or destroyed**. They are typically used for **bootstrapping**, **software installation**, and **configuration tasks**.

---

#### **Key Behavior of Terraform Provisioners**

##### **1. Provisioners Run Only on Creation by Default**

- Provisioners execute **only once when a resource is created**.
- If Terraform updates a resource, the provisioner **does not run again** unless forced.

##### **2. Destroy-time Provisioners**

- Provisioners can be set to run when a resource is **destroyed**.
- This is useful for **cleanup tasks** before a resource is removed.
- Example:

```hcl
provisioner "local-exec" {
  when    = destroy
  command = "echo 'Instance is being deleted' >> destroy.log"
}
```

##### **3. Failure Handling (`on_failure` Behavior)**

- By default, **if a provisioner fails, Terraform fails**.
- You can override this behavior using:

```hcl
provisioner "remote-exec" {
  on_failure = continue # Terraform will continue even if the provisioner fails
}
```

- `continue` → Ignores failures.
- `fail` (default) → Stops Terraform execution.

##### **4. Dependent Resources (`depends_on`)**

- Provisioners do **not guarantee resource dependency** unless explicitly defined.
- Use `depends_on` to **ensure a provisioner runs after another resource**:

```hcl
resource "null_resource" "copy_script" {
  depends_on = [aws_instance.example]

  provisioner "file" {
    source      = "script.sh"
    destination = "/home/ec2-user/script.sh"
  }
}
```

##### **Best Practices for Provisioners**

✅ **Use Cloud-native methods instead** (e.g., `user_data` for AWS EC2 initialization).
✅ **Ensure proper dependency management** (`depends_on` if necessary).
✅ **Avoid unnecessary execution** by limiting provisioner runs.
✅ **Use `on_failure = continue` carefully** to prevent silent failures.

### Lab: AWS EC2 and Provisioners

- Practicing with EC2 & provisioners.

### Considerations with Provisioners

- Reduce usage of provisioners.
- Utilize imaging tools like Packer instead to deploy customized images with software already installed.

---

#### **Terraform Considerations with Provisioners**

##### **1. Provisioners Should Be a Last Resort**

- Terraform is designed to **declare infrastructure, not manage configurations**.
- Prefer **cloud-native solutions** (e.g., AWS `user_data`, Ansible, or cloud-init) instead of provisioners.

##### **2. Provisioners Do Not Always Rerun**

- They only execute **on resource creation** unless the resource is destroyed and recreated.
- If an instance is modified (e.g., changing instance type), the provisioner **won't run again**.
- To force execution, destroy and recreate the resource (on Terraform 0.15+, prefer `terraform apply -replace`, covered below):

```sh
terraform taint aws_instance.example
terraform apply
```

##### **3. Managing Dependencies Properly**

- Terraform does **not guarantee execution order** unless explicitly set.
- Use `depends_on` to ensure provisioners run after dependent resources:

```hcl
resource "null_resource" "provision_script" {
  depends_on = [aws_instance.example]

  provisioner "file" {
    source      = "setup.sh"
    destination = "/home/ec2-user/setup.sh"
  }
}
```

##### **4. Handling Provisioner Failures**

- By default, Terraform **fails** if a provisioner fails.
- Use `on_failure = continue` to allow Terraform to proceed despite failures:

```hcl
provisioner "remote-exec" {
  on_failure = continue
  inline     = ["echo 'Provisioning failed, but continuing...'"]
}
```
##### **5. Using Remote-Exec Securely**

- Requires **SSH or WinRM** access to the instance.
- Ensure proper **key management** (avoid hardcoding credentials).
- Example secure SSH connection:

```hcl
connection {
  type        = "ssh"
  user        = "ec2-user"
  private_key = file("~/.ssh/id_rsa")
  host        = aws_instance.example.public_ip
}
```

##### **6. Consider Using `null_resource` for Standalone Provisioners**

- If provisioners **do not belong to a specific resource**, use `null_resource`:

```hcl
resource "null_resource" "provision_script" {
  provisioner "local-exec" {
    command = "echo 'Executing standalone provisioner'"
  }
}
```

##### **Key Takeaways**

✅ **Use provisioners only when necessary** – prefer configuration management tools.
✅ **Ensure correct dependency handling** using `depends_on`.
✅ **Handle failures properly** to prevent Terraform from breaking unexpectedly.

## Terraform Import, Tainting Resources & Debugging

### Terraform Taint

##### **What is `terraform taint`?**

`terraform taint` was a command used in Terraform **(deprecated in v0.15)** to manually mark a resource for **recreation** during the next `terraform apply`.

##### **Why Use `terraform taint`?**

- **Force a specific resource to be recreated** without deleting the entire infrastructure.
- Useful when a resource is in an **inconsistent state** but Terraform does not detect a change.

##### **Example Usage (Before Deprecation)**

```sh
terraform taint aws_instance.example
terraform apply
```

🔹 This would **destroy and recreate** the `aws_instance.example` during `terraform apply`.

##### **New Alternative: `terraform apply -replace`**

Since Terraform **0.15+, `terraform taint` is deprecated**. Use:

```sh
terraform apply -replace=aws_instance.example
```

🔹 This achieves the same behavior as `taint` by marking the resource for recreation.

##### **Key Differences**

| Feature | `terraform taint` (Deprecated) | `terraform apply -replace` (Recommended) |
| --- | --- | --- |
| Marks a resource for recreation | ✅ Yes | ✅ Yes |
| Requires `terraform apply` after marking | ✅ Yes | ✅ Yes |
| More granular control | ❌ No | ✅ Yes |
| Still supported | ❌ No (deprecated) | ✅ Yes (Terraform 0.15+) |

##### **Best Practices**

✅ **Use `terraform apply -replace`** instead of `taint`.
✅ **Only replace resources when necessary** to avoid unnecessary downtime.
✅ **Run `terraform plan` first** to preview the impact before applying.

---

#### **Using `terraform apply -replace` with Multiple Resources**

##### **What is `-replace`?**

Since Terraform **0.15+,** `terraform taint` has been **deprecated**. Instead, use `terraform apply -replace` to **force recreation** of resources.

##### **1. Example Terraform Configuration (`main.tf`)**

```hcl
provider "aws" {
  region = "us-east-1"
}

# Create an EC2 Instance
resource "aws_instance" "example1" {
  ami           = "ami-12345678" # Replace with a valid AMI ID
  instance_type = "t2.micro"

  tags = {
    Name = "ExampleInstance1"
  }
}

# Another EC2 Instance
resource "aws_instance" "example2" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"

  tags = {
    Name = "ExampleInstance2"
  }
}
```

##### **2. Marking Multiple Resources for Replacement**

If you want to **force Terraform to recreate both instances**, run:

```sh
terraform apply -replace=aws_instance.example1 -replace=aws_instance.example2
```

🔹 This **only replaces** `example1` and `example2` without affecting other resources.
##### **3. Checking Before Applying**

Run:

```sh
terraform plan -replace=aws_instance.example1 -replace=aws_instance.example2
```

🔹 This **previews changes** before applying them.

##### **4. Best Practices for Using `-replace`**

✅ **Use `terraform plan` first** to ensure the expected behavior.
✅ **Only replace necessary resources** to avoid downtime.
✅ **Avoid replacing stateful resources** like databases unless required.

### Debugging

##### **What is Terraform Debugging?**

Terraform provides **various debugging tools** to help diagnose issues related to **configuration errors, provider failures, and state issues**.

##### **1. Enabling Debug Logging (`TF_LOG`)**

Terraform supports different log levels:

- `TRACE` (most detailed)
- `DEBUG`
- `INFO`
- `WARN`
- `ERROR` (least detailed)

**Example: Enable Debug Logging**

```sh
export TF_LOG=DEBUG
terraform apply
```

🔹 Logs detailed execution steps to help identify issues.

##### **2. Redirecting Logs to a File (`TF_LOG_PATH`)**

Instead of displaying logs in the terminal, you can save them to a file:

```sh
export TF_LOG=DEBUG
export TF_LOG_PATH="terraform.log"
terraform apply
```

🔹 This helps when **reviewing logs later** or sharing them for troubleshooting.

##### **3. Debugging Terraform Plan & Apply**

- Use `terraform plan` before `terraform apply` to preview changes.
- If `terraform apply` fails, re-run with `TF_LOG=DEBUG`.
- Check **state issues** using:

```sh
terraform state list
terraform state show <resource>
```

##### **4. Crash Logs (`crash.log`)**

Terraform has no dedicated `debug` subcommand; instead, if Terraform itself crashes (panics), it writes a `crash.log` file in the working directory:

```sh
export TF_LOG=TRACE # Maximum detail; a panic also produces crash.log
terraform apply
```

🔹 `crash.log` captures a **full diagnostic trace** for detailed troubleshooting (e.g., when filing bug reports).

##### **5. Common Debugging Issues & Solutions**

| Issue | Solution |
| --- | --- |
| **Hanging Execution** | Run with `TF_LOG=DEBUG` to check where it gets stuck. |
| **State Corruption** | Use `terraform state list` and `terraform state show <resource>` to inspect. |
| **Provider Issues** | Run `terraform providers` to check provider versions. |
| **Variable Errors** | Use `terraform console` to test expressions before applying. |

##### **Best Practices for Debugging Terraform**

✅ Always **run `terraform plan` before `apply`** to catch errors early.
✅ Use **`TF_LOG=DEBUG`** for deeper analysis of failures.
✅ Keep logs using **`TF_LOG_PATH`** for sharing/debugging later.

### Lab: Taint and Debugging

- A brief exercise of enabling logging and tainting.

### Terraform Import

##### **What is `terraform import`?**

`terraform import` is used to **bring existing infrastructure under Terraform management** without recreating the resource.

##### **1. Why Use `terraform import`?**

✅ Manage existing cloud resources **without downtime**.
✅ Prevent Terraform from **destroying manually created resources**.
✅ Useful for migrating **manually created infrastructure** into Terraform.

##### **2. Basic Syntax**

```sh
terraform import <resource_type>.<resource_name> <existing_resource_id>
```

**Example: Import an Existing EC2 Instance**

```sh
terraform import aws_instance.example i-0abcd1234efgh5678
```

🔹 This imports the EC2 instance **`i-0abcd1234efgh5678`** into **Terraform state** under **`aws_instance.example`**.

##### **3. Steps to Use `terraform import`**

1. **Define the resource in Terraform configuration (`main.tf`)**

```hcl
resource "aws_instance" "example" {
  ami           = "ami-12345678" # Just a placeholder
  instance_type = "t2.micro"
}
```
2. **Run the import command**

```sh
terraform import aws_instance.example i-0abcd1234efgh5678
```

🔹 This **links the real instance** to Terraform state.

3. **Run `terraform plan`**

```sh
terraform plan
```

🔹 Terraform will show **differences** between the existing resource and your configuration.

4. **Update `main.tf` to match actual settings**

- Run `terraform state show aws_instance.example` to get actual attributes.
- Copy those values into `main.tf` to prevent unnecessary changes.

5. **Apply Terraform to sync state and configuration**

```sh
terraform apply
```

##### **4. Limitations of `terraform import`**

❌ **Only imports to state** – You must manually update the `.tf` files.
❌ **Does not support bulk imports** – You need to import resources one by one.
❌ **No automatic configuration generation** – Terraform won't create a `.tf` file for you (but see the note on `import` blocks below).

##### **Best Practices**

✅ Always **run `terraform state show <resource>`** after import to verify attributes.
✅ Manually update `.tf` files to **match real-world settings** before applying.
✅ Use Terraform Cloud or remote state for **better state management**.
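Worth knowing: Terraform 1.5 and later soften the first and third limitations with **config-driven import**, where an `import` block declares the mapping and Terraform can generate starter configuration for you. A minimal sketch, reusing the instance ID from the example above (`generated.tf` is just a hypothetical output file name):

```hcl
# Declarative alternative to the `terraform import` CLI command (Terraform 1.5+)
import {
  to = aws_instance.example
  id = "i-0abcd1234efgh5678"
}
```

```sh
# Optionally have Terraform write starter HCL for the imported resource
terraform plan -generate-config-out=generated.tf
```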
---

#### **Importing an Existing S3 Bucket into Terraform**

##### **1. Define the S3 Bucket in Terraform Configuration (`main.tf`)**

Before importing, you need a **placeholder resource** in your Terraform configuration:

```hcl
provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "example" {
  bucket = "my-existing-bucket"
}
```

🔹 **Ensure the bucket name matches** the actual bucket in AWS.

##### **2. Run the Terraform Import Command**

```sh
terraform import aws_s3_bucket.example my-existing-bucket
```

🔹 This imports the existing **S3 bucket** named `"my-existing-bucket"` into Terraform **state** under `aws_s3_bucket.example`.

##### **3. Verify the Imported State**

After importing, check the resource attributes with:

```sh
terraform state show aws_s3_bucket.example
```

🔹 This displays **all the attributes** of the bucket that Terraform recognizes.

##### **4. Update `main.tf` to Match Real Configuration**

Terraform **only imports the resource into state**, so you must manually update `main.tf` with actual attributes. Example **after running `terraform state show`**:

```hcl
resource "aws_s3_bucket" "example" {
  bucket = "my-existing-bucket"

  lifecycle {
    prevent_destroy = true
  }

  versioning {
    enabled = true
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}
```

🔹 Ensure your `.tf` file **matches** the real-world configuration to prevent unintended changes.

##### **5. Run `terraform plan` to Validate**

```sh
terraform plan
```

🔹 This checks if Terraform will **make changes** after the import.

##### **6. Apply Terraform to Confirm Everything is Managed**

```sh
terraform apply
```

🔹 Ensures Terraform is **fully managing** the imported S3 bucket.

##### **Key Takeaways**

✅ `terraform import` **only adds resources to the state** – You must update `.tf` files manually.
✅ Always **run `terraform state show`** to confirm attributes after importing.
✅ Use `terraform plan` before applying to **avoid unwanted modifications**.

### Lab: Terraform Import

- Example of creating an SSH key via the `aws` CLI.

```bash
aws ec2 create-key-pair --key-name jade --query 'KeyMaterial' --output text > /root/terraform-projects/project-jade/jade.pem
```

- Describing an instance using its AMI ID.

```bash
aws ec2 describe-instances --filters "Name=image-id,Values=ami-082b3eca746b12a89" | jq -r '.Reservations[].Instances[].InstanceId'
```

- Obtaining specific instance information.

```bash
aws ec2 describe-instances --filters "Name=tag:Name,Values=jade-mw" --query "Reservations[*].Instances[*].[ImageId, InstanceType, KeyName, Tags]"
```

## Terraform Modules

### What are modules?

Terraform **modules** are reusable, self-contained packages of Terraform configurations that help **organize and standardize infrastructure deployment**. They allow you to break down complex infrastructure into **smaller, manageable components**.

##### **Why Use Terraform Modules?**

✅ **Code Reusability** – Avoid duplication by defining infrastructure once and reusing it.
✅ **Maintainability** – Easier to manage and update infrastructure across multiple environments.
✅ **Scalability** – Helps scale deployments consistently across teams and projects.
✅ **Encapsulation** – Keeps configurations modular and reduces complexity.

##### **Basic Module Structure**

A Terraform module consists of:

- **`main.tf`** – Defines resources.
- **`variables.tf`** – Declares input variables.
- **`outputs.tf`** – Defines output values.

##### **Using a Module in Terraform**

```hcl
module "ec2_instance" {
  source        = "./modules/ec2"
  instance_type = "t2.micro"
}
```

🔹 The `source` argument points to the module directory (`./modules/ec2`).

##### **Where Can Modules Be Stored?**

- **Locally (`./modules/...`)** – Inside your project directory.
- **GitHub/Bitbucket (`source = "git::https://github.com/user/module.git"`)**
- **Terraform Registry (`source = "terraform-aws-modules/ec2-instance/aws"`)**

##### **Best Practices for Modules**

✅ Keep them **small and focused** (e.g., separate modules for EC2, VPC, RDS).
✅ Use **variables** to make them flexible.
✅ Always define **`outputs.tf`** to expose necessary values.
✅ Store reusable modules in **GitHub or Terraform Registry** for consistency.

---

### Creating and Using a Module

#### **Creating a Custom Terraform Module**

##### **1. Project Structure**

Organizing Terraform code into a module helps keep infrastructure modular and reusable.

```
terraform-project/
│── main.tf
│── variables.tf
│── outputs.tf
│── modules/
│   ├── ec2/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   ├── outputs.tf
```

Here, the **`modules/ec2/`** directory contains a custom EC2 module.

##### **2. Define the EC2 Module (`modules/ec2/main.tf`)**

```hcl
resource "aws_instance" "this" {
  ami           = var.ami
  instance_type = var.instance_type

  tags = {
    Name = var.instance_name
  }
}
```

##### **3. Define Input Variables (`modules/ec2/variables.tf`)**

```hcl
variable "ami" {
  description = "AMI ID for the EC2 instance"
  type        = string
}

variable "instance_type" {
  description = "Instance type"
  type        = string
  default     = "t2.micro"
}

variable "instance_name" {
  description = "EC2 instance name"
  type        = string
}
```

##### **4. Define Outputs (`modules/ec2/outputs.tf`)**

```hcl
output "public_ip" {
  description = "Public IP of the instance"
  value       = aws_instance.this.public_ip
}

output "instance_id" {
  description = "ID of the EC2 instance"
  value       = aws_instance.this.id
}
```

##### **5. Use the Module in the Root Configuration (`main.tf`)**

```hcl
provider "aws" {
  region = "us-east-1"
}

module "ec2_instance" {
  source        = "./modules/ec2"
  ami           = "ami-12345678" # Replace with a valid AMI ID
  instance_type = "t2.micro"
  instance_name = "Terraform-Module-EC2"
}

output "ec2_public_ip" {
  value = module.ec2_instance.public_ip
}
```
##### **6. Initialize and Apply Terraform**

```sh
terraform init
terraform apply
```

This will create an EC2 instance using the **custom module**.

##### **Key Takeaways**

✅ Modules help **organize Terraform configurations** efficiently.
✅ **Reusability** – You can use the same module multiple times with different parameters.
✅ Using `outputs.tf` makes module values accessible in the root configuration.

### Using Modules from the Registry

#### Using a Module from the Terraform Registry

##### **1. Why Use a Terraform Registry Module?**

The Terraform Registry provides **pre-built, reusable modules** to simplify infrastructure deployment. Instead of writing everything from scratch, you can use **official and community-maintained modules**.

##### **2. Example: Deploying an EC2 Instance Using a Terraform Registry Module**

This example uses the **`terraform-aws-modules/ec2-instance/aws`** module from the Terraform Registry.

```hcl
provider "aws" {
  region = "us-east-1"
}

# Using the EC2 Module from Terraform Registry
module "ec2_instance" {
  source  = "terraform-aws-modules/ec2-instance/aws"
  version = "5.0.0" # Ensure you use the latest compatible version

  name          = "Terraform-EC2"
  instance_type = "t2.micro"
  ami           = "ami-12345678" # Replace with a valid AMI ID
  key_name      = "my-key-pair"  # Ensure the key pair exists
  monitoring    = true

  vpc_security_group_ids = ["sg-12345678"]   # Replace with a valid security group ID
  subnet_id              = "subnet-12345678" # Replace with a valid subnet ID

  tags = {
    Environment = "Dev"
    Owner       = "TerraformUser"
  }
}

# Output the EC2 Public IP
output "ec2_public_ip" {
  value = module.ec2_instance.public_ip
}
```

##### **3. Initializing and Applying Terraform**

```sh
terraform init
terraform apply
```

🔹 This pulls the module from the Terraform Registry and provisions the EC2 instance.

##### **4. Why Use Terraform Registry Modules?**

✅ **Saves Time** – No need to write resource definitions from scratch.
✅ **Standardized Best Practices** – Community and official modules follow best practices.
✅ **Easier Maintenance** – Updates are easier by just changing the module version.

---

#### **Using Multiple Modules from the Terraform Registry**

##### **1. Why Use Multiple Terraform Registry Modules?**

Using multiple modules helps **organize infrastructure components** while leveraging **pre-built, well-maintained modules** from the Terraform Registry.

##### **2. Example: Deploying a VPC, Security Group, and EC2 Instance Using Terraform Registry Modules**

This example provisions:

✅ A **VPC** using the `terraform-aws-modules/vpc/aws` module.
✅ A **Security Group** for SSH & HTTP access using `terraform-aws-modules/security-group/aws`.
✅ An **EC2 instance** inside the VPC using `terraform-aws-modules/ec2-instance/aws`.
```hcl
provider "aws" {
  region = "us-east-1"
}

# Create a VPC using a Terraform Registry Module
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.0.0"

  name = "my-vpc"
  cidr = "10.0.0.0/16"

  azs            = ["us-east-1a", "us-east-1b"]
  public_subnets = ["10.0.1.0/24", "10.0.2.0/24"]

  enable_dns_support   = true
  enable_dns_hostnames = true
}

# Create a Security Group for EC2
module "security_group" {
  source  = "terraform-aws-modules/security-group/aws"
  version = "5.0.0"

  name        = "ec2-sg"
  description = "Allow SSH and HTTP access"
  vpc_id      = module.vpc.vpc_id

  ingress_rules       = ["ssh-tcp", "http-80-tcp"]
  ingress_cidr_blocks = ["0.0.0.0/0"]
}

# Create an EC2 Instance inside the VPC
module "ec2_instance" {
  source  = "terraform-aws-modules/ec2-instance/aws"
  version = "5.0.0"

  name          = "Terraform-EC2"
  instance_type = "t2.micro"
  ami           = "ami-12345678" # Replace with a valid AMI ID
  key_name      = "my-key-pair"  # Ensure the key pair exists
  monitoring    = true

  vpc_security_group_ids = [module.security_group.security_group_id]
  subnet_id              = module.vpc.public_subnets[0]

  tags = {
    Environment = "Dev"
    Owner       = "TerraformUser"
  }
}

# Output Values
output "vpc_id" {
  value = module.vpc.vpc_id
}

output "security_group_id" {
  value = module.security_group.security_group_id
}

output "ec2_public_ip" {
  value = module.ec2_instance.public_ip
}
```

##### **3. Initializing and Applying Terraform**

```sh
terraform init
terraform apply
```

🔹 This will deploy **a VPC, security group, and EC2 instance** using Terraform Registry modules.

##### **4. Benefits of Using Multiple Registry Modules**

✅ **Encapsulation** – Keeps VPC, Security Groups, and EC2 as separate logical units.
✅ **Modularity** – Each module can be reused independently in other projects.
✅ **Less Complexity** – Reduces the amount of custom Terraform code needed.

### Lab: Terraform Modules

- A brief exercise using registry modules.

## Terraform Functions & Conditional Expressions

### More Terraform Functions

- Reference **`terraform console`** command.

---

#### **Terraform Functions - Overview**

##### **What Are Terraform Functions?**

Terraform **functions** allow data manipulation, computation, and transformation within configurations. They help avoid hardcoding values and make configurations dynamic.

##### **Types of Terraform Functions**

✅ **String Functions** – Modify and format strings (`upper()`, `lower()`, `replace()`, `split()`, `substr()`, `trimspace()`, `format()`).
✅ **Numeric Functions** – Perform calculations (`min()`, `max()`, `ceil()`, `floor()`, `abs()`, `log()`, `pow()`).
✅ **Collection Functions** – Work with lists and maps (`length()`, `merge()`, `contains()`, `index()`, `lookup()`, `flatten()`, `distinct()`, `zipmap()`, `element()`, `tomap()`).
✅ **Date and Time Functions** – Handle timestamps (`timestamp()`, `timeadd()`, `formatdate()`).
✅ **Filesystem Functions** – Read files into Terraform (`file()`, `templatefile()`).
✅ **Encoding Functions** – Convert values (`jsonencode()`, `jsondecode()`, `base64encode()`, `base64decode()`).
✅ **Network Functions** – Work with IPs (`cidrsubnet()`, `cidrhost()`, `cidrnetmask()`).
##### **Example Usage**

```hcl
variable "name" {
  default = "terraform"
}

output "uppercase_name" {
  value = upper(var.name) # Converts "terraform" to "TERRAFORM"
}

output "lowercase_name" {
  value = lower("TERRAFORM") # Converts "TERRAFORM" to "terraform"
}

output "replace_example" {
  value = replace("hello world", "world", "Terraform") # Returns "hello Terraform"
}

output "split_example" {
  value = split(",", "apple,banana,cherry") # Returns ["apple", "banana", "cherry"]
}

output "substr_example" {
  value = substr("terraform", 0, 4) # Returns "terr"
}

output "trimspace_example" {
  value = trimspace("  hello  ") # Returns "hello"
}

output "format_example" {
  value = format("Instance-%d", 101) # Returns "Instance-101"
}

output "min_number" {
  value = min(10, 20, 5) # Returns 5
}

output "max_number" {
  value = max(10, 20, 5) # Returns 20
}

output "floor_example" {
  value = floor(4.8) # Returns 4
}

output "ceil_example" {
  value = ceil(4.2) # Returns 5
}

output "abs_example" {
  value = abs(-10) # Returns 10
}

output "pow_example" {
  value = pow(16, 0.5) # Returns 4 (Terraform has no sqrt(); use pow())
}

output "log_example" {
  value = log(100, 10) # Returns 2
}

output "length_example" {
  value = length(["apple", "banana", "cherry"]) # Returns 3
}

output "merge_example" {
  value = merge({ a = 1, b = 2 }, { c = 3 }) # Returns {a = 1, b = 2, c = 3}
}

output "contains_example" {
  value = contains(["apple", "banana", "cherry"], "banana") # Returns true
}

output "index_example" {
  value = index(["a", "b", "c"], "b") # Returns 1
}

output "lookup_example" {
  value = lookup({ "key1" = "value1", "key2" = "value2" }, "key1", "default") # Returns "value1"
}

output "distinct_example" {
  value = distinct(["a", "b", "a", "c"]) # Returns ["a", "b", "c"]
}

output "flatten_example" {
  value = flatten([["a", "b"], ["c", "d"]]) # Returns ["a", "b", "c", "d"]
}

output "zipmap_example" {
  value = zipmap(["a", "b"], [1, 2]) # Returns {"a" = 1, "b" = 2}
}

output "tomap_example" {
  # The old map() function was removed in Terraform 1.0; use tomap() instead
  value = tomap({ key1 = "value1", key2 = "value2" }) # Returns {"key1" = "value1", "key2" = "value2"}
}

output "timestamp_example" {
  value = timestamp() # Returns current timestamp (e.g., "2024-02-18T12:00:00Z")
}

output "timeadd_example" {
  value = timeadd(timestamp(), "24h") # Adds 24 hours to current time
}

output "formatdate_example" {
  value = formatdate("YYYY-MM-DD", timestamp()) # Returns "2024-02-18"
}

output "file_example" {
  value = file("example.txt") # Reads the contents of example.txt
}

output "jsonencode_example" {
  value = jsonencode({ key = "value" }) # Returns '{"key":"value"}'
}

output "jsondecode_example" {
  value = jsondecode("{\"key\":\"value\"}")["key"] # Returns "value"
}

output "base64encode_example" {
  value = base64encode("hello") # Returns "aGVsbG8="
}

output "base64decode_example" {
  value = base64decode("aGVsbG8=") # Returns "hello"
}

output "cidrsubnet_example" {
  value = cidrsubnet("10.0.0.0/16", 4, 1) # Returns "10.0.16.0/20"
}

output "cidrhost_example" {
  value = cidrhost("10.0.0.0/16", 10) # Returns "10.0.0.10"
}

output "cidrnetmask_example" {
  value = cidrnetmask("10.0.0.0/16") # Returns "255.255.0.0"
}
```

##### **Why Use Terraform Functions?**

✅ **Reduce Hardcoding** – Automate data transformations dynamically.
✅ **Improve Code Efficiency** – Simplifies calculations and text processing.
✅ **Enable Dynamic Resource Management** – Adjusts values based on computed logic.
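Functions are easiest to verify interactively with the `terraform console` command referenced above. A quick sketch of a console session (returned values shown below each prompt; exact output formatting may vary slightly by Terraform version):

```sh
$ terraform console
> upper("terraform")
"TERRAFORM"
> cidrsubnet("10.0.0.0/16", 4, 1)
"10.0.16.0/20"
> lookup({ dev = "t3.micro", prod = "t3.large" }, "prod", "t3.micro")
"t3.large"
> exit
```

Because the console loads the current configuration and state, it is a safe way to test expressions against real values before committing them to a `.tf` file.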
### Conditional Expressions

#### **List of Terraform Conditional Expressions**

##### **1. Basic Conditional Expression Syntax**

```hcl
condition ? true_value : false_value
```

✅ Returns `true_value` if the condition is **true**, otherwise returns `false_value`.

##### **2. Numeric Comparison Conditionals**

```hcl
output "is_greater" {
  value = 10 > 5 ? "Yes" : "No" # Returns "Yes"
}

output "is_equal" {
  value = 5 == 5 ? "Match" : "No Match" # Returns "Match"
}
```

✅ Used for **greater than (`>`), less than (`<`), equal (`==`), not equal (`!=`)** conditions.

##### **3. String-Based Conditionals**

```hcl
variable "env" {
  default = "dev"
}

output "instance_size" {
  value = var.env == "prod" ? "t3.large" : "t3.micro"
}
```

✅ Compares strings and returns different values dynamically.

##### **4. Boolean Conditionals**

```hcl
variable "enable_feature" {
  default = true
}

output "feature_status" {
  value = var.enable_feature ? "Enabled" : "Disabled"
}
```

✅ Used for enabling/disabling features dynamically.

##### **5. Nested Conditional Expressions**

```hcl
variable "env" {
  default = "dev"
}

output "instance_size" {
  value = var.env == "prod" ? "t3.large" : (var.env == "staging" ? "t3.medium" : "t3.micro")
}
```

✅ Allows multiple condition checks, similar to **if-elseif-else**.

##### **6. Using Conditionals in Resource Attributes**

```hcl
resource "aws_s3_bucket_versioning" "example" {
  bucket = aws_s3_bucket.example.id

  versioning_configuration {
    status = var.enable_versioning ? "Enabled" : "Suspended"
  }
}
```

✅ Dynamically **enable or disable S3 versioning** based on a variable.

##### **7. Using Maps for Cleaner Condition Handling**

```hcl
variable "env" {
  default = "dev"
}

variable "instance_sizes" {
  default = {
    dev     = "t3.micro"
    staging = "t3.medium"
    prod    = "t3.large"
  }
}

output "selected_size" {
  value = lookup(var.instance_sizes, var.env, "t3.micro") # Defaults to "t3.micro" if env is not found
}
```

✅ Instead of multiple conditions, a **map lookup** provides a cleaner approach.

##### **8. Conditionals with `count` for Resource Creation**

```hcl
resource "aws_instance" "example" {
  count         = var.create_instance ? 1 : 0
  ami           = "ami-12345678"
  instance_type = "t3.micro"
}
```

✅ If `var.create_instance` is **true**, the instance is created, otherwise **no instance** is created.

##### **9. Conditionals with `for_each` to Create Resources Dynamically**

```hcl
variable "enabled_services" {
  default = ["s3", "ec2"]
}

resource "aws_iam_role" "example" {
  for_each = toset(var.enabled_services)
  name     = "role-${each.value}"
}
```

✅ Creates resources **only for enabled services**.

##### **Best Practices**

✅ Use **maps** instead of **nested conditionals** for readability.
✅ Keep conditions **simple and avoid complex nesting**.
✅ Combine with `count` and `for_each` for **dynamic resource creation**.

---

#### **Real-World Example: Using Conditional Expressions in a Terraform Module**

##### **Scenario**

We want to create a Terraform module that:

✅ **Dynamically provisions an EC2 instance** based on an environment (`dev`, `staging`, `prod`).
✅ **Enables S3 versioning only for production**.
✅ **Allows SSH access only in `staging` and `prod` environments**.
✅ **Uses a `map` to assign instance sizes based on the environment** instead of multiple conditionals.

##### **1. Terraform Module Directory Structure**

```
terraform-project/
│── main.tf
│── variables.tf
│── outputs.tf
│── modules/
│   ├── compute/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   ├── outputs.tf
```

##### **2. Define the EC2 Module (`modules/compute/main.tf`)**
```hcl
resource "aws_instance" "example" {
  count                  = var.create_instance ? 1 : 0
  ami                    = "ami-12345678" # Replace with a valid AMI ID
  instance_type          = lookup(var.instance_sizes, var.env, "t3.micro")
  vpc_security_group_ids = [aws_security_group.ec2_sg.id]

  tags = {
    Name        = "Instance-${var.env}"
    Environment = var.env
  }
}

resource "aws_security_group" "ec2_sg" {
  name        = "ec2-security-group-${var.env}"
  description = "Security group for EC2 instance"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = var.env == "prod" || var.env == "staging" ? ["0.0.0.0/0"] : []
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_s3_bucket" "example" {
  bucket = "my-app-bucket-${var.env}"
}

resource "aws_s3_bucket_versioning" "example" {
  bucket = aws_s3_bucket.example.id

  versioning_configuration {
    status = var.env == "prod" ? "Enabled" : "Suspended"
  }
}
```

##### **3. Define Input Variables (`modules/compute/variables.tf`)**

```hcl
variable "env" {
  description = "Deployment environment (dev, staging, prod)"
  type        = string
  default     = "dev"
}

variable "create_instance" {
  description = "Whether to create the EC2 instance"
  type        = bool
  default     = true
}

variable "instance_sizes" {
  description = "Map of instance sizes based on environment"
  type        = map(string)
  default = {
    dev     = "t3.micro"
    staging = "t3.medium"
    prod    = "t3.large"
  }
}
```

##### **4. Define Outputs (`modules/compute/outputs.tf`)**

```hcl
output "instance_id" {
  description = "The ID of the EC2 instance"
  value       = aws_instance.example[*].id
}

output "s3_bucket_name" {
  description = "S3 bucket name"
  value       = aws_s3_bucket.example.id
}

output "security_group_id" {
  description = "Security Group ID"
  value       = aws_security_group.ec2_sg.id
}
```

##### **5. Use the Module in the Root Configuration (`main.tf`)**

```hcl
provider "aws" {
  region = "us-east-1"
}

module "compute" {
  source          = "./modules/compute"
  env             = "prod"
  create_instance = true
}
```

##### **6. Apply the Configuration**

```sh
terraform init
terraform apply
```

🔹 This provisions **an EC2 instance, an S3 bucket, and a security group** based on the environment.

##### **How Conditional Expressions Were Used in This Module**

✅ **`count = var.create_instance ? 1 : 0`** → Controls **whether the EC2 instance is created**
✅ **`lookup(var.instance_sizes, var.env, "t3.micro")`** → Dynamically assigns the **EC2 instance type**
✅ **`var.env == "prod" ? "Enabled" : "Suspended"`** → **Enables S3 versioning** only in production
✅ **`var.env == "prod" || var.env == "staging" ? ["0.0.0.0/0"] : []`** → **Allows SSH only in staging and prod**

##### **Key Benefits of This Approach**

✅ **Modular & Reusable** – The module can be used for different environments (`dev`, `staging`, `prod`)
✅ **Less Hardcoding** – Uses **conditionals & maps** instead of `if-else` structures
✅ **More Control** – Resources are **enabled/disabled dynamically** based on conditions

### Lab: Functions and Conditional Expressions

- Exercises using the functions & conditionals.

### Terraform Workspaces (OSS)

#### **What Are Terraform Workspaces?**

Terraform **Workspaces (OSS)** provide a way to manage **multiple environments (e.g., dev, staging, prod)** using a single Terraform configuration. Each workspace has its **own separate state file**, allowing infrastructure isolation while using the same code.

##### **Why Use Terraform Workspaces?**

✅ **Environment Isolation** – Each workspace maintains a separate state file.
✅ **Avoid Code Duplication** – Use the same Terraform code for multiple deployments.
✅ **Easier Multi-Environment Management** – Switch between `dev`, `staging`, and `prod` easily.

##### **Basic Workspace Commands**

```sh
terraform workspace new dev     # Create a new workspace named "dev"
terraform workspace list        # List all workspaces
terraform workspace select prod # Switch to the "prod" workspace
terraform workspace show        # Show the current active workspace
```

##### **Using Workspaces in Configuration**

```hcl
# Variable defaults must be constant values, so expose the
# workspace name through a local value instead of a variable default
locals {
  environment = terraform.workspace
}

resource "aws_s3_bucket" "example" {
  bucket = "my-app-${terraform.workspace}-bucket"

  tags = {
    Environment = terraform.workspace
  }
}
```

🔹 This creates different S3 buckets for each workspace (e.g., `my-app-dev-bucket`, `my-app-prod-bucket`).

##### **Best Practices for Workspaces**

✅ Use **workspaces for separate state management** (not full environment segregation).
✅ For complex environments, consider **separate state files** instead of workspaces.
✅ Use `terraform.workspace` to dynamically adjust resource names.

---

#### **Using Terraform Workspaces with Remote State (S3 & DynamoDB)**

##### **Why Use Workspaces with Remote State?**

Terraform workspaces allow managing multiple environments (`dev`, `staging`, `prod`) **while using a centralized remote state** stored in **S3**. DynamoDB is used for **state locking** to prevent conflicts when multiple users modify the state.

##### **1. Configure Terraform Backend for Workspaces (`backend.tf`)**

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"
    # Backend blocks cannot use expressions such as ${terraform.workspace};
    # the S3 backend namespaces workspace state automatically instead.
    key                  = "terraform.tfstate"
    workspace_key_prefix = "envs"
    region               = "us-east-1"
    encrypt              = true
    dynamodb_table       = "terraform-lock"
  }
}
```

🔹 Each non-default workspace (`dev`, `staging`, `prod`) gets its **own state file** stored under `envs/<workspace>/terraform.tfstate` (the `default` workspace uses `key` directly).
🔹 The **DynamoDB table (`terraform-lock`)** prevents concurrent state modifications.

##### **2. Create a DynamoDB Table for State Locking (`dynamodb.tf`)**

```hcl
resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-lock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```

🔹 This table is used to lock the state **to prevent conflicts** when multiple users apply changes.

##### **3. Create an S3 Bucket for Remote State (`s3.tf`)**

```hcl
resource "aws_s3_bucket" "terraform_state" {
  bucket = "my-terraform-state-bucket"

  versioning {
    enabled = true
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}
```

🔹 Enables **versioning** for rollback and **encryption** for security.

##### **4. Using Workspaces in Resource Configuration (`main.tf`)**

```hcl
provider "aws" {
  region = "us-east-1"
}

# terraform.workspace cannot be used as a variable default; use a local
locals {
  environment = terraform.workspace
}

resource "aws_s3_bucket" "example" {
  bucket = "my-app-${terraform.workspace}-bucket"

  tags = {
    Environment = terraform.workspace
  }
}

output "current_workspace" {
  value = terraform.workspace
}

output "s3_bucket_name" {
  value = aws_s3_bucket.example.id
}
```

🔹 **Dynamically sets resource names** based on the current workspace.
🔹 Different **S3 buckets** are created for each workspace (e.g., `my-app-dev-bucket`, `my-app-prod-bucket`).
##### **5. Managing Workspaces & Deploying Terraform**

```sh
terraform init                  # Initialize Terraform with remote state
terraform workspace new dev     # Create a new workspace for "dev"
terraform workspace select dev  # Switch to "dev" workspace
terraform apply                 # Apply changes for "dev"

terraform workspace new prod    # Create a new "prod" workspace
terraform workspace select prod # Switch to "prod"
terraform apply                 # Apply changes for "prod"
```

🔹 Each workspace (`dev`, `staging`, `prod`) has **its own separate state file** in S3.
🔹 Terraform **automatically loads the correct state** for the active workspace.

##### **Key Benefits of This Approach**

✅ **Environment Isolation** – Each workspace maintains a separate state file in S3.
✅ **Remote State Management** – S3 stores the state securely, and DynamoDB prevents conflicts.
✅ **Modular & Scalable** – Easily add new environments without duplicating code.

### Lab: Terraform Workspaces

- Practice with Terraform workspaces; a pattern worth trying is sketched below.
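A useful exercise for this lab: combine `terraform.workspace` with a map lookup (both covered above) so each workspace gets an appropriately sized instance. A minimal sketch, with hypothetical size values and resource name:

```hcl
# Sizes a resource per workspace without duplicating configuration.
# The map values and the "app" resource name are illustrative only.
locals {
  instance_sizes = {
    dev     = "t3.micro"
    staging = "t3.medium"
    prod    = "t3.large"
  }
}

resource "aws_instance" "app" {
  ami           = "ami-12345678" # Replace with a valid AMI ID
  instance_type = lookup(local.instance_sizes, terraform.workspace, "t3.micro")

  tags = {
    Name        = "app-${terraform.workspace}"
    Environment = terraform.workspace
  }
}
```

Switching workspaces (`terraform workspace select prod`) and re-running `terraform apply` then provisions the same configuration with the size appropriate to that environment.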