# AWS SAA - Services - Compute
### EC2
- I'm already familiar with this concept, it's just virtual machines.
- Instance Types
- General purpose - workloads for balanced usage, e.g. web servers.
- Compute optimized - for workloads that require greater CPU usage.
- Memory optimized - for workloads that require processing large datasets in memory.
	- Storage optimized - for workloads that require high IO.
- GPU instances - for GPU workloads.
- AMI (Amazon Machine Image)
- Both AWS & 3rd party provided.
- Can be customized.
- Private, Public & Shared AMIs
- Baked In vs Fried
- User Data
- Bootstrap script max size 16KB
- Security Groups are required upon deploying an EC2.
- EC2 with EBS volumes.
- Persistent storage & snapshots.
- EC2 with ELB & AutoScaling (AS)
- Traffic management (ELB) & automatically deploying/removing instances (AS).
- EC2 with Elastic IPs
- Persistent floating IPs.
- Launch templates
	- Configurations that specify instance sizes, SGs, etc. to use when deploying EC2 instances.
- EC2 Instance Placements
- Cluster Placement Group - deploys instances as close together as possible.
	- Partition Placement Groups - divides instances into logical partitions, with each partition placed on its own set of racks so partitions don't share hardware.
	- Spread Placement Group - places each instance on distinct underlying hardware (max 7 instances per AZ) so no two instances share a host.
- EC2 Pricing
- Reference [[AWS Cloud Practitioner#Specific Billing - EC2 | Specific Billing - EC2]].
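- The user-data bullet above can be sketched in code. A minimal sketch (package names and script contents are illustrative, not from any AWS API): assembling a bootstrap script and checking it against the 16KB user-data limit before launch.

```python
# Illustrative sketch: build an EC2 user-data bootstrap script and check
# it against the 16 KB user-data size limit before launching an instance.
USER_DATA_LIMIT = 16 * 1024  # 16 KB, per the EC2 user-data restriction


def build_user_data(packages):
    """Return a bash bootstrap script that installs the given packages."""
    lines = ["#!/bin/bash", "apt-get update -y"]
    lines += [f"apt-get install -y {pkg}" for pkg in packages]
    return "\n".join(lines)


def validate_user_data(script):
    """Raise if the script exceeds the 16 KB user-data limit; return its size."""
    size = len(script.encode("utf-8"))
    if size > USER_DATA_LIMIT:
        raise ValueError(f"user data is {size} bytes; limit is {USER_DATA_LIMIT}")
    return size


script = build_user_data(["nginx"])
validate_user_data(script)
```

This mirrors the "Fried" approach: the script runs at first boot, versus "Baked In" where the software is already in the AMI.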
### EC2 - Demo
- A brief overview & demo of EC2.
### LAB - Create your favorite virtual machine in EC2
- Task Request - Part 0:
- Create a security group with the following info:
- Security group 1
- Security group name: my-web-server-sg
- Description: allows SSH, HTTP from any network.
- VPC: choose the default VPC
		- Inbound rules: allow SSH, HTTP (TCP/22, TCP/80) traffic from any network (0.0.0.0/0)
- Outbound rules: leave default settings
- Task Request - Part 1:
- Deploy instance in `us-east-1`
- Name: my-linux-server
- Under Application and OS Images (Amazon Machine Image), do the following:
- Choose Quick Start, and then choose Ubuntu. This is the operating system (OS) for your instance.
- Choose the t2.micro instance type, which is selected by default.
- Under Key pair (login), select ec2-user key pair.
	- Keep the Network Settings as default values. For the Firewall (security groups), choose Select existing security group and make sure the my-web-server-sg security group is being used.
- For the storage, configure the root volume with 30GiB GP2.
- Task Request - Part 2:
- Install NGINX & test service functionality.
- Task Request - Part 3:
- Delete the instance.
### EC2 Image Builder
- A service to create a "Golden Image" for your specific use case or needs.
- Image Builder Process
- Build the image.
- Select the base/source image.
- Customize
- Add or remove software from base image
- Test
- Perform whatever tests to validate image meets needs.
- Distribute
- Distribute AMIs to AWS regions & AWS accounts.
- Run
- Deploy instances from the custom image.
- EC2 Image Builder Pipeline
- Image Recipe
- Source image
- Build components
- Customization
- Infrastructure Config
- Performs the actual image creation
- Defines what subnets & VPCs to use
- Distribution Config
- Where the image should be deployed to
- Features
- Automated image creation
- Golden Image creation
- Simpler to secure, e.g. patch management
- Consistent workflow
- Version management
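- The pipeline pieces above can be represented as plain data. An illustrative model only (the field and component names are made up, not the real Image Builder API): a recipe plus the stage order the service walks through.

```python
# Illustrative data model of an Image Builder pipeline (NOT the real API):
# an image recipe, a distribution config, and the stage order.
PIPELINE_STAGES = ["build", "test", "distribute"]

image_recipe = {
    "source_image": "ubuntu-22.04-base",            # base/source image (made-up name)
    "build_components": ["install-nginx", "apply-patches"],  # customization steps
}

distribution_config = {
    "regions": ["us-east-1", "eu-west-1"],          # where the AMI is copied
    "target_accounts": ["111111111111"],            # accounts it is shared with
}


def run_pipeline(recipe, dist):
    """Walk the stages in order, returning (stage, detail) pairs."""
    return [
        ("build", recipe["source_image"]),
        ("test", recipe["build_components"]),
        ("distribute", dist["regions"]),
    ]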
### Elastic Network Interfaces
- Elastic Network Interfaces (ENI)
- A virtual network interface that can be attached to instances in a VPC.
- Can be moved across instances.
- Is a logical network interface for network management of an instance.
- Will contain network configurations, e.g. IPs
- Are persistent network devices.
- Can have Elastic IPs applied to them.
	- Instances have a primary ENI that possesses the instance's public/private IP addresses.
	- Each ENI has one primary private IP & can carry multiple secondary private IPs; public addresses are mapped to it via Elastic IPs.
- Can be associated with different SGs.
- Summary
- Multiple IPs
- Elastic IPs
- Assign SGs to the ENI itself
- Can be hot attached/detached to instances
- Network Flow Logs can be applied to ENIs
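- The hot attach/detach behavior is what enables the failover pattern: the ENI keeps its IPs and SGs as it moves between instances. A conceptual sketch in pure Python (no AWS calls; class and IDs are invented for illustration):

```python
# Conceptual model of ENI hot attach/detach: the interface keeps its
# private IPs and security groups as it moves between instances.
class ENI:
    def __init__(self, private_ips, security_groups):
        self.private_ips = list(private_ips)
        self.security_groups = list(security_groups)
        self.attached_to = None  # instance ID, or None when detached

    def attach(self, instance_id):
        if self.attached_to is not None:
            raise RuntimeError("detach before attaching elsewhere")
        self.attached_to = instance_id

    def detach(self):
        self.attached_to = None


eni = ENI(["10.0.1.25"], ["my-web-server-sg"])
eni.attach("i-aaa")   # normal operation on the first instance
eni.detach()
eni.attach("i-bbb")   # failover: the same IP now answers on the new instance
```

This is the property the quiz question at the bottom of these notes tests: failover with the *same* IP address points to ENIs, not ELB.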
### Elastic Network Interfaces - Demo
- A brief overview & demo of the ENI feature for EC2.
- Resides in the Network & Security section.
- ENIs are located in the Advanced Network settings when deploying an EC2 instance.
### Elastic Beanstalk
- **AWS Elastic Beanstalk** is a **Platform-as-a-Service (PaaS)** that simplifies the deployment, scaling, and management of web applications and services. It automatically handles infrastructure provisioning, load balancing, scaling, and monitoring while allowing developers to focus on writing code. Elastic Beanstalk supports multiple programming languages, including **Java, .NET, Node.js, Python, Ruby, PHP, and Go**, and integrates with AWS services like **RDS, S3, and IAM**. It provides a **fully managed environment** but also allows manual customization for greater control over configurations.
- Features
- Easy deployment of applications in AWS.
- Managed platform updates.
- Autoscaling & ELB functionality.
- Built in monitoring & health monitoring.
- Pre-configured application stacks/component.
### Elastic Beanstalk - Demo
- A brief overview & demo of Elastic Beanstalk.
### Lightsail
- **AWS Lightsail** is a **simplified cloud computing service** designed for developers, small businesses, and startups to quickly deploy and manage applications. It provides **pre-configured virtual private servers (VPS)** with bundled resources such as **compute, storage, networking, and databases** at a predictable, low-cost pricing model. Lightsail supports easy deployment of **WordPress, LAMP stacks, containers, and other web applications** with built-in **DNS management, load balancing, and automatic backups**. It is ideal for users who need a straightforward cloud solution without the complexity of managing AWS infrastructure manually.
- Features
- Virtual Servers
- Containers
- Load Balancers
- Managed DBs
- Global CDN
- Benefits
- Simple & easy to use.
- Pre-configured solutions.
- Reliability.
- Smooth transition with EC2.
- Integration to AWS services.
### ECS
- **AWS Elastic Container Service (ECS)** is a fully managed **container orchestration service** that allows users to run, manage, and scale Docker containers on AWS. ECS integrates tightly with AWS services like **EC2 (for self-managed instances) and Fargate (for serverless containers)**, providing flexibility based on workload requirements. It supports **load balancing, auto-scaling, IAM-based security controls, and networking features** to simplify containerized application deployment. ECS is widely used for **microservices architectures, batch processing, and running containerized workloads** without the need for managing underlying infrastructure.
- ECS is a proprietary, AWS-provided service.
- With ECS you have to manage the underlying EC2 instances/infrastructure.
- ECS manages the containers.
- With ECS & Fargate, AWS will manage the container manager (ECS) & the underlying compute/infrastructure (Fargate).
- ECS Task Overview
- Create a dockerfile, then create an image from it.
- A "Task" is basically a container in ECS terms.
	- Then use a Task Definition to define the container options (image/ports/volumes/CPU/MEM).
- A Task Definition is similar to a docker-compose file in ECS terms.
- ECS Service
- A service that ensures a certain number of Tasks are running at all times.
- Restarts containers that have exited/crashed.
- ECS & LBs
- A LB can be assigned to route external traffic to ECS managed service.
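- The Task Definition and Service ideas above can be sketched as data plus a reconciliation rule. The field names follow the general shape of the ECS API, but this is an illustrative sketch, not something you could register as-is:

```python
# Sketch of an ECS task definition as data (mirrors the docker-compose
# analogy: image, ports, CPU/MEM in one declarative document).
task_definition = {
    "family": "web-api",
    "cpu": "256",
    "memory": "512",
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:latest",
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }
    ],
}


def tasks_to_start(desired_count, running_tasks):
    """The core of an ECS Service loop: start replacements for any
    exited/crashed tasks so the desired count is always met."""
    return max(0, desired_count - len(running_tasks))
```

For example, a service with a desired count of 3 and one running task would launch 2 replacements; this is the "ensures a certain number of Tasks are running" behavior described above.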
### ECS - Demo Part 1
- A brief demo & overview of the ECS service, part 1.
- Reference Docker Hub, Mumshad Mannambeth.
- kodekloud - ecs-project1 & ecs-project2
- Remember that container IPs will change when updates are rolled out. Utilize a LB in front so these changes don't break your deployment implementation.
### ECS - Demo Part 2
- A brief demo & overview of the ECS service, part 2.
- This demo used an ELB to forward traffic to the web-api container provided in the example.
### EKS
- **AWS Elastic Kubernetes Service (EKS)** is a fully managed **Kubernetes service** that simplifies the deployment, scaling, and management of containerized applications. It automates **Kubernetes cluster setup, updates, and maintenance**, while integrating with AWS services like **IAM, VPC, and CloudWatch** for security, networking, and monitoring. EKS supports **both EC2 and AWS Fargate** for running Kubernetes workloads, providing flexibility in managing infrastructure. It is ideal for organizations adopting **microservices architectures**, requiring **high availability, scalability, and portability** for their containerized applications.
- AWS creates & manages the control plane nodes. It will scale these services as well.
- API server
- Scheduler
- Controller Manager
- ETCD
- With EKS you create & manage the worker nodes.
- Self-managed nodes are EC2 instances that you deploy with the required components.
		- kubelet
- kube-proxy
- container runtime
- perform maintenance updates
- register node with control plane
- Managed Node Group
		- AWS automates provisioning & lifecycle management of EC2 nodes.
- Managed nodes run EKS images.
- Managed EKS nodes are part of an Auto Scaling group.
- Fargate
- Serverless architecture.
- Creates worker nodes on demand.
- Managed by AWS/Fargate.
- Creating EKS Cluster
- Provide cluster name & k8s version.
	- Provide IAM role for cluster.
- Provision worker nodes.
- Specify storage & secrets if needed.
- Select VPC & subnets.
- Define SG for the cluster.
- Creating Worker Nodes
- Create a node group.
- Select instance type.
- Define min/max number of nodes.
- Specify EKS cluster to connect to.
- Connecting to Cluster
	- Utilize `aws eks update-kubeconfig` (or manually via `kubectl config set-cluster`) to connect with `kubectl`.
- Methods to Create Cluster
- AWS Console
- Create cluster & cluster worker nodes.
- Setup `kubectl` to connect.
- AWS provided `eksctl` CLI
- `eksctl create cluster`
- IaC such as Terraform/Pulumi
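- The node group sizing step above follows one rule worth remembering: desired capacity must sit between the min and max node counts. A minimal sketch (field names mimic the EKS scaling-config shape, but this validates locally and calls nothing):

```python
# Sketch of the managed node group sizing rule: 0 <= min <= desired <= max.
def validate_node_group(min_size, desired, max_size):
    """Return a scaling-config-shaped dict, or raise if the sizes are inconsistent."""
    if not (0 <= min_size <= desired <= max_size):
        raise ValueError("require 0 <= min <= desired <= max")
    return {"minSize": min_size, "desiredSize": desired, "maxSize": max_size}


config = validate_node_group(1, 2, 4)
```

The Auto Scaling group behind a managed node group then keeps the node count inside this envelope.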
### ECR
- Elastic Container Registry provided & managed by AWS.
- Private registries.
- Image lifecycle management.
- Integration with AWS services.
- Image Scanning uses AWS Inspector for known vulnerabilities & issues.
- Reference [[AWS Cloud Practitioner#AWS Security Resources | AWS Security Resources]] - Detection.
- Management of container images to be stored in the AWS environment.
- EKS/ECS or on-premise can utilize this as a registry.
- Public ECRs are available to anyone.
- Charged for storage of images.
- No charges for outbound traffic.
- Private ECRs are available to AWS accounts (yours or another account).
- Charges for storage of images.
- Charges for outbound traffic.
- CI/CD Pipeline Example
	- Push code to CodeCommit, which triggers a CodePipeline/CodeBuild run that then pushes the built image to ECR.
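- Private ECR repository URIs follow a fixed format, which is what you tag/push images against: `<account-id>.dkr.ecr.<region>.amazonaws.com/<repo>:<tag>`. A tiny helper (the account ID and repo names below are made up):

```python
# Build a private ECR image URI from its parts:
# <account-id>.dkr.ecr.<region>.amazonaws.com/<repo>:<tag>
def ecr_image_uri(account_id, region, repo, tag="latest"):
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repo}:{tag}"


uri = ecr_image_uri("123456789012", "us-east-1", "ecs-project1", "v1")
# e.g. docker tag ecs-project1:v1 <uri> && docker push <uri>
```

This is the URI ECS/EKS task specs reference to pull from the private registry.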
### ECR - Demo
- A brief overview & demo of the ECR service & features.
- Requires AWS CLI for usage.
- Can utilize IAM roles as well for this.
- Reference Docker documentation for management/usage of image registries.
### App Runner
- **AWS App Runner** is a **fully managed service** that makes it easy to deploy and run **containerized web applications and APIs** without managing infrastructure. It automatically handles **scaling, load balancing, security, and deployment** from a **source code repository (GitHub) or container registry (ECR)**. App Runner provides **built-in TLS encryption, IAM-based security, and automatic scaling**, making it ideal for **developers who want to focus on coding** while AWS manages the underlying infrastructure.
- Creates a CI/CD pipeline on your behalf for code management.
- Supports CodeCommit/GitHub.
- AppRunner provides a custom URL for access.
- Supports AWS services & 3rd party services as it is just your application running in AWS infrastructure.
- You can setup a VPC connector to access your AWS resources.
### Batch
- **AWS Batch** is a fully managed **batch computing service** that enables users to efficiently run **large-scale batch processing jobs** across AWS compute resources. It dynamically provisions **EC2 instances or Fargate containers** based on job requirements, optimizing cost and performance. AWS Batch eliminates the need for managing **job scheduling, infrastructure provisioning, and scaling**, making it ideal for workloads such as **data processing, machine learning training, financial modeling, and scientific simulations**.
- Job Lifecycle
- The **AWS Batch Job Lifecycle** begins when a **job is submitted** to a job queue, where it waits to be scheduled based on priority and resource availability. The **scheduler assigns the job** to a compute environment (EC2 or Fargate) and moves it to the **RUNNING** state when resources are available. Once the job completes successfully or fails, it transitions to the **SUCCEEDED** or **FAILED** state, respectively. AWS Batch can then trigger retries, send notifications, or execute dependent jobs based on predefined conditions.
	- Submitted -> Pending -> Runnable -> Starting -> Running, then one of 2 terminal states:
- Succeeded - Job completed.
- Failed - Job failed.
- Batch Components
	- Job Definition - a template of how the job should be run
- Job Submission
- Job Queue
- Job Scheduler
- Job Execution
- Compute environment - either EC2 or Fargate
- Will automatically scale resources based on job requirements.
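- The lifecycle above is effectively a small state machine. A sketch with the state names from AWS Batch and simplified transitions (retries and dependencies are not modelled):

```python
# The AWS Batch job lifecycle as a tiny state machine; transitions are
# simplified (no retries or job dependencies modelled).
TRANSITIONS = {
    "SUBMITTED": ["PENDING"],
    "PENDING": ["RUNNABLE"],
    "RUNNABLE": ["STARTING"],
    "STARTING": ["RUNNING"],
    "RUNNING": ["SUCCEEDED", "FAILED"],  # the two terminal states
}


def advance(state, target):
    """Move a job to the next state, rejecting illegal jumps."""
    if target not in TRANSITIONS.get(state, []):
        raise ValueError(f"illegal transition {state} -> {target}")
    return target


state = "SUBMITTED"
for nxt in ["PENDING", "RUNNABLE", "STARTING", "RUNNING", "SUCCEEDED"]:
    state = advance(state, nxt)
```

A job sits in RUNNABLE until the scheduler finds capacity in the compute environment, which is where the automatic scaling mentioned above kicks in.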
### Lambda
- **AWS Lambda** is a **serverless compute service** that allows users to run code **without provisioning or managing servers**. It automatically scales, executes code in response to **events (e.g., S3 uploads, API Gateway requests, DynamoDB triggers)**, and charges only for the compute time used. Lambda supports multiple programming languages and integrates with AWS services like **S3, DynamoDB, SNS, and CloudWatch**. It is ideal for **event-driven applications, microservices, and automation tasks**, enabling developers to focus on code while AWS handles scaling, fault tolerance, and infrastructure management.
- Reference [[AWS Cloud Practitioner#Core AWS Services - Compute Lambda | Compute Lambda]]
### Lambda - Demo
- A brief overview & demo of the Lambda service & features.
- Reference documentation with regards to layers.
### LAB - launching your first lambda function
- Task Request - Part 0:
- Select **Author From Scratch**
- Create lambda function named: `kk-saa-1`
	- Runtime: `python3.11`
- Task Request - Part 1:
	- Under the **Test** tab of the lambda function, copy the JSON below into it.
- **Event Name**: Test
- Select **Test** button to test code.
- Use the code below for the function.
- Execute the following command:
- `aws lambda invoke --function-name kk-saa-1 --invocation-type RequestResponse --payload '{"key1": "kodekloud student"}' output.txt`
- Example python code
```python
def lambda_handler(event, context):
    username = event.get('key1', 'User')
    response_message = f"Hello {username}, all the best for your SAA exam!"
    return {
        'statusCode': 200,
        'body': response_message
    }
```
- JSON Payload
```json
{
"key1": "kodekloud student"
}
```
- Task Request - Notes:
	- I had to use the `base64` utility to encode the payload, but this was not mentioned at all in the lab. (AWS CLI v2 treats `--payload` as base64 by default; passing `--cli-binary-format raw-in-base64-out` lets you send raw JSON instead.) So I think the lab needs to be fixed.
- Provided feedback via KK lab and hope to get it fixed...
### Step Functions - Demo
- **AWS Step Functions** is a **serverless workflow orchestration service** that allows you to build and coordinate **multi-step workflows** using AWS services like **Lambda, ECS, DynamoDB, and S3**. It provides a **visual interface** for designing workflows, enabling you to define the order of execution, handle errors, and integrate with external APIs. Step Functions supports **Standard Workflows** (for long-running processes) and **Express Workflows** (for high-volume, short-duration tasks). It simplifies application logic, improves automation, and ensures reliable state management for complex workflows.
### Serverless Application Model
- The **AWS Serverless Application Model (AWS SAM)** is an **open-source framework** designed to simplify the development, deployment, and management of **serverless applications** on AWS. It extends **AWS CloudFormation** by providing a **simplified syntax** to define serverless resources like **AWS Lambda, API Gateway, DynamoDB, and Step Functions**. AWS SAM supports **local testing and debugging**, enabling developers to emulate Lambda functions and API Gateway endpoints on their local machines. With built-in **CI/CD capabilities**, it streamlines the deployment process, making it easier to build scalable and efficient serverless applications.
- SAM template
- A YAML configuration file that defines various AWS resources based on CloudFormation.
- SAM CLI
- **`sam init`** – Initializes a new AWS SAM application with a predefined template.
- **`sam build`** – Compiles and prepares the application code and dependencies for deployment.
- **`sam local invoke`** – Runs a Lambda function locally for testing.
- **`sam package`** – Packages the application by creating a CloudFormation-compatible deployment artifact.
- **`sam deploy`** – Deploys the packaged application to AWS using CloudFormation.
- **`sam logs`** – Fetches and displays logs for a deployed Lambda function from Amazon CloudWatch.
- **`sam validate`** – Checks the AWS SAM template for syntax and best practices errors.
- **`sam publish`** – Publishes the application to the AWS Serverless Application Repository (SAR).
### Serverless Application Repository
- The **AWS Serverless Application Repository (SAR)** is a **managed repository** that enables developers and organizations to **discover, share, and deploy pre-built serverless applications**. It allows users to **publish their own serverless applications** using AWS SAM and share them privately within an organization or publicly with the AWS community. SAR simplifies deployment by integrating with AWS Lambda, API Gateway, and other AWS services, making it easy to **reuse existing solutions, accelerate development, and maintain best practices** for serverless architectures.
### Amplify
- **AWS Amplify** is a **development platform** that simplifies the creation, deployment, and management of **full-stack web and mobile applications**. It provides **backend services (authentication, APIs, storage, and data)** and a **fully managed CI/CD pipeline** for deploying frontend applications. Amplify integrates with **React, Angular, Vue, iOS, and Android**, allowing developers to build **scalable, secure, and cloud-powered applications** without managing infrastructure. It is ideal for teams looking to accelerate development with **serverless and backend-as-a-service (BaaS) capabilities**.
- Reference [[AWS Cloud Practitioner#Secondary AWS Services - Frontend Web & Mobile Services | Secondary AWS Services - Frontend Web & Mobile Services]]
### Outposts
- **AWS Outposts** is a **fully managed hybrid cloud solution** that extends **AWS infrastructure, services, and APIs** to on-premises data centers or edge locations. It enables organizations to run **AWS services locally** while maintaining seamless integration with the AWS cloud for **low-latency applications, data residency, and regulatory compliance needs**. AWS Outposts comes in two variants: **Outposts racks**, which provide full AWS infrastructure, and **Outposts servers**, designed for smaller-scale deployments. It is ideal for workloads that require **consistent hybrid cloud environments** across on-premises and AWS regions.
### ECS/EKS Anywhere
- **AWS ECS Anywhere** and **AWS EKS Anywhere** are **hybrid container solutions** that extend AWS-managed container orchestration to **on-premises infrastructure**.
- **ECS Anywhere** allows running **Amazon Elastic Container Service (ECS) tasks** on **on-premises servers or edge devices** while leveraging AWS's control plane for management, monitoring, and scaling.
- **EKS Anywhere** enables deploying **Amazon Elastic Kubernetes Service (EKS) clusters** in **on-premises environments**, providing a consistent Kubernetes experience with AWS-native tools for automation, security, and updates.
- Both services help organizations maintain **hybrid and multi-cloud containerized applications** while using AWS’s managed orchestration benefits.
- Maintain data sovereignty.
### VMware Cloud on AWS
- **VMware Cloud on AWS** is a **hybrid cloud service** that enables organizations to run **VMware workloads natively on AWS infrastructure**. It integrates **VMware vSphere, vSAN, and NSX** with AWS services, allowing seamless migration and extension of on-premises **VMware environments** to the cloud. The service provides **high availability, scalability, and disaster recovery** while offering direct access to AWS-native services for modernizing applications. It is ideal for enterprises looking to **extend data centers, migrate workloads, or implement hybrid cloud strategies** without refactoring applications.
### The Snow Family (compute mainly)
- Reference [[AWS Cloud Practitioner#Migrating Data to AWS |Migrating Data to AWS]] area & Snowball.
- The **AWS Snow Family** consists of **portable, rugged edge computing and data transfer devices** designed for **operating in disconnected, remote, or edge environments**. It includes:
	- **AWS Snowcone** – A **small, portable edge device** (8 TB HDD or 14 TB SSD) with optional **AWS IoT Greengrass** for **local compute processing** and secure data transfer.
- **AWS Snowball Edge Compute Optimized** – A **powerful edge computing device** with **52 vCPUs, 208 GiB RAM, and optional GPU** to run **machine learning, data processing, or IoT workloads** in remote locations.
- **AWS Snowmobile** – A **large-scale data transfer service** (up to 100 PB) designed for **migrating massive datasets to AWS** via a shipping container.
- These devices enable **edge computing, real-time analytics, and data processing** before transferring data to AWS, making them ideal for **military, research, maritime, and industrial applications**.
### Challenge Yourself Quiz - Meet your Services - Compute
- Questions: 26
- 21 of 26
- 81%
- Q: A front-end developer needs to deploy a static web application that interacts with backend AWS services. Which service provides a set of tools to build full-stack applications seamlessly?
- A: AWS Amplify
- My selection: AWS App Runner. I forgot Amplify is more suited for static apps.
- Q: A startup wants to deploy their web application quickly and easily without managing the underlying infrastructure, with a pricing model that scales with their usage. Which AWS service meets these requirements?
- A: AWS App Runner
- My selection: AWS Elastic Beanstalk. I need to go over these again as they're similar but different in the scale & abilities provided.
- Q: A solutions architect is designing a high availability architecture for a stateful application that requires a failover mechanism with the same IP address. Which AWS resource can be used to meet this requirement?
- A: Elastic Network Interfaces (ENI)
	- My selection: ELB. The question specifically asks about keeping the IP, not scaling. So while ELB can maintain an address, it doesn't suit this particular request.
- Q: An organization wants to run their containerized applications on-premises with the same management, security, and scale as in AWS Cloud. Which service allows them to do this?
- A: AWS ECS Anywhere
- My selection: AWS Outposts. I need to slow down and read the question. It specifically mentions containers & thus ECS Anywhere is the appropriate answer.
- Q: A developer wants to build and deploy serverless applications using AWS Lambda, Amazon API Gateway, and Amazon DynamoDB. Which framework simplifies the process of building these applications?
- A: AWS Serverless Application Model (SAM)
- My selection: AWS Amplify. They asked specifically about a "framework" not a solution. Once again I need to slow down and read the questions completely before answering.