# AWS SAA - Services - Storage
### Elastic Block Storage (EBS)
- Can be presented as a volume to boot from.
- Can be presented as a volume to mount.
- Block level storage for EC2 instances.
- Can be detached/attached to different EC2 instances.
- Certain volume types (Provisioned IOPS io1/io2) support Multi-Attach, allowing one EBS volume to be attached to multiple instances.
- EBS has AZ-level redundancy.
- Volumes are AZ specific.
- EBS volumes can't be shared across AZs.
- EBS snapshots can be stored in S3.
- A snapshot can be restored from S3 as a new volume in another AZ.
- A snapshot can be copied from region to region via S3.
- Volume Types
- General purpose SSD (gp2 & gp3)
- Price & performance
- Recommended for most workloads
- Types:
- GP3
- ~20% lower price per GB than gp2.
- GP2
- Performance scales with volume size.
- Provisioned IOPS SSD vol
- Critical
- IOPS intensive
- Throughput intensive workloads
- Types:
- PIOPS io2 volumes
- PIOPS io2 Block Express volumes
- PIOPS io1 volumes
- Throughput optimized HDD volumes
- Big data
- Data warehouses
- Log processing
- Cold HDD volumes
- Infrequently accessed data
- Scenarios where the lowest cost is important
- Magnetic
- Workloads where data is infrequently accessed
- ![[EBS_VolTypes-0.png| General Purpose SSD & PIOPS SSD]]
- ![[EBS_VolTypes-1.png | Throughput Optimized HDD & Cold HDD]]
- ![[EBS_VolTypes-2.png | Magnetic]]
- EBS Pricing
- Per GB per month.
- Depends on volume type.
- Faster IOPS is more costly.
- Snapshots are per GB per month.
- Summary
- Block storage is data split into blocks & stored as separate pieces, each with a unique identifier.
- A collection of blocks is presented to the OS as a volume.
- Block storage can be booted from & mounted.
- EBS yields block-level storage volumes for EC2 instances.
- EBS volumes are provisioned in an AZ.
- EBS volumes can be copied to another AZ by taking a snapshot & then creating a volume from the snapshot in the desired AZ.
- EBS has multiple volume types for different storage needs.
- Costs are incurred by what you provision & charged per GB per month.
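- A minimal AWS CLI sketch of the provision/attach flow described above; the AZ, volume ID & instance ID are placeholders.
```bash
# Provision a 20 GB gp3 volume in a specific AZ.
aws ec2 create-volume \
  --availability-zone us-east-1a \
  --size 20 \
  --volume-type gp3

# Attach it to an instance in the same AZ.
aws ec2 attach-volume \
  --volume-id vol-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 \
  --device /dev/sdf
```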
### EBS - Demo part 1
- A brief overview & demo of the EBS service & features.
- Created an EBS volume, attached it to a server & mounted it.
- Then detached it from the server & attached it to a different server.
### EBS - Demo part 2
- Showed how to create a snapshot of the EBS volume to copy it to a different AZ in the same region.
- Then created a volume from the snapshot in the different AZ.
- Showed how to copy the snapshot from the source region to a destination region, create a volume from it & attach it to an EC2 instance in that region.
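- A hedged CLI sketch of the snapshot → copy → restore flow from the demo; IDs & regions are placeholders.
```bash
# Snapshot the source volume.
aws ec2 create-snapshot \
  --volume-id vol-0123456789abcdef0 \
  --description "demo snapshot"

# Copy the snapshot to a destination region (command runs against it).
aws ec2 copy-snapshot \
  --region us-west-2 \
  --source-region us-east-1 \
  --source-snapshot-id snap-0123456789abcdef0

# Create a volume from the copied snapshot in an AZ of that region.
aws ec2 create-volume \
  --region us-west-2 \
  --availability-zone us-west-2a \
  --snapshot-id snap-0fedcba9876543210
```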
### LAB - Moving a volume from one EC2 instance to the next
- Task Overview
>The organisation decided to run an application on an EC2 instance; you are tasked with attaching an EBS volume to the EC2 instance.
>
>For convenience, two EC2 instances were created for the purpose of the lab. Check the EC2 instances created and make a note of the region where the instances were created.
>Note: Perform all tasks in the us-east-1 region.
- Task request - Part 0:
- Create EBS volume
- Volume type: gp2
- Size: 20GB
- AZ: `us-east-1c` (where instances are located)
- Tags: Key = Name, Value = EBSVOLUME
- Task request - Part 1:
- Attach to `Instance1`
- Select the volume created in the previous steps
- Click the Actions drop-down button and select Attach Volume.
- Confirm the AZ of the EBS volume.
- In the instance drop-down menu, select Instance1.
- (Only instances in the same AZ as the EBS volume are displayed here.)
- Keep the device name as sdf and proceed to attach the volume.
- Task request - Part 2:
- Create XFS file system on the EBS volume attached to `instance1`
- Command: `sudo mkfs.xfs /dev/xvdf`
- Task request - Part 3:
- Create directory ebsdemo under the /home/ec2-user directory.
- Mount the attached volume (`/dev/****`) to the directory, as shown below.
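- A sketch of the mount step, assuming the device is `/dev/xvdf` as in the Part 2 mkfs command.
```bash
# Create the mount point & mount the volume.
mkdir /home/ec2-user/ebsdemo
sudo mount /dev/xvdf /home/ec2-user/ebsdemo

# Verify the mount.
df -h /home/ec2-user/ebsdemo
```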
- Task request - Part 4:
- Configure the filesystem to be automatically mounted on boot.
- Get the unique identifier (UUID) of the EBS volume.
- Edit `/etc/fstab`, as shown below.
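- A sketch of the fstab step; the UUID is a placeholder for whatever `blkid` prints.
```bash
# Find the UUID of the filesystem on the volume.
sudo blkid /dev/xvdf

# Add a line like this to /etc/fstab;
# 'nofail' lets the instance boot even if the volume is detached:
# UUID=aabbccdd-1122-3344-5566-77889900aabb  /home/ec2-user/ebsdemo  xfs  defaults,nofail  0  2

# Test the entry without rebooting.
sudo umount /home/ec2-user/ebsdemo && sudo mount -a
```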
- Task request - Part 5:
- Create a file in the directory `/home/ec2-user/ebsdemo`.
- Contents of file, "my first ebs volume"
- Task request - Part 6:
- There is scheduled maintenance for a system upgrade, but the application must remain available, so attach the EBS volume to the 2nd EC2 instance where the application is already installed.
- Unmount & detach the EBS volume from `instance1`.
- `sudo umount -l /home/ec2-user/ebsdemo`
- Navigate to the Volumes page in the AWS console, select the volume used for this lab, click Actions & select Detach Volume (or use the CLI, as below).
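- The detach can also be done via the CLI; the volume ID is a placeholder.
```bash
# Always unmount inside the OS first, then detach.
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
```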
- Task request - Part 7:
- Attach the EBS volume to `instance2`.
- Connect to instance 2 via SSH.
- A filesystem was already created on the attached volume. Create a directory ebsdemo under /home/ec2-user/ & mount the volume to the ebsdemo directory.
- Task request - Part 8:
- Later, to enhance the availability of the application, it is deployed in another AZ, us-east-1d.
- The application is deployed on the instance3 EC2 instance & we want to move the EBS volume to instance3. However, instance3 is in a different AZ, which is a problem because EBS volumes are only available in one AZ. To make the EBS volume available to instance3, we need to take a snapshot. Snapshots allow us to take a volume in one AZ and move it to another.
- Create a snapshot of the ebs volume.
- Select the gp2 volume for which you want to create a snapshot.
- Click Actions & select Create Snapshot from the drop-down menu.
- Check Snapshots under the Elastic Block Store section of the EC2 page.
- Snapshot creation will take some time.
- Copy the snapshot ID, which will be used later.
- `snap-0e15b4871d10c5977`
- Task request - Part 9:
- Create a volume from the snapshot created in the last step and attach it to instance3.
- `snap-0e15b4871d10c5977`
- Click on Create Volume.
- Type: gp2
- Size: 20
- Availability Zone: us-east-1d
- In the Snapshot ID section, open the drop-down menu and select the 1st option (specify a custom snapshot ID).
- Paste the copied snapshot ID into the pop-up window and click Save.
- Then click Create Volume.
- `vol-08e351bc27775fa8a`
- Task request - Part 10:
- Attach the volume created from the snapshot to instance3. Follow the steps below.
- Attach the volume to instance3.
- Connect to instance 3 via SSH.
- Create directory ebsdemo under /home/ec2-user/ & mount the volume to the ebsdemo directory.
- Navigate into the /home/ec2-user/ebsdemo directory and verify that the file1 file created on the first instance is there.
### Instance Store
- Provides temporary block level instance storage.
- Physically located on the host (hypervisor).
- Summary
- Instance stores should only be used for temporary data.
- If an EC2 instance is moved to another host, then it will lose all the data from the original Instance Store.
- Block level storage.
### Instance Store - Demo
- A brief demo & overview of the Instance Store feature.
- Not all instance types support the Instance Store feature.
- Provided only on certain (non-free-tier) instance types; one way to check for instance store volumes is shown below.
- When an instance is stopped & moved to a new host, the data & file system will not be available.
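- A quick check for instance store volumes, via `lsblk` & the standard instance metadata service (IMDSv2).
```bash
# Instance store volumes show up as extra disks (e.g. NVMe devices on
# Nitro instances) alongside the EBS root volume.
lsblk

# Query instance metadata for the block device mapping.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/block-device-mapping/
```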
### EFS
- EFS supports NFSv4.
- Does not work with Windows-based systems.
- Can be shared across multiple instances.
- EFS is VPC specific via a mount target.
- Mount Target is deployed into a subnet.
- Standard Storage Classes
- Multi-AZ resilience & the highest levels of durability & availability.
- EFS Standard
- EFS Standard Infrequent Access (Standard IA)
- One Zone Storage Classes
- Additional savings, but data is stored in a single AZ.
- EFS One Zone
- EFS One Zone Infrequent Access (EFS One Zone IA)
- Performance Modes
- General Purpose Performance Mode
- Latency sensitive applications.
- Web-serving environments.
- Content management systems.
- Home directories.
- General file sharing.
- Elastic Throughput Mode
- Automatically scales throughput performance up or down to meet the needs of workload activity.
- Max I/O Performance Mode
- Higher levels of aggregate throughput & operations per second.
- Provisioned Throughput Mode
- Provides a level of throughput the file system can drive, independent of the file system's size or burst credit balance.
- Bursting Throughput Mode
- Scales with the amount of storage in your file system & supports bursting to higher levels for up to 12 hours per day.
- Installing amazon-efs-utils
- `sudo dnf -y install amazon-efs-utils`
- or via whichever package manager the distribution supports.
- `sudo mount.efs $efs_id:/ /mount/point`
- The file system ID (`$efs_id`) is found via the AWS Console; see the sketch below.
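- A sketch of mounting & auto-mounting EFS with efs-utils; `fs-xxxxxxxx` is a placeholder for the real file system ID.
```bash
# Mount the file system with TLS via the efs-utils mount helper.
sudo mkdir -p /mnt/efs
sudo mount -t efs -o tls fs-xxxxxxxx:/ /mnt/efs

# /etc/fstab entry for automatic mounting on boot:
# fs-xxxxxxxx:/  /mnt/efs  efs  _netdev,tls  0  0
```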
- Summary
- FS storage service provided by AWS.
- EFS supports NFSv4.
- EFS does not support Windows; Linux only.
- EFS can be mounted onto multiple EC2 instances.
- EFS file systems are made available inside a VPC via Mount Targets. Mount Targets get IP addresses from the subnets they're deployed in.
- EFS has two storage classes:
- Standard Storage Classes
- One Zone Storage Classes
- EFS has two performance modes:
- General Purpose Performance Mode
- Max I/O Performance Mode
- Throughput modes are Elastic, Provisioned & Bursting.
- EFS is non-bootable.
### EFS - Demo
- A brief demo & overview of the EFS service & features.
- Lifecycle Management is able to automatically move older data to a different storage class.
- SGs are applied to the EFS Mount Targets.
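- A hedged CLI sketch of creating a file system & a mount target; note the SG is applied to the mount target, not the file system. IDs are placeholders.
```bash
# Create an EFS file system.
aws efs create-file-system \
  --performance-mode generalPurpose \
  --throughput-mode elastic \
  --tags Key=Name,Value=demo-efs

# Create a mount target in a subnet & attach the security group.
aws efs create-mount-target \
  --file-system-id fs-0123456789abcdef0 \
  --subnet-id subnet-0123456789abcdef0 \
  --security-groups sg-0123456789abcdef0
```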
### FSx for Windows/Lustre/NetApp/OpenZFS
- FSx is a managed, high-performance file storage service for varying workloads.
- Benefits
- Provides storage
- Managed storage
- Scalable storage
- Shared access
- Backups
- Flavours
- Amazon FSx for Windows File Server
- Supports SMB protocol.
- Integration with MS AD.
- Supports data deduplication.
- Set quotas.
- Amazon FSx for Lustre
- Low latency, high throughput access to data.
- Built on Lustre file system
- Integrates with other AWS services such as S3, DataSync & AWS Batch
- Easily scale file systems capacity & throughput
- Amazon FSx for NetApp ONTAP
- High performance storage for Linux, Windows & MacOS via NFS, SMB & iSCSI protocols.
- File system can be scaled up or down for workload demands.
- Supports snapshots, clones, replications & more.
- Amazon FSx for OpenZFS
- Built on top of OpenZFS.
- Supports Linux, Windows & MacOS via NFS.
- Supports data compression, snapshots & data cloning.
- Features built-in data protection & security features.
- Deployment options
- FSx for Windows, ONTAP & OpenZFS support Single & Multi-AZ deployments.
- FSx for Lustre supports Single-AZ only.
- FSx Comparisons
- ![[FSx_comparison-0.png | FSx Comparisons]]
- Summary
- FSx is a fully managed service to provide high performance file storage for various workloads.
- FSx comes in 4 flavours:
- Windows File Server (SMB)
- Lustre
- ONTAP
- OpenZFS
- FSx for Windows is fully managed Windows SMB server.
- Integrates with AD for authentication.
- FSx Lustre is optimized for high performance & parallel file processing.
- Best use case for scientific computing & machine learning.
- Based on Lustre file system.
- FSx ONTAP is based on NetApp's ONTAP file system.
- Supports NFS, SMB, & iSCSI.
- FSx OpenZFS is based on open-source OpenZFS file system.
- Supports Windows, Linux, & MacOS via NFS.
### S3 - Overview
- Object based storage service.
- Use cases
- log files
- media/audio/video/images
- CI/CD artifacts
- storage for static content & media for web sites.
- Terminology
- Bucket, a container for objects similar to a directory.
- Objects are whatever data/content stored in a bucket.
- Key - The file name.
- Value - file data itself.
- There are additional items as well such as version ID.
- Buckets have a flat structure; there are no sub-directories.
- The console represents files as being stored in directories, but this is merely a view of the key structure.
- Availability
- Data is stored & replicated across a fleet of servers/AZs for HA.
- Bucket Naming Conventions
- Buckets must be unique **across** all AWS accounts.
- This is due to how S3 creates sub-domains that map to buckets. e.g.
- `https://mybucket.s3.amazonaws.com` is unique across AWS & thus you'll be unable to name a bucket **mybucket**.
- Restrictions
- Can handle unlimited number of objects.
- Maximum individual file size for an object is 5 TB.
- By default accounts support 100 buckets, but this can be increased to 1,000 by requesting a service limit increase.
- Summary
- Scalable, highly available, secure & performant object storage service.
- Common use cases are storing static websites, media files, logs & traces.
- Object storage is a flat file structure that can't be booted or mounted from.
- Objects are merely files that have a key (the name of the object) & a value (the data itself, plus additional metadata).
- Buckets are a container for objects.
- Bucket names must be unique across all AWS accounts. They're globally unique.
- S3 can handle an unlimited number of objects.
- Maximum size for a single file is 5 TB.
- Multipart upload allows breaking an object into parts before uploading; see the sketch below.
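- A minimal CLI sketch of the basics above; the bucket name is a placeholder & must be globally unique.
```bash
# Create a bucket.
aws s3 mb s3://my-unique-bucket-name --region us-east-1

# Upload a file; the high-level cp command automatically uses
# multipart upload for large files.
aws s3 cp ./bigfile.bin s3://my-unique-bucket-name/bigfile.bin
```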
### S3 - demo
- A brief demo & overview of the S3 object service & features.
### S3 - Storage Classes
- S3 Standard
- Default storage class.
- Objects replicated across at least three AZs.
- 99.99% availability.
- Low latency, immediate availability.
- Charged per GB per month & outgoing data.
- S3 Standard-IA (Infrequent Access)
- Charged per GB per month & outgoing data.
- Includes a retrieval fee.
- Minimum duration charge of 30 days.
- S3 One Zone-IA (Infrequent Access)
- Stored on one AZ.
- Cheaper as a result.
- Replicated within the AZ but not across multiple AZs.
- Charged the same as S3 Standard-IA.
- S3 Glacier Instant Retrieval
- Low cost option for archival data.
- Performance same as S3 Standard.
- Includes a retrieval fee.
- Minimum duration charge of 90 days.
- S3 Glacier Flexible Retrieval
- Objects are not publicly accessible.
- Not immediately available.
- Minimum duration charge of 90 days.
- Minimum file size 40 KB.
- Options to retrieve (fee increases with speed requested):
- Bulk - 5-12 hrs
- Expedited - 1-5 mins
- Standard - 3-5 hrs
- During retrieval objects are stored in S3 Standard-IA.
- S3 Glacier Deep Archive
- Objects are not publicly accessible.
- Not immediately available.
- Includes a retrieval fee.
- Minimum duration charge of 180 days.
- Minimum file size 40 KB.
- Cheapest storage class.
- Options to retrieve (fee increases with speed requested):
- Standard - 12 hrs
- Bulk - 48 hrs
- During retrieval objects are stored in S3 Standard-IA.
- S3 Intelligent Tiering
- Automatically reduces storage costs by intelligently moving data to the most cost-effective access tier.
- Incurs a monitoring/automation fee per 1k objects.
- ![[S3_StorageClasses-0.png]]
- Summary
- Storage classes provide varying levels of data access, resiliency & cost.
- Storage classes can be defined by setting the `x-amz-storage-class` request header, but can be changed after upload as well; see the sketch below.
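- A sketch of setting & changing storage classes via the CLI; bucket & key are placeholders.
```bash
# Upload directly into a non-default storage class
# (sets the x-amz-storage-class header under the hood).
aws s3 cp ./logs.txt s3://my-unique-bucket-name/logs.txt \
  --storage-class STANDARD_IA

# Change the class of an existing object by copying it over itself.
aws s3api copy-object \
  --bucket my-unique-bucket-name \
  --key logs.txt \
  --copy-source my-unique-bucket-name/logs.txt \
  --storage-class GLACIER_IR
```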
### S3 - Storage Classes - Demo
- A brief overview & demo of the various storage classes for S3.
### S3 - Versioning
- Versioning is enabled on the entire bucket not the objects themselves.
- Versioning options:
- Unversioned (default)
- Versioning Enabled
- Once enabled, you can't disable it; you can only suspend it.
- Versioning Suspended
- How versioning works
- When versioning is enabled, each object carries a version ID property.
- Each time the file is modified/updated, a new version with a new version ID is created; prior versions are retained.
- When a file is deleted, it adds a property called a "Delete Marker".
- To undelete the file, remove the Delete Marker.
- Versioning Costs
- Charged for all versions of an object in a bucket, both the file itself and any version of it.
- Thus if one version of a file is 5 GB and another version is 10 GB, you'll be charged for 15 GB.
- Versioning Suspending
- Suspending keeps all existing versions of a file & doesn't delete older versions.
- However, when the file is updated/replaced, the new version uses a version ID of NULL.
- MFA Delete
- When enabled, MFA is required to change the versioning state of the bucket.
- MFA is also required to delete versions; MFA Delete can only be enabled via the CLI.
- Summary
- Versioning allows you to preserve, retrieve & restore every version of an object stored in your bucket.
- Versioning is disabled on buckets by default & must be explicitly enabled.
- Versioning is enabled at the bucket level & you can't enable versioning per object.
- Buckets have 3 versioning states:
- Unversioned, Versioning Enabled & Versioning Suspended.
- When versioning is enabled on a bucket, it can only be suspended not disabled.
- When suspended, previous versions remain but new versions won't be created.
- Users are charged for **each** version of an object.
- MFA can be configured to secure the versioning state of a bucket.
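- A CLI sketch of the versioning workflow above; bucket, key & version ID are placeholders.
```bash
# Enable versioning on a bucket (can later be Suspended, never disabled).
aws s3api put-bucket-versioning \
  --bucket my-unique-bucket-name \
  --versioning-configuration Status=Enabled

# List all versions & delete markers of an object.
aws s3api list-object-versions \
  --bucket my-unique-bucket-name --prefix file.txt

# Undelete: remove the delete marker by deleting that specific version.
aws s3api delete-object \
  --bucket my-unique-bucket-name \
  --key file.txt \
  --version-id <delete-marker-version-id>
```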
### S3 - Versioning - Demo
- A brief overview & demo of the S3 versioning feature.
- When enabled a toggle of "Show Versions" will be available in the bucket.
- The "Delete Marker" is a special file you must delete in order to get to the versions of the file.
- Deleting a specific version removes it permanently, as normal.
### S3 - ACL and Resource Policies
- Resource Policy
- Determines who has access to an S3 resource.
- S3 Bucket Policy
- Determines who can access the bucket & what operations can be performed.
- S3 Bucket Policies are written in JSON.
- reference AWS docs.
- `Sid` - the name you give a rule
- `Principal` - who this policy applies to
- `*` applies to all users/everyone.
- `Effect` - whether the action is allowed or denied, e.g. Allow
- `Action` - what is allowed to be done
- This can be delete, create, etc.
- `Resource` - the S3 bucket ARN
- A specific prefix can be defined as well.
- Multiple statements are supported.
- `Condition` - a specific rule to apply to specific requests, e.g. by IP address or bucket prefix (see the sketch below).
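- A minimal example policy combining the elements above; the bucket name & IP range are placeholders.
```bash
# Write the policy & apply it to the bucket.
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadFromOfficeIP",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-unique-bucket-name/*",
      "Condition": {
        "IpAddress": { "aws:SourceIp": "203.0.113.0/24" }
      }
    }
  ]
}
EOF

aws s3api put-bucket-policy \
  --bucket my-unique-bucket-name --policy file://policy.json
```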
- IAM Policies vs Resource Policies
- IAM Policies are attached to an AWS user.
- Resource Policies are attached to an AWS resource.
- Can be applied to anonymous/public users.
- Must ensure that the IAM policy & Resource Policy don't conflict with each other.
- S3 ACLs
- Legacy access control mechanism, not recommended to use.
- Summary
- Bucket policies determine who has access & what operations they can perform.
- Within a policy you have **Principal**, **Resource**, **Effect** & **Action**.
- Principal determines who the policy should apply to.
- Resources determines what AWS resources the policy should apply to.
- Action determines what the principal is allowed to perform on the resources.
- Effect either allows or denies the action.
- Bucket policies work alongside IAM policies.
- Make sure to verify IAM policies aren't conflicting with Bucket policies for AWS users.
- For public access to a bucket, Bucket Policies are used.
- For AWS users access to a bucket, IAM Policies are used.
### S3 - ACL and Resource Policies - Demo
- A brief overview & demo of the Resource Policies feature of S3.
- To test functionality utilize a browser session management feature/tool.
- Reference S3 policy documentation.
- Some actions require specific rules to be defined appropriately/correctly.
### S3 - Static Website Hosting
- S3 provides a URL to access the website.
- `http://$bucketname.s3-website-region.amazonaws.com`
- Custom domain names are supported but require a specific format.
- Uses Route 53.
- The bucket name must match the domain name being hosted, e.g. `example.com`.
- Pricing is standard S3 pricing & per request sent to the site.
- Reference the AWS S3 pricing docs (for GET requests).
- Summary
- S3 can be used to host static websites.
- Charged for files in S3 & a fee per HTTP request.
- S3 provides a default URL to access website.
- Custom domains require the bucket name to match the domain, e.g. `example.com`.
### S3 - Static Website Hosting - Demo
- A brief overview & demo of the static hosting feature of S3.
- The Static Website Hosting option is in the Properties of the bucket.
- Remember to enable public access & create a bucket resource policy to allow connectivity; see the sketch below.
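- A CLI sketch of enabling website hosting & public access; the bucket name is a placeholder, & a public-read bucket policy (as in the Resource Policies sketch) is still needed.
```bash
# Enable static website hosting.
aws s3 website s3://my-unique-bucket-name/ \
  --index-document index.html --error-document error.html

# Relax Block Public Access so a public bucket policy can take effect.
aws s3api put-public-access-block \
  --bucket my-unique-bucket-name \
  --public-access-block-configuration \
  BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false
```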
### S3 - Pre-Signed URLs
- Allows for non-AWS users to access S3 buckets or objects securely.
- Pre-Signed URLs have authentication info stored in the URL itself.
- Use case example
- A media hosting site that enables paid users of the site to access content stored in the S3 bucket.
- An application where you can have a paid user upload content to the S3 bucket.
- Pre-signed URLs limitations:
- Require an expiration date, URLs can't last indefinitely.
- Maximum of 7 days expiration for a URL.
- Even if an IAM user does not have access to an S3 bucket, a pre-signed URL can still be generated using that account.
- The pre-signed URL doesn't grant access to a bucket; however, it allows requests to be sent to S3 as the **user** that generated the URL.
- Summary
- Pre-signed URLs use security credentials to grant time limited access to download objects.
- When a user accesses a pre-signed URL, they are performing a request to the AWS API as the user that generated the pre-signed URL.
- If the AWS user that created the pre-signed URL can't access an object, then the user accessing the URL will also be unable to access the object.
### S3 - Pre-Signed URLs - Demo
- A brief overview & demo of the pre-signed URL feature of S3.
- The SDK/CLI is the primary way of interacting with this feature; see the sketch below.
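- A minimal CLI sketch; bucket & key are placeholders.
```bash
# Generate a pre-signed URL valid for 1 hour (maximum is 7 days = 604800 s).
aws s3 presign s3://my-unique-bucket-name/file.txt --expires-in 3600
```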
### S3 - Access Points
- Act like custom views into a bucket, used to manage access for individuals/groups.
- Delegate policies from the bucket to the access point.
- Summary
- Simplifies access management to S3 buckets.
- Every user/group can be provided their own Access Point which acts as their own view/tunnel into a S3 bucket.
- Every access point gets its own ARN & users refer to that Access Point URL instead of the S3 bucket URL.
- Instead of applying policies on buckets, we can delegate the policies to the Access Points, which makes policy management simpler.
- Access Points can restrict access to buckets to devices in specific VPCs.
### S3 - Access Points - Demo
- A brief overview & demo of the Access Points feature of S3.
- Reference S3 documentation for Access Point policies.
- Delegating access control to access points.
- The resource ARN must include `/object/` followed by the prefix of the S3 object; see the sketch below.
- Include the Access Point in the policy as well.
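- A hedged sketch of creating an access point & the ARN form its policies use; the account ID & names are placeholders.
```bash
# Create an access point for a bucket.
aws s3control create-access-point \
  --account-id 111122223333 \
  --name team-a-ap \
  --bucket my-unique-bucket-name

# Access point policies target the access point ARN; note the /object/
# segment before the object prefix, e.g.:
# "Resource": "arn:aws:s3:us-east-1:111122223333:accesspoint/team-a-ap/object/team-a/*"
```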
### LAB - Creating an S3 bucket and putting objects into different storage tiers.
- Task Request - Part 0:
- Create an S3 bucket in the us-east-1 region.
- Note: The bucket name should start with the prefix kk-lab-, for example kk-lab-john
- Task Request - Part 1:
- Upload a file to the bucket: drag or copy the file into the upload area & then select Upload.
- Task Request - Part 2:
- Modify object Storage Class to be Standard-IA
- Select the object, then select "Actions" then select "Edit Storage Class"
- Task Request - Part 3:
- Update the uploaded object image.jpg to use the Glacier Instant Retrieval storage class.
- Task Request - Part 4:
- Upload a file logs.txt & set initial Storage Class to Glacier Instant Retrieval storage class.
- Task Request - Part 5:
- Delete the file logs.txt from the bucket.
### AWS Backup
- Backup vs Disaster Recovery
- Backups
- Creates copies of data to restore it in case of data loss.
- Are an essential part of disaster recovery.
- Disaster Recovery
- Encompasses a broader strategy, including backups.
- Includes planning for system & application recovery.
- AWS & Disaster Recovery
- S3 for disaster recovery
- EBS snapshots for disaster recovery
- Taken manually or on a schedule.
- AWS Backup Service
- Backup Vault - the container that stores backup data.
- Backup Plan - the defined configuration/schedule for the backup.
- Recovery Point - the point in time to recover from.
- Integrates with various AWS services.
- Summary
- Disaster recovery refers to the process of planning for & responding to events that could cause data loss or system downtime.
- A solid disaster recovery plan ensures business continuity, minimizes downtime & safeguards data integrity.
- Disaster recovery encompasses a broader strategy including backup & also includes planning for system & application recovery.
- AWS provides services that can be utilized to assist in Backups & DR, (S3, EBS snapshots & AWS Backup).
- AWS Backup is a fully managed backup service for centralized & automated backups of data across AWS services/resources.
- AWS Backup has 3 main concepts
- Backup Vault
- Backup Plan
- Recovery Point
- AWS Backup can perform backups across AWS services such as EC2, EBS, EFS & RDS; see the sketch below.
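- A hedged CLI sketch of the vault + on-demand backup flow; ARNs & names are placeholders.
```bash
# Create a vault to store backup data.
aws backup create-backup-vault --backup-vault-name demo-vault

# Run an on-demand backup of an EBS volume into the vault.
aws backup start-backup-job \
  --backup-vault-name demo-vault \
  --resource-arn arn:aws:ec2:us-east-1:111122223333:volume/vol-0123456789abcdef0 \
  --iam-role-arn arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole
```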
### Elastic Disaster Recovery
- Fully managed DR service.
- Use AWS as a recovery site.
- A continual replication state.
- Failover from On-Premise to AWS.
- Failover from other Cloud platforms to AWS.
- Failover from one AWS region to another.
- How DRS works:
- Identify source servers for DR
- Replicate data via AWS Replication Agent
- Define replication settings, e.g. DR staging area
- Define launch settings for DR, e.g. EC2 recovery servers
- Summary
- A fully managed DR service for physical, virtual & cloud based servers.
- Can utilize AWS as a recovery site instead of investing in on-premise DR infrastructure.
- Source servers represent the servers/data that is to be replicated for DR.
- The staging area is the location where AWS will receive the replicated data.
- A launch template is used to define the specs of the recovery servers.
- Size, region/subnet, security groups.
### Storage Gateway
- A bridge between AWS storage & the on-premise environment.
- Use cases
- Extension of on-premise storage
- Assists with cloud migrations
- Backups
- DR
- Deployed as an appliance, VM or physical device in your environment.
- Storage flavours (based on the existing storage solution being used in your environment):
- Volume
- File
- Tape
- Volume Gateway
- Cached Mode
- Data isn't stored locally & is stored in S3.
- Frequently accessed data is cached locally.
- Acts as a DC extension.
- Stored Mode
- The appliance works as existing storage within the environment.
- Data is initially stored locally on-prem, not in AWS.
- Data is then replicated asynchronously to S3 as a backup.
- File Gateway
- For NFS/SMB storage.
- Data isn't stored locally & is stored in S3.
- Frequently accessed data is cached locally.
- Tape Gateway
- Data isn't stored locally & is stored in S3.
- Stores data in a Virtual Tape Library (backed by S3).
- Archives to a Virtual Tape Shelf (backed by Glacier).
- Uses iSCSI protocol.
### Challenge Yourself Quiz - Meet your Services - Storage
- Questions:
- Quiz: 77% - 20 out of 26 correct.
- Q: A company wants to protect their S3 objects from being accidentally deleted or overwritten. Which S3 feature should they enable?
- A: S3 Versioning protects S3 objects from being accidentally deleted or overwritten by keeping multiple versions of an object in the same bucket. S3 Object Lock can help prevent objects from being deleted or overwritten for a fixed amount of time, or indefinitely.
- This was multiple choice & I forgot to select another option.
- Q: A data engineer needs to upload large files over a high-latency network to S3. Which method should they use to maximize the upload efficiency?
- A: S3 Multipart Upload maximizes upload efficiency, especially for large files over high-latency networks, by allowing parallel uploads of parts of an object.
- I selected S3 Transfer Acceleration.
- Q: An organization wants to enforce that all objects uploaded to their S3 bucket are encrypted at rest. Which method can they use to achieve this?
- A: S3 Bucket Policies can be used to enforce that all objects uploaded to an S3 bucket are encrypted at rest by denying uploads that are not encrypted.
- I selected Enable S3 Default Encryption.
- Q: A database administrator needs to choose an EBS volume type for a high-performance database that requires consistent IOPS performance and low-latency throughput. Which EBS volume type should they choose?
- A: Provisioned IOPS SSD (io1) is the best choice for high-performance databases that require consistent IOPS performance and low-latency throughput. This volume type is designed to deliver predictable, high IOPS rates for I/O-intensive workloads.
- I selected Throughput Optimized HDD (st1). There are so many options for storage I got confused...
- Q: A company needs to comply with regulatory requirements that require them to prevent object deletion or modification for a fixed period. Which S3 feature can enforce this compliance requirement?
- A: S3 Object Lock prevents object deletion or modification for a fixed period, helping to comply with regulatory requirements.
- I selected S3 Lifecycle Policies.
- Q: A system administrator needs to grant read and write permissions to a single user on specific S3 objects. Which access control method allows this level of fine-grained permissions?
- A: S3 ACLs
- I selected S3 Bucket Policies because it was mentioned that ACLs weren't recommended for use...