Compute Flashcards

Evaluate and choose appropriate compute services to meet workload demands with performance and cost efficiency. (52 cards)

1
Q

A company runs an application on six web application servers in an Amazon EC2 Auto Scaling group in a single Availability Zone. The application is fronted by an Application Load Balancer (ALB). A Solutions Architect needs to modify the infrastructure to be highly available without making any modifications to the application.

Which architecture should the Solutions Architect choose to enable high availability?

  1. Create an Amazon CloudFront distribution with a custom origin across multiple Regions.
  2. Modify the Auto Scaling group to use two instances across each of three Availability Zones.
  3. Create a launch template that can be used to quickly create more instances in another Region.
  4. Create an Auto Scaling group to launch three instances across each of two Regions.
A

2. Modify the Auto Scaling group to use two instances across each of three Availability Zones.

The only thing that needs to be changed in this scenario to enable HA is to split the instances across multiple Availability Zones. The architecture already uses Auto Scaling and Elastic Load Balancing so there is plenty of resilience to failure. Once the instances are running across multiple AZs there will be AZ-level fault tolerance as well.

  • CloudFront is not used to provide HA for an application; it is used to accelerate content delivery to users.
  • Multi-AZ should be enabled rather than multi-Region.
  • HA can be achieved within a Region by simply enabling more AZs in the ASG. An ASG cannot launch instances in multiple Regions.
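
The even split in the correct answer can be sketched with a small helper (a rough illustration only; the AZ names are placeholders):

```python
def spread_instances(count, azs):
    """Distribute `count` instances as evenly as possible across `azs`."""
    base, extra = divmod(count, len(azs))
    # The first `extra` AZs receive one additional instance each.
    return {az: base + (1 if i < extra else 0) for i, az in enumerate(azs)}

print(spread_instances(6, ["us-east-1a", "us-east-1b", "us-east-1c"]))
# {'us-east-1a': 2, 'us-east-1b': 2, 'us-east-1c': 2}
```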

Reference:
Add an Availability Zone

Save time with our AWS cheat sheets.

2
Q

A company runs an application on an Amazon EC2 instance that requires 250 GB of storage space. The application is not used often and has small spikes in usage on weekday mornings and afternoons. The disk I/O can vary with peaks hitting a maximum of 3,000 IOPS. A Solutions Architect must recommend the most cost-effective storage solution that delivers the performance required.

Which solution or configuration should the Solutions Architect recommend?

  1. Amazon EBS Cold HDD (sc1)
  2. Amazon EBS General Purpose SSD (gp2)
  3. Amazon EBS Provisioned IOPS SSD (io1)
  4. Amazon EBS Throughput Optimized HDD (st1)
A

2. Amazon EBS General Purpose SSD (gp2)

General Purpose SSD (gp2) volumes offer cost-effective storage that is ideal for a broad range of workloads. These volumes deliver single-digit millisecond latencies and the ability to burst to 3,000 IOPS for extended periods of time.

Between a minimum of 100 IOPS (at 33.33 GiB and below) and a maximum of 16,000 IOPS (at 5,334 GiB and above), baseline performance scales linearly at 3 IOPS per GiB of volume size. AWS designs gp2 volumes to deliver their provisioned performance 99% of the time. A gp2 volume can range in size from 1 GiB to 16 TiB.

In this configuration the volume will provide a baseline performance of 750 IOPS but will always be able to burst to the required 3,000 IOPS during periods of increased traffic.
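
The gp2 baseline calculation described above can be sketched as a small function (a rough model of the documented formula, not an AWS API):

```python
def gp2_baseline_iops(size_gib):
    """gp2 baseline IOPS: 3 IOPS per GiB, floored at 100 and capped at 16,000."""
    return min(max(3 * size_gib, 100), 16_000)

print(gp2_baseline_iops(250))  # 750 — the volume bursts to 3,000 IOPS on credits
```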

  • The io1 volume type will be more expensive and is not necessary for the performance levels required.
  • The sc1 volume type is not going to deliver the performance requirements as it cannot burst to 3,000 IOPS.
  • The st1 volume type is not going to deliver the performance requirements as it cannot burst to 3,000 IOPS.

Reference:
EBS Volume Types

3
Q

A media processing company is migrating its on-premises application to the AWS Cloud. The application processes high volumes of videos and generates large output files during the workflow.
The company requires a scalable solution to handle an increasing number of video processing jobs. The solution should minimize manual intervention, simplify job orchestration, and eliminate the need to manage infrastructure. Operational overhead must be kept to a minimum.

Which solution will fulfill these requirements with the LEAST operational overhead?

  1. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate to process the videos. Use Amazon Simple Queue Service (Amazon SQS) for workflow orchestration and store the processed files in Amazon S3.
  2. Use AWS Batch to run video processing jobs. Use AWS Step Functions to manage the workflow. Store the processed files in Amazon S3.
  3. Use AWS Lambda and Amazon EC2 On-Demand Instances for video processing. Store the processed files in Amazon FSx for Lustre.
  4. Use a fleet of Amazon EC2 Spot Instances to process the videos. Use AWS Step Functions for workflow management and store the processed files in Amazon Elastic File System (Amazon EFS).
A

2. Use AWS Batch to run video processing jobs. Use AWS Step Functions to manage the workflow. Store the processed files in Amazon S3.

AWS Batch automatically manages the underlying infrastructure, scales based on workload and simplifies batch job management. Combining this with AWS Step Functions enables efficient orchestration of workflows. Storing the processed files in Amazon S3 provides durability and scalability, which is ideal for managing large files. This solution minimizes operational overhead by leveraging fully managed services.

  • While Fargate reduces infrastructure management, integrating SQS for workflow orchestration adds complexity compared to using AWS Step Functions. AWS Batch provides more specialized functionality for processing batch workloads.
  • Combining Lambda with EC2 adds complexity to infrastructure management and does not fully eliminate operational overhead. Additionally, FSx for Lustre is more suitable for high-performance computing scenarios rather than general-purpose storage of processed files.
  • Spot Instances are not ideal for workflows that must minimize interruptions, as they can be terminated unexpectedly. AWS Batch is a more suitable solution for managing job processing workloads with minimal operational overhead.
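
A minimal sketch of the orchestration in Amazon States Language, expressed here as a Python dict as it might be passed to Step Functions; the job name, queue, and job definition are placeholders:

```python
import json

# Step Functions task that submits an AWS Batch job and waits for it
# to finish (.sync) before the workflow completes.
definition = {
    "StartAt": "ProcessVideo",
    "States": {
        "ProcessVideo": {
            "Type": "Task",
            "Resource": "arn:aws:states:::batch:submitJob.sync",
            "Parameters": {
                "JobName": "video-transcode",            # placeholder
                "JobQueue": "example-video-queue",       # placeholder
                "JobDefinition": "example-video-job:1",  # placeholder
            },
            "End": True,
        }
    },
}
print(json.dumps(definition, indent=2))
```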

4
Q

A logistics company is running a containerized application on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster with Amazon EC2 instances as the worker nodes. The application includes a management dashboard that uses Amazon DynamoDB for real-time tracking data and a reporting service that stores large datasets in Amazon S3.

The company needs to ensure that the EKS Pods running the management dashboard can access only Amazon DynamoDB, and the EKS Pods running the reporting service can access only Amazon S3. The company uses AWS Identity and Access Management (IAM) for access control.

Which solution will meet these requirements?

  1. Create separate IAM policies for Amazon S3 and DynamoDB access. Attach both policies to the IAM role associated with the EC2 instance profile. Use Kubernetes namespaces to restrict access for the respective Pods to Amazon S3 or DynamoDB.
  2. Create separate IAM roles with policies for Amazon S3 and DynamoDB access. Use Kubernetes service accounts with IAM Role for Service Accounts (IRSA) to assign the AmazonS3FullAccess policy to the reporting service Pods and the AmazonDynamoDBFullAccess policy to the management dashboard Pods.
  3. Create IAM roles with permissions for Amazon S3 and DynamoDB access. Attach the Amazon S3 role to the reporting service Pods and the DynamoDB role to the management dashboard Pods using a shared service account.
  4. Configure role-based access control (RBAC) within Kubernetes to define which Pods can access Amazon S3 and DynamoDB. Use Kubernetes ConfigMaps to store the IAM credentials for each service.
A

2. Create separate IAM roles with policies for Amazon S3 and DynamoDB access. Use Kubernetes service accounts with IAM Role for Service Accounts (IRSA) to assign the AmazonS3FullAccess policy to the reporting service Pods and the AmazonDynamoDBFullAccess policy to the management dashboard Pods.

IAM Role for Service Accounts (IRSA) allows the assignment of IAM roles directly to Kubernetes service accounts, which the Pods use to assume permissions. This solution enables fine-grained access control, ensuring that the Pods for the management dashboard can access only DynamoDB and the Pods for the reporting service can access only S3.

  • Attaching both policies to the EC2 instance profile means all Pods running on the worker nodes would inherit access to both S3 and DynamoDB, violating the requirement for restricted access. Kubernetes namespaces alone do not enforce IAM permissions.
  • Using a shared service account would not provide sufficient isolation between the Pods. IRSA is the correct mechanism to ensure separate access control for each service.
  • Storing IAM credentials in ConfigMaps is not a secure practice. Additionally, RBAC in Kubernetes is not designed to enforce access control for AWS services. IRSA is the proper solution for managing IAM permissions at the Pod level.

5
Q

A financial services company runs a trading application on a Kubernetes cluster hosted in its on-premises data center. Due to a recent surge in trading activity, the on-premises infrastructure can no longer support the increased load. The company plans to migrate the trading application to the AWS Cloud using an Amazon Elastic Kubernetes Service (Amazon EKS) cluster.

The company wants to minimize the operational overhead by avoiding management of the underlying compute infrastructure for the new AWS architecture.

Which solution will meet these requirements with the LEAST operational overhead?

  1. Use self-managed EC2 instances to provide the compute capacity for the EKS cluster. Deploy the application to the cluster using these instances.
  2. Use managed node groups to provide the compute capacity for the EKS cluster. Deploy the application to the cluster using the managed nodes.
  3. Use AWS Fargate to provide the compute capacity for the EKS cluster. Create a Fargate profile and deploy the application using the profile.
  4. Use Amazon EC2 Spot Instances with managed node groups to provide cost-effective compute capacity for the EKS cluster. Deploy the application using the Spot nodes.
A

3. Use AWS Fargate to provide the compute capacity for the EKS cluster. Create a Fargate profile and deploy the application using the profile.

AWS Fargate eliminates the need to provision or manage EC2 instances, allowing the company to run pods without managing the underlying infrastructure. This greatly reduces operational overhead and meets the company’s requirements.

  • Self-managed EC2 instances require manual provisioning, patching, and scaling, which increases operational overhead compared to serverless options like Fargate.
  • While managed node groups reduce some operational effort compared to self-managed nodes, they still require some level of instance management, such as scaling and patching, unlike Fargate.
  • Spot Instances, while cost-effective, introduce potential interruptions and require management of instance scaling, which adds operational complexity.

6
Q

A financial services company operates multiple internal services across various AWS accounts. The company uses AWS Organizations to manage these accounts and needs a centralized security appliance in a networking account to inspect all inter-service communication between AWS accounts. The solution must ensure secure and efficient routing of traffic through the security appliance.

Which solution will meet these requirements?

  1. Deploy a Network Load Balancer (NLB) in the networking account to route traffic to the security appliance. Configure the service accounts to send traffic to the NLB by using a VPC peering connection.
  2. Deploy an Application Load Balancer (ALB) in the networking account to route traffic to the security appliance. Configure the service accounts to send traffic to the ALB by using a private link.
  3. Deploy a Gateway Load Balancer (GWLB) in the networking account to route traffic to the security appliance. Configure the service accounts to send traffic to the GWLB by using a Gateway Load Balancer endpoint in each service account.
  4. Deploy interface VPC endpoints in the networking account for each service in the service accounts. Configure the security appliance to inspect traffic sent through the endpoints.
A

3. Deploy a Gateway Load Balancer (GWLB) in the networking account to route traffic to the security appliance. Configure the service accounts to send traffic to the GWLB by using a Gateway Load Balancer endpoint in each service account.

GWLB is specifically designed to simplify the deployment of security appliances. Using GWLB endpoints in service accounts ensures efficient routing and centralized inspection of traffic.

  • NLB is not optimized for traffic inspection. Additionally, VPC peering lacks centralized management and scalability for large organizations.
  • ALB is primarily used for HTTP/HTTPS-based applications and is not suitable for routing traffic to inspection appliances.
  • Interface VPC endpoints do not inherently support routing all traffic through a centralized appliance and lack the capability for deep packet inspection.

Reference:
What is a Gateway Load Balancer?

7
Q

A company runs a critical data analysis job every Friday evening. The job processes large datasets and requires at least 2 hours to complete without interruptions. The job is stateful and needs reliable compute resources. The company wants to minimize operational overhead while ensuring the job runs as scheduled.

Which solution will meet these requirements?

  1. Configure the job as a containerized task and run it on AWS Fargate using Amazon ECS. Schedule the task using Amazon EventBridge Scheduler.
  2. Configure the job to run in an AWS Lambda function with reserved concurrency. Use Amazon EventBridge to invoke the function on a schedule.
  3. Deploy the job on a dedicated Amazon EC2 On-Demand instance. Use a cron job to schedule the analysis.
  4. Use an Amazon EMR cluster with Spot Instances to process the job. Use Amazon EMR Step Functions to schedule the job execution.
A

1. Configure the job as a containerized task and run it on AWS Fargate using Amazon ECS. Schedule the task using Amazon EventBridge Scheduler.

This is the best option because AWS Fargate provides a serverless compute engine for containers, ensuring no interruptions. EventBridge Scheduler offers an easy way to schedule tasks without requiring manual intervention.

  • Lambda functions are not suited for stateful, long-running jobs. Lambda has a maximum execution duration of 15 minutes.
  • While this solution avoids interruptions, it requires managing and maintaining the EC2 instance, which increases operational overhead.
  • Spot Instances are not suitable for stateful jobs because they can be interrupted. This approach also introduces additional complexity with EMR setup and management.
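
EventBridge Scheduler cron expressions use six fields (minutes hours day-of-month month day-of-week year); a Friday-evening run might be expressed like this, with the hour chosen purely as an example:

```python
def weekly_cron(day_of_week, hour_utc, minute=0):
    """Build an EventBridge Scheduler cron expression for a weekly run.

    '?' is required in the day-of-month field when day-of-week is set.
    """
    return f"cron({minute} {hour_utc} ? * {day_of_week} *)"

print(weekly_cron("FRI", 18))  # cron(0 18 ? * FRI *)
```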

8
Q

A company runs an application on Amazon EC2 instances in an Auto Scaling group. The application stores temporary training data on attached Amazon Elastic Block Store (Amazon EBS) volumes. The company seeks recommendations to optimize costs for the EC2 instances, the Auto Scaling group, and the EBS volumes with minimal manual intervention.

Which solution will meet these requirements with the MOST operational efficiency?

  1. Configure AWS Compute Optimizer to provide cost optimization recommendations for the EC2 instances, the Auto Scaling group, and the EBS volumes.
  2. Set up Amazon CloudWatch billing alerts and manually analyze metrics to identify cost-saving opportunities for the EC2 instances, the Auto Scaling group, and the EBS volumes.
  3. Use AWS Cost and Usage Reports to export data to Amazon Athena. Query the data to identify inefficiencies in the EC2 instances, the Auto Scaling group, and the EBS volumes.
  4. Use AWS Compute Optimizer for recommendations on EC2 instances and Auto Scaling groups. Use Amazon Data Lifecycle Manager to evaluate cost optimizations for the EBS volumes.
A

1. Configure AWS Compute Optimizer to provide cost optimization recommendations for the EC2 instances, the Auto Scaling group, and the EBS volumes.

AWS Compute Optimizer offers actionable insights for cost optimization across EC2 instances, Auto Scaling groups, and EBS volumes with minimal operational overhead.

  • CloudWatch billing alerts with manual metric analysis require significant manual intervention, which reduces operational efficiency.
  • Exporting Cost and Usage Reports to Athena provides insights but requires significant manual effort to analyze the data.
  • Pairing Compute Optimizer with Amazon Data Lifecycle Manager requires separate configurations for different resources, increasing complexity compared to a single AWS Compute Optimizer setup.

Reference:
What is AWS Compute Optimizer?

9
Q

An application on Amazon Elastic Container Service (ECS) performs data processing in two parts. The second part takes much longer to complete.

How can an Architect decouple the data processing from the backend application component?

  1. Process both parts using the same ECS task. Create an Amazon Kinesis Firehose stream
  2. Process each part using a separate ECS task. Create an Amazon SNS topic and send a notification when the processing completes
  3. Create an Amazon DynamoDB table and save the output of the first part to the table
  4. Process each part using a separate ECS task. Create an Amazon SQS queue
A

4. Process each part using a separate ECS task. Create an Amazon SQS queue

Processing each part in a separate ECS task may not be essential but means you can separate the processing of the data. Amazon Simple Queue Service (SQS) is used for decoupling applications: it is a message queue on which you place messages for processing by application components. In this case you can process each data processing part in separate ECS tasks and have them write to an Amazon SQS queue. That way the backend can pick up the messages from the queue when they are ready, and there is no delay due to the second part not being complete.

  • Amazon Kinesis Firehose is used for streaming data. This is not an example of streaming data. In this case SQS is better as a message can be placed on a queue to indicate that the job is complete and ready to be picked up by the backend application component.
  • Amazon Simple Notification Service (SNS) can be used for sending notifications. It is useful when you need to notify multiple AWS services. In this case an Amazon SQS queue is a better solution as there is no mention of multiple AWS services and this is an ideal use case for SQS.
  • Amazon DynamoDB is unlikely to be a good solution for this requirement. There is a limit on the maximum amount of data that you can store in an entry in a DynamoDB table.
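
The decoupling pattern can be illustrated with Python's standard-library queue standing in for SQS; the message shape here is purely illustrative:

```python
import queue

# Local stand-in for the SQS queue between the ECS tasks and the backend.
q = queue.Queue()

def second_stage_task(job_id):
    # ...long-running processing happens here...
    # On completion, post a message instead of making the backend wait.
    q.put({"job_id": job_id, "status": "complete"})

def backend_poll():
    # The backend consumes messages whenever it is ready.
    return q.get(timeout=1)

second_stage_task("order-42")
print(backend_poll())  # {'job_id': 'order-42', 'status': 'complete'}
```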

Reference:
AWS Application Integration Services

10
Q

A High Performance Computing (HPC) application will be migrated to AWS. The application requires low network latency and high throughput between nodes and will be deployed in a single AZ.

How should the application be deployed for best inter-node performance?

  1. In a partition placement group
  2. In a cluster placement group
  3. In a spread placement group
  4. Behind a Network Load Balancer (NLB)
A

2. In a cluster placement group

A cluster placement group provides low latency and high throughput for instances deployed in a single AZ. It is the best way to provide the performance required for this application.

  • A partition placement group is used for grouping instances into logical segments. It provides control and visibility into instance placement but is not the best option for performance.
  • A spread placement group is used to spread instances across underlying hardware. It is not the best option for performance.
  • A network load balancer is used for distributing incoming connections; it does not assist with inter-node performance.

Reference:
Placement groups for your Amazon EC2 instances

11
Q

An application has been migrated to Amazon EC2 Linux instances. The EC2 instances run several 1-hour tasks on a schedule. There is no common programming language among these tasks, as they were written by different teams. Currently, these tasks run on a single instance, which raises concerns about performance and scalability. To resolve these concerns, a solutions architect must implement a solution.

Which solution will meet these requirements with the LEAST operational overhead?

  1. Use AWS Batch to run the tasks as jobs. Schedule the jobs by using Amazon EventBridge (Amazon CloudWatch Events).
  2. Convert the EC2 instance to a container. Use AWS App Runner to create the container on demand to run the tasks as jobs.
  3. Copy the tasks into AWS Lambda functions. Schedule the Lambda functions by using Amazon EventBridge (Amazon CloudWatch Events).
  4. Create an Amazon Machine Image (AMI) of the EC2 instance that runs the tasks. Create an Auto Scaling group with the AMI to run multiple copies of the instance.
A

4. Create an Amazon Machine Image (AMI) of the EC2 instance that runs the tasks. Create an Auto Scaling group with the AMI to run multiple copies of the instance.

The best solution is to create an AMI of the EC2 instance, and then use it as a template for which to launch additional instances using an Auto Scaling Group. This removes the issues of performance, scalability, and redundancy by allowing the EC2 instances to automatically scale and be launched across multiple Availability Zones.

  • While AWS Batch can run jobs across multiple instances, creating an AMI involves less operational overhead in this scenario.
  • Converting your EC2 instances to containers is not the easiest way to achieve this task.
  • The maximum execution time for a Lambda function is 15 minutes, making it unsuitable for tasks running on a one-hour schedule.

Reference:
Use Elastic Load Balancing to distribute incoming application traffic in your Auto Scaling group

12
Q

A company operates a three-tier architecture for their online order processing system. The architecture includes EC2 instances in the web tier behind an Application Load Balancer, EC2 instances in the processing tier, and Amazon DynamoDB for storage. To decouple the web and processing tiers, the company uses Amazon Simple Queue Service (Amazon SQS).

During peak demand, some customers experience delays or failures in order processing. At these times, the EC2 instances in the processing tier reach 100% CPU utilization, and the SQS queue length increases significantly. These peak periods are unpredictable.

What should the company do to improve the application’s performance?

  1. Use predictive scaling in Amazon EC2 Auto Scaling to add instances to the processing tier ahead of peak times. Use CPU utilization as the key metric to scale.
  2. Deploy Amazon ElastiCache for Redis to reduce the read and write load on DynamoDB. Use a scheduled scaling policy for the processing tier instances.
  3. Configure an Amazon EC2 Auto Scaling target tracking policy for the processing tier instances. Use the SQS ApproximateNumberOfMessages metric to dynamically scale the tier based on queue length.
  4. Implement an Amazon CloudFront distribution to cache static content in the web tier. Use HTTP request count as a scaling metric for the processing tier.
A

3. Configure an Amazon EC2 Auto Scaling target tracking policy for the processing tier instances. Use the SQS ApproximateNumberOfMessages metric to dynamically scale the tier based on queue length.

Target tracking policies allow Auto Scaling to dynamically adjust the number of processing tier instances based on real-time conditions. Using the SQS ApproximateNumberOfMessages metric ensures that the application can handle the increasing workload when the queue length grows, preventing CPU exhaustion and processing delays.

  • Predictive scaling may not work well with unpredictable traffic patterns. Additionally, CPU utilization is not the best indicator for scaling when dealing with queue-based workloads, as the queue length is a more direct reflection of demand.
  • DynamoDB is not the bottleneck in this scenario. The issue lies in the processing tier’s inability to handle the workload, not in the storage tier. Scheduled scaling also does not address the unpredictable nature of traffic peaks.
  • The problem is not related to static content delivery or HTTP request volume in the web tier. The delays occur due to processing tier limitations, so caching in the web tier will not resolve the issue.
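
The queue-length scaling intuition can be sketched as a backlog-per-instance calculation; the per-instance target is an assumed tuning value derived from how quickly one instance drains messages:

```python
import math

def desired_capacity(queue_length, target_backlog_per_instance, max_size):
    """Instances needed to keep each instance's share of the queue at the target."""
    needed = math.ceil(queue_length / target_backlog_per_instance)
    return min(max(needed, 1), max_size)

print(desired_capacity(3000, 100, 50))  # 30 instances for a 3,000-message backlog
```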

13
Q

A social media analytics company runs a data processing application on a single Amazon EC2 On-Demand Instance. The application is stateless and processes user behavior data in near real-time. Recently, the application has started showing performance degradation during peak times, including 5xx errors due to high traffic volumes. The company wants to implement a solution to make the application scale automatically to handle traffic spikes in a cost-effective way.

Which solution will meet these requirements MOST cost-effectively?

  1. Create an Amazon Machine Image (AMI) of the application. Use the AMI to deploy two EC2 On-Demand Instances. Attach an Application Load Balancer to distribute traffic between the two instances.
  2. Use AWS Lambda and Amazon SQS to redesign the application into a serverless architecture. Deploy Lambda functions to process incoming requests and store results in Amazon DynamoDB.
  3. Create an Auto Scaling group using an Amazon Machine Image (AMI) of the application. Use a launch template that configures the Auto Scaling group to scale out and in based on CPU utilization. Attach an Application Load Balancer to the Auto Scaling group to distribute traffic.
  4. Increase the size of the existing EC2 instance to a larger instance type using Amazon EC2 Auto Scaling scheduled actions to handle peak hours. Use Amazon Route 53 to distribute traffic between the upgraded instance and a secondary instance in another Region.
A

3. Create an Auto Scaling group using an Amazon Machine Image (AMI) of the application. Use a launch template that configures the Auto Scaling group to scale out and in based on CPU utilization. Attach an Application Load Balancer to the Auto Scaling group to distribute traffic.

Auto Scaling ensures the application can scale automatically to meet demand, while the Application Load Balancer distributes traffic across instances, improving fault tolerance. This setup is both cost-effective and aligned with the application’s stateless architecture.

  • Simply adding a second EC2 instance does not allow for dynamic scaling. This approach could lead to underutilization during off-peak times and increased costs.
  • Redesigning the application as a serverless architecture adds significant development effort and may not be the most cost-effective solution compared to using Auto Scaling with existing EC2-based architecture.
  • Increasing the instance size does not enable dynamic scaling. Route 53 does not natively provide traffic distribution between instances based on load metrics, and this solution introduces unnecessary complexity with no clear scalability benefits.

14
Q

A company hosts a monolithic web application on an Amazon EC2 instance. Application users have recently reported poor performance at specific times. Analysis of Amazon CloudWatch metrics shows that CPU utilization is 100% during the periods of poor performance.
The company wants to resolve this performance issue and improve application availability.

Which combination of steps will meet these requirements MOST cost-effectively?

(Select TWO.)

  1. Use AWS Compute Optimizer to obtain a recommendation for an instance type to scale vertically.
  2. Create an Amazon Machine Image (AMI) from the web server. Reference the AMI in a new launch template.
  3. Create an Auto Scaling group and an Application Load Balancer to scale vertically.
  4. Use AWS Compute Optimizer to obtain a recommendation for an instance type to scale horizontally.
  5. Create an Auto Scaling group and an Application Load Balancer to scale horizontally.
A

1. Use AWS Compute Optimizer to obtain a recommendation for an instance type to scale vertically.
5. Create an Auto Scaling group and an Application Load Balancer to scale horizontally.

AWS Compute Optimizer can suggest a more appropriate EC2 instance type with adequate resources for improved performance when scaling vertically.

Horizontal scaling improves application availability by adding multiple EC2 instances. The Application Load Balancer ensures traffic is distributed evenly across instances.

  • Creating an AMI and a launch template alone does not address scaling or performance issues without integrating them into an Auto Scaling group.
  • Vertical scaling is achieved by changing the instance type, not by using Auto Scaling groups or load balancers.
  • Horizontal scaling focuses on adding multiple instances, which does not require Compute Optimizer recommendations for instance types.

15
Q

A company operates an e-commerce application hosted on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB). Customer transactions and order information are stored in an Amazon Aurora PostgreSQL DB cluster. The company wants to implement a disaster recovery (DR) plan to prepare for Region-wide outages. The DR solution must provide a recovery time objective (RTO) of 30 minutes. The DR infrastructure does not need to be operational unless the primary Region becomes unavailable.

Which solution will meet these requirements?

  1. Deploy the DR infrastructure in a second AWS Region, including an ALB and an Auto Scaling group with desired and maximum capacities set to zero. Convert the Aurora PostgreSQL DB cluster into an Aurora global database. Use Amazon Route 53 to configure active-passive failover.
  2. Deploy an ALB and Auto Scaling group in a second AWS Region. Set the Auto Scaling group desired capacity to a minimum value. Use Amazon RDS Cross-Region Read Replicas to replicate the Aurora DB cluster. Configure Amazon Route 53 for active-active failover.
  3. Use AWS Backup to schedule regular backups of the Aurora DB cluster and EC2 instances. In the second AWS Region, create infrastructure using AWS CloudFormation templates upon failure. Configure Amazon Route 53 with a failover policy to redirect traffic.
  4. Deploy the DR infrastructure in a second AWS Region. Include an Aurora DB cluster configured with Cross-Region Replication and an ALB with the same configuration. Set up an Amazon CloudWatch alarm to increase the Auto Scaling group desired capacity upon failure.
A

1. Deploy the DR infrastructure in a second AWS Region, including an ALB and an Auto Scaling group with desired and maximum capacities set to zero. Convert the Aurora PostgreSQL DB cluster into an Aurora global database. Use Amazon Route 53 to configure active-passive failover.

This approach provides minimal operational overhead while meeting the 30-minute RTO. Aurora global database ensures replication with low latency, while Route 53 handles DNS failover.

  • This solution increases costs because active-active failover is unnecessary. Additionally, using read replicas does not provide write capabilities during a failover.
  • While this reduces costs, it increases the RTO significantly due to the need to create infrastructure and restore data after a failure.
  • This is less operationally efficient compared to the Aurora global database. Additionally, the solution introduces unnecessary complexity.
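For illustration, the active-passive failover in the chosen answer maps to two Route 53 records marked PRIMARY and SECONDARY. The sketch below only builds the change-batch payload that would be passed to boto3's `route53.change_resource_record_sets`; the record name, DNS names, zone IDs, and health check ID are placeholders:

```python
# Placeholder values throughout; substitute real hosted zone IDs, ALB DNS
# names, and a health check ID before using this with Route 53.
change_batch = {
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com.",
                "Type": "A",
                "SetIdentifier": "primary",
                "Failover": "PRIMARY",           # serves traffic while healthy
                "HealthCheckId": "<primary-health-check-id>",
                "AliasTarget": {
                    "HostedZoneId": "<primary-alb-zone-id>",
                    "DNSName": "primary-alb.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": True,
                },
            },
        },
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com.",
                "Type": "A",
                "SetIdentifier": "secondary",
                "Failover": "SECONDARY",         # takes over when PRIMARY fails
                "AliasTarget": {
                    "HostedZoneId": "<dr-alb-zone-id>",
                    "DNSName": "dr-alb.us-west-2.elb.amazonaws.com",
                    "EvaluateTargetHealth": True,
                },
            },
        },
    ]
}

roles = [c["ResourceRecordSet"]["Failover"] for c in change_batch["Changes"]]
print(roles)  # ['PRIMARY', 'SECONDARY']
```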

References:

Save time with our AWS cheat sheets.

16
Q

A company’s staff connect from home office locations to administer applications using bastion hosts in a single AWS Region. The company requires a resilient bastion host architecture with minimal ongoing operational overhead.

How can a Solutions Architect best meet these requirements?

  1. Create a Network Load Balancer backed by an Auto Scaling group with instances in multiple Availability Zones.
  2. Create a Network Load Balancer backed by Reserved Instances in a cluster placement group.
  3. Create a Network Load Balancer backed by the existing servers in different Availability Zones.
  4. Create a Network Load Balancer backed by an Auto Scaling group with instances in multiple AWS Regions.
A

1. Create a Network Load Balancer backed by an Auto Scaling group with instances in multiple Availability Zones.

Bastion hosts (aka “jump hosts”) are EC2 instances in public subnets that administrators and operations staff can connect to from the internet. From the bastion host they are then able to connect to other instances and applications within AWS by using internal routing within the VPC.

All answers use a Network Load Balancer which is acceptable for forwarding incoming connections to targets. The differences are in where the connections are forwarded to. The best option is to create an Auto Scaling group with EC2 instances in multiple Availability Zones. This creates a resilient architecture within a single AWS Region which is exactly what the question asks for.

  • You cannot have instances in an ASG across multiple Regions and you can’t have an NLB distribute connections across multiple Regions.
  • A cluster placement group packs instances close together inside an Availability Zone. This strategy enables workloads to achieve the low-latency network performance necessary for tightly coupled node-to-node communication that is typical of HPC applications.
  • An Auto Scaling group is required to maintain instances in different AZs for resilience.

Reference:
Auto Scaling benefits for application architecture

Save time with our AWS cheat sheets.

17
Q

A company runs a containerized application on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster using a microservices architecture. The company requires a solution to collect, aggregate, and summarize metrics and logs. The solution should provide a centralized dashboard for viewing information including CPU and memory utilization for EKS namespaces, services, and pods.

Which solution meets these requirements?

  1. Configure Amazon CloudWatch Container Insights in the existing EKS cluster. View the metrics and logs in the CloudWatch console.
  2. Run the Amazon CloudWatch agent in the existing EKS cluster. View the metrics and logs in the CloudWatch console.
  3. Migrate the containers to Amazon ECS and enable Amazon CloudWatch Container Insights. View the metrics and logs in the CloudWatch console.
  4. Configure AWS X-Ray to enable tracing for the EKS microservices. Query the trace data using Amazon Elasticsearch.
A

1. Configure Amazon CloudWatch Container Insights in the existing EKS cluster. View the metrics and logs in the CloudWatch console.

Use CloudWatch Container Insights to collect, aggregate, and summarize metrics and logs from your containerized applications and microservices. Container Insights is available for Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), and Kubernetes platforms on Amazon EC2.

With Container Insights for EKS you can see the top contributors by memory or CPU, or the most recently active resources. This is available when you select any of the following dashboards in the drop-down box near the top of the page:

  • ECS Services
  • ECS Tasks
  • EKS Namespaces
  • EKS Services
  • EKS Pods

  • Container Insights is the best way to view the required data.
  • There is no need to migrate containers to ECS as EKS is supported by Container Insights.
  • X-Ray will not deliver the required statistics to a centralized dashboard.

Reference:
Container Insights

Save time with our AWS cheat sheets.

18
Q

A company has deployed an application that consists of several microservices running on Amazon EC2 instances behind an Amazon API Gateway API. A Solutions Architect is concerned that the microservices are not designed to elastically scale when large increases in demand occur.

Which solution addresses this concern?

  1. Create an Amazon SQS queue to store incoming requests. Configure the microservices to retrieve the requests from the queue for processing.
  2. Use Amazon CloudWatch alarms to notify operations staff when the microservices are suffering high CPU utilization.
  3. Spread the microservices across multiple Availability Zones and configure Amazon Data Lifecycle Manager to take regular snapshots.
  4. Use an Elastic Load Balancer to distribute the traffic between the microservices. Configure Amazon CloudWatch metrics to monitor traffic to the microservices.
A

1. Create an Amazon SQS queue to store incoming requests. Configure the microservices to retrieve the requests from the queue for processing.

The individual microservices are not designed to scale. Therefore, the best way to ensure they are not overwhelmed by requests is to decouple the requests from the microservices. An Amazon SQS queue can be created, and the API Gateway can be configured to add incoming requests to the queue. The microservices can then pick up the requests from the queue when they are ready to process them.

  • This solution requires manual intervention and does not help the application to elastically scale.
  • This does not automate the elasticity of the application.
  • You cannot use an ELB to spread traffic across many different individual microservices, as the requests must be directed to specific microservices. You would need a target group per microservice, and you would need Auto Scaling to scale the microservices.
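The decoupling described above can be sketched with an in-memory queue standing in for Amazon SQS (with boto3, the equivalent operations would be `send_message` and `receive_message`):

```python
from queue import Queue

request_queue = Queue()

# API Gateway side: a burst of requests is buffered in the queue instead of
# being pushed straight at the microservice.
for i in range(100):
    request_queue.put({"request_id": i})

def poll_and_process(q: Queue, batch_size: int) -> list:
    """Microservice side: pull only as many requests as it can handle."""
    handled = []
    for _ in range(min(batch_size, q.qsize())):
        handled.append(q.get())
    return handled

batch = poll_and_process(request_queue, batch_size=10)
print(len(batch), request_queue.qsize())  # 10 90
```

The microservice drains the backlog at its own pace across polling cycles, so spikes in demand never overwhelm it.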

Reference:
Understanding asynchronous messaging for microservices

Save time with our AWS cheat sheets.

19
Q

A Solutions Architect working for a large financial institution is building an application to manage customers’ financial information and sensitive personal data. The storage layer must store immutable data out of the box, support encryption of data at rest, and provide ACID properties. The Solutions Architect also wants to use a containerized solution to manage the compute layer.

Which solution will meet these requirements with the LEAST amount of operational overhead?

  1. Create an Auto Scaling Group with EC2 instances behind an Application Load Balancer. To manage the storage layer, use Amazon S3.
  2. Configure an ECS cluster on EC2 behind an Application Load Balancer within an Auto Scaling Group. Store data using Amazon DynamoDB.
  3. Create a cluster of ECS instances on AWS Fargate within an Auto Scaling Group behind an Application Load Balancer. To manage the storage layer, use Amazon S3.
  4. Set up an ECS cluster behind an Application Load Balancer on AWS Fargate. Use Amazon Quantum Ledger Database (QLDB) to manage the storage layer.
A

4. Set up an ECS cluster behind an Application Load Balancer on AWS Fargate. Use Amazon Quantum Ledger Database (QLDB) to manage the storage layer.

The solution requires that the storage layer be immutable. This immutability can only be delivered by Amazon Quantum Ledger Database (QLDB), as Amazon QLDB has a built-in immutable journal that stores an accurate and sequenced entry of every data change. The journal is append-only, meaning that data can only be added to a journal, and it cannot be overwritten or deleted.

Secondly, the compute layer must be containerized and implemented with the least possible operational overhead. The option that best fits these requirements is Amazon ECS on AWS Fargate, as Fargate is a serverless container deployment option.

  • EC2 instances are virtual machines, not a container service, and Amazon S3 is an object storage service which does not act as an immutable storage layer.
  • ECS on EC2 involves more operational overhead than AWS Fargate, as Fargate is a serverless service.
  • Although Fargate would be a suitable deployment option, Amazon S3 is not suitable for the storage layer as it is not immutable by default.

Reference:
Amazon Quantum Ledger Database (QLDB) features

Save time with our AWS cheat sheets.

20
Q

A company operates a production environment on Amazon EC2 instances. The instances are required to run continuously from Tuesday to Sunday without interruptions. On Mondays, the instances are needed for only 8 hours, and they also cannot tolerate interruptions. The company wants to implement a cost-effective solution to optimize EC2 usage while meeting these requirements.

Which solution will provide the MOST cost-effective results?

  1. Purchase Standard Reserved Instances for the EC2 instances that operate continuously from Tuesday to Sunday. Use Scheduled Reserved Instances for the EC2 instances that run for 8 hours on Mondays.
  2. Use Spot Instances for the EC2 instances that run for 8 hours on Mondays. Purchase Standard Reserved Instances for the EC2 instances that operate continuously from Tuesday to Sunday.
  3. Purchase Convertible Reserved Instances for the EC2 instances that operate continuously from Tuesday to Sunday. Use Spot Instances for the EC2 instances that run for 8 hours on Mondays.
  4. Purchase Standard Reserved Instances for the EC2 instances that operate continuously from Tuesday to Sunday. Use Convertible Reserved Instances for the EC2 instances that run for 8 hours on Mondays.
A

1. Purchase Standard Reserved Instances for the EC2 instances that operate continuously from Tuesday to Sunday. Use Scheduled Reserved Instances for the EC2 instances that run for 8 hours on Mondays.

Standard Reserved Instances provide cost savings for long-term, predictable workloads, like the continuous operation from Tuesday to Sunday. Scheduled Reserved Instances are ideal for predictable workloads with fixed schedules, like the 8-hour workload on Mondays, offering savings while ensuring uninterrupted operation.

  • Spot Instances, while cost-effective, are not suitable for workloads that cannot tolerate interruptions. Spot Instances may be terminated unexpectedly, which does not meet the requirement for uninterrupted operation.
  • Convertible Reserved Instances, while flexible, are not the most cost-effective option for predictable and consistent workloads like continuous operation from Tuesday to Sunday. Additionally, Spot Instances are unsuitable for workloads requiring uninterrupted operation.
  • Convertible Reserved Instances are not as cost-effective as Scheduled Reserved Instances for fixed and predictable schedules, such as the 8-hour Monday workload.
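A back-of-the-envelope comparison shows why the Scheduled Reserved Instance fits the Monday workload. The hour counts come from the scenario; the hourly rates are purely illustrative assumptions, not real AWS prices:

```python
# Weekly, per-instance cost of covering the 8-hour Monday window.
MONDAY_HOURS = 8
WEEK_HOURS = 7 * 24                      # 168

on_demand_rate = 0.10                    # assumed $/hour
standard_ri_rate = 0.06                  # assumed effective $/hour, billed 24/7
scheduled_ri_rate = 0.075                # assumed effective $/hour, billed per window

monday_on_demand = MONDAY_HOURS * on_demand_rate      # pay only for 8 hours
monday_scheduled = MONDAY_HOURS * scheduled_ri_rate   # discounted 8 hours
monday_standard_ri = WEEK_HOURS * standard_ri_rate    # a Standard RI bills all 168 hours

print(round(monday_scheduled, 2), round(monday_on_demand, 2), round(monday_standard_ri, 2))
```

Under these assumptions the Scheduled RI (about $0.60) undercuts both On-Demand (about $0.80) and a Standard RI (about $10.08, since it bills for the whole week) for the Monday-only instances.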

Save time with our AWS cheat sheets.

21
Q

A video editing company processes high-resolution footage for its clients. Each video file is several terabytes in size and needs to undergo intensive editing, such as applying filters and color grading, before delivery. Processing each video takes up to 25 minutes.
The company needs a solution that can scale to handle increased demand during peak periods while remaining cost-effective. The processed videos must be accessible for a minimum of 90 days.

Which solution will meet these requirements?

  1. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate to run containerized video editing tasks. Store metadata in Amazon DynamoDB and processed video files in Amazon S3 Standard-IA for reduced costs.
  2. Deploy Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB). Use Amazon Simple Queue Service (Amazon SQS) to queue incoming video processing jobs. Store metadata in Amazon RDS, and store processed videos in Amazon S3 Glacier Flexible Retrieval for long-term storage.
  3. Use AWS Batch to orchestrate video editing jobs on Spot Instances. Store metadata in Amazon ElastiCache for Redis and processed video files in Amazon S3 Intelligent-Tiering.
  4. Use an on-premises video processing server connected to AWS Storage Gateway to store and retrieve video files from Amazon S3. Use Amazon RDS for metadata and configure Storage Gateway for caching frequently accessed data.
A

3. Use AWS Batch to orchestrate video editing jobs on Spot Instances. Store metadata in Amazon ElastiCache for Redis and processed video files in Amazon S3 Intelligent-Tiering.

AWS Batch handles batch processing workloads efficiently, and Spot Instances reduce compute costs. ElastiCache ensures low-latency metadata access during processing, and S3 Intelligent-Tiering reduces storage costs while keeping processed videos accessible.

  • While ECS with Fargate is scalable and serverless, AWS Batch is better suited for batch processing workloads like video editing.
  • S3 Glacier Flexible Retrieval is designed for archival data and is not suitable for videos that need to be accessed within 90 days. Additionally, managing EC2 instances increases operational overhead compared to serverless options.
  • Managing on-premises infrastructure introduces significant complexity and overhead. AWS-native services like Batch provide a more scalable and cost-effective solution for this workload.

22
Q

A gaming company recently launched a multiplayer gaming platform for its users. The platform runs on multiple Amazon EC2 instances across two Availability Zones. Players use TCP to communicate with the platform in real time. The platform must be highly available and automatically scale as the number of players increases, while remaining cost-effective.

Which combination of steps will meet these requirements MOST cost-effectively?

(Select TWO.)

  1. Use an Application Load Balancer to distribute TCP traffic to the EC2 instances.
  2. Configure an Auto Scaling group to add or remove EC2 instances based on player traffic.
  3. Deploy an Amazon ECS cluster to replace the EC2 instances and handle player traffic.
  4. Add a Network Load Balancer in front of the EC2 instances to manage TCP traffic.
  5. Configure Amazon Route 53 to implement latency-based routing across multiple EC2 instances.
A

2. Configure an Auto Scaling group to add or remove EC2 instances based on player traffic.
4. Add a Network Load Balancer in front of the EC2 instances to manage TCP traffic.

An Auto Scaling group automatically adjusts the number of EC2 instances in response to traffic changes. This ensures cost efficiency by scaling out during high demand and scaling in during low demand.

  • A Network Load Balancer is optimized for handling TCP traffic with low latency. It provides the required scalability and high availability across multiple Availability Zones.
  • Application Load Balancers are optimized for HTTP and HTTPS traffic, not TCP. A Network Load Balancer is more cost-effective and appropriate for this use case.
  • Switching to Amazon ECS introduces operational complexity and higher costs. The question specifies using EC2 instances, so Auto Scaling and a Network Load Balancer are more cost-effective and aligned with the scenario.
  • Route 53 latency-based routing is not designed to provide automatic scaling or load balancing. Instead, it is used for directing traffic across endpoints in different Regions or locations.

Save time with our AWS cheat sheets.

23
Q

A retail company runs an on-premises application that uses Java Spring Boot on Windows servers. The application is resource-intensive and handles customer-facing operations. The company wants to modernize the application by migrating it to a containerized environment running on AWS. The new solution must automatically scale based on Amazon CloudWatch metrics and minimize operational overhead for managing infrastructure.

Which solution will meet these requirements with the LEAST operational overhead?

  1. Use AWS App2Container to containerize the application. Deploy the containerized application to Amazon Elastic Container Service (Amazon ECS) on AWS Fargate by using an AWS CloudFormation template.
  2. Use AWS App2Container to containerize the application. Deploy the containerized application to Amazon Elastic Container Service (Amazon ECS) on Amazon EC2 instances by using an AWS CloudFormation template.
  3. Use AWS App Runner to containerize the application. Use App Runner to automatically deploy and manage the application without using ECS or EC2.
  4. Use AWS App Runner to containerize the application. Deploy the containerized application to Amazon Elastic Kubernetes Service (Amazon EKS) on Amazon EC2 instances.
A

3. Use AWS App Runner to containerize the application. Use App Runner to automatically deploy and manage the application without using ECS or EC2.

App Runner simplifies deployment and management of containerized applications. It automatically scales based on demand and integrates with CloudWatch, minimizing operational overhead.

  • While Fargate reduces operational overhead by eliminating the need to manage EC2 instances, using ECS still requires more configuration and maintenance compared to App Runner.
  • Running ECS on EC2 requires managing the underlying EC2 instances, which increases operational overhead.
  • EKS involves additional complexity in managing Kubernetes clusters and EC2 instances, making it less suitable for reducing operational overhead.

24
Q

A company is migrating its legacy customer support applications from an on-premises data center to AWS. Each application runs on a dedicated virtual machine and relies on proprietary software that cannot be modified. The applications must remain highly available and continue to operate in the event of a single Availability Zone failure. The company wants to minimize changes to its architecture and operational overhead.

Which solution will meet these requirements?

  1. Create an Amazon Machine Image (AMI) for each application. Launch two EC2 instances for each application in different Availability Zones. Use an Application Load Balancer to distribute traffic evenly between the instances.
  2. Use AWS Backup to configure hourly backups of each EC2 instance. Store backups in Amazon S3 Glacier. In case of failure, restore the latest backup to a new EC2 instance in another Availability Zone.
  3. Use AWS Elastic Disaster Recovery (AWS DRS) to replicate the on-premises virtual machines to AWS. Launch the virtual machines in an Auto Scaling group configured to span multiple Availability Zones.
  4. Refactor the applications into microservices and deploy them on Amazon ECS with Fargate. Use a Network Load Balancer to route traffic to the Fargate tasks.
A

1. Create an Amazon Machine Image (AMI) for each application. Launch two EC2 instances for each application in different Availability Zones. Use an Application Load Balancer to distribute traffic evenly between the instances.

This solution provides high availability by deploying instances across multiple Availability Zones and distributing traffic with a load balancer, all without modifying the applications.

  • While backups help in disaster recovery, this solution does not provide high availability or fault tolerance in real-time.
  • While AWS DRS is useful for disaster recovery, this approach introduces unnecessary complexity for an already migrated application.
  • The requirement explicitly states that the proprietary applications cannot be modified.

Reference:
What is an Application Load Balancer?

Save time with our AWS cheat sheets.

25
Q

A company wants to use Amazon Elastic Container Service (Amazon ECS) to run its containerized application in a hybrid environment. The company needs to ensure that the application can scale across both on-premises and AWS environments. It also requires a load balancer to handle HTTP traffic for the new containers that will run in the AWS Cloud.

Which combination of actions will meet these requirements?

(Select TWO.)

  1. Set up an ECS cluster that uses the AWS Fargate launch type for the cloud application containers. Use an Amazon ECS Anywhere external launch type for the on-premises application containers.
  2. Set up an Application Load Balancer for cloud ECS services.
  3. Set up a Network Load Balancer for cloud ECS services.
  4. Set up an ECS cluster that uses the AWS Fargate launch type. Use Fargate for the cloud application containers and the on-premises application containers.
  5. Set up an ECS cluster that uses the Amazon EC2 launch type for the cloud application containers. Use Amazon ECS Anywhere with an AWS Fargate launch type for the on-premises application containers.
A

1. Set up an ECS cluster that uses the AWS Fargate launch type for the cloud application containers. Use an Amazon ECS Anywhere external launch type for the on-premises application containers.
2. Set up an Application Load Balancer for cloud ECS services.

This meets the requirement of a hybrid environment by running containers on AWS Fargate for cloud workloads and Amazon ECS Anywhere for on-premises workloads. ECS Anywhere enables on-premises servers to join ECS clusters. An Application Load Balancer is the appropriate load balancer for handling HTTP traffic, ensuring proper routing and scalability for the cloud-based application containers.

  • A Network Load Balancer is primarily used for TCP or UDP traffic, not HTTP traffic.
  • Fargate cannot be used to run containers in an on-premises environment.
  • ECS Anywhere uses external instances, not AWS Fargate, for on-premises environments.

Reference:
Amazon ECS clusters for the external launch type

Save time with our AWS cheat sheets.
26
Q

An application that runs a computational fluid dynamics workload uses a tightly-coupled HPC architecture that uses the MPI protocol and runs across many nodes. A service-managed deployment is required to minimize operational overhead.

Which deployment option is MOST suitable for provisioning and managing the resources required for this use case?

  1. Use Amazon EC2 Auto Scaling to deploy instances in multiple subnets
  2. Use AWS CloudFormation to deploy a Cluster Placement Group on EC2
  3. Use AWS Batch to deploy a multi-node parallel job
  4. Use AWS Elastic Beanstalk to provision and manage the EC2 instances
A

3. Use AWS Batch to deploy a multi-node parallel job

AWS Batch multi-node parallel jobs enable you to run single jobs that span multiple Amazon EC2 instances. With multi-node parallel jobs, you can run large-scale, tightly coupled, high performance computing applications and distributed GPU model training without the need to launch, configure, and manage Amazon EC2 resources directly.

An AWS Batch multi-node parallel job is compatible with any framework that supports IP-based internode communication, such as Apache MXNet, TensorFlow, Caffe2, or Message Passing Interface (MPI). This is the most efficient approach to deploy the resources required and supports the application requirements most effectively.

  • Amazon EC2 Auto Scaling is not the best solution for a tightly-coupled HPC workload with specific requirements such as MPI support.
  • AWS CloudFormation would deploy a cluster placement group but not manage it. AWS Batch is a better fit for large-scale workloads such as this.
  • You can certainly provision and manage EC2 instances with Elastic Beanstalk, but this scenario involves a specific workload that requires MPI support and managing an HPC deployment across a large number of nodes. AWS Batch is more suitable.

References:
High Performance Computing Lens
Multi-node parallel jobs
27
Q

A Solutions Architect has been tasked with building an application which stores images to be used for a website. The website will be accessed by thousands of customers. The images within the application need to be able to be transformed and processed as they are being retrieved. The Solutions Architect would prefer to use managed services to achieve this, and the solution should be highly available and scalable, and be able to serve users from around the world with low latency.

Which scenario represents the easiest solution for this task?

  1. Store the images in a DynamoDB table, with DynamoDB Global Tables enabled. Provision a Lambda function to process the data on demand as it leaves the table.
  2. Store the images in Amazon S3, behind a CloudFront distribution. Use S3 Event Notifications to connect to a Lambda function to process and transform the images when a GET request is initiated on an object.
  3. Store the images in Amazon S3, behind a CloudFront distribution. Use S3 Object Lambda to transform and process the images whenever a GET request is initiated on an object.
  4. Store the images in a DynamoDB table, with DynamoDB Accelerator enabled. Use Amazon EventBridge to pass the data into an event bus as it is retrieved from DynamoDB and use AWS Lambda to process the data.
A

3. Store the images in Amazon S3, behind a CloudFront distribution. Use S3 Object Lambda to transform and process the images whenever a GET request is initiated on an object.

With S3 Object Lambda you can add your own code to S3 GET requests to modify and process data as it is returned to an application. You can use custom code to modify the data returned by standard S3 GET requests to filter rows, dynamically resize images, redact confidential data, and much more. Powered by AWS Lambda functions, your code runs on infrastructure that is fully managed by AWS, eliminating the need to create and store derivative copies of your data or to run expensive proxies, all with no changes required to your applications.

  • DynamoDB is not well suited to Write Once Read Many workloads, and adding a Lambda function to process data leaving the table takes more manual provisioning of resources than using S3 Object Lambda.
  • Using S3 Event Notifications with a separate Lambda function would work; however, it is easier to use S3 Object Lambda as this manages the Lambda function for you.
  • DynamoDB is similarly unsuitable here, and routing retrieved data through EventBridge and Lambda adds further provisioning and complexity compared with S3 Object Lambda.

Reference:
Amazon S3 Object Lambda

Save time with our AWS cheat sheets.
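The Object Lambda flow can be sketched as a pure transform. In a real function the handler would fetch the original object from the presigned URL in the event and return the result via boto3's `write_get_object_response`; here the image processing is faked as a labeled prefix so the logic stays runnable:

```python
def transform_image(original: bytes, target_width: int) -> bytes:
    """Placeholder for real image processing (e.g. a Pillow resize or
    watermarking step). The prefix stands in for the transformed output."""
    return f"resized-to-{target_width}px:".encode() + original

def handler(original_object: bytes, query_params: dict) -> bytes:
    """Mimics the Object Lambda shape: read caller parameters, transform the
    object on the fly, and return the modified bytes to the GET request."""
    width = int(query_params.get("width", 1024))
    return transform_image(original_object, width)

out = handler(b"<raw image bytes>", {"width": "256"})
print(out.split(b":")[0])  # b'resized-to-256px'
```

Because the transform runs per GET request, no derivative copies of the images need to be stored.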
28
Q

A retail organization sends coupons out twice a week and this results in a predictable surge in sales traffic. The application runs on Amazon EC2 instances behind an Elastic Load Balancer. The organization is looking for ways to lower costs while ensuring they meet the demands of their customers.

How can they achieve this goal?

  1. Use capacity reservations with savings plans
  2. Use a mixture of spot instances and on demand instances
  3. Increase the instance size of the existing EC2 instances
  4. Purchase Amazon EC2 dedicated hosts
A

1. Use capacity reservations with savings plans

On-Demand Capacity Reservations enable you to reserve compute capacity for your Amazon EC2 instances in a specific Availability Zone for any duration. By creating Capacity Reservations, you ensure that you always have access to EC2 capacity when you need it, for as long as you need it. When used in combination with Savings Plans, you also gain the advantage of cost reduction.

  • You can mix Spot and On-Demand Instances in an Auto Scaling group. However, there is a risk the Spot price may not be favorable, and this is a regular, predictable increase in traffic.
  • Increasing the instance size would add more cost all the time rather than catering for the temporary increases in traffic.
  • Dedicated Hosts are not a way to save cost as they are much more expensive than shared tenancy hosts.

Reference:
Differences between Capacity Reservations, Reserved Instances, and Savings Plans

Save time with our AWS cheat sheets.
29
Q

A company plans to make an Amazon EC2 Linux instance unavailable outside of business hours to save costs. The instance is backed by an Amazon EBS volume. There is a requirement that the contents of the instance’s memory must be preserved when it is made unavailable.

How can a solutions architect meet these requirements?

  1. Stop the instance outside business hours. Start the instance again when required.
  2. Hibernate the instance outside business hours. Start the instance again when required.
  3. Use Auto Scaling to scale down the instance outside of business hours. Scale up the instance when required.
  4. Terminate the instance outside business hours. Recover the instance again when required.
A

2. Hibernate the instance outside business hours. Start the instance again when required.

When you hibernate an instance, Amazon EC2 signals the operating system to perform hibernation (suspend-to-disk). Hibernation saves the contents of the instance memory (RAM) to your Amazon Elastic Block Store (Amazon EBS) root volume. Amazon EC2 persists the instance's EBS root volume and any attached EBS data volumes. When you start your instance:

  • The EBS root volume is restored to its previous state
  • The RAM contents are reloaded
  • The processes that were previously running on the instance are resumed
  • Previously attached data volumes are reattached and the instance retains its instance ID

  • When an instance is stopped the operating system is shut down and the contents of memory are lost.
  • Auto Scaling does not scale instances up and down; it scales in by terminating instances and out by launching instances. When scaling out, new instances are launched and no state is available from terminated instances.
  • You cannot recover terminated instances; you can only recover instances that have become impaired in some circumstances.

Reference:
Hibernate your Amazon EC2 instance

Save time with our AWS cheat sheets.
30
Amazon EC2 instances in a development environment run between 9am and 5pm Monday-Friday. Production instances run 24/7. **Which pricing models should be used to optimize cost and ensure capacity is available?** (Select TWO.) 1. Use Spot instances for the development environment 2. Use Reserved instances for the development environment 3. On-demand capacity reservations for the development environment 4. Use Reserved instances for the production environment 5. Use On-Demand instances for the production environment
**3.** On-demand capacity reservations for the development environment **4.** Use Reserved instances for the production environment ## Footnote Capacity reservations have no commitment and can be created and canceled as needed. This is ideal for the development environment as it will ensure the capacity is available. There is no price advantage, but none of the other options provide a price advantage whilst also ensuring capacity is available. Reserved instances are a good choice for workloads that run continuously. This is a good option for the production environment. * Spot Instances are a cost-effective choice if you can be flexible about when your applications run and if your applications can be interrupted. Spot instances are not suitable for the development environment as important work may be interrupted. * Reserved instances require a long-term commitment, which is not ideal for a development environment. * There is no long-term commitment required when you purchase On-Demand Instances. However, you do not get any discount and therefore this is the most expensive option. **References:** * [Amazon EC2 billing and purchasing options](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/instance-purchasing-options.html) * [Reserve compute capacity with EC2 On-Demand Capacity Reservations](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-capacity-reservations.html) Save time with our [AWS cheat sheets](https://digitalcloud.training/amazon-ec2/).
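A Capacity Reservation for the development environment can be sketched with boto3-style parameters. The instance type, AZ, and count below are placeholders, not values from the question.

```python
# Sketch (assumed values): an On-Demand Capacity Reservation that can be
# created before 9am and canceled after 5pm, with no long-term commitment.
reservation_params = {
    "InstanceType": "m5.large",        # placeholder type
    "InstancePlatform": "Linux/UNIX",
    "AvailabilityZone": "us-east-1a",  # placeholder AZ
    "InstanceCount": 2,                # placeholder count
    "EndDateType": "unlimited",        # remains active until canceled manually
}
# ec2 = boto3.client("ec2")
# ec2.create_capacity_reservation(**reservation_params)
```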
31
A company runs a large batch processing job at the end of every quarter. The processing job runs for 5 days and uses 15 Amazon EC2 instances. The processing must run uninterrupted for 5 hours per day. The company is investigating ways to reduce the cost of the batch processing job. **Which pricing model should the company choose?** 1. Reserved Instances 2. Spot Instances 3. On-Demand Instances 4. Dedicated Instances
**3.** On-Demand Instances ## Footnote Each EC2 instance runs for 5 hours a day for 5 days per quarter, or 20 days per year. This time duration is insufficient to warrant Reserved Instances, as these require a commitment of a minimum of 1 year and the discounts would not outweigh the costs of having the reservations unused for a large percentage of time. In this case, there are no options presented that can reduce the cost and therefore On-Demand Instances should be used. * Reserved instances are good for continuously running workloads that run for a period of 1 or 3 years. * Spot instances may be interrupted, and this is not acceptable. Note that Spot Block is deprecated and unavailable to new customers. * Dedicated Instances do not provide any cost advantages and will be more expensive. **Reference:** [Amazon EC2 Pricing](https://aws.amazon.com/ec2/pricing/) Save time with our [AWS cheat sheets](https://digitalcloud.training/amazon-ec2/).
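The reasoning can be made concrete with back-of-the-envelope arithmetic. The hourly rates below are made-up placeholders, not real AWS prices; the point is that a reservation bills for every hour of the year while On-Demand bills only for the ~100 hours actually used.

```python
# Illustrative cost comparison (hourly rates are placeholders).
# The job runs 15 instances x 5 hours/day x 5 days/quarter x 4 quarters.
hours_per_year = 5 * 5 * 4          # 100 hours per instance per year
on_demand_rate = 0.10               # $/hour, placeholder
ri_effective_rate = 0.06            # $/hour, placeholder ~40% discount

on_demand_cost = 15 * hours_per_year * on_demand_rate  # pay only while running
ri_cost = 15 * 8760 * ri_effective_rate                # pay for all 8,760 hours/year
# Even at a steep discount, the reservation costs far more because it sits
# unused ~99% of the time.
```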
32
A company hosts a multiplayer game on AWS. The application uses Amazon EC2 instances in a single Availability Zone and users connect over Layer 4. A Solutions Architect has been tasked with making the architecture highly available and also more cost-effective. **How can the solutions architect best meet these requirements?** (Select TWO.) 1. Configure an Auto Scaling group to add or remove instances in the Availability Zone automatically 2. Increase the number of instances and use smaller EC2 instance types 3. Configure a Network Load Balancer in front of the EC2 instances 4. Configure an Application Load Balancer in front of the EC2 instances 5. Configure an Auto Scaling group to add or remove instances in multiple Availability Zones automatically
**3.** Configure a Network Load Balancer in front of the EC2 instances **5.** Configure an Auto Scaling group to add or remove instances in multiple Availability Zones automatically ## Footnote The solutions architect must enable high availability for the architecture and ensure it is cost-effective. To enable high availability an Amazon EC2 Auto Scaling group should be created to add and remove instances across multiple Availability Zones. In order to distribute the traffic to the instances the architecture should use a Network Load Balancer which operates at Layer 4. This architecture will also be cost-effective as the Auto Scaling group will ensure the right number of instances are running based on demand. * This is not the most cost-effective option. Auto Scaling should be used to maintain the right number of active instances. * This is not highly available as it’s a single AZ. * An ALB operates at Layer 7 rather than Layer 4. **Reference:** [Auto Scaling Load Balancer](https://docs.aws.amazon.com/autoscaling/ec2/userguide/autoscaling-load-balancer.html) Save time with our AWS cheat sheets: * [Amazon EC2](https://digitalcloud.training/amazon-ec2/) * [AWS Elastic Load Balancing (AWS ELB)](https://digitalcloud.training/aws-elastic-load-balancing-aws-elb/)
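The NLB half of the answer can be sketched with boto3-style parameters. The name and subnet IDs are placeholders; setting `Type` to `network` is what makes this a Layer 4 load balancer, and listing subnets in different AZs is what spreads the traffic.

```python
# Sketch (assumed names/IDs): a Network Load Balancer spanning two AZs.
nlb_params = {
    "Name": "game-nlb",                              # placeholder name
    "Type": "network",                               # NLB = Layer 4 (TCP/UDP)
    "Scheme": "internet-facing",
    "Subnets": ["subnet-aaa111", "subnet-bbb222"],   # one subnet per AZ (placeholders)
}
# elbv2 = boto3.client("elbv2")
# elbv2.create_load_balancer(**nlb_params)
```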
33
A company's application is running on Amazon EC2 instances in a single Region. In the event of a disaster, a solutions architect needs to ensure that the resources can also be deployed to a second Region. **Which combination of actions should the solutions architect take to accomplish this?** (Select TWO.) 1. Detach a volume on an EC2 instance and copy it to an Amazon S3 bucket in the second Region 2. Launch a new EC2 instance from an Amazon Machine Image (AMI) in the second Region 3. Launch a new EC2 instance in the second Region and copy a volume from Amazon S3 to the new instance 4. Copy an Amazon Machine Image (AMI) of an EC2 instance and specify the second Region for the destination 5. Copy an Amazon Elastic Block Store (Amazon EBS) volume from Amazon S3 and launch an EC2 instance in the second Region using that EBS volume
**2.** Launch a new EC2 instance from an Amazon Machine Image (AMI) in the second Region **4.** Copy an Amazon Machine Image (AMI) of an EC2 instance and specify the second Region for the destination ## Footnote You can copy an Amazon Machine Image (AMI) within or across AWS Regions using the AWS Management Console, the AWS Command Line Interface or SDKs, or the Amazon EC2 API, all of which support the CopyImage action. Using the copied AMI, the solutions architect can then launch an instance in the second Region. Note: the AMIs are stored on Amazon S3, however you cannot view them in the S3 management console or work with them programmatically using the S3 API. * You cannot copy EBS volumes directly from EBS to Amazon S3. * You cannot create an EBS volume directly from Amazon S3, so neither of the options that restore a volume from S3 will work. **Reference:** [Copy an Amazon EC2 AMI](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html) Save time with our [AWS cheat sheets](https://digitalcloud.training/amazon-ebs/).
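A minimal sketch of the CopyImage step, with placeholder IDs and Regions. One detail worth noting: the `copy_image` call is made against the destination Region's endpoint, with the source Region named in the parameters.

```python
# Sketch (assumed IDs/Regions): copying an AMI to a second Region for DR.
copy_params = {
    "Name": "dr-copy-of-web-server",            # placeholder name
    "SourceImageId": "ami-0123456789abcdef0",   # placeholder AMI in the source Region
    "SourceRegion": "us-east-1",                # placeholder source Region
}
# ec2_dr = boto3.client("ec2", region_name="us-west-2")  # destination Region client
# response = ec2_dr.copy_image(**copy_params)
# The returned ImageId can then be passed to run_instances in us-west-2.
```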
34
A solutions architect is designing the infrastructure to run an application on Amazon EC2 instances. The application requires high availability and must dynamically scale based on demand to be cost efficient. **What should the solutions architect do to meet these requirements?** 1. Configure an Application Load Balancer in front of an Auto Scaling group to deploy instances to multiple Regions 2. Configure an Amazon CloudFront distribution in front of an Auto Scaling group to deploy instances to multiple Regions 3. Configure an Application Load Balancer in front of an Auto Scaling group to deploy instances to multiple Availability Zones 4. Configure an Amazon API Gateway API in front of an Auto Scaling group to deploy instances to multiple Availability Zones
**3.** Configure an Application Load Balancer in front of an Auto Scaling group to deploy instances to multiple Availability Zones ## Footnote The Amazon EC2-based application must be highly available and elastically scalable. Auto Scaling can provide the elasticity by dynamically launching and terminating instances based on demand. This can take place across availability zones for high availability. Incoming connections can be distributed to the instances by using an Application Load Balancer (ALB). * API gateway is not used for load balancing connections to Amazon EC2 instances. * You cannot launch instances in multiple Regions from a single Auto Scaling group. **References:** * [What is Amazon EC2 Auto Scaling?](https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html) * [Elastic Load Balancing](https://aws.amazon.com/elasticloadbalancing/) Save time with our AWS cheat sheets: * [Amazon EC2 Auto Scaling](https://digitalcloud.training/amazon-ec2-auto-scaling/) * [AWS Elastic Load Balancing (AWS ELB)](https://digitalcloud.training/aws-elastic-load-balancing-aws-elb/)
35
A web application runs in public and private subnets. The application architecture consists of a web tier and database tier running on Amazon EC2 instances. Both tiers run in a single Availability Zone (AZ). **Which combination of steps should a solutions architect take to provide high availability for this architecture?** (Select TWO.) 1. Create new public and private subnets in the same AZ for high availability 2. Create an Amazon EC2 Auto Scaling group and Application Load Balancer (ALB) spanning multiple AZs 3. Add the existing web application instances to an Auto Scaling group behind an Application Load Balancer (ALB) 4. Create new public and private subnets in a new AZ. Create a database using Amazon EC2 in one AZ 5. Create new public and private subnets in the same VPC, each in a new AZ. Migrate the database to an Amazon RDS multi-AZ deployment
**2.** Create an Amazon EC2 Auto Scaling group and Application Load Balancer (ALB) spanning multiple AZs **5.** Create new public and private subnets in the same VPC, each in a new AZ. Migrate the database to an Amazon RDS multi-AZ deployment ## Footnote To add high availability to this architecture both the web tier and database tier require changes. For the web tier an Auto Scaling group across multiple AZs with an ALB will ensure there are always instances running and traffic is being distributed to them. The database tier should be migrated from the EC2 instances to Amazon RDS to take advantage of a managed database with Multi-AZ functionality. This will ensure that if there is an issue preventing access to the primary database a secondary database can take over. * This would not add high availability as the new subnets remain in the same AZ. * The existing servers are in a single subnet. For HA we need instances in subnets across multiple AZs. * We also need HA for the database layer. **References:** * [Use Elastic Load Balancing to distribute incoming application traffic in your Auto Scaling group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/autoscaling-load-balancer.html) * [Amazon RDS Multi-AZ](https://aws.amazon.com/rds/features/multi-az/) Save time with our AWS cheat sheets: * [AWS Elastic Load Balancing (AWS ELB)](https://digitalcloud.training/aws-elastic-load-balancing-aws-elb/) * [Amazon EC2 Auto Scaling](https://digitalcloud.training/amazon-ec2-auto-scaling/) * [Amazon RDS](https://digitalcloud.training/amazon-rds/)
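The database half of the answer comes down to one flag. A sketch with boto3-style parameters (identifier, engine, class, and credentials are all placeholders): setting `MultiAZ` provisions a synchronous standby in a second AZ with automatic failover.

```python
# Sketch (assumed values): an RDS Multi-AZ deployment for the database tier.
db_params = {
    "DBInstanceIdentifier": "webapp-db",   # placeholder identifier
    "Engine": "mysql",                     # placeholder engine
    "DBInstanceClass": "db.m5.large",      # placeholder class
    "AllocatedStorage": 100,
    "MultiAZ": True,                       # synchronous standby in a second AZ
    "MasterUsername": "admin",             # placeholder credentials
    "MasterUserPassword": "REPLACE_ME",    # placeholder only -- never hardcode
}
# rds = boto3.client("rds")
# rds.create_db_instance(**db_params)
```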
36
A legacy tightly-coupled High Performance Computing (HPC) application will be migrated to AWS. **Which network adapter type should be used?** 1. Elastic Network Interface (ENI) 2. Elastic Network Adapter (ENA) 3. Elastic Fabric Adapter (EFA) 4. Elastic IP Address
**3.** Elastic Fabric Adapter (EFA) ## Footnote An Elastic Fabric Adapter is an AWS Elastic Network Adapter (ENA) with added capabilities. The EFA lets you apply the scale, flexibility, and elasticity of the AWS Cloud to tightly-coupled HPC apps. It is ideal for tightly-coupled apps as it supports the Message Passing Interface (MPI). * The ENI is a basic type of adapter and is not the best choice for this use case. * The ENA, which provides Enhanced Networking, does provide high bandwidth and low inter-instance latency but it does not support the features for a tightly-coupled app that the EFA does. * An Elastic IP address is just a static public IP address, it is not a type of network adapter. **Reference:** [Elastic Fabric Adapter (EFA) for Tightly-Coupled HPC Workloads](https://aws.amazon.com/blogs/aws/now-available-elastic-fabric-adapter-efa-for-tightly-coupled-hpc-workloads/) Save time with our [AWS cheat sheets](https://digitalcloud.training/amazon-ec2/).
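Attaching an EFA is done at launch by setting the network interface's `InterfaceType`. A sketch with boto3-style parameters (AMI, subnet, and security group IDs are placeholders; the instance type must be one that supports EFA):

```python
# Sketch (assumed IDs): launching an HPC node with an Elastic Fabric Adapter.
run_params = {
    "ImageId": "ami-0123456789abcdef0",      # placeholder AMI
    "InstanceType": "c5n.18xlarge",          # an EFA-capable instance type
    "MinCount": 1,
    "MaxCount": 1,
    "NetworkInterfaces": [{
        "DeviceIndex": 0,
        "SubnetId": "subnet-aaa111",         # placeholder subnet
        "InterfaceType": "efa",              # OS-bypass interface for MPI traffic
        "Groups": ["sg-0123456789abcdef0"],  # placeholder security group
    }],
}
# ec2 = boto3.client("ec2")
# ec2.run_instances(**run_params)
```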
37
An application running on an Amazon ECS container instance using the EC2 launch type needs permissions to write data to Amazon DynamoDB. **How can you assign these permissions only to the specific ECS task that is running the application?** 1. Create an IAM policy with permissions to DynamoDB and attach it to the container instance 2. Create an IAM policy with permissions to DynamoDB and assign it to a task using the taskRoleArn parameter 3. Use a security group to allow outbound connections to DynamoDB and assign it to the container instance 4. Modify the AmazonECSTaskExecutionRolePolicy policy to add permissions for DynamoDB
**2.** Create an IAM policy with permissions to DynamoDB and assign it to a task using the taskRoleArn parameter ## Footnote To specify permissions for a specific task on Amazon ECS you should use IAM Roles for Tasks. The permissions policy can be applied to tasks when creating the task definition, or by using an IAM task role override using the AWS CLI or SDKs. The taskRoleArn parameter is used to specify the policy. * You should not apply the permissions to the container instance as they will then apply to all tasks running on the instance as well as the instance itself. * Though you will need a security group to allow outbound connections to DynamoDB, the question is asking how to assign permissions to write data to DynamoDB and a security group cannot provide those permissions. * The AmazonECSTaskExecutionRolePolicy policy is the Task Execution IAM Role. This is used by the container agent to be able to pull container images, write log files, etc. **Reference:** [Amazon ECS task IAM role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html) Save time with our [AWS cheat sheets](https://digitalcloud.training/amazon-ecs-and-eks/).
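The distinction between the task role and the execution role shows up directly in the task definition. A sketch with placeholder ARNs and image names: `taskRoleArn` carries the application's DynamoDB permissions, while `executionRoleArn` is what the ECS agent uses to pull images and write logs.

```python
# Sketch (assumed ARNs/names): an ECS task definition scoping DynamoDB
# write access to this task only.
task_def = {
    "family": "catalog-writer",  # placeholder family name
    # Permissions for the application code inside the task:
    "taskRoleArn": "arn:aws:iam::123456789012:role/DynamoWriteRole",
    # Permissions for the container agent (image pull, log delivery):
    "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    "containerDefinitions": [{
        "name": "app",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest",
        "memory": 512,
    }],
}
# ecs = boto3.client("ecs")
# ecs.register_task_definition(**task_def)
```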
38
A company requires a solution to allow customers to customize images that are stored in an online catalog. The image customization parameters will be sent in requests to Amazon API Gateway. The customized image will then be generated on-demand and can be accessed online. **The solutions architect requires a highly available solution. Which solution will be MOST cost-effective?** 1. Use Amazon EC2 instances to manipulate the original images into the requested customization. Store the original and manipulated images in Amazon S3. Configure an Elastic Load Balancer in front of the EC2 instances 2. Use AWS Lambda to manipulate the original images to the requested customization. Store the original and manipulated images in Amazon S3. Configure an Amazon CloudFront distribution with the S3 bucket as the origin 3. Use AWS Lambda to manipulate the original images to the requested customization. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Elastic Load Balancer in front of the Amazon EC2 instances 4. Use Amazon EC2 instances to manipulate the original images into the requested customization. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Amazon CloudFront distribution with the S3 bucket as the origin
**2.** Use AWS Lambda to manipulate the original images to the requested customization. Store the original and manipulated images in Amazon S3. Configure an Amazon CloudFront distribution with the S3 bucket as the origin ## Footnote All solutions presented are highly available. The key requirement that must be satisfied is that the solution should be cost-effective and you must choose the most cost-effective option. Therefore, it’s best to eliminate services such as Amazon EC2 and ELB as these require ongoing costs even when they’re not used. Instead, a fully serverless solution should be used. AWS Lambda, Amazon S3 and CloudFront are the best services to use for these requirements. * This is not the most cost-effective option as the ELB and EC2 instances will incur costs even when not used. * This is not the most cost-effective option as the ELB will incur costs even when not used. Also, Amazon DynamoDB will incur RCU/WCUs when running and is not the best choice for storing images. * This is not the most cost-effective option as the EC2 instances will incur costs even when not used. **Reference:** [Serverless on AWS](https://aws.amazon.com/serverless/) Save time with our AWS cheat sheets: * [Amazon S3 and Glacier](https://digitalcloud.training/amazon-s3-and-glacier/) * [AWS Lambda](https://digitalcloud.training/aws-lambda/) * [Amazon CloudFront](https://digitalcloud.training/amazon-cloudfront/)
39
A finance organization wants to deploy end of day processing applications to a fleet of Amazon EC2 instances with a focus on reducing cost. These applications are stateless and can be re-triggered in case of failure. The company needs a solution that minimizes cost and operational overhead. **What should a solutions architect do to meet these requirements?** 1. Use Spot Instances in an Amazon EC2 Auto Scaling group to run the application containers. 2. Use Spot Instances in an Amazon Elastic Kubernetes Service (Amazon EKS) managed node group. 3. Use On-Demand Instances in an Amazon EC2 Auto Scaling group to run the application containers. 4. Use On-Demand Instances in an Amazon Elastic Kubernetes Service (Amazon EKS) managed node group.
**2.** Use Spot Instances in an Amazon Elastic Kubernetes Service (Amazon EKS) managed node group. ## Footnote EC2 Spot Instances provide spare compute capacity at discounts of up to 90% off On-Demand Instance pricing, so the two options using On-Demand Instances can be eliminated immediately. Of the two Spot options, since the application is stateless, the better approach is a containerized one: an EKS managed node group reduces operational overhead by handling node provisioning and lifecycle management. * As mentioned above, EKS gives you more options for application fleet orchestration, which makes it a better choice. * Compared to Spot Instances, On-Demand Instances are costlier, and for end-of-day processing where failures are acceptable and can be re-triggered, Spot Instances are a better choice. * Compared to Spot Instances, On-Demand Instances are more expensive, and for end-of-day processing where failures are acceptable and can be re-triggered, Spot Instances are a better choice. **Reference:** [Best practices for handling EC2 Spot Instance interruptions](https://aws.amazon.com/blogs/compute/best-practices-for-handling-ec2-spot-instance-interruptions/) Save time with our [AWS cheat sheets](https://digitalcloud.training/amazon-ecs-and-eks/).
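A Spot-backed managed node group can be sketched with boto3-style parameters. Cluster name, subnets, and role ARN are placeholders; `capacityType` set to `SPOT` is the key setting, and listing multiple similar instance types diversifies the Spot pools to reduce interruptions.

```python
# Sketch (assumed names/IDs): an EKS managed node group on Spot capacity.
nodegroup_params = {
    "clusterName": "batch-cluster",                  # placeholder cluster
    "nodegroupName": "spot-workers",
    "capacityType": "SPOT",                          # Spot-backed managed nodes
    "instanceTypes": ["m5.large", "m5a.large"],      # diversify Spot pools
    "subnets": ["subnet-aaa111", "subnet-bbb222"],   # placeholders
    "nodeRole": "arn:aws:iam::123456789012:role/eksNodeRole",  # placeholder
    "scalingConfig": {"minSize": 0, "maxSize": 10, "desiredSize": 3},
}
# eks = boto3.client("eks")
# eks.create_nodegroup(**nodegroup_params)
```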
40
An application that is being installed on an Amazon EC2 instance requires a persistent block storage volume. The data must be encrypted at rest and regular volume-level backups must be automated. **Which solution option should be used?** 1. Use an encrypted Amazon EBS volume and use Data Lifecycle Manager to automate snapshots 2. Use an encrypted Amazon EFS filesystem and use an Amazon CloudWatch Events rule to start a backup copy of data using AWS Lambda 3. Use server-side encryption on an Amazon S3 bucket and use Cross-Region-Replication to backup on a schedule 4. Use an encrypted Amazon EC2 instance store and copy the data to another EC2 instance using a cron job and a batch script
**1.** Use an encrypted Amazon EBS volume and use Data Lifecycle Manager to automate snapshots ## Footnote For block storage the Solutions Architect should use either Amazon EBS or EC2 instance store. However, the instance store is non-persistent so EBS must be used. With EBS you can encrypt your volume and automate volume-level backups using snapshots that are run by Data Lifecycle Manager. * EFS is not block storage, it is a file-level storage service. * Amazon S3 is an object-based storage system not a block-based storage system. * The EC2 instance store is a non-persistent volume. **Reference:** [Amazon EBS volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volumes.html) Save time with our [AWS cheat sheets](https://digitalcloud.training/amazon-ebs/).
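A Data Lifecycle Manager policy can be sketched as a boto3-style request. The role ARN, target tag, schedule time, and retention count are placeholder assumptions; the shape shows how DLM targets tagged volumes and automates snapshot creation and retention.

```python
# Sketch (assumed ARN/tags): a DLM policy that snapshots tagged EBS volumes
# daily and keeps the 7 most recent snapshots.
dlm_policy = {
    "ExecutionRoleArn": "arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",
    "Description": "Daily snapshots of app volumes",
    "State": "ENABLED",
    "PolicyDetails": {
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "Daily"}],  # placeholder tag
        "Schedules": [{
            "Name": "DailySnapshots",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
            "RetainRule": {"Count": 7},   # keep the last 7 snapshots
        }],
    },
}
# dlm = boto3.client("dlm")
# dlm.create_lifecycle_policy(**dlm_policy)
```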
41
An e-commerce company operates a containerized microservices application on a fleet of Amazon EC2 instances. As part of their infrastructure improvement efforts, the company plans to migrate the application to Amazon Elastic Kubernetes Service (Amazon EKS) for enhanced scalability and management. As part of the security protocol, the company has configured the Amazon EKS control plane with endpoint private access enabled and public access disabled. The data plane resides within private subnets. However, the company faces an issue where nodes fail to join the cluster. **What can be done to allow the nodes to join the EKS cluster?** 1. Modify the associated IAM role to include permissions to the AmazonEKSClusterPolicy. 2. Establish VPC peering connection for nodes to access the control plane. 3. Move nodes to public subnet and configure security group rules for the EC2 nodes. 4. Set up VPC endpoints for Amazon EKS and ECR to enable nodes to communicate with the control plane.
**4.** Set up VPC endpoints for Amazon EKS and ECR to enable nodes to communicate with the control plane. ## Footnote When the EKS control plane is configured with private access, and the nodes are in a private subnet, you need to create VPC endpoints for Amazon EKS and ECR. This allows the nodes to communicate with the EKS control plane and pull container images from ECR. * IAM roles are crucial for setting up permissions, but simply modifying the associated IAM role would not solve the issue of nodes not being able to connect to the control plane. * VPC peering is not the recommended way to allow nodes in a private subnet to access the EKS control plane. This approach might also incur additional operational overhead. * Moving the nodes to public subnets contradicts the original requirement of having the data plane in private subnets. Additionally, this approach might introduce unnecessary security risks. **Reference:** [Deploy private clusters with limited internet access](https://docs.aws.amazon.com/eks/latest/userguide/private-clusters.html) Save time with our [AWS cheat sheets](https://digitalcloud.training/amazon-ecs-and-eks/).
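The endpoints can be sketched as a batch of boto3-style requests. Region, VPC, and subnet IDs are placeholders; the interface endpoints listed cover the EKS API, the two ECR endpoints, and EC2, and a separate S3 gateway endpoint (omitted here) is also needed because ECR stores image layers in S3.

```python
# Sketch (assumed region/IDs): interface endpoints so private-subnet nodes
# can reach the control plane and pull images from ECR.
region = "us-east-1"  # placeholder
services = [
    f"com.amazonaws.{region}.eks",
    f"com.amazonaws.{region}.ecr.api",
    f"com.amazonaws.{region}.ecr.dkr",
    f"com.amazonaws.{region}.ec2",
]
endpoint_requests = [
    {
        "VpcId": "vpc-0123456789abcdef0",              # placeholder VPC
        "VpcEndpointType": "Interface",
        "ServiceName": name,
        "SubnetIds": ["subnet-aaa111", "subnet-bbb222"],  # node subnets (placeholders)
        "PrivateDnsEnabled": True,                     # resolve service DNS privately
    }
    for name in services
]
# ec2 = boto3.client("ec2")
# for req in endpoint_requests:
#     ec2.create_vpc_endpoint(**req)
```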
42
A game development company is planning to build a cloud-based game platform on AWS. The player activity patterns are unpredictable and could remain idle for extended periods. Only players who have purchased the game should have the ability to log in and play. **Which combination of steps will meet these requirements MOST cost-effectively?** (Select THREE.) 1. Implement an AWS Lambda function to fetch player information from Amazon DynamoDB. Establish an Amazon API Gateway endpoint to handle RESTful API calls, directing them to the Lambda function. 2. Set up an Amazon Elastic Container Service (Amazon ECS) service behind an Application Load Balancer to fetch player information from Amazon RDS. Establish an Amazon API Gateway endpoint to handle RESTful API calls, directing them to the ECS service. 3. Use AWS Cognito User Pools to handle user authentication. 4. Use AWS Cognito Identity Pools to handle user authentication. 5. Leverage AWS Amplify to serve the frontend game interface with HTML, CSS, and JS. Use the integrated Amazon CloudFront configuration for distribution. 6. Use Amazon S3 static web hosting with HTML, CSS, and JS. Use Amazon CloudFront to distribute the frontend game interface.
**1.** Implement an AWS Lambda function to fetch player information from Amazon DynamoDB. Establish an Amazon API Gateway endpoint to handle RESTful API calls, directing them to the Lambda function. **3.** Use AWS Cognito User Pools to handle user authentication. **5.** Leverage AWS Amplify to serve the frontend game interface with HTML, CSS, and JS. Use the integrated Amazon CloudFront configuration for distribution. ## Footnote AWS Lambda is a cost-effective solution for unpredictable traffic patterns due to its pay-per-use pricing model. DynamoDB is also a cost-effective and highly scalable solution for storing user data. The API Gateway provides a HTTP-based endpoint that can be used to expose the Lambda function. AWS Cognito User Pools provide user directory features including sign-up and sign-in services, which are suitable for managing game user authentication. AWS Amplify simplifies the process of hosting web applications with automated deployment processes. It also integrates with CloudFront, providing a global content delivery network to efficiently serve the game interface. * Using Amazon ECS might be overkill for this scenario and might not be as cost-effective compared to Lambda and DynamoDB, especially for unpredictable and possibly idle traffic. * Cognito Identity Pools are used for granting access to AWS resources rather than handling user authentication. * While you could host a static website on S3 and use CloudFront for distribution, AWS Amplify can provide additional capabilities tailored to modern web applications. Furthermore, Amplify's automated deployment processes can provide a more streamlined and efficient approach to managing the game's frontend compared to managing separate S3 and CloudFront configurations. **References:** * [Amazon Cognito user pools](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools.html) * [AWS Amplify](https://aws.amazon.com/amplify/) Save time with our AWS cheat sheets: * [AWS Lambda](https://digitalcloud.training/aws-lambda/) * [Amazon ECS and EKS](https://digitalcloud.training/amazon-ecs-and-eks/)
43
A company's web application is using multiple Amazon EC2 Linux instances and storing data on Amazon EBS volumes. The company is looking for a solution to increase the resiliency of the application in case of a failure. **What should a solutions architect do to meet these requirements?** 1. Launch the application on EC2 instances in each Availability Zone. Attach EBS volumes to each EC2 instance 2. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Mount an instance store on each EC2 instance 3. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data on Amazon EFS and mount a target on each instance 4. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data using Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
**3.** Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data on Amazon EFS and mount a target on each instance ## Footnote To increase the resiliency of the application the solutions architect can use Auto Scaling groups to launch and terminate instances across multiple availability zones based on demand. An application load balancer (ALB) can be used to direct traffic to the web application running on the EC2 instances. Lastly, the Amazon Elastic File System (EFS) can assist with increasing the resilience of the application by providing a shared file system that can be mounted by multiple EC2 instances from multiple availability zones. * The EBS volumes are single points of failure which are not shared with other instances. * Instance stores are ephemeral data stores which means data is lost when powered down. Also, instance stores cannot be shared between instances. * There are data retrieval charges associated with this S3 tier. It is not a suitable storage tier for application files. **Reference:** [Amazon Elastic File System Documentation](https://docs.aws.amazon.com/efs/) Save time with our [AWS cheat sheets](https://digitalcloud.training/amazon-efs/).
44
A company runs an internal browser-based application. The application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. The Auto Scaling group scales up to 20 instances during work hours, but scales down to 2 instances overnight. Staff are complaining that the application is very slow when the day begins, although it runs well by midmorning. **How should the scaling be changed to address the staff complaints and keep costs to a minimum?** 1. Implement a scheduled action that sets the desired capacity to 20 shortly before the office opens 2. Implement a step scaling action triggered at a lower CPU threshold, and decrease the cooldown period 3. Implement a target tracking action triggered at a lower CPU threshold, and decrease the cooldown period 4. Implement a scheduled action that sets the minimum and maximum capacity to 20 shortly before the office opens
**3.** Implement a target tracking action triggered at a lower CPU threshold, and decrease the cooldown period ## Footnote Though this sounds like a good use case for scheduled actions, both answers using scheduled actions will have 20 instances running regardless of actual demand. A better option to be more cost effective is to use a target tracking action that triggers at a lower CPU threshold. With this solution the scaling will occur before the CPU utilization gets to a point where performance is affected. This will result in resolving the performance issues whilst minimizing costs. Using a reduced cooldown period will also more quickly terminate unneeded instances, further reducing costs. * This is not the most cost-effective option. Note you can choose min, max, or desired for a scheduled action. * This is not the most cost-effective option. Note you can choose min, max, or desired for a scheduled action. * AWS recommend you use target tracking in place of step scaling for most use cases. **Reference:** [Target tracking scaling policies for Amazon EC2 Auto Scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html) Save time with our [AWS cheat sheets](https://digitalcloud.training/amazon-ec2-auto-scaling/).
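The recommended policy can be sketched with boto3-style parameters. The ASG name and the 40% target are placeholder assumptions; the point is that a target tracking policy on average CPU, with a lower target value, scales out earlier during the morning ramp-up while still scaling in overnight.

```python
# Sketch (assumed names/values): a target tracking scaling policy on
# average CPU utilization for the Auto Scaling group.
policy_params = {
    "AutoScalingGroupName": "web-asg",        # placeholder ASG name
    "PolicyName": "cpu-target-tracking",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 40.0,  # lower target -> earlier scale-out (placeholder value)
    },
}
# autoscaling = boto3.client("autoscaling")
# autoscaling.put_scaling_policy(**policy_params)
```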
45
A tool needs to analyze data stored in an Amazon S3 bucket. Processing the data takes a few seconds and results are then written to another S3 bucket. Less than 256 MB of memory is needed to run the process. **What would be the MOST cost-effective compute solutions for this use case?** 1. AWS Fargate tasks 2. AWS Lambda functions 3. Amazon EC2 spot instances 4. Amazon Elastic Beanstalk
**2.** AWS Lambda functions ## Footnote AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume. Lambda has a maximum execution time of 900 seconds and memory can be allocated up to 10,240 MB. Therefore, the most cost-effective solution will be AWS Lambda. * Fargate runs Docker containers and is serverless. However, you do pay for the running time of the tasks so it will not be as cost-effective. * EC2 instances must run continually waiting for jobs to process so even with Spot this would be less cost-effective (and subject to termination). * This service also relies on Amazon EC2 instances so would not be as cost-effective. **Reference:** [AWS Lambda](https://aws.amazon.com/lambda/) Save time with our [AWS cheat sheets](https://digitalcloud.training/aws-lambda/).
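The pattern can be sketched as a Lambda handler triggered by S3 event notifications. The event parsing below matches the real S3 event shape; the actual S3 get/put calls and the `analyze` processing step are left as comments since they are assumptions, and the bucket names are placeholders.

```python
# Sketch of an S3-triggered Lambda handler: read the object named in the
# event, process it, and write the result to another bucket.
def handler(event, context=None):
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        # processed = analyze(body)  # hypothetical processing step (< 256 MB RAM)
        out_key = f"results/{key}"
        # s3.put_object(Bucket="results-bucket", Key=out_key, Body=processed)
        results.append((bucket, out_key))
    return results
```

A quick local check: calling the handler with a minimal fake S3 event returns the source bucket and the derived output key.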
46
An application runs on EC2 instances in a private subnet behind an Application Load Balancer in a public subnet. The application is highly available and distributed across multiple AZs. The EC2 instances must make API calls to an internet-based service. **How can the Solutions Architect enable highly available internet connectivity?** 1. Create a NAT gateway and attach it to the VPC. Add a route to the gateway to each private subnet route table 2. Configure an internet gateway. Add a route to the gateway to each private subnet route table 3. Create a NAT instance in the private subnet of each AZ. Update the route tables for each private subnet to direct internet-bound traffic to the NAT instance 4. Create a NAT gateway in the public subnet of each AZ. Update the route tables for each private subnet to direct internet-bound traffic to the NAT gateway
**4.** Create a NAT gateway in the public subnet of each AZ. Update the route tables for each private subnet to direct internet-bound traffic to the NAT gateway ## Footnote The only solution presented that actually works is to create a NAT gateway in the public subnet of each AZ. They must be created in the public subnet as they gain public IP addresses and use an internet gateway for internet access. The route tables in the private subnets must then be configured with a route to the NAT gateway and then the EC2 instances will be able to access the internet (subject to security group configuration). * You do not attach NAT gateways to VPCs, you add them to public subnets. * Adding a route to an internet gateway to a private subnet route table would not provide internet access, as the instances in those subnets do not have public IP addresses (and the subnet would effectively become public). * You do not create NAT instances in private subnets, they must be created in public subnets. **Reference:** [NAT gateways](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html) Save time with our [AWS cheat sheets](https://digitalcloud.training/amazon-vpc/).
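The per-AZ routing described above can be sketched as data: each private subnet gets a default route pointing at the NAT gateway in its own AZ, so the loss of one AZ does not break egress in the others. All subnet and gateway IDs below are hypothetical, and this models the routing intent rather than calling any AWS API.

```python
# One NAT gateway per AZ (IDs are made-up examples).
nat_gateways = {"us-east-1a": "nat-0aaa", "us-east-1b": "nat-0bbb"}

def build_default_routes(private_subnets: dict) -> dict:
    """Return a 0.0.0.0/0 route per private subnet, targeting the
    NAT gateway that lives in the same Availability Zone."""
    return {
        subnet: {"destination": "0.0.0.0/0", "target": nat_gateways[az]}
        for subnet, az in private_subnets.items()
    }

routes = build_default_routes({"subnet-priv-a": "us-east-1a",
                               "subnet-priv-b": "us-east-1b"})
```

Keeping traffic within its own AZ also avoids cross-AZ data charges and removes the single point of failure a shared gateway would create.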
47
The Solutions Architect in charge of a critical application must ensure the Amazon EC2 instances are able to be launched in another AWS Region in the event of a disaster. **What steps should the Solutions Architect take?** (Select TWO.) 1. Launch instances in the second Region using the S3 API 2. Create AMIs of the instances and copy them to another Region 3. Enable cross-region snapshots for the Amazon EC2 instances 4. Launch instances in the second Region from the AMIs 5. Copy the snapshots using Amazon S3 cross-region replication
**2.** Create AMIs of the instances and copy them to another Region **4.** Launch instances in the second Region from the AMIs ## Footnote You can create AMIs of the EC2 instances and then copy them across Regions. This provides a point-in-time copy of the state of the EC2 instance in the remote Region. Once you’ve created AMIs of EC2 instances and copied them to the second Region, you can then launch the EC2 instances from the AMIs in that Region. This is a good DR strategy as you have moved stateful EC2 instances to another Region. * Though snapshots (and EBS-backed AMIs) are stored on Amazon S3, you cannot actually access them using the S3 API. You must use the EC2 API. * You cannot enable “cross-region snapshots” as this is not a feature that currently exists. * You cannot work with snapshots using Amazon S3 at all including leveraging the cross-region replication feature. **Reference:** [EBS Snapshot Copy (Between Regions)](https://aws.amazon.com/blogs/aws/ebs-snapshot-copy/) Save time with our [AWS cheat sheets](https://digitalcloud.training/amazon-ec2/).
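As a sketch, the two chosen steps can be expressed as an ordered plan of EC2 operations. Instance IDs and Region names below are made up; in real use each step maps to the EC2 `CreateImage`, `CopyImage`, and `RunInstances` API actions.

```python
# Build an ordered DR plan: image each instance in the source Region,
# copy each AMI to the DR Region, then launch there when needed.
def build_dr_plan(instance_ids, source_region, dr_region):
    plan = []
    for i in instance_ids:
        plan.append(("create_image", source_region, i))          # AMI in source Region
        plan.append(("copy_image", dr_region, f"ami-from-{i}"))  # copy AMI cross-Region
    plan.append(("run_instances", dr_region, "launch-from-copied-amis"))
    return plan

plan = build_dr_plan(["i-1", "i-2"], "us-east-1", "eu-west-1")
```

The copied AMIs sit idle (cheap snapshot storage) until a disaster, when the final launch step brings the workload up in the second Region.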
48
A company has launched a multi-tier application architecture. The web tier and database tier run on Amazon EC2 instances in private subnets within the same Availability Zone. **Which combination of steps should a Solutions Architect take to add high availability to this architecture?** (Select TWO.) 1. Create new public subnets in the same AZ for high availability and move the web tier to the public subnets 2. Create an Amazon EC2 Auto Scaling group and Application Load Balancer (ALB) spanning multiple AZs 3. Add the existing web application instances to an Auto Scaling group behind an Application Load Balancer (ALB) 4. Create new private subnets in the same VPC but in a different AZ. Create a database using Amazon EC2 in one AZ 5. Create new private subnets in the same VPC but in a different AZ. Migrate the database to an Amazon RDS multi-AZ deployment
**2.** Create an Amazon EC2 Auto Scaling group and Application Load Balancer (ALB) spanning multiple AZs **5.** Create new private subnets in the same VPC but in a different AZ. Migrate the database to an Amazon RDS multi-AZ deployment ## Footnote The Solutions Architect can use an Auto Scaling group across multiple AZs with an ALB in front to create an elastic and highly available architecture. Then, migrate the database to an Amazon RDS multi-AZ deployment to create HA for the database tier. This results in a fully redundant architecture that can withstand the failure of an Availability Zone. * If subnets share the same AZ they are not suitable for splitting your tier across them for HA as the failure of an AZ will take out both subnets. * The instances are in a single AZ so the Solutions Architect should create a new Auto Scaling group and launch instances across multiple AZs. * A database in a single AZ will not be highly available. **References:** * [What is Amazon EC2?](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-increase-availability.html) * [Configuring and managing a Multi-AZ deployment for Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html) Save time with our AWS cheat sheets: * [Amazon EC2](https://digitalcloud.training/amazon-ec2/) * [Amazon RDS](https://digitalcloud.training/amazon-rds/)
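The AZ-spread property being added can be captured as a one-line predicate (AZ names below are illustrative): a tier only survives an AZ failure if its instances span at least two Availability Zones.

```python
# A tier is AZ-resilient only if its members span >= 2 Availability Zones.
def is_az_resilient(instance_azs, minimum_azs: int = 2) -> bool:
    return len(set(instance_azs)) >= minimum_azs

single_az_tier = ["us-east-1a", "us-east-1a", "us-east-1a"]  # original layout
multi_az_tier = ["us-east-1a", "us-east-1b", "us-east-1c"]   # after the new ASG spreads instances
```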
49
A company operates a critical Python-based application that analyzes incoming real-time data. The application runs every 15 minutes and takes approximately 2 minutes to complete a run. It requires 1.5 GB of memory and uses the CPU intensively during its operation. The company wants to minimize the costs associated with running this application. **Which solution will meet these requirements?** 1. Use AWS App2Container (A2C) to containerize the application. Run the application as an Amazon Elastic Container Service (Amazon ECS) task on AWS Fargate with 1 virtual CPU (vCPU) and 1.5 GB of memory. 2. Implement the application as an AWS Lambda function configured with 1.5 GB of memory. Use Amazon EventBridge to schedule the function to run every 15 minutes. 3. Use AWS App2Container (A2C) to containerize the application. Deploy the container on an Amazon EC2 instance, configure an Amazon CloudWatch alarm to stop the instance when the application is not running. 4. Deploy the application on an Amazon EC2 instance and manually start and stop the instance in alignment with the schedule of the application run.
**2.** Implement the application as an AWS Lambda function configured with 1.5 GB of memory. Use Amazon EventBridge to schedule the function to run every 15 minutes. ## Footnote This is the most cost-effective solution. AWS Lambda is designed for running code in response to events or on a schedule, and you only pay for the compute time that you consume. Configuring the function with 1.5 GB of memory would ensure the function has enough resources, and using Amazon EventBridge for scheduling would enable running the function every 15 minutes. * This is not the most cost-effective solution. Even though AWS App2Container (A2C) would help in containerizing the application and AWS Fargate would abstract the need to manage underlying EC2 instances, it is still overkill for an application that runs for short durations intermittently. It would still result in paying for unused compute resources. * AWS App2Container (A2C) is used to help containerize applications, but this does not optimize for cost because it requires running an EC2 instance continuously and stopping the instance when not in use can be complex and might not be timely, resulting in potential unnecessary costs. * This solution involves significant manual intervention and managing EC2 instances. While it can work, it is not optimal in terms of cost or operational overhead. It does not take advantage of the pay-per-use model and automatic scaling provided by AWS Lambda. **Reference:** [Tutorial: Create an EventBridge scheduled rule for AWS Lambda functions](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-run-lambda-schedule.html) Save time with our [AWS cheat sheets](https://digitalcloud.training/amazon-cloudwatch/).
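A quick duty-cycle calculation shows why Lambda wins here: the workload only runs about 13% of the time, and Lambda bills only those seconds, while an always-on instance bills the full month. The 730-hour month below is an assumed average for illustration, not a pricing quote.

```python
# Duty cycle of a 2-minute run every 15 minutes, and the resulting
# billable Lambda work per month at 1.5 GB (illustrative arithmetic).
RUN_SECONDS = 2 * 60
INTERVAL_SECONDS = 15 * 60

duty_cycle = RUN_SECONDS / INTERVAL_SECONDS             # ~0.133: fraction of time billed
runs_per_month = 730 * 3600 // INTERVAL_SECONDS         # ~2,920 scheduled invocations
billed_gb_seconds = runs_per_month * RUN_SECONDS * 1.5  # GB-seconds Lambda actually charges

# An always-on EC2 instance would be billed for 100% of the month
# regardless of how little of it the application actually uses.
```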
50
A multinational organization has a distributed application that runs on Amazon EC2 instances, which are behind an Application Load Balancer in an Auto Scaling group. The application utilizes a MySQL database hosted on Amazon Aurora. The database cluster spans across multiple Availability Zones in a single region. The organization plans to launch its services in a new geographical area and wants to ensure maximum availability with minimal service interruption. **Which strategy should the organization adopt?** 1. Expand the existing Auto Scaling group into the new Region. Utilize Amazon Aurora Global Database to extend the database across the primary and new regions. Implement Amazon Route 53 health checks with a failover routing policy directed towards the new region. 2. Replicate the application layer in the new region. Implement an Aurora MySQL Read Replica in the new region using Route 53 health checks and a failover routing policy. In case of primary failure, promote the Read Replica to primary. 3. Create a similar application layer in the new region. Establish a new Aurora MySQL database in this region. Use AWS Database Migration Service (AWS DMS) for ongoing replication from the primary database to the new region. Implement Amazon Route 53 health checks with a failover routing policy to the new region. 4. Establish the application layer in the new region. Use Amazon Aurora Global Database for deploying the database in the primary and new regions. Apply Amazon Route 53 health checks with a failover routing policy to the new region. Promote the secondary to primary as needed.
**4.** Establish the application layer in the new region. Use Amazon Aurora Global Database for deploying the database in the primary and new regions. Apply Amazon Route 53 health checks with a failover routing policy to the new region. Promote the secondary to primary as needed. ## Footnote This solution involves creating an application layer in the new region and using Amazon Aurora Global Database, which supports replicating your databases across multiple regions with minimal impact on performance. This configuration can enhance disaster recovery capabilities and can reduce the impact of planned maintenance. Amazon Route 53 health checks with a failover routing policy can automatically route traffic to the new region in the event of a failure in the primary region, thereby ensuring high availability. With an Aurora global database, there are two different approaches to failover depending on the scenario. You can use manual unplanned failover (detach and promote) or managed planned failover. * This solution involves creating a Read Replica in the new region, which would indeed allow for the promotion of the Read Replica to a primary instance if necessary. However, this process isn't instantaneous and could lead to service interruption, which is not what the question asked for. Aurora Global Database provides a lower RTO/RPO. * AWS Database Migration Service (AWS DMS) is primarily used for migrating databases to AWS from on-premises environments or for replicating databases for data warehousing and other use cases. It isn't as suitable for ongoing high-availability or failover scenarios as Amazon Aurora Global Database, which is specifically designed for these situations. * It is not possible to expand an Auto Scaling group across multiple Regions. ASGs operate within a Region only. 
**References:** * [Using Amazon Aurora Global Database](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html) * [Recovering an Amazon Aurora global database from an unplanned outage](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database-disaster-recovery.html#aurora-global-database-failover) Save time with our [AWS cheat sheets](https://digitalcloud.training/amazon-aurora/).
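The failover routing behaviour above can be sketched as a simple health-gated choice (hostnames are hypothetical): Route 53 answers with the primary Region's record while its health check passes, and with the secondary record once it fails.

```python
# Health-gated record selection, mimicking a Route 53 failover policy
# (endpoints are made-up examples, not real DNS names).
PRIMARY = "app.us-east-1.example.com"
SECONDARY = "app.eu-west-1.example.com"

def resolve(primary_healthy: bool) -> str:
    """Return the endpoint a failover routing policy would answer with."""
    return PRIMARY if primary_healthy else SECONDARY
```

In the Aurora Global Database design, failing DNS over to the secondary Region is paired with promoting that Region's cluster to primary.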
51
An e-commerce company operates a serverless web application that must interact with numerous Amazon DynamoDB tables to fulfill user requests. It is critical that the application's performance remains consistent and unaffected while interacting with these tables. **Which method provides the MOST operationally efficient way to fulfill these requirements?** 1. AWS Lambda with Step Functions. 2. Amazon S3 with Lambda triggers. 3. AWS Glue with a DynamoDB connector. 4. AWS AppSync with multiple data sources and resolvers.
**4.** AWS AppSync with multiple data sources and resolvers. ## Footnote AWS AppSync simplifies application development by letting you create a flexible API to securely access, manipulate, and combine data from one or more data sources. AppSync is a managed service that uses GraphQL to make it easy for applications to get exactly the data they need, including from multiple DynamoDB tables. AWS AppSync is designed for real-time and offline data access which makes it an ideal solution for this scenario. * AWS Step Functions make it easy to coordinate the components of distributed applications and microservices using visual workflows. However, while you could theoretically build a flow to retrieve data from multiple tables, it's not the most efficient solution as it introduces additional complexity and potential latency. * While you can use AWS Lambda to execute code in response to triggers like changes to data in an Amazon S3 bucket, this doesn't directly allow the application to retrieve data from multiple DynamoDB tables. This approach would also involve unnecessary data transfers and added latency. * AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy to prepare and load your data for analytics. However, AWS Glue isn't meant for real-time data retrieval in an application. Using it for real-time data retrieval would likely be overcomplicated and inefficient. **Reference:** [AWS AppSync features](https://aws.amazon.com/appsync/product-details/)
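Conceptually, AppSync resolvers act as a routing layer between GraphQL fields and data sources. The sketch below (field and table names are invented) shows the idea: each top-level field is wired to its own DynamoDB table data source, so one request can fan out to several tables without extra application code.

```python
# Conceptual resolver map: each GraphQL field is backed by its own
# DynamoDB table data source (field and table names are invented).
resolvers = {
    "getOrder": "OrdersTable",
    "getCustomer": "CustomersTable",
    "getInventory": "InventoryTable",
}

def data_sources_for(query_fields) -> set:
    """Tables a single GraphQL request would fan out to via its resolvers."""
    return {resolvers[field] for field in query_fields}
```

A client asking for `getOrder` and `getCustomer` in one query touches exactly the two tables it needs, and no others.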
52
A software development company is deploying a microservices-based application on Amazon Elastic Kubernetes Service (Amazon EKS). The application's traffic fluctuates significantly throughout the day and the company wants to ensure that the EKS cluster scales up and down according to these traffic patterns. **Which combination of steps would satisfy these requirements with MINIMAL operational overhead?** (Select TWO.) 1. Implement the Kubernetes Vertical Pod Autoscaler to adjust the CPU and memory allocation for the pods. 2. Utilize the Kubernetes Metrics Server to enable horizontal pod autoscaling based on resource utilization. 3. Employ the Kubernetes Cluster Autoscaler for dynamically managing the quantity of nodes in the EKS cluster. 4. Integrate Amazon SQS and connect it to Amazon EKS for workload management. 5. Leverage AWS X-Ray to track and analyze the application's network activity.
**2.** Utilize the Kubernetes Metrics Server to enable horizontal pod autoscaling based on resource utilization. **3.** Employ the Kubernetes Cluster Autoscaler for dynamically managing the quantity of nodes in the EKS cluster. ## Footnote The Metrics Server collects resource metrics like CPU and memory usage from each node and its pods and provides these metrics to the Kubernetes API server for use by the Horizontal Pod Autoscaler, which automatically scales the number of pods in a deployment, replication controller, replica set, or stateful set based on observed CPU utilization. The Kubernetes Cluster Autoscaler automatically adjusts the size of the Kubernetes cluster when there are pods that failed to run in the cluster due to insufficient resources or when there are nodes in the cluster that have been underutilized for an extended period and their pods can be placed on other existing nodes. * The Vertical Pod Autoscaler adjusts the resources of the pods and not the number of pods or nodes, which won't directly help with scaling according to traffic patterns. * Amazon SQS is a message queuing service, and while it can be used to manage workloads by decoupling microservices, it doesn't directly help with autoscaling an EKS cluster based on traffic patterns. * AWS X-Ray provides insights into the behavior of your applications, but it doesn't directly help with autoscaling an EKS cluster. **References:** * [Resource metrics pipeline](https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/#metrics-server) * [Scale cluster compute with Karpenter and Cluster Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/autoscaling.html) Save time with our [AWS cheat sheets](https://digitalcloud.training/amazon-ecs-and-eks/).
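The Horizontal Pod Autoscaler's documented scaling rule, desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), can be sketched directly; the utilization figures below are example inputs, not measurements.

```python
import math

# Kubernetes HPA scaling rule:
# desired = ceil(current_replicas * current_metric / target_metric)
def hpa_desired_replicas(current_replicas: int,
                         current_utilization: float,
                         target_utilization: float) -> int:
    return math.ceil(current_replicas * current_utilization / target_utilization)

# 4 pods averaging 90% CPU against a 50% target scale out to 8;
# the same 4 pods at 25% scale back in to 2.
```

The Cluster Autoscaler then adds or removes nodes so the scheduler can actually place (or consolidate) the replica count the HPA asks for.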