A company runs an application on six web application servers in an Amazon EC2 Auto Scaling group in a single Availability Zone. The application is fronted by an Application Load Balancer (ALB). A Solutions Architect needs to modify the infrastructure to be highly available without making any modifications to the application.
Which architecture should the Solutions Architect choose to enable high availability?
2. Modify the Auto Scaling group to use two instances in each of three Availability Zones.
The only change needed in this scenario to enable HA is to split the instances across multiple Availability Zones. The architecture already uses Auto Scaling and Elastic Load Balancing, so there is resilience to instance failure. Once the instances run across multiple AZs there will be AZ-level fault tolerance as well.
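As a rough sketch (the AZ names are placeholders, not from the scenario), Auto Scaling's default AZ balancing spreads the six instances evenly, so the loss of any one AZ removes only a third of the capacity:

```python
# Hypothetical sketch of Auto Scaling's round-robin AZ balancing.
from itertools import cycle

def spread_instances(num_instances, azs):
    """Assign instances to AZs the way Auto Scaling balances capacity."""
    placement = {az: 0 for az in azs}
    for _, az in zip(range(num_instances), cycle(azs)):
        placement[az] += 1
    return placement

# 6 instances over 3 AZs: 2 per AZ, so one AZ failure leaves 4 of 6 running.
print(spread_instances(6, ["us-east-1a", "us-east-1b", "us-east-1c"]))
```

Auto Scaling then replaces any lost capacity and rebalances across the remaining healthy AZs.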
Reference:
Add an Availability Zone
Save time with our AWS cheat sheets.
A company runs an application on an Amazon EC2 instance that requires 250 GB of storage space. The application is not used often and has small spikes in usage on weekday mornings and afternoons. The disk I/O can vary with peaks hitting a maximum of 3,000 IOPS. A Solutions Architect must recommend the most cost-effective storage solution that delivers the performance required.
Which solution or configuration should the Solutions Architect recommend?
2. Amazon EBS General Purpose SSD (gp2)
General Purpose SSD (gp2) volumes offer cost-effective storage that is ideal for a broad range of workloads. These volumes deliver single-digit millisecond latencies and the ability to burst to 3,000 IOPS for extended periods of time.
Between a minimum of 100 IOPS (at 33.33 GiB and below) and a maximum of 16,000 IOPS (at 5,334 GiB and above), baseline performance scales linearly at 3 IOPS per GiB of volume size. AWS designs gp2 volumes to deliver their provisioned performance 99% of the time. A gp2 volume can range in size from 1 GiB to 16 TiB.
In this configuration the volume will provide a baseline performance of 750 IOPS and will be able to burst to the required 3,000 IOPS during the short periods of increased traffic.
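The baseline figure can be sanity-checked against the scaling rule quoted above (3 IOPS per GiB, floored at 100 IOPS and capped at 16,000 IOPS):

```python
def gp2_baseline_iops(size_gib):
    """Baseline IOPS for a gp2 volume: 3 IOPS/GiB, floor 100, cap 16,000."""
    return min(max(3 * size_gib, 100), 16_000)

print(gp2_baseline_iops(250))   # 750 -- the volume in this scenario
print(gp2_baseline_iops(20))    # 100 -- minimum applies at/below 33.33 GiB
print(gp2_baseline_iops(6000))  # 16000 -- maximum applies at/above 5,334 GiB
```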
Reference:
Amazon EBS volume types
A media processing company is migrating its on-premises application to the AWS Cloud. The application processes high volumes of videos and generates large output files during the workflow.
The company requires a scalable solution to handle an increasing number of video processing jobs. The solution should minimize manual intervention, simplify job orchestration, and eliminate the need to manage infrastructure. Operational overhead must be kept to a minimum.
Which solution will fulfill these requirements with the LEAST operational overhead?
2. Use AWS Batch to run video processing jobs. Use AWS Step Functions to manage the workflow. Store the processed files in Amazon S3.
AWS Batch automatically manages the underlying infrastructure, scales based on workload and simplifies batch job management. Combining this with AWS Step Functions enables efficient orchestration of workflows. Storing the processed files in Amazon S3 provides durability and scalability, which is ideal for managing large files. This solution minimizes operational overhead by leveraging fully managed services.
A logistics company is running a containerized application on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster with Amazon EC2 instances as the worker nodes. The application includes a management dashboard that uses Amazon DynamoDB for real-time tracking data and a reporting service that stores large datasets in Amazon S3.
The company needs to ensure that the EKS Pods running the management dashboard can access only Amazon DynamoDB, and the EKS Pods running the reporting service can access only Amazon S3. The company uses AWS Identity and Access Management (IAM) for access control.
Which solution will meet these requirements?
2. Create separate IAM roles with policies for Amazon S3 and DynamoDB access. Use Kubernetes service accounts with IAM Roles for Service Accounts (IRSA) to assign the AmazonS3FullAccess policy to the reporting service Pods and the AmazonDynamoDBFullAccess policy to the management dashboard Pods.
IAM Roles for Service Accounts (IRSA) allows IAM roles to be assigned directly to Kubernetes service accounts, which the Pods then use to obtain temporary credentials. This enables fine-grained access control, ensuring that the Pods for the management dashboard can access only DynamoDB and the Pods for the reporting service can access only S3.
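As an illustrative sketch (the account ID, OIDC provider URL, namespace, and service-account names below are placeholder values, not from the scenario), an IRSA role's trust policy scopes sts:AssumeRoleWithWebIdentity to one specific Kubernetes service account via the cluster's OIDC identity provider:

```python
import json

def irsa_trust_policy(account_id, oidc_provider, namespace, service_account):
    """Build the IAM trust policy that lets one Kubernetes service account
    (and only that one) assume the role through the cluster's OIDC provider."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {
                "Federated": f"arn:aws:iam::{account_id}:oidc-provider/{oidc_provider}"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {"StringEquals": {
                # Restricts the role to a single namespace/service-account pair.
                f"{oidc_provider}:sub": f"system:serviceaccount:{namespace}:{service_account}"
            }},
        }],
    }

# Placeholder values for illustration only.
policy = irsa_trust_policy("111122223333",
                           "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE",
                           "reporting", "reporting-sa")
print(json.dumps(policy, indent=2))
```

A second role with the same shape, but a different `sub` condition and the DynamoDB policy attached, would cover the management dashboard Pods.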
A financial services company runs a trading application on a Kubernetes cluster hosted in its on-premises data center. Due to a recent surge in trading activity, the on-premises infrastructure can no longer support the increased load. The company plans to migrate the trading application to the AWS Cloud using an Amazon Elastic Kubernetes Service (Amazon EKS) cluster.
The company wants to minimize the operational overhead by avoiding management of the underlying compute infrastructure for the new AWS architecture.
Which solution will meet these requirements with the LEAST operational overhead?
3. Use AWS Fargate to provide the compute capacity for the EKS cluster. Create a Fargate profile and deploy the application using the profile.
AWS Fargate eliminates the need to provision or manage EC2 instances, allowing the company to run pods without managing the underlying infrastructure. This greatly reduces operational overhead and meets the company’s requirements.
A financial services company operates multiple internal services across various AWS accounts. The company uses AWS Organizations to manage these accounts and needs a centralized security appliance in a networking account to inspect all inter-service communication between AWS accounts. The solution must ensure secure and efficient routing of traffic through the security appliance.
Which solution will meet these requirements?
3. Deploy a Gateway Load Balancer (GWLB) in the networking account to route traffic to the security appliance. Configure the service accounts to send traffic to the GWLB by using a Gateway Load Balancer endpoint in each service account.
GWLB is specifically designed to simplify the deployment of security appliances. Using GWLB endpoints in service accounts ensures efficient routing and centralized inspection of traffic.
Reference:
What is a Gateway Load Balancer?
A company runs a critical data analysis job every Friday evening. The job processes large datasets and requires at least 2 hours to complete without interruptions. The job is stateful and needs reliable compute resources. The company wants to minimize operational overhead while ensuring the job runs as scheduled.
Which solution will meet these requirements?
1. Configure the job as a containerized task and run it on AWS Fargate using Amazon ECS. Schedule the task using Amazon EventBridge Scheduler.
This is the best option because AWS Fargate provides a serverless compute engine for containers and runs each task on dedicated On-Demand capacity, so the stateful 2-hour job can complete without interruption. Amazon EventBridge Scheduler offers a managed way to run the task on a schedule without manual intervention.
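As a sketch of the scheduling side, an EventBridge Scheduler cron expression such as cron(0 19 ? * FRI *) fires every Friday at 19:00; the helper below (the 19:00 start time is an assumed value, since the scenario only says "Friday evening") computes when such a schedule would next fire:

```python
from datetime import datetime, timedelta

def next_friday_evening(now, hour=19):
    """Next Friday at the given hour -- the moment a schedule like
    cron(0 19 ? * FRI *) would fire next."""
    days_ahead = (4 - now.weekday()) % 7  # Friday is weekday 4
    candidate = (now + timedelta(days=days_ahead)).replace(
        hour=hour, minute=0, second=0, microsecond=0)
    if candidate <= now:  # already past this Friday's slot
        candidate += timedelta(days=7)
    return candidate

print(next_friday_evening(datetime(2024, 6, 3, 12, 0)))  # 2024-06-07 19:00:00
```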
A company runs an application on Amazon EC2 instances in an Auto Scaling group. The application stores temporary training data on attached Amazon Elastic Block Store (Amazon EBS) volumes. The company seeks recommendations to optimize costs for the EC2 instances, the Auto Scaling group, and the EBS volumes with minimal manual intervention.
Which solution will meet these requirements with the MOST operational efficiency?
1. Configure AWS Compute Optimizer to provide cost optimization recommendations for the EC2 instances, the Auto Scaling group, and the EBS volumes.
AWS Compute Optimizer offers actionable insights for cost optimization across EC2 instances, Auto Scaling groups, and EBS volumes with minimal operational overhead.
Reference:
What is AWS Compute Optimizer?
An application on Amazon Elastic Container Service (Amazon ECS) performs data processing in two parts. The second part takes much longer to complete.
How can an Architect decouple the data processing from the backend application component?
4. Process each part using a separate ECS task. Create an Amazon SQS queue
Processing each part in a separate ECS task means the two stages of data processing can be separated. Amazon Simple Queue Service (SQS) is used for decoupling applications: it is a message queue on which you place messages for processing by application components. In this case, each data processing part runs in its own ECS task and writes its output message to an Amazon SQS queue. The backend can then pick up messages from the queue as they become ready, so it is not delayed while the slower second part completes.
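A minimal local sketch of this decoupling pattern, using Python's standard queue module to stand in for Amazon SQS (in AWS, SQS send/receive calls would replace put/get):

```python
from queue import Queue

# Stand-in for the Amazon SQS queue between the tasks and the backend.
results_queue = Queue()

def processing_task(part_name, payload):
    """Stands in for an ECS task: process one part, then publish a message."""
    results_queue.put({"part": part_name, "result": payload.upper()})

# The two parts run independently; the slow part does not block the fast one.
processing_task("part-1", "fast data")
processing_task("part-2", "slow data")

# The backend drains the queue whenever messages are ready.
while not results_queue.empty():
    print(results_queue.get())
```

The key property is that the producer and consumer never call each other directly; the queue absorbs any difference in their speeds.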
Reference:
AWS Application Integration Services
A High Performance Computing (HPC) application will be migrated to AWS. The application requires low network latency and high throughput between nodes and will be deployed in a single AZ.
How should the application be deployed for best inter-node performance?
2. In a cluster placement group
A cluster placement group provides low latency and high throughput for instances deployed in a single AZ. It is the best way to provide the performance required for this application.
Reference:
Placement groups for your Amazon EC2 instances
An application has been migrated to Amazon EC2 Linux instances. The EC2 instances run several 1-hour tasks on a schedule. There is no common programming language among these tasks, as they were written by different teams. Currently, these tasks run on a single instance, which raises concerns about performance and scalability. To resolve these concerns, a solutions architect must implement a solution.
Which solution will meet these requirements with the LEAST operational overhead?
4. Create an Amazon Machine Image (AMI) of the EC2 instance that runs the tasks. Create an Auto Scaling group with the AMI to run multiple copies of the instance.
The best solution is to create an AMI of the EC2 instance and then use it as a template from which to launch additional instances in an Auto Scaling group. This resolves the performance, scalability, and redundancy concerns by allowing the EC2 instances to automatically scale and be launched across multiple Availability Zones.
Reference:
Use Elastic Load Balancing to distribute incoming application traffic in your Auto Scaling group
A company operates a three-tier architecture for their online order processing system. The architecture includes EC2 instances in the web tier behind an Application Load Balancer, EC2 instances in the processing tier, and Amazon DynamoDB for storage. To decouple the web and processing tiers, the company uses Amazon Simple Queue Service (Amazon SQS).
During peak demand, some customers experience delays or failures in order processing. At these times, the EC2 instances in the processing tier reach 100% CPU utilization, and the SQS queue length increases significantly. These peak periods are unpredictable.
What should the company do to improve the application’s performance?
3. Configure an Amazon EC2 Auto Scaling target tracking policy for the processing tier instances. Use the SQS ApproximateNumberOfMessages metric to dynamically scale the tier based on queue length.
Target tracking policies allow Auto Scaling to dynamically adjust the number of processing tier instances based on real-time conditions. Scaling on the SQS ApproximateNumberOfMessages metric ensures the tier scales out as the queue length grows, preventing CPU exhaustion and processing delays during the unpredictable peaks.
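One common way to set up such a policy is to track a backlog-per-instance figure derived from ApproximateNumberOfMessages. A sketch of that arithmetic (the acceptable-backlog figure of 100 messages per instance is an assumption for illustration):

```python
import math

def desired_capacity(queue_depth, acceptable_backlog_per_instance):
    """Instances needed so that each one's share of the queue stays at or
    below the acceptable backlog (never scaling below one instance)."""
    return max(1, math.ceil(queue_depth / acceptable_backlog_per_instance))

print(desired_capacity(3000, 100))  # 30 instances during a spike
print(desired_capacity(50, 100))    # 1 instance when the queue is short
```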
A social media analytics company runs a data processing application on a single Amazon EC2 On-Demand Instance. The application is stateless and processes user behavior data in near real-time. Recently, the application has started showing performance degradation during peak times, including 5xx errors due to high traffic volumes. The company wants to implement a solution to make the application scale automatically to handle traffic spikes in a cost-effective way.
Which solution will meet these requirements MOST cost-effectively?
3. Create an Auto Scaling group using an Amazon Machine Image (AMI) of the application. Use a launch template that configures the Auto Scaling group to scale out and in based on CPU utilization. Attach an Application Load Balancer to the Auto Scaling group to distribute traffic.
Auto Scaling ensures the application can scale automatically to meet demand, while the Application Load Balancer distributes traffic across instances, improving fault tolerance. This setup is both cost-effective and aligned with the application’s stateless architecture.
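Target tracking sizes the group roughly in proportion to how far the metric is from its target; a sketch of that arithmetic with CPU utilization (the capacities and percentages are illustrative values):

```python
import math

def target_tracking_capacity(current_capacity, current_cpu, target_cpu):
    """New capacity proportional to how far CPU is from the target,
    rounded up so the group never undershoots."""
    return math.ceil(current_capacity * current_cpu / target_cpu)

print(target_tracking_capacity(4, 90, 50))  # 8 -- scale out to pull 90% CPU toward 50%
print(target_tracking_capacity(8, 25, 50))  # 4 -- scale in when load halves
```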
A company hosts a monolithic web application on an Amazon EC2 instance. Application users have recently reported poor performance at specific times. Analysis of Amazon CloudWatch metrics shows that CPU utilization is 100% during the periods of poor performance.
The company wants to resolve this performance issue and improve application availability.
Which combination of steps will meet these requirements MOST cost-effectively?
(Select TWO.)
1. Use AWS Compute Optimizer to obtain a recommendation for an instance type to scale vertically.
5. Create an Auto Scaling group and an Application Load Balancer to scale horizontally.
AWS Compute Optimizer can suggest a more appropriate EC2 instance type with adequate resources for improved performance when scaling vertically.
Horizontal scaling improves application availability by adding multiple EC2 instances. The Application Load Balancer ensures traffic is distributed evenly across instances.
A company operates an e-commerce application hosted on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB). Customer transactions and order information are stored in an Amazon Aurora PostgreSQL DB cluster. The company wants to implement a disaster recovery (DR) plan to prepare for Region-wide outages. The DR solution must provide a recovery time objective (RTO) of 30 minutes. The DR infrastructure does not need to be operational unless the primary Region becomes unavailable.
Which solution will meet these requirements?
1. Deploy the DR infrastructure in a second AWS Region, including an ALB and an Auto Scaling group with desired and maximum capacities set to zero. Convert the Aurora PostgreSQL DB cluster into an Aurora global database. Use Amazon Route 53 to configure active-passive failover.
This approach provides minimal operational overhead while meeting the 30-minute RTO. Aurora global database ensures replication with low latency, while Route 53 handles DNS failover.
A company’s staff connect from home office locations to administer applications using bastion hosts in a single AWS Region. The company requires a resilient bastion host architecture with minimal ongoing operational overhead.
How can a Solutions Architect best meet these requirements?
1. Create a Network Load Balancer backed by an Auto Scaling group with instances in multiple Availability Zones.
Bastion hosts (aka “jump hosts”) are EC2 instances in public subnets that administrators and operations staff can connect to from the internet. From the bastion host they are then able to connect to other instances and applications within AWS by using internal routing within the VPC.
All of the answer options use a Network Load Balancer, which is suitable for forwarding incoming connections to targets. The differences lie in where the connections are forwarded. The best option is to create an Auto Scaling group with EC2 instances in multiple Availability Zones. This creates a resilient architecture within a single AWS Region, which is exactly what the question asks for.
Reference:
Auto Scaling benefits for application architecture
A company runs a containerized application on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster using a microservices architecture. The company requires a solution to collect, aggregate, and summarize metrics and logs. The solution should provide a centralized dashboard for viewing information including CPU and memory utilization for EKS namespaces, services, and pods.
Which solution meets these requirements?
1. Configure Amazon CloudWatch Container Insights in the existing EKS cluster. View the metrics and logs in the CloudWatch console.
Use CloudWatch Container Insights to collect, aggregate, and summarize metrics and logs from your containerized applications and microservices. Container Insights is available for Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), and Kubernetes platforms on Amazon EC2.
With Container Insights for EKS you can see the top contributors by memory or CPU, or the most recently active resources, by selecting the relevant dashboard from the drop-down box near the top of the Container Insights page.
Reference:
Container Insights
A company has deployed an application that consists of several microservices running on Amazon EC2 instances behind an Amazon API Gateway API. A Solutions Architect is concerned that the microservices are not designed to elastically scale when large increases in demand occur.
Which solution addresses this concern?
1. Create an Amazon SQS queue to store incoming requests. Configure the microservices to retrieve the requests from the queue for processing.
The individual microservices are not designed to scale. Therefore, the best way to ensure they are not overwhelmed by requests is to decouple the requests from the microservices. An Amazon SQS queue can be created, and the API Gateway can be configured to add incoming requests to the queue. The microservices can then pick up the requests from the queue when they are ready to process them.
Reference:
Understanding asynchronous messaging for microservices
A Solutions Architect working for a large financial institution is building an application to manage customers’ financial information and sensitive personal information. The Solutions Architect requires a storage layer that stores immutable data out of the box, encrypts data at rest, and provides ACID properties. They also want a containerized solution to manage the compute layer.
Which solution will meet these requirements with the LEAST amount of operational overhead?
4. Set up an ECS cluster behind an Application Load Balancer on AWS Fargate. Use Amazon Quantum Ledger Database (QLDB) to manage the storage layer.
The solution requires that the storage layer be immutable. This immutability can only be delivered by Amazon Quantum Ledger Database (QLDB), as Amazon QLDB has a built-in immutable journal that stores an accurate and sequenced entry of every data change. The journal is append-only, meaning that data can only be added to a journal, and it cannot be overwritten or deleted.
Secondly, the compute layer must not only be containerized but also implemented with the least possible operational overhead. The option that best fits these requirements is Amazon ECS on AWS Fargate, as AWS Fargate is a serverless container deployment option.
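To make the immutability idea concrete, here is a toy append-only journal with hash chaining, in the spirit of QLDB's verifiable journal (this is an illustration of the concept, not QLDB's actual implementation):

```python
import hashlib
import json

class Journal:
    """Append-only journal: each entry's hash covers the previous entry's
    hash, so any rewrite of history breaks the chain and is detectable."""

    def __init__(self):
        self.entries = []

    def append(self, record):
        prev_hash = self.entries[-1]["hash"] if self.entries else ""
        payload = json.dumps(record, sort_keys=True)
        digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.entries.append({"record": record, "hash": digest})

    def verify(self):
        prev_hash = ""
        for entry in self.entries:
            payload = json.dumps(entry["record"], sort_keys=True)
            expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
            if expected != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

journal = Journal()
journal.append({"account": "A-1", "balance": 100})
journal.append({"account": "A-1", "balance": 250})
print(journal.verify())                      # True
journal.entries[0]["record"]["balance"] = 1  # tampering breaks the chain
print(journal.verify())                      # False
```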
Reference:
Amazon Quantum Ledger Database (QLDB) features
A company operates a production environment on Amazon EC2 instances. The instances are required to run continuously from Tuesday to Sunday without interruptions. On Mondays, the instances are needed for only 8 hours, and they also cannot tolerate interruptions. The company wants to implement a cost-effective solution to optimize EC2 usage while meeting these requirements.
Which solution will provide the MOST cost-effective results?
1. Purchase Standard Reserved Instances for the EC2 instances that operate continuously from Tuesday to Sunday. Use Scheduled Reserved Instances for the EC2 instances that run for 8 hours on Mondays.
Standard Reserved Instances provide cost savings for long-term, predictable workloads, like the continuous operation from Tuesday to Sunday. Scheduled Reserved Instances are ideal for predictable workloads with fixed schedules, like the 8-hour workload on Mondays, offering savings while ensuring uninterrupted operation.
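A back-of-the-envelope weekly usage sketch makes the split clear (the hourly rate and discount below are made-up placeholders, not actual AWS pricing):

```python
# Weekly usage shape from the scenario: continuous Tuesday-Sunday plus
# 8 hours on Monday.
HOURS_TUE_TO_SUN = 6 * 24
HOURS_MONDAY = 8

on_demand_rate = 0.10     # assumed $/hour, placeholder
reserved_discount = 0.40  # assumed effective discount vs On-Demand, placeholder

weekly_hours = HOURS_TUE_TO_SUN + HOURS_MONDAY
on_demand_cost = weekly_hours * on_demand_rate
reserved_cost = weekly_hours * on_demand_rate * (1 - reserved_discount)

print(f"{weekly_hours} billed hours per week")
print(f"On-Demand: ${on_demand_cost:.2f}/week")
print(f"Reserved:  ${reserved_cost:.2f}/week")
```

With 152 of 168 hours in use each week, the workload is predictable and near-continuous, which is exactly where reservation-based pricing pays off.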
A video editing company processes high-resolution footage for its clients. Each video file is several terabytes in size and needs to undergo intensive editing, such as applying filters and color grading, before delivery. Processing each video takes up to 25 minutes.
The company needs a solution that can scale to handle increased demand during peak periods while remaining cost-effective. The processed videos must be accessible for a minimum of 90 days.
Which solution will meet these requirements?
3. Use AWS Batch to orchestrate video editing jobs on Spot Instances. Store metadata in Amazon ElastiCache for Redis and processed video files in Amazon S3 Intelligent-Tiering.
AWS Batch handles batch processing workloads efficiently, and Spot Instances reduce compute costs. ElastiCache ensures low-latency metadata access during processing, and S3 Intelligent-Tiering reduces storage costs while keeping processed videos accessible.
A gaming company recently launched a multiplayer gaming platform for its users. The platform runs on multiple Amazon EC2 instances across two Availability Zones. Players use TCP to communicate with the platform in real time. The platform must be highly available and automatically scale as the number of players increases, while remaining cost-effective.
Which combination of steps will meet these requirements MOST cost-effectively?
(Select TWO.)
2. Configure an Auto Scaling group to add or remove EC2 instances based on player traffic.
4. Add a Network Load Balancer in front of the EC2 instances to manage TCP traffic.
An Auto Scaling group automatically adjusts the number of EC2 instances in response to traffic changes. This ensures cost efficiency by scaling out during high demand and scaling in during low demand. A Network Load Balancer operates at the connection level (Layer 4) and is designed to handle TCP traffic with very low latency, making it the right choice for distributing the players’ real-time TCP connections across the instances.
A retail company runs an on-premises application that uses Java Spring Boot on Windows servers. The application is resource-intensive and handles customer-facing operations. The company wants to modernize the application by migrating it to a containerized environment running on AWS. The new solution must automatically scale based on Amazon CloudWatch metrics and minimize operational overhead for managing infrastructure.
Which solution will meet these requirements with the LEAST operational overhead?
3. Use AWS App Runner to containerize the application. Use App Runner to automatically deploy and manage the application without using ECS or EC2.
App Runner simplifies deployment and management of containerized applications. It automatically scales based on demand and integrates with CloudWatch, minimizing operational overhead.
A company is migrating its legacy customer support applications from an on-premises data center to AWS. Each application runs on a dedicated virtual machine and relies on proprietary software that cannot be modified. The applications must remain highly available and continue to operate in the event of a single Availability Zone failure. The company wants to minimize changes to its architecture and operational overhead.
Which solution will meet these requirements?
1. Create an Amazon Machine Image (AMI) for each application. Launch two EC2 instances for each application in different Availability Zones. Use an Application Load Balancer to distribute traffic evenly between the instances.
This solution provides high availability by deploying each application’s instances across different Availability Zones and uses an Application Load Balancer to distribute traffic evenly between them, all without modifying the applications themselves.
Reference:
What is an Application Load Balancer?