A web application allows users to upload photos and add graphical elements to them. The application offers two tiers of service: free and paid. Photos uploaded by paid users should be processed before those submitted using the free tier. The photos are uploaded to an Amazon S3 bucket which uses an event notification to send the job information to Amazon SQS.
How should a Solutions Architect configure the Amazon SQS deployment to meet these requirements?
3. Use a separate SQS Standard queue for each tier. Configure Amazon EC2 instances to prioritize polling for the paid queue over the free queue.
AWS recommends using separate queues when you need to provide prioritization of work. The logic can then be implemented at the application layer to prioritize the queue for the paid photos over the queue for the free photos.
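The application-layer logic described above can be sketched as a small polling function. This is a minimal sketch, not the application's actual code: `sqs` stands in for any client exposing `receive_message` (for example `boto3.client("sqs")`), and the queue URLs are hypothetical placeholders.

```python
def receive_prioritized(sqs, paid_queue_url, free_queue_url, max_messages=10):
    """Poll the paid queue first; fall back to the free queue only when
    the paid queue returns no messages, so paid work is always drained
    ahead of free-tier work."""
    resp = sqs.receive_message(
        QueueUrl=paid_queue_url, MaxNumberOfMessages=max_messages
    )
    messages = resp.get("Messages", [])
    if messages:
        return paid_queue_url, messages
    resp = sqs.receive_message(
        QueueUrl=free_queue_url, MaxNumberOfMessages=max_messages
    )
    return free_queue_url, resp.get("Messages", [])
```

Each EC2 worker would call this in its processing loop, so free-tier messages are only consumed when the paid queue is empty.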
Reference:
What is Amazon Simple Queue Service?
Save time with our AWS cheat sheets.
An eCommerce application consists of three tiers. The web tier includes EC2 instances behind an Application Load Balancer, the middle tier uses EC2 instances and an Amazon SQS queue to process orders, and the database tier consists of a DynamoDB table with auto scaling enabled. During busy periods customers have complained about delays in the processing of orders. A Solutions Architect has been tasked with reducing processing times.
Which action will be MOST effective in accomplishing this requirement?
2. Use Amazon EC2 Auto Scaling to scale out the middle tier instances based on the SQS queue depth.
The most likely cause of the processing delays is insufficient instances in the middle tier where the order processing takes place. The most effective solution to reduce processing times in this case is to scale based on the backlog per instance (number of messages in the SQS queue) as this reflects the amount of work that needs to be done.
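The backlog-per-instance calculation from the AWS scaling guidance referenced below can be expressed directly. This is an illustrative sketch of the arithmetic, not production code; the latency target and per-message processing time are example inputs you would measure for your own workload.

```python
def backlog_per_instance(queue_depth, running_instances):
    """Queue depth (ApproximateNumberOfMessages) divided by the number
    of running middle-tier instances."""
    if running_instances == 0:
        return float(queue_depth)
    return queue_depth / running_instances

def acceptable_backlog(latency_target_s, seconds_per_message):
    """The backlog one instance can clear within the latency target."""
    return latency_target_s / seconds_per_message

def should_scale_out(queue_depth, running_instances,
                     latency_target_s, seconds_per_message):
    """Scale out when the actual backlog per instance exceeds what an
    instance can process within the latency target."""
    return backlog_per_instance(queue_depth, running_instances) > \
        acceptable_backlog(latency_target_s, seconds_per_message)
```

For example, with 3,000 queued orders across 10 instances, a 100-second latency target, and 0.5 seconds per order, the backlog per instance (300) exceeds the acceptable backlog (200), so the group should scale out.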
Reference:
Scaling policy based on Amazon SQS
There are two applications in a company: a sender application that sends messages containing payloads, and a processing application that receives and processes those messages. The company wants to implement an AWS service to handle messaging between the two applications. The sender application sends on average 1,000 messages each hour and, depending on the type, messages can take up to 2 days to process. Messages that fail to process must be retained so that they do not impact the processing of any remaining messages.
Which solution meets these requirements and is the MOST operationally efficient?
3. Provide an Amazon Simple Queue Service (Amazon SQS) queue for the sender and processor applications. Set up a dead-letter queue to collect failed messages.
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message-oriented middleware and empowers developers to focus on differentiating work.
Reference:
Amazon Simple Queue Service
A logistics company processes real-time sensor data from delivery vehicles to optimize routes and track vehicle health. The current architecture includes an Auto Scaling group of Amazon EC2 instances for ingesting and storing sensor data, and a separate Auto Scaling group for analyzing and generating route optimizations based on this data.
The company has observed performance issues during peak delivery hours when the rate of data ingestion is significantly higher than the analysis and processing rate. The company wants to ensure that both systems can scale independently, and no data is lost during scaling events.
Which solution will meet these requirements?
1. Use two Amazon Simple Queue Service (Amazon SQS) queues: one for data ingestion and one for route analysis. Configure the EC2 instances to poll their respective queues and scale the Auto Scaling groups based on the ApproximateNumberOfMessages metric in each queue.
SQS decouples the ingestion and analysis processes, ensuring no data is lost during traffic spikes. Scaling the Auto Scaling groups based on the queue length allows for efficient scaling that matches the workload demand for each process.
Reference:
Scaling policy based on Amazon SQS
A fintech company is modernizing its payments processing system to adopt a serverless microservices architecture. The company wants to decouple its services and implement an event-driven architecture to support a publish/subscribe (pub/sub) model. The system needs to notify multiple downstream services when payment events occur, ensuring scalability and low operational overhead.
Which solution will meet these requirements MOST cost-effectively?
3. Configure an Amazon SNS topic to receive payment events from an AWS Lambda function. Set up multiple subscribers, such as Lambda functions, to process the events.
SNS is a fully managed pub/sub messaging service that supports high-throughput, scalable delivery to multiple subscribers, which aligns with the company’s requirements.
A company is launching a new photo processing service that uses machine learning (ML) models to analyze and tag images. The service consists of independent microservices for different types of image processing tasks. Each ML model loads approximately 500 MB of data from Amazon S3 into memory at startup.
Users will submit images through a RESTful API, which can handle individual or batch requests. Traffic patterns are unpredictable, with peaks during marketing campaigns and minimal usage during off-hours. The company needs a scalable and cost-effective solution to manage this workload.
Which solution will meet these requirements?
3. Send the API requests to an Amazon Simple Queue Service (Amazon SQS) queue. Deploy the ML models as Amazon Elastic Container Service (Amazon ECS) services that read messages from the queue. Use auto scaling for ECS to adjust capacity based on queue length.
This solution effectively decouples the API from the backend processing with SQS, ensuring scalability during peaks. ECS services handle the asynchronous nature of requests, and auto scaling adjusts resources based on queue traffic.
A transportation company uses GPS devices installed on its fleet of delivery trucks to monitor their location in real time. Each GPS device sends location updates every 5 minutes if the truck has traveled more than 100 meters. The data is transmitted to a web application running on three Amazon EC2 instances deployed across multiple Availability Zones in a single AWS Region.
Recently, during a peak delivery period, the web application was overwhelmed by the increased volume of GPS data, leading to data loss with no way to replay the events. The company wants to ensure that no location data is lost and that the application can scale efficiently to handle traffic spikes, all with minimal operational overhead.
What should the solutions architect do to meet these requirements?
3. Use an Amazon Simple Queue Service (Amazon SQS) queue to store the incoming GPS data. Modify the application to poll the queue for new messages and process the data.
SQS decouples the data ingestion process from the application. It ensures that no data is lost during traffic spikes and allows the application to process data at its own pace, minimizing the risk of being overwhelmed.
An application has multiple components: some receive requests and others subsequently process them. The company requires a solution for decoupling the application components. The application receives around 10,000 requests per day and requests can take up to 2 days to process. Requests that fail to process must be retained.
Which solution meets these requirements most efficiently?
2. Decouple the application components with an Amazon SQS queue. Configure a dead-letter queue to collect the requests that failed to process.
Amazon Simple Queue Service (SQS) is ideal for decoupling the application components. Standard queues support up to 120,000 in-flight messages, and messages can be retained in the queue for up to 14 days.
To ensure the retention of requests (messages) that fail to process, a dead-letter queue can be configured. Messages that fail to process are sent to the dead-letter queue (based on the redrive policy) and can be subsequently dealt with.
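The redrive policy mentioned above is a JSON document attached to the source queue. This is a hedged sketch: the DLQ ARN is a placeholder and `maxReceiveCount=3` is an example value, meaning a message that fails three receives is moved to the dead-letter queue.

```python
import json

def redrive_policy(dlq_arn, max_receive_count=3):
    """Build the RedrivePolicy JSON for a source queue: after
    max_receive_count failed receives, SQS moves the message to the
    dead-letter queue identified by dlq_arn."""
    return json.dumps({
        "deadLetterTargetArn": dlq_arn,
        "maxReceiveCount": str(max_receive_count),
    })

# In practice the string would be applied as a queue attribute, e.g.:
# sqs.set_queue_attributes(
#     QueueUrl=source_queue_url,
#     Attributes={"RedrivePolicy": redrive_policy(dlq_arn)})
```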
A web app allows users to upload images for viewing online. The compute layer that processes the images is behind an Auto Scaling group. The processing layer should be decoupled from the front end and the ASG needs to dynamically adjust based on the number of images being uploaded.
How can this be achieved?
3. Create an Amazon SQS queue and custom CloudWatch metric to measure the number of messages in the queue. Configure the ASG to scale based on the number of messages in the queue
The best solution is to use Amazon SQS to decouple the front end from the processing compute layer. To do this you can create a custom CloudWatch metric that measures the number of messages in the queue and then configure the ASG to scale using a target tracking policy that maintains the metric at a target value.
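The target tracking policy described above can be sketched as a configuration fragment. This is illustrative only: the metric name, namespace, dimension values, and target value are assumptions you would replace with your own custom metric.

```python
# Hypothetical target tracking configuration keyed to a custom
# backlog-per-instance metric; all names and the target are examples.
target_tracking_config = {
    "TargetValue": 100.0,  # acceptable backlog per instance
    "CustomizedMetricSpecification": {
        "MetricName": "BacklogPerInstance",   # assumed custom metric
        "Namespace": "ImageApp",              # assumed namespace
        "Dimensions": [{"Name": "QueueName", "Value": "image-uploads"}],
        "Statistic": "Average",
    },
}

# The configuration would be attached with something like:
# autoscaling.put_scaling_policy(
#     AutoScalingGroupName="image-workers",
#     PolicyName="backlog-target",
#     PolicyType="TargetTrackingScaling",
#     TargetTrackingConfiguration=target_tracking_config)
```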
Reference:
Scaling policy based on Amazon SQS
A company is migrating a decoupled application to AWS. The application uses a message broker based on the MQTT protocol. The application will be migrated to Amazon EC2 instances and the solution for the message broker must not require rewriting application code.
Which AWS service can be used for the migrated message broker?
3. Amazon MQ
Amazon MQ is a managed message broker service for Apache ActiveMQ that makes it easy to set up and operate message brokers in the cloud. Connecting current applications to Amazon MQ is easy because it uses industry-standard APIs and protocols for messaging, including JMS, NMS, AMQP, STOMP, MQTT, and WebSocket. Using standards means that in most cases, there’s no need to rewrite any messaging code when you migrate to AWS.
Reference:
Amazon MQ
A Solutions Architect is designing an application that will run on an Amazon EC2 instance. The application must asynchronously invoke an AWS Lambda function to analyze thousands of .csv files. The services should be decoupled.
Which service can be used to decouple the compute services?
2. Amazon SNS
You can use a Lambda function to process Amazon Simple Notification Service notifications. Amazon SNS supports Lambda functions as a target for messages sent to a topic. This solution decouples the Amazon EC2 application from Lambda and ensures the Lambda function is invoked.
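The wiring the explanation describes can be sketched as the parameters for an SNS subscription. This is a minimal sketch: the topic and function ARNs are placeholders, and the dict mirrors the arguments you would pass to `sns.subscribe()`.

```python
def build_subscription_request(topic_arn, function_arn):
    """Parameters for subscribing a Lambda function to an SNS topic;
    SNS then invokes the function asynchronously for each message
    published to the topic."""
    return {
        "TopicArn": topic_arn,       # placeholder topic ARN
        "Protocol": "lambda",        # deliver to a Lambda function
        "Endpoint": function_arn,    # placeholder function ARN
    }

# e.g. sns.subscribe(**build_subscription_request(topic_arn, function_arn))
# The EC2 application then publishes with sns.publish(TopicArn=..., Message=...)
```

Note that the Lambda function also needs a resource-based permission allowing SNS to invoke it.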
Reference:
Invoking Lambda functions with Amazon SNS notifications
An application is being monitored using Amazon GuardDuty. A Solutions Architect needs to be notified by email of medium to high severity events.
How can this be achieved?
2. Create an Amazon CloudWatch Events rule that triggers an Amazon SNS topic
A CloudWatch Events (now Amazon EventBridge) rule can be used to set up automatic email notifications for medium to high severity findings to the email address of your choice. You simply create an Amazon SNS topic and then associate it with a CloudWatch Events rule.
Note: step by step procedures for how to set this up can be found in the article linked in the references below.
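The event pattern for such a rule can be sketched as follows. This is an assumption-laden illustration: GuardDuty scores medium findings roughly 4.0–6.9 and high findings 7.0–8.9, so the pattern below matches severities from 4 up to (but not including) 9; the rule name is a placeholder.

```python
import json

# Hypothetical EventBridge event pattern matching GuardDuty findings
# in the medium-to-high severity range (4.0 <= severity < 9).
event_pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    "detail": {"severity": [{"numeric": [">=", 4, "<", 9]}]},
}

# The pattern would be supplied to the rule, with the SNS topic as the
# rule target, e.g.:
# events.put_rule(Name="guardduty-medium-high",
#                 EventPattern=json.dumps(event_pattern))
# events.put_targets(Rule="guardduty-medium-high",
#                    Targets=[{"Id": "email", "Arn": sns_topic_arn}])
```

An email subscription on the SNS topic then delivers the matched findings to the chosen address.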
Reference:
Processing GuardDuty findings with Amazon EventBridge
An application running on Amazon EC2 needs to asynchronously invoke an AWS Lambda function to perform data processing. The services should be decoupled.
Which service can be used to decouple the compute services?
2. Amazon SNS
You can use a Lambda function to process Amazon Simple Notification Service notifications. Amazon SNS supports Lambda functions as a target for messages sent to a topic. This solution decouples the Amazon EC2 application from Lambda and ensures the Lambda function is invoked.
A Solutions Architect has been tasked with re-deploying an application running on AWS to enable high availability. The application processes messages that are received in an ActiveMQ queue running on a single Amazon EC2 instance. Messages are then processed by a consumer application running on Amazon EC2. After processing the messages the consumer application writes results to a MySQL database running on Amazon EC2.
Which architecture offers the highest availability and low operational complexity?
4. Deploy Amazon MQ with active/standby brokers configured across two Availability Zones. Create an Auto Scaling group for the consumer EC2 instances across two Availability Zones. Use an Amazon RDS MySQL database with Multi-AZ enabled.
The correct answer offers the highest availability as it includes Amazon MQ active/standby brokers across two AZs, an Auto Scaling group across two AZs, and a Multi-AZ Amazon RDS MySQL database deployment.
This architecture not only offers the highest availability but is also operationally simple, as it maximizes the use of managed services.
Reference:
AWS Well-Architected
A solutions architect is designing an application on AWS. The compute layer will run in parallel across EC2 instances. The compute layer should scale based on the number of jobs to be processed. The compute layer is stateless. The solutions architect must ensure that the application is loosely coupled and the job items are durably stored.
Which design should the solutions architect use?
3. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue
In this case we need to find a durable and loosely coupled solution for storing jobs. Amazon SQS is ideal for this use case and can be configured to use dynamic scaling based on the number of jobs waiting in the queue.
Reference:
Scaling policy based on Amazon SQS
A new application will run across multiple Amazon ECS tasks. Front-end application logic will process data and then pass that data to a back-end ECS task to perform further processing and write the data to a datastore. The Architect would like to reduce interdependencies so failures do not impact other components.
Which solution should the Architect use?
4. Create an Amazon SQS queue and configure the front-end to add messages to the queue and the back-end to poll the queue for messages
This is a good use case for Amazon SQS. SQS is a service that is used for decoupling applications, thus reducing interdependencies, through a message bus. The front-end application can place messages on the queue and the back-end can then poll the queue for new messages. Please remember that Amazon SQS is pull-based (polling) not push-based (use SNS for push-based).
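The back-end's poll-process-delete loop can be sketched as below. This is a minimal sketch, assuming `sqs` is any client exposing `receive_message` and `delete_message` (for example `boto3.client("sqs")`); the queue URL is a placeholder.

```python
def drain_queue(sqs, queue_url, handler):
    """One iteration of a back-end worker loop: receive a batch using
    long polling, process each message, and delete a message only after
    it is handled successfully so failures are redelivered."""
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,  # long polling reduces empty responses
    )
    processed = 0
    for msg in resp.get("Messages", []):
        handler(msg["Body"])
        sqs.delete_message(
            QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"]
        )
        processed += 1
    return processed
```

The front-end simply calls `sqs.send_message(QueueUrl=..., MessageBody=...)`; neither side needs to know the other exists, which is the interdependency reduction the question asks for.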
Reference:
What is Amazon Elastic Container Service?
An automotive company plans to implement IoT sensors in manufacturing equipment that will send data to AWS in real time. The solution must receive events in an ordered manner from each asset and ensure that the data is saved for future processing.
Which solution would be MOST efficient?
1. Use Amazon Kinesis Data Streams for real-time events with a partition for each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon S3.
Amazon Kinesis Data Streams is the ideal service for receiving streaming data. The Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the same record processor, making it easier to build multiple applications reading from the same Amazon Kinesis data stream. Therefore, a separate partition key (rather than a separate shard) should be used for each equipment asset.
Amazon Kinesis Data Firehose can be used to receive streaming data from Data Streams and then load the data into Amazon S3 for future processing.
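The ordering guarantee above rests on how Kinesis routes records: the partition key is MD5-hashed into a 128-bit keyspace and every record with the same key lands on the same shard. The sketch below approximates that routing for illustration only (it assumes shards split the keyspace evenly, which real resharded streams need not do).

```python
import hashlib

def shard_for_key(partition_key, num_shards):
    """Approximate Kinesis shard routing: MD5-hash the partition key
    into the 128-bit keyspace, then map it onto one of num_shards
    evenly sized hash ranges. Records sharing a key always map to the
    same shard, which is what preserves per-asset ordering."""
    h = int(hashlib.md5(partition_key.encode("utf-8")).hexdigest(), 16)
    return h * num_shards // (1 << 128)
```

A producer would put records with the asset ID as the key, e.g. `kinesis.put_record(StreamName=..., PartitionKey="asset-42", Data=...)`, so all events from one asset stay ordered within their shard.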
An IoT sensor is being rolled out to thousands of a company’s existing customers. The sensors will stream high volumes of data each second to a central location. A solution must be designed to ingest and store the data for analytics. The solution must provide near-real time performance and millisecond responsiveness.
Which solution should a Solutions Architect recommend?
2. Ingest the data into an Amazon Kinesis Data Stream. Process the data with an AWS Lambda function and then store the data in Amazon DynamoDB.
A Kinesis data stream is a set of shards. Each shard contains a sequence of data records. A consumer is an application that processes the data from a Kinesis data stream. You can map a Lambda function to a shared-throughput consumer (standard iterator), or to a dedicated-throughput consumer with enhanced fan-out.
Amazon DynamoDB is the best database for this use case as it supports near-real time performance and millisecond responsiveness.
Reference:
How Lambda processes records from Amazon Kinesis Data Streams
An e-commerce company has developed a new application which has been successfully deployed on AWS. For an upcoming sale, the company is expecting a huge rise in traffic and while testing for the event they have encountered performance issues in the application when many requests are sent to the application.
The current application stack is an Amazon Aurora PostgreSQL database with an AWS Lambda compute layer fronted by Amazon API Gateway. A solutions architect must recommend improvements to scalability while minimizing configuration effort.
Which solution will meet these requirements?
4. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using an Amazon Simple Queue Service (Amazon SQS) queue.
With Amazon SQS, you can offload tasks from one component of your application by sending them to a queue and processing them asynchronously. Lambda polls the queue and invokes your Lambda function synchronously with an event that contains the message from the SQS queue. This solution improves scalability as the message bus decouples the processing components of the application meaning it is less likely that the application will suffer outages or lost data.
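The second Lambda function from the answer can be sketched as follows. This is a hypothetical illustration: the SQS event source mapping delivers a batch under `event["Records"]`, and the database write is stubbed out via an injected callable rather than a real Aurora connection.

```python
import json

def handler(event, context=None, write_to_db=lambda row: None):
    """Process the batch of SQS records Lambda delivers; each record's
    body is the JSON payload the first function queued. write_to_db is
    a stand-in for the real database insert."""
    for record in event["Records"]:
        payload = json.loads(record["body"])
        write_to_db(payload)
    return {"processed": len(event["Records"])}
```

If any record raises, Lambda returns the batch (or, with batch item failure reporting enabled, just the failed items) to the queue for retry, which is what protects the application against lost data during traffic spikes.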
Reference:
Using Lambda with Amazon SQS
An application makes calls to a REST API running on Amazon EC2 instances behind an Application Load Balancer (ALB). Most API calls complete quickly. However, a single endpoint is making API calls that require much longer to complete and this is introducing overall latency into the system.
What steps can a Solutions Architect take to minimize the effects of the long-running API calls?
2. Create an Amazon SQS queue and decouple the long-running API calls
An Amazon Simple Queue Service (SQS) queue can be used to offload and decouple the long-running requests. They can then be processed asynchronously by separate EC2 instances. This is the best way to reduce the overall latency introduced by the long-running API calls.
Reference:
What is Amazon Simple Queue Service?