Application Integration Flashcards

Architect decoupled systems using AWS messaging and event services to enable scalable and resilient applications. (20 cards)

19
Q

A web application allows users to upload photos and add graphical elements to them. The application offers two tiers of service: free and paid. Photos uploaded by paid users should be processed before those submitted using the free tier. The photos are uploaded to an Amazon S3 bucket which uses an event notification to send the job information to Amazon SQS.

How should a Solutions Architect configure the Amazon SQS deployment to meet these requirements?

  1. Use one SQS standard queue. Use batching for the paid photos and short polling for the free photos.
  2. Use a separate SQS FIFO queue for each tier. Set the free queue to use short polling and the paid queue to use long polling.
  3. Use a separate SQS Standard queue for each tier. Configure Amazon EC2 instances to prioritize polling for the paid queue over the free queue.
  4. Use one SQS FIFO queue. Assign a higher priority to the paid photos so they are processed first.
A

3. Use a separate SQS Standard queue for each tier. Configure Amazon EC2 instances to prioritize polling for the paid queue over the free queue.

AWS recommends using separate queues when you need to prioritize work. The prioritization logic can then be implemented at the application layer to poll the queue for the paid photos ahead of the queue for the free photos.

  • FIFO queues preserve the order of messages but do not prioritize messages within the queue. The messages would need to be placed into the queue in priority order, and there is no way to do this because they are sent automatically by Amazon S3 event notifications as the photos are received.
  • Batching adds efficiency but it has nothing to do with ordering or priority.
  • Short polling and long polling are used to control the amount of time the consumer process waits before closing the API call and trying again. Polling should be configured for efficiency of API calls and processing of messages but does not help with message prioritization.
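
The application-layer prioritization described above can be sketched with a boto3-style SQS client (the client is injected rather than created here, and the queue URLs are illustrative):

```python
# sqs = boto3.client("sqs")  # assumed setup; the client is passed in below

def receive_prioritized(sqs, paid_queue_url, free_queue_url):
    """Poll the paid queue first; fall back to the free queue only
    when no paid messages are waiting."""
    for url in (paid_queue_url, free_queue_url):
        resp = sqs.receive_message(
            QueueUrl=url,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=1,  # keep the paid check short so free work still flows
        )
        messages = resp.get("Messages", [])
        if messages:
            return url, messages
    return None, []
```

Each consumer loop drains paid work before touching free work, which is exactly the "prioritize polling for the paid queue" behavior the answer describes.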

Reference:
What is Amazon Simple Queue Service?

Save time with our AWS cheat sheets.

20
Q

An eCommerce application consists of three tiers. The web tier includes EC2 instances behind an Application Load Balancer, the middle tier uses EC2 instances and an Amazon SQS queue to process orders, and the database tier consists of a DynamoDB table with auto scaling enabled. During busy periods customers have complained about delays in the processing of orders. A Solutions Architect has been tasked with reducing processing times.

Which action will be MOST effective in accomplishing this requirement?

  1. Replace the Amazon SQS queue with Amazon Kinesis Data Firehose.
  2. Use Amazon EC2 Auto Scaling to scale out the middle tier instances based on the SQS queue depth.
  3. Use Amazon DynamoDB Accelerator (DAX) in front of the DynamoDB backend tier.
  4. Add an Amazon CloudFront distribution with a custom origin to cache the responses for the web tier.
A

2. Use Amazon EC2 Auto Scaling to scale out the middle tier instances based on the SQS queue depth.

The most likely cause of the processing delays is insufficient instances in the middle tier where the order processing takes place. The most effective solution to reduce processing times in this case is to scale based on the backlog per instance (number of messages in the SQS queue) as this reflects the amount of work that needs to be done.

  • The issue is not the efficiency of queuing messages but the processing of the messages. In this case scaling the EC2 instances to reflect the workload is a better solution.
  • The DynamoDB table is configured with Auto Scaling so this is not likely to be the bottleneck in order processing.
  • This will cache media files to speed up web response times but not order processing times as they take place in the middle tier.
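
One way to implement this scaling, sketched with a boto3-style Auto Scaling client, is a target tracking policy on the `AWS/SQS` `ApproximateNumberOfMessagesVisible` metric. The ASG and queue names are illustrative; note that AWS recommends a custom backlog-per-instance metric over raw queue depth for production workloads:

```python
def attach_queue_depth_policy(autoscaling, asg_name, queue_name, target_backlog):
    """Target tracking policy that keeps the average number of visible
    SQS messages near target_backlog by scaling the ASG in and out."""
    return autoscaling.put_scaling_policy(
        AutoScalingGroupName=asg_name,
        PolicyName="scale-on-queue-depth",   # illustrative name
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "CustomizedMetricSpecification": {
                "MetricName": "ApproximateNumberOfMessagesVisible",
                "Namespace": "AWS/SQS",
                "Dimensions": [{"Name": "QueueName", "Value": queue_name}],
                "Statistic": "Average",
            },
            "TargetValue": float(target_backlog),
        },
    )
```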

Reference:
Scaling policy based on Amazon SQS

21
Q

A company has two applications: a sender application that sends messages containing payloads, and a processing application that receives those messages. The company wants to implement an AWS service to handle messages between these two applications. The sender application sends on average 1,000 messages each hour and, depending on the type, messages can take up to 2 days to process. If messages fail to process, they must be retained so that they do not impact the processing of the remaining messages.

Which solution meets these requirements and is the MOST operationally efficient?

  1. Set up a Redis database on Amazon EC2. Configure the instance to be used by both applications. The messages should be stored, processed, and deleted, respectively.
  2. Receive the messages from the sender application using an Amazon Kinesis data stream. Utilize the Kinesis Client Library (KCL) to integrate the processing application.
  3. Provide an Amazon Simple Queue Service (Amazon SQS) queue for the sender and processor applications. Set up a dead-letter queue to collect failed messages.
  4. Subscribe the processing application to an Amazon Simple Notification Service (Amazon SNS) topic to receive notifications. Write to the SNS topic using the sender application.
A

3. Provide an Amazon Simple Queue Service (Amazon SQS) queue for the sender and processor applications. Set up a dead-letter queue to collect failed messages.

Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message-oriented middleware and empowers developers to focus on differentiating work.

  • Setting up and operating a Redis database on Amazon EC2 is self-managed, which adds operational overhead compared with the fully managed Amazon SQS service.
  • Amazon Kinesis Data Streams is built for real-time streaming and requires shard and consumer management, making it less operationally efficient than Amazon SQS for simple application-to-application messaging.
  • Amazon SNS is not a queuing service, but a pub-sub one to many notification service and cannot be used as a queue.
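
A minimal sketch of the queue-plus-DLQ setup with a boto3-style client (queue names are illustrative). The SQS maximums that matter here: 14 days of message retention and a 12-hour visibility timeout, so a consumer working on a message for up to 2 days must periodically extend visibility with `ChangeMessageVisibility`:

```python
import json

def create_queue_with_dlq(sqs, name, max_receive_count=3):
    """Create a work queue whose repeatedly failing messages are
    redriven to a dead-letter queue for later inspection."""
    dlq_url = sqs.create_queue(
        QueueName=f"{name}-dlq",
        Attributes={"MessageRetentionPeriod": "1209600"},  # 14 days, the SQS maximum
    )["QueueUrl"]
    dlq_arn = sqs.get_queue_attributes(
        QueueUrl=dlq_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]
    queue_url = sqs.create_queue(
        QueueName=name,
        Attributes={
            # 12 hours is the SQS maximum; 2-day jobs must extend it while working
            "VisibilityTimeout": "43200",
            "RedrivePolicy": json.dumps({
                "deadLetterTargetArn": dlq_arn,
                "maxReceiveCount": str(max_receive_count),
            }),
        },
    )["QueueUrl"]
    return queue_url, dlq_url
```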

Reference:
Amazon Simple Queue Service

22
Q

A logistics company processes real-time sensor data from delivery vehicles to optimize routes and track vehicle health. The current architecture includes an Auto Scaling group of Amazon EC2 instances for ingesting and storing sensor data, and a separate Auto Scaling group for analyzing and generating route optimizations based on this data.

The company has observed performance issues during peak delivery hours when the rate of data ingestion is significantly higher than the analysis and processing rate. The company wants to ensure that both systems can scale independently, and no data is lost during scaling events.

Which solution will meet these requirements?

  1. Use two Amazon Simple Queue Service (Amazon SQS) queues: one for data ingestion and one for route analysis. Configure the EC2 instances to poll their respective queues and scale the Auto Scaling groups based on the ApproximateNumberOfMessages metric in each queue.
  2. Replace the EC2-based Auto Scaling groups with AWS Lambda functions to process the incoming data and analysis tasks. Use Amazon DynamoDB to store the intermediate data and scale DynamoDB based on the traffic patterns.
  3. Use Amazon Kinesis Data Streams to buffer the sensor data. Configure Amazon Kinesis Data Analytics to process the data and adjust the number of EC2 instances in the analysis Auto Scaling group based on the volume of data being processed.
  4. Use two Amazon SQS queues: one for data ingestion and one for route analysis. Configure Amazon EventBridge rules to monitor queue length and scale each Auto Scaling group based on the backlog of messages in their respective queues.
A

1. Use two Amazon Simple Queue Service (Amazon SQS) queues: one for data ingestion and one for route analysis. Configure the EC2 instances to poll their respective queues and scale the Auto Scaling groups based on the ApproximateNumberOfMessages metric in each queue.

SQS decouples the ingestion and analysis processes, ensuring no data is lost during traffic spikes. Scaling the Auto Scaling groups based on the queue length allows for efficient scaling that matches the workload demand for each process.

  • Replacing EC2 instances with Lambda introduces architectural changes and operational overhead. Additionally, DynamoDB is not required when SQS provides an efficient decoupling mechanism.
  • Kinesis introduces additional complexity and costs and is not necessary for this use case. SQS already provides buffering and scaling capabilities tailored to this scenario.
  • EventBridge is not the optimal mechanism for scaling Auto Scaling groups. Scaling directly based on SQS metrics is simpler and more efficient for managing workloads in this scenario.

Reference:
Scaling policy based on Amazon SQS

23
Q

A fintech company is modernizing its payments processing system to adopt a serverless microservices architecture. The company wants to decouple its services and implement an event-driven architecture to support a publish/subscribe (pub/sub) model. The system needs to notify multiple downstream services when payment events occur, ensuring scalability and low operational overhead.

Which solution will meet these requirements MOST cost-effectively?

  1. Configure an Amazon EventBridge rule to capture payment events and route them to multiple AWS Lambda functions that handle downstream processing.
  2. Use Amazon Kinesis Data Firehose to deliver payment events to multiple S3 buckets. Configure downstream services to poll the buckets for event processing.
  3. Configure an Amazon SNS topic to receive payment events from an AWS Lambda function. Set up multiple subscribers, such as Lambda functions, to process the events.
  4. Use Amazon MQ as a message broker to enable publish/subscribe communication between the payment microservices and the downstream services.
A

3. Configure an Amazon SNS topic to receive payment events from an AWS Lambda function. Set up multiple subscribers, such as Lambda functions, to process the events.

SNS is a fully managed pub/sub messaging service that supports high-throughput, scalable delivery to multiple subscribers, which aligns with the company’s requirements.

  • EventBridge is more suited for complex event routing scenarios and not optimized for simple pub/sub use cases compared to SNS.
  • Kinesis Data Firehose is designed for streaming data delivery, not for pub/sub messaging. It adds unnecessary overhead for the described use case.
  • Amazon MQ is a managed message broker service that is typically used for legacy applications requiring protocols such as JMS or AMQP. It adds more operational complexity compared to SNS.
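
The pub/sub flow can be sketched as a publisher function plus a subscriber Lambda handler (boto3-style SNS client assumed; the topic ARN and attribute names are illustrative):

```python
import json

def publish_payment_event(sns, topic_arn, event_type, payload):
    """Publish one payment event; SNS fans it out to every subscriber."""
    return sns.publish(
        TopicArn=topic_arn,
        Message=json.dumps(payload),
        # A message attribute lets subscribers apply SNS filter policies
        MessageAttributes={
            "eventType": {"DataType": "String", "StringValue": event_type}
        },
    )

def handler(event, context):
    """Subscriber Lambda: each record carries the SNS message envelope."""
    return [json.loads(record["Sns"]["Message"]) for record in event["Records"]]
```

Subscribers that only care about some event types can attach an SNS filter policy on the `eventType` attribute instead of filtering in code.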

24
Q

A company is launching a new photo processing service that uses machine learning (ML) models to analyze and tag images. The service consists of independent microservices for different types of image processing tasks. Each ML model loads approximately 500 MB of data from Amazon S3 into memory at startup.

Users will submit images through a RESTful API, which can handle individual or batch requests. Traffic patterns are unpredictable, with peaks during marketing campaigns and minimal usage during off-hours. The company needs a scalable and cost-effective solution to manage this workload.

Which solution will meet these requirements?

  1. Route the API requests to a Network Load Balancer (NLB). Deploy the ML models as Amazon Elastic Kubernetes Service (Amazon EKS) pods. Configure auto scaling based on CPU usage for EKS nodes.
  2. Route the API requests to an Application Load Balancer (ALB). Deploy the ML models as AWS Lambda functions. Use provisioned concurrency to ensure Lambda functions remain warm for high-performance batch processing.
  3. Send the API requests to an Amazon Simple Queue Service (Amazon SQS) queue. Deploy the ML models as Amazon Elastic Container Service (Amazon ECS) services that read messages from the queue. Use auto scaling for ECS to adjust capacity based on queue length.
  4. Send the API requests to an Amazon EventBridge bus. Deploy the ML models as AWS Lambda functions that EventBridge invokes. Use auto scaling to increase memory and concurrency based on the size of the event payloads.
A

3. Send the API requests to an Amazon Simple Queue Service (Amazon SQS) queue. Deploy the ML models as Amazon Elastic Container Service (Amazon ECS) services that read messages from the queue. Use auto scaling for ECS to adjust capacity based on queue length.

This solution effectively decouples the API from the backend processing with SQS, ensuring scalability during peaks. ECS services handle the asynchronous nature of requests, and auto scaling adjusts resources based on queue traffic.

  • While scalable, this setup is more operationally complex and does not inherently manage batch processing or asynchronous workloads as well as SQS.
  • AWS Lambda is not optimal for this use case due to its constraints on memory and runtime, particularly with large ML models and batch requests.
  • EventBridge is not suitable for handling high-frequency batch image processing tasks. AWS Lambda’s limitations also make it unsuitable for this scenario.

25
Q

A transportation company uses GPS devices installed on its fleet of delivery trucks to monitor their location in real time. Each GPS device sends location updates every 5 minutes if the truck has traveled more than 100 meters. The data is transmitted to a web application running on three Amazon EC2 instances deployed across multiple Availability Zones in a single AWS Region.

Recently, during a peak delivery period, the web application was overwhelmed by the increased volume of GPS data, leading to data loss with no way to replay the events. The company wants to ensure that no location data is lost and that the application can scale efficiently to handle traffic spikes, all with minimal operational overhead.

What should the solutions architect do to meet these requirements?

  1. Use an Amazon S3 bucket to store the GPS location updates. Modify the application to periodically scan the bucket for new files and process the data.
  2. Use Amazon Kinesis Data Streams to ingest the GPS data. Configure an AWS Lambda function to process the data in real time and store results in an Amazon DynamoDB table.
  3. Use an Amazon Simple Queue Service (Amazon SQS) queue to store the incoming GPS data. Modify the application to poll the queue for new messages and process the data.
  4. Store the GPS location updates in an Amazon DynamoDB table. Modify the application to query the table for unprocessed data and process it. Use DynamoDB TTL to remove old records after processing.
A

3. Use an Amazon Simple Queue Service (Amazon SQS) queue to store the incoming GPS data. Modify the application to poll the queue for new messages and process the data.

SQS decouples the data ingestion process from the application. It ensures that no data is lost during traffic spikes and allows the application to process data at its own pace, minimizing the risk of being overwhelmed.

  • Amazon S3 is not ideal for real-time ingestion and processing. Scanning the bucket periodically introduces latency and may not be able to handle high throughput use cases efficiently.
  • Kinesis Data Streams introduces additional operational complexity and is typically suited for high throughput streaming workloads. SQS provides a simpler, lower-overhead solution for this use case.
  • Using DynamoDB as a temporary data store for this purpose is less efficient and increases complexity. SQS is specifically designed to handle message queues for this type of workload.

26
Q

An application has multiple components: some receive requests that must be processed and others subsequently process those requests. The company requires a solution for decoupling the application components. The application receives around 10,000 requests per day and requests can take up to 2 days to process. Requests that fail to process must be retained.

Which solution meets these requirements most efficiently?

  1. Create an Amazon DynamoDB table and enable DynamoDB streams. Configure the processing component to process requests from the stream.
  2. Decouple the application components with an Amazon SQS queue. Configure a dead-letter queue to collect the requests that failed to process.
  3. Use an Amazon Kinesis data stream to decouple application components and integrate the processing component with the Kinesis Client Library (KCL).
  4. Decouple the application components with an Amazon SNS topic. Configure the processing component to subscribe to the SNS topic.
A

2. Decouple the application components with an Amazon SQS queue. Configure a dead-letter queue to collect the requests that failed to process.

The Amazon Simple Queue Service (SQS) is ideal for decoupling the application components. Standard queues can support up to 120,000 in-flight messages, and messages can be retained in the queue for up to 14 days.

To ensure the retention of requests (messages) that fail to process, a dead-letter queue can be configured. Messages that fail to process are sent to the dead-letter queue (based on the redrive policy) and can be subsequently dealt with.

  • SNS does not store requests, it immediately forwards all notifications to subscribers.
  • This is a less efficient solution and will likely be less cost-effective than using Amazon SQS. There is also no option for retaining requests that fail to process.
  • This solution does not offer any way of retaining requests that fail to process or of removing items from the table, and is therefore less efficient.

27
Q

A web app allows users to upload images for viewing online. The compute layer that processes the images runs in an Auto Scaling group (ASG). The processing layer should be decoupled from the front end, and the ASG needs to adjust dynamically based on the number of images being uploaded.

How can this be achieved?

  1. Create an Amazon SNS Topic to generate a notification each time an image is uploaded. Have the ASG scale based on the number of SNS messages
  2. Create a target tracking policy that keeps the ASG at 70% CPU utilization
  3. Create an Amazon SQS queue and custom CloudWatch metric to measure the number of messages in the queue. Configure the ASG to scale based on the number of messages in the queue
  4. Create a scheduled policy that scales the ASG at times of expected peak load
A

3. Create an Amazon SQS queue and custom CloudWatch metric to measure the number of messages in the queue. Configure the ASG to scale based on the number of messages in the queue

The best solution is to use Amazon SQS to decouple the front end from the processing compute layer. To do this you can create a custom CloudWatch metric that measures the number of messages in the queue and then configure the ASG to scale using a target tracking policy that tracks a certain value.

  • The Amazon Simple Notification Service (SNS) is used for sending notifications using topics. Amazon SQS is a better solution for this scenario as it provides a decoupling mechanism where the actual images can be stored for processing. SNS does not provide somewhere for the images to be stored.
  • Using a target tracking policy with the ASG that tracks CPU utilization does not allow scaling based on the number of images being uploaded.
  • Using a scheduled policy is less dynamic as though you may be able to predict usage patterns, it would be better to adjust dynamically based on actual usage.
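
Publishing such a custom metric can be sketched with a boto3-style CloudWatch client (the namespace and metric name are hypothetical); a target tracking policy on this metric then drives the ASG:

```python
def publish_backlog_metric(cloudwatch, queue_backlog, instance_count):
    """Publish a custom 'backlog per instance' metric for the ASG's
    target tracking policy to scale on."""
    value = queue_backlog / max(instance_count, 1)  # avoid division by zero
    cloudwatch.put_metric_data(
        Namespace="ImageProcessing",             # hypothetical namespace
        MetricData=[{
            "MetricName": "BacklogPerInstance",  # hypothetical metric name
            "Value": value,
            "Unit": "Count",
        }],
    )
    return value
```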

Reference:
Scaling policy based on Amazon SQS

28
Q

A company is migrating a decoupled application to AWS. The application uses a message broker based on the MQTT protocol. The application will be migrated to Amazon EC2 instances and the solution for the message broker must not require rewriting application code.

Which AWS service can be used for the migrated message broker?

  1. Amazon SQS
  2. Amazon SNS
  3. Amazon MQ
  4. AWS Step Functions
A

3. Amazon MQ

Amazon MQ is a managed message broker service for Apache ActiveMQ that makes it easy to set up and operate message brokers in the cloud. Connecting current applications to Amazon MQ is easy because it uses industry-standard APIs and protocols for messaging, including JMS, NMS, AMQP, STOMP, MQTT, and WebSocket. Using standards means that in most cases, there’s no need to rewrite any messaging code when you migrate to AWS.

  • This is an Amazon proprietary service and does not support industry-standard messaging APIs and protocols.
  • This is a notification service not a message bus.
  • This is a workflow orchestration service, not a message bus.

Reference:
Amazon MQ

29
Q

A Solutions Architect is designing an application that will run on an Amazon EC2 instance. The application must asynchronously invoke an AWS Lambda function to analyze thousands of .CSV files. The services should be decoupled.

Which service can be used to decouple the compute services?

  1. Amazon SWF
  2. Amazon SNS
  3. Amazon Kinesis
  4. Amazon OpsWorks
A

2. Amazon SNS

You can use a Lambda function to process Amazon Simple Notification Service notifications. Amazon SNS supports Lambda functions as a target for messages sent to a topic. This solution decouples the Amazon EC2 application from Lambda and ensures the Lambda function is invoked.

  • The Simple Workflow Service (SWF) is used for process automation. It is not well suited to this requirement.
  • This service is used for ingesting and processing real time streaming data, it is not a suitable service to be used solely for invoking a Lambda function.
  • This service is used for configuration management of systems using Chef or Puppet.

Reference:
Invoking Lambda functions with Amazon SNS notifications

30
Q

An application is being monitored using Amazon GuardDuty. A Solutions Architect needs to be notified by email of medium to high severity events.

How can this be achieved?

  1. Configure an Amazon CloudWatch alarm that triggers based on a GuardDuty metric
  2. Create an Amazon CloudWatch events rule that triggers an Amazon SNS topic
  3. Create an Amazon CloudWatch Logs rule that triggers an AWS Lambda function
  4. Configure an Amazon CloudTrail alarm that triggers based on GuardDuty API activity
A

2. Create an Amazon CloudWatch events rule that triggers an Amazon SNS topic

A CloudWatch Events rule can be used to set up automatic email notifications for medium to high severity findings to the email address of your choice. You simply create an Amazon SNS topic and then associate it with an Amazon CloudWatch Events rule.
Note: step by step procedures for how to set this up can be found in the article linked in the references below.

  • There is no metric for GuardDuty that can be used for specific findings.
  • CloudWatch Logs is not the right CloudWatch service to use. CloudWatch Events is used for reacting to changes in service state.
  • CloudTrail cannot be used to trigger alarms based on GuardDuty API activity.
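
Setting up the rule can be sketched with a boto3-style events client. The pattern uses EventBridge numeric matching to keep findings of severity 4.0 (medium) and above; the rule name is illustrative, and the SNS topic's access policy must separately allow events.amazonaws.com to publish:

```python
import json

def create_guardduty_alert_rule(events, sns_topic_arn):
    """Rule matching medium-to-high severity GuardDuty findings and
    forwarding them to an SNS topic for email delivery."""
    pattern = {
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
        # Keep findings with severity 4.0 (medium) and above
        "detail": {"severity": [{"numeric": [">=", 4]}]},
    }
    events.put_rule(
        Name="guardduty-medium-high",  # illustrative name
        EventPattern=json.dumps(pattern),
        State="ENABLED",
    )
    return events.put_targets(
        Rule="guardduty-medium-high",
        Targets=[{"Id": "sns", "Arn": sns_topic_arn}],
    )
```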

Reference:
Processing GuardDuty findings with Amazon EventBridge

31
Q

An application running on Amazon EC2 needs to asynchronously invoke an AWS Lambda function to perform data processing. The services should be decoupled.

Which service can be used to decouple the compute services?

  1. AWS Config
  2. Amazon SNS
  3. Amazon MQ
  4. AWS Step Functions
A

2. Amazon SNS

You can use a Lambda function to process Amazon Simple Notification Service notifications. Amazon SNS supports Lambda functions as a target for messages sent to a topic. This solution decouples the Amazon EC2 application from Lambda and ensures the Lambda function is invoked.

  • AWS Config is a service that is used for continuous compliance, not application decoupling.
  • Amazon MQ is similar to SQS but is used for existing applications that are being migrated into AWS. SQS should be used for new applications being created in the cloud.
  • AWS Step Functions is a workflow service. It is not the best solution for this scenario.

32
Q

A Solutions Architect has been tasked with re-deploying an application running on AWS to enable high availability. The application processes messages that are received in an ActiveMQ queue running on a single Amazon EC2 instance. Messages are then processed by a consumer application running on Amazon EC2. After processing the messages the consumer application writes results to a MySQL database running on Amazon EC2.

Which architecture offers the highest availability and low operational complexity?

  1. Deploy a second ActiveMQ server to another Availability Zone. Launch an additional consumer EC2 instance in another Availability Zone. Use MySQL database replication to another Availability Zone.
  2. Deploy Amazon MQ with active/standby brokers configured across two Availability Zones. Launch an additional consumer EC2 instance in another Availability Zone. Use MySQL database replication to another Availability Zone.
  3. Deploy Amazon MQ with active/standby brokers configured across two Availability Zones. Launch an additional consumer EC2 instance in another Availability Zone. Use Amazon RDS for MySQL with Multi-AZ enabled.
  4. Deploy Amazon MQ with active/standby brokers configured across two Availability Zones. Create an Auto Scaling group for the consumer EC2 instances across two Availability Zones. Use an Amazon RDS MySQL database with Multi-AZ enabled.
A

4. Deploy Amazon MQ with active/standby brokers configured across two Availability Zones. Create an Auto Scaling group for the consumer EC2 instances across two Availability Zones. Use an Amazon RDS MySQL database with Multi-AZ enabled.

The correct answer offers the highest availability as it includes Amazon MQ active/standby brokers across two AZs, an Auto Scaling group across two AZs, and a Multi-AZ Amazon RDS MySQL database deployment.

This architecture not only offers the highest availability it is also operationally simple as it maximizes the usage of managed services.

  • This architecture does not offer the highest availability as it does not use Auto Scaling. It is also not the most operationally efficient architecture as it does not use AWS managed services.
  • This architecture does not use Auto Scaling for best HA or the RDS managed service.
  • This solution does not use Auto Scaling.

Reference:
AWS Well-Architected

33
Q

A solutions architect is designing an application on AWS. The compute layer will run in parallel across EC2 instances. The compute layer should scale based on the number of jobs to be processed. The compute layer is stateless. The solutions architect must ensure that the application is loosely coupled and the job items are durably stored.

Which design should the solutions architect use?

  1. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on CPU usage
  2. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on network usage
  3. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue
  4. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of messages published to the SNS topic
A

3. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue

In this case we need to find a durable and loosely coupled solution for storing jobs. Amazon SQS is ideal for this use case and can be configured to use dynamic scaling based on the number of jobs waiting in the queue.

  • To configure this scaling you can use the backlog per instance metric, with the target value being the acceptable backlog per instance to maintain. You can calculate these numbers as follows:
    • Backlog per instance: start with the ApproximateNumberOfMessages queue attribute to determine the length of the SQS queue (the number of messages available for retrieval). Divide that number by the fleet’s running capacity, which for an Auto Scaling group is the number of instances in the InService state.
    • Acceptable backlog per instance: first determine what latency your application can accept. Then divide the acceptable latency by the average time that an EC2 instance takes to process a message.
    This solution scales EC2 instances using Auto Scaling based on the number of jobs waiting in the SQS queue.
  • Scaling on network usage does not relate to the number of jobs waiting to be processed.
  • Amazon SNS is a notification service so it delivers notifications to subscribers. It does store data durably but is less suitable than SQS for this use case. Scaling on CPU usage is not the best solution as it does not relate to the number of jobs waiting to be processed.
  • Amazon SNS is a notification service so it delivers notifications to subscribers. It does store data durably but is less suitable than SQS for this use case. Scaling on the number of notifications in SNS is not possible.
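
The backlog-per-instance arithmetic above can be sketched with boto3-style clients (the clients are injected; the queue URL and ASG name are illustrative):

```python
def backlog_per_instance(sqs, autoscaling, queue_url, asg_name):
    """ApproximateNumberOfMessages divided by the number of InService
    instances in the Auto Scaling group."""
    visible = int(sqs.get_queue_attributes(
        QueueUrl=queue_url,
        AttributeNames=["ApproximateNumberOfMessages"],
    )["Attributes"]["ApproximateNumberOfMessages"])
    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[asg_name]
    )["AutoScalingGroups"][0]
    in_service = sum(
        1 for i in group["Instances"] if i["LifecycleState"] == "InService"
    )
    return visible / max(in_service, 1)  # guard against an empty fleet

def acceptable_backlog(acceptable_latency_s, avg_processing_s):
    """Target value: the latency the app tolerates divided by the
    average per-message processing time of one instance."""
    return acceptable_latency_s / avg_processing_s
```

For example, if the app tolerates 600 seconds of latency and each instance processes a message in 3 seconds on average, the target backlog per instance is 200.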

Reference:
Scaling policy based on Amazon SQS

34
Q

A new application will run across multiple Amazon ECS tasks. Front-end application logic will process data and then pass that data to a back-end ECS task to perform further processing and write the data to a datastore. The Architect would like to reduce interdependencies so failures do not impact other components.

Which solution should the Architect use?

  1. Create an Amazon Kinesis Firehose delivery stream and configure the front-end to add data to the stream and the back-end to read data from the stream
  2. Create an Amazon Kinesis Firehose delivery stream that delivers data to an Amazon S3 bucket, configure the front-end to write data to the stream and the back-end to read data from Amazon S3
  3. Create an Amazon SQS queue that pushes messages to the back-end. Configure the front-end to add messages to the queue
  4. Create an Amazon SQS queue and configure the front-end to add messages to the queue and the back-end to poll the queue for messages
A

4. Create an Amazon SQS queue and configure the front-end to add messages to the queue and the back-end to poll the queue for messages

This is a good use case for Amazon SQS. SQS is a service that is used for decoupling applications, thus reducing interdependencies, through a message bus. The front-end application can place messages on the queue and the back-end can then poll the queue for new messages. Please remember that Amazon SQS is pull-based (polling) not push-based (use SNS for push-based).

  • Amazon Kinesis Data Firehose is used for streaming data. With Firehose, the data is immediately loaded into a destination, which can be Amazon S3, Amazon Redshift, Elasticsearch, or Splunk. This is not an ideal use case for Firehose as this is not streaming data and there is no need to load the data into an additional AWS service.
  • SQS is pull-based, not push-based. EC2 instances must poll the queue to find jobs to process.
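A minimal local sketch of this pull-based pattern, using Python's queue module as a stand-in for SQS (the function names and message shape are illustrative assumptions, not part of the scenario):

```python
import queue
from typing import Optional

# Local stand-in for the SQS queue that decouples the two tiers.
job_queue = queue.Queue()

def front_end_submit(data: dict) -> None:
    """Front-end tier: place a message on the queue and move on,
    with no direct dependency on the back-end being available."""
    job_queue.put(data)

def back_end_poll() -> Optional[dict]:
    """Back-end tier: pull-based consumer, mirroring how SQS consumers
    poll for messages rather than having them pushed to them."""
    try:
        return job_queue.get(timeout=1)
    except queue.Empty:
        return None  # nothing to process; poll again later

front_end_submit({"order_id": 101})
print(back_end_poll())  # {'order_id': 101}
```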

Reference:
What is Amazon Elastic Container Service?


35
Q

An automotive company plans to implement IoT sensors in manufacturing equipment that will send data to AWS in real time. The solution must receive events in an ordered manner from each asset and ensure that the data is saved for future processing.

Which solution would be MOST efficient?

  1. Use Amazon Kinesis Data Streams for real-time events with a partition for each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon S3.
  2. Use Amazon Kinesis Data Streams for real-time events with a shard for each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon EBS.
  3. Use an Amazon SQS FIFO queue for real-time events with one queue for each equipment asset. Trigger an AWS Lambda function for the SQS queue to save data to Amazon EFS.
  4. Use an Amazon SQS standard queue for real-time events with one queue for each equipment asset. Trigger an AWS Lambda function from the SQS queue to save data to Amazon S3.
A

1. Use Amazon Kinesis Data Streams for real-time events with a partition for each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon S3.

Amazon Kinesis Data Streams is the ideal service for receiving streaming data. The Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the same record processor, making it easier to build multiple applications reading from the same Amazon Kinesis data stream. Therefore, a separate partition key (rather than a shard) should be used for each equipment asset.

Amazon Kinesis Firehose can be used to receive streaming data from Data Streams and then load the data into Amazon S3 for future processing.

  • A partition key should be used rather than a shard, as explained above.
  • Amazon SQS is not designed for real-time streaming ingestion.
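A simplified sketch of why a per-asset partition key preserves ordering: Kinesis hashes the partition key (MD5) to select a shard, so every record sharing a key lands on the same shard. Real shards own contiguous hash-key ranges; the modulo below is only an approximation for illustration:

```python
import hashlib

def shard_for_key(partition_key: str, num_shards: int) -> int:
    """Approximate Kinesis routing: MD5-hash the partition key and map it
    to a shard. (Actual shards own contiguous 128-bit hash-key ranges;
    modulo is used here only to keep the sketch short.)"""
    digest = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
    return digest % num_shards

# All records from the same asset map to the same shard, so per-asset
# ordering is preserved while different assets spread across shards.
print(shard_for_key("asset-0042", 4))
print(shard_for_key("asset-0042", 4))  # same shard every time
```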

References:


36
Q

An IoT sensor is being rolled out to thousands of a company’s existing customers. The sensors will stream high volumes of data each second to a central location. A solution must be designed to ingest and store the data for analytics. The solution must provide near-real time performance and millisecond responsiveness.

Which solution should a Solutions Architect recommend?

  1. Ingest the data into an Amazon SQS queue. Process the data using an AWS Lambda function and then store the data in Amazon Redshift.
  2. Ingest the data into an Amazon Kinesis Data Stream. Process the data with an AWS Lambda function and then store the data in Amazon DynamoDB.
  3. Ingest the data into an Amazon SQS queue. Process the data using an AWS Lambda function and then store the data in Amazon DynamoDB.
  4. Ingest the data into an Amazon Kinesis Data Stream. Process the data with an AWS Lambda function and then store the data in Amazon Redshift.
A

2. Ingest the data into an Amazon Kinesis Data Stream. Process the data with an AWS Lambda function and then store the data in Amazon DynamoDB.

A Kinesis data stream is a set of shards. Each shard contains a sequence of data records. A consumer is an application that processes the data from a Kinesis data stream. You can map a Lambda function to a shared-throughput consumer (standard iterator), or to a dedicated-throughput consumer with enhanced fan-out.

Amazon DynamoDB is the best database for this use case as it supports near-real time performance and millisecond responsiveness.

  • Amazon Redshift cannot provide millisecond responsiveness.
  • Amazon SQS does not provide near real-time performance and Amazon Redshift does not provide millisecond responsiveness.
  • Amazon SQS does not provide near real-time performance.
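A hedged sketch of the consumer side, assuming the standard event shape Lambda receives from a Kinesis event source mapping; the DynamoDB write that would follow is omitted:

```python
import base64
import json

def handler(event, context=None):
    """Sketch of a Lambda consumer for a Kinesis batch. Each record's data
    field arrives base64-encoded; a real function would follow the decode
    with a DynamoDB put_item call (omitted here)."""
    items = []
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        items.append(json.loads(payload))
    return items

# Synthetic event shaped like the Kinesis-to-Lambda integration payload.
sample = {"Records": [{"kinesis": {"data": base64.b64encode(
    json.dumps({"sensor": "s1", "temp": 21.5}).encode()).decode()}}]}
print(handler(sample))  # [{'sensor': 's1', 'temp': 21.5}]
```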

Reference:
How Lambda processes records from Amazon Kinesis Data Streams


37
Q

An e-commerce company has developed a new application which has been successfully deployed on AWS. For an upcoming sale, the company is expecting a huge rise in traffic and while testing for the event they have encountered performance issues in the application when many requests are sent to the application.

The current application stack is an Amazon Aurora PostgreSQL database with an AWS Lambda compute layer fronted by Amazon API Gateway. A solutions architect must recommend improvements to scalability while minimizing configuration effort.

Which solution will meet these requirements?

  1. Refactor the Lambda function code to Apache Tomcat code that runs on Amazon EC2 instances. Connect the database by using native Java Database Connectivity (JDBC) drivers.
  2. Change the platform from Aurora to Amazon DynamoDB. Provision a DynamoDB Accelerator (DAX) cluster. Use the DAX client SDK to point the existing DynamoDB API calls at the DAX cluster.
  3. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using Amazon Simple Notification Service (Amazon SNS).
  4. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using an Amazon Simple Queue Service (Amazon SQS) queue.
A

4. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using an Amazon Simple Queue Service (Amazon SQS) queue.

With Amazon SQS, you can offload tasks from one component of your application by sending them to a queue and processing them asynchronously. Lambda polls the queue and invokes your Lambda function synchronously with an event that contains the message from the SQS queue. This solution improves scalability as the message bus decouples the processing components of the application, making it less likely that the application will suffer outages or lost data.

  • Refactoring the Lambda functions into Apache Tomcat code running on Amazon EC2 instances would require significant re-engineering and ongoing server management, which does not minimize configuration effort.
  • Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for Amazon DynamoDB. The question doesn't mention hot or frequently accessed data, only an increase in volume, so migrating to DynamoDB and introducing DAX might not completely solve the issues.
  • SNS is used for fan-out scenarios when a single event is to be broadcasted among consumers and hence is not a good fit here.
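The second (loader) function might look like the following sketch, assuming the standard event shape an SQS event source mapping delivers to Lambda; the actual Aurora insert is omitted:

```python
import json

def load_handler(event, context=None):
    """Sketch of the loader Lambda: invoked by the SQS event source
    mapping with a batch of messages. Each body is parsed; in production
    each row would then be inserted into the Aurora database (omitted)."""
    rows = [json.loads(record["body"]) for record in event["Records"]]
    return {"batchSize": len(rows), "rows": rows}

# Synthetic event in the shape SQS delivers to Lambda.
sample = {"Records": [{"body": json.dumps({"order_id": 7, "total": 19.99})}]}
print(load_handler(sample)["batchSize"])  # 1
```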

Reference:
Using Lambda with Amazon SQS


38
Q

An application makes calls to a REST API running on Amazon EC2 instances behind an Application Load Balancer (ALB). Most API calls complete quickly. However, a single endpoint is making API calls that require much longer to complete and this is introducing overall latency into the system.

What steps can a Solutions Architect take to minimize the effects of the long-running API calls?

  1. Change the EC2 instance to one with enhanced networking to reduce latency
  2. Create an Amazon SQS queue and decouple the long-running API calls
  3. Increase the ALB idle timeout to allow the long-running requests to complete
  4. Change the ALB to a Network Load Balancer (NLB) and use SSL/TLS termination
A

2. Create an Amazon SQS queue and decouple the long-running API calls

An Amazon Simple Queue Service (SQS) can be used to offload and decouple the long-running requests. They can then be processed asynchronously by separate EC2 instances. This is the best way to reduce the overall latency introduced by the long-running API call.

  • This will not reduce the latency of the API call as network latency is not the issue here, it is the latency of how long the API call takes to complete.
  • The issue is not the connection being interrupted; it is that the API call takes a long time to complete.
  • SSL/TLS termination is not of benefit here as the problem is not encryption or processing of encryption. The issue is API call latency.
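The decoupled pattern can be sketched locally as follows, using Python's queue module as a stand-in for SQS; the 202-plus-job-ID response shape is an illustrative assumption, not part of the scenario:

```python
import uuid
import queue

# Local stand-in for the SQS queue holding long-running work.
long_jobs = queue.Queue()

def submit_long_running(request: dict) -> dict:
    """Endpoint sketch: instead of holding the connection open while the
    slow work runs, enqueue it and immediately return 202 Accepted with a
    job ID the client can use to poll for the result later. Separate
    workers then consume the queue asynchronously."""
    job_id = str(uuid.uuid4())
    long_jobs.put({"id": job_id, "request": request})
    return {"status": 202, "job_id": job_id}

resp = submit_long_running({"report": "monthly"})
print(resp["status"])  # 202
```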

Reference:
What is Amazon Simple Queue Service?
