Networking & Content Delivery Flashcards

Design and implement networking architectures that ensure secure, high-performance, and global content delivery. (33 cards)

1
Q

A solutions architect must secure the network for Amazon EC2 instances that run in a VPC. The EC2 instances contain highly sensitive data and have been launched in private subnets. Company policy restricts EC2 instances in the VPC from accessing the internet. The instances need to access software repositories at a third-party URL to download and install software product updates. All other internet traffic must be blocked, with no exceptions.

Which solution meets these requirements?

  1. Configure the route table for the private subnet so that it routes the outbound traffic to an AWS Network Firewall firewall, and then configure domain list rule groups.
  2. Create an AWS WAF web ACL. Filter traffic requests based on source and destination IP address ranges with custom rules.
  3. Establish strict inbound rules for your security groups. Specify the URLs of the authorized software repositories on the internet in your outbound rule.
  4. Place an Application Load Balancer in front of your EC2 instances. Direct all outbound traffic to the ALB. For outbound access to the internet, use a URL-based rule listener in the ALB’s target group.
A

1. Configure the route table for the private subnet so that it routes the outbound traffic to an AWS Network Firewall firewall, and then configure domain list rule groups.

AWS Network Firewall is a managed service that makes it easy to deploy essential network protections for all of your Amazon Virtual Private Clouds. You can then use domain list rules to block HTTP or HTTPS traffic to domains identified as low-reputation, or that are known or suspected to be associated with malware or botnets, or to allow traffic only to an explicit list of approved domains.

  • AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits and bots that may affect availability, compromise security, or consume excessive resources. It is designed to protect your applications from malicious traffic, not your VPC.
  • You cannot specify URLs in security group rules so this would not work.
  • The ALB would not work as this sits within the VPC and is unable to control traffic entering and leaving the VPC itself.
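The domain allow-list approach can be sketched as a CloudFormation snippet. This is a minimal sketch only; the rule group name and the repository domain are placeholders, not values from the question:

```yaml
# Stateful domain list rule group: allow HTTP/HTTPS only to an approved
# software repository; all other web traffic is denied.
AllowRepoDomains:
  Type: AWS::NetworkFirewall::RuleGroup
  Properties:
    RuleGroupName: allow-software-repos
    Type: STATEFUL
    Capacity: 100
    RuleGroup:
      RulesSource:
        RulesSourceList:
          TargetTypes:
            - HTTP_HOST            # match plaintext HTTP Host headers
            - TLS_SNI              # match the SNI field of TLS handshakes
          GeneratedRulesType: ALLOWLIST   # everything not listed is blocked
          Targets:
            - repo.example-vendor.com     # placeholder third-party repo URL
```

With GeneratedRulesType set to ALLOWLIST, traffic to any domain not in Targets is denied, which matches the "all other internet traffic must be blocked" requirement.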

Reference:
AWS Network Firewall

Save time with our AWS cheat sheets.

2
Q

A company operates a globally accessed video-sharing platform where users can upload, view, and download videos from their mobile devices. The platform’s static website is hosted in an Amazon S3 bucket.

Due to the platform’s rapid growth, users are experiencing increased latency during video uploads and downloads. The company needs to improve the performance of the platform while minimizing the complexity of the implementation.

Which solution will meet these requirements with the LEAST implementation effort?

  1. Configure an Amazon CloudFront distribution for the S3 bucket to accelerate download performance. Enable S3 Transfer Acceleration to enhance upload performance.
  2. Deploy Amazon EC2 instances in multiple AWS Regions and migrate the platform to these instances. Use an Application Load Balancer to distribute traffic across the instances and configure AWS Global Accelerator for improved global performance.
  3. Configure an Amazon CloudFront distribution with the S3 bucket as the origin to accelerate downloads. Use CloudFront for uploads as well. Create additional S3 buckets in multiple Regions and set up replication rules to sync user content between buckets. Redirect users to the closest bucket for downloads.
  4. Set up AWS Global Accelerator for the S3 bucket to optimize network routing. Configure the platform to use the Global Accelerator endpoint instead of the S3 bucket.
A

1. Configure an Amazon CloudFront distribution for the S3 bucket to accelerate download performance. Enable S3 Transfer Acceleration to enhance upload performance.

Amazon CloudFront is a global content delivery network (CDN) that improves download performance by caching content closer to users. S3 Transfer Acceleration reduces upload latency by routing uploads through optimized AWS edge locations to the S3 bucket. Together, these services provide a cost-effective, low-effort way to improve the platform’s performance.

  • Migrating the platform to EC2 instances and configuring Global Accelerator is significantly more complex and requires ongoing maintenance. This solution involves more implementation effort compared to using managed AWS services like CloudFront and S3 Transfer Acceleration.
  • Adding multiple S3 buckets and setting up replication rules increases the complexity of the solution. While CloudFront can accelerate uploads and downloads, replicating buckets adds unnecessary operational overhead when S3 Transfer Acceleration can address upload latency more efficiently.
  • AWS Global Accelerator does not directly support S3 buckets as an endpoint. Configuring Global Accelerator would require additional infrastructure, such as custom endpoints, increasing the effort and complexity without directly solving the latency issues.
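The two pieces of the correct answer can be sketched in CloudFormation. This is a hedged sketch: the logical IDs are illustrative, and the CachePolicyId shown is the documented managed "CachingOptimized" policy:

```yaml
VideoBucket:
  Type: AWS::S3::Bucket
  Properties:
    AccelerateConfiguration:
      AccelerationStatus: Enabled   # enables S3 Transfer Acceleration
VideoDistribution:
  Type: AWS::CloudFront::Distribution
  Properties:
    DistributionConfig:
      Enabled: true
      Origins:
        - Id: s3-videos
          DomainName: !GetAtt VideoBucket.RegionalDomainName
          S3OriginConfig: {}
      DefaultCacheBehavior:
        TargetOriginId: s3-videos
        ViewerProtocolPolicy: redirect-to-https
        CachePolicyId: 658327ea-f89d-4fab-a63d-7e88639e58f6  # managed CachingOptimized
```

Note that Transfer Acceleration only takes effect when clients upload through the bucket's accelerated endpoint (bucket-name.s3-accelerate.amazonaws.com); downloads go through the CloudFront domain.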

3
Q

A fitness company collects user feedback from mobile app surveys about its workout plans and features. Users submit thousands of survey responses daily, and the company wants to automate feedback analysis to track user sentiment and improve its offerings. The analyzed feedback data must be stored for at least 12 months to identify trends over time.
The company requires a highly scalable solution that minimizes operational complexity.

Which solution will meet these requirements in the MOST scalable way?

  1. Collect survey responses via an Amazon API Gateway endpoint integrated with Amazon Kinesis Data Firehose. Configure Firehose to stream the data to an Amazon S3 bucket. Use S3 Event Notifications to invoke an AWS Lambda function that calls Amazon Comprehend for sentiment analysis and writes results to an Amazon DynamoDB table with TTL configured to delete records after 12 months.
  2. Send survey responses to an Amazon EventBridge rule, which routes the data to an AWS Step Functions workflow. Use Step Functions to trigger AWS Lambda for data processing and sentiment analysis with Amazon Comprehend. Store the results in an Amazon DynamoDB table and use DynamoDB’s TTL feature to expire data after 12 months.
  3. Write survey responses directly to an Amazon Redshift database. Configure Amazon Redshift ML to perform sentiment analysis on the feedback data in real time. Use Amazon S3 to archive the processed results and configure lifecycle policies to delete S3 objects after 12 months.
  4. Deploy an on-premises server that receives survey responses via a REST API. Process the data locally, use a custom machine learning model for sentiment analysis, and upload results to Amazon S3. Use Amazon S3 lifecycle policies to delete the data after 12 months.
A

1. Collect survey responses via an Amazon API Gateway endpoint integrated with Amazon Kinesis Data Firehose. Configure Firehose to stream the data to an Amazon S3 bucket. Use S3 Event Notifications to invoke an AWS Lambda function that calls Amazon Comprehend for sentiment analysis and writes results to an Amazon DynamoDB table with TTL configured to delete records after 12 months.

This architecture is highly scalable and cost-effective. Kinesis Data Firehose automatically scales to handle large volumes of data, and S3 provides reliable storage for raw survey responses. Using Lambda for sentiment analysis with Amazon Comprehend reduces operational complexity, and DynamoDB with TTL ensures data is stored efficiently for 12 months.

  • Using EventBridge and Step Functions adds unnecessary complexity to the workflow. Kinesis Data Firehose with S3 provides a simpler and more efficient mechanism for streaming and storing data.
  • Amazon Redshift is better suited for analytical queries and is not optimized for real-time sentiment analysis. This solution introduces higher costs and complexity compared to using Amazon Comprehend and DynamoDB.
  • Deploying and managing on-premises servers increases operational overhead. AWS services like Lambda and Comprehend provide a more scalable and managed solution for this use case.
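The 12-month retention piece of this pipeline rests on DynamoDB TTL. A minimal sketch of the results table, with illustrative table and attribute names:

```yaml
SentimentTable:
  Type: AWS::DynamoDB::Table
  Properties:
    BillingMode: PAY_PER_REQUEST    # scales with unpredictable survey volume
    AttributeDefinitions:
      - AttributeName: responseId
        AttributeType: S
    KeySchema:
      - AttributeName: responseId
        KeyType: HASH
    TimeToLiveSpecification:
      AttributeName: expiresAt      # epoch seconds; set by the Lambda function
      Enabled: true
```

The Lambda function would write expiresAt as the ingestion time plus roughly 365 days; DynamoDB then deletes expired items automatically shortly after expiry, at no additional cost.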

4
Q

A university operates its critical IT services, including authentication and DNS, from an on-premises data center. The data center is connected to AWS using AWS Direct Connect (DX). The university is onboarding additional AWS accounts for different departments, all of which need secure and consistent access to the on-premises services.
The university wants a scalable and cost-effective solution that minimizes operational overhead.

What should a solutions architect implement to meet these requirements?

  1. Configure AWS Transit Gateway to connect the Direct Connect gateway to the VPCs in the new accounts. Route network traffic from the new accounts to the on-premises data center through the transit gateway.
  2. Deploy an AWS Site-to-Site VPN connection from the on-premises data center to each new AWS account. Configure route tables to forward traffic to the VPN.
  3. Establish a VPC peering connection between the Direct Connect VPC and each new AWS account. Configure security groups to allow traffic to flow between the VPCs and the on-premises services.
  4. Create a Direct Connect connection in each new AWS account and configure route tables in each VPC to send traffic to the on-premises data center.
A

1. Configure AWS Transit Gateway to connect the Direct Connect gateway to the VPCs in the new accounts. Route network traffic from the new accounts to the on-premises data center through the transit gateway.

AWS Transit Gateway enables scalable connectivity between multiple VPCs and on-premises networks. By connecting the Direct Connect gateway to the transit gateway, traffic from new AWS accounts can securely access on-premises services with minimal operational overhead.

  • Setting up individual VPN connections for each account introduces significant operational complexity and ongoing costs. Direct Connect with a transit gateway is more scalable and efficient.
  • VPC peering is not scalable for managing multiple AWS accounts. Transit Gateway is designed for such use cases and reduces operational overhead.
  • Creating multiple Direct Connect connections is expensive and unnecessary. A single Direct Connect gateway connected to a transit gateway can serve multiple accounts.
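The transit gateway side of this design can be sketched in CloudFormation. This is an assumption-laden sketch: the VPC and subnet IDs are placeholders, and the association between the Direct Connect gateway and the transit gateway is configured separately on the Direct Connect side:

```yaml
Tgw:
  Type: AWS::EC2::TransitGateway   # share with department accounts via AWS RAM
DeptVpcAttachment:
  Type: AWS::EC2::TransitGatewayAttachment
  Properties:
    TransitGatewayId: !Ref Tgw
    VpcId: vpc-0aaa1111bbb2222cc         # placeholder: a department VPC
    SubnetIds:
      - subnet-0aaa1111bbb2222cc         # placeholder: one subnet per AZ
```

Each new department account repeats only the attachment; the single transit gateway and Direct Connect gateway are reused, which is what keeps the design scalable and cost-effective.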

5
Q

A retail company is migrating its supply chain application to Amazon Elastic Kubernetes Service (Amazon EKS). The company requires pods in the EKS cluster to use custom subnets in its existing VPC. Additionally, the pods must securely communicate with other resources within the VPC, while adhering to compliance requirements.

Which solution will meet these requirements?

  1. Use the Amazon VPC CNI plugin for Kubernetes. Configure the custom subnets in the VPC and associate the subnets with the EKS cluster to allow pods to use them.
  2. Set up AWS Transit Gateway to manage the routing between custom subnets and the EKS pods for secure communication within the VPC.
  3. Configure an AWS Site-to-Site VPN between the custom subnets and the EKS cluster to enable secure communication for the pods.
  4. Define Kubernetes network policies that enforce pod placement on specific nodes residing in the custom subnets within the VPC.
A

1. Use the Amazon VPC CNI plugin for Kubernetes. Configure the custom subnets in the VPC and associate the subnets with the EKS cluster to allow pods to use them.

The Amazon VPC CNI plugin allows EKS pods to receive IP addresses from the specified custom subnets within the VPC. This ensures that the pods can securely communicate with other resources in the VPC.

  • AWS Transit Gateway is primarily used for connecting multiple VPCs or on-premises networks. It is not needed for pod-level subnet configuration within a single VPC.
  • The VPC and EKS cluster are already within the same AWS environment, and a VPN is unnecessary for communication within the same VPC.
  • Kubernetes network policies are used to control traffic flow between pods, not to configure subnet usage for pods.
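With the VPC CNI's custom networking feature, the pod subnets are declared through an ENIConfig custom resource, one per Availability Zone. A sketch with placeholder subnet and security group IDs:

```yaml
# Tells the Amazon VPC CNI which subnet and security groups pods in
# this AZ should use (custom networking must be enabled on the CNI).
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: us-east-1a                # matches the AZ name by default
spec:
  subnet: subnet-0aaa1111bbb2222cc   # placeholder custom pod subnet
  securityGroups:
    - sg-0ccc3333ddd4444ee           # placeholder SG for pod traffic
```

Custom networking is switched on by setting the AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true environment variable on the aws-node DaemonSet, after which pods receive IPs from the subnets named in the ENIConfig rather than the node's subnet.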

Reference:
Assign IPs to Pods with the Amazon VPC CNI

6
Q

A company hosts a website on Amazon EC2 instances behind an Application Load Balancer (ALB). The website serves static content. Website traffic is increasing. The company wants to minimize the website hosting costs.

Which solution will meet these requirements?

  1. Move the website to an Amazon S3 bucket. Configure an Amazon CloudFront distribution for the S3 bucket.
  2. Move the website to an Amazon S3 bucket. Configure an Amazon ElastiCache cluster for the S3 bucket.
  3. Move the website to AWS Amplify. Configure an ALB to resolve to the Amplify website.
  4. Move the website to AWS Amplify. Configure EC2 instances to cache the website.
A

1. Move the website to an Amazon S3 bucket. Configure an Amazon CloudFront distribution for the S3 bucket.

Amazon S3 is a cost-effective way to serve static content, and adding CloudFront provides global content delivery with caching and reduced latency, which minimizes hosting costs.

  • ElastiCache is designed for in-memory data storage and retrieval, not for optimizing S3 access for static content.
  • Amplify is already a fully managed service that does not require an ALB for static websites.
  • Using EC2 instances for caching introduces unnecessary complexity and cost for a static website.
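A sketch of the S3 side of this pattern, using origin access control (OAC) so the bucket can stay fully private behind CloudFront; the logical IDs are illustrative:

```yaml
SiteBucket:
  Type: AWS::S3::Bucket
  Properties:
    PublicAccessBlockConfiguration:   # bucket is never exposed directly
      BlockPublicAcls: true
      BlockPublicPolicy: true
      IgnorePublicAcls: true
      RestrictPublicBuckets: true
SiteOac:
  Type: AWS::CloudFront::OriginAccessControl
  Properties:
    OriginAccessControlConfig:
      Name: site-oac
      OriginAccessControlOriginType: s3
      SigningBehavior: always
      SigningProtocol: sigv4
```

The CloudFront distribution's S3 origin would reference the OAC via OriginAccessControlId, and a bucket policy would grant s3:GetObject to the cloudfront.amazonaws.com service principal scoped to that distribution, so only CloudFront can read the site content.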

7
Q

A company has a Production VPC and a Pre-Production VPC. The Production VPC uses VPNs through a customer gateway to connect to a single device in an on-premises data center. The Pre-Production VPC uses a virtual private gateway attached to two AWS Direct Connect (DX) connections. Both VPCs are connected using a single VPC peering connection.

How can a Solutions Architect improve this architecture to remove any single point of failure?

  1. Add an additional VPC peering connection between the two VPCs.
  2. Add additional VPNs to the Production VPC from a second customer gateway device.
  3. Add a set of VPNs between the Production and Pre-Production VPCs.
  4. Add a second virtual private gateway and attach it to the Production VPC.
A

2. Add additional VPNs to the Production VPC from a second customer gateway device.

The only single point of failure in this architecture is the customer gateway device in the on-premises data center. A customer gateway device is the on-premises (client) side of the connection into the VPC. The customer gateway configuration is created within AWS, but the actual device is a physical or virtual appliance running in the on-premises data center. If it is a single device and it fails, the VPN connections fail with it. The AWS side of the VPN link is the virtual private gateway, which is a redundant device.

  • VPC peering connections are already redundant; you do not need multiple connections.
  • You cannot create VPN connections between VPCs (using AWS VPNs).
  • Virtual private gateways (VGWs) are redundant devices so a second one is not necessary.
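Adding the second on-premises device amounts to defining a second customer gateway and a VPN connection through it. A sketch with placeholder IP addresses and a placeholder VGW ID:

```yaml
CgwSecondary:
  Type: AWS::EC2::CustomerGateway
  Properties:
    Type: ipsec.1
    BgpAsn: 65000                  # placeholder on-premises ASN
    IpAddress: 203.0.113.20        # placeholder public IP of the second device
VpnSecondary:
  Type: AWS::EC2::VPNConnection
  Properties:
    Type: ipsec.1
    CustomerGatewayId: !Ref CgwSecondary
    VpnGatewayId: vgw-0aaa1111bbb2222cc   # placeholder: existing Production VGW
```

The existing VPN through the first customer gateway stays in place; the second connection terminates on an independent on-premises device, removing the single point of failure.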

Reference:
AWS Site-to-Site VPN customer gateway devices

8
Q

A company hosts statistical data in an Amazon S3 bucket that users around the world download from the company’s website using a custom domain name. The company needs to provide low-latency access to users and plans to use Amazon Route 53 to host the DNS records.

Which solution meets these requirements?

  1. Create a web distribution on Amazon CloudFront pointing to an Amazon S3 origin. Create a CNAME record in a Route 53 hosted zone that points to the CloudFront distribution, resolving to the application’s URL domain name.
  2. Create an A record in Route 53, use a Route 53 traffic policy for the web application, and configure a geolocation rule. Configure health checks to check the health of the endpoint and route DNS queries to other endpoints if an endpoint is unhealthy.
  3. Create a web distribution on Amazon CloudFront pointing to an Amazon S3 origin. Create an ALIAS record in the Amazon Route 53 hosted zone that points to the CloudFront distribution, resolving to the application’s URL domain name.
  4. Create an A record in Route 53, use a Route 53 traffic policy for the web application, and configure a geoproximity rule. Configure health checks to check the health of the endpoint and route DNS queries to other endpoints if an endpoint is unhealthy.
A

3. Create a web distribution on Amazon CloudFront pointing to an Amazon S3 origin. Create an ALIAS record in the Amazon Route 53 hosted zone that points to the CloudFront distribution, resolving to the application’s URL domain name.

This is a simple requirement for low latency access to the contents of an Amazon S3 bucket for global users. The best solution here is to use Amazon CloudFront to cache the content in Edge Locations around the world. This involves creating a web distribution that points to an S3 origin (the bucket) and then creating an Alias record in Route 53 that resolves the application’s URL to the CloudFront distribution endpoint.

  • An Alias record should be used to point to an Amazon CloudFront distribution.
  • There is only a single endpoint (the Amazon S3 bucket) so this strategy would not work. Much better to use CloudFront to cache in multiple locations.
  • Again, there is only one endpoint so this strategy will simply not work.
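The Alias record itself can be sketched as follows; the zone and distribution domain names are placeholders, while the AliasTarget HostedZoneId is the fixed, documented hosted zone ID that all CloudFront distributions use:

```yaml
DataAlias:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: example.com.                 # placeholder hosted zone
    Name: data.example.com.                      # the application's URL
    Type: A
    AliasTarget:
      DNSName: d111111abcdef8.cloudfront.net     # the distribution's domain
      HostedZoneId: Z2FDTNDATAQYW2               # fixed zone ID for CloudFront
```

Unlike a CNAME, an Alias record can also be created at the zone apex, and Route 53 answers the query with the CloudFront edge IPs directly.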

Reference:
Routing traffic to a website that is hosted in an Amazon S3 bucket

9
Q

A company has created an application that stores sales performance data in an Amazon DynamoDB table. A web application is being created to display the data. A Solutions Architect must design the web application using managed services that require minimal operational maintenance.

Which architectures meet these requirements?

(Select TWO.)

  1. An Amazon API Gateway REST API directly accesses the sales performance data in the DynamoDB table.
  2. An Elastic Load Balancer forwards requests to a target group with the DynamoDB table configured as the target.
  3. An Amazon API Gateway REST API invokes an AWS Lambda function. The Lambda function reads data from the DynamoDB table.
  4. An Elastic Load Balancer forwards requests to a target group of Amazon EC2 instances. The EC2 instances run an application that reads data from the DynamoDB table.
  5. An Amazon Route 53 hosted zone routes requests to an AWS Lambda endpoint to invoke a Lambda function that reads data from the DynamoDB table.
A

1. An Amazon API Gateway REST API directly accesses the sales performance data in the DynamoDB table.
3. An Amazon API Gateway REST API invokes an AWS Lambda function. The Lambda function reads data from the DynamoDB table.

There are two architectures here that fulfill the requirement to create a web application that displays the data from the DynamoDB table.
The first one is to use an API Gateway REST API that invokes an AWS Lambda function. A Lambda proxy integration can be used, and this will proxy the API requests to the Lambda function which processes the request and accesses the DynamoDB table.

The second option is to use an API Gateway REST API to directly access the sales performance data. In this case a proxy for the DynamoDB query API can be created using a method in the REST API.

  • An Alias record could be created in a hosted zone, but a hosted zone itself does not route to a Lambda endpoint. Using an Alias record, it is possible to route to a VPC endpoint that fronts a Lambda function; however, there would be no web front end, so a REST API is preferable.
  • You cannot configure DynamoDB as a target in a target group.
  • This would not offer low operational maintenance as you must manage the EC2 instances.
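The direct DynamoDB integration can be sketched as an API Gateway method with an AWS service integration. This is a partial sketch: SalesApi, SalesResource, and ApiRole are assumed resources, and the request/response mapping templates that translate between the HTTP request and the DynamoDB Query API are omitted:

```yaml
GetSalesMethod:
  Type: AWS::ApiGateway::Method
  Properties:
    RestApiId: !Ref SalesApi           # hypothetical REST API
    ResourceId: !Ref SalesResource     # hypothetical /sales resource
    HttpMethod: GET
    AuthorizationType: NONE
    Integration:
      Type: AWS                        # direct AWS service integration
      IntegrationHttpMethod: POST      # DynamoDB API calls are HTTP POSTs
      Uri: !Sub arn:aws:apigateway:${AWS::Region}:dynamodb:action/Query
      Credentials: !GetAtt ApiRole.Arn # role granting dynamodb:Query
```

This removes the Lambda hop entirely: API Gateway signs and proxies the Query call to DynamoDB using the integration role.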

10
Q

A company has created a disaster recovery solution for an application that runs behind an Application Load Balancer (ALB). The DR solution consists of a second copy of the application running behind a second ALB in another Region. The Solutions Architect requires a method of automatically updating the DNS record to point to the ALB in the second Region.

What action should the Solutions Architect take?

  1. Enable an ALB health check.
  2. Use Amazon EventBridge to cluster the ALBs.
  3. Enable an Amazon Route 53 health check.
  4. Configure an alarm on a CloudTrail trail.
A

3. Enable an Amazon Route 53 health check.

Amazon Route 53 health checks monitor the health and performance of your web applications, web servers, and other resources. Each health check that you create can monitor one of the following:

  • The health of a specified resource, such as a web server
  • The status of other health checks
  • The status of an Amazon CloudWatch alarm

Health checks can be used with other configurations such as a failover routing policy. In this case a failover routing policy will direct traffic to the ALB of the primary Region unless health checks fail at which time it will direct traffic to the secondary record for the DR ALB.

  • This will simply perform health checks of the instances behind the ALB, rather than the ALB itself. This could be used in combination with Route 53 health checks.
  • You cannot cluster ALBs in any way.
  • CloudTrail records API activity so this does not help.
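The failover pattern described above can be sketched as a health check plus a pair of failover records. The zone, ALB DNS names, and ALB hosted zone IDs below are all placeholders:

```yaml
PrimaryCheck:
  Type: AWS::Route53::HealthCheck
  Properties:
    HealthCheckConfig:
      Type: HTTPS
      FullyQualifiedDomainName: primary-alb.us-east-1.elb.amazonaws.com
      ResourcePath: /health
AppPrimary:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: example.com.       # placeholder zone
    Name: app.example.com.
    Type: A
    Failover: PRIMARY
    SetIdentifier: primary
    HealthCheckId: !Ref PrimaryCheck
    AliasTarget:
      DNSName: primary-alb.us-east-1.elb.amazonaws.com
      HostedZoneId: Z0000000EXAMPLE    # the primary ALB's canonical zone ID
AppSecondary:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: example.com.
    Name: app.example.com.
    Type: A
    Failover: SECONDARY
    SetIdentifier: secondary
    AliasTarget:
      DNSName: dr-alb.eu-west-1.elb.amazonaws.com
      HostedZoneId: Z0000001EXAMPLE    # the DR ALB's canonical zone ID
```

While the health check passes, Route 53 answers with the primary ALB; when it fails, queries automatically resolve to the SECONDARY record in the DR Region.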

Reference:
Creating Amazon Route 53 health checks

11
Q

An organization is extending a secure development environment into AWS. It has already secured the VPC, including removing the internet gateway and setting up an AWS Direct Connect connection.

What else needs to be done to add encryption?

  1. Set up a Virtual Private Gateway (VPG)
  2. Enable IPSec encryption on the Direct Connect connection
  3. Setup the Border Gateway Protocol (BGP) with encryption
  4. Configure an AWS Direct Connect Gateway
A

1. Set up a Virtual Private Gateway (VPG)

A virtual private gateway is used to set up an AWS VPN, which you can use in combination with Direct Connect to encrypt all data that traverses the Direct Connect link. This combination provides an IPsec-encrypted private connection that also reduces network costs, increases bandwidth throughput, and provides a more consistent network experience than internet-based VPN connections.

  • There is no option to enable IPSec encryption on the Direct Connect connection.
  • The BGP protocol is not used to enable encryption for Direct Connect; it is used for routing.
  • An AWS Direct Connect Gateway is used to connect to VPCs across multiple AWS regions. It is not involved with encryption.
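The gateway itself is a small piece of configuration. A sketch with a placeholder VPC ID; the VPN connection that rides over the Direct Connect link would then terminate on this gateway:

```yaml
DevVgw:
  Type: AWS::EC2::VPNGateway
  Properties:
    Type: ipsec.1
VgwAttachment:
  Type: AWS::EC2::VPCGatewayAttachment
  Properties:
    VpcId: vpc-0aaa1111bbb2222cc   # placeholder: the secured development VPC
    VpnGatewayId: !Ref DevVgw
```

An AWS::EC2::VPNConnection referencing this gateway and an on-premises customer gateway then carries the IPsec-encrypted traffic across the Direct Connect link.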

12
Q

A media company is building a video content distribution platform on AWS. The platform uses a REST API hosted on Amazon API Gateway to serve metadata about the videos, such as titles and descriptions. The metadata is confidential and must be accessible only from a specific set of trusted IP addresses belonging to the company’s office network.

Which solution will meet these requirements?

  1. Configure an API Gateway resource policy that denies access to any IP address that is not explicitly allowed.
  2. Deploy the API Gateway in a private subnet and configure a network ACL to permit traffic only from the trusted IP addresses.
  3. Set up API Gateway with a private integration and restrict access to the trusted IP addresses using a VPC endpoint policy.
  4. Modify the API Gateway security group to allow inbound requests only from the trusted IP addresses.
A

1. Configure an API Gateway resource policy that denies access to any IP address that is not explicitly allowed.

Resource policies in API Gateway allow you to restrict access to APIs by specifying conditions, such as IP addresses. By creating a resource policy with a condition that permits traffic only from the trusted IP range, you can ensure that the API is accessible only from the company’s internal network.

  • API Gateway cannot be deployed into a private subnet; it is a managed service that operates outside your VPC. To restrict access to a private network you would need a private API with a VPC endpoint, but that alone does not meet the requirements without additional configuration.
  • Private integrations in API Gateway are used to route traffic to resources within a VPC, such as an application running on Amazon EC2. While you can use a VPC endpoint policy for restricting access, this setup is more complex than using a resource policy and does not directly fulfill the requirement of limiting IP addresses for the API.
  • API Gateway is not associated with security groups. Security groups are used for resources like EC2 instances or load balancers, not for API Gateway endpoints.
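The allow-then-deny IP pattern follows AWS's documented resource policy examples. A sketch embedded in the REST API definition, with a placeholder office CIDR range:

```yaml
MetadataApi:
  Type: AWS::ApiGateway::RestApi
  Properties:
    Name: video-metadata
    Policy:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow              # allow invocation in general...
          Principal: "*"
          Action: execute-api:Invoke
          Resource: execute-api:/*
        - Effect: Deny               # ...then deny any caller outside the range
          Principal: "*"
          Action: execute-api:Invoke
          Resource: execute-api:/*
          Condition:
            NotIpAddress:
              aws:SourceIp:
                - 198.51.100.0/24    # placeholder trusted office network
```

Because an explicit Deny always wins, any request from outside the trusted CIDR is rejected before it reaches the API's backend.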

13
Q

A financial services company needs to set up an Amazon RDS Multi-AZ database to store customer transaction records. The database will serve as the backend for an on-premises financial analysis application. The company requires the on-premises application to connect directly to the RDS database when employees are working from the office.
The company must ensure the connection is established securely and efficiently.

Which solution provides the required connectivity MOST securely?

  1. Create a VPC with two private subnets. Deploy the RDS database in the private subnets. Establish connectivity between the on-premises office and AWS using AWS Site-to-Site VPN with a customer gateway.
  2. Create a VPC with two public subnets. Deploy the RDS database in the public subnets. Configure an AWS Direct Connect connection between the on-premises office and the VPC for low-latency access.
  3. Create a VPC with two private subnets. Deploy the RDS database in the private subnets. Configure RDS security groups to allow the on-premises office IP ranges to access the database directly over the internet.
  4. Create a VPC with two public subnets. Deploy the RDS database in the public subnets. Use AWS Client VPN to establish secure connectivity between employees’ desktops and the database.
A

1. Create a VPC with two private subnets. Deploy the RDS database in the private subnets. Establish connectivity between the on-premises office and AWS using AWS Site-to-Site VPN with a customer gateway.

Placing the RDS database in private subnets ensures it is not publicly accessible. Using AWS Site-to-Site VPN securely connects the on-premises office to the VPC, allowing direct connectivity to the database while maintaining security.

  • Placing the RDS database in public subnets exposes it to the internet, violating security best practices. While Direct Connect provides low latency, public subnets do not meet the company’s security requirements.
  • Private subnets do not have direct internet access, and accessing the database over the internet introduces security vulnerabilities, even with security group rules.
  • Deploying the database in public subnets unnecessarily exposes it to potential threats. Client VPN is a good solution for employee access but does not address the requirement to connect securely from an on-premises office.
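The database half of the answer can be sketched in CloudFormation; the subnet IDs are placeholders, and ManageMasterUserPassword is assumed here as a way to avoid hard-coding credentials:

```yaml
DbSubnets:
  Type: AWS::RDS::DBSubnetGroup
  Properties:
    DBSubnetGroupDescription: Private subnets for the transactions DB
    SubnetIds:
      - subnet-0aaa1111bbb2222cc   # placeholder private subnet, AZ a
      - subnet-0bbb3333ccc4444dd   # placeholder private subnet, AZ b
TransactionsDb:
  Type: AWS::RDS::DBInstance
  Properties:
    Engine: postgres
    DBInstanceClass: db.m6g.large
    AllocatedStorage: "100"
    MultiAZ: true
    PubliclyAccessible: false      # reachable only over the VPN / VPC network
    DBSubnetGroupName: !Ref DbSubnets
    MasterUsername: appadmin
    ManageMasterUserPassword: true # master credential kept in Secrets Manager
```

The on-premises application then resolves and connects to the RDS endpoint over the Site-to-Site VPN; the database never has a public address.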

14
Q

A research organization runs its photo analysis application on AWS. The application processes images uploaded by field scientists and stores them temporarily on an Amazon EC2 instance’s locally attached Amazon Elastic Block Store (Amazon EBS) volume. Every evening, the processed images are uploaded to an Amazon S3 bucket for long-term storage.
The solutions architect has discovered that the images are being uploaded to S3 through the public internet. The organization wants to ensure that the upload traffic to Amazon S3 remains private and does not use the public internet.

Which solution will meet these requirements?

  1. Create a gateway VPC endpoint for the S3 bucket. Update the VPC’s route table to route all S3 traffic through the gateway endpoint.
  2. Use an Amazon S3 access point for the EC2 instance. Configure the photo analysis application to upload files to the bucket through the access point.
  3. Configure a VPC peering connection between the VPC containing the EC2 instance and Amazon S3. Update the route table to use the peering connection for traffic to S3.
  4. Deploy a NAT gateway in the VPC. Configure the EC2 instance’s security group to allow outbound traffic to the NAT gateway, which will route traffic to the S3 bucket.
A

1. Create a gateway VPC endpoint for the S3 bucket. Update the VPC’s route table to route all S3 traffic through the gateway endpoint.

A gateway VPC endpoint establishes a private connection to Amazon S3 without using the public internet. Updating the route table ensures all traffic to S3 is routed through this private endpoint. This solution is secure and cost-effective.

  • While S3 access points simplify access management for specific applications, they do not provide private connectivity. A gateway VPC endpoint is required to route traffic privately.
  • VPC peering does not provide connectivity to AWS services like Amazon S3. Gateway VPC endpoints are the correct mechanism for private S3 connectivity.
  • A NAT gateway routes traffic to the internet, and S3 traffic would still use public endpoints. This does not meet the requirement of avoiding the public internet.
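The gateway endpoint is a single resource plus a route table association; the VPC and route table IDs below are placeholders:

```yaml
S3GatewayEndpoint:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    VpcId: vpc-0aaa1111bbb2222cc              # placeholder VPC
    ServiceName: !Sub com.amazonaws.${AWS::Region}.s3
    VpcEndpointType: Gateway
    RouteTableIds:
      - rtb-0aaa1111bbb2222cc                 # placeholder route table
```

Each listed route table automatically receives a route for the S3 prefix list that targets the endpoint, so the instance's S3 traffic never leaves the AWS network; gateway endpoints for S3 also incur no data processing charges.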

15
Q

A streaming service company runs its video recommendation engine on an Amazon EC2 Auto Scaling group behind an Application Load Balancer (ALB) in a single AWS Region. The service generates personalized recommendations based on user activity and serves dynamic content to millions of users worldwide.

The company needs a cost-optimized solution to improve performance and scalability while ensuring that users across the globe experience low latency when accessing personalized recommendations.

Which solution will meet these requirements?

  1. Set up an Amazon CloudFront distribution and configure the existing ALB as the origin. Use dynamic cache settings to reduce latency for global users.
  2. Configure AWS Global Accelerator to route traffic to the existing ALB and EC2 instances in the Region closest to each user.
  3. Deploy additional EC2 instances and ALBs in multiple Regions. Use Amazon Route 53 latency-based routing to direct users to the Region with the lowest latency.
  4. Migrate the recommendation engine to Amazon S3 and enable static website hosting. Use an Amazon CloudFront distribution to cache the content globally.
A

1. Set up an Amazon CloudFront distribution and configure the existing ALB as the origin. Use dynamic cache settings to reduce latency for global users.

CloudFront provides a global content delivery network (CDN) that reduces latency by caching content closer to users. For dynamic content, CloudFront can still improve performance by optimizing requests and routing through its edge locations. This solution is cost-effective and requires minimal architectural changes.

  • Global Accelerator improves performance for static and dynamic content, but it incurs additional cost and does not cache dynamic content at the edge like CloudFront. It is better suited for multi-Region deployments.
  • Deploying and managing resources across multiple Regions increases cost and operational overhead. For a cost-optimized solution, CloudFront is more efficient as it uses its global edge locations without requiring additional infrastructure.
  • The recommendation engine serves dynamic content, which cannot be hosted on Amazon S3. S3 is designed for static content like images or HTML files and is not appropriate for processing user-specific dynamic data.

References:

Save time with our AWS cheat sheets.
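As a concrete illustration, the chosen design only changes where requests enter: CloudFront sits in front while the ALB becomes a custom origin. A minimal sketch of the origin entry, using field names from the CloudFront API's DistributionConfig shape (the ALB domain name is a placeholder):

```python
# Hypothetical sketch: a custom-origin entry pointing CloudFront at an
# existing ALB. Field names follow the CloudFront DistributionConfig shape;
# the ALB DNS name below is a placeholder.

def alb_origin_config(alb_dns_name):
    """Build a custom-origin entry that fronts an ALB with CloudFront."""
    return {
        "Id": "alb-origin",
        "DomainName": alb_dns_name,  # the ALB's DNS name
        "CustomOriginConfig": {
            "HTTPPort": 80,
            "HTTPSPort": 443,
            # Always fetch from the ALB over HTTPS
            "OriginProtocolPolicy": "https-only",
        },
    }

config = alb_origin_config("my-alb-1234567890.us-east-1.elb.amazonaws.com")
print(config["CustomOriginConfig"]["OriginProtocolPolicy"])  # https-only
```

For dynamic content, the distribution would pair this origin with a cache behavior that forwards the headers, cookies, and query strings the recommendation engine needs.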

16
Q

An online education company is launching a new e-learning platform on AWS. The platform will run on Amazon EC2 instances deployed across multiple Availability Zones in multiple AWS Regions. Students worldwide will access the platform through the internet to stream educational content. The company wants to ensure that each student is directed to the EC2 instances in the Region that is geographically closest to their location. The solution must provide high availability and efficient traffic routing.

Which solution will meet these requirements?

  1. Use Amazon Route 53 geolocation routing policy to direct students to the closest Region. Use an internet-facing Application Load Balancer to distribute traffic across the EC2 instances within each Region.
  2. Use Amazon Route 53 latency routing policy to direct students to the Region with the lowest network latency. Use an internet-facing Application Load Balancer to distribute traffic across the EC2 instances within each Region.
  3. Use Amazon Route 53 geoproximity routing policy to route students to the geographically closest Region. Configure an internet-facing Network Load Balancer to distribute traffic across the EC2 instances within each Availability Zone.
  4. Use Amazon Route 53 weighted routing policy to balance traffic across Regions. Use an internet-facing Application Load Balancer to distribute traffic across the EC2 instances within each Availability Zone.
A

2. Use Amazon Route 53 latency routing policy to direct students to the Region with the lowest network latency. Use an internet-facing Application Load Balancer to distribute traffic across the EC2 instances within each Region.

The latency routing policy dynamically routes users to the Region with the lowest latency. The Application Load Balancer ensures traffic is evenly distributed across all EC2 instances within the Region.

  • The geolocation routing policy routes users based on their geographic location, which may not always align with the Region with the lowest latency. For example, users in a region geographically close to a Region with high latency may not experience the best performance.
  • While geoproximity routing directs users based on proximity, it requires manual adjustments for bias, which increases operational complexity. Additionally, a Network Load Balancer is less suitable for HTTP/HTTPS workloads compared to an Application Load Balancer.
  • A weighted routing policy does not guarantee that users are directed to the closest Region or the one with the lowest latency.

References:

Save time with our AWS cheat sheets.
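The winning design creates one latency record per Region, each aliased to that Region's ALB. A hedged sketch of a single change entry, shaped like a Route 53 ChangeResourceRecordSets payload (the domain, hosted zone ID, and ALB DNS name are placeholders):

```python
# Hypothetical sketch: a latency-based alias record for one Region.
# Route 53 compares latency across all records sharing the same Name
# and answers DNS queries with the lowest-latency Region's target.

def latency_alias_record(region, alb_dns, alb_zone_id):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": "app-" + region,  # must be unique per record
            "Region": region,                  # enables latency-based routing
            "AliasTarget": {
                "HostedZoneId": alb_zone_id,   # the ALB's hosted zone ID
                "DNSName": alb_dns,
                "EvaluateTargetHealth": True,
            },
        },
    }

change = latency_alias_record(
    "eu-west-1", "my-alb-eu.example.com", "Z32O12XQLNTSW2")
```

One such record would be created per Region, all under the same record name.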

17
Q

A Solutions Architect needs to select a low-cost, short-term option for adding resilience to an AWS Direct Connect connection.

What is the MOST cost-effective solution to provide a backup for the Direct Connect connection?

  1. Implement a second AWS Direct Connection
  2. Implement an IPSec VPN connection and use the same BGP prefix
  3. Configure AWS Transit Gateway with an IPSec VPN backup
  4. Configure an IPSec VPN connection over the Direct Connect link
A

2. Implement an IPSec VPN connection and use the same BGP prefix

This is the most cost-effective solution. With this option both the Direct Connect connection and IPSec VPN are active and being advertised using the Border Gateway Protocol (BGP). The Direct Connect link will always be preferred unless it is unavailable.

  • A second Direct Connect connection is not a short-term or low-cost option; it takes time to implement and is expensive.
  • This is a workable solution and provides some advantages. However, you do need to pay for the Transit Gateway so it is not the most cost-effective option and probably not suitable for a short-term need.
  • This is not a solution to the problem as the VPN connection is going over the Direct Connect link. This is something you might do to add encryption to Direct Connect but it doesn’t make it more resilient.

Reference:
Configure VPN

Save time with our AWS cheat sheets.

18
Q

A Solutions Architect has placed an Amazon CloudFront distribution in front of their web server, which serves a highly accessed website with global content. The Solutions Architect needs to run a script that dynamically redirects each user to a new URL based on where the user is accessing from. This dynamic routing will happen on every request, so the code must run at extremely low latency and low cost.

What solution will best achieve this goal?

  1. Redirect traffic by running your code within a Lambda function using Lambda@Edge.
  2. At the Edge Location, run your code with CloudFront Functions.
  3. Use Path Based Routing to route each user to the appropriate webpage behind an Application Load Balancer.
  4. Use Route 53 Geo Proximity Routing to route users’ traffic to your resources based on their geographic location.
A

2. At the Edge Location, run your code with CloudFront Functions.

With CloudFront Functions in Amazon CloudFront, you can write lightweight functions in JavaScript for high-scale, latency-sensitive CDN customizations. Your functions can manipulate the requests and responses that flow through CloudFront, perform basic authentication and authorization, generate HTTP responses at the edge, and more. CloudFront Functions is approximately 1/6th the cost of Lambda@Edge and has extremely low latency because the functions run on the host at the edge location, instead of in a Lambda function running elsewhere.

  • Although you could achieve this using Lambda@Edge, the question states the need for the lowest latency possible, and comparatively the lowest latency option is CloudFront Functions.
  • This architecture does not account for the fact that custom code needs to be run to make this happen.
  • This may work, however again it does not account for the fact that custom code needs to be run to make this happen.

Reference:
Customize at the edge with CloudFront Functions

Save time with our AWS cheat sheets.
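CloudFront Functions themselves must be written in JavaScript; the following Python sketch only mimics the viewer-request event and redirect-response shapes to show the logic, with a hypothetical country-to-URL map:

```python
# Illustrative only: CloudFront Functions run JavaScript, so this Python
# version just models the event/response dictionaries. The country-to-URL
# mapping is a made-up example.

COUNTRY_URLS = {"DE": "https://de.example.com", "JP": "https://jp.example.com"}

def handler(event):
    request = event["request"]
    # Viewer country, as carried by the cloudfront-viewer-country header
    country = request["headers"].get(
        "cloudfront-viewer-country", {}).get("value")
    target = COUNTRY_URLS.get(country)
    if target is None:
        return request  # no redirect; let the request continue as-is
    # Generate a 302 at the edge instead of contacting the origin
    return {
        "statusCode": 302,
        "statusDescription": "Found",
        "headers": {"location": {"value": target + request["uri"]}},
    }

event = {"request": {"uri": "/home",
                     "headers": {"cloudfront-viewer-country": {"value": "DE"}}}}
print(handler(event)["headers"]["location"]["value"])  # https://de.example.com/home
```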

19
Q

A Solutions Architect is tasked with designing a fully Serverless, Microservices based web application which requires the use of a GraphQL API to provide a single entry point to the application.

Which AWS managed service could the Solutions Architect use?

  1. API Gateway
  2. Amazon Athena
  3. AWS AppSync
  4. AWS Lambda
A

3. AWS AppSync

AWS AppSync is a serverless GraphQL and Pub/Sub API service that simplifies building modern web and mobile applications.

AWS AppSync GraphQL APIs simplify application development by providing a single endpoint to securely query or update data from multiple databases, microservices, and APIs.

  • You cannot create GraphQL APIs on API Gateway.
  • Amazon Athena is a Serverless query service where you can query S3 using SQL statements.
  • AWS Lambda is a serverless compute service and is not designed to build APIs.

Reference:
AWS AppSync

Save time with our AWS cheat sheets.
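To make the "single entry point" idea concrete, here is a minimal, purely illustrative GraphQL schema of the kind AppSync serves from one endpoint, where each field can resolve against a different backend (the type and field names are made up):

```python
# A tiny GraphQL schema held as a string; in AppSync each Query field can be
# wired to its own data source (DynamoDB, Lambda, HTTP, etc.) behind one
# endpoint. All names below are hypothetical.

SCHEMA = """
type Order {
  id: ID!
  status: String
}

type Customer {
  id: ID!
  name: String
  orders: [Order]
}

type Query {
  customer(id: ID!): Customer   # resolver could target DynamoDB
  order(id: ID!): Order         # resolver could target a Lambda microservice
}
"""

print("type Query" in SCHEMA)  # True
```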

20
Q

A company hosts an application on Amazon EC2 instances behind Application Load Balancers in several AWS Regions. Distribution rights for the content require that users in different geographies must be served content from specific regions.

Which configuration meets these requirements?

  1. Create Amazon Route 53 records with a geolocation routing policy.
  2. Create Amazon Route 53 records with a geoproximity routing policy.
  3. Configure Amazon CloudFront with multiple origins and AWS WAF.
  4. Configure Application Load Balancers with multi-Region routing.
A

1. Create Amazon Route 53 records with a geolocation routing policy.

To protect the distribution rights of the content and ensure that users are directed to the appropriate AWS Region based on the location of the user, the geolocation routing policy can be used with Amazon Route 53.

Geolocation routing lets you choose the resources that serve your traffic based on the geographic location of your users, meaning the location that DNS queries originate from.
When you use geolocation routing, you can localize your content and present some or all of your website in the language of your users. You can also use geolocation routing to restrict distribution of content to only the locations in which you have distribution rights.

  • Geoproximity routing is used when you want to route traffic based on the location of your resources and, optionally, shift traffic from resources in one location to resources in another. It is not required to enforce distribution rights.
  • AWS WAF protects against web exploits but will not assist with directing users to different content (from different origins).
  • There is no such thing as multi-Region routing for ALBs.

Reference:
Choosing a routing policy

Save time with our AWS cheat sheets.

21
Q

A company delivers content to subscribers distributed globally from an application running on AWS. The application uses a fleet of Amazon EC2 instances in a private subnet behind an Application Load Balancer (ALB). Due to an update in copyright restrictions, it is necessary to block access for specific countries.

What is the EASIEST method to meet this requirement?

  1. Modify the ALB security group to deny incoming traffic from blocked countries
  2. Modify the security group for EC2 instances to deny incoming traffic from blocked countries
  3. Use Amazon CloudFront to serve the application and deny access to blocked countries
  4. Use a network ACL to block the IP address ranges associated with the specific countries
A

3. Use Amazon CloudFront to serve the application and deny access to blocked countries

When a user requests your content, CloudFront typically serves the requested content regardless of where the user is located. If you need to prevent users in specific countries from accessing your content, you can use the CloudFront geo restriction feature to do one of the following:

  • Allow your users to access your content only if they’re in one of the countries on a whitelist of approved countries.
  • Prevent your users from accessing your content if they’re in one of the countries on a blacklist of banned countries.

For example, if a request comes from a country where, for copyright reasons, you are not authorized to distribute your content, you can use CloudFront geo restriction to block the request.

This is the easiest and most effective way to implement a geographic restriction for the delivery of content.

  • This would be extremely difficult to manage.
  • Security groups cannot block traffic by country.
  • Security groups cannot block traffic by country.

Reference:
Restrict the geographic distribution of your content

Save time with our AWS cheat sheets.
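Geo restriction is expressed as a small block inside the distribution configuration. A hedged sketch using the CloudFront API's GeoRestriction shape (the ISO 3166-1 alpha-2 country codes are examples):

```python
# Hypothetical sketch: the geo-restriction block of a CloudFront
# DistributionConfig, denying the listed countries.

def geo_blacklist(countries):
    return {
        "GeoRestriction": {
            "RestrictionType": "blacklist",  # deny the listed countries
            "Quantity": len(countries),
            "Items": countries,              # ISO 3166-1 alpha-2 codes
        }
    }

restrictions = geo_blacklist(["KP", "IR"])
print(restrictions["GeoRestriction"]["RestrictionType"])  # blacklist
```

Swapping `"blacklist"` for `"whitelist"` inverts the rule to an allow list of approved countries.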

22
Q

An organization wants to share regular updates about their charitable work using static webpages. The pages are expected to generate a large number of views from around the world. The files are stored in an Amazon S3 bucket. A solutions architect has been asked to design an efficient and effective solution.

Which action should the solutions architect take to accomplish this?

  1. Generate presigned URLs for the files
  2. Use cross-Region replication to all Regions
  3. Use the geoproximity feature of Amazon Route 53
  4. Use Amazon CloudFront with the S3 bucket as its origin
A

4. Use Amazon CloudFront with the S3 bucket as its origin

Amazon CloudFront can be used to cache the files in edge locations around the world and this will improve the performance of the webpages.

  • To serve a static website hosted on Amazon S3, you can deploy a CloudFront distribution using one of these configurations:
    • Using a REST API endpoint as the origin with access restricted by an origin access identity (OAI)
    • Using a website endpoint as the origin with anonymous (public) access allowed
    • Using a website endpoint as the origin with access restricted by a Referer header
  • This is used to restrict access which is not a requirement.
  • This does not provide a mechanism for directing users to the closest copy of the static webpages.
  • This does not include a solution for having multiple copies of the data in different geographic locations.

Reference:
How do I use CloudFront to serve a static website that’s hosted on Amazon S3?

Save time with our AWS cheat sheets.

23
Q

An application is running on Amazon EC2 behind an Elastic Load Balancer (ELB). Content is being published using Amazon CloudFront and you need to restrict the ability for users to circumvent CloudFront and access the content directly through the ELB.

How can you configure this solution?

  1. Create an Origin Access Identity (OAI) and associate it with the distribution
  2. Use signed URLs or signed cookies to limit access to the content
  3. Use a Network ACL to restrict access to the ELB
  4. Create a VPC Security Group for the ELB and use AWS Lambda to automatically update the CloudFront internal service IP addresses when they change
A

4. Create a VPC Security Group for the ELB and use AWS Lambda to automatically update the CloudFront internal service IP addresses when they change

The only way to get this working is by using a VPC Security Group for the ELB that is configured to allow only the internal service IP ranges associated with CloudFront. As these ranges are updated from time to time, you can use an AWS Lambda function to update the addresses automatically. The function is triggered by the Amazon SNS notification that AWS publishes whenever the address ranges change.

  • You can use an OAI to restrict access to content in Amazon S3 but not on EC2 or ELB.
  • Signed cookies and URLs are used to limit access to files but this does not stop people from circumventing CloudFront and accessing the ELB directly.
  • A Network ACL can be used to restrict access to an ELB but it is recommended to use security groups and this solution is incomplete as it does not account for the fact that the internal service IP ranges change over time.

Reference:
How to Automatically Update Your Security Groups for Amazon CloudFront and AWS WAF by Using AWS Lambda

Save time with our AWS cheat sheets.
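The referenced pattern hinges on filtering AWS's published ip-ranges.json down to the CloudFront ranges each time the change notification fires. A sketch of that filtering step, run against a trimmed, made-up sample of the file's shape:

```python
import json

# AWS publishes ip-ranges.json and notifies an SNS topic when it changes; a
# subscribed Lambda function can extract the CLOUDFRONT prefixes and sync
# them into the ELB's security group (the sync call is omitted here).
# SAMPLE is a trimmed, made-up document with the real file's field names.

SAMPLE = json.dumps({
    "prefixes": [
        {"ip_prefix": "120.52.22.96/27", "service": "CLOUDFRONT"},
        {"ip_prefix": "3.5.140.0/22", "service": "S3"},
    ]
})

def cloudfront_prefixes(ip_ranges_json):
    """Return only the CIDR ranges tagged with the CLOUDFRONT service."""
    doc = json.loads(ip_ranges_json)
    return [p["ip_prefix"] for p in doc["prefixes"]
            if p["service"] == "CLOUDFRONT"]

print(cloudfront_prefixes(SAMPLE))  # ['120.52.22.96/27']
```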

24
Q

An international logistics company has web applications running on AWS in the us-west-2 Region and database servers in the eu-central-1 Region. The applications running in a VPC in us-west-2 need to communicate securely with the databases running in a VPC in eu-central-1.

Which network design will meet these requirements?

  1. Establish a VPC peering connection between the us-west-2 VPC and the eu-central-1 VPC. Modify the subnet route tables accordingly. Create an inbound rule in the eu-central-1 database security group that references the security group ID of the application servers in us-west-2.
  2. Configure a VPC peering connection between the us-west-2 VPC and the eu-central-1 VPC. Update the subnet route tables accordingly. Create an inbound rule in the eu-central-1 database security group that allows traffic from the us-west-2 application server IP addresses.
  3. Create a VPC peering connection between the us-west-2 VPC and the eu-central-1 VPC. Add the appropriate routes to the subnet route tables. Create an inbound rule in the us-west-2 application security group that allows traffic from the eu-central-1 database server IP addresses.
  4. Establish a transit gateway with a peering attachment between the us-west-2 VPC and the eu-central-1 VPC. After the transit gateways are properly peered and routing is configured, create an inbound rule in the eu-central-1 database security group that references the security group ID of the application servers in us-west-2.
A

2. Configure a VPC peering connection between the us-west-2 VPC and the eu-central-1 VPC. Update the subnet route tables accordingly. Create an inbound rule in the eu-central-1 database security group that allows traffic from the us-west-2 application server IP addresses.

The correct solution establishes a VPC peering connection between the two regions, and it properly sets up the inbound rule in the eu-central-1 database security group to allow traffic from the us-west-2 application server IP addresses, which is the correct way to configure this as security groups can’t be referenced across regions.

  • You cannot reference a security group from another region. Security groups are region-specific and can only be referenced within the same region.
  • In this scenario, we want to allow traffic from the application servers in us-west-2 to the database servers in eu-central-1. The inbound rule should be configured in the eu-central-1 database security group to allow this traffic.
  • You cannot reference a security group from another region. Security groups are region-specific and can only be referenced within the same region.

Reference:
Update your security groups to reference peer security groups

Save time with our AWS cheat sheets.

25
Q

An online game platform company is launching a new game feature that involves a significant update to their existing API hosted on Amazon API Gateway. The company wants to minimize the impact on their existing users, and they need a deployment strategy that allows them to gradually roll out the changes while monitoring for any potential issues.

What should the company do to achieve this?

  1. Use an API Gateway canary release deployment. Initially direct a small percentage of user traffic to the new API version. After API verification, promote the canary stage to the production stage.
  2. Update the existing API directly in API Gateway with the new feature and immediately direct all traffic to the updated API.
  3. Create a completely new API for the new game feature and redirect half of the user traffic to the new API while maintaining the other half on the existing API.
  4. Create a new version of the API and use Route 53 to gradually shift DNS queries from the existing API endpoint to the new API endpoint.
A

1. Use an API Gateway canary release deployment. Initially direct a small percentage of user traffic to the new API version. After API verification, promote the canary stage to the production stage.

The correct answer is to use Amazon API Gateway's canary release deployments. This allows the company to gradually roll out the new API version, initially exposing only a small percentage of their users to the new API. As they monitor the system and confirm that the new API is working as expected, they can increase the percentage of traffic directed to the new version.

  • Updating the existing API directly in API Gateway and immediately redirecting all traffic to the updated API is risky. If there are any issues with the new API, it could negatively impact all users, rather than just a small subset of users.
  • Creating a completely new API and redirecting half the user traffic to the new API is not a gradual rollout strategy. This approach would immediately expose many users to potential issues with the new API.
  • Using Route 53 to gradually shift DNS queries from the existing API endpoint to the new API endpoint could work, but it is not as simple or efficient as using API Gateway's canary release deployments. DNS changes can also take time to propagate, potentially leading to inconsistent behavior for users.

Reference:
[Set up an API Gateway canary release deployment](https://docs.aws.amazon.com/apigateway/latest/developerguide/canary-release.html)

Save time with our [AWS cheat sheets](https://digitalcloud.training/amazon-api-gateway/).
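A canary release is configured with a small settings block on the stage deployment; a hedged sketch using API Gateway's canarySettings field names (the traffic percentage is an example value):

```python
# Hypothetical sketch: the canary settings attached when deploying to an API
# Gateway stage. 10% of traffic goes to the canary; the rest stays on the
# current production deployment.

def canary_settings(percent):
    return {
        "percentTraffic": percent,   # share of traffic sent to the canary
        "useStageCache": False,      # canary requests bypass the stage cache
    }

settings = canary_settings(10.0)
print(settings["percentTraffic"])  # 10.0
```

After verification, promoting the canary replaces the stage's production deployment and the percentage returns to zero.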
26
Q

An online education platform uses Amazon CloudFront to distribute learning resources globally. The company wants to ensure that only enrolled students have access to the course materials. These materials are stored in an Amazon S3 bucket. In addition, the company occasionally provides exclusive resources to certain students for research and project work.

Which solution will meet these requirements?

  1. Create and provide S3 pre-signed URLs to authenticated students.
  2. Utilize Amazon S3 object-level encryption for course materials.
  3. Implement CloudFront Field-Level Encryption to block access to non-enrolled students.
  4. Implement CloudFront signed cookies for authenticated students.
A

4. Implement CloudFront signed cookies for authenticated students.

CloudFront signed cookies are a method to control who can access your content. When a user authenticates and is verified as an enrolled student, the application can set a cookie in the student's browser. The cookie contains the same information that can be included in a signed URL but applies to multiple files in one or multiple directories.

  • S3 pre-signed URLs are used to grant temporary access to a specific S3 object. This could be a valid option for individual file access but would be less efficient for multiple files or directories.
  • Amazon S3 object-level encryption is mainly about securing data at rest; it won't control who can or cannot access the content.
  • CloudFront Field-Level Encryption handles sensitive information in HTTP POST requests to help prevent the information from being seen by unauthorized viewers. It's not designed to control access to content.

Reference:
[Use signed cookies](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-cookies.html)

Save time with our [AWS cheat sheets](https://digitalcloud.training/amazon-cloudfront/).
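Signed cookies are backed by a policy document granting access to a path until an expiry time. A sketch of building that policy (signing and setting the CloudFront-Policy / CloudFront-Signature / CloudFront-Key-Pair-Id cookies are omitted; the distribution domain and path are placeholders):

```python
import json
import time

# Hypothetical sketch: the custom policy behind CloudFront signed cookies.
# It grants access to every object under /course/ until the given epoch
# time. The policy is later base64-encoded and signed with the private key
# of the trusted key pair (not shown here).

def course_policy(expires_epoch):
    policy = {
        "Statement": [{
            "Resource": "https://d111111abcdef8.cloudfront.net/course/*",
            "Condition": {"DateLessThan": {"AWS:EpochTime": expires_epoch}},
        }]
    }
    # Compact encoding, as CloudFront expects no extra whitespace
    return json.dumps(policy, separators=(",", ":"))

print(course_policy(int(time.time()) + 3600))
```

Because the cookie applies to a whole path, one authentication can unlock an entire course directory, which is what makes it preferable to per-object pre-signed URLs here.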
27
Q

A multinational podcast company uses Amazon CloudFront for distributing its digital content. The company wants to gradually introduce content across various regions. It also needs to ensure that listeners who are outside the regions to which the content is currently released, cannot access the content.

Which solution will meet these requirements?

  1. Implement geographical restrictions on CloudFront content using a deny list and create a custom error message.
  2. Establish a new URL for the restricted content, control access with signed URLs and cookies, and set up a custom error message.
  3. Encrypt the company's distributed content data and establish a custom error message.
  4. Create a new URL for the restricted content and establish an expiration date-based access policy for signed URLs.
A

1. Implement geographical restrictions on CloudFront content using a deny list and create a custom error message.

By setting geographical restrictions on CloudFront content using a deny list, the company can block access to content for users outside of the released regions. If a user from a blocked region attempts to access the content, they would receive the custom error message, thereby meeting the company's requirements.

  • While signed URLs and cookies can be used to control access to content, they don't inherently consider the geographical location of the users, thus it would not guarantee that only users in the released regions could access the content.
  • Although encrypting the content data adds a layer of security, it does not restrict access based on the geographical location of the users.
  • Time-based access policies with signed URLs can limit access to the content after a certain time, but it does not restrict access based on the geographical location of the users.

Reference:
[Restrict the geographic distribution of your content](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/georestrictions.html)

Save time with our [AWS cheat sheets](https://digitalcloud.training/amazon-cloudfront/).
28
Q

A travel agency operates a web service in an AWS Region. The service is accessed by customers via a REST API on Amazon API Gateway. The agency uses Amazon Route 53 for DNS and wants to provide individual and secure URLs for each travel agent using the service.

Which combination of steps will meet these requirements with the LEAST operational complexity? (Select THREE.)

  1. Register the desired domain with a domain registrar. Set up a wildcard custom domain in a Route 53 hosted zone and create a record in the zone that points to the API Gateway endpoint.
  2. Request a wildcard certificate that corresponds to the custom domain name in AWS Certificate Manager (ACM), within a different Region.
  3. Create separate hosted zones in Route 53 for each travel agent as needed. Set up zone records that point to the API Gateway endpoint.
  4. Request a wildcard certificate that matches the custom domain name in AWS Certificate Manager (ACM) in the same Region.
  5. Establish separate API endpoints in API Gateway for each travel agent.
  6. Establish a custom domain name in API Gateway for the REST API. Import the corresponding certificate from AWS Certificate Manager (ACM).
A

1. Register the desired domain with a domain registrar. Set up a wildcard custom domain in a Route 53 hosted zone and create a record in the zone that points to the API Gateway endpoint.
4. Request a wildcard certificate that matches the custom domain name in AWS Certificate Manager (ACM) in the same Region.
6. Establish a custom domain name in API Gateway for the REST API. Import the corresponding certificate from AWS Certificate Manager (ACM).

Registering a wildcard custom domain name in Route 53 and creating a record pointing to the API Gateway endpoint allows you to create unique URLs for each customer under the same domain name.

Requesting a wildcard certificate in the same AWS Region as the REST API would provide secure (HTTPS) URLs for all customers under the same domain name. This would minimize the operational complexity of managing multiple certificates in different Regions.

By creating a custom domain name in API Gateway and importing the wildcard certificate from ACM, the company can provide secure and unique URLs for each customer. API Gateway's custom domain names provide paths for API methods, helping maintain a consistent experience for customers.

  • Requesting a wildcard certificate in a different AWS Region than your API Gateway increases operational complexity and doesn't provide any significant benefit.
  • Creating separate hosted zones for each travel agent can significantly increase operational complexity and cost. It would be more efficient to use a single hosted zone with a wildcard domain and use paths for differentiation.
  • Creating separate API endpoints for each travel agent can significantly increase the complexity and management overhead. Instead, it would be more efficient to use different paths under the same API endpoint.

Reference:
[DNS domain name format](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/DomainNameFormat.html)

Save time with our [AWS cheat sheets](https://digitalcloud.training/aws-certificate-manager/).
29
Q

A music streaming company needs to incorporate a third-party song feed. The song feed sends a webhook to notify an external service when new songs are ready for consumption. A developer has written an AWS Lambda function to retrieve songs when the company receives a webhook callback. The developer must expose the Lambda function for the third party to invoke.

Which solution will meet these requirements with the LEAST operational complexity?

  1. Generate an API Gateway endpoint for the Lambda function. Provide the API Gateway endpoint to the third party for the webhook.
  2. Deploy a Network Load Balancer (NLB) to distribute requests to the Lambda function. Provide the NLB URL to the third party for the webhook.
  3. Create an Amazon Simple Notification Service (Amazon SNS) topic. Link the topic to the Lambda function. Provide the SNS topic ARN to the third party for the webhook.
  4. Create an Amazon Simple Queue Service (Amazon SQS) queue. Connect the queue to the Lambda function. Provide the ARN of the SQS queue to the third party for the webhook.
A

1. Generate an API Gateway endpoint for the Lambda function. Provide the API Gateway endpoint to the third party for the webhook.

API Gateway enables you to create, deploy, and manage a RESTful API to expose backend HTTP endpoints, AWS Lambda functions, or other AWS services. You can provide the third party with the API Gateway endpoint, and they can invoke the Lambda function through it. This solution is the most operationally efficient because it requires the fewest resources and management overhead.

  • While it is possible to trigger AWS Lambda from an Application Load Balancer (ALB), it is not possible from an NLB, and using an ALB would add unnecessary complexity to the solution.
  • Amazon SNS is a pub/sub messaging service, but it is not meant to expose a public-facing endpoint for third-party webhooks. Also, providing the ARN to a third party would not work as SNS topics cannot be invoked directly from the internet.
  • SQS is a message queuing service used to decouple and scale microservices, distributed systems, and serverless applications. It is not suitable for exposing a public-facing endpoint for third-party webhooks. Moreover, like SNS, SQS cannot be invoked directly from the internet.

Reference:
[API Gateway WebSocket APIs](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-websocket-api.html)

Save time with our [AWS cheat sheets](https://digitalcloud.training/amazon-api-gateway/).
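Behind the API Gateway endpoint, the webhook handler is an ordinary Lambda proxy-integration function. A sketch with a hypothetical payload field (`song_id` is made up for illustration):

```python
import json

# Hypothetical sketch: a Lambda proxy-integration handler sitting behind the
# API Gateway webhook endpoint. It parses the third party's POST body and
# acknowledges receipt with a 200 response.

def lambda_handler(event, context=None):
    payload = json.loads(event.get("body") or "{}")
    song_id = payload.get("song_id")  # hypothetical field name
    # ... retrieve the song from the feed here ...
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"received": song_id}),
    }

resp = lambda_handler({"body": json.dumps({"song_id": "abc123"})})
print(resp["statusCode"])  # 200
```

Returning a prompt 200 matters for webhooks, since most feed providers retry deliveries that do not receive a timely success response.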
30
Q

A Solutions Architect needs to capture information about the traffic that reaches an Amazon Elastic Load Balancer. The information should include the source, destination, and protocol.

What is the most secure and reliable method for gathering this data?

  1. Create a VPC flow log for each network interface associated with the ELB
  2. Enable Amazon CloudTrail logging and configure packet capturing
  3. Use Amazon CloudWatch Logs to review detailed logging information
  4. Create a VPC flow log for the subnets in which the ELB is running
A

1. Create a VPC flow log for each network interface associated with the ELB

You can use VPC Flow Logs to capture detailed information about the traffic going to and from your Elastic Load Balancer. Create a flow log for each network interface for your load balancer. There is one network interface per load balancer subnet.

  • CloudTrail performs auditing of API actions; it does not do packet capturing.
  • CloudWatch Logs does not record this traffic information on its own.
  • The more secure option is to create the flow logs on the ELB network interfaces rather than on the subnets.

References:
[Logging IP traffic using VPC Flow Logs](https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html)
[Monitor your Network Load Balancers](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-monitoring.html)

Save time with our [AWS cheat sheets](https://digitalcloud.training/aws-elastic-load-balancing-aws-elb/).
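Each flow log record from the ELB's network interfaces is a space-separated line in the default (version 2) format, which carries exactly the source, destination, and protocol fields the question asks for. A small parser over a sample record:

```python
# Field order of the default (version 2) VPC flow log format.
FIELDS = ("version account_id interface_id srcaddr dstaddr srcport dstport "
          "protocol packets bytes start end action log_status").split()

def parse_flow_record(line):
    """Split one default-format flow log line into named fields."""
    return dict(zip(FIELDS, line.split()))

rec = parse_flow_record(
    "2 123456789010 eni-1235b8ca 172.31.16.139 172.31.16.21 "
    "20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK")
print(rec["srcaddr"], rec["protocol"], rec["action"])  # 172.31.16.139 6 ACCEPT
```

Protocol is the IANA protocol number (6 is TCP), and `action` records whether the security groups and network ACLs accepted or rejected the traffic.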
31
A company is developing a web-based application that will be used for real-time chat functionality. The application should use WebSocket APIs to maintain a persistent connection with the client. The backend services of the application, hosted in containers within private subnets of a VPC, need to be accessed securely. **Which solution will meet these requirements?** 1. Develop a WebSocket API using Amazon API Gateway. Host the application in Amazon Elastic Kubernetes Service (EKS) in a private subnet. Establish a private VPC link for the API Gateway to securely access the Amazon EKS cluster. 2. Develop a REST API using Amazon API Gateway. Host the application in Amazon Elastic Kubernetes Service (EKS) in a private subnet. Establish a private VPC link for the API Gateway to securely access the Amazon EKS cluster. 3. Develop a WebSocket API using Amazon API Gateway. Host the application in Amazon Elastic Kubernetes Service (EKS) in a private subnet. Create a security group that allows API Gateway to access the Amazon EKS cluster. 4. Develop a REST API using Amazon API Gateway. Host the application in Amazon Elastic Kubernetes Service (EKS) in a private subnet. Create a security group that allows API Gateway to access the Amazon EKS cluster.
**1.** Develop a WebSocket API using Amazon API Gateway. Host the application in Amazon Elastic Kubernetes Service (EKS) in a private subnet. Establish a private VPC link for the API Gateway to securely access the Amazon EKS cluster. ## Footnote The requirement is for a real-time chat application, which makes WebSocket APIs the better fit. Hosting the application in Amazon EKS within a private subnet allows secure and scalable management of the application. Creating a VPC link provides secure, private connectivity between API Gateway and the Amazon EKS service hosted inside the VPC. * This solution does provide a secure hosting environment and private connectivity between API Gateway and the Amazon EKS cluster, but REST APIs are not suitable for real-time applications like a chat service. This is because REST APIs use a request-response model, which doesn't provide the continuous connection needed for real-time communication. * This option, while correctly suggesting the use of WebSocket APIs and Amazon EKS, proposes the use of a security group for connectivity. However, security groups act as a firewall for associated network interfaces, controlling inbound and outbound traffic at the instance level; access from API Gateway to services inside a VPC is established through VPC links, not security group rules alone. * REST APIs are not suitable for a real-time chat application. Also, managing access via a security group is not the most secure method for accessing services hosted within private subnets in a VPC. **Reference:** [WebSocket API](https://docs.aws.amazon.com/whitepapers/latest/best-practices-api-gateway-private-apis-integration/websocket-api.html) Save time with our [AWS cheat sheets](https://digitalcloud.training/amazon-api-gateway/).
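As a sketch of the winning option, the two key API Gateway V2 requests (`create_api` for the WebSocket protocol and `create_vpc_link` for private connectivity) can be expressed as boto3 request bodies. The API name, link name, and subnet/security group IDs are hypothetical placeholders:

```python
# Hypothetical sketch: request bodies for boto3's apigatewayv2 client.
# Names and IDs are placeholders, not values from the original card.

def websocket_api_params():
    """Parameters for apigatewayv2.create_api(**params): a WebSocket API."""
    return {
        "Name": "chat-api",
        "ProtocolType": "WEBSOCKET",                       # persistent connections
        "RouteSelectionExpression": "$request.body.action" # picks a route per message
    }

def vpc_link_params(subnet_ids, security_group_ids):
    """Parameters for apigatewayv2.create_vpc_link(**params)."""
    return {
        "Name": "chat-backend-link",
        "SubnetIds": subnet_ids,                  # private subnets hosting the EKS service
        "SecurityGroupIds": security_group_ids,   # groups applied to the link's ENIs
    }

api = websocket_api_params()
link = vpc_link_params(["subnet-0aaa1111", "subnet-0bbb2222"], ["sg-0ccc3333"])
```

Routes on the WebSocket API would then use an integration whose connection type references the VPC link, keeping traffic to the EKS-hosted backend off the public internet.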
32
A corporation has a web-based multiplayer gaming service that operates using both TCP and UDP protocols. Amazon Route 53 is currently employed to direct application traffic to a set of Network Load Balancers (NLBs) in various AWS Regions. To prepare for an increase in user activity, the company must enhance application performance and reduce latency. **Which approach will best meet these requirements?** 1. Incorporate Amazon CloudFront in front of the NLBs and extend the duration of the Cache-Control max-age directive. 2. Substitute the NLBs with Application Load Balancers (ALBs) and set Route 53 to utilize latency-based routing. 3. Implement AWS Global Accelerator ahead of the NLBs and align the Global Accelerator endpoint to use the appropriate listener ports. 4. Insert an Amazon API Gateway endpoint behind the NLBs, enable API caching, and customize method caching across different stages.
**3.** Implement AWS Global Accelerator ahead of the NLBs and align the Global Accelerator endpoint to use the appropriate listener ports. ## Footnote AWS Global Accelerator is designed to improve the availability and performance of your applications for local and global users. It directs traffic to optimal endpoints over the AWS global network, improving performance for TCP and UDP traffic by routing packets over AWS infrastructure rather than the public internet, which reduces latency and jitter for the game's players. * Amazon CloudFront is a content delivery network (CDN) that speeds up the delivery of your static and dynamic web content. While it could potentially help with application performance, it doesn't directly improve TCP/UDP performance, which is the specific requirement in this case. * Application Load Balancers (ALBs) are layer 7 load balancers and do not support raw TCP and UDP traffic, which the gaming application requires. NLBs, on the other hand, are suitable for extreme performance needs and for TCP/UDP traffic. * While API Gateway would add more control and security to the application, its caching feature is not beneficial for a real-time gaming scenario where the content is likely to change frequently and unpredictably. **Reference:** [AWS Global Accelerator](https://aws.amazon.com/global-accelerator/) Save time with our [AWS cheat sheets](https://digitalcloud.training/aws-global-accelerator/).
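A minimal sketch of the listener configuration: Global Accelerator needs one listener per protocol, with port ranges matching the NLB listener ports. The accelerator ARN and port numbers below are hypothetical placeholders, built as request bodies for boto3's `globalaccelerator.create_listener`:

```python
# Hypothetical sketch: Global Accelerator listeners covering the game's TCP and
# UDP ports. The ARN and ports are placeholders, not values from the card.

def listener_params(accelerator_arn, protocol, ports):
    """Parameters for globalaccelerator.create_listener(**params)."""
    return {
        "AcceleratorArn": accelerator_arn,
        "Protocol": protocol,  # "TCP" or "UDP"; one listener per protocol
        "PortRanges": [{"FromPort": p, "ToPort": p} for p in ports],
    }

ARN = "arn:aws:globalaccelerator::123456789012:accelerator/abcd1234"
tcp_listener = listener_params(ARN, "TCP", [443])    # e.g. game lobby over TLS
udp_listener = listener_params(ARN, "UDP", [3074])   # e.g. real-time game traffic
```

Each listener would then get endpoint groups pointing at the regional NLBs, so Route 53 can hand clients the accelerator's static anycast IPs instead of Region-specific addresses.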
33
A telecommunication company has an API that allows users to manage their mobile plans and services. The API experiences significant traffic spikes during specific times such as end of the month and special offer periods. The company needs to ensure low latency response time consistently to ensure a good user experience. The solution should also minimize operational overhead. **Which solution would meet these requirements MOST efficiently?** 1. Implement the API using AWS Elastic Beanstalk with auto-scaling groups. 2. Use Amazon API Gateway with AWS Fargate tasks to handle the API requests. 3. Use Amazon API Gateway along with AWS Lambda functions with provisioned concurrency. 4. Implement the API on an Amazon EC2 instance behind an Application Load Balancer with manual scaling.
**3.** Use Amazon API Gateway along with AWS Lambda functions with provisioned concurrency. ## Footnote Amazon API Gateway and AWS Lambda together make a highly scalable solution for APIs. Provisioned concurrency in Lambda ensures that there is always a warm pool of functions ready to quickly respond to API requests, thereby guaranteeing low latency even during peak traffic times. * Elastic Beanstalk is a viable option for deploying applications and auto-scaling and can help handle increased traffic, but it doesn't guarantee the low latency requirement during peak traffic times. * API Gateway with Fargate can provide scalable compute, but this approach can result in higher operational overhead because of the need to manage the container lifecycle. * This solution does not scale automatically and would require manual intervention to ensure optimal performance during traffic spikes. Therefore, it doesn't satisfy the requirement of minimizing operational overhead. **Reference:** [Configuring provisioned concurrency for a function](https://docs.aws.amazon.com/lambda/latest/dg/provisioned-concurrency.html) Save time with our [AWS cheat sheets](https://digitalcloud.training/amazon-api-gateway/).
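As a sketch of the provisioned-concurrency piece, the request body for boto3's `lambda.put_provisioned_concurrency_config` can be built ahead of a known traffic spike. The function name, alias, and concurrency figure are hypothetical placeholders:

```python
# Hypothetical sketch: keep a warm pool of Lambda execution environments ready
# before a traffic spike. Names and numbers are placeholders.

def provisioned_concurrency_params(function_name, qualifier, executions):
    """Parameters for lambda_client.put_provisioned_concurrency_config(**params)."""
    return {
        "FunctionName": function_name,
        "Qualifier": qualifier,  # must be an alias or version, not $LATEST
        "ProvisionedConcurrentExecutions": executions,  # warm environments to keep
    }

params = provisioned_concurrency_params("mobile-plans-api", "live", 100)
```

For recurring spikes (such as end of month), the same setting could be driven by Application Auto Scaling scheduled actions instead of a fixed value, so idle warm capacity isn't paid for all month.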