Timed run-through 3: Flashcards

(8 cards)

Card 1

Q: A global social media site uses Amazon CloudFront for static content, but users report long login times and occasional HTTP 504 errors. Which two cost-effective changes should be made to improve performance?


Correct answer:
-Use Lambda@Edge so authentication logic runs closer to users at CloudFront edge locations
-Configure CloudFront origin failover with an origin group so requests switch to a secondary origin if the primary origin returns failure responses such as HTTP 504
-The correct answers are: use Lambda@Edge for authentication closer to users, and configure CloudFront origin failover

Wrong answers:
-Build multiple regional Amazon VPC deployments with a transit VPC and regional AWS Lambda functions
-This is much higher cost and operational effort than Lambda@Edge

-Increase CloudFront cache max-age
-This helps static object caching, not login and authentication latency

-Deploy the application to multiple Regions with Amazon Route 53 latency routing
-This may help performance, but it is not the most cost-effective option in this scenario

Exam takeaway:
-Lambda@Edge is used when logic must run close to global users at CloudFront edge locations
-CloudFront origin failover improves resiliency for origin-side failures such as HTTP 504
-Cache tuning helps cached content, not authentication workflows
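
The failover half of the answer can be sketched as the origin-group fragment of a CloudFront distribution config (the same shape boto3's update_distribution and CloudFormation use). The origin IDs are hypothetical placeholders:

```python
# Sketch of a CloudFront origin group with failover criteria.
# "primary-origin" and "secondary-origin" are placeholder origin IDs.
origin_group = {
    "Id": "login-origin-group",
    "FailoverCriteria": {
        # Switch to the secondary origin on these origin-side errors,
        # including the HTTP 504s users are seeing.
        "StatusCodes": {"Quantity": 4, "Items": [500, 502, 503, 504]}
    },
    "Members": {
        "Quantity": 2,
        "Items": [
            {"OriginId": "primary-origin"},
            {"OriginId": "secondary-origin"},
        ],
    },
}

print(504 in origin_group["FailoverCriteria"]["StatusCodes"]["Items"])  # True
```

Lambda@Edge handles the latency half: the authentication function is attached to the distribution's viewer-request event so it runs at the edge location nearest the user.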

Card 2

Q: A company stores structured and semi-structured data in an Amazon S3 data lake and wants to use big data processing frameworks plus business intelligence tools and standard SQL queries. What is the highest-performing solution?


Correct answer:
-Use Amazon EMR for big data processing with frameworks such as Apache Hadoop and Apache Spark
-Store the processed data in Amazon Redshift for high-performance analytics with standard SQL and business intelligence tools
-The correct answer is: create an Amazon EMR cluster and store the processed data in Amazon Redshift

Wrong answers:
-Use AWS Glue and store processed data in Amazon S3
-Amazon S3 is a data lake, but Amazon Redshift is better for fast SQL analytics and business intelligence workloads

-Use Amazon Managed Service for Apache Flink Studio and store data in Amazon DynamoDB
-Apache Flink is mainly for streaming use cases, and Amazon DynamoDB is not designed for complex SQL analytics and business intelligence

-Create an Amazon EC2 instance and store processed data in Amazon EBS
-This has limited scale and creates unnecessary management overhead compared with Amazon EMR

Exam takeaway:
-Amazon EMR = big data processing frameworks
-Amazon Redshift = fast SQL analytics and business intelligence on large datasets
-Do not use Amazon DynamoDB or single Amazon EC2 instances for heavy analytical SQL workloads
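
A rough sketch of the two halves of the pipeline, with hypothetical names throughout (cluster, bucket, IAM role): an EMR cluster request that includes Spark and Hadoop, and the standard SQL a BI tool's loader would run against Redshift:

```python
# Sketch of an EMR run_job_flow-style request (boto3). All names and the
# release label are assumptions for illustration.
emr_cluster_request = {
    "Name": "datalake-processing",
    "ReleaseLabel": "emr-6.15.0",
    "Applications": [{"Name": "Spark"}, {"Name": "Hadoop"}],
    "Instances": {
        "InstanceCount": 3,
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
    },
}

# Processed output lands back in S3; Redshift then ingests it with standard
# SQL, after which BI tools query the warehouse directly.
copy_sql = """
COPY analytics.orders
FROM 's3://example-datalake/processed/orders/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
FORMAT AS PARQUET;
"""
```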

Card 3

Q: An ecommerce application on an Auto Scaling group has predictable weekly promotion traffic spikes, but new Amazon EC2 instances take too long to initialize. The company wants capacity launched in advance based on forecasted load with the least effort. What should be used?


Correct answer:
-Configure the Auto Scaling group to use predictive scaling
-Predictive scaling uses machine learning on historical Amazon CloudWatch data to forecast demand and launch capacity ahead of time
-The correct answer is: configure the Auto Scaling group to use predictive scaling

Wrong answers:
-Use Amazon SageMaker Clarify and create scheduled scaling from the results
-Amazon SageMaker Clarify is for bias detection and model explainability, not forecasting Auto Scaling demand

-Use dynamic scaling based on historical average CPU load
-Dynamic scaling is reactive, not forecast-based

-Run a scheduled Amazon EventBridge rule and AWS Lambda scaling job each night
-This takes more effort and is less flexible if the traffic pattern changes

Exam takeaway:
-Predictive scaling is best for recurring traffic patterns and slow-start applications
-Dynamic scaling reacts after load rises; predictive scaling launches capacity before the spike
-Do not use extra custom machine learning or scheduling if Auto Scaling already has the feature built in
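
A minimal sketch of the payload a put_scaling_policy call (boto3 Auto Scaling client) would take for predictive scaling; the group and policy names are placeholders:

```python
# Sketch of a predictive scaling policy request. ASG and policy names are
# hypothetical; TargetValue and buffer time would be tuned per workload.
predictive_policy = {
    "AutoScalingGroupName": "promo-asg",
    "PolicyName": "weekly-promo-predictive",
    "PolicyType": "PredictiveScaling",
    "PredictiveScalingConfiguration": {
        "MetricSpecifications": [{
            "TargetValue": 50.0,
            "PredefinedMetricPairSpecification": {
                "PredefinedMetricType": "ASGCPUUtilization"
            },
        }],
        # Launch forecasted capacity 10 minutes ahead of the predicted
        # spike so slow-initializing instances are ready in time.
        "SchedulingBufferTime": 600,
        "Mode": "ForecastAndScale",
    },
}
```

SchedulingBufferTime is the piece that addresses the slow-initialization complaint: capacity comes up before the forecasted demand, not in reaction to it.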

Card 4

Q: A database on Amazon RDS for MySQL must automatically fail over to continue operating during failures, and the design must be as highly available as possible. What should a solutions architect do?


Correct answer:
-Enable Multi-AZ deployment to create a synchronous standby replica in another Availability Zone
-Multi-AZ provides automatic failover and high availability
-The correct answer is: create a standby replica in another Availability Zone by enabling Multi-AZ deployment

Wrong answers:
-Create same-Region and cross-Region read replicas and promote one during failure
-Read replica promotion is manual, so it does not meet the automatic failover requirement

-Create five read replicas across Availability Zones and promote one during outage
-This still does not provide automatic failover

-Create five cross-Region read replicas and promote one during outage
-This may improve resilience, but failover is still manual

Exam takeaway:
-Multi-AZ on Amazon RDS = high availability and automatic failover
-Read replicas are mainly for scaling reads and disaster recovery, not automatic failover
-If the exam says automatic failover, think Multi-AZ first
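
Enabling Multi-AZ is a single flag, set either at creation or on an existing instance via modify_db_instance (boto3 RDS client); a sketch with a placeholder identifier:

```python
# Sketch of a modify_db_instance request enabling Multi-AZ on an existing
# RDS for MySQL instance. The identifier is hypothetical.
modify_request = {
    "DBInstanceIdentifier": "app-mysql-db",
    # Provisions a synchronous standby replica in another Availability Zone;
    # RDS fails over to it automatically during an outage.
    "MultiAZ": True,
    "ApplyImmediately": True,
}
```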

Card 5

Q: A company is migrating a two-tier application to AWS with an internet-facing Application Load Balancer, private application and database tiers, separate subnets for the database tier, custom route table isolation for the database tier, and high availability across two Availability Zones. How many subnets are required?


Correct answer:
-You need one public subnet for the Application Load Balancer in each Availability Zone
-You need one private subnet for the application tier and one private subnet for the database tier in each Availability Zone
-That is 3 subnets per Availability Zone × 2 Availability Zones = 6 subnets
-The correct answer is: 6 subnets

Wrong answers:
-2, 3, or 4 subnets
-These do not satisfy both multi-Availability Zone high availability and separate public, application, and database subnet requirements in each Availability Zone

Exam takeaway:
-For high availability across two Availability Zones, duplicate the subnet pattern in both Availability Zones
-Internet-facing Application Load Balancers need public subnets
-Application and database tiers commonly use separate private subnets, especially when the database requires tighter isolation
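
The arithmetic from the card, made explicit (AZ names are arbitrary examples):

```python
# One public subnet (ALB) plus two private subnets (app, db) per AZ,
# duplicated across both Availability Zones.
tiers_per_az = ["public-alb", "private-app", "private-db"]
availability_zones = ["az-1", "az-2"]

subnets = [f"{tier}-{az}" for az in availability_zones for tier in tiers_per_az]
print(len(subnets))  # 6
```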

Card 6

Q: A bank needs to ensure no future Amazon S3 uploads are unencrypted, the encryption uses 256-bit Advanced Encryption Standard, and keys are rotated automatically every year with the least operational overhead. What should be done?


Correct answer:
-Create an Amazon S3 bucket policy that denies uploads unless the request includes the x-amz-server-side-encryption header set to AES256 (checked with the s3:x-amz-server-side-encryption condition key)
-Use server-side encryption with Amazon S3-managed keys, which provides built-in AWS-managed key rotation
-The correct answer is: deny non-encrypted uploads with a bucket policy and use server-side encryption with Amazon S3-managed keys

Wrong answers:
-Use a Service Control Policy for the bucket and modify Amazon S3-managed key rotation
-Service Control Policies are not the right control here, and you do not manage Amazon S3-managed key rotation yourself

-Use a customer-managed AWS Key Management Service key and manually rotate it each year
-This adds more operational overhead and does not enforce encryption on upload by itself without a bucket policy

-Require aws:kms in the header and use S3 Object Lock
-That is server-side encryption with AWS Key Management Service, not the requested AES256; S3 Object Lock is for immutability, not key rotation

Exam takeaway:
-Use an Amazon S3 bucket policy to enforce encryption on upload
-AES256 in the header means server-side encryption with Amazon S3-managed keys
-S3 Object Lock is for retention and immutability, not encryption
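
The enforcement half of the answer is a single Deny statement; a sketch with a placeholder bucket name:

```python
# Sketch of a bucket policy that denies any PutObject request not using
# SSE-S3 (AES256). The bucket name is hypothetical.
deny_unencrypted_uploads = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::example-bank-bucket/*",
        "Condition": {
            # Deny unless the upload header x-amz-server-side-encryption
            # is present and set to AES256 (SSE-S3).
            "StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}
        },
    }],
}
```

With SSE-S3 the 256-bit AES keys are fully managed and rotated by AWS, which is what keeps operational overhead minimal.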

Card 7

Q: A company plans to use a Network Load Balancer in a VPC and needs the security team to inspect traffic entering and leaving the VPC. What should a solutions architect recommend?


Correct answer:
-Deploy AWS Network Firewall at the VPC level
-Create custom rule groups and update route tables so ingress and egress traffic can be inspected and filtered
-The correct answer is: create a firewall using AWS Network Firewall at the VPC level, add custom rule groups, and update the VPC route tables

Wrong answers:
-Use Network Access Analyzer
-Network Access Analyzer finds unintended access paths; it does not perform packet inspection or filtering

-Create a subnet-level firewall with Amazon Detective and use Reachability Analyzer
-Amazon Detective is a security investigation service, not a firewall, and Reachability Analyzer only tests configured network paths; neither inspects or filters live traffic

-Use Traffic Mirroring on the Network Load Balancer
-Traffic Mirroring copies traffic for analysis but does not itself inspect and block traffic

Exam takeaway:
-AWS Network Firewall is the main service for inspecting and filtering ingress and egress traffic at the VPC level
-Network Access Analyzer and Reachability Analyzer are analysis tools, not inline firewalls
-Traffic Mirroring is for copying traffic, not blocking it
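
The custom rule groups from the answer can be sketched as a create_rule_group request (boto3 network-firewall client); the rule group name and target domain are placeholders:

```python
# Sketch of a stateful rule group for AWS Network Firewall that denylists
# traffic to a placeholder bad domain. Name and target are hypothetical.
rule_group_request = {
    "RuleGroupName": "vpc-ingress-egress-inspection",
    "Type": "STATEFUL",
    "Capacity": 100,
    "RuleGroup": {
        "RulesSource": {
            "RulesSourceList": {
                "Targets": [".malicious.example"],
                "TargetTypes": ["TLS_SNI", "HTTP_HOST"],
                "GeneratedRulesType": "DENYLIST",
            }
        }
    },
}
```

The route-table update is the other half: subnet routes are pointed at the firewall endpoint so traffic actually passes through the inspection path.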

Card 8

Q: A healthcare company needs to monitor all incoming and outgoing traffic for a production VPC and actively block malicious connections. What should be implemented?


Correct answer:
-Create custom security rules in AWS Network Firewall to detect and filter traffic entering and leaving the production VPC
-AWS Network Firewall can both monitor traffic and actively block malicious connections
-The correct answer is: create custom security rules in AWS Network Firewall

Wrong answers:
-Use Amazon GuardDuty with an AWS Lambda response
-Amazon GuardDuty detects threats, but it does not sit inline to block traffic entering or leaving the VPC

-Use VPC Traffic Mirroring and analyze the mirrored traffic
-This helps observe traffic, but it does not actively prevent malicious connections

-Use AWS Firewall Manager with AWS WAF policies
-AWS WAF protects web applications at Layer 7, not all VPC ingress and egress traffic

Exam takeaway:
-AWS Network Firewall is for inline inspection and blocking of VPC traffic
-Amazon GuardDuty detects suspicious activity but does not block traffic by itself
-AWS WAF protects HTTP and HTTPS applications, not all network traffic
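
For active blocking, Network Firewall's stateful engine also accepts Suricata-compatible rules (supplied as a RulesString instead of a rules source list); a minimal sketch with a placeholder IP range:

```python
# A Suricata-compatible rule for Network Firewall's stateful engine: the
# "drop" action actively blocks, rather than just alerting on, outbound
# traffic to a placeholder bad range (192.0.2.0/24 is documentation space).
suricata_rule = (
    'drop ip any any -> 192.0.2.0/24 any '
    '(msg:"block known-bad range"; sid:1000001; rev:1;)'
)
```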
