Q: A company has multiple research departments that can freely provision AWS resources, and it wants to avoid unexpectedly hitting AWS service quotas. Which two actions should a Solutions Architect implement to monitor quota usage and send alerts?
Correct answer:
-Write an AWS Lambda function that refreshes AWS Trusted Advisor Service Limits checks every 24 hours
-Capture Trusted Advisor status events with Amazon EventBridge and send notifications to an Amazon Simple Notification Service topic
-This matches the AWS Quota Monitor pattern: refresh Trusted Advisor quota data, then route events to notifications
-The correct answer is: Use a scheduled Lambda to refresh Trusted Advisor Service Limits checks and use EventBridge with Amazon Simple Notification Service for alerts
Wrong answers:
-Create an Amazon Simple Notification Service topic and configure it as a target for notifications
-Incomplete by itself; it does not specify what service generates or routes the notifications
-Query Trusted Advisor every 24 hours with DescribeTrustedAdvisorChecks and use a Developer Support plan
-DescribeTrustedAdvisorChecks lists available checks, not quota usage details, and Trusted Advisor APIs require Business, Enterprise On-Ramp, or Enterprise Support
-Use an AWS Config managed rule plus Lambda scheduling to monitor service quotas
-AWS Config is mainly for compliance, not quota monitoring, and adds unnecessary cost and complexity
Exam takeaway:
-For AWS quota monitoring, know the Trusted Advisor Service Limits + scheduled Lambda + EventBridge + Amazon Simple Notification Service pattern
-AWS Config is for compliance, not the normal answer for service quota monitoring
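The routing half of this pattern can be sketched as an EventBridge event pattern plus an SNS message formatter. This is a stand-alone sketch, not the actual Quota Monitor solution code; the account number is a placeholder, and in a real deployment the pattern dict would be JSON-encoded into events:PutRule and the formatted string published to the SNS topic.

```python
import json

# EventBridge event pattern that matches Trusted Advisor "Service Limits"
# check refreshes whose status reached WARN or ERROR. A scheduled Lambda
# triggers the refresh; this rule routes the resulting events to SNS.
event_pattern = {
    "source": ["aws.trustedadvisor"],
    "detail-type": ["Trusted Advisor Check Item Refresh Notification"],
    "detail": {
        "check-name": ["Service Limits"],
        "status": ["WARN", "ERROR"],
    },
}

def format_alert(event: dict) -> str:
    """Build an SNS message body from a Trusted Advisor status event."""
    detail = event["detail"]
    return (f"Quota alert: {detail['check-name']} is {detail['status']} "
            f"in account {event.get('account', 'unknown')}")

# Example event shaped like a Trusted Advisor refresh notification.
sample = {
    "source": "aws.trustedadvisor",
    "account": "111122223333",
    "detail": {"check-name": "Service Limits", "status": "WARN"},
}
print(format_alert(sample))
print(json.dumps(event_pattern))
```

The key exam point is visible in the pattern: Trusted Advisor is the data source, EventBridge matches its status events, and SNS is only the delivery target.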
Q: A company launched an Amazon EC2 instance for a network monitoring system and wants an automated way to send the instance log files to Amazon CloudWatch Logs. Which service or feature should be used?
Correct answer:
-Use the CloudWatch Logs agent on the Amazon EC2 instance
-It automatically pushes log data from the instance to CloudWatch Logs
-This is the standard exam answer for sending Amazon EC2 instance logs into CloudWatch Logs
-The correct answer is: CloudWatch Logs agent
Wrong answers:
-CloudTrail Processing Library
-This is a Java library for processing CloudTrail logs, not for shipping Amazon EC2 instance logs to CloudWatch Logs
-AWS Transfer for SFTP
-This is for managed SFTP file transfer workloads, not for sending logs from Amazon EC2 to CloudWatch Logs
-CloudTrail with log file validation
-CloudTrail tracks AWS API activity; it does not collect and push instance log files to CloudWatch Logs
Exam takeaway:
-CloudWatch Logs agent sends log files from Amazon EC2 to CloudWatch Logs; in current practice the unified CloudWatch agent has superseded it, but the Logs agent remains the exam answer
-CloudTrail is for API auditing, not operating system or application log shipping
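Either agent is driven by a JSON configuration file that maps log file paths to CloudWatch Logs groups. The sketch below shows the shape of the unified agent's "logs" section as a Python dict; the file path and log group name are illustrative placeholders, not values from the question.

```python
import json

# Sketch of the "logs" section of a CloudWatch agent configuration file
# (commonly /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json).
# The agent tails each file in collect_list and pushes new lines to the
# named log group, with one stream per instance.
agent_config = {
    "logs": {
        "logs_collected": {
            "files": {
                "collect_list": [
                    {
                        "file_path": "/var/log/monitoring/app.log",  # placeholder path
                        "log_group_name": "network-monitor-logs",    # placeholder name
                        "log_stream_name": "{instance_id}",
                    }
                ]
            }
        }
    }
}
print(json.dumps(agent_config, indent=2))
```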
Q: A DevOps Engineer needs to deploy a public web application on Amazon EC2 instances behind a load balancer, with horizontal scaling, sticky sessions, and protection from web attacks using AWS WAF. Which two components should be set up?
Correct answer:
-Set up an internet-facing Application Load Balancer with HTTP and HTTPS listeners forwarding to the target group
-Create a Web Access Control List in AWS WAF and associate it with the Application Load Balancer fronting the public web application
-Application Load Balancer supports sticky sessions, and AWS WAF integrates with Application Load Balancer for Layer 7 protection
-The correct answer is: Use an internet-facing Application Load Balancer and an AWS WAF Web Access Control List
Wrong answers:
-Provision a Network Load Balancer with a TCP listener
-Network Load Balancer operates at Layer 4, does not provide the cookie-based sticky sessions required here, and cannot be associated with an AWS WAF Web Access Control List
-Deploy AWS Network Firewall to a public subnet
-AWS Network Firewall is a network-layer firewall, not a web-layer load balancer or sticky-session solution
-Launch a Gateway Load Balancer
-Gateway Load Balancer is for third-party virtual appliances, not normal web application load balancing with AWS WAF
Exam takeaway:
-Application Load Balancer is the exam answer for HTTP and HTTPS apps, host or path routing, and sticky sessions
-AWS WAF protects web apps at Layer 7 and is commonly associated with Application Load Balancer
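The two settings map to two API calls: enabling load-balancer-generated cookie stickiness on the ALB target group (elbv2:ModifyTargetGroupAttributes) and associating the Web ACL with the ALB (wafv2:AssociateWebACL). The sketch below shows only the parameter shapes; all ARNs are truncated placeholders.

```python
# Cookie-based sticky sessions on the ALB target group. The lb_cookie
# type uses a load-balancer-generated cookie; duration is in seconds.
stickiness_attributes = {
    "TargetGroupArn": "arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/web/PLACEHOLDER",
    "Attributes": [
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "86400"},
    ],
}

# Associating the WAF Web ACL with the ALB (regional scope, since the
# resource is a load balancer rather than a CloudFront distribution).
waf_association = {
    "WebACLArn": "arn:aws:wafv2:REGION:ACCOUNT:regional/webacl/web-acl/PLACEHOLDER",
    "ResourceArn": "arn:aws:elasticloadbalancing:REGION:ACCOUNT:loadbalancer/app/web/PLACEHOLDER",
}
```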
Q: A Solutions Architect identified a series of distributed denial of service attacks in an Amazon Virtual Private Cloud and needs the most suitable AWS service to mitigate them. What should be used?
Correct answer:
-Use AWS Shield Advanced to detect and mitigate distributed denial of service attacks
-It provides enhanced detection and mitigation for large and sophisticated attacks
-It also offers near real-time visibility and access to the AWS DDoS Response Team
-The correct answer is: AWS Shield Advanced
Wrong answers:
-Use AWS Firewall Manager
-AWS Firewall Manager helps centrally manage firewall policies, but it is not the actual distributed denial of service protection service
-Use AWS WAF
-AWS WAF helps block common web exploits like SQL injection or cross-site scripting, but it is not the main answer for broader distributed denial of service mitigation
-Use Security Groups and Network Access Control Lists
-These improve network security, but they are not sufficient to mitigate distributed denial of service attacks
Exam takeaway:
-AWS Shield Standard is basic included protection; AWS Shield Advanced is the stronger exam answer for serious distributed denial of service mitigation
-AWS WAF is for web attack filtering, not the primary distributed denial of service service
Q: A Forex trading platform running an on-premises Oracle database must urgently migrate to AWS and keep the database highly available if a database server fails in the future. Which two actions should a Solutions Architect choose?
Correct answer:
-Create an Oracle database in Amazon Relational Database Service with Multi-AZ deployments
-Migrate the Oracle database to AWS using the AWS Database Migration Service
-Amazon Relational Database Service Multi-AZ provides synchronous standby replication and automatic failover for high availability
-The correct answer is: Use AWS Database Migration Service for migration and Amazon Relational Database Service for Oracle with Multi-AZ for high availability
Wrong answers:
-Convert the schema using the AWS Schema Conversion Tool
-This is mainly for heterogeneous migrations, not Oracle-to-Oracle homogeneous migration
-Launch Amazon Relational Database Service for Oracle with Recovery Manager enabled
-Oracle Recovery Manager is not supported in Amazon Relational Database Service
-Migrate to a single-instance non-cluster Amazon Aurora database
-A single instance does not meet the high-availability requirement
Exam takeaway:
-For Oracle-to-Oracle migration, think AWS Database Migration Service, not Schema Conversion Tool
-For database high availability in Amazon Relational Database Service, Multi-AZ is the key exam feature
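The high-availability half of the answer comes down to one flag on instance creation. The sketch below shows illustrative rds:CreateDBInstance parameters; the identifier, instance class, and storage size are placeholders, not values from the question.

```python
# Sketch of rds:CreateDBInstance parameters for an RDS for Oracle
# instance with Multi-AZ enabled. MultiAZ=True provisions a synchronous
# standby replica in another Availability Zone with automatic failover.
create_db_params = {
    "DBInstanceIdentifier": "forex-oracle-db",   # placeholder name
    "Engine": "oracle-ee",
    "DBInstanceClass": "db.m5.xlarge",           # placeholder sizing
    "AllocatedStorage": 200,
    "MultiAZ": True,
}
```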
Q: Amazon EC2 instances need secure access to AWS services such as Amazon Simple Storage Service and Amazon Redshift, and system administrators also need secure access for deployment and testing. Which two configurations should be used?
Correct answer:
-Assign an IAM role to the Amazon EC2 instance
-Enable Multi-Factor Authentication for administrators
-IAM roles provide temporary credentials to the instance without storing long-term access keys
-The correct answer is: Use an IAM role for the Amazon EC2 instance and enable Multi-Factor Authentication for administrators
Wrong answers:
-Assign an IAM user for each Amazon EC2 instance
-IAM roles are the correct and more secure design for instance access to AWS services
-Store AWS access keys in the Amazon EC2 instance
-This is insecure and against AWS best practice because credentials can be exposed or compromised
-Store AWS access keys in AWS Certificate Manager
-AWS Certificate Manager manages certificates, not access keys
Exam takeaway:
-Use IAM roles, not IAM users or stored access keys, for applications running on Amazon EC2
-Use Multi-Factor Authentication to secure human administrator access
Q: A company has Spot Amazon EC2 instances behind an Application Load Balancer and needs a shared, distributed session store with sub-millisecond latency, multithreaded performance, and automatic replacement of failed cache nodes. Which service is the best fit?
Correct answer:
-Use Amazon ElastiCache for Memcached with Auto Discovery
-Memcached is a good fit for distributed session storage and supports multithreaded performance
-Auto Discovery helps clients find nodes automatically, and failed nodes are automatically detected and replaced
-The correct answer is: Amazon ElastiCache for Memcached with Auto Discovery
Wrong answers:
-AWS Elastic Load Balancing sticky sessions
-Sticky sessions keep a user on one instance, but the requirement is for shared session state across the fleet
-Amazon ElastiCache for Redis Global Datastore
-Redis runs a largely single-threaded engine, and Global Datastore targets cross-Region replication; the multithreaded performance requirement points to Memcached
-Amazon Relational Database Service with Amazon Relational Database Service Proxy
-This is slower and less cost-effective than an in-memory cache for session storage
Exam takeaway:
-Memcached is commonly tested for simple, horizontally scaled, multithreaded caching and session storage
-Redis is strong for richer data structures and features, but Memcached is the usual answer when multithreaded scaling is highlighted
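The Memcached choice shows up directly in the cluster parameters: unlike Redis, a Memcached cluster simply scales out across nodes, and clients use the cluster's configuration endpoint for Auto Discovery. The sketch below shows illustrative elasticache:CreateCacheCluster parameters; the cluster name, node type, and node count are placeholders.

```python
# Sketch of elasticache:CreateCacheCluster parameters for a Memcached
# session store. Clients connect to the configuration endpoint, which
# Auto Discovery uses to keep the node list current as failed nodes are
# detected and replaced.
create_cluster_params = {
    "CacheClusterId": "session-store",        # placeholder name
    "Engine": "memcached",
    "CacheNodeType": "cache.m5.large",        # placeholder sizing
    "NumCacheNodes": 3,                       # keys are hashed across nodes
}
```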
Q: A university runs an online learning portal on Amazon EC2 with a single Amazon Aurora database and wants to improve Aurora availability to reduce unnecessary downtime. What should be done?
Correct answer:
-Create Amazon Aurora Replicas
-Aurora Replicas can be promoted if the primary database fails
-They improve availability and are the best available option among the choices given
-The correct answer is: Create Amazon Aurora Replicas
Wrong answers:
-Deploy Aurora across Auto Scaling groups of Amazon EC2 instances with a load balancer
-Aurora is a managed database service, not something you deploy on normal Amazon EC2 instances
-Enable Hash Joins
-Hash Joins are a query-performance feature, not an availability feature
-Use Asynchronous Key Prefetch
-This improves certain query patterns, not database availability
Exam takeaway:
-Aurora Replicas help with read scaling and can support failover
-If Multi-AZ style resilience is not explicitly offered in the choices, Aurora Replicas are often the next best availability answer
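Adding an Aurora Replica is an rds:CreateDBInstance call that joins a new reader to the existing cluster. The sketch below uses placeholder identifiers and assumes a MySQL-compatible cluster; the scenario does not specify the engine.

```python
# Sketch of rds:CreateDBInstance parameters that add an Aurora Replica
# to an existing cluster. The replica serves reads and can be promoted
# automatically if the writer instance fails.
replica_params = {
    "DBInstanceIdentifier": "portal-aurora-replica-1",  # placeholder name
    "DBClusterIdentifier": "portal-aurora-cluster",     # existing cluster (placeholder)
    "Engine": "aurora-mysql",                           # assumed engine
    "DBInstanceClass": "db.r5.large",                   # placeholder sizing
}
```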
Q: A startup wants the easiest way to set up and govern a secure, compliant, multi-account AWS environment, including a dashboard for non-compliant resources and continuous policy enforcement. Which AWS service should be used?
Correct answer:
-Use AWS Control Tower to launch a landing zone and provision accounts through Account Factory
-Use the AWS Control Tower dashboard to monitor accounts, guardrails, and non-compliant resources
-AWS Control Tower is the easiest multi-account governance answer because it builds on AWS Organizations and adds landing zone and guardrails
-The correct answer is: AWS Control Tower with landing zone, Account Factory, dashboard, and guardrails
Wrong answers:
-Use AWS Service Catalog with launch constraints and a compliance aggregator
-AWS Service Catalog helps standardize deployments, but it is not the main service for detecting non-compliant resources across a multi-account landing zone
-Use AWS Direct Connect Partner with AWS Organizations and AWS Config
-AWS Direct Connect is about network connectivity, not multi-account governance
-Use AWS CloudFormation StackSets with AWS Config aggregators
-This can work, but it is more operationally heavy and not the easiest solution
Exam takeaway:
-AWS Control Tower is the go-to exam answer for quickly establishing and governing a secure multi-account AWS environment
-Know landing zone, Account Factory, preventive guardrails, detective guardrails, and dashboard
Q: A company already has an IAM role used by Amazon EC2 instances to access Amazon DynamoDB in one AWS Region and wants instances in a new AWS Region to have the exact same permissions. What should be done?
Correct answer:
-Assign the existing IAM role to the Amazon EC2 instances in the new Region
-IAM is a global service, so the same role can be used across Regions
-There is no need to recreate or duplicate the role per Region
-The correct answer is: Assign the existing IAM role to instances in the new Region
Wrong answers:
-Create a new IAM role and policies in the new Region
-Unnecessary, because IAM roles are not regional
-Duplicate the IAM role and policies to the new Region
-Also unnecessary, because one IAM role can be used across Regions
-Create an Amazon Machine Image and copy it to the new Region
-An Amazon Machine Image copy does not duplicate or transfer IAM role permissions
Exam takeaway:
-IAM is global, not regional
-Amazon EC2 instances in different Regions can use the same IAM role if the permissions fit
Q: Two On-Demand Amazon EC2 instances in the same Availability Zone but different subnets inside one Virtual Private Cloud must communicate with each other, with one instance hosting a database and the other hosting a web application. Which two things must be checked?
Correct answer:
-Check that the security groups allow the web application host to reach the database on the correct port and protocol
-Check that the Network Access Control Lists allow communication between the two subnets
-Security groups control instance-level traffic, while Network Access Control Lists control subnet-level traffic
-The correct answer is: Verify both security groups and Network Access Control Lists
Wrong answers:
-Ensure both instances are in the same placement group
-Placement groups influence where instances are physically placed for performance reasons; they are not required for normal communication inside a Virtual Private Cloud
-Check the default route to a NAT instance or Internet Gateway
-Instances communicating inside one Virtual Private Cloud do not need internet routing
-Check that both instances are the same instance class
-Instance class does not affect whether two instances can communicate
Exam takeaway:
-For communication inside a Virtual Private Cloud, first think security groups and Network Access Control Lists
-Security groups are instance-level and stateful; Network Access Control Lists are subnet-level and stateless
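The security-group half is usually an ingress rule on the database's group that references the web tier's group ID as the source, rather than a CIDR range. A sketch of the ec2:AuthorizeSecurityGroupIngress parameters, with placeholder group IDs and port 3306 assumed for a MySQL-style database (the question does not state the engine):

```python
# Sketch of ec2:AuthorizeSecurityGroupIngress parameters. Referencing the
# web tier's security group ID as the source means any instance in that
# group may reach the database port, regardless of its private IP.
db_ingress = {
    "GroupId": "sg-0db0000000000000a",            # database instance's SG (placeholder)
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,                     # assumed database port
            "ToPort": 3306,
            "UserIdGroupPairs": [
                {"GroupId": "sg-0web000000000000b"}  # web tier SG (placeholder)
            ],
        }
    ],
}
```

The NACL check is the stateless counterpart: both subnets' NACLs must allow the traffic in each direction, including the ephemeral return ports.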
Q: A GraphQL API running on Amazon Elastic Kubernetes Service with AWS Fargate uses Amazon DynamoDB and DynamoDB Accelerator, and the architect must keep DynamoDB traffic off the public internet while also enabling automated cross-account backups for long-term retention. What should be implemented?
Correct answer:
-Create a DynamoDB gateway endpoint and associate it with the appropriate route table
-Use AWS Backup to automatically copy DynamoDB on-demand backups to another AWS account
-DynamoDB gateway endpoints keep traffic within the AWS network without traversing the public internet
-The correct answer is: Use a DynamoDB gateway endpoint plus AWS Backup cross-account backup copy
Wrong answers:
-Create a DynamoDB interface endpoint and use Point-in-Time Recovery across accounts
-DynamoDB uses a gateway endpoint, and Point-in-Time Recovery does not provide cross-account backup copy for long-term retention
-Use a DynamoDB gateway endpoint plus Network Access Control List rules and built-in on-demand backups for cross-account recovery
-Network Access Control Lists are not the key requirement here, and native DynamoDB on-demand backups cannot be copied across accounts
-Use a DynamoDB interface endpoint, AWS Network Firewall, and Amazon Timestream for recovery
-This is the wrong endpoint type, AWS Network Firewall is unnecessary here, and Amazon Timestream is unrelated to DynamoDB backup recovery
Exam takeaway:
-DynamoDB uses gateway endpoints, not interface endpoints
-Use AWS Backup when the question asks for cross-account or cross-Region backup copy and longer-term retention beyond native DynamoDB backup features
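The endpoint-type distinction is visible in the creation parameters: a gateway endpoint attaches to route tables, not to subnets or network interfaces. A sketch of ec2:CreateVpcEndpoint parameters, with placeholder VPC and route table IDs and us-east-1 assumed as the Region:

```python
# Sketch of ec2:CreateVpcEndpoint parameters for a DynamoDB gateway
# endpoint. Adding the route table IDs injects a route so that DynamoDB
# traffic from those subnets stays on the AWS network instead of
# traversing the public internet.
endpoint_params = {
    "VpcEndpointType": "Gateway",
    "VpcId": "vpc-0abc1234567890def",                     # placeholder
    "ServiceName": "com.amazonaws.us-east-1.dynamodb",    # assumed Region
    "RouteTableIds": ["rtb-0def1234567890abc"],           # placeholder
}
```

Seeing "Gateway" plus "RouteTableIds" together is the tell: only S3 and DynamoDB use gateway endpoints, and they are configured through routing rather than elastic network interfaces.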