A company uses an Amazon RDS MySQL database instance to store customer order data. The security team has requested that SSL/TLS encryption in transit be used for encrypting connections to the database from application servers. The data in the database is currently encrypted at rest using an AWS KMS key.
How can a Solutions Architect enable encryption in transit?
4. Download the AWS-provided root certificates. Use the certificates when connecting to the RDS DB instance.
Amazon RDS creates an SSL certificate and installs it on the DB instance when the instance is provisioned. These certificates are signed by a certificate authority. The SSL certificate includes the DB instance endpoint as the Common Name (CN) to guard against spoofing attacks.
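As a sketch, connecting with TLS verification then amounts to passing the downloaded CA bundle to the database client. The helper below builds connection arguments in the style accepted by clients such as PyMySQL; the hostname, credentials, and bundle filename are placeholders, not real values.

```python
# Sketch: build MySQL client connection arguments that enforce TLS using
# the AWS-provided root certificate bundle. Host, user, password, and the
# bundle path are placeholders.

def mysql_ssl_args(host, user, password, ca_bundle="global-bundle.pem"):
    return {
        "host": host,
        "user": user,
        "password": password,
        # Verify the server certificate against the downloaded AWS root CA;
        # clients such as PyMySQL accept an "ssl" dict in this shape.
        "ssl": {"ca": ca_bundle},
    }

args = mysql_ssl_args(
    "mydb.abc123.us-east-1.rds.amazonaws.com", "admin", "example-password"
)
```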
Reference:
Using SSL/TLS to encrypt a connection to a DB instance or cluster
Save time with our AWS cheat sheets.
An eCommerce company runs an application on Amazon EC2 instances in public and private subnets. The web application runs in a public subnet and the database runs in a private subnet. Both the public and private subnets are in a single Availability Zone.
Which combination of steps should a solutions architect take to provide high availability for this architecture?
(Select TWO.)
3. Create an EC2 Auto Scaling group and Application Load Balancer that spans across multiple AZs.
5. Create new public and private subnets in a different AZ. Migrate the database to an Amazon RDS multi-AZ deployment.
High availability can be achieved by using multiple Availability Zones within the same VPC. An EC2 Auto Scaling group can then be used to launch web application instances in multiple public subnets across multiple AZs and an ALB can be used to distribute incoming load.
The database solution can be made highly available by migrating from EC2 to Amazon RDS and using a Multi-AZ deployment model. This will provide the ability to failover to another AZ in the event of a failure of the primary database or the AZ in which it runs.
Reference:
Amazon EC2 Auto Scaling
A startup is prototyping a movie streaming platform on AWS. The platform consists of an Application Load Balancer, an Auto Scaling group of Amazon EC2 instances to host the frontend, and an Amazon RDS for PostgreSQL DB instance running in a Single-AZ configuration.
Users report slow response times when browsing the catalog of available movies. The movie catalog is a set of tables in the database that is updated infrequently. A solutions architect finds that the database’s CPU utilization spikes significantly during catalog queries.
What should the solutions architect recommend to improve the performance of the platform during catalog searches?
2. Implement an Amazon ElastiCache for Redis cluster to cache catalog queries. Configure the application to use lazy loading to populate the cache.
ElastiCache provides a highly performant in-memory caching layer that can significantly reduce database load and response times for infrequently updated data like a product or movie catalog. Lazy loading ensures the cache is populated only when a query misses, reducing unnecessary overhead.
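The lazy-loading (cache-aside) pattern can be sketched as follows. A plain dict stands in for the ElastiCache for Redis cluster and a stub function for the SQL catalog query, so the example runs without AWS; in practice the dict operations would be Redis GET/SET calls.

```python
# Cache-aside (lazy loading) sketch: the cache is populated only on a miss.
# A dict stands in for ElastiCache for Redis; a stub for the catalog query.

cache = {}
db_calls = 0  # counts how often the "database" is actually hit

def query_catalog(movie_id):
    global db_calls
    db_calls += 1
    return {"id": movie_id, "title": f"Movie {movie_id}"}  # stub result

def get_movie(movie_id):
    if movie_id in cache:               # cache hit: no database load
        return cache[movie_id]
    record = query_catalog(movie_id)    # cache miss: query the database...
    cache[movie_id] = record            # ...and lazily populate the cache
    return record

get_movie(42)  # miss: hits the database
get_movie(42)  # hit: served from the cache
```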
A healthcare company is migrating its on-premises Oracle database to an Amazon RDS for Oracle database. The database must meet compliance requirements to retain backups for 120 days. Additionally, the company must have the ability to restore the database to any point in time within the past 10 days. The solution must minimize operational overhead and ensure compliance with these requirements.
Which solution will meet these requirements with the LEAST operational overhead?
1. Configure Amazon RDS automated backups. Set the retention period to 35 days and enable point-in-time recovery for the past 10 days. Use AWS Backup to retain additional backups for 120 days.
RDS automated backups support up to 35 days of retention and point-in-time recovery. AWS Backup can extend the retention period to 120 days without additional complexity.
A company uses an Amazon RDS for MySQL instance for its operational database. To handle the increased read-only traffic during a recent peak period, the company added a read replica. During the peak period, the CPU usage on the read replica reached 60%, and the primary instance also had 60% CPU usage. After the peak period ended, the read replica’s CPU usage decreased to 25%, while the primary instance consistently remains at 60%. The company wants to optimize costs while ensuring enough capacity for future growth.
Which solution will meet these requirements?
2. Resize the read replica to a smaller instance size and keep the primary instance unchanged.
The read replica’s CPU usage is now consistently low at 25%, meaning a smaller instance size can accommodate the current workload. Keeping the primary instance unchanged ensures consistent performance for write-heavy workloads.
Reference:
Working with DB instance read replicas
A group of business analysts perform read-only SQL queries on an Amazon RDS database. The queries have become quite numerous and the database has experienced some performance degradation. The queries must be run against the latest data. A Solutions Architect must solve the performance problems with minimal changes to the existing web application.
What should the Solutions Architect recommend?
4. Create a read replica of the primary database and instruct the business analysts to direct queries to the replica.
The performance issues can be easily resolved by offloading the SQL queries the business analysts are performing to a read replica. This ensures that the data being queried is up to date, and the existing web application does not require any modifications.
Reference:
Working with DB instance read replicas
A company runs an eCommerce application that uses an Amazon Aurora database. The database performs well except for short periods when monthly sales reports are run. A Solutions Architect has reviewed metrics in Amazon CloudWatch and found that the Read Ops and CPUUtilization metrics are spiking during the periods when the sales reports are run.
What is the MOST cost-effective solution to solve this performance issue?
3. Create an Aurora Replica and use the replica endpoint for reporting.
The simplest and most cost-effective option is to use an Aurora Replica. The replica can serve read operations, so the reporting application can run reports against the replica endpoint without impacting the performance of the production database.
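In practice the only change is which endpoint the reporting jobs connect to: Aurora exposes a cluster (writer) endpoint and a reader endpoint that load-balances across replicas. A minimal routing sketch, with placeholder hostnames:

```python
# Aurora endpoints: the cluster endpoint always points at the writer, while
# the reader endpoint load-balances across replicas. Hostnames are placeholders.

WRITER_ENDPOINT = "orders.cluster-abc123.us-east-1.rds.amazonaws.com"
READER_ENDPOINT = "orders.cluster-ro-abc123.us-east-1.rds.amazonaws.com"

def endpoint_for(workload):
    # Reports are read-only, so they are routed to the replica; all other
    # traffic continues to use the writer endpoint.
    return READER_ENDPOINT if workload == "reporting" else WRITER_ENDPOINT
```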
Reference:
Amazon Aurora storage
A company wants to migrate a legacy web application from an on-premises data center to AWS. The web application consists of a web tier, an application tier, and a MySQL database. The company does not want to manage instances or clusters.
Which combination of services should a solutions architect include in the overall architecture?
(Select TWO.)
2. Amazon RDS for MySQL
5. AWS Fargate
Amazon RDS is a managed service, so you do not need to manage instances. It is an ideal backend for the application, and you can run a MySQL database on RDS without any refactoring. The web and application tiers can run in Docker containers on AWS Fargate, a serverless service for running containers on AWS.
A company hosts a serverless application on AWS. The application consists of Amazon API Gateway, AWS Lambda, and Amazon RDS for PostgreSQL. During times of peak traffic and when traffic spikes are experienced, the company notices an increase in application errors caused by database connection timeouts. The company is looking for a solution that will reduce the number of application failures with the least amount of code changes.
What should a solutions architect do to meet these requirements?
2. Enable an RDS Proxy instance on your RDS Database.
Amazon RDS Proxy is a fully managed, highly available database proxy for Amazon Relational Database Service (RDS) that makes applications more scalable, more resilient to database failures, and more secure. Amazon RDS Proxy allows applications to pool and share connections established with the database, improving database efficiency and application scalability.
Amazon RDS Proxy can be enabled for most applications with no code changes so this solution requires the least amount of code changes.
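For most applications the change really is a single configuration value: the database hostname. A sketch with hypothetical endpoint names (both are placeholders):

```python
# Before: the Lambda function connects to the RDS instance endpoint directly.
INSTANCE_ENDPOINT = "appdb.abc123.us-east-1.rds.amazonaws.com"  # placeholder

# After: connect through the RDS Proxy endpoint instead. Credentials, port,
# and driver code are unchanged; the proxy pools connections behind it.
PROXY_ENDPOINT = "appdb-proxy.proxy-abc123.us-east-1.rds.amazonaws.com"  # placeholder

DB_HOST = PROXY_ENDPOINT
```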
Reference:
Amazon RDS Proxy
A small Python application is used by a company to process JSON documents and output the results to a SQL database which currently lives on-premises. The application is run thousands of times every day, and the company wants to move the application to the AWS Cloud. To maximize scalability and minimize operational overhead, the company needs a highly available solution.
Which solution will meet these requirements?
2. Put the JSON documents in an Amazon S3 bucket. As documents arrive in the S3 bucket, create an AWS Lambda function that runs Python code to process them. Use Amazon Aurora DB clusters to store the results.
Firstly, Amazon S3 is a highly available and durable place to store these JSON documents, which are written once and read many times (WORM). Because the application runs thousands of times per day, AWS Lambda is ideal: it scales automatically with demand, Python is a runtime natively supported by Lambda, and the function can be invoked whenever documents arrive in the S3 bucket using S3 event notifications. Finally, Amazon Aurora is a highly available and durable AWS managed database. Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs) to adhere to your redundancy requirements.
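A minimal Lambda handler for this flow can be sketched as below. It only parses the S3 event notification payload; fetching each document and writing results to Aurora are left as comments, so the example runs without AWS. The event shape follows the S3 notification format.

```python
# Sketch of a Python Lambda handler invoked by an S3 event notification.
# It extracts the bucket/key of each new JSON document; the actual fetch
# and the Aurora INSERT are left as comments.

def handler(event, context=None):
    locations = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Real function: s3.get_object(...), json.loads(...), process the
        # document, then write the result to the Aurora DB cluster.
        locations.append((bucket, key))
    return locations
```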
Reference:
Using AWS Lambda with Amazon RDS
A stock trading startup company has a custom web application to sell trading data to its users online. The company uses Amazon DynamoDB to store its data and wants to build a new service that sends an alert to the managers of four internal teams every time a new trading event is recorded. The company does not want this new service to affect the performance of the current application.
What should a solutions architect do to meet these requirements with the LEAST amount of operational overhead?
3. On the table, enable Amazon DynamoDB Streams. Subscriptions can be made to a single Amazon Simple Notification Service (Amazon SNS) topic using triggers.
DynamoDB Streams captures a time-ordered sequence of item-level modifications in any DynamoDB table and stores this information in a log for up to 24 hours. Applications can access this log and view the data items as they appeared before and after they were modified, in near-real time. This is the native way to handle this within DynamoDB, therefore will incur the least amount of operational overhead.
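A sketch of the stream-triggered function: it publishes each newly inserted trading event to a single SNS topic, which fans out to the four team subscriptions. The publish callable is injected so the logic runs without AWS; in practice it would wrap an SNS client's publish call.

```python
import json

# Sketch of a Lambda function triggered by DynamoDB Streams. Only INSERT
# events (new trading events) are published; the injected publish callable
# stands in for an SNS client so the example runs without AWS.

def handle_stream(event, publish):
    published = 0
    for record in event["Records"]:
        if record["eventName"] == "INSERT":
            publish(json.dumps(record["dynamodb"]["NewImage"]))
            published += 1
    return published
```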
Reference:
Change data capture for DynamoDB Streams
A social media platform uses Amazon DynamoDB to store user profiles, friend connections, and post interactions. The platform is rapidly expanding to new countries and needs to ensure a seamless user experience with high availability and low latency for its global user base.
The platform must handle unpredictable workloads and regional outages while maintaining a cost-effective architecture.
Which solution will meet these requirements MOST cost-effectively?
3. Use DynamoDB global tables to replicate data automatically across multiple Regions. Deploy the tables in on-demand capacity mode to handle workload variability.
DynamoDB global tables provide automatic multi-Region replication, ensuring low latency and high availability for a global user base. On-demand capacity mode is a cost-effective choice for workloads with unpredictable demand, as it adjusts capacity based on usage.
A fitness application company is launching a platform to track user activity, workout logs, and personalized settings. The database must support structured data, allow for transactions between related data, and dynamically scale to handle unpredictable traffic spikes during peak hours. The solution must also support automated backups and minimize operational management.
Which solution will meet these requirements MOST cost-effectively?
3. Use Amazon Aurora Serverless v2 to store the data. Enable serverless auto-scaling and configure automated backups to Amazon S3 with a 7-day retention period.
Aurora Serverless v2 supports relational data, transactions, and complex queries while scaling seamlessly to meet workload demands. Automated backups are retained in Amazon S3, eliminating operational overhead while remaining cost-effective.
A retail company runs its order processing system on AWS. The system uses an Amazon RDS for MySQL Multi-AZ database cluster as its backend. The company must retain database backups for 30 days to meet compliance requirements. The company uses both automated RDS backups and manual backups for specific points in time. The company wants to enforce the 30-day retention policy for all backups while ensuring that both automated and manual backups created within the last 30 days are preserved. The solution must be cost-effective and require minimal operational effort.
Which solution will meet these requirements MOST cost-effectively?
1. Configure the RDS backup retention policy to 30 days for automated backups. Use a script to identify and delete manual backups that are older than 30 days.
RDS backup retention policies can only be applied to automated backups. Manual backups must be managed separately. Using a script to identify and delete older manual backups ensures compliance without additional costs.
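The deletion script reduces to filtering manual snapshots by age. A sketch of the selection logic, using plain dicts shaped like the entries returned by the RDS describe-snapshots API so it runs without AWS; the real script would pass the selected identifiers to a delete call.

```python
from datetime import datetime, timedelta, timezone

def snapshots_to_delete(snapshots, retention_days=30, now=None):
    # Return identifiers of manual snapshots older than the retention window.
    # Automated backups are excluded because RDS expires those itself.
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    return [
        s["DBSnapshotIdentifier"]
        for s in snapshots
        if s["SnapshotType"] == "manual" and s["SnapshotCreateTime"] < cutoff
    ]
```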
A healthcare company is building a patient records management application that uses a relational database to store user data and configuration details. The company expects steady growth in the number of patients. The database workload is expected to be variable and read-heavy, with occasional write operations. The company wants to cost-optimize the database solution while ensuring the necessary performance for its workload.
Which solution will meet these requirements MOST cost-effectively?
2. Deploy the database on Amazon Aurora Serverless v2 to automatically scale the database capacity based on actual usage and handle fluctuations in workload.
Aurora Serverless v2 is a cost-effective option that dynamically adjusts capacity to meet workload demands. It is particularly suited for variable workloads, offering the performance of Aurora with a pay-per-use pricing model.
A retail company uses an Amazon Aurora MySQL DB cluster for its order management system. The cluster includes eight Aurora Replicas. The company wants to ensure that reporting queries from its analytics team are automatically distributed across three specific Aurora Replicas that have higher compute and memory capacity than the rest of the cluster.
Which solution will meet these requirements?
1. Create and use a custom endpoint that targets the three high-capacity replicas.
Aurora custom endpoints allow you to define a subset of replicas for specific workloads. By creating a custom endpoint, the reporting queries can be automatically distributed across the three high-capacity replicas without involving the rest of the cluster.
A company runs its critical payment processing application on an Amazon Aurora MySQL cluster in the ap-southeast-1 Region. As part of its disaster recovery (DR) strategy, the company has selected the ap-northeast-1 Region for failover capabilities.
The company requires a recovery point objective (RPO) of less than 5 minutes and a recovery time objective (RTO) of no more than 15 minutes. The company also wants to minimize operational overhead and ensure failover happens with minimal downtime and configuration.
Which solution will meet these requirements with the MOST operational efficiency?
1. Convert the Aurora cluster to an Aurora global database. Configure cross-Region replication and managed failover.
Aurora global databases are purpose-built for disaster recovery and can achieve an RPO of under 5 seconds with automated failover, meeting the company’s stringent RPO and RTO requirements.
A Solutions Architect is migrating a distributed application from an on-premises environment into AWS. The application consists of an Apache Cassandra NoSQL database, a containerized SUSE Linux compute layer, and an additional storage layer made up of multiple Microsoft SQL Server databases. Once in the cloud, the company wants as little operational overhead as possible, with no schema conversion during the migration, and wants to host the architecture in a highly available and durable way.
Which of the following groups of services will provide the solutions architect with the best solution?
3. Run the NoSQL database on Amazon Keyspaces, and the compute layer on Amazon ECS on Fargate. Use Amazon RDS for Microsoft SQL Server to host the second storage layer.
Amazon Keyspaces (for Apache Cassandra) is a scalable, highly available, and managed Apache Cassandra-compatible database service. Combined with a containerized, serverless compute layer on Amazon ECS on Fargate and an Amazon RDS for Microsoft SQL Server database layer, this is a fully managed version of what currently exists on premises.
Reference:
Amazon Keyspaces (for Apache Cassandra)
A company is deploying an Amazon ElastiCache for Redis cluster. To enhance security, a password should be required to access the database.
What should the solutions architect use?
3. Redis AUTH command
Redis authentication tokens enable Redis to require a token (password) before allowing clients to execute commands, thereby improving data security.
You can require that users enter a token on a token-protected Redis server. To do this, include the parameter --auth-token (API: AuthToken) with the correct token when you create your replication group or cluster. Also include it in all subsequent commands to the replication group or cluster.
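Client-side, the token is then supplied as the connection password over TLS. A sketch of redis-py-style connection arguments; the hostname and token are placeholders:

```python
# Sketch: connection arguments for a redis-py client using an AUTH token.
# ElastiCache requires in-transit encryption (TLS) when AUTH is enabled.
# Hostname and token below are placeholders.

def redis_client_args(host, auth_token, port=6379):
    return {
        "host": host,
        "port": port,
        "password": auth_token,  # the AUTH token is sent as the password
        "ssl": True,             # AUTH requires TLS to be enabled
    }

args = redis_client_args("my-redis.abc123.cache.amazonaws.com", "example-token")
```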
Reference:
Authenticating with the Valkey and Redis OSS AUTH command
A solutions architect is designing a new service that will use an Amazon API Gateway API on the frontend. The service will need to persist data in a backend database using key-value requests. Initially, the data requirements will be around 1 GB and future growth is unknown. Requests can range from 0 to over 800 requests per second.
Which combination of AWS services would meet these requirements? (Select TWO.)
2. AWS Lambda
3. Amazon DynamoDB
In this case AWS Lambda can perform the computation and store the data in an Amazon DynamoDB table. Lambda can easily scale concurrent executions to meet demand, and DynamoDB is built for key-value data storage, is serverless, and scales seamlessly. This is therefore a cost-effective solution for unpredictable workloads.
A company runs a web application that serves weather updates. The application runs on a fleet of Amazon EC2 instances in a Multi-AZ Auto scaling group behind an Application Load Balancer (ALB). The instances store data in an Amazon Aurora database. A solutions architect needs to make the application more resilient to sporadic increases in request rates.
Which architecture should the solutions architect implement?
(Select TWO.)
2. Add Amazon Aurora Replicas
5. Add an Amazon CloudFront distribution in front of the ALB
The architecture is already highly resilient but may suffer performance degradation during sudden increases in request rates. To resolve this, Amazon Aurora Replicas can be used to serve read traffic, offloading requests from the main database. On the frontend, an Amazon CloudFront distribution can be placed in front of the ALB to cache content for better performance and to offload requests from the backend.
An Amazon RDS Read Replica is being deployed in a separate region. The master database is not encrypted but all data in the new region must be encrypted.
How can this be achieved?
4. Encrypt a snapshot from the master DB instance, create a new encrypted master DB instance, and then create an encrypted cross-region Read Replica
You cannot create an encrypted Read Replica from an unencrypted master DB instance. You also cannot enable encryption after launch time for the master DB instance. Therefore, you must create a new master DB by taking a snapshot of the existing DB, encrypting it, and then creating the new DB from the snapshot. You can then create the encrypted cross-region Read Replica of the master DB.
All other options will not work due to the limitations explained above.
An Amazon RDS PostgreSQL database is configured as Multi-AZ. A solutions architect needs to scale read performance and the solution must be configured for high availability.
What is the most cost-effective solution?
1. Create a read replica as a Multi-AZ DB instance
You can create a read replica as a Multi-AZ DB instance. Amazon RDS creates a standby of your replica in another Availability Zone for failover support for the replica. Creating your read replica as a Multi-AZ DB instance is independent of whether the source database is a Multi-AZ DB instance.
Reference:
About AWS
A global financial services company is currently operating a three-tier web application to handle its main customer-facing website. This application uses several Amazon EC2 instances behind an Application Load Balancer and connects directly to a DynamoDB table.
Due to recent customer complaints of slow loading times, their Solutions Architect has been asked to implement changes to solve this problem, without rearchitecting the core application components.
Which combination of actions should the solutions architect take to accomplish this?
(Select TWO.)
3. Create a CloudFront distribution and place it in front of the Application Load Balancer.
4. Set up an Amazon DynamoDB Accelerator (DAX) cluster in front of the DynamoDB table.
A CloudFront distribution would cache content in one of the many global edge locations, ensuring that any customer access to the content will be accessing it at a much lower latency compared to using the Application Load Balancer on its own.
Secondly, DynamoDB has a built-in caching solution known as DynamoDB Accelerator (DAX). If your application serves traffic from a DynamoDB table and is struggling to scale, DAX can be used to improve application performance.
Reference:
Amazon CloudFront