Database Flashcards

Analyze AWS database offerings to design high-performing, scalable, and cost-effective data solutions. (37 cards)

35
Q

A company uses an Amazon RDS MySQL database instance to store customer order data. The security team has requested that SSL/TLS encryption in transit be used for connections to the database from application servers. The data in the database is currently encrypted at rest using an AWS KMS key.

How can a Solutions Architect enable encryption in transit?

  1. Enable encryption in transit using the RDS Management console and obtain a key using AWS KMS.
  2. Add a self-signed certificate to the RDS DB instance. Use the certificates in all connections to the RDS DB instance.
  3. Take a snapshot of the RDS instance. Restore the snapshot to a new instance with encryption in transit enabled.
  4. Download the AWS-provided root certificates. Use the certificates when connecting to the RDS DB instance.
A

4. Download the AWS-provided root certificates. Use the certificates when connecting to the RDS DB instance.

Amazon RDS creates an SSL certificate and installs the certificate on the DB instance when Amazon RDS provisions the instance. These certificates are signed by a certificate authority. The SSL certificate includes the DB instance endpoint as the Common Name (CN) for the SSL certificate to guard against spoofing attacks.

  • You can download a root certificate from AWS that works for all Regions or you can download Region-specific intermediate certificates.
  • There is no need to restore a snapshot to a new instance, as a certificate is created when the DB instance is launched.
  • You cannot enable/disable encryption in transit using the RDS management console or use a KMS key.
  • You cannot use self-signed certificates with RDS.
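
As a concrete illustration, here is a minimal sketch of the client side once the AWS-provided root certificate bundle has been downloaded. The endpoint, user name, and bundle path are hypothetical placeholders, and the commented connect call assumes the PyMySQL driver:

```python
# Sketch: building TLS-verified connection arguments for an RDS MySQL
# instance using the downloaded AWS root certificate bundle
# (https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem).
# Endpoint, user, and path below are placeholders, not real values.

def build_connect_args(endpoint: str, user: str, ca_bundle_path: str) -> dict:
    """Return keyword arguments for a TLS-verified PyMySQL connection."""
    return {
        "host": endpoint,
        "user": user,
        "ssl_ca": ca_bundle_path,       # trust the AWS root certificates
        "ssl_verify_cert": True,        # reject servers with invalid certs
        "ssl_verify_identity": True,    # check the endpoint matches the CN
    }

args = build_connect_args(
    "mydb.abc123.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    "admin",
    "/opt/certs/global-bundle.pem",
)
# A real client would then call: pymysql.connect(**args)
```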

Reference:
Using SSL/TLS to encrypt a connection to a DB instance or cluster

Save time with our AWS cheat sheets.

36
Q

An eCommerce company runs an application on Amazon EC2 instances in public and private subnets. The web application runs in a public subnet and the database runs in a private subnet. Both the public and private subnets are in a single Availability Zone.

Which combination of steps should a solutions architect take to provide high availability for this architecture?

(Select TWO.)

  1. Create new public and private subnets in the same AZ but in a different Amazon VPC.
  2. Create an EC2 Auto Scaling group in the public subnet and use an Application Load Balancer.
  3. Create an EC2 Auto Scaling group and Application Load Balancer that spans across multiple AZs.
  4. Create new public and private subnets in a different AZ. Create a database using Amazon EC2 in one AZ.
  5. Create new public and private subnets in a different AZ. Migrate the database to an Amazon RDS multi-AZ deployment.
A

3. Create an EC2 Auto Scaling group and Application Load Balancer that spans across multiple AZs.
5. Create new public and private subnets in a different AZ. Migrate the database to an Amazon RDS multi-AZ deployment.

High availability can be achieved by using multiple Availability Zones within the same VPC. An EC2 Auto Scaling group can then be used to launch web application instances in multiple public subnets across multiple AZs and an ALB can be used to distribute incoming load.

The database solution can be made highly available by migrating from EC2 to Amazon RDS and using a Multi-AZ deployment model. This will provide the ability to failover to another AZ in the event of a failure of the primary database or the AZ in which it runs.

  • You cannot use multiple VPCs for this solution as it would be difficult to manage and direct traffic (you can’t load balance across VPCs).
  • This does not achieve HA as you need multiple public subnets across multiple AZs.
  • The database solution is not HA in this answer option.

Reference:
Amazon EC2 Auto Scaling

37
Q

A startup is prototyping a movie streaming platform on AWS. The platform consists of an Application Load Balancer, an Auto Scaling group of Amazon EC2 instances to host the frontend, and an Amazon RDS for PostgreSQL DB instance running in a Single-AZ configuration.

Users report slow response times when browsing the catalog of available movies. The movie catalog is a set of tables in the database that is updated infrequently. A solutions architect finds that the database’s CPU utilization spikes significantly during catalog queries.

What should the solutions architect recommend to improve the performance of the platform during catalog searches?

  1. Migrate the movie catalog to Amazon DynamoDB and use the DynamoDB Accelerator (DAX) service to cache queries for the catalog.
  2. Implement an Amazon ElastiCache for Redis cluster to cache catalog queries. Configure the application to use lazy loading to populate the cache.
  3. Enable read replicas for the RDS instance. Configure the frontend application to distribute catalog queries across the read replicas.
  4. Use Amazon Aurora Serverless for the movie catalog database. Configure Aurora’s built-in caching to handle frequent queries efficiently.
A

2. Implement an Amazon ElastiCache for Redis cluster to cache catalog queries. Configure the application to use lazy loading to populate the cache.

ElastiCache provides a highly performant in-memory caching layer that can significantly reduce database load and response times for infrequently updated data like a product or movie catalog. Lazy loading ensures the cache is populated only when a query misses, reducing unnecessary overhead.

  • Migrating to DynamoDB introduces unnecessary complexity and operational overhead for a relational workload such as the movie catalog. DAX is a cache for DynamoDB, a NoSQL database, and is not the best fit for this use case.
  • While read replicas can improve performance for read-heavy workloads, they do not reduce query latency as effectively as an in-memory caching solution like ElastiCache.
  • Switching to Aurora Serverless would involve significant changes to the database architecture. While Aurora provides caching, ElastiCache is specifically optimized for high-performance caching use cases.
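
The lazy-loading (cache-aside) pattern the answer describes can be sketched in a few lines. A plain dict stands in for the ElastiCache for Redis cluster and `query_database` is a hypothetical placeholder for the RDS catalog query; a real implementation would use redis-py with a TTL (e.g. `SETEX`):

```python
# Cache-aside ("lazy loading") sketch: the cache is populated only on a
# miss, so infrequently updated catalog data is served from memory and
# the database is queried at most once per key until the entry expires.

cache = {}  # stand-in for an ElastiCache for Redis cluster

def query_database(movie_id: str) -> dict:
    # Placeholder for the expensive RDS catalog query.
    return {"id": movie_id, "title": f"Movie {movie_id}"}

def get_movie(movie_id: str) -> dict:
    item = cache.get(movie_id)
    if item is not None:                 # cache hit: skip the database
        return item
    item = query_database(movie_id)      # cache miss: query the database...
    cache[movie_id] = item               # ...then populate the cache lazily
    return item

first = get_movie("42")    # miss -> queries the database
second = get_movie("42")   # hit  -> served from the cache
```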

38
Q

A healthcare company is migrating its on-premises Oracle database to an Amazon RDS for Oracle database. The database must meet compliance requirements to retain backups for 120 days. Additionally, the company must have the ability to restore the database to any point in time within the past 10 days. The solution must minimize operational overhead and ensure compliance with these requirements.

Which solution will meet these requirements with the LEAST operational overhead?

  1. Configure Amazon RDS automated backups. Set the retention period to 35 days and enable point-in-time recovery for the past 10 days. Use AWS Backup to retain additional backups for 120 days.
  2. Create an Amazon RDS manual snapshot every week. Use an AWS Lambda function to delete snapshots that are older than 120 days.
  3. Use AWS Backup to create a backup plan for Amazon RDS with a 120-day retention period. Enable point-in-time recovery by combining AWS Backup and RDS automated backups.
  4. Set up Amazon S3 Lifecycle policies to retain database exports for 120 days. Use AWS Database Migration Service (AWS DMS) to export the database to Amazon S3 every 24 hours.
A

1. Configure Amazon RDS automated backups. Set the retention period to 35 days and enable point-in-time recovery for the past 10 days. Use AWS Backup to retain additional backups for 120 days.

RDS automated backups support up to 35 days of retention and point-in-time recovery. AWS Backup can extend the retention period to 120 days without additional complexity.

  • Manual snapshot management introduces operational overhead and is not as scalable or reliable as RDS automated backups combined with AWS Backup.
  • While AWS Backup can retain backups for 120 days, it cannot directly handle point-in-time recovery, which requires native RDS automated backups.
  • Exporting the database to S3 introduces unnecessary operational complexity and does not support point-in-time recovery.
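
The automated-backup half of the answer can be sketched with boto3's `modify_db_instance` call, which caps retention at 35 days (AWS Backup would separately retain copies for the full 120 days). The instance identifier is a placeholder, and the client is injected so the sketch can run against a stub:

```python
# Sketch: set the RDS automated-backup retention period to the 35-day
# maximum. "orders-db" is a hypothetical instance identifier.

def set_backup_retention(rds_client, instance_id: str, days: int = 35):
    if not 0 <= days <= 35:
        raise ValueError("RDS automated backups support at most 35 days")
    return rds_client.modify_db_instance(
        DBInstanceIdentifier=instance_id,
        BackupRetentionPeriod=days,
        ApplyImmediately=True,
    )

# A stub client stands in for boto3.client("rds") so the sketch runs anywhere.
stub = type("StubRDS", (), {"modify_db_instance": lambda self, **kw: kw})()
resp = set_backup_retention(stub, "orders-db")
```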

39
Q

A company uses an Amazon RDS for MySQL instance for its operational database. To handle the increased read-only traffic during a recent peak period, the company added a read replica. During the peak period, the CPU usage on the read replica reached 60%, and the primary instance also had 60% CPU usage. After the peak period ended, the read replica’s CPU usage decreased to 25%, while the primary instance consistently remains at 60%. The company wants to optimize costs while ensuring enough capacity for future growth.

Which solution will meet these requirements?

  1. Delete the read replica and keep the primary instance unchanged.
  2. Resize the read replica to a smaller instance size and keep the primary instance unchanged.
  3. Upgrade the read replica to a larger instance size and downgrade the primary instance to a smaller instance size.
  4. Delete the read replica and upgrade the primary instance to a larger instance size.
A

2. Resize the read replica to a smaller instance size and keep the primary instance unchanged.

The read replica’s CPU usage is now consistently low at 25%, meaning a smaller instance size can accommodate the current workload. Keeping the primary instance unchanged ensures consistent performance for write-heavy workloads.

  • The read replica may still be needed for future peak periods or reporting and removing it could impact performance if the workload increases again.
  • The primary instance already has consistent 60% CPU usage. Downgrading it could result in performance bottlenecks.
  • Increasing the primary instance size will unnecessarily increase costs, especially since the read replica can handle reporting traffic at a lower cost.

Reference:
Working with DB instance read replicas

40
Q

A group of business analysts perform read-only SQL queries on an Amazon RDS database. The queries have become quite numerous and the database has experienced some performance degradation. The queries must be run against the latest data. A Solutions Architect must solve the performance problems with minimal changes to the existing web application.

What should the Solutions Architect recommend?

  1. Export the data to Amazon S3 and instruct the business analysts to run their queries using Amazon Athena.
  2. Load the data into an Amazon Redshift cluster and instruct the business analysts to run their queries against the cluster.
  3. Load the data into Amazon ElastiCache and instruct the business analysts to run their queries against the ElastiCache endpoint.
  4. Create a read replica of the primary database and instruct the business analysts to direct queries to the replica.
A

4. Create a read replica of the primary database and instruct the business analysts to direct queries to the replica.

The performance issues can be easily resolved by offloading the business analysts’ SQL queries to a read replica. This ensures that the data being queried is up to date, and the existing web application does not require any modifications.

  • The queries must run against the latest data, so this method would require constantly exporting the data.
  • This solution also requires exporting and loading the data, which means it will become out of date over time.
  • It is much easier to create a read replica. ElastiCache requires updates to the application code, so it should be avoided in this example.

Reference:
Working with DB instance read replicas

41
Q

A company runs an eCommerce application that uses an Amazon Aurora database. The database performs well except for short periods when monthly sales reports are run. A Solutions Architect has reviewed metrics in Amazon CloudWatch and found that the Read Ops and CPUUtilization metrics are spiking during the periods when the sales reports are run.

What is the MOST cost-effective solution to solve this performance issue?

  1. Create an Amazon Redshift data warehouse and run the reporting there.
  2. Modify the Aurora database to use an instance class with more CPU.
  3. Create an Aurora Replica and use the replica endpoint for reporting.
  4. Enable storage Auto Scaling for the Amazon Aurora database.
A

3. Create an Aurora Replica and use the replica endpoint for reporting.

The simplest and most cost-effective option is to use an Aurora Replica. The replica can serve read operations which will mean the reporting application can run reports on the replica endpoint without causing any performance impact on the production database.

  • Aurora storage scales automatically as the volume grows; there is no storage Auto Scaling feature for Aurora.
  • This would be less cost-effective and require more work to copy the data into the data warehouse.
  • This may not resolve the performance issue and could be more expensive depending on instance sizes.

Reference:
Amazon Aurora storage

42
Q

A company wants to migrate a legacy web application from an on-premises data center to AWS. The web application consists of a web tier, an application tier, and a MySQL database. The company does not want to manage instances or clusters.

Which combination of services should a solutions architect include in the overall architecture?

(Select TWO.)

  1. Amazon DynamoDB
  2. Amazon RDS for MySQL
  3. Amazon EC2 Spot Instances
  4. Amazon Kinesis Data Streams
  5. AWS Fargate
A

2. Amazon RDS for MySQL
5. AWS Fargate

Amazon RDS is a managed service, so you do not need to manage the instances. It is an ideal backend for the application, and you can run a MySQL database on RDS without any refactoring. The web and application tiers can run as Docker containers on AWS Fargate, a serverless service for running containers on AWS.

  • This is a NoSQL database and would be incompatible with the relational MySQL DB.
  • This would require managing instances.
  • This is a service for streaming data.

43
Q

A company hosts a serverless application on AWS. The application consists of Amazon API Gateway, AWS Lambda, and Amazon RDS for PostgreSQL. During times of peak traffic and when traffic spikes are experienced, the company notices an increase in application errors caused by database connection timeouts. The company is looking for a solution that will reduce the number of application failures with the least amount of code changes.

What should a solutions architect do to meet these requirements?

  1. Reduce the concurrency rate for your Lambda Function.
  2. Enable an RDS Proxy instance on your RDS Database.
  3. Change the class of the instance of your database to allow more connections.
  4. Change the database to an Amazon DynamoDB database with on-demand scaling.
A

2. Enable an RDS Proxy instance on your RDS Database.

Amazon RDS Proxy is a fully managed, highly available database proxy for Amazon Relational Database Service (RDS) that makes applications more scalable, more resilient to database failures, and more secure. Amazon RDS Proxy allows applications to pool and share connections established with the database, improving database efficiency and application scalability.

Amazon RDS Proxy can be enabled for most applications with no code changes so this solution requires the least amount of code changes.

  • Concurrency is the number of requests that your function is serving at any given time. The errors are caused by an increase in connection timeouts, so editing the concurrency of your Lambda function would not solve the problem.
  • Resizing the instance might help, but there will be some inevitable downtime with a PostgreSQL database on RDS. RDS Proxy is specifically designed for this reason and would incur no downtime.
  • This would require significant application changes to accommodate the NoSQL database structure.

Reference:
Amazon RDS Proxy

44
Q

A small Python application is used by a company to process JSON documents and output the results to a SQL database which currently lives on-premises. The application is run thousands of times every day, and the company wants to move the application to the AWS Cloud. To maximize scalability and minimize operational overhead, the company needs a highly available solution.

Which solution will meet these requirements?

  1. Build an S3 bucket to place the JSON documents in. Run the Python code on multiple Amazon EC2 instances to process the documents. Store the results in a database using the Amazon Aurora Database engine.
  2. Put the JSON documents in an Amazon S3 bucket. As documents arrive in the S3 bucket, create an AWS Lambda function that runs Python code to process them. Use Amazon Aurora DB clusters to store the results.
  3. Create an Amazon Elastic Block Store (Amazon EBS) volume for the JSON documents. Attach the volume to multiple Amazon EC2 instances using the EBS Multi-Attach feature. Process the documents with Python code on the EC2 instances and then extract the results to an Amazon RDS DB instance.
  4. The JSON documents should be queued as messages in the Amazon Simple Queue Service (Amazon SQS). Using the Amazon Elastic Container Service (Amazon ECS) with the Amazon EC2 launch type, deploy the Python code as a container. The container can be used to process SQS messages. Using Amazon RDS, store the results.
A

2. Put the JSON documents in an Amazon S3 bucket. As documents arrive in the S3 bucket, create an AWS Lambda function that runs Python code to process them. Use Amazon Aurora DB clusters to store the results.

Firstly, Amazon S3 is a highly available and durable place to store these JSON documents, which are written once and read many times (WORM). Because the application runs thousands of times per day, AWS Lambda is ideal: it scales automatically with demand, Python is a runtime natively supported by Lambda, and invocations can be triggered by S3 event notifications as documents arrive in the bucket. Finally, Amazon Aurora is a highly available, durable, AWS-managed database that automatically maintains six copies of your data across three Availability Zones (AZs).

  • Multiple EC2 instances could work, but you would need to either leave them running all the time (not cost-effective) or spin them up and down thousands of times per day (slow and not ideal).
  • EBS is not optimized for write-once-read-many use cases, and processing on EC2 carries the same cost and startup trade-offs described above.
  • Amazon SQS is designed to decouple layers of an architecture, not to store write-once-read-many data, and the ECS on EC2 launch type carries the same EC2 cost and startup trade-offs.
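
The S3-to-Lambda flow can be sketched as a handler that reads the bucket and key out of each event-notification record. The bucket name and key are hypothetical, and the `get_object` fetch and Aurora write are left as commented placeholders:

```python
# Sketch of a Lambda handler wired to S3 event notifications. Each record
# identifies a newly arrived JSON document; a real handler would fetch it
# with boto3 (s3.get_object), process it, and write results to Aurora.

def handler(event, context=None):
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # document = s3.get_object(Bucket=bucket, Key=key)   # real fetch
        # ... parse the JSON, write results to the Aurora cluster ...
        processed.append(f"s3://{bucket}/{key}")
    return {"processed": processed}

# Minimal event in the shape S3 delivers (names are placeholders).
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "incoming-docs"},
                "object": {"key": "orders/1001.json"}}}
    ]
}
result = handler(sample_event)
```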

Reference:
Using AWS Lambda with Amazon RDS

45
Q

A stock trading startup company has a custom web application to sell trading data to its users online. The company uses Amazon DynamoDB to store its data and wants to build a new service that sends an alert to the managers of four internal teams every time a new trading event is recorded. The company does not want this new service to affect the performance of the current application.

What should a solutions architect do to meet these requirements with the LEAST amount of operational overhead?

  1. Write new event data to the table using DynamoDB transactions. The transactions should be configured to notify internal teams.
  2. Use the current application to publish a message to four Amazon Simple Notification Service (Amazon SNS) topics. Each team should subscribe to one topic.
  3. On the table, enable Amazon DynamoDB Streams. Subscriptions can be made to a single Amazon Simple Notification Service (Amazon SNS) topic using triggers.
  4. Create a custom attribute for each record to flag new items. A cron job can be written to scan the table every minute for new items and notify an Amazon Simple Queue Service (Amazon SQS) queue.
A

3. On the table, enable Amazon DynamoDB Streams. Subscriptions can be made to a single Amazon Simple Notification Service (Amazon SNS) topic using triggers.

DynamoDB Streams captures a time-ordered sequence of item-level modifications in any DynamoDB table and stores this information in a log for up to 24 hours. Applications can access this log and view the data items as they appeared before and after they were modified, in near-real time. This is the native way to handle this within DynamoDB, therefore will incur the least amount of operational overhead.

  • With Amazon DynamoDB transactions, you can group multiple actions together and submit them as a single all-or-nothing TransactWriteItems or TransactGetItems operation. Transactions are for atomic reads and writes, not notifications, so they are not suitable for this use case.
  • Publishing from the application to four separate SNS topics adds significant overhead, and this functionality can be managed natively within DynamoDB using DynamoDB Streams.
  • Writing a cron job also takes significant overhead compared to using DynamoDB Streams, and scanning the table every minute adds load to the table.
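
A minimal sketch of the stream-triggered function, assuming a Lambda trigger on the stream and a single SNS topic with the four teams subscribed. The topic ARN is a placeholder, and the SNS client is injected so the sketch runs without AWS:

```python
# Sketch: fan out each INSERT record from DynamoDB Streams as one message
# to a single SNS topic. MODIFY/REMOVE records are ignored, so only new
# trading events generate alerts.

def handler(event, sns_client, topic_arn):
    published = 0
    for record in event.get("Records", []):
        if record.get("eventName") != "INSERT":   # only new trading events
            continue
        new_item = record["dynamodb"]["NewImage"]
        sns_client.publish(
            TopicArn=topic_arn,
            Subject="New trading event",
            Message=str(new_item),
        )
        published += 1
    return published

class _StubSNS:                      # stand-in for boto3.client("sns")
    def __init__(self):
        self.messages = []
    def publish(self, **kwargs):
        self.messages.append(kwargs)

stub = _StubSNS()
event = {"Records": [
    {"eventName": "INSERT",
     "dynamodb": {"NewImage": {"symbol": {"S": "AMZN"}, "price": {"N": "185"}}}},
    {"eventName": "MODIFY", "dynamodb": {"NewImage": {}}},
]}
count = handler(event, stub, "arn:aws:sns:us-east-1:123456789012:trade-events")
```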

Reference:
Change data capture for DynamoDB Streams

46
Q

A social media platform uses Amazon DynamoDB to store user profiles, friend connections, and post interactions. The platform is rapidly expanding to new countries and needs to ensure a seamless user experience with high availability and low latency for its global user base.
The platform must handle unpredictable workloads and regional outages while maintaining a cost-effective architecture.

Which solution will meet these requirements MOST cost-effectively?

  1. Deploy DynamoDB tables in a single AWS Region using provisioned capacity mode. Use DynamoDB Streams to replicate data asynchronously to a secondary Region for failover.
  2. Use Amazon S3 to store user data and replicate the data across multiple Regions using S3 Cross-Region Replication. Use AWS Lambda to perform real-time data updates for the application.
  3. Use DynamoDB global tables to replicate data automatically across multiple Regions. Deploy the tables in on-demand capacity mode to handle workload variability.
  4. Use DynamoDB Accelerator (DAX) to reduce read latency for frequently accessed items. Deploy DynamoDB tables in a single Region and use manual Cross-Region Replication to replicate data to other Regions for fault tolerance.
A

3. Use DynamoDB global tables to replicate data automatically across multiple Regions. Deploy the tables in on-demand capacity mode to handle workload variability.

DynamoDB global tables provide automatic multi-Region replication, ensuring low latency and high availability for a global user base. On-demand capacity mode is a cost-effective choice for workloads with unpredictable demand, as it adjusts capacity based on usage.

  • Using DynamoDB Streams for cross-Region replication requires a custom implementation, which increases operational overhead. Additionally, a single primary Region does not ensure low latency for a global user base.
  • S3 is not designed for transactional, low latency use cases like a social media platform. DynamoDB is the better choice for structured, real-time data access.
  • DAX reduces read latency but does not provide global availability or automatic replication. Manual Cross-Region Replication adds operational complexity and is less reliable than global tables.

47
Q

A fitness application company is launching a platform to track user activity, workout logs, and personalized settings. The database must support structured data, allow for transactions between related data, and dynamically scale to handle unpredictable traffic spikes during peak hours. The solution must also support automated backups and minimize operational management.

Which solution will meet these requirements MOST cost-effectively?

  1. Use Amazon DynamoDB with on-demand capacity mode to handle fluctuating traffic. Enable DynamoDB Point-in-Time Recovery (PITR) for automated backups.
  2. Deploy an open-source database on Amazon EC2 Spot Instances in an Auto Scaling group. Configure daily backups to Amazon S3 Intelligent-Tiering for cost optimization.
  3. Use Amazon Aurora Serverless v2 to store the data. Enable serverless auto-scaling and configure automated backups to Amazon S3 with a 7-day retention period.
  4. Deploy an Amazon RDS MySQL instance in a multi-AZ configuration. Use provisioned IOPS storage and configure automated backups to Amazon S3 Glacier Flexible Retrieval for long-term retention.
A

3. Use Amazon Aurora Serverless v2 to store the data. Enable serverless auto-scaling and configure automated backups to Amazon S3 with a 7-day retention period.

Aurora Serverless supports relational data, transactions, and complex queries while scaling seamlessly to meet workload demands. It integrates natively with Amazon S3 for automated backups, eliminating operational overhead while remaining cost-effective.

  • While DynamoDB is cost-effective and highly scalable, it lacks native support for relational data and transactions, which are required in this scenario. For structured data with relational dependencies, Aurora Serverless is the better fit.
  • Managing an open-source database on EC2 Spot Instances introduces operational complexity and risks interruptions. Additionally, Spot Instances are less suitable for workloads requiring high availability.
  • Provisioned IOPS increases costs unnecessarily, and Glacier Flexible Retrieval is unsuitable for backups requiring quick access.

48
Q

A retail company runs its order processing system on AWS. The system uses an Amazon RDS for MySQL Multi-AZ database cluster as its backend. The company must retain database backups for 30 days to meet compliance requirements. The company uses both automated RDS backups and manual backups for specific points in time. The company wants to enforce the 30-day retention policy for all backups while ensuring that both automated and manual backups created within the last 30 days are preserved. The solution must be cost-effective and require minimal operational effort.

Which solution will meet these requirements MOST cost-effectively?

  1. Configure the RDS backup retention policy to 30 days for automated backups. Use a script to identify and delete manual backups that are older than 30 days.
  2. Use AWS Backup to enforce a 30-day retention policy for automated backups. Configure an AWS Lambda function to identify and delete manual backups older than 30 days.
  3. Disable RDS automated backups. Use AWS Backup to create and retain daily backups for 30 days. Use AWS Backup lifecycle policies to delete backups older than 30 days.
  4. Retain the current configuration with both automated and manual backups. Use Amazon CloudWatch Events with AWS Lambda to automatically delete both automated and manual backups that are older than 30 days.
A

1. Configure the RDS backup retention policy to 30 days for automated backups. Use a script to identify and delete manual backups that are older than 30 days.

RDS backup retention policies can only be applied to automated backups. Manual backups must be managed separately. Using a script to identify and delete older manual backups ensures compliance without additional costs.

  • AWS Backup is not required for RDS automated backups, which natively support retention policies. Using Lambda for manual backup deletion adds unnecessary operational overhead.
  • Disabling automated backups introduces operational risks and does not provide a cost-effective solution compared to the built-in RDS backup retention feature.
  • CloudWatch Events and Lambda add complexity and operational overhead compared to simply using RDS retention policies and a script for manual backups.
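
The "script" the correct answer calls for could look like the sketch below: a pure selection function, plus a commented boto3 driver loop (`describe_db_snapshots` / `delete_db_snapshot`). The snapshot names and dates are made up for illustration:

```python
from datetime import datetime, timedelta, timezone

# Sketch: pick out manual RDS snapshots older than the 30-day retention
# window. The selection logic is a pure function so it can be tested
# without AWS; the commented loop shows how it would drive boto3.

def expired_snapshots(snapshots, now=None, retention_days=30):
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    return [s["DBSnapshotIdentifier"]
            for s in snapshots
            if s["SnapshotCreateTime"] < cutoff]

# In the real script:
# rds = boto3.client("rds")
# snaps = rds.describe_db_snapshots(SnapshotType="manual")["DBSnapshots"]
# for snap_id in expired_snapshots(snaps):
#     rds.delete_db_snapshot(DBSnapshotIdentifier=snap_id)

now = datetime(2024, 6, 30, tzinfo=timezone.utc)
snaps = [
    {"DBSnapshotIdentifier": "pre-migration",     # 60 days old -> delete
     "SnapshotCreateTime": datetime(2024, 5, 1, tzinfo=timezone.utc)},
    {"DBSnapshotIdentifier": "month-end",          # 5 days old -> keep
     "SnapshotCreateTime": datetime(2024, 6, 25, tzinfo=timezone.utc)},
]
stale = expired_snapshots(snaps, now=now)
```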

49
Q

A healthcare company is building a patient records management application that uses a relational database to store user data and configuration details. The company expects steady growth in the number of patients. The database workload is expected to be variable and read-heavy, with occasional write operations. The company wants to cost-optimize the database solution while ensuring the necessary performance for its workload.

Which solution will meet these requirements MOST cost-effectively?

  1. Deploy the database on Amazon RDS. Use General Purpose SSD (gp3) storage with a read replica to ensure consistent performance for read and write operations.
  2. Deploy the database on Amazon Aurora Serverless v2 to automatically scale the database capacity based on actual usage and handle fluctuations in workload.
  3. Deploy the database on Amazon DynamoDB. Use on-demand capacity mode to automatically adjust throughput and accommodate workload changes.
  4. Deploy the database on Amazon RDS. Use magnetic storage with Multi-AZ deployments to ensure durability and handle the read-heavy workload.
A

2. Deploy the database on Amazon Aurora Serverless v2 to automatically scale the database capacity based on actual usage and handle fluctuations in workload.

Aurora Serverless v2 is a cost-effective option that dynamically adjusts capacity to meet workload demands. It is particularly suited for variable workloads, offering the performance of Aurora with a pay-per-use pricing model.

  • While using a read replica can help with read-heavy workloads, it introduces additional costs and management overhead compared to Aurora Serverless v2.
  • DynamoDB is a NoSQL database that is not designed for relational database requirements. While it is highly scalable, it does not meet the company’s need for a relational database solution.
  • Magnetic storage is outdated, has limited performance, and is not cost-effective for a read-heavy workload.

References:

Save time with our AWS cheat sheets.

50
Q

A retail company uses an Amazon Aurora MySQL DB cluster for its order management system. The cluster includes eight Aurora Replicas. The company wants to ensure that reporting queries from its analytics team are automatically distributed across three specific Aurora Replicas that have higher compute and memory capacity than the rest of the cluster.

Which solution will meet these requirements?

  1. Create and use a custom endpoint that targets the three high-capacity replicas.
  2. Use the reader endpoint to automatically distribute reporting queries across all replicas in the cluster.
  3. Create a cluster clone for the reporting workload and use the writer endpoint of the cloned cluster.
  4. Direct reporting queries to the instance endpoints of the three high-capacity replicas.
A

1. Create and use a custom endpoint that targets the three high-capacity replicas.

Aurora custom endpoints allow you to define a subset of replicas for specific workloads. By creating a custom endpoint, the reporting queries can be automatically distributed across the three high-capacity replicas without involving the rest of the cluster.

  • The reader endpoint distributes queries across all Aurora Replicas in the cluster, which does not restrict queries to the desired three replicas.
  • Creating a cluster clone duplicates data unnecessarily, which increases costs and operational complexity. Additionally, the writer endpoint does not distribute queries among replicas.
  • Managing connections manually to individual instance endpoints is inefficient and does not provide automated query distribution across the replicas.
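As a concrete illustration, a custom reader endpoint is created with the CreateDBClusterEndpoint API. The sketch below (pure Python, cluster and instance names are hypothetical) builds the parameters you would pass to boto3's create_db_cluster_endpoint:

```python
# Sketch of boto3 parameters for an Aurora custom endpoint limited to the
# three high-capacity replicas. Names are hypothetical; pass the dict to
# rds.create_db_cluster_endpoint(**params).

def custom_endpoint_params(cluster_id, endpoint_id, members):
    return {
        "DBClusterIdentifier": cluster_id,
        "DBClusterEndpointIdentifier": endpoint_id,
        "EndpointType": "READER",        # serves read traffic only
        "StaticMembers": list(members),  # only these instances receive queries
    }

params = custom_endpoint_params(
    "orders-cluster",
    "reporting-endpoint",
    ["replica-xl-1", "replica-xl-2", "replica-xl-3"],
)
```

The analytics team then points its connection string at the custom endpoint's DNS name, and Aurora load-balances across only the listed members.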


Save time with our AWS cheat sheets.

51
Q

A company runs its critical payment processing application on an Amazon Aurora MySQL cluster in the ap-southeast-1 Region. As part of its disaster recovery (DR) strategy, the company has selected the ap-northeast-1 Region for failover capabilities.
The company requires a recovery point objective (RPO) of less than 5 minutes and a recovery time objective (RTO) of no more than 15 minutes. The company also wants to minimize operational overhead and ensure failover happens with minimal downtime and configuration.

Which solution will meet these requirements with the MOST operational efficiency?

  1. Convert the Aurora cluster to an Aurora global database. Configure cross-Region replication and managed failover.
  2. Create an Aurora read replica in ap-northeast-1 to replicate data from the primary Aurora cluster. Promote the read replica manually in the event of a failover.
  3. Create a new Aurora MySQL cluster in ap-northeast-1 and use AWS Database Migration Service (AWS DMS) to replicate data between clusters.
  4. Use Amazon S3 Cross-Region Replication to replicate database backups from ap-southeast-1 to ap-northeast-1. Restore the backups to a new Aurora cluster during failover.
A

1. Convert the Aurora cluster to an Aurora global database. Configure cross-Region replication and managed failover.

Aurora global databases are purpose-built for disaster recovery and can achieve an RPO of under 5 seconds with automated failover, meeting the company’s stringent RPO and RTO requirements.

  • While this approach provides DR capabilities, it involves manual intervention during failover, which increases downtime and operational complexity, making it less efficient for meeting the RTO goal.
  • AWS DMS is not ideal for low-latency replication of high-throughput transactional databases, and it does not natively provide automated failover capabilities.
  • Restoring backups to create a new Aurora cluster involves significant downtime and is not suitable for the given RPO and RTO requirements.
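For reference, converting to a global database and performing a managed failover each map to a single RDS API call. This is a pure-Python sketch of the parameters for boto3's create_global_cluster and failover_global_cluster; the identifiers and ARN are hypothetical.

```python
# Sketch of the two boto3 RDS calls behind an Aurora global database DR
# setup. ARNs and identifiers are hypothetical; no AWS call is made here.

def global_cluster_params(global_id, source_cluster_arn):
    """Kwargs for rds.create_global_cluster(**params) in the primary Region."""
    return {
        "GlobalClusterIdentifier": global_id,
        "SourceDBClusterIdentifier": source_cluster_arn,
    }

def managed_failover_params(global_id, target_cluster_arn):
    """Kwargs for rds.failover_global_cluster(**params), which promotes the
    secondary Region's cluster during a DR event."""
    return {
        "GlobalClusterIdentifier": global_id,
        "TargetDbClusterIdentifier": target_cluster_arn,
    }

failover = managed_failover_params(
    "payments-global",
    "arn:aws:rds:ap-northeast-1:123456789012:cluster:payments-dr",
)
```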


Save time with our AWS cheat sheets.

52
Q

A Solutions Architect is migrating a distributed application from their on-premises environment into AWS. This application consists of an Apache Cassandra NoSQL database and a containerized SUSE Linux compute layer, with an additional storage layer made up of multiple Microsoft SQL Server databases. Once in the cloud, the company wants as little operational overhead as possible and no schema conversion during the migration, and it wants to host the architecture in a highly available and durable way.

Which of the following groups of services will provide the solutions architect with the best solution?

  1. Run the NoSQL database on DynamoDB, and the compute layer on Amazon ECS on EC2. Use Amazon RDS for Microsoft SQL Server to host the second storage layer.
  2. Run the NoSQL database on DynamoDB, and the compute layer on Amazon ECS on Fargate. Use Amazon RDS for Microsoft SQL Server to host the second storage layer.
  3. Run the NoSQL database on Amazon Keyspaces, and the compute layer on Amazon ECS on Fargate. Use Amazon RDS for Microsoft SQL Server to host the second storage layer.
  4. Run the NoSQL database on Amazon Keyspaces, and the compute layer on Amazon ECS on Fargate. Use Amazon Aurora to host the second storage layer.
A

3. Run the NoSQL database on Amazon Keyspaces, and the compute layer on Amazon ECS on Fargate. Use Amazon RDS for Microsoft SQL Server to host the second storage layer.

Amazon Keyspaces (for Apache Cassandra) is a scalable, highly available, and managed Apache Cassandra–compatible database service. Combined with a containerized, serverless compute layer on Amazon ECS on Fargate and an RDS for Microsoft SQL Server database layer, this is a fully managed version of what currently exists on premises.

  • DynamoDB is not compatible with Apache Cassandra, so migrating to it would require schema conversion. Additionally, Amazon ECS on EC2 leaves the underlying instances to be managed, increasing operational overhead.
  • DynamoDB is not compatible with Apache Cassandra, so migrating to it would require schema conversion, which the company wants to avoid.
  • Amazon Aurora does not have an option to run a Microsoft SQL Server database, therefore this answer is not correct.

Reference:
Amazon Keyspaces (for Apache Cassandra)

Save time with our AWS cheat sheets.

53
Q

A company is deploying an Amazon ElastiCache for Redis cluster. To enhance security, a password should be required to access the database.

What should the solutions architect use?

  1. AWS Directory Service
  2. AWS IAM Policy
  3. Redis AUTH command
  4. VPC Security Group
A

3. Redis AUTH command

Redis authentication tokens enable Redis to require a token (password) before allowing clients to execute commands, thereby improving data security.

You can require that users enter a token on a token-protected Redis server. To do this, include the parameter --auth-token (API: AuthToken) with the correct token when you create your replication group or cluster. Also include it in all subsequent commands to the replication group or cluster.

  • This is a managed Microsoft Active Directory service and cannot add password protection to Redis.
  • You cannot use an IAM policy to enforce a password on Redis.
  • A security group protects at the network layer; it does not affect application authentication.
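To make the setup concrete, here is a pure-Python sketch of the parameters you would pass to boto3's create_replication_group to attach an AUTH token at creation time. The group name and token are hypothetical; note that ElastiCache only accepts an AuthToken when encryption in transit is enabled.

```python
# Sketch of boto3 parameters that attach an AUTH token to a new Redis
# replication group. Identifiers are hypothetical; pass the dict to
# elasticache.create_replication_group(**params).

def redis_auth_params(group_id, token):
    if not 16 <= len(token) <= 128:
        # ElastiCache accepts AUTH tokens of 16 to 128 printable characters
        raise ValueError("AUTH token must be 16-128 characters")
    return {
        "ReplicationGroupId": group_id,
        "ReplicationGroupDescription": "token-protected cache",
        "Engine": "redis",
        "AuthToken": token,
        "TransitEncryptionEnabled": True,  # AUTH requires in-transit encryption
    }

params = redis_auth_params("session-cache", "correct-horse-battery-staple")
```

Clients must then send the same token with the AUTH command before issuing any other commands.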

Reference:
Authenticating with the Valkey and Redis OSS AUTH command

Save time with our AWS cheat sheets.

54
Q

A solutions architect is designing a new service that will use an Amazon API Gateway API on the frontend. The service will need to persist data in a backend database using key-value requests. Initially, the data requirements will be around 1 GB and future growth is unknown. Requests can range from 0 to over 800 requests per second.

Which combination of AWS services would meet these requirements? (Select TWO.)

  1. AWS Fargate
  2. AWS Lambda
  3. Amazon DynamoDB
  4. Amazon EC2 Auto Scaling
  5. Amazon RDS
A

2. AWS Lambda
3. Amazon DynamoDB

In this case AWS Lambda can perform the computation and store the data in an Amazon DynamoDB table. Lambda can easily scale concurrent executions to meet demand, and DynamoDB is built for key-value data storage requirements and is also serverless and easily scalable. This is therefore a cost-effective solution for unpredictable workloads.

  • Containers run constantly and therefore incur costs even when no requests are being made.
  • This uses EC2 instances which will incur costs even when no requests are being made.
  • This is a relational database, not a NoSQL database. It is therefore not suitable for key-value data storage requirements.
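To make the pattern concrete, below is a minimal sketch of a Lambda handler persisting key-value requests to DynamoDB. The table object is injected so the logic runs locally; in Lambda you would create it once with boto3.resource("dynamodb").Table(...), and the table, key, and attribute names here are assumptions.

```python
import json

def handler(event, table):
    """Persist a key-value request; API Gateway proxies events into Lambda."""
    item = {"pk": event["key"], "value": event["value"]}
    table.put_item(Item=item)  # DynamoDB absorbs 0 to 800+ requests/second
    return {"statusCode": 200, "body": json.dumps(item)}

# Stand-in for a boto3 DynamoDB Table resource, for local testing only.
class FakeTable:
    def __init__(self):
        self.items = []
    def put_item(self, Item):
        self.items.append(Item)

table = FakeTable()
resp = handler({"key": "user#1", "value": "hello"}, table)
```

Because both Lambda and DynamoDB scale to zero, the solution costs nothing when no requests arrive.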


Save time with our AWS cheat sheets:

55
Q

A company runs a web application that serves weather updates. The application runs on a fleet of Amazon EC2 instances in a Multi-AZ Auto Scaling group behind an Application Load Balancer (ALB). The instances store data in an Amazon Aurora database. A solutions architect needs to make the application more resilient to sporadic increases in request rates.

Which architecture should the solutions architect implement?

(Select TWO.)

  1. Add an AWS WAF in front of the ALB
  2. Add Amazon Aurora Replicas
  3. Add an AWS Transit Gateway to the Availability Zones
  4. Add an AWS Global Accelerator endpoint
  5. Add an Amazon CloudFront distribution in front of the ALB
A

2. Add Amazon Aurora Replicas
5. Add an Amazon CloudFront distribution in front of the ALB

The architecture is already highly resilient but may be subject to performance degradation if there are sudden increases in request rates. To resolve this situation Amazon Aurora Read Replicas can be used to serve read traffic which offloads requests from the main database. On the frontend an Amazon CloudFront distribution can be placed in front of the ALB and this will cache content for better performance and also offloads requests from the backend.

  • A web application firewall protects applications from malicious attacks. It does not improve performance.
  • AWS Transit Gateway is used to interconnect VPCs and on-premises networks; it does not improve resilience to increased request rates.
  • This service is used for directing users to different instances of the application in different regions based on latency.
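On the database side, adding an Aurora Replica is a single API call: creating a DB instance inside an existing cluster makes it a reader. This pure-Python sketch builds the parameters for boto3's create_db_instance; the identifiers and instance class are hypothetical.

```python
# Sketch of the boto3 call that adds an Aurora Replica to offload reads.
# Identifiers and instance class are hypothetical; pass the dict to
# rds.create_db_instance(**params).

def aurora_replica_params(cluster_id, instance_id):
    return {
        "DBInstanceIdentifier": instance_id,
        "DBClusterIdentifier": cluster_id,  # joining the cluster => replica
        "Engine": "aurora-mysql",
        "DBInstanceClass": "db.r6g.large",  # illustrative size
    }

params = aurora_replica_params("weather-cluster", "weather-reader-1")
```

Read traffic is then directed at the cluster's reader endpoint, which load-balances across the replicas.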


Save time with our AWS cheat sheets:

56
Q

An Amazon RDS Read Replica is being deployed in a separate region. The master database is not encrypted but all data in the new region must be encrypted.

How can this be achieved?

  1. Enable encryption using Key Management Service (KMS) when creating the cross-region Read Replica
  2. Encrypt a snapshot from the master DB instance, create an encrypted cross-region Read Replica from the snapshot
  3. Enabled encryption on the master DB instance, then create an encrypted cross-region Read Replica
  4. Encrypt a snapshot from the master DB instance, create a new encrypted master DB instance, and then create an encrypted cross-region Read Replica
A

4. Encrypt a snapshot from the master DB instance, create a new encrypted master DB instance, and then create an encrypted cross-region Read Replica

You cannot create an encrypted Read Replica from an unencrypted master DB instance. You also cannot enable encryption after launch time for the master DB instance. Therefore, you must create a new master DB by taking a snapshot of the existing DB, encrypting it, and then creating the new DB from the snapshot. You can then create the encrypted cross-region Read Replica of the master DB.

All other options will not work due to the limitations explained above.


Save time with our AWS cheat sheets.

57
Q

An Amazon RDS PostgreSQL database is configured as Multi-AZ. A solutions architect needs to scale read performance and the solution must be configured for high availability.

What is the most cost-effective solution?

  1. Create a read replica as a Multi-AZ DB instance
  2. Deploy a read replica in a different AZ to the master DB instance
  3. Deploy a read replica using Amazon ElastiCache
  4. Deploy a read replica in the same AZ as the master DB instance
A

1. Create a read replica as a Multi-AZ DB instance

You can create a read replica as a Multi-AZ DB instance. Amazon RDS creates a standby of your replica in another Availability Zone for failover support for the replica. Creating your read replica as a Multi-AZ DB instance is independent of whether the source database is a Multi-AZ DB instance.

  • This does not provide high availability for the read replica
  • ElastiCache is not used to create read replicas of RDS databases.
  • This solution does not include HA for the read replica.
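Creating the replica as a Multi-AZ DB instance is a single flag on the CreateDBInstanceReadReplica call. This pure-Python sketch builds the parameters for boto3's create_db_instance_read_replica; the identifiers are hypothetical.

```python
# Sketch of the boto3 call that creates a Multi-AZ read replica.
# Identifiers are hypothetical; pass the dict to
# rds.create_db_instance_read_replica(**params).

def multi_az_replica_params(source_id, replica_id):
    return {
        "DBInstanceIdentifier": replica_id,
        "SourceDBInstanceIdentifier": source_id,
        "MultiAZ": True,  # RDS keeps a standby of the replica in another AZ
    }

params = multi_az_replica_params("prod-postgres", "prod-postgres-reader")
```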


Save time with our AWS cheat sheets.

58
Q

A global financial services company is currently operating a three-tier web application to handle their main customer facing website. This application uses several Amazon EC2 instances behind an Application Load Balancer and connects directly to a DynamoDB table.
Due to recent customer complaints of slow loading times, their Solutions Architect has been asked to implement changes to solve this problem, without rearchitecting the core application components.

Which combination of actions should the solutions architect take to accomplish this?

(Select TWO.)

  1. Migrate the DynamoDB database to Amazon Aurora with a multi-AZ deployment model.
  2. Migrate the entire application stack to AWS Elastic Beanstalk with both web server and worker environments.
  3. Create a CloudFront distribution and place it in front of the Application Load Balancer.
  4. Set up an Amazon DynamoDB Accelerator (DAX) cluster in front of the DynamoDB table.
  5. Migrate the web application to be hosted on a containerized solution using AWS Fargate.
A

3. Create a CloudFront distribution and place it in front of the Application Load Balancer.
4. Set up an Amazon DynamoDB Accelerator (DAX) cluster in front of the DynamoDB table.

A CloudFront distribution would cache content in one of the many global edge locations, ensuring that any customer access to the content will be accessing it at a much lower latency compared to using the Application Load Balancer on its own.

Secondly, DynamoDB has a built-in caching solution known as DynamoDB Accelerator (DAX). If your application serves traffic from a DynamoDB database and is struggling to scale, you can use DAX to improve application performance.

  • Migrating the entire application to AWS Elastic Beanstalk would require rearchitecting and would not necessarily improve the latency of the application for end users.
  • Refactoring the application to move from a No-SQL database (DynamoDB) to a SQL database (Amazon Aurora) would take a significant amount of application and code changes, due to the fundamental differences between SQL and NoSQL databases.
  • The application does not currently use containers, and instead uses Amazon EC2 instances. Changing the application to using a containerized compute layer would also require architectural changes and would not be suitable for this use case.
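Standing up a DAX cluster in front of the table is one API call. This pure-Python sketch builds the parameters for a boto3 "dax" client's create_cluster; the cluster name, node type, and role ARN are hypothetical.

```python
# Sketch of boto3 parameters for a DAX cluster in front of the DynamoDB
# table. Names and ARN are hypothetical; pass the dict to
# dax.create_cluster(**params).

def dax_cluster_params(name, iam_role_arn, nodes=3):
    return {
        "ClusterName": name,
        "NodeType": "dax.r5.large",  # illustrative node size
        "ReplicationFactor": nodes,  # one primary plus read replicas
        "IamRoleArn": iam_role_arn,  # lets DAX read/write the table
    }

params = dax_cluster_params("orders-dax", "arn:aws:iam::123456789012:role/dax-role")
```

The application then swaps its DynamoDB endpoint for the DAX cluster endpoint; no other code changes are needed, which keeps the core architecture intact.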

Reference:
Amazon CloudFront

Save time with our AWS cheat sheets.

59
Q

A law firm has recently moved an on-premises multi-tier web application to AWS. Currently, the web application is based on a containerized solution and is running inside Linux-based EC2 instances which connect to a PostgreSQL database hosted on separate but dedicated EC2 instances. The company wishes to optimize operational efficiency and performance.

**Which combination of actions should the solutions architect take?**

(Select TWO.)

  1. Migrate the PostgreSQL database to Amazon Aurora.
  2. Migrate the web application to the same Amazon EC2 instances as the database.
  3. Set up an Amazon CloudFront distribution for the web application content.
  4. Set up Amazon ElastiCache between the web application and the PostgreSQL database.
  5. Migrate the web application to be hosted on AWS Fargate with Amazon Elastic Container Service (Amazon ECS).
A

**1.** Migrate the PostgreSQL database to Amazon Aurora.
**5.** Migrate the web application to be hosted on AWS Fargate with Amazon Elastic Container Service (Amazon ECS).

Amazon Aurora (Aurora) is a fully managed relational database engine that's compatible with MySQL and PostgreSQL. MySQL and PostgreSQL combine the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases, and the code, tools, and applications you use today with your existing MySQL and PostgreSQL databases can be used with Aurora. With some workloads, Aurora can deliver up to five times the throughput of MySQL and up to three times the throughput of PostgreSQL without requiring changes to most of your existing applications.

Amazon ECS is a fully managed container orchestration service that makes it easy for you to deploy, manage, and scale containerized applications. This is a better hosting solution for a containerized workload than managing the underlying container platform yourself. With Fargate, the solution is serverless, which greatly reduces operational overhead.

* Co-locating the web application and the database on the same EC2 instances might reduce cost but doesn't offer any other advantages.
* CloudFront helps with caching content globally for better performance but does not reduce the operational overhead of this solution.
* Caching will only help when you have hot data segments and does not reduce the operational overhead of this solution.

**References:**

* [What is Amazon Aurora?](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html)
* [Checking Aurora MySQL version numbers](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Updates.Versions.html#AuroraMySQL.Updates.UpgradePaths)

Save time with our AWS cheat sheets:

* [Amazon Aurora](https://digitalcloud.training/amazon-aurora/)
* [Amazon ECS and EKS](https://digitalcloud.training/amazon-ecs-and-eks/)
60
Q

A data analytics company is hosting a data lake which consists of data in Amazon S3 and Amazon RDS for PostgreSQL. The company needs a reporting solution that provides data visualization for the latest dataset and includes all the data sources within the data lake. Only the company's management team should have full access to all the visualizations. The rest of the company should have only limited access.

**Which solution will meet these requirements?**

  1. Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data. Share the dashboards with the appropriate IAM roles.
  2. Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data. Share the dashboards with the appropriate users and groups.
  3. Create an AWS Glue table and crawler for the data in Amazon S3. Create an AWS Glue extract, transform, and load (ETL) job to produce reports. Publish the reports to Amazon S3. Use S3 bucket policies to limit access to the reports.
  4. Create an AWS Glue table and crawler for the data in Amazon S3. Use Amazon Athena Federated Query to access data within Amazon RDS for PostgreSQL. Generate reports by using Amazon Athena. Publish the reports to Amazon S3. Use S3 bucket policies to limit access to the reports.
A

**2.** Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data. Share the dashboards with the appropriate users and groups.

Amazon QuickSight is the best fit for data visualization and reporting, especially when data resides in Amazon S3 and Amazon RDS for PostgreSQL. QuickSight supports connecting to both Amazon S3 (via Athena) and RDS for PostgreSQL as data sources, allowing integrated dashboards across both. QuickSight controls access at the level of users and groups, and user/group sharing aligns with the need to give management full access and everyone else limited access, fulfilling the access control requirements effectively.

* Dashboards are shared with users and groups in your QuickSight account, not with IAM roles.
* Static AWS Glue ETL reports published to Amazon S3 provide no visualization capabilities and do not automatically reflect the latest dataset, and S3 bucket policies are a coarse mechanism for restricting access.
* While Athena Federated Query is useful for joining S3 and RDS data, it lacks visualization capabilities, and S3-based reports are not ideal for dashboards or restricted sharing by user role.

**Reference:** [Create a data source connection](https://docs.aws.amazon.com/athena/latest/ug/connect-to-a-data-source.html)

Save time with our AWS cheat sheets:

* [AWS Glue](https://digitalcloud.training/aws-glue/)
* [AWS Athena](https://digitalcloud.training/amazon-athena/)
61
Q

A Solutions Architect manages multiple Amazon RDS MySQL databases. To improve security, the Solutions Architect wants to enable secure user access with short-lived credentials.

**How can these requirements be met?**

  1. Configure the MySQL databases to use the AWS Security Token Service (STS)
  2. Configure the application to use the AUTH command to send a unique password
  3. Create the MySQL user accounts to use the AWSAuthenticationPlugin with IAM
  4. Configure the MySQL databases to use AWS KMS data encryption keys
A

**3.** Create the MySQL user accounts to use the AWSAuthenticationPlugin with IAM

With MySQL, authentication is handled by AWSAuthenticationPlugin, an AWS-provided plugin that works seamlessly with IAM to authenticate your IAM users. Connect to the DB instance and issue the CREATE USER statement, as shown in the following example:

    CREATE USER jane_doe IDENTIFIED WITH AWSAuthenticationPlugin AS 'RDS';

The IDENTIFIED WITH clause allows MySQL to use the AWSAuthenticationPlugin to authenticate the database account (jane_doe). The AS 'RDS' clause refers to the authentication method, and the specified database account should have the same name as the IAM user or role. In this example, both the database account and the IAM user or role are named jane_doe.

* You cannot configure MySQL to directly use the AWS STS.
* The AUTH command is used with Redis databases, not with RDS databases.
* Data encryption keys are used for data encryption, not management of connection strings.

**Reference:** [Creating a database account using IAM authentication](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.DBAccounts.html)

Save time with our [AWS cheat sheets](https://digitalcloud.training/amazon-rds/).
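On the client side, the application swaps its stored password for a token minted per connection. This pure-Python sketch shows the arguments you would pass to boto3's generate_db_auth_token; the endpoint and user name are hypothetical, and the real call (which signs the token with the caller's IAM credentials) is left commented out.

```python
# Sketch of the short-lived-credential flow for IAM database authentication.
# The endpoint and user name are hypothetical assumptions.

def auth_token_args(endpoint, port, user):
    """Arguments for rds.generate_db_auth_token(...)."""
    return {"DBHostname": endpoint, "Port": port, "DBUsername": user}

args = auth_token_args(
    "mydb.abc123.us-east-1.rds.amazonaws.com", 3306, "jane_doe"
)
# token = rds.generate_db_auth_token(**args)  # token expires after 15 minutes
# Connect over SSL with the token supplied as the password for jane_doe.
```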
62
Q

A company has an on-premises server that uses a MySQL database to process and store customer information. The company wants to migrate to an AWS database service to achieve higher availability and to improve application performance. Additionally, the company wants to offload reporting workloads from its primary database to ensure it remains performant.

**Which solution will meet these requirements in the MOST operationally efficient way?**

  1. Use Amazon RDS with MySQL in a Single-AZ deployment. Create a read replica in the same availability zone as the primary DB instance. Direct the reporting functions to the read replica.
  2. Use AWS Database Migration Service (AWS DMS) to create an Amazon Aurora DB cluster in multiple AWS Regions. Point the reporting functions toward a separate DB instance from the primary DB instance.
  3. Use Amazon Aurora with MySQL compatibility. Direct the reporting functions to use one of the Aurora Replicas.
  4. Use Amazon EC2 instances to deploy a self-managed MySQL database with a replication setup for reporting purposes. Place instances in multiple availability zones and manage backups and patching manually.
A

**3.** Use Amazon Aurora with MySQL compatibility. Direct the reporting functions to use one of the Aurora Replicas.

Amazon Aurora with MySQL compatibility is a good fit for achieving high availability and improved performance. Aurora automatically distributes the data across multiple AZs in a single region. Additionally, Aurora allows the creation of up to 15 Aurora Replicas that share the same underlying volume as the primary instance. Directing reporting functions to the Aurora Replica is an effective way to offload reporting workloads from the primary database.

* Though you can use Amazon RDS with MySQL in a Single-AZ deployment and create a read replica, it is not the most operationally efficient option as it does not provide the high availability that Aurora's architecture offers.
* Using AWS DMS to create Amazon Aurora DB clusters in multiple AWS Regions would be overkill for the requirements. It could also introduce additional complexity and doesn't specifically address using a replica for reporting purposes.
* Managing your own database on Amazon EC2 instances requires a significant operational overhead as you need to handle backups, patch management, and high availability yourself. This option is not the most operationally efficient compared to using a managed database service like Amazon Aurora.

**Reference:** [Replication with Amazon Aurora](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html)

Save time with our [AWS cheat sheets](https://digitalcloud.training/amazon-aurora/).
63
Q

A financial firm is aiming to leverage AWS Cloud for augmenting its on-premises disaster recovery (DR) architecture. The firm's main application, running on PostgreSQL, is housed on a virtual machine (VM) on-premises. The DR solution needs to align with the application's recovery point objective (RPO) of less than a minute and a recovery time objective (RTO) of within two hours, all while keeping costs to a minimum.

**Which solution will meet these requirements?**

  1. Configure an active-active multi-site setup between the on-premises server and AWS using PostgreSQL with a third-party high availability solution.
  2. Set up a warm standby Amazon RDS for PostgreSQL database on AWS. Configure AWS Database Migration Service (AWS DMS) to use change data capture (CDC).
  3. Use AWS Elastic Disaster Recovery with continuous replication to act as a pilot light solution on AWS.
  4. Utilize third-party backup software to perform daily backups and store a secondary set of backups in Amazon S3.
A

**2.** Set up a warm standby Amazon RDS for PostgreSQL database on AWS. Configure AWS Database Migration Service (AWS DMS) to use change data capture (CDC).

Configuring a warm standby Amazon RDS for PostgreSQL database on AWS and using AWS DMS with change data capture will meet the RTO and RPO requirements. DMS can handle the ongoing replication from the on-premises PostgreSQL to the standby RDS instance, providing a near real-time replica of the data. In a DR scenario, this standby instance can be promoted to become the new primary database, meeting the required RTO and RPO.

* Setting up an active-active multi-site setup between the on-premises server and AWS using PostgreSQL with a third-party high availability solution might meet the RPO and RTO requirements, but it would likely be more expensive and complex than the correct answer.
* AWS Elastic Disaster Recovery with continuous replication can provide a DR solution, but for a database, it is typically more efficient to use a service designed for that purpose, like RDS with DMS.
* Using third-party backup software to perform daily backups and storing a secondary set of backups in Amazon S3 would not meet the RPO of less than a minute, as this approach could lead to a data loss of up to 24 hours. Also, the process of restoring from a backup might not meet the RTO of within two hours.

**Reference:** [Creating tasks for ongoing replication using AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Task.CDC.html)

Save time with our AWS cheat sheets:

* [Amazon RDS](https://digitalcloud.training/amazon-rds/)
* [AWS Migration Services](https://digitalcloud.training/aws-migration-services/)
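The ongoing replication is configured as a DMS task with migration type full-load-and-cdc. This pure-Python sketch builds the parameters for boto3's create_replication_task; all ARNs and identifiers are hypothetical, and the table-mapping rule simply includes every table.

```python
import json

# Sketch of boto3 parameters for a DMS task that performs a full load
# followed by ongoing change data capture. ARNs are hypothetical; pass the
# dict to dms.create_replication_task(**params).

def cdc_task_params(task_id, source_arn, target_arn, instance_arn):
    return {
        "ReplicationTaskIdentifier": task_id,
        "SourceEndpointArn": source_arn,
        "TargetEndpointArn": target_arn,
        "ReplicationInstanceArn": instance_arn,
        "MigrationType": "full-load-and-cdc",  # CDC keeps the RPO near real time
        "TableMappings": json.dumps({
            "rules": [{
                "rule-type": "selection",
                "rule-id": "1",
                "rule-name": "include-all",
                "object-locator": {"schema-name": "%", "table-name": "%"},
                "rule-action": "include",
            }]
        }),
    }

params = cdc_task_params("pg-dr-task", "arn:src", "arn:tgt", "arn:inst")
```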
64
Q

A health tech company runs a multi-tier medical records application in the AWS Cloud, which operates across three Availability Zones. The application architecture includes an Application Load Balancer, a cluster of Amazon EC2 instances that handle user session states, and a PostgreSQL database running on an EC2 instance. The company anticipates a sharp surge in application traffic due to a new partnership. The company needs to scale to accommodate future application capacity demands and ensure high availability across all three Availability Zones.

**Which solution will meet these requirements?**

  1. Migrate the PostgreSQL database to Amazon RDS for PostgreSQL with a Multi-AZ DB instance deployment. Use Amazon ElastiCache for Redis with a replication group to manage session data and cache reads. Migrate the application server to an Auto Scaling group across three Availability Zones.
  2. Migrate the PostgreSQL database to Amazon Aurora with PostgreSQL compatibility with a single AZ deployment. Use Amazon ElastiCache for Memcached to manage session data and cache reads. Migrate the application server to an Auto Scaling group across three Availability Zones.
  3. Migrate the PostgreSQL database to Amazon DynamoDB. Use DynamoDB Accelerator (DAX) to cache reads. Store the session data in DynamoDB. Migrate the application server to an Auto Scaling group across three Availability Zones.
  4. Keep the PostgreSQL database on the EC2 instance. Use Amazon ElastiCache for Redis to manage session data and cache reads. Migrate the application server to an Auto Scaling group across three Availability Zones.
A

**1.** Migrate the PostgreSQL database to Amazon RDS for PostgreSQL with a Multi-AZ DB instance deployment. Use Amazon ElastiCache for Redis with a replication group to manage session data and cache reads. Migrate the application server to an Auto Scaling group across three Availability Zones.

This solution fulfills all the requirements. Amazon RDS with Multi-AZ instances provides high availability and failover support for DB instances. ElastiCache for Redis supports storing session state data and can provide sub-millisecond response times, enabling applications to achieve instant, high-speed reads and writes. Auto Scaling ensures that the application has the correct number of Amazon EC2 instances to handle the load for your application.

* Aurora is a high-performance database service, but using a single AZ deployment doesn't provide the high availability across multiple AZs the company wants. Also, while ElastiCache for Memcached can be used for caching, it doesn't offer the durability and atomicity that Redis offers, which is particularly useful for session data.
* Although DynamoDB is a high-performance, scalable NoSQL database, it is not a drop-in replacement for a relational database like PostgreSQL. It has a different data model and supports a different set of query options. This change could require significant modifications to the application code and may not support the same transactional capabilities as PostgreSQL.
* Although the EC2 instance can run the PostgreSQL database, it does not provide the same level of managed service benefits (like automatic patching, backups, and high availability with Multi-AZ deployments) as Amazon RDS. This option would likely result in higher operational overhead and doesn't fully utilize the benefits of managed AWS services.

**References:**

* [Amazon RDS Multi-AZ](https://aws.amazon.com/rds/features/multi-az/)
* [Amazon ElastiCache for Valkey and for Redis OSS](https://aws.amazon.com/elasticache/redis/)
* [Amazon EC2 Auto Scaling](https://aws.amazon.com/ec2/autoscaling/)

Save time with our [AWS cheat sheets](https://digitalcloud.training/amazon-elasticache/).
65
An organization manages its own MySQL databases, which are hosted on Amazon EC2 instances. In response to changes in demand, replication and scaling are manually managed by the company. It is essential for the company to have a way to add and remove compute capacity as needed from the database tier. The solution also must offer improved performance, scaling, and durability with minimal effort from operations. **Which solution meets these requirements?** 1. Migrate the databases to Amazon Aurora Serverless (Aurora MySQL). 2. Migrate the databases to Amazon Aurora Serverless (Aurora PostgreSQL). 3. Consolidate the databases into a single MySQL database. Use larger EC2 instances for the larger database. 4. For the database tier, create an EC2 Auto Scaling group. Create a new database environment and migrate the existing databases.
**1.** Migrate the databases to Amazon Aurora Serverless (Aurora MySQL).

## Footnote

Amazon Aurora provides automatic scaling for MySQL databases. Amazon Aurora provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. Aurora Serverless also removes the operational effort of provisioning and managing servers for the database cluster.

* Although PostgreSQL is an option for Aurora, the database schema would have to be changed for PostgreSQL compatibility.
* Running the databases on larger EC2 instances would not provide improved performance, scaling, and durability.
* An EC2 Auto Scaling group would improve the scalability of the solution, but it would still have to be heavily managed by the organization, something that would not be needed with Aurora Serverless.

**Reference:** [Amazon Aurora Serverless](https://aws.amazon.com/rds/aurora/serverless/)

Save time with our [AWS cheat sheets](https://digitalcloud.training/amazon-rds/).
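As a rough sketch of requesting an Aurora Serverless v2 cluster with the boto3 `create_db_cluster` API: the cluster identifier and the capacity range (expressed in Aurora Capacity Units, ACUs) below are illustrative assumptions, and the helper only builds the request parameters.

```python
def aurora_serverless_params(cluster_id, min_acu=0.5, max_acu=8.0):
    """Build create_db_cluster parameters for an Aurora Serverless v2
    cluster; capacity scales automatically between min_acu and max_acu."""
    return {
        "DBClusterIdentifier": cluster_id,
        "Engine": "aurora-mysql",
        "ServerlessV2ScalingConfiguration": {
            "MinCapacity": min_acu,
            "MaxCapacity": max_acu,
        },
    }

# Example call (not executed here):
# boto3.client("rds").create_db_cluster(**aurora_serverless_params("orders-db"))
```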
66
An application uses Amazon EC2 instances and an Amazon RDS MySQL database. The database is not currently encrypted. A solutions architect needs to apply encryption to the database for all new and existing data.

**How should this be accomplished?**

1. Create an Amazon ElastiCache cluster and encrypt data using the cache nodes
2. Enable encryption for the database using the API. Take a full snapshot of the database. Delete old snapshots
3. Take a snapshot of the RDS instance. Create an encrypted copy of the snapshot. Restore the RDS instance from the encrypted snapshot
4. Create an RDS read replica with encryption at rest enabled. Promote the read replica to master and switch the application over to the new master. Delete the old RDS instance
**3.** Take a snapshot of the RDS instance. Create an encrypted copy of the snapshot. Restore the RDS instance from the encrypted snapshot

## Footnote

There are some limitations for encrypted Amazon RDS DB instances: you can't modify an existing unencrypted Amazon RDS DB instance to make the instance encrypted, and you can't create an encrypted read replica from an unencrypted instance. However, you can use the Amazon RDS snapshot feature to encrypt an unencrypted snapshot that's taken from the RDS database that you want to encrypt. Restore a new RDS DB instance from the encrypted snapshot to deploy a new encrypted DB instance. Finally, switch your connections to the new DB instance.

* You cannot encrypt an RDS database using an ElastiCache cache node.
* You cannot enable encryption for an existing database.
* You cannot create an encrypted read replica from an unencrypted database instance.

**References:**

* [Encrypting Amazon RDS resources](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html)
* [How can I encrypt an unencrypted Amazon RDS DB instance for MySQL or MariaDB with minimal downtime?](https://aws.amazon.com/premiumsupport/knowledge-center/rds-encrypt-instance-mysql-mariadb/)

Save time with our [AWS cheat sheets](https://digitalcloud.training/amazon-rds/).
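The snapshot, encrypted copy, and restore sequence can be sketched with the boto3 RDS client as below. The identifiers and KMS key alias are placeholders, and the waiters needed between steps (each API call is asynchronous) are noted in the docstring but omitted for brevity.

```python
def encrypt_existing_instance(rds, source_id, kms_key_id, new_id):
    """Snapshot an unencrypted instance, copy the snapshot with encryption,
    then restore a new encrypted instance from the encrypted copy.

    In practice, wait for each step to complete (e.g. with rds.get_waiter)
    before starting the next one."""
    snap_id = f"{source_id}-snapshot"
    rds.create_db_snapshot(
        DBInstanceIdentifier=source_id,
        DBSnapshotIdentifier=snap_id,
    )
    rds.copy_db_snapshot(
        SourceDBSnapshotIdentifier=snap_id,
        TargetDBSnapshotIdentifier=f"{snap_id}-encrypted",
        KmsKeyId=kms_key_id,  # encryption is applied during the copy
    )
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier=new_id,
        DBSnapshotIdentifier=f"{snap_id}-encrypted",
    )
```

After the restore completes, switch application connections to the new instance and retire the unencrypted one.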
67
A company runs a streaming media service and the content is stored on Amazon S3. The media catalog server pulls updated content from S3 and can issue over 1 million read operations per second for short periods. Latency must be kept under 5 ms for these updates.

**Which solution will provide the BEST performance for the media catalog updates?**

1. Update the application code to use an Amazon ElastiCache for Redis cluster
2. Implement Amazon CloudFront and cache the content at Edge Locations
3. Update the application code to use an Amazon DynamoDB Accelerator cluster
4. Implement an Instance store volume on the media catalog server
**1.** Update the application code to use an Amazon ElastiCache for Redis cluster

## Footnote

Some applications, such as media catalog updates, require high-frequency reads and consistent throughput. For such applications, customers often complement S3 with an in-memory cache, such as Amazon ElastiCache for Redis, to reduce the S3 retrieval cost and to improve performance. ElastiCache for Redis is a fully managed, in-memory data store that provides sub-millisecond latency performance with high throughput.

ElastiCache for Redis complements S3 in the following ways:

* Redis stores data in-memory, so it provides sub-millisecond latency and supports incredibly high requests per second.
* It supports key/value based operations that map well to S3 operations (for example, GET/SET => GET/PUT), making it easy to write code for both S3 and ElastiCache.
* It can be implemented as an application-side cache. This allows you to use S3 as your persistent store and benefit from its durability, availability, and low cost. Your applications decide what objects to cache, when to cache them, and how to cache them.

In this example the media catalog is pulling updates from S3, so the performance between these components is what needs to be improved. Therefore, using ElastiCache to cache the content will dramatically increase performance.

* CloudFront is good for getting media closer to users, but in this case we're trying to improve performance within the data center when moving data from S3 to the media catalog server.
* DynamoDB Accelerator (DAX) is used with DynamoDB but is unsuitable for use with Amazon S3.
* An instance store volume will improve local disk performance but will not improve reads from Amazon S3.

**Reference:** [Turbocharge Amazon S3 with Amazon ElastiCache for Redis](https://aws.amazon.com/blogs/storage/turbocharge-amazon-s3-with-amazon-elasticache-for-redis/)

Save time with our [AWS cheat sheets](https://digitalcloud.training/amazon-elasticache/).
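The application-side cache described above is the classic cache-aside read path. This sketch assumes a redis-py-style cache client (`get`/`setex`) and a boto3-style S3 client; the bucket, key, and 5-minute TTL are illustrative placeholders.

```python
def get_object_cached(cache, s3, bucket, key, ttl=300):
    """Cache-aside read: serve from Redis when possible, otherwise fetch
    the object from S3 and populate the cache for subsequent reads."""
    cached = cache.get(key)
    if cached is not None:
        return cached  # sub-millisecond cache hit
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    cache.setex(key, ttl, body)  # expire so updated objects get re-fetched
    return body
```

S3 remains the durable store of record; the TTL bounds how stale a cached catalog entry can be before the next read falls through to S3.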
68
A company runs an application on premises that stores a large quantity of semi-structured data using key-value pairs. The application code will be migrated to AWS Lambda and a highly scalable solution is required for storing the data.

**Which datastore will be the best fit for these requirements?**

1. Amazon EFS
2. Amazon RDS MySQL
3. Amazon EBS
4. Amazon DynamoDB
**4.** Amazon DynamoDB

## Footnote

Amazon DynamoDB is a NoSQL database that stores data using key-value pairs. It is ideal for storing large amounts of semi-structured data and is also highly scalable. This is the best solution for storing this data based on the requirements in the scenario.

* The Amazon Elastic File System (EFS) is not suitable for storing key-value pairs.
* Amazon Relational Database Service (RDS) is used for structured data as it is a SQL type of database.
* Amazon Elastic Block Store (EBS) is a block-based storage system whose volumes are attached to EC2 instances. It is not suited to storing key-value pairs or to use by Lambda functions.

**Reference:** [Amazon DynamoDB features](https://aws.amazon.com/dynamodb/features/)

Save time with our [AWS cheat sheets](https://digitalcloud.training/amazon-dynamodb/).
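As a sketch of how the migrated Lambda code might persist its key-value records, the helper below builds a low-level DynamoDB `put_item` request. The table name and attribute names (`pk`, `payload`) are assumptions for illustration; `"S"` marks a string attribute in DynamoDB's typed-attribute format.

```python
import json

def put_item_params(table_name, item_key, payload):
    """Build parameters for dynamodb.put_item using the low-level
    typed-attribute format ("S" = string)."""
    return {
        "TableName": table_name,
        "Item": {
            "pk": {"S": item_key},
            "payload": {"S": json.dumps(payload)},
        },
    }

# Inside a Lambda handler (not executed here):
# boto3.client("dynamodb").put_item(
#     **put_item_params("app-data", "order#1", {"qty": 2}))
```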
69
A financial services company is migrating its sensitive customer data and applications to AWS. They want to ensure that the data is securely stored and managed while reducing the overall maintenance and operational overhead associated with managing databases.

**Which solution will meet these requirements?**

1. Migrate the applications and data to Amazon EC2 instances. Utilize the AWS Key Management Service (AWS KMS) customer managed keys for encryption.
2. Migrate the data and applications to Amazon RDS instances. Enable encryption at rest using AWS Key Management Service (AWS KMS).
3. Store the data in Amazon S3. Utilize Amazon Macie for ongoing data security and threat detection.
4. Migrate the data to Amazon RDS instances. Enable Amazon GuardDuty for data protection and threat detection.
**2.** Migrate the data and applications to Amazon RDS instances. Enable encryption at rest using AWS Key Management Service (AWS KMS).

## Footnote

Amazon RDS makes it easy to go from project conception to deployment by managing time-consuming database administration tasks including backups, software patching, monitoring, scaling, and replication. Amazon RDS supports encryption at rest, which ensures the security of sensitive data and meets regulatory compliance requirements. AWS Key Management Service (AWS KMS) is integrated with Amazon RDS to make it easier to create, control, and manage keys for encryption.

* While this solution offers data encryption, it does not meet the requirement to reduce operational overhead. Managing databases on EC2 instances requires additional administrative tasks, such as managing backups and applying software patches, which Amazon RDS handles automatically.
* Amazon S3 and Macie are suitable for data storage and security analysis, respectively. However, Amazon S3 is not designed to serve as a transactional database for applications, which is a key requirement in this scenario.
* While Amazon RDS is a correct choice for database management and Amazon GuardDuty offers threat detection, GuardDuty is not specifically designed for data protection within databases. It's a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts and workloads.

**Reference:** [Encrypting Amazon RDS resources](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html)

Save time with our AWS cheat sheets:

* [Amazon RDS](https://digitalcloud.training/amazon-rds/)
* [AWS KMS](https://digitalcloud.training/aws-kms/)
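Since encryption at rest is chosen at creation time for RDS instances, a sketch of the relevant boto3 `create_db_instance` parameters is shown below. The instance class, storage size, username, and key alias are illustrative assumptions; the helper only builds the request parameters.

```python
def encrypted_rds_params(instance_id, kms_key_id):
    """Build create_db_instance parameters for a KMS-encrypted MySQL
    instance; sizing values here are illustrative only."""
    return {
        "DBInstanceIdentifier": instance_id,
        "Engine": "mysql",
        "DBInstanceClass": "db.m5.large",   # assumed sizing
        "AllocatedStorage": 100,
        "MasterUsername": "admin",
        "ManageMasterUserPassword": True,   # Secrets Manager holds the password
        "StorageEncrypted": True,
        "KmsKeyId": kms_key_id,             # omit to use the default aws/rds key
    }

# boto3.client("rds").create_db_instance(
#     **encrypted_rds_params("customers-db", "alias/rds-key"))
```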
70
A cloud architect is assessing the resilience of a web application deployed on AWS. It was observed that the application experienced a downtime of about 3 minutes when a scheduled failover was performed on the application's Amazon RDS MySQL database as part of a scaling operation. The organization wants to mitigate such downtime in future scaling exercises while minimizing operational overhead.

**Which solution will be the MOST effective in achieving this?**

1. Implement more RDS MySQL read replicas in the cluster to manage the load during the failover.
2. Establish a secondary RDS MySQL cluster within the same AWS Region. During any future failover, modify the application to connect to the secondary cluster's writer endpoint.
3. Implement an Amazon ElastiCache for Redis cluster to manage the load during the failover.
4. Configure an Amazon RDS Proxy for the database and modify the application to connect to the proxy endpoint.
**4.** Configure an Amazon RDS Proxy for the database and modify the application to connect to the proxy endpoint.

## Footnote

Amazon RDS Proxy is a fully managed, highly available database proxy for Amazon RDS that makes applications more scalable, more resilient to database failures, and more secure. During a failover, RDS Proxy automatically connects to a standby database instance while preserving connections from your application, reducing failover times for RDS and Aurora Multi-AZ databases. So, there is minimal downtime for the application.

* Adding more read replicas to the cluster does not decrease the downtime during a failover. It only improves the database's ability to handle read-heavy workloads. Read replicas do not contribute to a faster failover process.
* This approach is operationally heavy as it involves managing two separate RDS clusters and manually updating the application's database endpoint during a failover. Moreover, it does not necessarily reduce the downtime during a failover as there might be data inconsistency issues between the primary and secondary clusters, depending on the replication latency.
* ElastiCache is an in-memory cache and not a relational database service. It is typically used to cache frequently accessed data to reduce latency and improve application performance, not for managing failovers.

**References:**

* [Amazon RDS Proxy](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-proxy.html)
* [Configuring and managing a Multi-AZ deployment for Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html)

Save time with our [AWS cheat sheets](https://digitalcloud.training/amazon-rds/).
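From the application's perspective, adopting RDS Proxy is mostly a change of endpoint. A minimal sketch, assuming a PyMySQL-style driver: the endpoint, database name, and CA bundle path below are placeholders, and the helper only assembles connection arguments.

```python
def proxy_connect_params(proxy_endpoint, user, password, database):
    """Connection arguments pointed at the RDS Proxy endpoint rather than
    the DB instance endpoint; TLS is a sensible default for the proxy."""
    return {
        "host": proxy_endpoint,   # the proxy's endpoint, not the instance's
        "user": user,
        "password": password,
        "database": database,
        "ssl": {"ca": "/opt/certs/rds-ca-bundle.pem"},  # placeholder CA path
    }

# conn = pymysql.connect(
#     **proxy_connect_params("my-proxy-endpoint", "app", password, "orders"))
```

Because the proxy preserves application connections across a failover, no endpoint switch or reconnection logic is needed in the application itself.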
71
A digital media company uses an Amazon RDS MySQL instance for its content management system. Recently, the company has observed that their RDS instance is nearing its storage capacity due to the constant influx of new data. The company wants to ensure there's always sufficient storage without any operational interruption or manual intervention.

**Which solution should the company use to address this situation with the LEAST operational overhead?**

1. Enable automatic storage scaling for the MySQL instance.
2. Migrate the database to a larger Amazon RDS MySQL instance.
3. Implement a lifecycle policy to delete older data from the MySQL instance.
4. Utilize Amazon ElastiCache to offload some read traffic and reduce database load.
**1.** Enable automatic storage scaling for the MySQL instance.

## Footnote

Amazon RDS's automatic storage scaling allows the database to automatically increase its storage capacity when the available storage is low. This feature helps to prevent out-of-storage situations and requires no operational overhead.

* While migrating to a larger instance would provide more storage, it does not address the issue of potential future storage shortages and requires significant operational effort for the migration.
* While deleting older data might help free up some storage, it might not be suitable if all data is essential for business operations. Also, this does not provide a long-term solution if data growth continues.
* While ElastiCache can help to improve the database's read efficiency, it doesn't directly address the disk space concern for the RDS instance.

**Reference:** [Working with storage for Amazon RDS DB instances](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIOPS.StorageTypes.html)

Save time with our [AWS cheat sheets](https://digitalcloud.training/amazon-rds/).
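Automatic storage scaling is enabled by setting a maximum storage threshold on the instance. A sketch of the boto3 `modify_db_instance` parameters: the 1,000 GiB ceiling is an illustrative value and must be set above the instance's current allocated storage.

```python
def storage_autoscaling_params(instance_id, max_storage_gib=1000):
    """Build modify_db_instance parameters that enable storage autoscaling
    by setting MaxAllocatedStorage above the current allocation."""
    return {
        "DBInstanceIdentifier": instance_id,
        "MaxAllocatedStorage": max_storage_gib,  # autoscaling ceiling in GiB
        "ApplyImmediately": True,
    }

# boto3.client("rds").modify_db_instance(**storage_autoscaling_params("cms-db"))
```

RDS then grows the allocated storage automatically as free space runs low, up to the configured ceiling, with no manual intervention.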