Your development team has created a gaming application that uses DynamoDB to store user statistics and provide fast game updates back to users. The team has begun testing the application but needs a consistent data set to perform tests with. The testing process alters the dataset, so the baseline data needs to be retrieved upon each new test. Which AWS service can meet this need by exporting data from DynamoDB and importing data into DynamoDB?
Elastic MapReduce (EMR)
You have configured an Auto Scaling Group of EC2 instances fronted by an Application Load Balancer and backed by an RDS database. You want to begin monitoring the EC2 instances using CloudWatch metrics. Which metric is not readily available out of the box?
Memory utilization
Memory utilization is not available as an out-of-the-box metric in CloudWatch.
You can, however, collect memory metrics by configuring a custom metric for CloudWatch.
Types of custom metrics that you can set up include:
- Memory utilization
- Disk swap utilization
- Disk space utilization
- Page file utilization
- Log collection
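As a sketch of how such a custom metric could be published (the `Custom/EC2` namespace and `MemoryUtilization` metric name here are illustrative assumptions, not required names):

```python
# Sketch: publishing memory utilization as a CloudWatch custom metric.
# The namespace and metric name are assumptions for illustration.

def build_memory_metric(used_percent, instance_id):
    """Build the parameters for CloudWatch's PutMetricData API call."""
    return {
        "Namespace": "Custom/EC2",  # any namespace not starting with "AWS/" works
        "MetricData": [
            {
                "MetricName": "MemoryUtilization",
                "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
                "Unit": "Percent",
                "Value": used_percent,
            }
        ],
    }

payload = build_memory_metric(71.5, "i-0123456789abcdef0")

# With boto3 and credentials configured, this payload could be published via:
# import boto3
# boto3.client("cloudwatch").put_metric_data(**payload)
```

In practice the CloudWatch agent handles this collection for you; the sketch just shows what a hand-rolled custom metric submission looks like.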
You are working as a Solutions Architect in a large healthcare organization. You have many Auto Scaling Groups that utilize launch configurations. Many of these launch configurations are similar yet have subtle differences. You’d like to use multiple versions of these launch configurations. An ideal approach would be to have a default launch configuration and then have additional versions that add additional features. Which option best meets these requirements?
Use launch templates instead
Your company is currently building out a second AWS region. Following best practices, they’ve been using CloudFormation to make the migration easier. They’ve run into a problem with the template though. Whenever the template is created in the new region, it’s still referencing the AMI in the old region. What steps can you take to automatically select the correct AMI when the template is deployed?
Create a mapping in the template. Define the unique AMI value per region.
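A minimal sketch of such a mapping in a CloudFormation template (the AMI IDs below are placeholders, not real images):

```yaml
Mappings:
  RegionMap:
    us-east-1:
      AMI: ami-0aaaaaaaaaaaaaaaa   # placeholder ID
    eu-west-1:
      AMI: ami-0bbbbbbbbbbbbbbbb   # placeholder ID

Resources:
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t3.micro
      # Fn::FindInMap resolves the AMI for whatever Region the stack runs in
      ImageId: !FindInMap [RegionMap, !Ref "AWS::Region", AMI]
```

Because `AWS::Region` is a pseudo parameter, the same template picks the correct AMI in each Region without any manual edits.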
Your company is using a hybrid configuration because there are some legacy applications which are not easily converted and migrated to AWS. This configuration brings a typical scenario: the legacy apps must keep the same private IP address and MAC address. You are converting one application to the Cloud and have configured an EC2 instance to house it. You are currently testing removing the ENI from the legacy instance and attaching it to the EC2 instance. You want to attempt a warm attach. What does this mean?
Attach the ENI to an instance when it is stopped.
Some best practices for configuring network interfaces:
- You can attach a network interface to an instance when it's running (hot attach), when it's stopped (warm attach), or when the instance is being launched (cold attach).
- You can detach secondary network interfaces when the instance is running or stopped. However, you can't detach the primary network interface.
- You can move a network interface from one instance to another, if the instances are in the same Availability Zone and VPC but in different subnets.
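The warm-attach flow can be sketched as follows; the helper function and its guard are an illustrative assumption, while the real work would be done by the EC2 `AttachNetworkInterface` API:

```python
# Sketch of a "warm attach": the ENI is attached while the target instance
# is stopped. With boto3 the real calls would be ec2.stop_instances followed
# by ec2.attach_network_interface(**params).

def warm_attach_params(instance_state, eni_id, instance_id, device_index=1):
    """Build AttachNetworkInterface parameters, enforcing the warm-attach
    precondition that the target instance is stopped."""
    if instance_state != "stopped":
        raise ValueError("warm attach requires the instance to be stopped")
    return {
        "NetworkInterfaceId": eni_id,
        "InstanceId": instance_id,
        "DeviceIndex": device_index,  # 0 is the primary ENI, so use >= 1
    }

params = warm_attach_params("stopped", "eni-0123456789abcdef0",
                            "i-0123456789abcdef0")
```

A hot attach would skip the state check (the instance stays running), and a cold attach would instead pass the ENI in the `NetworkInterfaces` parameter of `RunInstances`.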
A company has an Auto Scaling Group of EC2 instances hosting their retail sales application. Any significant downtime for this application can result in large losses of profit. Therefore the architecture also includes an Application Load Balancer and an RDS database in a Multi-AZ deployment. What will happen to preserve high availability if the primary database fails?
The CNAME is switched from the primary db instance to the secondary.
You work for an online retailer where any downtime at all can cause a significant loss of revenue. You have architected your application to be deployed on an Auto Scaling Group of EC2 instances behind a load balancer, and you have configured and deployed these resources using a CloudFormation template. The Auto Scaling Group is configured with default settings and a simple CPU utilization scaling policy. You have also set up multiple Availability Zones for high availability. The load balancer performs health checks against an HTML file generated by a script. When you begin load testing your application, you notice in CloudWatch that the load balancer is not sending traffic to one of your EC2 instances. What could be the problem?
The EC2 instance has failed the load balancer health check.
Your boss has tasked you with decoupling your existing web frontend from the backend. Both applications run on EC2 instances. After investigating the existing architecture, you find that (on average) the backend resources process about 5,000 requests per second and will need something that supports that extreme level of message throughput. It's also important that each request is processed only once. What can you do to decouple these resources?
Use SQS Standard. Include a unique ordering ID in each message, and have the backend application use this to deduplicate messages.
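Standard queues deliver at-least-once (a message can occasionally arrive more than once), which is why the backend must deduplicate. A minimal sketch of that deduplication, with an assumed `dedupe_id` field name:

```python
# Sketch: client-side deduplication for an SQS Standard queue, which
# guarantees at-least-once (not exactly-once) delivery. The "dedupe_id"
# message field name is an assumption for illustration.

processed_ids = set()  # in production this would be a shared store (e.g. DynamoDB)

def handle_message(message):
    """Process a message only if its dedupe ID hasn't been seen before."""
    dedupe_id = message["dedupe_id"]
    if dedupe_id in processed_ids:
        return False  # duplicate delivery; skip processing
    processed_ids.add(dedupe_id)
    # ... real processing of message["body"] happens here ...
    return True

# A redelivered message is processed only once:
first = handle_message({"dedupe_id": "order-42", "body": "charge card"})
second = handle_message({"dedupe_id": "order-42", "body": "charge card"})
```

FIFO queues offer exactly-once processing natively, but their throughput cap makes them unsuitable at this request rate, which is what pushes the design toward Standard queues plus application-level deduplication.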
You have just started work at a small startup in the Seattle area. Your first job is to help containerize your company’s microservices and move them to AWS. The team has selected ECS as their orchestration service of choice. You’ve discovered the code currently uses access keys and secret access keys in order to communicate with S3. How can you best handle this authentication for the newly containerized application?
Attach a role with the appropriate permissions to the task definition in ECS.
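The role is referenced through the task definition's `taskRoleArn` field; a trimmed sketch (account ID, role name, and image URI are placeholders):

```json
{
  "family": "game-stats-service",
  "taskRoleArn": "arn:aws:iam::123456789012:role/AppS3AccessRole",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest",
      "essential": true
    }
  ]
}
```

The AWS SDK inside the container then picks up temporary credentials for that role automatically, so the hard-coded access keys can be removed from the code entirely.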
A team of architects is designing a new AWS environment for a company which wants to migrate to the Cloud. The architects are considering the use of EC2 instances with instance store volumes. The architects realize that the data on instance store volumes is ephemeral. Which action will not cause the data on an instance store volume to be deleted?
Reboot
A software gaming company has produced an online racing game that uses CloudFront for fast delivery to worldwide users. The game also uses DynamoDB for storing in-game and historical user data. The DynamoDB table has a preconfigured read and write capacity. Users have been reporting slowdown issues, and an analysis has revealed the DynamoDB table has begun throttling during peak traffic times. What step can you take to improve game performance?
Adjust your auto scaling thresholds to scale more aggressively.
A professional baseball league has chosen to use a key-value and document database for storage, processing, and data delivery. Many of the data requirements involve high-speed processing of data such as a Doppler radar system which samples the position of the baseball 2000 times per second. Which AWS data storage can meet these requirements?
DynamoDB
Your team has provisioned Auto Scaling Groups in a single Region. At maximum capacity, the Auto Scaling Groups would total 40 EC2 instances between them. However, you notice that the Auto Scaling Groups will only scale out to a portion of that number of instances at any one time. What could be the problem?
There is a vCPU-based on-demand instance limit per region
You work for an advertising company that has a real-time bidding application. You are also using CloudFront on the front end to accommodate a worldwide user base. Your users begin complaining about response times and pauses in real-time bidding. What is the best service that can be used to reduce DynamoDB response times by an order of magnitude (milliseconds to microseconds)?
DAX
Your company uses IoT devices installed in businesses to provide those businesses with real-time data for analysis. You have decided to use Amazon Kinesis Data Firehose to stream the data to multiple backend storage services for analytics. Which service listed is not a viable destination for the real-time data?
Athena
You work for an oil and gas company as a lead in data analytics. The company is using IoT devices to better understand their assets in the field (for example, pumps, generators, valve assemblies, and so on). Your task is to monitor the IoT devices in real-time to provide valuable insight that can help you maintain the reliability, availability, and performance of your IoT devices. What tool can you use to process streaming data in real time with standard SQL without having to learn new programming languages or processing frameworks?
Kinesis Data Analytics
An Application Load Balancer is fronting an Auto Scaling Group of EC2 instances, and the instances are backed by an RDS database. The Auto Scaling Group has been configured to use the Default Termination Policy. You are testing the Auto Scaling Group and have triggered a scale-in. Which instance will be terminated first?
The instance launched from the oldest launch configuration
What do we know? The Default Termination Policy first selects the Availability Zone with the most instances, then terminates the instance in that AZ that was launched from the oldest launch template or launch configuration. If multiple instances still tie, it terminates the one closest to the next billing hour, and finally chooses at random.
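A simplified sketch of how the Default Termination Policy narrows down the instance to terminate, covering the first two steps only (busiest Availability Zone, then oldest launch configuration); the data shapes are assumptions for illustration:

```python
# Simplified sketch of the Default Termination Policy's first two steps:
# pick the AZ with the most instances, then the instance with the oldest
# launch configuration within that AZ.
from collections import Counter

def pick_instance_to_terminate(instances):
    """instances: list of dicts with 'id', 'az', and 'launch_config_age'
    (a higher age means an older launch configuration)."""
    az_counts = Counter(i["az"] for i in instances)
    busiest_az = max(az_counts, key=lambda az: az_counts[az])
    candidates = [i for i in instances if i["az"] == busiest_az]
    return max(candidates, key=lambda i: i["launch_config_age"])["id"]

fleet = [
    {"id": "i-1", "az": "us-east-1a", "launch_config_age": 3},
    {"id": "i-2", "az": "us-east-1a", "launch_config_age": 9},
    {"id": "i-3", "az": "us-east-1b", "launch_config_age": 5},
]
victim = pick_instance_to_terminate(fleet)  # i-2: busiest AZ, oldest config
```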
You have configured an Auto Scaling Group of EC2 instances and have begun testing its scaling, using a stress tool to drive up the CPU utilization metric and force scale-out actions. You then remove the stress to force a scale-in. But you notice that these actions only take place at five-minute intervals. What is happening?
The Auto Scaling Group is following the default cooldown procedure.
You have been given an assignment to configure Network ACLs in your VPC. Before configuring the NACLs, you need to understand how the NACLs are evaluated. How are NACL rules evaluated?
NACL rules are evaluated by rule number from lowest to highest and executed immediately when a matching rule is found.
The following are the parts of a network ACL rule:
- Rule number. Rules are evaluated starting with the lowest numbered rule.
- Type. The type of traffic, for example SSH or HTTP.
- Protocol. Any protocol that has a standard protocol number.
- Port range. The listening port or port range for the traffic.
- Source. (Inbound rules only) The source of the traffic (CIDR range).
- Destination. (Outbound rules only) The destination for the traffic (CIDR range).
- Allow/Deny. Whether to allow or deny the specified traffic.
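The first-match evaluation order can be sketched as follows; the rule shape is simplified (port-only matching) for illustration:

```python
# Sketch: NACL-style evaluation. Rules are checked in ascending rule-number
# order and the FIRST match wins; anything unmatched hits the implicit
# catch-all ('*') deny.

def evaluate_nacl(rules, port):
    """rules: list of (rule_number, (low_port, high_port), action) tuples.
    Returns the action of the first matching rule, or 'DENY' if none match."""
    for number, (low, high), action in sorted(rules, key=lambda r: r[0]):
        if low <= port <= high:
            return action  # first match wins; later rules are never consulted
    return "DENY"

rules = [
    (100, (80, 80), "ALLOW"),
    (200, (0, 65535), "DENY"),
]
evaluate_nacl(rules, 80)   # rule 100 matches first, so the traffic is allowed
evaluate_nacl(rules, 443)  # no match until rule 200, so the traffic is denied
```

This ordering is why a low-numbered allow rule can override a broad deny rule with a higher number.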
You have been evaluating the NACLs in your company. Currently, you are looking at the default network ACL. Which statement is true about NACLs?
The default configuration of the default NACL is Allow, and the default configuration of a custom NACL is Deny.
A consultant has been hired by a small company to configure an AWS environment. The consultant begins working with the VPC and launching EC2 instances within it. The initial instances will be placed in a public subnet. The consultant then begins to create security groups. What is true of security groups?
You can specify allow rules but not deny rules
The following are the basic characteristics of security groups for your VPC:
- You can specify allow rules, but not deny rules.
- You can specify separate rules for inbound and outbound traffic.
- Security groups are stateful: if you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound rules.
- All rules are evaluated before deciding whether to allow traffic; anything not explicitly allowed is denied.
- You can modify the rules of a security group at any time, and the changes are applied automatically.
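The allow-only, evaluate-all-rules behavior can be sketched as follows (port-only matching, simplified for illustration), which makes a useful contrast with the first-match NACL evaluation above:

```python
# Sketch: security-group-style evaluation. Only allow rules exist, all rules
# are consulted, and traffic is allowed if ANY rule matches. Anything not
# explicitly allowed is implicitly denied.

def evaluate_security_group(allow_rules, port):
    """allow_rules: list of (low_port, high_port) tuples."""
    return any(low <= port <= high for low, high in allow_rules)

inbound = [(22, 22), (80, 80)]
evaluate_security_group(inbound, 80)   # True: explicitly allowed
evaluate_security_group(inbound, 443)  # False: implicit deny
```

Note there is no rule ordering here at all, unlike a NACL, because no rule can contradict another: they can only add permissions.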
An international company has many clients around the world. These clients need to transfer gigabytes to terabytes of data quickly and on a regular basis to an S3 bucket. Which S3 feature will enable these long distance data transfers in a secure and fast manner?
Transfer Acceleration
You might want to use Transfer Acceleration on a bucket for various reasons, including the following:
- Your customers upload to a centralized bucket from all over the world.
- You transfer gigabytes to terabytes of data on a regular basis across continents.
- You can't use all of your available bandwidth over the internet when uploading to Amazon S3.
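Once enabled on the bucket, acceleration is used by pointing transfers at the distinct accelerate endpoint; a small sketch (the bucket name is a placeholder):

```python
# Sketch: S3 Transfer Acceleration routes transfers through the nearest
# CloudFront edge location via a dedicated "s3-accelerate" endpoint.

def accelerate_endpoint(bucket):
    """Build the Transfer Acceleration endpoint for a bucket. The bucket
    name must be DNS-compliant and must not contain dots."""
    return f"https://{bucket}.s3-accelerate.amazonaws.com"

accelerate_endpoint("global-uploads")
# With boto3, the same effect comes from passing
# Config(s3={"use_accelerate_endpoint": True}) when creating the S3 client.
```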
An organization of about 100 employees has performed the initial setup of users in IAM. All users except administrators have the same basic privileges. But now it has been determined that 50 employees will have extra restrictions on EC2. They will be unable to launch new instances or alter the state of existing instances. What will be the quickest way to implement these restrictions?
Create the appropriate policy. Create a new group for the restricted users. Place the restricted users in the new group and attach the policy to the group.
AWS supports six types of policies: identity-based policies, resource-based policies, permissions boundaries, Organizations service control policies (SCPs), access control lists (ACLs), and session policies.
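A sketch of the identity-based policy that could be attached to the restricted group; the action list shown uses real EC2 API action names, but the exact set would depend on which state changes the company wants to block:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": [
        "ec2:RunInstances",
        "ec2:StartInstances",
        "ec2:StopInstances",
        "ec2:RebootInstances",
        "ec2:TerminateInstances"
      ],
      "Resource": "*"
    }
  ]
}
```

An explicit Deny always overrides any Allow the users receive elsewhere, so attaching this one policy to the group restricts all 50 users at once.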
A small company has nearly 200 users who already have AWS accounts in the company AWS environment. A new S3 bucket has been created which will allow roughly a third of all users access to sensitive information in the bucket. What is the most time efficient way to get these users access to the bucket?
Create a new policy which will grant permissions to the bucket. Create a group and attach the policy to that group. Add the users to this group.
An IAM group is a collection of IAM users.
Groups let you specify permissions for multiple users, which can make it easier to manage the permissions for those users.
Note that a group is not truly an “identity” in IAM because it cannot be identified as a Principal in a permission policy.
It is simply a way to attach policies to multiple users at one time.
Following are some important characteristics of groups:
- A group can contain many users, and a user can belong to multiple groups.
- Groups can't be nested; they can contain only users, not other groups.
- There is no default group that automatically includes all users in the AWS account.
- There is a limit to the number of groups you can have, and to the number of groups a user can be in.