A company runs an application in an on-premises data center that collects environmental data from production machinery. The data consists of JSON files stored on network-attached storage (NAS), and around 5 TB of data is collected each day. The company must upload this data to Amazon S3, where it can be processed by an analytics application. The data must be transferred securely.
Which solution offers the MOST reliable and time-efficient data transfer?
3. AWS DataSync over AWS Direct Connect.
The most reliable and time-efficient solution that keeps the data secure is to use AWS DataSync and synchronize the data from the NAS device directly to Amazon S3. This should take place over an AWS Direct Connect connection to ensure reliability, speed, and security.
AWS DataSync can copy data between Network File System (NFS) shares, Server Message Block (SMB) shares, self-managed object storage, AWS Snowcone, Amazon Simple Storage Service (Amazon S3) buckets, Amazon Elastic File System (Amazon EFS) file systems, and Amazon FSx for Windows File Server file systems.
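For illustration, a minimal boto3 sketch of the NFS-to-S3 setup might look like the following; the hostname, ARNs, and bucket name are placeholders:

```python
import boto3

datasync = boto3.client("datasync")

# Source: the NAS device, exposed as an NFS share through a DataSync agent.
source = datasync.create_location_nfs(
    ServerHostname="nas.example.internal",
    Subdirectory="/exports/environmental-data",
    OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-EXAMPLE"]},
)

# Destination: the S3 bucket read by the analytics application.
destination = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::example-analytics-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/DataSyncS3Role"},
)

# A task ties the two locations together; executions can then run on demand or on a schedule.
task = datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=destination["LocationArn"],
    Name="nas-to-s3-daily",
)
datasync.start_task_execution(TaskArn=task["TaskArn"])
```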
Reference:
AWS DataSync
A surveying team is using a fleet of drones to collect images of construction sites. The surveying team’s laptops lack the built-in storage and compute capacity to transfer the images and process the data. While the team has Amazon EC2 instances for processing and Amazon S3 buckets for storage, network connectivity is intermittent and unreliable. The images need to be processed to evaluate the progress of each construction site.
What should a solutions architect recommend?
1. Process and store the images using AWS Snowball Edge devices.
A physical AWS Snowball Edge device provides far more built-in compute and storage than the team’s laptops. This removes the need to rely on a stable network connection to process the images and solves the team’s problem efficiently.
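If the team wanted to order a device programmatically, a boto3 sketch might look like the following; the device type, ARNs, and address ID are assumptions:

```python
import boto3

snowball = boto3.client("snowball")

# Order a Snowball Edge device; all ARNs and the address ID are placeholders.
job = snowball.create_job(
    JobType="IMPORT",
    SnowballType="EDGE_C",  # assumption: Compute Optimized for on-site processing
    Resources={"S3Resources": [{"BucketArn": "arn:aws:s3:::example-survey-images"}]},
    RoleARN="arn:aws:iam::111122223333:role/SnowballImportRole",
    AddressId="ADID-EXAMPLE",  # shipping address previously registered with AWS
    Description="Drone imagery collection and edge processing",
)
print(job["JobId"])
```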
Reference:
What is Snowball Edge?
A healthcare organization operates multiple applications on virtual machines (VMs) in its on-premises data center. Due to increasing demand for its services, the data center can no longer scale quickly enough to meet business needs. The organization has decided to migrate its non-critical workloads to AWS using a lift-and-shift strategy to expedite the process.
Which combination of steps will meet these requirements?
(Select THREE.)
1. Use AWS Application Migration Service to replicate the VMs to AWS. Install the AWS Replication Agent on each VM.
3. Complete the initial data replication from the VMs to AWS. Launch test instances to perform acceptance tests for the workloads.
5. Stop all operations on the VMs. Perform a cutover by launching the migrated instances in AWS.
AWS Application Migration Service enables lift-and-shift migrations by replicating VMs from the on-premises data center to AWS. Installing the Replication Agent is the first step to initiate data replication.
Testing the replicated workloads ensures the migrated VMs function as expected on AWS before the final cutover.
Stopping operations ensures a clean cutover process, allowing the organization to launch the migrated instances in AWS with minimal disruption to business operations.
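For illustration, the test and cutover steps could be driven with the boto3 mgn client; this is a rough sketch, and the source server ID is a placeholder:

```python
import boto3

mgn = boto3.client("mgn")

# After the AWS Replication Agent is installed and initial replication completes,
# launch test instances for acceptance testing.
mgn.start_test(sourceServerIDs=["s-1234567890abcdef0"])

# Once testing passes and operations on the source VMs are stopped,
# launch the cutover instances in AWS.
mgn.start_cutover(sourceServerIDs=["s-1234567890abcdef0"])
```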
A logistics company needs to replicate ongoing data changes from an on-premises Microsoft SQL Server database to Amazon RDS for SQL Server. The volume of data to replicate varies throughout the day due to periodic spikes in activity. The company plans to use AWS Database Migration Service (AWS DMS) for this task. The solution must dynamically allocate capacity based on workload demand while keeping operational overhead low.
Which solution will meet these requirements?
1. Configure AWS DMS Serverless to create a replication task that scales its capacity automatically based on workload demand.
AWS DMS Serverless dynamically adjusts replication capacity in response to data volume changes, providing cost efficiency and reducing manual management.
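A minimal boto3 sketch of such a replication configuration, assuming placeholder endpoint ARNs and capacity bounds, might look like this:

```python
import boto3, json

dms = boto3.client("dms")

# Select every table in every schema; adjust the selection rules as needed.
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-all",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]
}

config = dms.create_replication_config(
    ReplicationConfigIdentifier="sqlserver-cdc-serverless",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SOURCE-EXAMPLE",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TARGET-EXAMPLE",
    ReplicationType="cdc",  # replicate ongoing changes only
    ComputeConfig={
        "MinCapacityUnits": 1,   # DMS capacity units (DCUs); capacity scales within these bounds
        "MaxCapacityUnits": 16,
    },
    TableMappings=json.dumps(table_mappings),
)
```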
Reference:
What is AWS Database Migration Service?
A company manages several applications that run in different AWS accounts within an AWS Organizations setup. The company has outsourced the management of certain applications to external contractors. The contractors require secure access to the AWS Management Console and operating system access to Amazon Linux-based Amazon EC2 instances in private subnets for troubleshooting. The company must ensure all activities are logged and minimize the risk of unauthorized access.
Which solution will meet these requirements MOST securely?
1. Deploy AWS Systems Manager Agent (SSM Agent) to all instances. Assign an instance profile to the instances with the required Systems Manager policies. Grant contractors access to the AWS Management Console by configuring permission sets in AWS IAM Identity Center. Use Systems Manager Session Manager for secure instance access without requiring open network ports.
Systems Manager Session Manager provides secure, auditable access to instances without requiring SSH keys or open ports. IAM Identity Center enables centralized management of console access for contractors.
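For illustration, the instance profile side of this setup could be scripted with boto3 as follows; the role and profile names are hypothetical:

```python
import boto3, json

iam = boto3.client("iam")

# Trust policy so EC2 instances can assume the role.
assume_role = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="SSMManagedInstanceRole",
                AssumeRolePolicyDocument=json.dumps(assume_role))

# AWS managed policy that grants the SSM Agent the permissions Session Manager needs.
iam.attach_role_policy(
    RoleName="SSMManagedInstanceRole",
    PolicyArn="arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
)

# Wrap the role in an instance profile to attach to the EC2 instances.
iam.create_instance_profile(InstanceProfileName="SSMManagedInstanceProfile")
iam.add_role_to_instance_profile(
    InstanceProfileName="SSMManagedInstanceProfile",
    RoleName="SSMManagedInstanceRole",
)
```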
A large MongoDB database running on-premises must be migrated to Amazon DynamoDB within the next few weeks. The database is too large to migrate over the company’s limited internet bandwidth, so an alternative solution must be used.
What should a Solutions Architect recommend?
2. Use the Schema Conversion Tool (SCT) to extract and load the data to an AWS Snowball Edge device. Use the AWS Database Migration Service (DMS) to migrate the data to Amazon DynamoDB.
Larger data migrations with AWS DMS can include many terabytes of information. This process can be cumbersome due to network bandwidth limits or just the sheer amount of data. AWS DMS can use Snowball Edge and Amazon S3 to migrate large databases more quickly than by other methods.
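The SCT/Snowball leg is driven by the SCT application itself; on the AWS side, the final DMS step targets DynamoDB through a target endpoint. A minimal boto3 sketch, with a placeholder role ARN:

```python
import boto3

dms = boto3.client("dms")

# Target endpoint for Amazon DynamoDB; DMS writes migrated data through this role.
endpoint = dms.create_endpoint(
    EndpointIdentifier="dynamodb-target",
    EndpointType="target",
    EngineName="dynamodb",
    DynamoDbSettings={
        "ServiceAccessRoleArn": "arn:aws:iam::111122223333:role/DmsDynamoDbRole"
    },
)
```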
An organization has a large amount of data on Windows (SMB) file shares in its on-premises data center. The organization would like to move the data into Amazon S3 and automate the migration over its AWS Direct Connect link.
Which AWS service can assist them?
4. AWS DataSync
AWS DataSync can be used to move large amounts of data online between on-premises storage and Amazon S3 or Amazon Elastic File System (Amazon EFS). DataSync eliminates or automatically handles many of the tasks involved in data transfers, including scripting copy jobs, scheduling and monitoring transfers, validating data, and optimizing network utilization. The source datastore can be Server Message Block (SMB) file servers.
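For example, the SMB source location might be defined with boto3 as follows; the hostname, credentials, and agent ARN are placeholders:

```python
import boto3

datasync = boto3.client("datasync")

# SMB source location for the on-premises Windows file share.
smb_location = datasync.create_location_smb(
    ServerHostname="fileserver.example.internal",
    Subdirectory="/shared/data",
    User="datasync-svc",
    Password="EXAMPLE-PASSWORD",  # in practice, retrieve from a secrets store
    AgentArns=["arn:aws:datasync:us-east-1:111122223333:agent/agent-EXAMPLE"],
)
```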
Reference:
AWS DataSync FAQs
A company has acquired another business and needs to migrate its 50 TB of data into AWS within 1 month. It also requires a secure, reliable, and private connection to the AWS Cloud.
How are these requirements best accomplished?
2. Migrate data using AWS Snowball. Provision an AWS VPN initially and order a Direct Connect link.
AWS Direct Connect provides a secure, reliable, and private connection. However, lead times are often longer than 1 month, so it cannot be used to migrate the data within the required timeframe. Therefore, it is better to use AWS Snowball to move the data and order a Direct Connect connection to satisfy the other requirement later on. In the meantime, the organization can use an AWS VPN for secure, private access to its VPC.
A Solutions Architect is designing a migration strategy for a company moving to the AWS Cloud. The company uses a shared Microsoft filesystem that relies on Distributed File System Namespaces (DFSN).
What will be the MOST suitable migration strategy for the filesystem?
4. Use AWS DataSync to migrate to Amazon FSx for Windows File Server.
The destination filesystem should be Amazon FSx for Windows File Server. This supports DFSN and is the most suitable storage solution for Microsoft filesystems. AWS DataSync supports migrating to Amazon FSx for Windows File Server and automates the process.
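A sketch of the FSx destination location using boto3, with placeholder ARNs and credentials:

```python
import boto3

datasync = boto3.client("datasync")

# Destination location on Amazon FSx for Windows File Server.
fsx_location = datasync.create_location_fsx_windows(
    FsxFilesystemArn="arn:aws:fsx:us-east-1:111122223333:file-system/fs-EXAMPLE",
    SecurityGroupArns=["arn:aws:ec2:us-east-1:111122223333:security-group/sg-EXAMPLE"],
    User="Admin",
    Password="EXAMPLE-PASSWORD",  # in practice, retrieve from a secrets store
)
```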
A company uses 45 TB of data for reporting and wants to move this data from on premises into the AWS Cloud. A custom application in the company’s data center runs a weekly data transformation job. The company plans to pause the application until the data transfer is complete and needs to begin the transfer process as soon as possible.
The data center bandwidth is saturated, so a solutions architect has been tasked with transferring the data and configuring the transformation job to continue running in the AWS Cloud.
Which solution will meet these requirements with the LEAST operational overhead?
3. Order an AWS Snowball Edge Storage Optimized device. Copy the data to the device, and create a custom transformation job by using AWS Glue.
As the network is saturated, the solutions architect will have to use a physical solution, that is, a member of the AWS Snow Family, to meet the timeframe. Because the data transformation job needs to run in the cloud, AWS Glue, a managed data transformation service, suits this requirement as well.
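For illustration, a Glue job replacing the on-premises transformation might be created like this; the job name, role, and script location are placeholders:

```python
import boto3

glue = boto3.client("glue")

# A Glue ETL job to replace the on-premises transformation job.
glue.create_job(
    Name="weekly-data-transformation",
    Role="arn:aws:iam::111122223333:role/GlueJobRole",
    Command={
        "Name": "glueetl",
        "ScriptLocation": "s3://example-scripts/transform.py",
        "PythonVersion": "3",
    },
    GlueVersion="4.0",
)
glue.start_job_run(JobName="weekly-data-transformation")
```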
Reference:
AWS Glue
A company has 500 TB of data in an on-premises file share that needs to be moved to Amazon S3 Glacier. The migration must not saturate the company’s low-bandwidth internet connection, and it must be completed within a few weeks.
What is the MOST cost-effective solution?
4. Order 7 AWS Snowball appliances and select an Amazon S3 bucket as the destination. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier.
As the company’s internet link is low-bandwidth, uploading directly to Amazon S3 (ready for transition to S3 Glacier) would saturate the link. The best alternative is to use AWS Snowball appliances. A Snowball appliance can hold up to 80 TB of data, and 500 TB / 80 TB = 6.25, so 7 devices are required to migrate 500 TB of data.
Snowball moves data into AWS using a hardware device and the data is then copied into an Amazon S3 bucket of your choice. From there, lifecycle policies can transition the S3 objects to Amazon S3 Glacier.
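A minimal boto3 sketch of such a lifecycle rule, with a placeholder bucket name:

```python
import boto3

s3 = boto3.client("s3")

# Transition objects to S3 Glacier immediately after creation (0 days).
s3.put_bucket_lifecycle_configuration(
    Bucket="example-migration-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "transition-to-glacier",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to the whole bucket
            "Transitions": [{"Days": 0, "StorageClass": "GLACIER"}],
        }]
    },
)
```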
Reference:
What is Snowball Edge?
A healthcare company is migrating its patient record system to AWS. The company receives thousands of encrypted patient data files every day through FTP. An on-premises server processes the data files twice a day. However, the processing job takes hours to finish.
The company wants the AWS solution to process incoming data files as soon as they arrive, with minimal changes to the FTP clients that send the files. The solution must delete the incoming data files after the files have been processed successfully. Processing each file takes around 10 minutes.
Which solution will meet these requirements in the MOST operationally efficient way?
4. Use AWS Transfer Family to create an SFTP server to store incoming files in Amazon S3 Standard. Create an AWS Lambda function to process the files and to delete the files after they are processed. Use an S3 event notification to invoke the Lambda function when the files arrive.
AWS Transfer Family provides fully managed support for file transfers directly into and out of Amazon S3 using SFTP. Storing incoming files in S3 Standard offers high durability, availability, and performance object storage for frequently accessed data.
AWS Lambda can respond immediately to S3 event notifications, which allows processing of files as soon as they arrive, and the function can delete each file after successful processing. A roughly 10-minute job also fits within Lambda's 15-minute maximum execution time. This meets all requirements and is operationally efficient, as it requires minimal management and has low costs.
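A skeleton of the Lambda function might look like the following; process() is a placeholder for the company's actual decryption and processing logic:

```python
import boto3
import urllib.parse

s3 = boto3.client("s3")

def process(data: bytes) -> None:
    # Placeholder for the file processing step.
    ...

def lambda_handler(event, context):
    """Invoked by an S3 event notification for each newly uploaded file."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        process(body)

        # Delete the file only after successful processing.
        s3.delete_object(Bucket=bucket, Key=key)
```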
Reference:
AWS Transfer Family