Case Study - A company is building a web-based AI application by using Amazon SageMaker. The application will provide the following capabilities and features: ML experimentation, training, a central model registry, model deployment, and model monitoring. The application must ensure secure and isolated use of training data during the ML lifecycle. The training data is stored in Amazon S3. The company needs to use the central model registry to manage different versions of models in the application. Which action will meet this requirement with the LEAST operational overhead?
A.. Create a separate Amazon Elastic Container Registry (Amazon ECR) repository for each model.
B.. Use Amazon Elastic Container Registry (Amazon ECR) and unique tags for each model version.
C.. Use the SageMaker Model Registry and model groups to catalog the models.
D.. Use the SageMaker Model Registry and unique tags for each model version.
C
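The correct answer (C) maps to two SageMaker API calls: create a model package group once, then register each new version into it. A minimal sketch, with hypothetical names and ARNs; the payloads are built as plain dicts so nothing here touches AWS — in practice you would pass them to boto3's `create_model_package_group` and `create_model_package`.

```python
# Sketch (not run against AWS): how model groups in the SageMaker Model
# Registry catalog model versions. All names and URIs are hypothetical.

def model_package_group_request(group_name: str, description: str) -> dict:
    """Build the payload for sagemaker.create_model_package_group."""
    return {
        "ModelPackageGroupName": group_name,
        "ModelPackageGroupDescription": description,
    }

def register_model_version_request(group_name: str, image_uri: str,
                                   model_data_url: str) -> dict:
    """Build the payload for sagemaker.create_model_package.
    Each call with the same group name adds a new version to that group."""
    return {
        "ModelPackageGroupName": group_name,
        "InferenceSpecification": {
            "Containers": [{"Image": image_uri, "ModelDataUrl": model_data_url}],
            "SupportedContentTypes": ["text/csv"],
            "SupportedResponseMIMETypes": ["text/csv"],
        },
        "ModelApprovalStatus": "PendingManualApproval",
    }

group = model_package_group_request("fraud-models", "Versions of the fraud model")
version = register_model_version_request(
    "fraud-models",
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/fraud:latest",
    "s3://example-bucket/model.tar.gz",
)
```

Because versioning is keyed off the group name, no per-model repositories or manual tagging schemes are needed, which is what makes C the lowest-overhead option.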
Case Study - A company is building a web-based AI application by using Amazon SageMaker. The application will provide the following capabilities and features: ML experimentation, training, a central model registry, model deployment, and model monitoring. The application must ensure secure and isolated use of training data during the ML lifecycle. The training data is stored in Amazon S3. The company is experimenting with consecutive training jobs. How can the company MINIMIZE infrastructure startup times for these jobs?
A.. Use Managed Spot Training.
B.. Use SageMaker managed warm pools.
C.. Use SageMaker Training Compiler.
D.. Use the SageMaker distributed data parallelism (SMDDP) library.
B
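Warm pools (answer B) come down to a single parameter on the training job's `ResourceConfig`. A sketch, built as a plain dict rather than a live boto3 call; the instance type and sizes are illustrative only.

```python
# Sketch: the one setting that enables SageMaker managed warm pools.
# A non-zero KeepAlivePeriodInSeconds keeps the provisioned instances
# alive after the job ends, so a matching consecutive training job can
# reuse them and skip infrastructure startup.

def resource_config(instance_type: str, keep_alive_seconds: int) -> dict:
    """ResourceConfig fragment for sagemaker.create_training_job."""
    return {
        "InstanceType": instance_type,
        "InstanceCount": 1,
        "VolumeSizeInGB": 50,
        "KeepAlivePeriodInSeconds": keep_alive_seconds,
    }

cfg = resource_config("ml.m5.xlarge", 1800)  # keep instances warm for 30 min
```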
Case Study - A company is building a web-based AI application by using Amazon SageMaker. The application will provide the following capabilities and features: ML experimentation, training, a central model registry, model deployment, and model monitoring. The application must ensure secure and isolated use of training data during the ML lifecycle. The training data is stored in Amazon S3. The company must implement a manual approval-based workflow to ensure that only approved models can be deployed to production endpoints. Which solution will meet this requirement?
A.. Use SageMaker Experiments to facilitate the approval process during model registration.
B.. Use SageMaker ML Lineage Tracking on the central model registry. Create tracking entities for the approval process.
C.. Use SageMaker Model Monitor to evaluate the performance of the model and to manage the approval.
D.. Use SageMaker Pipelines. When a model version is registered, use the AWS SDK to change the approval status to “Approved.”
D
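The SDK call behind answer D is `update_model_package`, which flips a registered model version's approval status. A sketch that builds the request payload only (the ARN and description are hypothetical); a real workflow would pass this dict to `boto3.client("sagemaker").update_model_package(**req)` after a human approves.

```python
def approval_request(model_package_arn: str, approved: bool) -> dict:
    """Payload for sagemaker.update_model_package, the AWS SDK call that
    moves a model version from PendingManualApproval to Approved/Rejected."""
    return {
        "ModelPackageArn": model_package_arn,
        "ModelApprovalStatus": "Approved" if approved else "Rejected",
        "ApprovalDescription": "Reviewed by the ML engineering team",
    }

req = approval_request(
    "arn:aws:sagemaker:us-east-1:123456789012:model-package/fraud-models/3",
    approved=True,
)
```

A deployment pipeline can then gate on `ModelApprovalStatus == "Approved"` so only approved versions ever reach a production endpoint.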
Case Study - A company is building a web-based AI application by using Amazon SageMaker. The application will provide the following capabilities and features: ML experimentation, training, a central model registry, model deployment, and model monitoring. The application must ensure secure and isolated use of training data during the ML lifecycle. The training data is stored in Amazon S3. The company needs to run an on-demand workflow to monitor bias drift for models that are deployed to real-time endpoints from the application. Which action will meet this requirement?
A.. Configure the application to invoke an AWS Lambda function that runs a SageMaker Clarify job.
B.. Invoke an AWS Lambda function to pull the sagemaker-model-monitor-analyzer built-in SageMaker image.
C.. Use AWS Glue Data Quality to monitor bias.
D.. Use SageMaker notebooks to compare the bias.
A
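Answer A in outline: the application invokes a Lambda function on demand, and the function starts a SageMaker Clarify processing job against the endpoint's captured data. The handler below is a skeleton with hypothetical event fields and names; the actual `create_processing_job` call (with the Clarify container image) is left as a comment rather than invented in detail.

```python
import json

def build_clarify_job_name(endpoint_name: str, timestamp: str) -> str:
    """Derive a unique, length-limited processing-job name for this run."""
    return f"bias-drift-{endpoint_name}-{timestamp}"[:63]

def lambda_handler(event: dict, context: object) -> dict:
    """On-demand trigger: one invocation -> one Clarify bias-drift check."""
    endpoint = event.get("endpoint_name", "fraud-endpoint")  # hypothetical
    job_name = build_clarify_job_name(endpoint, event.get("ts", "0"))
    # Here a real handler would call
    #   boto3.client("sagemaker").create_processing_job(...)
    # with the SageMaker Clarify analyzer image and the bias config.
    return {"statusCode": 200, "body": json.dumps({"job": job_name})}

resp = lambda_handler({"endpoint_name": "fraud-endpoint", "ts": "20240101"}, None)
```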
Case study - An ML engineer is developing a fraud detection model on AWS. The training dataset includes transaction logs, customer profiles, and tables from an on-premises MySQL database. The transaction logs and customer profiles are stored in Amazon S3. The dataset has a class imbalance that affects the learning of the model’s algorithm. Additionally, many of the features have interdependencies. The algorithm is not capturing all the desired underlying patterns in the data. Which AWS service or feature can aggregate the data from the various data sources?
A.. Amazon EMR Spark jobs
B.. Amazon Kinesis Data Streams
C.. Amazon DynamoDB
D.. AWS Lake Formation
D
Case study - An ML engineer is developing a fraud detection model on AWS. The training dataset includes transaction logs, customer profiles, and tables from an on-premises MySQL database. The transaction logs and customer profiles are stored in Amazon S3. The dataset has a class imbalance that affects the learning of the model’s algorithm. Additionally, many of the features have interdependencies. The algorithm is not capturing all the desired underlying patterns in the data. After the data is aggregated, the ML engineer must implement a solution to automatically detect anomalies in the data and to visualize the result. Which solution will meet these requirements?
A.. Use Amazon Athena to automatically detect the anomalies and to visualize the result.
B.. Use Amazon Redshift Spectrum to automatically detect the anomalies. Use Amazon QuickSight to visualize the result.
C.. Use Amazon SageMaker Data Wrangler to automatically detect the anomalies and to visualize the result.
D.. Use AWS Batch to automatically detect the anomalies. Use Amazon QuickSight to visualize the result.
C
Case study - An ML engineer is developing a fraud detection model on AWS. The training dataset includes transaction logs, customer profiles, and tables from an on-premises MySQL database. The transaction logs and customer profiles are stored in Amazon S3. The dataset has a class imbalance that affects the learning of the model’s algorithm. Additionally, many of the features have interdependencies. The algorithm is not capturing all the desired underlying patterns in the data. The training dataset includes categorical data and numerical data. The ML engineer must prepare the training dataset to maximize the accuracy of the model. Which action will meet this requirement with the LEAST operational overhead?
A.. Use AWS Glue to transform the categorical data into numerical data.
B.. Use AWS Glue to transform the numerical data into categorical data.
C.. Use Amazon SageMaker Data Wrangler to transform the categorical data into numerical data.
D.. Use Amazon SageMaker Data Wrangler to transform the numerical data into categorical data.
C
Case study - An ML engineer is developing a fraud detection model on AWS. The training dataset includes transaction logs, customer profiles, and tables from an on-premises MySQL database. The transaction logs and customer profiles are stored in Amazon S3. The dataset has a class imbalance that affects the learning of the model’s algorithm. Additionally, many of the features have interdependencies. The algorithm is not capturing all the desired underlying patterns in the data. Before the ML engineer trains the model, the ML engineer must resolve the issue of the imbalanced data. Which solution will meet this requirement with the LEAST operational effort?
A.. Use Amazon Athena to identify patterns that contribute to the imbalance. Adjust the dataset accordingly.
B.. Use Amazon SageMaker Studio Classic built-in algorithms to process the imbalanced dataset.
C.. Use AWS Glue DataBrew built-in features to oversample the minority class.
D.. Use the Amazon SageMaker Data Wrangler balance data operation to oversample the minority class.
D
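Random oversampling, which Data Wrangler's balance data operation offers among its options, is simple to show standalone. A toy sketch (made-up rows, not Data Wrangler's actual implementation): minority-class rows are duplicated at random until every class matches the majority count.

```python
import random

def oversample_minority(rows: list[tuple], label_index: int,
                        seed: int = 0) -> list[tuple]:
    """Naive random oversampling: duplicate rows of under-represented
    classes until all classes reach the majority-class count."""
    by_label: dict = {}
    for row in rows:
        by_label.setdefault(row[label_index], []).append(row)
    majority = max(len(group) for group in by_label.values())
    rng = random.Random(seed)  # seeded for reproducibility
    balanced = []
    for group in by_label.values():
        balanced.extend(group)
        # Draw (with replacement) enough duplicates to close the gap.
        balanced.extend(rng.choices(group, k=majority - len(group)))
    return balanced

rows = [("t1", "legit"), ("t2", "legit"), ("t3", "legit"),
        ("t4", "legit"), ("t5", "fraud")]
balanced = oversample_minority(rows, label_index=1)  # 4 legit + 4 fraud
```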
Case study - An ML engineer is developing a fraud detection model on AWS. The training dataset includes transaction logs, customer profiles, and tables from an on-premises MySQL database. The transaction logs and customer profiles are stored in Amazon S3. The dataset has a class imbalance that affects the learning of the model’s algorithm. Additionally, many of the features have interdependencies. The algorithm is not capturing all the desired underlying patterns in the data. The ML engineer needs to use an Amazon SageMaker built-in algorithm to train the model. Which algorithm should the ML engineer use to meet this requirement?
A.. LightGBM
B.. Linear learner
C.. K-means clustering
D.. Neural Topic Model (NTM)
A
A company has deployed an XGBoost prediction model in production to predict if a customer is likely to cancel a subscription. The company uses Amazon SageMaker Model Monitor to detect deviations in the F1 score. During a baseline analysis of model quality, the company recorded a threshold for the F1 score. After several months of no change, the model’s F1 score decreases significantly. What could be the reason for the reduced F1 score?
A.. Concept drift occurred in the underlying customer data that was used for predictions.
B.. The model was not sufficiently complex to capture all the patterns in the original baseline data.
C.. The original baseline data had a data quality issue of missing values.
D.. Incorrect ground truth labels were provided to Model Monitor during the calculation of the baseline.
A
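Concept drift means the relationship between the features and the cancellation label changed after the baseline, so an unchanged model starts missing the new pattern. A toy F1 computation (labels are made up) showing how the score Model Monitor tracks would fall between a baseline window and a drifted window:

```python
def f1_score(y_true: list[int], y_pred: list[int]) -> float:
    """F1 = harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Baseline window: the model's learned pattern still holds.
baseline_f1 = f1_score([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 0])  # 0.8
# Drifted window: customers now cancel for reasons the model never saw.
drifted_f1 = f1_score([1, 1, 1, 1, 0, 0], [1, 0, 0, 0, 0, 1])
```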
A company has a team of data scientists who use Amazon SageMaker notebook instances to test ML models. When the data scientists need new permissions, the company attaches the permissions to each individual role that was created during the creation of the SageMaker notebook instance. The company needs to centralize management of the team’s permissions. Which solution will meet this requirement?
A.. Create a single IAM role that has the necessary permissions. Attach the role to each notebook instance that the team uses.
B.. Create a single IAM group. Add the data scientists to the group. Associate the group with each notebook instance that the team uses.
C.. Create a single IAM user. Attach the AdministratorAccess AWS managed IAM policy to the user. Configure each notebook instance to use the IAM user.
D.. Create a single IAM group. Add the data scientists to the group. Create an IAM role. Attach the AdministratorAccess AWS managed IAM policy to the role. Associate the role with the group. Associate the group with each notebook instance that the team uses.
A
An ML engineer needs to use an ML model to predict the price of apartments in a specific location. Which metric should the ML engineer use to evaluate the model’s performance?
A.. Accuracy
B.. Area Under the ROC Curve (AUC)
C.. F1 score
D.. Mean absolute error (MAE)
D
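MAE fits because price prediction is regression: it averages the absolute errors in the target's own units (dollars), while accuracy, AUC, and F1 only apply to classification. A minimal computation with made-up apartment prices:

```python
def mean_absolute_error(y_true: list[float], y_pred: list[float]) -> float:
    """Average of |actual - predicted|, in the same units as the target."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

actual = [250_000, 180_000, 320_000]
predicted = [240_000, 200_000, 310_000]
mae = mean_absolute_error(actual, predicted)
# errors: 10_000 + 20_000 + 10_000 = 40_000; MAE = 40_000 / 3
```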
An ML engineer has trained a neural network by using stochastic gradient descent (SGD). The neural network performs poorly on the test set. The values for training loss and validation loss remain high and show an oscillating pattern. The values decrease for a few epochs and then increase for a few epochs before repeating the same cycle. What should the ML engineer do to improve the training process?
A.. Introduce early stopping.
B.. Increase the size of the test set.
C.. Increase the learning rate.
D.. Decrease the learning rate.
D
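The oscillating, never-settling loss is the classic symptom of a learning rate that is too high: each SGD step overshoots the minimum and lands on the other side. A toy illustration on f(x) = x² (gradient 2x) rather than the question's network, just to show the mechanism:

```python
def gradient_descent_path(lr: float, steps: int = 20,
                          x0: float = 1.0) -> list[float]:
    """Iterates of gradient descent on f(x) = x^2, whose gradient is 2x."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] - lr * 2 * xs[-1])
    return xs

# lr too high: x flips sign each step and grows, so the loss oscillates up.
oscillating = gradient_descent_path(lr=1.1)
# lr decreased: x shrinks monotonically toward the minimum at 0.
converging = gradient_descent_path(lr=0.1)
```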
An ML engineer needs to process thousands of existing CSV objects and new CSV objects that are uploaded. The CSV objects are stored in a central Amazon S3 bucket and have the same number of columns. One of the columns is a transaction date. The ML engineer must query the data based on the transaction date. Which solution will meet these requirements with the LEAST operational overhead?
A.. Use an Amazon Athena CREATE TABLE AS SELECT (CTAS) statement to create a table based on the transaction date from data in the central S3 bucket. Query the objects from the table.
B.. Create a new S3 bucket for processed data. Set up S3 replication from the central S3 bucket to the new S3 bucket. Use S3 Object Lambda to query the objects based on transaction date.
C.. Create a new S3 bucket for processed data. Use AWS Glue for Apache Spark to create a job to query the CSV objects based on transaction date. Configure the job to store the results in the new S3 bucket. Query the objects from the new S3 bucket.
D.. Create a new S3 bucket for processed data. Use Amazon Data Firehose to transfer the data from the central S3 bucket to the new S3 bucket. Configure Firehose to run an AWS Lambda function to query the data based on transaction date.
A
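Answer A works because a single CTAS statement can write a partitioned, columnar copy of the CSV data, letting later queries prune by date. A sketch of such a statement (table and bucket names are hypothetical, and note that Athena expects partition columns last in the SELECT list), held in a Python string as it might be passed to the Athena API:

```python
# Hypothetical names throughout; transactions_raw would be the table
# defined over the central S3 bucket of CSV objects.
ctas = """
CREATE TABLE transactions_partitioned
WITH (
    format = 'PARQUET',
    external_location = 's3://example-processed-bucket/transactions/',
    partitioned_by = ARRAY['transaction_date']
) AS
SELECT amount, customer_id, transaction_date
FROM transactions_raw
"""

# A date-filtered query against the new table only scans one partition:
query = ("SELECT * FROM transactions_partitioned "
         "WHERE transaction_date = DATE '2024-01-01'")
```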
A company has a large, unstructured dataset. The dataset includes many duplicate records across several key attributes. Which solution on AWS will detect duplicates in the dataset with the LEAST code development?
A.. Use Amazon Mechanical Turk jobs to detect duplicates.
B.. Use Amazon QuickSight ML Insights to build a custom deduplication model.
C.. Use Amazon SageMaker Data Wrangler to pre-process and detect duplicates.
D.. Use the AWS Glue FindMatches transform to detect duplicates.
D
A company needs to run a batch data-processing job on Amazon EC2 instances. The job will run during the weekend and will take 90 minutes to finish running. The processing can handle interruptions. The company will run the job every weekend for the next 6 months. Which EC2 instance purchasing option will meet these requirements MOST cost-effectively?
A.. Spot Instances
B.. Reserved Instances
C.. On-Demand Instances
D.. Dedicated Instances
A
An ML engineer has an Amazon Comprehend custom model in Account A in the us-east-1 Region. The ML engineer needs to copy the model to Account B in the same Region. Which solution will meet this requirement with the LEAST development effort?
A.. Use Amazon S3 to make a copy of the model. Transfer the copy to Account B.
B.. Create a resource-based IAM policy. Use the Amazon Comprehend ImportModel API operation to copy the model to Account B.
C.. Use AWS DataSync to replicate the model from Account A to Account B.
D.. Create an AWS Site-to-Site VPN connection between Account A and Account B to transfer the model.
B
An ML engineer is training a simple neural network model. The ML engineer tracks the performance of the model over time on a validation dataset. The model’s performance improves substantially at first and then degrades after a specific number of epochs. Which solutions will mitigate this problem? (Choose two.)
A.. Enable early stopping on the model.
B.. Increase dropout in the layers.
C.. Increase the number of layers.
D.. Increase the number of neurons.
E.. Investigate and reduce the sources of model bias.
AB
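The improve-then-degrade curve is overfitting, which is why early stopping and dropout (A and B) help. Early stopping can be sketched as a patience rule over the validation-loss history; the loss values below are made up to mimic the question's pattern:

```python
def early_stop_epoch(val_losses: list[float], patience: int = 3) -> int:
    """Return the epoch at which training should stop: the first epoch
    that is `patience` epochs past the best validation loss so far."""
    best, best_epoch = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            return epoch  # no improvement for `patience` epochs
    return len(val_losses) - 1  # never triggered: train to the end

# Validation loss improves until epoch 3, then degrades (overfitting).
losses = [0.9, 0.6, 0.4, 0.35, 0.36, 0.38, 0.41, 0.45]
stop = early_stop_epoch(losses)
```

In practice you would also keep the weights checkpointed at the best epoch, not at the stopping epoch.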
A company has a Retrieval Augmented Generation (RAG) application that uses a vector database to store embeddings of documents. The company must migrate the application to AWS and must implement a solution that provides semantic search of text files. The company has already migrated the text repository to an Amazon S3 bucket. Which solution will meet these requirements?
A.. Use an AWS Batch job to process the files and generate embeddings. Use AWS Glue to store the embeddings. Use SQL queries to perform the semantic searches.
B.. Use a custom Amazon SageMaker notebook to run a custom script to generate embeddings. Use SageMaker Feature Store to store the embeddings. Use SQL queries to perform the semantic searches.
C.. Use the Amazon Kendra S3 connector to ingest the documents from the S3 bucket into Amazon Kendra. Query Amazon Kendra to perform the semantic searches.
D.. Use an Amazon Textract asynchronous job to ingest the documents from the S3 bucket. Query Amazon Textract to perform the semantic searches.
C
A company uses Amazon Athena to query a dataset in Amazon S3. The dataset has a target variable that the company wants to predict. The company needs to use the dataset in a solution to determine if a model can predict the target variable. Which solution will provide this information with the LEAST development effort?
A.. Create a new model by using Amazon SageMaker Autopilot. Report the model’s achieved performance.
B.. Implement custom scripts to perform data pre-processing, multiple linear regression, and performance evaluation. Run the scripts on Amazon EC2 instances.
C.. Configure Amazon Macie to analyze the dataset and to create a model. Report the model’s achieved performance.
D.. Select a model from Amazon Bedrock. Tune the model with the data. Report the model’s achieved performance.
A
A company wants to predict the success of advertising campaigns by considering the color scheme of each advertisement. An ML engineer is preparing data for a neural network model. The dataset includes color information as categorical data. Which technique for feature engineering should the ML engineer use for the model?
A.. Apply label encoding to the color categories. Automatically assign each color a unique integer.
B.. Implement padding to ensure that all color feature vectors have the same length.
C.. Perform dimensionality reduction on the color categories.
D.. One-hot encode the color categories to transform the color scheme feature into a binary matrix.
D
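Answer D in a few lines: each color becomes its own binary column, which avoids the false ordering that label encoding's integers (red=0 < blue=1 < green=2) would impose on a neural network. A dependency-free sketch:

```python
def one_hot(values: list[str]) -> tuple[list[str], list[list[int]]]:
    """One-hot encode a categorical column into a binary matrix:
    one column per category, a single 1 per row."""
    categories = sorted(set(values))
    index = {cat: i for i, cat in enumerate(categories)}
    matrix = [[1 if index[v] == i else 0 for i in range(len(categories))]
              for v in values]
    return categories, matrix

cats, matrix = one_hot(["red", "blue", "green", "blue"])
# cats is ['blue', 'green', 'red']; "red" encodes as [0, 0, 1]
```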
A company uses a hybrid cloud environment. A model that is deployed on premises uses data in Amazon S3 to provide customers with a live conversational engine. The model is using sensitive data. An ML engineer needs to implement a solution to identify and remove the sensitive data. Which solution will meet these requirements with the LEAST operational overhead?
A.. Deploy the model on Amazon SageMaker. Create a set of AWS Lambda functions to identify and remove the sensitive data.
B.. Deploy the model on an Amazon Elastic Container Service (Amazon ECS) cluster that uses AWS Fargate. Create an AWS Batch job to identify and remove the sensitive data.
C.. Use Amazon Macie to identify the sensitive data. Create a set of AWS Lambda functions to remove the sensitive data.
D.. Use Amazon Comprehend to identify the sensitive data. Launch Amazon EC2 instances to remove the sensitive data.
C
An ML engineer needs to create data ingestion pipelines and ML model deployment pipelines on AWS. All the raw data is stored in Amazon S3 buckets. Which solution will meet these requirements?
A.. Use Amazon Data Firehose to create the data ingestion pipelines. Use Amazon SageMaker Studio Classic to create the model deployment pipelines.
B.. Use AWS Glue to create the data ingestion pipelines. Use Amazon SageMaker Studio Classic to create the model deployment pipelines.
C.. Use Amazon Redshift ML to create the data ingestion pipelines. Use Amazon SageMaker Studio Classic to create the model deployment pipelines.
D.. Use Amazon Athena to create the data ingestion pipelines. Use an Amazon SageMaker notebook to create the model deployment pipelines.
B
A company that has hundreds of data scientists is using Amazon SageMaker to create ML models. The models are in model groups in the SageMaker Model Registry. The data scientists are grouped into three categories: computer vision, natural language processing (NLP), and speech recognition. An ML engineer needs to implement a solution to organize the existing models into these groups to improve model discoverability at scale. The solution must not affect the integrity of the model artifacts and their existing groupings. Which solution will meet these requirements?
A.. Create a custom tag for each of the three categories. Add the tags to the model packages in the SageMaker Model Registry.
B.. Create a model group for each category. Move the existing models into these category model groups.
C.. Use SageMaker ML Lineage Tracking to automatically identify and tag which model groups should contain the models.
D.. Create a Model Registry collection for each of the three categories. Move the existing model groups into the collections.
D