You import two Microsoft Excel tables named Customer and Address into Power Query. Customer contains the following columns:
✑ Customer ID
✑ Customer Name
✑ Phone
✑ Email Address
✑ Address ID
Address contains the following columns:
✑ Address ID
✑ Address Line 1
✑ Address Line 2
✑ City
✑ State/Region
✑ Country
✑ Postal Code
Each Customer ID represents a unique customer in the Customer table. Each Address ID represents a unique address in the Address table.
You need to create a query that has one row per customer. Each row must contain City, State/Region, and Country for each customer.
What should you do?
A. Merge the Customer and Address tables.
B. Group the Customer and Address tables by the Address ID column.
C. Transpose the Customer and Address tables.
D. Append the Customer and Address tables.
✅ A. Merge the Customer and Address tables.
Why this is correct (broken down simply):
You want one row per customer, and you want to add city, state, and country information from the Address table.
The Customer table already has one row per customer.
The Address table has one row per address.
Customers are connected to addresses by Address ID.
To bring the address columns into the customer table, you perform a merge (Power Query’s version of a join).
This creates a single table with:
Customer ID
Customer Name
Phone
Email
Address ID
City
State/Region
Country
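In Power Query's M language, the merge can be sketched roughly as follows (the query names and the left-outer join kind are illustrative; the exact steps depend on how the tables were imported):

```m
let
    // Join Address onto Customer using the shared Address ID key
    Merged = Table.NestedJoin(
        Customer, {"Address ID"},
        Address, {"Address ID"},
        "Address", JoinKind.LeftOuter
    ),
    // Expand only the address columns needed in the result
    Result = Table.ExpandTableColumn(
        Merged, "Address",
        {"City", "State/Region", "Country"}
    )
in
    Result
```

A left outer join keeps one row per customer even if an Address ID has no match in the Address table.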
Why the other options are wrong:
❌ B. Group the Customer and Address tables
Grouping would aggregate rows — you don’t want to summarize anything. You want to join two tables, not group them.
❌ C. Transpose the tables
Transpose flips rows into columns — totally unrelated to joining customer info with address info.
❌ D. Append the tables
Append stacks tables on top of each other.
The Customer and Address tables don't have the same columns, and they shouldn't be stacked.
You have a CSV file that contains user complaints. The file contains a column named Logged. Logged contains the date and time each complaint occurred. The data in Logged is in the following format: 2018-12-31 at 08:59.
You need to be able to analyze the complaints by the logged date and use a built-in date hierarchy.
What should you do?
A. Apply the Parse function from the Data transformations options to the Logged column.
B. Change the data type of the Logged column to Date.
C. Split the Logged column by using at as the delimiter.
D. Create a column by example that starts with 2018-12-31
✅ C. Split the Logged column by using “at” as the delimiter.
Short Explanation
The Logged column is text (2018-12-31 at 08:59).
Split it at “at” → you get a clean date and a clean time.
Convert the date half to Date → Power BI gives you the built-in date hierarchy.
Why not the others (super short)
A: Parse fails because of the word “at.”
B: Changing data type errors for the same reason.
D: Column by example might produce a result, but it is manual and unreliable; splitting on the delimiter is the intended approach.
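The split-and-type steps can be sketched in M like this (the step name Source and the output column names are illustrative):

```m
let
    // "Source" stands for the step that loaded the CSV
    Split = Table.SplitColumn(
        Source, "Logged",
        // Splitting on " at " (with spaces) avoids leftover spaces
        Splitter.SplitTextByDelimiter(" at "),
        {"Logged Date", "Logged Time"}
    ),
    // Typing the new column as Date enables the built-in date hierarchy
    Typed = Table.TransformColumnTypes(
        Split,
        {{"Logged Date", type date}, {"Logged Time", type time}}
    )
in
    Typed
```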
You use Power Query to import two tables named Order Header and Order Details from an Azure SQL database. The Order Header table relates to the Order Details table by using a column named Order ID in each table.
You need to combine the tables into a single query that contains the unique columns of each table.
What should you select in Power Query Editor?
A. Merge queries
B. Combine files
C. Append queries
✅ A. Merge queries
Short Explanation
Merge = join tables side-by-side based on a shared key (Order ID).
This brings in unique columns from both tables into one combined query.
Why not the others
B. Combine files → only used for multiple files in a folder (CSV, Excel).
C. Append → stacks rows on top of each other (must have same columns), not what you want.
You have a PBIX file that imports data from a Microsoft Excel data source stored in a file share on a local network.
You are notified that the Excel data source was moved to a new location.
You need to update the PBIX file to use the new location.
What are three ways to achieve the goal? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
A. From the Datasets settings of the Power BI service, configure the data source credentials.
B. From the Data source settings in Power BI Desktop, configure the file path.
C. From Current File in Power BI Desktop, configure the Data Load settings.
D. From Power Query Editor, use the formula bar to configure the file path for the applied step.
E. From Advanced Editor in Power Query Editor, configure the file path in the M code.
✅ B. From the Data source settings in Power BI Desktop, configure the file path.
✅ D. From Power Query Editor, use the formula bar to configure the file path for the applied step.
✅ E. From Advanced Editor in Power Query Editor, configure the file path in the M code.
Why these are correct (short version)
B — Data source settings:
Lets you edit the file path directly for the existing connection.
D — Formula bar:
Lets you manually change the file path in the Source = File.Contents("…") step.
E — Advanced Editor:
Also lets you update the file path inside the M code.
Why the others are wrong
❌ A: Dataset settings in Power BI Service won’t fix a local file path—it’s only for refresh credentials.
❌ C: Data Load settings do not control file paths.
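For options D and E, the step you edit typically looks like the following (the UNC path and file name are placeholders; Power BI Desktop generates a similar Source step when you connect to an Excel file):

```m
let
    // Update this path when the file share moves
    Source = Excel.Workbook(
        File.Contents("\\newserver\share\SalesData.xlsx"),
        null, true
    )
in
    Source
```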
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a data source that contains a column. The column contains case-sensitive data.
You have a Power BI semantic model in DirectQuery mode.
You connect to the model and discover that it contains undefined values and errors.
You need to resolve the issue.
Solution: You implicitly convert the values into the required type.
Does this meet the goal?
A. Yes
B. No
❌ B. No
Short Explanation
Implicitly converting values does not fix case-sensitive issues in a DirectQuery model.
If the underlying data source is case-sensitive, mismatches can cause:
undefined values
errors
broken relationships
lookup failures
Implicit conversion won’t fix these problems.
You must ensure case consistency or explicitly transform the data.
So the solution does NOT meet the goal.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a data source that contains a column. The column contains case-sensitive data.
You have a Power BI semantic model in DirectQuery mode.
You connect to the model and discover that it contains undefined values and errors.
You need to resolve the issue.
Solution: You normalize casing in the source query or Power Query Editor.
Does this meet the goal?
A. Yes
B. No
✅ A. Yes
Short Explanation
Because the column is case sensitive, mismatched casing can cause:
undefined values
lookup failures
errors in DirectQuery
Normalizing casing (e.g., making everything uppercase or lowercase) fixes the inconsistency, which resolves the undefined values and errors.
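In Power Query, normalizing casing is a one-step transformation (the step name Source and the column name CustomerKey are illustrative):

```m
let
    // Force consistent casing on the affected column
    Normalized = Table.TransformColumns(
        Source,
        {{"CustomerKey", Text.Upper, type text}}
    )
in
    Normalized
```

Alternatively, apply UPPER() or LOWER() in the source SQL view so the normalization is folded back to the database.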
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a data source that contains a column. The column contains case-sensitive data.
You have a Power BI semantic model in DirectQuery mode.
You connect to the model and discover that it contains undefined values and errors.
You need to resolve the issue.
Solution: You add an index key and normalize casing in the data source.
Does this meet the goal?
A. Yes
B. No
✅ A. Yes
Short Explanation
The issue is caused by case-sensitive data in a DirectQuery model, which leads to undefined values and relationship/lookup errors.
Normalizing the casing directly fixes the root problem.
Adding an index key also helps ensure unique, consistent joins — which further prevents lookup errors in DirectQuery.
You use Power BI Desktop to import two tables named Customer and Contacts.
The Customer table contains the following columns:
The Contacts table contains the following columns:
A web-based contact form is used to fill the Contacts table. The data is not sanitized.
You need to create a merge for the Customer and Contacts tables.
What should you do?
A. Disable fuzzy matching.
B. Enable fuzzy matching.
C. Set Join Kind to Left Outer.
✅ B. Enable fuzzy matching.
Short Explanation
The Customer Name in the Contacts table comes from a web form and is not sanitized.
That means it may contain:
Extra spaces
Misspellings
Different casing
Variants like “Contoso Ltd” vs “Contoso Limited”
A normal merge (exact match) may fail because the text must match perfectly.
Fuzzy matching allows Power BI to match similar text values even when they’re slightly different.
Why not the others
❌ A. Disable fuzzy matching — would prevent imperfect matches from being found.
❌ C. Left Outer — join type doesn’t fix messy text; the issue is matching accuracy, not join direction.
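In M, a fuzzy merge uses Table.FuzzyNestedJoin; a sketch might look like this (the key column "Customer Name" comes from the explanation above; the option values are illustrative defaults):

```m
let
    // Fuzzy join tolerates casing, spacing, and small spelling differences
    Merged = Table.FuzzyNestedJoin(
        Customer, {"Customer Name"},
        Contacts, {"Customer Name"},
        "Contacts", JoinKind.LeftOuter,
        [IgnoreCase = true, IgnoreSpace = true, Threshold = 0.8]
    )
in
    Merged
```

Lowering Threshold matches looser text; raising it toward 1.0 approaches an exact match.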
You have a Microsoft Power BI Desktop report named Report1 that uses an Azure SQL database as a data source.
A user named User1 plans to create a report by using the same data source as Report1.
You need to simplify the connection to the data source for User1.
Which type of file should you create?
A. PBIDS
B. XLSX
C. PBIT
D. PBIX
✅ A. PBIDS
Short Explanation
A PBIDS (Power BI Data Source) file is specifically designed to make it easy for others to connect to the same data source.
When User1 opens the PBIDS file:
Power BI automatically opens
The data source connection details are pre-filled
User1 only needs to enter credentials (if required)
This is the recommended method to simplify data source connection for others.
Why not the others
❌ B. XLSX — just an Excel file; not helpful for connecting to Azure SQL.
❌ C. PBIT — template for reports; includes queries but not ideal when the goal is only to share the connection info.
❌ D. PBIX — full report, not a connection helper.
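A PBIDS file is a small JSON document holding only the connection details. A minimal sketch for an Azure SQL source might look like this (the server and database names are placeholders):

```json
{
  "version": "0.1",
  "connections": [
    {
      "details": {
        "protocol": "tds",
        "address": {
          "server": "contoso.database.windows.net",
          "database": "SalesDB"
        }
      },
      "mode": "Import"
    }
  ]
}
```

Double-clicking the file opens Power BI Desktop with the connection pre-filled; no queries or report content are included.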
You plan to create a Power BI semantic model named Model1 that will contain data from an Azure SQL database named DB1.
Model1 must show updated data within two minutes of the data being updated in DB1.
You need to select a connectivity mode for the connection to DB1.
What should you choose?
A. DirectQuery
B. live connection
C. import
✅ A. DirectQuery
Short Explanation
You need data in the model to update within two minutes of changes in the Azure SQL database.
Import mode can't meet this requirement because scheduled refresh can't run every two minutes.
Live connection works only for Analysis Services models or existing Power BI semantic models, not Azure SQL.
DirectQuery sends queries to the database at report time, so visuals reflect changes in DB1 in near real time, well within the two-minute requirement.
You are creating a report in Power BI Desktop.
You load a data extract that includes a free text field named col1.
You need to analyze the frequency distribution of the string lengths in col1. The solution must not affect the size of the model.
What should you do?
A. In the report, add a DAX calculated column that calculates the length of col1
B. In the report, add a DAX function that calculates the average length of col1
C. From Power Query Editor, add a column that calculates the length of col1
D. From Power Query Editor, change the distribution for the Column profile to group by length for col1
✅ D. From Power Query Editor, change the distribution for the Column profile to group by length for col1 🎯📊✨
You need to analyze string length frequency 🔡📈
AND
You must NOT affect model size 🚫📦
Option D works because:
Column profiling happens only inside Power Query Editor 🧪
It does not create new columns 💨
It does not load anything into the model
You can visually see the frequency distribution by length right in the profiling pane 🔍📊
Perfect match. ✔️
❌ Why the others are wrong (quick hits)
A. Create a DAX calculated column
💀 Adds a new column → increases model size.
B. Add a DAX measure
💀 Only gives you an average → does NOT give a full frequency distribution.
C. Add a column in Power Query
💀 New column = more data loaded → increases model size.
You need to provide a user with the ability to add members to a workspace. The solution must use the principle of least privilege.
Which role should you assign to the user?
A. Viewer
B. Admin
C. Contributor
D. Member
✅ D. Member 🎯✨
You want the user to be able to add members to a workspace 👥➕
But you must follow least privilege 🔐
Here’s what each workspace role can do:
Viewer 👀 → Can only view content
Contributor ✏️ → Can edit content but cannot add users
Member 💼 → Can add members + edit content, but not full control
Admin 👑 → Full control, highest permissions
Since both Admin and Member can add workspace users, but Member has fewer privileges…
👉 Member = least privilege that still meets the requirement 🎉
In Power BI Desktop, you are building a sales report that contains two tables. Both tables have row-level security (RLS) configured.
You need to create a relationship between the tables. The solution must ensure that bidirectional cross-filtering honors the RLS settings.
What should you do?
A. Create an inactive relationship between the tables and select Apply security filter in both directions.
B. Create an active relationship between the tables and select Apply security filter in both directions.
C. Create an inactive relationship between the tables and select Assume referential integrity.
D. Create an active relationship between the tables and select Assume referential integrity.
✅ B. Create an active relationship between the tables and select Apply security filter in both directions. 🔐🔄✨
You need:
✔️ Bidirectional cross-filtering 🔄
✔️ RLS to flow correctly between tables 🔐
✔️ A relationship that actually filters during report use (must be active) ⚡
When you choose Apply security filter in both directions, Power BI:
Allows RLS to propagate from either table → the other 🔁
Ensures users only see rows they are allowed to see across both tables 👀
This is the only option that meets all requirements.
❌ Why the other options fail (quick hits)
A. Inactive relationship + apply security
💀 Inactive = no filtering unless manually activated in a DAX measure. RLS won’t flow.
C. Inactive + assume referential integrity
💀 Still inactive. And “Assume referential integrity” is only for DirectQuery joins.
D. Active + assume referential integrity
💀 Doesn’t control RLS behavior at all. Only affects join type.
You have a Power BI model that contains a table named Employees. The table contains the following columns:
Each employee is uniquely identified by using Employee ID.
You need to create a DAX measure that will calculate the average salary of all the employees in the sales department.
Which DAX expression should you use?
A. DISTINCTCOUNT('Employees'[Salary])
B. CALCULATE(DISTINCTCOUNT('Employees'[Salary]), 'Employees'[Department] = "Sales")
C. CALCULATE(AVERAGE('Employees'[Salary]), 'Employees'[Department] = "Sales")
D. AVERAGE('Employees'[Salary])
✔ Correct Answer: C.
CALCULATE(AVERAGE('Employees'[Salary]), 'Employees'[Department] = "Sales") 💰📊✨
Why:
You need the average salary, but only for the Sales department.
CALCULATE applies the filter → “Sales” → then computes the AVERAGE. 🎯
❌ Why not the others?
A DISTINCTCOUNT → counts unique salaries, not average ❌
B DISTINCTCOUNT inside CALCULATE → still wrong metric 🔢🚫
D AVERAGE → averages all departments, not Sales 🌍❌
You have a Power BI data model that contains a table named Employees. The table has the following columns:
You are implementing dynamic row-level security (RLS).
You need to create a table filter to meet the following requirements:
Which expression should you use?
A. [Email Address] - USERNAME()
B. [Employee Name] - USERPRINCIPALNAME()
C. [Employee Name] = USERNAME()
D. [Email Address] = USERPRINCIPALNAME()
✔ Correct Answer: D. [Email Address] = USERPRINCIPALNAME() 📧🔐✨
Why:
Dynamic RLS should match the user’s email address.
USERPRINCIPALNAME() returns the user’s full UPN/email in both Desktop and the Service ✔️
Perfect for row-level filtering so users only see their own record 🎯
❌ Why not the others?
A [Email Address] - USERNAME() → Wrong operator; also, USERNAME() returns DOMAIN\username in Power BI Desktop, not an email ❌
B Compares Employee Name to an email → mismatched fields ❌
C Same issue: Employee Name ≠ full UPN/email ❌
You have a Power BI model that contains a table named Sales. The Sales table contains the following columns:
Orders are uniquely identified by using the order ID and can have multiple order lines. Each order line within an order contains a different product ID.
You need to write a DAX measure that counts the number of orders.
Which formula should you use?
A. Count('Sales'[Order ID])
B. CountA('Sales'[Order ID])
C. CountRows('Sales')
D. DistinctCount('Sales'[Order ID])
✔ Correct Answer: D. DISTINCTCOUNT('Sales'[Order ID]) 🔢📦✨
Why:
Each Order ID represents one order, but there are multiple order lines per order.
DISTINCTCOUNT counts unique orders, not rows — perfect for counting orders. 🎯
❌ Why not the others?
A. COUNT(Order ID) → Counts rows; duplicate Order IDs inflate results 🚫
B. COUNTA → Same issue as COUNT; counts non-blanks, not unique orders ❌
C. COUNTROWS(Sales) → Counts order lines, not orders 📄➕❌
You create a Power BI report named Summary1.
You discover that Summary1 is slow.
You run Performance analyzer to identify performance metrics for Summary1.
Which two metrics display the execution duration in Performance analyzer? Each correct answer presents part of the solution.
NOTE: Each correct answer is worth one point.
A. Top Operations
B. DAX query
C. Server requests
D. Dependencies
E. Visual display
✔ Correct Answers: B and E ⏱️📊✨
B. DAX query
E. Visual display
Why B (DAX query)? 🧠⚡
Shows how long the DAX query took to run — core execution time.
Why E (Visual display)? 🎨⌛
Shows how long Power BI took to render the visual after getting the data.
These are the two Performance Analyzer metrics that show execution duration.
❌ Why not the others?
A. Top Operations → Not a Performance Analyzer metric ❌
C. Server requests → Shows backend calls, not execution time directly 🌐
D. Dependencies → Metadata, not duration 🔗
You have a Power BI model that contains two tables named Sales and Date. The Sales table relates to the Date table by using a many-to-one relationship. The Sales table contains the following columns:
You need to create a DAX measure for a rolling 31-day sales total that will return the total sales amount for a selected date and the previous 30 days.
Which DAX expression should you use?
A. CALCULATE(SUM(Sales[SalesAmount]), DATEADD('Date'[Date], -30, DAY))
B. CALCULATE(SUM(Sales[SalesAmount]), DATESBETWEEN('Date'[Date], MAX('Date'[Date]) - 30, MAX('Date'[Date])))
C. CALCULATE(SUM(Sales[SalesAmount]), DATESMTD('Date'[Date]))
D. CALCULATE(SUM(Sales[SalesAmount]), DISTINCTCOUNT('Date'[Date]) = 31)
✔ Correct Answer: B.
CALCULATE(SUM(Sales[SalesAmount]), DATESBETWEEN('Date'[Date], MAX('Date'[Date]) - 30, MAX('Date'[Date]))) 📅📊🔥
Why:
DATESBETWEEN lets you define a custom rolling window — in this case:
From selected date minus 30 days
To selected date
= a full 31-day rolling total ✔️
Perfect for rolling periods.
❌ Why not the others?
A. DATEADD(-30) → Shifts the dates in context back 30 days; it returns the shifted dates, not the full 31-day window ❌
C. DATESMTD → Month-to-date, not rolling days 📆❌
D. DISTINCTCOUNT = 31 → Invalid filtering logic 🚫
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an on-premises data gateway.
You need to reduce the amount of data sent through the gateway by semantic models that run in Import storage mode.
Solution: You create aggregations to summarize results.
Does this meet the goal?
A. Yes
B. No
❌ Correct Answer: B. No
Why it’s NO:
Aggregations do NOT reduce the amount of data sent through the gateway for Import models.
Even if you build aggregation tables, Power BI still must import the full detailed data from the on-prem SQL Server during refresh.
So the same large dataset still travels through the gateway. 🚫📡
Aggregations only help query performance, not gateway data volume during refresh.
👍 What would reduce gateway load?
Filtering data before it reaches Power BI (SQL views, WHERE clauses)
Incremental refresh
Partitions
Pre-aggregation inside SQL Server, not Power BI
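For example, the incremental refresh approach relies on filtering the source query with the reserved RangeStart and RangeEnd datetime parameters, so each refresh only pulls the changed partition through the gateway. A sketch (server, database, table, and date column names are placeholders):

```m
let
    Source = Sql.Database("onprem-sql01", "SalesDB"),
    Fact = Source{[Schema = "dbo", Item = "FactSales"]}[Data],
    // RangeStart/RangeEnd must be defined as datetime parameters;
    // incremental refresh substitutes partition boundaries into them
    Filtered = Table.SelectRows(
        Fact,
        each [OrderDate] >= RangeStart and [OrderDate] < RangeEnd
    )
in
    Filtered
```

Because the filter folds to the SQL source, only the rows in the refreshed window travel through the gateway.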
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an on-premises data gateway.
You need to reduce the amount of data sent through the gateway by semantic models that run in import storage mode.
Solution: You increase Automatic page refresh intervals.
Does this meet the goal?
A. Yes
B. No
❌ Correct Answer: B. No 🚫📡
Why:
Automatic page refresh affects DirectQuery visuals only — it controls how often visuals re-query the source.
It does NOT affect Import models, and it does NOT reduce data moving through the gateway during refresh.
Import storage mode still loads the full dataset through the on-prem gateway during scheduled refresh, unchanged. 📥📊
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an on-premises data gateway.
You need to reduce the amount of data sent through the gateway by semantic models that run in Import storage mode.
Solution: You decrease the dashboard cache update frequency.
Does this meet the goal?
A. Yes
B. No
❌ Correct Answer: B. No 🚫📡
Why:
Dashboard cache refresh affects how often the dashboard tiles refresh in the Service, not how much data gets imported through the on-premises gateway.
Import models still require the full dataset to be refreshed through the gateway — unchanged. 📥📊
So reducing cache frequency does NOT reduce gateway data volume.
You have a Power BI model that contains a table named Expenses. The table contains the following columns:
Expenses are recorded by employees, and each expense has a unique expense ID.
You need to create a DAX measure that will calculate the total amount of expenses.
Which DAX expression should you use?
A. Sum('Expenses'[Amount])
B. Max('Expenses'[Amount])
C. Sum('Expenses'[Expense ID])
D. Count('Expenses'[Amount])
✔ Correct Answer: A. SUM('Expenses'[Amount]) 💰📊✨
Why:
To calculate total expenses, you simply sum the Amount column — nothing else is needed. 🎯
❌ Why not the others?
B. MAX(Amount) → Only returns the largest single expense ❌
C. SUM(Expense ID) → Adds IDs, not amounts 🤦
D. COUNT(Amount) → Counts rows, doesn’t total money 🔢❌
You have a clustered bar chart that contains a measure named Salary as the value and a field named Employee as the axis. Salary is present in the data as a numerical amount representing US dollars.
You need to create a reference line to show which employees are above the median salary.
Solution: You create an average line by using the Salary measure.
Does this meet the goal?
A. Yes
B. No
❌ Correct Answer: B. No 🚫📉
Why:
You need a line that represents the median salary, but an average line shows the mean, not the median.
Employees above the average are not necessarily above the median — different calculation. ❌📊