Microsoft DP-600 Exam (page: 3)
Microsoft Implementing Analytics Solutions Using Fabric
Updated on: 13-Dec-2025

Viewing Page 3 of 26

HOTSPOT (Drag and Drop is not supported)
You have a Fabric workspace named Workspace1 and an Azure Data Lake Storage Gen2 account named storage1. Workspace1 contains a lakehouse named Lakehouse1.

You need to create a shortcut to storage1 in Lakehouse1.

Which protocol and endpoint should you specify? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Hot Area:

  1. See Explanation section for answer.

Answer(s): A

Explanation:




Box 1: abfss
Access Azure storage
Once you have properly configured credentials to access your Azure storage container, you can interact with resources in the storage account using URIs. Databricks recommends using the abfss driver for greater security.

spark.read.load("abfss://<container-name>@<storage-account-name>.dfs.core.windows.net/<path-to-data>") dbutils.fs.ls("abfss://<container-name>@<storage-account-name>.dfs.core.windows.net/<path-to-data>") CREATE TABLE <database-name>.<table-name>;
COPY INTO <database-name>.<table-name>
FROM 'abfss://container@storageAccount.dfs.core.windows.net/path/to/folder' FILEFORMAT = CSV
COPY_OPTIONS ('mergeSchema' = 'true');
Box 2: dfs
dfs is used for the endpoint:
dbutils.fs.ls("abfss://<container-name>@<storage-account-name>.dfs.core.windows.net/<path-to-data>")


Reference:

https://docs.databricks.com/en/connect/storage/azure-storage.html



You have an Azure Repos Git repository named Repo1 and a Fabric-enabled Microsoft Power BI Premium capacity. The capacity contains two workspaces named Workspace1 and Workspace2. Git integration is enabled at the workspace level.

You plan to use Microsoft Power BI Desktop and Workspace1 to make version-controlled changes to a semantic model stored in Repo1. The changes will be built and deployed to Workspace2 by using Azure Pipelines.

You need to ensure that report and semantic model definitions are saved as individual text files in a folder hierarchy. The solution must minimize development and maintenance effort.

In which file format should you save the changes?

  1. PBIP
  2. PBIDS
  3. PBIT
  4. PBIX

Answer(s): A

Explanation:

Power BI Desktop projects (PREVIEW)
Power BI Desktop introduces a new way to author, collaborate, and save your projects. You can now save your work as a Power BI Project (PBIP). As a project, report and semantic model item definitions are saved as individual plain text files in a simple, intuitive folder structure.
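For illustration, saving a report named Sales in the PBIP format produces a folder hierarchy along these lines. This layout is a sketch: exact file names vary by Power BI Desktop version, and recent versions can store the model as TMDL files instead of model.bim.

Sales.pbip
Sales.Report/
    definition.pbir
    report.json
Sales.SemanticModel/
    definition.pbism
    model.bim

Because every definition is a plain text file, Repo1 can diff and merge them like source code, and an Azure Pipelines build can pick them up and deploy them to Workspace2.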


Reference:

https://learn.microsoft.com/en-us/power-bi/developer/projects/projects-overview



You have a Fabric tenant that contains a lakehouse named Lakehouse1. Lakehouse1 contains a Delta table that has one million Parquet files.

You need to remove files that were NOT referenced by the table during the past 30 days. The solution must ensure that the transaction log remains consistent, and the ACID properties of the table are maintained.

What should you do?

  1. From OneLake file explorer, delete the files.
  2. Run the OPTIMIZE command and specify the Z-order parameter.
  3. Run the OPTIMIZE command and specify the V-order parameter.
  4. Run the VACUUM command.

Answer(s): D

Explanation:

VACUUM
Applies to: Databricks SQL and Databricks Runtime. Removes unused files from a table directory.

VACUUM removes all files from the table directory that are not managed by Delta, as well as data files that are no longer in the latest state of the transaction log for the table and are older than a retention threshold.
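In a Fabric notebook, the command can be issued through Spark SQL. A minimal sketch, assuming the Delta table is simply named Table1; 30 days equals 720 hours, which is above Delta's default 7-day retention check:

# Preview which files would be deleted, then remove data files that have been
# unreferenced by the transaction log for more than 30 days (720 hours).
spark.sql("VACUUM Table1 RETAIN 720 HOURS DRY RUN").show(truncate=False)
spark.sql("VACUUM Table1 RETAIN 720 HOURS")

Because VACUUM honors the retention threshold and deletes only files the transaction log no longer references, the log stays consistent and the table keeps its ACID guarantees, unlike deleting files by hand in OneLake file explorer (option A).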

Incorrect:
Not B: What is Z order optimization?
Z-ordering is a technique to colocate related information in the same set of files. This co-locality is automatically used by the Delta Lake data-skipping algorithms on Azure Databricks, dramatically reducing the amount of data that needs to be read. Z-ordering improves read performance; it does not remove unreferenced files.

Not C: Delta Lake table optimization and V-Order
V-Order is a write time optimization to the parquet file format that enables lightning-fast reads under the Microsoft Fabric compute engines, such as Power BI, SQL, Spark, and others.

Power BI and SQL engines make use of Microsoft Verti-Scan technology and V-Ordered parquet files to achieve in-memory-like data access times. Spark and other non-Verti-Scan compute engines also benefit from the V-Ordered files, with an average of 10% faster read times and some scenarios up to 50% faster.

V-Order works by applying special sorting, row group distribution, dictionary encoding and compression on parquet files, thus requiring less network, disk, and CPU resources in compute engines to read it, providing cost efficiency and performance. V-Order sorting has a 15% impact on average write times but provides up to 50% more compression.
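To make the contrast concrete, here is a sketch of the three commands as they might be run in a Fabric notebook. Table1 and the CustomerId Z-order column are placeholders, and the VORDER clause follows the Fabric documentation referenced below:

# Both OPTIMIZE variants rewrite data files for faster reads; neither removes
# old, unreferenced files from storage. Only VACUUM does that.
spark.sql("OPTIMIZE Table1 ZORDER BY (CustomerId)")  # co-locate related rows
spark.sql("OPTIMIZE Table1 VORDER")                  # apply V-Order write optimization
spark.sql("VACUUM Table1 RETAIN 720 HOURS")          # delete unreferenced files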


Reference:

https://docs.databricks.com/en/sql/language-manual/delta-vacuum.html
https://learn.microsoft.com/en-us/fabric/data-engineering/delta-optimization-and-v-order



You have a Fabric tenant that contains a lakehouse named Lakehouse1.

You need to prevent new tables added to Lakehouse1 from being added automatically to the default semantic model of the lakehouse.

What should you configure?

  1. the SQL analytics endpoint settings
  2. the semantic model settings
  3. the workspace settings
  4. the Lakehouse1 settings

Answer(s): A

Explanation:

Default Power BI semantic models in Microsoft Fabric
Sync the default Power BI semantic model
Previously, all tables and views in the warehouse were automatically added to the default Power BI semantic model. Based on feedback, the default behavior was changed so that tables and views are no longer added automatically. This ensures the background sync is not triggered, but it also disables some actions such as New measure, Create report, and Analyze in Excel.

If you want to change this default behavior, you can:
1. Manually enable the Sync the default Power BI semantic model setting for each warehouse or SQL analytics endpoint in the workspace. This restarts the background sync, which incurs some consumption costs.
2. Manually pick the tables and views to add to the semantic model through Manage default Power BI semantic model in the ribbon or info bar.

NOTE: Understand what's in the default Power BI semantic model
When you create a Warehouse or SQL analytics endpoint, a default Power BI semantic model is created. The default semantic model is represented with the (default) suffix.


Reference:

https://learn.microsoft.com/en-us/fabric/data-warehouse/semantic-models



You have a Fabric tenant that contains JSON files in OneLake. The files have one billion items. You plan to perform time series analysis of the items.

You need to transform the data, visualize the data to find insights, perform anomaly detection, and share the insights with other business users. The solution must meet the following requirements:
- Use parallel processing.
- Minimize the duplication of data.
- Minimize how long it takes to load the data.

What should you use to transform and visualize the data?

  1. the PySpark library in a Fabric notebook
  2. the pandas library in a Fabric notebook
  3. a Microsoft Power BI report that uses core visuals

Answer(s): A

Explanation:

PySpark vs pandas performance
PySpark was created for working with big data on distributed systems. The pandas library, by contrast, is designed to manipulate and analyze datasets that fit on a single machine, typically up to a few gigabytes (less than 10 GB).

So PySpark, when run on a distributed computing system, gives better performance than pandas for data at this scale. PySpark also builds on resilient distributed datasets (RDDs), which are processed in parallel across the cluster.

NOTE: PySpark is a Python library that provides an interface for Apache Spark. Spark is an open-source framework for big data processing. Spark is built to process large amounts of data quickly by distributing computing tasks across a cluster of machines.

PySpark allows us to use Apache Spark and its ecosystem of libraries, such as Spark SQL for working with structured data.

We can also use Spark MLlib for machine learning and GraphX for graph processing from PySpark in Python.

PySpark supports many data sources, including Hadoop Distributed File System (HDFS), Apache Cassandra, and Amazon S3.

Along with these data processing capabilities, we can also use PySpark with popular Python libraries such as NumPy and pandas.
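A minimal sketch of the transform-and-visualize step in a Fabric notebook; the Files/items path and the timestamp and value fields are assumptions about the JSON items, not details from the question:

from pyspark.sql import functions as F

# Spark reads the JSON files in parallel across the cluster, straight from
# OneLake, without duplicating the data into another store.
df = spark.read.json("Files/items/")

# Bucket the items into hourly windows for time series analysis.
hourly = (
    df.withColumn("ts", F.to_timestamp("timestamp"))
      .groupBy(F.window("ts", "1 hour").alias("hour"))
      .agg(F.count("*").alias("items"), F.avg("value").alias("avg_value"))
)

display(hourly)  # built-in Fabric notebook charting for a quick visual check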


Reference:

https://www.codeconquest.com/blog/pyspark-vs-pandas-performance-memory-consumption-and-use-cases



You have a Fabric tenant that contains two workspaces named Workspace1 and Workspace2 and a user named User1.

You need to ensure that User1 can perform the following tasks:
- Create a new domain.
- Create two subdomains named subdomain1 and subdomain2.
- Assign Workspace1 to subdomain1.
- Assign Workspace2 to subdomain2.

The solution must follow the principle of least privilege.

Which role should you assign to User1?

  1. domain admin
  2. domain contributor
  3. Fabric admin
  4. workspace Admin

Answer(s): C

Explanation:

Creating a domain is a tenant-level operation. In Microsoft Fabric, only a Fabric admin can create (or delete) domains; domain admins and domain contributors manage domains that already exist. Mapping the required tasks to roles:

Create a new domain:
Requires the Fabric admin role. Neither the domain admin nor the domain contributor role allows creating new domains.

Create two subdomains (subdomain1 and subdomain2):
A domain admin of the domain could do this, but User1 can only be made its domain admin after the domain exists, and a Fabric admin can create subdomains directly.

Assign workspaces (Workspace1 and Workspace2) to subdomains:
Domain admins and domain contributors can assign workspaces, and so can a Fabric admin.

Because the task list includes creating the domain itself, Fabric admin is the least-privileged role that covers all four tasks.



HOTSPOT (Drag and Drop is not supported)
You have a Fabric tenant that contains three users named User1, User2, and User3. The tenant contains a security group named Group1. User1 and User3 are members of Group1.

The tenant contains the workspaces shown in the following table.



The tenant contains the domains shown in the following table.



User1 creates a new workspace named Workspace3. You assign Domain1 as the default domain of Group1.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.

Hot Area:

  1. See Explanation section for answer.

Answer(s): A

Explanation:




User2 is assigned the Contributor role for Workspace3 - No
Assigning a workspace to a domain does not grant anyone a workspace role. User1, as the creator of Workspace3, becomes its Admin, and nothing in the scenario assigns User2 any role in the workspace.

User3 is assigned the Viewer role for Workspace3 - No
User3 is a member of Group1, and because Domain1 is the default domain of Group1, Workspace3 can be associated with Domain1. A domain association is a governance grouping, however; it does not confer workspace roles, so User3 has no Viewer role unless one is granted explicitly.

User3 is assigned the Contributor role for Workspace1 - No
Workspace1 has its own access list, and there is no indication that User3 has been granted any role in it. Membership in Group1 does not grant Contributor access to a workspace unless the group is explicitly added to the workspace with that role.



You have a Fabric warehouse named Warehouse1 that contains a table named Table1. Table1 contains customer data.

You need to implement row-level security (RLS) for Table1. The solution must ensure that users can see only their respective data.

Which two objects should you create? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

  1. DATABASE ROLE
  2. STORED PROCEDURE
  3. CONSTRAINT
  4. FUNCTION
  5. SECURITY POLICY

Answer(s): D,E

Explanation:

Row-level security in a Fabric warehouse uses the same Transact-SQL building blocks as SQL Server: an inline table-valued function that acts as the filter predicate, and a security policy that binds the function to Table1.

The function evaluates each row against the caller's identity (for example, by comparing a column to USER_NAME()) and returns the row only when the caller is allowed to see it. The security policy then applies that predicate to the table so queries transparently return only each user's own rows; a separate database role is not required, because the predicate itself determines row visibility.
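A minimal T-SQL sketch of the two objects, assuming Table1 lives in the dbo schema and has a UserName column that stores each customer's login (the schema, function, and column names are illustrative, not from the question):

CREATE SCHEMA Security;
GO

-- Inline table-valued function used as the filter predicate: a row qualifies
-- only when its UserName value matches the identity of the querying user.
CREATE FUNCTION Security.fn_customer_filter(@UserName AS VARCHAR(128))
    RETURNS TABLE
    WITH SCHEMABINDING
AS
    RETURN SELECT 1 AS fn_result
           WHERE @UserName = USER_NAME();
GO

-- Security policy that binds the predicate to Table1 and turns enforcement on.
CREATE SECURITY POLICY Security.CustomerPolicy
    ADD FILTER PREDICATE Security.fn_customer_filter(UserName) ON dbo.Table1
    WITH (STATE = ON);
GO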


