Microsoft DP-600 Exam (page: 3)
Microsoft Implementing Analytics Solutions Using Fabric
Updated on: 12-Feb-2026

Viewing Page 3 of 26

HOTSPOT (Drag and Drop is not supported)
You have a Fabric workspace named Workspace1 and an Azure Data Lake Storage Gen2 account named storage1. Workspace1 contains a lakehouse named Lakehouse1.

You need to create a shortcut to storage1 in Lakehouse1.

Which protocol and endpoint should you specify? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Hot Area:

  1. See Explanation section for answer.

Answer(s): A

Explanation:




Box 1: abfss
Access Azure storage
Once you have properly configured credentials to access your Azure storage container, you can interact with resources in the storage account using URIs. Databricks recommends using the abfss driver for greater security.

spark.read.load("abfss://<container-name>@<storage-account-name>.dfs.core.windows.net/<path-to-data>") dbutils.fs.ls("abfss://<container-name>@<storage-account-name>.dfs.core.windows.net/<path-to-data>") CREATE TABLE <database-name>.<table-name>;
COPY INTO <database-name>.<table-name>
FROM 'abfss://container@storageAccount.dfs.core.windows.net/path/to/folder' FILEFORMAT = CSV
COPY_OPTIONS ('mergeSchema' = 'true');
Box 2: dfs
The dfs endpoint appears in the URI:
dbutils.fs.ls("abfss://<container-name>@<storage-account-name>.dfs.core.windows.net/<path-to-data>")


Reference:

https://docs.databricks.com/en/connect/storage/azure-storage.html



You have an Azure Repos Git repository named Repo1 and a Fabric-enabled Microsoft Power BI Premium capacity. The capacity contains two workspaces named Workspace1 and Workspace2. Git integration is enabled at the workspace level.

You plan to use Microsoft Power BI Desktop and Workspace1 to make version-controlled changes to a semantic model stored in Repo1. The changes will be built and deployed to Workspace2 by using Azure Pipelines.

You need to ensure that report and semantic model definitions are saved as individual text files in a folder hierarchy. The solution must minimize development and maintenance effort.

In which file format should you save the changes?

  1. PBIP
  2. PBIDS
  3. PBIT
  4. PBIX

Answer(s): A

Explanation:

Power BI Desktop projects (PREVIEW)
Power BI Desktop introduces a new way to author, collaborate, and save your projects. You can now save your work as a Power BI Project (PBIP). As a project, report and semantic model item definitions are saved as individual plain text files in a simple, intuitive folder structure.
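
Saving a project named, for example, Sales1 might produce a layout like the following (the item names are illustrative, and the exact files vary by Power BI Desktop version and preview settings):

Sales1.pbip
Sales1.Report/
    definition.pbir
    report.json
Sales1.SemanticModel/
    definition.pbism
    model.bim

Because every definition is a plain text file, the contents diff cleanly in Repo1 and can be built and deployed to Workspace2 by Azure Pipelines without extra tooling.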


Reference:

https://learn.microsoft.com/en-us/power-bi/developer/projects/projects-overview



You have a Fabric tenant that contains a lakehouse named Lakehouse1. Lakehouse1 contains a Delta table that has one million Parquet files.

You need to remove files that were NOT referenced by the table during the past 30 days. The solution must ensure that the transaction log remains consistent, and the ACID properties of the table are maintained.

What should you do?

  1. From OneLake file explorer, delete the files.
  2. Run the OPTIMIZE command and specify the Z-order parameter.
  3. Run the OPTIMIZE command and specify the V-order parameter.
  4. Run the VACUUM command.

Answer(s): D

Explanation:

VACUUM
Applies to: Databricks SQL and Databricks Runtime.
Removes unused files from a table directory.

VACUUM removes all files from the table directory that are not managed by Delta, as well as data files that are no longer in the latest state of the transaction log for the table and are older than a retention threshold.
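
A minimal sketch in a Fabric notebook (the table name table1 is hypothetical; spark is the session predefined by the notebook). A 30-day retention threshold corresponds to 720 hours:

# Remove files no longer referenced by the transaction log for 30+ days.
spark.sql("VACUUM table1 RETAIN 720 HOURS")

Because VACUUM deletes only files that the transaction log no longer references, the log stays consistent and the table's ACID properties are preserved. The default retention is 7 days, so RETAIN 720 HOURS extends it to the 30 days required here.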

Incorrect:
Not B: What is Z-order optimization?
Z-ordering is a technique to colocate related information in the same set of files. This co-locality is automatically used by Delta Lake on Azure Databricks data-skipping algorithms. This behavior dramatically reduces the amount of data that Delta Lake on Azure Databricks needs to read.

Not C: Delta Lake table optimization and V-Order
V-Order is a write time optimization to the parquet file format that enables lightning-fast reads under the Microsoft Fabric compute engines, such as Power BI, SQL, Spark, and others.

Power BI and SQL engines make use of Microsoft Verti-Scan technology and V-Ordered parquet files to achieve in-memory like data access times. Spark and other non-Verti-Scan compute engines also benefit from the V-Ordered files with an average of 10% faster read times, with some scenarios up to 50%.

V-Order works by applying special sorting, row group distribution, dictionary encoding and compression on parquet files, thus requiring less network, disk, and CPU resources in compute engines to read it, providing cost efficiency and performance. V-Order sorting has a 15% impact on average write times but provides up to 50% more compression.
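
For context, V-Order writing can be toggled per Spark session through a configuration property. A minimal sketch (the property name follows the Fabric documentation referenced below, but treat it as an assumption, since it can vary across runtime versions):

# Enable V-Order for writes in the current Fabric Spark session.
spark.conf.set("spark.sql.parquet.vorder.enabled", "true")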


Reference:

https://docs.databricks.com/en/sql/language-manual/delta-vacuum.html
https://learn.microsoft.com/en-us/fabric/data-engineering/delta-optimization-and-v-order



You have a Fabric tenant that contains a lakehouse named Lakehouse1.

You need to prevent new tables added to Lakehouse1 from being added automatically to the default semantic model of the lakehouse.

What should you configure?

  1. the SQL analytics endpoint settings
  2. the semantic model settings
  3. the workspace settings
  4. the Lakehouse1 settings

Answer(s): A

Explanation:

Default Power BI semantic models in Microsoft Fabric
Sync the default Power BI semantic model
Previously, all tables and views in the warehouse were automatically added to the default Power BI semantic model. Based on feedback, the default behavior was modified to not automatically add tables and views to the default Power BI semantic model. This ensures that the background sync is not triggered. It also disables some actions such as "New Measure", "Create Report", and "Analyze in Excel".

If you want to change this default behavior, you can:
1. Manually enable the Sync the default Power BI semantic model setting for each Warehouse or SQL analytics endpoint in the workspace. This restarts the background sync, which incurs some consumption costs.
2. Manually pick tables and views to be added to the semantic model through Manage default Power BI semantic model in the ribbon or info bar.

NOTE: Understand what's in the default Power BI semantic model
When you create a Warehouse or SQL analytics endpoint, a default Power BI semantic model is created. The default semantic model is represented with the (default) suffix.


Reference:

https://learn.microsoft.com/en-us/fabric/data-warehouse/semantic-models



You have a Fabric tenant that contains JSON files in OneLake. The files contain one billion items. You plan to perform time series analysis of the items.

You need to transform the data, visualize the data to find insights, perform anomaly detection, and share the insights with other business users. The solution must meet the following requirements:
Use parallel processing.
Minimize the duplication of data.
Minimize how long it takes to load the data.

What should you use to transform and visualize the data?

  1. the PySpark library in a Fabric notebook
  2. the pandas library in a Fabric notebook
  3. a Microsoft Power BI report that uses core visuals

Answer(s): A

Explanation:

PySpark vs Pandas Performance
PySpark was created to help us work with big data on distributed systems. The pandas library, on the other hand, is used to manipulate and analyze datasets of up to a few gigabytes (less than 10 GB, to be specific).

So PySpark, when used with a distributed computing system, gives better performance than pandas. PySpark also uses resilient distributed datasets (RDDs) to process data in parallel, which is why it outperforms pandas at this scale.

NOTE: PySpark is a Python library that provides an interface for Apache Spark. Spark is an open-source framework for big data processing, built to process large amounts of data quickly by distributing computing tasks across a cluster of machines.

PySpark allows us to use Apache Spark and its ecosystem of libraries, such as Spark SQL for working with structured data.

We can also use Spark MLlib for machine learning and GraphX for graph processing from PySpark in Python.

PySpark supports many data sources, including Hadoop Distributed File System (HDFS), Apache Cassandra, and Amazon S3.

Along with its data processing capabilities, PySpark can be used together with popular Python libraries such as NumPy and pandas.
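
A minimal sketch of this approach (the Files path and the timestamp column are hypothetical; spark is the session predefined in a Fabric notebook):

from pyspark.sql import functions as F

# Read the JSON items in parallel across the cluster; the data is not duplicated.
df = spark.read.json("Files/items/")  # hypothetical OneLake path in the lakehouse

# A simple time series transformation: count items per day.
daily = (
    df.withColumn("event_date", F.to_date("timestamp"))  # assumes a 'timestamp' column
      .groupBy("event_date")
      .agg(F.count("*").alias("item_count"))
      .orderBy("event_date")
)
daily.show()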


Reference:

https://www.codeconquest.com/blog/pyspark-vs-pandas-performance-memory-consumption-and-use-cases



You have a Fabric tenant that contains two workspaces named Workspace1 and Workspace2 and a user named User1.

You need to ensure that User1 can perform the following tasks:
Create a new domain.
Create two subdomains named subdomain1 and subdomain2.
Assign Workspace1 to subdomain1.
Assign Workspace2 to subdomain2.

The solution must follow the principle of least privilege.

Which role should you assign to User1?

  1. domain admin
  2. domain contributor
  3. Fabric admin
  4. workspace Admin

Answer(s): C

Explanation:

To perform every task in the list, User1 needs permissions to create a domain, create subdomains, and assign workspaces to those subdomains. Here is a breakdown of the required tasks and the permissions needed:
Create a new domain:
Domains are created in the Fabric admin portal, and only the Fabric admin role can create them. Domain admins and domain contributors manage domains that already exist; they cannot create new ones.

Create two subdomains (subdomain1 and subdomain2):
A Fabric admin can create subdomains. A domain admin could also create subdomains for an existing domain, but that role cannot satisfy the first task.

Assign workspaces (Workspace1 and Workspace2) to subdomains:
A Fabric admin can assign workspaces to subdomains. Because creating the domain itself requires the Fabric admin role, Fabric admin is the least-privileged single role that covers all of the tasks.



HOTSPOT (Drag and Drop is not supported)
You have a Fabric tenant that contains three users named User1, User2, and User3. The tenant contains a security group named Group1. User1 and User3 are members of Group1.

The tenant contains the workspaces shown in the following table.



The tenant contains the domains shown in the following table.



User1 creates a new workspace named Workspace3. You assign Domain1 as the default domain of Group1.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.

Hot Area:

  1. See Explanation section for answer.

Answer(s): A

Explanation:




User2 is assigned the Contributor role for Workspace3 - No
Workspace3 was created by User1, who therefore becomes its Admin. Assigning Domain1 as the default domain of Group1 does not grant workspace roles, and User2 has not been given any role in Workspace3.

User3 is assigned the Viewer role for Workspace3 - No
Although User3 is a member of Group1 and Domain1 is the group's default domain, domain assignment governs data governance and discovery, not workspace access. User3 has not been explicitly granted the Viewer role in Workspace3.

User3 is assigned the Contributor role for Workspace1 - No
There is no indication that User3 has any role in Workspace1. Membership in Group1 does not grant Contributor access to a workspace unless the group is explicitly assigned that role.



You have a Fabric warehouse named Warehouse1 that contains a table named Table1. Table1 contains customer data.

You need to implement row-level security (RLS) for Table1. The solution must ensure that users can see only their respective data.

Which two objects should you create? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

  1. DATABASE ROLE
  2. STORED PROCEDURE
  3. CONSTRAINT
  4. FUNCTION
  5. SECURITY POLICY

Answer(s): D,E

Explanation:

An inline table-valued FUNCTION implements the filter predicate: it returns a row only when the filtering condition is met, typically by comparing a column in Table1 to the identity of the executing user (for example, by using USER_NAME()).

A SECURITY POLICY binds the predicate function to Table1 and enforces row-level security, so queries against the table transparently return only the rows each user is permitted to see. A database role is not part of the RLS mechanism itself; roles grant object-level permissions rather than row filtering.
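
The standard T-SQL pattern, shown as a minimal sketch (the Security schema, the function name, and the UserName column in Table1 are assumptions for illustration):

-- Inline table-valued function used as the filter predicate.
CREATE SCHEMA Security;
GO
CREATE FUNCTION Security.fn_rowlevel_predicate(@UserName AS varchar(128))
    RETURNS TABLE
    WITH SCHEMABINDING
AS
    RETURN SELECT 1 AS fn_result
    WHERE @UserName = USER_NAME();   -- a row is visible only to the matching user
GO

-- Security policy that binds the predicate to Table1 and turns RLS on.
CREATE SECURITY POLICY CustomerFilter
    ADD FILTER PREDICATE Security.fn_rowlevel_predicate(UserName)
    ON dbo.Table1
    WITH (STATE = ON);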


