Microsoft DP-700 Exam (page: 5)
Implementing Data Engineering Solutions Using Microsoft Fabric
Updated on: 28-Jul-2025

Viewing Page 5 of 25

You have an Azure Data Lake Storage Gen2 account named storage1 and an Amazon S3 bucket named storage2.
You have the Delta Parquet files shown in the following table.

Name         Location                                   Size
ProductFile  storage1 (Azure Data Lake Storage Gen2)    50 MB
StoreFile    storage2 (Amazon S3)                       25 MB
TripsFile    storage2 (Amazon S3)                       2 GB

You have a Fabric workspace named Workspace1 that has the cache for shortcuts enabled. Workspace1 contains a lakehouse named Lakehouse1. Lakehouse1 has the following shortcuts:
A shortcut to ProductFile aliased as Products
A shortcut to StoreFile aliased as Stores
A shortcut to TripsFile aliased as Trips
The data from which shortcuts will be retrieved from the cache?

  1. Trips and Stores only
  2. Products and Stores only
  3. Stores only
  4. Products only
  5. Products, Stores, and Trips

Answer(s): 2

Explanation:

When the cache for shortcuts is enabled in Fabric, whether data is served from the cache depends on where the shortcut target is stored, how large the files are, and how recently they were accessed. Broken down by file:
Products: ProductFile is stored in Azure Data Lake Storage Gen2 (storage1). Azure Data Lake Storage Gen2 is a supported storage system in Fabric, and the file is relatively small (50 MB), so this data is most likely cached and retrievable from the cache.
Stores: StoreFile is stored in Amazon S3 (storage2). Even though it resides with a different cloud provider, Fabric can cache data from Amazon S3 when caching is enabled, so this file (25 MB) is also likely cached and retrievable.
Trips: TripsFile is likewise stored in Amazon S3 (storage2) but is significantly larger (2 GB) than the other files. While Fabric can cache data from Amazon S3, a file of this size may exceed typical cache size or retention limits, so it is likely retrieved directly from the source rather than from the cache.
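
As a practical aside, shortcuts like the three above can also be created programmatically through the OneLake Shortcuts REST API. The sketch below shows roughly what creating the Stores shortcut could look like; the GUIDs, bucket URL, and connection ID are placeholders, and the exact request-body fields should be checked against the current API reference.

    # Sketch: create the Stores shortcut (pointing at StoreFile in the S3 bucket)
    # in Lakehouse1 via the OneLake Shortcuts REST API.
    # All GUIDs and URLs below are placeholders, not values from the question.
    import requests
    from azure.identity import InteractiveBrowserCredential

    WORKSPACE_ID = "<Workspace1-guid>"
    LAKEHOUSE_ID = "<Lakehouse1-guid>"
    CONNECTION_ID = "<S3-connection-guid>"  # cloud connection holding the S3 credentials

    token = InteractiveBrowserCredential().get_token(
        "https://api.fabric.microsoft.com/.default").token

    resp = requests.post(
        f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"
        f"/items/{LAKEHOUSE_ID}/shortcuts",
        json={
            "path": "Tables",      # create the shortcut under the Tables folder
            "name": "Stores",      # the alias used inside Lakehouse1
            "target": {
                "amazonS3": {
                    "location": "https://storage2.s3.us-east-1.amazonaws.com",  # placeholder
                    "subpath": "/StoreFile",
                    "connectionId": CONNECTION_ID,
                }
            },
        },
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()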



HOTSPOT (Drag and Drop is not supported)
You have a Fabric workspace named Workspace1 that contains the items shown in the following table.

Name       Type
Pipeline1  Data pipeline
Notebook1  Notebook
Notebook2  Notebook
Model1     Semantic model

For Model1, the Keep your Direct Lake data up to date option is disabled.
You need to configure the execution of the items to meet the following requirements:
Notebook1 must execute every weekday at 8:00 AM.
Notebook2 must execute when a file is saved to an Azure Blob Storage container.
Model1 must refresh when Notebook1 has executed successfully.
How should you orchestrate each item? To answer, select the appropriate options in the answer area.
Note: Each correct selection is worth one point.
Hot Area:

  1. See Explanation section for answer.

Answer(s): 1

Explanation:



Notebook1 - Add Notebook1 to Pipeline1.
To run Notebook1 every weekday at 8:00 AM, add it to Pipeline1 and schedule the pipeline. This leverages the scheduling capabilities of pipelines in Fabric and ensures that Notebook1 runs at the desired time.
Notebook2 - From the Real-Time hub, configure the execution of Notebook2.
Because Notebook2 must execute when a file is saved to an Azure Blob Storage container, trigger it from the Real-Time hub. The Real-Time hub is designed for event-driven scenarios like this one, where an external event (for example, a file saved to a blob container) triggers the execution of a notebook.
Pipeline1 - Configure the execution of Pipeline1 by using a schedule.
Scheduling Pipeline1 for 8:00 AM on weekdays drives the entire flow: the schedule starts Notebook1, and the activity dependencies inside the pipeline determine what runs afterward.
Model1 - Add Model1 to Pipeline1.
To refresh Model1 only after Notebook1 has executed successfully, add a semantic model refresh step for Model1 to Pipeline1 and connect it to the Notebook1 activity with an on-success dependency. Because Pipeline1 controls the refresh, the Keep your Direct Lake data up to date option should remain disabled.
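
For context on what the pipeline does under the hood, the same success dependency can be reproduced through REST calls: Fabric's Job Scheduler API can start a pipeline run on demand, and the Power BI REST API can refresh a semantic model. The sketch below is a rough illustration with placeholder GUIDs, not part of the exam scenario; inside Fabric, the schedule and the on-success refresh activity in Pipeline1 do this for you.

    # Illustrative sketch: start Pipeline1 on demand, then refresh Model1 once
    # the run succeeds. All GUIDs are placeholders.
    import time

    import requests
    from azure.identity import InteractiveBrowserCredential

    WORKSPACE_ID = "<Workspace1-guid>"
    PIPELINE_ID = "<Pipeline1-guid>"
    MODEL_ID = "<Model1-guid>"

    credential = InteractiveBrowserCredential()

    def bearer(scope: str) -> dict:
        # Helper: build an Authorization header for the given resource scope.
        return {"Authorization": f"Bearer {credential.get_token(scope).token}"}

    fabric = bearer("https://api.fabric.microsoft.com/.default")

    # Start an on-demand pipeline run; the job instance URL is returned
    # in the Location header of the 202 response.
    run = requests.post(
        f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"
        f"/items/{PIPELINE_ID}/jobs/instances?jobType=Pipeline",
        headers=fabric,
    )
    run.raise_for_status()
    status_url = run.headers["Location"]

    # Poll until the run reaches a terminal state.
    while True:
        job = requests.get(status_url, headers=fabric).json()
        if job["status"] in ("Completed", "Failed", "Cancelled"):
            break
        time.sleep(30)

    # Mirror the pipeline's on-success dependency: refresh Model1 only on success.
    if job["status"] == "Completed":
        pbi = bearer("https://analysis.windows.net/powerbi/api/.default")
        requests.post(
            f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}"
            f"/datasets/{MODEL_ID}/refreshes",
            headers=pbi,
        ).raise_for_status()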



Your company has a sales department that uses two Fabric workspaces named Workspace1 and Workspace2.
The company decides to implement a domain strategy to organize the workspaces.
You need to ensure that a user can perform the following tasks:
Create a new domain for the sales department.
Create two subdomains: one for the east region and one for the west region.
Assign Workspace1 to the east region subdomain.
Assign Workspace2 to the west region subdomain.
The solution must follow the principle of least privilege.
Which role should you assign to the user?

  1. Workspace Admin
  2. Domain admin
  3. Domain contributor
  4. Fabric admin

Answer(s): 4
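
The key point is scope: creating a new top-level domain is a tenant-level operation that requires the Fabric admin role, while domain admins and domain contributors can only operate on domains that already exist, and a workspace Admin has no domain permissions at all. The sketch below illustrates the flow against the Fabric Admin REST API (note the /admin path, which only a Fabric admin can call); all IDs are placeholders and the exact body fields should be verified against the API reference.

    # Minimal sketch of the domain setup flow via the Fabric Admin REST API.
    # These endpoints sit under /v1/admin, so the caller must be a Fabric admin.
    import requests
    from azure.identity import InteractiveBrowserCredential

    token = InteractiveBrowserCredential().get_token(
        "https://api.fabric.microsoft.com/.default").token
    headers = {"Authorization": f"Bearer {token}"}
    BASE = "https://api.fabric.microsoft.com/v1/admin/domains"

    # 1. Create the Sales domain.
    sales = requests.post(BASE, json={"displayName": "Sales"}, headers=headers).json()

    # 2. Create the two regional subdomains under it.
    east = requests.post(BASE, json={"displayName": "Sales East",
                                     "parentDomainId": sales["id"]}, headers=headers).json()
    west = requests.post(BASE, json={"displayName": "Sales West",
                                     "parentDomainId": sales["id"]}, headers=headers).json()

    # 3. Assign each workspace to its subdomain (placeholder workspace GUIDs).
    requests.post(f"{BASE}/{east['id']}/assignWorkspaces",
                  json={"workspacesIds": ["<Workspace1-guid>"]}, headers=headers)
    requests.post(f"{BASE}/{west['id']}/assignWorkspaces",
                  json={"workspacesIds": ["<Workspace2-guid>"]}, headers=headers)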



You have a Fabric workspace named Workspace1 that contains a warehouse named DW1 and a data pipeline named Pipeline1.
You plan to add a user named User3 to Workspace1.
You need to ensure that User3 can perform the following actions:
View all the items in Workspace1.
Update the tables in DW1.
The solution must follow the principle of least privilege.
You already assigned the appropriate object-level permissions to DW1.
Which workspace role should you assign to User3?

  1. Admin
  2. Member
  3. Viewer
  4. Contributor

Answer(s): 4

Explanation:

To ensure User3 can view all items in Workspace1 and update the tables in DW1, the most appropriate workspace role to assign is the Contributor role. This role allows User3 to:
1. View all items in Workspace1: The Contributor role provides the ability to view all objects within the workspace, such as data pipelines, warehouses, and other resources.
2. Update the tables in DW1: The Contributor role allows User3 to modify or update resources within the workspace, including the tables in DW1, assuming that appropriate object-level permissions are set for the warehouse.
This role adheres to the principle of least privilege, as it provides the necessary permissions without granting broader administrative rights.
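
As a side note, the same assignment can be made programmatically. Below is a minimal sketch using the Fabric REST API's workspace role-assignment endpoint; the workspace GUID and the user's Microsoft Entra object ID are placeholders.

    # Minimal sketch: grant User3 the Contributor role on Workspace1
    # via the Fabric REST API. All IDs are placeholders.
    import requests
    from azure.identity import InteractiveBrowserCredential

    token = InteractiveBrowserCredential().get_token(
        "https://api.fabric.microsoft.com/.default").token

    WORKSPACE_ID = "<Workspace1-guid>"
    USER3_OBJECT_ID = "<User3-Entra-object-id>"

    resp = requests.post(
        f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}/roleAssignments",
        json={
            "principal": {"id": USER3_OBJECT_ID, "type": "User"},
            "role": "Contributor",  # least privilege that still allows writes to DW1 tables
        },
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()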



You have a Fabric capacity that contains a workspace named Workspace1. Workspace1 contains a lakehouse named Lakehouse1, a data pipeline, a notebook, and several Microsoft Power BI reports.
A user named User1 wants to use SQL to analyze the data in Lakehouse1.
You need to configure access for User1. The solution must meet the following requirements:
Provide User1 with read access to the table data in Lakehouse1.
Prevent User1 from using Apache Spark to query the underlying files in Lakehouse1.
Prevent User1 from accessing other items in Workspace1.
What should you do?

  1. Share Lakehouse1 with User1 directly and select Read all SQL endpoint data.
  2. Assign User1 the Viewer role for Workspace1. Share Lakehouse1 with User1 and select Read all SQL endpoint data.
  3. Share Lakehouse1 with User1 directly and select Build reports on the default semantic model.
  4. Assign User1 the Member role for Workspace1. Share Lakehouse1 with User1 and select Read all SQL endpoint data.

Answer(s): 1
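
Sharing Lakehouse1 with Read all SQL endpoint data gives User1 T-SQL read access through the lakehouse's SQL analytics endpoint without granting any workspace role, which is exactly what the requirements call for. Below is a minimal sketch of what User1's access could look like from Python, assuming the ODBC Driver 18 for SQL Server is installed; the server address and table name are placeholders.

    # Minimal sketch: query Lakehouse1 through its SQL analytics endpoint as User1.
    # The server address is a placeholder; the real one is shown in the lakehouse's
    # SQL analytics endpoint settings. Requires: pip install pyodbc
    import pyodbc

    conn = pyodbc.connect(
        "Driver={ODBC Driver 18 for SQL Server};"
        "Server=<endpoint-id>.datawarehouse.fabric.microsoft.com;"  # placeholder
        "Database=Lakehouse1;"
        "Authentication=ActiveDirectoryInteractive;"  # signs in as User1
        "Encrypt=yes;"
    )
    cursor = conn.cursor()
    # Read access works through T-SQL only; User1 cannot read the underlying Delta
    # files with Spark, because only SQL endpoint data was shared.
    for row in cursor.execute("SELECT TOP 10 * FROM dbo.Sales;"):  # placeholder table
        print(row)
    conn.close()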


