Databricks Certified Associate Developer for Apache Spark 3.0 Exam
Updated on: 09-Apr-2026

Which of the following options describes the responsibility of the executors in Spark?

  A. The executors accept jobs from the driver, analyze those jobs, and return results to the driver.
  B. The executors accept tasks from the driver, execute those tasks, and return results to the cluster manager.
  C. The executors accept tasks from the driver, execute those tasks, and return results to the driver.
  D. The executors accept tasks from the cluster manager, execute those tasks, and return results to the driver.
  E. The executors accept jobs from the driver, plan those jobs, and return results to the cluster manager.

Answer(s): C

Explanation:

More info: Running Spark: an overview of Spark’s runtime architecture - Manning (https://bit.ly/2RPmJn9)
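The driver–executor relationship in answer C can be sketched as a toy model in plain Python (no Spark required; `driver`, `run_task`, and the thread pool standing in for executors are illustrative names, not Spark APIs):

```python
from concurrent.futures import ThreadPoolExecutor

# Toy model of answer C: the driver hands tasks to executors,
# the executors execute them and return results to the driver.
def run_task(task):
    """An 'executor' executes one task and returns its result."""
    func, partition = task
    return func(partition)

def driver(partitions, func, num_executors=2):
    """The 'driver' creates one task per partition, distributes the
    tasks to executors, and collects the results back on the driver."""
    tasks = [(func, p) for p in partitions]
    with ThreadPoolExecutor(max_workers=num_executors) as executors:
        results = list(executors.map(run_task, tasks))
    return results  # results come back to the driver, not the cluster manager

# Example: sum each partition of a tiny 'dataset'
data = [[1, 2, 3], [4, 5], [6]]
print(driver(data, sum))  # [6, 9, 6]
```

Note how the results flow back to the caller (the driver), which is exactly what distinguishes answer C from answers B and D.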



Which of the following describes the role of tasks in the Spark execution hierarchy?

  A. Tasks are the smallest element in the execution hierarchy.
  B. Within one task, the slots are the unit of work done for each partition of the data.
  C. Tasks are the second-smallest element in the execution hierarchy.
  D. Stages with narrow dependencies can be grouped into one task.
  E. Tasks with wide dependencies can be grouped into one stage.

Answer(s): A

Explanation:

Stages with narrow dependencies can be grouped into one task.
Wrong – it is the other way around: tasks with narrow dependencies can be grouped into one stage.

Tasks with wide dependencies can be grouped into one stage.
Wrong, since a wide transformation causes a shuffle, which always marks the boundary of a stage. You therefore cannot bundle multiple tasks that have wide dependencies into one stage.

Tasks are the second-smallest element in the execution hierarchy. No, they are the smallest element in the execution hierarchy.

Within one task, the slots are the unit of work done for each partition of the data.
No, tasks are the unit of work done per partition. Slots help Spark parallelize work. An executor can have multiple slots which enable it to process multiple tasks in parallel.
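The hierarchy described above – narrow transformations pipelined into one stage, each wide (shuffle) transformation marking a stage boundary – can be illustrated with a toy planner in plain Python (`plan_stages` and the operation names are illustrative, not Spark internals):

```python
# Toy illustration of the execution hierarchy: narrow transformations are
# pipelined into one stage; every wide (shuffle) transformation closes the
# current stage. Within a stage, one task is then created per partition.
def plan_stages(ops):
    """Group a list of ('name', is_wide) operations into stages."""
    stages, current = [], []
    for name, is_wide in ops:
        current.append(name)
        if is_wide:              # a shuffle marks the stage boundary
            stages.append(current)
            current = []
    if current:
        stages.append(current)
    return stages

ops = [("map", False), ("filter", False), ("groupByKey", True), ("mapValues", False)]
print(plan_stages(ops))  # [['map', 'filter', 'groupByKey'], ['mapValues']]
```

The two narrow operations are bundled with the shuffle that ends their stage, while the operation after the shuffle starts a new stage – matching why answer E is wrong.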



Which of the following describes the role of the cluster manager?

  A. The cluster manager schedules tasks on the cluster in client mode.
  B. The cluster manager schedules tasks on the cluster in local mode.
  C. The cluster manager allocates resources to Spark applications and maintains the executor processes in client mode.
  D. The cluster manager allocates resources to Spark applications and maintains the executor processes in remote mode.
  E. The cluster manager allocates resources to the DataFrame manager.

Answer(s): C

Explanation:

The cluster manager allocates resources to Spark applications and maintains the executor processes in client mode.
Correct. In client mode, the driver runs on the client machine, but the cluster manager still resides on the cluster. From there it starts and ends executor processes on the cluster nodes as required by the Spark application running on the Spark driver.
The cluster manager allocates resources to Spark applications and maintains the executor processes in remote mode.
Wrong, there is no "remote" execution mode in Spark. Available execution modes are local, client, and cluster.
The cluster manager allocates resources to the DataFrame manager.
Wrong, there is no "DataFrame manager" in Spark.

The cluster manager schedules tasks on the cluster in client mode.
No. In client mode, the Spark driver schedules tasks on the cluster – not the cluster manager.

The cluster manager schedules tasks on the cluster in local mode.
Wrong. In local mode there is no cluster: the Spark application runs on a single machine, not on a cluster of machines.
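The three execution modes show up directly in how an application is submitted. The following spark-submit invocations are illustrative only – the master URL, host name, and `app.py` are placeholders to adapt to your environment:

```shell
# Local mode: driver and executors run in a single JVM on one machine;
# no cluster manager is involved.
spark-submit --master "local[4]" app.py

# Client mode: the driver runs on the submitting machine; the cluster
# manager on the cluster allocates resources and maintains the executors.
spark-submit --master spark://cluster-host:7077 --deploy-mode client app.py

# Cluster mode: the driver itself also runs on a node inside the cluster.
spark-submit --master spark://cluster-host:7077 --deploy-mode cluster app.py
```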



Which of the following is the idea behind dynamic partition pruning in Spark?

  A. Dynamic partition pruning is intended to skip over the data you do not need in the results of a query.
  B. Dynamic partition pruning concatenates columns of similar data types to optimize join performance.
  C. Dynamic partition pruning performs wide transformations on disk instead of in memory.
  D. Dynamic partition pruning reoptimizes physical plans based on data types and broadcast variables.
  E. Dynamic partition pruning reoptimizes query plans based on runtime statistics collected during query execution.

Answer(s): A

Explanation:

Dynamic partition pruning is intended to skip over the data you do not need in the results of a query.
Correct. At runtime, Spark uses the filter applied to one side of a join to avoid reading partitions on the other side that cannot contribute to the result.

Dynamic partition pruning reoptimizes query plans based on runtime statistics collected during query execution.
No – this describes adaptive query execution, not dynamic partition pruning.
Dynamic partition pruning concatenates columns of similar data types to optimize join performance.
Wrong – this has nothing to do with dynamic partition pruning; concatenating columns is not a pruning mechanism.

Dynamic partition pruning reoptimizes physical plans based on data types and broadcast variables.
It is true that dynamic partition pruning works in joins using broadcast variables, and that this happens in both the logical optimization and the physical planning stage. However, data types play no role in the optimization.
Dynamic partition pruning performs wide transformations on disk instead of in memory.
This answer does not make sense. Dynamic partition pruning is meant to accelerate Spark – performing any transformation involving disk instead of memory resources would decelerate Spark and certainly achieve the opposite effect of what dynamic partition pruning is intended for.
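The idea in answer A can be sketched with a toy join in plain Python (not Spark internals; the table names and `join_with_pruning` are illustrative): the filter on the small dimension side is evaluated first, and only the matching partitions of the large fact side are ever scanned.

```python
# Toy sketch of dynamic partition pruning: the fact table is partitioned
# by 'region'; instead of scanning every partition, we first evaluate the
# filter on the small dimension table and scan only matching partitions.
fact_partitions = {
    "EU":   [("EU", 10), ("EU", 20)],
    "US":   [("US", 30)],
    "APAC": [("APAC", 40)],
}
dim = [("EU", "Europe"), ("US", "United States")]  # APAC was filtered out upstream

def join_with_pruning(fact_partitions, dim):
    keep = {region for region, _ in dim}      # runtime filter from the dim side
    scanned = sorted(r for r in keep if r in fact_partitions)
    rows = [row for region in scanned for row in fact_partitions[region]]
    return scanned, rows

scanned, rows = join_with_pruning(fact_partitions, dim)
print(scanned)  # ['EU', 'US'] – the APAC partition is never read
```

The APAC partition is skipped entirely, which is precisely the "skip over the data you do not need" behaviour the correct answer describes.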



Which of the following is one of the big performance advantages that Spark has over Hadoop?

  A. Spark achieves great performance by storing data in the DAG format, whereas Hadoop can only use parquet files.
  B. Spark achieves higher resiliency for queries since, different from Hadoop, it can be deployed on Kubernetes.
  C. Spark achieves great performance by storing data and performing computation in memory, whereas large jobs in Hadoop require a large amount of relatively slow disk I/O operations.
  D. Spark achieves great performance by storing data in the HDFS format, whereas Hadoop can only use parquet files.
  E. Spark achieves performance gains for developers by extending Hadoop's DataFrames with a user-friendly API.

Answer(s): C

Explanation:

Spark achieves great performance by storing data in the DAG format, whereas Hadoop can only use parquet files.
Wrong, there is no "DAG format". DAG stands for "directed acyclic graph". The DAG is a means of representing computational steps in Spark. However, it is true that Hadoop does not use a DAG.

The introduction of the DAG in Spark was a result of the limitation of Hadoop's map reduce framework in which data had to be written to and read from disk continuously.

More info: Graph DAG in Apache Spark - DataFlair
Spark achieves great performance by storing data in the HDFS format, whereas Hadoop can only use parquet files.
No. Spark can certainly store data in HDFS (as well as other formats), but this is not a key performance advantage over Hadoop. Hadoop can use multiple file formats, not only parquet.
Spark achieves higher resiliency for queries since, different from Hadoop, it can be deployed on Kubernetes.
No. The question asks about performance improvements, not resiliency. Moreover, both Hadoop and Spark can be deployed on Kubernetes.

Spark achieves performance gains for developers by extending Hadoop's DataFrames with a user-friendly API.
No. DataFrames are a concept in Spark, but not in Hadoop.
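The disk-I/O contrast behind the correct answer can be made concrete with a toy comparison in plain Python (a caricature of the two patterns, not real MapReduce or Spark code; all names are illustrative):

```python
import json, os, tempfile

# Toy contrast between the MapReduce pattern (write every intermediate
# result to disk and read it back for the next step) and Spark's
# in-memory pattern (keep the intermediate dataset in memory).
data = list(range(1000))
disk_reads = 0

def mapreduce_style(data, steps):
    """Each step writes its output to disk; the next step reads it back."""
    global disk_reads
    path = os.path.join(tempfile.mkdtemp(), "intermediate.json")
    for step in steps:
        data = [step(x) for x in data]
        with open(path, "w") as f:        # write intermediate result to disk
            json.dump(data, f)
        with open(path) as f:             # ...and read it back for the next step
            data = json.load(f)
            disk_reads += 1
    return data

def spark_style(data, steps):
    """Intermediate results simply stay in memory between steps."""
    for step in steps:
        data = [step(x) for x in data]
    return data

steps = [lambda x: x + 1, lambda x: x * 2]
assert mapreduce_style(data, steps) == spark_style(data, steps)
print(disk_reads)  # 2 – one disk round-trip per step
```

Both styles produce the same result, but the MapReduce-style pipeline pays one disk round-trip per step – the relatively slow disk I/O the correct answer refers to.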


