Databricks Databricks-Machine-Learning-Associate Exam
Databricks Certified Machine Learning Associate
Updated on: 21-Feb-2026

A machine learning engineer has created a Feature Table new_table using Feature Store Client fs.
When creating the table, they specified a metadata description with key information about the Feature Table. They now want to retrieve that metadata programmatically.
Which of the following lines of code will return the metadata description?

  A. There is no way to return the metadata description programmatically.
  B. fs.create_training_set("new_table")
  C. fs.get_table("new_table").description
  D. fs.get_table("new_table").load_df()
  E. fs.get_table("new_table")

Answer(s): C

Explanation:

To retrieve the metadata description of a feature table created with the Feature Store client (referred to here as fs), call get_table on the client with the table name and then access the description attribute of the returned object. The snippet fs.get_table("new_table").description does exactly this: it fetches the FeatureTable object for "new_table" and reads its description attribute, where the metadata description is stored. The other options either return the table object or a DataFrame, or create a training set, rather than the description itself.
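As an illustration, a minimal sketch of retrieving the description (assuming a Databricks runtime where the databricks-feature-store package is available and a feature table named "new_table" already exists):

    from databricks.feature_store import FeatureStoreClient

    fs = FeatureStoreClient()

    # get_table returns a FeatureTable object; its description attribute holds
    # the metadata description supplied when the table was created.
    table = fs.get_table("new_table")
    print(table.description)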


Reference:

Databricks Feature Store documentation (Accessing Feature Table Metadata).



A data scientist has a Spark DataFrame spark_df. They want to create a new Spark DataFrame that contains only the rows from spark_df where the value in column price is greater than 0.
Which of the following code blocks will accomplish this task?

  A. spark_df[spark_df["price"] > 0]
  B. spark_df.filter(col("price") > 0)
  C. SELECT * FROM spark_df WHERE price > 0
  D. spark_df.loc[spark_df["price"] > 0,:]
  E. spark_df.loc[:,spark_df["price"] > 0]

Answer(s): B

Explanation:

To filter rows in a Spark DataFrame based on a condition, use the filter method with a column expression. The correct PySpark syntax is spark_df.filter(col("price") > 0), which keeps only the rows where the value in the "price" column is greater than 0; the col function is used to build column-based expressions. The other options either are not the idiomatic Spark DataFrame API for this task or belong to other tools, such as the pandas .loc indexer or raw SQL, which would require registering the DataFrame as a temporary view first.
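For illustration, a minimal, self-contained sketch (the id/price values are made up for the example; the exam scenario assumes spark_df already exists):

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.getOrCreate()

    # Toy stand-in for spark_df
    spark_df = spark.createDataFrame([(1, 10.0), (2, -3.5), (3, 0.0)], ["id", "price"])

    # Keep only rows where price is strictly greater than 0
    filtered_df = spark_df.filter(col("price") > 0)
    filtered_df.show()  # only the id=1 row remains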


Reference:

PySpark DataFrame API documentation (Filtering DataFrames).



A health organization is developing a classification model to determine whether or not a patient currently has a specific type of infection. The organization's leaders want to maximize the number of positive cases identified by the model.
Which of the following classification metrics should be used to evaluate the model?

  A. RMSE
  B. Precision
  C. Area under the residual operating curve
  D. Accuracy
  E. Recall

Answer(s): E

Explanation:

When the goal is to maximize the number of positive cases the model identifies, the metric of interest is Recall. Recall, also known as sensitivity, measures the proportion of actual positives the model correctly identifies (the true positive rate). It is the right choice when missing a positive case (a false negative) has serious consequences, as in medical diagnostics. Precision and Accuracy measure other aspects of performance, and RMSE is a regression metric, so none of them directly targets maximizing the detection of positive cases.
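To make the definition concrete, a small illustration with scikit-learn (the labels below are invented for the example and are not part of the question):

    from sklearn.metrics import recall_score

    y_true = [1, 1, 1, 0, 0, 1]  # actual infection status
    y_pred = [1, 0, 1, 0, 0, 1]  # model predictions

    # 3 true positives and 1 false negative -> recall = 3 / (3 + 1) = 0.75
    print(recall_score(y_true, y_pred))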


Reference:

Classification Metrics in Machine Learning (Understanding Recall).



In which of the following situations is it preferable to impute missing feature values with their median value over the mean value?

  A. When the features are of the categorical type
  B. When the features are of the boolean type
  C. When the features contain a lot of extreme outliers
  D. When the features contain no outliers
  E. When the features contain no missing values

Answer(s): C

Explanation:

Imputing missing values with the median is often preferred over the mean in scenarios where the data contains a lot of extreme outliers. The median is a more robust measure of central tendency in such cases, as it is not as heavily influenced by outliers as the mean. Using the median ensures that the imputed values are more representative of the typical data point, thus preserving the integrity of the dataset's distribution. The other options are not specifically relevant to the question of handling outliers in numerical data.
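A quick numeric illustration of this robustness (the values are made up for the example):

    import numpy as np

    prices = np.array([10, 12, 11, 13, 9, 1000])  # one extreme outlier

    print(np.mean(prices))    # ~175.8, dragged upward by the outlier
    print(np.median(prices))  # 11.5, close to the typical value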


Reference:

Data Imputation Techniques (Dealing with Outliers).



A data scientist has replaced missing values in their feature set with each respective feature variable's median value. A colleague suggests that the data scientist is throwing away valuable information by doing this.
Which of the following approaches can they take to include as much information as possible in the feature set?

  A. Impute the missing values using each respective feature variable's mean value instead of the median value
  B. Refrain from imputing the missing values in favor of letting the machine learning algorithm determine how to handle them
  C. Remove all feature variables that originally contained missing values from the feature set
  D. Create a binary feature variable for each feature that contained missing values indicating whether each row's value has been imputed
  E. Create a constant feature variable for each feature that contained missing values indicating the percentage of rows from the feature that was originally missing

Answer(s): D

Explanation:

By creating a binary feature variable for each feature with missing values to indicate whether a value has been imputed, the data scientist can preserve information about the original state of the data. This approach maintains the integrity of the dataset by marking which values are original and which are synthetic (imputed). Here are the steps to implement this approach:
  1. Identify missing values: determine which features contain missing values.
  2. Impute missing values: continue with median imputation or choose another method (mean, mode, regression, etc.) to fill missing values.
  3. Create indicator variables: for each feature that had missing values, add a new binary feature that is 1 if the original value was missing and imputed, and 0 otherwise.
  4. Integrate the data: add these new binary features to the existing dataset. This keeps a record of where imputation occurred, allowing models to potentially weight those observations differently.
  5. Adjust the model: account for the new features, which might involve considering interactions between the binary indicators and other features.
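A minimal sketch of this pattern using scikit-learn's SimpleImputer, whose add_indicator=True option appends a binary missing-value indicator column for each feature that contained missing values (the toy matrix below is made up for the example):

    import numpy as np
    from sklearn.impute import SimpleImputer

    X = np.array([[1.0, 2.0],
                  [np.nan, 3.0],
                  [4.0, np.nan]])

    imputer = SimpleImputer(strategy="median", add_indicator=True)
    X_out = imputer.fit_transform(X)

    # Columns: imputed feature 1, imputed feature 2,
    # missing indicator for feature 1, missing indicator for feature 2
    print(X_out)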
Reference:

"Feature Engineering for Machine Learning" by Alice Zheng and Amanda Casari (O'Reilly Media, 2018), especially the sections on handling missing data.
Scikit-learn documentation on imputing missing values: https://scikit-learn.org/stable/modules/impute.html


