Databricks Certified Machine Learning Associate
Updated on: 02-Jan-2026

A machine learning engineer has created a Feature Table new_table using Feature Store Client fs.
When creating the table, they specified a metadata description with key information about the Feature Table. They now want to retrieve that metadata programmatically.
Which of the following lines of code will return the metadata description?

  A. There is no way to return the metadata description programmatically.
  B. fs.create_training_set("new_table")
  C. fs.get_table("new_table").description
  D. fs.get_table("new_table").load_df()
  E. fs.get_table("new_table")

Answer(s): C

Explanation:

To retrieve the metadata description of a feature table created with the Feature Store client (referred to here as fs), call get_table on the client with the table name as an argument, then access the description attribute of the returned object. The snippet fs.get_table("new_table").description does exactly this: it fetches the FeatureTable object for "new_table" and reads its description attribute, where the metadata is stored. Of the other options, B creates a training set, D loads the feature data itself, and E returns the table object without accessing the description; A is simply false.
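
As a quick illustration, a minimal sketch using the classic databricks.feature_store client (this runs only on a Databricks ML runtime; newer workspaces expose similar functionality through databricks.feature_engineering):

    from databricks.feature_store import FeatureStoreClient

    fs = FeatureStoreClient()

    # get_table() returns a FeatureTable object for "new_table"; its
    # description attribute holds the metadata supplied at creation time.
    table = fs.get_table("new_table")
    print(table.description)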


Reference:

Databricks Feature Store documentation (Accessing Feature Table Metadata).



A data scientist has a Spark DataFrame spark_df. They want to create a new Spark DataFrame that contains only the rows from spark_df where the value in column price is greater than 0.
Which of the following code blocks will accomplish this task?

  A. spark_df[spark_df["price"] > 0]
  B. spark_df.filter(col("price") > 0)
  C. SELECT * FROM spark_df WHERE price > 0
  D. spark_df.loc[spark_df["price"] > 0,:]
  E. spark_df.loc[:,spark_df["price"] > 0]

Answer(s): B

Explanation:

To filter rows in a Spark DataFrame based on a condition, use the filter method with a column expression. The correct PySpark syntax is spark_df.filter(col("price") > 0), which returns a new DataFrame containing only the rows where the value in the "price" column is greater than 0; the col function refers to a column by name in such expressions. Options D and E use pandas .loc indexing, which Spark DataFrames do not provide, and option C is a raw SQL statement that would only run via spark.sql() against a registered temporary view. Option A's bracket indexing happens to work in recent PySpark versions, but it is pandas-style; filter (or its alias where) is the idiomatic Spark method.
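
For reference, a minimal runnable sketch (the contents of spark_df below are invented for illustration):

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.getOrCreate()
    spark_df = spark.createDataFrame(
        [("a", 10.0), ("b", -2.5), ("c", 0.0)],
        ["id", "price"],
    )

    # filter() returns a new DataFrame containing only the rows that
    # satisfy the column condition; the original DataFrame is unchanged.
    positive_df = spark_df.filter(col("price") > 0)
    positive_df.show()  # only the ("a", 10.0) row remains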


Reference:

PySpark DataFrame API documentation (Filtering DataFrames).



A health organization is developing a classification model to determine whether or not a patient currently has a specific type of infection. The organization's leaders want to maximize the number of positive cases identified by the model.
Which of the following classification metrics should be used to evaluate the model?

  A. RMSE
  B. Precision
  C. Area under the residual operating curve
  D. Accuracy
  E. Recall

Answer(s): E

Explanation:

When the goal is to maximize the identification of positive cases in a classification task, the metric of interest is Recall. Recall, also known as sensitivity or the true positive rate, measures the proportion of actual positives that the model correctly identifies. It is crucial in scenarios where missing a positive case (a false negative) has serious consequences, as in medical diagnostics. Of the other options, RMSE is a regression metric and does not apply to classification; Precision and Accuracy can both be high even when many positive cases are missed; and "residual operating curve" is not a standard term (the standard curve is the receiver operating characteristic).
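
To make the definition concrete, a small sketch using scikit-learn (an assumption; the question names no library):

    from sklearn.metrics import recall_score

    y_true = [1, 1, 1, 0, 0]  # three actual positives
    y_pred = [1, 1, 0, 0, 0]  # the model catches two of them

    # Recall = TP / (TP + FN) = 2 / (2 + 1)
    print(recall_score(y_true, y_pred))  # 0.666...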


Reference:

Classification Metrics in Machine Learning (Understanding Recall).



In which of the following situations is it preferable to impute missing feature values with their median value over the mean value?

  A. When the features are of the categorical type
  B. When the features are of the boolean type
  C. When the features contain a lot of extreme outliers
  D. When the features contain no outliers
  E. When the features contain no missing values

Answer(s): C

Explanation:

Imputing missing values with the median is preferred over the mean when the data contains many extreme outliers. The median is a more robust measure of central tendency in such cases because, unlike the mean, it is not pulled toward the outliers. Using the median therefore keeps the imputed values representative of a typical data point and better preserves the shape of the dataset's distribution. The other options do not bear on the mean-versus-median choice: categorical and boolean features are usually imputed with the mode, and when there are no outliers (or no missing values at all) the two estimators behave similarly.
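
A quick numeric illustration (the values are invented for the example):

    import numpy as np

    values = np.array([1.0, 2.0, 2.0, 3.0, 1000.0])  # one extreme outlier

    print(np.mean(values))    # 201.6 -- dragged toward the outlier
    print(np.median(values))  # 2.0   -- robust to the outlier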


Reference:

Data Imputation Techniques (Dealing with Outliers).



A data scientist has replaced missing values in their feature set with each respective feature variable's median value. A colleague suggests that the data scientist is throwing away valuable information by doing this.
Which of the following approaches can they take to include as much information as possible in the feature set?

  A. Impute the missing values using each respective feature variable's mean value instead of the median value
  B. Refrain from imputing the missing values in favor of letting the machine learning algorithm determine how to handle them
  C. Remove all feature variables that originally contained missing values from the feature set
  D. Create a binary feature variable for each feature that contained missing values indicating whether each row's value has been imputed
  E. Create a constant feature variable for each feature that contained missing values indicating the percentage of rows from the feature that was originally missing

Answer(s): D

Explanation:

By creating a binary feature variable for each feature with missing values to indicate whether a value has been imputed, the data scientist preserves information about the original state of the data: the dataset records which values are original and which are synthetic (imputed). The steps to implement this approach are listed below, followed by a short code sketch.

  1. Identify missing values: determine which features contain missing values.
  2. Impute missing values: continue with median imputation, or choose another method (mean, mode, regression, etc.) to fill the gaps.
  3. Create indicator variables: for each feature that had missing values, add a new binary feature that is 1 if the original value was missing and imputed, and 0 otherwise.
  4. Integrate the data: add these binary features to the existing dataset. This keeps a record of where imputation occurred, allowing models to potentially weight those observations differently.
  5. Adjust the model: account for the new features, which might involve considering interactions between the binary indicators and the original features.
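
A minimal sketch of steps 2 and 3 using scikit-learn's SimpleImputer, whose add_indicator flag appends exactly such binary columns (the data below is invented for illustration):

    import numpy as np
    from sklearn.impute import SimpleImputer

    X = np.array([[25.0], [np.nan], [40.0], [np.nan]])

    # strategy="median" fills gaps with the column median (32.5 here);
    # add_indicator=True appends a binary column that is 1 wherever the
    # original value was missing.
    imputer = SimpleImputer(strategy="median", add_indicator=True)
    print(imputer.fit_transform(X))
    # [[25.   0. ]
    #  [32.5  1. ]
    #  [40.   0. ]
    #  [32.5  1. ]]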

Reference:

"Feature Engineering for Machine Learning" by Alice Zheng and Amanda Casari (O'Reilly Media, 2018), especially the sections on handling missing data.
Scikit-learn documentation on imputing missing values: https://scikit-learn.org/stable/modules/impute.html


