Cloudera CCA175 Exam (page: 3)
Cloudera CCA Spark and Hadoop Developer Exam
Updated on: 25-Dec-2025

Viewing Page 3 of 21

Problem Scenario 71 :
Write a Spark script in Python that reads a file "Content.txt" (on HDFS) with the following content.
Then split each row into (key, value), where the key is the first word in the line and the entire line is the value.
Filter out the empty lines.
Save these key-value pairs in "problem86" as a sequence file (on HDFS).
Part 2 : Save as a sequence file where the key is null and the entire line is the value. Read back the stored sequence files.
Content.txt
Hello this is ABCTECH.com
This is XYZTECH.com
Apache Spark Training
This is Spark Learning Session

Spark is faster than MapReduce

  1. See the explanation for Step by Step Solution and configuration.

Answer(s): A

Explanation:

Solution :
Step 1:
# Import SparkContext and SparkConf
from pyspark import SparkContext, SparkConf

Step 2:
# load data from HDFS
contentRDD = sc.textFile("Content.txt")

Step 3:
# filter out the empty lines
nonempty_lines = contentRDD.filter(lambda x: len(x) > 0)

Step 4:
# Split each line on the first space (Remember: it is mandatory to convert the result into a tuple)
words = nonempty_lines.map(lambda x: tuple(x.split(' ', 1)))
words.saveAsSequenceFile("problem86")

Step 5: Check contents in directory problem86.
hdfs dfs -cat problem86/part*
Step 6: Create key, value pair (where key is null)
nonempty_lines.map(lambda line: (None, line)).saveAsSequenceFile("problem86_1")
Step 7: Read back the sequence file data using Spark.
seqRDD = sc.sequenceFile("problem86_1")
Step 8: Print the content to validate the same.
for line in seqRDD.collect():
print(line)
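
Putting the steps together: the snippets above assume the pyspark shell (where sc already exists), so here is a minimal end-to-end sketch of the same logic for spark-submit; the explicit SparkContext setup is the only addition.

# consolidated sketch of Steps 1-8; assumes "Content.txt" exists on HDFS
from pyspark import SparkContext, SparkConf

conf = SparkConf().setAppName("problem86")
sc = SparkContext(conf=conf)

contentRDD = sc.textFile("Content.txt")

# keep only the non-empty lines
nonempty_lines = contentRDD.filter(lambda x: len(x) > 0)

# (first word, rest of line) pairs; tuple() is required by saveAsSequenceFile
words = nonempty_lines.map(lambda x: tuple(x.split(' ', 1)))
words.saveAsSequenceFile("problem86")

# Part 2: null key, entire line as value
nonempty_lines.map(lambda line: (None, line)).saveAsSequenceFile("problem86_1")

# read the second sequence file back and print it
seqRDD = sc.sequenceFile("problem86_1")
for line in seqRDD.collect():
    print(line)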



Problem Scenario 12 : You have been given the following MySQL database details as well as other info.
user=retail_dba
password=cloudera
database=retail_db
jdbc URL = jdbc:mysql://quickstart:3306/retail_db
Please accomplish the following.

1. Create a table in retail_db with the following definition.
CREATE table departments_new (department_id int(11), department_name varchar(45),
created_date TIMESTAMP DEFAULT NOW());
2. Now insert records from the departments table into departments_new.
3. Now import data from the departments_new table to HDFS.
4. Insert the following 5 records into the departments_new table.
Insert into departments_new values(110, "Civil", null);
Insert into departments_new values(111, "Mechanical", null);
Insert into departments_new values(112, "Automobile", null);
Insert into departments_new values(113, "Pharma", null);
Insert into departments_new values(114, "Social Engineering", null);
5. Now do the incremental import based on the created_date column.

  1. See the explanation for Step by Step Solution and configuration.

Answer(s): A

Explanation:

Solution :
Step 1: Log in to the MySQL database.
mysql --user=retail_dba --password=cloudera
show databases;
use retail_db;
show tables;
Step 2: Create a table as given in the problem statement.
CREATE table departments_new (department_id int(11), department_name varchar(45), created_date TIMESTAMP DEFAULT NOW());
show tables;
Step 3: Insert records from the departments table into departments_new.
insert into departments_new select a.*, null from departments a;
Step 4: Import data from the departments_new table to HDFS.
sqoop import \
--connect jdbc:mysql://quickstart:3306/retail_db \
--username=retail_dba \
--password=cloudera \
--table departments_new \
--target-dir /user/cloudera/departments_new \
--split-by department_id
Step 5: Check the imported data.
hdfs dfs -cat /user/cloudera/departments_new/part*
Step 6: Insert the following 5 records into the departments_new table.
Insert into departments_new values(110, "Civil", null);
Insert into departments_new values(111, "Mechanical", null);
Insert into departments_new values(112, "Automobile", null);
Insert into departments_new values(113, "Pharma", null);
Insert into departments_new values(114, "Social Engineering", null);
commit;
Step 7: Import the incremental data based on the created_date column.
sqoop import \
--connect jdbc:mysql://quickstart:3306/retail_db \
--username=retail_dba \
--password=cloudera \
--table departments_new \
--target-dir /user/cloudera/departments_new \
--append \
--check-column created_date \
--incremental lastmodified \
--split-by department_id \
--last-value "2016-01-30 12:07:37.0"
Step 8: Check the imported value.
hdfs dfs -cat /user/cloudera/departments_new/part*
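
If you want to validate the import from Spark instead of the HDFS shell, a minimal sketch from the pyspark shell (sc assumed to exist; Sqoop's default field delimiter is a comma):

imported = sc.textFile("/user/cloudera/departments_new")
# each record is department_id,department_name,created_date
for rec in imported.collect():
    print(rec)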



Problem Scenario 29 : Please accomplish the following exercises using HDFS command line options.

1. Create a directory in hdfs named hdfs_commands.
2. Create a file in hdfs named data.txt in hdfs_commands.
3. Now copy this data.txt file to the local filesystem; however, while copying the file please
make sure the file properties (e.g. file permissions) are not changed.
4. Now create a file in the local directory named data_local.txt and move this file to HDFS in
the hdfs_commands directory.
5. Create a file data_hdfs.txt in the hdfs_commands directory and copy it to the local file system.
6. Create a file in the local filesystem named file1.txt and put it into HDFS.

  1. See the explanation for Step by Step Solution and configuration.

Answer(s): A

Explanation:

Solution :
Step 1: Create directory
hdfs dfs -mkdir hdfs_commands
Step 2: Create a file in hdfs named data.txt in hdfs_commands.
hdfs dfs -touchz hdfs_commands/data.txt
Step 3: Now copy this data.txt file to the local filesystem; while copying, make sure the file properties (e.g. file permissions) are not changed.
hdfs dfs -copyToLocal -p hdfs_commands/data.txt /home/cloudera/Desktop/HadoopExam
Step 4: Now create a file in the local directory named data_local.txt and move this file to HDFS in the hdfs_commands directory.
touch data_local.txt
hdfs dfs -moveFromLocal /home/cloudera/Desktop/HadoopExam/data_local.txt hdfs_commands/
Step 5: Create a file data_hdfs.txt in the hdfs_commands directory and copy it to the local file system.
hdfs dfs -touchz hdfs_commands/data_hdfs.txt
hdfs dfs -get hdfs_commands/data_hdfs.txt /home/cloudera/Desktop/HadoopExam/
Step 6: Create a file in the local filesystem named file1.txt and put it into HDFS.
touch file1.txt
hdfs dfs -put /home/cloudera/Desktop/HadoopExam/file1.txt hdfs_commands/
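
The same sequence can also be scripted; a minimal Python sketch using subprocess, assuming the hdfs client is on the PATH and the same local paths as above (shown for the first three steps only):

import subprocess

# run an hdfs dfs command; raises CalledProcessError on a non-zero exit code
def hdfs(*args):
    subprocess.check_call(["hdfs", "dfs"] + list(args))

hdfs("-mkdir", "hdfs_commands")
hdfs("-touchz", "hdfs_commands/data.txt")
hdfs("-copyToLocal", "-p", "hdfs_commands/data.txt", "/home/cloudera/Desktop/HadoopExam")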



Problem Scenario 86 : In continuation of the previous question, please accomplish the following activities.

1. Select the maximum, minimum, average, standard deviation, and total quantity.
2. Select the minimum and maximum price for each product code.
3. Select the maximum, minimum, average, standard deviation, and total quantity for each
product code; however, make sure average and standard deviation have at most two
decimal places.
4. Select all the product codes and average price only where the product count is greater
than or equal to 3.
5. Select the maximum, minimum, average and total of all the products for each code. Also
produce the same across all the products.

  1. See the explanation for Step by Step Solution and configuration.

Answer(s): A

Explanation:

Solution :
Step 1: Select the maximum, minimum, average, standard deviation, and total quantity.
val results = sqlContext.sql("""SELECT MAX(price) AS MAX, MIN(price) AS MIN, AVG(price) AS Average, STD(price) AS STD, SUM(quantity) AS total_products FROM products""")
results.show()
Step 2: Select the minimum and maximum price for each product code.
val results = sqlContext.sql("""SELECT code, MAX(price) AS `Highest Price`, MIN(price) AS `Lowest Price`
FROM products GROUP BY code""")
results.show()
Step 3: Select the maximum, minimum, average, standard deviation, and total quantity for each product code; make sure average and standard deviation have at most two decimal places.
val results = sqlContext.sql("""SELECT code, MAX(price), MIN(price), CAST(AVG(price) AS DECIMAL(7, 2)) AS `Average`, CAST(STD(price) AS DECIMAL(7, 2)) AS `Std Dev`, SUM(quantity) FROM products
GROUP BY code""")
results.show()
Step 4: Select all the product codes and average price only where the product count is greater than or equal to 3.
val results = sqlContext.sql("""SELECT code AS `Product Code`, COUNT(*) AS `Count`,
CAST(AVG(price) AS DECIMAL(7, 2)) AS `Average` FROM products GROUP BY code HAVING `Count` >= 3""")
results.show()
Step 5: Select the maximum, minimum, average and total of all the products for each code.
Also produce the same across all the products.
val results = sqlContext.sql("""SELECT
code,
MAX(price),
MIN(price),
CAST(AVG(price) AS DECIMAL(7, 2)) AS `Average`,
SUM(quantity)
FROM products
GROUP BY code
WITH ROLLUP""")
results.show()
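
The rollup in Step 5 can also be expressed with the DataFrame API instead of SQL; a minimal PySpark sketch, assuming a DataFrame named products_df with code, price and quantity columns (that name is an assumption, not something defined above):

from pyspark.sql import functions as F

# per-code aggregates plus a grand-total row (code = null), equivalent to WITH ROLLUP
results = products_df.rollup("code").agg(
    F.max("price"),
    F.min("price"),
    F.round(F.avg("price"), 2).alias("Average"),
    F.sum("quantity"))
results.show()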



Problem Scenario 9 : You have been given the following MySQL database details as well as other info.
user=retail_dba
password=cloudera
database=retail_db
jdbc URL = jdbc:mysql://quickstart:3306/retail_db
Please accomplish the following.

1. Import the departments table in a directory.
2. Again import the departments table into the same directory (however, the directory already
exists, hence it should not override but append the results).
3. Also make sure your result fields are terminated by '|' and lines terminated by '\n'.

  1. See the explanation for Step by Step Solution and configuration.

Answer(s): A

Explanation:

Solution:
Step 1: Clean the HDFS filesystem; if these directories exist, remove them.
hadoop fs -rm -R departments
hadoop fs -rm -R categories
hadoop fs -rm -R products
hadoop fs -rm -R orders
hadoop fs -rm -R order_items
hadoop fs -rm -R customers
Step 2: Now import the departments table as per the requirement.
sqoop import \
--connect jdbc:mysql://quickstart:3306/retail_db \
--username=retail_dba \
--password=cloudera \
--table departments \
--target-dir departments \
--fields-terminated-by '|' \
--lines-terminated-by '\n' \
-m 1
Step 3: Check the imported data.
hdfs dfs -ls departments
hdfs dfs -cat departments/part-m-00000
Step 4: Now import the data again; it needs to be appended.
sqoop import \
--connect jdbc:mysql://quickstart:3306/retail_db \
--username=retail_dba \
--password=cloudera \
--table departments \
--target-dir departments \
--append \
--fields-terminated-by '|' \
--lines-terminated-by '\n' \
-m 1
Step 5: Again check the results.
hdfs dfs -ls departments
hdfs dfs -cat departments/part-m-00001
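
Since the fields are '|'-terminated, a quick sanity check from the pyspark shell (sc assumed to exist) is to split each imported record:

# departments has two columns: department_id and department_name
departments = sc.textFile("departments").map(lambda line: line.split("|"))
for dept_id, name in departments.collect():
    print(dept_id, name)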


