Cloudera CCA175 Exam (page: 1)
Cloudera CCA Spark and Hadoop Developer Exam
Updated on: 25-Dec-2025


CORRECT TEXT
Problem Scenario 28: You need to implement a near-real-time solution that collects information as soon as it is submitted in files, using the data below.
echo "IBM, 100, 20160104" >> /tmp/spooldir2/.bb.txt
echo "IBM, 103, 20160105" >> /tmp/spooldir2/.bb.txt
mv /tmp/spooldir2/.bb.txt /tmp/spooldir2/bb.txt
After a few minutes:
echo "IBM, 100.2, 20160104" >> /tmp/spooldir2/.dr.txt
echo "IBM, 103.1, 20160105" >> /tmp/spooldir2/.dr.txt
mv /tmp/spooldir2/.dr.txt /tmp/spooldir2/dr.txt
You have been given the directory location /tmp/spooldir2 (create it if it does not exist).
As soon as a file is committed in this directory, it must be available in HDFS at both /tmp/flume/primary and /tmp/flume/secondary.
However, note that /tmp/flume/secondary is optional: if a transaction writing to that location fails, it need not be rolled back.
Write a Flume configuration file named flume8.conf and use it to load the data into HDFS with the following additional properties.

1. Spool the /tmp/spooldir2 directory
2. The file prefix in HDFS should be events
3. The file suffix should be .log
4. If a file is not yet committed and still in use, it should have _ as a prefix
5. Data should be written to HDFS as text

  1. See the explanation for Step by Step Solution and configuration.

Answer(s): A

Explanation:

Solution:
Step 1: Create the directory.
mkdir /tmp/spooldir2
Step 2: Create a Flume configuration file with the below configuration for source, sinks, and channels, and save it as flume8.conf.
agent1.sources = source1
agent1.sinks = sink1a sink1b
agent1.channels = channel1a channel1b
agent1.sources.source1.channels = channel1a channel1b
agent1.sources.source1.selector.type = replicating
agent1.sources.source1.selector.optional = channel1b
agent1.sinks.sink1a.channel = channel1a
agent1.sinks.sink1b.channel = channel1b
agent1.sources.source1.type = spooldir
agent1.sources.source1.spoolDir = /tmp/spooldir2
agent1.sinks.sink1a.type = hdfs
agent1.sinks.sink1a.hdfs.path = /tmp/flume/primary
agent1.sinks.sink1a.hdfs.filePrefix = events
agent1.sinks.sink1a.hdfs.fileSuffix = .log
agent1.sinks.sink1a.hdfs.fileType = DataStream
agent1.sinks.sink1b.type = hdfs
agent1.sinks.sink1b.hdfs.path = /tmp/flume/secondary
agent1.sinks.sink1b.hdfs.filePrefix = events
agent1.sinks.sink1b.hdfs.fileSuffix = .log
agent1.sinks.sink1b.hdfs.fileType = DataStream
# Requirement 4: files that are still being written get an underscore prefix
agent1.sinks.sink1a.hdfs.inUsePrefix = _
agent1.sinks.sink1b.hdfs.inUsePrefix = _
agent1.channels.channel1a.type = file
agent1.channels.channel1b.type = memory
Step 3: Run the below command, which will use this configuration file to append data to HDFS.
Start the Flume agent:
flume-ng agent --conf /home/cloudera/flumeconf --conf-file /home/cloudera/flumeconf/flume8.conf --name agent1
Step 4: Open another terminal and create a file in /tmp/spooldir2/
echo "IBM, 100, 20160104" » /tmp/spooldir2/.bb.txt
echo "IBM, 103, 20160105" » /tmp/spooldir2/.bb.txt mv /tmp/spooldir2/.bb.txt /tmp/spooldir2/bb.txt
After few mins
echo "IBM.100.2, 20160104" »/tmp/spooldir2/.dr.txt
echo "IBM, 103.1, 20160105" » /tmp/spooldir2/.dr.txt mv /tmp/spooldir2/.dr.txt /tmp/spooldir2/dr.txt



Problem Scenario 83: In continuation of the previous question, please accomplish the following activities.

1. Select all the records with quantity >= 5000 and name starting with 'Pen'
2. Select all the records with quantity >= 5000, price less than 1.24, and name starting with 'Pen'
3. Select all the records which do NOT have (quantity >= 5000 and name starting with 'Pen')
4. Select all the products whose name is 'Pen Red' or 'Pen Black'
5. Select all the products with price BETWEEN 1.0 AND 2.0 AND quantity BETWEEN 1000 AND 2000

  1. See the explanation for Step by Step Solution and configuration.

Answer(s): A

Explanation:

Solution:
Step 1: Select all the records with quantity >= 5000 and name starting with 'Pen'.
val results = sqlContext.sql("""SELECT * FROM products WHERE quantity >= 5000 AND name LIKE 'Pen %'""")
results.show()
Step 2: Select all the records with quantity >= 5000, price less than 1.24, and name starting with 'Pen'.
val results = sqlContext.sql("""SELECT * FROM products WHERE quantity >= 5000 AND price < 1.24 AND name LIKE 'Pen %'""")
results.show()
Step 3: Select all the records which do NOT have (quantity >= 5000 and name starting with 'Pen').
val results = sqlContext.sql("""SELECT * FROM products WHERE NOT (quantity >= 5000 AND name LIKE 'Pen %')""")
results.show()
Step 4: Select all the products whose name is 'Pen Red' or 'Pen Black'.
val results = sqlContext.sql("""SELECT * FROM products WHERE name IN ('Pen Red', 'Pen Black')""")
results.show()
Step 5: Select all the products with price BETWEEN 1.0 AND 2.0 AND quantity BETWEEN 1000 AND 2000.
val results = sqlContext.sql("""SELECT * FROM products WHERE (price BETWEEN 1.0 AND 2.0) AND (quantity BETWEEN 1000 AND 2000)""")
results.show()
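The same predicates can also be expressed through the DataFrame API rather than SQL strings. A minimal sketch, assuming the products table is registered in the Hive metastore as in the previous exercise:
val df = sqlContext.table("products")
// filter() accepts a SQL expression string with the same semantics as the WHERE clauses above
df.filter("quantity >= 5000 AND name LIKE 'Pen %'").show()
df.filter("NOT (quantity >= 5000 AND name LIKE 'Pen %')").show()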



Problem Scenario 82: You have been given a table in Hive with the following structure (which you created in a previous exercise).
productid int
code string
name string
quantity int
price float
Using SparkSQL, accomplish the following activities.

1. Select the name and quantity of all products having quantity <= 2000
2. Select the name and price of the product having code 'PEN'
3. Select all the products whose name starts with 'PENCIL'
4. Select all products whose name begins with 'P', followed by any two characters, followed by a space, followed by zero or more characters

  1. See the explanation for Step by Step Solution and configuration.

Answer(s): A

Explanation:

Solution:
Step 1: Copy the following file (a mandatory step on the Cloudera QuickStart VM) if you have not already done so.
sudo su root
cp /usr/lib/hive/conf/hive-site.xml /usr/lib/spark/conf/
Step 2: Now start spark-shell.
Step 3: Select the name and quantity of all products having quantity <= 2000.
val results = sqlContext.sql("""SELECT name, quantity FROM products WHERE quantity <= 2000""")
results.show()
Step 4: Select the name and price of the product having code 'PEN'.
val results = sqlContext.sql("""SELECT name, price FROM products WHERE code = 'PEN'""")
results.show()
Step 5: Select all the products whose name starts with 'PENCIL'.
val results = sqlContext.sql("""SELECT name, price FROM products WHERE upper(name) LIKE 'PENCIL%'""")
results.show()
Step 6: Select all products whose name begins with 'P', followed by any two characters, followed by a space, followed by zero or more characters. In a SQL LIKE pattern, '_' matches exactly one character and '%' matches zero or more, so the pattern is 'P__ %'.
val results = sqlContext.sql("""SELECT name, price FROM products WHERE name LIKE 'P__ %'""")
results.show()
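Before running the queries, you can confirm that spark-shell actually sees the Hive metastore (if the hive-site.xml copy in Step 1 was skipped, the table will not be found). A quick sanity check:
sqlContext.sql("SHOW TABLES").show()
val df = sqlContext.table("products")
df.printSchema()  // should list productid, code, name, quantity, price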



Problem Scenario 20: You have been given a MySQL DB with the following details.
user=retail_dba
password=cloudera
database=retail_db
table=retail_db.categories
jdbc URL = jdbc:mysql://quickstart:3306/retail_db
Please accomplish the following activities.

1. Write a Sqoop job which will import the "retail_db.categories" table to HDFS, into a directory named "categories_targetJob".

  1. See the explanation for Step by Step Solution and configuration.

Answer(s): A

Explanation:

Solution:
Step 1: Connect to the existing MySQL database.
mysql --user=retail_dba --password=cloudera retail_db
Step 2: Show all the available tables.
show tables;
Step 3: Below is the command to create the Sqoop job (note that the space between -- and import is mandatory).
sqoop job --create sqoopjob \
-- import \
--connect "jdbc:mysql://quickstart:3306/retail_db" \
--username retail_dba \
--password cloudera \
--table categories \
--target-dir categories_targetJob \
--fields-terminated-by '|' \
--lines-terminated-by '\n'
Step 4: List all the Sqoop jobs.
sqoop job --list
Step 5: Show the details of the Sqoop job.
sqoop job --show sqoopjob
Step 6: Execute the Sqoop job.
sqoop job --exec sqoopjob
Step 7: Check the output of the import job.
hdfs dfs -ls categories_targetJob
hdfs dfs -cat categories_targetJob/part*
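A saved Sqoop job does not store the database password by default, so sqoop job --exec will typically prompt for it interactively. If you need to redefine the job, delete the old definition first:
sqoop job --delete sqoopjob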



Problem Scenario 59: You have been given the below code snippet.
val x = sc.parallelize(1 to 20)
val y = sc.parallelize(10 to 30)
operation1
z.collect
Write a correct code snippet for operation1 which will produce the desired output, shown below.
Array[Int] = Array(16, 12, 20, 13, 17, 14, 18, 10, 19, 15, 11)

  1. See the explanation for Step by Step Solution and configuration.

Answer(s): A

Explanation:

Solution:
val z = x.intersection(y)
intersection: returns the elements common to both RDDs. The result is deduplicated, and element order is not guaranteed.
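A minimal worked example of the same operation with small illustrative values (run in spark-shell):
val a = sc.parallelize(Seq(1, 2, 3, 3, 4))
val b = sc.parallelize(Seq(3, 4, 5))
a.intersection(b).collect()  // Array(3, 4) -- duplicates removed, order not guaranteed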


