Cloudera CCA175 Exam (page: 1)
Cloudera CCA Spark and Hadoop Developer Exam
Updated on: 07-Nov-2025


Problem Scenario 28: You need to implement a near-real-time solution for collecting information as it is submitted in files, with the data below.
Data
echo "IBM, 100, 20160104" >> /tmp/spooldir2/.bb.txt
echo "IBM, 103, 20160105" >> /tmp/spooldir2/.bb.txt
mv /tmp/spooldir2/.bb.txt /tmp/spooldir2/bb.txt
After a few minutes
echo "IBM, 100.2, 20160104" >> /tmp/spooldir2/.dr.txt
echo "IBM, 103.1, 20160105" >> /tmp/spooldir2/.dr.txt
mv /tmp/spooldir2/.dr.txt /tmp/spooldir2/dr.txt
You have been given the directory location /tmp/spooldir2 (create it if it does not exist).
As soon as a file is committed in this directory, it needs to be available in HDFS in both the /tmp/flume/primary and /tmp/flume/secondary locations.
However, note that /tmp/flume/secondary is optional: if a transaction that writes to this directory fails, it need not be rolled back.
Write a Flume configuration file named flume8.conf and use it to load the data into HDFS with the following additional properties.

1. Spool the /tmp/spooldir2 directory
2. The file prefix in HDFS should be events
3. The file suffix should be .log
4. If a file is not yet committed and still in use, it should have _ as a prefix
5. Data should be written to HDFS as text

A. See the explanation for Step by Step Solution and configuration.

Answer(s): A

Explanation:

Solution:
Step 1: Create the directory: mkdir /tmp/spooldir2
Step 2: Create a Flume configuration file with the configuration below for the source, sinks and channels, and save it as flume8.conf.
agent1.sources = source1
agent1.sinks = sink1a sink1b
agent1.channels = channel1a channel1b
agent1.sources.source1.channels = channel1a channel1b
agent1.sources.source1.selector.type = replicating
agent1.sources.source1.selector.optional = channel1b
agent1.sinks.sink1a.channel = channel1a
agent1.sinks.sink1b.channel = channel1b
agent1.sources.source1.type = spooldir
agent1.sources.source1.spoolDir = /tmp/spooldir2
agent1.sinks.sink1a.type = hdfs
agent1.sinks.sink1a.hdfs.path = /tmp/flume/primary
agent1.sinks.sink1a.hdfs.filePrefix = events
agent1.sinks.sink1a.hdfs.fileSuffix = .log
agent1.sinks.sink1a.hdfs.fileType = DataStream
agent1.sinks.sink1b.type = hdfs
agent1.sinks.sink1b.hdfs.path = /tmp/flume/secondary
agent1.sinks.sink1b.hdfs.filePrefix = events
agent1.sinks.sink1b.hdfs.fileSuffix = .log
agent1.sinks.sink1b.hdfs.fileType = DataStream
agent1.channels.channel1a.type = file
agent1.channels.channel1b.type = memory
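Requirement 4 (files that are still open should carry an _ prefix) is not covered by the properties above. A minimal addition, assuming the standard HDFS sink property hdfs.inUsePrefix (which defaults to empty), would be:
agent1.sinks.sink1a.hdfs.inUsePrefix = _
agent1.sinks.sink1b.hdfs.inUsePrefix = _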
Step 3: Run the command below, which will use this configuration file and append data to HDFS.
Start the Flume agent:
flume-ng agent --conf /home/cloudera/flumeconf --conf-file /home/cloudera/flumeconf/flume8.conf --name agent1
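If you want to watch events being consumed as the files arrive, the same command can optionally be started with console logging (an extra flag, not required by the scenario):
flume-ng agent --conf /home/cloudera/flumeconf --conf-file /home/cloudera/flumeconf/flume8.conf --name agent1 -Dflume.root.logger=INFO,console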
Step 4: Open another terminal and create the files in /tmp/spooldir2/
echo "IBM, 100, 20160104" >> /tmp/spooldir2/.bb.txt
echo "IBM, 103, 20160105" >> /tmp/spooldir2/.bb.txt
mv /tmp/spooldir2/.bb.txt /tmp/spooldir2/bb.txt
After a few minutes:
echo "IBM, 100.2, 20160104" >> /tmp/spooldir2/.dr.txt
echo "IBM, 103.1, 20160105" >> /tmp/spooldir2/.dr.txt
mv /tmp/spooldir2/.dr.txt /tmp/spooldir2/dr.txt



Problem Scenario 83: In continuation of the previous question, please accomplish the following activities.

1. Select all the records with quantity >= 5000 and name starting with 'Pen'
2. Select all the records with quantity >= 5000, price less than 1.24, and name starting with 'Pen'
3. Select all the records which do not have quantity >= 5000 and whose name does not start with 'Pen'
4. Select all the products whose name is 'Pen Red' or 'Pen Black'
5. Select all the products whose price is BETWEEN 1.0 AND 2.0 AND quantity BETWEEN 1000 AND 2000.

A. See the explanation for Step by Step Solution and configuration.

Answer(s): A

Explanation:

Solution:
Step 1: Select all the records with quantity >= 5000 and name starting with 'Pen'
val results = sqlContext.sql("""SELECT * FROM products WHERE quantity >= 5000 AND name LIKE 'Pen %'""")
results.show()
Step 2: Select all the records with quantity >= 5000, price less than 1.24, and name starting with 'Pen'
val results = sqlContext.sql("""SELECT * FROM products WHERE quantity >= 5000 AND price < 1.24 AND name LIKE 'Pen %'""")
results.show()
Step 3: Select all the records which do not have quantity >= 5000 and whose name does not start with 'Pen'
val results = sqlContext.sql("""SELECT * FROM products WHERE NOT (quantity >= 5000 AND name LIKE 'Pen %')""")
results.show()
Step 4: Select all the products whose name is 'Pen Red' or 'Pen Black'
val results = sqlContext.sql("""SELECT * FROM products WHERE name IN ('Pen Red', 'Pen Black')""")
results.show()
Step 5: Select all the products whose price is BETWEEN 1.0 AND 2.0 AND quantity BETWEEN 1000 AND 2000.
val results = sqlContext.sql("""SELECT * FROM products WHERE (price BETWEEN 1.0 AND 2.0) AND (quantity BETWEEN 1000 AND 2000)""")
results.show()
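For comparison, the same filters can also be written with the DataFrame API instead of raw SQL. A minimal sketch, assuming the products table is registered in the Hive metastore as in the related scenario:
// Load the Hive table as a DataFrame and apply the Step 1 and Step 4 filters
val productsDf = sqlContext.table("products")
productsDf.filter("quantity >= 5000 AND name LIKE 'Pen %'").show()
productsDf.filter("name IN ('Pen Red', 'Pen Black')").show()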



Problem Scenario 82: You have been given a table in Hive with the following structure (which you created in a previous exercise):
productid int, code string, name string, quantity int, price float
Using Spark SQL, accomplish the following activities.

1. Select all the product names and quantities having quantity <= 2000
2. Select the name and price of the product having code 'PEN'
3. Select all the products whose name starts with 'PENCIL'
4. Select all products whose name begins with 'P', followed by any two characters, followed by a space, followed by zero or more characters

A. See the explanation for Step by Step Solution and configuration.

Answer(s): A

Explanation:

Solution:
Step 1: Copy the following file (a mandatory step on the Cloudera QuickStart VM) if you have not already done so.
sudo su root
cp /usr/lib/hive/conf/hive-site.xml /usr/lib/spark/conf/
Step 2: Now start spark-shell
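Once the shell is up, the Hive tables should be visible from Spark SQL. A quick sanity check, assuming the hive-site.xml copy above succeeded:
sqlContext.sql("show tables").show()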
Step 3: Select all the product names and quantities having quantity <= 2000
val results = sqlContext.sql("""SELECT name, quantity FROM products WHERE quantity <= 2000""")
results.show()
Step 4: Select the name and price of the product having code 'PEN'
val results = sqlContext.sql("""SELECT name, price FROM products WHERE code = 'PEN'""")
results.show()
Step 5: Select all the products whose name starts with 'PENCIL'
val results = sqlContext.sql("""SELECT name, price FROM products WHERE upper(name) LIKE 'PENCIL%'""")
results.show()
Step 6: Select all products whose name begins with 'P', followed by any two characters, followed by a space, followed by zero or more characters
val results = sqlContext.sql("""SELECT name, price FROM products WHERE name LIKE 'P__ %'""")
results.show()
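In the LIKE pattern above, each underscore matches exactly one character and % matches zero or more characters, so 'P__ %' matches a name such as 'Pen Red' (P, two characters, a space, then the rest) but not 'Pencil' (the fourth character is not a space). This is standard SQL LIKE behaviour rather than anything Spark-specific.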



Problem Scenario 20: You have been given a MySQL DB with the following details.
user=retail_dba
password=cloudera
database=retail_db
table=retail_db.categories
jdbc URL = jdbc:mysql://quickstart:3306/retail_db
Please accomplish the following activities.

1. Write a Sqoop job which will import the "retail_db.categories" table to HDFS, into a directory named "categories_target_job".

A. See the explanation for Step by Step Solution and configuration.

Answer(s): A

Explanation:

Solution:
Step 1: Connect to the existing MySQL database
mysql --user=retail_dba --password=cloudera retail_db
Step 2: Show all the available tables
show tables;
Step 3: Below is the command to create the Sqoop job (please note that the space between -- and import is mandatory)
sqoop job --create sqoopjob \
-- import \
--connect "jdbc:mysql://quickstart:3306/retail_db" \
--username=retail_dba \
--password=cloudera \
--table categories \
--target-dir categories_target_job \
--fields-terminated-by '|' \
--lines-terminated-by '\n'
Step 4: List all the Sqoop jobs
sqoop job --list
Step 5: Show details of the Sqoop job
sqoop job --show sqoopjob
Step 6: Execute the Sqoop job
sqoop job --exec sqoopjob
Step 7: Check the output of the import job
hdfs dfs -ls categories_target_job
hdfs dfs -cat categories_target_job/part*
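Note that a saved Sqoop job does not record the database password by default, so sqoop job --exec will prompt for it interactively. If the job needs to run without a prompt, one option (an environment assumption, not part of the scenario) is to set sqoop.metastore.client.record.password to true in sqoop-site.xml before creating the job.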



Problem Scenario 59: You have been given the code snippet below.
val x = sc.parallelize(1 to 20)
val y = sc.parallelize(10 to 30)
operation1
z.collect
Write a correct code snippet for operation1 which will produce the desired output, shown below.
Array[Int] = Array(16, 12, 20, 13, 17, 14, 18, 10, 19, 15, 11)

A. See the explanation for Step by Step Solution and configuration.

Answer(s): A

Explanation:

Solution :
val z = x.intersection(y)
intersection: Returns an RDD containing only the elements that appear in both RDDs, with duplicates removed.
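Here x holds 1 to 20 and y holds 10 to 30, so the intersection is the values 10 through 20; they come back in arbitrary partition order, which is why the expected array above is unsorted. Sorting it for readability (an optional extra step, not required by the question):
val z = x.intersection(y)
z.collect.sorted
// Array(10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20)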





