Confluent CCDAK Exam (page: 2)
Confluent Certified Developer for Apache Kafka Certification Examination
Updated on: 25-Dec-2025

Viewing Page 2 of 31

A client connects to a broker in the cluster and sends a fetch request for a partition in a topic. It gets a NotLeaderForPartitionException in the response. How does the client handle this situation?

  A. Get the id of the broker hosting the leader replica from ZooKeeper and send the request to it
  B. Send a metadata request to the same broker for the topic and select the broker hosting the leader replica
  C. Send a metadata request to ZooKeeper for the topic and select the broker hosting the leader replica
  D. Send the fetch request to each broker in the cluster

Answer(s): B

Explanation:

If the consumer has stale leader information for a partition, it will issue a metadata request. The metadata request can be handled by any node, so afterwards the client knows which brokers are the designated leaders for the topic partitions. Produce and fetch requests can only be sent to the node hosting the partition leader.
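
The metadata the client refreshes is also exposed through the public client API. As a rough illustration only (the real retry path is handled internally by the client library), here is a minimal Java sketch that uses KafkaConsumer.partitionsFor() to issue a metadata request and print which broker leads each partition; the bootstrap address and topic name are placeholders:

    import java.util.Properties;

    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.PartitionInfo;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class LeaderLookup {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder address
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // partitionsFor() sends a metadata request; any broker can answer it
                for (PartitionInfo info : consumer.partitionsFor("logs")) { // placeholder topic
                    System.out.printf("partition %d -> leader %s%n",
                            info.partition(), info.leader());
                }
            }
        }
    }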



What is the risk of increasing max.in.flight.requests.per.connection while also enabling retries in a producer?

  A. At-least-once delivery is not guaranteed
  B. Message order is not preserved
  C. Reduced throughput
  D. Less resilience

Answer(s): B

Explanation:

Some messages may require multiple retries. If there is more than one request in flight, a retried batch may arrive after a later batch, so messages can be written out of order. Note that an exception to this rule is the producer setting enable.idempotence=true, which takes care of the ordering issue on its own.


Reference:

https://issues.apache.org/jira/browse/KAFKA-5494
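
The fix the explanation mentions can be shown as a hedged configuration sketch (the broker address and topic name are placeholders): the producer below keeps retries and several in-flight requests, but enables idempotence so per-partition ordering is preserved despite retries.

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class OrderSafeProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder address
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            // Risky combination on its own: with retries enabled and several
            // requests in flight, a retried batch can land after a later batch.
            props.put("retries", Integer.toString(Integer.MAX_VALUE));
            props.put("max.in.flight.requests.per.connection", "5");

            // Idempotence preserves per-partition ordering despite retries
            // (it requires acks=all and max.in.flight.requests.per.connection <= 5).
            props.put("enable.idempotence", "true");
            props.put("acks", "all");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("logs", "hello")); // placeholder topic
            }
        }
    }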



A Kafka producer application wants to send log messages to a topic without including any key. Which properties are mandatory in the producer configuration? (Select three)

  A. bootstrap.servers
  B. partition
  C. key.serializer
  D. value.serializer
  E. key
  F. value

Answer(s): A,C,D

Explanation:

bootstrap.servers is required so the producer can reach the cluster, and both key.serializer and value.serializer are mandatory, even when records are sent without a key.
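
A minimal sketch of such a producer, with only the three mandatory properties set (the broker address and topic name are placeholders); records are sent without a key:

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class KeylessLogProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            // The three mandatory producer settings:
            props.put("bootstrap.servers", "localhost:9092"); // placeholder address
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // No key is supplied, so records are spread across partitions
                // by the default partitioner.
                producer.send(new ProducerRecord<>("app-logs", "something happened")); // placeholder topic
            }
        }
    }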



To import data from external databases into Kafka, you should use

  A. Confluent REST Proxy
  B. Kafka Connect Sink
  C. Kafka Streams
  D. Kafka Connect Source

Answer(s): D

Explanation:

Kafka Connect Sink connectors export data from Kafka to external systems such as databases, while Kafka Connect Source connectors import data from external systems into Kafka.
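
As an illustration, a hedged configuration sketch for Confluent's JDBC source connector, which imports rows from a relational database into Kafka topics (the connection details, table, and topic prefix are placeholders, and the connector plugin must be installed separately):

    name=jdbc-source-example
    connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
    connection.url=jdbc:postgresql://db-host:5432/inventory
    connection.user=kafka
    connection.password=secret
    mode=incrementing
    incrementing.column.name=id
    table.whitelist=orders
    topic.prefix=db-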



You are running a Kafka Streams application in a Docker container managed by Kubernetes, and upon application restart it takes a long time for the container to replicate the state and get back to processing the data. How can you dramatically improve the application restart time?

  A. Mount a persistent volume for your RocksDB state
  B. Increase the number of partitions in your input topics
  C. Reduce the Streams caching property
  D. Increase the number of Streams threads

Answer(s): A

Explanation:

Although a Kafka Streams application can always rebuild its state from Kafka (the state is backed up in changelog topics), recovering it over the network can take a long time and considerable resources. To speed up recovery, it is advised to store the Kafka Streams state on a persistent volume, so that after a restart only the missing part of the state needs to be recovered.
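
A minimal Kafka Streams configuration sketch showing where the state directory is set; the application id, bootstrap address, and mount path are placeholders, and /data/kafka-streams is assumed to be the mount point of a Kubernetes persistent volume:

    import java.util.Properties;

    import org.apache.kafka.streams.StreamsConfig;

    public class StreamsStateDirConfig {
        public static Properties buildConfig() {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "log-aggregator"); // placeholder id
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");  // placeholder address
            // Point the RocksDB state directory at a path backed by a
            // persistent volume, so local state survives container restarts
            // and only the missing tail of the changelog is replayed.
            props.put(StreamsConfig.STATE_DIR_CONFIG, "/data/kafka-streams");
            return props;
        }
    }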


