Confluent CCDAK Exam
Confluent Certified Developer for Apache Kafka Certification Examination
Updated on: 25-Dec-2025


You have a Kafka cluster in which all topics have a replication factor of 3. An intern at your company stopped a broker and accidentally deleted all of that broker's data on disk. What will happen when the broker is restarted?

  1. The broker will start, and other topics will also be deleted, as the broker's data on disk was deleted
  2. The broker will start, and won't be online until all the data it needs to have is replicated from other leaders
  3. The broker will crash
  4. The broker will start and won't have any data. If the broker becomes leader, we have data loss

Answer(s): B

Explanation:

Kafka's replication mechanism makes it resilient to scenarios where a broker loses its data on disk: the restarted broker recovers by re-replicating its partitions from the other brokers, and it will not serve as an in-sync replica until it has caught up. This makes Kafka amazing!
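
As an illustration, here is a minimal Java AdminClient sketch that creates such a topic. The topic name "orders", the partition count, and the localhost bootstrap address are placeholders, not part of the question:

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreateReplicatedTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Placeholder address; point this at your own cluster.
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                // 3 partitions, replication factor 3: every partition lives on
                // three brokers, so one broker's lost disk can be rebuilt from
                // the remaining replicas.
                admin.createTopics(List.of(new NewTopic("orders", 3, (short) 3)))
                     .all().get();
            }
        }
    }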



Select all that apply (select THREE)

  1. min.insync.replicas is a producer setting
  2. acks is a topic setting
  3. acks is a producer setting
  4. min.insync.replicas is a topic setting
  5. min.insync.replicas matters regardless of the values of acks
  6. min.insync.replicas only matters if acks=all

Answer(s): C,D,F

Explanation:

acks is a producer setting. min.insync.replicas is a topic or broker setting, and it is only effective when acks=all.
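
To make the split concrete, here is a minimal producer sketch. The "orders" topic and the localhost address are hypothetical; note where each setting lives:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class DurableProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // acks is a PRODUCER setting: "all" makes the leader wait for the
            // full in-sync replica set before acknowledging.
            props.put(ProducerConfig.ACKS_CONFIG, "all");
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // min.insync.replicas is a TOPIC/BROKER setting, e.g.:
                //   kafka-configs.sh --bootstrap-server localhost:9092 --alter \
                //     --entity-type topics --entity-name orders \
                //     --add-config min.insync.replicas=2
                // With acks=all, sends fail if fewer than 2 replicas are in sync.
                producer.send(new ProducerRecord<>("orders", "key", "value"));
            }
        }
    }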



A customer has many consumer applications that process messages from a Kafka topic. Each consumer application can process only 50 MB/s. Your customer wants to achieve a target throughput of 1 GB/s. What is the minimum number of partitions you would suggest to the customer for that topic?

  1. 10
  2. 20
  3. 1
  4. 50

Answer(s): B

Explanation:

Each consumer can process only 50 MB/s, so we need at least 20 consumers, each consuming one partition, so that the 20 * 50 MB/s = 1,000 MB/s (1 GB/s) target is achieved. Because a partition can be consumed by at most one consumer within a group, the topic needs at least 20 partitions.
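
The sizing arithmetic is just a ceiling division; a minimal sketch using the question's numbers:

    public class PartitionSizing {
        public static void main(String[] args) {
            double targetMBps = 1000.0;    // 1 GB/s target throughput
            double perConsumerMBps = 50.0; // each consumer tops out at 50 MB/s
            // Within one consumer group, a partition is read by at most one
            // consumer, so we need at least as many partitions as consumers.
            int minPartitions = (int) Math.ceil(targetMBps / perConsumerMBps);
            System.out.println(minPartitions); // 20
        }
    }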



Your producer is producing at a very high rate and the batches are completely full each time. How can you improve the producer throughput? (select two)

  1. Enable compression
  2. Disable compression
  3. Increase batch.size
  4. Decrease batch.size
  5. Decrease linger.ms
  6. Increase linger.ms

Answer(s): A,C

Explanation:

batch.size controls how many bytes of data to collect before sending messages to the Kafka broker. Set it as high as possible without exceeding available memory. Enabling compression also helps make batches more compact, increasing the throughput of your producer. linger.ms will have no effect, as the batches are already full.
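
A minimal sketch of these two tuning knobs; the 64 KB batch size and snappy codec are illustrative choices, not prescribed by the question:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;

    public class ThroughputTuning {
        public static Properties producerProps() {
            Properties props = new Properties();
            // Bigger batches amortize per-request overhead (the default
            // batch.size is 16 KB; 64 KB here is an example value).
            props.put(ProducerConfig.BATCH_SIZE_CONFIG, 64 * 1024);
            // Compression shrinks each batch on the wire (snappy, lz4, zstd
            // and gzip are the built-in codecs).
            props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy");
            // linger.ms is left alone on purpose: it only adds wait time for
            // under-filled batches, and here the batches are already full.
            return props;
        }
    }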



In Avro, adding a field to a record without a default is a ________ schema evolution.

  1. forward
  2. backward
  3. full
  4. breaking

Answer(s): A

Explanation:

Clients with the old schema will be able to read records saved with the new schema (forward compatibility). Clients with the new schema, however, cannot read old records, because the new field has no default value, so the change is neither backward nor full.
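
This can be verified with Avro's own compatibility checker; a minimal sketch using a hypothetical User record:

    import org.apache.avro.Schema;
    import org.apache.avro.SchemaCompatibility;

    public class AvroEvolutionCheck {
        public static void main(String[] args) {
            Schema v1 = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
                + "{\"name\":\"id\",\"type\":\"string\"}]}");
            // v2 adds a field WITHOUT a default.
            Schema v2 = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
                + "{\"name\":\"id\",\"type\":\"string\"},"
                + "{\"name\":\"email\",\"type\":\"string\"}]}");
            // Forward: an old reader (v1) can read data written with v2 - it
            // simply ignores the unknown field.
            System.out.println(SchemaCompatibility
                .checkReaderWriterCompatibility(v1, v2).getType()); // COMPATIBLE
            // Backward fails: a new reader (v2) cannot fill in "email" when
            // reading old data, because the field has no default.
            System.out.println(SchemaCompatibility
                .checkReaderWriterCompatibility(v2, v1).getType()); // INCOMPATIBLE
        }
    }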





