Linux Foundation CNPA Exam (page: 2)
Linux Foundation Certified Cloud Native Platform Engineering Associate
Updated on: 12-Jan-2026

Viewing Page 2 of 12

In a Kubernetes environment, which component is responsible for watching the state of resources during the reconciliation process?

  A. Kubernetes Scheduler
  B. Kubernetes Dashboard
  C. Kubernetes API Server
  D. Kubernetes Controller

Answer(s): D

Explanation:

The Kubernetes reconciliation process ensures that the actual cluster state matches the desired state defined in manifests. The Kubernetes Controller (option D) is responsible for watching the state of resources through the API Server and taking action to reconcile differences. For example, the Deployment Controller ensures that the number of Pods matches the replica count specified, while the Node Controller monitors node health.

Option A (Scheduler) is incorrect because the Scheduler's role is to assign Pods to nodes based on constraints and availability, not ongoing reconciliation. Option B (Dashboard) is simply a UI for visualization and does not manage cluster state. Option C (API Server) exposes the Kubernetes API and serves as the communication hub, but it does not perform reconciliation logic itself.

Controllers embody the core Kubernetes design principle: continuous reconciliation between declared state and observed state. This makes them fundamental to declarative infrastructure and aligns with GitOps practices where controllers continuously enforce desired configurations from source control.
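The reconciliation loop described above can be sketched in a few lines. This is a hypothetical illustration of the pattern, not a real Kubernetes client API; the function name and action strings are assumptions for the example.

```python
# Hypothetical sketch of a controller's reconcile step: compare desired vs.
# observed replica counts and decide what actions converge them.

def reconcile(desired_replicas: int, observed_replicas: int) -> list[str]:
    """Return the actions a controller would take to converge state."""
    actions = []
    if observed_replicas < desired_replicas:
        # Too few Pods: create the difference.
        actions += ["create-pod"] * (desired_replicas - observed_replicas)
    elif observed_replicas > desired_replicas:
        # Too many Pods: delete the surplus.
        actions += ["delete-pod"] * (observed_replicas - desired_replicas)
    return actions

# A controller runs this every time a watch event from the API Server
# reports a change, so the cluster continuously converges on desired state.
print(reconcile(desired_replicas=3, observed_replicas=1))
```

The key property is that the loop is level-triggered: it acts on the current difference between states, not on individual events, so missed events are harmless.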


Reference:

-- CNCF Kubernetes Documentation

-- CNCF GitOps Principles

-- Cloud Native Platform Engineering Study Guide



To simplify service consumption for development teams on a Kubernetes platform, which approach combines service discovery with an abstraction of underlying infrastructure details?

  A. Manual service dependencies configuration within application code.
  B. Shared service connection strings and network configurations document.
  C. Direct Kubernetes API access with detailed documentation.
  D. Service catalog with abstracted APIs and automated service registration.

Answer(s): D

Explanation:

Simplifying developer access to platform services is a central goal of internal developer platforms (IDPs). Option D is correct because a service catalog with abstracted APIs and automated registration provides a unified interface for developers to consume services without dealing with low-level infrastructure details. This approach combines service discovery with abstraction, offering golden paths and self-service capabilities.

Option A burdens developers with hardcoded dependencies, reducing flexibility and portability. Option B relies on manual documentation, which is error-prone and not dynamic. Option C increases cognitive load by requiring developers to interact directly with Kubernetes APIs, which goes against platform engineering's goal of reducing complexity.

A service catalog enables developers to provision databases, messaging queues, or APIs with minimal input, while the platform automates backend provisioning and wiring. It also improves consistency, compliance, and observability by embedding platform-wide policies into the service provisioning workflows. This results in a seamless developer experience that accelerates delivery while maintaining governance.
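The catalog pattern described above can be sketched as a minimal interface: services self-register with a provisioning hook, and developers request them by name without seeing backend details. All class and method names here are illustrative assumptions, not a real platform API.

```python
# Hypothetical sketch of a service catalog with automated registration and
# an abstracted consumption API.

class ServiceCatalog:
    def __init__(self):
        self._services = {}

    def register(self, name: str, provision):
        """Automated registration: a service offering adds its provisioning hook."""
        self._services[name] = provision

    def request(self, name: str) -> dict:
        """Developers ask for a capability; backend wiring stays hidden."""
        if name not in self._services:
            raise KeyError(f"service '{name}' not in catalog")
        return self._services[name]()

catalog = ServiceCatalog()
# The platform team registers a database offering with automated provisioning.
catalog.register("postgres", lambda: {"binding": "secret/postgres-creds"})
# A developer consumes it through the abstracted API, never touching
# connection strings or infrastructure details directly.
binding = catalog.request("postgres")
```

Because provisioning runs behind the catalog interface, platform-wide policies (naming, quotas, security) can be enforced inside the hook without any change to how developers consume the service.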


Reference:

-- CNCF Platforms Whitepaper

-- CNCF Platform Engineering Maturity Model

-- Cloud Native Platform Engineering Study Guide



A team wants to deploy a new feature to production for internal users only and be able to instantly disable it if problems occur, without redeploying code.
Which strategy is most suitable?

  A. Use a blue/green deployment to direct internal users to one version and switch as needed.
  B. Use feature flags to release the feature to selected users and control its availability through settings.
  C. Use a canary deployment to gradually expose the feature to a small group of random users.
  D. Deploy the feature to all users and prepare to roll it back manually if an issue is detected.

Answer(s): B

Explanation:

Feature flags are the most effective way to control feature exposure to specific users, such as internal testers, while enabling fast rollback without redeployment. Option B is correct because feature flags allow teams to decouple deployment from release, giving precise runtime control over feature availability. This means that once the code is deployed, the team can toggle the feature on or off for different cohorts (e.g., internal users) dynamically.

Option A (blue/green deployment) switches traffic between two environments but does not provide user-level targeting. Option C (canary deployment) gradually exposes changes to a random subset of users rather than a specific group such as internal employees. Option D requires a manual rollback or redeployment, which introduces risk and slows incident response.

Feature flags are widely recognized in platform engineering as a core continuous delivery practice that improves safety, accelerates experimentation, and enhances resilience by enabling immediate mitigation of issues.
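The cohort targeting and instant kill switch described above can be sketched as follows. The flag store and cohort names are illustrative assumptions; real systems typically back this with a flag service or configuration store.

```python
# Minimal feature-flag sketch: flags target specific cohorts (e.g. internal
# users) and can be toggled at runtime without redeploying code.

FLAGS = {"new-checkout": {"enabled": True, "cohorts": {"internal"}}}

def is_enabled(flag: str, user_cohort: str) -> bool:
    """Check whether a flag is on for a given user cohort."""
    cfg = FLAGS.get(flag)
    return bool(cfg) and cfg["enabled"] and user_cohort in cfg["cohorts"]

assert is_enabled("new-checkout", "internal")      # internal users see it
assert not is_enabled("new-checkout", "external")  # everyone else does not

# Instant kill switch: flip the flag off at runtime, no redeploy needed.
FLAGS["new-checkout"]["enabled"] = False
assert not is_enabled("new-checkout", "internal")
```

The deployment (code in production) and the release (feature visible to users) are decoupled: the same binary serves both states, and the toggle takes effect immediately.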


Reference:

-- CNCF Platforms Whitepaper

-- Cloud Native Platform Engineering Study Guide

-- Continuous Delivery Foundation Guidance



In the context of observability, which telemetry signal is primarily used to record events that occur within a system and are timestamped?

  A. Logs
  B. Alerts
  C. Traces
  D. Metrics

Answer(s): A

Explanation:

Logs are detailed, timestamped records of discrete events that occur within a system. They provide granular insight into what has happened, making them crucial for debugging, auditing, and incident investigations. Option A is correct because logs capture both normal and error events, often containing contextual information such as error codes, user IDs, or request payloads.

Option B (alerts) is incorrect because alerts are secondary outputs derived from telemetry signals such as logs or metrics, not raw data themselves. Option C (traces) is incorrect because traces represent the flow of requests across distributed systems, showing relationships and latency between services rather than arbitrary events. Option D (metrics) is incorrect because metrics are numeric aggregates sampled over intervals (e.g., CPU usage, latency), not discrete, timestamped events.

Observability guidance in cloud native systems emphasizes the "three pillars" of telemetry: logs, metrics, and traces. Logs are indispensable for root cause analysis and compliance because they preserve historical event context.
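A log entry as described above is a discrete, timestamped record carrying event context. This minimal sketch uses only the standard library; the field names are illustrative, not a prescribed schema.

```python
# Sketch of a log event as a timestamped record of one discrete occurrence.
from datetime import datetime, timezone

def log_event(level: str, message: str, **context) -> dict:
    """Emit one timestamped event record with arbitrary context fields."""
    return {
        "ts": datetime.now(timezone.utc).isoformat(),
        "level": level,
        "msg": message,
        **context,
    }

event = log_event("ERROR", "payment failed",
                  order_id="A-1043", code="card_declined")
# Unlike a metric (a numeric aggregate over an interval), each record
# preserves the full context of a single event at a single moment,
# which is what makes logs useful for root cause analysis and audits.
```

Contrast with a metric, which would only report something like "3 payment failures in the last minute" and lose the per-event context.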


Reference:

-- CNCF Observability Whitepaper

-- OpenTelemetry Documentation (aligned with CNCF)

-- Cloud Native Platform Engineering Study Guide



In assessing the effectiveness of platform engineering initiatives, which DORA metric most directly correlates to the time it takes for code from its initial commit to be deployed into production?

  A. Lead Time for Changes
  B. Deployment Frequency
  C. Mean Time to Recovery
  D. Change Failure Rate

Answer(s): A

Explanation:

Lead Time for Changes is a DORA (DevOps Research and Assessment) metric that measures the time from code commit to successful deployment in production. Option A is correct because it directly reflects how quickly the platform enables developers to turn ideas into delivered software. Shorter lead times indicate an efficient delivery pipeline, streamlined workflows, and effective automation.

Option B (Deployment Frequency) measures how often code is deployed, not how long it takes to reach production. Option C (Mean Time to Recovery) measures operational resilience after failures. Option D (Change Failure Rate) indicates stability by measuring the percentage of deployments that cause incidents. While all DORA metrics are valuable, only Lead Time for Changes measures the end-to-end speed of delivery.

In platform engineering, improving lead time often involves automating CI/CD pipelines, implementing GitOps, and reducing manual approvals. It is a core measurement of developer experience and platform efficiency.
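Computing the metric is straightforward once commit and deploy timestamps are available. The sample data below is invented for illustration; the median is used because DORA-style reporting commonly prefers it to the mean for resisting outliers.

```python
# Sketch: computing Lead Time for Changes from commit and deploy timestamps.
from datetime import datetime
from statistics import median

changes = [
    # (commit time, production deploy time) -- illustrative sample data
    (datetime(2024, 3, 1, 9, 0),  datetime(2024, 3, 1, 15, 0)),  # 6 h
    (datetime(2024, 3, 2, 10, 0), datetime(2024, 3, 3, 10, 0)),  # 24 h
    (datetime(2024, 3, 4, 8, 0),  datetime(2024, 3, 4, 20, 0)),  # 12 h
]

lead_times_h = [
    (deploy - commit).total_seconds() / 3600 for commit, deploy in changes
]
print(f"median lead time: {median(lead_times_h):.1f} h")  # 12.0 h
```

Tracking this number over time shows whether pipeline automation and reduced manual approvals are actually shortening the commit-to-production path.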


Reference:

-- CNCF Platforms Whitepaper

-- Accelerate: State of DevOps Report (DORA Metrics)

-- Cloud Native Platform Engineering Study Guide



In the context of observability for cloud native platforms, which of the following best describes the role of OpenTelemetry?

  A. OpenTelemetry is primarily used for logging data only.
  B. OpenTelemetry is a proprietary solution that limits its use to specific cloud providers.
  C. OpenTelemetry provides a standardized way to collect and transmit observability data.
  D. OpenTelemetry is solely focused on infrastructure monitoring.

Answer(s): C

Explanation:

OpenTelemetry is an open-source CNCF project that provides vendor-neutral, standardized APIs, SDKs, and agents for collecting and exporting observability data such as metrics, logs, and traces. Option C is correct because OpenTelemetry's purpose is to unify how telemetry data is generated, transmitted, and consumed, regardless of which backend (e.g., Prometheus, Jaeger, Elastic, commercial APM tools) is used.

Option A is incorrect because OpenTelemetry supports all three signal types (metrics, logs, traces), not just logs. Option B is incorrect because it is an open, community-driven standard and not tied to a single vendor or cloud provider. Option D is misleading because OpenTelemetry covers distributed applications, services, and infrastructure--far beyond just infrastructure monitoring.

OpenTelemetry reduces vendor lock-in and promotes interoperability, making it a cornerstone of cloud native observability strategies. Platform engineering teams rely on it to ensure consistent data collection, enabling better insights, faster debugging, and improved reliability of cloud native platforms.
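The design idea behind that vendor neutrality can be sketched in plain Python: instrumentation talks to one standard interface, and backend exporters are swappable. These classes are a conceptual illustration, not the real OpenTelemetry SDK API.

```python
# Conceptual sketch of OpenTelemetry's design: application code depends only
# on a standard telemetry API, while exporters to different backends plug in
# behind it. Class names are illustrative, not the actual SDK.

class SpanExporter:
    """Standard exporter interface that any backend can implement."""
    def export(self, span: dict) -> None:
        raise NotImplementedError

class InMemoryExporter(SpanExporter):
    """Stand-in for a backend such as Jaeger, Prometheus, or a commercial APM."""
    def __init__(self):
        self.spans = []
    def export(self, span: dict) -> None:
        self.spans.append(span)

class Tracer:
    """Instrumentation code calls only this standard API."""
    def __init__(self, exporter: SpanExporter):
        self._exporter = exporter
    def span(self, name: str, **attrs) -> None:
        self._exporter.export({"name": name, **attrs})

# Swapping the backend changes one line of wiring, not the instrumentation.
exporter = InMemoryExporter()
tracer = Tracer(exporter)
tracer.span("checkout", service="cart", status="ok")
```

This separation is why instrumented applications avoid vendor lock-in: the code emitting telemetry never needs to change when the observability backend does.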


Reference:

-- CNCF Observability Whitepaper

-- OpenTelemetry CNCF Project Documentation

-- Cloud Native Platform Engineering Study Guide



A company is implementing a service mesh for secure service-to-service communication in their cloud native environment.
What is the primary benefit of using mutual TLS (mTLS) within this context?

  A. Allows services to authenticate each other and secure data in transit.
  B. Allows services to bypass security checks for better performance.
  C. Enables logging of all service communications for audit purposes.
  D. Simplifies the deployment of microservices by automatically scaling them.

Answer(s): A

Explanation:

Mutual TLS (mTLS) is a core feature of service meshes, such as Istio or Linkerd, that enhances security in cloud native environments by ensuring that both communicating services authenticate each other and that the communication channel is encrypted. Option A is correct because mTLS delivers two critical benefits: authentication (verifying the identity of both client and server services) and encryption (protecting data in transit from interception or tampering).

Option B is incorrect because mTLS does not bypass security--it enforces it. Option C is partly true in that service meshes often support observability and logging, but that is not the primary purpose of mTLS. Option D relates to scaling, which is outside the scope of mTLS.

In platform engineering, mTLS is a fundamental security mechanism that provides zero-trust networking between microservices, ensuring secure communication without requiring application-level changes. It also strengthens compliance with security and data protection requirements, which is crucial in regulated industries.
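What distinguishes mutual TLS from ordinary TLS can be shown with Python's standard ssl module: the server side requires, rather than merely offers, a client certificate. The certificate file names in the comments are placeholders; in a mesh, a sidecar proxy performs this setup transparently.

```python
# Sketch of the server-side half of mutual TLS with the standard ssl module.
# Requiring a client certificate is what makes the handshake "mutual":
# both peers authenticate, and the channel is encrypted.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# Require the client to present a certificate signed by a trusted CA.
ctx.verify_mode = ssl.CERT_REQUIRED

# In a real deployment the mesh sidecar would load the workload certificate
# and the mesh CA bundle, roughly equivalent to (placeholder file names):
#   ctx.load_cert_chain("server.crt", "server.key")
#   ctx.load_verify_locations("mesh-ca.pem")
```

A service mesh automates exactly this: issuing short-lived workload certificates, rotating them, and enforcing CERT_REQUIRED on every service-to-service connection, so applications get mutual authentication without touching their own code.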


Reference:

-- CNCF Service Mesh Whitepaper

-- CNCF Platforms Whitepaper

-- Cloud Native Platform Engineering Study Guide



What is the primary purpose of using multiple environments (e.g., development, staging, production) in a cloud native platform?

  A. Isolates different stages of application development and deployment.
  B. Reduces cloud costs by running applications in different locations.
  C. Increases application performance by distributing traffic.
  D. Ensures all applications use the same infrastructure.

Answer(s): A

Explanation:

The primary reason for implementing multiple environments in cloud native platforms is to isolate the different phases of the software development lifecycle. Option A is correct because environments such as development, staging, and production enable testing and validation at each stage without impacting end users. Development environments allow rapid iteration, staging environments simulate production for integration and performance testing, and production environments serve real users.

Option B (reducing costs) may be a side effect but is not the main purpose. Option C (distributing traffic) relates more to load balancing and high availability, not environment separation. Option D is the opposite of the goal--different environments often require tailored infrastructure to meet their distinct purposes.

Isolation through multiple environments is fundamental to reducing risk, supporting continuous delivery, and ensuring stability. This practice also allows for compliance checks, automated testing, and user acceptance validation before changes reach production.
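The isolation principle can be sketched as environment-specific configuration that keeps the same application code unchanged as it promotes through stages. The settings and environment names below are illustrative sample values.

```python
# Sketch: per-environment configuration isolation. The application code is
# identical across stages; only the resolved settings differ.

ENVIRONMENTS = {
    "development": {"replicas": 1, "debug": True,  "db": "dev-db"},
    "staging":     {"replicas": 2, "debug": False, "db": "staging-db"},
    "production":  {"replicas": 5, "debug": False, "db": "prod-db"},
}

def settings_for(env: str) -> dict:
    """Resolve the configuration for one isolated environment."""
    if env not in ENVIRONMENTS:
        raise ValueError(f"unknown environment: {env}")
    return ENVIRONMENTS[env]

# A change validated in staging never touches production data or users,
# because each stage points at its own isolated backing resources.
assert settings_for("staging")["db"] != settings_for("production")["db"]
```

In practice this separation is often realized with distinct Kubernetes namespaces or clusters per environment, with promotion between them gated by automated tests and approvals.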


Reference:

-- CNCF Platforms Whitepaper

-- Team Topologies & Platform Engineering Guidance

-- Cloud Native Platform Engineering Study Guide


