Linux Foundation PCA Exam (page: 2)
Linux Foundation Prometheus Certified Associate
Updated on: 12-Jan-2026

Viewing Page 2 of 9

What is an example of a single-target exporter?

  1. Redis Exporter
  2. SNMP Exporter
  3. Node Exporter
  4. Blackbox Exporter

Answer(s): A

Explanation:

A single-target exporter in Prometheus is designed to expose metrics for a specific service instance rather than multiple dynamic endpoints. The Redis Exporter is a prime example -- it connects to one Redis server instance and exports its metrics (like memory usage, keyspace hits, or command statistics) to Prometheus.

By contrast, exporters like the SNMP Exporter and Blackbox Exporter can probe many targets dynamically from a single exporter process, which makes them multi-target exporters. The Node Exporter, while typically deployed one per host, is considered a host-level exporter rather than a single-target exporter in the sense used here.
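A multi-target exporter such as the Blackbox Exporter, for example, is scraped through its /probe endpoint and told which target to probe via URL parameters and relabeling. The scrape configuration below is only an illustrative sketch; the probed URL and the host name blackbox-exporter.example.org are placeholder assumptions, not values from the question:

scrape_configs:
  - job_name: blackbox
    metrics_path: /probe
    params:
      module: [http_2xx]                  # probe module configured in the Blackbox Exporter
    static_configs:
      - targets: ['https://example.org']  # the target to probe, not the exporter itself
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target      # pass the target as ?target=... to /probe
      - source_labels: [__param_target]
        target_label: instance            # keep the probed target as the instance label
      - target_label: __address__
        replacement: 'blackbox-exporter.example.org:9115'  # actually scrape the exporter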

The Redis Exporter is instrumented specifically for a single Redis endpoint per configuration, aligning it with Prometheus's single-target exporter definition. This design simplifies monitoring and avoids dynamic reconfiguration.
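A minimal scrape configuration for such a single-target exporter could look like the sketch below; the host name and the port 9121 (the redis_exporter default) are illustrative assumptions rather than values from the question. Each additional Redis instance would normally get its own exporter process and its own target entry:

scrape_configs:
  - job_name: redis
    static_configs:
      - targets: ['redis-exporter.example.org:9121']  # one exporter, one Redis instance behind it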


Reference:

Verified from Prometheus documentation and official exporter guidelines - Writing Exporters, Exporter Types, and Redis Exporter Overview sections.



How do you configure the rule evaluation interval in Prometheus?

  1. You can configure the evaluation interval in the global configuration file and in the rule configuration file.
  2. You can configure the evaluation interval in the service discovery configuration and in the command-line flags.
  3. You can configure the evaluation interval in the scraping job configuration file and in the command-line flags.
  4. You can configure the evaluation interval in the Prometheus TSDB configuration file and in the rule configuration file.

Answer(s): A

Explanation:

Prometheus evaluates alerting and recording rules at a regular cadence determined by the evaluation_interval setting. This can be defined globally in the main Prometheus configuration file (prometheus.yml) under the global: section or overridden for specific rule groups in the rule configuration files.

The global evaluation_interval specifies how frequently Prometheus should execute all configured rules, while rule-specific intervals can fine-tune evaluation frequency for individual groups. For instance:

global:
  evaluation_interval: 30s

This means Prometheus evaluates rules every 30 seconds unless a rule file specifies otherwise.
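A rule group can override the global value through its own interval field. The recording rule below is a hypothetical sketch to illustrate the syntax, not a rule taken from the exam:

groups:
  - name: example-rules
    interval: 1m          # overrides the global evaluation_interval for this group only
    rules:
      - record: job:http_requests:rate5m
        expr: sum by (job) (rate(http_requests_total[5m]))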

This parameter is distinct from scrape_interval, which governs metric collection frequency from targets. It has no relation to TSDB, service discovery, or command-line flags.


Reference:

Verified from Prometheus documentation - Configuration File Reference, Rule Evaluation and Recording Rules sections.



Which of the following metrics is unsuitable for a Prometheus setup?

  1. prometheus_engine_query_log_enabled
  2. promhttp_metric_handler_requests_total{code="500"}
  3. http_response_total{handler="static/*filepath"}
  4. user_last_login_timestamp_seconds{email="john.doe@example.com"}

Answer(s): D

Explanation:

The metric user_last_login_timestamp_seconds{email="john.doe@example.com"} is unsuitable for Prometheus because it includes a high-cardinality label (email). Each unique email address would generate a separate time series, potentially numbering in the millions, which severely impacts Prometheus performance and memory usage.

Prometheus is optimized for low- to medium-cardinality metrics that represent system-wide behavior rather than per-user data. High-cardinality metrics cause data explosion, complicating queries and overwhelming the storage engine.

By contrast, the other metrics--prometheus_engine_query_log_enabled, promhttp_metric_handler_requests_total{code="500"}, and http_response_total{handler="static/*filepath"}--adhere to Prometheus best practices. They represent operational or service-level metrics with limited, manageable label value sets.
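A lower-cardinality redesign would drop the per-user label and expose an aggregate counter instead, for example in the text exposition format (the metric name and label values below are hypothetical):

# HELP user_logins_total Total number of successful user logins.
# TYPE user_logins_total counter
user_logins_total{method="password"} 1027
user_logins_total{method="sso"} 341

Per-user details such as last-login timestamps are better kept in a database or log pipeline, with Prometheus tracking only the aggregate behavior.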


Reference:

Extracted and verified from Prometheus documentation - Metric and Label Naming Best Practices, Cardinality Management, and Anti-Patterns for Metric Design sections.



What Prometheus component would you use if targets are running behind a Firewall/NAT?

  1. Pull Proxy
  2. Pull Gateway
  3. HA Proxy
  4. PushProx

Answer(s): D

Explanation:

When Prometheus targets are behind firewalls or NAT and cannot be reached directly by the Prometheus server's pull mechanism, the recommended component to use is PushProx.

PushProx works by reversing the usual pull model. It consists of a PushProx Proxy (accessible by Prometheus) and PushProx Clients (running alongside the targets). The clients establish outbound connections to the proxy, which allows Prometheus to "pull" metrics indirectly. This approach bypasses network restrictions without compromising the Prometheus data model.

Unlike the Pushgateway (which is used for short-lived batch jobs, not network-isolated targets), PushProx maintains the Prometheus "pull" semantics while accommodating environments where direct scraping is impossible.
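On the Prometheus side, PushProx is used by pointing a scrape job's proxy_url at the PushProx proxy, while the targets list the FQDNs that the PushProx clients registered. The snippet below is a sketch with placeholder host names and ports, following the pattern documented in the PushProx README:

scrape_configs:
  - job_name: node_behind_nat
    proxy_url: http://pushprox-proxy.example.org:8080/    # PushProx proxy reachable by Prometheus
    static_configs:
      - targets: ['client.internal.example.org:9100']     # FQDN registered by the PushProx client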


Reference:

Verified from Prometheus documentation and official PushProx design notes - Monitoring Behind NAT/Firewall, PushProx Overview, and Architecture and Usage Scenarios sections.



You'd like to monitor a short-lived batch job.
What Prometheus component would you use?

  1. PullProxy
  2. PushGateway
  3. PushProxy
  4. PullGateway

Answer(s): B

Explanation:

Prometheus normally operates on a pull-based model, where it scrapes metrics from long-running targets. However, short-lived batch jobs (such as cron jobs or data processing tasks) often finish before Prometheus can scrape them. To handle this scenario, Prometheus provides the Pushgateway component.

The Pushgateway allows ephemeral jobs to push their metrics to an intermediary gateway. Prometheus then scrapes these metrics from the Pushgateway like any other target. This ensures short-lived jobs have their metrics preserved even after completion.

The Pushgateway should not be used for continuously running applications because it breaks Prometheus's usual target lifecycle semantics. Instead, it is intended solely for transient job metrics, like backups or CI/CD tasks.
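Prometheus then scrapes the Pushgateway itself, usually with honor_labels: true so that the job and instance labels pushed by the batch job are preserved. The configuration below is a sketch; the host name is a placeholder and 9091 is the Pushgateway default port:

scrape_configs:
  - job_name: pushgateway
    honor_labels: true                              # keep the job/instance labels attached by the pushing job
    static_configs:
      - targets: ['pushgateway.example.org:9091']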


Reference:

Verified from Prometheus documentation - Pushing Metrics, The Pushgateway, and Use Cases for Short-Lived Jobs sections.



How do you calculate the average request duration during the last 5 minutes from a histogram or summary called http_request_duration_seconds?

  1. rate(http_request_duration_seconds_sum[5m]) /
    rate(http_request_duration_seconds_count[5m])
  2. rate(http_request_duration_seconds_total[5m]) /
    rate(http_request_duration_seconds_count[5m])
  3. rate(http_request_duration_seconds_total[5m]) /
    rate(http_request_duration_seconds_average[5m])
  4. rate(http_request_duration_seconds_sum[5m]) /
    rate(http_request_duration_seconds_average[5m])

Answer(s): A

Explanation:

In Prometheus, histograms and summaries expose metrics with _sum and _count suffixes to represent total accumulated values and sample counts, respectively. To compute the average request duration over a given time window (for example, 5 minutes), you divide the rate of increase of _sum by the rate of increase of _count:

average request duration = rate(http_request_duration_seconds_sum[5m]) / rate(http_request_duration_seconds_count[5m])

Here, http_request_duration_seconds_sum represents the total accumulated request time, and http_request_duration_seconds_count represents the number of requests observed.

By dividing these rates, you obtain the average request duration per request over the specified time range.
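When the metric carries instance or handler labels and you want one average per job, the same pattern is typically combined with sum so that the division happens between matching aggregates. The query below is a sketch that assumes the metric carries a job label:

sum by (job) (rate(http_request_duration_seconds_sum[5m]))
  /
sum by (job) (rate(http_request_duration_seconds_count[5m]))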


Reference:

Extracted and verified from Prometheus documentation - Querying Histograms and Summaries, PromQL Rate Function, and Metric Naming Conventions sections.



If the vector selector foo[5m] contains 1 1 NaN, what would max_over_time(foo[5m]) return?

  1. It errors out.
  2. 1
  3. NaN
  4. No answer.

Answer(s): B

Explanation:

In PromQL, range vector functions like max_over_time() compute an aggregate value (in this case, the maximum) over all samples within a specified time range. The function ignores NaN (Not-a-Number) values when computing the result.

Given the range vector foo[5m] containing samples [1, 1, NaN], the maximum value among the valid numeric samples is 1. Therefore, max_over_time(foo[5m]) returns 1.

Prometheus functions handle missing or invalid data points gracefully: ignoring NaN keeps calculations stable even when intermittent collection issues or counter resets occur. A malformed selector is rejected at query parse time, and if the range contains no samples at all the function simply returns no result rather than an error.
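The same *_over_time family covers other aggregations as well; for example, the sketch queries below (using a hypothetical metric name) return the highest and the average value each series reached over the last hour:

max_over_time(queue_depth[1h])
avg_over_time(queue_depth[1h])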


Reference:

Verified from Prometheus documentation - PromQL Range Vector Functions, Aggregation Over Time Functions, and Handling NaN Values in PromQL sections.



Given the following Histogram metric data, how many requests took less than or equal to 0.1 seconds?

apiserver_request_duration_seconds_bucket{job="kube-apiserver", le="+Inf"} 3
apiserver_request_duration_seconds_bucket{job="kube-apiserver", le="0.05"} 0
apiserver_request_duration_seconds_bucket{job="kube-apiserver", le="0.1"} 1
apiserver_request_duration_seconds_bucket{job="kube-apiserver", le="1"} 3
apiserver_request_duration_seconds_count{job="kube-apiserver"} 3
apiserver_request_duration_seconds_sum{job="kube-apiserver"} 0.554003785

  1. 0
  2. 0.554003785
  3. 1
  4. 3

Answer(s): C

Explanation:

In Prometheus, histogram metrics use cumulative buckets to record the count of observations that fall within specific duration thresholds. Each bucket has a label le ("less than or equal to"), representing the upper bound of that bucket.

In the given metric, the bucket labeled le="0.1" has a value of 1, meaning exactly one request took less than or equal to 0.1 seconds. Buckets are cumulative, so:

le="0.05" 0 requests 0.05 seconds le="0.1" 1 request 0.1 seconds le="1" 3 requests 1 second le="+Inf" all 3 requests total

The _sum and _count values represent the total request duration and the total request count respectively, but the number of requests at or below a given threshold is read directly from the value of the bucket whose le label matches that threshold.
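In PromQL, the same figure (or the corresponding fraction) can be read straight from the bucket series; the query below is a sketch built from the metric names in the question and divides the requests at or below 0.1 seconds by the total request count:

apiserver_request_duration_seconds_bucket{job="kube-apiserver", le="0.1"}
  / ignoring(le)
apiserver_request_duration_seconds_count{job="kube-apiserver"}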


Reference:

Verified from Prometheus documentation - Understanding Histograms and Summaries, Bucket Semantics, and Histogram Query Examples sections.


