Linux Foundation PCA Exam (page: 2)
Linux Foundation Prometheus Certified Associate
Updated on: 02-Mar-2026

Viewing Page 2 of 9

What is an example of a single-target exporter?

  1. Redis Exporter
  2. SNMP Exporter
  3. Node Exporter
  4. Blackbox Exporter

Answer(s): A

Explanation:

A single-target exporter in Prometheus is designed to expose metrics for a specific service instance rather than multiple dynamic endpoints. The Redis Exporter is a prime example -- it connects to one Redis server instance and exports its metrics (like memory usage, keyspace hits, or command statistics) to Prometheus.

By contrast, the SNMP Exporter and Blackbox Exporter follow the multi-target exporter pattern: a single exporter instance can probe many remote targets, with the target to probe passed as a URL parameter at scrape time. The Node Exporter, while typically deployed one per host, exposes metrics about the host itself rather than about a separate service instance, so it is usually described as a host-level exporter rather than a single-target exporter in the sense used here.

The Redis Exporter is instrumented specifically for a single Redis endpoint per configuration, aligning it with Prometheus's single-target exporter definition. This design simplifies monitoring and avoids dynamic reconfiguration.
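The difference is visible in scrape configuration. In this illustrative sketch (hostnames, ports, and the probe module are assumptions), the Redis Exporter is scraped directly as one target, while the Blackbox Exporter uses the multi-target pattern, receiving each real target as a URL parameter via relabeling:

```yaml
scrape_configs:
  # Single-target: scrape the Redis Exporter sitting next to one Redis instance
  - job_name: redis
    static_configs:
      - targets: ['redis-exporter:9121']

  # Multi-target: one Blackbox Exporter probes many endpoints; relabeling
  # moves each listed target into the ?target= URL parameter of /probe
  - job_name: blackbox
    metrics_path: /probe
    params:
      module: [http_2xx]
    static_configs:
      - targets: ['https://example.com', 'https://example.org']
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: blackbox-exporter:9115
```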


Reference:

Verified from Prometheus documentation and official exporter guidelines – Writing Exporters, Exporter Types, and Redis Exporter Overview sections.



How do you configure the rule evaluation interval in Prometheus?

  1. You can configure the evaluation interval in the global configuration file and in the rule configuration file.
  2. You can configure the evaluation interval in the service discovery configuration and in the command-line flags.
  3. You can configure the evaluation interval in the scraping job configuration file and in the command-line flags.
  4. You can configure the evaluation interval in the Prometheus TSDB configuration file and in the rule configuration file.

Answer(s): A

Explanation:

Prometheus evaluates alerting and recording rules at a regular cadence determined by the evaluation_interval setting. This can be defined globally in the main Prometheus configuration file (prometheus.yml) under the global: section or overridden for specific rule groups in the rule configuration files.

The global evaluation_interval specifies how frequently Prometheus should execute all configured rules, while rule-specific intervals can fine-tune evaluation frequency for individual groups. For instance:

global:
  evaluation_interval: 30s

This means Prometheus evaluates rules every 30 seconds unless a rule file specifies otherwise.

This parameter is distinct from scrape_interval, which governs metric collection frequency from targets. It has no relation to TSDB, service discovery, or command-line flags.
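A rule group can override the global cadence with its own interval field in the rules file; a minimal sketch (group and rule names are illustrative):

```yaml
groups:
  - name: example-rules
    interval: 15s        # overrides the global evaluation_interval for this group
    rules:
      - record: job:http_requests:rate5m
        expr: sum by (job) (rate(http_requests_total[5m]))
```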


Reference:

Verified from Prometheus documentation – Configuration File Reference, Rule Evaluation and Recording Rules sections.



Which of the following metrics is unsuitable for a Prometheus setup?

  1. prometheus_engine_query_log_enabled
  2. promhttp_metric_handler_requests_total{code="500"}
  3. http_response_total{handler="static/*filepath"}
  4. user_last_login_timestamp_seconds{email="john.doe@example.com"}

Answer(s): D

Explanation:

The metric user_last_login_timestamp_seconds{email="john.doe@example.com"} is unsuitable for Prometheus because it includes a high-cardinality label (email). Each unique email address would generate a separate time series, potentially numbering in the millions, which severely impacts Prometheus performance and memory usage.

Prometheus is optimized for low- to medium-cardinality metrics that represent system-wide behavior rather than per-user data. High-cardinality metrics cause data explosion, complicating queries and overwhelming the storage engine.

By contrast, the other metrics--prometheus_engine_query_log_enabled, promhttp_metric_handler_requests_total{code="500"}, and http_response_total{handler="static/*filepath"}--adhere to Prometheus best practices. They represent operational or service-level metrics with limited, manageable label value sets.
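A toy Python sketch illustrates the series-count difference between the two designs (metric and label names follow the question; the redesigned counter name is hypothetical):

```python
# Sketch: why per-user labels explode series counts. Every unique
# combination of label values creates a separate time series in the TSDB.
users = [f"user{i}@example.com" for i in range(1000)]

# Anti-pattern: one series per user (unbounded cardinality)
per_user_series = {("user_last_login_timestamp_seconds", ("email", u)) for u in users}

# Better: a single bounded counter; per-user detail belongs in logs or a database
bounded_series = {("user_logins_total", ())}

print(len(per_user_series))  # 1000 series, and growing with the user base
print(len(bounded_series))   # 1 series, constant
```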


Reference:

Extracted and verified from Prometheus documentation – Metric and Label Naming Best Practices, Cardinality Management, and Anti-Patterns for Metric Design sections.



What Prometheus component would you use if targets are running behind a Firewall/NAT?

  1. Pull Proxy
  2. Pull Gateway
  3. HA Proxy
  4. PushProx

Answer(s): D

Explanation:

When Prometheus targets are behind firewalls or NAT and cannot be reached directly by the Prometheus server's pull mechanism, the recommended component to use is PushProx.

PushProx works by reversing the usual pull model. It consists of a PushProx Proxy (accessible by Prometheus) and PushProx Clients (running alongside the targets). The clients establish outbound connections to the proxy, which allows Prometheus to "pull" metrics indirectly. This approach bypasses network restrictions without compromising the Prometheus data model.

Unlike the Pushgateway (which is used for short-lived batch jobs, not network-isolated targets), PushProx maintains the Prometheus "pull" semantics while accommodating environments where direct scraping is impossible.
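As a sketch of the scrape-side setup (hostnames and ports are illustrative, following the PushProx README pattern), Prometheus keeps a normal scrape job but routes requests through the PushProx proxy via proxy_url:

```yaml
scrape_configs:
  - job_name: node-behind-nat
    proxy_url: http://pushprox-proxy:8080/   # PushProx proxy reachable by Prometheus
    static_configs:
      - targets: ['nat-host.internal:9100']  # reached via the client's outbound connection
```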


Reference:

Verified from Prometheus documentation and official PushProx design notes – Monitoring Behind NAT/Firewall, PushProx Overview, and Architecture and Usage Scenarios sections.



You'd like to monitor a short-lived batch job.
What Prometheus component would you use?

  1. PullProxy
  2. PushGateway
  3. PushProxy
  4. PullGateway

Answer(s): B

Explanation:

Prometheus normally operates on a pull-based model, where it scrapes metrics from long-running targets. However, short-lived batch jobs (such as cron jobs or data processing tasks) often finish before Prometheus can scrape them. To handle this scenario, Prometheus provides the Pushgateway component.

The Pushgateway allows ephemeral jobs to push their metrics to an intermediary gateway. Prometheus then scrapes these metrics from the Pushgateway like any other target. This ensures short-lived jobs have their metrics preserved even after completion.

The Pushgateway should not be used for continuously running applications because it breaks Prometheus's usual target lifecycle semantics. Instead, it is intended solely for transient job metrics, like backups or CI/CD tasks.
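A minimal Python sketch of what such a push could look like (gateway address, job, and metric names are illustrative; the Pushgateway's documented URL scheme is /metrics/job/&lt;job_name&gt;):

```python
import urllib.request

# Sketch of pushing a batch job's metric to a Pushgateway. The request body
# uses the Prometheus text exposition format; Prometheus then scrapes the
# Pushgateway like any other target.
def build_push_request(gateway, job, metric, value):
    url = f"http://{gateway}/metrics/job/{job}"
    body = f"{metric} {value}\n".encode()
    return urllib.request.Request(url, data=body, method="PUT")

req = build_push_request("pushgateway.example.org:9091",
                         "nightly_backup", "batch_job_duration_seconds", 42.5)
print(req.full_url)  # http://pushgateway.example.org:9091/metrics/job/nightly_backup
# urllib.request.urlopen(req) would send it once a real gateway is reachable.
```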


Reference:

Verified from Prometheus documentation – Pushing Metrics: The Pushgateway and Use Cases for Short-Lived Jobs sections.



How do you calculate the average request duration during the last 5 minutes from a histogram or summary called http_request_duration_seconds?

  1. rate(http_request_duration_seconds_sum[5m]) /
    rate(http_request_duration_seconds_count[5m])
  2. rate(http_request_duration_seconds_total[5m]) /
    rate(http_request_duration_seconds_count[5m])
  3. rate(http_request_duration_seconds_total[5m]) /
    rate(http_request_duration_seconds_average[5m])
  4. rate(http_request_duration_seconds_sum[5m]) /
    rate(http_request_duration_seconds_average[5m])

Answer(s): A

Explanation:

In Prometheus, histograms and summaries expose metrics with _sum and _count suffixes to represent total accumulated values and sample counts, respectively. To compute the average request duration over a given time window (for example, 5 minutes), you divide the rate of increase of _sum by the rate of increase of _count:

Average duration = rate(http_request_duration_seconds_sum[5m])
                   / rate(http_request_duration_seconds_count[5m])

Here, http_request_duration_seconds_sum represents the total accumulated request time, and http_request_duration_seconds_count represents the number of requests observed.

By dividing these rates, you obtain the average request duration per request over the specified time range.
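The same division extends to per-label averages; for instance, assuming the metric carries a handler label:

```promql
sum by (handler) (rate(http_request_duration_seconds_sum[5m]))
  /
sum by (handler) (rate(http_request_duration_seconds_count[5m]))
```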


Reference:

Extracted and verified from Prometheus documentation – Querying Histograms and Summaries, PromQL Rate Function, and Metric Naming Conventions sections.



If the vector selector foo[5m] contains 1 1 NaN, what would max_over_time(foo[5m]) return?

  1. It errors out.
  2. 1
  3. NaN
  4. No answer.

Answer(s): B

Explanation:

In PromQL, range vector functions like max_over_time() compute an aggregate value (in this case, the maximum) over all samples within a specified time range. The function ignores NaN (Not-a-Number) values when computing the result.

Given the range vector foo[5m] containing samples [1, 1, NaN], the maximum value among the valid numeric samples is 1. Therefore, max_over_time(foo[5m]) returns 1.

Prometheus functions handle missing or invalid data points gracefully--ignoring NaN ensures stable calculations even when intermittent collection issues or counter resets occur. A syntactically invalid selector produces a query error; a range containing no samples at all simply yields an empty result rather than an error.
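The NaN-skipping behavior can be sketched in a few lines of Python (a model of the aggregation logic, not Prometheus code):

```python
import math

# Model of max_over_time over a range vector's samples: NaN is skipped,
# so NaN is only returned if every sample in the range is NaN.
def max_over_time(samples):
    result = math.nan
    for v in samples:
        # Any comparison with NaN is False, so NaN samples never replace result;
        # the isnan check lets the first numeric sample seed the maximum.
        if math.isnan(result) or v > result:
            result = v
    return result

print(max_over_time([1, 1, math.nan]))  # 1
```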


Reference:

Verified from Prometheus documentation – PromQL Range Vector Functions, Aggregation Over Time Functions, and Handling NaN Values in PromQL sections.



Given the following Histogram metric data, how many requests took less than or equal to 0.1 seconds?

apiserver_request_duration_seconds_bucket{job="kube-apiserver", le="+Inf"} 3

apiserver_request_duration_seconds_bucket{job="kube-apiserver", le="0.05"} 0

apiserver_request_duration_seconds_bucket{job="kube-apiserver", le="0.1"} 1

apiserver_request_duration_seconds_bucket{job="kube-apiserver", le="1"} 3

apiserver_request_duration_seconds_count{job="kube-apiserver"} 3

apiserver_request_duration_seconds_sum{job="kube-apiserver"} 0.554003785

  1. 0
  2. 0.554003785
  3. 1
  4. 3

Answer(s): C

Explanation:

In Prometheus, histogram metrics use cumulative buckets to record the count of observations that fall within specific duration thresholds. Each bucket has a label le ("less than or equal to"), representing the upper bound of that bucket.

In the given metric, the bucket labeled le="0.1" has a value of 1, meaning exactly one request took less than or equal to 0.1 seconds. Buckets are cumulative, so:

le="0.05" → 0 requests took <= 0.05 seconds
le="0.1"  → 1 request took <= 0.1 seconds
le="1"    → 3 requests took <= 1 second
le="+Inf" → 3 requests total (all observations)

The _sum and _count series give the total accumulated duration and the total request count, respectively, but the number of requests at or below a given threshold is read directly from the cumulative bucket whose le label equals that threshold.
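A small Python sketch (bucket values taken from the question) shows both the direct lookup and how per-interval counts could be recovered by differencing:

```python
import math

# Cumulative buckets from the question: le upper bound -> observations <= le
buckets = {0.05: 0, 0.1: 1, 1.0: 3, math.inf: 3}

# Requests taking <= 0.1s are read straight from the le="0.1" bucket
print(buckets[0.1])  # 1

# Per-interval counts come from differencing adjacent cumulative buckets
bounds = sorted(buckets)
per_interval = {ub: buckets[ub] - buckets[lb] for lb, ub in zip(bounds, bounds[1:])}
print(per_interval)  # {0.1: 1, 1.0: 2, inf: 0}
```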


Reference:

Verified from Prometheus documentation – Understanding Histograms and Summaries, Bucket Semantics, and Histogram Query Examples sections.


