CompTIA CY0-001 Exam (page: 2)
CompTIA SecAI+ Beta
Updated on: 12-Feb-2026

Viewing Page 2 of 11

Which of the following is the most concerning risk for a company that allows corporate end users to use public- facing large language models (LLMs)?

  1. Inaccuracies due to hallucinations
  2. Out-of-date acceptable use policies
  3. Data security regulatory violations
  4. Malicious code generation

Answer(s): C

Explanation:

The greatest concern with employees using public-facing LLMs is the potential exposure of sensitive or regulated corporate data. Submitting such information to external systems may violate data protection laws (e.g., GDPR, HIPAA), creating legal and compliance risks that outweigh issues like hallucinations or malicious outputs.



Which of the following requires developers to harden infrastructure to protect AI systems?

  1. Intake processes
  2. Acceptable use policies
  3. Development guidelines
  4. Configuration standards

Answer(s): D

Explanation:

Configuration standards define how infrastructure and systems must be securely set up and maintained. By following these standards, developers harden the environment that supports AI systems, reducing risks from misconfigurations and vulnerabilities.



Which of the following is the best example of an AI model that is trained to identify multiple points from input using a neural network to provide output for authentication?

  1. Facial recognition
  2. Encryption key
  3. Open Authorization (OAuth)
  4. Bounding box

Answer(s): A

Explanation:

Facial recognition uses neural networks to analyze multiple points or features from an input image (such as eyes, nose, mouth, and facial structure) to generate a unique identifier for authentication purposes.



An organization is developing and implementing AI features into a customer service application.
Which of the following practices should the organization put the place before releasing the application for customer trials?

  1. Data masking and sanitization
  2. External compliance audits
  3. Approved AI vendor lists
  4. Third-party risk management

Answer(s): A

Explanation:

Before releasing AI features for customer trials, it is critical to protect sensitive information that may be used during testing. Data masking and sanitization ensure customer or corporate data is anonymized or obfuscated, reducing the risk of data exposure while still allowing realistic evaluation of the AI system.



An internal user enters a client credit card number into an internal generative machine learning (ML) model:

#User prompt: Customer Jane Doe has a new credit card that she wants to add to her account. The number is 5555-5555-5555-5555

Which of the following is the most effective way to prevent prompt injection attacks against a large language model (LLM)?

  1. Guardrails
  2. Antivirus
  3. Web application firewall (WAF)
  4. Role-based access control

Answer(s): A

Explanation:

Guardrails are the primary security control for LLMs to prevent prompt injection attacks. They enforce rules on what inputs are accepted and how the model responds, blocking malicious or sensitive prompts (such as credit card numbers) before they can manipulate or exploit the model.



A security alert triggers an agentic system. An analyst notices the following payload in the logs"



The alert includes multiple shell commands that are not typically run as part of any hardening.
Which of the following is the most effective control to implement?

  1. Adding logic that includes approved strings before running the shell commands
  2. Deprecating model usage and retaining the model with safer parameters
  3. Modifying the application to ignore the SECURITY_UPDATE tag
  4. Using only approved libraries when interacting with agentic systems

Answer(s): A

Explanation:

The payload in the alert attempts to trick the system into executing unauthorized shell commands. The most effective control is to implement allow-list validation (approved strings) before execution. This ensures that only predefined, safe commands are executed, blocking prompt injection attempts that introduce malicious code such as the fake patch script.



A global security operations center (SOC) wants to adapt and leverage the strength of AI in order to enhance its security operations.
Which of the following is the best way to enhance the global SOC functions?

  1. Generate code and execute in production to help save time.
  2. Enable a personal assistant that can act in the global SOC with no human intervention.
  3. Use open-source models in production to help the efficiency of threat detection and threat analysis.
  4. Summarize alerts to easily gain insights on the environment.

Answer(s): D

Explanation:

AI can significantly enhance SOC operations by summarizing and correlating high volumes of alerts, enabling analysts to quickly identify patterns, prioritize threats, and gain actionable insights. This reduces analyst fatigue and improves response times without introducing unsafe automation risks.



An attacker successfully completes a denial-of-service (DoS) attack through the context window of an AI system. Thousands of characters are obfuscated and hidden behind an emoji.
Which of the following techniques best mitigates this type of attack?

  1. Fraud detection
  2. Large language model (LLM)-as-a-judge
  3. Pattern recognition
  4. Prompt filter

Answer(s): D

Explanation:

A DoS attack through the context window relies on overwhelming the model with excessive or obfuscated input. Prompt filtering prevents such malicious or oversized inputs from being processed, ensuring that the model only receives safe, properly structured data within acceptable limits.



Viewing Page 2 of 11



Share your comments for CompTIA CY0-001 exam with other users:

Marianne 10/22/2023 11:57:00 PM

i cannot see the button to go to the questions
Anonymous


sushant 6/28/2023 4:52:00 AM

good questions
EUROPEAN UNION


A\MAM 6/27/2023 5:17:00 PM

q-6 ans-b correct. https://docs.paloaltonetworks.com/pan-os/9-1/pan-os-cli-quick-start/use-the-cli/commit-configuration-changes
UNITED STATES


unanimous 12/15/2023 6:38:00 AM

very nice very nice
Anonymous


akminocha 9/28/2023 10:36:00 AM

please help us with 1z0-1107-2 dumps
INDIA


Jefi 9/4/2023 8:15:00 AM

please upload the practice questions
Anonymous


Thembelani 5/30/2023 2:45:00 AM

need this dumps
Anonymous


Abduraimov 4/19/2023 12:43:00 AM

preparing for this exam is overwhelming. you cannot pass without the help of these exam dumps.
UNITED KINGDOM


Puneeth 10/5/2023 2:06:00 AM

new to this site but i feel it is good
EUROPEAN UNION


Ashok Kumar 1/2/2024 6:53:00 AM

the correct answer to q8 is b. explanation since the mule app has a dependency, it is necessary to include project modules and dependencies to make sure the app will run successfully on the runtime on any other machine. source code of the component that the mule app is dependent of does not need to be included in the exported jar file, because the source code is not being used while executing an app. compiled code is being used instead.
Anonymous


Merry 7/30/2023 6:57:00 AM

good questions
Anonymous


VoiceofMidnight 12/17/2023 4:07:00 PM

Delayed the exam until December 29th.
UNITED STATES


Umar Ali 8/29/2023 2:59:00 PM

A and D are True
Anonymous


vel 8/28/2023 9:17:09 AM

good one with explanation
Anonymous


Gurdeep 1/18/2024 4:00:15 PM

This is one of the most useful study guides I have ever used.
CANADA