ISACA AAISM Exam
ISACA Advanced in AI Security Management
Updated on: 26-Oct-2025

A financial institution plans to deploy an AI system to provide credit risk assessments for loan applications.
Which of the following should be given the HIGHEST priority in the system's design to ensure ethical decision-making and prevent bias?

  A. Regularly update the model with new customer data to improve prediction accuracy.
  B. Integrate a mechanism for customers to appeal decisions directly within the system.
  C. Train the system to provide advisory outputs with final decisions made by human experts.
  D. Restrict the model's decision-making criteria to objective financial metrics only.

Answer(s): C

Explanation:

In AI governance frameworks, credit scoring is treated as a high-risk application. For such systems, the highest-priority safeguard is human oversight to ensure fairness, accountability, and prevention of bias in automated decisions.

The AI Security Management™ (AAISM) domain of AI Governance and Program Management emphasizes that high-impact AI systems require explicit governance structures and human accountability. Human-in-the-loop design ensures that final decisions remain the responsibility of human experts rather than being fully automated. This is particularly critical in financial contexts, where biased outputs can affect individuals' access to credit and create compliance risks.

Official ISACA AI governance guidance specifies:

High-risk AI systems must comply with strict requirements, including human oversight, transparency, and fairness.

The purpose of human oversight is to reduce risks to fundamental rights by ensuring humans can intervene or override an automated decision.

Bias controls are strengthened by requiring human review processes that can analyze outputs and prevent unfair discrimination.
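To make the human-in-the-loop principle concrete, the following is a minimal sketch of an advisory credit-assessment workflow; the class and field names are illustrative assumptions, not part of the ISACA materials. The model output is recorded only as a recommendation, and no decision is released until a named human reviewer signs off.

```python
from dataclasses import dataclass

@dataclass
class CreditAssessment:
    applicant_id: str
    model_score: float          # advisory output from the AI model
    model_recommendation: str   # e.g., "approve" or "decline"
    final_decision: str = ""    # must be set by a human reviewer
    reviewer_id: str = ""       # accountability: who made the final call

def finalize_decision(assessment: CreditAssessment,
                      reviewer_id: str,
                      decision: str,
                      rationale: str) -> CreditAssessment:
    """Record the human reviewer's final decision.

    The model output is advisory only; no decision is released
    until a named human expert approves or overrides it.
    """
    if not reviewer_id:
        raise ValueError("A human reviewer must be identified before release.")
    assessment.final_decision = decision
    assessment.reviewer_id = reviewer_id
    # In practice the rationale would be written to an audit log so that
    # overrides and agreements with the model can be analyzed for bias.
    print(f"{assessment.applicant_id}: model={assessment.model_recommendation}, "
          f"final={decision} by {reviewer_id} ({rationale})")
    return assessment

# Example: the reviewer overrides a borderline automated recommendation.
a = CreditAssessment("APP-001", model_score=0.48, model_recommendation="decline")
finalize_decision(a, reviewer_id="analyst-17", decision="approve",
                  rationale="Thin credit file; manual review of income documents")
```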

Why other options are not the highest priority:

A. Regular updates improve accuracy but do not guarantee fairness or ethical decision-making. Model drift can introduce new bias if not governed properly.

B. Appeals mechanisms are important for accountability, but they operate after harm has occurred. Governance frameworks emphasize prevention through human oversight in the decision loop.

D. Restricting criteria to "objective metrics" is insufficient, as even objective data can contain hidden proxies for protected attributes. Bias mitigation requires monitoring, testing, and human oversight, not only feature restriction.

AAISM Domain Alignment:

Domain 1 – AI Governance and Program Management: Ensures accountability, ethical oversight, and governance structures.

Domain 2 – AI Risk Management: Identifies and mitigates risks such as bias, discrimination, and lack of transparency.

Domain 3 – AI Technologies and Controls: Provides the technical enablers for implementing oversight mechanisms and bias detection tools.

Reference from AAISM and ISACA materials:

AAISM Exam Content Outline – Domain 1: AI Governance and Program Management (roles, responsibilities, oversight).

ISACA AI Governance Guidance (human oversight as mandatory in high-risk AI applications).

Bias and Fairness Controls in AI (human review and intervention as a primary safeguard).



A retail organization implements an AI-driven recommendation system that utilizes customer purchase history.
Which of the following is the BEST way for the organization to ensure privacy and comply with regulatory standards?

  A. Conducting quarterly retraining of the AI model to maintain the accuracy of recommendations
  B. Maintaining a register of legal and regulatory requirements for privacy
  C. Establishing a governance committee to oversee AI privacy practices
  D. Storing customer data indefinitely to ensure the AI model has a complete history

Answer(s): B

Explanation:

According to the AI Security Management™ (AAISM) study framework, compliance with privacy and regulatory standards must begin with a formalized process of identifying, documenting, and maintaining applicable obligations. The guidance explicitly notes that organizations should maintain a comprehensive register of legal and regulatory requirements to ensure accountability and alignment with privacy laws. This register serves as the foundation for all governance, risk, and control practices surrounding AI systems that handle personal data.

Maintaining such a register ensures that the recommendation system operates under the principles of privacy by design and privacy by default. It allows decision-makers and auditors to trace every AI data processing activity back to relevant compliance obligations, thereby demonstrating adherence to laws such as GDPR, CCPA, or other jurisdictional mandates.
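As a minimal sketch of what such a register might look like in practice (the structure and field names are assumptions for illustration, not prescribed by ISACA), each entry ties a processing activity of the recommendation system to the obligation it must satisfy and the control that satisfies it:

```python
from dataclasses import dataclass, field

@dataclass
class RegulatoryObligation:
    regulation: str           # e.g., "GDPR Art. 5(1)(c)"
    requirement: str          # plain-language summary of the obligation
    processing_activity: str  # which AI data flow it applies to
    control: str              # the control satisfying the obligation
    owner: str                # accountable role

@dataclass
class ComplianceRegister:
    entries: list[RegulatoryObligation] = field(default_factory=list)

    def add(self, entry: RegulatoryObligation) -> None:
        self.entries.append(entry)

    def for_activity(self, activity: str) -> list[RegulatoryObligation]:
        """Trace a processing activity back to its legal obligations."""
        return [e for e in self.entries if e.processing_activity == activity]

register = ComplianceRegister()
register.add(RegulatoryObligation(
    regulation="GDPR Art. 5(1)(c) - data minimization",
    requirement="Collect only the purchase history needed for recommendations",
    processing_activity="recommendation-model-training",
    control="Feature allowlist reviewed quarterly",
    owner="Privacy Officer",
))
print(register.for_activity("recommendation-model-training"))
```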

Other measures listed in the options contribute to good practice but do not achieve the same direct compliance outcome. Retraining models improves technical accuracy but does not address legal obligations. Oversight committees are valuable but require the documented register as a baseline to oversee effectively. Indefinite storage of customer data contradicts regulatory requirements, particularly the principles of data minimization and storage limitation.

AAISM Domain Alignment:

This requirement falls under Domain 1 – AI Governance and Program Management, which emphasizes organizational accountability, policy creation, and maintaining compliance documentation as part of a structured governance program.

Reference from AAISM and ISACA materials:

AAISM Exam Content Outline – Domain 1: AI Governance and Program Management

AI Security Management Study Guide – Privacy and Regulatory Compliance Controls

ISACA AI Governance Guidance – Maintaining Registers of Applicable Legal Requirements



An organization is updating its vendor arrangements to facilitate the safe adoption of AI technologies.
Which of the following would be the PRIMARY challenge in delivering this initiative?

  A. Failure to adequately assess AI risk
  B. Inability to sufficiently identify shadow AI within the organization
  C. Unwillingness of large AI companies to accept updated terms
  D. Insufficient legal team experience with AI

Answer(s): C

Explanation:

In the AAISM™ guidance, vendor management for AI adoption highlights that large AI providers often resist contractual changes, particularly when customers seek to impose stricter security, transparency, or ethical obligations. The official study materials emphasize that while organizations must evaluate AI risk and build internal expertise, the primary challenge lies in negotiating acceptable contractual terms with dominant AI vendors who may not be willing to adjust their standardized agreements. This resistance limits the ability of organizations to enforce oversight, bias controls, and compliance requirements contractually.


Reference:

AAISM Exam Content Outline – AI Risk Management

AI Security Management Study Guide – Third-Party and Vendor Risk



After implementing a third-party generative AI tool, an organization learns about new regulations related to how organizations use AI.
Which of the following would be the BEST justification for the organization to decide not to comply?

  A. The AI tool is widely used within the industry
  B. The AI tool is regularly audited
  C. The risk is within the organization's risk appetite
  D. The cost of noncompliance was not determined

Answer(s): C

Explanation:

The AAISM framework clarifies that compliance decisions must always be tied to an organization's risk appetite and tolerance.
When new regulations emerge, management may choose not to comply if the associated risk remains within the documented and approved risk appetite, provided that accountability is established and governance structures support the decision. Widespread industry use, third-party audits, or an undetermined cost of noncompliance do not justify noncompliance; a documented and approved risk appetite is the only recognized justification under AI governance principles.
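As a purely illustrative sketch of this decision rule (the scoring scale, threshold, and class names are assumptions, not ISACA-prescribed), a governance check might compare the assessed residual risk of noncompliance against the approved appetite and require a documented sign-off:

```python
from dataclasses import dataclass

@dataclass
class RiskAcceptance:
    risk_id: str
    residual_risk: int   # assessed residual risk score (e.g., on a 1-25 matrix)
    risk_appetite: int   # board-approved appetite threshold
    approver: str = ""   # accountable executive signing off

    def within_appetite(self) -> bool:
        return self.residual_risk <= self.risk_appetite

    def accept(self, approver: str) -> str:
        """Noncompliance may only be accepted if the risk is within appetite
        and a named, accountable approver signs off."""
        if not self.within_appetite():
            return f"{self.risk_id}: exceeds appetite - compliance plan required"
        self.approver = approver
        return f"{self.risk_id}: risk accepted within appetite by {approver}"

decision = RiskAcceptance("REG-AI-004", residual_risk=8, risk_appetite=10)
print(decision.accept("Chief Risk Officer"))
```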


Reference:

AAISM Study Guide – AI Governance and Program Management

ISACA AI Risk Guidance – Risk Appetite and Compliance Decisions



Which of the following is the MOST serious consequence of an AI system correctly guessing the personal information of individuals and drawing conclusions based on that information?

  A. The exposure of personal information may result in litigation
  B. The publicly available output of the model may include false or defamatory statements about individuals
  C. The output may reveal information about individuals or groups without their knowledge
  D. The exposure of personal information may lead to a decline in public trust

Answer(s): C

Explanation:

The AAISM curriculum states that the most serious privacy concern occurs when AI systems infer and disclose sensitive personal or group information without the knowledge or consent of the individuals. This constitutes a direct breach of privacy rights and data protection principles, including those enshrined in GDPR and other global regulations.
While litigation, reputational damage, or loss of trust are significant consequences, the unauthorized revelation of personal information through inference is classified as the most severe, because it directly undermines individual autonomy and confidentiality.


Reference:

AAISM Exam Content Outline – AI Risk Management

AI Security Management Study Guide – Privacy and Confidentiality Risks



Which of the following should be done FIRST when developing an acceptable use policy for generative AI?

  A. Determine the scope and intended use of AI
  B. Review AI regulatory requirements
  C. Consult with risk management and legal
  D. Review existing company policies

Answer(s): A

Explanation:

According to the AAISM framework, the first step in drafting an acceptable use policy is defining the scope and intended use of the AI system. This ensures that governance, regulatory considerations, risk assessments, and alignment with organizational policies are all tailored to the specific applications and functions the AI will serve. Once scope and intended use are clearly defined, legal, regulatory, and risk considerations can be systematically applied. Without this step, policies risk being generic and misaligned with business objectives.
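As an illustrative sketch only (the field names and example entries are assumptions, not taken from the AAISM materials), the scope and intended use could be captured as a structured record that the later policy steps then reference:

```python
from dataclasses import dataclass, field

@dataclass
class GenAIUsageScope:
    """Scope and intended use, defined before other policy sections are drafted."""
    business_functions: list[str] = field(default_factory=list)  # where generative AI may be used
    permitted_uses: list[str] = field(default_factory=list)
    prohibited_uses: list[str] = field(default_factory=list)
    data_classifications_allowed: list[str] = field(default_factory=list)

scope = GenAIUsageScope(
    business_functions=["marketing copy drafting", "internal code assistance"],
    permitted_uses=["drafting content reviewed by a human before publication"],
    prohibited_uses=["entering customer personal data into public AI tools"],
    data_classifications_allowed=["public", "internal"],
)

# Later steps (regulatory review, risk and legal consultation, policy alignment)
# are then evaluated against this defined scope rather than in the abstract.
print(scope)
```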


Reference:

AAISM Study Guide – AI Governance and Program Management (Policy Development Lifecycle)

ISACA AI Governance Guidance – Defining Scope and Use Priorities



A model producing contradictory outputs based on highly similar inputs MOST likely indicates the presence of:

  A. Poisoning attacks
  B. Evasion attacks
  C. Membership inference
  D. Model exfiltration

Answer(s): B

Explanation:

The AAISM study framework describes evasion attacks as attempts to manipulate or probe a trained model during inference by using crafted inputs that appear normal but cause the system to generate inconsistent or erroneous outputs. Contradictory results from nearly identical queries are a typical symptom of evasion, as the attacker is probing decision boundaries to find weaknesses. Poisoning attacks occur during training, not inference, while membership inference relates to exposing whether data was part of the training set, and model exfiltration involves extracting proprietary parameters or architecture. The clearest indication of contradictory outputs from similar queries therefore aligns directly with the definition of evasion attacks in AAISM materials.
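A minimal sketch of how a defender might flag this symptom, using a toy scikit-learn classifier as a stand-in for the production model (the model, perturbation size, and threshold are illustrative assumptions): predictions that flip under very small input perturbations suggest the query sits on a decision boundary that is being probed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy model standing in for the production classifier.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def flags_inconsistency(model, x: np.ndarray, epsilon: float = 0.01,
                        trials: int = 20) -> bool:
    """Return True if near-identical inputs produce contradictory predictions.

    Repeated prediction flips under tiny perturbations are a typical
    symptom of evasion-style probing of the decision boundary.
    """
    base = model.predict(x.reshape(1, -1))[0]
    perturbed = x + rng.normal(scale=epsilon, size=(trials, x.size))
    preds = model.predict(perturbed)
    return bool(np.any(preds != base))

# A query near the boundary (x0 + x1 close to 0) is likely to flip.
suspicious_query = np.array([0.004, -0.003, 0.5, -0.2])
print("Contradictory outputs detected:", flags_inconsistency(model, suspicious_query))
```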


Reference:

AAISM Study Guide – AI Technologies and Controls (Adversarial Machine Learning and Attack Types)

ISACA AI Security Management – Inference-time Attack Scenarios



Which of the following recommendations would BEST help a service provider mitigate the risk of lawsuits arising from generative AI's access to and use of internet data?

  A. Activate filtering logic to exclude intellectual property flags
  B. Disclose service provider policies to declare compliance with regulations
  C. Appoint a data steward specialized in AI to strengthen security governance
  D. Review log information that records how data was collected

Answer(s): A

Explanation:

The AAISM materials highlight that one of the primary legal risks with generative AI systems is the unauthorized use of copyrighted or intellectual property-protected data drawn from internet sources. To mitigate lawsuits, the most effective recommendation is to implement filtering logic that actively excludes data flagged for intellectual property risks before ingestion or generation.
While disclosing compliance policies, appointing governance roles, or reviewing logs are supportive measures, they do not directly prevent the core liability of using restricted content. The study guide explicitly emphasizes that proactive filtering and data governance controls are the most effective safeguards against legal disputes concerning content origin.
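A minimal sketch of such pre-ingestion filtering logic is shown below; the flag names and permitted-licence list are assumptions for illustration, not from the AAISM guide.

```python
from dataclasses import dataclass

# Licences under which ingestion is assumed to be acceptable (illustrative list).
PERMITTED_LICENSES = {"cc0", "cc-by", "public-domain"}

@dataclass
class CrawledDocument:
    url: str
    text: str
    license: str              # licence detected for the source
    ip_flagged: bool = False  # set by an upstream copyright/IP classifier

def filter_for_ingestion(docs: list[CrawledDocument]) -> list[CrawledDocument]:
    """Exclude documents with intellectual property risk before training or generation."""
    accepted = []
    for doc in docs:
        if doc.ip_flagged or doc.license not in PERMITTED_LICENSES:
            # Rejected content never enters the training/retrieval corpus;
            # in practice the rejection would also be logged for audit.
            continue
        accepted.append(doc)
    return accepted

corpus = [
    CrawledDocument("https://example.org/a", "open text", license="cc0"),
    CrawledDocument("https://example.org/b", "news article", license="all-rights-reserved"),
    CrawledDocument("https://example.org/c", "flagged excerpt", license="cc-by", ip_flagged=True),
]
print([d.url for d in filter_for_ingestion(corpus)])  # only the cc0 document remains
```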


Reference:

AAISM Exam Content Outline – AI Risk Management (Legal and Intellectual Property Risks)

AI Security Management Study Guide – Generative AI Data Governance


