Amazon AWS Certified Generative AI Developer - Professional AIP-C01 Exam Questions in PDF

Free Amazon AIP-C01 Dumps Questions (page: 1)

A retail company has a generative AI (GenAI) product recommendation application that uses Amazon Bedrock. The application suggests products to customers based on browsing history and demographics. The company needs to implement fairness evaluation across multiple demographic groups to detect and measure bias in recommendations between two prompt approaches. The company wants to collect and monitor fairness metrics in real time. The company must receive an alert if the fairness metrics show a discrepancy of more than 15% between demographic groups. The company must receive weekly reports that compare the performance of the two prompt approaches.

Which solution will meet these requirements with the LEAST custom development effort?

  A. Configure an Amazon CloudWatch dashboard to display default metrics from Amazon Bedrock API calls. Create custom metrics based on model outputs. Set up Amazon EventBridge rules to invoke AWS Lambda functions that perform post-processing analysis on model responses and publish custom fairness metrics.
  B. Create the two prompt variants in Amazon Bedrock Prompt Management. Use Amazon Bedrock Flows to deploy the prompt variants with defined traffic allocation. Configure Amazon Bedrock guardrails that have content filters to monitor demographic fairness. Set up Amazon CloudWatch alarms on the GuardrailContentSource dimension that use InvocationsIntervened metrics to detect recommendation discrepancy threshold violations.
  C. Set up Amazon SageMaker Clarify to analyze model outputs. Publish fairness metrics to Amazon CloudWatch. Create CloudWatch composite alarms that combine SageMaker Clarify bias metrics with Amazon Bedrock latency metrics to provide a comprehensive fairness evaluation dashboard.
  D. Create an Amazon Bedrock model evaluation job to compare fairness between the two prompt variants. Enable model invocation logging in Amazon CloudWatch. Set up CloudWatch alarms for InvocationsIntervened metrics with a dimension for each demographic group.

Answer(s): C

Explanation:

Amazon SageMaker Clarify provides built-in bias and fairness evaluation across demographic groups without requiring you to build custom scoring logic. Clarify can compute and publish fairness metrics to Amazon CloudWatch for near-real-time monitoring, where CloudWatch alarms can alert when group-to-group metric deltas exceed the 15% threshold. Clarify also produces periodic bias analysis outputs that can be used to generate weekly comparative reporting for the two prompt approaches with minimal additional implementation.
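The monitoring half of this solution can be sketched in a few lines: compute the gap between two groups' metrics, then publish it as a custom CloudWatch metric that a 15% alarm watches. The metric namespace, metric name, and example rates below are illustrative assumptions, not values from the question.

```python
# Sketch: compute a fairness gap between two demographic groups and publish it
# as a custom CloudWatch metric. Namespace, metric name, and example values
# are assumptions for illustration.

def fairness_gap_percent(group_a_rate: float, group_b_rate: float) -> float:
    """Relative difference (%) between two groups' recommendation rates."""
    baseline = max(group_a_rate, group_b_rate)
    if baseline == 0:
        return 0.0
    return abs(group_a_rate - group_b_rate) / baseline * 100


def publish_fairness_metric(gap_percent: float, prompt_variant: str) -> None:
    """Publish the gap to CloudWatch; an alarm on this metric with a threshold
    of 15 covers the alerting requirement. Requires AWS credentials."""
    import boto3  # imported here so the pure function above runs offline

    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_data(
        Namespace="GenAI/Fairness",  # assumed namespace
        MetricData=[{
            "MetricName": "DemographicGapPercent",
            "Dimensions": [{"Name": "PromptVariant", "Value": prompt_variant}],
            "Value": gap_percent,
            "Unit": "Percent",
        }],
    )


# Example: 0.80 vs 0.60 acceptance rate is a 25% gap, breaching a 15% alarm
gap = fairness_gap_percent(0.80, 0.60)
```

A CloudWatch alarm on `DemographicGapPercent` with a threshold of 15 then provides the real-time alerting, while Clarify's periodic bias reports cover the weekly comparison.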



A finance company is developing an AI assistant to help clients plan investments and manage their portfolios. The company identifies several high-risk conversation patterns, such as requests for specific stock recommendations or guaranteed returns. These patterns could lead to regulatory violations if the company cannot implement appropriate controls.

The company must ensure that the AI assistant does not provide inappropriate financial advice, generate content about competitors, or make claims that are not factually grounded in the company's approved financial guidance. The company wants to use Amazon Bedrock Guardrails to implement a solution.

Which combination of steps will meet these requirements? (Choose three.)

  A. Add the high-risk conversation patterns to a denied topics guardrail.
  B. Configure a content filter guardrail to filter prompts that contain the high-risk conversation patterns.
  C. Configure a content filter guardrail to filter prompts that contain competitor names.
  D. Add the names of competitors as custom word filters. Set the input and output actions to block.
  E. Set a low grounding score threshold.
  F. Set a high grounding score threshold.

Answer(s): A,D,F

Explanation:

Adding high-risk financial requests as denied topics ensures the assistant blocks conversations that could result in regulatory violations or inappropriate advice. Custom word filters with competitor names and block actions prevent the model from generating or responding with competitor-related content. Setting a high grounding score threshold forces responses to stay closely aligned with approved, trusted financial guidance, reducing the risk of unsupported or non-factual claims.
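The three chosen controls map directly onto a single guardrail configuration. The sketch below builds the request body for the Bedrock `CreateGuardrail` API; the topic name, competitor name, and 0.85 grounding threshold are illustrative assumptions.

```python
# Sketch of an Amazon Bedrock guardrail covering the three selected controls:
# denied topics (A), custom word filters for competitors (D), and a high
# contextual-grounding threshold (F). Names and thresholds are assumptions.

def build_guardrail_config() -> dict:
    return {
        "name": "finance-assistant-guardrail",
        "topicPolicyConfig": {  # A: denied topics for high-risk patterns
            "topicsConfig": [{
                "name": "SpecificStockAdvice",
                "definition": "Requests for specific stock picks or guaranteed returns.",
                "type": "DENY",
            }],
        },
        "wordPolicyConfig": {  # D: block competitor names in input and output
            "wordsConfig": [{"text": "ExampleCompetitorBank"}],
        },
        "contextualGroundingPolicyConfig": {  # F: high grounding threshold
            "filtersConfig": [{"type": "GROUNDING", "threshold": 0.85}],
        },
        "blockedInputMessaging": "I can't help with that request.",
        "blockedOutputsMessaging": "I can't provide that information.",
    }


config = build_guardrail_config()
# With credentials configured, the guardrail would be created with:
# boto3.client("bedrock").create_guardrail(**config)
```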



A company has deployed an AI assistant as a React application that uses AWS Amplify, an AWS AppSync GraphQL API, and Amazon Bedrock Knowledge Bases. The application uses the GraphQL API to call the Amazon Bedrock RetrieveAndGenerate API for knowledge base interactions. The company configures an AWS Lambda resolver to use the RequestResponse invocation type.

Application users report frequent timeouts and slow response times. Users report these problems more frequently for complex questions that require longer processing.

The company needs a solution to fix these performance issues and enhance the user experience.

Which solution will meet these requirements?

  A. Use AWS Amplify AI Kit to implement streaming responses from the GraphQL API and to optimize client-side rendering.
  B. Increase the timeout value of the Lambda resolver. Implement retry logic with exponential backoff.
  C. Update the application to send an API request to an Amazon SQS queue. Update the AWS AppSync resolver to poll and process the queue.
  D. Change the RetrieveAndGenerate API to the InvokeModelWithResponseStream API. Update the application to use an Amazon API Gateway WebSocket API to support the streaming response.

Answer(s): A

Explanation:

AWS Amplify AI Kit provides a higher-level implementation for streaming AI responses to the React client, which improves perceived latency for long-running Bedrock Knowledge Bases requests. Streaming reduces timeouts for complex questions by returning partial output as it is generated, and it enhances the user experience without requiring a custom WebSocket architecture or significant backend redesign.
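The Amplify AI Kit does the streaming plumbing in the React client, but the underlying principle is easy to show: consume chunks as they arrive and render partial output instead of waiting for the full answer. The sketch below uses a stand-in generator rather than a real Bedrock response stream.

```python
# Sketch of the streaming principle behind the AI Kit: render each chunk as it
# arrives so users see partial output early. The chunk source is a simulated
# generator, not a real Bedrock call.

from typing import Iterable, Iterator


def simulated_stream() -> Iterator[str]:
    """Stand-in for the text chunks of a streaming model response."""
    yield from ["Complex questions ", "are answered ", "incrementally."]


def render_streaming(chunks: Iterable[str]) -> str:
    """Append each chunk as it arrives; a real client updates the UI here."""
    rendered = ""
    for chunk in chunks:
        rendered += chunk  # UI update point: show partial text immediately
    return rendered


full_text = render_streaming(simulated_stream())
```

Because the first tokens reach the user well before the full response completes, perceived latency drops sharply for the long-running complex questions described in the scenario.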



An ecommerce company operates a global product recommendation system that needs to switch between multiple foundation models (FMs) in Amazon Bedrock based on regulations, cost optimization, and performance requirements. The company must apply custom controls based on proprietary business logic, including dynamic cost thresholds, AWS Region-specific compliance rules, and real-time A/B testing across multiple FMs. The system must be able to switch between FMs without deploying new code. The system must route user requests based on complex rules including user tier, transaction value, regulatory zone, and real-time cost metrics that change hourly and require immediate propagation across thousands of concurrent requests.

Which solution will meet these requirements?

  A. Deploy an AWS Lambda function that uses environment variables to store routing rules and Amazon Bedrock FM IDs. Use the Lambda console to update the environment variables when business requirements change. Configure an Amazon API Gateway REST API to read request parameters to make routing decisions.
  B. Deploy Amazon API Gateway REST API request transformation templates to implement routing logic based on request attributes. Store Amazon Bedrock FM endpoints as REST API stage variables. Update the variables when the system switches between models.
  C. Configure an AWS Lambda function to fetch routing configurations from the AWS AppConfig Agent for each user request. Run business logic in the Lambda function to select the appropriate FM for each request. Expose the FM through a single Amazon API Gateway REST API endpoint.
  D. Use AWS Lambda authorizers for an Amazon API Gateway REST API to evaluate routing rules that are stored in AWS AppConfig. Return authorization contexts based on business logic. Route requests to model-specific Lambda functions for each Amazon Bedrock FM.

Answer(s): C

Explanation:

AWS AppConfig is designed for dynamic, centralized configuration with fast propagation, so routing rules can be updated without code deployments and take effect quickly across high concurrency. Having Lambda fetch the latest AppConfig configuration and apply proprietary logic allows complex routing based on user attributes, regulatory zone, and frequently changing hourly cost metrics. Exposing a single API endpoint keeps the client stable while the backend switches among multiple Bedrock foundation models purely through configuration changes.
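The routing logic inside the Lambda function can be sketched as a pure function over the AppConfig document. The rule schema, tier names, and model IDs below are assumptions for illustration; in production the configuration would come from the AppConfig Agent's local HTTP endpoint rather than being hard-coded.

```python
# Sketch: apply routing rules (as fetched from AWS AppConfig) to pick a
# Bedrock model ID per request. Rule schema and model IDs are assumptions.

def select_model(config: dict, request: dict) -> str:
    """Return the model ID for a request based on tier, region, and value rules."""
    for rule in config["rules"]:
        if (request["tier"] in rule["tiers"]
                and request["region"] in rule["regions"]
                and request["transaction_value"] >= rule["min_transaction_value"]):
            return rule["model_id"]
    return config["default_model_id"]


routing_config = {
    "default_model_id": "amazon.titan-text-express-v1",
    "rules": [
        {"tiers": ["premium"], "regions": ["eu-west-1"],
         "min_transaction_value": 1000, "model_id": "anthropic.claude-3-sonnet"},
    ],
}

model = select_model(routing_config, {
    "tier": "premium", "region": "eu-west-1", "transaction_value": 5000,
})
# In the Lambda function, the config is re-fetched from the local agent, e.g.
# (hypothetical application/environment/profile names):
# urllib.request.urlopen(
#     "http://localhost:2772/applications/reco/environments/prod/configurations/routing")
```

Because the rules live in AppConfig rather than in code or environment variables, an hourly cost-threshold change propagates to all concurrent invocations without a deployment.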



A company is developing an internal generative AI (GenAI) assistant that uses Amazon Bedrock to summarize corporate documents for multiple business units. The GenAI assistant must generate responses in a consistent format that includes a document summary, classification of business risks, and terms that are flagged for review. The GenAI assistant must adapt the tone of responses for each user's business unit, such as legal, human resources, or finance. The GenAI assistant must block hate speech, inappropriate topics, and sensitive information such as personal health information.

The company needs a solution to centrally manage prompt variants across business units and teams. The company wants to minimize ongoing orchestration efforts and maintenance for post-processing logic. The company also wants to have the ability to adjust content moderation criteria for the GenAI assistant over time.

Which solution will meet these requirements with the LEAST maintenance overhead?

  A. Use Amazon Bedrock Prompt Management to configure reusable templates and business unit-specific prompt variants. Apply Amazon Bedrock guardrails that have category filters and sensitive term lists to block prohibited content.
  B. Use Amazon Bedrock Prompt Management to define base templates. Enforce business unit-specific tone by using system prompt variables. Configure Amazon Bedrock guardrails to apply audience-based threshold tuning. Manage the guardrails by using an internal administration API.
  C. Use Amazon Bedrock with business unit-based instruction injection in API calls. Store response formatting rules in Amazon DynamoDB. Use AWS Step Functions to validate responses. Use Amazon Comprehend to apply content filters after the GenAI assistant generates responses.
  D. Use Amazon Bedrock with custom prompt templates that are stored in Amazon DynamoDB. Create one AWS Lambda function to select business unit-specific prompts. Create a second Lambda function to call Amazon Comprehend to filter prohibited content from responses.

Answer(s): A

Explanation:

Amazon Bedrock Prompt Management centrally manages reusable prompt templates and business unit-specific variants, which enforces a consistent response structure while allowing tone differences per business unit without custom orchestration or post-processing. Amazon Bedrock guardrails provide managed moderation controls (for hate speech, inappropriate topics) and sensitive information handling using category filters and sensitive term lists, and these controls can be adjusted over time without building and maintaining separate moderation pipelines.
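The template-plus-variant pattern that Prompt Management centralizes looks like this in miniature. The section names, variable names, and tone strings are illustrative assumptions.

```python
# Sketch: one shared template with business unit-specific variable values,
# the pattern Prompt Management stores and versions centrally. The tone
# strings and section names are assumptions.

BASE_TEMPLATE = (
    "Summarize the document below.\n"
    "Tone: {tone}\n"
    "Output sections: Summary, Business Risks, Flagged Terms.\n"
    "Document: {document}"
)

VARIANTS = {
    "legal": {"tone": "formal and precise"},
    "hr": {"tone": "supportive and clear"},
    "finance": {"tone": "concise and numbers-focused"},
}


def render_prompt(business_unit: str, document: str) -> str:
    """Fill the shared template with the unit-specific variables."""
    return BASE_TEMPLATE.format(
        tone=VARIANTS[business_unit]["tone"], document=document)


prompt = render_prompt("legal", "Q3 vendor contract.")
```

The fixed output sections give the consistent format, while only the variant variables differ per unit, so no post-processing logic is needed.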



A financial services company is building a customer support application that retrieves relevant financial regulation documents from a database based on semantic similarities to user queries. The application must integrate with Amazon Bedrock to generate responses. The application must be able to search documents that are in English, Spanish, and Portuguese. The application must filter documents by metadata such as publication date, regulatory agency, and document type.

The database stores approximately 10 million document embeddings. To minimize operational overhead, the company wants a solution that minimizes management and maintenance effort. The application must provide low-latency responses for real-time customer interactions.

Which solution will meet these requirements?

  A. Use Amazon OpenSearch Serverless to provide vector search capabilities and metadata filtering. Connect to Amazon Bedrock Knowledge Bases to enable Retrieval Augmented Generation (RAG) capabilities that use an Anthropic Claude foundation model (FM).
  B. Deploy an Amazon Aurora PostgreSQL database with the pgvector extension. Define tables to store embeddings and metadata. Use SQL queries to perform similarity searches. Send retrieved documents to Amazon Bedrock to generate responses.
  C. Use Amazon S3 Vectors to configure a vector index and non-filterable metadata fields. Integrate S3 Vectors with Amazon Bedrock to enable Retrieval Augmented Generation (RAG) capabilities.
  D. Set up an Amazon Neptune Analytics graph database. Configure a vector index that has appropriate dimensionality to store document embeddings. Use Amazon Bedrock to perform graph-based retrieval and to generate responses.

Answer(s): A

Explanation:

Amazon OpenSearch Serverless provides managed, low-latency vector search at scale for millions of embeddings and supports metadata filtering for fields like publication date, agency, and document type. Integrating it with Amazon Bedrock Knowledge Bases delivers a managed RAG workflow with minimal operational overhead, while multilingual search is supported by using multilingual embedding generation for English, Spanish, and Portuguese and then retrieving semantically similar content from the vector index.
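A filtered vector query against OpenSearch can be sketched as below. The index field names (`embedding`, `agency`, `published`) and the toy vector are assumptions; a Bedrock knowledge base issues an equivalent query on your behalf when metadata filters are passed to the Retrieve API.

```python
# Sketch of an OpenSearch k-NN query that combines vector similarity with
# metadata filtering applied inside the engine. Field names and the toy
# vector are assumptions for illustration.

def build_vector_query(query_vector: list, agency: str, min_date: str) -> dict:
    return {
        "size": 5,
        "query": {
            "knn": {
                "embedding": {
                    "vector": query_vector,
                    "k": 5,
                    "filter": {  # metadata filter evaluated during the k-NN search
                        "bool": {
                            "must": [
                                {"term": {"agency": agency}},
                                {"range": {"published": {"gte": min_date}}},
                            ]
                        }
                    },
                }
            }
        },
    }


query = build_vector_query([0.1, 0.2, 0.3], "SEC", "2023-01-01")
```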



A medical company is building a generative AI (GenAI) application that uses Retrieval Augmented Generation (RAG) to provide evidence-based medical information. The application uses Amazon OpenSearch Service to retrieve vector embeddings. Users report that searches frequently miss results that contain exact medical terms and acronyms and return too many semantically similar but irrelevant documents. The company needs to improve retrieval quality and maintain low end-user latency, even as the document collection grows to millions of documents.

Which solution will meet these requirements with the LEAST operational overhead?

  A. Configure hybrid search by combining vector similarity with keyword matching to improve semantic understanding and exact term and acronym matching.
  B. Increase the dimensions of the vector embeddings from 384 to 1536. Use a post-processing AWS Lambda function to filter out irrelevant results after retrieval.
  C. Replace OpenSearch Service with Amazon Kendra. Use query expansion to handle medical acronyms and terminology variants during pre-processing.
  D. Implement a two-stage retrieval architecture in which initial vector search results are re-ranked by an ML model that is hosted on Amazon SageMaker AI.

Answer(s): A

Explanation:

Hybrid search combines vector similarity with traditional keyword matching, so the retriever can still match exact medical terms and acronyms while using embeddings for semantic recall. This reduces irrelevant "semantic-only" matches and improves precision without adding new managed services or custom re-ranking pipelines, keeping latency low and operational overhead minimal as the corpus scales to millions of documents.
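An OpenSearch hybrid query pairs a lexical clause with a k-NN clause in one request. The field names and toy vector below are assumptions; score normalization between the two clauses is configured separately through an OpenSearch search pipeline.

```python
# Sketch of an OpenSearch hybrid query: the lexical match clause catches
# exact medical terms and acronyms while the k-NN clause keeps semantic
# recall. Field names and the toy vector are assumptions.

def build_hybrid_query(text: str, vector: list) -> dict:
    return {
        "size": 10,
        "query": {
            "hybrid": {
                "queries": [
                    {"match": {"body": {"query": text}}},  # exact terms, acronyms
                    {"knn": {"embedding": {"vector": vector, "k": 10}}},  # semantic
                ]
            }
        },
    }


query = build_hybrid_query("MI troponin threshold", [0.1, 0.2, 0.3])
```

Both clauses run in a single round trip inside the cluster, which is why this keeps latency and operational overhead lower than a separate re-ranking stage.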



A company runs a generative AI (GenAI)-powered summarization application in an application AWS account that uses Amazon Bedrock. The application architecture includes an Amazon API Gateway REST API that forwards requests to AWS Lambda functions that are attached to private VPC subnets. The application summarizes sensitive customer records that the company stores in a governed data lake in a centralized data storage account. The company has enabled Amazon S3, Amazon Athena, and AWS Glue in the data storage account.

The company must ensure that calls that the application makes to Amazon Bedrock use only private connectivity between the company's application VPC and Amazon Bedrock. The company's data lake must provide fine-grained column-level access across the company's AWS accounts.

Which solution will meet these requirements?

  A. In the application account, create interface VPC endpoints for Amazon Bedrock runtimes. Run Lambda functions in private subnets. Use IAM conditions on inference and data-plane policies to allow calls only to approved endpoints and roles. In the data storage account, use AWS Lake Formation LF-tag-based access control to create table and column-level cross-account grants.
  B. Run Lambda functions in private subnets. Configure a NAT gateway to provide access to Amazon Bedrock and the data lake. Use S3 bucket policies and ACLs to manage permissions. Export AWS CloudTrail logs to Amazon S3 to perform weekly reviews.
  C. Create a gateway endpoint only for Amazon S3 in the application account. Invoke Amazon Bedrock through public endpoints. Use database-level grants in AWS Lake Formation to manage data access. Stream AWS CloudTrail logs to Amazon CloudWatch Logs. Do not set up metric filters or alarms.
  D. Use VPC endpoints to provide access to Amazon Bedrock and Amazon S3 in the application account. Use only IAM path-based policies to manage data lake access. Send AWS CloudTrail logs to Amazon CloudWatch Logs. Periodically create dashboards and allow public fallback for cross-Region reads to reduce setup time.

Answer(s): A

Explanation:

Interface VPC endpoints for the Amazon Bedrock runtime provide private connectivity from the VPC to Bedrock without using the public internet, and IAM conditions can restrict Bedrock invocation to those specific VPC endpoints and approved roles to enforce private-only access. AWS Lake Formation LF-tag-based access control supports fine-grained cross-account permissions, including column-level grants on governed tables in S3/Athena/Glue, which satisfies the centralized data lake requirement.
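The "private connectivity only" half of the IAM enforcement can be sketched as a deny-unless policy keyed on the `aws:SourceVpce` condition key. The endpoint ID below is a placeholder.

```python
# Sketch of an IAM policy that denies Bedrock invocation unless the call
# arrives through a specific interface VPC endpoint. The endpoint ID is a
# placeholder assumption.

def build_private_invoke_policy(vpce_id: str) -> dict:
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            "Resource": "*",
            "Condition": {
                # Deny any invocation that does not use the approved endpoint
                "StringNotEquals": {"aws:SourceVpce": vpce_id}
            },
        }],
    }


policy = build_private_invoke_policy("vpce-0abc1234example")
```

Attached alongside the normal allow statements, this makes calls over the public internet fail even if a principal's allow permissions would otherwise cover them.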


