Amazon AWS Certified Generative AI Developer - Professional AIP-C01 (page: 2)


Updated 12-Apr-2026

A media company must use Amazon Bedrock to implement a robust governance process for AI-generated content. The company needs to manage hundreds of prompt templates. Multiple teams use the templates across multiple AWS Regions to generate content. The solution must provide version control with approval workflows that include notifications for pending reviews. The solution must also provide detailed audit trails that document prompt activities and consistent prompt parameterization to enforce quality standards.

Which solution will meet these requirements?

  1. Configure Amazon Bedrock Studio prompt templates. Use Amazon CloudWatch to create dashboards that display prompt usage metrics. Store the approval status of content in Amazon DynamoDB. Use AWS Lambda functions to enforce approvals.
  2. Use Amazon Bedrock Prompt Management to implement version control. Configure AWS CloudTrail for audit logging. Use IAM policies to control approval permissions. Create parameterized prompt templates by specifying variables.
  3. Use AWS Step Functions to create an approval workflow. Store prompts as documents in Amazon S3. Use tags to implement version control. Use Amazon EventBridge to send notifications.
  4. Deploy Amazon SageMaker Canvas with prompt templates that are stored in Amazon S3. Use AWS CloudFormation to implement version control. Use AWS Config to enforce approval policies.

Answer(s): B

Explanation:

Amazon Bedrock Prompt Management is designed to centrally manage many prompt templates with versioning and consistent parameterization through variables, and it supports multi-team reuse across Regions. AWS CloudTrail provides detailed audit trails of prompt-related API activity, and IAM can enforce who is allowed to create, update, or approve prompt versions.
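The parameterization described above can be sketched locally. The substitution helper below mimics how a prompt variant resolves its `{{variable}}` inputs, and the commented-out boto3 call shows roughly how such a template could be registered with the `bedrock-agent` CreatePrompt API; the template text, variable names, and prompt name are all placeholders, not part of the original question.

```python
# Minimal sketch of parameterized prompt templates (all names are illustrative).
# Bedrock Prompt Management stores templates with {{variable}} placeholders;
# the same substitution can be demonstrated locally.

TEMPLATE = (
    "Write a {{tone}} social post about {{topic}} "
    "for the {{audience}} audience, under {{max_words}} words."
)

def fill_template(template: str, variables: dict) -> str:
    """Substitute {{name}} placeholders the way a prompt variant resolves inputs."""
    out = template
    for name, value in variables.items():
        out = out.replace("{{" + name + "}}", str(value))
    return out

prompt = fill_template(TEMPLATE, {
    "tone": "friendly", "topic": "our fall sale",
    "audience": "newsletter", "max_words": 80,
})

# Registering the template centrally (names are placeholders; verify the
# request shape against the current bedrock-agent CreatePrompt reference):
# import boto3
# bedrock_agent = boto3.client("bedrock-agent")
# bedrock_agent.create_prompt(
#     name="social-post-template",
#     variants=[{
#         "name": "v1",
#         "templateType": "TEXT",
#         "templateConfiguration": {"text": {
#             "text": TEMPLATE,
#             "inputVariables": [{"name": n} for n in
#                                ("tone", "topic", "audience", "max_words")],
#         }},
#     }],
# )
```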



A company is developing a customer support application that uses Amazon Bedrock foundation models (FMs) to provide real-time AI assistance to the company's employees. The application must display AI-generated responses character by character as the responses are generated. The application needs to support thousands of concurrent users with minimal latency. The responses typically take 15 to 45 seconds to finish.

Which solution will meet these requirements?

  1. Configure an Amazon API Gateway WebSocket API with an AWS Lambda integration. Configure the WebSocket API to invoke the Amazon Bedrock InvokeModelWithResponseStream API and stream partial responses through WebSocket connections.
  2. Configure an Amazon API Gateway REST API with an AWS Lambda integration. Configure the REST API to invoke the Amazon Bedrock standard InvokeModel API and implement frontend client-side polling every 100 ms for complete response chunks.
  3. Implement direct frontend client connections to Amazon Bedrock by using IAM user credentials and the InvokeModelWithResponseStream API without any intermediate gateway or proxy layer.
  4. Configure an Amazon API Gateway HTTP API with an AWS Lambda integration. Configure the HTTP API to cache complete responses in an Amazon DynamoDB table and serve the responses through multiple paginated GET requests to frontend clients.

Answer(s): A

Explanation:

Amazon Bedrock InvokeModelWithResponseStream provides token streaming so the UI can render output as it is generated. An API Gateway WebSocket API maintains a long-lived, low-latency bidirectional connection that can push partial chunks to thousands of concurrent clients over 15-45 second generations without inefficient polling or exposing direct Bedrock access from the browser.
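A minimal sketch of the streaming path: the helper parses one event from the response stream, and the commented section shows how a Lambda behind a WebSocket API might forward each text delta to a connected client. The chunk schema shown is the Claude Messages-style `content_block_delta` shape and varies by model family; the endpoint URL, model ID, and connection ID are placeholders.

```python
import json

def parse_stream_event(event: dict) -> str:
    """Extract the text delta from one InvokeModelWithResponseStream event.
    Assumes a Claude Messages-style 'content_block_delta' chunk; other model
    families use different payload schemas."""
    chunk = json.loads(event["chunk"]["bytes"])
    if chunk.get("type") == "content_block_delta":
        return chunk["delta"].get("text", "")
    return ""

# Inside a Lambda integrated with an API Gateway WebSocket API
# (IDs and URLs are placeholders):
# import boto3
# runtime = boto3.client("bedrock-runtime")
# ws = boto3.client(
#     "apigatewaymanagementapi",
#     endpoint_url="https://<api-id>.execute-api.<region>.amazonaws.com/prod")
# stream = runtime.invoke_model_with_response_stream(
#     modelId="anthropic.claude-3-5-sonnet-20240620-v1:0", body=request_body)
# for event in stream["body"]:
#     text = parse_stream_event(event)
#     if text:
#         ws.post_to_connection(ConnectionId=connection_id, Data=text.encode())
```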



A company is using Amazon Bedrock to design an application to help researchers apply for grants. The application is based on an Amazon Nova Pro foundation model (FM). The application contains four required inputs and must provide responses in a consistent text format. The company wants to receive a notification in Amazon Bedrock if a response contains bullying language. However, the company does not want to block all flagged responses.

The company creates an Amazon Bedrock flow that takes an input prompt and sends it to the Amazon Nova Pro FM. The Amazon Nova Pro FM provides a response.

Which additional steps must the company take to meet these requirements? (Choose two.)

  1. Use Amazon Bedrock Prompt Management to specify the required inputs as variables. Select an Amazon Nova Pro FM. Specify the output format for the response. Add the prompt to the prompts node of the flow.
  2. Create an Amazon Bedrock guardrail that applies the hate content filter. Set the filter response to block. Add the guardrail to the prompts node of the flow.
  3. Create an Amazon Bedrock prompt router. Specify an Amazon Nova Pro FM. Add the required inputs as variables to the input node of the flow. Add the prompt router to the prompts node. Add the output format to the output node.
  4. Create an Amazon Bedrock guardrail that applies the insults content filter. Set the filter response to detect. Add the guardrail to the prompts node of the flow.
  5. Create an Amazon Bedrock application inference profile that specifies an Amazon Nova Pro FM. Specify the output format for the response in the description. Include a tag for each of the input variables. Add the profile to the prompts node of the flow.

Answer(s): A,D

Explanation:

Prompt Management lets the company define a reusable prompt template with the four required inputs as variables and enforce a consistent output format, and the flow can reference that template from the prompts node. A Bedrock guardrail with the insults filter set to detect (not block) will flag bullying/insulting content and generate detections/notifications without preventing the response from being returned.
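A detect-only guardrail along these lines can be sketched as a CreateGuardrail request body. Note that the `inputAction`/`outputAction` fields ("NONE" meaning detect without blocking) are an assumption about the newer guardrail API shape, and the name is a placeholder; verify both against the current Bedrock CreateGuardrail reference before relying on them.

```python
# Sketch of a CreateGuardrail request with the INSULTS filter in detect-only
# mode. Field names for detect-vs-block ("inputAction"/"outputAction") are
# assumptions; confirm against the current API reference.

guardrail_request = {
    "name": "grant-helper-guardrail",           # placeholder name
    "contentPolicyConfig": {
        "filtersConfig": [{
            "type": "INSULTS",                  # covers bullying/insulting language
            "inputStrength": "HIGH",
            "outputStrength": "HIGH",
            "inputAction": "NONE",              # detect and flag, do not block
            "outputAction": "NONE",
        }],
    },
    "blockedInputMessaging": "Input blocked.",
    "blockedOutputsMessaging": "Output blocked.",
}

# import boto3
# boto3.client("bedrock").create_guardrail(**guardrail_request)
```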



A healthcare company is using Amazon Bedrock to build a Retrieval Augmented Generation (RAG) application that helps practitioners make clinical decisions. The application must achieve high accuracy for patient information retrievals, identify hallucinations in generated content, and reduce human review costs.

Which solution will meet these requirements?

  1. Use Amazon Comprehend to analyze and classify RAG responses and to extract medical entities and relationships. Use AWS Step Functions to orchestrate automated evaluations. Configure Amazon CloudWatch metrics to track entity recognition confidence scores. Configure CloudWatch to send an alert when accuracy falls below specified thresholds.
  2. Implement automated large language model (LLM)-based evaluations that use a specialized model that is fine-tuned for medical content to assess all responses. Deploy AWS Lambda functions to parallelize evaluations. Publish results to Amazon CloudWatch metrics that track relevance and factual accuracy.
  3. Configure Amazon CloudWatch Synthetics to generate test queries that have known answers on a regular schedule, and track model success rates. Set up dashboards that compare synthetic test results against expected outcomes.
  4. Deploy a hybrid evaluation system that uses an automated LLM-as-a-judge evaluation to initially screen responses and targeted human reviews for edge cases. Use Amazon SageMaker Feature Store to maintain evaluation datasets. Use a built-in Amazon Bedrock evaluation to track retrieval precision and hallucination rates.

Answer(s): D

Explanation:

A hybrid approach that uses an automated LLM-as-a-judge to evaluate relevance, factual consistency, and hallucinations provides scalable, high-accuracy screening while reducing the need for manual review.
Escalating only uncertain or edge cases to humans minimizes review costs. Amazon Bedrock's built-in evaluation capabilities directly measure retrieval precision and hallucination rates for RAG workloads, giving purpose-built metrics with less custom development and operational overhead.
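The triage step of such a hybrid pipeline can be sketched in a few lines. The thresholds below are purely illustrative, and the score names assume metrics like those a Bedrock evaluation job or an LLM judge might emit; only the middle band of uncertain responses reaches the expensive human-review path.

```python
# Sketch of hybrid LLM-as-a-judge screening with human escalation.
# Thresholds and metric names are illustrative, not from the source.

FAITHFULNESS_PASS = 0.90   # judge score above which we auto-accept
FAITHFULNESS_FAIL = 0.50   # below this, auto-reject as likely hallucination

def triage(judge_scores: dict) -> str:
    """Decide handling from judge metrics (e.g., faithfulness and relevance)."""
    faithfulness = judge_scores["faithfulness"]
    relevance = judge_scores["relevance"]
    if faithfulness >= FAITHFULNESS_PASS and relevance >= 0.8:
        return "auto_accept"
    if faithfulness < FAITHFULNESS_FAIL:
        return "auto_reject"
    return "human_review"   # reserved for uncertain edge cases

assert triage({"faithfulness": 0.97, "relevance": 0.9}) == "auto_accept"
assert triage({"faithfulness": 0.30, "relevance": 0.9}) == "auto_reject"
assert triage({"faithfulness": 0.70, "relevance": 0.9}) == "human_review"
```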



A company configures a landing zone in AWS Control Tower. The company handles sensitive data that must remain within the European Union. The company must use only the eu-central-1 Region. The company uses SCPs to enforce data residency policies. GenAI developers at the company are assigned IAM roles that have full permissions for Amazon Bedrock.

The company must ensure that GenAI developers can use the Amazon Nova Pro model through Amazon Bedrock only by using cross-Region inference (CRI) and only in eu-central-1. The company enables model access for the GenAI developer IAM roles in Amazon Bedrock. However, when a GenAI developer attempts to invoke the model through the Amazon Bedrock Chat/Text playground, the GenAI developer receives the following error.

User: arn:aws:sts::123456789012:assumed-role/AssumedDevRole/DevUserName
Action: bedrock:InvokeModelWithResponseStream
On resource(s): arn:aws:bedrock:eu-west-3::foundation-model/amazon.nova-pro-v1:0
Context: a service control policy explicitly denies the action

The company needs a solution to resolve the error. The solution must retain the company's existing governance controls and must provide precise access control. The solution must comply with the company's existing data residency policies.

Which combination of solutions will meet these requirements? (Choose two.)

  1. Add an AdministratorAccess policy to the GenAI developer IAM role.
  2. Extend the existing SCPs to enable CRI for the eu.amazon.nova-pro-v1:0 inference profile.
  3. Enable Amazon Bedrock model access for Amazon Nova Pro in the eu-west-3 Region.
  4. Validate that the GenAI developer IAM roles have permissions to invoke Amazon Nova Pro through the eu.amazon.nova-pro-v1:0 inference profile in all European Union AWS Regions that can serve the model.
  5. Extend the existing SCP to enable CRI for the eu.* inference profile.

Answer(s): B,D

Explanation:

Extending the SCP to allow the specific EU cross-Region inference profile for Amazon Nova Pro preserves the existing Control Tower/SCP governance while enabling the intended CRI access path instead of direct regional foundation-model invocation. Ensuring the developer roles are permitted to invoke the Nova Pro EU inference profile in any EU Region that may serve the CRI request prevents failures when Bedrock routes the inference to an eligible EU Region (such as eu-west-3) while keeping the developer entry point constrained to eu-central-1.
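The resource set that must be permitted can be sketched as a policy statement. Remember that an SCP only filters permissions (the role's identity policy must still grant the actions); the account ID and the list of EU Regions below are placeholders, and the Regions a given CRI profile can route to should be confirmed in the Bedrock documentation.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowNovaProCRIOnly",
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": [
        "arn:aws:bedrock:*:123456789012:inference-profile/eu.amazon.nova-pro-v1:0",
        "arn:aws:bedrock:eu-central-1::foundation-model/amazon.nova-pro-v1:0",
        "arn:aws:bedrock:eu-west-1::foundation-model/amazon.nova-pro-v1:0",
        "arn:aws:bedrock:eu-west-3::foundation-model/amazon.nova-pro-v1:0"
      ]
    }
  ]
}
```

The key detail is that CRI needs access both to the inference-profile ARN (the entry point in eu-central-1) and to the foundation-model ARN in every EU Region the profile may route to, which is exactly why the original error cited eu-west-3.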



A financial services company is developing a customer service AI assistant by using Amazon Bedrock. The AI assistant must not discuss investment advice with users. The AI assistant must block harmful content, mask personally identifiable information (PII), and maintain audit trails for compliance reporting. The AI assistant must apply content filtering to both user inputs and model responses based on content sensitivity.

The company requires an Amazon Bedrock guardrail configuration that will effectively enforce policies with minimal false positives. The solution must provide multiple handling strategies for multiple types of sensitive content.

Which solution will meet these requirements?

  1. Configure a single guardrail and set content filters to high for all categories. Set up denied topics for investment advice and include sample phrases to block. Set up sensitive information filters that apply the block action for all PII entities. Apply the guardrail to all model inference calls.
  2. Configure multiple guardrails by using tiered policies. Create one guardrail and set content filters to high. Configure the guardrail to block PII for public interactions. Configure a second guardrail and set content filters to medium. Configure the second guardrail to mask PII for internal use. Configure multiple topic-specific guardrails to block investment advice and set up contextual grounding checks.
  3. Configure a guardrail and set content filters to medium for harmful content. Set up denied topics for investment advice and include clear definitions and sample phrases to block. Configure sensitive information filters to mask PII in responses and to block financial information in inputs. Enable both input and output evaluations that use custom blocked messages for audits.
  4. Create a separate guardrail for each use case. Create one guardrail that applies a harmful content filter. Create a guardrail to apply topic filters for investment advice. Create a guardrail to apply sensitive information filters to block PII. Use AWS Step Functions to chain the guardrails together sequentially. Use conditional logic based on content classification.

Answer(s): C

Explanation:

A single Amazon Bedrock guardrail can enforce multiple policy types with different handling strategies while keeping management simple and reducing false positives. Medium content filters appropriately block harmful content without being overly restrictive, denied topics with clear definitions prevent investment advice, and sensitive information filters can both mask PII in outputs and block sensitive financial data in inputs. Enabling evaluation on both requests and responses ensures end-to-end filtering and supports auditability for compliance.
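A single guardrail combining all three policy types can be sketched as one CreateGuardrail request body. Field names follow the Bedrock CreateGuardrail request shape as best understood here; the topic definition, PII entity choices, and messages are illustrative, so check them against the current API reference.

```python
# Sketch of the combined single-guardrail configuration from option C.
# Values and names are illustrative placeholders.

guardrail_request = {
    "name": "customer-service-guardrail",
    "contentPolicyConfig": {
        "filtersConfig": [
            {"type": t, "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"}
            for t in ("HATE", "INSULTS", "SEXUAL", "VIOLENCE")
        ],
    },
    "topicPolicyConfig": {
        "topicsConfig": [{
            "name": "investment-advice",
            "definition": "Recommendations about buying, selling, or holding "
                          "securities or other financial products.",
            "examples": ["Which stocks should I buy?",
                         "Should I move my savings into bonds?"],
            "type": "DENY",
        }],
    },
    "sensitiveInformationPolicyConfig": {
        "piiEntitiesConfig": [
            # Mask names in responses; block card numbers outright.
            {"type": "NAME", "action": "ANONYMIZE"},
            {"type": "CREDIT_DEBIT_CARD_NUMBER", "action": "BLOCK"},
        ],
    },
    "blockedInputMessaging": "This request cannot be processed.",
    "blockedOutputsMessaging": "This response was withheld for compliance.",
}

# import boto3
# boto3.client("bedrock").create_guardrail(**guardrail_request)
```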



An ecommerce company is developing a generative AI (GenAI) solution that uses Amazon Bedrock with Anthropic Claude to recommend products to customers. Customers report that some of the recommended products are not available for sale on the website or are not relevant to the customer. Customers also report that the solution takes a long time to generate some recommendations.

The company investigates the issues and finds that most interactions between customers and the product recommendation solution are unique. The company confirms that the solution recommends products that are not in the company's product catalog. The company must resolve these issues.

Which solution will meet this requirement?

  1. Increase grounding within Amazon Bedrock Guardrails. Enable Automated Reasoning checks. Set up provisioned throughput.
  2. Use prompt engineering to restrict the model responses to relevant products. Use streaming techniques such as the InvokeModelWithResponseStream action to reduce perceived latency for the customers.
  3. Create an Amazon Bedrock knowledge base. Implement Retrieval Augmented Generation (RAG). Set the PerformanceConfigLatency parameter to optimized.
  4. Store product catalog data in Amazon OpenSearch Service. Validate the model's product recommendations against the product catalog. Use Amazon DynamoDB to implement response caching.

Answer(s): C

Explanation:

A Bedrock Knowledge Base with RAG grounds Claude's recommendations in the company's actual product catalog, which prevents suggesting products that are not available and improves relevance by retrieving only approved catalog items as context. Using the optimized latency performance configuration reduces end-to-end response time, which addresses the slow recommendation experience for largely unique interactions where caching is less effective.
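A RetrieveAndGenerate request grounded in a knowledge base might look like the sketch below. The knowledge base ID, model ARN, and especially the nesting of `performanceConfig` under `generationConfiguration` are assumptions about the `bedrock-agent-runtime` API shape; verify against the current reference.

```python
# Sketch of a knowledge-base RAG request with latency-optimized inference.
# IDs, ARNs, and the performanceConfig nesting are assumptions.

def build_rag_request(question: str, kb_id: str, model_arn: str) -> dict:
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
                "generationConfiguration": {
                    "performanceConfig": {"latency": "optimized"},
                },
            },
        },
    }

request = build_rag_request(
    "Which hiking boots do you stock in size 44?",
    kb_id="KB12345",                                   # placeholder
    model_arn="arn:aws:bedrock:us-east-1::foundation-model/"
              "anthropic.claude-3-5-sonnet-20240620-v1:0",
)

# import boto3
# client = boto3.client("bedrock-agent-runtime")
# response = client.retrieve_and_generate(**request)
# print(response["output"]["text"])
```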



A company is using AWS Lambda and REST APIs to build a reasoning agent to automate support workflows. The system must preserve memory across interactions, share the relevant agent state, and support event-driven invocation and synchronous invocation. The system must also enforce access control and session-based permissions.

Which combination of steps provides the MOST scalable solution? (Choose two.)

  1. Use Amazon Bedrock AgentCore to manage memory and session-aware reasoning. Deploy the agent with built-in identity support, event handling, and observability.
  2. Register the Lambda functions and the REST APIs as actions by using Amazon API Gateway and Amazon EventBridge. Enable Amazon Bedrock AgentCore to invoke the Lambda functions and the REST APIs without custom orchestration code.
  3. Use Amazon Bedrock Agents for reasoning and conversation management. Use AWS Step Functions and Amazon SQS queues for orchestration. Store the agent state in Amazon DynamoDB to maintain memory between steps.
  4. Deploy the reasoning logic as a container on Amazon ECS behind Amazon API Gateway. Use Amazon Aurora to store memory data and identity data.
  5. Build a custom RAG pipeline by using Amazon Kendra and Amazon Bedrock. Use AWS Lambda to orchestrate tool invocations. Store the agent state in Amazon S3.

Answer(s): A,B

Explanation:

Amazon Bedrock AgentCore provides managed, session-aware memory and agent state sharing with built-in identity controls, observability, and support for synchronous and event-driven execution, which avoids building and scaling a custom state layer. Registering Lambda functions and REST APIs as actions that AgentCore can invoke through event and API integrations removes custom orchestration code and scales cleanly as workflows and tools grow.


