Amazon AWS Certified Solutions Architect - Professional (SAP-C02) Exam

Page 5 of 107

A software company has deployed an application that consumes a REST API by using Amazon API Gateway, AWS Lambda functions, and an Amazon DynamoDB table. The application is showing an increase in the number of errors during PUT requests. Most of the PUT calls come from a small number of clients that are authenticated with specific API keys.

A solutions architect has identified that a large number of the PUT requests originate from one client. The API is noncritical, and clients can tolerate retries of unsuccessful calls. However, the errors are displayed to customers and are causing damage to the API’s reputation.

What should the solutions architect recommend to improve the customer experience?

  1. Implement retry logic with exponential backoff and irregular variation in the client application. Ensure that the errors are caught and handled with descriptive error messages.
  2. Implement API throttling through a usage plan at the API Gateway level. Ensure that the client application handles code 429 replies without error.
  3. Turn on API caching to enhance responsiveness for the production stage. Run 10-minute load tests. Verify that the cache capacity is appropriate for the workload.
  4. Implement reserved concurrency at the Lambda function level to provide the resources that are needed during sudden increases in traffic.

Answer(s): B

Explanation:

B) is the correct answer. Implementing API throttling through a usage plan at the API Gateway level limits the number of PUT requests each client, identified by its API key, can make, which reduces load on the system and minimizes errors. Because the API is noncritical and clients can tolerate retries, a client application that handles HTTP 429 (Too Many Requests) responses without surfacing an error can simply retry the call, so customers never see the failure and the API's reputation is protected.
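For reference, the per-client throttling described here is configured by attaching a usage plan to the API stage and to the client's API key. Below is a minimal boto3 sketch, assuming hypothetical API, stage, and key identifiers:

```python
import boto3

apigw = boto3.client("apigateway")

# Hypothetical identifiers; substitute the real API ID, stage name, and API key ID.
plan = apigw.create_usage_plan(
    name="noncritical-put-clients",
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],
    throttle={"rateLimit": 10.0, "burstLimit": 20},  # steady-state rate and burst caps
)

# Tie the heavy client's existing API key to the plan so its calls are throttled.
apigw.create_usage_plan_key(
    usagePlanId=plan["id"],
    keyId="abcdef1234",
    keyType="API_KEY",
)
```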

This solution ensures that clients making excessive requests are managed efficiently while allowing retries to maintain the application's integrity.
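On the client side, handling 429 replies "without error" usually means retrying with exponential backoff plus jitter rather than reporting the throttle to the user. A minimal Python sketch, assuming a hypothetical endpoint URL and API key:

```python
import random
import time

import requests

API_URL = "https://example.execute-api.us-east-1.amazonaws.com/prod/items"  # hypothetical
API_KEY = "example-api-key"  # hypothetical key enrolled in the usage plan

def put_item(payload: dict, max_attempts: int = 5) -> requests.Response:
    """PUT with retries on HTTP 429, backing off exponentially with jitter."""
    for attempt in range(max_attempts):
        response = requests.put(
            API_URL,
            json=payload,
            headers={"x-api-key": API_KEY},  # API Gateway usage plans key on this header
            timeout=10,
        )
        if response.status_code != 429:
            return response
        # Throttled: wait 2^attempt seconds plus random jitter, then retry.
        time.sleep((2 ** attempt) + random.uniform(0, 1))
    return response  # attempts exhausted; caller decides how to report this
```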



A company is running a data-intensive application on AWS. The application runs on a cluster of hundreds of Amazon EC2 instances. A shared file system also runs on several EC2 instances that store 200 TB of data. The application reads and modifies the data on the shared file system and generates a report. The job runs once monthly, reads a subset of the files from the shared file system, and takes about 72 hours to complete. The compute instances scale in an Auto Scaling group, but the instances that host the shared file system run continuously. The compute and storage instances are all in the same AWS Region.

A solutions architect needs to reduce costs by replacing the shared file system instances. The file system must provide high performance access to the needed data for the duration of the 72-hour run.

Which solution will provide the LARGEST overall cost reduction while meeting these requirements?

  1. Migrate the data from the existing shared file system to an Amazon S3 bucket that uses the S3 Intelligent-Tiering storage class. Before the job runs each month, use Amazon FSx for Lustre to create a new file system with the data from Amazon S3 by using lazy loading. Use the new file system as the shared storage for the duration of the job. Delete the file system when the job is complete.
  2. Migrate the data from the existing shared file system to a large Amazon Elastic Block Store (Amazon EBS) volume with Multi-Attach enabled. Attach the EBS volume to each of the instances by using a user data script in the Auto Scaling group launch template. Use the EBS volume as the shared storage for the duration of the job. Detach the EBS volume when the job is complete.
  3. Migrate the data from the existing shared file system to an Amazon S3 bucket that uses the S3 Standard storage class. Before the job runs each month, use Amazon FSx for Lustre to create a new file system with the data from Amazon S3 by using batch loading. Use the new file system as the shared storage for the duration of the job. Delete the file system when the job is complete.
  4. Migrate the data from the existing shared file system to an Amazon S3 bucket. Before the job runs each month, use AWS Storage Gateway to create a file gateway with the data from Amazon S3. Use the file gateway as the shared storage for the job. Delete the file gateway when the job is complete.

Answer(s): A

Explanation:

A) is the correct answer. Migrating the 200 TB of data to Amazon S3 with the S3 Intelligent-Tiering storage class provides cost-effective long-term storage and removes the continuously running file system instances. Amazon FSx for Lustre is optimized for high-performance computing, so a file system created from the S3 data delivers the high-performance access the job needs for its 72-hour run. Lazy loading copies a file's contents from S3 only when the file is first accessed, which suits a job that reads only a subset of the files. Deleting the file system when the job completes means the company pays for high-performance storage only during the monthly run, yielding the largest overall cost reduction.
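To illustrate the monthly workflow, an S3-linked FSx for Lustre scratch file system can be created just before the job starts and deleted afterward. A minimal boto3 sketch, assuming a hypothetical bucket and subnet (the ImportPath link is what enables lazy loading from S3):

```python
import boto3

fsx = boto3.client("fsx")

# SCRATCH_2 suits short-lived, high-throughput workloads like the 72-hour run.
fs = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=2400,  # GiB; hypothetical sizing for the subset of files read
    SubnetIds=["subnet-0123456789abcdef0"],  # hypothetical subnet
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://example-report-data",  # file contents lazy-load on first access
    },
)["FileSystem"]

# Mount this on the compute instances for the job; delete it when the job completes:
# fsx.delete_file_system(FileSystemId=fs["FileSystemId"])
```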



A company is developing a new service that will be accessed using TCP on a static port. A solutions architect must ensure that the service is highly available, has redundancy across Availability Zones, and is accessible using the DNS name my.service.com, which is publicly accessible. The service must use fixed address assignments so other companies can add the addresses to their allow lists.

Assuming that resources are deployed in multiple Availability Zones in a single Region, which solution will meet these requirements?

  1. Create Amazon EC2 instances with an Elastic IP address for each instance. Create a Network Load Balancer (NLB) and expose the static TCP port. Register EC2 instances with the NLB. Create a new name server record set named my.service.com, and assign the Elastic IP addresses of the EC2 instances to the record set. Provide the Elastic IP addresses of the EC2 instances to the other companies to add to their allow lists.
  2. Create an Amazon ECS cluster and a service definition for the application. Create and assign public IP addresses for the ECS cluster. Create a Network Load Balancer (NLB) and expose the TCP port. Create a target group and assign the ECS cluster name to the NLB. Create a new A record set named my.service.com, and assign the public IP addresses of the ECS cluster to the record set. Provide the public IP addresses of the ECS cluster to the other companies to add to their allow lists.
  3. Create Amazon EC2 instances for the service. Create one Elastic IP address for each Availability Zone. Create a Network Load Balancer (NLB) and expose the assigned TCP port. Assign the Elastic IP addresses to the NLB for each Availability Zone. Create a target group and register the EC2 instances with the NLB. Create a new A (alias) record set named my.service.com, and assign the NLB DNS name to the record set.
  4. Create an Amazon ECS cluster and a service definition for the application. Create and assign a public IP address for each host in the cluster. Create an Application Load Balancer (ALB) and expose the static TCP port. Create a target group and assign the ECS service definition name to the ALB. Create a new CNAME record set and associate the public IP addresses to the record set. Provide the Elastic IP addresses of the Amazon EC2 instances to the other companies to add to their allow lists.

Answer(s): C

Explanation:

C) is the correct answer. A Network Load Balancer operates at the transport layer and can expose a static TCP port, and assigning one Elastic IP address per Availability Zone gives the service fixed addresses that other companies can add to their allow lists. Registering the EC2 instances in a target group behind the NLB provides high availability and redundancy across Availability Zones, and pointing an A (alias) record named my.service.com at the NLB DNS name makes the service publicly accessible under the required name.
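To make the fixed-address mechanics concrete, the boto3 sketch below (hypothetical subnet, Elastic IP allocation, and hosted zone IDs) creates an NLB with one Elastic IP per Availability Zone and aliases my.service.com to it:

```python
import boto3

elbv2 = boto3.client("elbv2")
route53 = boto3.client("route53")

# One subnet/Elastic IP pair per Availability Zone (hypothetical IDs).
nlb = elbv2.create_load_balancer(
    Name="my-service-nlb",
    Type="network",
    Scheme="internet-facing",
    SubnetMappings=[
        {"SubnetId": "subnet-0aaaa1111bbbb2222", "AllocationId": "eipalloc-0aaaa1111bbbb2222"},
        {"SubnetId": "subnet-0cccc3333dddd4444", "AllocationId": "eipalloc-0cccc3333dddd4444"},
    ],
)["LoadBalancers"][0]

# Alias my.service.com to the NLB's DNS name (hypothetical hosted zone ID).
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "my.service.com",
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": nlb["CanonicalHostedZoneId"],
                "DNSName": nlb["DNSName"],
                "EvaluateTargetHealth": True,
            },
        },
    }]},
)
```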



A company uses an on-premises data analytics platform. The system is highly available in a fully redundant configuration across 12 servers in the company’s data center.

The system runs scheduled jobs, both hourly and daily, in addition to one-time requests from users. Scheduled jobs can take between 20 minutes and 2 hours to finish running and have tight SLAs. The scheduled jobs account for 65% of the system usage. User jobs typically finish running in less than 5 minutes and have no SLA. The user jobs account for 35% of system usage. During system failures, scheduled jobs must continue to meet SLAs. However, user jobs can be delayed.

A solutions architect needs to move the system to Amazon EC2 instances and adopt a consumption-based model to reduce costs with no long-term commitments. The solution must maintain high availability and must not affect the SLAs.

Which solution will meet these requirements MOST cost-effectively?

  1. Split the 12 instances across two Availability Zones in the chosen AWS Region. Run two instances in each Availability Zone as On-Demand Instances with Capacity Reservations. Run four instances in each Availability Zone as Spot Instances.
  2. Split the 12 instances across three Availability Zones in the chosen AWS Region. In one of the Availability Zones, run all four instances as On-Demand Instances with Capacity Reservations. Run the remaining instances as Spot Instances.
  3. Split the 12 instances across three Availability Zones in the chosen AWS Region. Run two instances in each Availability Zone as On-Demand Instances with a Savings Plan. Run two instances in each Availability Zone as Spot Instances.
  4. Split the 12 instances across three Availability Zones in the chosen AWS Region. Run three instances in each Availability Zone as On-Demand Instances with Capacity Reservations. Run one instance in each Availability Zone as a Spot Instance.

Answer(s): D

Explanation:

D) is the correct answer. Splitting the 12 instances across three Availability Zones maintains high availability, and On-Demand Instances with Capacity Reservations guarantee capacity for the SLA-bound scheduled jobs without any long-term commitment (unlike the Savings Plan in option C). Scheduled jobs account for 65% of usage, roughly 8 of the 12 instances (0.65 × 12 = 7.8), so the nine On-Demand instances cover that load in normal operation. Even if one Availability Zone fails, six On-Demand and two Spot Instances remain, and because user jobs have no SLA and can be delayed, the surviving capacity can be given to the scheduled jobs. Spot Instances price the remaining, interruption-tolerant user workload at the lowest cost, striking the right balance between cost-effectiveness and SLA protection.
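The capacity arithmetic behind this choice, sketched in Python with the figures from the question:

```python
# Scheduled jobs are 65% of usage on a 12-instance system.
TOTAL_INSTANCES = 12
SCHEDULED_SHARE = 0.65
needed_for_sla = round(TOTAL_INSTANCES * SCHEDULED_SHARE)  # 0.65 * 12 = 7.8 -> 8

# Option D: 3 On-Demand (Capacity Reservations) + 1 Spot in each of 3 AZs.
on_demand_per_az, spot_per_az, azs = 3, 1, 3
assert on_demand_per_az * azs >= needed_for_sla  # 9 >= 8 in normal operation

# Single-AZ failure: 6 On-Demand + 2 Spot remain. User jobs have no SLA and
# can be delayed, so the surviving 8 instances can serve the scheduled jobs.
remaining = (on_demand_per_az + spot_per_az) * (azs - 1)
print(remaining)  # 8
```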



A security engineer determined that an existing application retrieves credentials for an Amazon RDS for MySQL database from an encrypted file in Amazon S3. For the next version of the application, the security engineer wants to implement the following application design changes to improve security:

-The database must use strong, randomly generated passwords stored in a secure AWS managed service.
-The application resources must be deployed through AWS CloudFormation.
-The application must rotate credentials for the database every 90 days.

A solutions architect will generate a CloudFormation template to deploy the application.

Which resources specified in the CloudFormation template will meet the security engineer’s requirements with the LEAST amount of operational overhead?

  1. Generate the database password as a secret resource using AWS Secrets Manager. Create an AWS Lambda function resource to rotate the database password. Specify a Secrets Manager RotationSchedule resource to rotate the database password every 90 days.
  2. Generate the database password as a SecureString parameter type using AWS Systems Manager Parameter Store. Create an AWS Lambda function resource to rotate the database password. Specify a Parameter Store RotationSchedule resource to rotate the database password every 90 days.
  3. Generate the database password as a secret resource using AWS Secrets Manager. Create an AWS Lambda function resource to rotate the database password. Create an Amazon EventBridge scheduled rule resource to trigger the Lambda function password rotation every 90 days.
  4. Generate the database password as a SecureString parameter type using AWS Systems Manager Parameter Store. Specify an AWS AppSync DataSource resource to automatically rotate the database password every 90 days.

Answer(s): A

Explanation:

A) is the correct answer. AWS Secrets Manager is purpose-built for generating, storing, and rotating credentials: the secret resource can generate a strong random password, and the AWS::SecretsManager::RotationSchedule resource invokes a rotation Lambda function on the 90-day cadence with no custom scheduling logic. Option C would also work but adds the operational overhead of wiring up an EventBridge rule, and AWS Systems Manager Parameter Store (options B and D) has no built-in rotation mechanism. Option A therefore meets the requirements for strong, randomly generated passwords and automatic rotation with the least operational overhead.
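For comparison with the CloudFormation resources the question names, the same setup expressed as boto3 calls might look like the sketch below (hypothetical secret name, database host, and rotation Lambda ARN; the rotate_secret call is the API-level counterpart of the AWS::SecretsManager::RotationSchedule resource):

```python
import json

import boto3

sm = boto3.client("secretsmanager")

# Strong random password, excluding characters MySQL connection strings often reject.
password = sm.get_random_password(
    PasswordLength=32,
    ExcludeCharacters='"@/\\',
)["RandomPassword"]

# AWS-provided MySQL rotation functions expect this JSON shape (values hypothetical).
secret = sm.create_secret(
    Name="prod/app/mysql",
    SecretString=json.dumps({
        "engine": "mysql",
        "host": "example.abc123.us-east-1.rds.amazonaws.com",
        "username": "appuser",
        "password": password,
    }),
)

# Rotate every 90 days via a rotation Lambda function (hypothetical ARN).
sm.rotate_secret(
    SecretId=secret["ARN"],
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rotate-mysql",
    RotationRules={"AutomaticallyAfterDays": 90},
)
```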


