Amazon SAP-C02 Exam (page: 10)
Amazon AWS Certified Solutions Architect - Professional SAP-C02
Updated on: 09-Feb-2026


A company is running several workloads in a single AWS account. A new company policy states that engineers can provision only approved resources and that engineers must use AWS CloudFormation to provision these resources. A solutions architect needs to create a solution to enforce the new restriction on the IAM role that the engineers use for access.

What should the solutions architect do to create the solution?

  1. Upload AWS CloudFormation templates that contain approved resources to an Amazon S3 bucket. Update the IAM policy for the engineers’ IAM role to only allow access to Amazon S3 and AWS CloudFormation. Use AWS CloudFormation templates to provision resources.
  2. Update the IAM policy for the engineers’ IAM role with permissions to only allow provisioning of approved resources and AWS CloudFormation. Use AWS CloudFormation templates to create stacks with approved resources.
  3. Update the IAM policy for the engineers’ IAM role with permissions to only allow AWS CloudFormation actions. Create a new IAM policy with permission to provision approved resources, and assign the policy to a new IAM service role. Assign the IAM service role to AWS CloudFormation during stack creation.
  4. Provision resources in AWS CloudFormation stacks. Update the IAM policy for the engineers’ IAM role to only allow access to their own AWS CloudFormation stack.

Answer(s): C

Explanation:

C) Update the IAM policy for the engineers’ IAM role with permissions to only allow AWS CloudFormation actions. Create a new IAM policy with permission to provision approved resources, and assign the policy to a new IAM service role. Assign the IAM service role to AWS CloudFormation during stack creation.

This solution ensures that engineers can only use AWS CloudFormation to provision resources, while the actual resource provisioning is restricted to only approved resources via the newly created IAM service role. By assigning the service role during stack creation, engineers are limited to using CloudFormation and cannot bypass the resource restrictions. This approach aligns with the company's new policy while ensuring control over resource provisioning.
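As an illustration only (the stack, template, and role names below are hypothetical), a boto3 sketch of launching a stack with a CloudFormation service role might look like this. The engineers' role needs only CloudFormation permissions plus iam:PassRole for the service role; the service role is what actually carries the permissions to create the approved resources.

    # Hypothetical sketch: the engineer calls CloudFormation, which assumes the
    # service role to provision the approved resources.
    import boto3

    cloudformation = boto3.client("cloudformation")

    response = cloudformation.create_stack(
        StackName="approved-workload",
        TemplateURL="https://example-bucket.s3.amazonaws.com/approved-template.yaml",
        # Service role that holds the provisioning permissions for approved resources.
        RoleARN="arn:aws:iam::123456789012:role/cfn-approved-resources-role",
        Capabilities=["CAPABILITY_NAMED_IAM"],
    )
    print(response["StackId"])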



A solutions architect is designing the data storage and retrieval architecture for a new application that a company will be launching soon. The application is designed to ingest millions of small records per minute from devices all around the world. Each record is less than 4 KB in size and needs to be stored in a durable location where it can be retrieved with low latency. The data is ephemeral and the company is required to store the data for 120 days only, after which the data can be deleted.

The solutions architect calculates that, during the course of a year, the storage requirements would be about 10-15 TB.

Which storage strategy is the MOST cost-effective and meets the design requirements?

  1. Design the application to store each incoming record as a single .csv file in an Amazon S3 bucket to allow for indexed retrieval. Configure a lifecycle policy to delete data older than 120 days.
  2. Design the application to store each incoming record in an Amazon DynamoDB table properly configured for the scale. Configure the DynamoDB Time to Live (TTL) feature to delete records older than 120 days.
  3. Design the application to store each incoming record in a single table in an Amazon RDS MySQL database. Run a nightly cron job that runs a query to delete any records older than 120 days.
  4. Design the application to batch incoming records before writing them to an Amazon S3 bucket. Update the metadata for the object to contain the list of records in the batch and use the Amazon S3 metadata search feature to retrieve the data. Configure a lifecycle policy to delete the data after 120 days.

Answer(s): B

Explanation:

B) Design the application to store each incoming record in an Amazon DynamoDB table properly configured for the scale. Configure the DynamoDB Time to Live (TTL) feature to delete records older than 120 days.

Amazon DynamoDB is highly suitable for storing small records with low latency and high throughput, making it ideal for handling millions of small records per minute. DynamoDB's Time to Live (TTL) feature allows automatic deletion of records older than 120 days, ensuring that the data retention policy is followed without manual intervention. This approach is cost-effective, durable, and scalable to handle the expected data volume and access pattern, meeting both the performance and storage requirements.
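As a minimal sketch (the table, attribute, and key names are assumptions, not part of the question), the TTL setup and a per-record 120-day expiry could look like this with boto3:

    import time
    import boto3

    dynamodb = boto3.client("dynamodb")

    # Enable TTL on an attribute that stores an epoch-seconds expiry timestamp.
    dynamodb.update_time_to_live(
        TableName="sensor-records",
        TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
    )

    # Each record carries its own expiry, 120 days after ingestion; DynamoDB
    # removes expired items in the background at no additional cost.
    expires_at = int(time.time()) + 120 * 24 * 60 * 60
    dynamodb.put_item(
        TableName="sensor-records",
        Item={
            "device_id": {"S": "device-001"},
            "recorded_at": {"N": str(int(time.time() * 1000))},
            "payload": {"S": "<record body, under 4 KB>"},
            "expires_at": {"N": str(expires_at)},
        },
    )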



A retail company is hosting an ecommerce website on AWS across multiple AWS Regions. The company wants the website to be operational at all times for online purchases. The website stores data in an Amazon RDS for MySQL DB instance.

Which solution will provide the HIGHEST availability for the database?

  1. Configure automated backups on Amazon RDS. In the case of disruption, promote an automated backup to be a standalone DB instance. Direct database traffic to the promoted DB instance. Create a replacement read replica that has the promoted DB instance as its source.
  2. Configure global tables and read replicas on Amazon RDS. Activate the cross-Region scope. In the case of disruption, use AWS Lambda to copy the read replicas from one Region to another Region.
  3. Configure global tables and automated backups on Amazon RDS. In the case of disruption, use AWS Lambda to copy the read replicas from one Region to another Region.
  4. Configure read replicas on Amazon RDS. In the case of disruption, promote a cross-Region read replica to be a standalone DB instance. Direct database traffic to the promoted DB instance. Create a replacement read replica that has the promoted DB instance as its source.

Answer(s): D

Explanation:

D) Configure read replicas on Amazon RDS. In the case of disruption, promote a cross-Region read replica to be a standalone DB instance. Direct database traffic to the promoted DB instance. Create a replacement read replica that has the promoted DB instance as its source.

This solution provides high availability and disaster recovery by using cross-Region read replicas. In the event of a disruption, a cross-Region read replica can be promoted to a standalone DB instance, ensuring minimal downtime and data loss. This approach ensures that the ecommerce website remains operational in multiple Regions, providing the highest availability for the database, with the ability to quickly restore normal operations by creating a new read replica from the promoted instance.
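A rough boto3 sketch of the recovery steps, with the Regions, instance identifiers, and instance class as placeholder assumptions:

    import boto3

    # Client in the Region that hosts the cross-Region read replica.
    rds_dr = boto3.client("rds", region_name="us-west-2")

    # Step 1: promote the replica to a standalone DB instance and direct
    # database traffic to its endpoint once promotion completes.
    rds_dr.promote_read_replica(DBInstanceIdentifier="ecommerce-replica-west")

    # Step 2: create a replacement cross-Region read replica that uses the
    # promoted instance as its source (created from the destination Region,
    # referencing the source by ARN).
    rds_other = boto3.client("rds", region_name="us-east-1")
    rds_other.create_db_instance_read_replica(
        DBInstanceIdentifier="ecommerce-replica-east",
        SourceDBInstanceIdentifier="arn:aws:rds:us-west-2:123456789012:db:ecommerce-replica-west",
        DBInstanceClass="db.r6g.large",
    )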



Example Corp. has an on-premises data center and a VPC named VPC A in the Example Corp. AWS account. The on-premises network connects to VPC A through an AWS Site-To-Site VPN. The on-premises servers can properly access VPC A. Example Corp. just acquired AnyCompany, which has a VPC named VPC B. There is no IP address overlap among these networks. Example Corp. has peered VPC A and VPC B.

Example Corp. wants to connect from its on-premises servers to VPC B. Example Corp. has properly set up the network ACLs and security groups.

Which solution will meet this requirement with the LEAST operational effort?

  1. Create a transit gateway. Attach the Site-to-Site VPN, VPC A, and VPC B to the transit gateway. Update the transit gateway route tables for all networks to add IP range routes for all other networks.
  2. Create a transit gateway. Create a Site-to-Site VPN connection between the on-premises network and VPC B, and connect the VPN connection to the transit gateway. Add a route to direct traffic to the peered VPCs, and add an authorization rule to give clients access to VPCs A and B.
  3. Update the route tables for the Site-to-Site VPN and both VPCs for all three networks. Configure BGP propagation for all three networks. Wait for up to 5 minutes for BGP propagation to finish.
  4. Modify the Site-to-Site VPN’s virtual private gateway definition to include VPC A and VPC B. Split the two routers of the virtual private gateway between the two VPCs.

Answer(s): A

Explanation:

A) Create a transit gateway. Attach the Site-to-Site VPN, VPC A, and VPC B to the transit gateway. Update the transit gateway route tables for all networks to add IP range routes for all other networks.

This solution meets the requirement with the least operational effort because an AWS Transit Gateway simplifies network management by acting as a hub to interconnect multiple VPCs and VPN connections. By attaching the Site-to-Site VPN, VPC A, and VPC B to the transit gateway, Example Corp can route traffic between all three networks seamlessly. This setup eliminates the need for complex routing and network peering configuration changes across multiple VPCs. It is scalable and reduces operational complexity.
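A hedged boto3 sketch of the hub-and-spoke setup; the VPC, subnet, and customer gateway IDs are placeholders, and the default route table association/propagation options let the attached networks learn each other's routes:

    import boto3

    ec2 = boto3.client("ec2")

    # Create the transit gateway that acts as the hub for all three networks.
    tgw = ec2.create_transit_gateway(
        Description="Hub for on-premises, VPC A, and VPC B",
        Options={
            "DefaultRouteTableAssociation": "enable",
            "DefaultRouteTablePropagation": "enable",
        },
    )["TransitGateway"]

    # Attach VPC A and VPC B (one subnet per Availability Zone in practice).
    for vpc_id, subnet_ids in [("vpc-aaaa1111", ["subnet-a1"]),
                               ("vpc-bbbb2222", ["subnet-b1"])]:
        ec2.create_transit_gateway_vpc_attachment(
            TransitGatewayId=tgw["TransitGatewayId"],
            VpcId=vpc_id,
            SubnetIds=subnet_ids,
        )

    # Terminate the Site-to-Site VPN on the transit gateway instead of a
    # virtual private gateway.
    ec2.create_vpn_connection(
        CustomerGatewayId="cgw-0123456789abcdef0",
        Type="ipsec.1",
        TransitGatewayId=tgw["TransitGatewayId"],
    )

    # Each VPC route table still needs routes for the other networks' CIDR
    # ranges that target the transit gateway.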



A company recently completed the migration from an on-premises data center to the AWS Cloud by using a replatforming strategy. One of the migrated servers is running a legacy Simple Mail Transfer Protocol (SMTP) service that a critical application relies upon. The application sends outbound email messages to the company’s customers. The legacy SMTP server does not support TLS encryption and uses TCP port 25. The application can use SMTP only.

The company decides to use Amazon Simple Email Service (Amazon SES) and to decommission the legacy SMTP server. The company has created and validated the SES domain. The company has lifted the SES limits.

What should the company do to modify the application to send email messages from Amazon SES?

  1. Configure the application to connect to Amazon SES by using TLS Wrapper. Create an IAM role that has ses:SendEmail and ses:SendRawEmail permissions. Attach the IAM role to an Amazon EC2 instance.
  2. Configure the application to connect to Amazon SES by using STARTTLS. Obtain Amazon SES SMTP credentials. Use the credentials to authenticate with Amazon SES.
  3. Configure the application to use the SES API to send email messages. Create an IAM role that has ses:SendEmail and ses:SendRawEmail permissions. Use the IAM role as a service role for Amazon SES.
  4. Configure the application to use AWS SDKs to send email messages. Create an IAM user for Amazon SES. Generate API access keys. Use the access keys to authenticate with Amazon SES.

Answer(s): B

Explanation:

B) Configure the application to connect to Amazon SES by using STARTTLS. Obtain Amazon SES SMTP credentials. Use the credentials to authenticate with Amazon SES.

Amazon SES’s SMTP interface requires an encrypted connection and authentication with SES SMTP credentials; IAM roles and API access keys cannot be used to log in over SMTP. Because the application can use only SMTP, connecting on the STARTTLS port with the SES SMTP credentials is the most appropriate and secure option. This lets the application continue to send email messages through Amazon SES with minimal changes.
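A minimal sketch with Python’s standard smtplib, assuming a placeholder Region, addresses, and credentials (the SMTP credentials are generated for SES and are distinct from IAM access keys):

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["Subject"] = "Order confirmation"
    msg["From"] = "noreply@example.com"   # an address on the verified SES domain
    msg["To"] = "customer@example.org"
    msg.set_content("Thank you for your purchase.")

    # SES SMTP endpoint; port 587 upgrades the connection with STARTTLS.
    with smtplib.SMTP("email-smtp.us-east-1.amazonaws.com", 587) as smtp:
        smtp.starttls()
        smtp.login("SES_SMTP_USERNAME", "SES_SMTP_PASSWORD")
        smtp.send_message(msg)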



A company recently acquired several other companies. Each company has a separate AWS account with a different billing and reporting method. The acquiring company has consolidated all the accounts into one organization in AWS Organizations. However, the acquiring company has found it difficult to generate a cost report that contains meaningful groups for all the teams.

The acquiring company’s finance team needs a solution to report on costs for all the companies through a self-managed application.

Which solution will meet these requirements?

  1. Create an AWS Cost and Usage Report for the organization. Define tags and cost categories in the report. Create a table in Amazon Athena. Create an Amazon QuickSight dataset based on the Athena table. Share the dataset with the finance team.
  2. Create an AWS Cost and Usage Report for the organization. Define tags and cost categories in the report. Create a specialized template in AWS Cost Explorer that the finance department will use to build reports.
  3. Create an Amazon QuickSight dataset that receives spending information from the AWS Price List Query API. Share the dataset with the finance team.
  4. Use the AWS Price List Query API to collect account spending information. Create a specialized template in AWS Cost Explorer that the finance department will use to build reports.

Answer(s): A

Explanation:

A) Create an AWS Cost and Usage Report for the organization. Define tags and cost categories in the report. Create a table in Amazon Athena. Create an Amazon QuickSight dataset based on the Athena table. Share the dataset with the finance team.

This solution allows the finance team to generate meaningful cost reports across all accounts within the organization. By creating an AWS Cost and Usage Report (CUR) and defining tags and cost categories, the company can segment costs by teams or companies. The Athena table allows querying the detailed cost and usage data, and Amazon QuickSight provides visualization and reporting capabilities, which can be shared with the finance team. This solution meets the requirements for self-management and detailed reporting.
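For illustration, a query the finance application might run against the CUR data through Athena (the database, table, output bucket, and cost category column names are assumptions based on the standard CUR-to-Athena integration):

    import boto3

    athena = boto3.client("athena")

    query = """
    SELECT line_item_usage_account_id,
           cost_category_team,                     -- hypothetical cost category column
           SUM(line_item_unblended_cost) AS monthly_cost
    FROM cur_database.cur_table
    WHERE year = '2024' AND month = '06'
    GROUP BY 1, 2
    ORDER BY monthly_cost DESC
    """

    athena.start_query_execution(
        QueryString=query,
        QueryExecutionContext={"Database": "cur_database"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
    # The Athena table (or its query results) then backs the QuickSight dataset
    # that is shared with the finance team.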



A company runs an IoT platform on AWS. IoT sensors in various locations send data to the company’s Node.js API servers on Amazon EC2 instances running behind an Application Load Balancer. The data is stored in an Amazon RDS MySQL DB instance that uses a 4 TB General Purpose SSD volume.

The number of sensors the company has deployed in the field has increased over time, and is expected to grow significantly. The API servers are consistently overloaded and RDS metrics show high write latency.

Which of the following steps together will resolve the issues permanently and enable growth as new sensors are provisioned, while keeping this platform cost-efficient? (Choose two.)

  1. Resize the MySQL General Purpose SSD storage to 6 TB to improve the volume’s IOPS.
  2. Re-architect the database tier to use Amazon Aurora instead of an RDS MySQL DB instance and add read replicas.
  3. Leverage Amazon Kinesis Data Streams and AWS Lambda to ingest and process the raw data.
  4. Use AWS X-Ray to analyze and debug application issues and add more API servers to match the load.
  5. Re-architect the database tier to use Amazon DynamoDB instead of an RDS MySQL DB instance.

Answer(s): C,E

Explanation:

C) Leverage Amazon Kinesis Data Streams and AWS Lambda to ingest and process the raw data.
This step offloads the data ingestion from the API servers, allowing Kinesis Data Streams to handle the large influx of data from the IoT sensors. This will significantly reduce the load on the API servers and make the system more scalable as new sensors are deployed.

E) Re-architect the database tier to use Amazon DynamoDB instead of an RDS MySQL DB instance.
Switching to Amazon DynamoDB provides a scalable and high-performance database solution for handling large amounts of IoT data, especially for write-heavy workloads. DynamoDB is a better fit for handling IoT sensor data with minimal latency and automatic scaling.

These two steps together will resolve the performance bottlenecks and allow the system to grow efficiently as new sensors are added.
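A hypothetical Lambda handler for a Kinesis Data Streams trigger, writing decoded sensor records to DynamoDB (the table and field names are assumptions):

    import base64
    import json
    import boto3

    table = boto3.resource("dynamodb").Table("sensor-data")

    def handler(event, context):
        # Kinesis delivers record payloads base64-encoded in the Lambda event.
        with table.batch_writer() as batch:
            for record in event["Records"]:
                payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
                batch.put_item(Item={
                    "device_id": payload["device_id"],
                    "recorded_at": payload["timestamp"],
                    "reading": payload["reading"],
                })
        return {"processed": len(event["Records"])}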



A company is building an electronic document management system in which users upload their documents. The application stack is entirely serverless and runs on AWS in the eu-central-1 Region. The system includes a web application that uses an Amazon CloudFront distribution for delivery with Amazon S3 as the origin. The web application communicates with Amazon API Gateway Regional endpoints. The API Gateway APIs call AWS Lambda functions that store metadata in an Amazon Aurora Serverless database and put the documents into an S3 bucket.

The company is growing steadily and has completed a proof of concept with its largest customer. The company must improve latency outside of Europe.

Which combination of actions will meet these requirements? (Choose two.)

  1. Enable S3 Transfer Acceleration on the S3 bucket. Ensure that the web application uses the Transfer Acceleration signed URLs.
  2. Create an accelerator in AWS Global Accelerator. Attach the accelerator to the CloudFront distribution.
  3. Change the API Gateway Regional endpoints to edge-optimized endpoints.
  4. Provision the entire stack in two other locations that are spread across the world. Use global databases on the Aurora Serverless cluster.
  5. Add an Amazon RDS proxy between the Lambda functions and the Aurora Serverless database.

Answer(s): A,C

Explanation:

A) Enable S3 Transfer Acceleration on the S3 bucket. Ensure that the web application uses the Transfer Acceleration signed URLs.
Enabling S3 Transfer Acceleration will improve upload and download speeds by using Amazon CloudFront’s globally distributed edge locations, reducing latency for users outside of Europe.

C) Change the API Gateway Regional endpoints to edge-optimized endpoints.
Edge-optimized API Gateway endpoints use the Amazon CloudFront network to reduce latency for global users by routing requests to the nearest edge location, improving the API response times outside of Europe.

These solutions help address the latency issues for users outside Europe by improving data transfer speeds and API response times.
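A sketch of both changes with boto3; the bucket name, object key, and REST API ID are placeholders:

    import boto3
    from botocore.config import Config

    # 1. Enable Transfer Acceleration on the bucket and presign an upload URL
    #    that goes through the accelerate endpoint.
    s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
    s3.put_bucket_accelerate_configuration(
        Bucket="example-documents-bucket",
        AccelerateConfiguration={"Status": "Enabled"},
    )
    upload_url = s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": "example-documents-bucket", "Key": "uploads/doc-123.pdf"},
        ExpiresIn=3600,
    )

    # 2. Switch the API Gateway REST API from Regional to edge-optimized.
    apigateway = boto3.client("apigateway", region_name="eu-central-1")
    apigateway.update_rest_api(
        restApiId="abc123def4",
        patchOperations=[{
            "op": "replace",
            "path": "/endpointConfiguration/types/REGIONAL",
            "value": "EDGE",
        }],
    )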


