Amazon SAP-C02 Exam (page: 2)
Amazon AWS Certified Solutions Architect - Professional SAP-C02
Updated on: 07-Feb-2026


A retail company is operating its ecommerce application on AWS. The application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The company uses an Amazon RDS DB instance as the database backend. Amazon CloudFront is configured with one origin that points to the ALB. Static content is cached. Amazon Route 53 is used to host all public zones.

After an update of the application, the ALB occasionally returns a 502 status code (Bad Gateway) error. The root cause is malformed HTTP headers that are returned to the ALB. The webpage returns successfully when a solutions architect reloads the webpage immediately after the error occurs.

While the company is working on the problem, the solutions architect needs to provide a custom error page instead of the standard ALB error page to visitors.

Which combination of steps will meet this requirement with the LEAST amount of operational overhead? (Choose two.)

  A. Create an Amazon S3 bucket. Configure the S3 bucket to host a static webpage. Upload the custom error pages to Amazon S3.
  B. Create an Amazon CloudWatch alarm to invoke an AWS Lambda function if the ALB health check response Target.FailedHealthChecks is greater than 0. Configure the Lambda function to modify the forwarding rule at the ALB to point to a publicly accessible web server.
  C. Modify the existing Amazon Route 53 records by adding health checks. Configure a fallback target if the health check fails. Modify DNS records to point to a publicly accessible webpage.
  D. Create an Amazon CloudWatch alarm to invoke an AWS Lambda function if the ALB health check response Elb.InternalError is greater than 0. Configure the Lambda function to modify the forwarding rule at the ALB to point to a publicly accessible web server.
  E. Add a custom error response by configuring a CloudFront custom error page. Modify DNS records to point to a publicly accessible web page.

Answer(s): A,E

Explanation:

To provide a custom error page with minimal operational overhead:

A: Create an Amazon S3 bucket. Configure the S3 bucket to host a static webpage. Upload the custom error pages to Amazon S3.

This allows for a simple and scalable way to serve custom error pages.
E: Add a custom error response by configuring a CloudFront custom error page. Modify DNS records to point to a publicly accessible web page.

This ensures users see a custom error page through CloudFront, reducing backend load and providing a seamless experience.
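For illustration, a minimal boto3 sketch of these two steps is shown below. The bucket name, distribution ID, and error-page key are placeholder assumptions, not values from the scenario, and the extra S3 origin/cache behavior that would serve the error page is omitted for brevity.

```python
# Sketch: host a custom error page in S3 and map ALB 502 errors to it in CloudFront.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_website(
    Bucket="example-error-pages",                      # placeholder bucket name
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
s3.put_object(
    Bucket="example-error-pages",
    Key="error.html",
    Body=b"<html><body>We'll be right back.</body></html>",
    ContentType="text/html",
)

cf = boto3.client("cloudfront")
dist_id = "EDFDVBD6EXAMPLE"                             # placeholder distribution ID
current = cf.get_distribution_config(Id=dist_id)
dist_config = current["DistributionConfig"]
# Adding the S3 website endpoint as a second origin with a behavior for /error.html
# is omitted here; the custom error response below maps 502 to that cached page.
dist_config["CustomErrorResponses"] = {
    "Quantity": 1,
    "Items": [{
        "ErrorCode": 502,
        "ResponsePagePath": "/error.html",
        "ResponseCode": "502",
        "ErrorCachingMinTTL": 30,
    }],
}
cf.update_distribution(Id=dist_id, DistributionConfig=dist_config,
                       IfMatch=current["ETag"])
```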



A company has many AWS accounts and uses AWS Organizations to manage all of them. A solutions architect must implement a solution that the company can use to share a common network across multiple accounts.

The company’s infrastructure team has a dedicated infrastructure account that has a VPC. The infrastructure team must use this account to manage the network. Individual accounts cannot have the ability to manage their own networks. However, individual accounts must be able to create AWS resources within subnets.

Which combination of actions should the solutions architect perform to meet these requirements? (Choose two.)

  A. Create a transit gateway in the infrastructure account.
  B. Enable resource sharing from the AWS Organizations management account.
  C. Create VPCs in each AWS account within the organization in AWS Organizations. Configure the VPCs to share the same CIDR range and subnets as the VPC in the infrastructure account. Peer the VPCs in each individual account with the VPC in the infrastructure account.
  D. Create a resource share in AWS Resource Access Manager in the infrastructure account. Select the specific AWS Organizations OU that will use the shared network. Select each subnet to associate with the resource share.
  E. Create a resource share in AWS Resource Access Manager in the infrastructure account. Select the specific AWS Organizations OU that will use the shared network. Select each prefix list to associate with the resource share.

Answer(s): B,D

Explanation:

To share a common network across multiple AWS accounts, the solutions architect should leverage AWS Resource Access Manager (RAM) and AWS Organizations for efficient and secure resource sharing.

B: Enable resource sharing from the AWS Organizations management account: This action allows the sharing of resources, such as VPCs and subnets, across accounts within the organization. AWS Organizations helps streamline governance and resource management across multiple AWS accounts.

D: Create a resource share in AWS Resource Access Manager in the infrastructure account: By using AWS RAM, the infrastructure team can share specific resources like subnets with other accounts, ensuring that individual accounts can create resources in shared subnets without managing their own network infrastructure. RAM allows secure and managed sharing of resources within the organization's structure.

These steps ensure that the network is centrally managed by the infrastructure team while still allowing other accounts to deploy resources within the shared network environment.
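A minimal boto3 sketch of options B and D follows. The subnet ARN, OU ARN, and share name are illustrative assumptions.

```python
# Sketch: enable organization-wide sharing, then share subnets from the infrastructure account.
import boto3

# Option B - run once with credentials for the AWS Organizations management account.
ram_mgmt = boto3.client("ram")
ram_mgmt.enable_sharing_with_aws_organization()

# Option D - run with credentials for the infrastructure account that owns the VPC.
ram_infra = boto3.client("ram")
ram_infra.create_resource_share(
    name="shared-network",
    resourceArns=[
        "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0abc1234example",  # placeholder
    ],
    principals=[
        # The OU that should consume the shared subnets (placeholder ARN).
        "arn:aws:organizations::111122223333:ou/o-exampleorgid/ou-root-exampleouid",
    ],
    allowExternalPrincipals=False,   # keep sharing inside the organization
)
```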



A company wants to use a third-party software-as-a-service (SaaS) application. The third-party SaaS application is consumed through several API calls. The third-party SaaS application also runs on AWS inside a VPC.

The company will consume the third-party SaaS application from inside a VPC. The company has internal security policies that mandate the use of private connectivity that does not traverse the internet. No resources that run in the company VPC are allowed to be accessed from outside the company’s VPC. All permissions must conform to the principles of least privilege.

Which solution meets these requirements?

  A. Create an AWS PrivateLink interface VPC endpoint. Connect this endpoint to the endpoint service that the third-party SaaS application provides. Create a security group to limit the access to the endpoint. Associate the security group with the endpoint.
  B. Create an AWS Site-to-Site VPN connection between the third-party SaaS application and the company VPC. Configure network ACLs to limit access across the VPN tunnels.
  C. Create a VPC peering connection between the third-party SaaS application and the company VPC. Update route tables by adding the needed routes for the peering connection.
  D. Create an AWS PrivateLink endpoint service. Ask the third-party SaaS provider to create an interface VPC endpoint for this endpoint service. Grant permissions for the endpoint service to the specific account of the third-party SaaS provider.

Answer(s): A

Explanation:

A: Create an AWS PrivateLink interface VPC endpoint is the correct solution because AWS PrivateLink allows secure, private connectivity between VPCs and third-party SaaS applications without exposing traffic to the internet. The traffic between the company’s VPC and the third-party SaaS application stays within the AWS network, adhering to the company's internal security policies that mandate private connectivity.

By creating an interface VPC endpoint, the company ensures that the third-party SaaS API calls are handled privately and securely. The use of security groups on the endpoint further restricts access and conforms to the principle of least privilege, limiting communication only to the necessary resources.

This approach eliminates the need for VPNs or VPC peering, which could either expose data to unnecessary risks or lead to more complex routing configurations.
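As a rough boto3 sketch of option A, the following creates a restrictive security group and an interface endpoint. The VPC ID, subnet ID, CIDR range, and the provider's endpoint service name are placeholder assumptions.

```python
# Sketch: interface VPC endpoint to the SaaS provider's endpoint service, locked down by a security group.
import boto3

ec2 = boto3.client("ec2")

# Security group allowing only the application subnet to reach the endpoint over HTTPS.
sg = ec2.create_security_group(
    GroupName="saas-endpoint-sg",
    Description="Least-privilege access to the SaaS interface endpoint",
    VpcId="vpc-0abc1234example",                         # placeholder VPC ID
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "10.0.0.0/24"}],         # application subnet CIDR (assumption)
    }],
)

# Interface endpoint that connects to the provider's endpoint service.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0abc1234example",
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0",  # provider's service name (placeholder)
    SubnetIds=["subnet-0abc1234example"],
    SecurityGroupIds=[sg["GroupId"]],
    PrivateDnsEnabled=True,   # only works if the provider enabled private DNS for the service
)
```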



A company needs to implement a patching process for its servers. The on-premises servers and Amazon EC2 instances use a variety of tools to perform patching. Management requires a single report showing the patch status of all the servers and instances.

Which set of actions should a solutions architect take to meet these requirements?

  A. Use AWS Systems Manager to manage patches on the on-premises servers and EC2 instances. Use Systems Manager to generate patch compliance reports.
  B. Use AWS OpsWorks to manage patches on the on-premises servers and EC2 instances. Use Amazon QuickSight integration with OpsWorks to generate patch compliance reports.
  C. Use an Amazon EventBridge rule to apply patches by scheduling an AWS Systems Manager patch remediation job. Use Amazon Inspector to generate patch compliance reports.
  D. Use AWS OpsWorks to manage patches on the on-premises servers and EC2 instances. Use AWS X-Ray to post the patch status to AWS Systems Manager OpsCenter to generate patch compliance reports.

Answer(s): A

Explanation:

A: Use AWS Systems Manager to manage patches on the on-premises servers and EC2 instances is the correct solution because AWS Systems Manager provides a unified approach for patch management across both on-premises servers and EC2 instances. Systems Manager's Patch Manager component can automate the process of patching and ensure compliance with patching policies.

Additionally, Systems Manager offers the capability to generate detailed patch compliance reports, which meet the requirement for a single report showing the patch status of all servers and instances, both on-premises and in the cloud. This approach simplifies the patching process and provides centralized visibility into patch compliance across environments.
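A small boto3 sketch of pulling a consolidated patch-compliance view follows; it assumes all servers (EC2 and on-premises) are already registered as Systems Manager managed nodes.

```python
# Sketch: list patch compliance for every managed node in one report.
import boto3

ssm = boto3.client("ssm")

paginator = ssm.get_paginator("list_resource_compliance_summaries")
pages = paginator.paginate(
    Filters=[{"Key": "ComplianceType", "Values": ["Patch"], "Type": "EQUAL"}]
)
for page in pages:
    for item in page["ResourceComplianceSummaryItems"]:
        print(item["ResourceId"], item["Status"],
              item.get("CompliantSummary", {}).get("CompliantCount", 0),
              item.get("NonCompliantSummary", {}).get("NonCompliantCount", 0))
```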



A company is running an application on several Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. The load on the application varies throughout the day, and EC2 instances are scaled in and out on a regular basis. Log files from the EC2 instances are copied to a central Amazon S3 bucket every 15 minutes. The security team discovers that log files are missing from some of the terminated EC2 instances.

Which set of actions will ensure that log files are copied to the central S3 bucket from the terminated EC2 instances?

  A. Create a script to copy log files to Amazon S3, and store the script in a file on the EC2 instance. Create an Auto Scaling lifecycle hook and an Amazon EventBridge rule to detect lifecycle events from the Auto Scaling group. Invoke an AWS Lambda function on the autoscaling:EC2_INSTANCE_TERMINATING transition to send ABANDON to the Auto Scaling group to prevent termination, run the script to copy the log files, and terminate the instance using the AWS SDK.
  B. Create an AWS Systems Manager document with a script to copy log files to Amazon S3. Create an Auto Scaling lifecycle hook and an Amazon EventBridge rule to detect lifecycle events from the Auto Scaling group. Invoke an AWS Lambda function on the autoscaling:EC2_INSTANCE_TERMINATING transition to call the AWS Systems Manager API SendCommand operation to run the document to copy the log files and send CONTINUE to the Auto Scaling group to terminate the instance.
  C. Change the log delivery rate to every 5 minutes. Create a script to copy log files to Amazon S3, and add the script to EC2 instance user data. Create an Amazon EventBridge rule to detect EC2 instance termination. Invoke an AWS Lambda function from the EventBridge rule that uses the AWS CLI to run the user-data script to copy the log files and terminate the instance.
  D. Create an AWS Systems Manager document with a script to copy log files to Amazon S3. Create an Auto Scaling lifecycle hook that publishes a message to an Amazon Simple Notification Service (Amazon SNS) topic. From the SNS notification, call the AWS Systems Manager API SendCommand operation to run the document to copy the log files and send ABANDON to the Auto Scaling group to terminate the instance.

Answer(s): B

Explanation:

B: Create an AWS Systems Manager document with a script to copy log files to Amazon S3... is the correct approach because it leverages AWS Systems Manager to manage the task of copying log files when an instance is being terminated. By using an Auto Scaling lifecycle hook and AWS Lambda, the system can detect when an instance is about to be terminated, execute the necessary commands via AWS Systems Manager SendCommand, and ensure that the logs are copied to the S3 bucket before the instance is shut down. This approach helps ensure log file integrity without manual intervention and respects the automated scaling and termination process of the Auto Scaling group.
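A hypothetical Lambda handler for this flow is sketched below. The SSM document name is an assumption; the event fields follow the EC2 Auto Scaling "Instance-terminate Lifecycle Action" EventBridge event shape, and a production version would poll the command status (or extend the hook heartbeat) before completing the lifecycle action.

```python
# Sketch: on the terminate lifecycle hook, run the log-copy document, then let termination continue.
import boto3

ssm = boto3.client("ssm")
autoscaling = boto3.client("autoscaling")

def handler(event, context):
    detail = event["detail"]
    instance_id = detail["EC2InstanceId"]

    # Run the log-copy script on the terminating instance via Run Command.
    ssm.send_command(
        InstanceIds=[instance_id],
        DocumentName="CopyLogsToS3",          # custom SSM document name (assumption)
    )

    # A real implementation would wait for the command to finish (get_command_invocation)
    # before completing the hook; shown here immediately for brevity.
    autoscaling.complete_lifecycle_action(
        LifecycleHookName=detail["LifecycleHookName"],
        AutoScalingGroupName=detail["AutoScalingGroupName"],
        LifecycleActionToken=detail["LifecycleActionToken"],
        LifecycleActionResult="CONTINUE",
        InstanceId=instance_id,
    )
```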



A company is using multiple AWS accounts. The DNS records are stored in a private hosted zone for Amazon Route 53 in Account A. The company’s applications and databases are running in Account B.

A solutions architect will deploy a two-tier application in a new VPC. To simplify the configuration, the db.example.com CNAME record set for the Amazon RDS endpoint was created in a private hosted zone for Amazon Route 53.

During deployment, the application failed to start. Troubleshooting revealed that db.example.com is not resolvable on the Amazon EC2 instance. The solutions architect confirmed that the record set was created correctly in Route 53.

Which combination of steps should the solutions architect take to resolve this issue? (Choose two.)

  A. Deploy the database on a separate EC2 instance in the new VPC. Create a record set for the instance’s private IP in the private hosted zone.
  B. Use SSH to connect to the application tier EC2 instance. Add an RDS endpoint IP address to the /etc/resolv.conf file.
  C. Create an authorization to associate the private hosted zone in Account A with the new VPC in Account B.
  D. Create a private hosted zone for the example.com domain in Account B. Configure Route 53 replication between AWS accounts.
  E. Associate a new VPC in Account B with a hosted zone in Account A. Delete the association authorization in Account A.

Answer(s): C,E

Explanation:

The correct steps to resolve the DNS resolution issue are:

C: Create an authorization to associate the private hosted zone in Account A with the new VPC in Account B: Private hosted zones need explicit authorization to associate with a VPC in a different account. This ensures that resources in Account B (like the application) can access DNS records from the private hosted zone in Account A.

E: Associate a new VPC in Account B with a hosted zone in Account A: Once the authorization is created, the next step is to associate the VPC in Account B with the private hosted zone in Account A. This allows the EC2 instances in Account B to resolve DNS queries to db.example.com, ensuring proper communication between the application and the RDS instance.
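A minimal boto3 sketch of steps C and E is shown below. The hosted zone ID, VPC ID, and Region are illustrative assumptions.

```python
# Sketch: authorize and perform a cross-account private hosted zone association.
import boto3

zone_id = "Z1D633PJN98FT9"                                          # zone in Account A (placeholder)
vpc = {"VPCRegion": "us-east-1", "VPCId": "vpc-0abc1234example"}    # new VPC in Account B (placeholder)

# Step C - run with Account A credentials: authorize the association.
r53_account_a = boto3.client("route53")
r53_account_a.create_vpc_association_authorization(HostedZoneId=zone_id, VPC=vpc)

# Step E - run with Account B credentials: associate the VPC with the hosted zone.
r53_account_b = boto3.client("route53")
r53_account_b.associate_vpc_with_hosted_zone(HostedZoneId=zone_id, VPC=vpc)

# Clean up the pending authorization afterwards (Account A credentials again).
r53_account_a.delete_vpc_association_authorization(HostedZoneId=zone_id, VPC=vpc)
```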



A company used Amazon EC2 instances to deploy a web fleet to host a blog site. The EC2 instances are behind an Application Load Balancer (ALB) and are configured in an Auto Scaling group. The web application stores all blog content on an Amazon EFS volume.

The company recently added a feature for bloggers to add video to their posts, attracting 10 times the previous user traffic. At peak times of day, users report buffering and timeout issues while attempting to reach the site or watch videos.

Which is the MOST cost-efficient and scalable deployment that will resolve the issues for users?

  A. Reconfigure Amazon EFS to enable maximum I/O.
  B. Update the blog site to use instance store volumes for storage. Copy the site contents to the volumes at launch and to Amazon S3 at shutdown.
  C. Configure an Amazon CloudFront distribution. Point the distribution to an S3 bucket, and migrate the videos from EFS to Amazon S3.
  D. Set up an Amazon CloudFront distribution for all site contents, and point the distribution at the ALB.

Answer(s): C

Explanation:

C: Configure an Amazon CloudFront distribution. Point the distribution to an S3 bucket, and migrate the videos from EFS to Amazon S3 is the correct answer because Amazon CloudFront is a content delivery network (CDN) that can cache static and dynamic content closer to the users, significantly reducing latency and improving performance for video streaming. Migrating the video content from Amazon EFS to Amazon S3 provides cost-effective storage for large objects like videos, while CloudFront ensures fast and efficient delivery. This solution also scales automatically with increased traffic, making it the most cost-efficient and scalable option.
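The migration step could be done with a short script like the sketch below, run on an instance that has the EFS file system mounted; the mount path and bucket name are assumptions, and the CloudFront distribution would then use that bucket as its origin.

```python
# Sketch: copy video files from the mounted EFS path into the S3 origin bucket.
import os
import boto3

s3 = boto3.client("s3")
efs_video_dir = "/mnt/efs/videos"        # EFS mount point on a web instance (assumption)
bucket = "example-blog-videos"           # placeholder origin bucket

for root, _dirs, files in os.walk(efs_video_dir):
    for name in files:
        path = os.path.join(root, name)
        key = os.path.relpath(path, efs_video_dir)
        s3.upload_file(path, bucket, key)   # boto3 handles multipart uploads for large files
        print(f"uploaded {key}")
```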



A company with global offices has a single 1 Gbps AWS Direct Connect connection to a single AWS Region. The company’s on-premises network uses the connection to communicate with the company’s resources in the AWS Cloud. The connection has a single private virtual interface that connects to a single VPC.

A solutions architect must implement a solution that adds a redundant Direct Connect connection in the same Region. The solution also must provide connectivity to other Regions through the same pair of Direct Connect connections as the company expands into other Regions.

Which solution meets these requirements?

  A. Provision a Direct Connect gateway. Delete the existing private virtual interface from the existing connection. Create the second Direct Connect connection. Create a new private virtual interface on each connection, and connect both private virtual interfaces to the Direct Connect gateway. Connect the Direct Connect gateway to the single VPC.
  B. Keep the existing private virtual interface. Create the second Direct Connect connection. Create a new private virtual interface on the new connection, and connect the new private virtual interface to the single VPC.
  C. Keep the existing private virtual interface. Create the second Direct Connect connection. Create a new public virtual interface on the new connection, and connect the new public virtual interface to the single VPC.
  D. Provision a transit gateway. Delete the existing private virtual interface from the existing connection. Create the second Direct Connect connection. Create a new private virtual interface on each connection, and connect both private virtual interfaces to the transit gateway. Associate the transit gateway with the single VPC.

Answer(s): A

Explanation:

A: Provision a Direct Connect gateway. Delete the existing private virtual interface from the existing connection. Create the second Direct Connect connection. Create a new private virtual interface on each connection, and connect both private virtual interfaces to the Direct Connect gateway. Connect the Direct Connect gateway to the single VPC is the correct solution because it provides redundancy and future-proofing for connectivity to other AWS Regions.

Using a Direct Connect gateway enables access to multiple VPCs across different AWS Regions using the same Direct Connect connections. Deleting the existing private virtual interface and creating new private virtual interfaces on both connections ensures that the setup is aligned with the Direct Connect gateway, providing resilience and scalability as the company expands into other Regions.
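A rough boto3 sketch of this setup is shown below. The connection IDs, VLANs, ASNs, and the virtual private gateway ID are illustrative assumptions.

```python
# Sketch: Direct Connect gateway with one private VIF per connection, associated with the VPC's VGW.
import boto3

dx = boto3.client("directconnect")

# Direct Connect gateway that both private virtual interfaces will attach to.
gw = dx.create_direct_connect_gateway(
    directConnectGatewayName="corp-dx-gateway",
    amazonSideAsn=64512,
)
gw_id = gw["directConnectGateway"]["directConnectGatewayId"]

# One private virtual interface per Direct Connect connection (placeholder IDs and VLANs).
for conn_id, vlan in [("dxcon-primary111", 101), ("dxcon-secondary222", 102)]:
    dx.create_private_virtual_interface(
        connectionId=conn_id,
        newPrivateVirtualInterface={
            "virtualInterfaceName": f"vif-{vlan}",
            "vlan": vlan,
            "asn": 65000,                            # on-premises BGP ASN (assumption)
            "directConnectGatewayId": gw_id,
        },
    )

# Associate the gateway with the VPC's virtual private gateway (placeholder ID).
dx.create_direct_connect_gateway_association(
    directConnectGatewayId=gw_id,
    gatewayId="vgw-0abc1234example",
)
```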


