Amazon SAP-C02 Exam (page: 13)
Amazon AWS Certified Solutions Architect - Professional SAP-C02
Updated on: 09-Feb-2026

Viewing Page 13 of 68

A company has introduced a new policy that allows employees to work remotely from their homes if they connect by using a VPN. The company is hosting internal applications with VPCs in multiple AWS accounts. Currently, the applications are accessible from the company's on-premises office network through an AWS Site-to-Site VPN connection. The VPC in the company's main AWS account has peering connections established with VPCs in other AWS accounts.

A solutions architect must design a scalable AWS Client VPN solution for employees to use while they work from home.

What is the MOST cost-effective solution that meets these requirements?

  A. Create a Client VPN endpoint in each AWS account. Configure required routing that allows access to internal applications.
  B. Create a Client VPN endpoint in the main AWS account. Configure required routing that allows access to internal applications.
  C. Create a Client VPN endpoint in the main AWS account. Provision a transit gateway that is connected to each AWS account. Configure required routing that allows access to internal applications.
  D. Create a Client VPN endpoint in the main AWS account. Establish connectivity between the Client VPN endpoint and the AWS Site-to-Site VPN.

Answer(s): B

Explanation:

B) Create a Client VPN endpoint in the main AWS account. Configure required routing that allows access to internal applications.

Creating a Client VPN endpoint in the main AWS account is the most cost-effective solution because it centralizes the VPN management while allowing remote employees to access internal applications hosted across multiple VPCs. By configuring the necessary routing, employees can connect securely to the applications without needing multiple VPN endpoints in each account, reducing both complexity and cost. This setup leverages the existing infrastructure efficiently while ensuring scalability as more users connect remotely.
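The "required routing" in option B amounts to a route and an authorization rule on the Client VPN endpoint for each peered VPC's CIDR. As a rough sketch (the endpoint ID, subnet ID, and CIDR below are hypothetical placeholders, not values from the question), the boto3 parameters might look like this:

```python
# Sketch: parameters for routing Client VPN traffic to a peered VPC.
# All IDs and CIDRs are hypothetical placeholders.

def client_vpn_route_params(endpoint_id, peer_vpc_cidr, assoc_subnet_id):
    """Parameters for ec2.create_client_vpn_route: send traffic destined
    for a peered VPC out through the subnet associated with the endpoint."""
    return {
        "ClientVpnEndpointId": endpoint_id,
        "DestinationCidrBlock": peer_vpc_cidr,
        "TargetVpcSubnetId": assoc_subnet_id,
    }

def client_vpn_auth_params(endpoint_id, peer_vpc_cidr):
    """Parameters for ec2.authorize_client_vpn_ingress: allow all
    connected users to reach the peered VPC's CIDR range."""
    return {
        "ClientVpnEndpointId": endpoint_id,
        "TargetNetworkCidr": peer_vpc_cidr,
        "AuthorizeAllGroups": True,
    }

route = client_vpn_route_params("cvpn-endpoint-0example", "10.1.0.0/16", "subnet-0example")
auth = client_vpn_auth_params("cvpn-endpoint-0example", "10.1.0.0/16")
# In a real setup these dicts would be passed to boto3's EC2 client, e.g.:
# ec2.create_client_vpn_route(**route)
# ec2.authorize_client_vpn_ingress(**auth)
```

One route and one authorization rule per peered VPC keeps the endpoint in the main account as the single entry point for all remote users.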



A company is running an application in the AWS Cloud. Recent application metrics show inconsistent response times and a significant increase in error rates. Calls to third-party services are causing the delays. Currently, the application calls third-party services synchronously by directly invoking an AWS Lambda function.

A solutions architect needs to decouple the third-party service calls and ensure that all the calls are eventually completed.

Which solution will meet these requirements?

  A. Use an Amazon Simple Queue Service (Amazon SQS) queue to store events and invoke the Lambda function.
  B. Use an AWS Step Functions state machine to pass events to the Lambda function.
  C. Use an Amazon EventBridge rule to pass events to the Lambda function.
  D. Use an Amazon Simple Notification Service (Amazon SNS) topic to store events and invoke the Lambda function.

Answer(s): A

Explanation:

A) Use an Amazon Simple Queue Service (Amazon SQS) queue to store events and invoke the Lambda function.

Using Amazon SQS allows the application to decouple the calls to third-party services, which helps in managing inconsistent response times and reducing error rates. By placing the service calls in an SQS queue, the application can continue processing other tasks while the Lambda function retrieves and processes the messages asynchronously. This design ensures that all calls to third-party services are eventually completed, even if there are temporary issues, thus enhancing the overall reliability and performance of the application.
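The decoupling pattern in option A can be sketched as follows: the queue buffers the work, and the Lambda consumer raises on failure so SQS redelivers the message until the third-party call eventually succeeds. The `call_third_party` stub and the event shape here are illustrative, not from the question:

```python
import json

def call_third_party(payload):
    """Placeholder for the real third-party call; assumed to raise on error."""
    return {"status": "ok", "echo": payload}

def lambda_handler(event, context):
    """Lambda consumer for an SQS event source. If processing a record
    raises, SQS makes the message visible again and retries it, so every
    call is eventually completed (or lands in a dead-letter queue)."""
    for record in event["Records"]:
        payload = json.loads(record["body"])
        call_third_party(payload)  # raising here triggers an SQS retry
    return {"processed": len(event["Records"])}

# Local demonstration with a fake SQS event (no AWS calls involved):
fake_event = {"Records": [{"body": json.dumps({"order_id": 42})}]}
result = lambda_handler(fake_event, None)
```

Pairing the queue with a dead-letter queue and a sensible visibility timeout (longer than the third-party call's worst case) is what makes "eventually completed" hold in practice.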



A company is running applications on AWS in a multi-account environment. The company's sales team and marketing team use separate AWS accounts in AWS Organizations.

The sales team stores petabytes of data in an Amazon S3 bucket. The marketing team uses Amazon QuickSight for data visualizations. The marketing team needs access to data that the sales team stores in the S3 bucket. The company has encrypted the S3 bucket with an AWS Key Management Service (AWS KMS) key. The marketing team has already created the IAM service role for QuickSight to provide QuickSight access in the marketing AWS account. The company needs a solution that will provide secure access to the data in the S3 bucket across AWS accounts.

Which solution will meet these requirements with the LEAST operational overhead?

  A. Create a new S3 bucket in the marketing account. Create an S3 replication rule in the sales account to copy the objects to the new S3 bucket in the marketing account. Update the QuickSight permissions in the marketing account to grant access to the new S3 bucket.
  B. Create an SCP to grant access to the S3 bucket to the marketing account. Use AWS Resource Access Manager (AWS RAM) to share the KMS key from the sales account with the marketing account. Update the QuickSight permissions in the marketing account to grant access to the S3 bucket.
  C. Update the S3 bucket policy in the marketing account to grant access to the QuickSight role. Create a KMS grant for the encryption key that is used in the S3 bucket. Grant decrypt access to the QuickSight role. Update the QuickSight permissions in the marketing account to grant access to the S3 bucket.
  D. Create an IAM role in the sales account and grant access to the S3 bucket. From the marketing account, assume the IAM role in the sales account to access the S3 bucket. Update the QuickSight role to create a trust relationship with the new IAM role in the sales account.

Answer(s): D

Explanation:

D) Create an IAM role in the sales account and grant access to the S3 bucket. From the marketing account, assume the IAM role in the sales account to access the S3 bucket. Update the QuickSight role to create a trust relationship with the new IAM role in the sales account.

This solution allows the marketing team to access the S3 bucket in the sales account securely without duplicating data or creating additional complexities. By creating an IAM role in the sales account that has permissions to access the S3 bucket and allowing the marketing account to assume that role, the company maintains a single source of truth for the data and leverages IAM roles for secure cross-account access. Additionally, this method requires minimal operational overhead since it does not involve data replication or the need for complex configurations.
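The two policy documents behind option D can be sketched as plain JSON. The account IDs, bucket name, and role name below are hypothetical; note that because the bucket is KMS-encrypted, the sales-account role would also need `kms:Decrypt` on the key (omitted here for brevity):

```python
SALES_ACCOUNT = "111111111111"      # hypothetical sales account ID
MARKETING_ACCOUNT = "222222222222"  # hypothetical marketing account ID

# Trust policy on the role in the sales account: lets principals from the
# marketing account (e.g. the QuickSight service role) assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{MARKETING_ACCOUNT}:root"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy on the same role: read-only access to the bucket.
bucket_access_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::sales-data-bucket",      # hypothetical bucket
            "arn:aws:s3:::sales-data-bucket/*",
        ],
    }],
}

# From the marketing account, the role would then be assumed with STS, e.g.:
# sts.assume_role(
#     RoleArn=f"arn:aws:iam::{SALES_ACCOUNT}:role/SalesBucketAccess",
#     RoleSessionName="quicksight-read",
# )
```

No data is copied and no SCPs or RAM shares are involved, which is what keeps the operational overhead low.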



A company is planning to migrate its business-critical applications from an on-premises data center to AWS. The company has an on-premises installation of a Microsoft SQL Server Always On cluster. The company wants to migrate to an AWS managed database service. A solutions architect must design a heterogeneous database migration on AWS.

Which solution will meet these requirements?

  A. Migrate the SQL Server databases to Amazon RDS for MySQL by using backup and restore utilities.
  B. Use an AWS Snowball Edge Storage Optimized device to transfer data to Amazon S3. Set up Amazon RDS for MySQL. Use S3 integration with SQL Server features, such as BULK INSERT.
  C. Use the AWS Schema Conversion Tool to translate the database schema to Amazon RDS for MySQL. Then use AWS Database Migration Service (AWS DMS) to migrate the data from on-premises databases to Amazon RDS.
  D. Use AWS DataSync to migrate data over the network between on-premises storage and Amazon S3. Set up Amazon RDS for MySQL. Use S3 integration with SQL Server features, such as BULK INSERT.

Answer(s): C

Explanation:

C) Use the AWS Schema Conversion Tool to translate the database schema to Amazon RDS for MySQL. Then use AWS Database Migration Service (AWS DMS) to migrate the data from on-premises databases to Amazon RDS.

This solution effectively addresses the requirement for a heterogeneous database migration from Microsoft SQL Server to Amazon RDS for MySQL. The AWS Schema Conversion Tool helps to convert the database schema from SQL Server to the MySQL format, ensuring compatibility. AWS DMS then facilitates the data migration while minimizing downtime, as it can replicate ongoing changes from the source database to the target. This approach is suitable for business-critical applications, providing a seamless transition with the necessary tools to manage schema changes and data migration efficiently.
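As a rough sketch of the DMS half of option C, a replication task that performs a full load and then replicates ongoing changes could be defined with parameters like these (all ARNs and the schema name are hypothetical placeholders):

```python
import json

# Table mappings: replicate every table in a hypothetical "sales" schema.
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-sales-schema",
        "object-locator": {"schema-name": "sales", "table-name": "%"},
        "rule-action": "include",
    }]
}

# Parameters for dms.create_replication_task. "full-load-and-cdc" does an
# initial bulk copy, then applies ongoing changes to minimize cutover downtime.
task_params = {
    "ReplicationTaskIdentifier": "sqlserver-to-mysql",
    "SourceEndpointArn": "arn:aws:dms:us-east-1:111111111111:endpoint:SRC",
    "TargetEndpointArn": "arn:aws:dms:us-east-1:111111111111:endpoint:TGT",
    "ReplicationInstanceArn": "arn:aws:dms:us-east-1:111111111111:rep:INST",
    "MigrationType": "full-load-and-cdc",
    "TableMappings": json.dumps(table_mappings),
}
```

The schema conversion itself happens beforehand in the AWS Schema Conversion Tool, which flags SQL Server constructs (stored procedures, data types) that need manual rework for MySQL.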



A publishing company's design team updates the icons and other static assets that an ecommerce web application uses. The company serves the icons and assets from an Amazon S3 bucket that is hosted in the company's production account. The company also uses a development account that members of the design team can access.

After the design team tests the static assets in the development account, the design team needs to load the assets into the S3 bucket in the production account. A solutions architect must provide the design team with access to the production account without exposing other parts of the web application to the risk of unwanted changes.

Which combination of steps will meet these requirements? (Choose three.)

  A. In the production account, create a new IAM policy that allows read and write access to the S3 bucket.
  B. In the development account, create a new IAM policy that allows read and write access to the S3 bucket.
  C. In the production account, create a role. Attach the new policy to the role. Define the development account as a trusted entity.
  D. In the development account, create a role. Attach the new policy to the role. Define the production account as a trusted entity.
  E. In the development account, create a group that contains all the IAM users of the design team. Attach a different IAM policy to the group to allow the sts:AssumeRole action on the role in the production account.
  F. In the development account, create a group that contains all the IAM users of the design team. Attach a different IAM policy to the group to allow the sts:AssumeRole action on the role in the development account.

Answer(s): A,C,E

Explanation:

A) In the production account, create a new IAM policy that allows read and write access to the S3 bucket.
This policy ensures that the design team can interact with the specific S3 bucket where the static assets are stored, allowing them to upload new assets and manage existing ones without affecting other resources in the production account.

C) In the production account, create a role. Attach the new policy to the role. Define the development account as a trusted entity.
Creating a role in the production account with the necessary permissions and defining the development account as a trusted entity allows users from the development account to assume this role and gain temporary access to the S3 bucket.

E) In the development account, create a group that contains all the IAM users of the design team. Attach a different IAM policy to the group to allow the sts:AssumeRole action on the role in the production account.
This step allows the design team members in the development account to assume the role created in the production account, enabling them to perform actions on the S3 bucket without directly granting them access to the production account.

Together, these steps create a secure and controlled access mechanism for the design team, allowing them to manage static assets while minimizing risks to other parts of the production environment.
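The three policy documents behind steps A, C, and E can be sketched as follows. The account IDs, bucket name, and role name are hypothetical placeholders:

```python
PROD_ACCOUNT = "111111111111"   # hypothetical production account ID
DEV_ACCOUNT = "222222222222"    # hypothetical development account ID
BUCKET = "static-assets-bucket" # hypothetical static-assets bucket

# Step A: permissions policy in the production account, scoped to the bucket.
asset_bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
    }],
}

# Step C: trust policy on the production role, trusting the dev account.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{DEV_ACCOUNT}:root"},
        "Action": "sts:AssumeRole",
    }],
}

# Step E: policy for the design-team group in the development account,
# allowing only the assumption of that one production role.
group_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": f"arn:aws:iam::{PROD_ACCOUNT}:role/DesignAssetsRole",
    }],
}
```

Scoping the production role's permissions to the one bucket, and the group policy to the one role ARN, is what contains the blast radius: the design team can touch the assets bucket and nothing else in production.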



A company developed a pilot application by using AWS Elastic Beanstalk and Java. To save costs during development, the company's development team deployed the application into a single-instance environment. Recent tests indicate that the application consumes more CPU than expected. CPU utilization is regularly greater than 85%, which causes some performance bottlenecks.

A solutions architect must mitigate the performance issues before the company launches the application to production.

Which solution will meet these requirements with the LEAST operational overhead?

  A. Create a new Elastic Beanstalk application. Select a load-balanced environment type. Select all Availability Zones. Add a scale-out rule that will run if the maximum CPU utilization is over 85% for 5 minutes.
  B. Create a second Elastic Beanstalk environment. Apply the traffic-splitting deployment policy. Specify a percentage of incoming traffic to direct to the new environment if the average CPU utilization is over 85% for 5 minutes.
  C. Modify the existing environment’s capacity configuration to use a load-balanced environment type. Select all Availability Zones. Add a scale-out rule that will run if the average CPU utilization is over 85% for 5 minutes.
  D. Select the Rebuild environment action with the load balancing option. Select all Availability Zones. Add a scale-out rule that will run if the sum CPU utilization is over 85% for 5 minutes.

Answer(s): C

Explanation:

C) Modify the existing environment’s capacity configuration to use a load-balanced environment type. Select all Availability Zones. Add a scale-out rule that will run if the average CPU utilization is over 85% for 5 minutes.
This option allows the existing Elastic Beanstalk application to scale horizontally by adding more instances when CPU utilization exceeds 85%. It leverages Elastic Beanstalk's built-in capabilities to manage scaling and load balancing with minimal operational overhead. By modifying the current environment instead of creating a new one, the solution is efficient and straightforward, allowing the development team to focus on application performance without significant changes to their deployment setup.

This approach ensures that the application can handle increased load effectively while maintaining operational simplicity.
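Option C maps onto Elastic Beanstalk option settings: switch the environment type to `LoadBalanced` and configure the auto scaling trigger on average CPU. A sketch of settings that might be passed to `update_environment` (the environment name is a hypothetical placeholder):

```python
# Sketch of Elastic Beanstalk option settings for option C, using the
# aws:elasticbeanstalk:environment and aws:autoscaling:trigger namespaces.
# Thresholds mirror the question's 85% average CPU over 5 minutes.
option_settings = [
    {"Namespace": "aws:elasticbeanstalk:environment",
     "OptionName": "EnvironmentType", "Value": "LoadBalanced"},
    {"Namespace": "aws:autoscaling:trigger",
     "OptionName": "MeasureName", "Value": "CPUUtilization"},
    {"Namespace": "aws:autoscaling:trigger",
     "OptionName": "Statistic", "Value": "Average"},
    {"Namespace": "aws:autoscaling:trigger",
     "OptionName": "Unit", "Value": "Percent"},
    {"Namespace": "aws:autoscaling:trigger",
     "OptionName": "UpperThreshold", "Value": "85"},
    {"Namespace": "aws:autoscaling:trigger",
     "OptionName": "BreachDuration", "Value": "5"},
]
# These would be applied with, e.g.:
# eb.update_environment(EnvironmentName="pilot-env", OptionSettings=option_settings)
```

Because this modifies the existing environment in place, no new application, environment, or deployment pipeline has to be created, which is why it carries the least operational overhead.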



A finance company is running its business-critical application on current-generation Linux EC2 instances. The application includes a self-managed MySQL database that performs heavy I/O operations. The application handles a moderate amount of traffic during the month without issue. However, it slows down during the final three days of each month because of month-end reporting, even though the company uses Elastic Load Balancing and Auto Scaling within its infrastructure to meet the increased demand.

Which of the following actions would allow the database to handle the month-end load with the LEAST impact on performance?

  A. Pre-warming Elastic Load Balancers, using a bigger instance type, changing all Amazon EBS volumes to GP2 volumes.
  B. Performing a one-time migration of the database cluster to Amazon RDS, and creating several additional read replicas to handle the load during end of month.
  C. Using Amazon CloudWatch with AWS Lambda to change the type, size, or IOPS of Amazon EBS volumes in the cluster based on a specific CloudWatch metric.
  D. Replacing all existing Amazon EBS volumes with new PIOPS volumes that have the maximum available storage size and I/O per second by taking snapshots before the end of the month and reverting back afterwards.

Answer(s): B

Explanation:

B) Performing a one-time migration of the database cluster to Amazon RDS, and creating several additional read replicas to handle the load during end of month.
Migrating to Amazon RDS provides several advantages, including automated management tasks such as backups, scaling, and patching, which can significantly reduce the operational burden. By creating additional read replicas, the company can distribute the read workload during peak times like month-end reporting, thus improving performance and reducing the load on the primary database instance. This approach allows for better scalability and performance optimization with minimal manual intervention and operational overhead.
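Once the database is on RDS, adding read replicas for the month-end window is one API call per replica, and reporting queries are pointed at the replica endpoints instead of the primary. A sketch with hypothetical identifiers and instance class:

```python
# Sketch: parameters for rds.create_db_instance_read_replica.
# The source identifier, replica names, and instance class are placeholders.
def read_replica_params(source_id, replica_id, instance_class="db.r6g.large"):
    return {
        "DBInstanceIdentifier": replica_id,
        "SourceDBInstanceIdentifier": source_id,
        "DBInstanceClass": instance_class,
    }

# Three replicas for the month-end reporting window:
replicas = [read_replica_params("prod-mysql", f"prod-mysql-replica-{i}")
            for i in range(1, 4)]
# Each dict would be passed to boto3, e.g.:
# rds.create_db_instance_read_replica(**params)
```

Replicas can also be deleted after the reporting window closes, so the extra capacity is only paid for during the three heavy days.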



A company runs a Java application that has complex dependencies on VMs that are in the company's data center. The application is stable, but the company wants to modernize the technology stack. The company wants to migrate the application to AWS and minimize the administrative overhead to maintain the servers.

Which solution will meet these requirements with the LEAST code changes?

  A. Migrate the application to Amazon Elastic Container Service (Amazon ECS) on AWS Fargate by using AWS App2Container. Store container images in Amazon Elastic Container Registry (Amazon ECR). Grant the ECS task execution role permission to access the ECR image repository. Configure Amazon ECS to use an Application Load Balancer (ALB). Use the ALB to interact with the application.
  B. Migrate the application code to a container that runs in AWS Lambda. Build an Amazon API Gateway REST API with Lambda integration. Use API Gateway to interact with the application.
  C. Migrate the application to Amazon Elastic Kubernetes Service (Amazon EKS) on EKS managed node groups by using AWS App2Container. Store container images in Amazon Elastic Container Registry (Amazon ECR). Give the EKS nodes permission to access the ECR image repository. Use Amazon API Gateway to interact with the application.
  D. Migrate the application code to a container that runs in AWS Lambda. Configure Lambda to use an Application Load Balancer (ALB). Use the ALB to interact with the application.

Answer(s): A

Explanation:

A) Migrate the application to Amazon Elastic Container Service (Amazon ECS) on AWS Fargate by using AWS App2Container. Store container images in Amazon Elastic Container Registry (Amazon ECR). Grant the ECS task execution role permission to access the ECR image repository. Configure Amazon ECS to use an Application Load Balancer (ALB). Use the ALB to interact with the application.
This solution allows for minimal code changes while modernizing the application. Using AWS App2Container facilitates the migration of the existing application to containerized environments with little effort. By leveraging Amazon ECS on AWS Fargate, the company eliminates the need to manage underlying server infrastructure, thus minimizing administrative overhead. Additionally, using an ALB provides a scalable way to handle traffic to the application, ensuring high availability.
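The ECR permission mentioned in option A attaches to the ECS task execution role. A minimal inline policy along the lines of the managed AmazonECSTaskExecutionRolePolicy might look like this (the repository ARN and region are hypothetical placeholders):

```python
# Sketch: permissions the ECS task execution role needs so Fargate can
# pull the App2Container image from ECR and write container logs.
execution_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ecr:GetAuthorizationToken",  # token call is not resource-scoped
            "Resource": "*",
        },
        {
            "Effect": "Allow",
            "Action": ["ecr:BatchGetImage", "ecr:GetDownloadUrlForLayer"],
            "Resource": "arn:aws:ecr:us-east-1:111111111111:repository/java-app",
        },
        {
            "Effect": "Allow",
            "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
            "Resource": "*",
        },
    ],
}
```

In practice most teams simply attach the AWS-managed AmazonECSTaskExecutionRolePolicy; the inline version above just makes explicit which permissions the Fargate launch path actually uses.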


