Amazon SAP-C02 Exam (page: 8)
Amazon AWS Certified Solutions Architect - Professional SAP-C02
Updated on: 09-Feb-2026

Viewing Page 8 of 68

A company has a monolithic application that is critical to the company’s business. The company hosts the application on an Amazon EC2 instance that runs Amazon Linux 2. The company’s application team receives a directive from the legal department to back up the data from the instance’s encrypted Amazon Elastic Block Store (Amazon EBS) volume to an Amazon S3 bucket. The application team does not have the administrative SSH key pair for the instance. The application must continue to serve the users.

Which solution will meet these requirements?

  1. Attach a role to the instance with permission to write to Amazon S3. Use the AWS Systems Manager Session Manager option to gain access to the instance and run commands to copy data into Amazon S3.
  2. Create an image of the instance with the reboot option turned on. Launch a new EC2 instance from the image. Attach a role to the new instance with permission to write to Amazon S3. Run a command to copy data into Amazon S3.
  3. Take a snapshot of the EBS volume by using Amazon Data Lifecycle Manager (Amazon DLM). Copy the data to Amazon S3.
  4. Create an image of the instance. Launch a new EC2 instance from the image. Attach a role to the new instance with permission to write to Amazon S3. Run a command to copy data into Amazon S3.

Answer(s): A

Explanation:

A) Attach a role to the instance with permission to write to Amazon S3. Use the AWS Systems Manager Session Manager option to gain access to the instance and run commands to copy data into Amazon S3 is the correct answer.

This solution allows you to securely access the EC2 instance without needing the SSH key pair by using AWS Systems Manager Session Manager. Once access is gained through Session Manager, the necessary commands can be executed to copy data from the EBS volume to the Amazon S3 bucket. Attaching an IAM role with S3 write permissions to the instance ensures that the instance has the necessary permissions to upload the data to S3.

This approach does not interrupt the running application, ensuring that the application continues to serve users while meeting the backup requirement from the legal department.
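As a minimal sketch of that flow, the commands could look like the following. The instance ID, bucket name, and data path are placeholders, not values from the question, and the `aws` calls are shown commented because they require live AWS resources:

```shell
#!/bin/sh
# Hypothetical values, not from the question:
INSTANCE_ID="i-0123456789abcdef0"
BUCKET="legal-backup-bucket"
BACKUP_PREFIX=$(date +%F)   # one dated folder per backup run

# From an admin workstation, open a shell on the instance without
# any SSH key pair (requires the SSM agent and an instance role):
#   aws ssm start-session --target "$INSTANCE_ID"

# Inside the session, copy the data using the instance role's S3
# write permission. The encrypted EBS volume is already mounted and
# transparently decrypted from the running instance's point of view:
#   aws s3 sync /var/app/data "s3://$BUCKET/$BACKUP_PREFIX/"
```

Because the copy runs on the live instance, the application keeps serving users throughout the backup.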



A solutions architect needs to copy data from an Amazon S3 bucket in an AWS account to a new S3 bucket in a new AWS account. The solutions architect must implement a solution that uses the AWS CLI.

Which combination of steps will successfully copy the data? (Choose three.)

  1. Create a bucket policy to allow the source bucket to list its contents and to put objects and set object ACLs in the destination bucket. Attach the bucket policy to the destination bucket.
  2. Create a bucket policy to allow a user in the destination account to list the source bucket’s contents and read the source bucket’s objects. Attach the bucket policy to the source bucket.
  3. Create an IAM policy in the source account. Configure the policy to allow a user in the source account to list contents and get objects in the source bucket, and to list contents, put objects, and set object ACLs in the destination bucket. Attach the policy to the user.
  4. Create an IAM policy in the destination account. Configure the policy to allow a user in the destination account to list contents and get objects in the source bucket, and to list contents, put objects, and set object ACLs in the destination bucket. Attach the policy to the user.
  5. Run the aws s3 sync command as a user in the source account. Specify the source and destination buckets to copy the data.
  6. Run the aws s3 sync command as a user in the destination account. Specify the source and destination buckets to copy the data.

Answer(s): B,D,F

Explanation:

The correct answers are:

B) Create a bucket policy to allow a user in the destination account to list the source bucket’s contents and read the source bucket’s objects. Attach the bucket policy to the source bucket.
This step ensures that the destination account has the necessary permissions to access and read the objects from the source bucket.

D) Create an IAM policy in the destination account. Configure the policy to allow a user in the destination account to list contents and get objects in the source bucket, and to list contents, put objects, and set object ACLs in the destination bucket. Attach the policy to the user.
This step grants the user in the destination account permissions to interact with both the source and destination buckets.

F) Run the aws s3 sync command as a user in the destination account. Specify the source and destination buckets to copy the data.
Running the aws s3 sync command from the destination account allows the user to copy the data from the source S3 bucket to the new S3 bucket, ensuring that the permissions set in the previous steps are applied correctly.

This combination of actions ensures that the data is copied from the source bucket in one AWS account to the destination bucket in another AWS account using the AWS CLI, with the appropriate permissions for accessing and managing the data across accounts.
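The moving parts of these steps can be sketched as follows. The account ID, user name, and bucket names are placeholders, and the `aws` calls are shown commented because they need live accounts:

```shell
#!/bin/sh
# Cross-account S3 copy sketch. The account ID 222222222222
# (destination), user "copy-user", and both bucket names are
# placeholders, not values from the question.

# Bucket policy for the SOURCE bucket: let the destination-account
# user list the bucket and read its objects (step B).
cat > source-bucket-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::222222222222:user/copy-user"},
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::source-bucket"
    },
    {
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::222222222222:user/copy-user"},
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::source-bucket/*"
    }
  ]
}
EOF

# Attach the policy with credentials from the source account:
#   aws s3api put-bucket-policy --bucket source-bucket \
#       --policy file://source-bucket-policy.json

# Then, as copy-user in the DESTINATION account (steps D and F):
#   aws s3 sync s3://source-bucket s3://destination-bucket
```

Running the sync as a user in the destination account also means the destination account owns the copied objects, which is why the command is run from that side.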



A company built an application based on AWS Lambda deployed in an AWS CloudFormation stack. The last production release of the web application introduced an issue that resulted in an outage lasting several minutes. A solutions architect must adjust the deployment process to support a canary release.

Which solution will meet these requirements?

  1. Create an alias for every new deployed version of the Lambda function. Use the AWS CLI update-alias command with the routing-config parameter to distribute the load.
  2. Deploy the application into a new CloudFormation stack. Use an Amazon Route 53 weighted routing policy to distribute the load.
  3. Create a version for every new deployed Lambda function. Use the AWS CLI update-function-configuration command with the routing-config parameter to distribute the load.
  4. Configure AWS CodeDeploy and use CodeDeployDefault.OneAtATime in the Deployment configuration to distribute the load.

Answer(s): A

Explanation:

A) Create an alias for every new deployed version of the Lambda function. Use the AWS CLI update-alias command with the routing-config parameter to distribute the load is the correct answer.

This approach allows you to implement a canary release using AWS Lambda's versioning and aliases. By creating an alias for the new version and using the update-alias command with the routing-config parameter, you can gradually shift traffic to the new version of the Lambda function. This allows you to test the new version with a small percentage of users before fully rolling it out, which is a key aspect of canary releases.

This method ensures that you can detect and mitigate any issues with new Lambda function versions before they affect all users, minimizing the risk of outages or issues during deployment.
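A sketch of that canary shift, assuming a hypothetical function `my-function` with an alias named `live` (the `aws` calls are shown commented because they need a live function):

```shell
#!/bin/sh
# Routing config: send 10% of traffic to newly published version 2
# while version 1 keeps the remaining 90%.
ROUTING='{"AdditionalVersionWeights": {"2": 0.1}}'

# Publish the new code as an immutable version:
#   aws lambda publish-version --function-name my-function

# Shift 10% of invocations on the alias to the new version:
#   aws lambda update-alias --function-name my-function \
#       --name live --routing-config "$ROUTING"

# Once the canary looks healthy, promote version 2 fully and clear
# the extra weights:
#   aws lambda update-alias --function-name my-function \
#       --name live --function-version 2 --routing-config '{}'
```

Callers keep invoking the `live` alias ARN throughout, so the traffic split is invisible to clients.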



A finance company hosts a data lake in Amazon S3. The company receives financial data records over SFTP each night from several third parties. The company runs its own SFTP server on an Amazon EC2 instance in a public subnet of a VPC. After the files are uploaded, they are moved to the data lake by a cron job that runs on the same instance. The SFTP server is reachable at the DNS name sftp.example.com through Amazon Route 53.

What should a solutions architect do to improve the reliability and scalability of the SFTP solution?

  1. Move the EC2 instance into an Auto Scaling group. Place the EC2 instance behind an Application Load Balancer (ALB). Update the DNS record sftp.example.com in Route 53 to point to the ALB.
  2. Migrate the SFTP server to AWS Transfer for SFTP. Update the DNS record sftp.example.com in Route 53 to point to the server endpoint hostname.
  3. Migrate the SFTP server to a file gateway in AWS Storage Gateway. Update the DNS record sftp.example.com in Route 53 to point to the file gateway endpoint.
  4. Place the EC2 instance behind a Network Load Balancer (NLB). Update the DNS record sftp.example.com in Route 53 to point to the NLB.

Answer(s): B

Explanation:

B) Migrate the SFTP server to AWS Transfer for SFTP. Update the DNS record sftp.example.com in Route 53 to point to the server endpoint hostname is the correct answer.

AWS Transfer for SFTP is a fully managed service that scales automatically and is highly reliable compared to managing an SFTP server on an EC2 instance. This migration would offload the operational burden of managing the SFTP server while providing enhanced scalability, availability, and built-in integration with Amazon S3 for direct data transfer to the data lake. By updating the DNS record in Route 53 to point to the AWS Transfer SFTP endpoint, the company ensures a smooth transition without requiring changes from the third parties uploading the data.

This solution improves both reliability and scalability without the need for manual instance management or custom scaling configurations.
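The migration could be sketched as follows; the server ID, hosted zone ID, Region, and endpoint hostname are placeholders, and the `aws` calls are shown commented because they need live resources:

```shell
#!/bin/sh
# Create a managed SFTP endpoint backed by Amazon S3:
#   aws transfer create-server --protocols SFTP \
#       --identity-provider-type SERVICE_MANAGED

# Point sftp.example.com at the new endpoint with a CNAME record.
# The endpoint hostname below follows the Transfer Family format
# but is a made-up example.
cat > change-batch.json <<'EOF'
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "sftp.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [
          {"Value": "s-1234567890abcdef0.server.transfer.us-east-1.amazonaws.com"}
        ]
      }
    }
  ]
}
EOF

#   aws route53 change-resource-record-sets \
#       --hosted-zone-id Z0EXAMPLE --change-batch file://change-batch.json
```

Since only the DNS record changes, the third parties continue uploading to sftp.example.com with no changes on their side.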



A company wants to migrate an application to Amazon EC2 from VMware Infrastructure that runs in an on-premises data center. A solutions architect must preserve the software and configuration settings during the migration.

What should the solutions architect do to meet these requirements?

  1. Configure the AWS DataSync agent to start replicating the data store to Amazon FSx for Windows File Server. Use the SMB share to host the VMware data store. Use VM Import/Export to move the VMs to Amazon EC2.
  2. Use the VMware vSphere client to export the application as an image in Open Virtualization Format (OVF) format. Create an Amazon S3 bucket to store the image in the destination AWS Region. Create and apply an IAM role for VM Import. Use the AWS CLI to run the EC2 import command.
  3. Configure AWS Storage Gateway for files service to export a Common Internet File System (CIFS) share. Create a backup copy to the shared folder. Sign in to the AWS Management Console and create an AMI from the backup copy. Launch an EC2 instance that is based on the AMI.
  4. Create a managed-instance activation for a hybrid environment in AWS Systems Manager. Download and install Systems Manager Agent on the on-premises VM. Register the VM with Systems Manager to be a managed instance. Use AWS Backup to create a snapshot of the VM and create an AMI. Launch an EC2 instance that is based on the AMI.

Answer(s): B

Explanation:

B) Use the VMware vSphere client to export the application as an image in Open Virtualization Format (OVF) format. Create an Amazon S3 bucket to store the image in the destination AWS Region. Create and apply an IAM role for VM Import. Use the AWS CLI to run the EC2 import command is the correct answer.

This solution preserves the application, software, and configuration settings by exporting the VMware VM as an Open Virtualization Format (OVF) image, which can be imported directly into AWS using the VM Import/Export service. By storing the image in an S3 bucket, you can transfer it to AWS and use the EC2 import command to convert it into an Amazon Machine Image (AMI). This approach ensures that the application runs as it did in the on-premises VMware environment without requiring reinstallation or reconfiguration.

This method is specifically designed for VMware-to-EC2 migrations and meets the requirement to preserve all software and configuration settings during the migration.
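The import step could be sketched as follows, assuming the exported image has been packaged as an OVA (an OVF plus its disks) and uploaded to a placeholder bucket; the `aws` calls are shown commented because they need live resources and the `vmimport` service role:

```shell
#!/bin/sh
# Describe the uploaded image for VM Import/Export. Bucket and key
# names are placeholders, not values from the question.
cat > disk-containers.json <<'EOF'
[
  {
    "Description": "App server disk",
    "Format": "ova",
    "UserBucket": {
      "S3Bucket": "vm-import-bucket",
      "S3Key": "app-server.ova"
    }
  }
]
EOF

# Start the import; the task produces an AMI that preserves the
# installed software and configuration:
#   aws ec2 import-image --description "App server" \
#       --disk-containers file://disk-containers.json

# Poll for completion, then launch an EC2 instance from the AMI:
#   aws ec2 describe-import-image-tasks
```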



A video processing company has an application that downloads images from an Amazon S3 bucket, processes the images, stores a transformed image in a second S3 bucket, and updates metadata about the image in an Amazon DynamoDB table. The application is written in Node.js and runs by using an AWS Lambda function. The Lambda function is invoked when a new image is uploaded to Amazon S3.

The application ran without incident for a while. However, the size of the images has grown significantly. The Lambda function is now failing frequently with timeout errors. The function timeout is set to its maximum value. A solutions architect needs to refactor the application’s architecture to prevent invocation failures. The company does not want to manage the underlying infrastructure.

Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)

  1. Modify the application deployment by building a Docker image that contains the application code. Publish the image to Amazon Elastic Container Registry (Amazon ECR).
  2. Create a new Amazon Elastic Container Service (Amazon ECS) task definition with a compatibility type of AWS Fargate. Configure the task definition to use the new image in Amazon Elastic Container Registry (Amazon ECR). Adjust the Lambda function to invoke an ECS task by using the ECS task definition when a new file arrives in Amazon S3.
  3. Create an AWS Step Functions state machine with a Parallel state to invoke the Lambda function. Increase the provisioned concurrency of the Lambda function.
  4. Create a new Amazon Elastic Container Service (Amazon ECS) task definition with a compatibility type of Amazon EC2. Configure the task definition to use the new image in Amazon Elastic Container Registry (Amazon ECR). Adjust the Lambda function to invoke an ECS task by using the ECS task definition when a new file arrives in Amazon S3.
  5. Modify the application to store images on Amazon Elastic File System (Amazon EFS) and to store metadata on an Amazon RDS DB instance. Adjust the Lambda function to mount the EFS file share.

Answer(s): A,B

Explanation:

The correct answers are:

A) Modify the application deployment by building a Docker image that contains the application code. Publish the image to Amazon Elastic Container Registry (Amazon ECR).
By containerizing the application, you can overcome the Lambda function's limitations related to execution time and resource constraints. The Docker image can handle larger image processing workloads, and storing the image in Amazon ECR allows it to be easily deployed in other services, like ECS.

B) Create a new Amazon Elastic Container Service (Amazon ECS) task definition with a compatibility type of AWS Fargate. Configure the task definition to use the new image in Amazon Elastic Container Registry (Amazon ECR). Adjust the Lambda function to invoke an ECS task by using the ECS task definition when a new file arrives in Amazon S3.
This solution offloads the image processing task to AWS Fargate, which provides a serverless container service, ensuring that the company does not need to manage the infrastructure. Fargate can handle larger processing tasks and can scale based on demand. The Lambda function would trigger an ECS task to process the images, which solves the timeout issue.

These steps provide a scalable, serverless solution without the need to manage underlying infrastructure while handling the increased image sizes.
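As a sketch, the S3-triggered Lambda function would simply start a Fargate task; the equivalent CLI call is shown below. Cluster, task definition, container name, subnet, and security group IDs are placeholders, and the `aws` calls are commented because they need live resources:

```shell
#!/bin/sh
# Network settings for the Fargate task (awsvpc mode is required):
cat > network-config.json <<'EOF'
{
  "awsvpcConfiguration": {
    "subnets": ["subnet-0123456789abcdef0"],
    "securityGroups": ["sg-0123456789abcdef0"],
    "assignPublicIp": "DISABLED"
  }
}
EOF

# Run one task per uploaded image, passing the S3 key to the
# container as an environment variable:
#   aws ecs run-task --cluster image-processing \
#       --launch-type FARGATE \
#       --task-definition image-processor:1 \
#       --network-configuration file://network-config.json \
#       --overrides '{"containerOverrides":[{"name":"processor","environment":[{"name":"S3_KEY","value":"uploads/big-image.tif"}]}]}'
```

The Lambda function stays a thin trigger well inside its timeout, while the Fargate task, which has no 15-minute execution limit, does the heavy image processing.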



A company has an organization in AWS Organizations. The company is using AWS Control Tower to deploy a landing zone for the organization. The company wants to implement governance and policy enforcement. The company must implement a policy that will detect Amazon RDS DB instances that are not encrypted at rest in the company’s production OU.

Which solution will meet this requirement?

  1. Turn on mandatory guardrails in AWS Control Tower. Apply the mandatory guardrails to the production OU.
  2. Enable the appropriate guardrail from the list of strongly recommended guardrails in AWS Control Tower. Apply the guardrail to the production OU.
  3. Use AWS Config to create a new mandatory guardrail. Apply the rule to all accounts in the production OU.
  4. Create a custom SCP in AWS Control Tower. Apply the SCP to the production OU.

Answer(s): B

Explanation:

B) Enable the appropriate guardrail from the list of strongly recommended guardrails in AWS Control Tower. Apply the guardrail to the production OU is the correct answer.

AWS Control Tower's strongly recommended guardrails include a detective control that checks whether storage encryption is enabled for Amazon RDS DB instances. By enabling this guardrail and applying it to the production OU, the company can detect any unencrypted DB instances across the OU's accounts.

This option leverages AWS Control Tower's built-in governance features without needing to create custom rules or service control policies, ensuring policy enforcement with minimal operational overhead.
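The guardrail itself is enabled from the Control Tower console, but under the hood this kind of detective control maps to an AWS Config managed rule. As a sketch of the equivalent standalone rule (the rule name is a placeholder; the `aws` call is commented because it needs a live account with AWS Config enabled):

```shell
#!/bin/sh
# AWS Config managed rule that flags RDS DB instances whose storage
# is not encrypted (managed rule identifier RDS_STORAGE_ENCRYPTED):
cat > rds-encryption-rule.json <<'EOF'
{
  "ConfigRuleName": "rds-storage-encrypted",
  "Source": {
    "Owner": "AWS",
    "SourceIdentifier": "RDS_STORAGE_ENCRYPTED"
  },
  "Scope": {
    "ComplianceResourceTypes": ["AWS::RDS::DBInstance"]
  }
}
EOF

#   aws configservice put-config-rule \
#       --config-rule file://rds-encryption-rule.json
```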



A startup company hosts a fleet of Amazon EC2 instances in private subnets using the latest Amazon Linux 2 AMI. The company’s engineers rely heavily on SSH access to the instances for troubleshooting.

The company’s existing architecture includes the following:

-A VPC with private and public subnets, and a NAT gateway.
-Site-to-Site VPN for connectivity with the on-premises environment.
-EC2 security groups with direct SSH access from the on-premises environment.

The company needs to increase security controls around SSH access and provide auditing of commands run by the engineers.

Which strategy should a solutions architect use?

  1. Install and configure EC2 Instance Connect on the fleet of EC2 instances. Remove all security group rules attached to EC2 instances that allow inbound TCP on port 22. Advise the engineers to remotely access the instances by using the EC2 Instance Connect CLI.
  2. Update the EC2 security groups to only allow inbound TCP on port 22 to the IP addresses of the engineers' devices. Install the Amazon CloudWatch agent on all EC2 instances and send operating system audit logs to CloudWatch Logs.
  3. Update the EC2 security groups to only allow inbound TCP on port 22 to the IP addresses of the engineers' devices. Enable AWS Config for EC2 security group resource changes. Enable AWS Firewall Manager and apply a security group policy that automatically remediates changes to rules.
  4. Create an IAM role with the AmazonSSMManagedInstanceCore managed policy attached. Attach the IAM role to all the EC2 instances. Remove all security group rules attached to the EC2 instances that allow inbound TCP on port 22. Have the engineers install the AWS Systems Manager Session Manager plugin for their devices and remotely access the instances by using the start-session API call from Systems Manager.

Answer(s): D

Explanation:

D) Create an IAM role with the AmazonSSMManagedInstanceCore managed policy attached. Attach the IAM role to all the EC2 instances. Remove all security group rules attached to the EC2 instances that allow inbound TCP on port 22. Have the engineers install the AWS Systems Manager Session Manager plugin for their devices and remotely access the instances by using the start-session API call from Systems Manager is the correct solution.

AWS Systems Manager Session Manager provides a secure and auditable way to manage SSH access to EC2 instances without needing to open port 22 for SSH access, which improves security. By attaching the AmazonSSMManagedInstanceCore managed policy to the instances, you enable Systems Manager features, including Session Manager.

This solution has the following advantages:

-No need for SSH or inbound port 22 access, improving the security posture of the environment.
-Full auditing of session activity through AWS CloudTrail and Amazon CloudWatch Logs.
-Engineers can access the instances securely via the start-session API call without needing SSH keys, which adds an extra layer of control.

This approach meets the requirements to enhance security, eliminate open SSH ports, and provide auditable logs of commands executed on the instances.
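The setup could be sketched as follows. The role, instance profile, instance, and security group identifiers are placeholders, and the `aws` calls are shown commented because each step needs live AWS resources:

```shell
#!/bin/sh
# Hypothetical names and IDs, not values from the question:
ROLE="ssm-access"
PROFILE="ssm-access"
INSTANCE_ID="i-0123456789abcdef0"
SG_ID="sg-0123456789abcdef0"
POLICY_ARN="arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"

# 1. Grant the role the Systems Manager permissions and wrap it in
#    an instance profile:
#   aws iam attach-role-policy --role-name "$ROLE" --policy-arn "$POLICY_ARN"
#   aws iam add-role-to-instance-profile \
#       --instance-profile-name "$PROFILE" --role-name "$ROLE"

# 2. Attach the profile to each instance in the fleet:
#   aws ec2 associate-iam-instance-profile --instance-id "$INSTANCE_ID" \
#       --iam-instance-profile Name="$PROFILE"

# 3. Close inbound port 22 (the on-premises CIDR is a placeholder)
#    and connect through Session Manager instead:
#   aws ec2 revoke-security-group-ingress --group-id "$SG_ID" \
#       --protocol tcp --port 22 --cidr 10.0.0.0/8
#   aws ssm start-session --target "$INSTANCE_ID"
```

The instances in private subnets reach the Systems Manager endpoints through the existing NAT gateway, so no inbound connectivity is needed at all.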


