Amazon AWS Certified CloudOps Engineer - Associate SOA-C03 Exam Questions in PDF


A company runs a website on Amazon EC2 instances. Users can upload images to an Amazon S3 bucket and publish the images to the website. The company wants to deploy a serverless image-processing application that uses an AWS Lambda function to resize the uploaded images.

The company's development team has created the Lambda function. A CloudOps engineer must implement a solution to invoke the Lambda function when users upload new images to the S3 bucket.

Which solution will meet this requirement?

  1. Configure an Amazon Simple Notification Service (Amazon SNS) topic to invoke the Lambda function when a user uploads a new image to the S3 bucket.
  2. Configure an Amazon CloudWatch alarm to invoke the Lambda function when a user uploads a new image to the S3 bucket.
  3. Configure S3 Event Notifications to invoke the Lambda function when a user uploads a new image to the S3 bucket.
  4. Configure an Amazon Simple Queue Service (Amazon SQS) queue to invoke the Lambda function when a user uploads a new image to the S3 bucket.

Answer(s): C

Explanation:

Comprehensive and Detailed Explanation From Exact Extract of AWS CloudOps Documents:

Use Amazon S3 Event Notifications with AWS Lambda to trigger image processing on object creation. S3 natively supports invoking Lambda for events such as s3:ObjectCreated:*, providing a serverless, low-latency pipeline without managing additional services. AWS operational guidance states that "Amazon S3 can directly invoke a Lambda function in response to object-created events," allowing you to pass event metadata (bucket/key) to the function for resizing and writing results back to S3. This approach minimizes operational overhead, scales automatically with upload volume, and integrates with standard retry semantics. SNS or SQS can be added for fan-out or buffering patterns, but they are not required when the requirement is simply "invoke the Lambda function on upload." CloudWatch alarms do not detect individual S3 object uploads and cannot directly satisfy per-object triggers. Therefore, configuring S3 Lambda event notifications meets the requirement most directly and aligns with CloudOps best practices for event-driven, serverless automation.
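
As an illustration, a minimal boto3 sketch of this wiring might look as follows; the bucket name, function ARN, and statement ID are hypothetical placeholders, not values from the question:

```python
import boto3

lambda_client = boto3.client("lambda")
s3 = boto3.client("s3")

BUCKET = "image-uploads-bucket"  # hypothetical bucket name
FUNCTION_ARN = "arn:aws:lambda:us-east-1:123456789012:function:resize-images"  # hypothetical

# S3 must be allowed to invoke the function before the notification is created.
lambda_client.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId="AllowS3Invoke",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn=f"arn:aws:s3:::{BUCKET}",
)

# Invoke the function for every object-created event in the bucket.
s3.put_bucket_notification_configuration(
    Bucket=BUCKET,
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": FUNCTION_ARN,
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)
```

The `add_permission` call comes first because S3 validates that it can invoke the function and rejects the notification configuration otherwise.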


Reference:

· Using AWS Lambda with Amazon S3 (Lambda Developer Guide)

· Amazon S3 Event Notifications (S3 User Guide)

· AWS Well-Architected – Serverless Applications (Operational Excellence)



A company hosts a production MySQL database on an Amazon Aurora single-node DB cluster. The database is queried heavily for reporting purposes. The DB cluster is experiencing periods of performance degradation because of high CPU utilization and maximum connections errors. A CloudOps engineer needs to improve the stability of the database.

Which solution will meet these requirements?

  1. Create an Aurora Replica node. Create an Auto Scaling policy to scale replicas based on CPU utilization. Ensure that all reporting requests use the read-only connection string.
  2. Create a second Aurora MySQL single-node DB cluster in a second Availability Zone. Ensure that all reporting requests use the connection string for this additional node.
  3. Create an AWS Lambda function that caches reporting requests. Ensure that all reporting requests call the Lambda function.
  4. Create a multi-node Amazon ElastiCache cluster. Ensure that all reporting requests use the ElastiCache cluster. Use the database if the data is not in the cache.

Answer(s): A

Explanation:

Comprehensive and Detailed Explanation From Exact Extract of AWS CloudOps Documents:

Amazon Aurora supports up to 15 Aurora Replicas that share the same storage volume and provide read scaling and improved availability. Official guidance states that replicas "offload read traffic from the writer" and that you should direct read-only workloads to the reader endpoint, reducing CPU pressure and connection counts on the primary. Aurora also supports Replica Auto Scaling through Application Auto Scaling policies using metrics such as CPU utilization or connections to add or remove replicas automatically. This design addresses both high CPU and maximum connections by moving reporting traffic to read replicas while keeping a single write primary for OLTP. Option B creates a separate cluster with independent storage, increasing operational overhead and data synchronization complexity. Options C and D introduce application-layer caching changes that may not guarantee data freshness or relieve the write node directly. Therefore, adding read replicas and routing reporting to the reader endpoint, with auto scaling based on load, is the least intrusive, CloudOps-aligned way to stabilize performance.
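
For illustration, a sketch of Aurora replica auto scaling with boto3; the cluster identifier and target value are hypothetical:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")
RESOURCE_ID = "cluster:my-aurora-cluster"  # hypothetical cluster identifier

# Register the cluster's replica count as a scalable target (1-15 replicas).
autoscaling.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId=RESOURCE_ID,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=15,
)

# Target-tracking policy: add or remove replicas to hold average reader CPU near 60%.
autoscaling.put_scaling_policy(
    PolicyName="aurora-replica-cpu-tracking",
    ServiceNamespace="rds",
    ResourceId=RESOURCE_ID,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 300,
    },
)
```

Reporting clients would then connect to the cluster's read-only (reader) endpoint so that connections spread across whatever replicas exist at the time.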


Reference:

· Amazon Aurora – Replicas and Reader Endpoint (Aurora User Guide)

· Aurora Replica Auto Scaling (Aurora & Application Auto Scaling Guides)

· AWS Well-Architected Framework – Reliability & Performance Efficiency



A CloudOps engineer configures an application to run on Amazon EC2 instances behind an Application Load Balancer (ALB) in an Auto Scaling group that uses a simple scaling policy with the default settings. The Auto Scaling group is configured to use the RequestCountPerTarget metric for scaling. The CloudOps engineer notices that the RequestCountPerTarget metric exceeded the specified limit twice in 180 seconds.

How will the number of EC2 instances in this Auto Scaling group be affected in this scenario?

  1. The Auto Scaling group will launch an additional EC2 instance every time the RequestCountPerTarget metric exceeds the predefined limit.
  2. The Auto Scaling group will launch one EC2 instance and will wait for the default cooldown period before launching another instance.
  3. The Auto Scaling group will send an alert to the ALB to rebalance the traffic and not add new EC2 instances until the load is normalized.
  4. The Auto Scaling group will try to distribute the traffic among all EC2 instances before launching another instance.

Answer(s): B

Explanation:

Comprehensive and Detailed Explanation From Exact Extract of AWS CloudOps Documents:

With simple scaling policies, an Auto Scaling group performs one scaling activity when the alarm condition is met, then observes a default cooldown period (300 seconds) before another scaling activity of the same type can begin. CloudOps guidance explains that cooldown prevents rapid successive scale-outs by allowing time for the newly launched instance(s) to register with the load balancer and impact the metric. Even if the alarm breaches multiple times during the cooldown window, the group waits until the cooldown completes before evaluating and acting again. In this case, although RequestCountPerTarget exceeded the threshold twice within 180 seconds, the group will launch a single instance and then wait for cooldown before any additional scale-out can occur. Options A, C, and D do not reflect the behavior of simple scaling with cooldowns; A describes step/target-tracking-like behavior, and C/D are not Auto Scaling mechanics.
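
A minimal boto3 sketch of such a simple scaling policy, with a hypothetical group name and the default cooldown made explicit:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Simple scaling: one scaling activity per alarm breach, then a 300-second
# cooldown before the next scale-out of the same type can start.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",          # hypothetical group name
    PolicyName="scale-out-on-request-count",
    PolicyType="SimpleScaling",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,                     # launch one instance per activity
    Cooldown=300,                            # the default cooldown, stated explicitly
)
```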


Reference:

· Amazon EC2 Auto Scaling – Simple Scaling Policies and Cooldown (User Guide)

· Elastic Load Balancing Metrics – ALB RequestCountPerTarget (CloudWatch Metrics)

· AWS Well-Architected Framework – Performance Efficiency & Operational Excellence



A company uses Amazon ElastiCache (Redis OSS) to cache application data. A CloudOps engineer must implement a solution to increase the resilience of the cache. The solution also must minimize the recovery time objective (RTO).

Which solution will meet these requirements?

  1. Replace ElastiCache (Redis OSS) with ElastiCache (Memcached).
  2. Create an Amazon EventBridge rule to initiate a backup every hour. Restore the backup when necessary.
  3. Create a read replica in a second Availability Zone. Enable Multi-AZ for the ElastiCache (Redis OSS) replication group.
  4. Enable automatic backups. Restore the backups when necessary.
Answer(s): C

Explanation:

Comprehensive and Detailed Explanation From Exact Extract of AWS CloudOps Documents:

For high availability and fast failover, ElastiCache for Redis supports replication groups with Multi-AZ and automatic failover. CloudOps guidance states that a primary node can be paired with one or more replicas across multiple Availability Zones; if the primary fails, Redis automatically promotes a replica to primary in seconds, thereby minimizing RTO. This architecture maintains in-memory data continuity without waiting for backup restore operations. Backups (Options B and D) provide durability but require restore and re-warm procedures that increase RTO and may impact application latency. Switching engines (Option A) to Memcached does not provide Redis replication/failover semantics and would not inherently improve resilience for this use case. Therefore, creating a read replica in a different AZ and enabling Multi-AZ with automatic failover is the prescribed CloudOps pattern to increase resilience and achieve the lowest practical RTO for Redis caches.
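
A boto3 sketch of this pattern; the replication group ID, node type, and Availability Zones are hypothetical placeholders:

```python
import boto3

elasticache = boto3.client("elasticache")

# Primary plus one replica across two AZs, with automatic failover enabled.
elasticache.create_replication_group(
    ReplicationGroupId="app-cache",                     # hypothetical ID
    ReplicationGroupDescription="Resilient Redis cache",
    Engine="redis",
    CacheNodeType="cache.r6g.large",                    # hypothetical node type
    NumCacheClusters=2,                                 # 1 primary + 1 replica
    PreferredCacheClusterAZs=["us-east-1a", "us-east-1b"],
    AutomaticFailoverEnabled=True,
    MultiAZEnabled=True,
)
```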


Reference:

· AWS Certified CloudOps Engineer – Associate (SOA-C03) Exam Guide – Reliability and Business Continuity

· Amazon ElastiCache for Redis – Replication Groups, Multi-AZ, and Automatic Failover

· AWS Well-Architected Framework – Reliability Pillar



An AWS CloudFormation template creates an Amazon RDS instance. This template is used to build up development environments as needed and then delete the stack when the environment is no longer required. The RDS-persisted data must be retained for further use, even after the CloudFormation stack is deleted.

How can this be achieved in a reliable and efficient way?

  1. Write a script to continue backing up the RDS instance every five minutes.
  2. Create an AWS Lambda function to take a snapshot of the RDS instance, and manually invoke the function before deleting the stack.
  3. Use the Snapshot Deletion Policy in the CloudFormation template definition of the RDS instance.
  4. Create a new CloudFormation template to perform backups of the RDS instance, and run this template before deleting the stack.

Answer(s): C

Explanation:

Comprehensive and Detailed Explanation From Exact Extract of AWS CloudOps Documents:

AWS CloudFormation supports the DeletionPolicy attribute to control what happens to a resource when a stack is deleted. For Amazon RDS DB instances, setting DeletionPolicy: Snapshot instructs CloudFormation to retain a final DB snapshot automatically at stack deletion. CloudOps best practice recommends using this native mechanism for data retention and auditability, avoiding manual scripts or out-of-band processes. Options A, B, and D introduce operational overhead and potential human error. With DeletionPolicy set to Snapshot, the environment can be repeatedly created and torn down while preserving data states for later restoration with minimal manual steps. This aligns with IaC principles (declarative, repeatable, and reliable) and supports efficient lifecycle management of ephemeral development stacks.
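
A minimal CloudFormation template fragment illustrating the attribute; the resource name and property values are placeholders, not from the question:

```yaml
Resources:
  DevDatabase:
    Type: AWS::RDS::DBInstance
    DeletionPolicy: Snapshot   # CloudFormation takes a final snapshot when the stack is deleted
    Properties:
      Engine: mysql
      DBInstanceClass: db.t3.micro
      AllocatedStorage: "20"
      MasterUsername: admin
      MasterUserPassword: "{{resolve:ssm-secure:/dev/db/password:1}}"  # hypothetical parameter path
```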


Reference:

· AWS Certified CloudOps Engineer – Associate (SOA-C03) Exam Guide – Deployment, Provisioning and Automation

· AWS CloudFormation User Guide – DeletionPolicy Attribute (Snapshot for RDS)

· AWS Well-Architected Framework – Operational Excellence Pillar



A company has a VPC that contains a public subnet and a private subnet. The company deploys an Amazon EC2 instance that uses an Amazon Linux Amazon Machine Image (AMI) and has the AWS Systems Manager Agent (SSM Agent) installed in the private subnet. The EC2 instance is in a security group that allows only outbound traffic.

A CloudOps engineer needs to give a group of privileged administrators the ability to connect to the instance through SSH without exposing the instance to the internet.

Which solution will meet this requirement?

  1. Create an EC2 Instance Connect endpoint in the private subnet. Update the security group to allow inbound SSH traffic. Create an IAM group for privileged administrators. Assign the PowerUserAccess managed policy to the IAM group.
  2. Create a Systems Manager endpoint in the private subnet. Update the security group to allow SSH traffic from the private network where the Systems Manager endpoint is connected. Create an IAM group for privileged administrators. Assign the PowerUserAccess managed policy to the IAM group.
  3. Create an EC2 Instance Connect endpoint in the public subnet. Update the security group to allow SSH traffic from the private network. Create an IAM group for privileged administrators. Assign the PowerUserAccess managed policy to the IAM group.
  4. Create a Systems Manager endpoint in the public subnet. Create an IAM role that has the AmazonSSMManagedInstanceCore permission for the EC2 instance. Create an IAM group for privileged administrators. Assign the AmazonEC2ReadOnlyAccess IAM policy to the IAM group.

Answer(s): A

Explanation:

Comprehensive and Detailed Explanation From Exact Extract of AWS CloudOps Documents:

EC2 Instance Connect Endpoint (EIC Endpoint) enables SSH to instances in private subnets without public IPs and without needing to traverse the public internet. CloudOps guidance explains that you deploy the endpoint in the same VPC/subnet as the targets, then allow inbound SSH on the instance security group from the endpoint's security group. Access is governed by IAM: administrators must have Instance Connect permissions; while the example uses a broad policy, the key mechanism is EIC in the private subnet plus SG rules scoped to the endpoint. Systems Manager Session Manager can provide shell access without SSH, but the requirement explicitly states "connect through SSH," making EIC the purpose-built solution. Options B and D misuse Systems Manager for SSH and propose unnecessary SG changes or incorrect endpoint placement; Option C places the endpoint in a public subnet, which is not required for private SSH access. Therefore, creating an EC2 Instance Connect endpoint in the private subnet and updating SGs accordingly meets the requirement while keeping the instance non-internet-exposed.
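
For illustration, a boto3 sketch of the setup; the subnet and security group IDs are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

# Create the EC2 Instance Connect Endpoint in the instance's private subnet.
endpoint = ec2.create_instance_connect_endpoint(
    SubnetId="subnet-0abc1234def567890",        # hypothetical private subnet
    SecurityGroupIds=["sg-0endpoint0000000"],   # hypothetical endpoint security group
)

# Allow SSH to the instance only from the endpoint's security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0instance0000000",              # hypothetical instance security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "UserIdGroupPairs": [{"GroupId": "sg-0endpoint0000000"}],
    }],
)
```

Administrators could then open an SSH session through the endpoint, for example with `aws ec2-instance-connect ssh --instance-id i-0123456789abcdef0 --connection-type eice`.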


Reference:

· AWS Certified CloudOps Engineer – Associate (SOA-C03) Exam Guide – Security and Compliance

· Amazon EC2 – Instance Connect Endpoint (Private SSH Access)

· AWS Well-Architected Framework – Security Pillar (Least Privilege Network Access)



A global company runs a critical primary workload in the us-east-1 Region. The company wants to ensure business continuity with minimal downtime in case of a workload failure. The company wants to replicate the workload to a second AWS Region.

A CloudOps engineer needs a solution that achieves a recovery time objective (RTO) of less than 10 minutes and a zero recovery point objective (RPO) to meet service level agreements.

Which solution will meet these requirements?

  1. Implement a pilot light architecture that provides real-time data replication in the second Region.
    Configure Amazon Route 53 health checks and automated DNS failover.
  2. Implement a warm standby architecture that provides regular data replication in a second Region.
    Configure Amazon Route 53 health checks and automated DNS failover.
  3. Implement an active-active architecture that provides real-time data replication across two Regions. Use Amazon Route 53 health checks and a weighted routing policy.
  4. Implement a custom script to generate a regular backup of the data and store it in an S3 bucket that is in a second Region. Use the backup to launch the application in the second Region in the event of a workload failure.

Answer(s): C

Explanation:

According to the AWS Cloud Operations and Disaster Recovery documentation, the active-active multi-Region architecture provides the lowest possible RTO and RPO among all disaster recovery strategies. In this approach, workloads are deployed and actively running in multiple AWS Regions simultaneously. All data is continuously replicated in real time between Regions using fully managed replication services, ensuring zero data loss (zero RPO).

Because both Regions are active and capable of handling requests, failover between them is instantaneous, meeting the RTO of less than 10 minutes. Amazon Route 53 is used with weighted or latency-based routing policies and health checks to automatically route traffic away from an impaired Region to the healthy Region without manual intervention.

In contrast:

Pilot Light Architecture maintains only a minimal copy of the environment in the secondary Region. It requires time to scale up infrastructure during a disaster, resulting in longer RTO and potential data loss (non-zero RPO).

Warm Standby Architecture keeps partially running infrastructure in the secondary Region. Although faster than pilot light, it still requires scaling and synchronization, resulting in higher RTO and RPO compared to active-active.

Backup and Restore (option D) relies on periodic backups and restores data when needed. This approach has the highest RTO and RPO, making it unsuitable for mission-critical workloads that demand high availability and zero data loss.

Therefore, based on AWS-recommended disaster recovery strategies outlined in the AWS Cloud Operations and Disaster Recovery Guide, the Active-Active Multi-Region architecture (Option C) is the only approach that guarantees RTO <10 minutes and RPO = 0, achieving continuous availability and business continuity across Regions.
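
As a sketch of the routing half of this design with boto3 (the hosted zone ID, record values, and health check IDs are hypothetical):

```python
import boto3

route53 = boto3.client("route53")

# Two weighted records for the same name, each tied to a Regional health
# check, so traffic shifts automatically away from an impaired Region.
route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",  # hypothetical hosted zone
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "us-east-1",
                    "Weight": 50,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "198.51.100.10"}],
                    "HealthCheckId": "hc-use1-id",  # hypothetical health check
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "us-west-2",
                    "Weight": 50,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "203.0.113.20"}],
                    "HealthCheckId": "hc-usw2-id",  # hypothetical health check
                },
            },
        ]
    },
)
```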


Reference:

AWS Cloud Operations and Disaster Recovery Whitepaper – Section: Disaster Recovery Strategies – Multi-Site (Active-Active) Approach; AWS CloudOps Best Practices for Reliability and Business Continuity.




A CloudOps engineer is using AWS Compute Optimizer to generate recommendations for a fleet of Amazon EC2 instances. Some of the instances use newly released instance types, while other instances use older instance types.

After the analysis is complete, the CloudOps engineer notices that some of the EC2 instances are missing from the Compute Optimizer dashboard.

What is the likely cause of this issue?

  1. The missing instances have insufficient historical Amazon CloudWatch metric data for analysis.
  2. Compute Optimizer does not support the instance types of the missing instances.
  3. Compute Optimizer already considers the missing instances to be optimized.
  4. The missing instances are running a Windows operating system.

Answer(s): B

Explanation:

According to the AWS Cloud Operations and Compute Optimizer documentation, Compute Optimizer provides right-sizing recommendations by analyzing Amazon CloudWatch metrics and instance configuration data. However, AWS explicitly notes that only supported instance types are included in Compute Optimizer analyses. If an EC2 instance type is newly released or not yet supported by Compute Optimizer, it will not appear in the Compute Optimizer dashboard until official support is added.

The documentation explains that "Compute Optimizer analyses only supported resource types and instance families. Instances using unsupported or newly launched instance types will not appear in the Compute Optimizer console." This ensures the service provides accurate recommendations based on sufficient performance history and benchmark data.

While CloudWatch metrics are required for analysis, the complete absence of instances from the dashboard (rather than an "insufficient metric data" notification) points to unsupported instance types. Compute Optimizer would normally still display instances with limited metrics but flag them as having insufficient data rather than omit them entirely.

Therefore, the most accurate cause of missing instances in this case is that Compute Optimizer does not support the newly released instance types, making option B correct.
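
A short boto3 sketch of how one might confirm this; the account ID and Region are placeholders:

```python
import boto3

co = boto3.client("compute-optimizer")
ec2 = boto3.client("ec2")

REGION, ACCOUNT = "us-east-1", "123456789012"  # hypothetical values

# ARNs of all instances in the account/Region.
running = [
    f"arn:aws:ec2:{REGION}:{ACCOUNT}:instance/{i['InstanceId']}"
    for r in ec2.describe_instances()["Reservations"]
    for i in r["Instances"]
]

# Instances Compute Optimizer cannot analyze come back in the "errors" list
# instead of "instanceRecommendations".
resp = co.get_ec2_instance_recommendations(instanceArns=running)
print(f"{len(resp['instanceRecommendations'])} instances analyzed")
for err in resp.get("errors", []):
    print(f"Not analyzed: {err['identifier']} ({err.get('code')}: {err.get('message')})")
```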


Reference:

AWS Cloud Operations & Compute Optimizer Guide – Section: Supported Resources and Limitations in Compute Optimizer


