Amazon AWS Certified CloudOps Engineer - Associate SOA-C03 Exam (page: 2)
Updated on: 31-Mar-2026

A user working in the Amazon EC2 console increased the size of an Amazon Elastic Block Store

(Amazon EBS) volume attached to an Amazon EC2 Windows instance. The change is not reflected in the file system.

What should a CloudOps engineer do to resolve this issue?

  A. Extend the file system with operating system-level tools to use the new storage capacity.
  B. Reattach the EBS volume to the EC2 instance.
  C. Reboot the EC2 instance that is attached to the EBS volume.
  D. Take a snapshot of the EBS volume. Replace the original volume with a volume that is created from the snapshot.

Answer(s): A

Explanation:

When an Amazon EBS volume is resized, the new storage capacity is immediately available to the attached EC2 instance. However, EBS does not automatically extend the file system. The CloudOps engineer must manually extend the file system within the operating system to utilize the additional space.

AWS documentation for EC2 and EBS specifies:

"After you increase the size of an EBS volume, use file system-specific tools to extend the file system so that the operating system can use the new storage capacity."

On Windows instances, this can be achieved through Disk Management or diskpart commands. On Linux systems, utilities such as growpart and resize2fs are used.
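For example, on a Linux instance with an ext4 file system, the extension can be sketched as follows (the device and partition names are assumptions and vary by instance type):

```shell
# Grow partition 1 of the resized NVMe volume, then grow the ext4 file system.
# Device names (/dev/nvme0n1) are illustrative; confirm with lsblk first.
lsblk                               # the volume should already show the new size
sudo growpart /dev/nvme0n1 1        # extend the partition to fill the volume
sudo resize2fs /dev/nvme0n1p1       # extend the ext4 file system (use xfs_growfs for XFS)
```

On the Windows instance in this scenario, the equivalent is the "Extend Volume" action in Disk Management, or the Get-PartitionSupportedSize and Resize-Partition PowerShell cmdlets.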

Options B and C do not modify file system metadata and are ineffective. Option D unnecessarily replaces the volume, which adds risk and downtime. Thus, Option A aligns with the Monitoring and Performance Optimization practices of AWS CloudOps by properly extending the file system to recognize the new capacity.


Reference:

· AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Domain 1

· Amazon EBS - Modifying EBS Volumes

· Amazon EC2 User Guide - Extending a File System After Resizing a Volume

· AWS Well-Architected Framework - Performance Efficiency Pillar




A company has a workload that is sending log data to Amazon CloudWatch Logs. One of the fields includes a measure of application latency. A CloudOps engineer needs to monitor the p90 statistic of this field over time.

What should the CloudOps engineer do to meet this requirement?

  A. Create an Amazon CloudWatch Contributor Insights rule on the log data.
  B. Create a metric filter on the log data.
  C. Create a subscription filter on the log data.
  D. Create an Amazon CloudWatch Application Insights rule for the workload.

Answer(s): B

Explanation:

To analyze and visualize custom statistics such as the p90 latency (90th percentile), a CloudWatch metric must be generated from the log data. The correct method is to create a metric filter that extracts the latency value from each log event and publishes it as a CloudWatch metric. Once the metric is published, percentile statistics (p90, p95, etc.) can be displayed in CloudWatch dashboards or alarms.
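As a sketch, assuming the application writes JSON log events with a numeric latency field, the metric filter could be created with the AWS CLI (the log group, namespace, and field names are illustrative):

```shell
# Extract the latency field from each JSON log event and publish it as a metric.
aws logs put-metric-filter \
  --log-group-name "/app/production" \
  --filter-name "AppLatency" \
  --filter-pattern '{ $.latency = * }' \
  --metric-transformations \
      metricName=Latency,metricNamespace=MyApp,metricValue='$.latency'

# Once data is flowing, query the p90 statistic over 5-minute periods.
aws cloudwatch get-metric-statistics \
  --namespace MyApp --metric-name Latency \
  --start-time 2026-03-31T00:00:00Z --end-time 2026-03-31T01:00:00Z \
  --period 300 --extended-statistics p90
```

The same p90 statistic can then be selected directly in a dashboard widget or a CloudWatch alarm.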

AWS documentation states:

"You can use metric filters to extract numerical fields from log events and publish them as metrics in CloudWatch. CloudWatch supports percentile statistics such as p90 and p95 for these metrics."

Contributor Insights (Option A) is for analyzing frequent contributors, not numeric distributions. Subscription filters (Option C) are used for log streaming, and Application Insights (Option D) provides monitoring of application health but not custom p90 statistics. Hence, Option B is the CloudOps-aligned, minimal-overhead solution for percentile latency monitoring.


Reference:

· AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Domain 1: Monitoring and Logging

· Amazon CloudWatch Logs - Metric Filters

· AWS Well-Architected Framework - Operational Excellence Pillar



A company is running an application on premises and wants to use AWS for data backup. All of the data must be available locally. The backup application can write only to block-based storage that is compatible with the Portable Operating System Interface (POSIX).

Which backup solution will meet these requirements?

  A. Configure the backup software to use Amazon S3 as the target for the data backups.
  B. Configure the backup software to use Amazon S3 Glacier Flexible Retrieval as the target for the data backups.
  C. Use AWS Storage Gateway, and configure it to use gateway-cached volumes.
  D. Use AWS Storage Gateway, and configure it to use gateway-stored volumes.

Answer(s): D

Explanation:

The Storage Gateway service enables hybrid cloud backup by presenting local block storage that synchronizes with AWS cloud storage. For scenarios where all data must remain available locally while still backed up to AWS, the correct mode is gateway-stored volumes.

AWS documentation defines:

"Use stored volumes if you want to keep all your data locally while asynchronously backing up point-in-time snapshots to Amazon S3 for durable storage."

These volumes expose an iSCSI interface compatible with POSIX file systems, allowing direct use by on-premises backup software.
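Once a Volume Gateway is activated, creating a stored volume on a local disk can be sketched with the AWS CLI (the gateway ARN, disk ID, target name, and IP address are placeholders):

```shell
# Create a stored volume that keeps all data on the local disk and
# asynchronously backs up point-in-time snapshots to Amazon S3.
aws storagegateway create-stored-iscsi-volume \
  --gateway-arn arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-EXAMPLE \
  --disk-id pci-0000:03:00.0-scsi-0:0:0:0 \
  --preserve-existing-data \
  --target-name backup-volume \
  --network-interface-id 10.0.0.10
```

The gateway then exposes the volume as an iSCSI target that the on-premises backup software mounts as ordinary block storage.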

Gateway-cached volumes (Option C) store primary data in AWS with limited local cache, violating the "all data must be available locally" requirement. Options A and B are object-based storage solutions, not compatible with POSIX or block-based backup applications.

Therefore, Option D fully satisfies CloudOps reliability and continuity best practices by ensuring local availability, cloud durability, and POSIX compatibility for backups.


Reference:

· AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Domain 2: Reliability and Business Continuity

· AWS Storage Gateway User Guide - Stored Volumes Overview

· AWS Well-Architected Framework - Reliability Pillar

· AWS Hybrid Cloud Storage Best Practices



A CloudOps engineer needs to control access to groups of Amazon EC2 instances using AWS Systems Manager Session Manager. Specific tags on the EC2 instances have already been added.

Which additional actions should the CloudOps engineer take to control access? (Select TWO.)

  A. Attach an IAM policy to the users or groups that require access to the EC2 instances.
  B. Attach an IAM role to control access to the EC2 instances.
  C. Create a placement group for the EC2 instances and add a specific tag.
  D. Create a service account and attach it to the EC2 instances that need to be controlled.
  E. Create an IAM policy that grants access to any EC2 instances with a tag specified in the Condition element.

Answer(s): A,E

Explanation:

AWS Systems Manager Session Manager allows secure, auditable instance access without SSH keys or inbound ports. To control access based on instance tags, CloudOps best practices require two configurations:

Attach an IAM policy to users or groups granting ssm:StartSession, ssm:DescribeInstanceInformation, and ssm:DescribeSessions.

Include a Condition element in the IAM policy that references instance tags, for example: Condition: {"StringEquals": {"ssm:resourceTag/Environment": "Production"}}.

This ensures users can start sessions only with instances that have matching tags, providing fine-grained access control.
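A minimal policy statement of this kind might look like the following sketch; the tag key and value are assumptions for illustration:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ssm:StartSession",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "StringEquals": { "ssm:resourceTag/Environment": "Production" }
      }
    }
  ]
}
```

Attached to a user or group, this statement allows sessions only to managed instances carrying the Environment=Production tag.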

AWS CloudOps documentation under Security and Compliance states:

"Use IAM policies with resource tags in the Condition element to restrict which managed instances users can access using Session Manager."

Options B and D incorrectly suggest attaching roles or service accounts that are not relevant to user-level access control. Option C (placement groups) pertains to networking and performance, not access management. Therefore, A and E together provide tag-based, least-privilege access as required.


Reference:

· AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Domain 4: Security and Compliance

· AWS Systems Manager User Guide - Controlling Access to Session Manager Using Tags

· AWS IAM Policy Reference - Condition Keys for AWS Systems Manager

· AWS Well-Architected Framework - Security Pillar



A global gaming company is preparing to launch a new game on AWS. The game runs in multiple AWS Regions on a fleet of Amazon EC2 instances. The instances are in an Auto Scaling group behind an Application Load Balancer (ALB) in each Region. The company plans to use Amazon Route 53 for DNS services. The DNS configuration must direct users to the Region that is closest to them and must provide automated failover.

Which combination of steps should a CloudOps engineer take to configure Route 53 to meet these requirements? (Select TWO.)

  A. Create Amazon CloudWatch alarms that monitor the health of the ALB in each Region. Configure Route 53 DNS failover by using a health check that monitors the alarms.
  B. Create Amazon CloudWatch alarms that monitor the health of the EC2 instances in each Region. Configure Route 53 DNS failover by using a health check that monitors the alarms.
  C. Configure Route 53 DNS failover by using a health check that monitors the private IP address of an EC2 instance in each Region.
  D. Configure Route 53 geoproximity routing. Specify the Regions that are used for the infrastructure.
  E. Configure Route 53 simple routing. Specify the continent, country, and state or province that are used for the infrastructure.

Answer(s): A,D

Explanation:

The combination of geoproximity routing and DNS failover health checks provides global low-latency routing with high availability.

Geoproximity routing in Route 53 routes users to the AWS Region closest to their geographic location, optimizing latency. For automatic failover, Route 53 health checks can monitor CloudWatch alarms tied to the health of the ALB in each Region.
When a Region becomes unhealthy, Route 53 reroutes traffic to the next available Region automatically.
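The failover half of this design can be sketched with the AWS CLI: a health check of type CLOUDWATCH_METRIC that watches the per-Region ALB alarm (the alarm name, Region, and caller reference are placeholders):

```shell
# Create a Route 53 health check backed by a CloudWatch alarm on the ALB.
aws route53 create-health-check \
  --caller-reference alb-healthcheck-use1-001 \
  --health-check-config '{
    "Type": "CLOUDWATCH_METRIC",
    "AlarmIdentifier": { "Region": "us-east-1", "Name": "alb-unhealthy-hosts" },
    "InsufficientDataHealthStatus": "Unhealthy"
  }'
```

The returned health check ID is then associated with that Region's geoproximity record, so Route 53 stops answering with a Region whose alarm is in the ALARM state.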

AWS documentation states:

"Use geoproximity routing to direct users to resources based on geographic location, and configure health checks to provide DNS failover for high availability."

Option B incorrectly monitors EC2 instances directly, which is not efficient at scale. Option C uses private IPs, which cannot be globally health-checked. Option E (simple routing) does not support geographic or failover routing. Hence, A and D together meet both the proximity and failover requirements.


Reference:

· AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Domain 5: Networking and Content Delivery

· Amazon Route 53 Developer Guide - Geoproximity Routing and DNS Failover

· AWS Well-Architected Framework - Reliability Pillar

· Amazon CloudWatch Alarms - Integration with Route 53 Health Checks



A company requires the rotation of administrative credentials for production workloads on a regular basis. A CloudOps engineer must implement this policy for an Amazon RDS DB instance's master user password.

Which solution will meet this requirement with the LEAST operational effort?

  A. Create an AWS Lambda function to change the RDS master user password. Create an Amazon EventBridge scheduled rule to invoke the Lambda function.
  B. Create a new SecureString parameter in AWS Systems Manager Parameter Store. Encrypt the parameter with an AWS Key Management Service (AWS KMS) key. Configure automatic rotation.
  C. Create a new String parameter in AWS Systems Manager Parameter Store. Configure automatic rotation.
  D. Create a new RDS database secret in AWS Secrets Manager. Apply the secret to the RDS DB instance. Configure automatic rotation.

Answer(s): D

Explanation:

AWS Secrets Manager natively supports credential management and automatic rotation for Amazon RDS master user passwords.
When a secret is associated with an RDS instance, Secrets Manager automatically updates the password both in the secret and on the database, without downtime or manual scripting.
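Assuming a secret named prod/rds/master already stores the RDS master credentials through the RDS integration, enabling a 30-day rotation schedule can be sketched as:

```shell
# Enable automatic rotation of the RDS master user password every 30 days.
# Secret name and schedule are illustrative.
aws secretsmanager rotate-secret \
  --secret-id prod/rds/master \
  --rotation-rules AutomaticallyAfterDays=30
```

For a secret that was not created through the RDS integration, the first call also needs a rotation function (the --rotation-lambda-arn parameter); the integrated path avoids that custom code.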

AWS documentation confirms:

"AWS Secrets Manager can automatically rotate the master user password for Amazon RDS databases. Rotation is fully managed and integrated, requiring no custom code or maintenance."

Option A introduces unnecessary custom Lambda automation. Options B and C use Parameter Store, which does not provide automatic rotation of RDS passwords. Therefore, Option D achieves secure, automatic credential rotation with the least operational effort, fully aligned with CloudOps security automation principles.


Reference:

· AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Domain 4: Security and Compliance

· AWS Secrets Manager - Rotating Secrets for Amazon RDS

· AWS Well-Architected Framework - Security Pillar

· Amazon RDS User Guide - Managing Master User Passwords



A company has a microservice that runs on a set of Amazon EC2 instances. The EC2 instances run behind an Application Load Balancer (ALB).

A CloudOps engineer must use Amazon Route 53 to create a record that maps the ALB URL to example.com.

Which type of record will meet this requirement?

  A. An A record
  B. An AAAA record
  C. An alias record
  D. A CNAME record

Answer(s): C

Explanation:

An alias record is the recommended Route 53 record type to map domain names (e.g., example.com) to AWS-managed resources such as an Application Load Balancer. Alias records are a Route 53-specific extension of A and AAAA records that point directly at AWS resources, track the target's IP addresses automatically, and incur no charge for queries to AWS resources.

AWS documentation states:

"Use alias records to map your domain or subdomain to an AWS resource such as an Application Load Balancer, CloudFront distribution, or S3 website endpoint."
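Creating the alias record can be sketched with the AWS CLI; the hosted zone ID, the ALB DNS name, and the ALB's canonical hosted zone ID (both shown by `aws elbv2 describe-load-balancers`) are placeholders:

```shell
# UPSERT an alias A record at the zone apex pointing to the ALB.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1D633PJEXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z35SXDOTRQ7X7K",
          "DNSName": "my-alb-123456.us-east-1.elb.amazonaws.com",
          "EvaluateTargetHealth": true
        }
      }
    }]
  }'
```

Note that the AliasTarget HostedZoneId is the load balancer's canonical zone ID, not the ID of your own hosted zone.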

A and AAAA records are used for static IP addresses, not load balancers. CNAME records cannot be used at the root domain (e.g., example.com). Thus, Option C is correct as it meets CloudOps networking best practices for scalable, managed DNS resolution to ALBs.


Reference:

· AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Domain 5: Networking and Content Delivery

· Amazon Route 53 Developer Guide - Alias Records

· AWS Well-Architected Framework - Reliability and Performance Efficiency Pillars

· Elastic Load Balancing - Integrating with Route 53



Application A runs on Amazon EC2 instances behind a Network Load Balancer (NLB). The EC2 instances are in an Auto Scaling group and are in the same subnet that is associated with the NLB. Other applications from an on-premises environment cannot communicate with Application A on port 8080.

To troubleshoot the issue, a CloudOps engineer analyzes the flow logs. The flow logs include the following records:

ACCEPT from 192.168.0.13:59003 172.31.16.139:8080

REJECT from 172.31.16.139:8080 192.168.0.13:59003

What is the reason for the rejected traffic?

  A. The security group of the EC2 instances has no Allow rule for the traffic from the NLB.
  B. The security group of the NLB has no Allow rule for the traffic from the on-premises environment.
  C. The ACL of the on-premises environment does not allow traffic to the AWS environment.
  D. The network ACL that is associated with the subnet does not allow outbound traffic for the ephemeral port range.

Answer(s): D

Explanation:


VPC Flow Logs show the request arriving and being ACCEPTed on destination port 8080 and the corresponding response being REJECTed on the return path to the client's ephemeral port (59003). Security groups are stateful (return traffic is automatically allowed), while network ACLs are stateless and require explicit inbound and outbound rules for both directions.

When you allow an inbound request (for example, TCP 8080) through a subnet's network ACL, you must also allow the outbound ephemeral port range (typically 1024-65535) for the response traffic; otherwise, the return packets are dropped and appear as REJECT in the flow logs. The observed pattern (request accepted to port 8080, response rejected to port 59003) matches a missing outbound ephemeral-range Allow rule on the subnet's NACL.

Therefore, the cause is the subnet's network ACL, not security groups or on-premises ACLs. The remediation is to add an outbound ALLOW rule on the NACL for the appropriate ephemeral TCP port range back to the on-premises CIDR (and the corresponding inbound rule if the path is asymmetric).
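The remediation described above can be sketched as a single NACL rule with the AWS CLI (the ACL ID, rule number, and on-premises CIDR are placeholders):

```shell
# Allow outbound response traffic on the ephemeral port range back to on premises.
aws ec2 create-network-acl-entry \
  --network-acl-id acl-0123456789abcdef0 \
  --rule-number 150 \
  --egress \
  --protocol tcp \
  --rule-action allow \
  --cidr-block 192.168.0.0/24 \
  --port-range From=1024,To=65535
```

After the rule is in place, the flow logs should show ACCEPT on both directions of the connection.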


Reference:

· AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Networking and Content Delivery

· Amazon VPC - Network ACLs (stateless behavior and rule requirements)

· Amazon VPC - Security Groups (stateful return traffic)

· VPC Flow Logs - Record fields, ACCEPT/REJECT analysis





