Amazon AWS Certified CloudOps Engineer - Associate SOA-C03 Exam (page: 2)
Updated on: 31-Mar-2026

A user working in the Amazon EC2 console increased the size of an Amazon Elastic Block Store (Amazon EBS) volume attached to an Amazon EC2 Windows instance. The change is not reflected in the file system.

What should a CloudOps engineer do to resolve this issue?

  A. Extend the file system with operating system-level tools to use the new storage capacity.
  B. Reattach the EBS volume to the EC2 instance.
  C. Reboot the EC2 instance that is attached to the EBS volume.
  D. Take a snapshot of the EBS volume. Replace the original volume with a volume that is created from the snapshot.

Answer(s): A

Explanation:

When an Amazon EBS volume is resized, the new storage capacity is immediately available to the attached EC2 instance. However, EBS does not automatically extend the file system. The CloudOps engineer must manually extend the file system within the operating system to utilize the additional space.

AWS documentation for EC2 and EBS specifies:

"After you increase the size of an EBS volume, use file system-specific tools to extend the file system so that the operating system can use the new storage capacity."

On Windows instances, this can be achieved through Disk Management or diskpart commands. On Linux systems, utilities such as growpart and resize2fs are used.

Options B and C do not modify file system metadata and are ineffective. Option D unnecessarily replaces the volume, which adds risk and downtime. Thus, Option A aligns with the Monitoring and Performance Optimization practices of AWS CloudOps by properly extending the file system to recognize the new capacity.


Reference:

· AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Domain 1

· Amazon EBS - Modifying EBS Volumes

· Amazon EC2 User Guide - Extending a File System After Resizing a Volume

· AWS Well-Architected Framework - Performance Efficiency Pillar




A company has a workload that is sending log data to Amazon CloudWatch Logs. One of the fields includes a measure of application latency. A CloudOps engineer needs to monitor the p90 statistic of this field over time.

What should the CloudOps engineer do to meet this requirement?

  A. Create an Amazon CloudWatch Contributor Insights rule on the log data.
  B. Create a metric filter on the log data.
  C. Create a subscription filter on the log data.
  D. Create an Amazon CloudWatch Application Insights rule for the workload.

Answer(s): B

Explanation:

To analyze and visualize custom statistics such as the p90 latency (90th percentile), a CloudWatch metric must be generated from the log data. The correct method is to create a metric filter that extracts the latency value from each log event and publishes it as a CloudWatch metric. Once the metric is published, percentile statistics (p90, p95, etc.) can be displayed in CloudWatch dashboards or alarms.

AWS documentation states:

"You can use metric filters to extract numerical fields from log events and publish them as metrics in CloudWatch. CloudWatch supports percentile statistics such as p90 and p95 for these metrics."

Contributor Insights (Option A) analyzes frequent contributors, not numeric distributions. Subscription filters (Option C) are for streaming log data to other services, and Application Insights (Option D) monitors application health but does not produce custom p90 statistics. Hence, Option B is the minimal-overhead, CloudOps-aligned solution for percentile latency monitoring.
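What a metric filter does mechanically can be sketched in Python: extract a numeric field from each log event, then compute a percentile over the extracted values. The log format and the `latency=` field name are assumptions for illustration; in the real service, CloudWatch performs the extraction via the filter pattern and computes percentiles server-side.

```python
import re
from statistics import quantiles

# Hypothetical log events; the "latency=<n>" field name is an assumption.
log_events = [
    "2025-01-01T00:00:00Z GET /play latency=120",
    "2025-01-01T00:00:01Z GET /play latency=95",
    "2025-01-01T00:00:02Z GET /play latency=310",
    "2025-01-01T00:00:03Z GET /play latency=88",
    "2025-01-01T00:00:04Z GET /play latency=150",
]

# Step 1: extract the numeric field from each event, as a metric filter would.
latencies = [int(re.search(r"latency=(\d+)", e).group(1)) for e in log_events]

# Step 2: compute the p90 statistic over the extracted values.
# quantiles(n=10) returns the nine deciles; index 8 is the 90th percentile.
p90 = quantiles(latencies, n=10, method="inclusive")[8]
print(f"p90 latency: {p90} ms")
```

In the actual setup, step 1 corresponds to the metric filter's pattern and metric value, and step 2 is simply selecting the p90 statistic on the published CloudWatch metric in a dashboard or alarm.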


Reference:

· AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Domain 1: Monitoring and Logging

· Amazon CloudWatch Logs - Metric Filters

· AWS Well-Architected Framework - Operational Excellence Pillar



A company is running an application on premises and wants to use AWS for data backup. All of the data must be available locally. The backup application can write only to block-based storage that is compatible with the Portable Operating System Interface (POSIX).

Which backup solution will meet these requirements?

  A. Configure the backup software to use Amazon S3 as the target for the data backups.
  B. Configure the backup software to use Amazon S3 Glacier Flexible Retrieval as the target for the data backups.
  C. Use AWS Storage Gateway, and configure it to use gateway-cached volumes.
  D. Use AWS Storage Gateway, and configure it to use gateway-stored volumes.

Answer(s): D

Explanation:

The Storage Gateway service enables hybrid cloud backup by presenting local block storage that synchronizes with AWS cloud storage. For scenarios where all data must remain available locally while still backed up to AWS, the correct mode is gateway-stored volumes.

AWS documentation defines:

"Use stored volumes if you want to keep all your data locally while asynchronously backing up point-in-time snapshots to Amazon S3 for durable storage."

These volumes are exposed as iSCSI block devices on which the operating system can create POSIX-compliant file systems, allowing direct use by on-premises backup software.

Gateway-cached volumes (Option C) store primary data in AWS with limited local cache, violating the "all data must be available locally" requirement. Options A and B are object-based storage solutions, not compatible with POSIX or block-based backup applications.

Therefore, Option D fully satisfies CloudOps reliability and continuity best practices by ensuring local availability, cloud durability, and POSIX compatibility for backups.


Reference:

· AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Domain 2: Reliability and Business Continuity

· AWS Storage Gateway User Guide - Stored Volumes Overview

· AWS Well-Architected Framework - Reliability Pillar

· AWS Hybrid Cloud Storage Best Practices



A CloudOps engineer needs to control access to groups of Amazon EC2 instances using AWS Systems Manager Session Manager. Specific tags on the EC2 instances have already been added.

Which additional actions should the CloudOps engineer take to control access? (Select TWO.)

  A. Attach an IAM policy to the users or groups that require access to the EC2 instances.
  B. Attach an IAM role to control access to the EC2 instances.
  C. Create a placement group for the EC2 instances and add a specific tag.
  D. Create a service account and attach it to the EC2 instances that need to be controlled.
  E. Create an IAM policy that grants access to any EC2 instances with a tag specified in the Condition element.

Answer(s): A,E

Explanation:

AWS Systems Manager Session Manager allows secure, auditable instance access without SSH keys or inbound ports. To control access based on instance tags, CloudOps best practices require two configurations:

Attach an IAM policy to users or groups granting ssm:StartSession, ssm:DescribeInstanceInformation, and ssm:DescribeSessions.

Include a Condition element in the IAM policy referencing instance tags, such as {"StringEquals": {"ssm:resourceTag/Environment": "Production"}}.

This ensures users can start sessions only with instances that have matching tags, providing fine-grained access control.

AWS CloudOps documentation under Security and Compliance states:

"Use IAM policies with resource tags in the Condition element to restrict which managed instances users can access using Session Manager."

Options B and D incorrectly suggest attaching roles or service accounts that are not relevant to user-level access control. Option C (placement groups) pertains to networking and performance, not access management. Therefore, A and E together provide tag-based, least-privilege access as required.
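The tag-conditioned policy described above can be sketched as a Python dict and serialized to JSON. The Environment=Production tag key and value are assumptions for illustration; substitute the tags already applied to the instances.

```python
import json

# Sketch of an IAM policy restricting ssm:StartSession by instance tag.
# The tag key/value below is hypothetical; use your own instance tags.
session_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ssm:StartSession",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringEquals": {"ssm:resourceTag/Environment": "Production"}
            },
        }
    ],
}

policy_json = json.dumps(session_policy, indent=2)
print(policy_json)
```

Attaching a policy of this shape to users or groups is option A; the Condition element referencing the instance tag is option E.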


Reference:

· AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Domain 4: Security and Compliance

· AWS Systems Manager User Guide - Controlling Access to Session Manager Using Tags

· AWS IAM Policy Reference - Condition Keys for AWS Systems Manager

· AWS Well-Architected Framework - Security Pillar



A global gaming company is preparing to launch a new game on AWS. The game runs in multiple AWS Regions on a fleet of Amazon EC2 instances. The instances are in an Auto Scaling group behind an Application Load Balancer (ALB) in each Region. The company plans to use Amazon Route 53 for DNS services. The DNS configuration must direct users to the Region that is closest to them and must provide automated failover.

Which combination of steps should a CloudOps engineer take to configure Route 53 to meet these requirements? (Select TWO.)

  A. Create Amazon CloudWatch alarms that monitor the health of the ALB in each Region. Configure Route 53 DNS failover by using a health check that monitors the alarms.
  B. Create Amazon CloudWatch alarms that monitor the health of the EC2 instances in each Region. Configure Route 53 DNS failover by using a health check that monitors the alarms.
  C. Configure Route 53 DNS failover by using a health check that monitors the private IP address of an EC2 instance in each Region.
  D. Configure Route 53 geoproximity routing. Specify the Regions that are used for the infrastructure.
  E. Configure Route 53 simple routing. Specify the continent, country, and state or province that are used for the infrastructure.

Answer(s): A,D

Explanation:

The combination of geoproximity routing and DNS failover health checks provides global low-latency routing with high availability.

Geoproximity routing in Route 53 routes users to the AWS Region closest to their geographic location, optimizing latency. For automatic failover, Route 53 health checks can monitor CloudWatch alarms tied to the health of the ALB in each Region.
When a Region becomes unhealthy, Route 53 reroutes traffic to the next available Region automatically.

AWS documentation states:

"Use geoproximity routing to direct users to resources based on geographic location, and configure health checks to provide DNS failover for high availability."

Option B incorrectly monitors EC2 instances directly, which is not efficient at scale. Option C uses private IPs, which cannot be globally health-checked. Option E (simple routing) does not support geographic or failover routing. Hence, A and D together meet both the proximity and failover requirements.
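Conceptually, the combination behaves like the toy model below: route each user to the nearest Region whose health check passes, and fail over when it does not. The Region names, distances, and health flags are illustrative only; this is not how Route 53 is implemented.

```python
# Toy model of geoproximity routing with DNS failover.
regions = {
    "us-east-1": {"distance_km": 500, "healthy": True},
    "eu-west-1": {"distance_km": 6000, "healthy": True},
    "ap-southeast-1": {"distance_km": 15000, "healthy": True},
}

def resolve(regions):
    """Return the nearest Region that is passing its health check."""
    healthy = {name: r for name, r in regions.items() if r["healthy"]}
    if not healthy:
        raise RuntimeError("no healthy Region available")
    return min(healthy, key=lambda name: healthy[name]["distance_km"])

print(resolve(regions))                  # nearest healthy Region: us-east-1
regions["us-east-1"]["healthy"] = False  # ALB alarm fires, health check fails
print(resolve(regions))                  # failover to next nearest: eu-west-1
```

In the real setup, the "distance" part is the geoproximity routing policy (option D) and the "healthy" flag is the Route 53 health check backed by the ALB CloudWatch alarm (option A).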


Reference:

· AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Domain 5: Networking and Content Delivery

· Amazon Route 53 Developer Guide - Geoproximity Routing and DNS Failover

· AWS Well-Architected Framework - Reliability Pillar

· Amazon CloudWatch Alarms - Integration with Route 53 Health Checks



A company requires the rotation of administrative credentials for production workloads on a regular basis. A CloudOps engineer must implement this policy for an Amazon RDS DB instance's master user password.

Which solution will meet this requirement with the LEAST operational effort?

  A. Create an AWS Lambda function to change the RDS master user password. Create an Amazon EventBridge scheduled rule to invoke the Lambda function.
  B. Create a new SecureString parameter in AWS Systems Manager Parameter Store. Encrypt the parameter with an AWS Key Management Service (AWS KMS) key. Configure automatic rotation.
  C. Create a new String parameter in AWS Systems Manager Parameter Store. Configure automatic rotation.
  D. Create a new RDS database secret in AWS Secrets Manager. Apply the secret to the RDS DB instance. Configure automatic rotation.

Answer(s): D

Explanation:

AWS Secrets Manager natively supports credential management and automatic rotation for Amazon RDS master user passwords.
When a secret is associated with an RDS instance, Secrets Manager automatically updates the password both in the secret and on the database, without downtime or manual scripting.

AWS documentation confirms:

"AWS Secrets Manager can automatically rotate the master user password for Amazon RDS databases. Rotation is fully managed and integrated, requiring no custom code or maintenance."

Option A introduces unnecessary Lambda automation. Options B and C use Parameter Store, which does not provide direct RDS password rotation. Therefore, Option D achieves secure, automatic credential rotation with the least operational effort, fully aligned with CloudOps security automation principles.


Reference:

· AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Domain 4: Security and Compliance

· AWS Secrets Manager - Rotating Secrets for Amazon RDS

· AWS Well-Architected Framework - Security Pillar

· Amazon RDS User Guide - Managing Master User Passwords



A company has a microservice that runs on a set of Amazon EC2 instances. The EC2 instances run behind an Application Load Balancer (ALB).

A CloudOps engineer must use Amazon Route 53 to create a record that maps the ALB URL to example.com.

Which type of record will meet this requirement?

  A. An A record
  B. An AAAA record
  C. An alias record
  D. A CNAME record

Answer(s): C

Explanation:

An alias record is the recommended Route 53 record type for mapping a domain name (e.g., example.com) to an AWS-managed resource such as an Application Load Balancer. Alias records are a Route 53-specific extension of A and AAAA records that point directly to AWS resources, provide automatic DNS integration, and incur no additional query charges.

AWS documentation states:

"Use alias records to map your domain or subdomain to an AWS resource such as an Application Load Balancer, CloudFront distribution, or S3 website endpoint."

Standard A and AAAA records require static IP addresses, which load balancers do not provide. CNAME records cannot be used at the zone apex (e.g., example.com). Thus, Option C is correct and follows CloudOps networking best practices for scalable, managed DNS resolution to ALBs.
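For illustration, the Route 53 ChangeResourceRecordSets request body for such an alias record can be sketched as a Python dict. The hosted zone ID and the ALB DNS name below are placeholders, not real values.

```python
import json

# Sketch of a change batch creating an alias A record at the zone apex.
# Alias records keep Type "A" (or "AAAA") and add an AliasTarget block.
change_batch = {
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z0000000000000",  # placeholder: the ALB's hosted zone ID
                    "DNSName": "my-alb-123456789.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": True,
                },
            },
        }
    ]
}

print(json.dumps(change_batch, indent=2))
```

Note that the record Type stays A; the AliasTarget block is what distinguishes an alias record from a standard A record, and it is why an alias works at the zone apex where a CNAME cannot.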


Reference:

· AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Domain 5: Networking and Content Delivery

· Amazon Route 53 Developer Guide - Alias Records

· AWS Well-Architected Framework - Reliability and Performance Efficiency Pillars

· Elastic Load Balancing - Integrating with Route 53



Application A runs on Amazon EC2 instances behind a Network Load Balancer (NLB). The EC2 instances are in an Auto Scaling group and are in the same subnet that is associated with the NLB. Other applications from an on-premises environment cannot communicate with Application A on port 8080.

To troubleshoot the issue, a CloudOps engineer analyzes the flow logs. The flow logs include the following records:

ACCEPT from 192.168.0.13:59003 172.31.16.139:8080

REJECT from 172.31.16.139:8080 192.168.0.13:59003

What is the reason for the rejected traffic?

  A. The security group of the EC2 instances has no Allow rule for the traffic from the NLB.
  B. The security group of the NLB has no Allow rule for the traffic from the on-premises environment.
  C. The ACL of the on-premises environment does not allow traffic to the AWS environment.
  D. The network ACL that is associated with the subnet does not allow outbound traffic for the ephemeral port range.

Answer(s): D

Explanation:


VPC Flow Logs show the request arriving and being ACCEPTed on destination port 8080, and the corresponding response being REJECTed on the return path to the client's ephemeral port (59003). AWS networking guidance states that security groups are stateful (return traffic is automatically allowed), while network ACLs are stateless and require explicit inbound and outbound rules in both directions. When a subnet's network ACL allows an inbound request (for example, TCP 8080), it must also allow the outbound ephemeral port range (typically 1024-65535) for the response traffic; otherwise, the return packets are dropped and appear as REJECT in the flow logs. The observed pattern (request accepted to port 8080, response rejected to port 59003) matches a missing outbound ephemeral-range allow rule on the subnet's NACL. Therefore, the cause is the subnet NACL, not security groups or on-premises ACLs. The remediation is to add an outbound ALLOW rule on the NACL for the appropriate ephemeral TCP port range back to the on-premises CIDR.
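The stateless behavior can be illustrated with a toy model (the port numbers mirror the flow-log records above; this is a conceptual sketch, not real NACL evaluation code):

```python
# Toy model of a stateless network ACL: every packet in each direction
# must match an explicit rule; there is no connection tracking.
EPHEMERAL = range(1024, 65536)

def nacl_allows(rules, direction, port):
    """Return True if any rule permits traffic in this direction to this port."""
    return any(r["direction"] == direction and port in r["ports"] for r in rules)

# NACL with an inbound allow for 8080 but no outbound ephemeral-range rule.
rules = [{"direction": "inbound", "ports": range(8080, 8081)}]

request_ok = nacl_allows(rules, "inbound", 8080)     # ACCEPT in the flow log
response_ok = nacl_allows(rules, "outbound", 59003)  # REJECT in the flow log

# Remediation: allow the outbound ephemeral port range for responses.
rules.append({"direction": "outbound", "ports": EPHEMERAL})
fixed = nacl_allows(rules, "outbound", 59003)
print(request_ok, response_ok, fixed)  # True False True
```

A stateful security group would not show this asymmetry: once the inbound request is allowed, the return traffic is permitted automatically.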


Reference:

· AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Networking and Content Delivery

· Amazon VPC - Network ACLs (stateless behavior and rule requirements)

· Amazon VPC - Security Groups (stateful return traffic)

· VPC Flow Logs - Record fields, ACCEPT/REJECT analysis





