Amazon AWS Certified CloudOps Engineer - Associate SOA-C03 Exam
Updated on: 12-Feb-2026

A company's ecommerce application is running on Amazon EC2 instances that are behind an Application Load Balancer (ALB). The instances are in an Auto Scaling group. Customers report that the website is occasionally down.
When the website is down, it returns an HTTP 500 (server error) status code to customer browsers.

The Auto Scaling group's health check is configured for EC2 status checks, and the instances appear healthy.

Which solution will resolve the problem?

  A. Replace the ALB with a Network Load Balancer.
  B. Add Elastic Load Balancing (ELB) health checks to the Auto Scaling group.
  C. Update the target group configuration on the ALB. Enable session affinity (sticky sessions).
  D. Install the Amazon CloudWatch agent on all instances. Configure the agent to reboot the instances.

Answer(s): B

Explanation:

In this scenario, the EC2 instances pass their EC2 status checks, indicating that the operating system is responsive. However, the application hosted on the instance is failing intermittently, returning HTTP 500 errors. This demonstrates a discrepancy between the instance-level health and the application-level health.

According to AWS CloudOps best practices under Monitoring, Logging, Analysis, Remediation and Performance Optimization (SOA-C03 Domain 1), Auto Scaling groups should incorporate Elastic Load Balancing (ELB) health checks instead of relying solely on EC2 status checks. The ELB health check probes the application endpoint (for example, HTTP or HTTPS target group health checks), ensuring that the application itself is functioning correctly.

When an instance fails an ELB health check, Amazon EC2 Auto Scaling will automatically mark the instance as unhealthy and replace it with a new one, ensuring continuous availability and performance optimization.

Extract from AWS CloudOps (SOA-C03) Study Guide – Domain 1:

"Implement monitoring and health checks using ALB and EC2 Auto Scaling integration. Application Load Balancer health checks allow Auto Scaling to terminate and replace instances that fail application-level health checks, ensuring consistent application performance."

Extract from AWS Auto Scaling Documentation:

"When you enable the ELB health check type for your Auto Scaling group, Amazon EC2 Auto Scaling considers both EC2 status checks and Elastic Load Balancing health checks to determine instance health. If an instance fails the ELB health check, it is automatically replaced."

Therefore, the correct answer is B, as it ensures proper application-level monitoring and remediation using ALB-integrated ELB health checks, a core CloudOps operational practice for proactive incident response and availability assurance.
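As a concrete illustration, the fix in option B comes down to one Auto Scaling group setting. The following is a minimal CloudFormation sketch; the resource names (WebAsg, WebLaunchTemplate, WebTargetGroup) and subnet IDs are hypothetical placeholders, not values from the question:

```yaml
Resources:
  WebAsg:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "2"
      MaxSize: "6"
      VPCZoneIdentifier:
        - subnet-aaaa1111          # placeholder subnet IDs
        - subnet-bbbb2222
      LaunchTemplate:
        LaunchTemplateId: !Ref WebLaunchTemplate
        Version: !GetAtt WebLaunchTemplate.LatestVersionNumber
      TargetGroupARNs:
        - !Ref WebTargetGroup      # ALB target group serving the application
      HealthCheckType: ELB         # consider ELB health checks, not only EC2 status checks
      HealthCheckGracePeriod: 300  # seconds to let the app start before checks count
```

For an existing group, the same change can be applied with `aws autoscaling update-auto-scaling-group --auto-scaling-group-name <name> --health-check-type ELB`.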

References (AWS CloudOps Verified Source Extracts):

· AWS Certified CloudOps Engineer – Associate (SOA-C03) Exam Guide: Domain 1 – Monitoring, Logging, and Remediation

· AWS Auto Scaling User Guide: Health checks for Auto Scaling instances (Elastic Load Balancing integration)

· AWS Well-Architected Framework – Operational Excellence and Reliability Pillars

· AWS Elastic Load Balancing Developer Guide – Target group health checks and monitoring



A company hosts a critical legacy application on two Amazon EC2 instances that are in one Availability Zone. The instances run behind an Application Load Balancer (ALB). The company uses Amazon CloudWatch alarms to send Amazon Simple Notification Service (Amazon SNS) notifications when the ALB health checks detect an unhealthy instance. After a notification, the company's engineers manually restart the unhealthy instance. A CloudOps engineer must configure the application to be highly available and more resilient to failures.
Which solution will meet these requirements?

  A. Create an Amazon Machine Image (AMI) from a healthy instance. Launch additional instances from the AMI in the same Availability Zone. Add the new instances to the ALB target group.
  B. Increase the size of each instance. Create an Amazon EventBridge rule. Configure the EventBridge rule to restart the instances if they enter a failed state.
  C. Create an Amazon Machine Image (AMI) from a healthy instance. Launch an additional instance from the AMI in the same Availability Zone. Add the new instance to the ALB target group. Create an AWS Lambda function that runs when an instance is unhealthy. Configure the Lambda function to stop and restart the unhealthy instance.
  D. Create an Amazon Machine Image (AMI) from a healthy instance. Create a launch template that uses the AMI. Create an Amazon EC2 Auto Scaling group that is deployed across multiple Availability Zones. Configure the Auto Scaling group to add instances to the ALB target group.

Answer(s): D

Explanation:

High availability requires removing single-AZ risk and eliminating manual recovery. AWS reliability best practices call for multi-AZ design and automatic healing: Auto Scaling "helps maintain application availability and allows you to automatically add or remove EC2 instances" (AWS Auto Scaling User Guide). The Reliability Pillar recommends that you "distribute workloads across multiple Availability Zones" and "automate recovery from failure" (AWS Well-Architected Framework – Reliability Pillar). Attaching the Auto Scaling group to an ALB target group enables health-based replacement: instances that fail load balancer health checks are replaced, and traffic is routed only to healthy targets. Using an AMI in a launch template ensures consistent, repeatable instance configuration (AWS EC2 Launch Templates). Options A and C keep all instances in a single Availability Zone and rely on manual or ad hoc restarts, which do not meet high-availability or resiliency goals. Option B only scales vertically and adds a restart rule; it neither removes the single-AZ failure domain nor provides automated replacement. Therefore, creating a multi-AZ EC2 Auto Scaling group with a launch template and attaching it to the ALB target group (option D) is the CloudOps-aligned solution for resilience and business continuity.
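A minimal sketch of what option D could look like in CloudFormation. All names, the AMI ID, and the subnet IDs are illustrative assumptions; the ALB target group (AppTargetGroup) is assumed to be defined elsewhere in the same template:

```yaml
Resources:
  AppLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        ImageId: ami-0123456789abcdef0   # placeholder: AMI captured from a healthy instance
        InstanceType: t3.medium
  AppAsg:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "2"
      MaxSize: "4"
      VPCZoneIdentifier:                 # subnets in different Availability Zones
        - subnet-aaaa1111
        - subnet-bbbb2222
      LaunchTemplate:
        LaunchTemplateId: !Ref AppLaunchTemplate
        Version: !GetAtt AppLaunchTemplate.LatestVersionNumber
      TargetGroupARNs:
        - !Ref AppTargetGroup            # registers instances with the ALB target group
      HealthCheckType: ELB               # replace instances that fail ALB health checks
      HealthCheckGracePeriod: 300
```

Spreading VPCZoneIdentifier across subnets in at least two Availability Zones is what removes the single-AZ failure domain; the ELB health check type is what automates replacement.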

References (AWS CloudOps Documents / Study Guide):

· AWS Certified CloudOps Engineer – Associate (SOA-C03) Exam Guide: Domain 2 – Reliability and Business Continuity

· AWS Well-Architected Framework – Reliability Pillar

· Amazon EC2 Auto Scaling User Guide – Health checks and replacement

· Elastic Load Balancing User Guide – Target group health checks and ALB integration

· Amazon EC2 Launch Templates – Reproducible instance configuration



An Amazon EC2 instance is running an application that uses Amazon Simple Queue Service (Amazon SQS) queues. A CloudOps engineer must ensure that the application can read, write, and delete messages from the SQS queues.

Which solution will meet these requirements in the MOST secure manner?

  A. Create an IAM user with an IAM policy that allows the sqs:SendMessage permission, the sqs:ReceiveMessage permission, and the sqs:DeleteMessage permission to the appropriate queues. Embed the IAM user's credentials in the application's configuration.
  B. Create an IAM user with an IAM policy that allows the sqs:SendMessage permission, the sqs:ReceiveMessage permission, and the sqs:DeleteMessage permission to the appropriate queues. Export the IAM user's access key and secret access key as environment variables on the EC2 instance.
  C. Create and associate an IAM role that allows EC2 instances to call AWS services. Attach an IAM policy to the role that allows sqs:* permissions to the appropriate queues.
  D. Create and associate an IAM role that allows EC2 instances to call AWS services. Attach an IAM policy to the role that allows the sqs:SendMessage permission, the sqs:ReceiveMessage permission, and the sqs:DeleteMessage permission to the appropriate queues.

Answer(s): D

Explanation:

The most secure pattern is to use an IAM role for Amazon EC2 with the minimum required permissions. AWS guidance states: "Use roles for applications that run on Amazon EC2 instances" and "grant least privilege by allowing only the actions required to perform a task." By attaching a role to the instance, short-lived credentials are automatically provided through the instance metadata service; this removes the need to create long-term access keys or embed secrets. Granting only sqs:SendMessage, sqs:ReceiveMessage, and sqs:DeleteMessage against the specific SQS queues enforces least privilege and aligns with CloudOps security controls. Options A and B rely on IAM user access keys, which contravene best practices for workloads on EC2 and increase credential-management risk. Option C uses a role but grants sqs:*, violating least-privilege principles.

Therefore, Option D meets the security requirement with scoped, temporary credentials and precise permissions.
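A minimal CloudFormation sketch of option D's role and instance profile. The role name, policy name, and queue ARN are placeholders for illustration:

```yaml
Resources:
  SqsAppRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:          # lets EC2 assume the role for temporary credentials
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: sqs-least-privilege
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:                  # only the three actions the application needs
                  - sqs:SendMessage
                  - sqs:ReceiveMessage
                  - sqs:DeleteMessage
                Resource: arn:aws:sqs:us-east-1:123456789012:app-queue   # placeholder ARN
  SqsAppInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
        - !Ref SqsAppRole
```

The instance profile is what gets associated with the EC2 instance; scoping Resource to the specific queue ARN (rather than "*") is the least-privilege detail that distinguishes option D from option C.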

References (AWS CloudOps Documents / Study Guide):

· AWS Certified CloudOps Engineer – Associate (SOA-C03) Exam Guide – Security & Compliance

· IAM Best Practices – "Use roles instead of long-term access keys," "Grant least privilege"

· IAM Roles for Amazon EC2 – Temporary credentials for applications on EC2

· Amazon SQS – Identity and access management for Amazon SQS



A company runs an application that logs user data to an Amazon CloudWatch Logs log group. The company discovers that personal information the application has logged is visible in plain text in the CloudWatch logs.

The company needs a solution to redact personal information in the logs by default. Unredacted information must be available only to the company's security team.
Which solution will meet these requirements?

  A. Create an Amazon S3 bucket. Create an export task from appropriate log groups in CloudWatch. Export the logs to the S3 bucket. Configure an Amazon Macie scan to discover personal data in the S3 bucket. Invoke an AWS Lambda function to move identified personal data to a second S3 bucket. Update the S3 bucket policies to grant only the security team access to both buckets.
  B. Create a customer managed AWS KMS key. Configure the KMS key policy to allow only the security team to perform decrypt operations. Associate the KMS key with the application log group.
  C. Create an Amazon CloudWatch data protection policy for the application log group. Configure data identifiers for the types of personal information that the application logs. Ensure that the security team has permission to call the unmask API operation on the application log group.
  D. Create an OpenSearch domain. Create an AWS Glue workflow that runs a Detect PII transform job and streams the output to the OpenSearch domain. Configure the CloudWatch log group to stream the logs to AWS Glue. Modify the OpenSearch domain access policy to allow only the security team to access the domain.

Answer(s): C

Explanation:

CloudWatch Logs data protection provides native redaction/masking of sensitive data at ingestion and query. AWS documentation states it can "detect and protect sensitive data in logs" using data identifiers, and that authorized users can "use the unmask action to view the original data." Creating a data protection policy on the log group masks PII by default for all viewers, satisfying the requirement to redact personal information. Granting only the security team permission to invoke the unmask API operation ensures that unredacted content is restricted. Option B (KMS) encrypts at rest but does not redact fields; encryption alone does not prevent plaintext visibility to authorized readers. Options A and D add complexity and latency, move data out of CloudWatch, and do not provide default inline redaction/unmask controls in CloudWatch itself. Therefore, the CloudOps-aligned, managed solution is to use CloudWatch Logs data protection with appropriate data identifiers and unmask permissions limited to the security team.
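A sketch of what option C could look like, attaching a data protection policy through the log group's DataProtectionPolicy property in CloudFormation. The log group name and the choice of data identifiers are illustrative assumptions (a data protection policy pairs an Audit statement with a Deidentify statement):

```yaml
Resources:
  AppLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: /app/user-activity          # placeholder name
      DataProtectionPolicy:
        Name: redact-pii
        Version: "2021-06-01"
        Statement:
          - Sid: audit                          # detect findings for the chosen identifiers
            DataIdentifier:
              - arn:aws:dataprotection::aws:data-identifier/EmailAddress
            Operation:
              Audit:
                FindingsDestination: {}
          - Sid: redact                         # mask matches by default for all viewers
            DataIdentifier:
              - arn:aws:dataprotection::aws:data-identifier/EmailAddress
            Operation:
              Deidentify:
                MaskConfig: {}
```

Separately, the security team's IAM policy would need to allow the logs:Unmask action on this log group so only they can view the original values.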

References (AWS CloudOps Documents / Study Guide):

· AWS Certified CloudOps Engineer – Associate (SOA-C03) Exam Guide – Monitoring & Logging

· Amazon CloudWatch Logs – Data Protection (masking/redaction with data identifiers)

· CloudWatch Logs – Permissions for masking and unmasking sensitive data

· AWS Well-Architected Framework – Security and Operational Excellence (sensitive data handling)



A multinational company uses an organization in AWS Organizations to manage over 200 member accounts across multiple AWS Regions. The company must ensure that all AWS resources meet specific security requirements.

The company has the following requirements:

· The company must not deploy any EC2 instances in the ap-southeast-2 Region.
· The company must completely block root user actions in all member accounts.
· The company must prevent any user from deleting AWS CloudTrail logs, including administrators.
· The company requires a centrally managed solution that it can automatically apply to all existing and future accounts.
Which solution will meet these requirements?

  A. Create AWS Config rules with remediation actions in each account to detect policy violations. Implement IAM permissions boundaries for the account root users.
  B. Enable AWS Security Hub across the organization. Create custom security standards to enforce the security requirements. Use AWS CloudFormation StackSets to deploy the standards to all the accounts in the organization. Set up Security Hub automated remediation actions.
  C. Use AWS Control Tower for account governance. Configure Region deny controls. Use Service Control Policies (SCPs) to restrict root user access.
  D. Configure AWS Firewall Manager with security policies to meet the security requirements. Use an AWS Config aggregator with organization-wide conformance packs to detect security policy violations.

Answer(s): C

Explanation:

AWS CloudOps governance best practices emphasize centralized account management and preventive guardrails. AWS Control Tower integrates directly with AWS Organizations and provides "Region deny controls" and "Service Control Policies (SCPs)" that apply automatically to all existing and newly created member accounts. SCPs are organization-wide guardrails that define the maximum permissions for accounts. They can explicitly deny actions such as launching EC2 instances in a specific Region, or block root user access.

To prevent CloudTrail log deletion, SCPs can also include denies on cloudtrail:DeleteTrail and s3:DeleteObject actions targeting the CloudTrail log S3 bucket. These SCPs ensure that no user, including administrators, can violate the compliance requirements.
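A sketch of such a guardrail SCP, expressed here as an AWS::Organizations::Policy CloudFormation resource. The policy name, target ID, and statement Sids are placeholders; depending on template tooling, Content may need to be supplied as a JSON string rather than inline YAML:

```yaml
Resources:
  GuardrailScp:
    Type: AWS::Organizations::Policy
    Properties:
      Name: cloudops-guardrails
      Type: SERVICE_CONTROL_POLICY
      TargetIds:
        - r-examplerootid                # placeholder: attach at the organization root
      Content:
        Version: "2012-10-17"
        Statement:
          - Sid: DenyEc2InApSoutheast2   # block launching EC2 instances in ap-southeast-2
            Effect: Deny
            Action: ec2:RunInstances
            Resource: "*"
            Condition:
              StringEquals:
                aws:RequestedRegion: ap-southeast-2
          - Sid: DenyRootUserActions     # deny all actions taken with root credentials
            Effect: Deny
            Action: "*"
            Resource: "*"
            Condition:
              StringLike:
                aws:PrincipalArn: arn:aws:iam::*:root
          - Sid: ProtectCloudTrailLogs   # no user, including admins, can delete trail data
            Effect: Deny
            Action:
              - cloudtrail:DeleteTrail
              - cloudtrail:StopLogging
              - s3:DeleteObject
            Resource: "*"                # in practice, scope s3:DeleteObject to the trail's bucket
```

Because SCPs attach at the root or OU level, the guardrails apply automatically to every current and future member account, which is the centralization requirement in the question.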

AWS documentation under the Security and Compliance domain for CloudOps states:

"Use AWS Control Tower to establish a secure, compliant, multi-account environment with preventive guardrails through service control policies and detective controls through AWS Config."

This approach meets all stated needs: centralized enforcement, automatic propagation to new accounts, region-based restrictions, and immutable audit logs. Options A, B, and D either detect violations reactively or lack complete enforcement and automation across future accounts.

References (AWS CloudOps Documents / Study Guide):

· AWS Certified CloudOps Engineer – Associate (SOA-C03) Exam Guide – Domain 4: Security and Compliance

· AWS Control Tower – Preventive and Detective Guardrails

· AWS Organizations – Service Control Policies (SCPs)

· AWS Well-Architected Framework – Security Pillar (Governance and Centralized Controls)



A company's AWS accounts are in an organization in AWS Organizations. The organization has all features enabled. The accounts use Amazon EC2 instances to host applications. The company manages the EC2 instances manually by using the AWS Management Console. The company applies updates to the EC2 instances by using an SSH connection to each EC2 instance.

The company needs a solution that uses AWS Systems Manager to manage all the organization's current and future EC2 instances. The latest version of Systems Manager Agent (SSM Agent) is running on the EC2 instances.

Which solution will meet these requirements?

  A. Configure a home AWS Region in Systems Manager Quick Setup in the organization's management account. Deploy the Systems Manager Default Host Management Configuration Quick Setup from the management account.
  B. Configure a home AWS Region in Systems Manager Quick Setup in the organization's management account. Create a Systems Manager Run Command that attaches the AmazonSSMServiceRolePolicy IAM policy to every IAM role that the EC2 instances use. Invoke the command in every account in the organization.
  C. Create an AWS CloudFormation stack set that contains a Systems Manager parameter to define the Default Host Management Configuration role. Use the organization's management account to deploy the stack set to every account in the organization.
  D. Create an AWS CloudFormation stack set that contains an EC2 instance profile with the AmazonSSMManagedEC2InstanceDefaultPolicy IAM policy attached. Use the organization's management account to deploy the stack set to every account in the organization.

Answer(s): A

Explanation:

AWS CloudOps automation best practices recommend using AWS Systems Manager Quick Setup for organization-wide management and configuration of EC2 instances. The Default Host Management Configuration Quick Setup automatically enables Systems Manager capabilities such as Patch Manager, Inventory, Session Manager, and Automation across all managed instances within the organization.

When deployed from the management account, Quick Setup automatically integrates with AWS Organizations to propagate configuration and permissions to existing and future accounts. This meets the requirement for organization-wide management with no manual configuration or SSH access.

AWS documentation notes:

"You can use Quick Setup in the management account of an organization in AWS Organizations to configure Systems Manager capabilities for all accounts and Regions. Quick Setup automatically keeps configurations up to date."

Options B, C, and D require custom deployments or manual IAM updates, lacking centralized automation. Therefore, Option A fully satisfies CloudOps standards for automated provisioning and ongoing management of EC2 instances across an organization.

References (AWS CloudOps Documents / Study Guide):

· AWS Certified CloudOps Engineer – Associate (SOA-C03) Exam Guide – Domain 3: Deployment, Provisioning and Automation

· AWS Systems Manager – Quick Setup and Default Host Management Configuration

· AWS Organizations Integration with Systems Manager

· AWS Well-Architected Framework – Operational Excellence Pillar



A CloudOps engineer creates an AWS CloudFormation template to define an application stack that can be deployed in multiple AWS Regions. The CloudOps engineer also creates an Amazon CloudWatch dashboard by using the AWS Management Console. Each deployment of the application requires its own CloudWatch dashboard.

How can the CloudOps engineer automate the creation of the CloudWatch dashboard each time the application is deployed?

  A. Create a script by using the AWS CLI to run the aws cloudformation put-dashboard command with the name of the dashboard. Run the command each time a new CloudFormation stack is created.
  B. Export the existing CloudWatch dashboard as JSON. Update the CloudFormation template to define an AWS::CloudWatch::Dashboard resource. Include the exported JSON in the resource's DashboardBody property.
  C. Update the CloudFormation template to define an AWS::CloudWatch::Dashboard resource. Use the intrinsic Ref function to reference the ID of the existing CloudWatch dashboard.
  D. Update the CloudFormation template to define an AWS::CloudWatch::Dashboard resource. Specify the name of the existing dashboard in the DashboardName property.

Answer(s): B

Explanation:

According to CloudOps automation and monitoring best practices, CloudWatch dashboards should be provisioned as infrastructure-as-code (IaC) resources using AWS CloudFormation to ensure consistency, repeatability, and version control. AWS CloudFormation supports the AWS::CloudWatch::Dashboard resource, where the DashboardBody property accepts a JSON object describing widgets, metrics, and layout.

By exporting the existing dashboard configuration as JSON and embedding it into the CloudFormation template, every deployment of the application automatically creates its corresponding dashboard. This method aligns with the CloudOps requirement for automated deployment and operational visibility within the same stack lifecycle.

AWS documentation explicitly states:

"Use the AWS::CloudWatch::Dashboard resource to create a dashboard from your template. You can include the same JSON you use to define a dashboard in the console."

Option A requires manual execution. Options C and D incorrectly reference or reuse existing dashboards, failing to produce unique, deployment-specific dashboards.
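A minimal sketch of the AWS::CloudWatch::Dashboard resource with exported JSON embedded in DashboardBody. The parameter name and the single example widget are illustrative; in practice the DashboardBody would hold the full JSON exported from the console:

```yaml
Parameters:
  AppName:
    Type: String                         # hypothetical parameter to make names unique
Resources:
  DeploymentDashboard:
    Type: AWS::CloudWatch::Dashboard
    Properties:
      DashboardName: !Sub "${AppName}-${AWS::Region}"   # unique per deployment and Region
      DashboardBody: !Sub |
        {
          "widgets": [
            {
              "type": "metric",
              "x": 0, "y": 0, "width": 12, "height": 6,
              "properties": {
                "metrics": [["AWS/EC2", "CPUUtilization"]],
                "region": "${AWS::Region}",
                "title": "CPU utilization"
              }
            }
          ]
        }
```

Using !Sub inside the JSON lets each stack substitute its own Region and name, which is what makes each deployment get its own dashboard rather than reusing an existing one (the flaw in options C and D).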

References (AWS CloudOps Documents / Study Guide):

· AWS Certified CloudOps Engineer – Associate (SOA-C03) Exam Guide – Domain 1: Monitoring and Logging

· AWS CloudFormation User Guide – Resource Type: AWS::CloudWatch::Dashboard

· AWS Well-Architected Framework – Operational Excellence Pillar

· Amazon CloudWatch – Automating Dashboards with Infrastructure as Code



A CloudOps engineer needs to ensure that AWS resources across multiple AWS accounts are tagged consistently. The company uses an organization in AWS Organizations to centrally manage the accounts. The company wants to implement cost allocation tags to accurately track the costs that are allocated to each business unit.

Which solution will meet these requirements with the LEAST operational overhead?

  A. Use Organizations tag policies to enforce mandatory tagging on all resources. Enable cost allocation tags in the AWS Billing and Cost Management console.
  B. Configure AWS CloudTrail events to invoke an AWS Lambda function to detect untagged resources and to automatically assign tags based on predefined rules.
  C. Use AWS Config to evaluate tagging compliance. Use AWS Budgets to apply tags for cost allocation.
  D. Use AWS Service Catalog to provision only pre-tagged resources. Use AWS Trusted Advisor to enforce tagging across the organization.

Answer(s): A

Explanation:

Tagging is essential for governance, cost management, and automation in CloudOps operations. The AWS Organizations tag policies feature allows centralized definition and enforcement of required tag keys and accepted values across all accounts in an organization. According to the AWS CloudOps study guide under Deployment, Provisioning, and Automation, tag policies enable automatic validation of tags, ensuring consistency with minimal manual overhead.

Once tagging consistency is enforced, enabling cost allocation tags in the AWS Billing and Cost Management console allows accurate cost distribution per business unit. AWS documentation states:

"Use AWS Organizations tag policies to standardize tags across accounts. You can activate cost allocation tags in the Billing console to track and allocate costs."

Option B introduces unnecessary complexity with Lambda automation. Option C detects but does not enforce tagging. Option D limits flexibility to Service Catalog resources only. Therefore, Option A provides a centrally managed, automated, and low-overhead solution that meets CloudOps tagging and cost-tracking requirements.
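A sketch of such a tag policy, expressed as an AWS::Organizations::Policy resource. The tag key, accepted values, and enforced resource types are illustrative assumptions; Content may need to be a JSON string depending on template tooling:

```yaml
Resources:
  CostCenterTagPolicy:
    Type: AWS::Organizations::Policy
    Properties:
      Name: cost-allocation-tags
      Type: TAG_POLICY
      TargetIds:
        - r-examplerootid            # placeholder: attach at the organization root
      Content:
        tags:
          CostCenter:
            tag_key:
              "@@assign": CostCenter # standardize the key's capitalization
            tag_value:
              "@@assign":            # accepted values per business unit
                - Finance
                - Marketing
            enforced_for:
              "@@assign":
                - "ec2:instance"     # reject noncompliant tags on these resource types
```

After the policy standardizes tagging, the CostCenter key would still be activated as a cost allocation tag in the Billing and Cost Management console so it appears in cost reports.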

References (AWS CloudOps Documents / Study Guide):

· AWS Certified CloudOps Engineer – Associate (SOA-C03) Exam Guide – Domain 3: Deployment, Provisioning and Automation

· AWS Organizations – Tag Policies

· AWS Billing and Cost Management – Cost Allocation Tags

· AWS Well-Architected Framework – Operational Excellence and Cost Optimization Pillars


