A company uses an organization in AWS Organizations to manage multiple AWS accounts. The company needs to send specific events from all the accounts in the organization to a new receiver account, where an AWS Lambda function will process the events. A CloudOps engineer configures Amazon EventBridge to route events to a target event bus in the us-west-2 Region in the receiver account. The CloudOps engineer creates rules in both the sender and receiver accounts that match the specified events. The rules do not specify an account parameter in the event pattern. IAM roles are created in the sender accounts to allow PutEvents actions on the target event bus. However, the first test events from the us-east-1 Region are not processed by the Lambda function in the receiving account. What is the likely reason the events are not processed?
Answer(s): C
Per the AWS Cloud Operations and EventBridge documentation, when events are sent across AWS accounts, particularly from multiple accounts in an AWS Organization, the target event bus in the receiver account must include a resource-based policy that explicitly allows events:PutEvents API calls from the sender accounts or the organization ID. Even if the sender accounts have IAM permissions to call PutEvents, the receiving event bus must trust those accounts via a resource policy. Without this configuration, EventBridge rejects incoming cross-account events, and those events never reach the target Lambda function for processing. AWS guidance states that cross-account event delivery requires a resource-based policy on the event bus that grants permissions to the source accounts or organization. The policy can include either individual AWS account IDs or the organization ID. In this scenario, because the events originate from multiple accounts and there is no resource policy on the target event bus to authorize those sender accounts, the events are not delivered. Therefore, the correct cause is C: the resource-based policy on the target event bus must be modified to allow PutEvents API calls from the sender accounts.
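The resource-based policy described above can be sketched as a JSON document attached to the target event bus. A minimal example, using a placeholder account ID, event bus ARN, and organization ID (none of these identifiers are real):

```python
import json

# Hypothetical identifiers for illustration only.
EVENT_BUS_ARN = "arn:aws:events:us-west-2:111122223333:event-bus/central-bus"
ORG_ID = "o-exampleorgid"

# Resource-based policy that lets any account in the organization
# call PutEvents on the target event bus.
bus_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowOrgPutEvents",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "events:PutEvents",
            "Resource": EVENT_BUS_ARN,
            # Restrict the wildcard principal to members of the organization.
            "Condition": {
                "StringEquals": {"aws:PrincipalOrgID": ORG_ID}
            },
        }
    ],
}

print(json.dumps(bus_policy, indent=2))
```

Using the `aws:PrincipalOrgID` condition avoids having to list every sender account ID individually, so new accounts added to the organization are covered automatically.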
AWS Cloud Operations EventBridge Cross-Account Event Delivery Section, Permissions for Event Bus Targets and Organizational Event Routing
A CloudOps engineer needs to track the costs of data transfer between AWS Regions. The CloudOps engineer must implement a solution to send alerts to an email distribution list when transfer costs reach 75% of a specific threshold. What should the CloudOps engineer do to meet these requirements?
According to the AWS Cloud Operations and Cost Management documentation, AWS Budgets is the recommended service to track and alert on cost thresholds across all AWS accounts and resources. It allows users to define cost, usage, or reservation budgets, and configure notifications to trigger when usage or cost reaches defined percentages of the budgeted value (e.g., 75%, 90%, 100%). AWS Budgets integrates natively with Amazon Simple Notification Service (SNS) to deliver alerts to an email distribution list or SNS topic subscribers. AWS Budgets supports granular cost filters, including specific service categories such as data transfer, Regions, or linked accounts, ensuring precise visibility into inter-Region transfer costs. By contrast, CloudWatch billing alarms (Option B) monitor total account charges only and do not allow detailed service-level filtering, such as data transfer between Regions. Cost and Usage Reports (Option A) are for detailed cost analysis, not real-time alerting, and VPC Flow Logs (Option D) capture traffic data, not billing or cost-based metrics. Thus, using AWS Budgets with a 75% alert threshold best satisfies the operational and notification requirements.
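The budget-plus-notification setup above can be sketched as the parameter documents a budget API call would take. This is a shape-only illustration; the budget name, dollar amount, cost filter value, and email address are placeholders:

```python
import json

# Hypothetical budget scoped to data-transfer usage (values are examples).
budget = {
    "BudgetName": "inter-region-data-transfer",
    "BudgetType": "COST",
    "TimeUnit": "MONTHLY",
    "BudgetLimit": {"Amount": "200", "Unit": "USD"},
    # Narrow the budget to data-transfer charges rather than total spend.
    "CostFilters": {"UsageTypeGroup": ["Data Transfer"]},
}

# Fire when ACTUAL spend reaches 75% of the budgeted amount.
notification = {
    "NotificationType": "ACTUAL",
    "ComparisonOperator": "GREATER_THAN",
    "Threshold": 75.0,
    "ThresholdType": "PERCENTAGE",
}

# Deliver the alert to the email distribution list.
subscribers = [{"SubscriptionType": "EMAIL", "Address": "ops-team@example.com"}]

print(json.dumps({"budget": budget, "notification": notification}, indent=2))
```

The `ThresholdType` of `PERCENTAGE` is what maps the requirement "75% of a specific threshold" onto the budget's dollar limit.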
AWS CloudOps and Cost Management Guide Section: AWS Budgets for Cost Monitoring and Alerts
A CloudOps engineer needs to set up alerting and remediation for a web application. The application consists of Amazon EC2 instances that have AWS Systems Manager Agent (SSM Agent) installed. Each EC2 instance runs a custom web server. The EC2 instances run behind a load balancer and write logs locally. The CloudOps engineer must implement a solution that restarts the web server software automatically if specific web errors are detected in the logs. Which combination of steps will meet these requirements? (Select THREE.)
Answer(s): A,C,E
Per the AWS Cloud Operations, Monitoring, and Automation documentation, the correct workflow for automated operational remediation is as follows. The Amazon CloudWatch agent is installed on each EC2 instance (Option A) to collect local log data and push it to Amazon CloudWatch Logs. A CloudWatch metric filter (Option C) is then defined to identify specific error strings or patterns within those logs (for example, "HTTP 5xx" or "Service Unavailable"); when such an event occurs, a CloudWatch alarm is triggered. Upon alarm activation, an Amazon EventBridge rule (Option E) responds automatically by invoking an AWS Systems Manager Automation runbook, which restarts the web server process on the affected instance via SSM Agent. This approach aligns directly with AWS's recommended CloudOps remediation pattern, known as event-driven automation, which minimizes downtime and eliminates manual intervention. Options involving CloudTrail (B) or SES notifications (D) are incorrect because they are unrelated to log-based application monitoring and automated remediation workflows.
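The three-step pipeline above can be sketched as the parameter documents each step would take, expressed as plain dicts. Every name below (log group, metric, alarm, filter pattern) is a placeholder, not a real environment:

```python
import json

# 1. Metric filter: turn matching log lines into a CloudWatch metric.
metric_filter = {
    "logGroupName": "/webserver/access-logs",
    "filterName": "http-5xx-errors",
    "filterPattern": '?"502" ?"503"',  # OR-match lines containing 502 or 503
    "metricTransformations": [
        {
            "metricName": "Http5xxCount",
            "metricNamespace": "WebServer",
            "metricValue": "1",  # each matching line counts as 1
        }
    ],
}

# 2. Alarm on the custom metric produced by the filter.
alarm = {
    "AlarmName": "webserver-5xx-alarm",
    "Namespace": "WebServer",
    "MetricName": "Http5xxCount",
    "Statistic": "Sum",
    "Period": 60,
    "EvaluationPeriods": 1,
    "Threshold": 5,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
}

# 3. EventBridge pattern that fires when the alarm enters ALARM state.
# Its target would be a Systems Manager Automation runbook that restarts
# the web server process on the instance through SSM Agent.
event_pattern = {
    "source": ["aws.cloudwatch"],
    "detail-type": ["CloudWatch Alarm State Change"],
    "detail": {
        "alarmName": ["webserver-5xx-alarm"],
        "state": {"value": ["ALARM"]},
    },
}

print(json.dumps(event_pattern, indent=2))
```

The `?"502" ?"503"` syntax uses CloudWatch Logs filter-pattern OR terms; the alarm fires once five or more matching lines appear within a one-minute period.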
AWS Cloud Operations & Systems Manager Guide Section: Automated Remediation using CloudWatch, EventBridge, and Systems Manager Automation
A CloudOps engineer is configuring an Amazon CloudFront distribution to use an SSL/TLS certificate. The CloudOps engineer must ensure automatic certificate renewal. Which combination of steps will meet this requirement? (Select TWO.)
Answer(s): A,E
The AWS Cloud Operations and Security documentation specifies that for Amazon CloudFront, automatic certificate renewal is only supported for certificates issued by AWS Certificate Manager (ACM). When a certificate is managed by ACM and validated through DNS validation, ACM automatically renews the certificate before expiration without requiring manual intervention. Option A ensures that the certificate is issued and managed by ACM, enabling full integration with CloudFront. Option E (DNS validation) is essential for automation; AWS performs revalidation automatically as long as the DNS validation record remains in place. By contrast, email validation (Option D) requires manual user confirmation upon renewal, which prevents automatic renewals. Certificates issued by third-party certificate authorities (Option B) are manually managed and must be reimported into ACM after renewal. CloudFront does not have a direct feature (Option C) to renew certificates; it relies on ACM's lifecycle management. Thus, combining ACM-issued certificates (A) with DNS validation (E) ensures continuous, automated renewal with no downtime or human action required.
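The request-and-validate flow can be sketched as the two documents involved: the certificate request (which must specify DNS validation) and the CNAME record ACM asks you to publish. Note that ACM certificates used with CloudFront must be requested in the us-east-1 Region. Domain names and record tokens below are placeholders:

```python
import json

# Certificate request with DNS validation (shape only; names are examples).
request = {
    "DomainName": "www.example.com",
    "SubjectAlternativeNames": ["example.com"],
    "ValidationMethod": "DNS",  # enables automatic renewal
}

# ACM responds with a CNAME to publish in the hosted zone. Keeping this
# record in place lets ACM revalidate and renew the certificate
# automatically, with no manual confirmation step.
validation_record = {
    "Name": "_placeholder-token.www.example.com.",   # hypothetical value
    "Type": "CNAME",
    "Value": "_placeholder-target.acm-validations.aws.",  # hypothetical value
}

print(json.dumps(request, indent=2))
```

Email validation has no equivalent persistent artifact, which is why it blocks unattended renewal: someone must click the approval link each cycle.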
AWS Cloud Operations and Security Best Practices Section: Using AWS Certificate Manager with CloudFront for Automatic Certificate Renewal
A company has an on-premises DNS solution and wants to resolve DNS records in an Amazon Route 53 private hosted zone for example.com. The company has set up an AWS Direct Connect connection for network connectivity between the on-premises network and the VPC. A CloudOps engineer must ensure that an on-premises server can query records in the example.com domain. What should the CloudOps engineer do to meet these requirements?
Answer(s): A
According to AWS Cloud Operations and Networking documentation, Route 53 Resolver inbound endpoints allow DNS queries to originate from on-premises DNS servers and resolve private hosted zone records in AWS. The inbound endpoint provides DNS resolver IP addresses within the VPC, which the on-premises DNS servers can forward queries to over AWS Direct Connect or VPN connections. The inbound endpoint must be associated with a security group that permits inbound traffic on TCP and UDP port 53 from the on-premises DNS server IP addresses. This ensures that DNS requests from the on-premises environment reach the VPC Resolver for resolution of private domains like example.com. By contrast, outbound endpoints are used for the opposite direction: resolving external (on-premises or internet) DNS names from within AWS VPCs. Therefore, only an inbound endpoint correctly satisfies the direction of resolution in this scenario.
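The two pieces described above, the security group rules and the inbound endpoint, can be sketched as parameter documents. All IDs and CIDR ranges are placeholders:

```python
import json

# Hypothetical on-premises DNS server range.
ONPREM_DNS_CIDR = "10.20.0.0/24"

# The endpoint's security group must allow DNS over BOTH TCP and UDP
# port 53 from the on-premises resolvers.
ingress_rules = [
    {
        "IpProtocol": proto,
        "FromPort": 53,
        "ToPort": 53,
        "IpRanges": [{"CidrIp": ONPREM_DNS_CIDR}],
    }
    for proto in ("tcp", "udp")
]

# Inbound resolver endpoint: gives the VPC resolver IP addresses that
# on-premises DNS servers forward example.com queries to.
inbound_endpoint = {
    "Name": "onprem-to-vpc-dns",
    "Direction": "INBOUND",
    "SecurityGroupIds": ["sg-0123456789abcdef0"],
    # At least two IP addresses in different subnets for availability.
    "IpAddresses": [
        {"SubnetId": "subnet-aaaa1111"},
        {"SubnetId": "subnet-bbbb2222"},
    ],
}

print(json.dumps(inbound_endpoint, indent=2))
```

On the on-premises side, the DNS solution would then add a conditional forwarder for example.com pointing at the endpoint's IP addresses.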
AWS Cloud Operations & Route 53 Resolver Guide Section: Inbound and Outbound Endpoints for Hybrid DNS Resolution
A medical research company uses an Amazon Bedrock-powered AI assistant with agents and knowledge bases to provide physicians with quick access to medical study protocols. The company needs to generate audit reports that contain user identities, usage data for Bedrock agents, access data for knowledge bases, and interaction parameters. Which solution will meet these requirements?
As per AWS Cloud Operations, Bedrock, and Governance documentation, AWS CloudTrail is the authoritative service for capturing API activity and audit trails across AWS accounts. For Amazon Bedrock, CloudTrail records all user-initiated API calls, including interactions with agents, knowledge bases, and generative AI model parameters. Using CloudTrail Lake, organizations can store, query, and analyze CloudTrail events directly without needing to export data. CloudTrail Lake supports SQL-like queries for generating audit and compliance reports, enabling the company to retrieve information such as user identity, API usage, timestamp, model or agent ID, and invocation parameters. In contrast, CloudWatch focuses on operational metrics and log streaming, not API-level identity data. OpenSearch or Flink would add unnecessary complexity and cost for this use case. Thus, the AWS-recommended CloudOps best practice is to leverage CloudTrail with CloudTrail Lake to maintain auditable, queryable API activity for Bedrock workloads, fulfilling governance and compliance requirements.
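A CloudTrail Lake audit query of the kind described might look like the sketch below. The event data store ID and the date cutoff are placeholders, and the exact column set available depends on the event data store configuration:

```python
# Hypothetical event data store ID (CloudTrail Lake queries select FROM
# the data store's ID rather than a table name).
EVENT_DATA_STORE_ID = "00000000-0000-0000-0000-000000000000"

# Pull who called which Bedrock API, when, and with what parameters.
query = f"""
SELECT eventTime, userIdentity.arn, eventName, requestParameters
FROM {EVENT_DATA_STORE_ID}
WHERE eventSource = 'bedrock.amazonaws.com'
  AND eventTime > '2025-01-01 00:00:00'
ORDER BY eventTime DESC
""".strip()

print(query)
```

Filtering on `eventSource = 'bedrock.amazonaws.com'` scopes the report to Bedrock activity, while `userIdentity.arn` supplies the user-identity column the audit requirement asks for.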
AWS Cloud Operations & Governance Guide Section: Auditing and Governance for Generative AI Workloads Using AWS CloudTrail and CloudTrail Lake
A company needs to enforce tagging requirements for Amazon DynamoDB tables in its AWS accounts. A CloudOps engineer must implement a solution to identify and remediate all DynamoDB tables that do not have the appropriate tags. Which solution will meet these requirements with the LEAST operational overhead?
According to the AWS Cloud Operations, Governance, and Compliance documentation, AWS Config provides managed rules that automatically evaluate resource configurations for compliance. The "required-tags" managed rule allows CloudOps teams to specify mandatory tags (e.g., Environment, Owner, CostCenter) and automatically detect non-compliant resources such as DynamoDB tables. Furthermore, AWS Config supports automatic remediation through AWS Systems Manager Automation runbooks, enabling correction actions (for example, adding missing tags) without manual intervention. This automation minimizes operational overhead and ensures continuous compliance across multiple accounts. Using a custom Lambda function (Options A or B) introduces unnecessary management complexity, while EventBridge rules alone (Option D) do not provide resource compliance tracking or historical visibility. Therefore, Option C provides the most efficient, fully managed, and compliant CloudOps solution.
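A required-tags rule scoped to DynamoDB tables can be sketched as the rule document below. The rule name and tag keys are examples; note that the managed rule's input parameters are passed as a JSON-encoded string:

```python
import json

# AWS Config managed rule: flag DynamoDB tables missing mandatory tags.
config_rule = {
    "ConfigRuleName": "dynamodb-required-tags",
    "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
    # Evaluate only DynamoDB tables, not every taggable resource.
    "Scope": {"ComplianceResourceTypes": ["AWS::DynamoDB::Table"]},
    # InputParameters is a JSON string, not a nested object.
    "InputParameters": json.dumps(
        {"tag1Key": "Environment", "tag2Key": "Owner", "tag3Key": "CostCenter"}
    ),
}

print(json.dumps(config_rule, indent=2))
```

A remediation configuration would then attach a Systems Manager Automation runbook to this rule so that non-compliant tables are tagged automatically rather than just reported.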
AWS Cloud Operations & Governance Guide Section: Compliance Automation Using AWS Config Managed Rules and Systems Manager Remediation
A company's website runs on an Amazon EC2 Linux instance. The website needs to serve PDF files from an Amazon S3 bucket. All public access to the S3 bucket is blocked at the account level. The company needs to allow website users to download the PDF files. Which solution will meet these requirements with the LEAST administrative effort?
Answer(s): B
Per the AWS Cloud Operations, Networking, and Security documentation, the best practice for serving private S3 content securely to end users is to use Amazon CloudFront with origin access control (OAC). OAC enables CloudFront to access S3 buckets privately, even when Block Public Access settings are enabled at the account level. This allows content to be delivered globally and securely without making the S3 bucket public. The bucket policy explicitly allows access only from the CloudFront distribution, ensuring that users can retrieve PDF files only via CloudFront URLs. This configuration offers automatic scalability through CloudFront caching, improved security via private access control, and minimal administrative effort with fully managed services. Other options require manual handling or make the bucket public, violating AWS security best practices. Therefore, Option B, using CloudFront with origin access control and a restrictive bucket policy, provides the most secure, efficient, and low-maintenance CloudOps solution.
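The restrictive bucket policy described above can be sketched as the JSON document below. The bucket name, account ID, and distribution ID are placeholders:

```python
import json

# Hypothetical ARNs for illustration only.
BUCKET_ARN = "arn:aws:s3:::example-pdf-bucket"
DISTRIBUTION_ARN = "arn:aws:cloudfront::111122223333:distribution/EDFDVBDEXAMPLE"

# Bucket policy: grant read access to the CloudFront service principal,
# but only when the request comes from this specific distribution.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOAC",
            "Effect": "Allow",
            "Principal": {"Service": "cloudfront.amazonaws.com"},
            "Action": "s3:GetObject",
            "Resource": f"{BUCKET_ARN}/*",
            "Condition": {
                "StringEquals": {"AWS:SourceArn": DISTRIBUTION_ARN}
            },
        }
    ],
}

print(json.dumps(bucket_policy, indent=2))
```

Because the principal is the CloudFront service (scoped by `AWS:SourceArn`) rather than a public wildcard, this policy works even with account-level Block Public Access enabled.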
AWS Cloud Operations and Content Delivery Guide Section: Serving Private Content Securely from Amazon S3 via CloudFront Using Origin Access Control