Amazon SAA-C03 Exam (page: 2)
Amazon AWS Certified Solutions Architect - Associate SAA-C03
Updated on: 31-Mar-2026

Viewing Page 2 of 129

A company is running an SMB file server in its data center. The file server stores large files that are accessed frequently for the first few days after the files are created. After 7 days the files are rarely accessed.
The total data size is increasing and is close to the company's total storage capacity. A solutions architect must increase the company's available storage space without losing low-latency access to the most recently accessed files. The solutions architect must also provide file lifecycle management to avoid future storage issues.
Which solution will meet these requirements?

  1. Use AWS DataSync to copy data that is older than 7 days from the SMB file server to AWS.
  2. Create an Amazon S3 File Gateway to extend the company's storage space. Create an S3 Lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.
  3. Create an Amazon FSx for Windows File Server file system to extend the company's storage space.
  4. Install a utility on each user's computer to access Amazon S3. Create an S3 Lifecycle policy to transition the data to S3 Glacier Flexible Retrieval after 7 days.

Answer(s): B

Explanation:

Summary: An S3 File Gateway backed by S3 Lifecycle policies preserves low-latency access to recent files through the local cache while expanding capacity and managing data aging.
A) Incorrect. DataSync moves data, but does not provide continuous access to recent files on-premises and lacks integrated lifecycle management for ongoing growth.
B) Correct. S3 File Gateway extends on-premises storage with S3, preserving low-latency access for recent files via local cache, while S3 Lifecycle moves older data to Glacier Deep Archive to free space.
C) Incorrect. FSx for Windows provides additional Windows file storage but does not inherently integrate with on-prem SMB access or lifecycle tiers to archive data.
D) Incorrect. Client-side S3 access with lifecycle to Glacier Flexible Retrieval reduces performance and does not maintain local low-latency access for active files.
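The 7-day transition described in option B can be expressed as an S3 Lifecycle configuration. The sketch below is the payload shape accepted by `put_bucket_lifecycle_configuration`; the rule ID is a placeholder.

```python
# Hypothetical S3 Lifecycle configuration: transition all objects to
# S3 Glacier Deep Archive 7 days after creation.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-after-7-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object in the bucket
            "Transitions": [
                {"Days": 7, "StorageClass": "DEEP_ARCHIVE"}
            ],
        }
    ]
}

# With boto3 this would be applied as:
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="example-file-gateway-bucket",
#     LifecycleConfiguration=lifecycle_configuration,
# )
```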



A company is building an ecommerce web application on AWS. The application sends information about new orders to an Amazon API Gateway REST API to process. The company wants to ensure that orders are processed in the order that they are received.
Which solution will meet these requirements?

  1. Use an API Gateway integration to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic when the application receives an order. Subscribe an AWS Lambda function to the topic to perform processing.
  2. Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) FIFO queue when the application receives an order. Configure the SQS FIFO queue to invoke an AWS Lambda function for processing.
  3. Use an API Gateway authorizer to block any requests while the application processes an order.
  4. Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) standard queue when the application receives an order. Configure the SQS standard queue to invoke an AWS Lambda function for processing.

Answer(s): B

Explanation:

Using an API Gateway integration to place each new order into an SQS FIFO queue preserves strict message ordering, which is appropriate for processing ecommerce orders in arrival order. FIFO queues guarantee ordering within a message group and provide exactly-once processing through message deduplication.
A) SNS does not preserve per-message processing order across multiple subscribers; ordering is not guaranteed.
C) API Gateway authorizers control access, not processing order.
D) SQS standard queue does not guarantee strict ordering; messages can be delivered out of order.
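The FIFO behavior in option B hinges on two message attributes. A minimal sketch of the `send_message` parameters, with a placeholder queue URL and order payload: putting every order in one `MessageGroupId` yields strict arrival-order delivery, and `MessageDeduplicationId` suppresses duplicates within the deduplication window.

```python
import json

# Hypothetical order payload produced by the web application.
order = {"order_id": "12345", "total": 99.95}

# Parameter shape for sqs_client.send_message(**send_message_params).
send_message_params = {
    "QueueUrl": "https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo",
    "MessageBody": json.dumps(order),
    "MessageGroupId": "orders",                   # one group => strict FIFO ordering
    "MessageDeduplicationId": order["order_id"],  # exactly-once within the window
}
```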



A company has an application that runs on Amazon EC2 instances and uses an Amazon Aurora database. The EC2 instances connect to the database by using user names and passwords that are stored locally in a file. The company wants to minimize the operational overhead of credential management.
What should a solutions architect do to accomplish this goal?

  1. Use AWS Secrets Manager. Turn on automatic rotation.
  2. Use AWS Systems Manager Parameter Store. Turn on automatic rotation.
  3. Create an Amazon S3 bucket to store objects that are encrypted with an AWS Key Management Service (AWS KMS) encryption key. Migrate the credential file to the S3 bucket. Point the application to the S3 bucket.
  4. Create an encrypted Amazon Elastic Block Store (Amazon EBS) volume for each EC2 instance. Attach the new EBS volume to each EC2 instance. Migrate the credential file to the new EBS volume. Point the application to the new EBS volume.

Answer(s): A

Explanation:

AWS Secrets Manager provides centralized credential management with automatic rotation for database credentials, minimizing operational overhead for applications running on EC2 and connecting to Aurora. It supports seamless retrieval by apps and can rotate secrets without code changes.
A) Correct: Secrets Manager with automatic rotation reduces manual credential maintenance and ensures credentials are rotated automatically.
B) Systems Manager Parameter Store can store secret values but has no built-in rotation for database credentials; rotation would require custom automation.
C) Storing credentials in S3 introduces risk if access controls or rotation aren’t robust; not ideal for dynamic database credentials.
D) Encrypting and mounting credentials on EBS volumes increases maintenance and does not automate rotation or centralized management.
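Turning on automatic rotation, as option A requires, is a single API call once a rotation Lambda function exists. A sketch of the `rotate_secret` parameter shape; the secret name and Lambda ARN are placeholders.

```python
# Hypothetical Secrets Manager rotation setup: rotate the Aurora
# credentials automatically every 30 days using a rotation Lambda.
rotate_secret_params = {
    "SecretId": "prod/aurora/app-credentials",
    "RotationLambdaARN": (
        "arn:aws:lambda:us-east-1:123456789012:function:SecretsManagerRotation"
    ),
    "RotationRules": {"AutomaticallyAfterDays": 30},
}

# Applied with:
# boto3.client("secretsmanager").rotate_secret(**rotate_secret_params)
```

The application then calls `get_secret_value` at connection time instead of reading a local file, so rotated passwords are picked up without redeployment.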



A global company hosts its web application on Amazon EC2 instances behind an Application Load Balancer (ALB). The web application has static data and dynamic data. The company stores its static data in an Amazon S3 bucket. The company wants to improve performance and reduce latency for the static data and dynamic data. The company is using its own domain name registered with Amazon Route 53.
What should a solutions architect do to meet these requirements?

  1. Create an Amazon CloudFront distribution that has the S3 bucket and the ALB as origins. Configure Route 53 to route traffic to the CloudFront distribution.
  2. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an endpoint. Configure Route 53 to route traffic to the CloudFront distribution.
  3. Create an Amazon CloudFront distribution that has the S3 bucket as an origin. Create an AWS Global Accelerator standard accelerator that has the ALB and the CloudFront distribution as endpoints. Create a custom domain name that points to the accelerator DNS name. Use the custom domain name as an endpoint for the web application.
  4. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an endpoint. Create two domain names. Point one domain name to the CloudFront DNS name for dynamic content. Point the other domain name to the accelerator DNS name for static content. Use the domain names as endpoints for the web application.

Answer(s): A

Explanation:

A) Correct. A single CloudFront distribution with the S3 bucket and the ALB as origins caches static content at edge locations and forwards dynamic requests to the ALB, reducing latency for both content types. Routing the Route 53 domain to the distribution gives users one endpoint served from the nearest edge.
B), C), D) Incorrect. Global Accelerator is unnecessary when CloudFront already provides edge delivery, and an S3 bucket is not a supported Global Accelerator endpoint type. Splitting traffic across multiple endpoints or domain names adds complexity to TLS and routing without a performance benefit.
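The dual-origin layout in option A can be sketched as the `Origins` section of a CloudFront distribution config. Domain names are placeholders, and a complete `DistributionConfig` needs additional required fields (`CallerReference`, `DefaultCacheBehavior`, and so on); this shows only the origin split.

```python
# Hypothetical CloudFront origins: S3 serves static content, the ALB
# serves dynamic content, under a single distribution.
origins = {
    "Quantity": 2,
    "Items": [
        {
            "Id": "static-s3-origin",
            "DomainName": "example-static.s3.amazonaws.com",
            "S3OriginConfig": {"OriginAccessIdentity": ""},
        },
        {
            "Id": "dynamic-alb-origin",
            "DomainName": "example-alb-123.us-east-1.elb.amazonaws.com",
            "CustomOriginConfig": {
                "HTTPPort": 80,
                "HTTPSPort": 443,
                "OriginProtocolPolicy": "https-only",
            },
        },
    ],
}
```

A cache behavior for a path pattern such as `/static/*` would target `static-s3-origin`, while the default behavior forwards everything else to `dynamic-alb-origin`.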



A company performs monthly maintenance on its AWS infrastructure. During these maintenance activities, the company needs to rotate the credentials for its Amazon RDS for MySQL databases across multiple AWS Regions.
Which solution will meet these requirements with the LEAST operational overhead?

  1. Store the credentials as secrets in AWS Secrets Manager. Use multi-Region secret replication for the required Regions. Configure Secrets Manager to rotate the secrets on a schedule.
  2. Store the credentials as secrets in AWS Systems Manager by creating a secure string parameter. Use multi-Region secret replication for the required Regions. Configure Systems Manager to rotate the secrets on a schedule.
  3. Store the credentials in an Amazon S3 bucket that has server-side encryption (SSE) enabled. Use Amazon EventBridge (Amazon CloudWatch Events) to invoke an AWS Lambda function to rotate the credentials.
  4. Encrypt the credentials as secrets by using AWS Key Management Service (AWS KMS) multi-Region customer managed keys. Store the secrets in an Amazon DynamoDB global table. Use an AWS Lambda function to retrieve the secrets from DynamoDB. Use the RDS API to rotate the secrets.

Answer(s): A

Explanation:

A) Secrets Manager with multi-Region replication and scheduled rotation minimizes operational overhead by providing built-in secret storage, automatic rotation, and cross-Region replication for the RDS credentials. This aligns with native RDS for MySQL integration and avoids custom tooling.
B) Systems Manager Parameter Store does not provide built-in multi-Region secret replication or managed rotation for database credentials; both would require custom automation.
C) S3 SSE plus Lambda rotation adds significant custom logic and lacks native secret rotation for RDS; higher maintenance.
D) DynamoDB with KMS keys and Lambda rotation is a custom approach requiring bespoke rotation logic and does not provide native, managed RDS credential rotation.
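Multi-Region replication in option A can be enabled at secret creation time. A sketch of the `create_secret` parameter shape; the secret name, Regions, and credential values are placeholders (never hard-code real passwords).

```python
import json

# Hypothetical create_secret parameters: the secret is created in the
# primary Region and replicated to the listed Regions, where it can be
# read by workloads in those Regions.
create_secret_params = {
    "Name": "prod/rds-mysql/admin",
    "SecretString": json.dumps({"username": "admin", "password": "example-only"}),
    "AddReplicaRegions": [
        {"Region": "eu-west-1"},
        {"Region": "ap-southeast-2"},
    ],
}
```

Rotation configured on the primary secret propagates the new values to the replica Regions, which is what keeps the monthly maintenance hands-off.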



A company runs an ecommerce application on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. The Auto Scaling group scales based on CPU utilization metrics. The ecommerce application stores the transaction data in a MySQL 8.0 database that is hosted on a large EC2 instance.
The database's performance degrades quickly as application load increases. The application handles more read requests than write transactions. The company wants a solution that will automatically scale the database to meet the demand of unpredictable read workloads while maintaining high availability.
Which solution will meet these requirements?

  1. Use Amazon Redshift with a single node for leader and compute functionality.
  2. Use Amazon RDS with a Single-AZ deployment. Configure Amazon RDS to add reader instances in a different Availability Zone.
  3. Use Amazon Aurora with a Multi-AZ deployment. Configure Aurora Auto Scaling with Aurora Replicas.
  4. Use Amazon ElastiCache for Memcached with EC2 Spot Instances.

Answer(s): C

Explanation:

Aurora with Auto Scaling and replicas provides a highly available, read-heavy, scalable relational database layer that automatically adds read replicas to handle unpredictable read workloads while maintaining multi-AZ durability.
A) Redshift is a data warehouse optimized for analytics, not as a transactional DB or OLTP with automatic read scaling for a live ecommerce workload.
B) Single-AZ RDS with cross-AZ readers adds read capacity, but a Single-AZ primary lacks high availability, and RDS read replicas do not scale automatically the way Aurora Auto Scaling manages the replica fleet.
D) ElastiCache Memcached adds a cache layer but does not replace the authoritative transactional database; relying on Spot Instances is unreliable for a critical database tier.
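Aurora Auto Scaling in option C is configured through Application Auto Scaling: register the cluster's replica count as a scalable target, then attach a target-tracking policy. The cluster name is a placeholder; these are the parameter shapes for `register_scalable_target` and `put_scaling_policy`.

```python
# Hypothetical scalable target: let Application Auto Scaling manage
# between 1 and 15 Aurora Replicas for the cluster.
register_params = {
    "ServiceNamespace": "rds",
    "ResourceId": "cluster:example-aurora-cluster",
    "ScalableDimension": "rds:cluster:ReadReplicaCount",
    "MinCapacity": 1,
    "MaxCapacity": 15,
}

# Hypothetical target-tracking policy: add or remove replicas to keep
# average reader CPU utilization near 70%.
policy_params = {
    "PolicyName": "replica-cpu-tracking",
    "ServiceNamespace": "rds",
    "ResourceId": "cluster:example-aurora-cluster",
    "ScalableDimension": "rds:cluster:ReadReplicaCount",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
        "TargetValue": 70.0,
    },
}
```

The application sends reads to the cluster's reader endpoint, which load-balances across however many replicas currently exist.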



A company recently migrated to AWS and wants to implement a solution to protect the traffic that flows in and out of the production VPC. The company had an inspection server in its on-premises data center. The inspection server performed specific operations such as traffic flow inspection and traffic filtering. The company wants to have the same functionalities in the AWS Cloud.
Which solution will meet these requirements?

  1. Use Amazon GuardDuty for traffic inspection and traffic filtering in the production VPC.
  2. Use Traffic Mirroring to mirror traffic from the production VPC for traffic inspection and filtering.
  3. Use AWS Network Firewall to create the required rules for traffic inspection and traffic filtering for the production VPC.
  4. Use AWS Firewall Manager to create the required rules for traffic inspection and traffic filtering for the production VPC.

Answer(s): C

Explanation:

AWS Network Firewall provides managed, in-line inspection and filtering within a VPC, letting you define rules that inspect and filter traffic entering and leaving the production VPC, matching the on-premises inspection server's functionality.
A) GuardDuty is a threat detection and monitoring service; it does not perform in-line traffic inspection or filtering.
B) Traffic Mirroring duplicates traffic for out-of-band analysis but does not perform in-line inspection or enforcement.
D) Firewall Manager centralizes policy management across accounts but relies on underlying firewall services; it does not itself inspect or filter traffic within the VPC.
Therefore, Network Firewall is the correct in-line traffic inspection and filtering solution.
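Network Firewall stateful rules can be written in Suricata-compatible syntax. A sketch of a rule group body for `create_rule_group`; the rule content, name, and capacity are illustrative placeholders, not a production ruleset.

```python
# Hypothetical stateful rule group: drop outbound IRC traffic and
# alert on inbound SSH, as examples of inspection/filtering rules.
rules_string = "\n".join([
    'drop tcp $HOME_NET any -> $EXTERNAL_NET 6667 (msg:"Block IRC"; sid:100001; rev:1;)',
    'alert tcp $EXTERNAL_NET any -> $HOME_NET 22 (msg:"Inbound SSH"; sid:100002; rev:1;)',
])

rule_group_params = {
    "RuleGroupName": "production-vpc-inspection",
    "Type": "STATEFUL",
    "Capacity": 100,
    "RuleGroup": {"RulesSource": {"RulesString": rules_string}},
}
```

The rule group is then attached to a firewall policy, and the firewall's endpoints are placed in the traffic path via VPC route tables.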



A company hosts a data lake on AWS. The data lake consists of data in Amazon S3 and Amazon RDS for PostgreSQL. The company needs a reporting solution that provides data visualization and includes all the data sources within the data lake. Only the company's management team should have full access to all the visualizations. The rest of the company should have only limited access.
Which solution will meet these requirements?

  1. Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data. Share the dashboards with the appropriate IAM roles.
  2. Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data. Share the dashboards with the appropriate users and groups.
  3. Create an AWS Glue table and crawler for the data in Amazon S3. Create an AWS Glue extract, transform, and load (ETL) job to produce reports. Publish the reports to Amazon S3. Use S3 bucket policies to limit access to the reports.
  4. Create an AWS Glue table and crawler for the data in Amazon S3. Use Amazon Athena Federated Query to access data within Amazon RDS for PostgreSQL. Generate reports by using Amazon Athena. Publish the reports to Amazon S3. Use S3 bucket policies to limit access to the reports.

Answer(s): B

Explanation:

Amazon QuickSight supports centralized dashboards that can connect to multiple data sources (S3, RDS PostgreSQL), with fine-grained access control via IAM users/groups or in-app permissions, enabling management to have full access while others have restricted access. B) aligns with per-user/group access control and multi-source visualization.
A) Incorrect. QuickSight dashboards are shared with QuickSight users and groups, not IAM roles, so sharing with IAM roles does not provide the required per-user access control.
C) ETL and S3-only reporting lacks integrated visualization and per-user access controls across data sources.
D) Federation and cross-service querying add complexity; access control and visualization governance are not as straightforward as in QuickSight dashboards.
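The split between full access for management and limited access for everyone else, as in option B, maps to dashboard permissions. A sketch of the `update_dashboard_permissions` parameter shape; the account ID, dashboard ID, and group ARNs are placeholders.

```python
# Hypothetical QuickSight permission sets: owner-level actions for the
# management group, read-only viewer actions for all other employees.
owner_actions = [
    "quicksight:DescribeDashboard",
    "quicksight:QueryDashboard",
    "quicksight:ListDashboardVersions",
    "quicksight:UpdateDashboard",
    "quicksight:DeleteDashboard",
    "quicksight:UpdateDashboardPermissions",
]
viewer_actions = [
    "quicksight:DescribeDashboard",
    "quicksight:QueryDashboard",
    "quicksight:ListDashboardVersions",
]

permissions_params = {
    "AwsAccountId": "123456789012",
    "DashboardId": "data-lake-dashboard",
    "GrantPermissions": [
        {
            "Principal": "arn:aws:quicksight:us-east-1:123456789012:group/default/management",
            "Actions": owner_actions,
        },
        {
            "Principal": "arn:aws:quicksight:us-east-1:123456789012:group/default/all-employees",
            "Actions": viewer_actions,
        },
    ],
}
```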





