Amazon DBS-C01 Exam (page: 3)
Amazon AWS Certified Database - Specialty
Updated on: 07-Feb-2026


A company needs a data warehouse solution that keeps data in a consistent, highly structured format. The company requires fast responses for end-user queries when looking at data from the current year, and users must have access to the full 15-year dataset when needed. The solution also needs to handle a fluctuating number of incoming queries. Storage costs for the 100 TB of data must be kept low.

Which solution meets these requirements?

  A. Leverage an Amazon Redshift data warehouse solution using a dense storage instance type while keeping all the data on local Amazon Redshift storage. Provision enough instances to support high demand.
  B. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Provision enough instances to support high demand.
  C. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Enable Amazon Redshift Concurrency Scaling.
  D. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Leverage Amazon Redshift elastic resize.

Answer(s): C
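A rough boto3 sketch of the moving parts in option C, assuming a hypothetical cluster parameter group name: Concurrency Scaling is switched on per WLM queue through the wlm_json_configuration parameter, so transient capacity absorbs the fluctuating query volume, while historical data stays on low-cost Amazon S3 behind a Redshift Spectrum external schema.

    import json
    import boto3

    redshift = boto3.client("redshift")

    # One WLM queue with Concurrency Scaling set to "auto": Redshift adds
    # transient clusters when queries start queuing and removes them after.
    wlm_config = [{"query_concurrency": 5, "concurrency_scaling": "auto"}]

    redshift.modify_cluster_parameter_group(
        ParameterGroupName="dw-cluster-params",  # hypothetical name
        Parameters=[{
            "ParameterName": "wlm_json_configuration",
            "ParameterValue": json.dumps(wlm_config),
        }],
    )

The 15-year history would be registered as external tables (CREATE EXTERNAL SCHEMA ... FROM DATA CATALOG), so only the current year's data consumes local dense storage.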



A gaming company wants to deploy a game in multiple Regions. The company plans to save local high scores in Amazon DynamoDB tables in each Region. A Database Specialist needs to design a solution to automate the deployment of the database with identical configurations in additional Regions, as needed. The solution should also automate configuration changes across all Regions.

Which solution would meet these requirements and deploy the DynamoDB tables?

  A. Create an AWS CLI command to deploy the DynamoDB table to all the Regions and save it for future deployments.
  B. Create an AWS CloudFormation template and deploy the template to all the Regions.
  C. Create an AWS CloudFormation template and use a stack set to deploy the template to all the Regions.
  D. Create DynamoDB tables using the AWS Management Console in all the Regions and create a step-by-step guide for future deployments.

Answer(s): C
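A minimal sketch of option C with boto3, using a hypothetical account ID, stack set name, and Region list: the table definition lives in one template, and a single stack set operation deploys it with identical configuration (and later propagates changes) to every Region.

    import textwrap
    import boto3

    cfn = boto3.client("cloudformation")

    # Hypothetical template defining one DynamoDB table for local high scores.
    TEMPLATE = textwrap.dedent("""\
        AWSTemplateFormatVersion: '2010-09-09'
        Resources:
          HighScores:
            Type: AWS::DynamoDB::Table
            Properties:
              BillingMode: PAY_PER_REQUEST
              AttributeDefinitions:
                - {AttributeName: PlayerId, AttributeType: S}
              KeySchema:
                - {AttributeName: PlayerId, KeyType: HASH}
        """)

    cfn.create_stack_set(StackSetName="game-high-scores", TemplateBody=TEMPLATE)

    # One operation creates identical stacks in every listed Region; updating
    # the stack set later rolls the change out to all of them.
    cfn.create_stack_instances(
        StackSetName="game-high-scores",
        Accounts=["111122223333"],  # hypothetical account ID
        Regions=["us-east-1", "eu-west-1", "ap-southeast-2"],
    )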



A team of Database Specialists is currently investigating performance issues on an Amazon RDS for MySQL DB instance and is reviewing related metrics. The team wants to narrow the possibilities down to specific database wait events to better understand the situation.

How can the Database Specialists accomplish this?

  A. Enable the option to push all database logs to Amazon CloudWatch for advanced analysis
  B. Create appropriate Amazon CloudWatch dashboards that cover specific periods of time
  C. Enable Amazon RDS Performance Insights and review the appropriate dashboard
  D. Enable Enhanced Monitoring with the appropriate settings

Answer(s): C
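Enabling Performance Insights is a single modify call; a hedged boto3 sketch with a hypothetical instance name is below. The resulting dashboard slices database load by wait event, which is exactly the breakdown the team is after.

    import boto3

    rds = boto3.client("rds")

    # Turn on Performance Insights; its dashboard then shows DB load
    # grouped by wait event, SQL statement, host, or user.
    rds.modify_db_instance(
        DBInstanceIdentifier="prod-mysql",     # hypothetical instance name
        EnablePerformanceInsights=True,
        PerformanceInsightsRetentionPeriod=7,  # days (7 is the default tier)
        ApplyImmediately=True,
    )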



A large company is using an Amazon RDS for Oracle Multi-AZ DB instance with a Java application. As a part of its disaster recovery annual testing, the company would like to simulate an Availability Zone failure and record how the application reacts during the DB instance failover activity. The company does not want to make any code changes for this activity.

What should the company do to achieve this in the shortest amount of time?

  A. Use a blue-green deployment with a complete application-level failover test
  B. Use the RDS console to reboot the DB instance by choosing the option to reboot with failover
  C. Use RDS fault injection queries to simulate the primary node failure
  D. Add a rule to the NACL to deny all traffic on the subnets associated with a single Availability Zone

Answer(s): B


Reference:

https://wellarchitectedlabs.com/Reliability/300_Testing_for_Resiliency_of_EC2_RDS_and_S3/Lab_Guide.html
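The failover test in option B is one API call; a sketch with a hypothetical instance identifier:

    import boto3

    rds = boto3.client("rds")

    # Reboot with failover: forces the Multi-AZ failover path, so the
    # application's reconnection behavior can be observed with no code changes.
    rds.reboot_db_instance(
        DBInstanceIdentifier="prod-oracle",  # hypothetical instance name
        ForceFailover=True,
    )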



A company maintains several databases using Amazon RDS for MySQL and PostgreSQL. Each RDS database generates log files with retention periods set to their default values. The company has now mandated that database logs be maintained for up to 90 days in a centralized repository to facilitate real-time and after-the-fact analyses.

What should a Database Specialist do to meet these requirements with minimal effort?

  A. Create an AWS Lambda function to pull logs from the RDS databases and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.
  B. Modify the RDS databases to publish logs to Amazon CloudWatch Logs. Change the log retention policy for each log group to expire the events after 90 days.
  C. Write a stored procedure in each RDS database to download the logs and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.
  D. Create an AWS Lambda function to download the logs from the RDS databases and publish the logs to Amazon CloudWatch Logs. Change the log retention policy for the log group to expire the events after 90 days.

Answer(s): B
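A boto3 sketch of option B, assuming a hypothetical MySQL instance (a PostgreSQL instance would export the postgresql log type instead):

    import boto3

    rds = boto3.client("rds")
    logs = boto3.client("logs")

    # Publish the MySQL logs to CloudWatch Logs.
    rds.modify_db_instance(
        DBInstanceIdentifier="prod-mysql",  # hypothetical instance name
        CloudwatchLogsExportConfiguration={
            "EnableLogTypes": ["error", "general", "slowquery"]
        },
    )

    # Expire events in the resulting log group after 90 days.
    logs.put_retention_policy(
        logGroupName="/aws/rds/instance/prod-mysql/error",
        retentionInDays=90,
    )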



A Database Specialist is setting up a new Amazon Aurora DB cluster with one primary instance and three Aurora Replicas for a highly intensive, business-critical application. The Aurora DB cluster has one medium-sized primary instance, one large-sized replica, and two medium-sized replicas. The Database Specialist did not assign a promotion tier to the replicas.

In the event of a primary failure, what will occur?

  A. Aurora will promote an Aurora Replica that is of the same size as the primary instance
  B. Aurora will promote an arbitrary Aurora Replica
  C. Aurora will promote the largest-sized Aurora Replica
  D. Aurora will not promote an Aurora Replica

Answer(s): C


Reference:

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-ug.pdf
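When all replicas carry the same (default) promotion tier, Aurora breaks the tie by promoting the largest replica, which is why C is correct. If a deterministic failover target were wanted instead, a tier could be set explicitly; a hedged sketch with a hypothetical replica name:

    import boto3

    rds = boto3.client("rds")

    # Lower tier numbers are promoted first; tier 0 makes this replica the
    # preferred failover target regardless of its instance size.
    rds.modify_db_instance(
        DBInstanceIdentifier="aurora-replica-1",  # hypothetical replica name
        PromotionTier=0,
        ApplyImmediately=True,
    )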



A company is running its line-of-business application on AWS, which uses Amazon RDS for MySQL as the persistent data store. The company wants to minimize downtime when it migrates the database to Amazon Aurora.

Which migration method should a Database Specialist use?

  A. Take a snapshot of the RDS for MySQL DB instance and create a new Aurora DB cluster with the option to migrate snapshots.
  B. Make a backup of the RDS for MySQL DB instance using the mysqldump utility, create a new Aurora DB cluster, and restore the backup.
  C. Create an Aurora Replica from the RDS for MySQL DB instance and promote the Aurora DB cluster.
  D. Create a clone of the RDS for MySQL DB instance and promote the Aurora DB cluster.

Answer(s): C


Reference:

https://d1.awsstatic.com/whitepapers/RDS/Migrating%20your%20databases%20to%20Amazon%20Aurora.pdf (10)
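A sketch of the Aurora read replica migration in boto3, with hypothetical identifiers and source ARN. The source instance keeps serving traffic while Aurora catches up over binlog replication, so downtime is limited to a brief promote-and-repoint once replica lag reaches zero.

    import boto3

    rds = boto3.client("rds")

    # Create an Aurora MySQL cluster that replicates from the RDS source.
    rds.create_db_cluster(
        DBClusterIdentifier="aurora-migration",
        Engine="aurora-mysql",
        ReplicationSourceIdentifier=(
            "arn:aws:rds:us-east-1:111122223333:db:prod-mysql"  # hypothetical ARN
        ),
    )

    # Give the new cluster a writer instance.
    rds.create_db_instance(
        DBInstanceIdentifier="aurora-migration-1",
        DBInstanceClass="db.r5.large",
        Engine="aurora-mysql",
        DBClusterIdentifier="aurora-migration",
    )

    # Once replica lag hits zero, detach from the source and repoint the app.
    rds.promote_read_replica_db_cluster(DBClusterIdentifier="aurora-migration")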



The Security team for a finance company was notified of an internal security breach that happened 3 weeks ago. A Database Specialist must start producing audit logs out of the production Amazon Aurora PostgreSQL cluster for the Security team to use for monitoring and alerting. The Security team is required to perform real-time alerting and monitoring outside the Aurora DB cluster and wants to have the cluster push encrypted files to the chosen solution.

Which approach will meet these requirements?

  A. Use pgAudit to generate audit logs and send the logs to the Security team.
  B. Use AWS CloudTrail to audit the DB cluster and the Security team will get data from Amazon S3.
  C. Set up database activity streams and connect the data stream from Amazon Kinesis to consumer applications.
  D. Turn on verbose logging and set up a schedule for the logs to be dumped out for the Security team.

Answer(s): C


Reference:

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-ug.pdf (525)
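Starting a database activity stream is one API call; a hedged sketch with a hypothetical cluster ARN and KMS key alias. The stream is KMS-encrypted and pushed to Amazon Kinesis in near real time, outside the DB cluster, where the Security team's consumer applications can alert on it.

    import boto3

    rds = boto3.client("rds")

    rds.start_activity_stream(
        ResourceArn=(
            "arn:aws:rds:us-east-1:111122223333:cluster:prod-aurora-pg"  # hypothetical
        ),
        Mode="async",                   # async minimizes impact on the database
        KmsKeyId="alias/audit-stream",  # hypothetical KMS key alias
        ApplyImmediately=True,
    )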





