Amazon DBS-C01 Exam (page: 7)
Amazon AWS Certified Database - Specialty
Updated on: 09-Feb-2026

Viewing Page 7 of 42

A Database Specialist is designing a disaster recovery strategy for a production Amazon DynamoDB table. The table uses provisioned read/write capacity mode, global secondary indexes, and time to live (TTL). The Database Specialist has restored the latest backup to a new table.
To prepare the new table with identical settings, which steps should be performed? (Choose two.)

  A. Re-create global secondary indexes in the new table
  B. Define IAM policies for access to the new table
  C. Define the TTL settings
  D. Encrypt the table from the AWS Management Console or use the update-table command
  E. Set the provisioned read and write capacity

Answer(s): B,C

Explanation:

When a table is restored from a DynamoDB backup, global secondary indexes, encryption settings, and the provisioned read and write capacity are restored with it. IAM policies, TTL settings, auto scaling policies, CloudWatch alarms, tags, and stream settings are not carried over, so they must be configured manually on the new table.

Reference:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html
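
The TTL setting, for example, can be re-applied to the restored table with the AWS CLI. This is a minimal sketch; the table name (MusicRestored) and TTL attribute (expires_at) are hypothetical, and the IAM policies referencing the new table's ARN would be attached separately.

  # Re-enable TTL on the restored table (table and attribute names are examples)
  aws dynamodb update-time-to-live \
      --table-name MusicRestored \
      --time-to-live-specification "Enabled=true,AttributeName=expires_at"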



A Database Specialist is creating Amazon DynamoDB tables, Amazon CloudWatch alarms, and associated infrastructure for an Application team using a development AWS account. The team wants a deployment method that will standardize the core solution components while managing environment-specific settings separately, and wants to minimize rework due to configuration errors.
Which process should the Database Specialist recommend to meet these requirements?

  A. Organize common and environment-specific parameters hierarchically in the AWS Systems Manager Parameter Store, then reference the parameters dynamically from an AWS CloudFormation template. Deploy the CloudFormation stack using the environment name as a parameter.
  B. Create a parameterized AWS CloudFormation template that builds the required objects. Keep separate environment parameter files in separate Amazon S3 buckets. Provide an AWS CLI command that deploys the CloudFormation stack directly referencing the appropriate parameter bucket.
  C. Create a parameterized AWS CloudFormation template that builds the required objects. Import the template into the CloudFormation interface in the AWS Management Console. Make the required changes to the parameters and deploy the CloudFormation stack.
  D. Create an AWS Lambda function that builds the required objects using an AWS SDK. Set the required parameter values in a test event in the Lambda console for each environment that the Application team can modify, as needed. Deploy the infrastructure by triggering the test event in the console.

Answer(s): A

Explanation:

Storing common and environment-specific values hierarchically in Systems Manager Parameter Store and resolving them dynamically from a single CloudFormation template keeps the core solution standardized. Only the environment name changes at deploy time, which minimizes manual parameter entry and the configuration errors that come with it.

Reference:

https://aws.amazon.com/blogs/mt/aws-cloudformation-signed-sealed-and-deployed/
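
One way this pattern can look in practice, assuming hypothetical parameter names under /myapp/ and a template that resolves them with SSM dynamic references (for example {{resolve:ssm:/myapp/dev/table-read-capacity}}):

  # Store environment-specific values hierarchically in Parameter Store
  aws ssm put-parameter --name /myapp/dev/table-read-capacity  --type String --value "5"
  aws ssm put-parameter --name /myapp/prod/table-read-capacity --type String --value "50"

  # Deploy the same standardized template, passing only the environment name
  aws cloudformation deploy \
      --template-file infrastructure.yaml \
      --stack-name myapp-dev \
      --parameter-overrides Environment=dev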



A company runs online transaction processing (OLTP) workloads on an Amazon RDS for PostgreSQL Multi-AZ DB instance. Tests were run on the database after work hours, which generated additional database logs. The free storage of the RDS DB instance is low due to these additional logs.
What should the company do to address this space constraint issue?

  A. Log in to the host and run the rm $PGDATA/pg_logs/* command
  B. Modify the rds.log_retention_period parameter to 1440 and wait up to 24 hours for database logs to be deleted
  C. Create a ticket with AWS Support to have the logs deleted
  D. Run the SELECT rds_rotate_error_log() stored procedure to rotate the logs

Answer(s): B
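
The rds.log_retention_period parameter is expressed in minutes, so a value of 1440 retains logs for 24 hours; once the setting takes effect, older logs are deleted automatically and the space is reclaimed. A minimal sketch with the AWS CLI, assuming the instance uses a hypothetical custom parameter group named my-postgres-params:

  # Lower log retention to 1440 minutes (24 hours); the parameter is dynamic
  aws rds modify-db-parameter-group \
      --db-parameter-group-name my-postgres-params \
      --parameters "ParameterName=rds.log_retention_period,ParameterValue=1440,ApplyMethod=immediate"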



A user has a non-relational key-value database. The user is looking for a fully managed AWS service that will offload the administrative burdens of operating and scaling distributed databases. The solution must be cost-effective and able to handle unpredictable application traffic.

What should a Database Specialist recommend for this user?

  A. Create an Amazon DynamoDB table with provisioned capacity mode
  B. Create an Amazon DocumentDB cluster
  C. Create an Amazon DynamoDB table with on-demand capacity mode
  D. Create an Amazon Aurora Serverless DB cluster

Answer(s): C

Explanation:

DynamoDB is a fully managed key-value and document database. On-demand capacity mode bills per request and accommodates unpredictable traffic without capacity planning, making it the most cost-effective fit for workloads with unknown or spiky demand.

Reference:

https://aws.amazon.com/dynamodb/
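
A minimal sketch of creating such a table with the AWS CLI; the table name and key attribute are hypothetical:

  # On-demand (PAY_PER_REQUEST) billing removes capacity planning and scales with traffic
  aws dynamodb create-table \
      --table-name SessionData \
      --attribute-definitions AttributeName=pk,AttributeType=S \
      --key-schema AttributeName=pk,KeyType=HASH \
      --billing-mode PAY_PER_REQUEST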



A gaming company is designing a mobile gaming app that will be accessed by many users across the globe. The company wants to have replication and full support for multi-master writes. The company also wants to ensure low latency and consistent performance for app users.
Which solution meets these requirements?

  A. Use Amazon DynamoDB global tables for storage and enable DynamoDB automatic scaling
  B. Use Amazon Aurora for storage and enable cross-Region Aurora Replicas
  C. Use Amazon Aurora for storage and cache the user content with Amazon ElastiCache
  D. Use Amazon Neptune for storage

Answer(s): A

Explanation:

DynamoDB global tables provide a fully managed, multi-Region, multi-active (multi-master) database, so users read and write to the Region closest to them with low latency. Automatic scaling keeps performance consistent as traffic fluctuates.

Reference:

https://aws.amazon.com/blogs/database/how-to-use-amazon-dynamodb-global-tables-to-power-multiregion-architectures/
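
A minimal sketch of adding a replica Region to an existing table with the AWS CLI (global tables version 2019.11.21); the table name and Regions are hypothetical, and the table is assumed to already have DynamoDB Streams enabled with new and old images:

  # Turn the table into a global table by creating a replica in a second Region
  aws dynamodb update-table \
      --table-name GameState \
      --region us-east-1 \
      --replica-updates '[{"Create": {"RegionName": "eu-west-1"}}]'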



A Database Specialist needs to speed up any failover that might occur on an Amazon Aurora PostgreSQL DB cluster. The Aurora DB cluster currently includes the primary instance and three Aurora Replicas.
How can the Database Specialist ensure that failovers occur with the least amount of downtime for the application?

  A. Set the TCP keepalive parameters low
  B. Call the AWS CLI failover-db-cluster command
  C. Enable Enhanced Monitoring on the DB cluster
  D. Start a database activity stream on the DB cluster

Answer(s): A
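
Aggressive client-side TCP keepalive settings let the application detect a dead connection to the old primary quickly and reconnect through the cluster endpoint to the newly promoted instance. A sketch of the client-side Linux kernel settings with example aggressive values (tune them for your workload and persist them in /etc/sysctl.conf):

  # Probe idle connections after 1 second, retry every second, give up after 5 failed probes
  sudo sysctl -w net.ipv4.tcp_keepalive_time=1
  sudo sysctl -w net.ipv4.tcp_keepalive_intvl=1
  sudo sysctl -w net.ipv4.tcp_keepalive_probes=5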



A Database Specialist needs to define a database migration strategy to migrate an on-premises Oracle database to an Amazon Aurora MySQL DB cluster. The company requires near-zero downtime for the data migration. The solution must also be cost-effective.
Which approach should the Database Specialist take?

  A. Dump all the tables from the Oracle database into an Amazon S3 bucket using data pump (expdp). Run data transformations in AWS Glue. Load the data from the S3 bucket to the Aurora DB cluster.
  B. Order an AWS Snowball appliance and copy the Oracle backup to the Snowball appliance. Once the Snowball data is delivered to Amazon S3, create a new Aurora DB cluster. Enable the S3 integration to migrate the data directly from Amazon S3 to Amazon RDS.
  C. Use the AWS Schema Conversion Tool (AWS SCT) to help rewrite database objects to MySQL during the schema migration. Use AWS DMS to perform the full load and change data capture (CDC) tasks.
  D. Use AWS Server Migration Service (AWS SMS) to import the Oracle virtual machine image as an Amazon EC2 instance. Use the Oracle Logical Dump utility to migrate the Oracle data from Amazon EC2 to an Aurora DB cluster.

Answer(s): C
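
AWS SCT converts the Oracle schema objects to MySQL-compatible ones, and a DMS task of type full-load-and-cdc copies the existing data and then replicates ongoing changes, so cutover downtime is near zero. A minimal sketch of the DMS task, assuming the endpoints and replication instance already exist; all identifiers and ARNs below are placeholders:

  aws dms create-replication-task \
      --replication-task-identifier oracle-to-aurora-mysql \
      --source-endpoint-arn arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE \
      --target-endpoint-arn arn:aws:dms:us-east-1:123456789012:endpoint:TARGET \
      --replication-instance-arn arn:aws:dms:us-east-1:123456789012:rep:INSTANCE \
      --migration-type full-load-and-cdc \
      --table-mappings file://table-mappings.json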



A marketing company is using Amazon DocumentDB and requires that database audit logs be enabled. A Database Specialist needs to configure monitoring so that all data definition language (DDL) statements performed are visible to the Administrator. The Database Specialist has set the audit_logs parameter to enabled in the cluster parameter group.

What should the Database Specialist do to automatically collect the database logs for the Administrator?

  A. Enable DocumentDB to export the logs to Amazon CloudWatch Logs
  B. Enable DocumentDB to export the logs to AWS CloudTrail
  C. Enable DocumentDB Events to export the logs to Amazon CloudWatch Logs
  D. Configure an AWS Lambda function to download the logs using the download-db-log-file-portion operation and store the logs in Amazon S3

Answer(s): A

Explanation:

With audit_logs enabled in the cluster parameter group, Amazon DocumentDB can export its audit logs, including DDL statements, to Amazon CloudWatch Logs by enabling the audit log type in the cluster's log exports. The logs are then collected automatically for the Administrator.

Reference:

https://docs.aws.amazon.com/documentdb/latest/developerguide/profiling.html
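
Because the audit_logs parameter is already enabled, the remaining step is to export the audit log type to CloudWatch Logs on the cluster. A minimal sketch with the AWS CLI; the cluster identifier is hypothetical:

  # Publish DocumentDB audit logs (including DDL statements) to CloudWatch Logs
  aws docdb modify-db-cluster \
      --db-cluster-identifier my-docdb-cluster \
      --cloudwatch-logs-export-configuration '{"EnableLogTypes": ["audit"]}'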


