Amazon DEA-C01 Exam (page: 2)
Amazon AWS Certified Data Engineer - Associate DEA-C01
Updated on: 31-Mar-2026

Viewing Page 2 of 27

A data engineer needs to schedule a workflow that runs a set of AWS Glue jobs every day. The data engineer does not require the Glue jobs to run or finish at a specific time.
Which solution will run the Glue jobs in the MOST cost-effective way?

  A. Choose the FLEX execution class in the Glue job properties.
  B. Use the Spot Instance type in Glue job properties.
  C. Choose the STANDARD execution class in the Glue job properties.
  D. Choose the latest version in the GlueVersion field in the Glue job properties.

Answer(s): A

Explanation:

A) FLEX is the most cost-effective choice for jobs that do not need to start or finish at a specific time. FLEX runs jobs on spare capacity at a lower price per DPU-hour; start times are best-effort, which matches this workload exactly.
B) Spot Instances are an Amazon EC2 purchasing option, not a Glue job property; Glue does not expose Spot capacity for jobs.
C) STANDARD provides dedicated capacity with immediate start. It costs more per DPU-hour than FLEX and pays for a timing guarantee this workload does not need.
D) GlueVersion selects the runtime and feature set; a newer version does not by itself reduce cost for flexibly scheduled jobs.
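The FLEX setting is a single job property. A minimal boto3-style sketch of a job definition using it is below; the job name, role ARN, and script path are placeholders, not values from the question.

```python
# Sketch: a Glue job definition that uses the FLEX execution class.
# The name, role ARN, and script location below are placeholders.
job_definition = {
    "Name": "daily-etl-job",                                    # hypothetical job name
    "Role": "arn:aws:iam::123456789012:role/GlueJobRole",       # placeholder role ARN
    "Command": {
        "Name": "glueetl",
        "ScriptLocation": "s3://my-bucket/scripts/etl.py",      # placeholder script path
        "PythonVersion": "3",
    },
    "GlueVersion": "4.0",
    # FLEX runs the job on spare capacity at a lower DPU-hour rate;
    # start times are best-effort, which suits schedules with no deadline.
    "ExecutionClass": "FLEX",
}

# To create the job for real (requires AWS credentials):
# import boto3
# glue = boto3.client("glue")
# glue.create_job(**job_definition)
```

Switching back to guaranteed start times is just a matter of setting `ExecutionClass` to `STANDARD`.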



A data engineer needs to create an AWS Lambda function that converts the format of data from .csv to Apache Parquet. The Lambda function must run only if a user uploads a .csv file to an Amazon S3 bucket.
Which solution will meet these requirements with the LEAST operational overhead?

  A. Create an S3 event notification that has an event type of s3:ObjectCreated:*. Use a filter rule to generate notifications only when the suffix includes .csv. Set the Amazon Resource Name (ARN) of the Lambda function as the destination for the event notification.
  B. Create an S3 event notification that has an event type of s3:ObjectTagging:* for objects that have a tag set to .csv. Set the Amazon Resource Name (ARN) of the Lambda function as the destination for the event notification.
  C. Create an S3 event notification that has an event type of s3:*. Use a filter rule to generate notifications only when the suffix includes .csv. Set the Amazon Resource Name (ARN) of the Lambda function as the destination for the event notification.
  D. Create an S3 event notification that has an event type of s3:ObjectCreated:*. Use a filter rule to generate notifications only when the suffix includes .csv. Set an Amazon Simple Notification Service (Amazon SNS) topic as the destination for the event notification. Subscribe the Lambda function to the SNS topic.

Answer(s): A

Explanation:

A) Correct. S3 event notifications can filter s3:ObjectCreated:* events on the .csv suffix and invoke the Lambda function directly, with no intermediate components to operate.
B) Tag-based notifications fire on tagging actions, not uploads, and would require a separate process to tag objects; a tag also does not guarantee the object is actually a CSV file.
C) The s3:* event type is overly broad; it would fire for deletions, restores, and other operations, generating unnecessary invocations.
D) Routing through an SNS topic works, but it adds a service and a subscription to manage, which is more operational overhead than direct Lambda invocation from S3.
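The filtering described in option A lives entirely in the bucket's notification configuration. A sketch of that configuration follows; the bucket name and Lambda ARN are placeholders.

```python
# Sketch: S3 event notification that invokes a Lambda function only for
# newly created objects whose key ends in ".csv". The Lambda ARN and
# bucket name are placeholders.
notification_config = {
    "LambdaFunctionConfigurations": [
        {
            # placeholder function ARN
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:csv-to-parquet",
            "Events": ["s3:ObjectCreated:*"],
            "Filter": {
                "Key": {
                    "FilterRules": [
                        {"Name": "suffix", "Value": ".csv"},
                    ]
                }
            },
        }
    ]
}

# To apply (requires AWS credentials, and a resource-based permission on
# the function allowing s3.amazonaws.com to invoke it):
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_notification_configuration(
#     Bucket="my-upload-bucket",  # placeholder bucket name
#     NotificationConfiguration=notification_config,
# )
```

The suffix filter is what keeps non-CSV uploads from ever invoking the function, so no filtering code is needed inside the Lambda itself.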



A data engineer needs Amazon Athena queries to finish faster. The data engineer notices that all the files the Athena queries use are currently stored in uncompressed .csv format. The data engineer also notices that users perform most queries by selecting a specific column.
Which solution will MOST speed up the Athena query performance?

  A. Change the data format from .csv to JSON format. Apply Snappy compression.
  B. Compress the .csv files by using Snappy compression.
  C. Change the data format from .csv to Apache Parquet. Apply Snappy compression.
  D. Compress the .csv files by using gzip compression.

Answer(s): C

Explanation:

Athena performs best with compressed, columnar formats. Parquet stores data by column, so a query that selects a specific column reads only that column's data, and Parquet's row-group statistics also enable predicate pushdown to skip data that cannot match.
A) JSON is a row-based text format; even with Snappy compression, Athena must read entire records, so there is no column pruning.
B) Snappy-compressed CSV reduces bytes scanned somewhat, but CSV is still row-based; every query reads every column.
C) Correct. Parquet with Snappy combines columnar storage, compression, and predicate pushdown, giving the largest reduction in scanned data and the fastest queries for column-selective workloads.
D) gzip-compressed CSV is not splittable, which limits parallelism, and it remains row-based; the gain is far smaller than converting to Parquet.
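One common way to perform this conversion is an Athena CTAS statement, which rewrites the CSV-backed table as Parquet in place. The sketch below wraps such a statement in Python; the database, table, and S3 location names are placeholders.

```python
# Sketch: an Athena CTAS statement that rewrites a CSV-backed table as
# Snappy-compressed Parquet. Database, table, and S3 path names are
# placeholders.
ctas_query = """
CREATE TABLE analytics.sales_parquet
WITH (
    format = 'PARQUET',
    write_compression = 'SNAPPY',
    external_location = 's3://my-bucket/sales-parquet/'
) AS
SELECT * FROM analytics.sales_csv;
"""

# Running it (requires AWS credentials):
# import boto3
# athena = boto3.client("athena")
# athena.start_query_execution(
#     QueryString=ctas_query,
#     ResultConfiguration={"OutputLocation": "s3://my-bucket/athena-results/"},
# )
```

After the CTAS completes, pointing the column-selective queries at the Parquet table is what realizes the scan reduction.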



A manufacturing company collects sensor data from its factory floor to monitor and enhance operational efficiency. The company uses Amazon Kinesis Data Streams to publish the data that the sensors collect to a data stream. Then Amazon Kinesis Data Firehose writes the data to an Amazon S3 bucket.
The company needs to display a real-time view of operational efficiency on a large screen in the manufacturing facility.
Which solution will meet these requirements with the LOWEST latency?

  A. Use Amazon Managed Service for Apache Flink (previously known as Amazon Kinesis Data Analytics) to process the sensor data. Use a connector for Apache Flink to write data to an Amazon Timestream database. Use the Timestream database as a source to create a Grafana dashboard.
  B. Configure the S3 bucket to send a notification to an AWS Lambda function when any new object is created. Use the Lambda function to publish the data to Amazon Aurora. Use Aurora as a source to create an Amazon QuickSight dashboard.
  C. Use Amazon Managed Service for Apache Flink (previously known as Amazon Kinesis Data Analytics) to process the sensor data. Create a new Data Firehose delivery stream to publish data directly to an Amazon Timestream database. Use the Timestream database as a source to create an Amazon QuickSight dashboard.
  D. Use AWS Glue bookmarks to read sensor data from the S3 bucket in real time. Publish the data to an Amazon Timestream database. Use the Timestream database as a source to create a Grafana dashboard.

Answer(s): A

Explanation:

A) Correct. Managed Service for Apache Flink reads directly from the Kinesis data stream and writes results to Timestream through a Flink connector; Grafana then queries Timestream continuously. This path bypasses the Firehose-to-S3 stage and its buffering delay entirely.
B) Data only reaches S3 after Firehose buffering (typically a minute or more), and a Lambda-to-Aurora-to-QuickSight pipeline adds further delay; this path is not real time.
C) Flink is appropriate, but inserting a separate Firehose delivery stream between Flink and Timestream adds buffering and an extra hop compared with writing from Flink directly.
D) AWS Glue job bookmarks support incremental batch processing, not streaming; they cannot read S3 "in real time" or drive a low-latency dashboard.



A company stores daily records of the financial performance of investment portfolios in .csv format in an Amazon S3 bucket. A data engineer uses AWS Glue crawlers to crawl the S3 data.
The data engineer must make the S3 data accessible daily in the AWS Glue Data Catalog.
Which solution will meet these requirements?

  A. Create an IAM role that includes the AmazonS3FullAccess policy. Associate the role with the crawler. Specify the S3 bucket path of the source data as the crawler's data store. Create a daily schedule to run the crawler. Configure the output destination to a new path in the existing S3 bucket.
  B. Create an IAM role that includes the AWSGlueServiceRole policy. Associate the role with the crawler. Specify the S3 bucket path of the source data as the crawler's data store. Create a daily schedule to run the crawler. Specify a database name for the output.
  C. Create an IAM role that includes the AmazonS3FullAccess policy. Associate the role with the crawler. Specify the S3 bucket path of the source data as the crawler's data store. Allocate data processing units (DPUs) to run the crawler every day. Specify a database name for the output.
  D. Create an IAM role that includes the AWSGlueServiceRole policy. Associate the role with the crawler. Specify the S3 bucket path of the source data as the crawler's data store. Allocate data processing units (DPUs) to run the crawler every day. Configure the output destination to a new path in the existing S3 bucket.

Answer(s): B

Explanation:

B) Correct. AWSGlueServiceRole is the managed policy intended for Glue crawlers, following least privilege while granting the access the service needs. Pointing the crawler at the S3 source path, scheduling it daily, and specifying a Data Catalog database for the output makes the table metadata available in the catalog every day.
A) AmazonS3FullAccess grants far more permission than the crawler needs, and a crawler writes table metadata to the Data Catalog, not files to an S3 output path.
C) Crawlers are run on a schedule, not by allocating DPUs, and AmazonS3FullAccess is again excessive.
D) Uses the correct role, but repeats C's DPU mistake and directs output to an S3 path instead of a catalog database.
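Option B maps directly onto a crawler definition. The sketch below shows the shape of one; the crawler name, role ARN, database name, S3 path, and schedule are placeholders.

```python
# Sketch: a daily-scheduled Glue crawler that writes table metadata to a
# Data Catalog database. All names, the ARN, and the path are placeholders.
crawler_definition = {
    "Name": "daily-portfolio-crawler",
    # an IAM role with the AWSGlueServiceRole managed policy attached
    "Role": "arn:aws:iam::123456789012:role/MyGlueCrawlerRole",
    # the Data Catalog database that receives the crawler's output tables
    "DatabaseName": "portfolio_db",
    "Targets": {
        "S3Targets": [{"Path": "s3://my-bucket/portfolio-data/"}]  # placeholder path
    },
    # Glue cron schedule syntax: run every day at 01:00 UTC
    "Schedule": "cron(0 1 * * ? *)",
}

# To create it (requires AWS credentials):
# import boto3
# glue = boto3.client("glue")
# glue.create_crawler(**crawler_definition)
```

Note that the "output" is entirely the `DatabaseName`: the crawler registers tables there rather than writing anything back to S3.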



A company loads transaction data for each day into Amazon Redshift tables at the end of each day. The company wants to have the ability to track which tables have been loaded and which tables still need to be loaded.
A data engineer wants to store the load statuses of Redshift tables in an Amazon DynamoDB table. The data engineer creates an AWS Lambda function to publish the details of the load statuses to DynamoDB.
How should the data engineer invoke the Lambda function to write load statuses to the DynamoDB table?

  A. Use a second Lambda function to invoke the first Lambda function based on Amazon CloudWatch events.
  B. Use the Amazon Redshift Data API to publish an event to Amazon EventBridge. Configure an EventBridge rule to invoke the Lambda function.
  C. Use the Amazon Redshift Data API to publish a message to an Amazon Simple Queue Service (Amazon SQS) queue. Configure the SQS queue to invoke the Lambda function.
  D. Use a second Lambda function to invoke the first Lambda function based on AWS CloudTrail events.

Answer(s): B

Explanation:

B) Correct. A statement run through the Redshift Data API can emit an event to EventBridge when it completes; an EventBridge rule then invokes the Lambda function, which writes the load status to DynamoDB. This is a decoupled, serverless trigger tied directly to the load activity.
A) Requires a second Lambda function and a CloudWatch Events schedule; more moving parts, and only loosely coupled to when a load actually finishes.
C) The Data API does not publish to SQS; adding a queue introduces an extra component without the native EventBridge integration.
D) CloudTrail records API calls for auditing; it is not designed as a low-latency trigger between Redshift activity and Lambda.
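An EventBridge rule for this pattern might look like the sketch below. The source, detail-type, and state strings are assumptions about the Redshift Data API event shape; inspect a real event in your account before relying on them. The rule name and Lambda ARN are placeholders.

```python
# Sketch: an EventBridge rule that invokes a Lambda function when a
# Redshift Data API statement finishes. The event-pattern field values
# are assumptions -- confirm them against a captured event.
event_pattern = {
    "source": ["aws.redshift-data"],                           # assumed source
    "detail-type": ["Redshift Data Statement Status Change"],  # assumed detail-type
    "detail": {"state": ["FINISHED"]},                         # assumed status field
}

# To create the rule and target (requires AWS credentials, plus a
# lambda:AddPermission grant so EventBridge may invoke the function):
# import json, boto3
# events = boto3.client("events")
# events.put_rule(
#     Name="redshift-load-status-rule",            # placeholder rule name
#     EventPattern=json.dumps(event_pattern),
# )
# events.put_targets(
#     Rule="redshift-load-status-rule",
#     Targets=[{
#         "Id": "load-status-lambda",
#         # placeholder function ARN
#         "Arn": "arn:aws:lambda:us-east-1:123456789012:function:write-load-status",
#     }],
# )
```

The Lambda function then reads the statement details from the event payload and puts an item into the DynamoDB status table.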



A data engineer needs to securely transfer 5 TB of data from an on-premises data center to an Amazon S3 bucket. Approximately 5% of the data changes every day. Updates to the data need to be regularly propagated to the S3 bucket. The data includes files that are in multiple formats. The data engineer needs to automate the transfer process and must schedule the process to run periodically.
Which AWS service should the data engineer use to transfer the data in the MOST operationally efficient way?

  A. AWS DataSync
  B. AWS Glue
  C. AWS Direct Connect
  D. Amazon S3 Transfer Acceleration

Answer(s): A

Explanation:

A) AWS DataSync is correct because it enables secure, automated, periodic transfer of large on-premises datasets to S3, supports incremental changes, multiple file formats, and can schedule transfers; it handles continuous updates efficiently without manual scripting.
B) AWS Glue is optimized for ETL processing and data cataloging, not for secure, ongoing bulk transfer from on-premises to S3 with scheduling and incremental sync.
C) AWS Direct Connect provides a dedicated network connection, not data movement orchestration or scheduling of transfers to S3.
D) Amazon S3 Transfer Acceleration speeds individual uploads over long distances but is not designed for automated, scheduled, incremental sync from on-premises with ongoing updates.
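The scheduling and incremental behavior that make DataSync the answer are both part of the task definition. A sketch follows; the location ARNs are placeholders for locations created beforehand (for example with create_location_nfs and create_location_s3).

```python
# Sketch: a scheduled DataSync task that syncs an on-premises file share
# to S3. Both location ARNs are placeholders for pre-created locations.
task_definition = {
    "SourceLocationArn": "arn:aws:datasync:us-east-1:123456789012:location/loc-onprem",  # placeholder
    "DestinationLocationArn": "arn:aws:datasync:us-east-1:123456789012:location/loc-s3",  # placeholder
    "Name": "nightly-onprem-to-s3",
    # DataSync transfers only changed files on each run, which suits the
    # ~5% daily change rate; this cron runs the task nightly at 02:00 UTC.
    "Schedule": {"ScheduleExpression": "cron(0 2 * * ? *)"},
}

# To create the task (requires AWS credentials and a deployed DataSync agent):
# import boto3
# datasync = boto3.client("datasync")
# datasync.create_task(**task_definition)
```

Because each scheduled run picks up only the changed files, no custom change-detection scripting is needed.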



A company uses an on-premises Microsoft SQL Server database to store financial transaction data. The company migrates the transaction data from the on-premises database to AWS at the end of each month. The company has noticed that the cost to migrate data from the on-premises database to an Amazon RDS for SQL Server database has increased recently.
The company requires a cost-effective solution to migrate the data to AWS. The solution must cause minimal downtime for the applications that access the database.
Which AWS service should the company use to meet these requirements?

  A. AWS Lambda
  B. AWS Database Migration Service (AWS DMS)
  C. AWS Direct Connect
  D. AWS DataSync

Answer(s): B

Explanation:

B) Correct. AWS DMS is purpose-built for database migration with minimal downtime: it supports full-load plus change data capture (CDC) replication from on-premises SQL Server to Amazon RDS for SQL Server, so the monthly transfers can run without taking the applications offline.
A) AWS Lambda is event-driven compute with runtime and payload limits; it has no built-in database replication capability.
C) AWS Direct Connect provides dedicated network connectivity; it does not perform or orchestrate database replication.
D) AWS DataSync moves files and objects; it does not replicate relational databases into RDS.
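A DMS migration with minimal downtime hinges on the full-load-and-cdc migration type. The sketch below shows the shape of such a task definition; every ARN is a placeholder for a replication instance and endpoints created beforehand, and the table mapping is a catch-all selection rule.

```python
# Sketch: a DMS replication task for full load plus ongoing change data
# capture (CDC). All ARNs are placeholders for pre-created resources.
import json

# Catch-all table mapping: include every table in every schema.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

task_settings = {
    "ReplicationTaskIdentifier": "monthly-sqlserver-to-rds",
    "SourceEndpointArn": "arn:aws:dms:us-east-1:123456789012:endpoint:SRC",    # placeholder
    "TargetEndpointArn": "arn:aws:dms:us-east-1:123456789012:endpoint:TGT",    # placeholder
    "ReplicationInstanceArn": "arn:aws:dms:us-east-1:123456789012:rep:INST",   # placeholder
    # full-load-and-cdc copies existing data, then applies ongoing changes,
    # which is what keeps application downtime minimal.
    "MigrationType": "full-load-and-cdc",
    "TableMappings": json.dumps(table_mappings),
}

# To create the task (requires AWS credentials):
# import boto3
# dms = boto3.client("dms")
# dms.create_replication_task(**task_settings)
```

The CDC phase keeps the target in sync while applications continue writing to the source, so cutover downtime shrinks to the final switch of connection strings.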


