Amazon DAS-C01 Exam (page: 5)
Amazon AWS Certified Data Analytics - Specialty (DAS-C01)
Updated on: 25-Dec-2025

Viewing Page 5 of 34

Three teams of data analysts use Apache Hive on an Amazon EMR cluster with the EMR File System (EMRFS) to query data stored within each team's Amazon S3 bucket. The EMR cluster has Kerberos enabled and is configured to authenticate users from the corporate Active Directory. The data is highly sensitive, so access must be limited to the members of each team.
Which steps will satisfy the security requirements?

  1. For the EMR cluster Amazon EC2 instances, create a service role that grants no access to Amazon S3. Create three additional IAM roles, each granting access to each team's specific bucket. Add the additional IAM roles to the cluster's EMR role for the EC2 trust policy. Create a security configuration mapping for the additional IAM roles to Active Directory user groups for each team.
  2. For the EMR cluster Amazon EC2 instances, create a service role that grants no access to Amazon S3. Create three additional IAM roles, each granting access to each team's specific bucket. Add the service role for the EMR cluster EC2 instances to the trust policies for the additional IAM roles. Create a security configuration mapping for the additional IAM roles to Active Directory user groups for each team.
  3. For the EMR cluster Amazon EC2 instances, create a service role that grants full access to Amazon S3. Create three additional IAM roles, each granting access to each team's specific bucket. Add the service role for the EMR cluster EC2 instances to the trust policies for the additional IAM roles. Create a security configuration mapping for the additional IAM roles to Active Directory user groups for each team.
  4. For the EMR cluster Amazon EC2 instances, create a service role that grants full access to Amazon S3. Create three additional IAM roles, each granting access to each team's specific bucket. Add the service role for the EMR cluster EC2 instances to the trust policies for the base IAM roles. Create a security configuration mapping for the additional IAM roles to Active Directory user groups for each team.

Answer(s): B
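
The listed answer (B, i.e. option 2) relies on an EMR security configuration that maps each team's IAM role to its Active Directory group through EMRFS role mappings, while the EC2 instance profile role itself has no S3 access. Below is a minimal sketch of creating such a security configuration with boto3; the account ID, role ARNs, and group names are hypothetical placeholders, not values from the question.

    import json
    import boto3

    emr = boto3.client("emr")

    # Hypothetical role ARNs and Active Directory group names.
    # Each team role's trust policy must also allow the EMR EC2
    # instance profile role to assume it (per the selected answer).
    security_configuration = {
        "AuthorizationConfiguration": {
            "EmrFsConfiguration": {
                "RoleMappings": [
                    {
                        "Role": "arn:aws:iam::111122223333:role/TeamA-S3-Access",
                        "IdentifierType": "Group",
                        "Identifiers": ["team-a-analysts"],
                    },
                    {
                        "Role": "arn:aws:iam::111122223333:role/TeamB-S3-Access",
                        "IdentifierType": "Group",
                        "Identifiers": ["team-b-analysts"],
                    },
                    {
                        "Role": "arn:aws:iam::111122223333:role/TeamC-S3-Access",
                        "IdentifierType": "Group",
                        "Identifiers": ["team-c-analysts"],
                    },
                ]
            }
        }
    }

    emr.create_security_configuration(
        Name="per-team-emrfs-role-mapping",
        SecurityConfiguration=json.dumps(security_configuration),
    )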



A company is planning to create a data lake in Amazon S3. The company wants to create tiered storage based on access patterns and cost objectives. The solution must include support for JDBC connections from legacy clients, metadata management that allows federation for access control, and batch-based ETL using PySpark and Scala. Operational management should be limited. Which combination of components can meet these requirements? (Choose three.)

  1. AWS Glue Data Catalog for metadata management
  2. Amazon EMR with Apache Spark for ETL
  3. AWS Glue for Scala-based ETL
  4. Amazon EMR with Apache Hive for JDBC clients
  5. Amazon Athena for querying data in Amazon S3 using JDBC drivers
  6. Amazon EMR with Apache Hive, using a MySQL-compatible Amazon RDS database as the backing metastore

Answer(s): A,C,E


Reference:

https://d1.awsstatic.com/whitepapers/Storage/data-lake-on-aws.pdf
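
The three selected components (A, C, E) keep operations light: the AWS Glue Data Catalog stores the table metadata, a Scala-based AWS Glue job handles the batch ETL, and Amazon Athena serves legacy clients through its JDBC driver while querying the data in place in Amazon S3. As a minimal illustration, a query against a catalog-backed database can be started with boto3; the database, table, and results location below are hypothetical.

    import boto3

    athena = boto3.client("athena")

    # Hypothetical Glue Data Catalog database/table and a scratch
    # S3 location for query results.
    response = athena.start_query_execution(
        QueryString="SELECT event_date, COUNT(*) FROM sales_events GROUP BY event_date",
        QueryExecutionContext={"Database": "datalake_curated"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
    print(response["QueryExecutionId"])

Legacy JDBC clients would connect through the Athena JDBC driver and run the same SQL against the same catalog tables.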



A company wants to optimize the cost of its data and analytics platform. The company is ingesting a number of .csv and JSON files in Amazon S3 from various data sources. Incoming data is expected to be 50 GB each day. The company is using Amazon Athena to query the raw data in Amazon S3 directly. Most queries aggregate data from the past 12 months, and data that is older than 5 years is infrequently queried. The typical query scans about 500 MB of data and is expected to return results in less than 1 minute. The raw data must be retained indefinitely for compliance requirements.
Which solution meets the company's requirements?

  1. Use an AWS Glue ETL job to compress, partition, and convert the data into a columnar data format. Use Athena to query the processed dataset. Configure a lifecycle policy to move the processed data into the Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage class 5 years after object creation. Configure a second lifecycle policy to move the raw data into Amazon S3 Glacier for long-term archival 7 days after object creation.
  2. Use an AWS Glue ETL job to partition and convert the data into a row-based data format. Use Athena to query the processed dataset. Configure a lifecycle policy to move the data into the Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage class 5 years after object creation. Configure a second lifecycle policy to move the raw data into Amazon S3 Glacier for long-term archival 7 days after object creation.
  3. Use an AWS Glue ETL job to compress, partition, and convert the data into a columnar data format. Use Athena to query the processed dataset. Configure a lifecycle policy to move the processed data into the Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage class 5 years after the object was last accessed. Configure a second lifecycle policy to move the raw data into Amazon S3 Glacier for long-term archival 7 days after the last date the object was accessed.
  4. Use an AWS Glue ETL job to partition and convert the data into a row-based data format. Use Athena to query the processed dataset. Configure a lifecycle policy to move the data into the Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage class 5 years after the object was last accessed. Configure a second lifecycle policy to move the raw data into Amazon S3 Glacier for long-term archival 7 days after the last date the object was accessed.

Answer(s): A
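
Note that the lifecycle rules in option 1 (answer A) are keyed to days since object creation, which is how S3 lifecycle transitions work; the "last accessed" triggers in options 3 and 4 are not something lifecycle transition rules support. A rough boto3 sketch of the two rules follows, with a hypothetical bucket name and prefixes.

    import boto3

    s3 = boto3.client("s3")

    # Hypothetical bucket and prefixes for the processed and raw datasets.
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-analytics-data",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "processed-to-standard-ia-after-5-years",
                    "Filter": {"Prefix": "processed/"},
                    "Status": "Enabled",
                    # 5 years, expressed in days since object creation.
                    "Transitions": [{"Days": 1825, "StorageClass": "STANDARD_IA"}],
                },
                {
                    "ID": "raw-to-glacier-after-7-days",
                    "Filter": {"Prefix": "raw/"},
                    "Status": "Enabled",
                    "Transitions": [{"Days": 7, "StorageClass": "GLACIER"}],
                },
            ]
        },
    )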



An energy company collects voltage data in real time from sensors that are attached to buildings. The company wants to receive notifications when a sequence of two voltage drops is detected within 10 minutes of a sudden voltage increase at the same building. All notifications must be delivered as quickly as possible. The system must be highly available. The company needs a solution that will automatically scale when this monitoring feature is implemented in other cities. The notification system is subscribed to an Amazon Simple Notification Service (Amazon SNS) topic for remediation.
Which solution will meet these requirements?

  1. Create an Amazon Managed Streaming for Apache Kafka cluster to ingest the data. Use Apache Spark Streaming with the Apache Kafka consumer API in an automatically scaled Amazon EMR cluster to process the incoming data. Use the Spark Streaming application to detect the known event sequence and send the SNS message.
  2. Create a REST-based web service by using Amazon API Gateway in front of an AWS Lambda function. Create an Amazon RDS for PostgreSQL database with sufficient Provisioned IOPS to meet current demand. Configure the Lambda function to store incoming events in the RDS for PostgreSQL database, query the latest data to detect the known event sequence, and send the SNS message.
  3. Create an Amazon Kinesis Data Firehose delivery stream to capture the incoming sensor data. Use an AWS Lambda transformation function to detect the known event sequence and send the SNS message.
  4. Create an Amazon Kinesis data stream to capture the incoming sensor data. Create another stream for notifications. Set up AWS Application Auto Scaling on both streams. Create an Amazon Kinesis Data Analytics for Java application to detect the known event sequence, and add a message to the message stream. Configure an AWS Lambda function to poll the message stream and publish to the SNS topic.

Answer(s): C


Reference:

https://aws.amazon.com/kinesis/data-streams/faqs/
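
The listed answer (C, i.e. option 3) places the detection logic inside a Kinesis Data Firehose transformation Lambda function. The sketch below only illustrates the Firehose record-transformation contract (base64-encoded records in, records with a result status out) plus an SNS publish when a hypothetical detect_sequence helper flags a record. The topic ARN and the helper are placeholders; detecting two drops within 10 minutes of a spike would additionally require state shared across records and invocations, which this stateless skeleton does not implement.

    import base64
    import json
    import boto3

    sns = boto3.client("sns")
    # Hypothetical SNS topic for remediation notifications.
    TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:voltage-alerts"


    def detect_sequence(record):
        # Placeholder check; real detection of the drop/spike sequence
        # would need state kept across records and time windows.
        return record.get("event_type") == "sequence_detected"


    def lambda_handler(event, context):
        output = []
        for rec in event["records"]:
            payload = json.loads(base64.b64decode(rec["data"]))
            if detect_sequence(payload):
                sns.publish(
                    TopicArn=TOPIC_ARN,
                    Message=json.dumps(payload),
                    Subject="Voltage anomaly detected",
                )
            output.append({
                "recordId": rec["recordId"],
                "result": "Ok",
                "data": rec["data"],  # pass the record through unchanged
            })
        return {"records": output}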



A media company has a streaming playback application. The company needs to collect and analyze data to provide near-real-time feedback on playback issues within 30 seconds. The company requires a consumer application to identify playback issues, such as decreased quality during a specified time frame. The data will be streamed in JSON format. The schema can change over time.
Which solution will meet these requirements?

  1. Send the data to Amazon Kinesis Data Firehose with delivery to Amazon S3. Configure an S3 event to invoke an AWS Lambda function to process and analyze the data.
  2. Send the data to Amazon Managed Streaming for Apache Kafka. Configure Amazon Kinesis Data Analytics for SQL Application as the consumer application to process and analyze the data.
  3. Send the data to Amazon Kinesis Data Firehose with delivery to Amazon S3. Configure Amazon S3 to initiate an event for AWS Lambda to process and analyze the data.
  4. Send the data to Amazon Kinesis Data Streams. Configure an Amazon Kinesis Data Analytics for Apache Flink application as the consumer application to process and analyze the data.

Answer(s): D
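
In the selected design (option 4, answer D), producers write raw JSON playback events to a Kinesis data stream and the Kinesis Data Analytics for Apache Flink application consumes and analyzes them, so the schema can evolve without a fixed table definition up front. A minimal producer-side sketch follows; the stream name and event fields are hypothetical.

    import json
    import boto3

    kinesis = boto3.client("kinesis")

    # Hypothetical playback event; the consumer reads raw JSON, so new
    # fields can be added over time without breaking ingestion.
    event = {
        "session_id": "abc-123",
        "bitrate_kbps": 480,
        "buffering_ms": 1200,
        "timestamp": "2023-06-01T12:00:00Z",
    }

    kinesis.put_record(
        StreamName="playback-events",
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=event["session_id"],
    )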





