Snowflake SnowPro Advanced Data Engineer Exam

A Data Engineer wants to check the status of a pipe named my_pipe. The pipe is inside a database named test and a schema named Extract (case-sensitive).

Which query will provide the status of the pipe?

  A. SELECT SYSTEM$PIPE_STATUS("test.'extract'.my_pipe");
  B. SELECT SYSTEM$PIPE_STATUS('test."Extract".my_pipe');
  C. SELECT * FROM SYSTEM$PIPE_STATUS('test."Extract".my_pipe');
  D. SELECT * FROM SYSTEM$PIPE_STATUS("test.'extract'.my_pipe");

Answer(s): B
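
For reference, SYSTEM$PIPE_STATUS is a scalar system function, so it is called in a SELECT list rather than a FROM clause, and it takes the fully qualified pipe name as a single-quoted string. Because the schema name Extract is case-sensitive, it must be double-quoted inside that string. A minimal sketch of pulling the execution state out of the JSON the function returns:

  -- The case-sensitive schema "Extract" is double-quoted inside the
  -- single-quoted argument; PARSE_JSON exposes the returned JSON fields.
  SELECT PARSE_JSON(SYSTEM$PIPE_STATUS('test."Extract".my_pipe')):executionState::VARCHAR
    AS execution_state;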



Company A and Company B both have Snowflake accounts. Company A's account is hosted on a different cloud provider and region than Company B's account. Companies A and B are not in the same Snowflake organization.

How can Company A share data with Company B? (Choose two.)

  A. Create a share within Company A's account and add Company B's account as a recipient of that share.
  B. Create a share within Company A's account, and create a reader account that is a recipient of the share. Grant Company B access to the reader account.
  C. Use database replication to replicate Company A's data into Company B's account. Create a share within Company B's account and grant users within Company B's account access to the share.
  D. Create a new account within Company A's organization in the same cloud provider and region as Company B's account. Use database replication to replicate Company A's data to the new account. Create a share within the new account, and add Company B's account as a recipient of that share.
  E. Create a separate database within Company A's account to contain only those data sets they wish to share with Company B. Create a share within Company A's account and add all the objects within this separate database to the share. Add Company B's account as a recipient of the share.

Answer(s): B,D
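
Direct shares are scoped to the provider's region and cloud platform, which is why the replication-based option works where a plain share would not. A rough sketch of that path, with all organization, account, database, and share names hypothetical:

  -- In Company A's source account: allow replication to the new account
  -- created in Company B's cloud provider and region (hypothetical names).
  ALTER DATABASE sales_db ENABLE REPLICATION TO ACCOUNTS myorg.company_a_remote;

  -- In the new account: create and refresh a replica of the database.
  CREATE DATABASE sales_db AS REPLICA OF myorg.company_a_primary.sales_db;
  ALTER DATABASE sales_db REFRESH;

  -- Still in the new account: share the replicated data with Company B
  -- (grants on schemas and tables would follow the database grant).
  CREATE SHARE sales_share;
  GRANT USAGE ON DATABASE sales_db TO SHARE sales_share;
  ALTER SHARE sales_share ADD ACCOUNTS = companyb_org.company_b_account;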



A Data Engineer is trying to load the following rows from a CSV file into a table in Snowflake with the following structure:

[Exhibit: sample CSV rows and the three-column target table]

The engineer is using the following COPY INTO statement:

[Exhibit: COPY INTO statement]

However, the following error is received:

Number of columns in file (6) does not match that of the corresponding table (3), use file format option error_on_column_count_mismatch=false to ignore this error
File 'address.csv.gz', line 3, character 1
Row 1 starts at line 2, column "STGCUSTOMER"[6]
If you would like to continue loading when an error is encountered, use other values such as 'SKIP_FILE' or 'CONTINUE' for the ON_ERROR option.

Which file format option should be used to resolve the error and successfully load all the data into the table?

  A. ESCAPE_UNENCLOSED_FIELD = '\\'
  B. ERROR_ON_COLUMN_COUNT_MISMATCH = FALSE
  C. FIELD_DELIMITER = ','
  D. FIELD_OPTIONALLY_ENCLOSED_BY = '"'

Answer(s): D
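
The extra columns in the error come from comma characters embedded in quoted field values, so telling the parser that fields may be enclosed in double quotes collapses them back to three columns. A minimal sketch of the corrected load; the stage name is hypothetical, while the table and file names come from the error message:

  -- Quoted fields containing embedded commas now parse as single columns.
  COPY INTO stgcustomer
    FROM @my_stage
    FILES = ('address.csv.gz')
    FILE_FORMAT = (TYPE = 'CSV'
                   FIELD_DELIMITER = ','
                   FIELD_OPTIONALLY_ENCLOSED_BY = '"');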



A Data Engineer is working on a continuous data pipeline that receives data from Amazon Kinesis Firehose and loads it into a staging table that will later be used in the data transformation process. The average file size is 300-500 MB.

The Engineer needs to ensure that Snowpipe is performant while minimizing costs.

How can this be achieved?

  A. Increase the size of the virtual warehouse used by Snowpipe.
  B. Split the files before loading them and set the SIZE_LIMIT option to 250 MB.
  C. Change the file compression size and increase the frequency of the Snowpipe loads.
  D. Decrease the buffer size to trigger delivery of files sized between 100 to 250 MB in Kinesis Firehose.

Answer(s): D
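
Snowpipe is serverless, so there is no user-managed warehouse to resize; cost tracks the number and size of files ingested, and Snowflake's general guidance is files of roughly 100-250 MB compressed. The sizing itself is tuned in Firehose's buffering hints; on the Snowflake side, a minimal auto-ingest pipe over the landing stage might look like this (all names hypothetical, and the file format depends on what Firehose delivers):

  -- External stage over the S3 location Firehose writes to (hypothetical).
  CREATE PIPE staging_pipe AUTO_INGEST = TRUE AS
    COPY INTO staging_table
    FROM @firehose_stage
    FILE_FORMAT = (TYPE = 'JSON');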



What is a characteristic of the operations of streams in Snowflake?

  A. Whenever a stream is queried, the offset is automatically advanced.
  B. When a stream is used to update a target table, the offset is advanced to the current time.
  C. Querying a stream returns all change records and table rows from the current offset to the current time.
  D. Each committed and uncommitted transaction on the source table automatically puts a change record in the stream.

Answer(s): B
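
To see why this is the correct choice: querying a stream with a plain SELECT never moves its offset; the offset advances only when the stream is consumed by a DML statement whose transaction commits. A minimal sketch with hypothetical table and stream names:

  -- Create a stream that tracks changes to the source table.
  CREATE STREAM orders_stream ON TABLE orders;

  -- Reading the stream does NOT advance the offset.
  SELECT * FROM orders_stream;

  -- Consuming the stream in DML advances the offset when the
  -- transaction commits.
  INSERT INTO orders_history
    SELECT * FROM orders_stream;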



At what isolation level are Snowflake streams?

  A. Snapshot
  B. Repeatable read
  C. Read committed
  D. Read uncommitted

Answer(s): B
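
Repeatable read here means that within a single transaction, repeated reads of a stream return the same change set, even if new DML lands on the source table in the meantime. A minimal sketch, reusing the hypothetical orders_stream from the previous example:

  BEGIN;
  SELECT COUNT(*) FROM orders_stream;  -- returns some count N
  -- ...concurrent inserts hit the source table from another session...
  SELECT COUNT(*) FROM orders_stream;  -- still N inside this transaction
  COMMIT;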



A CSV file, around 1 TB in size, is generated daily on an on-premise server. A corresponding table, internal stage, and file format have already been created in Snowflake to facilitate the data loading process.

How can the process of bringing the CSV file into Snowflake be automated using the LEAST amount of operational overhead?

  A. Create a task in Snowflake that executes once a day and runs a COPY INTO statement that references the internal stage. The internal stage will read the files directly from the on-premise server and copy the newest file into the Snowflake table.
  B. On the on-premise server, schedule a SQL file to run using SnowSQL that executes a PUT to push a specific file to the internal stage. Create a task that executes once a day in Snowflake and runs a COPY INTO statement that references the internal stage. Schedule the task to start after the file lands in the internal stage.
  C. On the on-premise server, schedule a SQL file to run using SnowSQL that executes a PUT to push a specific file to the internal stage. Create a pipe that runs a COPY INTO statement that references the internal stage. Snowpipe auto-ingest will automatically load the file from the internal stage when the new file lands in the internal stage.
  D. On the on-premise server, schedule a Python file that uses the Snowpark Python library. The Python script will read the CSV data into a DataFrame and generate an INSERT INTO statement that will load directly into the table. The script will bypass the need to move a file into an internal stage.

Answer(s): B
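
The chosen approach pairs a scheduled SnowSQL PUT on the on-premise server with a scheduled task in Snowflake. A minimal sketch; the file path, stage, task, warehouse, table, and file format names are all hypothetical:

  -- On the on-premise server, run via SnowSQL after the file is generated:
  PUT file:///data/export/daily.csv @my_internal_stage AUTO_COMPRESS = TRUE;

  -- In Snowflake, a task scheduled to run after the nightly upload:
  CREATE TASK load_daily_csv
    WAREHOUSE = load_wh
    SCHEDULE = 'USING CRON 0 6 * * * UTC'
  AS
    COPY INTO my_table
    FROM @my_internal_stage
    FILE_FORMAT = (FORMAT_NAME = 'my_csv_format')
    PURGE = TRUE;  -- remove staged files after a successful load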



A company is using Snowpipe to bring millions of rows of Change Data Capture (CDC) data into a Snowflake staging table every day on a real-time basis. The CDC data needs to be processed and combined with other data in Snowflake and land in a final table as part of the full data pipeline.

How can a Data Engineer MOST efficiently process the incoming CDC on an ongoing basis?

  A. Create a stream on the staging table and schedule a task that transforms data from the stream, only when the stream has data.
  B. Transform the data during the data load with Snowpipe by modifying the related COPY INTO statement to include transformation steps such as CASE statements and JOINs.
  C. Schedule a task that dynamically retrieves the last time the task was run from information_schema.task_history and uses that timestamp to process the delta of new rows since the last run.
  D. Use a CREATE OR REPLACE TABLE AS statement that references the staging table and includes all the transformation SQL. Use a task to run the full CREATE OR REPLACE TABLE AS statement on a scheduled basis.

Answer(s): A
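
A stream plus a conditional task is the idiomatic pattern here: the task only spins up a warehouse when the stream actually holds new CDC rows, so idle runs cost nothing. A minimal sketch with hypothetical object and column names:

  -- Track changes landing in the Snowpipe staging table.
  CREATE STREAM cdc_stream ON TABLE staging_cdc;

  -- Runs every five minutes but is skipped when the stream is empty.
  CREATE TASK process_cdc
    WAREHOUSE = transform_wh
    SCHEDULE = '5 MINUTE'
    WHEN SYSTEM$STREAM_HAS_DATA('CDC_STREAM')
  AS
    MERGE INTO final_table t
    USING cdc_stream s ON t.id = s.id
    WHEN MATCHED THEN UPDATE SET t.value = s.value
    WHEN NOT MATCHED THEN INSERT (id, value) VALUES (s.id, s.value);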


