Your team uses thousands of connected IoT devices to collect device maintenance data for your oil and gas customers in real time. You want to design inspection routines, device repair, and replacement schedules based on insights gathered from the data produced by these devices. You need a managed solution that is highly scalable, supports a multi-cloud strategy, and offers low latency for these IoT devices. What should you do?
Answer(s): C
This scenario has Bigtable written all over it: large amounts of data from many devices, analysed in real time. One could even argue it qualifies as a multi-cloud solution, given its HBase compatibility. But Bigtable does not support SQL queries and is therefore not compatible (on its own) with Looker. Firestore + Looker has the same problem. Spanner + Data Studio is at least a compatible pairing, but I agree with others that it doesn't fit this use case, not least because it is Google-native rather than multi-cloud. By contrast, MongoDB Atlas is a managed solution (just not managed by Google) that is compatible with the proposed reporting tool (MongoDB's own Charts), is designed for exactly this type of workload, and can of course run on any cloud.
Your application follows a microservices architecture and uses a single large Cloud SQL instance, which is starting to have performance issues as your application grows. In the Cloud Monitoring dashboard, the CPU utilization looks normal. You want to follow Google-recommended practices to resolve and prevent these performance issues while avoiding any major refactoring. What should you do?
Answer(s): D
https://cloud.google.com/sql/docs/mysql/best-practices#data-arch
You need to perform a one-time migration of data from a running Cloud SQL for MySQL instance in the us-central1 region to a new Cloud SQL for MySQL instance in the us-east1 region. You want to follow Google-recommended practices to minimize performance impact on the currently running instance. What should you do?
https://cloud.google.com/sql/docs/mysql/import-export#serverless
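The linked page covers serverless exports, which offload the dump to a temporary instance so the running primary is not impacted. A hedged sketch of the one-time migration using that approach (instance names, the bucket, the database name, and the tier below are illustrative placeholders, and the bucket must be writable by the source instance's service account):

```shell
# Serverless export (--offload) from the source instance in us-central1;
# the dump runs on a temporary instance, minimizing load on the primary.
gcloud sql export sql source-instance \
    gs://my-migration-bucket/dump.sql \
    --database=mydb \
    --offload

# Create the new instance in us-east1 (version and tier are illustrative).
gcloud sql instances create target-instance \
    --database-version=MYSQL_8_0 \
    --region=us-east1 \
    --tier=db-n1-standard-2

# Import the dump into the new instance.
gcloud sql import sql target-instance \
    gs://my-migration-bucket/dump.sql \
    --database=mydb
```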
You are running a mission-critical application on a Cloud SQL for PostgreSQL database with a multi-zonal setup. The primary and read replica instances are in the same region but in different zones. You need to ensure that you split the application load between both instances. What should you do?
Answer(s): B
https://severalnines.com/blog/how-achieve-postgresql-high-availability-pgbouncer/
https://cloud.google.com/blog/products/databases/using-haproxy-to-scale-read-only-workloads-on-cloud-sql-for-postgresql
This answer is correct because PgBouncer is a lightweight connection pooler for PostgreSQL that can help you distribute read requests between the Cloud SQL primary and read replica instances. PgBouncer also improves performance and scalability by reducing the overhead of creating new connections and by reusing existing ones. You can install PgBouncer on a Compute Engine instance and configure it to connect to the Cloud SQL instances using private IP addresses or the Cloud SQL Auth Proxy.
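As a minimal sketch of how this could look, a pgbouncer.ini on the Compute Engine instance might expose separate write and read endpoints pointing at the primary and the replica. All IP addresses and database names below are hypothetical placeholders; note also that PgBouncer pools connections per endpoint rather than load-balancing across hosts, so the application (or a fronting proxy such as HAProxy, per the second link) chooses which endpoint each query goes to:

```ini
[databases]
; writes go to the primary's private IP (placeholder address)
appdb_rw = host=10.0.0.5 port=5432 dbname=appdb
; reads go to the read replica's private IP (placeholder address)
appdb_ro = host=10.0.1.7 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling releases server connections between transactions
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
```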
Your organization deployed a new version of a critical application that uses Cloud SQL for MySQL with high availability (HA) and binary logging enabled to store transactional information. The latest release of the application had an error that caused massive data corruption in your Cloud SQL for MySQL database. You need to minimize data loss. What should you do?
With binary logging enabled, you can identify the point in time at which the data was still good and recover to that moment. https://cloud.google.com/sql/docs/mysql/backup-recovery/pitr#perform_the_point-in-time_recovery_using_binary_log_positions
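Concretely, a point-in-time recovery on Cloud SQL is performed by cloning the instance to a timestamp just before the bad release went live. A hedged sketch (the instance names and the timestamp are placeholders):

```shell
# Clone the corrupted instance to a new instance, restored to a point in time
# just before the faulty release was deployed (timestamp is RFC 3339, UTC).
gcloud sql instances clone prod-mysql prod-mysql-recovered \
    --point-in-time '2024-01-15T08:30:00.000Z'
```

After verifying the recovered data, the application can be repointed at the new instance.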
Share your comments for Google PROFESSIONAL CLOUD DATABASE ENGINEER exam with other users:
just passed the exam on my first try using these dumps.
very helpful
these questions look good.
this is very helpful content
please provide the dumps
it is amazing
question 178 about "a banking system that predicts whether a loan will be repaid is an example of the" — the answer is classification, not regression. you should fix it.
please upload apache spark dumps
q14 is b&c: to reduce mail you switch off email for every single alert and switch on the daily digest to get one mail per day. you might even skip the empty digest mail, but i see that as part of the daily digest adjustment.
i think it is good question
good for students who wish to give certification.
is there a google drive link to the images? the links in questions are not working.
very promising, looks great, so much wow!
i scored 87% on the az-204 exam. thanks! i always trust
good need more
sample questions seems good
huawei is ok
good one nice
please continue
this exam dump just did the job. i do not want to ruffle your feathers, but your exam dumps and mock test engine are amazing.
nice questions
the explanations are really helpful
just passed my exam yesterday on my first attempt. these dumps were extremely helpful in passing first time. the questions were very, very similar to these questions!
cosmos db is paas not saas
what is the percentage of common questions in gcp exam compared to 197 dump questions? are they 100% matching with real gcp exam?
not able to see questions
by far one of the best sites for free questions. i have pass 2 exams with the help of this website.
excellent question bank.
it really helped
excellent material
the new version of this exam which i downloaded has all the latest questions from the exam. i only saw 3 new questions in the exam that were not in this dump.
question 8 - can cloudtrail be used for storing jobs? based on aws - aws cloudtrail is used for governance, compliance and investigating api usage across all of our aws accounts. every action that is taken by a user or script is an api call so this is logged to [aws] cloudtrail. something seems incorrect here.
question 13 tda - c01 answer : quick table calculation -> percentage of total , compute using table down
please share the dump