Google Cloud Architect Professional Exam
Google Cloud Certified - Professional Cloud Architect
Updated on: 01-Sep-2025


Company Overview

TerramEarth manufactures heavy equipment for the mining and agricultural industries. About 80% of their business is from mining and 20% from agriculture. They currently have over 500 dealers and service centers in 100 countries. Their mission is to build products that make their customers more productive.

Solution Concept

There are 20 million TerramEarth vehicles in operation that collect 120 fields of data per second. Data is stored locally on the vehicle and can be accessed for analysis when a vehicle is serviced. The data is downloaded via a maintenance port. This same port can be used to adjust operational parameters, allowing the vehicles to be upgraded in the field with new computing modules.

Approximately 200,000 vehicles are connected to a cellular network, allowing TerramEarth to collect data directly. At a rate of 120 fields of data per second with 22 hours of operation per day, TerramEarth collects a total of about 9 TB/day from these connected vehicles.
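
As a quick sanity check of that figure, the arithmetic works out if each field sample occupies roughly 5 bytes; the case study does not state the per-field size, so that value is an assumption:

```python
# Back-of-the-envelope check of the ~9 TB/day figure.
vehicles = 200_000        # cellular-connected vehicles
fields_per_sec = 120      # fields collected per second per vehicle
hours_per_day = 22        # daily operating hours
bytes_per_field = 5       # assumed; not stated in the case study

samples_per_day = vehicles * fields_per_sec * hours_per_day * 3600
print(f"{samples_per_day * bytes_per_field / 1e12:.1f} TB/day")  # ~9.5 TB/day
```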

Existing Technical Environment

TerramEarth's existing architecture is composed of Linux- and Windows-based systems that reside in a single data center on the U.S. west coast. These systems gzip CSV files from the field, upload them via FTP, and place the data in their data warehouse. Because this process takes time, aggregated reports are based on data that is 3 weeks old.

With this data, TerramEarth has been able to preemptively stock replacement parts and reduce unplanned downtime of their vehicles by 60%. However, because the data is stale, some customers are without their vehicles for up to 4 weeks while they wait for replacement parts.

Business Requirements

Decrease unplanned vehicle downtime to less than 1 week.
Support the dealer network with more data on how their customers use their equipment to better position new products and services.
Have the ability to partner with different companies, especially with seed and fertilizer suppliers in the fast-growing agricultural business, to create compelling joint offerings for their customers.

Technical Requirements
Expand beyond a single datacenter to decrease latency to the American Midwest and east coast.
Create a backup strategy.
Increase security of data transfer from equipment to the datacenter.
Improve data in the data warehouse.
Use customer and equipment data to anticipate customer needs.

Application 1: Data ingest
A custom Python application reads uploaded data files on a single server and writes to the data warehouse.
Compute:
Windows Server 2008 R2
- 16 CPUs
- 128 GB of RAM
- 10 TB local HDD storage
Application 2: Reporting
An off-the-shelf application that business analysts use to run a daily report to see which equipment needs repair. Only 2 analysts from a team of 10 (5 on the west coast, 5 on the east coast) can connect to the reporting application at a time.
Compute:
Off-the-shelf application; license tied to the number of physical CPUs
- Windows Server 2008 R2
- 16 CPUs
- 32 GB of RAM
- 500 GB HDD
Data warehouse:
A single PostgreSQL server
- RedHat Linux
- 64 CPUs
- 128 GB of RAM
- 4x 6TB HDD in RAID 0

Executive Statement

Our competitive advantage has always been in the manufacturing process, with our ability to build better vehicles for lower cost than our competitors. However, new products with different approaches are constantly being developed, and I'm concerned that we lack the skills to undergo the next wave of transformations in our industry. My goals are to build our skills while addressing immediate market needs through incremental innovations.

You have broken down a legacy monolithic application into a few containerized RESTful microservices. You want to run those microservices on Cloud Run. You also want to make sure the services are highly available with low latency to your customers.
What should you do?

  A. Deploy Cloud Run services to multiple availability zones. Create Cloud Endpoints that point to the services. Create a global HTTP(S) Load Balancing instance and attach the Cloud Endpoints to its backend.
  B. Deploy Cloud Run services to multiple regions. Create serverless network endpoint groups (NEGs) pointing to the services. Add the serverless NEGs to a backend service that is used by a global HTTP(S) Load Balancing instance.
  C. Deploy Cloud Run services to multiple regions. In Cloud DNS, create a latency-based DNS name that points to the services.
  D. Deploy Cloud Run services to multiple availability zones. Create a TCP/IP global load balancer. Add the Cloud Run endpoints to its backend service.

Answer(s): B

Explanation:

Cloud Run is a regional service, so high availability with low global latency requires deploying the service in multiple regions and fronting the regional serverless NEGs with a single global external HTTP(S) load balancer, which routes each request to the closest healthy region.
https://cloud.google.com/run/docs/multiple-regions
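
For illustration, a minimal Python sketch of option B's wiring using the google-cloud-compute client; the project, regions, and resource names are hypothetical, the Cloud Run service is assumed to be deployed already, and the URL map, target proxy, and forwarding rule that complete the load balancer are omitted:

```python
# Wire two regional Cloud Run deployments into one global external
# HTTP(S) load balancer via serverless NEGs.
from google.cloud import compute_v1

PROJECT = "my-project"                     # hypothetical
REGIONS = ["us-central1", "europe-west1"]  # hypothetical

neg_client = compute_v1.RegionNetworkEndpointGroupsClient()
backend_client = compute_v1.BackendServicesClient()

backend = compute_v1.BackendService(
    name="run-backend",
    load_balancing_scheme="EXTERNAL_MANAGED",  # global external HTTP(S) LB
    protocol="HTTPS",
)

for region in REGIONS:
    # One serverless NEG per region, pointing at the Cloud Run service.
    neg = compute_v1.NetworkEndpointGroup(
        name=f"run-neg-{region}",
        network_endpoint_type="SERVERLESS",
        cloud_run=compute_v1.NetworkEndpointGroupCloudRun(service="my-service"),
    )
    neg_client.insert(project=PROJECT, region=region,
                      network_endpoint_group_resource=neg)
    backend.backends.append(compute_v1.Backend(
        group=f"https://www.googleapis.com/compute/v1/projects/{PROJECT}"
              f"/regions/{region}/networkEndpointGroups/run-neg-{region}"))

# A single global backend service fronts all regional NEGs; the load
# balancer then routes each request to the closest healthy region.
backend_client.insert(project=PROJECT, backend_service_resource=backend)
```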





For this question, refer to the TerramEarth case study.

You start to build a new application that uses a few Cloud Functions for the backend. One use case requires a Cloud Function func_display to invoke another Cloud Function func_query. You want func_query only to accept invocations from func_display. You also want to follow Google's recommended best practices.
What should you do?

  A. Create a token and pass it in as an environment variable to func_display. When invoking func_query, include the token in the request. Pass the same token to func_query and reject the invocation if the tokens are different.
  B. Make func_query 'Require authentication.' Create a unique service account and associate it with func_display. Grant the service account the invoker role on func_query. Create an ID token in func_display and include the token in the request when invoking func_query.
  C. Make func_query 'Require authentication' and only accept internal traffic. Create those two functions in the same VPC. Create an ingress firewall rule for func_query to only allow traffic from func_display.
  D. Create those two functions in the same project and VPC. Make func_query only accept internal traffic. Create an ingress firewall rule for func_query to only allow traffic from func_display. Also, make sure both functions use the same service account.

Answer(s): B

Explanation:

Google's recommended pattern for function-to-function calls is IAM-based: enable 'Require authentication' on the receiving function, give the calling function a dedicated service account with the invoker role, and send a Google-signed ID token whose audience is the receiving function's URL.
https://cloud.google.com/functions/docs/securing/authenticating#authenticating_function_to_function_calls
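
As a sketch of that pattern (the function URL below is hypothetical), func_display can mint a Google-signed ID token for func_query's URL and attach it as a bearer token; with 'Require authentication' enabled, the platform verifies the token and the caller's invoker role before func_query runs, so func_query needs no custom verification code:

```python
# Inside func_display: call func_query with an IAM-verified ID token.
import google.auth.transport.requests
import google.oauth2.id_token
import requests

FUNC_QUERY_URL = "https://REGION-PROJECT.cloudfunctions.net/func_query"  # hypothetical

def call_func_query(payload: dict) -> requests.Response:
    # The audience must be the target function's URL. On Cloud Functions
    # the token comes from the metadata server, so no key files are needed.
    auth_req = google.auth.transport.requests.Request()
    token = google.oauth2.id_token.fetch_id_token(auth_req, FUNC_QUERY_URL)
    return requests.post(FUNC_QUERY_URL, json=payload,
                         headers={"Authorization": f"Bearer {token}"})
```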




Company Overview

Mountkirk Games makes online, session-based, multiplayer games for mobile platforms. They build all of their games using some server-side integration. Historically, they have used cloud providers to lease physical servers.
Due to the unexpected popularity of some of their games, they have had problems scaling their global audience, application servers, MySQL databases, and analytics tools. Their current model is to write game statistics to files and send them through an ETL tool that loads them into a centralized MySQL database for reporting.

Solution Concept

Mountkirk Games is building a new game, which they expect to be very popular. They plan to deploy the game's backend on Google Compute Engine so they can capture streaming metrics, run intensive analytics, and take advantage of its autoscaling server environment and integrate with a managed NoSQL database.

Business Requirements
Increase to a global footprint.
Improve uptime: downtime is loss of players.
Increase efficiency of the cloud resources we use.
Reduce latency to all customers.

Technical Requirements
Requirements for Game Backend Platform

Dynamically scale up or down based on game activity.
Connect to a transactional database service to manage user profiles and game state.
Store game activity in a timeseries database service for future analysis.
As the system scales, ensure that data is not lost due to processing backlogs.
Run hardened Linux distro.

Requirements for Game Analytics Platform

Dynamically scale up or down based on game activity
Process incoming data on the fly directly from the game servers.
Process data that arrives late because of slow mobile networks.
Allow queries to access at least 10 TB of historical data.
Process files that are regularly uploaded by users' mobile devices.

Executive Statement

Our last successful game did not scale well with our previous cloud provider, resulting in lower user adoption and affecting the game's reputation. Our investors want more key performance indicators (KPIs) to evaluate the speed and stability of the game, as well as other metrics that provide deeper insight into usage patterns so we can adapt the game to target users. Additionally, our current technology stack cannot provide the scale we need, so we want to replace MySQL and move to an environment that provides autoscaling, low latency load balancing, and frees us up from managing physical servers.

For this question, refer to the Mountkirk Games case study. Mountkirk Games wants to migrate from their current analytics and statistics reporting model to one that meets their technical requirements on Google Cloud Platform.
Which two steps should be part of their migration plan? (Choose two.)

  A. Evaluate the impact of migrating their current batch ETL code to Cloud Dataflow.
  B. Write a schema migration plan to denormalize data for better performance in BigQuery.
  C. Draw an architecture diagram that shows how to move from a single MySQL database to a MySQL cluster.
  D. Load 10 TB of analytics data from a previous game into a Cloud SQL instance, and run test queries against the full dataset to confirm that they complete successfully.
  E. Integrate Cloud Armor to defend against possible SQL injection attacks in analytics files uploaded to Cloud Storage.

Answer(s): A,B

Explanation:

Cloud Dataflow replaces the current batch ETL tool and also supports the required streaming ingestion, and BigQuery performs best with denormalized (nested and repeated) schemas, so both the ETL evaluation and the schema migration plan belong in the migration.
https://cloud.google.com/bigquery/docs/loading-data#loading_denormalized_nested_and_repeated_data
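
To make the intent of both steps concrete, here is a minimal Apache Beam (Dataflow) sketch that loads denormalized rows into BigQuery; the bucket, table, and field names are hypothetical, and the destination table is assumed to exist with a nested schema:

```python
# Batch ETL reworked as a Dataflow pipeline: read game-statistics
# files, denormalize each record, and append it to BigQuery.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def to_denormalized_row(line: str) -> dict:
    stat = json.loads(line)
    # Embed player context in every row instead of joining normalized
    # tables at query time; BigQuery favors nested/repeated schemas.
    return {
        "event_time": stat["ts"],
        "player": {"id": stat["player_id"], "level": stat["level"]},
        "metrics": stat["metrics"],
    }

with beam.Pipeline(options=PipelineOptions()) as p:
    (p
     | "Read" >> beam.io.ReadFromText("gs://mountkirk-stats/*.json")
     | "Denormalize" >> beam.Map(to_denormalized_row)
     | "Load" >> beam.io.WriteToBigQuery(
           "my-project:analytics.game_events",
           create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
           write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))
```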





For this question, refer to the Mountkirk Games case study. You need to analyze and define the technical architecture for the compute workloads for your company, Mountkirk Games. Considering the Mountkirk Games business and technical requirements, what should you do?

  A. Create network load balancers. Use preemptible Compute Engine instances.
  B. Create network load balancers. Use non-preemptible Compute Engine instances.
  C. Create a global load balancer with managed instance groups and autoscaling policies. Use preemptible Compute Engine instances.
  D. Create a global load balancer with managed instance groups and autoscaling policies. Use non-preemptible Compute Engine instances.

Answer(s): D

Explanation:

A global load balancer with autoscaled managed instance groups satisfies the global footprint, low-latency, and dynamic-scaling requirements. Preemptible instances can be reclaimed by Compute Engine at any time, which conflicts with the uptime requirement, so non-preemptible instances are the appropriate choice.
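
As a sketch of the autoscaling half of option D (assuming a regional managed instance group named game-backend-mig already exists; names and thresholds are hypothetical tuning values):

```python
# Attach an autoscaler to an existing regional managed instance group.
from google.cloud import compute_v1

PROJECT = "my-project"    # hypothetical
REGION = "us-central1"    # hypothetical
MIG_URL = (f"https://www.googleapis.com/compute/v1/projects/{PROJECT}"
           f"/regions/{REGION}/instanceGroupManagers/game-backend-mig")

autoscaler = compute_v1.Autoscaler(
    name="game-backend-autoscaler",
    target=MIG_URL,
    autoscaling_policy=compute_v1.AutoscalingPolicy(
        min_num_replicas=3,        # floor for uptime
        max_num_replicas=50,       # ceiling for peak game activity
        cool_down_period_sec=90,
        cpu_utilization=compute_v1.AutoscalingPolicyCpuUtilization(
            utilization_target=0.6,  # scale out above 60% CPU
        ),
    ),
)
compute_v1.RegionAutoscalersClient().insert(
    project=PROJECT, region=REGION, autoscaler_resource=autoscaler)
```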





For this question, refer to the Mountkirk Games case study. Mountkirk Games wants to design their solution for the future in order to take advantage of cloud and technology improvements as they become available.
Which two steps should they take? (Choose two.)

  A. Store as much analytics and game activity data as financially feasible today so it can be used to train machine learning models to predict user behavior in the future.
  B. Begin packaging their game backend artifacts in container images and running them on Kubernetes Engine to improve the ability to scale up or down based on game activity.
  C. Set up a CI/CD pipeline using Jenkins and Spinnaker to automate canary deployments and improve development velocity.
  D. Adopt a schema versioning tool to reduce downtime when adding new game features that require storing additional player data in the database.
  E. Implement a weekly rolling maintenance process for the Linux virtual machines so they can apply critical kernel patches and package updates and reduce the risk of 0-day vulnerabilities.

Answer(s): A,B

Explanation:

Retaining as much analytics and game activity data as is financially feasible preserves the raw material for training future machine-learning models, and packaging the backend as container images on Kubernetes Engine lets the platform scale with game activity and adopt runtime improvements as they become available.


