Google Cloud Architect Professional Exam (Page 6 of 39)
Google Cloud Certified - Professional Cloud Architect
Updated on: 12-Jan-2026


Company Overview
TerramEarth manufactures heavy equipment for the mining and agricultural industries. About 80% of their business is from mining and 20% from agriculture. They currently have over 500 dealers and service centers in 100 countries. Their mission is to build products that make their customers more productive.
Solution Concept
There are 20 million TerramEarth vehicles in operation that collect 120 fields of data per second. Data is stored locally on the vehicle and can be accessed for analysis when a vehicle is serviced. The data is downloaded via a maintenance port. This same port can be used to adjust operational parameters, allowing the vehicles to be upgraded in the field with new computing modules.
Approximately 200,000 vehicles are connected to a cellular network, allowing TerramEarth to collect data directly. At a rate of 120 fields of data per second, with 22 hours of operation per day, TerramEarth collects a total of about 9 TB/day from these connected vehicles.
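The ~9 TB/day figure implies an average field size the case study never states; a quick sanity check (the bytes-per-field value below is an assumption, not from the case study):

```python
# Sanity-check the ~9 TB/day ingest estimate from the case study.
# Assumption (not stated in the case study): each field averages ~5 bytes.
connected_vehicles = 200_000
fields_per_second = 120
hours_per_day = 22
bytes_per_field = 5  # hypothetical average field size

samples_per_day = connected_vehicles * fields_per_second * hours_per_day * 3600
tb_per_day = samples_per_day * bytes_per_field / 1e12

print(f"{tb_per_day:.1f} TB/day")  # prints "9.5 TB/day"
```

At roughly 5 bytes per field the numbers land on the ~9 TB/day the case study quotes, which is useful context when sizing ingestion pipelines for the questions below.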
Existing Technical Environment
TerramEarth's existing architecture is composed of Linux and Windows-based systems that reside in a single U.S. west coast-based data center. These systems gzip CSV files from the field, upload them via FTP, and place the data in their data warehouse. Because this process takes time, aggregated reports are based on data that is 3 weeks old.
With this data, TerramEarth has been able to preemptively stock replacement parts and reduce unplanned downtime of their vehicles by 60%. However, because the data is stale, some customers are without their vehicles for up to 4 weeks while they wait for replacement parts.
Business Requirements
Decrease unplanned vehicle downtime to less than 1 week

Support the dealer network with more data on how their customers use their equipment to better position new products and services
Have the ability to partner with different companies, especially seed and fertilizer suppliers in the fast-growing agricultural business, to create compelling joint offerings for their customers

Technical Requirements
Expand beyond a single datacenter to decrease latency to the American midwest and east coast

Create a backup strategy

Increase security of data transfer from equipment to the datacenter

Improve data in the data warehouse

Use customer and equipment data to anticipate customer needs

Application 1: Data ingest
A custom Python application reads uploaded data files from a single server and writes to the data warehouse.
Compute:
Windows Server 2008 R2

- 16 CPUs
- 128 GB of RAM
- 10 TB local HDD storage
Application 2: Reporting
An off-the-shelf application that business analysts use to run a daily report to see which equipment needs repair.
Only 2 analysts of a team of 10 (5 west coast, 5 east coast) can connect to the reporting application at a time.
Compute:
Off-the-shelf application; license tied to the number of physical CPUs

- Windows Server 2008 R2
- 16 CPUs
- 32 GB of RAM

- 500 GB HDD
Data warehouse:
A single PostgreSQL server

- RedHat Linux
- 64 CPUs
- 128 GB of RAM
- 4x 6TB HDD in RAID 0
Executive Statement
Our competitive advantage has always been in our manufacturing process, with our ability to build better vehicles for lower cost than our competitors. However, new products with different approaches are constantly being developed, and I'm concerned that we lack the skills to undergo the next wave of transformations in our industry. My goals are to build our skills while addressing immediate market needs through incremental innovations.

For this question, refer to the TerramEarth case study. A new architecture that writes all incoming data to BigQuery has been introduced. You notice that the data is dirty, and want to ensure data quality on an automated daily basis while managing cost.

What should you do?

  1. Set up a streaming Cloud Dataflow job, receiving data by the ingestion process. Clean the data in a Cloud Dataflow pipeline.
  2. Create a Cloud Function that reads data from BigQuery and cleans it. Trigger the Cloud Function from a Compute Engine instance.
  3. Create a SQL statement on the data in BigQuery, and save it as a view. Run the view daily, and save the result to a new table.
  4. Use Cloud Dataprep and configure the BigQuery tables as the source. Schedule a daily job to clean the data.

Answer(s): C
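Option C amounts to defining the cleaning logic once as a view and materializing it on a schedule (for example with a BigQuery scheduled query), which keeps cost down because no extra service runs between jobs. A minimal sketch of what that SQL might look like; the dataset, table, and column names (`telemetry.raw_events`, `vehicle_id`, and so on) are hypothetical:

```python
# Sketch of option C: a cleaning view, materialized daily into a new table.
# All dataset/table/column names below are hypothetical placeholders.
clean_view_sql = """
CREATE OR REPLACE VIEW telemetry.clean_events AS
SELECT
  vehicle_id,
  SAFE_CAST(engine_temp AS FLOAT64) AS engine_temp,
  event_ts
FROM telemetry.raw_events
WHERE vehicle_id IS NOT NULL
  AND event_ts IS NOT NULL
"""

# The daily job (e.g. a BigQuery scheduled query) then materializes the view:
daily_job_sql = """
CREATE OR REPLACE TABLE telemetry.clean_events_daily AS
SELECT * FROM telemetry.clean_events
"""
```

`SAFE_CAST` returns NULL instead of failing on malformed values, which is a common way to keep a cleaning query robust against dirty input.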





For this question, refer to the TerramEarth case study. Considering the technical requirements, how should you reduce the unplanned vehicle downtime in GCP?

  1. Use BigQuery as the data warehouse. Connect all vehicles to the network and stream data into BigQuery using Cloud Pub/Sub and Cloud Dataflow. Use Google Data Studio for analysis and reporting.
  2. Use BigQuery as the data warehouse. Connect all vehicles to the network and upload gzip files to a Multi- Regional Cloud Storage bucket using gcloud. Use Google Data Studio for analysis and reporting.
  3. Use Cloud Dataproc Hive as the data warehouse. Upload gzip files to a Multi-Regional Cloud Storage bucket. Upload this data into BigQuery using gcloud. Use Google Data Studio for analysis and reporting.
  4. Use Cloud Dataproc Hive as the data warehouse. Directly stream data into partitioned Hive tables. Use Pig scripts to analyze data.

Answer(s): A





For this question, refer to the TerramEarth case study. You are asked to design a new architecture for the ingestion of the data of the 200,000 vehicles that are connected to a cellular network. You want to follow Google-recommended practices.

Considering the technical requirements, which components should you use for the ingestion of the data?

  1. Google Kubernetes Engine with an SSL Ingress
  2. Cloud IoT Core with public/private key pairs
  3. Compute Engine with project-wide SSH keys
  4. Compute Engine with specific SSH keys

Answer(s): B
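With Cloud IoT Core, a device authenticates by presenting a JWT signed with its private key; IoT Core verifies it with the registered public key. IoT Core requires an asymmetric algorithm (RS256 or ES256), so the HS256 signature below is a simplification used only so the sketch runs with the standard library; the claim structure is the part that matters:

```python
# Sketch of the JWT a device presents to Cloud IoT Core as its MQTT password.
# Real devices must sign with RS256 or ES256 using their private key; HS256
# is used here only so the example runs with the standard library.
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_device_jwt(project_id: str, secret: bytes, ttl: int = 3600) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    now = int(time.time())
    # 'aud' must be the GCP project ID; 'iat'/'exp' bound the token lifetime.
    claims = b64url(json.dumps(
        {"aud": project_id, "iat": now, "exp": now + ttl}).encode())
    signing_input = f"{header}.{claims}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{claims}.{sig}"

token = make_device_jwt("terramearth-demo", b"device-secret")
print(len(token.split(".")))  # prints 3: header, claims, signature
```

The project ID and secret above are placeholders; in production the key pair is generated per device and the public key is registered with the device entry in IoT Core.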




Company overview
TerramEarth manufactures heavy equipment for the mining and agricultural industries. They currently have over 500 dealers and service centers in 100 countries. Their mission is to build products that make their customers more productive.
Solution concept
There are 2 million TerramEarth vehicles in operation currently, and we see 20% yearly growth. Vehicles collect telemetry data from many sensors during operation. A small subset of critical data is transmitted from the vehicles in real time to facilitate fleet management. The rest of the sensor data is collected, compressed, and uploaded daily when the vehicles return to home base. Each vehicle usually generates 200 to 500 megabytes of data per day.
Existing technical environment
TerramEarth's vehicle data aggregation and analysis infrastructure resides in Google Cloud and serves clients from all around the world. A growing amount of sensor data is captured from their two main manufacturing plants and sent to private data centers that contain their legacy inventory and logistics management systems.
The private data centers have multiple network interconnects configured to Google Cloud. The web frontend for dealers and customers is running in Google Cloud and allows access to stock management and analytics.
Business requirements
· Predict and detect vehicle malfunction and rapidly ship parts to dealerships for just-in-time repair where possible.
· Decrease cloud operational costs and adapt to seasonality.
· Increase speed and reliability of development workflow.
· Allow remote developers to be productive without compromising code or data security.
· Create a flexible and scalable platform for developers to create custom API services for dealers and partners.
Technical requirements
· Create a new abstraction layer for HTTP API access to their legacy systems to enable a gradual move into the cloud without disrupting operations.
· Modernize all CI/CD pipelines to allow developers to deploy container-based workloads in highly scalable environments.
· Allow developers to run experiments without compromising security and governance requirements.
· Create a self-service portal for internal and partner developers to create new projects, request resources for data analytics jobs, and centrally manage access to the API endpoints.
· Use cloud-native solutions for keys and secrets management and optimize for identity-based access.
· Improve and standardize tools necessary for application and network monitoring and troubleshooting.
Executive statement
Our competitive advantage has always been our focus on the customer, with our ability to provide excellent customer service and minimize vehicle downtimes. After moving multiple systems into Google Cloud, we are seeking new ways to provide best-in-class online fleet management services to our customers and improve operations of our dealerships. Our 5-year strategic plan is to create a partner ecosystem of new products by enabling access to our data, increasing autonomous operation capabilities of our vehicles, and creating a path to move the remaining legacy systems to the cloud.

For this question, refer to the TerramEarth case study. You start to build a new application that uses a few Cloud Functions for the backend. One use case requires a Cloud Function func_display to invoke another Cloud Function func_query. You want func_query only to accept invocations from func_display. You also want to follow Google's recommended best practices.
What should you do?

  1. 1. Create a token and pass it in as an environment variable to func_display.
    2. When invoking func_query, include the token in the request.
    3. Pass the same token to func_query and reject the invocation if the tokens are different.
  2. 1. Make func_query 'Require authentication.'
    2. Create a unique service account and associate it to func_display.
    3. Grant the service account invoker role for func_query.
    4. Create an id token in func_display and include the token to the request when invoking func_query.
  3. 1. Make func_query 'Require authentication' and only accept internal traffic.
    2. Create those two functions in the same VPC.
    3. Create an ingress firewall rule for func_query to only allow traffic from func_display.
  4. 1. Create those two functions in the same project and VPC.
    2. Make func_query only accept internal traffic.
    3. Create an ingress firewall for func_query to only allow traffic from func_display.
    4. Make sure both functions use the same service account.

Answer(s): B
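In practice, option B's last step means func_display fetches an ID token for func_query's URL from the metadata server and sends it as a bearer token. The sketch below only builds the documented request, since the metadata endpoint resolves only inside Google Cloud; the func_query URL is a hypothetical placeholder:

```python
# How func_display would mint an ID token for func_query (option B, step 4).
# The metadata endpoint is reachable only from inside Google Cloud, so this
# sketch builds the request without sending it; FUNC_QUERY_URL is hypothetical.
FUNC_QUERY_URL = "https://us-central1-terramearth-demo.cloudfunctions.net/func_query"

METADATA_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/identity"
)

def id_token_request(audience: str):
    # GET this URL with the header below; the response body is a JWT signed
    # by Google for func_display's runtime service account.
    return f"{METADATA_URL}?audience={audience}", {"Metadata-Flavor": "Google"}

url, headers = id_token_request(FUNC_QUERY_URL)
# The token is then sent to func_query as:  Authorization: Bearer <id_token>
```

Because func_query is set to "Require authentication," its invocation is rejected unless the caller's service account holds the Cloud Functions Invoker role, which is exactly the per-function access control option B sets up.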





For this question, refer to the TerramEarth case study. You have broken down a legacy monolithic application into a few containerized RESTful microservices. You want to run those microservices on Cloud Run. You also want to make sure the services are highly available with low latency to your customers.
What should you do?

  1. 1. Deploy Cloud Run services to multiple availability zones.
    2. Create an Apigee instance that points to the services.
    3. Create a global external HTTP(S) Load Balancing instance and attach Apigee to its backend.
  2. 1. Deploy Cloud Run services to multiple regions.
    2. Create serverless network endpoint groups pointing to the services.
    3. Add the serverless NEGs to a backend service that is used by a global HTTP(S) Load Balancing instance.
  3. 1. Deploy Cloud Run services to multiple regions.
    2. In Cloud DNS, create a geo-based DNS name that points to the services.
  4. 1. Deploy Cloud Run services to multiple availability zones.
    2. Create a TCP/IP global load balancer, and attach Apigee to its backend service.

Answer(s): B





For this question, refer to the TerramEarth case study. You are migrating a Linux-based application from your private data center to Google Cloud. The TerramEarth security team sent you several recent Linux vulnerabilities published by Common Vulnerabilities and Exposures (CVE). You need assistance in understanding how these vulnerabilities could impact your migration.
What should you do? (Choose two.)

  1. Open a support case regarding the CVE and chat with the support engineer.
  2. Read the CVEs from the Google Cloud Status Dashboard to understand the impact.
  3. Read the CVEs from the Google Cloud Platform Security Bulletins to understand the impact.
  4. Post a question regarding the CVE in Stack Overflow to get an explanation.
  5. Post a question regarding the CVE in a Google Cloud discussion group to get an explanation.

Answer(s): A,C





For this question, refer to the TerramEarth case study. TerramEarth has a legacy web application that you cannot migrate to cloud. However, you still want to build a cloud-native way to monitor the application. If the application goes down, you want the URL to point to a "Site is unavailable" page as soon as possible. You also want your Ops team to receive a notification for the issue. You need to build a reliable solution for minimum cost.
What should you do?

  1. Create a scheduled job in Cloud Run to invoke a container every minute. The container will check the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.
  2. Create a cron job on a Compute Engine VM that runs every minute. The cron job invokes a Python program to check the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.
  3. Create a Cloud Monitoring uptime check to validate the application URL. If it fails, put a message in a Pub/ Sub queue that triggers a Cloud Function to switch the URL to the "Site is unavailable" page, and notify the Ops team.
  4. Use Cloud Error Reporting to check the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.

Answer(s): C
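The core of option C is a Cloud Function triggered by the Pub/Sub message that a Cloud Monitoring alerting policy publishes when the uptime check fails. A minimal sketch of that handler; `switch_url` and `notify_ops` are hypothetical placeholders for the real DNS/load-balancer change and the Ops notification channel, and the incident payload shape is simplified:

```python
# Sketch of the Cloud Function behind option C: triggered by a Pub/Sub
# message from a Cloud Monitoring alert on a failed uptime check.
# switch_url() and notify_ops() are hypothetical placeholders.
import base64, json

def handle_uptime_alert(event, switch_url, notify_ops):
    """Pub/Sub-triggered entry point; event["data"] is base64-encoded JSON."""
    payload = json.loads(base64.b64decode(event["data"]))
    state = payload.get("incident", {}).get("state")
    if state == "open":  # the uptime check is currently failing
        switch_url("https://status.example.com/unavailable")
        notify_ops(f"Legacy app down: {payload['incident'].get('summary', '')}")
        return "failover"
    return "ignored"

# Simulated alert message, roughly as Cloud Monitoring would publish it:
msg = {"data": base64.b64encode(json.dumps(
    {"incident": {"state": "open", "summary": "uptime check failed"}}).encode())}
actions = []
result = handle_uptime_alert(msg, actions.append, actions.append)
print(result)  # prints "failover"
```

This keeps cost minimal because nothing runs between incidents: the uptime check, Pub/Sub, and the function are all pay-per-use, unlike an always-on VM running a cron job.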





For this question, refer to the TerramEarth case study. You are building a microservice-based application for TerramEarth. The application is based on Docker containers. You want to follow Google-recommended practices to build the application continuously and store the build artifacts.
What should you do?

  1. 1. Configure a trigger in Cloud Build for new source changes.
    2. Invoke Cloud Build to build container images for each microservice, and tag them using the code commit hash.
    3. Push the images to the Container Registry.
  2. 1. Configure a trigger in Cloud Build for new source changes.
    2. The trigger invokes build jobs and build container images for the microservices.
    3. Tag the images with a version number, and push them to Cloud Storage.
  3. 1. Create a Scheduler job to check the repo every minute.
    2. For any new change, invoke Cloud Build to build container images for the microservices.
    3. Tag the images using the current timestamp, and push them to the Container Registry.
  4. 1. Configure a trigger in Cloud Build for new source changes.
    2. Invoke Cloud Build to build one container image, and tag the image with the label 'latest.'
    3. Push the image to the Container Registry.

Answer(s): A
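Option A's steps map directly onto a Cloud Build config. The sketch below assumes one hypothetical microservice named `vehicle-api`; `$PROJECT_ID` and `$COMMIT_SHA` are built-in Cloud Build substitutions (the commit hash is populated automatically for trigger-based builds), which is how each image gets tagged with the code commit hash:

```yaml
# Hypothetical cloudbuild.yaml for one microservice (repeat per service).
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/vehicle-api:$COMMIT_SHA', '.']
images:
  # Listing the image here makes Cloud Build push it to the Container
  # Registry and record it as a build artifact.
  - 'gcr.io/$PROJECT_ID/vehicle-api:$COMMIT_SHA'
```

Tagging with the commit hash (rather than `latest` or a timestamp) makes every deployed image traceable to the exact source revision that produced it.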





