All that matters here is passing the Google Professional-Cloud-Architect exam, and all you need is a high score on the Professional-Cloud-Architect Google Certified Professional - Cloud Architect (GCP) exam. The only thing you need to do is download the Actualtests Professional-Cloud-Architect exam study guides now. We will not let you down, backed by our money-back guarantee.

Online Professional-Cloud-Architect free questions and answers, new version:

NEW QUESTION 1

You want to establish a Compute Engine application in a single VPC across two regions. The application must communicate over VPN to an on-premises network. How should you deploy the VPN?

  • A. Use VPC Network Peering between the VPC and the on-premises network.
  • B. Expose the VPC to the on-premises network using IAM and VPC Sharing.
  • C. Create a global Cloud VPN Gateway with VPN tunnels from each region to the on-premises peer gateway.
  • D. Deploy a Cloud VPN Gateway in each region. Ensure that each region has at least one VPN tunnel to the on-premises peer gateway.

Answer: C

Explanation:
https://cloud.google.com/vpn/docs/how-to/creating-static-vpns

NEW QUESTION 2

Your customer is moving an existing corporate application to Google Cloud Platform from an on-premises data center. The business owners require minimal user disruption. There are strict security team requirements for storing passwords. What authentication strategy should they use?

  • A. Use G Suite Password Sync to replicate passwords into Google.
  • B. Federate authentication via SAML 2.0 to the existing Identity Provider.
  • C. Provision users in Google using the Google Cloud Directory Sync tool.
  • D. Ask users to set their Google password to match their corporate password.

Answer: B

Explanation:
https://cloud.google.com/solutions/authenticating-corporate-users-in-a-hybrid-environment

NEW QUESTION 3

You are developing a globally scaled frontend for a legacy streaming backend data API. This API expects events in strict chronological order with no repeat data for proper processing.
Which products should you deploy to ensure guaranteed-once FIFO (first-in, first-out) delivery of data?

  • A. Cloud Pub/Sub alone
  • B. Cloud Pub/Sub to Cloud DataFlow
  • C. Cloud Pub/Sub to Stackdriver
  • D. Cloud Pub/Sub to Cloud SQL

Answer: B

Explanation:
Reference https://cloud.google.com/pubsub/docs/ordering
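The value Dataflow adds on top of Pub/Sub, namely reordering by timestamp and dropping duplicate deliveries, can be sketched in plain Python (the event tuples and function name here are illustrative, not part of any Google API):

```python
def order_and_dedup(events):
    """Sort events by timestamp and drop duplicate event IDs,
    emulating the ordering/deduplication a Dataflow pipeline
    layers on top of Pub/Sub's at-least-once delivery."""
    seen = set()
    out = []
    for ts, event_id, payload in sorted(events):
        if event_id in seen:
            continue  # duplicate delivery from Pub/Sub; discard
        seen.add(event_id)
        out.append((ts, event_id, payload))
    return out

# Out-of-order input with one duplicate delivery of event "a":
events = [(3, "c", "z"), (1, "a", "x"), (2, "b", "y"), (1, "a", "x")]
assert order_and_dedup(events) == [(1, "a", "x"), (2, "b", "y"), (3, "c", "z")]
```

This is why Pub/Sub alone (option A) is insufficient for the legacy backend: it does not guarantee order or exactly-once delivery by itself.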

NEW QUESTION 4

Your company wants to start using Google Cloud resources but wants to retain their on-premises Active Directory domain controller for identity management. What should you do?

  • A. Use the Admin Directory API to authenticate against the Active Directory domain controller.
  • B. Use Google Cloud Directory Sync to synchronize Active Directory usernames with cloud identities and configure SAML SSO.
  • C. Use Cloud Identity-Aware Proxy configured to use the on-premises Active Directory domain controller as an identity provider.
  • D. Use Compute Engine to create an Active Directory (AD) domain controller that is a replica of the on-premises AD domain controller using Google Cloud Directory Sync.

Answer: B

Explanation:
https://cloud.google.com/solutions/federating-gcp-with-active-directory-introduction#implementing_federation

NEW QUESTION 5

Your agricultural division is experimenting with fully autonomous vehicles.
You want your architecture to promote strong security during vehicle operation. Which two architectures should you consider?
Choose 2 answers:

  • A. Treat every micro service call between modules on the vehicle as untrusted.
  • B. Require IPv6 for connectivity to ensure a secure address space.
  • C. Use a trusted platform module (TPM) and verify firmware and binaries on boot.
  • D. Use a functional programming language to isolate code execution cycles.
  • E. Use multiple connectivity subsystems for redundancy.
  • F. Enclose the vehicle's drive electronics in a Faraday cage to isolate chips.

Answer: AC

NEW QUESTION 6

You are using Cloud CDN to deliver static HTTP(S) website content hosted on a Compute Engine instance group. You want to improve the cache hit ratio.
What should you do?

  • A. Customize the cache keys to omit the protocol from the key.
  • B. Shorten the expiration time of the cached objects.
  • C. Make sure the HTTP(S) header “Cache-Region” points to the closest region of your users.
  • D. Replicate the static content in a Cloud Storage bucket. Point Cloud CDN toward a load balancer on that bucket.

Answer: A

Explanation:
Reference https://cloud.google.com/cdn/docs/bestpractices#using_custom_cache_keys_to_improve_cache_hit_ratio
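A small sketch of why omitting the protocol from the cache key improves the hit ratio: with the default key, HTTP and HTTPS requests for the same object occupy two cache entries. The function below is an illustration of the concept, not Cloud CDN's actual key format.

```python
from urllib.parse import urlsplit

def cache_key(url, include_protocol=True):
    """Build a simplified CDN-style cache key from a URL.

    With include_protocol=False, http:// and https:// requests for the
    same resource collapse into a single cache entry, which is what the
    Cloud CDN custom cache key option achieves."""
    parts = urlsplit(url)
    key = parts.netloc + parts.path
    if parts.query:
        key += "?" + parts.query
    if include_protocol:
        key = parts.scheme + "://" + key
    return key

# Default: the same object is cached once per protocol (two entries).
assert cache_key("http://example.com/a.css") != cache_key("https://example.com/a.css")
# Protocol omitted: both requests hit one cached copy.
assert cache_key("http://example.com/a.css", include_protocol=False) == \
       cache_key("https://example.com/a.css", include_protocol=False)
```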

NEW QUESTION 7

The current Dress4win system architecture has high latency for some customers because it is located in one data center.
As part of a future evaluation aimed at optimizing for performance in the cloud, Dress4win wants to distribute its system architecture across multiple locations on Google Cloud Platform. Which approach should they use?

  • A. Use regional managed instance groups and a global load balancer to increase performance because the regional managed instance group can grow instances in each region separately based on traffic.
  • B. Use a global load balancer with a set of virtual machines that forward the requests to a closer group ofvirtual machines managed by your operations team.
  • C. Use regional managed instance groups and a global load balancer to increase reliability by providing automatic failover between zones in different regions.
  • D. Use a global load balancer with a set of virtual machines that forward the requests to a closer group of virtual machines as part of a separate managed instance groups.

Answer: A

NEW QUESTION 8

You are building a continuous deployment pipeline for a project stored in a Git source repository and want to ensure that code changes can be verified before deploying to production. What should you do?

  • A. Use Spinnaker to deploy builds to production using the red/black deployment strategy so that changes can easily be rolled back.
  • B. Use Spinnaker to deploy builds to production and run tests on production deployments.
  • C. Use Jenkins to build the staging branches and the master branch. Build and deploy changes to production for 10% of users before doing a complete rollout.
  • D. Use Jenkins to monitor tags in the repository. Deploy staging tags to a staging environment for testing. After testing, tag the repository for production and deploy that to the production environment.

Answer: D

Explanation:
Reference: https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes/blob/master/README.md

NEW QUESTION 9

One of the developers on your team deployed their application in Google Container Engine with the Dockerfile below. They report that their application deployments are taking too long.
(Dockerfile exhibit not shown)
You want to optimize this Dockerfile for faster deployment times without adversely affecting the app’s functionality.
Which two actions should you take? Choose 2 answers.

  • A. Remove Python after running pip.
  • B. Remove dependencies from requirements.txt.
  • C. Use a slimmed-down base image like Alpine linux.
  • D. Use larger machine types for your Google Container Engine node pools.
  • E. Copy the source after the package dependencies (Python and pip) are installed.

Answer: CE

Explanation:
The speed of deployment can be improved by limiting the size of the uploaded app, limiting the complexity of any build the Dockerfile requires, and ensuring a fast and reliable internet connection.
Note: Alpine Linux is built around musl libc and busybox. This makes it smaller and more resource efficient than traditional GNU/Linux distributions. A container requires no more than 8 MB and a minimal installation to disk requires around 130 MB of storage. Not only do you get a fully-fledged Linux environment but a large selection of packages from the repository.
References: https://groups.google.com/forum/#!topic/google-appengine/hZMEkmmObDU https://www.alpinelinux.org/about/
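Since the Dockerfile exhibit is not reproduced here, the following is a hypothetical sketch of what the optimized file might look like; the base image tag, file names, and entry point are assumptions. It applies both fixes: a slim Alpine base (C) and copying the source only after the dependency layer (E), so that routine code edits reuse the cached `pip install` layer.

```dockerfile
# Slim base image keeps the image small and fast to pull (fix C)
FROM python:3.11-alpine
WORKDIR /app

# Install dependencies first so this layer is cached across code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application source last (fix E): editing code invalidates
# only this layer, not the dependency install above
COPY . .

CMD ["python", "main.py"]
```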

NEW QUESTION 10

You are using a single Cloud SQL instance to serve your application from a specific zone. You want to introduce high availability. What should you do?

  • A. Create a read replica instance in a different region
  • B. Create a failover replica instance in a different region
  • C. Create a read replica instance in the same region, but in a different zone
  • D. Create a failover replica instance in the same region, but in a different zone

Answer: D

Explanation:
https://cloud.google.com/sql/docs/mysql/high-availability

NEW QUESTION 11

You write a Python script to connect to Google BigQuery from a Google Compute Engine virtual machine. The script is printing errors that it cannot connect to BigQuery. What should you do to fix the script?

  • A. Install the latest BigQuery API client library for Python
  • B. Run your script on a new virtual machine with the BigQuery access scope enabled
  • C. Create a new service account with BigQuery access and execute your script with that user
  • D. Install the bq component for gcloud with the command gcloud components install bq.

Answer: B

Explanation:
The error is most likely caused by an access-scope issue. When you create a new instance, it runs as the Compute Engine default service account, but most API access scopes, including BigQuery, are not enabled by default. To fix an existing instance, you can stop it, edit its access scopes, and restart it. Alternatively, running the script on a new virtual machine created with the BigQuery access scope enabled also works.
https://cloud.google.com/compute/docs/access/service-accounts
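The stop/edit-scopes/restart fix can be sketched with gcloud; the instance name and zone below are placeholders, not values from the question:

```shell
# Instance must be stopped before its access scopes can be changed
gcloud compute instances stop my-vm --zone us-central1-a
gcloud compute instances set-service-account my-vm --zone us-central1-a \
    --scopes https://www.googleapis.com/auth/bigquery
gcloud compute instances start my-vm --zone us-central1-a
```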

NEW QUESTION 12

Your company acquired a healthcare startup and must retain its customers’ medical information for up to 4 more years, depending on when it was created. Your corporate policy is to securely retain this data, and then delete it as soon as regulations allow.
Which approach should you take?

  • A. Store the data in Google Drive and manually delete records as they expire.
  • B. Anonymize the data using the Cloud Data Loss Prevention API and store it indefinitely.
  • C. Store the data using the Cloud Storage and use lifecycle management to delete files when they expire.
  • D. Store the data in Cloud Storage and run a nightly batch script that deletes all expired data.

Answer: C

Explanation:
https://cloud.google.com/storage/docs/lifecycle
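A minimal sketch of the lifecycle configuration for option C. The JSON shape (`rule`, `action`, `condition.age` in days) follows the Cloud Storage lifecycle format; approximating "4 more years" as 1460 days is an assumption for illustration.

```python
import json

FOUR_YEARS_DAYS = 4 * 365  # assumed calendar approximation of the retention window

# Cloud Storage lifecycle config: delete objects older than the window
lifecycle = {
    "rule": [
        {
            "action": {"type": "Delete"},
            "condition": {"age": FOUR_YEARS_DAYS},
        }
    ]
}

assert lifecycle["rule"][0]["condition"]["age"] == 1460
print(json.dumps(lifecycle, indent=2))
```

Saved to a file, a config like this can be applied to a bucket with `gsutil lifecycle set`, after which Cloud Storage deletes each object automatically once it exceeds the age condition.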

NEW QUESTION 13

You want to optimize the performance of an accurate, real-time, weather-charting application. The data comes from 50,000 sensors sending 10 readings a second, in the format of a timestamp and sensor reading. Where
should you store the data?

  • A. Google BigQuery
  • B. Google Cloud SQL
  • C. Google Cloud Bigtable
  • D. Google Cloud Storage

Answer: C

Explanation:
It is time-series data, so Bigtable: https://cloud.google.com/bigtable/docs/schema-design-time-series
Google Cloud Bigtable is a scalable, fully managed NoSQL wide-column database that is suitable for both real-time access and analytics workloads.
Good for:
  • Low-latency read/write access
  • High-throughput analytics
  • Native time series support
Common workloads:
  • IoT, finance, adtech
  • Personalization, recommendations
  • Monitoring
  • Geospatial datasets
  • Graphs
References: https://cloud.google.com/storage-options/

NEW QUESTION 14

You need to ensure reliability for your application and operations by supporting reliable task scheduling for compute on GCP. Leveraging Google best practices, what should you do?

  • A. Using the Cron service provided by App Engine, publish messages directly to a message-processing utility service running on Compute Engine instances.
  • B. Using the Cron service provided by App Engine, publish messages to a Cloud Pub/Sub topic. Subscribe to that topic using a message-processing utility service running on Compute Engine instances.
  • C. Using the Cron service provided by Google Kubernetes Engine (GKE), publish messages directly to a message-processing utility service running on Compute Engine instances.
  • D. Using the Cron service provided by GKE, publish messages to a Cloud Pub/Sub topic. Subscribe to that topic using a message-processing utility service running on Compute Engine instances.

Answer: B

Explanation:
https://cloud.google.com/solutions/reliable-task-scheduling-compute-engine
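A minimal App Engine cron.yaml sketch for this pattern; the handler path and schedule are assumptions. The handler at `/publish-task` would publish a message to a Cloud Pub/Sub topic, which the Compute Engine worker instances subscribe to:

```yaml
cron:
- description: "enqueue scheduled work (hypothetical job)"
  url: /publish-task        # handler publishes to a Pub/Sub topic
  schedule: every 5 minutes
```

Decoupling the scheduler from the workers via Pub/Sub means a missed or slow worker does not lose the task: the message stays in the subscription until acknowledged.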

NEW QUESTION 15

Your applications will be writing their logs to BigQuery for analysis. Each application should have its own table.
Any logs older than 45 days should be removed. You want to optimize storage and follow Google recommended practices. What should you do?

  • A. Configure the expiration time for your tables at 45 days
  • B. Make the tables time-partitioned, and configure the partition expiration at 45 days
  • C. Rely on BigQuery’s default behavior to prune application logs older than 45 days
  • D. Create a script that uses the BigQuery command line tool (bq) to remove records older than 45 days

Answer: B

Explanation:
https://cloud.google.com/bigquery/docs/managing-partitioned-tables
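The partition-expiration arithmetic for option B, with the bq CLI form sketched as a string (the dataset and table names are hypothetical; the `--time_partitioning_expiration` flag takes seconds):

```python
# 45 days expressed in seconds, the unit the bq CLI flag expects
expiration_days = 45
expiration_seconds = expiration_days * 24 * 60 * 60
assert expiration_seconds == 3_888_000

# Hypothetical bq command creating a day-partitioned logs table
cmd = (
    "bq mk --table --time_partitioning_type=DAY "
    f"--time_partitioning_expiration={expiration_seconds} "
    "mydataset.app_logs"
)
assert "--time_partitioning_expiration=3888000" in cmd
```

With partition expiration set, BigQuery drops each daily partition automatically once it ages out, so no deletion script is needed and storage is pruned at partition granularity.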

NEW QUESTION 16

You want to enable your running Google Container Engine cluster to scale as demand for your application changes.
What should you do?

  • A. Add additional nodes to your Container Engine cluster using the following command: gcloud container clusters resize CLUSTER_NAME --size 10
  • B. Add a tag to the instances in the cluster with the following command: gcloud compute instances add-tags INSTANCE --tags enable-autoscaling max-nodes-10
  • C. Update the existing Container Engine cluster with the following command:gcloud alpha container clusters update mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10
  • D. Create a new Container Engine cluster with the following command: gcloud alpha container clusters create mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10 and redeploy your application.

Answer: C

Explanation:
https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler Cluster autoscaling
--enable-autoscaling
Enables autoscaling for a node pool.
Enables autoscaling in the node pool specified by --node-pool or the default node pool if --node-pool is not provided.
Where:
--max-nodes=MAX_NODES
Maximum number of nodes in the node pool.
Maximum number of nodes to which the node pool specified by --node-pool (or default node pool if unspecified) can scale.

NEW QUESTION 17

For this question, refer to the Mountkirk Games case study.
Mountkirk Games wants to set up a continuous delivery pipeline. Their architecture includes many small services that they want to be able to update and roll back quickly. Mountkirk Games has the following requirements:
• Services are deployed redundantly across multiple regions in the US and Europe.
• Only frontend services are exposed on the public internet.
• They can provide a single frontend IP for their fleet of services.
• Deployment artifacts are immutable. Which set of products should they use?

  • A. Google Cloud Storage, Google Cloud Dataflow, Google Compute Engine
  • B. Google Cloud Storage, Google App Engine, Google Network Load Balancer
  • C. Google Container Registry, Google Container Engine, Google HTTP(S) Load Balancer
  • D. Google Cloud Functions, Google Cloud Pub/Sub, Google Cloud Deployment Manager

Answer: C

NEW QUESTION 18

You set up an autoscaling instance group to serve web traffic for an upcoming launch. After configuring the instance group as a backend service to an HTTP(S) load balancer, you notice that virtual machine (VM) instances are being terminated and re-launched every minute. The instances do not have a public IP address. You have verified the appropriate web response is coming from each instance using the curl command. You want to ensure the backend is configured correctly. What should you do?

  • A. Ensure that a firewall rule exists to allow source traffic on HTTP/HTTPS to reach the load balancer.
  • B. Assign a public IP to each instance and configure a firewall rule to allow the load balancer to reach the instance public IP.
  • C. Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group.
  • D. Create a tag on each instance with the name of the load balancer. Configure a firewall rule with the name of the load balancer as the source and the instance tag as the destination.

Answer: C

Explanation:
https://cloud.google.com/vpc/docs/using-firewalls
The best practice when configuring a health check is to check health and serve traffic on the same port. However, it is possible to perform health checks on one port but serve traffic on another. If you do use two different ports, ensure that firewall rules and services running on instances are configured appropriately. If you run health checks and serve traffic on the same port but decide to switch ports at some point, be sure to update both the backend service and the health check.
Backend services that do not have a valid global forwarding rule referencing them will not be health checked and will have no health status.
References: https://cloud.google.com/compute/docs/load-balancing/http/backend-service
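A gcloud sketch of the fix: 130.211.0.0/22 and 35.191.0.0/16 are Google's documented health-check source ranges, while the rule name, network, and port below are assumptions for illustration:

```shell
# Allow Google health-check probes to reach the backend instances
gcloud compute firewall-rules create allow-health-checks \
    --network default \
    --source-ranges 130.211.0.0/22,35.191.0.0/16 \
    --allow tcp:80
```

Without such a rule, health checks fail even though the instances serve correct responses, so the autoscaler or managed instance group keeps recreating "unhealthy" VMs, which matches the symptom in the question.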

NEW QUESTION 19

For this question, refer to the Mountkirk Games case study.
Mountkirk Games has deployed their new backend on Google Cloud Platform (GCP). You want to create a thorough testing process for new versions of the backend before they are released to the public. You want the testing environment to scale in an economical way. How should you design the process?

  • A. Create a scalable environment in GCP for simulating production load.
  • B. Use the existing infrastructure to test the GCP-based backend at scale.
  • C. Build stress tests into each component of your application using resources internal to GCP to simulate load.
  • D. Create a set of static environments in GCP to test different levels of load — for example, high, medium, and low.

Answer: A

Explanation:
From scenario: Requirements for Game Backend Platform
  • Dynamically scale up or down based on game activity
  • Connect to a managed NoSQL database service
  • Run customized Linux distro

NEW QUESTION 20

You are running a cluster on Kubernetes Engine to serve a web application. Users are reporting that a specific part of the application is not responding anymore. You notice that all pods of your deployment keep restarting after 2 seconds. The application writes logs to standard output. You want to inspect the logs to find the cause of the issue. Which approach can you take?

  • A. Review the Stackdriver logs for each Compute Engine instance that is serving as a node in the cluster.
  • B. Review the Stackdriver logs for the specific Kubernetes Engine container that is serving the unresponsive part of the application.
  • C. Connect to the cluster using gcloud credentials and connect to a container in one of the pods to read the logs.
  • D. Review the Serial Port logs for each Compute Engine instance that is serving as a node in the cluster.

Answer: B

NEW QUESTION 21
......

Thanks for reading the newest Professional-Cloud-Architect exam dumps! We recommend trying the PREMIUM Certshared Professional-Cloud-Architect dumps in VCE and PDF here: https://www.certshared.com/exam/Professional-Cloud-Architect/ (170 Q&As Dumps)