Want to know about Exambible Associate-Cloud-Engineer exam practice test features? Want to learn more about the Google Cloud Certified - Associate Cloud Engineer certification experience? Study guaranteed Google Associate-Cloud-Engineer answers to updated Associate-Cloud-Engineer questions at Exambible. Get success with an absolute guarantee to pass the Google Associate-Cloud-Engineer (Google Cloud Certified - Associate Cloud Engineer) test on your first attempt.

We also have free Associate-Cloud-Engineer dumps questions for you:

NEW QUESTION 1
You need to enable traffic between multiple groups of Compute Engine instances that are currently running in two different GCP projects. Each group of Compute Engine instances is running in its own VPC. What should you do?

  • A. Verify that both projects are in a GCP Organization. Create a new VPC and add all instances.
  • B. Verify that both projects are in a GCP Organization. Share the VPC from one project and request that the Compute Engine instances in the other project use this shared VPC.
  • C. Verify that you are the Project Administrator of both projects. Create two new VPCs and add all instances.
  • D. Verify that you are the Project Administrator of both projects. Create a new VPC and add all instances.

Answer: B
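
For reference, a minimal gcloud sketch of a Shared VPC setup (the project IDs host-project-id and service-project-id are placeholders, and you would need the Shared VPC Admin role at the organization level):

  # Enable Shared VPC on the host project, then attach the service project
  gcloud compute shared-vpc enable host-project-id
  gcloud compute shared-vpc associated-projects add service-project-id \
      --host-project host-project-id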

NEW QUESTION 2
You need to manage a Cloud Spanner Instance for best query performance. Your instance in production runs in a single Google Cloud region. You need to improve performance in the shortest amount of time. You want to follow Google best practices for service configuration. What should you do?

  • A. Create an alert in Cloud Monitoring to alert when the percentage of high priority CPU utilization reaches 45%. If you exceed this threshold, add nodes to your instance.
  • B. Create an alert in Cloud Monitoring to alert when the percentage of high priority CPU utilization reaches 45%. Use database query statistics to identify queries that result in high CPU usage, and then rewrite those queries to optimize their resource usage.
  • C. Create an alert in Cloud Monitoring to alert when the percentage of high priority CPU utilization reaches 65%. If you exceed this threshold, add nodes to your instance.
  • D. Create an alert in Cloud Monitoring to alert when the percentage of high priority CPU utilization reaches 65%. Use database query statistics to identify queries that result in high CPU usage, and then rewrite those queries to optimize their resource usage.

Answer: A
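
As a hedged illustration of the "add nodes" step, assuming a hypothetical instance named prod-instance (adding nodes changes billing, so confirm capacity needs first):

  # Scale the Cloud Spanner instance to 5 nodes
  gcloud spanner instances update prod-instance --nodes=5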

NEW QUESTION 3
Your company has an existing GCP organization with hundreds of projects and a billing account. Your company recently acquired another company that also has hundreds of projects and its own billing account. You would like to consolidate all GCP costs of both GCP organizations onto a single invoice. You would like to consolidate all costs as of tomorrow. What should you do?

  • A. Link the acquired company’s projects to your company's billing account.
  • B. Configure the acquired company's billing account and your company's billing account to export the billing data into the same BigQuery dataset.
  • C. Migrate the acquired company’s projects into your company’s GCP organization. Link the migrated projects to your company's billing account.
  • D. Create a new GCP organization and a new billing account. Migrate the acquired company's projects and your company's projects into the new GCP organization and link the projects to the new billing account.

Answer: D
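
Whichever consolidation path is chosen, the underlying step of attaching a project to a billing account can be scripted. A sketch with placeholder IDs (use gcloud beta billing on older SDK versions):

  # Link one acquired project to your company's billing account
  gcloud billing projects link acquired-project-id \
      --billing-account=000000-AAAAAA-111111   # placeholder billing account ID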

NEW QUESTION 4
You have a large 5-TB AVRO file stored in a Cloud Storage bucket. Your analysts are proficient only in SQL and need access to the data stored in this file. You want to find a cost-effective way to complete their request as soon as possible. What should you do?

  • A. Load data in Cloud Datastore and run a SQL query against it.
  • B. Create a BigQuery table and load data in BigQuery. Run a SQL query on this table and drop this table after you complete your request.
  • C. Create external tables in BigQuery that point to Cloud Storage buckets and run a SQL query on these external tables to complete your request.
  • D. Create a Hadoop cluster and copy the AVRO file to HDFS by compressing it. Load the file in a Hive table and provide access to your analysts so that they can run SQL queries.

Answer: C
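
A sketch of the external-table approach from option C, assuming a hypothetical bucket path and dataset (Avro is self-describing, so bq can infer the schema):

  # Build an external table definition pointing at the Avro file in Cloud Storage
  bq mkdef --source_format=AVRO "gs://my-bucket/large-file.avro" > avro_def.json
  # Create the external table and query it with standard SQL
  bq mk --external_table_definition=avro_def.json mydataset.avro_external
  bq query --use_legacy_sql=false 'SELECT COUNT(*) FROM mydataset.avro_external'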

NEW QUESTION 5
You are managing several Google Cloud Platform (GCP) projects and need access to all logs for the past 60 days. You want to be able to explore and quickly analyze the log contents. You want to follow Google-recommended practices to obtain the combined logs for all projects. What should you do?

  • A. Navigate to Stackdriver Logging and select resource.labels.project_id="*"
  • B. Create a Stackdriver Logging Export with a Sink destination to a BigQuery dataset. Configure the table expiration to 60 days.
  • C. Create a Stackdriver Logging Export with a Sink destination to Cloud Storage. Create a lifecycle rule to delete objects after 60 days.
  • D. Configure a Cloud Scheduler job to read from Stackdriver and store the logs in BigQuery. Configure the table expiration to 60 days.
Answer: B
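
A sketch of an aggregated export to BigQuery, assuming a hypothetical organization ID, central project, and dataset (grant the sink's writer identity access to the dataset afterwards):

  # Export logs from all projects in the organization to one BigQuery dataset
  gcloud logging sinks create all-projects-sink \
      bigquery.googleapis.com/projects/central-project/datasets/all_logs \
      --organization=123456789012 --include-children
  # Expire tables after 60 days (5,184,000 seconds)
  bq update --default_table_expiration 5184000 central-project:all_logs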

NEW QUESTION 6
Your company implemented BigQuery as an enterprise data warehouse. Users from multiple business units run queries on this data warehouse. However, you notice that query costs for BigQuery are very high, and you need to control costs. Which two methods should you use? (Choose two.)

  • A. Split the users from business units to multiple projects.
  • B. Apply a user- or project-level custom query quota for BigQuery data warehouse.
  • C. Create separate copies of your BigQuery data warehouse for each business unit.
  • D. Split your BigQuery data warehouse into multiple data warehouses for each business unit.
  • E. Change your BigQuery query model from on-demand to flat rate. Apply the appropriate number of slots to each project.

Answer: BE
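
Custom query quotas themselves are configured in the console (IAM & Admin > Quotas); a related per-query cost control that can be sketched from the CLI is a bytes-billed cap (dataset and table names here are placeholders):

  # Fail any query that would bill more than ~1 GB
  bq query --use_legacy_sql=false --maximum_bytes_billed=1000000000 \
      'SELECT name FROM mydataset.mytable LIMIT 10'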

NEW QUESTION 7
You want to deploy an application on Cloud Run that processes messages from a Cloud Pub/Sub topic. You want to follow Google-recommended practices. What should you do?

  • A. 1. Create a Cloud Function that uses a Cloud Pub/Sub trigger on that topic.2. Call your application on Cloud Run from the Cloud Function for every message.
  • B. 1. Grant the Pub/Sub Subscriber role to the service account used by Cloud Run.2. Create a Cloud Pub/Sub subscription for that topic.3. Make your application pull messages from that subscription.
  • C. 1. Create a service account.2. Give the Cloud Run Invoker role to that service account for your Cloud Run application.3. Create a Cloud Pub/Sub subscription that uses that service account and uses your Cloud Run application as the push endpoint.
  • D. 1. Deploy your application on Cloud Run on GKE with the connectivity set to Internal.2. Create a Cloud Pub/Sub subscription for that topic.3. In the same Google Kubernetes Engine cluster as your application, deploy a container that takes the messages and sends them to your application.

Answer: D
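
For comparison, a sketch of the push-subscription setup described in option C, with placeholder service, topic, and project names:

  # Service account that Pub/Sub will use to invoke the Cloud Run service
  gcloud iam service-accounts create run-invoker
  gcloud run services add-iam-policy-binding my-service --region=us-central1 \
      --member=serviceAccount:run-invoker@my-project.iam.gserviceaccount.com \
      --role=roles/run.invoker
  # Push subscription that delivers topic messages to the Cloud Run endpoint
  gcloud pubsub subscriptions create my-sub --topic=my-topic \
      --push-endpoint=https://my-service-abc123-uc.a.run.app/ \
      --push-auth-service-account=run-invoker@my-project.iam.gserviceaccount.com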

NEW QUESTION 8
You are deploying an application to App Engine. You want the number of instances to scale based on request rate. You need at least 3 unoccupied instances at all times. Which scaling type should you use?

  • A. Manual Scaling with 3 instances.
  • B. Basic Scaling with min_instances set to 3.
  • C. Basic Scaling with max_instances set to 3.
  • D. Automatic Scaling with min_idle_instances set to 3.

Answer: D
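
A sketch of the corresponding app.yaml for App Engine standard (the runtime shown is a placeholder; min_idle_instances keeps three idle instances warm under automatic scaling):

  # app.yaml
  runtime: python39
  automatic_scaling:
    min_idle_instances: 3

  # Deploy it:
  gcloud app deploy app.yaml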

NEW QUESTION 9
You have an object in a Cloud Storage bucket that you want to share with an external company. The object contains sensitive data. You want access to the content to be removed after four hours. The external company does not have a Google account to which you can grant specific user-based access privileges. You want to use the most secure method that requires the fewest steps. What should you do?

  • A. Create a signed URL with a four-hour expiration and share the URL with the company.
  • B. Set object access to ‘public’ and use object lifecycle management to remove the object after four hours.
  • C. Configure the storage bucket as a static website and furnish the object’s URL to the company. Delete the object from the storage bucket after four hours.
  • D. Create a new Cloud Storage bucket specifically for the external company to access. Copy the object to that bucket. Delete the bucket after four hours have passed.

Answer: A
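
A sketch of generating the signed URL with gsutil, assuming a downloaded service-account key file and a placeholder bucket/object:

  # URL expires 4 hours after it is generated
  gsutil signurl -d 4h sa-key.json gs://my-bucket/sensitive-object.csv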

NEW QUESTION 10
Your organization is a financial company that needs to store audit log files for 3 years. Your organization has hundreds of Google Cloud projects. You need to implement a cost-effective approach for log file retention. What should you do?

  • A. Create an export to the sink that saves logs from Cloud Audit to BigQuery.
  • B. Create an export to the sink that saves logs from Cloud Audit to a Coldline Storage bucket.
  • C. Write a custom script that uses logging API to copy the logs from Stackdriver logs to BigQuery.
  • D. Export these logs to Cloud Pub/Sub and write a Cloud Dataflow pipeline to store logs to Cloud SQL.

Answer: A

NEW QUESTION 11
You are using Container Registry to centrally store your company’s container images in a separate project. In another project, you want to create a Google Kubernetes Engine (GKE) cluster. You want to ensure that Kubernetes can download images from Container Registry. What should you do?

  • A. In the project where the images are stored, grant the Storage Object Viewer IAM role to the service account used by the Kubernetes nodes.
  • B. When you create the GKE cluster, choose the Allow full access to all Cloud APIs option under ‘Access scopes’.
  • C. Create a service account, and give it access to Cloud Storage. Create a P12 key for this service account and use it as an imagePullSecrets in Kubernetes.
  • D. Configure the ACLs on each image in Cloud Storage to give read-only access to the default Compute Engine service account.

Answer: B
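
For reference, a sketch of the IAM grant described in option A, with placeholder project and service-account names (Container Registry stores images in Cloud Storage, so Storage Object Viewer is enough to pull):

  gcloud projects add-iam-policy-binding registry-project \
      --member=serviceAccount:123456789012-compute@developer.gserviceaccount.com \
      --role=roles/storage.objectViewer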

NEW QUESTION 12
Your Dataproc cluster runs in a single Virtual Private Cloud (VPC) network in a single subnet with range 172.16.20.128/25. There are no private IP addresses available in the VPC network. You want to add new VMs to communicate with your cluster using the minimum number of steps. What should you do?

  • A. Modify the existing subnet range to 172.16.20.0/24.
  • B. Create a new Secondary IP Range in the VPC and configure the VMs to use that range.
  • C. Create a new VPC network for the VMs. Enable VPC Peering between the VMs’ VPC network and the Dataproc cluster VPC network.
  • D. Create a new VPC network for the VMs with a subnet of 172.32.0.0/16. Enable VPC network Peering between the Dataproc VPC network and the VMs’ VPC network. Configure a custom Route exchange.

Answer: B

Explanation:
A subnet has a single primary IP address range and, optionally, one or more secondary IP address ranges. For each subnet IP address range, Google Cloud creates a subnet route. When you use VPC Network Peering, Google Cloud always exchanges the subnet routes that don't use privately reused public IP addresses between the two peered networks. If firewall rules in each network permit communication, VM instances in one network can communicate with instances in the peered network.
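
A sketch of peering two networks from both sides (network names are placeholders; once the peering is active, subnet routes are exchanged automatically):

  gcloud compute networks peerings create vms-to-dataproc \
      --network=vms-network --peer-network=dataproc-network
  gcloud compute networks peerings create dataproc-to-vms \
      --network=dataproc-network --peer-network=vms-network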

NEW QUESTION 13
You need to track and verify modifications to a set of Google Compute Engine instances in your Google Cloud project. In particular, you want to verify OS system patching events on your virtual machines (VMs). What should you do?

  • A. Review the Compute Engine activity logs. Select and review the Admin Event logs.
  • B. Review the Compute Engine activity logs. Select and review the System Event logs.
  • C. Install the Cloud Logging Agent. In Cloud Logging, review the Compute Engine syslog logs.
  • D. Install the Cloud Logging Agent. In Cloud Logging, review the Compute Engine operation logs.

Answer: A
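
To inspect these audit entries from the CLI, a hedged sketch with a placeholder project ID (the System Event audit log name is shown; the Admin Activity log uses cloudaudit.googleapis.com%2Factivity):

  gcloud logging read \
      'resource.type="gce_instance" AND logName="projects/my-project/logs/cloudaudit.googleapis.com%2Fsystem_event"' \
      --limit=20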

NEW QUESTION 14
You have a Dockerfile that you need to deploy on Kubernetes Engine. What should you do?

  • A. Use kubectl app deploy <dockerfilename>.
  • B. Use gcloud app deploy <dockerfilename>.
  • C. Create a Docker image from the Dockerfile and upload it to Container Registry. Create a Deployment YAML file to point to that image. Use kubectl to create the deployment with that file.
  • D. Create a Docker image from the Dockerfile and upload it to Cloud Storage. Create a Deployment YAML file to point to that image. Use kubectl to create the deployment with that file.

Answer: C
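
A sketch of the flow in option C, with placeholder project and image names:

  # Build and push the image to Container Registry
  docker build -t gcr.io/my-project/my-app:v1 .
  docker push gcr.io/my-project/my-app:v1

  # deployment.yaml referencing that image
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: my-app
  spec:
    replicas: 2
    selector:
      matchLabels:
        app: my-app
    template:
      metadata:
        labels:
          app: my-app
      spec:
        containers:
        - name: my-app
          image: gcr.io/my-project/my-app:v1

  # Create the deployment on the GKE cluster
  kubectl apply -f deployment.yaml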

NEW QUESTION 15
Your team maintains the infrastructure for your organization. The current infrastructure requires changes. You need to share your proposed changes with the rest of the team. You want to follow Google’s recommended best practices. What should you do?

  • A. Use Deployment Manager templates to describe the proposed changes and store them in a Cloud Storage bucket.
  • B. Use Deployment Manager templates to describe the proposed changes and store them in Cloud Source Repositories.
  • C. Apply the change in a development environment, run gcloud compute instances list, and then save the output in a shared Storage bucket.
  • D. Apply the change in a development environment, run gcloud compute instances list, and then save the output in Cloud Source Repositories.

Answer: B
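
A sketch of sharing Deployment Manager templates through Cloud Source Repositories, with a placeholder repository and file name:

  gcloud source repos create infra-changes
  gcloud source repos clone infra-changes
  cd infra-changes
  # add your Deployment Manager template(s), e.g. proposed-change.yaml, then:
  git add proposed-change.yaml
  git commit -m "Propose infrastructure change"
  git push origin master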

NEW QUESTION 16
You are assigned to maintain a Google Kubernetes Engine (GKE) cluster named dev that was deployed on Google Cloud. You want to manage the GKE configuration using the command line interface (CLI). You have just downloaded and installed the Cloud SDK. You want to ensure that future CLI commands by default address this specific cluster. What should you do?

  • A. Use the command gcloud config set container/cluster dev
  • B. Use the command gcloud container clusters update dev
  • C. Create a file called gke.default in the ~/.gcloud folder that contains the cluster name
  • D. Create a file called defaults.json in the ~/.gcloud folder that contains the cluster name

Answer: B
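
For reference, two related commands (the zone is a placeholder): setting the default cluster property and fetching credentials so kubectl targets the dev cluster:

  gcloud config set container/cluster dev
  gcloud container clusters get-credentials dev --zone=us-central1-a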

NEW QUESTION 17
You are using Data Studio to visualize a table from your data warehouse that is built on top of BigQuery. Data is appended to the data warehouse during the day. At night, the daily summary is recalculated by overwriting the table. You just noticed that the charts in Data Studio are broken, and you want to analyze the problem. What should you do?

  • A. Use the BigQuery interface to review the nightly job and look for any errors.
  • B. Review the Error Reporting page in the Cloud Console to find any errors.
  • C. In Cloud Logging, create a filter for your Data Studio report.
  • D. Use Cloud Debugger to find out why the data was not refreshed correctly.

Answer: D
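
To review the nightly overwrite job from the CLI as well, a sketch with placeholder project and job IDs:

  # List recent BigQuery jobs and inspect one for errors
  bq ls -j -n 20 --project_id=my-project
  bq show -j JOB_ID   # replace JOB_ID with the failing job's ID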

NEW QUESTION 18
......

P.S. Surepassexam is now offering 100% pass-guaranteed Associate-Cloud-Engineer dumps! All Associate-Cloud-Engineer exam questions have been updated with correct answers: https://www.surepassexam.com/Associate-Cloud-Engineer-exam-dumps.html (190 New Questions)