Associate-Cloud-Engineer practice question materials and testing software for Google certification candidates. Real success guaranteed with updated Associate-Cloud-Engineer PDF and VCE dump materials. 100% pass the Google Cloud Certified - Associate Cloud Engineer exam today!

Free demo questions for Google Associate-Cloud-Engineer Exam Dumps Below:

NEW QUESTION 1
You have a number of compute instances belonging to an unmanaged instances group. You need to SSH to one of the Compute Engine instances to run an ad hoc script. You’ve already authenticated gcloud, however, you don’t have an SSH key deployed yet. In the fewest steps possible, what’s the easiest way to SSH to the instance?

  • A. Run gcloud compute instances list to get the IP address of the instance, then use the ssh command.
  • B. Use the gcloud compute ssh command.
  • C. Create a key with the ssh-keygen command. Then use the gcloud compute ssh command.
  • D. Create a key with the ssh-keygen command. Upload the key to the instance. Run gcloud compute instances list to get the IP address of the instance, then use the ssh command.

Answer: B

Explanation:
gcloud compute ssh ensures that the user's public SSH key is present in the project's metadata. If the user does not have a public SSH key, one is generated using ssh-keygen and added to the project's metadata. This is similar to the other option, where we copy the key explicitly to the project's metadata, but here it is done automatically for us. There are also security benefits with this approach. When we use gcloud compute ssh to connect to Linux instances, we add a layer of security by storing host keys as guest attributes. Storing SSH host keys as guest attributes improves the security of your connections by helping to protect against vulnerabilities such as man-in-the-middle (MITM) attacks. On the initial boot of a VM instance, if guest attributes are enabled, Compute Engine stores your generated host keys as guest attributes.
Compute Engine then uses these host keys that were stored during the initial boot to verify all subsequent connections to the VM instance.
Ref: https://cloud.google.com/compute/docs/instances/connecting-to-instance
Ref: https://cloud.google.com/s
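As a minimal sketch (assuming a hypothetical instance named my-instance in zone us-central1-a), the one-step connection looks like this:

```shell
# List instances to find the name and zone (optional, for reference)
gcloud compute instances list

# Connect directly; gcloud generates an SSH key pair if none exists
# and adds the public key to the project metadata automatically
gcloud compute ssh my-instance --zone=us-central1-a
```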

NEW QUESTION 2
You deployed an App Engine application using gcloud app deploy, but it did not deploy to the intended project. You want to find out why this happened and where the application deployed. What should you do?

  • A. Check the app.yaml file for your application and check project settings.
  • B. Check the web-application.xml file for your application and check project settings.
  • C. Go to Deployment Manager and review settings for deployment of applications.
  • D. Go to Cloud Shell and run gcloud config list to review the Google Cloud configuration used for deployment.

Answer: D

Explanation:
C:\GCP\appeng> gcloud config list
[core]
account = xxx@gmail.com
disable_usage_reporting = False
project = my-first-demo-xxxx
https://cloud.google.com/endpoints/docs/openapi/troubleshoot-gce-deployment
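A quick way to verify and correct the active project before redeploying (the project ID my-intended-project is a placeholder):

```shell
# Show the active account and project used by gcloud app deploy
gcloud config list

# Point the active configuration at the intended project
gcloud config set project my-intended-project
```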

NEW QUESTION 3
Users of your application are complaining of slowness when loading the application. You realize the slowness is because the App Engine deployment serving the application is deployed in us-central whereas all users of this application are closest to europe-west3. You want to change the region of the App Engine application to europe-west3 to minimize latency. What’s the best way to change the App Engine region?

  • A. Create a new project and create an App Engine instance in europe-west3.
  • B. Use the gcloud app region set command and supply the name of the new region.
  • C. From the console, under the App Engine page, click edit, and change the region drop-down.
  • D. Contact Google Cloud Support and request the change.

Answer: A

Explanation:
App Engine is a regional service, which means the infrastructure that runs your app(s) is located in a specific region and is managed by Google to be redundantly available across all the zones within that region. Once an App Engine deployment is created in a region, it can't be changed. The only way is to create a new project and create an App Engine instance in europe-west3, send all user traffic to this instance, and delete the App Engine instance in us-central.
Ref: https://cloud.google.com/appengine/docs/locations
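A rough outline of the migration, with hypothetical project IDs:

```shell
# Create a new project for the European deployment
gcloud projects create my-app-eu

# Create the App Engine application in the desired region
# (the region cannot be changed once set)
gcloud app create --project=my-app-eu --region=europe-west3

# Deploy the application into the new project
gcloud app deploy app.yaml --project=my-app-eu
```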

NEW QUESTION 4
You are the team lead of a group of 10 developers. You provided each developer with an individual Google Cloud Project that they can use as their personal sandbox to experiment with different Google Cloud solutions. You want to be notified if any of the developers are spending above $500 per month on their sandbox environment. What should you do?

  • A. Create a single budget for all projects and configure budget alerts on this budget.
  • B. Create a separate billing account per sandbox project and enable BigQuery billing export. Create a Data Studio dashboard to plot the spending per billing account.
  • C. Create a budget per project and configure budget alerts on all of these budgets.
  • D. Create a single billing account for all sandbox projects and enable BigQuery billing export. Create a Data Studio dashboard to plot the spending per project.

Answer: C

Explanation:
Set budgets and budget alerts: avoid surprises on your bill by creating Cloud Billing budgets to monitor all of your Google Cloud charges in one place. A budget enables you to track your actual Google Cloud spend against your planned spend. After you've set a budget amount, you set budget alert threshold rules that are used to trigger email notifications. Budget alert emails help you stay informed about how your spend is tracking against your budget. When setting the budget scope, select one or more projects in the Projects field that you want to apply the budget alert to; to apply the budget alert to all the projects in the Cloud Billing account, choose Select all.
https://cloud.google.com/billing/docs/how-to/budgets#budget-scop
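A per-project budget with an alert can be sketched with the gcloud billing budgets command (the billing account ID and project ID below are placeholders):

```shell
# Create a $500 budget scoped to one sandbox project,
# with an email alert when 100% of the budget is spent
gcloud billing budgets create \
  --billing-account=0X0X0X-0X0X0X-0X0X0X \
  --display-name="Sandbox budget - developer1" \
  --budget-amount=500USD \
  --threshold-rule=percent=1.0 \
  --filter-projects=projects/dev1-sandbox
```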

NEW QUESTION 5
Your auditor wants to view your organization's use of data in Google Cloud. The auditor is most interested in auditing who accessed data in Cloud Storage buckets. You need to help the auditor access the data they need. What should you do?

  • A. Assign the appropriate permissions, and then use Cloud Monitoring to review metrics
  • B. Use the export logs API to provide the Admin Activity Audit Logs in the format they want
  • C. Turn on Data Access Logs for the buckets they want to audit, and then build a query in the log viewer that filters on Cloud Storage
  • D. Assign the appropriate permissions, and then create a Data Studio report on Admin Activity Audit Logs

Answer: C

Explanation:
Types of audit logs: Cloud Audit Logs provides the following audit logs for each Cloud project, folder, and organization: Admin Activity audit logs, Data Access audit logs, System Event audit logs, and Policy Denied audit logs. Data Access audit logs contain API calls that read the configuration or metadata of resources, as well as user-driven API calls that create, modify, or read user-provided resource data.
https://cloud.google.com/logging/docs/audit#types
https://cloud.google.com/logging/docs/audit#data-access
For Cloud Storage: when Cloud Storage usage logs are enabled, Cloud Storage writes usage data to the Cloud Storage bucket, which generates Data Access audit logs for the bucket. The generated Data Access audit log has its caller identity redacted.
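Once Data Access logs are enabled for the buckets, a filter along these lines (a sketch; the limit and format are illustrative) surfaces who accessed data in Cloud Storage:

```shell
# Read recent Data Access audit log entries for Cloud Storage buckets
gcloud logging read \
  'logName:"cloudaudit.googleapis.com%2Fdata_access" AND resource.type="gcs_bucket"' \
  --limit=10 --format=json
```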

NEW QUESTION 6
Your existing application running in Google Kubernetes Engine (GKE) consists of multiple pods running on four GKE n1-standard-2 nodes. You need to deploy additional pods requiring n2-highmem-16 nodes without any downtime. What should you do?

  • A. Use gcloud container clusters upgrade. Deploy the new services.
  • B. Create a new Node Pool and specify machine type n2-highmem-16. Deploy the new pods.
  • C. Create a new cluster with n2-highmem-16 nodes. Redeploy the pods and delete the old cluster.
  • D. Create a new cluster with both n1-standard-2 and n2-highmem-16 nodes. Redeploy the pods and delete the old cluster.

Answer: B

Explanation:
https://cloud.google.com/kubernetes-engine/docs/concepts/deployment
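A sketch of adding the node pool without downtime (the cluster, pool name, and zone are hypothetical):

```shell
# Add a new node pool with the larger machine type to the existing cluster
gcloud container node-pools create highmem-pool \
  --cluster=my-cluster \
  --zone=us-central1-a \
  --machine-type=n2-highmem-16 \
  --num-nodes=1

# The new pods can then be scheduled onto this pool, e.g. with a
# nodeSelector on the pool's node label in the pod spec
```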

NEW QUESTION 7
The DevOps group in your organization needs full control of Compute Engine resources in your development project. However, they should not have permission to create or update any other resources in the project. You want to follow Google's recommendations for setting permissions for the DevOps group. What should you do?

  • A. Grant the basic role roles/viewer and the predefined role roles/compute.admin to the DevOps group.
  • B. Create an IAM policy and grant all compute.instanceAdmin.* permissions to the policy. Attach the policy to the DevOps group.
  • C. Create a custom role at the folder level and grant all compute.instanceAdmin.* permissions to the role. Grant the custom role to the DevOps group.
  • D. Grant the basic role roles/editor to the DevOps group.

Answer: A
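Granting the two roles from option A might look like this (the project ID and group address are placeholders):

```shell
# Read-only visibility into all project resources
gcloud projects add-iam-policy-binding my-dev-project \
  --member="group:devops@example.com" --role="roles/viewer"

# Full control of Compute Engine resources only
gcloud projects add-iam-policy-binding my-dev-project \
  --member="group:devops@example.com" --role="roles/compute.admin"
```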

NEW QUESTION 8
Your company completed the acquisition of a startup and is now merging the IT systems of both companies. The startup had a production Google Cloud project in their organization. You need to move this project into your organization and ensure that the project is billed to your organization. You want to accomplish this task with minimal effort. What should you do?

  • A. Use the projects.move method to move the project to your organization. Update the billing account of the project to that of your organization.
  • B. Ensure that you have the Organization Administrator Identity and Access Management (IAM) role assigned to you in both organizations. Navigate to the Resource Manager in the startup's Google Cloud organization, and drag the project to your company's organization.
  • C. Create a Private Catalog for the Google Cloud Marketplace, and upload the resources of the startup's production project to the Catalog. Share the Catalog with your organization, and deploy the resources in your company's project.
  • D. Create an infrastructure-as-code template for all resources in the project by using Terraform, and deploy that template to a new project in your organization. Delete the project from the startup's Google Cloud organization.

Answer: A
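The move and billing update can be sketched in two commands (the organization ID, project ID, and billing account below are placeholders):

```shell
# Move the project into your organization
gcloud projects move startup-prod-project --organization=123456789012

# Link the project to your organization's billing account
gcloud billing projects link startup-prod-project \
  --billing-account=0X0X0X-0X0X0X-0X0X0X
```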

NEW QUESTION 9
Your company is moving its entire workload to Compute Engine. Some servers should be accessible through the Internet, and other servers should only be accessible over the internal network. All servers need to be able to talk to each other over specific ports and protocols. The current on-premises network relies on a demilitarized zone (DMZ) for the public servers and a Local Area Network (LAN) for the private servers. You need to design the networking infrastructure on Google Cloud to match these requirements. What should you do?

  • A. 1. Create a single VPC with a subnet for the DMZ and a subnet for the LAN. 2. Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public ingress traffic for the DMZ.
  • B. 1. Create a single VPC with a subnet for the DMZ and a subnet for the LAN. 2. Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public egress traffic for the DMZ.
  • C. 1. Create a VPC with a subnet for the DMZ and another VPC with a subnet for the LAN. 2. Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public ingress traffic for the DMZ.
  • D. 1. Create a VPC with a subnet for the DMZ and another VPC with a subnet for the LAN. 2. Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public egress traffic for the DMZ.

Answer: C

Explanation:
https://cloud.google.com/vpc/docs/vpc-peering
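With two VPCs, connectivity between the DMZ and LAN servers relies on VPC Network Peering plus firewall rules; a sketch with hypothetical network and rule names:

```shell
# Peer the two VPCs so DMZ and LAN instances can reach each other
# (a matching peering must also be created from lan-vpc back to dmz-vpc)
gcloud compute networks peerings create dmz-to-lan \
  --network=dmz-vpc --peer-network=lan-vpc

# Allow public ingress to the DMZ on the relevant ports
gcloud compute firewall-rules create allow-public-ingress \
  --network=dmz-vpc --direction=INGRESS \
  --allow=tcp:80,tcp:443 --source-ranges=0.0.0.0/0
```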

NEW QUESTION 10
After a recent security incident, your startup company wants better insight into what is happening in the Google Cloud environment. You need to monitor unexpected firewall changes and instance creation. Your company prefers simple solutions. What should you do?

  • A. Use Cloud Logging filters to create log-based metrics for firewall and instance actions. Monitor the changes and set up reasonable alerts.
  • B. Install Kibana on a compute instance. Create a log sink to forward Cloud Audit Logs filtered for firewalls and compute instances to Pub/Sub. Target the Pub/Sub topic to push messages to the Kibana instance. Analyze the logs on Kibana in real time.
  • C. Turn on Google Cloud firewall rules logging, and set up alerts for any insert, update, or delete events.
  • D. Create a log sink to forward Cloud Audit Logs filtered for firewalls and compute instances to Cloud Storage. Use BigQuery to periodically analyze log events in the storage bucket.

Answer: A

Explanation:
This answer is the simplest and most effective way to monitor unexpected firewall changes and instance creation in Google Cloud. Cloud Logging filters allow you to specify the criteria for the log entries that you want to view or export. You can use the Logging query language to write filters based on the LogEntry fields, such as resource.type, severity, or protoPayload.methodName. For example, you can filter for firewall-related events by using the following query:
resource.type="gce_subnetwork" logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Ffirewall"
You can filter for instance-related events by using the following query:
resource.type="gce_instance" logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Factivity_log"
You can create log-based metrics from these filters to measure the rate or count of log entries that match the filter. Log-based metrics can be used to create charts and dashboards in Cloud Monitoring, or to set up alerts based on the metric values. For example, you can create an alert policy that triggers when the log-based metric for firewall changes exceeds a certain threshold in a given time interval. This way, you can get notified of any unexpected or malicious changes to your firewall rules.
Option B is incorrect because it is unnecessarily complex and costly. Installing Kibana on a compute instance requires additional configuration and maintenance. Creating a log sink to forward Cloud Audit Logs to Pub/Sub also incurs additional charges for the Pub/Sub service. Analyzing the logs on Kibana in real time may not be feasible or efficient, as it requires constant monitoring and manual intervention.
Option C is incorrect because Google Cloud firewall rules logging is a different feature from Cloud Audit Logs. Firewall rules logging allows you to audit, verify, and analyze the effects of your firewall rules by creating connection records for each rule that applies to traffic. However, firewall rules logging does not log the insert, update, or delete events for the firewall rules themselves. Those events are logged by Cloud Audit Logs, which record the administrative activities in your Google Cloud project.
Option D is incorrect because it is not a real-time solution. Creating a log sink to forward Cloud Audit Logs to Cloud Storage requires additional storage space and charges. Using BigQuery to periodically analyze log events in the storage bucket also incurs additional costs for the BigQuery service. Moreover, this option does not provide any alerting mechanism to notify you of any unexpected or malicious changes to your firewall rules or instances.
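Creating the two log-based metrics might look like this (metric names and filters are illustrative sketches, not the only valid choices):

```shell
# Metric counting firewall rule changes
gcloud logging metrics create firewall-changes \
  --description="Firewall rule inserts/updates/deletes" \
  --log-filter='resource.type="gce_firewall_rule"'

# Metric counting VM instance creations
gcloud logging metrics create instance-creations \
  --description="Compute Engine instance inserts" \
  --log-filter='resource.type="gce_instance" AND protoPayload.methodName="v1.compute.instances.insert"'
```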

NEW QUESTION 11
You are managing a project for the Business Intelligence (BI) department in your company. A data pipeline ingests data into BigQuery via streaming. You want the users in the BI department to be able to run the custom SQL queries against the latest data in BigQuery. What should you do?

  • A. Create a Data Studio dashboard that uses the related BigQuery tables as a source and give the BI team view access to the Data Studio dashboard.
  • B. Create a Service Account for the BI team and distribute a new private key to each member of the BI team.
  • C. Use Cloud Scheduler to schedule a batch Dataflow job to copy the data from BigQuery to the BI team's internal data warehouse.
  • D. Assign the IAM role of BigQuery User to a Google Group that contains the members of the BI team.

Answer: D

Explanation:
When applied to a dataset, this role provides the ability to read the dataset's metadata and list tables in the dataset. When applied to a project, this role also provides the ability to run jobs, including queries, within the project. A member with this role can enumerate their own jobs, cancel their own jobs, and enumerate datasets within a project. Additionally, allows the creation of new datasets within the project; the creator is granted the BigQuery Data Owner role (roles/bigquery.dataOwner) on these new datasets.
https://cloud.google.com/bigquery/docs/access-control
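Granting the role to a Google Group rather than to individual users keeps membership changes out of the IAM policy; a sketch with placeholder names:

```shell
# Give the BI team's Google Group the BigQuery User role on the project
gcloud projects add-iam-policy-binding my-bi-project \
  --member="group:bi-team@example.com" --role="roles/bigquery.user"
```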

NEW QUESTION 12
You are deploying a production application on Compute Engine. You want to prevent anyone from accidentally destroying the instance by clicking the wrong button. What should you do?

  • A. Disable the flag “Delete boot disk when instance is deleted.”
  • B. Enable delete protection on the instance.
  • C. Disable Automatic restart on the instance.
  • D. Enable Preemptibility on the instance.

Answer: B

Explanation:
Preventing accidental VM deletion: this document describes how to protect specific VM instances from deletion by setting the deletionProtection property on an Instance resource. To learn more about VM instances, read the Instances documentation.
As part of your workload, there might be certain VM instances that are critical to running your application or services, such as an instance running a SQL server or a server used as a license manager. These VM instances might need to stay running indefinitely, so you need a way to protect these VMs from being deleted.
By setting the deletionProtection flag, a VM instance can be protected from accidental deletion. If a user attempts to delete a VM instance for which you have set the deletionProtection flag, the request fails. Only a user that has been granted a role with the compute.instances.create permission can reset the flag to allow the resource to be deleted.
https://cloud.google.com/compute/docs/instances/preventing-accidental-vm-deletion
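Enabling delete protection on an existing instance is a one-liner (instance name and zone are placeholders):

```shell
# Protect the instance from accidental deletion
gcloud compute instances update my-prod-instance \
  --zone=us-central1-a --deletion-protection

# Attempts to delete the instance now fail until the flag is
# cleared with --no-deletion-protection
```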

NEW QUESTION 13
You want to select and configure a solution for storing and archiving data on Google Cloud Platform. You need to support compliance objectives for data from one geographic location. This data is archived after 30 days and needs to be accessed annually. What should you do?

  • A. Select Multi-Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Coldline Storage.
  • B. Select Multi-Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Nearline Storage.
  • C. Select Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Nearline Storage.
  • D. Select Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Coldline Storage.

Answer: D

Explanation:
Google Cloud Coldline is a cold-tier storage class for archival data with an access frequency of less than once per year. Unlike many competing cold storage options, Coldline has no delay prior to data access.
Coldline Storage is a very-low-cost, highly durable storage service for storing infrequently accessed data. Coldline Storage is a better choice than Standard Storage or Nearline Storage in scenarios where slightly lower availability, a 90-day minimum storage duration, and higher costs for data access are acceptable trade-offs for lowered at-rest storage costs.
Coldline Storage is ideal for data you plan to read or modify at most once a quarter. Note, however, that for data being kept entirely for backup or archiving purposes, Archive Storage is more cost-effective, as it offers the lowest storage costs.
https://cloud.google.com/storage/docs/storage-classes#coldline
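A lifecycle rule matching option D could be sketched as follows (the bucket name is a placeholder):

```shell
# Define a rule that moves objects to Coldline after 30 days
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
      "condition": {"age": 30}
    }
  ]
}
EOF

# Apply the rule to the regional bucket
gsutil lifecycle set lifecycle.json gs://my-archive-bucket
```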

NEW QUESTION 14
You want to configure a solution for archiving data in a Cloud Storage bucket. The solution must be
cost-effective. Data with multiple versions should be archived after 30 days. Previous versions are accessed once a month for reporting. This archive data is also occasionally updated at month-end. What should you do?

  • A. Add a bucket lifecycle rule that archives data with newer versions after 30 days to Coldline Storage.
  • B. Add a bucket lifecycle rule that archives data with newer versions after 30 days to Nearline Storage.
  • C. Add a bucket lifecycle rule that archives data from regional storage after 30 days to Coldline Storage.
  • D. Add a bucket lifecycle rule that archives data from regional storage after 30 days to Nearline Storage.

Answer: B

NEW QUESTION 15
You want to send and consume Cloud Pub/Sub messages from your App Engine application. The Cloud Pub/Sub API is currently disabled. You will use a service account to authenticate your application to the API. You want to make sure your application can use Cloud Pub/Sub. What should you do?

  • A. Enable the Cloud Pub/Sub API in the API Library on the GCP Console.
  • B. Rely on the automatic enablement of the Cloud Pub/Sub API when the Service Account accesses it.
  • C. Use Deployment Manager to deploy your application. Rely on the automatic enablement of all APIs used by the application being deployed.
  • D. Grant the App Engine Default service account the role of Cloud Pub/Sub Admin. Have your application enable the API on the first connection to Cloud Pub/Sub.

Answer: A

Explanation:
Quickstart: using the Google Cloud Console. This page shows you how to perform basic tasks in Pub/Sub using the Google Cloud Console. Before you begin, set up a Cloud Console project: create or select a project, then enable the Pub/Sub API for that project. You can view and manage these resources at any time in the Cloud Console. Then install and initialize the Cloud SDK. Note: you can run the gcloud tool in the Cloud Console without installing the Cloud SDK by using Cloud Shell.
https://cloud.google.com/pubsub/docs/quickstart-console
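Besides the Console's API Library, the same API can be enabled from the command line (the project ID is a placeholder):

```shell
# Enable the Pub/Sub API for the project
gcloud services enable pubsub.googleapis.com --project=my-project
```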

NEW QUESTION 16
You are deploying an application to a Compute Engine VM in a managed instance group. The application must be running at all times, but only a single instance of the VM should run per GCP project. How should you configure the instance group?

  • A. Set autoscaling to On, set the minimum number of instances to 1, and then set the maximum number of instances to 1.
  • B. Set autoscaling to Off, set the minimum number of instances to 1, and then set the maximum number of instances to 1.
  • C. Set autoscaling to On, set the minimum number of instances to 1, and then set the maximum number of instances to 2.
  • D. Set autoscaling to Off, set the minimum number of instances to 1, and then set the maximum number of instances to 2.

Answer: A

Explanation:
https://cloud.google.com/compute/docs/autoscaler#specifications
Autoscaling works independently from autohealing. If you configure autohealing for your group and an instance fails the health check, the autohealer attempts to recreate the instance. Recreating an instance can cause the number of instances in the group to fall below the autoscaling threshold (minNumReplicas) that you specify.
Since we need the application running at all times, we need a minimum of 1 instance.
Since only a single instance of the VM should run, we need a maximum of 1 instance.
We want the application running at all times. If the VM crashes due to any underlying hardware failure, we want another instance to be added to the MIG so that the application can continue to serve requests. We can achieve this by enabling autoscaling. The only option that satisfies all three requirements is: Set autoscaling to On, set the minimum number of instances to 1, and set the maximum number of instances to 1.
Ref: https://cloud.google.com/compute/docs/autoscaler
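The min=1/max=1 autoscaling configuration might be applied like this (the group name, zone, and utilization target are placeholders):

```shell
# Keep exactly one instance running at all times
gcloud compute instance-groups managed set-autoscaling my-mig \
  --zone=us-central1-a \
  --mode=on \
  --min-num-replicas=1 \
  --max-num-replicas=1 \
  --target-cpu-utilization=0.8
```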

NEW QUESTION 17
Your team wants to deploy a specific content management system (CMS) solution to Google Cloud. You need a quick and easy way to deploy and install the solution. What should you do?

  • A. Search for the CMS solution in Google Cloud Marketplace. Use gcloud CLI to deploy the solution.
  • B. Search for the CMS solution in Google Cloud Marketplace. Deploy the solution directly from Cloud Marketplace.
  • C. Search for the CMS solution in Google Cloud Marketplace. Use Terraform and the Cloud Marketplace ID to deploy the solution with the appropriate parameters.
  • D. Use the installation guide of the CMS provider. Perform the installation through your configuration management system.

Answer: B

NEW QUESTION 18
You have a project for your App Engine application that serves a development environment. The required testing has succeeded and you want to create a new project to serve as your production environment. What should you do?

  • A. Use gcloud to create the new project, and then deploy your application to the new project.
  • B. Use gcloud to create the new project and to copy the deployed application to the new project.
  • C. Create a Deployment Manager configuration file that copies the current App Engine deployment into a new project.
  • D. Deploy your application again using gcloud and specify the project parameter with the new project name to create the new project.

Answer: A

Explanation:
You can deploy to a different project by using the --project flag.
By default, the service is deployed to the current project configured via:
$ gcloud config set core/project PROJECT
To override this value for a single deployment, use the --project flag:
$ gcloud app deploy ~/my_app/app.yaml --project=PROJECT
Ref: https://cloud.google.com/sdk/gcloud/reference/app/deploy

NEW QUESTION 19
You need to create an autoscaling managed instance group for an HTTPS web application. You want to make sure that unhealthy VMs are recreated. What should you do?

  • A. Create a health check on port 443 and use that when creating the Managed Instance Group.
  • B. Select Multi-Zone instead of Single-Zone when creating the Managed Instance Group.
  • C. In the Instance Template, add the label ‘health-check’.
  • D. In the Instance Template, add a startup script that sends a heartbeat to the metadata server.

Answer: A

Explanation:
https://cloud.google.com/compute/docs/instance-groups/autohealing-instances-in-migs#setting_up_an_autoheali
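A sketch of attaching an HTTPS health check for autohealing (the check name, group name, and request path are placeholders):

```shell
# Create an HTTPS health check on port 443
gcloud compute health-checks create https my-https-check \
  --port=443 --request-path=/healthz

# Attach it to the managed instance group so unhealthy VMs are recreated
gcloud compute instance-groups managed update my-mig \
  --zone=us-central1-a \
  --health-check=my-https-check \
  --initial-delay=300
```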

NEW QUESTION 20
You host a static website on Cloud Storage. Recently, you began to include links to PDF files on this site. Currently, when users click on the links to these PDF files, their browsers prompt them to save the file onto their local system. Instead, you want the clicked PDF files to be displayed within the browser window directly, without prompting the user to save the file locally. What should you do?

  • A. Enable Cloud CDN on the website frontend.
  • B. Enable ‘Share publicly’ on the PDF file objects.
  • C. Set Content-Type metadata to application/pdf on the PDF file objects.
  • D. Add a label to the storage bucket with a key of Content-Type and value of application/pdf.

Answer: C

Explanation:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_Types#importance_of_setting_t
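Setting the metadata on existing objects can be sketched with gsutil (the bucket and path are placeholders):

```shell
# Set Content-Type so browsers render the PDFs inline
gsutil setmeta -h "Content-Type:application/pdf" \
  gs://my-static-site/docs/*.pdf
```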

NEW QUESTION 21
......

Recommend!! Get the Full Associate-Cloud-Engineer dumps in VCE and PDF From Thedumpscentre.com, Welcome to Download: https://www.thedumpscentre.com/Associate-Cloud-Engineer-dumps/ (New 283 Q&As Version)