Want to know the Passleader Associate-Cloud-Engineer exam practice test features? Want to learn more about the Google Cloud Certified - Associate Cloud Engineer certification experience? Study best-quality Google Associate-Cloud-Engineer answers to the latest Associate-Cloud-Engineer questions at Passleader. Get success with an absolute guarantee to pass the Google Associate-Cloud-Engineer (Google Cloud Certified - Associate Cloud Engineer) test on your first attempt.
Here are some free Associate-Cloud-Engineer dump questions for you:
NEW QUESTION 1
You have a website hosted on App Engine standard environment. You want 1% of your users to see a new test version of the website. You want to minimize complexity. What should you do?
- A. Deploy the new version in the same application and use the --migrate option.
- B. Deploy the new version in the same application and use the --splits option to give a weight of 99 to the current version and a weight of 1 to the new version.
- C. Create a new App Engine application in the same project. Deploy the new version in that application. Use the App Engine library to proxy 1% of the requests to the new version.
- D. Create a new App Engine application in the same project. Deploy the new version in that application. Configure your network load balancer to send 1% of the traffic to that new application.
Answer: B
Explanation:
https://cloud.google.com/appengine/docs/standard/python/splitting-traffic#gcloud
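As a hedged sketch of option B, the split can be done from the command line; the version IDs v1/v2 below are hypothetical:

```shell
# Deploy the new version without routing any traffic to it.
gcloud app deploy --version=v2 --no-promote

# Send 1% of traffic to the new version and keep 99% on the current one.
gcloud app services set-traffic default --splits=v1=99,v2=1 --split-by=ip
```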
NEW QUESTION 2
You are building an application that will run in your data center. The application will use Google Cloud Platform (GCP) services like AutoML. You created a service account that has appropriate access to AutoML. You need to enable authentication to the APIs from your on-premises environment. What should you do?
- A. Use service account credentials in your on-premises application.
- B. Use gcloud to create a key file for the service account that has appropriate permissions.
- C. Set up direct interconnect between your data center and Google Cloud Platform to enable authentication for your on-premises applications.
- D. Go to the IAM & admin console, grant a user account permissions similar to the service account permissions, and use this user account for authentication from your data center.
Answer: B
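A minimal sketch of option B; the service account address, project, and key path are hypothetical:

```shell
# Create a key file for the existing service account.
gcloud iam service-accounts keys create key.json \
    --iam-account=automl-sa@my-project.iam.gserviceaccount.com

# On the on-premises host, point the Google client libraries at the key file.
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json
```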
NEW QUESTION 3
You have an on-premises data analytics set of binaries that processes data files in memory for about 45 minutes every midnight. The sizes of those data files range from 1 gigabyte to 16 gigabytes. You want to migrate this application to Google Cloud with minimal effort and cost. What should you do?
- A. Upload the code to Cloud Functions. Use Cloud Scheduler to start the application.
- B. Create a container for the set of binaries. Use Cloud Scheduler to start a Cloud Run job for the container.
- C. Create a container for the set of binaries. Deploy the container to Google Kubernetes Engine (GKE) and use the Kubernetes scheduler to start the application.
- D. Lift and shift to a VM on Compute Engine. Use an instance schedule to start and stop the instance.
Answer: B
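A sketch of option B, with hypothetical names throughout; the Cloud Run Admin API URI in the Scheduler trigger is shown schematically:

```shell
# Create a Cloud Run job from the containerized binaries. 32Gi of memory
# covers data files of up to 16 GB processed in memory.
gcloud run jobs create nightly-analytics \
    --image=gcr.io/my-project/analytics-binaries \
    --memory=32Gi --cpu=8 --region=us-central1

# Trigger the job every midnight via Cloud Scheduler.
gcloud scheduler jobs create http nightly-analytics-trigger \
    --schedule="0 0 * * *" \
    --uri="https://us-central1-run.googleapis.com/apis/run.googleapis.com/v1/namespaces/my-project/jobs/nightly-analytics:run" \
    --oauth-service-account-email=scheduler-sa@my-project.iam.gserviceaccount.com
```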
NEW QUESTION 4
You are performing a monthly security check of your Google Cloud environment and want to know who has access to view data stored in your Google Cloud Project. What should you do?
- A. Enable Audit Logs for all APIs that are related to data storage.
- B. Review the IAM permissions for any role that allows for data access.
- C. Review the Identity-Aware Proxy settings for each resource.
- D. Create a Data Loss Prevention job.
Answer: B
Explanation:
https://cloud.google.com/logging/docs/audit
NEW QUESTION 5
Your company runs one batch process in an on-premises server that takes around 30 hours to complete. The task runs monthly, can be performed offline, and must be restarted if interrupted. You want to migrate this workload to the cloud while minimizing cost. What should you do?
- A. Migrate the workload to a Compute Engine Preemptible VM.
- B. Migrate the workload to a Google Kubernetes Engine cluster with Preemptible nodes.
- C. Migrate the workload to a Compute Engine VM. Start and stop the instance as needed.
- D. Create an Instance Template with Preemptible VMs on. Create a Managed Instance Group from the template and adjust Target CPU Utilization. Migrate the workload.
Answer: C
Explanation:
Install the workload on a Compute Engine VM and start and stop the instance as needed. Per the question, the job runs for about 30 hours, can be performed offline, and must be restarted if interrupted. Preemptible VMs are cheaper, but they are never available beyond 24 hours and can be preempted at any time, which would force the 30-hour batch process to restart from the beginning.
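The start/stop lifecycle can be driven from the CLI; instance name and zone below are hypothetical, and a stopped instance incurs no compute charges:

```shell
# Start the VM before the monthly run.
gcloud compute instances start batch-vm --zone=us-central1-a

# ... the ~30-hour batch process runs here ...

# Stop the VM once the batch completes.
gcloud compute instances stop batch-vm --zone=us-central1-a
```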
NEW QUESTION 6
You want to configure 10 Compute Engine instances for availability when maintenance occurs. Your requirements state that these instances should attempt to automatically restart if they crash. Also, the instances should be highly available including during system maintenance. What should you do?
- A. Create an instance template for the instances. Set 'Automatic Restart' to on. Set 'On-host maintenance' to Migrate VM instance. Add the instance template to an instance group.
- B. Create an instance template for the instances. Set 'Automatic Restart' to off. Set 'On-host maintenance' to Terminate VM instances. Add the instance template to an instance group.
- C. Create an instance group for the instances. Set the 'Autohealing' health check to healthy (HTTP).
- D. Create an instance group for the instances. Verify that the 'Advanced creation options' setting for 'do not retry machine creation' is set to off.
Answer: A
Explanation:
Create an instance template for the instances so the VMs have the same specs. Set 'Automatic Restart' to on so a VM automatically restarts upon a crash. Set 'On-host maintenance' to Migrate VM instance; this takes care of the VM during a maintenance window by live-migrating it, keeping it highly available. Add the instance template to an instance group so the instances can be managed together.
• onHostMaintenance: Determines the behavior when a maintenance event occurs that might cause your instance to reboot.
• [Default] MIGRATE, which causes Compute Engine to live migrate an instance when there is a maintenance event.
• TERMINATE, which stops an instance instead of migrating it.
• automaticRestart: Determines the behavior when an instance crashes or is stopped by the system.
• [Default] true, so Compute Engine restarts an instance if the instance crashes or is stopped.
• false, so Compute Engine does not restart an instance if the instance crashes or is stopped.
Enabling automatic restart ensures that compute engine instances are automatically restarted when they crash. And Enabling Migrate VM Instance enables live migrates i.e. compute instances are migrated during system maintenance and remain running during the migration.
Automatic Restart: If your instance is set to terminate when there is a maintenance event, or if your instance crashes because of an underlying hardware issue, you can set up Compute Engine to automatically restart the instance by setting the automaticRestart field to true. This setting does not apply if the instance is taken offline through a user action, such as calling sudo shutdown, or during a zone outage. Ref: https://cloud.google.com/compute/docs/instances/setting-instance-scheduling-options#autorestart
Enabling the Migrate VM Instance option migrates your instance away from an infrastructure maintenance event, and your instance remains running during the migration. Your instance might experience a short period of decreased performance, although generally, most instances should not notice any difference. This is ideal for instances that require constant uptime and can tolerate a short period of decreased performance. Ref: https://cloud.google.com/compute/docs/instances/setting-instance-scheduling-options#live_
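These scheduling options map to flags on the instance template; the template and group names below are hypothetical, and automatic restart is already the default (stated explicitly here):

```shell
# MIGRATE live-migrates VMs during host maintenance; --restart-on-failure
# makes Compute Engine restart a VM that crashes.
gcloud compute instance-templates create ha-template \
    --machine-type=e2-standard-4 \
    --maintenance-policy=MIGRATE \
    --restart-on-failure

# Manage the 10 instances from the template as a group.
gcloud compute instance-groups managed create ha-group \
    --template=ha-template --size=10 --zone=us-central1-a
```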
NEW QUESTION 7
You need to run an important query in BigQuery but expect it to return a lot of records. You want to find out how much it will cost to run the query. You are using on-demand pricing. What should you do?
- A. Arrange to switch to Flat-Rate pricing for this query, then move back to on-demand.
- B. Use the command line to run a dry run query to estimate the number of bytes read. Then convert that bytes estimate to dollars using the Pricing Calculator.
- C. Use the command line to run a dry run query to estimate the number of bytes returned. Then convert that bytes estimate to dollars using the Pricing Calculator.
- D. Run a select count(*) to get an idea of how many records your query will look through. Then convert that number of rows to dollars using the Pricing Calculator.
Answer: B
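A sketch of the dry run from the bq CLI; the project, dataset, and table names are hypothetical:

```shell
# --dry_run validates the query and reports the bytes it would read,
# without running it or incurring any cost.
bq query --use_legacy_sql=false --dry_run \
    'SELECT name FROM `my-project.my_dataset.my_table` WHERE year = 2024'
```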
NEW QUESTION 8
You are hosting an application from Compute Engine virtual machines (VMs) in us-central1-a. You want to adjust your design to support the failure of a single Compute Engine zone, eliminate downtime, and minimize cost. What should you do?
- A. Create Compute Engine resources in us-central1-b. Balance the load across both us-central1-a and us-central1-b.
- B. Create a Managed Instance Group and specify us-central1-a as the zone. Configure the Health Check with a short Health Interval.
- C. Create an HTTP(S) Load Balancer. Create one or more global forwarding rules to direct traffic to your VMs.
- D. Perform regular backups of your application. Create a Cloud Monitoring Alert and be notified if your application becomes unavailable. Restore from backups when notified.
Answer: A
Explanation:
Choosing a region and zone: You choose which region or zone hosts your resources, which controls where your data is stored and used. Choosing a region and zone is important for several reasons:
Handling failures
Distribute your resources across multiple zones and regions to tolerate outages. Google designs zones to be independent from each other: a zone usually has power, cooling, networking, and control planes that are isolated from other zones, and most single failure events will affect only a single zone. Thus, if a zone becomes unavailable, you can transfer traffic to another zone in the same region to keep your services running. Similarly, if a region experiences any disturbances, you should have backup services running in a different region. For more information about distributing your resources and designing a robust system, see Designing Robust Systems.
Decreased network latency: To decrease network latency, you might want to choose a region or zone that is close to your point of service.
https://cloud.google.com/compute/docs/regions-zones#choosing_a_region_and_zone
NEW QUESTION 9
You have an application that uses Cloud Spanner as a backend database. The application has a very predictable traffic pattern. You want to automatically scale up or down the number of Spanner nodes depending on traffic. What should you do?
- A. Create a cron job that runs on a scheduled basis to review Stackdriver monitoring metrics, and then resize the Spanner instance accordingly.
- B. Create a Stackdriver alerting policy to send an alert to on-call SRE emails when Cloud Spanner CPU exceeds the threshold. SREs would scale resources up or down accordingly.
- C. Create a Stackdriver alerting policy to send an alert to the Google Cloud Support email when Cloud Spanner CPU exceeds your threshold. Google support would scale resources up or down accordingly.
- D. Create a Stackdriver alerting policy to send an alert to a webhook when Cloud Spanner CPU is over or under your threshold. Create a Cloud Function that listens to HTTP and resizes Spanner resources accordingly.
Answer: D
Explanation:
CPU utilization is a recommended proxy for traffic when it comes to Cloud Spanner. See: Alerts for high CPU utilization. The following table specifies our recommendations for maximum CPU usage for both single-region and multi-region instances. These numbers are to ensure that your instance has enough compute capacity to continue to serve your traffic in the event of the loss of an entire zone (for single-region instances) or an entire region (for multi-region instances). https://cloud.google.com/spanner/docs/cpu-utilization
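The Cloud Function behind the webhook in option D would resize the instance through the Spanner Admin API; the equivalent CLI call, with a hypothetical instance name, is:

```shell
# Scale the Spanner instance up or down by changing its node count.
gcloud spanner instances update my-instance --nodes=5
```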
NEW QUESTION 10
You are running a data warehouse on BigQuery. A partner company is offering a recommendation engine based on the data in your data warehouse. The partner company is also running their application on Google Cloud. They manage the resources in their own project, but they need access to the BigQuery dataset in your project. You want to provide the partner company with access to the dataset What should you do?
- A. Create a Service Account in your own project, and grant this Service Account access to BigQuery in your project
- B. Create a Service Account in your own project, and ask the partner to grant this Service Account access to BigQuery in their project
- C. Ask the partner to create a Service Account in their project, and have them give the Service Account access to BigQuery in their project
- D. Ask the partner to create a Service Account in their project, and grant their Service Account access to the BigQuery dataset in your project
Answer: D
Explanation:
https://gtseres.medium.com/using-service-accounts-across-projects-in-gcp-cf9473fef8f0#:~:text=Go%20to%20t
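A sketch of the grant in option D, run in your project; the partner's service account address is hypothetical, and for a tighter scope the role can be granted on the dataset itself rather than the project:

```shell
# Grant the partner's service account read access to BigQuery data in your project.
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:recommender@partner-project.iam.gserviceaccount.com" \
    --role="roles/bigquery.dataViewer"
```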
NEW QUESTION 11
You have an application that receives SSL-encrypted TCP traffic on port 443. Clients for this application are located all over the world. You want to minimize latency for the clients. Which load balancing option should you use?
- A. HTTPS Load Balancer
- B. Network Load Balancer
- C. SSL Proxy Load Balancer
- D. Internal TCP/UDP Load Balancer. Add a firewall rule allowing ingress traffic from 0.0.0.0/0 on the target instances.
Answer: C
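A minimal sketch of the SSL Proxy front end from option C; the backend service and certificate names are hypothetical and must already exist:

```shell
# Terminate SSL at globally distributed proxies close to the clients.
gcloud compute target-ssl-proxies create app-ssl-proxy \
    --backend-service=app-backend --ssl-certificates=app-cert

# Expose the proxy globally on port 443.
gcloud compute forwarding-rules create app-ssl-rule \
    --global --target-ssl-proxy=app-ssl-proxy --ports=443
```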
NEW QUESTION 12
You recently received a new Google Cloud project with an attached billing account where you will work. You need to create instances, set firewalls, and store data in Cloud Storage. You want to follow
Google-recommended practices. What should you do?
- A. Use the gcloud CLI services enable cloudresourcemanager.googleapis.com command to enable all resources.
- B. Use the gcloud services enable compute.googleapis.com command to enable Compute Engine and the gcloud services enable storage-api.googleapis.com command to enable the Cloud Storage APIs.
- C. Open the Google Cloud console and enable all Google Cloud APIs from the API dashboard.
- D. Open the Google Cloud console and run gcloud init --project <project-id> in a Cloud Shell.
Answer: B
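Option B enables only the APIs the work requires:

```shell
# Enable Compute Engine and the Cloud Storage APIs for the project.
gcloud services enable compute.googleapis.com
gcloud services enable storage-api.googleapis.com
```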
NEW QUESTION 13
You are planning to migrate your on-premises data to Google Cloud. The data includes:
• 200 TB of video files in SAN storage
• Data warehouse data stored on Amazon Redshift
• 20 GB of PNG files stored on an S3 bucket
You need to load the video files into a Cloud Storage bucket, transfer the data warehouse data into BigQuery, and load the PNG files into a second Cloud Storage bucket. You want to follow Google-recommended practices and avoid writing any code for the migration. What should you do?
- A. Use gcloud storage for the video files, Dataflow for the data warehouse data, and Storage Transfer Service for the PNG files.
- B. Use Transfer Appliance for the videos, BigQuery Data Transfer Service for the data warehouse data, and Storage Transfer Service for the PNG files.
- C. Use Storage Transfer Service for the video files, BigQuery Data Transfer Service for the data warehouse data, and Storage Transfer Service for the PNG files.
- D. Use Cloud Data Fusion for the video files, Dataflow for the data warehouse data, and Storage Transfer Service for the PNG files.
Answer: B
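As a partial sketch: Transfer Appliance is ordered through the console and the Redshift migration is configured in the BigQuery Data Transfer Service UI, but the S3-to-Cloud Storage piece can be set up from the CLI (bucket names and credentials file are hypothetical):

```shell
# Storage Transfer Service copies the PNG files from S3 without any code.
gcloud transfer jobs create s3://my-png-bucket gs://my-gcs-png-bucket \
    --source-creds-file=aws-creds.json
```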
NEW QUESTION 14
You are storing sensitive information in a Cloud Storage bucket. For legal reasons, you need to be able to record all requests that read any of the stored data. You want to make sure you comply with these requirements. What should you do?
- A. Enable the Identity Aware Proxy API on the project.
- B. Scan the bucket using the Data Loss Prevention API.
- C. Allow only a single Service Account access to read the data.
- D. Enable Data Access audit logs for the Cloud Storage API.
Answer: D
Explanation:
Logged information. Within Cloud Audit Logs, there are two types of logs:
• Admin Activity logs: entries for operations that modify the configuration or metadata of a project, bucket, or object.
• Data Access logs: entries for operations that modify objects or read a project, bucket, or object. There are several sub-types of Data Access logs:
• ADMIN_READ: entries for operations that read the configuration or metadata of a project, bucket, or object.
• DATA_READ: entries for operations that read an object.
• DATA_WRITE: entries for operations that create or modify an object.
https://cloud.google.com/storage/docs/audit-logs#types
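Data Access audit logs are enabled through the auditConfigs stanza of the project IAM policy; the sketch below uses a hypothetical project ID and edits the exported policy before re-applying it:

```shell
# Export the current IAM policy.
gcloud projects get-iam-policy my-project --format=json > policy.json

# Add to policy.json before re-applying:
#   "auditConfigs": [{"service": "storage.googleapis.com",
#                     "auditLogConfigs": [{"logType": "DATA_READ"},
#                                         {"logType": "DATA_WRITE"}]}]

# Apply the updated policy to turn on Data Access logs for Cloud Storage.
gcloud projects set-iam-policy my-project policy.json
```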
NEW QUESTION 15
You are building a product on top of Google Kubernetes Engine (GKE). You have a single GKE cluster. For each of your customers, a Pod is running in that cluster, and your customers can run arbitrary code inside their Pod. You want to maximize the isolation between your customers’ Pods. What should you do?
- A. Use Binary Authorization and whitelist only the container images used by your customers’ Pods.
- B. Use the Container Analysis API to detect vulnerabilities in the containers used by your customers’ Pods.
- C. Create a GKE node pool with a sandbox type configured to gVisor. Add the parameter runtimeClassName: gvisor to the specification of your customers’ Pods.
- D. Use the cos_containerd image for your GKE nodes. Add a nodeSelector with the value cloud.google.com/gke-os-distribution: cos_containerd to the specification of your customers’ Pods.
Answer: C
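A sketch of option C; the cluster and pool names are hypothetical:

```shell
# The sandbox flag enables gVisor on every node in the pool.
gcloud container node-pools create sandbox-pool \
    --cluster=my-cluster --zone=us-central1-a \
    --sandbox=type=gvisor

# Each customer Pod then opts into the sandbox in its spec:
#   spec:
#     runtimeClassName: gvisor
```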
NEW QUESTION 16
You have designed a solution on Google Cloud Platform (GCP) that uses multiple GCP products. Your company has asked you to estimate the costs of the solution. You need to provide estimates for the monthly total cost. What should you do?
- A. For each GCP product in the solution, review the pricing details on the product's pricing page. Use the pricing calculator to total the monthly costs for each GCP product.
- B. For each GCP product in the solution, review the pricing details on the product's pricing page. Create a Google Sheet that summarizes the expected monthly costs for each product.
- C. Provision the solution on GCP. Leave the solution provisioned for 1 week. Navigate to the Billing Report page in the Google Cloud Platform Console. Multiply the 1 week cost to determine the monthly costs.
- D. Provision the solution on GCP. Leave the solution provisioned for 1 week. Use Stackdriver to determine the provisioned and used resource amounts. Multiply the 1 week cost to determine the monthly costs.
Answer: A
Explanation:
You can use the Google Cloud Pricing Calculator to total the estimated monthly costs for each GCP product. You don't incur any charges for doing so.
Ref: https://cloud.google.com/products/calculator
NEW QUESTION 17
Your company requires all developers to have the same permissions, regardless of the Google Cloud project they are working on. Your company's security policy also restricts developer permissions to Compute Engine, Cloud Functions, and Cloud SQL. You want to implement the security policy with minimal effort. What should you do?
- A. • Create a custom role with Compute Engine, Cloud Functions, and Cloud SQL permissions in one project within the Google Cloud organization.• Copy the role across all projects created within the organization with the gcloud iam roles copy command.• Assign the role to developers in those projects.
- B. • Add all developers to a Google group in Google Groups for Workspace.• Assign the predefined role of Compute Admin to the Google group at the Google Cloud organization level.
- C. • Add all developers to a Google group in Cloud Identity.• Assign predefined roles for Compute Engine, Cloud Functions, and Cloud SQL permissions to the Google group for each project in the Google Cloud organization.
- D. • Add all developers to a Google group in Cloud Identity.• Create a custom role with Compute Engine, Cloud Functions, and Cloud SQL permissions at the Google Cloud organization level.• Assign the custom role to the Google group.
Answer: D
Explanation:
https://www.cloudskillsboost.google/focuses/1035?parent=catalog#:~:text=custom%20role%20at%20the%20or
NEW QUESTION 18
You have a Compute Engine instance hosting a production application. You want to receive an email if the instance consumes more than 90% of its CPU resources for more than 15 minutes. You want to use Google services. What should you do?
- A. 1. Create a consumer Gmail account. 2. Write a script that monitors the CPU usage. 3. When the CPU usage exceeds the threshold, have that script send an email using the Gmail account and smtp.gmail.com on port 25 as SMTP server.
- B. 1. Create a Stackdriver Workspace, and associate your Google Cloud Platform (GCP) project with it. 2. Create an Alerting Policy in Stackdriver that uses the threshold as a trigger condition. 3. Configure your email address in the notification channel.
- C. 1. Create a Stackdriver Workspace, and associate your GCP project with it. 2. Write a script that monitors the CPU usage and sends it as a custom metric to Stackdriver. 3. Create an uptime check for the instance in Stackdriver.
- D. 1. In Stackdriver Logging, create a logs-based metric to extract the CPU usage by using this regular expression: CPU Usage: ([0-9]{1,3})%. 2. In Stackdriver Monitoring, create an Alerting Policy based on this metric. 3. Configure your email address in the notification channel.
Answer: B
Explanation:
Specifying conditions for alerting policies This page describes how to specify conditions for alerting policies. The conditions for an alerting policy define what is monitored and when to trigger an alert. For example, suppose you want to define an alerting policy that emails you if the CPU utilization of a Compute Engine VM instance is above 80% for more than 3 minutes. You use the conditions dialog to specify that you want to monitor the CPU utilization of a Compute Engine VM instance, and that you want an alerting policy to trigger when that utilization is above 80% for 3 minutes. https://cloud.google.com/monitoring/alerts/ui-conditions-ga
https://cloud.google.com/monitoring/alerts/using-alerting-ui https://cloud.google.com/monitoring/support/notification-options
NEW QUESTION 19
You are running multiple microservices in a Kubernetes Engine cluster. One microservice is rendering images. The microservice responsible for the image rendering requires a large amount of CPU time compared to the memory it requires. The other microservices are workloads that are optimized for n1-standard machine types. You need to optimize your cluster so that all workloads are using resources as efficiently as possible. What should you do?
- A. Assign the pods of the image rendering microservice a higher pod priority than the other microservices.
- B. Create a node pool with compute-optimized machine type nodes for the image rendering microservice. Use the node pool with general-purpose machine type nodes for the other microservices.
- C. Use the node pool with general-purpose machine type nodes for the image rendering microservice. Create a node pool with compute-optimized machine type nodes for the other microservices.
- D. Configure the required amount of CPU and memory in the resource requests specification of the image rendering microservice deployment. Keep the resource requests for the other microservices at the default.
Answer: B
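A sketch of option B; the cluster and pool names are hypothetical, and c2 machine types are GCP's compute-optimized family:

```shell
# Dedicated compute-optimized pool for the CPU-heavy rendering workload.
gcloud container node-pools create render-pool \
    --cluster=my-cluster --zone=us-central1-a \
    --machine-type=c2-standard-8

# Pin the image-rendering Deployment to that pool via a nodeSelector:
#   nodeSelector:
#     cloud.google.com/gke-nodepool: render-pool
```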
NEW QUESTION 20
You created a Kubernetes deployment by running kubectl run nginx --image=nginx --replicas=1. After a few days, you decided you no longer want this deployment. You identified the pod and deleted it by running kubectl delete pod. You noticed the pod got recreated.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-84748895c4-nqqmt 1/1 Running 0 9m41s
$ kubectl delete pod nginx-84748895c4-nqqmt
pod nginx-84748895c4-nqqmt deleted
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-84748895c4-k6bzl 1/1 Running 0 25s
What should you do to delete the deployment and avoid pod getting recreated?
- A. kubectl delete deployment nginx
- B. kubectl delete --deployment=nginx
- C. kubectl delete pod nginx-84748895c4-k6bzl --no-restart 2
- D. kubectl delete inginx
Answer: A
Explanation:
This command correctly deletes the deployment. Pods are managed by kubernetes workloads (deployments). When a pod is deleted, the deployment detects the pod is unavailable and brings up another pod to maintain the replica count. The only way to delete the workload is by deleting the deployment itself using the kubectl delete deployment command.
$ kubectl delete deployment nginx
deployment.apps nginx deleted
Ref: https://kubernetes.io/docs/reference/kubectl/cheatsheet/#deleting-resources
NEW QUESTION 21
......
Thanks for reading the newest Associate-Cloud-Engineer exam dumps! We recommend you try the PREMIUM Surepassexam Associate-Cloud-Engineer dumps in VCE and PDF here: https://www.surepassexam.com/Associate-Cloud-Engineer-exam-dumps.html (283 Q&As Dumps)