Proper study for the Cloudera Certified Administrator for Apache Hadoop (CCAH) certification begins with Cloudera CCA-500 preparation products, which are designed to deliver up-to-date CCA-500 questions and help you pass the CCA-500 test on your first attempt. Try the free CCA-500 demo right now.

2021 Nov CCA-500 actual exam

Q31. Which scheduler would you deploy to ensure that your cluster allows short jobs to finish within a reasonable time without starting long-running jobs?

A. Complexity Fair Scheduler (CFS)

B. Capacity Scheduler

C. Fair Scheduler

D. FIFO Scheduler

Answer: C

Explanation: Reference: http://hadoop.apache.org/docs/r1.2.1/fair_scheduler.html
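
For context, a minimal sketch (not part of the original question) of how the Fair Scheduler is typically selected and then verified; rm-host is a placeholder for your ResourceManager host.

# Hedged sketch: the Fair Scheduler is normally selected by pointing
# yarn.resourcemanager.scheduler.class in yarn-site.xml at
# org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler
# and restarting the ResourceManager. The scheduler actually in use can then
# be checked over the RM REST API (port 8088 is the default web UI port).
curl -s http://rm-host:8088/ws/v1/cluster/scheduler | head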


Q32. You have just run a MapReduce job to filter user messages to only those of a selected geographical region. The output for this job is in a directory named westUsers, located just below your home directory in HDFS. Which command gathers these into a single file on your local file system?

A. hadoop fs -getmerge -R westUsers.txt

B. hadoop fs -getmerge westUsers westUsers.txt

C. hadoop fs -cp westUsers/* westUsers.txt

D. hadoop fs -get westUsers westUsers.txt

Answer: B
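
For reference, a minimal sketch of the intended command, assuming westUsers sits directly under your HDFS home directory (paths are illustrative):

# Concatenate all part files under westUsers in HDFS into one local file.
hadoop fs -getmerge westUsers westUsers.txt

# Quick local sanity check.
head westUsers.txt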


Q33. Which two features does Kerberos security add to a Hadoop cluster? (Choose two.)

A. User authentication on all remote procedure calls (RPCs)

B. Encryption for data during transfer between the Mappers and Reducers

C. Encryption for data on disk (“at rest”)

D. Authentication for user access to the cluster against a central server

E. Root access to the cluster for users hdfs and mapred but non-root access for clients

Answer: A,D
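
As a hedged illustration of option D in practice (principal and path are placeholders): once hadoop.security.authentication is set to kerberos in core-site.xml, a user needs a valid ticket from the KDC before RPCs to the cluster succeed.

# Obtain a Kerberos ticket, confirm it, then issue an authenticated HDFS RPC.
kinit alice@EXAMPLE.COM
klist
hadoop fs -ls /user/alice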


Updated CCA-500 free practice exam:

Q34. Which YARN daemon or service monitors a container’s per-application resource usage (e.g., memory, CPU)?

A. ApplicationMaster

B. NodeManager

C. ApplicationManagerService

D. ResourceManager

Answer: A
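
As a side note (standard YARN CLI commands; the application ID is a placeholder), per-application resource usage can be inspected from the command line:

# List running applications, then show one application's resource report.
yarn application -list
yarn application -status application_1400000000000_0001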


Q35. Assuming you’re not running HDFS Federation, what is the maximum number of NameNode daemons you should run on your cluster in order to avoid a “split-brain” scenario with your NameNode when running HDFS High Availability (HA) using Quorum-based storage?

A. Two active NameNodes and two Standby NameNodes

B. One active NameNode and one Standby NameNode

C. Two active NameNodes and one Standby NameNode

D. Unlimited. HDFS High Availability (HA) is designed to overcome limitations on the number of NameNodes you can deploy

Answer: B
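
A minimal sketch of how this is typically verified on an HA pair (nn1 and nn2 are placeholder NameNode IDs from dfs.ha.namenodes.<nameservice>): exactly one NameNode should report active, the other standby.

# Query the HA state of each configured NameNode.
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2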


Q36. Assuming a cluster running HDFS and MapReduce version 2 (MRv2) on YARN with all settings at their defaults, what do you need to do when adding a new slave node to the cluster?

A. Nothing, other than ensuring that the DNS (or the /etc/hosts files on all machines) contains an entry for the new node.

B. Restart the NameNode and ResourceManager daemons and resubmit any running jobs.

C. Add a new entry to /etc/nodes on the NameNode host.

D. Increase the value of dfs.number.of.nodes in hdfs-site.xml and restart the NameNode

Answer: A

Explanation: http://wiki.apache.org/hadoop/FAQ#I_have_a_new_node_I_want_to_add_to_a_running_Hadoop_cluster.3B_how_do_I_start_services_on_just_one_node.3F
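
Following the referenced FAQ, a hedged sketch of bringing the new node online (Apache Hadoop 2 script names shown; CDH packages typically use service init scripts instead):

# On the new slave node, start the worker daemons; with default settings
# they register with the NameNode and ResourceManager automatically.
hadoop-daemon.sh start datanode
yarn-daemon.sh start nodemanager

# From any node, confirm the new host appears in the live node lists.
hdfs dfsadmin -report
yarn node -list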