Ucertify has the most accurate and authentic Cloudera practice questions, with 100% correct answers. Our certified subject-matter experts are dedicated to researching and creating Cloudera exam dumps that contain the latest content, in accordance with the CCA-500 exam syllabus. We hope you will pass the Cloudera CCA-500 exam with our practice questions and answers. Many candidates have succeeded after purchasing our Cloudera products, and we are proud of our high passing ratio. However, if you should fail the Cloudera certification exam, we will give you a full refund of your purchase fee or send you another product of identical value for free.

2021 Jan CCA-500 free practice exam

Q21. Your cluster implements HDFS High Availability (HA). Your two NameNodes are named nn01 and nn02. What occurs when you execute the command: hdfs haadmin -failover nn01 nn02?

A. nn02 is fenced, and nn01 becomes the active NameNode

B. nn01 is fenced, and nn02 becomes the active NameNode

C. nn01 becomes the standby NameNode and nn02 becomes the active NameNode

D. nn02 becomes the standby NameNode and nn01 becomes the active NameNode

Answer: B

Explanation:

failover - initiate a failover between two NameNodes

This subcommand causes a failover from the first provided NameNode to the second. If the first NameNode is in the Standby state, this command simply transitions the second to the Active state without error. If the first NameNode is in the Active state, an attempt will be made to gracefully transition it to the Standby state. If this fails, the fencing methods (as configured by dfs.ha.fencing.methods) will be attempted in order until one of the methods succeeds. Only after this process will the second NameNode be transitioned to the Active state. If no fencing method succeeds, the second NameNode will not be transitioned to the Active state, and an error will be returned.
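
For reference, here is a minimal sketch of running and verifying such a failover from the shell; the service IDs nn01 and nn02 are taken from the question, and the printed states are illustrative:

# Check which NameNode is currently active
hdfs haadmin -getServiceState nn01   # prints "active" (for example)
hdfs haadmin -getServiceState nn02   # prints "standby"

# Initiate a graceful failover: nn01 is fenced if needed, then nn02 goes active
hdfs haadmin -failover nn01 nn02

# Verify the roles have swapped
hdfs haadmin -getServiceState nn01   # now "standby"
hdfs haadmin -getServiceState nn02   # now "active"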


Q22. Which two are features of Hadoop’s rack topology? (Choose two)

A. Configuration of rack awareness is accomplished using a configuration file. You cannot use a rack topology script.

B. Hadoop gives preference to intra-rack data transfer in order to conserve bandwidth

C. Rack location is considered in the HDFS block placement policy

D. HDFS is rack aware but the MapReduce daemons are not

E. Even for small clusters on a single rack, configuring rack awareness will improve performance

Answer: B,C
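
For context, rack awareness is usually enabled by setting net.topology.script.file.name in core-site.xml to point at a script that maps IP addresses to rack IDs. A minimal illustrative sketch; the assumption that the third octet of the IP identifies the rack is made up for this example:

#!/bin/bash
# Topology script: print one rack path per IP address argument.
# Assumption (for illustration only): the third octet identifies the rack.
for ip in "$@"; do
  rack=$(echo "$ip" | cut -d. -f3)
  echo "/rack-${rack}"
done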


Q23. You are running a Hadoop cluster with a NameNode on host mynamenode, a secondary NameNode on host mysecondarynamenode and several DataNodes.

Which best describes how you determine when the last checkpoint happened?

A. Execute hdfs namenode -report on the command line and look at the Last Checkpoint information

B. Execute hdfs dfsadmin -saveNamespace on the command line, which returns the last checkpoint value in the fstime file

C. Connect to the web UI of the Secondary NameNode (http://mysecondarynamenode:50090/) and look at the “Last Checkpoint” information

D. Connect to the web UI of the NameNode (http://mynamenode:50070) and look at the “Last Checkpoint” information

Answer: C

Explanation: Reference: https://www.inkling.com/read/hadoop-definitive-guide-tom-white-3rd/chapter-10/hdfs
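
If you prefer the shell to a browser, the same status page can be fetched directly. This sketch assumes the default Secondary NameNode web port of 50090; the exact label in the HTML varies by Hadoop version:

# Fetch the Secondary NameNode status page and look for the checkpoint line
curl -s http://mysecondarynamenode:50090/ | grep -i checkpoint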


Q24. Each node in your Hadoop cluster, running YARN, has 64GB memory and 24 cores. Your yarn-site.xml has the following configuration:

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>32768</value>
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>12</value>
</property>

You want YARN to launch no more than 16 containers per node. What should you do?

A. Modify yarn-site.xml with the following property:

<name>yarn.scheduler.minimum-allocation-mb</name>

<value>2048</value>

B. Modify yarn-site.xml with the following property:

<name>yarn.scheduler.minimum-allocation-mb</name>

<value>4096</value>

C. Modify yarn-site.xml with the following property:

<name>yarn.nodemanager.resource.cpu-vcores</name>

D. No action is needed: YARN’s dynamic resource allocation automatically optimizes the node memory and cores

Answer: A
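
Why A works: the number of containers a NodeManager can launch is bounded by its configured memory divided by the scheduler’s minimum allocation. A quick sanity check of the arithmetic:

# containers per node = NodeManager memory / scheduler minimum allocation
echo $(( 32768 / 2048 ))   # prints 16
echo $(( 32768 / 4096 ))   # option B would cap the node at 8 containers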


Q25. Your Hadoop cluster contains nodes in three racks. You have not configured the dfs.hosts property in the NameNode’s configuration file. What results?

A. The NameNode will update the dfs.hosts property to include machines running the DataNode daemon on the next NameNode reboot or with the command dfsadmin -refreshNodes

B. No new nodes can be added to the cluster until you specify them in the dfs.hosts file

C. Any machine running the DataNode daemon can immediately join the cluster

D. Presented with a blank dfs.hosts property, the NameNode will permit DataNodes specified in mapred.hosts to join the cluster

Answer: C
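
When you do want to restrict which hosts may join, dfs.hosts in hdfs-site.xml points at an include file listing the permitted DataNodes. A sketch of that property; the file path is an assumption:

<property>
  <name>dfs.hosts</name>
  <value>/etc/hadoop/conf/dfs.hosts</value>
</property>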


Up-to-date CCA-500 practice test:

Q26. You have installed a cluster running HDFS and MapReduce version 2 (MRv2) on YARN. You have no dfs.hosts entries in your hdfs-site.xml configuration file. You configure a new worker node by setting fs.default.name in its configuration files to point to the NameNode on your cluster, and you start the DataNode daemon on that worker node. What do you have to do on the cluster to allow the worker node to join, and start storing HDFS blocks?

A. Without creating a dfs.hosts file or making any entries, run the command hadoop dfsadmin -refreshNodes on the NameNode

B. Restart the NameNode

C. Create a dfs.hosts file on the NameNode, add the worker node’s name to it, then issue the command hadoop dfsadmin -refreshNodes on the NameNode

D. Nothing; the worker node will automatically join the cluster when the NameNode daemon is started

Answer: A
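
For contrast with answer A, here is a sketch of the workflow when an include file is maintained; the file path and hostname are illustrative, and the path must match the dfs.hosts property shown under Q25:

# Add the new worker to the include file the NameNode reads
echo "newworker.example.com" >> /etc/hadoop/conf/dfs.hosts

# Tell the NameNode to re-read its include/exclude files
hdfs dfsadmin -refreshNodes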


Q27. You have recently converted your Hadoop cluster from a MapReduce 1 (MRv1) architecture to a MapReduce 2 (MRv2) on YARN architecture. Your developers are accustomed to specifying the number of map and reduce tasks (resource allocation) when they run jobs. A developer wants to know how to specify the number of reduce tasks when a specific job runs. Which method should you tell that developer to implement?

A. MapReduce version 2 (MRv2) on YARN abstracts resource allocation away from the idea of “tasks” into memory and virtual cores, thus eliminating the need for a developer to specify the number of reduce tasks, and indeed preventing the developer from specifying the number of reduce tasks.

B. In YARN, resource allocation is a function of megabytes of memory in multiples of 1024 MB. Thus, they should specify the amount of memory they need by executing -D mapreduce-reduces.memory-mb=2048

C. In YARN, the ApplicationMaster is responsible for requesting the resources required for a specific job launch. Thus, executing -D yarn.applicationmaster.reduce.tasks=2 will specify that the ApplicationMaster launch two task containers on the worker nodes.

D. Developers specify reduce tasks in the exact same way for both MapReduce version 1 (MRv1) and MapReduce version 2 (MRv2) on YARN. Thus, executing -D mapreduce.job.reduces=2 will specify two reduce tasks.

E. In YARN, resource allocation is a function of virtual cores specified by the ApplicationMaster making requests to the NodeManager, where a reduce task is handled by a single container (and thus a single virtual core). Thus, the developer needs to specify the number of virtual cores to the NodeManager by executing -p yarn.nodemanager.cpu-vcores=2

Answer: D
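
A minimal sketch of answer D in practice; the examples jar path is an assumption that varies by distribution:

# Run the stock wordcount example with exactly two reduce tasks
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar wordcount \
  -D mapreduce.job.reduces=2 \
  /user/alice/input /user/alice/output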


Q28. You want a node to swap Hadoop daemon data from RAM to disk only when absolutely necessary. What should you do?

A. Delete the /dev/vmswap file on the node

B. Delete the /etc/swap file on the node

C. Set the ram.swap parameter to 0 in core-site.xml

D. Set the vm.swappiness parameter to 0 on the node

E. Delete the /swapfile file on the node

Answer: D
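
On a Linux node this is typically applied with sysctl; note that some vendor guides recommend a value of 1 rather than 0 on newer kernels:

# Apply immediately (runtime only)
sysctl -w vm.swappiness=0

# Persist across reboots
echo "vm.swappiness = 0" >> /etc/sysctl.conf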


Q29. On a cluster running MapReduce v2 (MRv2) on YARN, a MapReduce job is given a directory of 10 plain text files as its input directory. Each file is made up of 3 HDFS blocks. How many Mappers will run?

A. We cannot say; the number of Mappers is determined by the ResourceManager

B. We cannot say; the number of Mappers is determined by the developer

C. 30

D. 3

E. 10

F. We cannot say; the number of mappers is determined by the ApplicationMaster

Answer: C

Explanation: With the default TextInputFormat, plain text files are splittable and each HDFS block forms one input split, with one map task launched per split: 10 files x 3 blocks = 30 Mappers.


Q30. Your company stores user profile records in an OLTP database. You want to join these records with web server logs you have already ingested into the Hadoop file system. What is the best way to obtain and ingest these user records?

A. Ingest with Hadoop streaming

B. Ingest using Hive’s LOAD DATA command

C. Ingest with sqoop import

D. Ingest with Pig’s LOAD command

E. Ingest using the HDFS put command

Answer: C
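
A minimal sketch of answer C; the JDBC URL, credentials, table, and target directory are placeholders:

# Import the user-profile table from the OLTP database into HDFS
sqoop import \
  --connect jdbc:mysql://oltp-db.example.com/crm \
  --username dbuser -P \
  --table user_profiles \
  --target-dir /user/alice/user_profiles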