Hadoop 2.2.0 Cluster Installation and Configuration in Practice


Hadoop 2.x is very different from 1.x; it has become considerably more general for both storage and computation. Hadoop 2.x introduces the YARN framework to manage cluster resources, so it can serve any computation built on HDFS storage. MapReduce is now just one pluggable computation framework among others, and you can develop or choose whichever framework suits your needs. For now, MapReduce remains the best-supported option, since the MapReduce framework is comparatively mature; other YARN-based frameworks are still under development.
The core of YARN is resource management, allocation, and scheduling. Its resource allocation is finer-grained and more flexible than in Hadoop 1.x, and its prospects look good, but that same flexibility also makes it somewhat harder to configure and use. In my view, YARN is also still maturing: rough edges remain, problems come up frequently, documentation is relatively scarce, and the official docs are not always up to date. For large-scale data processing, YARN may not yet meet production needs; if the workload is pure MapReduce, the more mature Hadoop 1.x line is still the safer choice for production.
Below, we walk through installing and configuring a cluster on four machines running 64-bit CentOS 6.4: one master node and three slave nodes.

Host Configuration Planning

Modify the /etc/hosts file on every node, adding IP-to-hostname mappings for all four hosts:

10.95.3.48 m1

with matching entries for s1, s2, and s3. m1 is the cluster master node; s1, s2, and s3 are the slave nodes.
For host resources, we used VMware to create four virtual machines, configured as follows:

- the master node has 1 core and 1 GB of RAM
- each slave node has 1 core and 2 GB of RAM

Directory Planning

The Hadoop program directory is /home/shirdrn/cloud/programs/hadoop-2.2.0; the related data directories, including logs and storage, live under /home/shirdrn/cloud/storage/hadoop-2.2.0. Keeping programs and data separate makes it easier to synchronize configuration across nodes.
The directories are prepared as follows:

- On every node, create the program directory /home/shirdrn/cloud/programs/hadoop-2.2.0 for the Hadoop program files.
- On every node, create the data directory /home/shirdrn/cloud/storage/hadoop-2.2.0/hdfs for cluster data.
- On the master node m1, create /home/shirdrn/cloud/storage/hadoop-2.2.0/hdfs/name for the filesystem metadata.
- On every slave node, create /home/shirdrn/cloud/storage/hadoop-2.2.0/hdfs/data for the actual block data.
- On all nodes, the log directory is /home/shirdrn/cloud/storage/hadoop-2.2.0/logs.
- On all nodes, the temporary directory is /home/shirdrn/cloud/storage/hadoop-2.2.0/tmp.

All directories referenced in the configuration below follow this plan.
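A minimal sketch of the corresponding commands, run as user shirdrn on each node (the name directory is only needed on the master m1, and the data directory only on the slaves):

# program directory (the hadoop-2.2.0 tree is unpacked here)
mkdir -p /home/shirdrn/cloud/programs
# HDFS metadata directory -- master m1 only
mkdir -p /home/shirdrn/cloud/storage/hadoop-2.2.0/hdfs/name
# HDFS block data directory -- slave nodes only
mkdir -p /home/shirdrn/cloud/storage/hadoop-2.2.0/hdfs/data
# log and temporary directories -- all nodes
mkdir -p /home/shirdrn/cloud/storage/hadoop-2.2.0/logs
mkdir -p /home/shirdrn/cloud/storage/hadoop-2.2.0/tmp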

Environment Variable Configuration

First, using Sun's JDK, edit the ~/.bashrc file and add the following:

export JAVA_HOME=/usr/java/jdk1.6.0_45/
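It is typical to also export HADOOP_HOME and extend PATH in the same file; a sketch, assuming the program directory from the plan above:

export HADOOP_HOME=/home/shirdrn/cloud/programs/hadoop-2.2.0
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

Run source ~/.bashrc afterwards so the variables take effect in the current shell.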
Configuring hdfs-site.xml, yarn-site.xml, and mapred-site.xml

Next, edit the XML configuration files under etc/hadoop. The properties covered in this setup include:

- dfs.datanode.data.dir (hdfs-site.xml): Comma separated list of paths on the local filesystem of a DataNode where it should store its blocks.
- yarn.resourcemanager.scheduler.class (yarn-site.xml): org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.
- yarn.nodemanager.resource.memory-mb (yarn-site.xml): defines the total available resources on the NodeManager to be made available to running containers.
- mapreduce.application.classpath (mapred-site.xml): $HADOOP_HOME/share/hadoop/mapreduce/*,$HADOOP_HOME/share/hadoop/mapreduce/lib/*.
- mapreduce.reduce.shuffle.parallelcopies (mapred-site.xml): a higher number of parallel copies run by reduces to fetch outputs from a very large number of maps.
- mapreduce.jobhistory.address (mapred-site.xml): MapReduce JobHistory Server host:port; the default port is 10020.
- mapreduce.jobhistory.webapp.address (mapred-site.xml): MapReduce JobHistory Server Web UI host:port; the default port is 19888.
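A hedged sketch of the corresponding property entries; the values marked as assumed are derived from the directory plan, the RM address seen in the logs later, and the 2 GB slave RAM, not from confirmed settings:

In core-site.xml:

<property>
  <name>fs.defaultFS</name>
  <!-- assumed NameNode RPC address; port 9000 is a common choice -->
  <value>hdfs://m1:9000</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/shirdrn/cloud/storage/hadoop-2.2.0/tmp</value>
</property>

In hdfs-site.xml:

<property>
  <name>dfs.namenode.name.dir</name>
  <value>/home/shirdrn/cloud/storage/hadoop-2.2.0/hdfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/home/shirdrn/cloud/storage/hadoop-2.2.0/hdfs/data</value>
</property>

In yarn-site.xml:

<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>m1</value>
</property>
<property>
  <!-- required for MapReduce on YARN in 2.2.0 -->
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <!-- assumed value: the slaves here have 2 GB of RAM -->
  <value>2048</value>
</property>

In mapred-site.xml:

<property>
  <name>mapreduce.application.classpath</name>
  <value>$HADOOP_HOME/share/hadoop/mapreduce/*,$HADOOP_HOME/share/hadoop/mapreduce/lib/*</value>
</property>
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>m1:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>m1:19888</value>
</property>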
Configuring the hadoop-env.sh, yarn-env.sh, and mapred-env.sh Scripts

It is enough to set the JAVA_HOME variable in each of these scripts, as shown below:

export JAVA_HOME=/usr/java/jdk1.6.0_45/

Distributing the Program Files

On the master node m1, copy the configured program files to each of the slave nodes:

scp -r /home/shirdrn/cloud/programs/hadoop-2.2.0 shirdrn@s1:/home/shirdrn/cloud/programs/
scp -r /home/shirdrn/cloud/programs/hadoop-2.2.0 shirdrn@s2:/home/shirdrn/cloud/programs/
scp -r /home/shirdrn/cloud/programs/hadoop-2.2.0 shirdrn@s3:/home/shirdrn/cloud/programs/

With the configuration above in place, the HDFS cluster can be started.
To make sure nothing interferes with cluster startup, manually disable the firewall on every node first:

sudo service iptables stop
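The startup commands follow the standard Hadoop 2.2.0 sequence; a sketch, assuming $HADOOP_HOME/bin and $HADOOP_HOME/sbin are on the PATH: format the NameNode once on m1, then start HDFS and YARN:

# on m1: format the HDFS metadata directory (first start only)
hdfs namenode -format
# start the NameNode, SecondaryNameNode, and DataNodes
start-dfs.sh
# start the ResourceManager and NodeManagers
start-yarn.sh

Once the daemons are up, follow the log files on the respective nodes to verify that everything started cleanly: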
tail -100f /home/shirdrn/cloud/storage/hadoop-2.2.0/logs/hadoop-shirdrn-namenode-m1.log
tail -100f /home/shirdrn/cloud/storage/hadoop-2.2.0/logs/hadoop-shirdrn-secondarynamenode-m1.log
tail -100f /home/shirdrn/cloud/storage/hadoop-2.2.0/logs/hadoop-shirdrn-datanode-s1.log
tail -100f /home/shirdrn/cloud/storage/hadoop-2.2.0/logs/hadoop-shirdrn-datanode-s2.log
tail -100f /home/shirdrn/cloud/storage/hadoop-2.2.0/logs/hadoop-shirdrn-datanode-s3.log
tail -100f /home/shirdrn/cloud/storage/hadoop-2.2.0/logs/yarn-shirdrn-resourcemanager-m1.log
tail -100f /home/shirdrn/cloud/storage/hadoop-2.2.0/logs/yarn-shirdrn-nodemanager-s1.log
tail -100f /home/shirdrn/cloud/storage/hadoop-2.2.0/logs/yarn-shirdrn-nodemanager-s2.log
tail -100f /home/shirdrn/cloud/storage/hadoop-2.2.0/logs/yarn-shirdrn-nodemanager-s3.log
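As a quicker sanity check, you can also run jps on each node: judging by the log file names above, m1 should show NameNode, SecondaryNameNode, and ResourceManager, while each slave should show DataNode and NodeManager:

jps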

The NodeManager runs on the slave nodes, and each node's resource status can be viewed through its web console, for example for node s1:

http://s1:8042/

Managing the JobHistory Server

Start the JobHistory Server so that information about the cluster's jobs can be viewed through a web console:

mr-jobhistory-daemon.sh start historyserver

It uses port 19888 by default.
Visit http://m1:19888/ to browse the job execution history.
To stop the JobHistory Server, run:

mr-jobhistory-daemon.sh stop historyserver

Cluster Verification

We verify the cluster with the WordCount example that ships with Hadoop.
First create the data directory in HDFS:

hadoop fs -mkdir -p /data/wordcount

The directory /data/wordcount holds the input files for the WordCount example; the results of the MapReduce job are written to /output/wordcount.
Upload some local files to HDFS:

hadoop fs -put /home/shirdrn/cloud/programs/hadoop-2.2.0/etc/hadoop/*.xml /data/wordcount/

To check the uploaded files, run:

hadoop fs -ls /data/wordcount

Then submit the WordCount job:

hadoop jar /home/shirdrn/cloud/programs/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount /data/wordcount /output/wordcount

The console shows the job's progress:

[shirdrn@m1 hadoop-2.2.0]$ hadoop jar /home/shirdrn/cloud/programs/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount /data/wordcount /output/wordcount
13/12/25 22:38:02 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13/12/25 22:38:03 INFO client.RMProxy: Connecting to ResourceManager at m1/10.95.3.48:8032
13/12/25 22:38:04 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
13/12/25 22:38:04 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
13/12/25 22:38:04 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
13/12/25 22:38:04 INFO Configuration.deprecation: mapreduce.combine.class is deprecated. Instead, use mapreduce.job.combine.class
13/12/25 22:38:04 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
13/12/25 22:38:04 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
13/12/25 22:38:04 INFO Configuration.deprecation: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
13/12/25 22:38:04 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
13/12/25 22:38:04 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
13/12/25 22:38:04 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
13/12/25 22:38:04 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
13/12/25 22:38:04 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
13/12/25 22:38:04 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1388039619930_0002
13/12/25 22:38:05 INFO impl.YarnClientImpl: Submitted application application_1388039619930_0002 to ResourceManager at m1/10.95.3.48:8032
13/12/25 22:38:14 INFO mapreduce.Job: Job job_1388039619930_0002 running in uber mode : false
13/12/25 22:38:58 INFO mapreduce.Job: Job job_1388039619930_0002 completed successfully
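When the job finishes, you can read the result file directly; a standard check, assuming the default single reducer (whose output file is part-r-00000):

hadoop fs -cat /output/wordcount/part-r-00000 | head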

Log in to the web console at http://m1:8088/ to see the job's record.
This confirms that HDFS can store data and that the YARN cluster can run MapReduce jobs.

Problems and Summary

Default Configuration You Need to Know

In Hadoop 2.2.0, the YARN framework has many default parameter values, and on machines with limited resources you may need to override them so that jobs can run.
The NodeManager and ResourceManager are configured in yarn-site.xml, while the parameters used when running MapReduce jobs are configured in mapred-site.xml.
The relevant parameters and their defaults include the following:


- mapreduce.framework.name (mapred-site.xml): one of local, classic, or yarn; if it is not set to yarn, jobs do not use the YARN cluster for resource allocation.
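For example, to make MapReduce jobs actually run on YARN, mapred-site.xml needs the following standard setting (shown here as an illustration):

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

Another default worth knowing on small machines is yarn.nodemanager.resource.memory-mb, which defaults to 8192 MB; on 2 GB slaves like the ones here it must be lowered, or containers will be promised more memory than the machine actually has.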
Exception: java.io.IOException: Bad connect ack with firstBadLink as 10.95.3.66:50010

The detailed exception output is shown below:

[shirdrn@m1 hadoop-2.2.0]$ hadoop fs -put /home/shirdrn/cloud/programs/hadoop-2.2.0/etc/hadoop/*.xml /data/wordcount/
13/12/25 21:29:45 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
java.io.IOException: Bad connect ack with firstBadLink as 10.95.3.66:50010
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1166)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
13/12/25 21:29:46 INFO hdfs.DFSClient: Abandoning BP-1906424073-10.95.3.48-1388035628061:blk_1073741825_1001
java.io.IOException: Bad connect ack with firstBadLink as 10.95.3.66:50010
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1166)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
13/12/25 21:29:46 INFO hdfs.DFSClient: Abandoning BP-1906424073-10.95.3.48-1388035628061:blk_1073741826_1002
        at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1305)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1128)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
13/12/25 21:29:46 INFO hdfs.DFSClient: Abandoning BP-1906424073-10.95.3.48-1388035628061:blk_1073741828_1004
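The firstBadLink address identifies the DataNode the client failed to write to (port 50010 is the DataNode data-transfer port). A common cause, and a likely one given the setup above, is that the firewall was still running on that DataNode; stopping iptables on every slave as described earlier and retrying the upload usually resolves it.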