This article walks through a NameNode federation experiment on Hadoop 2. The procedure is simple and practical, so if the topic interests you, follow along and try it yourself.
The experiment uses Hadoop 2.5.2 on five virtual machines, all running CentOS 6.6. The VMs' IP addresses and hostnames are:
192.168.63.171 node1.zhch
192.168.63.172 node2.zhch
192.168.63.173 node3.zhch
192.168.63.174 node4.zhch
192.168.63.175 node5.zhch
Passwordless SSH, firewall, and JDK setup are not covered again here (a minimal SSH sketch follows for reference). The roles are assigned as follows: node1 and node2 are NameNode nodes, and node3, node4, and node5 are DataNode nodes.
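For completeness, here is a minimal sketch of the passwordless SSH setup, assuming the yyl user exists on every node and OpenSSH's ssh-copy-id is available (as it is on CentOS 6.6):
## Generate a key pair on node1, then push the public key to all five nodes
[yyl@node1 ~]$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
[yyl@node1 ~]$ for h in node1.zhch node2.zhch node3.zhch node4.zhch node5.zhch; do ssh-copy-id yyl@$h; done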
The procedure is largely the same as building an ordinary Hadoop cluster; the main difference is in the hdfs-site.xml configuration file, while everything else in the installation and configuration is essentially identical.
1. Configuring Hadoop
## Unpack
[yyl@node1 program]$ tar -zxf hadoop-2.5.2.tar.gz
## Create the metadata, data, and tmp directories
[yyl@node1 program]$ mkdir hadoop-2.5.2/name
[yyl@node1 program]$ mkdir hadoop-2.5.2/data
[yyl@node1 program]$ mkdir hadoop-2.5.2/tmp
## Configure hadoop-env.sh
[yyl@node1 program]$ cd hadoop-2.5.2/etc/hadoop/
[yyl@node1 hadoop]$ vim hadoop-env.sh
export JAVA_HOME=/usr/lib/java/jdk1.7.0_80
## Configure yarn-env.sh
[yyl@node1 hadoop]$ vim yarn-env.sh
export JAVA_HOME=/usr/lib/java/jdk1.7.0_80
## Configure slaves
[yyl@node1 hadoop]$ vim slaves
node3.zhch
node4.zhch
node5.zhch
## Configure core-site.xml
[yyl@node1 hadoop]$ vim core-site.xml
<property><name>fs.defaultFS</name><value>hdfs://node1.zhch:9000</value></property>
<property><name>io.file.buffer.size</name><value>131072</value></property>
<property><name>hadoop.tmp.dir</name><value>file:/home/yyl/program/hadoop-2.5.2/tmp</value></property>
<property><name>hadoop.proxyuser.hduser.hosts</name><value>*</value></property>
<property><name>hadoop.proxyuser.hduser.groups</name><value>*</value></property>
## Configure hdfs-site.xml (this is where federation is defined: two nameservices, ns1 and ns2)
[yyl@node1 hadoop]$ vim hdfs-site.xml
<property><name>dfs.namenode.name.dir</name><value>file:/home/yyl/program/hadoop-2.5.2/name</value></property>
<property><name>dfs.datanode.data.dir</name><value>file:/home/yyl/program/hadoop-2.5.2/data</value></property>
<property><name>dfs.replication</name><value>1</value></property>
<property><name>dfs.webhdfs.enabled</name><value>true</value></property>
<property><name>dfs.permissions</name><value>false</value></property>
<property><name>dfs.nameservices</name><value>ns1,ns2</value></property>
<property><name>dfs.namenode.rpc-address.ns1</name><value>node1.zhch:9000</value></property>
<property><name>dfs.namenode.http-address.ns1</name><value>node1.zhch:50070</value></property>
<property><name>dfs.namenode.rpc-address.ns2</name><value>node2.zhch:9000</value></property>
<property><name>dfs.namenode.http-address.ns2</name><value>node2.zhch:50070</value></property>
## Configure mapred-site.xml
[yyl@node1 hadoop]$ cp mapred-site.xml.template mapred-site.xml
[yyl@node1 hadoop]$ vim mapred-site.xml
<property><name>mapreduce.framework.name</name><value>yarn</value></property>
<property><name>mapreduce.jobhistory.address</name><value>node1.zhch:10020</value></property>
<property><name>mapreduce.jobhistory.webapp.address</name><value>node1.zhch:19888</value></property>
## Configure yarn-site.xml
[yyl@node1 hadoop]$ vim yarn-site.xml
<property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
<property><name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name><value>org.apache.hadoop.mapred.ShuffleHandler</value></property>
<property><name>yarn.resourcemanager.address</name><value>node1.zhch:8032</value></property>
<property><name>yarn.resourcemanager.scheduler.address</name><value>node1.zhch:8030</value></property>
<property><name>yarn.resourcemanager.resource-tracker.address</name><value>node1.zhch:8031</value></property>
<property><name>yarn.resourcemanager.admin.address</name><value>node1.zhch:8033</value></property>
<property><name>yarn.resourcemanager.webapp.address</name><value>node1.zhch:8088</value></property>
## Distribute to the other nodes
[yyl@node1 hadoop]$ cd /home/yyl/program/
[yyl@node1 program]$ scp -rp hadoop-2.5.2 yyl@node2.zhch:/home/yyl/program/
[yyl@node1 program]$ scp -rp hadoop-2.5.2 yyl@node3.zhch:/home/yyl/program/
[yyl@node1 program]$ scp -rp hadoop-2.5.2 yyl@node4.zhch:/home/yyl/program/
[yyl@node1 program]$ scp -rp hadoop-2.5.2 yyl@node5.zhch:/home/yyl/program/
## Set the Hadoop environment variables on every node
[yyl@node1 ~]$ vim .bash_profile
export HADOOP_PREFIX=/home/yyl/program/hadoop-2.5.2
export HADOOP_COMMON_HOME=$HADOOP_PREFIX
export HADOOP_HDFS_HOME=$HADOOP_PREFIX
export HADOOP_MAPRED_HOME=$HADOOP_PREFIX
export HADOOP_YARN_HOME=$HADOOP_PREFIX
export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
export PATH=$PATH:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin
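Before formatting anything, it is worth a quick sanity check that the federation settings are being picked up. A sketch, assuming the .bash_profile above has been sourced:
## Should print node1.zhch and node2.zhch, resolved from the two nameservices
[yyl@node1 ~]$ hdfs getconf -namenodes
## Should print ns1,ns2
[yyl@node1 ~]$ hdfs getconf -confKey dfs.nameservices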
2. Formatting and Starting the NameNodes
## Format on namenode1
[yyl@node1 ~]$ hdfs namenode -format -clusterId c1
## Format on namenode2 (note: the same clusterId)
[yyl@node2 ~]$ hdfs namenode -format -clusterId c1
## Start the NameNode on namenode1
[yyl@node1 ~]$ hadoop-daemon.sh start namenode
starting namenode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-namenode-node1.zhch.out
[yyl@node1 ~]$ jps
1177 NameNode
1240 Jps
## Start the NameNode on namenode2
[yyl@node2 ~]$ hadoop-daemon.sh start namenode
starting namenode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-namenode-node2.zhch.out
[yyl@node2 ~]$ jps
1508 Jps
1445 NameNode
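Passing the same -clusterId to both format commands is what joins the two NameNodes into a single federated cluster. To confirm it, the cluster ID is recorded in the VERSION file under each name directory (the path comes from dfs.namenode.name.dir above); both nodes should show clusterID=c1:
[yyl@node1 ~]$ grep clusterID /home/yyl/program/hadoop-2.5.2/name/current/VERSION
[yyl@node2 ~]$ grep clusterID /home/yyl/program/hadoop-2.5.2/name/current/VERSION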
3. Checking HDFS Federation
Open the web UI of each NameNode in a browser; each should show its own independent namespace within the same cluster:
http://node1.zhch:50070/
http://node2.zhch:50070/
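The same information is also available from the command line; a sketch (note that at this point the reports will show no live DataNodes, since those are started in the next step):
[yyl@node1 ~]$ hdfs dfsadmin -fs hdfs://node1.zhch:9000 -report
[yyl@node1 ~]$ hdfs dfsadmin -fs hdfs://node2.zhch:9000 -report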
4. Starting the DataNodes and YARN
[yyl@node1 ~]$ hadoop-daemons.sh start datanode
node4.zhch: starting datanode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-datanode-node4.zhch.out
node5.zhch: starting datanode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-datanode-node5.zhch.out
node3.zhch: starting datanode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-datanode-node3.zhch.out
[yyl@node1 ~]$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/yyl/program/hadoop-2.5.2/logs/yarn-yyl-resourcemanager-node1.zhch.out
node5.zhch: starting nodemanager, logging to /home/yyl/program/hadoop-2.5.2/logs/yarn-yyl-nodemanager-node5.zhch.out
node3.zhch: starting nodemanager, logging to /home/yyl/program/hadoop-2.5.2/logs/yarn-yyl-nodemanager-node3.zhch.out
node4.zhch: starting nodemanager, logging to /home/yyl/program/hadoop-2.5.2/logs/yarn-yyl-nodemanager-node4.zhch.out
[yyl@node1 ~]$ jps
1402 Jps
1177 NameNode
1333 ResourceManager
[yyl@node2 ~]$ jps
1445 NameNode
1539 Jps
[yyl@node3 ~]$ jps
1214 NodeManager
1166 DataNode
1256 Jps
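With everything running, each namespace can be exercised independently. A small smoke test (the /test directory name is only illustrative): a directory created through ns1 should be visible via node1.zhch but not via node2.zhch, since the two NameNodes manage separate namespaces over the same pool of DataNodes.
[yyl@node1 ~]$ hdfs dfs -mkdir hdfs://node1.zhch:9000/test
[yyl@node1 ~]$ hdfs dfs -ls hdfs://node1.zhch:9000/
[yyl@node1 ~]$ hdfs dfs -ls hdfs://node2.zhch:9000/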
For subsequent startups there is no need to repeat the steps above; the cluster can be brought up directly with:
sh $HADOOP_PREFIX/sbin/start-dfs.sh
sh $HADOOP_PREFIX/sbin/start-yarn.sh
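The matching stop scripts live in the same sbin directory; to shut the cluster down:
sh $HADOOP_PREFIX/sbin/stop-yarn.sh
sh $HADOOP_PREFIX/sbin/stop-dfs.sh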
That wraps up the Hadoop 2 NameNode federation experiment. The best way to deepen your understanding is to try the setup yourself.