This article walks through the detailed installation steps for Hive. Many people run into questions the first time they install Hive, so the simple, tried-and-tested procedure below is collected here in the hope that it clears things up. Follow along step by step.
1. Extract the archive
[root@hadoop0 opt]# tar -zxvf hive-0.9.0.tar.gz
2. Rename the directory
[root@hadoop0 opt]# mv hive-0.9.0 hive
3. Configure the environment variables: edit the global /etc/profile file so that /opt/hive/bin is on the PATH
JAVA_HOME=/opt/jdk1.6.0_24
HADOOP_HOME=/opt/hadoop
HBASE_HOME=/opt/hbase
HIVE_HOME=/opt/hive
PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin:$HIVE_HOME/bin:$PATH
export JAVA_HOME HADOOP_HOME HBASE_HOME HIVE_HOME PATH
[root@hadoop0 bin]# su -
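--After reloading the login environment with su - (or running source /etc/profile), a quick sanity check helps; a minimal sketch, assuming the paths above, where the two commands should print /opt/hive and /opt/hive/bin/hive respectively:
[root@hadoop0 ~]# echo $HIVE_HOME
[root@hadoop0 ~]# which hive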
4. Run Hive to check that the installation succeeded
[root@hadoop0 ~]# hive
WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
Logging initialized using configuration in jar:file:/opt/hive/lib/hive-common-0.9.0.jar!/hive-log4j.properties
Hive history file=/tmp/root/hive_job_log_root_201509250619_148272494.txt
hive> show tables;
FAILED: Error in metadata: MetaException(message:Got exception: java.net.ConnectException Call to hadoop0/192.168.46.129:9000 failed on connection exception: java.net.ConnectException: Connection refused)
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
--Solution: Hive relies on HDFS to store its data, so make sure Hadoop has been started
[root@hadoop0 ~]# start-all.sh
Warning: $HADOOP_HOME is deprecated.
starting namenode, logging to /opt/hadoop/libexec/../logs/hadoop-root-namenode-hadoop0.out
localhost: starting datanode, logging to /opt/hadoop/libexec/../logs/hadoop-root-datanode-hadoop0.out
localhost: starting secondarynamenode, logging to /opt/hadoop/libexec/../logs/hadoop-root-secondarynamenode-hadoop0.out
starting jobtracker, logging to /opt/hadoop/libexec/../logs/hadoop-root-jobtracker-hadoop0.out
localhost: starting tasktracker, logging to /opt/hadoop/libexec/../logs/hadoop-root-tasktracker-hadoop0.out
--At this point the most basic Hive environment is configured
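--Optional check (a sketch; the process names are those of a standard Hadoop 1.x pseudo-distributed setup started with start-all.sh): jps should now list NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker
[root@hadoop0 ~]# jps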
5. Create a table
hive> show tables;
OK
Time taken: 5.619 seconds
hive> create table stu(name String,age int);
FAILED: Error in metadata: MetaException(message:Got exception: org.apache.hadoop.ipc.RemoteException
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /user/hive/warehouse/stu.
Name node is in safe mode.
The reported blocks 18 has reached the threshold 0.9990 of total blocks 17. Safe mode will be turned off automatically in 15 seconds.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:2204)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2178)
at org.apache.hadoop.hdfs.server.namenode.NameNode.mkdirs(NameNode.java:857)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
)
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
--Solution: since the relevant configuration parameter has not been set, create the directory manually (note that, as the message above says, the NameNode also leaves safe mode on its own after a few seconds)
[root@hadoop0 ~]# mkdir -p /user/hive/warehouse/stu
hive> create table stu(name String,age int);
OK
Time taken: 0.229 seconds
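--If the NameNode were to stay in safe mode for longer, Hadoop 1.x also lets you inspect and, if really necessary, leave it by hand; a hedged sketch (forcing safe mode off is not something to do lightly on a real cluster):
[root@hadoop0 ~]# hadoop dfsadmin -safemode get
[root@hadoop0 ~]# hadoop dfsadmin -safemode leave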
6. Insert data. This version of Hive does not support the INSERT ... VALUES statement:
hive> insert into stu values('MengMeng',24);
FAILED: Parse Error: line 1:12 mismatched input 'stu' expecting TABLE near 'into' in insert clause
hive> show tables;
OK
stu
Time taken: 0.078 seconds
hive> desc stu;
OK
name string
age int
Time taken: 0.255 seconds
--Solution: Hive does not support the statement above; load the data with LOAD DATA instead
hive> LOAD DATA LOCAL INPATH '/opt/stu.txt' OVERWRITE INTO TABLE stu;
Copying data from file:/opt/stu.txt
Copying file: file:/opt/stu.txt
Loading data to table default.stu
Deleted hdfs://hadoop0:9000/user/hive/warehouse/stu
OK
Time taken: 0.643 seconds
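--For reference, /opt/stu.txt is assumed here to be a small plain-text file with one record per line, matching the rows returned in step 7; the exact contents and delimiter are an assumption, not taken from the original environment:
[root@hadoop0 ~]# cat /opt/stu.txt
JieJie 26
MM 24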
7. Query the data that was just loaded
hive> select name ,age from stu;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201509250620_0001, Tracking URL = http://hadoop0:50030/jobdetails.jsp?jobid=job_201509250620_0001
Kill Command = /opt/hadoop/libexec/../bin/hadoop job -Dmapred.job.tracker=hadoop0:9001 -kill job_201509250620_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2015-09-25 06:37:55,535 Stage-1 map = 0%, reduce = 0%
2015-09-25 06:37:58,565 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.59 sec
2015-09-25 06:37:59,595 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.59 sec
2015-09-25 06:38:00,647 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 0.59 sec
MapReduce Total cumulative CPU time: 590 msec
Ended Job = job_201509250620_0001
MapReduce Jobs Launched:
Job 0: Map: 1 Cumulative CPU: 0.59 sec HDFS Read: 221 HDFS Write: 22 SUCCESS
Total MapReduce CPU Time Spent: 590 msec
OK
--The query results are displayed:
JieJie 26 NULL
MM 24 NULL
Time taken: 12.812 seconds
Question: why is there a NULL value? Something to investigate next time.
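--One likely explanation (an assumption, not verified here): the table was created without a ROW FORMAT clause, so Hive fell back to its default Ctrl-A ('\001') field delimiter, could not split the space- or tab-separated lines in /opt/stu.txt, put each whole line into the name column and returned NULL for age. A hedged sketch of a delimiter-aware definition, using a hypothetical table stu2 and assuming tab-separated input:
hive> create table stu2(name string, age int) row format delimited fields terminated by '\t';
hive> LOAD DATA LOCAL INPATH '/opt/stu.txt' OVERWRITE INTO TABLE stu2;
hive> select name, age from stu2;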
This concludes the walkthrough of the detailed Hive installation steps; hopefully it has cleared up any questions you had. Pairing theory with hands-on practice is the best way to learn, so go give it a try. For more articles like this, keep following the 创新互联 website.