
Kafka Configuration and Deployment on CDH (Detailed, Verified Working)

1. Download

http://archive.cloudera.com/kafka/parcels/2.2.0/


wget http://archive.cloudera.com/kafka/parcels/2.2.0/KAFKA-2.2.0-1.2.2.0.p0.68-el6.parcel
wget http://archive.cloudera.com/kafka/parcels/2.2.0/KAFKA-2.2.0-1.2.2.0.p0.68-el6.parcel.sha1

2. Verify the checksum

[hadoop@hadoop003 softwares]$ sha1sum KAFKA-2.2.0-1.2.2.0.p0.68-el6.parcel
359509e028ae91a2a082adfad5f64596b63ea750  KAFKA-2.2.0-1.2.2.0.p0.68-el6.parcel
[hadoop@hadoop003 softwares]$ cat KAFKA-2.2.0-1.2.2.0.p0.68-el6.parcel.sha1
359509e028ae91a2a082adfad5f64596b63ea750

The two digests match, so the file was not corrupted during download and is safe to use.
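The manual comparison of the two digests can also be scripted. A minimal sketch (the helper name `verify_sha1` is ours, not part of the distribution):

```shell
#!/usr/bin/env bash
# verify_sha1 FILE: compares sha1sum(FILE) against the digest stored in
# FILE.sha1 (the file published alongside the parcel) and prints OK/MISMATCH.
verify_sha1() {
    local file=$1
    local computed expected
    computed=$(sha1sum "$file" | awk '{print $1}')
    expected=$(awk '{print $1}' "$file.sha1")
    if [ "$computed" = "$expected" ]; then
        echo "OK"
    else
        echo "MISMATCH"
        return 1
    fi
}

# Usage against the downloaded parcel:
# verify_sha1 KAFKA-2.2.0-1.2.2.0.p0.68-el6.parcel
```

The non-zero return code on mismatch makes this usable as a gate in a provisioning script.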

3. Extract and create a symlink

[hadoop@hadoop003 softwares]$ tar -zxf  KAFKA-2.2.0-1.2.2.0.p0.68-el6.parcel -C ~/app
[hadoop@hadoop003 app]$ ln -s /home/hadoop/app/KAFKA-2.2.0-1.2.2.0.p0.68/ /home/hadoop/app/kafka

4. Key directories

[hadoop@hadoop003 kafka]$ pwd
/home/hadoop/app/kafka
[hadoop@hadoop003 kafka]$ ll
total 20
drwxr-xr-x 2 hadoop hadoop 4096 Jul  7  2017 bin
drwxr-xr-x 5 hadoop hadoop 4096 Jul  7  2017 etc
drwxr-xr-x 3 hadoop hadoop 4096 Jul  7  2017 lib
drwxr-xr-x 2 hadoop hadoop 4096 Jul  7  2017 meta
###     Kafka configuration directory; this is where we edit the configuration files
[hadoop@hadoop003 kafka]$ ll etc/kafka/conf.dist/
total 48
-rw-r--r-- 1 hadoop hadoop  906 Jul  7  2017 connect-console-sink.properties
-rw-r--r-- 1 hadoop hadoop  909 Jul  7  2017 connect-console-source.properties
-rw-r--r-- 1 hadoop hadoop 2760 Jul  7  2017 connect-distributed.properties
-rw-r--r-- 1 hadoop hadoop  883 Jul  7  2017 connect-file-sink.properties
-rw-r--r-- 1 hadoop hadoop  881 Jul  7  2017 connect-file-source.properties
-rw-r--r-- 1 hadoop hadoop 1074 Jul  7  2017 connect-log4j.properties
-rw-r--r-- 1 hadoop hadoop 2061 Jul  7  2017 connect-standalone.properties
-rw-r--r-- 1 hadoop hadoop 4369 Jul  7  2017 log4j.properties
-rw-r--r-- 1 hadoop hadoop 5679 Jun  1 01:24 server.properties
-rw-r--r-- 1 hadoop hadoop 1032 Jul  7  2017 tools-log4j.properties

###     Kafka program directory
[hadoop@hadoop003 kafka]$ ll lib/kafka/
total 112
drwxr-xr-x 2 hadoop hadoop  4096 Jul  7  2017 bin
drwxr-xr-x 2 hadoop hadoop  4096 Jul  7  2017 cloudera
lrwxrwxrwx 1 hadoop hadoop    43 Jun  1 02:11 config -> /etc/kafka/conf  # shown in red: the link is broken
-rw-rw-r-- 1 hadoop hadoop 48428 Jun  1 02:17 KAFKA-2.2.0-1.2.2.0.p0.68-el6.parcel
drwxr-xr-x 2 hadoop hadoop 12288 Jul  7  2017 libs
-rwxr-xr-x 1 hadoop hadoop 28824 Jul  7  2017 LICENSE
drwxrwxr-x 2 hadoop hadoop  4096 Jun  1 01:39 logs
-rwxr-xr-x 1 hadoop hadoop   336 Jul  7  2017 NOTICE
drwxr-xr-x 2 hadoop hadoop  4096 Jul  7  2017 site-docs
### By default the config symlink points at the gateway (Cloudera Manager client) configuration. Since we are not using CM, /etc/kafka/conf was never generated, so the link is broken and ls shows it flashing red.
###     The bin directory holds Kafka's scripts, e.g. the server start/stop scripts and the console consumer/producer scripts.

5. Edit the configuration file

# Step 1:
[hadoop@hadoop003 kafka]$ cd etc/kafka/conf.dist

# Step 2:

vim server.properties

# Step 3: modify the following key parameters

broker.id=0  # unique identifier for this broker

log.dirs=/home/hadoop/app/kafka/logs  # where Kafka stores its data (log segments)

log.retention.hours=168  # data retention period (168 hours = 7 days)

zookeeper.connect=hadoop001:2181,hadoop002:2181,hadoop003:2181/kafka
# ZooKeeper connection string; Kafka keeps its metadata under the /kafka chroot
delete.topic.enable=true  # allow created topics to be deleted
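The same edits can be applied non-interactively instead of via vim. A sketch, assuming the stock conf.dist/server.properties already contains the first four keys (delete.topic.enable is appended if absent); the helper name `apply_kafka_conf` is ours:

```shell
#!/usr/bin/env bash
# apply_kafka_conf FILE: rewrites the parameters from step 3 in place.
apply_kafka_conf() {
    local conf=$1
    sed -i \
        -e 's|^broker.id=.*|broker.id=0|' \
        -e 's|^log.dirs=.*|log.dirs=/home/hadoop/app/kafka/logs|' \
        -e 's|^log.retention.hours=.*|log.retention.hours=168|' \
        -e 's|^zookeeper.connect=.*|zookeeper.connect=hadoop001:2181,hadoop002:2181,hadoop003:2181/kafka|' \
        "$conf"
    # delete.topic.enable may not be present in the stock file; append if missing.
    grep -q '^delete.topic.enable=' "$conf" || echo 'delete.topic.enable=true' >> "$conf"
}

# Usage:
# apply_kafka_conf etc/kafka/conf.dist/server.properties
```

This is handy when rolling the change out to hadoop001/002/003, where only broker.id differs per host.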

6. Start Kafka

[hadoop@hadoop003 kafka]$ lib/kafka/bin/kafka-server-start.sh /home/hadoop/app/kafka/etc/kafka/conf.dist/server.properties 
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/app/KAFKA-2.2.0-1.2.2.0.p0.68/lib/kafka/libs/slf4j-log4j12-1.7.21.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/app/KAFKA-2.2.0-1.2.2.0.p0.68/lib/kafka/libs/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
log4j:ERROR Could not read configuration file from URL [file:lib/kafka/bin/../config/log4j.properties].
java.io.FileNotFoundException: lib/kafka/bin/../config/log4j.properties (No such file or directory)
    at java.io.FileInputStream.open0(Native Method)
    at java.io.FileInputStream.open(FileInputStream.java:195)
    at java.io.FileInputStream.<init>(FileInputStream.java:138)
    at java.io.FileInputStream.<init>(FileInputStream.java:93)
    at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
    at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)
    at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:557)
    at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
    at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
    at org.slf4j.impl.Log4jLoggerFactory.<init>(Log4jLoggerFactory.java:66)
    at org.slf4j.impl.StaticLoggerBinder.<init>(StaticLoggerBinder.java:72)
    at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:45)
    at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
    at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
    at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412)
    at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357)
    at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
    at org.apache.kafka.common.utils.Utils.<clinit>(Utils.java:59)
    at kafka.Kafka$.getPropsFromArgs(Kafka.scala:41)
    at com.cloudera.kafka.wrap.Kafka$.main(Kafka.scala:72)
    at com.cloudera.kafka.wrap.Kafka.main(Kafka.scala)
log4j:ERROR Ignoring configuration file [file:lib/kafka/bin/../config/log4j.properties].
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
log4j:WARN No appenders could be found for logger (kafka.server.KafkaConfig).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

Startup hits an error: the log4j configuration file cannot be found:

java.io.FileNotFoundException: lib/kafka/bin/../config/log4j.properties

The cause is the broken symlink config -> /etc/kafka/conf noted in step 4, so repoint it at etc/kafka/conf.dist/:

[hadoop@hadoop003 kafka]$ rm lib/kafka/config
[hadoop@hadoop003 kafka]$ ln -s  /home/hadoop/app/kafka/etc/kafka/conf.dist/ /home/hadoop/app/kafka/lib/kafka/config
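Before restarting, it is worth confirming that the recreated link actually resolves, since a dangling symlink was the root cause here. A small sketch (`check_link` is a hypothetical helper, not a Kafka tool):

```shell
#!/usr/bin/env bash
# check_link LINK: succeeds only if LINK is a symlink AND its target exists.
# [ -L ] tests that it is a symlink; [ -e ] follows the link to the target.
check_link() {
    [ -L "$1" ] && [ -e "$1" ]
}

# Usage after recreating the link:
# check_link /home/hadoop/app/kafka/lib/kafka/config && echo "config link OK"
```

Had this check been run on the original config link, it would have failed and flagged the problem before the first startup attempt.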


Restart:

[hadoop@hadoop003 kafka]$ nohup kafka-server-start.sh /home/hadoop/app/kafka/etc/kafka/conf.dist/server.properties > /home/hadoop/app/kafka/server-logs/kafka-server.log 2>&1 & 

No more errors appear.


Source: http://bjjierui.cn/article/jjhics.html
