Flume Environment Deployment and Configuration in Detail, with a Full Set of Examples


  1. What is Flume?
  Flume is a real-time log collection system developed by Cloudera that has gained wide recognition and adoption in industry. The initial releases of Flume are now collectively referred to as Flume OG (original generation) and belonged to Cloudera. As Flume's functionality grew, the shortcomings of Flume OG became apparent: a bloated code base, poorly designed core components, and non-standard core configuration. In the final OG release, 0.94.0, unstable log transport was a particularly serious problem. To address this, on October 22, 2011 Cloudera completed Flume-728, a milestone overhaul that rewrote the core components, core configuration, and code architecture; the reworked version is collectively called Flume NG (next generation). Another reason for the change was moving Flume under the Apache umbrella: Cloudera Flume was renamed Apache Flume.
 
Features of Flume:
  Flume is a distributed, reliable, and highly available system for collecting, aggregating, and transporting large volumes of log data. It supports custom data senders in the logging system for collecting data, and it can apply simple processing to the data and write it to a variety of receivers (such as text files, HDFS, or HBase).
  Flume's data flow is carried end to end by events. An Event is Flume's basic unit of data: it carries the log payload (as a byte array) together with header information. Events are produced from data outside the Agent; when a Source captures an event it applies a specific format and then pushes the event into one or more Channels. You can think of a Channel as a buffer that holds events until a Sink has finished processing them. The Sink is responsible for persisting the log or forwarding the event on to another Source.
 
Reliability of Flume:
  When a node fails, logs can be delivered to other nodes without being lost. Flume offers three levels of reliability guarantee, from strongest to weakest: end-to-end (the receiving agent first writes the event to disk and deletes it only after the data has been delivered successfully; if delivery fails, the event can be resent), Store on failure (the strategy also used by Scribe: when the receiver crashes, data is written locally and sent again once the receiver recovers), and Best effort (no acknowledgement after the data is sent to the receiver).
 
Recoverability of Flume:
  This again relies on the Channel. FileChannel is recommended: events are persisted in the local file system (at the cost of lower performance).
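 
  For reference, a minimal sketch of swapping the memory channel used in the examples below for a FileChannel; the checkpoint and data paths here are only illustrative, adjust them to your environment:

# persist events on local disk instead of in memory (paths are examples)
a1.channels.c1.type = file
a1.channels.c1.checkpointDir = /home/hadoop/flume-1.5.0-bin/file-channel/checkpoint
a1.channels.c1.dataDirs = /home/hadoop/flume-1.5.0-bin/file-channel/data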
 
  Some core Flume concepts:
Agent: runs Flume inside a JVM. Each machine runs one agent, but a single agent can contain multiple sources and sinks.
Client: produces the data; runs in its own thread.
Source: collects data from the Client and passes it to a Channel.
Sink: collects data from the Channel; runs in its own thread.
Channel: connects sources and sinks; it behaves somewhat like a queue.
Events: can be log records, Avro objects, and so on.
 
  Flume's smallest independently running unit is the agent; one agent is one JVM. A single agent is built from three main components: Source, Sink, and Channel.

  Notably, Flume ships with a large number of built-in Source, Channel, and Sink types, and the different types can be combined freely. How they are combined is driven entirely by the user's configuration file, which makes Flume very flexible. For example, a Channel can hold events in memory or persist them to local disk, and a Sink can write logs to HDFS, HBase, or even to another Source. Flume also lets users build multi-hop flows: multiple agents can work together, with support for fan-in, fan-out, contextual routing, and backup routes, which is where Flume really shines.

  2. Where is the official Flume website?
  http://flume.apache.org/

  3. Where to download it?

  http://www.apache.org/dyn/closer.cgi/flume/1.5.0/apache-flume-1.5.0-bin.tar.gz

  4. How to install it?
    1) Extract the downloaded Flume package into the /home/hadoop directory, and you are already 50% done. Easy, right? :)
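
    For example, assuming the apache-flume-1.5.0-bin.tar.gz tarball from step 3 has been downloaded to /home/hadoop, a minimal sketch of this step (the rename simply matches the /home/hadoop/flume-1.5.0-bin path used throughout the examples below):

# extract and rename to match the paths used in this article
tar -zxvf /home/hadoop/apache-flume-1.5.0-bin.tar.gz -C /home/hadoop
mv /home/hadoop/apache-flume-1.5.0-bin /home/hadoop/flume-1.5.0-bin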

    2) Edit the flume-env.sh configuration file; the main change is setting the JAVA_HOME variable.

root@m1:/home/hadoop/flume-1.5.0-bin# cp conf/flume-env.sh.template conf/flume-env.sh
root@m1:/home/hadoop/flume-1.5.0-bin# vi conf/flume-env.sh
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# If this file is placed at FLUME_CONF_DIR/flume-env.sh, it will be sourced
# during Flume startup.

# Enviroment variables can be set here.

JAVA_HOME=/usr/lib/jvm/java-7-oracle

# Give Flume more memory and pre-allocate, enable remote monitoring via JMX
#JAVA_OPTS="-Xms100m -Xmx200m -Dcom.sun.management.jmxremote"

# Note that the Flume conf directory is always included in the classpath.
#FLUME_CLASSPATH=""

    3) Verify the installation

root@m1:/home/hadoop# /home/hadoop/flume-1.5.0-bin/bin/flume-ng version
Flume 1.5.0
Source code repository: https://git-wip-us.apache.org/repos/asf/flume.git
Revision: 8633220df808c4cd0c13d1cf0320454a94f1ea97
Compiled by hshreedharan on Wed May 7 14:49:18 PDT 2014
From source with checksum a01fe726e4380ba0c9f7a7d222db961f
root@m1:/home/hadoop#

    If you see the output above, the installation was successful.
 
 
  5. Flume examples
    1) Example 1: Avro
    An Avro client can send a given file to Flume; the Avro source uses the Avro RPC mechanism.
      a) Create the agent configuration file

root@m1:/home/hadoop# vi /home/hadoop/flume-1.5.0-bin/conf/avro.conf

a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 4141

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

      b) Start Flume agent a1

root@m1:/home/hadoop# /home/hadoop/flume-1.5.0-bin/bin/flume-ng agent -c . -f /home/hadoop/flume-1.5.0-bin/conf/avro.conf -n a1 -Dflume.root.logger=INFO,console

      c) Create the file to send

root@m1:/home/hadoop# echo "hello world" > /home/hadoop/flume-1.5.0-bin/log.00

      d) Send the file with avro-client

root@m1:/home/hadoop# /home/hadoop/flume-1.5.0-bin/bin/flume-ng avro-client -c . -H m1 -p 4141 -F /home/hadoop/flume-1.5.0-bin/log.00

      e) In the console on m1 you can see the following output; note the last line:

root@m1:/home/hadoop/flume-1.5.0-bin/conf# /home/hadoop/flume-1.5.0-bin/bin/flume-ng agent -c . -f /home/hadoop/flume-1.5.0-bin/conf/avro.conf -n a1 -Dflume.root.logger=INFO,console
Info: Sourcing environment configuration script /home/hadoop/flume-1.5.0-bin/conf/flume-env.sh
Info: Including Hadoop libraries found via (/home/hadoop/hadoop-2.2.0/bin/hadoop) for HDFS access
Info: Excluding /home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar from classpath
...
2014-08-10 10:43:25,112 (New I/O worker #1) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:171)] [id: 0x92464c4f, /192.168.1.50:59850 :> /192.168.1.50:4141] UNBOUND
2014-08-10 10:43:25,112 (New I/O worker #1) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:171)] [id: 0x92464c4f, /192.168.1.50:59850 :> /192.168.1.50:4141] CLOSED
2014-08-10 10:43:25,112 (New I/O worker #1) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.channelClosed(NettyServer.java:209)] Connection to /192.168.1.50:59850 disconnected.
2014-08-10 10:43:26,718 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 68 65 6C 6C 6F 20 77 6F 72 6C 64       hello world }

    2) Example 2: Spool
    The spooldir source watches a configured directory for newly added files and reads the data out of them. Two caveats:
    1) Files copied into the spool directory must not be opened and edited afterwards (see the sketch below for a safe way to drop files in).
    2) The spool directory must not contain subdirectories.
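
    A common way to satisfy the first caveat is to write the file somewhere else and then move it into the spool directory, since a rename within the same filesystem is atomic. A minimal sketch; the /tmp staging path is only an example:

# write the file outside the spool directory first, then move it in atomically
echo "spool test2" > /tmp/spool_text2.log
mv /tmp/spool_text2.log /home/hadoop/flume-1.5.0-bin/logs/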
      a) Create the agent configuration file

root@m1:/home/hadoop# vi /home/hadoop/flume-1.5.0-bin/conf/spool.conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = spooldir
a1.sources.r1.channels = c1
a1.sources.r1.spoolDir = /home/hadoop/flume-1.5.0-bin/logs
a1.sources.r1.fileHeader = true
# Describe the sink
a1.sinks.k1.type = logger
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

      b) Start Flume agent a1

root@m1:/home/hadoop# /home/hadoop/flume-1.5.0-bin/bin/flume-ng agent -c . -f /home/hadoop/flume-1.5.0-bin/conf/spool.conf -n a1 -Dflume.root.logger=INFO,console

      c) Add a file to the /home/hadoop/flume-1.5.0-bin/logs directory

root@m1:/home/hadoop# echo "spool test1" > /home/hadoop/flume-1.5.0-bin/logs/spool_text.log

      d) In the console on m1 you can see output like the following:

14/08/10 11:37:13 INFO source.SpoolDirectorySource: Spooling Directory Source runner has shutdown.
14/08/10 11:37:13 INFO source.SpoolDirectorySource: Spooling Directory Source runner has shutdown.
14/08/10 11:37:14 INFO avro.ReliableSpoolingFileEventReader: Preparing to move file /home/hadoop/flume-1.5.0-bin/logs/spool_text.log to /home/hadoop/flume-1.5.0-bin/logs/spool_text.log.COMPLETED
14/08/10 11:37:14 INFO source.SpoolDirectorySource: Spooling Directory Source runner has shutdown.
14/08/10 11:37:14 INFO source.SpoolDirectorySource: Spooling Directory Source runner has shutdown.
14/08/10 11:37:14 INFO sink.LoggerSink: Event: { headers:{file=/home/hadoop/flume-1.5.0-bin/logs/spool_text.log} body: 73 70 6F 6F 6C 20 74 65 73 74 31       spool test1 }
14/08/10 11:37:15 INFO source.SpoolDirectorySource: Spooling Directory Source runner has shutdown.
14/08/10 11:37:15 INFO source.SpoolDirectorySource: Spooling Directory Source runner has shutdown.
14/08/10 11:37:16 INFO source.SpoolDirectorySource: Spooling Directory Source runner has shutdown.
14/08/10 11:37:16 INFO source.SpoolDirectorySource: Spooling Directory Source runner has shutdown.
14/08/10 11:37:17 INFO source.SpoolDirectorySource: Spooling Directory Source runner has shutdown.

    3) Example 3: Exec
    The exec source runs a given command and takes its output as the data source. If you use the tail command, the file must contain enough data before you will see any output.
      a) Create the agent configuration file

root@m1:/home/hadoop# vi /home/hadoop/flume-1.5.0-bin/conf/exec_tail.conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.channels = c1
a1.sources.r1.command = tail -F /home/hadoop/flume-1.5.0-bin/log_exec_tail
# Describe the sink
a1.sinks.k1.type = logger
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
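
      Note that if the tail process dies, the exec source does not restart it by default. The exec source exposes restart-related settings for this; a sketch of what enabling them could look like (the values are only illustrative):

# restart the command if it exits, waiting 10 seconds between attempts
a1.sources.r1.restart = true
a1.sources.r1.restartThrottle = 10000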

      b) Start Flume agent a1

root@m1:/home/hadoop# /home/hadoop/flume-1.5.0-bin/bin/flume-ng agent -c . -f /home/hadoop/flume-1.5.0-bin/conf/exec_tail.conf -n a1 -Dflume.root.logger=INFO,console

      c) Generate plenty of content in the file

root@m1:/home/hadoop# for i in {1..100};do echo "exec tail$i" >> /home/hadoop/flume-1.5.0-bin/log_exec_tail;echo $i;sleep 0.1;done

      d) In the console on m1 you can see the following output:

2014-08-10 10:59:25,513 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 65 78 65 63 20 74 61 69 6C 20 74 65 73 74       exec tail test }
2014-08-10 10:59:34,535 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 65 78 65 63 20 74 61 69 6C 20 74 65 73 74       exec tail test }
2014-08-10 11:01:40,557 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 65 78 65 63 20 74 61 69 6C 31       exec tail1 }
2014-08-10 11:01:41,180 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 65 78 65 63 20 74 61 69 6C 32       exec tail2 }
2014-08-10 11:01:41,180 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 65 78 65 63 20 74 61 69 6C 33       exec tail3 }
2014-08-10 11:01:41,181 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 65 78 65 63 20 74 61 69 6C 34       exec tail4 }
2014-08-10 11:01:41,181 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 65 78 65 63 20 74 61 69 6C 35       exec tail5 }
2014-08-10 11:01:41,181 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 65 78 65 63 20 74 61 69 6C 36       exec tail6 }
....
....
....
2014-08-10 11:01:51,550 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 65 78 65 63 20 74 61 69 6C 39 36       exec tail96 }
2014-08-10 11:01:51,550 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 65 78 65 63 20 74 61 69 6C 39 37       exec tail97 }
2014-08-10 11:01:51,551 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 65 78 65 63 20 74 61 69 6C 39 38       exec tail98 }
2014-08-10 11:01:51,551 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 65 78 65 63 20 74 61 69 6C 39 39       exec tail99 }
2014-08-10 11:01:51,551 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 65 78 65 63 20 74 61 69 6C 31 30 30       exec tail100 }

    4) Example 4: Syslogtcp
    The syslogtcp source listens on a TCP port and uses it as the data source.
      a) Create the agent configuration file

root@m1:/home/hadoop# vi /home/hadoop/flume-1.5.0-bin/conf/syslog_tcp.conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = syslogtcp
a1.sources.r1.port = 5140
a1.sources.r1.host = localhost
a1.sources.r1.channels = c1
# Describe the sink
a1.sinks.k1.type = logger
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

      b) Start Flume agent a1

root@m1:/home/hadoop# /home/hadoop/flume-1.5.0-bin/bin/flume-ng agent -c . -f /home/hadoop/flume-1.5.0-bin/conf/syslog_tcp.conf -n a1 -Dflume.root.logger=INFO,console

      c) Generate a test syslog message

root@m1:/home/hadoop# echo "hello idoall.org syslog" | nc localhost 5140

      d) In the console on m1 you can see the following output:

14/08/10 11:41:45 INFO node.PollingPropertiesFileConfigurationProvider: Reloading configuration file:/home/hadoop/flume-1.5.0-bin/conf/syslog_tcp.conf
14/08/10 11:41:45 INFO conf.FlumeConfiguration: Added sinks: k1 Agent: a1
14/08/10 11:41:45 INFO conf.FlumeConfiguration: Processing:k1
14/08/10 11:41:45 INFO conf.FlumeConfiguration: Processing:k1
14/08/10 11:41:45 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration for agents: [a1]
14/08/10 11:41:45 INFO node.AbstractConfigurationProvider: Creating channels
14/08/10 11:41:45 INFO channel.DefaultChannelFactory: Creating instance of channel c1 type memory
14/08/10 11:41:45 INFO node.AbstractConfigurationProvider: Created channel c1
14/08/10 11:41:45 INFO source.DefaultSourceFactory: Creating instance of source r1, type syslogtcp
14/08/10 11:41:45 INFO sink.DefaultSinkFactory: Creating instance of sink: k1, type: logger
14/08/10 11:41:45 INFO node.AbstractConfigurationProvider: Channel c1 connected to [r1, k1]
14/08/10 11:41:45 INFO node.Application: Starting new configuration:{ sourceRunners:{r1=EventDrivenSourceRunner: { source:org.apache.flume.source.SyslogTcpSource{name:r1,state:IDLE} }} sinkRunners:{k1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@6538b14 counterGroup:{ name:null counters:{} } }} channels:{c1=org.apache.flume.channel.MemoryChannel{name: c1}} }
14/08/10 11:41:45 INFO node.Application: Starting Channel c1
14/08/10 11:41:45 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: CHANNEL, name: c1: Successfully registered new MBean.
14/08/10 11:41:45 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: c1 started
14/08/10 11:41:45 INFO node.Application: Starting Sink k1
14/08/10 11:41:45 INFO node.Application: Starting Source r1
14/08/10 11:41:45 INFO source.SyslogTcpSource: Syslog TCP Source starting...
14/08/10 11:42:15 WARN source.SyslogUtils: Event created from Invalid Syslog data.
14/08/10 11:42:15 INFO sink.LoggerSink: Event: { headers:{Severity=0, flume.syslog.status=Invalid, Facility=0} body: 68 65 6C 6C 6F 20 69 64 6F 61 6C 6C 2E 6F 72 67       hello idoall.org }

    5) Example 5: JSONHandler
      a) Create the agent configuration file

root@m1:/home/hadoop# vi /home/hadoop/flume-1.5.0-bin/conf/post_json.conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = org.apache.flume.source.http.HTTPSource
a1.sources.r1.port = 8888
a1.sources.r1.channels = c1
# Describe the sink
a1.sinks.k1.type = logger
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

      b) Start Flume agent a1

root@m1:/home/hadoop# /home/hadoop/flume-1.5.0-bin/bin/flume-ng agent -c . -f /home/hadoop/flume-1.5.0-bin/conf/post_json.conf -n a1 -Dflume.root.logger=INFO,console

      c) Send a JSON-format POST request

root@m1:/home/hadoop# curl -X POST -d '[{"headers":{"a":"a1","b":"b1"},"body":"idoall.org_body"}]' http://localhost:8888

      d) In the console on m1 you can see the following output:

14/08/10 11:49:59 INFO node.Application: Starting Channel c1
14/08/10 11:49:59 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: CHANNEL, name: c1: Successfully registered new MBean.
14/08/10 11:49:59 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: c1 started
14/08/10 11:49:59 INFO node.Application: Starting Sink k1
14/08/10 11:49:59 INFO node.Application: Starting Source r1
14/08/10 11:49:59 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
14/08/10 11:49:59 INFO mortbay.log: jetty-6.1.26
14/08/10 11:50:00 INFO mortbay.log: Started SelectChannelConnector@0.0.0.0:8888
14/08/10 11:50:00 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SOURCE, name: r1: Successfully registered new MBean.
14/08/10 11:50:00 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: r1 started
14/08/10 12:14:32 INFO sink.LoggerSink: Event: { headers:{b=b1, a=a1} body: 69 64 6F 61 6C 6C 2E 6F 72 67 5F 62 6F 64 79       idoall.org_body }

    6) Example 6: Hadoop sink
    For installing and deploying hadoop 2.2.0 itself, please refer to the article "ubuntu12.04+hadoop2.2.0+zookeeper3.4.5+hbase0.96.2+hive0.13.1 distributed environment deployment".
      a) Create the agent configuration file

root@m1:/home/hadoop# vi /home/hadoop/flume-1.5.0-bin/conf/hdfs_sink.conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = syslogtcp
a1.sources.r1.port = 5140
a1.sources.r1.host = localhost
a1.sources.r1.channels = c1
# Describe the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.path = hdfs://m1:9000/user/flume/syslogtcp
a1.sinks.k1.hdfs.filePrefix = Syslog
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = minute
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
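
      The configuration above writes SequenceFiles and rolls them with the HDFS sink's default roll settings. If you want to experiment with how files are rolled or written, the HDFS sink also exposes roll and format settings; a sketch of a few of them (the values are only examples, tune them for your cluster):

# roll a new file every 10 minutes or 64 MB, whichever comes first, and write plain text
a1.sinks.k1.hdfs.rollInterval = 600
a1.sinks.k1.hdfs.rollSize = 67108864
a1.sinks.k1.hdfs.rollCount = 0
a1.sinks.k1.hdfs.fileType = DataStream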

      b) Start Flume agent a1

root@m1:/home/hadoop# /home/hadoop/flume-1.5.0-bin/bin/flume-ng agent -c . -f /home/hadoop/flume-1.5.0-bin/conf/hdfs_sink.conf -n a1 -Dflume.root.logger=INFO,console

      c) Generate a test syslog message

root@m1:/home/hadoop# echo "hello idoall flume -> hadoop testing one" | nc localhost 5140

      d) In the console on m1 you can see the following output:

14/08/10 12:20:39 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: CHANNEL, name: c1: Successfully registered new MBean.
14/08/10 12:20:39 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: c1 started
14/08/10 12:20:39 INFO node.Application: Starting Sink k1
14/08/10 12:20:39 INFO node.Application: Starting Source r1
14/08/10 12:20:39 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SINK, name: k1: Successfully registered new MBean.
14/08/10 12:20:39 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: k1 started
14/08/10 12:20:39 INFO source.SyslogTcpSource: Syslog TCP Source starting...
14/08/10 12:21:46 WARN source.SyslogUtils: Event created from Invalid Syslog data.
14/08/10 12:21:49 INFO hdfs.HDFSSequenceFile: writeFormat = Writable, UseRawLocalFileSystem = false
14/08/10 12:21:49 INFO hdfs.BucketWriter: Creating hdfs://m1:9000/user/flume/syslogtcp//Syslog.1407644509504.tmp
14/08/10 12:22:20 INFO hdfs.BucketWriter: Closing hdfs://m1:9000/user/flume/syslogtcp//Syslog.1407644509504.tmp
14/08/10 12:22:20 INFO hdfs.BucketWriter: Close tries incremented
14/08/10 12:22:20 INFO hdfs.BucketWriter: Renaming hdfs://m1:9000/user/flume/syslogtcp/Syslog.1407644509504.tmp to hdfs://m1:9000/user/flume/syslogtcp/Syslog.1407644509504
14/08/10 12:22:20 INFO hdfs.HDFSEventSink: Writer callback called.

      e) Open another terminal on m1 and check whether the file was created on Hadoop

root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/bin/hadoop fs -ls /user/flume/syslogtcp
Found 1 items
-rw-r--r--   3 root supergroup       155 2014-08-10 12:22 /user/flume/syslogtcp/Syslog.1407644509504
root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/bin/hadoop fs -cat /user/flume/syslogtcp/Syslog.1407644509504
SEQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritable^;>Gv$hello idoall flume -> hadoop testing one

    7) Example 7: File Roll Sink
      a) Create the agent configuration file

root@m1:/home/hadoop# vi /home/hadoop/flume-1.5.0-bin/conf/file_roll.conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = syslogtcp
a1.sources.r1.port = 5555
a1.sources.r1.host = localhost
a1.sources.r1.channels = c1
# Describe the sink
a1.sinks.k1.type = file_roll
a1.sinks.k1.sink.directory = /home/hadoop/flume-1.5.0-bin/logs
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

      b) Start Flume agent a1

root@m1:/home/hadoop# /home/hadoop/flume-1.5.0-bin/bin/flume-ng agent -c . -f /home/hadoop/flume-1.5.0-bin/conf/file_roll.conf -n a1 -Dflume.root.logger=INFO,console

      c) Generate test log messages

root@m1:/home/hadoop# echo "hello idoall.org syslog" | nc localhost 5555
root@m1:/home/hadoop# echo "hello idoall.org syslog2" | nc localhost 5555

      d) Check whether files were created under /home/hadoop/flume-1.5.0-bin/logs; by default a new file is rolled every 30 seconds

root@m1:/home/hadoop# ll /home/hadoop/flume-1.5.0-bin/logs
total 272
drwxr-xr-x 3 root root 4096 Aug 10 12:50 ./
drwxr-xr-x 9 root root 4096 Aug 10 10:59 ../
-rw-r--r-- 1 root root   50 Aug 10 12:49 1407646164782-1
-rw-r--r-- 1 root root    0 Aug 10 12:49 1407646164782-2
-rw-r--r-- 1 root root    0 Aug 10 12:50 1407646164782-3
root@m1:/home/hadoop# cat /home/hadoop/flume-1.5.0-bin/logs/1407646164782-1 /home/hadoop/flume-1.5.0-bin/logs/1407646164782-2
hello idoall.org syslog
hello idoall.org syslog2
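
      If rolling every 30 seconds is too aggressive for your use case, the file_roll sink's roll interval can be changed in the configuration; a sketch (600 here is just an example value in seconds, and 0 would disable time-based rolling entirely):

a1.sinks.k1.sink.rollInterval = 600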

    8) Example 8: Replicating Channel Selector
    Flume supports fanning out the flow from one source to multiple channels. There are two fan-out modes: replicating and multiplexing. In the replicating case, an event is sent to all configured channels. In the multiplexing case, an event is sent to only a subset of the eligible channels. A fan-out flow needs rules that specify the source and the fan-out channels.
    This time we need two machines, m1 and m2.
      a) On m1, create the replicating_Channel_Selector configuration file

root@m1:/home/hadoop# vi /home/hadoop/flume-1.5.0-bin/conf/replicating_Channel_Selector.conf
a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1 c2
# Describe/configure the source
a1.sources.r1.type = syslogtcp
a1.sources.r1.port = 5140
a1.sources.r1.host = localhost
a1.sources.r1.channels = c1 c2
a1.sources.r1.selector.type = replicating
# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.channel = c1
a1.sinks.k1.hostname = m1
a1.sinks.k1.port = 5555
a1.sinks.k2.type = avro
a1.sinks.k2.channel = c2
a1.sinks.k2.hostname = m2
a1.sinks.k2.port = 5555
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
a1.channels.c2.type = memory
a1.channels.c2.capacity = 1000
a1.channels.c2.transactionCapacity = 100

      b) On m1, create the replicating_Channel_Selector_avro configuration file

root@m1:/home/hadoop# vi /home/hadoop/flume-1.5.0-bin/conf/replicating_Channel_Selector_avro.conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 5555
# Describe the sink
a1.sinks.k1.type = logger
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

      c) Copy the two configuration files from m1 to m2

root@m1:/home/hadoop/flume-1.5.0-bin# scp -r /home/hadoop/flume-1.5.0-bin/conf/replicating_Channel_Selector.conf root@m2:/home/hadoop/flume-1.5.0-bin/conf/replicating_Channel_Selector.conf
root@m1:/home/hadoop/flume-1.5.0-bin# scp -r /home/hadoop/flume-1.5.0-bin/conf/replicating_Channel_Selector_avro.conf root@m2:/home/hadoop/flume-1.5.0-bin/conf/replicating_Channel_Selector_avro.conf

      d) Open four terminals and start two Flume agents on each of m1 and m2

root@m1:/home/hadoop# /home/hadoop/flume-1.5.0-bin/bin/flume-ng agent -c . -f /home/hadoop/flume-1.5.0-bin/conf/replicating_Channel_Selector_avro.conf -n a1 -Dflume.root.logger=INFO,console
root@m1:/home/hadoop# /home/hadoop/flume-1.5.0-bin/bin/flume-ng agent -c . -f /home/hadoop/flume-1.5.0-bin/conf/replicating_Channel_Selector.conf -n a1 -Dflume.root.logger=INFO,console

      e) Then, on either m1 or m2, generate a test syslog message

root@m1:/home/hadoop# echo "hello idoall.org syslog" | nc localhost 5140

      f) In the sink windows on both m1 and m2 you can see the following output, which shows the event was replicated to both:

14/08/10 14:08:18 INFO ipc.NettyServer: Connection to /192.168.1.51:46844 disconnected.
14/08/10 14:08:52 INFO ipc.NettyServer: [id: 0x90f8fe1f, /192.168.1.50:35873 => /192.168.1.50:5555] OPEN
14/08/10 14:08:52 INFO ipc.NettyServer: [id: 0x90f8fe1f, /192.168.1.50:35873 => /192.168.1.50:5555] BOUND: /192.168.1.50:5555
14/08/10 14:08:52 INFO ipc.NettyServer: [id: 0x90f8fe1f, /192.168.1.50:35873 => /192.168.1.50:5555] CONNECTED: /192.168.1.50:35873
14/08/10 14:08:59 INFO ipc.NettyServer: [id: 0xd6318635, /192.168.1.51:46858 => /192.168.1.50:5555] OPEN
14/08/10 14:08:59 INFO ipc.NettyServer: [id: 0xd6318635, /192.168.1.51:46858 => /192.168.1.50:5555] BOUND: /192.168.1.50:5555
14/08/10 14:08:59 INFO ipc.NettyServer: [id: 0xd6318635, /192.168.1.51:46858 => /192.168.1.50:5555] CONNECTED: /192.168.1.51:46858
14/08/10 14:09:20 INFO sink.LoggerSink: Event: { headers:{Severity=0, flume.syslog.status=Invalid, Facility=0} body: 68 65 6C 6C 6F 20 69 64 6F 61 6C 6C 2E 6F 72 67       hello idoall.org }

    
    9) Example 9: Multiplexing Channel Selector
      a) On m1, create the Multiplexing_Channel_Selector configuration file

root@m1:/home/hadoop# vi /home/hadoop/flume-1.5.0-bin/conf/Multiplexing_Channel_Selector.conf
a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1 c2
# Describe/configure the source
a1.sources.r1.type = org.apache.flume.source.http.HTTPSource
a1.sources.r1.port = 5140
a1.sources.r1.channels = c1 c2
a1.sources.r1.selector.type = multiplexing
a1.sources.r1.selector.header = type
# The channel mappings for different header values may overlap, and the default may point to any number of channels.
a1.sources.r1.selector.mapping.baidu = c1
a1.sources.r1.selector.mapping.ali = c2
a1.sources.r1.selector.default = c1
# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.channel = c1
a1.sinks.k1.hostname = m1
a1.sinks.k1.port = 5555
a1.sinks.k2.type = avro
a1.sinks.k2.channel = c2
a1.sinks.k2.hostname = m2
a1.sinks.k2.port = 5555
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
a1.channels.c2.type = memory
a1.channels.c2.capacity = 1000
a1.channels.c2.transactionCapacity = 100

      b) On m1, create the Multiplexing_Channel_Selector_avro configuration file

root@m1:/home/hadoop# vi /home/hadoop/flume-1.5.0-bin/conf/Multiplexing_Channel_Selector_avro.conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 5555
# Describe the sink
a1.sinks.k1.type = logger
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

      c) Copy the two configuration files to m2

root@m1:/home/hadoop/flume-1.5.0-bin# scp -r /home/hadoop/flume-1.5.0-bin/conf/Multiplexing_Channel_Selector.conf root@m2:/home/hadoop/flume-1.5.0-bin/conf/Multiplexing_Channel_Selector.conf
root@m1:/home/hadoop/flume-1.5.0-bin# scp -r /home/hadoop/flume-1.5.0-bin/conf/Multiplexing_Channel_Selector_avro.conf root@m2:/home/hadoop/flume-1.5.0-bin/conf/Multiplexing_Channel_Selector_avro.conf

      d) Open four terminals and start two Flume agents on each of m1 and m2

root@m1:/home/hadoop# /home/hadoop/flume-1.5.0-bin/bin/flume-ng agent -c . -f /home/hadoop/flume-1.5.0-bin/conf/Multiplexing_Channel_Selector_avro.conf -n a1 -Dflume.root.logger=INFO,console
root@m1:/home/hadoop# /home/hadoop/flume-1.5.0-bin/bin/flume-ng agent -c . -f /home/hadoop/flume-1.5.0-bin/conf/Multiplexing_Channel_Selector.conf -n a1 -Dflume.root.logger=INFO,console

      e) Then, on either m1 or m2, generate test events

root@m1:/home/hadoop# curl -X POST -d '[{"headers":{"type":"baidu"},"body":"idoall_TEST1"}]' http://localhost:5140 && curl -X POST -d '[{"headers":{"type":"ali"},"body":"idoall_TEST2"}]' http://localhost:5140 && curl -X POST -d '[{"headers":{"type":"qq"},"body":"idoall_TEST3"}]' http://localhost:5140

      f) In the sink window on m1 you can see the following output:

14/08/10 14:32:21 INFO node.Application: Starting Sink k1
14/08/10 14:32:21 INFO node.Application: Starting Source r1
14/08/10 14:32:21 INFO source.AvroSource: Starting Avro source r1: { bindAddress: 0.0.0.0, port: 5555 }...
14/08/10 14:32:21 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SOURCE, name: r1: Successfully registered new MBean.
14/08/10 14:32:21 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: r1 started
14/08/10 14:32:21 INFO source.AvroSource: Avro source r1 started.
14/08/10 14:32:36 INFO ipc.NettyServer: [id: 0xcf00eea6, /192.168.1.50:35916 => /192.168.1.50:5555] OPEN
14/08/10 14:32:36 INFO ipc.NettyServer: [id: 0xcf00eea6, /192.168.1.50:35916 => /192.168.1.50:5555] BOUND: /192.168.1.50:5555
14/08/10 14:32:36 INFO ipc.NettyServer: [id: 0xcf00eea6, /192.168.1.50:35916 => /192.168.1.50:5555] CONNECTED: /192.168.1.50:35916
14/08/10 14:32:44 INFO ipc.NettyServer: [id: 0x432f5468, /192.168.1.51:46945 => /192.168.1.50:5555] OPEN
14/08/10 14:32:44 INFO ipc.NettyServer: [id: 0x432f5468, /192.168.1.51:46945 => /192.168.1.50:5555] BOUND: /192.168.1.50:5555
14/08/10 14:32:44 INFO ipc.NettyServer: [id: 0x432f5468, /192.168.1.51:46945 => /192.168.1.50:5555] CONNECTED: /192.168.1.51:46945
14/08/10 14:34:11 INFO sink.LoggerSink: Event: { headers:{type=baidu} body: 69 64 6F 61 6C 6C 5F 54 45 53 54 31       idoall_TEST1 }
14/08/10 14:34:57 INFO sink.LoggerSink: Event: { headers:{type=qq} body: 69 64 6F 61 6C 6C 5F 54 45 53 54 33       idoall_TEST3 }

      g) In the sink window on m2 you can see the following output:

14/08/10 14:32:27 INFO node.Application: Starting Sink k1
14/08/10 14:32:27 INFO node.Application: Starting Source r1
14/08/10 14:32:27 INFO source.AvroSource: Starting Avro source r1: { bindAddress: 0.0.0.0, port: 5555 }...
14/08/10 14:32:27 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SOURCE, name: r1: Successfully registered new MBean.
14/08/10 14:32:27 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: r1 started
14/08/10 14:32:27 INFO source.AvroSource: Avro source r1 started.
14/08/10 14:32:36 INFO ipc.NettyServer: [id: 0x7c2f0aec, /192.168.1.50:38104 => /192.168.1.51:5555] OPEN
14/08/10 14:32:36 INFO ipc.NettyServer: [id: 0x7c2f0aec, /192.168.1.50:38104 => /192.168.1.51:5555] BOUND: /192.168.1.51:5555
14/08/10 14:32:36 INFO ipc.NettyServer: [id: 0x7c2f0aec, /192.168.1.50:38104 => /192.168.1.51:5555] CONNECTED: /192.168.1.50:38104
14/08/10 14:32:44 INFO ipc.NettyServer: [id: 0x3d36f553, /192.168.1.51:48599 => /192.168.1.51:5555] OPEN
14/08/10 14:32:44 INFO ipc.NettyServer: [id: 0x3d36f553, /192.168.1.51:48599 => /192.168.1.51:5555] BOUND: /192.168.1.51:5555
14/08/10 14:32:44 INFO ipc.NettyServer: [id: 0x3d36f553, /192.168.1.51:48599 => /192.168.1.51:5555] CONNECTED: /192.168.1.51:48599
14/08/10 14:34:33 INFO sink.LoggerSink: Event: { headers:{type=ali} body: 69 64 6F 61 6C 6C 5F 54 45 53 54 32       idoall_TEST2 }

    As you can see, events are routed to different channels according to the value of the header.
 
    10) Example 10: Flume Sink Processors
    With failover, events are always sent to one particular sink; when that sink becomes unavailable, they are automatically sent to the next sink.
 
      a) On m1, create the Flume_Sink_Processors configuration file

root@m1:/home/hadoop# vi /home/hadoop/flume-1.5.0-bin/conf/Flume_Sink_Processors.conf

a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1 c2

# This is the key to configuring failover: a sink group is required
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
# The processor type is failover
a1.sinkgroups.g1.processor.type = failover
# Priorities: the larger the number, the higher the priority; each sink must have a different priority
a1.sinkgroups.g1.processor.priority.k1 = 5
a1.sinkgroups.g1.processor.priority.k2 = 10
# Set to 10 seconds here; adjust it to be faster or slower to suit your situation
a1.sinkgroups.g1.processor.maxpenalty = 10000

# Describe/configure the source
a1.sources.r1.type = syslogtcp
a1.sources.r1.port = 5140
a1.sources.r1.channels = c1 c2
a1.sources.r1.selector.type = replicating

# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.channel = c1
a1.sinks.k1.hostname = m1
a1.sinks.k1.port = 5555

a1.sinks.k2.type = avro
a1.sinks.k2.channel = c2
a1.sinks.k2.hostname = m2
a1.sinks.k2.port = 5555

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.channels.c2.type = memory
a1.channels.c2.capacity = 1000
a1.channels.c2.transactionCapacity = 100

      b) On m1, create the Flume_Sink_Processors_avro configuration file

root@m1:/home/hadoop# vi /home/hadoop/flume-1.5.0-bin/conf/Flume_Sink_Processors_avro.conf

a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 5555

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

      c) Copy the two configuration files to m2

root@m1:/home/hadoop/flume-1.5.0-bin# scp -r /home/hadoop/flume-1.5.0-bin/conf/Flume_Sink_Processors.conf root@m2:/home/hadoop/flume-1.5.0-bin/conf/Flume_Sink_Processors.conf
root@m1:/home/hadoop/flume-1.5.0-bin# scp -r /home/hadoop/flume-1.5.0-bin/conf/Flume_Sink_Processors_avro.conf root@m2:/home/hadoop/flume-1.5.0-bin/conf/Flume_Sink_Processors_avro.conf

      d) Open four terminals and start two Flume agents on each of m1 and m2

root@m1:/home/hadoop# /home/hadoop/flume-1.5.0-bin/bin/flume-ng agent -c . -f /home/hadoop/flume-1.5.0-bin/conf/Flume_Sink_Processors_avro.conf -n a1 -Dflume.root.logger=INFO,console
root@m1:/home/hadoop# /home/hadoop/flume-1.5.0-bin/bin/flume-ng agent -c . -f /home/hadoop/flume-1.5.0-bin/conf/Flume_Sink_Processors.conf -n a1 -Dflume.root.logger=INFO,console

      e) Then, on either m1 or m2, generate a test log message

root@m1:/home/hadoop# echo "idoall.org test1 failover" | nc localhost 5140

      f) Because m2 has the higher priority, you can see the following in the sink window on m2, while m1 shows nothing:

14/08/10 15:02:46 INFO ipc.NettyServer: Connection to /192.168.1.51:48692 disconnected.
14/08/10 15:03:12 INFO ipc.NettyServer: [id: 0x09a14036, /192.168.1.51:48704 => /192.168.1.51:5555] OPEN
14/08/10 15:03:12 INFO ipc.NettyServer: [id: 0x09a14036, /192.168.1.51:48704 => /192.168.1.51:5555] BOUND: /192.168.1.51:5555
14/08/10 15:03:12 INFO ipc.NettyServer: [id: 0x09a14036, /192.168.1.51:48704 => /192.168.1.51:5555] CONNECTED: /192.168.1.51:48704
14/08/10 15:03:26 INFO sink.LoggerSink: Event: { headers:{Severity=0, flume.syslog.status=Invalid, Facility=0} body: 69 64 6F 61 6C 6C 2E 6F 72 67 20 74 65 73 74 31       idoall.org test1 }

      g) Now stop the sink on m2 (Ctrl+C) and send test data again:

root@m1:/home/hadoop# echo "idoall.org test2 failover" | nc localhost 5140

      h) In the sink window on m1 you can now see that both test messages have been received:

14/08/10 15:02:46 INFO ipc.NettyServer: Connection to /192.168.1.51:47036 disconnected.
14/08/10 15:03:12 INFO ipc.NettyServer: [id: 0xbcf79851, /192.168.1.51:47048 => /192.168.1.50:5555] OPEN
14/08/10 15:03:12 INFO ipc.NettyServer: [id: 0xbcf79851, /192.168.1.51:47048 => /192.168.1.50:5555] BOUND: /192.168.1.50:5555
14/08/10 15:03:12 INFO ipc.NettyServer: [id: 0xbcf79851, /192.168.1.51:47048 => /192.168.1.50:5555] CONNECTED: /192.168.1.51:47048
14/08/10 15:07:56 INFO sink.LoggerSink: Event: { headers:{Severity=0, flume.syslog.status=Invalid, Facility=0} body: 69 64 6F 61 6C 6C 2E 6F 72 67 20 74 65 73 74 31       idoall.org test1 }
14/08/10 15:07:56 INFO sink.LoggerSink: Event: { headers:{Severity=0, flume.syslog.status=Invalid, Facility=0} body: 69 64 6F 61 6C 6C 2E 6F 72 67 20 74 65 73 74 32       idoall.org test2 }

      i) Now start the sink again in the m2 sink window:

root@m1:/home/hadoop# /home/hadoop/flume-1.5.0-bin/bin/flume-ng agent -c . -f /home/hadoop/flume-1.5.0-bin/conf/Flume_Sink_Processors_avro.conf -n a1 -Dflume.root.logger=INFO,console

      j) Send two more batches of test data:

root@m1:/home/hadoop# echo "idoall.org test3 failover" | nc localhost 5140 && echo "idoall.org test4 failover" | nc localhost 5140

      k) In the sink window on m2 you can see the following; because of the priorities, log messages land on m2 again:

14/08/10 15:09:47 INFO node.Application: Starting Sink k1
14/08/10 15:09:47 INFO node.Application: Starting Source r1
14/08/10 15:09:47 INFO source.AvroSource: Starting Avro source r1: { bindAddress: 0.0.0.0, port: 5555 }...
14/08/10 15:09:47 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SOURCE, name: r1: Successfully registered new MBean.
14/08/10 15:09:47 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: r1 started
14/08/10 15:09:47 INFO source.AvroSource: Avro source r1 started.
14/08/10 15:09:54 INFO ipc.NettyServer: [id: 0x96615732, /192.168.1.51:48741 => /192.168.1.51:5555] OPEN
14/08/10 15:09:54 INFO ipc.NettyServer: [id: 0x96615732, /192.168.1.51:48741 => /192.168.1.51:5555] BOUND: /192.168.1.51:5555
14/08/10 15:09:54 INFO ipc.NettyServer: [id: 0x96615732, /192.168.1.51:48741 => /192.168.1.51:5555] CONNECTED: /192.168.1.51:48741
14/08/10 15:09:57 INFO sink.LoggerSink: Event: { headers:{Severity=0, flume.syslog.status=Invalid, Facility=0} body: 69 64 6F 61 6C 6C 2E 6F 72 67 20 74 65 73 74 32       idoall.org test2 }
14/08/10 15:10:43 INFO ipc.NettyServer: [id: 0x12621f9a, /192.168.1.50:38166 => /192.168.1.51:5555] OPEN
14/08/10 15:10:43 INFO ipc.NettyServer: [id: 0x12621f9a, /192.168.1.50:38166 => /192.168.1.51:5555] BOUND: /192.168.1.51:5555
14/08/10 15:10:43 INFO ipc.NettyServer: [id: 0x12621f9a, /192.168.1.50:38166 => /192.168.1.51:5555] CONNECTED: /192.168.1.50:38166
14/08/10 15:10:43 INFO sink.LoggerSink: Event: { headers:{Severity=0, flume.syslog.status=Invalid, Facility=0} body: 69 64 6F 61 6C 6C 2E 6F 72 67 20 74 65 73 74 33       idoall.org test3 }
14/08/10 15:10:43 INFO sink.LoggerSink: Event: { headers:{Severity=0, flume.syslog.status=Invalid, Facility=0} body: 69 64 6F 61 6C 6C 2E 6F 72 67 20 74 65 73 74 34       idoall.org test4 }

 
    11) Example 11: Load balancing Sink Processor
    Unlike failover, load balancing offers two selection strategies: round robin and random. In either case, if the selected sink is unavailable, the processor automatically tries to send to the next available sink.
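
    The configuration in step a) below uses round robin; to try the random strategy instead, the selector line would change to something like this (a sketch of the alternative setting):

a1.sinkgroups.g1.processor.selector = random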
 
      a) On m1, create the Load_balancing_Sink_Processors configuration file

root@m1:/home/hadoop# vi /home/hadoop/flume-1.5.0-bin/conf/Load_balancing_Sink_Processors.conf

a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1

# This is the key to configuring load balancing: a sink group is required
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = load_balance
a1.sinkgroups.g1.processor.backoff = true
a1.sinkgroups.g1.processor.selector = round_robin

# Describe/configure the source
a1.sources.r1.type = syslogtcp
a1.sources.r1.port = 5140
a1.sources.r1.channels = c1

# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.channel = c1
a1.sinks.k1.hostname = m1
a1.sinks.k1.port = 5555

a1.sinks.k2.type = avro
a1.sinks.k2.channel = c1
a1.sinks.k2.hostname = m2
a1.sinks.k2.port = 5555

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

      b) On m1, create the Load_balancing_Sink_Processors_avro configuration file

root@m1:/home/hadoop# vi /home/hadoop/flume-1.5.0-bin/conf/Load_balancing_Sink_Processors_avro.conf

a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 5555

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

      c) Copy the two configuration files to m2

root@m1:/home/hadoop/flume-1.5.0-bin# scp -r /home/hadoop/flume-1.5.0-bin/conf/Load_balancing_Sink_Processors.conf root@m2:/home/hadoop/flume-1.5.0-bin/conf/Load_balancing_Sink_Processors.conf
root@m1:/home/hadoop/flume-1.5.0-bin# scp -r /home/hadoop/flume-1.5.0-bin/conf/Load_balancing_Sink_Processors_avro.conf root@m2:/home/hadoop/flume-1.5.0-bin/conf/Load_balancing_Sink_Processors_avro.conf

      d) Open four terminals and start two Flume agents on each of m1 and m2

root@m1:/home/hadoop# /home/hadoop/flume-1.5.0-bin/bin/flume-ng agent -c . -f /home/hadoop/flume-1.5.0-bin/conf/Load_balancing_Sink_Processors_avro.conf -n a1 -Dflume.root.logger=INFO,console
root@m1:/home/hadoop# /home/hadoop/flume-1.5.0-bin/bin/flume-ng agent -c . -f /home/hadoop/flume-1.5.0-bin/conf/Load_balancing_Sink_Processors.conf -n a1 -Dflume.root.logger=INFO,console

      e) Then, on either m1 or m2, generate test log messages one line at a time; if you send them too quickly, they tend to all land on one machine.

root@m1:/home/hadoop# echo "idoall.org test1" | nc localhost 5140
root@m1:/home/hadoop# echo "idoall.org test2" | nc localhost 5140
root@m1:/home/hadoop# echo "idoall.org test3" | nc localhost 5140
root@m1:/home/hadoop# echo "idoall.org test4" | nc localhost 5140

      f) In the sink window on m1 you can see the following output:

14/08/10 15:35:29 INFO sink.LoggerSink: Event: { headers:{Severity=0, flume.syslog.status=Invalid, Facility=0} body: 69 64 6F 61 6C 6C 2E 6F 72 67 20 74 65 73 74 32       idoall.org test2 }
14/08/10 15:35:33 INFO sink.LoggerSink: Event: { headers:{Severity=0, flume.syslog.status=Invalid, Facility=0} body: 69 64 6F 61 6C 6C 2E 6F 72 67 20 74 65 73 74 34       idoall.org test4 }

      g) In the sink window on m2 you can see the following output:

14/08/10 15:35:27 INFO sink.LoggerSink: Event: { headers:{Severity=0, flume.syslog.status=Invalid, Facility=0} body: 69 64 6F 61 6C 6C 2E 6F 72 67 20 74 65 73 74 31       idoall.org test1 }
14/08/10 15:35:29 INFO sink.LoggerSink: Event: { headers:{Severity=0, flume.syslog.status=Invalid, Facility=0} body: 69 64 6F 61 6C 6C 2E 6F 72 67 20 74 65 73 74 33       idoall.org test3 }

    This shows that round-robin selection is working.
 
    12) Example 12: HBase sink

      a) Before testing, start HBase; for setting it up, refer to "ubuntu12.04+hadoop2.2.0+zookeeper3.4.5+hbase0.96.2+hive0.13.1 distributed environment deployment".

      b) Then copy the following jars into Flume's lib directory:

cp /home/hadoop/hbase-0.96.2-hadoop2/lib/protobuf-java-2.5.0.jar /home/hadoop/flume-1.5.0-bin/lib
cp /home/hadoop/hbase-0.96.2-hadoop2/lib/hbase-client-0.96.2-hadoop2.jar /home/hadoop/flume-1.5.0-bin/lib
cp /home/hadoop/hbase-0.96.2-hadoop2/lib/hbase-common-0.96.2-hadoop2.jar /home/hadoop/flume-1.5.0-bin/lib
cp /home/hadoop/hbase-0.96.2-hadoop2/lib/hbase-protocol-0.96.2-hadoop2.jar /home/hadoop/flume-1.5.0-bin/lib
cp /home/hadoop/hbase-0.96.2-hadoop2/lib/hbase-server-0.96.2-hadoop2.jar /home/hadoop/flume-1.5.0-bin/lib
cp /home/hadoop/hbase-0.96.2-hadoop2/lib/hbase-hadoop2-compat-0.96.2-hadoop2.jar /home/hadoop/flume-1.5.0-bin/lib
cp /home/hadoop/hbase-0.96.2-hadoop2/lib/hbase-hadoop-compat-0.96.2-hadoop2.jar /home/hadoop/flume-1.5.0-bin/lib
cp /home/hadoop/hbase-0.96.2-hadoop2/lib/htrace-core-2.04.jar /home/hadoop/flume-1.5.0-bin/lib

      c) Make sure the test_idoall_org table already exists in HBase; for the table's schema and columns, see the HBase section of "ubuntu12.04+hadoop2.2.0+zookeeper3.4.5+hbase0.96.2+hive0.13.1 distributed environment deployment".
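
      If you only need a minimal table for this test, a sketch of creating it in the HBase shell, assuming a single column family called name (matching the columnFamily used in the configuration below):

hbase(main):001:0> create 'test_idoall_org', 'name'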
 
      d) On m1, create the hbase_simple configuration file

root@m1:/home/hadoop# vi /home/hadoop/flume-1.5.0-bin/conf/hbase_simple.conf

a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = syslogtcp
a1.sources.r1.port = 5140
a1.sources.r1.host = localhost
a1.sources.r1.channels = c1

# Describe the sink
a1.sinks.k1.type = hbase
a1.sinks.k1.table = test_idoall_org
a1.sinks.k1.columnFamily = name
a1.sinks.k1.column = idoall
a1.sinks.k1.serializer = org.apache.flume.sink.hbase.RegexHbaseEventSerializer

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

      e) Start the Flume agent

/home/hadoop/flume-1.5.0-bin/bin/flume-ng agent -c . -f /home/hadoop/flume-1.5.0-bin/conf/hbase_simple.conf -n a1 -Dflume.root.logger=INFO,console

      f) Generate a test syslog message

root@m1:/home/hadoop# echo "hello idoall.org from flume" | nc localhost 5140

      g) Now log in to HBase and you can see that the new data has been inserted

root@m1:/home/hadoop# /home/hadoop/hbase-0.96.2-hadoop2/bin/hbase shell
2014-08-10 16:09:48,984 INFO [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
HBase Shell; enter "help<RETURN>" for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.96.2-hadoop2, r1581096, Mon Mar 24 16:03:18 PDT 2014

hbase(main):001:0> list
TABLE
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/hbase-0.96.2-hadoop2/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
hbase2hive_idoall
hive2hbase_idoall
test_idoall_org
3 row(s) in 2.6880 seconds

=> ["hbase2hive_idoall", "hive2hbase_idoall", "test_idoall_org"]
hbase(main):002:0> scan "test_idoall_org"
ROW                           COLUMN+CELL
 10086                        column=name:idoall, timestamp=1406424831473, value=idoallvalue
1 row(s) in 0.0550 seconds

hbase(main):003:0> scan "test_idoall_org"
ROW                           COLUMN+CELL
 10086                        column=name:idoall, timestamp=1406424831473, value=idoallvalue
 1407658495588-XbQCOZrKK8-0   column=name:payload, timestamp=1407658498203, value=hello idoall.org from flume
2 row(s) in 0.0200 seconds

hbase(main):004:0> quit

    Having worked through all of these Flume examples, you will find that Flume really is powerful: its components can be combined in all sorts of ways to accomplish what you need. As the saying goes, the master opens the door, but the training is up to you. How you fit Flume into your own product and business is for you to decide, so go get hands-on with it.

    This article is intended as a set of notes; I hope it is helpful to readers who are just getting started.