Flume Deployment and Testing

1 Flume Installation

Flume downloads: http://flume.apache.org/download.html
Flume 1.7.0 download: http://archive.apache.org/dist/flume/1.7.0/

Configure the environment variables

Edit ~/.bash_profile and add the following:

export FLUME_HOME=/root/software/apache-flume-1.7.0-bin               
export FLUME_CONF_DIR=$FLUME_HOME/conf
export PATH=$PATH:$FLUME_HOME/bin

Create flume-env.sh from the template (cp flume-env.sh.template flume-env.sh) and add the following line:

export JAVA_HOME=/usr/java/jdk1.8.0_111/

Verify the installation

source ~/.bash_profile
flume-ng version

2 Flume Tests

Test 1: avro source

Create the agent configuration file (vim $FLUME_CONF_DIR/avro.conf) and add the following:

a1.sources = r1
a1.sinks = k1
a1.channels = c1
 
# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 4141
 
# Describe the sink
a1.sinks.k1.type = logger
 
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
 
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Start Flume agent a1 (either command form works):

flume-ng agent -c . -f $FLUME_CONF_DIR/avro.conf -n a1 -Dflume.root.logger=INFO,console  # log to the console

flume-ng agent -f $FLUME_CONF_DIR/avro.conf \
--name a1 -Dflume.root.logger=INFO,console

Write data to a file

echo "hello world" > /root/log.00

Start the avro-client

flume-ng avro-client --host localhost -p 4141 -F /root/log.00

If you see "hello world" in the agent's output, the test succeeded.

Test 2: netcat source

Create the agent configuration file (vim $FLUME_CONF_DIR/netcast.conf) and add the following:

a1.sources = r1
a1.sinks = k1
a1.channels = c1
 
# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.channels = c1
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 44444
 
# Describe the sink
a1.sinks.k1.type = logger
 
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
 
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Start the Flume agent

flume-ng agent -f $FLUME_CONF_DIR/netcast.conf \
--name a1 -Dflume.root.logger=INFO,console

Send data with telnet

telnet localhost 44444
# type the data to send

If the agent prints what you typed, the test succeeded.
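For intuition, the telnet step can be sketched with plain Python sockets. A small stand-in server (not the real Flume agent) plays the role of the netcat source here; the client side does exactly what telnet does, namely open a TCP connection and write newline-terminated text. All names and the ephemeral port are illustrative:

```python
import socket
import threading

received = []

# Stand-in for the Flume netcat source: accept one connection and
# read newline-terminated text (port 0 asks the OS for a free port).
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def agent_side():
    conn, _ = server.accept()
    data = conn.recv(1024).decode()
    received.extend(line for line in data.splitlines() if line)
    conn.close()

t = threading.Thread(target=agent_side)
t.start()

# This is all telnet does: connect over TCP and send lines;
# the netcat source turns each line into one Flume event.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello flume\n")
client.close()
t.join()
server.close()

print(received)
```

The real netcat source behaves the same way at the wire level, which is why any TCP client (telnet, nc, or a script) can feed it.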

Test 3: netcat to Spark

Create the agent configuration file (vim $FLUME_CONF_DIR/netcast-spark.conf) and add the following:

# netcast-spark.conf: A single-node Flume configuration
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 33333

# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = localhost
a1.sinks.k1.port = 44444

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000000
a1.channels.c1.transactionCapacity = 1000000

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Note: the sink type is set to avro and bound to port 44444 on localhost. After the Flume source collects messages and hands them to the sink, the sink pushes them to localhost:44444. Our Spark Streaming program listens on that port, and whenever a message arrives, the Spark Streaming application pulls it in for processing.
Caution: if the port that Spark Streaming listens on is not up yet, starting the agent directly will produce errors.
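That caution holds because the avro sink is a TCP client: at startup it connects out to the receiver's host and port. A minimal Python sketch of the failure mode, using a throwaway local port in place of the real 44444:

```python
import socket

# Find a local port that currently has no listener: bind an
# ephemeral port, note its number, then release it.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
closed_port = probe.getsockname()[1]
probe.close()

# Like the avro sink starting before the Spark receiver is up,
# the raw outbound connection attempt fails rather than waiting
# for a server to appear.
refused = False
try:
    socket.create_connection(("127.0.0.1", closed_port), timeout=2)
except OSError:
    refused = True

print("connection refused:", refused)
```

So always start the Spark Streaming receiver first, then the agent.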
A test program for Spark Streaming to receive data from Flume:

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Milliseconds, StreamingContext}
import org.apache.spark.streaming.flume.FlumeUtils

object StreamingFromFlume {

    def main(args: Array[String]): Unit = {
        if (args.length < 2) {
            System.err.println("Usage: StreamingFromFlume <host> <port>")
            System.exit(1)
        }

        val host = args(0)
        val port = args(1).toInt
        val batchInterval = Milliseconds(2000)
        val sparkConf = new SparkConf().setAppName("StreamingFromFlume")
        val ssc = new StreamingContext(sparkConf, batchInterval)
        // Start an avro receiver on host:port; the Flume avro sink pushes events here
        val stream = FlumeUtils.createStream(ssc, host, port, StorageLevel.MEMORY_ONLY_SER_2)
        stream.count().map(cnt => "Received " + cnt + " flume events.").print()
        ssc.start()
        ssc.awaitTermination()
    }
}

Launch the Spark Streaming program (using the jar built from the code above, SparkStreaming-0.0.1-SNAPSHOT.jar):

spark-submit --jars /root/software/spark-2.2.0-bin-hadoop2.6/jars/flume/spark-streaming-flume_2.11-2.2.0.jar,\
/root/software/spark-2.2.0-bin-hadoop2.6/jars/flume/spark-streaming-flume-sink_2.11-2.2.0.jar,\
/root/software/apache-flume-1.7.0-bin/lib/flume-ng-sdk-1.7.0.jar \
--class sparkstudy.sparkstreaming.StreamingFromFlume SparkStreaming-0.0.1-SNAPSHOT.jar 127.0.0.1 44444

Start the agent

flume-ng agent -f $FLUME_CONF_DIR/netcast-spark.conf \
--name a1 -Dflume.root.logger=INFO,console

Send data to Flume via telnet

telnet 127.0.0.1 33333
# type the data to send

If the program prints output like "Received 16 flume events.", the test succeeded.

References

  • Installing and Using the Log Collection Tool Flume (日志采集工具Flume的安装与使用方法)
  • Getting Started with Spark 2.1.0: Using Flume as a DStream Data Source (Spark2.1.0入门:把Flume作为DStream数据源)
