Flume Environment Setup
Extract and Install
Extract Flume and rename the directory:
[bigdata@hadoop102 software]$ tar -zxf /opt/software/apache-flume-1.9.0-bin.tar.gz -C /opt/module/
[bigdata@hadoop102 module]$ mv /opt/module/apache-flume-1.9.0-bin /opt/module/flume-1.9.0
Delete guava-11.0.2.jar from the lib folder for compatibility with Hadoop 3.1.3:
[bigdata@hadoop102 module]$ rm /opt/module/flume-1.9.0/lib/guava-11.0.2.jar
Listening on a Port
Install the netcat tool (see the example below).
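If netcat is not already present, it can be installed with the system package manager. A minimal sketch, assuming a yum-based distribution such as CentOS; the package name nc is an assumption about your repositories:

sudo yum install -y nc    # assumed package name; provides the nc (netcat) command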
Check whether port 44444 is already in use:
sudo netstat -nlp | grep 44444
Create a job folder under the flume directory and cd into it, as shown below.
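A minimal sketch of this step, assuming the Flume installation path /opt/module/flume-1.9.0 from the steps above:

cd /opt/module/flume-1.9.0    # go to the Flume home directory
mkdir job                     # create the folder that will hold agent configs
cd job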
In the job folder, create the Flume Agent configuration file flume-netcat-logger.conf:
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
# Describe the sink
a1.sinks.k1.type = logger
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
First, start the Flume agent to listen on the port:
bin/flume-ng agent --conf conf/ --name a1 --conf-file job/flume-netcat-logger.conf -Dflume.root.logger=INFO,console
Use the netcat tool to send content to port 44444 on the local machine, for example:
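A minimal usage sketch, assuming the agent started in the previous step is still running in another terminal on the same machine; the sample text is arbitrary:

nc localhost 44444    # connect to the netcat source
hello flume           # type a line and press Enter to send it

Each line sent this way should appear as an Event in the Flume agent's console output, since the logger sink writes received events to the log.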