Standalone Mode Setup

[bigdata@hadoop102 software]$ tar -zxvf flink-1.17.0-bin-scala_2.12.tgz -C /opt/module/

Modify the cluster configuration

1. Modify flink-conf.yaml

[bigdata@hadoop102 conf]$ vim flink-conf.yaml

# JobManager node address
jobmanager.rpc.address: hadoop102
jobmanager.bind-host: 0.0.0.0
rest.address: hadoop102
rest.bind-address: 0.0.0.0
# TaskManager node address; must be set to the current machine's hostname
taskmanager.bind-host: 0.0.0.0
taskmanager.host: hadoop102
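Since flink-conf.yaml in Flink 1.17 is still a flat `key: value` file, the settings above can be spot-checked from the shell. A minimal sketch, using a here-doc stand-in for the real file (the `/tmp` path and the `awk` check are illustrative, not part of the Flink distribution):

```shell
# Stand-in for /opt/module/flink-1.17.0/conf/flink-conf.yaml (demo file only)
cat > /tmp/flink-conf-demo.yaml <<'EOF'
jobmanager.rpc.address: hadoop102
jobmanager.bind-host: 0.0.0.0
rest.address: hadoop102
rest.bind-address: 0.0.0.0
taskmanager.bind-host: 0.0.0.0
taskmanager.host: hadoop102
EOF

# Pull out one setting, as a quick sanity check on a node would:
awk -F': ' '$1 == "taskmanager.host" {print $2}' /tmp/flink-conf-demo.yaml
# → hadoop102
```

On the real cluster, pointing the same `awk` at /opt/module/flink-1.17.0/conf/flink-conf.yaml on each node confirms that taskmanager.host was set per machine.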

2. Modify workers

[bigdata@hadoop102 conf]$ vim workers

hadoop102
hadoop103
hadoop104

3. Modify masters

[bigdata@hadoop102 conf]$ vim masters

hadoop102:8081

Distribute the installation directory

1. Distribute Flink to hadoop103 and hadoop104

[bigdata@hadoop102 module]$ xsync flink-1.17.0/

2. Modify taskmanager.host on hadoop103 and hadoop104 respectively

[bigdata@hadoop103 ~]$ cd /opt/module/flink-1.17.0/conf/
[bigdata@hadoop103 conf]$ vim flink-conf.yaml 

# TaskManager node address; must be set to the current machine's hostname
taskmanager.host: hadoop103


[bigdata@hadoop104 ~]$ cd /opt/module/flink-1.17.0/conf/
[bigdata@hadoop104 conf]$ vim flink-conf.yaml 

# TaskManager node address; must be set to the current machine's hostname
taskmanager.host: hadoop104
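The two edits above differ only in the hostname, so they can be scripted. A sketch using sed over local stand-in files (the /tmp paths are illustrative; on the real nodes the same sed would target /opt/module/flink-1.17.0/conf/flink-conf.yaml, typically via ssh from hadoop102):

```shell
for host in hadoop103 hadoop104; do
  # Stand-in conf file, still carrying the value distributed from hadoop102
  printf 'taskmanager.host: hadoop102\n' > /tmp/conf-$host.yaml
  # Rewrite taskmanager.host to the node's own hostname
  sed -i "s/^taskmanager.host: .*/taskmanager.host: $host/" /tmp/conf-$host.yaml
done
grep -H . /tmp/conf-hadoop103.yaml /tmp/conf-hadoop104.yaml
# → /tmp/conf-hadoop103.yaml:taskmanager.host: hadoop103
# → /tmp/conf-hadoop104.yaml:taskmanager.host: hadoop104
```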

Start the cluster

[bigdata@hadoop102 flink-1.17.0]$ bin/start-cluster.sh 

YARN Mode

Add environment variables:

[bigdata@hadoop102 ~]$ sudo vim /etc/profile.d/my_env.sh
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HADOOP_CLASSPATH=`hadoop classpath`
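The two exports assume HADOOP_HOME is already defined. A fuller my_env.sh fragment might look like the sketch below (the hadoop-3.1.3 path is an assumption, not given in the steps above). Note that the backticks run `hadoop classpath` at the moment the file is sourced, so Hadoop must already be on the PATH by that line:

```shell
# /etc/profile.d/my_env.sh (sketch; the HADOOP_HOME path is assumed)
export HADOOP_HOME=/opt/module/hadoop-3.1.3
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HADOOP_CLASSPATH=`hadoop classpath`
```

After editing, apply it with `source /etc/profile.d/my_env.sh` (or log in again) before submitting anything to YARN.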