Fully Distributed Cluster Setup
Hostname | Static IP | Roles |
---|---|---|
hadoop102 | 192.168.1.102 | DataNode, NodeManager, NameNode |
hadoop103 | 192.168.1.103 | DataNode, NodeManager, ResourceManager |
hadoop104 | 192.168.1.104 | DataNode, NodeManager, SecondaryNameNode |
# Install tools
sudo yum install -y epel-release
sudo yum install -y psmisc nc net-tools rsync vim lrzsz ntp libzstd openssl-static tree iotop git
Requirement: loop-copy files to the same directory on every node.
cd /home/atguigu
vim xsync
---
#!/bin/bash
# 1. Check the argument count
if [ $# -lt 1 ]
then
    echo "Not Enough Arguments!"
    exit 1
fi
# 2. Loop over every machine in the cluster
for host in hadoop102 hadoop103 hadoop104
do
    echo ==================== $host ====================
    # 3. Send each file or directory in turn
    for file in "$@"
    do
        # 4. Check that the file exists
        if [ -e "$file" ]
        then
            # 5. Resolve the parent directory (following symlinks)
            pdir=$(cd -P "$(dirname "$file")"; pwd)
            # 6. Get the file name
            fname=$(basename "$file")
            ssh "$host" "mkdir -p $pdir"
            rsync -av "$pdir/$fname" "$host:$pdir"
        else
            echo "$file does not exist!"
        fi
    done
done
---
chmod +x xsync
sudo mv xsync /bin/
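The two trickiest lines in xsync are the parent-directory and file-name resolution. The snippet below walks through them on a throwaway local file; all paths here are hypothetical scratch paths, used only for illustration.

```shell
# Recreate xsync's path resolution on a scratch file (path is hypothetical).
mkdir -p /tmp/xsync-demo/sub
touch /tmp/xsync-demo/sub/hello.txt
file=/tmp/xsync-demo/sub/hello.txt

# `cd -P` resolves symlinks, so pdir becomes the physical absolute parent dir.
pdir=$(cd -P "$(dirname "$file")"; pwd)
# basename strips the directory part, leaving only the file name.
fname=$(basename "$file")

echo "pdir=$pdir"
echo "fname=$fname"
```

Resolving to an absolute path first is what lets the script run `ssh $host "mkdir -p $pdir"` safely even when you invoke xsync with a relative path.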
Run on all three machines (generate a key pair, then copy it to every node, including the local one):
ssh-keygen -t rsa
ssh-copy-id hadoop102
ssh-copy-id hadoop103
ssh-copy-id hadoop104
Create the directories /opt/software and /opt/module.
Download from: Java Downloads | Oracle
Upload the downloaded tarball to /opt/software and extract it into /opt/module:
tar -zxvf /opt/software/jdk-8u212-linux-x64.tar.gz -C /opt/module/
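The -C flag above tells tar to change into the target directory before extracting, which is how the archive lands under /opt/module rather than the current directory. A self-contained demonstration with a scratch archive (all paths hypothetical):

```shell
# Demonstrate tar's -C flag with a scratch archive (all paths hypothetical).
mkdir -p /tmp/tar-demo/src /tmp/tar-demo/dest
echo hello > /tmp/tar-demo/src/file.txt

# -C during creation: change into /tmp/tar-demo first, then archive "src" by name.
tar -czf /tmp/tar-demo/pkg.tar.gz -C /tmp/tar-demo src
# -C during extraction: unpack under dest instead of the current directory.
tar -zxf /tmp/tar-demo/pkg.tar.gz -C /tmp/tar-demo/dest

cat /tmp/tar-demo/dest/src/file.txt
```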
Configure the environment variables:
sudo vim /etc/profile.d/my_env.sh
---
JAVA_HOME=/opt/module/jdk1.8.0_212
PATH=$PATH:$JAVA_HOME/bin
export PATH JAVA_HOME
---
source /etc/profile.d/my_env.sh
[atguigu@hadoop102 ~]$ java -version
java version "1.8.0_212"
Java(TM) SE Runtime Environment (build 1.8.0_212-b10)
Java HotSpot(TM) 64-Bit Server VM (build 25.212-b10, mixed mode)
Download from: Apache Archive Distribution Directory
Upload the downloaded tarball to /opt/software and extract it into /opt/module:
tar -zxvf /opt/software/hadoop-3.1.3.tar.gz -C /opt/module/
Configure the environment variables again: edit /etc/profile.d/my_env.sh so it reads as follows, then re-source it:
JAVA_HOME=/opt/module/jdk1.8.0_212
HADOOP_HOME=/opt/module/hadoop-3.1.3
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export PATH JAVA_HOME HADOOP_HOME
[atguigu@hadoop102 ~]$ hadoop version
Hadoop 3.1.3
Source code repository https://gitbox.apache.org/repos/asf/hadoop.git -r ba631c436b806728f8ec2f54ab1e289526c90579
Compiled by ztang on 2019-09-12T02:47Z
Compiled with protoc 2.5.0
From source with checksum ec785077c385118ac91aadde5ec9799
This command was run using /opt/module/hadoop-3.1.3/share/hadoop/common/hadoop-common-3.1.3.jar
Note: do not install the NameNode and the SecondaryNameNode on the same server.
Note: the ResourceManager is also memory-hungry; do not place it on the same machine as the NameNode or the SecondaryNameNode.
 | hadoop102 | hadoop103 | hadoop104 |
---|---|---|---|
HDFS | NameNode, DataNode | DataNode | SecondaryNameNode, DataNode |
YARN | NodeManager | ResourceManager, NodeManager | NodeManager |
Hadoop configuration files come in two kinds: default files and custom files. You only need to edit a custom file when you want to override one of the default values.
Default configuration files
Default file | Location inside Hadoop's jar packages |
---|---|
[core-default.xml] | hadoop-common-3.1.3.jar/core-default.xml |
[hdfs-default.xml] | hadoop-hdfs-3.1.3.jar/hdfs-default.xml |
[yarn-default.xml] | hadoop-yarn-common-3.1.3.jar/yarn-default.xml |
[mapred-default.xml] | hadoop-mapreduce-client-core-3.1.3.jar/mapred-default.xml |
Custom configuration files
The four files core-site.xml, hdfs-site.xml, yarn-site.xml, and mapred-site.xml live under $HADOOP_HOME/etc/hadoop; modify them there as your project requires.
core-site.xml:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop102:8020</value>
    </property>
    <property>
        <name>hadoop.data.dir</name>
        <value>/opt/module/hadoop-3.1.3/data</value>
    </property>
    <property>
        <name>hadoop.proxyuser.atguigu.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.atguigu.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>atguigu</value>
    </property>
</configuration>
hdfs-site.xml:
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file://${hadoop.data.dir}/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file://${hadoop.data.dir}/data</value>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.dir</name>
        <value>file://${hadoop.data.dir}/namesecondary</value>
    </property>
    <property>
        <name>dfs.client.datanode-restart.timeout</name>
        <value>30s</value>
    </property>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>hadoop102:9870</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop104:9868</value>
    </property>
</configuration>
yarn-site.xml:
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop103</value>
    </property>
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
</configuration>
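mapred-site.xml is named among the four custom files earlier, but no snippet for it appears in this section. A minimal sketch, assuming MapReduce jobs should be submitted to YARN (which matches the cluster plan here):

```xml
<configuration>
    <!-- Run MapReduce jobs on YARN rather than the local runner -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
```

Without this property, mapreduce.framework.name defaults to local and jobs run in a single local process instead of on the cluster.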
Distribute the configuration directory to every node:
cd /opt/module/hadoop-3.1.3/etc
xsync hadoop
If this is the first time the cluster is started, format the NameNode:
[atguigu@hadoop102 etc]$ hdfs namenode -format
Afterwards a new data directory appears under /opt/module/hadoop-3.1.3; its location is the hadoop.data.dir value configured in /opt/module/hadoop-3.1.3/etc/hadoop/core-site.xml.
Start the NameNode on hadoop102:
[atguigu@hadoop102 etc]$ hdfs --daemon start namenode
When it is up, run jps; you should see output like the following (process IDs will differ):
[atguigu@hadoop102 etc]$ jps
2513 NameNode
2582 Jps
On hadoop102, hadoop103, and hadoop104 (all three machines), start the DataNode:
hdfs --daemon start datanode
On hadoop104, start the SecondaryNameNode:
hdfs --daemon start secondarynamenode
On hadoop103, start the ResourceManager:
yarn --daemon start resourcemanager
On hadoop102, hadoop103, and hadoop104 (all three machines), start the NodeManager:
yarn --daemon start nodemanager
NameNode web UI: http://hadoop102:9870
SecondaryNameNode web UI: http://hadoop104:9868
YARN web UI: http://hadoop103:8088