Installing and Deploying Spark in YARN Mode

Before configuring Spark, deploy Hadoop 2.7.2 and JDK 1.8 yourself, and set up passwordless SSH login between the nodes.
1) Edit the Hadoop configuration file yarn-site.xml and add the following:
[root@mzz11 opt]$ vi yarn-site.xml

    <!-- Don't kill containers for exceeding the physical memory limit -->
    <property>
            <name>yarn.nodemanager.pmem-check-enabled</name>
            <value>false</value>
    </property>

    <!-- Don't kill containers for exceeding the virtual memory limit -->
    <property>
            <name>yarn.nodemanager.vmem-check-enabled</name>
            <value>false</value>
    </property>

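These two properties stop the NodeManager from killing containers that exceed their physical or virtual memory limits, which otherwise tends to kill Spark executors on small test machines. A quick sanity check you could run before restarting YARN is sketched below; it is demonstrated against a temporary copy of the file so it is safe to run anywhere (on the real cluster, point CONF at /opt/hadoop-2.7.2/etc/hadoop/yarn-site.xml):

```shell
# Sketch: verify both memory-check properties are present and set to
# false. CONF is a temp file here; substitute the real yarn-site.xml.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
<configuration>
    <property>
        <name>yarn.nodemanager.pmem-check-enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>
</configuration>
EOF
for prop in yarn.nodemanager.pmem-check-enabled yarn.nodemanager.vmem-check-enabled; do
    # the <value> line immediately follows the <name> line, so -A1 covers it
    grep -A1 "$prop" "$CONF" | grep -q '<value>false</value>' \
        && echo "$prop=false OK"
done
```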
2) Edit spark-env.sh and add the following configuration:
[root@mzz11 conf]$ vi spark-env.sh

YARN_CONF_DIR=/opt/hadoop-2.7.2/etc/hadoop
HADOOP_CONF_DIR=/opt/hadoop-2.7.2/etc/hadoop
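Both variables tell Spark where to find the Hadoop client configuration, which is how spark-shell discovers the ResourceManager and HDFS. A small sketch of the kind of check you could run after editing spark-env.sh, to confirm the directory really is a Hadoop conf dir (demonstrated on a temp dir here; substitute /opt/hadoop-2.7.2/etc/hadoop on the cluster):

```shell
# Sketch: a conf dir is plausible if it contains yarn-site.xml.
check_conf_dir() {
    if [ -f "$1/yarn-site.xml" ]; then
        echo "valid: $1"
    else
        echo "invalid: $1"
    fi
}

# Demo on a throwaway directory so the snippet runs anywhere.
DEMO=$(mktemp -d)
touch "$DEMO/yarn-site.xml"
check_conf_dir "$DEMO"        # valid
check_conf_dir "$DEMO/nope"   # invalid
```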
3) Distribute the configuration files
[root@mzz11 opt]$ scp /opt/hadoop-2.7.2/etc/hadoop/yarn-site.xml 192.168.0.12:/opt/hadoop-2.7.2/etc/hadoop/
[root@mzz11 opt]$ scp /opt/hadoop-2.7.2/etc/hadoop/yarn-site.xml 192.168.0.13:/opt/hadoop-2.7.2/etc/hadoop/
[root@mzz11 opt]$ scp -r spark2.1.1 192.168.0.12:/opt/
[root@mzz11 opt]$ scp -r spark2.1.1 192.168.0.13:/opt/
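On a larger cluster the per-host scp lines get repetitive; a loop over the worker list is the usual shortcut. The host list below is assumed from this cluster's layout, and `echo` makes the loop a dry run that only prints the commands — drop it to actually copy:

```shell
# Hosts that should receive the YARN config and the Spark install
# (assumed from this cluster; extend the list as workers are added).
HOSTS="192.168.0.12 192.168.0.13"
for h in $HOSTS; do
    # echo = dry run; remove it to really transfer the files
    echo scp /opt/hadoop-2.7.2/etc/hadoop/yarn-site.xml "$h:/opt/hadoop-2.7.2/etc/hadoop/"
    echo scp -r /opt/spark2.1.1 "$h:/opt/"
done
```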
4) Start Spark (start HDFS and YARN first)

[root@mzz11 spark2.1.1]# ./bin/spark-shell --master yarn-client
Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" with specified deploy mode instead.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
18/12/24 07:22:13 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/12/24 07:22:17 WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
18/12/24 07:22:51 WARN metastore.ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Spark context Web UI available at http://192.168.0.11:4040
Spark context available as 'sc' (master = yarn, app id = application_1545630396302_0003).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.1
      /_/
         
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_91)
Type in expressions to have them evaluated.
Type :help for more information.

scala> 

Startup successful!
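Note the first warning in the log above: the `yarn-client` master string has been deprecated since Spark 2.0. The equivalent modern invocation splits the master from the deploy mode; it is only echoed below (a dry run), since actually running it needs the live cluster:

```shell
# Modern replacement for the deprecated "--master yarn-client" form,
# run from the Spark install directory. Echoed here for illustration.
echo ./bin/spark-shell --master yarn --deploy-mode client
```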

5) Run an example job
[root@mzz11 spark]$ bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn \
--deploy-mode client \
./examples/jars/spark-test_2.11-2.1.1.jar \
100
Note: HDFS and the YARN cluster must be running before you submit the job.
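SparkPi estimates pi by Monte Carlo: it scatters random points over the unit square and counts the fraction that land inside the quarter circle, and the trailing argument 100 is the number of partitions the sampling is split across. The same idea on a single machine, sketched in plain awk purely for intuition (not part of Spark):

```shell
# Monte Carlo pi -- the computation SparkPi distributes over the
# cluster, run locally here for illustration.
awk 'BEGIN {
    srand(42)                       # fixed seed for repeatability
    n = 100000; inside = 0
    for (i = 0; i < n; i++) {
        x = rand(); y = rand()
        if (x * x + y * y <= 1)     # point falls inside the quarter circle
            inside++
    }
    # area ratio (quarter circle / square) = pi/4, so multiply by 4
    printf "pi is roughly %.3f\n", 4 * inside / n
}'
```

With 100,000 samples the estimate lands close to 3.14; SparkPi does the same counting, just spread across executors.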
