Installing Standalone Spark 3.5.0 on Linux

1. Introduction to Spark

Spark is a general-purpose big data computing framework, in the same space as traditional big data technologies such as Hadoop's MapReduce, the Hive engine, and the Storm real-time stream-processing engine. Spark is mainly used for large-scale data computation.

2. Downloading Spark

Spark 3.5.0, prebuilt for Hadoop 3 (spark-3.5.0-bin-hadoop3).
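A minimal download-and-extract sketch, assuming the Apache archive URL and the /usr/local/bigdata install directory used in the environment variables below; adjust both to your setup.

# download the package prebuilt for Hadoop 3 from the Apache archive (URL assumed; any mirror works)
wget https://archive.apache.org/dist/spark/spark-3.5.0/spark-3.5.0-bin-hadoop3.tgz

# extract into the directory that SPARK_HOME will point to
tar -zxvf spark-3.5.0-bin-hadoop3.tgz -C /usr/local/bigdata/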

3. Configuring Spark Environment Variables

export JAVA_HOME=/usr/local/jdk1.8.0_391
export JRE_HOME=/usr/local/jdk1.8.0_391/jre
export HBASE_HOME=/usr/local/bigdata/hbase-2.5.6
export HADOOP_HOME=/usr/local/bigdata/hadoop-3.3.6
export FLINK_HOME=/usr/local/bigdata/flink-1.18.0
export SCALA_HOME=/usr/local/bigdata/scala-2.13.12
export SPARK_HOME=/usr/local/bigdata/spark-3.5.0-bin-hadoop3
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib
export PATH=.:$JAVA_HOME/bin:$JRE_HOME/bin:$FLINK_HOME/bin:$SPARK_HOME/bin:$SCALA_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin:$PYTHON_HOME/bin:$PATH
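For the variables to take effect, reload the profile they were added to (a sketch, assuming /etc/profile; ~/.bashrc works the same way) and run a quick check:

# reload the profile in the current shell
source /etc/profile

# confirm the paths resolve and Spark is on the PATH
echo $SPARK_HOME
spark-submit --version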

Note that Scala needs to be installed before configuring Spark.
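A minimal sketch for installing the Scala 2.13.12 that SCALA_HOME points to; the download URL is an assumption, so grab the tarball from www.scala-lang.org/download/ if it differs. (The spark-shell banner below shows Spark's own bundled Scala 2.12.18, which is independent of this system-wide install.)

# download and extract Scala 2.13.12 (URL assumed)
wget https://downloads.lightbend.com/scala/2.13.12/scala-2.13.12.tgz
tar -zxvf scala-2.13.12.tgz -C /usr/local/bigdata/

# verify
scala -version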

4. Starting Spark

Go straight into the bin directory and launch ./spark-shell:

Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://ip:4040
Spark context available as 'sc' (master = local[*], app id = local-1699600686469).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 3.5.0
      /_/

Using Scala version 2.12.18 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_311)
Type in expressions to have them evaluated.
Type :help for more information.
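Once the shell comes up, a quick end-to-end check is to run the bundled SparkPi example with spark-submit (a sketch; the examples jar path assumes the default layout of the spark-3.5.0-bin-hadoop3 package):

# run SparkPi locally on all cores; it should print an approximation of Pi
spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master local[*] \
  $SPARK_HOME/examples/jars/spark-examples_2.12-3.5.0.jar 100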

Access the Web UI at

http://ip:4040

(Screenshot: Spark Web UI at http://ip:4040)
