Spark SQL: viewing the physical execution plan

import org.apache.spark.sql.SparkSession

object DF2DS {
  def main(args: Array[String]): Unit = {
    println("astron")
    val spark = SparkSession
      .builder()
      .master("local")
      .appName("star")
      .getOrCreate()

    // import Spark's implicit conversions
    import spark.implicits._
    val emp = spark.read.json("d://employee.json")
    emp.createOrReplaceTempView("emp")
    spark.sql("SELECT * FROM emp").show()

    spark.sql("SELECT * FROM emp").explain()

    spark.sql("SELECT age FROM emp where age>25").explain()

    spark.sql("SELECT age FROM emp where age>25 order by age").explain()

  }

}
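
Calling explain() with no arguments prints only the physical plan. Dataset also provides explain(extended: Boolean): passing true additionally prints the parsed, analyzed, and optimized logical plans that Catalyst produces before physical planning. A minimal sketch, assuming the same employee.json file as above:

```scala
import org.apache.spark.sql.SparkSession

object ExplainExtended {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession
      .builder()
      .master("local")
      .appName("star")
      .getOrCreate()

    val emp = spark.read.json("d://employee.json")
    emp.createOrReplaceTempView("emp")

    // explain(true) prints the parsed logical plan, analyzed logical plan,
    // optimized logical plan, and the physical plan, in that order.
    spark.sql("SELECT age FROM emp where age>25").explain(true)

    spark.stop()
  }
}
```

Comparing the optimized logical plan with the physical plan makes it easy to see which optimizations (such as predicate pushdown) Catalyst applied.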
astron
+---+-----+------+------+------+
|age|depId|gender|  name|salary|
+---+-----+------+------+------+
| 25|    1|  male|   Leo| 20000|
| 30|    2|female| Marry| 25000|
| 35|    1|  male|  Jack| 15000|
| 42|    3|  male|   Tom| 18000|
| 21|    3|female|Kattie| 21000|
| 30|    2|female|   Jen| 28000|
| 19|    2|female|   Jen|  8000|
+---+-----+------+------+------+

== Physical Plan ==
*FileScan json [age#0L,depId#1L,gender#2,name#3,salary#4L] Batched: false, Format: JSON, Location: InMemoryFileIndex[file:/d:/employee.json], PartitionFilters: [], PushedFilters: [], ReadSchema: struct
== Physical Plan ==
*Project [age#0L]
+- *Filter (isnotnull(age#0L) && (age#0L > 25))
   +- *FileScan json [age#0L] Batched: false, Format: JSON, Location: InMemoryFileIndex[file:/d:/employee.json], PartitionFilters: [], PushedFilters: [IsNotNull(age), GreaterThan(age,25)], ReadSchema: struct
== Physical Plan ==
*Sort [age#0L ASC NULLS FIRST], true, 0
+- Exchange rangepartitioning(age#0L ASC NULLS FIRST, 200)
   +- *Project [age#0L]
      +- *Filter (isnotnull(age#0L) && (age#0L > 25))
         +- *FileScan json [age#0L] Batched: false, Format: JSON, Location: InMemoryFileIndex[file:/d:/employee.json], PartitionFilters: [], PushedFilters: [IsNotNull(age), GreaterThan(age,25)], ReadSchema: struct
