[Spark 81] Hive in the Spark assembly

Spark SQL supports most commonly used features of HiveQL. However, different HiveQL statements are executed in different manners:

  1. DDL statements (e.g. CREATE TABLE, DROP TABLE, etc.) and commands (e.g. SET <key> = <value>, ADD FILE, ADD JAR, etc.)

     In most cases, Spark SQL simply delegates these statements to Hive, as they don't need to issue any distributed jobs and don't rely on the computation engine (Spark, MR, or Tez).

  2. SELECT queries, CREATE TABLE ... AS SELECT ... statements and insertions

    These statements are executed using Spark as the execution engine.
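To make the split concrete, here are hedged HiveQL examples of each category (the table, columns, and jar path are hypothetical):

```sql
-- Category 1: DDL and commands, delegated to Hive (no distributed job is launched).
-- Table name `logs` and the jar path are hypothetical.
CREATE TABLE logs (ts STRING, msg STRING) STORED AS TEXTFILE;
ADD JAR /tmp/my-udfs.jar;
SET hive.exec.dynamic.partition = true;

-- Category 2: queries and insertions, executed with Spark as the engine.
SELECT msg, COUNT(*) AS cnt FROM logs GROUP BY msg;
CREATE TABLE top_msgs AS SELECT msg FROM logs LIMIT 10;
```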

The Hive classes packaged in the assembly jar are used to provide entry points to Hive features, for example:

  1. The HiveQL parser
  2. Talking to the Hive metastore to execute DDL statements
  3. Accessing UDFs/UDAFs/UDTFs
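For instance, all three entry points come into play when registering and calling a Hive UDF from Spark SQL. This sketch uses standard HiveQL syntax; the jar path, function name, UDF class, and table are hypothetical:

```sql
-- The HiveQL parser handles the syntax; the metastore records the function;
-- the UDF class is loaded from the added jar. All names here are hypothetical.
ADD JAR /tmp/my-udfs.jar;
CREATE TEMPORARY FUNCTION to_upper AS 'com.example.hive.ToUpperUDF';
SELECT to_upper(msg) FROM logs;
```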

As for the differences between Hive on Spark and Spark SQL's Hive support, please refer to this article by Reynold Xin: https://databricks.com/blog/2014/07/01/shark-spark-sql-hive-on-spark-and-the-future-of-sql-on-spark.html
