Reading the Spark Source: ShuffleManager

1. The implementation class of ShuffleManager: SortShuffleManager

Spark 0.8 and earlier: Hash Based Shuffle

During Shuffle Write, partition data is regrouped by hash without any sorting. Each map-side task creates one file per reduce-side task, which usually produces a huge number of files (M * R intermediate files, where M is the number of map-side tasks and R is the number of reduce-side tasks), along with heavy random disk I/O and significant memory overhead.

During Shuffle Read, if there is a combiner, the fetched data is merged in a Spark-provided hash table (AppendOnlyMap). In terms of code structure:

  • org.apache.spark.storage.ShuffleBlockManager handles Shuffle Write
  • org.apache.spark.BlockStoreShuffleFetcher handles Shuffle Read
  • org.apache.spark.Aggregator performs the combine, relying on AppendOnlyMap

Spark 0.8.1: File Consolidation introduced for Hash Based Shuffle

With file consolidation, intermediate files are generated per execution unit rather than per map task: each execution unit (the number of execution units in an Executor equals the number of cores divided by the number of cores required per task) generates one file per reduce-side task. This reduces the number of files from M * R to E * C / T * R, where E is the number of Executors, C is the number of cores available per Executor, and T is the number of cores allocated per task. For example, with E = 2 executors, C = 8 cores each, T = 1 core per task, and R = 100 reduce tasks, consolidation yields 2 * 8 / 1 * 100 = 1,600 files, independent of the number of map tasks. Whether consolidation is used is controlled by the spark.shuffle.consolidateFiles parameter.

Spark 0.9: ExternalAppendOnlyMap introduced

During combine, data can now be spilled to disk and then merged back with a heap-based merge sort.

Spark 1.1: Sort Based Shuffle introduced, but Hash Based Shuffle remains the default

In the Shuffle Write phase of Sort Based Shuffle, a map-side task sorts its records by partition id (and, when map-side combine or key ordering is required, by key within each partition). All results are written to a single data file, together with an index file through which reduce-side tasks locate their portion of the data.

In terms of code structure:

ShuffleManager is split out of the former ShuffleBlockManager to manage the Shuffle Writer and Shuffle Reader. The two shuffle mechanisms correspond to org.apache.spark.shuffle.hash.HashShuffleManager and org.apache.spark.shuffle.sort.SortShuffleManager, selectable via the spark.shuffle.manager parameter. Each mechanism has its own ShuffleWriter, org.apache.spark.shuffle.hash.HashShuffleWriter and org.apache.spark.shuffle.sort.SortShuffleWriter, but both share a single ShuffleReader, org.apache.spark.shuffle.hash.HashShuffleReader.

org.apache.spark.util.collection.ExternalSorter implements the sorting. The spark.shuffle.spill parameter controls whether temporary data may be spilled to disk during sorting.
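
As a quick illustration, here is a minimal sketch of how these 1.x-era options could be set on a SparkConf; the values are purely illustrative, not recommendations:

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .set("spark.shuffle.manager", "sort")          // "hash" or "sort" (before Spark 2.0)
      .set("spark.shuffle.consolidateFiles", "true") // hash shuffle file consolidation (0.8.1+)
      .set("spark.shuffle.spill", "true")            // allow spilling to disk while sorting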

Spark 1.2: the default shuffle mechanism changes to Sort Based Shuffle

Spark 1.4: Tungsten-Sort Based Shuffle introduced

Records are stored in serialized binary form, sorting is turned into a sort over an array of pointers, and off-heap memory plus a new memory management model are introduced. These techniques impose strict conditions on when Tungsten-Sort can be used: for example, the shuffle dependency must not involve aggregation and the output must not require sorting. Because its off-heap memory management is built on the JDK's sun.misc.Unsafe API, Tungsten-Sort Based Shuffle is also called Unsafe Shuffle.

At the code level:

  • org.apache.spark.shuffle.unsafe.UnsafeShuffleManager is added
  • org.apache.spark.shuffle.unsafe.UnsafeShuffleWriter is added (implemented in Java)
  • The ShuffleReader reuses HashShuffleReader

Spark 1.6: Tungsten-Sort merged into Sort Based Shuffle

SortShuffleManager now automatically picks the best shuffle mechanism: if it detects that the Tungsten-Sort conditions are met it uses Tungsten-Sort Based Shuffle, otherwise it falls back to Sort Based Shuffle.
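
The check lives in SortShuffleManager.canUseSerializedShuffle; the sketch below paraphrases and abridges the Spark 2.x version (logging and the named constant are simplified):

    // Paraphrased from SortShuffleManager.canUseSerializedShuffle (Spark 2.x), simplified.
    def canUseSerializedShuffle(dependency: ShuffleDependency[_, _, _]): Boolean = {
      val numPartitions = dependency.partitioner.numPartitions
      if (!dependency.serializer.supportsRelocationOfSerializedObjects) {
        false // the serializer must support relocating serialized records
      } else if (dependency.aggregator.isDefined) {
        false // no map-side aggregation allowed
      } else if (numPartitions > 16777216) {
        false // MAX_SHUFFLE_OUTPUT_PARTITIONS_FOR_SERIALIZED_MODE (2^24)
      } else {
        true
      }
    }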

In terms of code:

  • UnsafeShuffleManager is merged into SortShuffleManager
  • HashShuffleReader is renamed to BlockStoreShuffleReader; Sort Based Shuffle and Hash Based Shuffle still share one ShuffleReader.

Spark 2.0: Hash Based Shuffle exits the stage. From this point on Spark only has Sort Based Shuffle, and the only ShuffleManager implementation is SortShuffleManager.

2. SortShuffleManager.registerShuffle
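
registerShuffle decides which ShuffleHandle a dependency gets, which in turn determines the writer chosen in getWriter below. A sketch paraphrased from the Spark 2.x source (signature abridged):

    // Paraphrased from SortShuffleManager.registerShuffle (Spark 2.x).
    override def registerShuffle[K, V, C](
        shuffleId: Int,
        numMaps: Int,
        dependency: ShuffleDependency[K, V, C]): ShuffleHandle = {
      if (SortShuffleWriter.shouldBypassMergeSort(conf, dependency)) {
        // Few partitions and no map-side combine: write per-partition files, then concatenate them.
        new BypassMergeSortShuffleHandle[K, V](
          shuffleId, numMaps, dependency.asInstanceOf[ShuffleDependency[K, V, V]])
      } else if (SortShuffleManager.canUseSerializedShuffle(dependency)) {
        // Tungsten path: buffer map output in serialized form.
        new SerializedShuffleHandle[K, V](
          shuffleId, numMaps, dependency.asInstanceOf[ShuffleDependency[K, V, V]])
      } else {
        // Default path: buffer map output in deserialized form.
        new BaseShuffleHandle(shuffleId, numMaps, dependency)
      }
    }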

3. SortShuffleManager.getReader
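
On the read side there is only one reader: getReader simply wraps the handle in a BlockStoreShuffleReader for the requested partition range. A sketch paraphrased from the Spark 2.x source:

    // Paraphrased from SortShuffleManager.getReader (Spark 2.x).
    override def getReader[K, C](
        handle: ShuffleHandle,
        startPartition: Int,
        endPartition: Int,
        context: TaskContext): ShuffleReader[K, C] = {
      new BlockStoreShuffleReader(
        handle.asInstanceOf[BaseShuffleHandle[K, _, C]], startPartition, endPartition, context)
    }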

4. SortShuffleManager.getWriter


    handle match {
      case unsafeShuffleHandle: SerializedShuffleHandle[K @unchecked, V @unchecked] =>
        new UnsafeShuffleWriter(...)
      case bypassMergeSortHandle: BypassMergeSortShuffleHandle[K @unchecked, V @unchecked] =>
        new BypassMergeSortShuffleWriter(...)
      case other: BaseShuffleHandle[K @unchecked, V @unchecked, _] =>
        new SortShuffleWriter(shuffleBlockResolver, other, mapId, context)
    }

5. BypassMergeSortShuffleWriter: similar to hash shuffle, but the output files are merged into a single file
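
This writer is only chosen when the bypass condition holds; a sketch of that condition, paraphrased from SortShuffleWriter.shouldBypassMergeSort in the Spark 2.x source:

    // Paraphrased from SortShuffleWriter.shouldBypassMergeSort (Spark 2.x).
    def shouldBypassMergeSort(conf: SparkConf, dep: ShuffleDependency[_, _, _]): Boolean = {
      if (dep.mapSideCombine) {
        // Sorting cannot be bypassed when map-side aggregation is needed.
        require(dep.aggregator.isDefined, "Map-side combine without Aggregator specified!")
        false
      } else {
        // Bypass only when the partition count is small (default threshold: 200).
        val bypassMergeThreshold = conf.getInt("spark.shuffle.sort.bypassMergeThreshold", 200)
        dep.partitioner.numPartitions <= bypassMergeThreshold
      }
    }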

1) BypassMergeSortShuffleWriter.write

Argument: the iterator of records for this map task's partition (records)

【If there are no records】


    if (!records.hasNext()) {
      partitionLengths = new long[numPartitions];
      shuffleBlockResolver.writeIndexFileAndCommit(shuffleId, mapId, partitionLengths, null);
      mapStatus = MapStatus.apply(blockManager.shuffleServerId(), partitionLengths);
      return;
    }

【Obtain a disk writer for each partition】


    partitionWriters = new DiskBlockObjectWriter[numPartitions];
    for (int i = 0; i < numPartitions; i++) {
      final Tuple2<TempShuffleBlockId, File> tempShuffleBlockIdPlusFile =
        blockManager.diskBlockManager().createTempShuffleBlock();
      final File file = tempShuffleBlockIdPlusFile._2();
      final BlockId blockId = tempShuffleBlockIdPlusFile._1();
      partitionWriters[i] =
        blockManager.getDiskWriter(blockId, file, serInstance, fileBufferSize, writeMetrics);
    }

【Write the records】


    while (records.hasNext()) {
      final Product2<K, V> record = records.next();
      final K key = record._1();
      partitionWriters[partitioner.getPartition(key)].write(key, record._2());
    }

【Commit each partition writer and obtain its ShuffleBlock, here represented as a FileSegment】


    partitionWriterSegments = new FileSegment[numPartitions];
    for (int i = 0; i < numPartitions; i++) {
      final DiskBlockObjectWriter writer = partitionWriters[i];
      partitionWriterSegments[i] = writer.commitAndGet();
      writer.close();
    }

【Merge the per-partition files and write the index file】


    File output = shuffleBlockResolver.getDataFile(shuffleId, mapId);
    File tmp = Utils.tempFileWith(output);
    try {
      partitionLengths = writePartitionedFile(tmp);
      shuffleBlockResolver.writeIndexFileAndCommit(shuffleId, mapId, partitionLengths, tmp);
    } finally {
      if (tmp.exists() && !tmp.delete()) {
        logger.error("Error while deleting temp file {}", tmp.getAbsolutePath());
      }
    }

2) BypassMergeSortShuffleWriter.writePartitionedFile

Argument: the merged output file (File outputFile)


    final FileOutputStream out = new FileOutputStream(outputFile, true);
    boolean threwException = true;
    try {
      for (int i = 0; i < numPartitions; i++) {
        final File file = partitionWriterSegments[i].file();
        if (file.exists()) {
          final FileInputStream in = new FileInputStream(file);
          boolean copyThrewException = true;
          try {
            lengths[i] = Utils.copyStream(in, out, false, transferToEnabled);
            copyThrewException = false;
          } finally {
            Closeables.close(in, copyThrewException);
          }
          if (!file.delete()) {
            logger.error("Unable to delete file for partition {}", i);
          }
        }
      }
      threwException = false;
    } finally {
      Closeables.close(out, threwException);
    }

Returns the length of each partition segment in the merged file; the offsets recorded in the index file are derived from these lengths.
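
For reference, the index file is just a sequence of cumulative offsets. A minimal sketch of the loop inside IndexShuffleBlockResolver.writeIndexFileAndCommit that turns partition lengths into offsets (the real method also writes to a temp file and commits atomically; the names out and partitionLengths are illustrative here):

    // Sketch of the offset layout written by writeIndexFileAndCommit (simplified).
    var offset = 0L
    out.writeLong(offset)              // the index starts with offset 0
    for (length <- partitionLengths) {
      offset += length
      out.writeLong(offset)            // numPartitions + 1 longs in total, cumulative offsets
    }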

6. SortShuffleWriter

1) SortShuffleWriter.write

【Sort the RDD records】


    // If map-side combine is needed, pass the aggregator and key ordering to the sorter;
    // otherwise we do not care whether keys are sorted within each partition, so neither
    // an aggregator nor an ordering is passed.
    sorter = if (dep.mapSideCombine) {
      require(dep.aggregator.isDefined, "Map-side combine without Aggregator specified!")
      // Combine required: pass the partitioner along with the aggregator and ordering.
      new ExternalSorter[K, V, C](
        context, dep.aggregator, Some(dep.partitioner), dep.keyOrdering, dep.serializer)
    } else {
      new ExternalSorter[K, V, V](
        context, aggregator = None, Some(dep.partitioner), ordering = None, dep.serializer)
    }
    // Store the data in a buffer or a map -- this is the crucial step. Records are sorted
    // as required (by key within a partition, by partition id, and so on); when memory is
    // insufficient the data is spilled and written to spill files.
    sorter.insertAll(records)
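
insertAll is where the map-versus-buffer choice happens; the sketch below is abridged from ExternalSorter.insertAll in the Spark 2.x source (the update closure and spill bookkeeping are simplified):

    // Abridged from ExternalSorter.insertAll (Spark 2.x).
    def insertAll(records: Iterator[Product2[K, V]]): Unit = {
      val shouldCombine = aggregator.isDefined
      if (shouldCombine) {
        // Combine path: merge values in memory with a PartitionedAppendOnlyMap,
        // keyed by (partitionId, key).
        val mergeValue = aggregator.get.mergeValue
        val createCombiner = aggregator.get.createCombiner
        var kv: Product2[K, V] = null
        val update = (hadValue: Boolean, oldValue: C) =>
          if (hadValue) mergeValue(oldValue, kv._2) else createCombiner(kv._2)
        while (records.hasNext) {
          addElementsRead()
          kv = records.next()
          map.changeValue((getPartition(kv._1), kv._1), update)
          maybeSpillCollection(usingMap = true)   // spill the map to disk if it grows too large
        }
      } else {
        // Non-combine path: append records to a PartitionedPairBuffer.
        while (records.hasNext) {
          addElementsRead()
          val kv = records.next()
          buffer.insert(getPartition(kv._1), kv._1, kv._2.asInstanceOf[C])
          maybeSpillCollection(usingMap = false)  // spill the buffer to disk if it grows too large
        }
      }
    }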

【Write the data file and index file】


    // blockManager.diskBlockManager.getFile(ShuffleDataBlockId(shuffleId, mapId, NOOP_REDUCE_ID))
    val output = shuffleBlockResolver.getDataFile(dep.shuffleId, mapId)
    // Create a temporary file through the utility class.
    val tmp = Utils.tempFileWith(output)
    try {
      val blockId = ShuffleBlockId(dep.shuffleId, mapId, IndexShuffleBlockResolver.NOOP_REDUCE_ID)
      // Write the data held in the buffer or map to the file, partition by partition.
      val partitionLengths = sorter.writePartitionedFile(blockId, tmp)
      // Write the index file.
      shuffleBlockResolver.writeIndexFileAndCommit(dep.shuffleId, mapId, partitionLengths, tmp)
      mapStatus = MapStatus(blockManager.shuffleServerId, partitionLengths)
    } finally {
      if (tmp.exists() && !tmp.delete()) {
        logError(s"Error while deleting temp file ${tmp.getAbsolutePath}")
      }
    }
