An inverted index extracts the words that occur in a collection of documents and sorts them, so that the documents containing any given word can be looked up quickly. Below we build one using the MapReduce paradigm.
The source files and their contents:
doc1.txt: MapReduce is simple
doc2.txt: MapReduce is powerful is simple
doc3.txt: Hello MapReduce bye MapReduce
The expected output looks like this:
MapReduce:doc1.txt:1;doc2.txt:1;doc3.txt:2;
is:doc1.txt:1;doc2.txt:2;
simple:doc1.txt:1;doc2.txt:1;
powerful:doc2.txt:1;
Hello:doc3.txt:1;
bye:doc3.txt:1;
In each entry, the name before a colon is a document and the number after it is how many times the word occurs in that document; semicolons separate the documents. For example, MapReduce:doc1.txt:1;doc2.txt:1;doc3.txt:2; means that MapReduce occurs once in doc1.txt, once in doc2.txt, and twice in doc3.txt.
Now that the principle is clear, let us see how to implement it with MapReduce.
The source files are the job input. After the Map phase, each record carries a composite key word:filename and a count of 1, e.g.:
<MapReduce:doc1.txt, 1>, <is:doc1.txt, 1>, <simple:doc1.txt, 1>, ...
After the Combiner, the counts for each (word, file) pair are summed and the file name moves into the value, e.g.:
<MapReduce, doc1.txt:1>, <is, doc1.txt:1>, <simple, doc1.txt:1>, ...
After the Reduce phase, all per-document entries for each word are concatenated, e.g.:
<MapReduce, doc1.txt:1;doc2.txt:1;doc3.txt:2;>, <is, doc1.txt:1;doc2.txt:2;>, ...
Consider why it is done this way: the Combiner only sees the output of a single map task (here, a single document), so carrying the file name in the key lets it total each word's occurrences per document; it then moves the file name into the value so that the shuffle groups records by the word alone for the Reducer.
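To make this data flow concrete, here is a minimal plain-Java simulation of the three stages on the sample documents. It runs without Hadoop; the class name and the fusing of Map and Combine into one loop are illustrative simplifications:

import java.util.*;

// Simulates the map -> combine -> reduce data flow of the inverted index job.
public class InverseIndexSimulation {
    public static void main(String[] args) {
        Map<String, String> docs = new LinkedHashMap<>();
        docs.put("doc1.txt", "MapReduce is simple");
        docs.put("doc2.txt", "MapReduce is powerful is simple");
        docs.put("doc3.txt", "Hello MapReduce bye MapReduce");

        // Map + Combine: each map task sees one file, so the combiner can
        // total the counts of every "word:file" key within that file.
        Map<String, List<String>> shuffled = new TreeMap<>();
        for (Map.Entry<String, String> doc : docs.entrySet()) {
            Map<String, Integer> perFile = new LinkedHashMap<>();
            for (String word : doc.getValue().split(" ")) {
                perFile.merge(word, 1, Integer::sum);   // combine: sum per (word, file)
            }
            for (Map.Entry<String, Integer> e : perFile.entrySet()) {
                // Combiner output: key = word, value = "file:count"
                shuffled.computeIfAbsent(e.getKey(), k -> new ArrayList<>())
                        .add(doc.getKey() + ":" + e.getValue());
            }
        }

        // Reduce: concatenate the "file:count" entries for each word.
        for (Map.Entry<String, List<String>> e : shuffled.entrySet()) {
            System.out.println(e.getKey() + "\t" + String.join(";", e.getValue()) + ";");
        }
    }
}

Running it prints exactly the expected index listed above (byte-order sorting puts Hello and MapReduce before the lowercase words, just as Hadoop's Text keys sort).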
The Mapper class (the original listing is truncated after the class declaration; the map body below is a reconstruction consistent with the Combiner: it recovers the file name from the input split and emits word:filename -> "1"):
package cn.kepu.littlefu;
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

@SuppressWarnings("deprecation")
public class InverseIndexMapper extends MapReduceBase implements
        Mapper<LongWritable, Text, Text, Text> {
    @Override
    public void map(LongWritable key, Text value,
            OutputCollector<Text, Text> output, Reporter reporter)
            throws IOException {
        // Recover the name of the file this split belongs to
        String fileName = ((FileSplit) reporter.getInputSplit()).getPath().getName();
        // Emit "word:fileName" -> "1"; the Combiner splits this key on ':'
        for (String word : value.toString().split(" ")) {
            output.collect(new Text(word + ":" + fileName), new Text("1"));
        }
    }
}
The Combiner class (reconstructed with Text value types throughout, so that the map, combine, and reduce signatures agree):
package cn.kepu.littlefu;
import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class InverseIndexCombiner extends MapReduceBase implements
        Reducer<Text, Text, Text, Text> {
    @Override
    public void reduce(Text key, Iterator<Text> values,
            OutputCollector<Text, Text> output, Reporter reporter)
            throws IOException {
        // Total the occurrences of this "word:file" key
        int sum = 0;
        while (values.hasNext()) {
            sum += Integer.parseInt(values.next().toString());
        }
        // Position of the separator between word and file name
        int pos = key.toString().indexOf(":");
        // New key: the word alone; new value: "file:count"
        Text outKey = new Text(key.toString().substring(0, pos));
        Text outValue = new Text(key.toString().substring(pos + 1) + ":" + sum);
        output.collect(outKey, outValue);
    }
}
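For example, for doc3.txt the Combiner receives the pair ("MapReduce:doc3.txt", ["1", "1"]) and emits ("MapReduce", "doc3.txt:2").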
The Reducer class:
package cn.kepu.littlefu;
import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class InverseIndexReducer extends MapReduceBase implements
        Reducer<Text, Text, Text, Text> {
    @Override
    public void reduce(Text key, Iterator<Text> values,
            OutputCollector<Text, Text> output, Reporter reporter)
            throws IOException {
        // Concatenate every "file:count" entry for this word
        StringBuilder fileList = new StringBuilder();
        while (values.hasNext()) {
            fileList.append(values.next().toString()).append(";");
        }
        output.collect(key, new Text(fileList.toString()));
    }
}
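For example, the Reducer receives ("MapReduce", ["doc1.txt:1", "doc2.txt:1", "doc3.txt:2"]) and writes the final entry MapReduce doc1.txt:1;doc2.txt:1;doc3.txt:2;.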
The main class (the original listing breaks off inside the usage message; the driver below completes it with standard old-API job setup, so the settings past that point are a reconstruction):
package cn.kepu.littlefu;
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
public class InverseIndexLuncher {
    public static void main(String[] args) throws IOException {
        if (args.length != 2) {
            System.err.println("Usage: InverseIndex <input path> <output path>");
            System.exit(-1);
        }
        JobConf conf = new JobConf(InverseIndexLuncher.class);
        conf.setJobName("inverse index");
        FileInputFormat.addInputPath(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(Text.class);
        conf.setMapperClass(InverseIndexMapper.class);
        conf.setCombinerClass(InverseIndexCombiner.class);
        conf.setReducerClass(InverseIndexReducer.class);
        JobClient.runJob(conf);
    }
}
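Assuming the three classes are packed into a jar (the jar name below is hypothetical), the job can be launched in the usual way:
hadoop jar inverseindex.jar cn.kepu.littlefu.InverseIndexLuncher /user/hadoop/input /user/hadoop/output
The paths are illustrative; note that the output directory must not exist before the run.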
Reference: 《实战Hadoop--开启通向云计算的捷径》, pp. 74-83.