Back when I was working with Hadoop, many things in mahout required sequence files as input, so I often had to convert text files to sequence files, or sequence files back to text (at the time I was analyzing the mahout source code, so I needed to see what its input files contained, and text is much easier to inspect). There are generally two ways to do this. First, follow the approach in "Hadoop: The Definitive Guide" and read the sequence file directly, then write it out to a text file. Second, write a job and set its output format accordingly, which also lets you read a sequence file out as text (I generally use this second approach).
Related reading:
Hadoop: The Definitive Guide (Chinese 2nd edition) PDF http://www.linuxidc.com/Linux/2012-07/65972.htm
Hadoop serialization with SequenceFile http://www.linuxidc.com/Linux/2013-06/86701.htm
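For reference, the first approach from the book boils down to opening the file with SequenceFile.Reader and printing each key/value pair. A minimal sketch (the class name and the old-style Reader constructor are my assumptions here; the key/value types are taken from whatever the file declares):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.util.ReflectionUtils;

// Sketch of approach one: dump a sequence file as text with SequenceFile.Reader.
public class SeqFileDump {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        Path path = new Path(args[0]);
        FileSystem fs = FileSystem.get(conf);
        SequenceFile.Reader reader = new SequenceFile.Reader(fs, path, conf);
        try {
            // Instantiate key/value objects of whatever types the file declares.
            Writable key = (Writable) ReflectionUtils.newInstance(reader.getKeyClass(), conf);
            Writable value = (Writable) ReflectionUtils.newInstance(reader.getValueClass(), conf);
            while (reader.next(key, value)) {
                System.out.println(key + "\t" + value);
            }
        } finally {
            reader.close();
        }
    }
}
```

This relies on each Writable having a readable toString(); for Mahout's VectorWritable that prints the vector contents, which is exactly what you want when inspecting input files.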
After quite a while away, I tried this again today and, surprisingly, it no longer works. For example, here is the Java program I wrote to convert a text file into a sequence file:
package mahout.fansy.canopy.transformdata;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;
import org.apache.mahout.common.AbstractJob;
import org.apache.mahout.math.RandomAccessSparseVector;
import org.apache.mahout.math.Vector;
import org.apache.mahout.math.VectorWritable;

public class Text2VectorWritable extends AbstractJob {

    @Override
    public int run(String[] arg0) throws Exception {
        addInputOption();
        addOutputOption();
        if (parseArguments(arg0) == null) {
            return -1;
        }
        Path input = getInputPath();
        Path output = getOutputPath();
        Configuration conf = getConf();
        Job job = new Job(conf, "text2vectorWritable with input:" + input.getName());
        // job.setInputFormatClass(SequenceFileInputFormat.class);
        job.setOutputFormatClass(SequenceFileOutputFormat.class); // write a sequence file
        job.setMapperClass(Text2VectorWritableMapper.class);
        job.setMapOutputKeyClass(Writable.class);
        job.setMapOutputValueClass(VectorWritable.class);
        job.setNumReduceTasks(0); // map-only job
        job.setJarByClass(Text2VectorWritable.class);
        FileInputFormat.addInputPath(job, input);
        SequenceFileOutputFormat.setOutputPath(job, output);
        if (!job.waitForCompletion(true)) {
            throw new InterruptedException("Canopy Job failed processing " + input);
        }
        return 0;
    }

    // Parses each comma-separated text line into a Mahout VectorWritable.
    public static class Text2VectorWritableMapper extends Mapper<Writable, Text, Writable, VectorWritable> {

        public void map(Writable key, Text value, Context context) throws IOException, InterruptedException {
            String[] str = value.toString().split(",");
            Vector vector = new RandomAccessSparseVector(str.length);
            for (int i = 0; i < str.length; i++) {
                vector.set(i, Double.parseDouble(str[i]));
            }
            VectorWritable va = new VectorWritable(vector);
            context.write(key, va);
        }
    }
}
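One thing worth checking when an AbstractJob subclass refuses to run: these classes implement Hadoop's Tool interface and are normally launched through ToolRunner, so the class also needs a main method along these lines (a minimal sketch; the -i/-o input and output arguments follow AbstractJob's standard option conventions):

```java
// Minimal driver for an AbstractJob subclass: ToolRunner strips the generic
// Hadoop options from the command line and passes the rest (e.g. -i/-o) to run().
public static void main(String[] args) throws Exception {
    org.apache.hadoop.util.ToolRunner.run(new Configuration(), new Text2VectorWritable(), args);
}
```

Without a driver like this, run() is never invoked, and getConf() may return a Configuration that was never populated from the command line.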
Continue reading: http://www.linuxidc.com/Linux/2013-07/88120p2.htm