A. Configure the Hadoop plugin in Eclipse (so that "Run As → Run on Hadoop" works).
Download the 0.20.3 version of the plugin from https://issues.apache.org/jira/secure/attachment/12460491/hadoop-eclipse-plugin-0.20.3-SNAPSHOT.jar. Delete the 0.20.2 version from the eclipse/plugins directory, place the newly downloaded 0.20.3 jar into that directory, and rename it to the 0.20.2 file name. Restart Eclipse.
B. Configure the connection details in Eclipse. Note that the values come from the corresponding .xml files under hadoop/conf; fill in the host and port:
Map/Reduce Master: the JobTracker's IP and port, taken from the mapred.job.tracker property in mapred-site.xml. DFS Master: the NameNode's IP and port, taken from the fs.default.name property in core-site.xml.
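For reference, the relevant entries in those two files look roughly like the sketch below. The host name and ports here are placeholders, not values from this article; use whatever your own cluster's mapred-site.xml and core-site.xml actually contain:

```xml
<!-- mapred-site.xml: JobTracker address, used for the Map/Reduce Master field -->
<property>
  <name>mapred.job.tracker</name>
  <value>namenode-host:9004</value>
</property>

<!-- core-site.xml: NameNode URI, used for the DFS Master field -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://namenode-host:9000</value>
</property>
```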
C. Modify HDFS contents from Eclipse.
After the previous step, the configured HDFS should appear in the "Project Explorer" view on the left. Right-click it to create folders, delete folders, upload files, download files, delete files, and so on.
Note: after each operation the change is not shown in Eclipse immediately; you must refresh the view.
D. Test code.
package hadoop.map.reduce;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

/**
 * My program works.
 *
 * @author junjun.zhang
 */
public class WordCount {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        // Point the job at the remote JobTracker (NameNode IP or remote host name, plus port).
        conf.set("mapred.job.tracker", "NameNode IP or remote host name:9004");
        Job job = new Job(conf, "my_yhql_916_world_count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }

    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // Sum all the counts emitted for the same word.
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            this.result.set(sum);
            context.write(key, this.result);
        }
    }

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            // Emit (word, 1) for every whitespace-separated token in the line.
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                this.word.set(itr.nextToken());
                context.write(this.word, one);
            }
        }
    }
}
E. In Run → Run Configurations, add the program arguments as shown in the figure below:
F. Prepare test data.
file1.txt contents: Hello Hadoop Goodbye Hadoop
file2.txt contents: Hello World Bye World
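As a sanity check, the counts the job should produce for these two files can be computed locally with plain Java, no Hadoop cluster needed. This is a minimal sketch mirroring TokenizerMapper and IntSumReducer; the class name LocalWordCount is made up for illustration, and the two input strings are copied from file1.txt and file2.txt above:

```java
import java.util.Map;
import java.util.StringTokenizer;
import java.util.TreeMap;

public class LocalWordCount {

    // Count whitespace-separated tokens across all lines,
    // the same tokenization TokenizerMapper uses.
    static Map<String, Integer> count(String... lines) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String line : lines) {
            StringTokenizer itr = new StringTokenizer(line);
            while (itr.hasMoreTokens()) {
                counts.merge(itr.nextToken(), 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        // file1.txt and file2.txt from the test data above.
        Map<String, Integer> counts = count("Hello Hadoop Goodbye Hadoop",
                                            "Hello World Bye World");
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            System.out.println(e.getKey() + "\t" + e.getValue());
        }
    }
}
```

Running this prints Bye 1, Goodbye 1, Hadoop 2, Hello 2, World 2 (TreeMap keeps the words sorted), which is what the MapReduce job's output should contain.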
G. Run the program.
Right-click the class → Run As → Run on Hadoop.
H. Eclipse execution result (OK).
I. Program output.
Reposted from: https://www.cnblogs.com/yhql/p/eclipes连接Hadoop集群-执行mapreduce程序.html