
尚硅谷 (Atguigu) Big Data Technology: Hadoop (MapReduce), Chapter 1: MapReduce Overview

1.7 MapReduce Programming Conventions

A user-written MapReduce program consists of three parts: the Mapper, the Reducer, and the Driver.

1.8 WordCount Case Study

1. Requirements

Count the total number of occurrences of each word in a given text file.

(1) Input data

atguigu atguigu
ss ss
cls cls
jiao
banzhang
xue
hadoop

(2) Expected output

atguigu 2
banzhang 1
cls 2
hadoop 1
jiao 1
ss 2
xue 1
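Before turning to Hadoop, the expected result above can be reproduced with a short plain-JDK sketch (the class and method names here are ours, not part of the course code): split each line on spaces and accumulate a total per word.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Plain-JDK simulation of the WordCount job: split each line on a single
// space and accumulate a total per word, mirroring what the Mapper and
// Reducer written below compute on the cluster.
public class WordCountSketch {
    public static Map<String, Integer> count(List<String> lines) {
        Map<String, Integer> totals = new LinkedHashMap<>();
        for (String line : lines) {
            for (String word : line.split(" ")) {
                totals.merge(word, 1, Integer::sum); // add 1 per occurrence
            }
        }
        return totals;
    }

    public static void main(String[] args) {
        List<String> input = List.of(
                "atguigu atguigu", "ss ss", "cls cls",
                "jiao", "banzhang", "xue", "hadoop");
        count(input).forEach((w, n) -> System.out.println(w + "\t" + n));
    }
}
```

The only difference from the expected output is ordering: the MapReduce job sorts keys during the shuffle, so its results come out alphabetically.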

2. Requirement analysis

Following the MapReduce programming conventions, write the Mapper, Reducer, and Driver separately, as shown in Figure 4-2.

Figure 4-2: Requirement analysis

3. Environment setup

(1) Create a Maven project.

(2) Add the following dependencies to pom.xml:

<dependencies>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>RELEASE</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-core</artifactId>
        <version>2.8.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>2.7.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.7.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>2.7.2</version>
    </dependency>
</dependencies>

(3) In the project's src/main/resources directory, create a new file named "log4j.properties" and add the following content:

log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p [%c] - %m%n
log4j.appender.logfile=org.apache.log4j.FileAppender
log4j.appender.logfile.File=target/spring.log
log4j.appender.logfile.layout=org.apache.log4j.PatternLayout
log4j.appender.logfile.layout.ConversionPattern=%d %p [%c] - %m%n

4. Writing the program

(1) Write the Mapper class

package com.atguigu.mapreduce.wordcount;
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordcountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    Text k = new Text();
    IntWritable v = new IntWritable(1);

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // 1 Get one line
        String line = value.toString();
        // 2 Split it into words
        String[] words = line.split(" ");
        // 3 Emit a (word, 1) pair per token
        for (String word : words) {
            k.set(word);
            context.write(k, v);
        }
    }
}
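To see what this Mapper emits, its per-record behavior can be sketched without Hadoop types (the helper class here is hypothetical): for the input line "atguigu atguigu", map produces two (atguigu, 1) pairs, one per occurrence.

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map.Entry;

// Plain-JDK sketch of the map step alone: emit one (word, 1) pair per
// token, just as WordcountMapper writes (k, v) via context.write.
public class MapStepSketch {
    public static List<Entry<String, Integer>> map(String line) {
        List<Entry<String, Integer>> out = new ArrayList<>();
        for (String word : line.split(" ")) {
            out.add(new SimpleEntry<>(word, 1)); // every occurrence counts once
        }
        return out;
    }

    public static void main(String[] args) {
        // Prints [atguigu=1, atguigu=1]
        System.out.println(map("atguigu atguigu"));
    }
}
```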

(2) Write the Reducer class

package com.atguigu.mapreduce.wordcount;
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordcountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    int sum;
    IntWritable v = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        // 1 Sum the counts for this key
        sum = 0;
        for (IntWritable count : values) {
            sum += count.get();
        }
        // 2 Emit the total
        v.set(sum);
        context.write(key, v);
    }
}
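Between map and reduce, the framework groups the emitted pairs by key, so each reduce call receives one word together with all of its 1s. The summation alone can be sketched in plain Java (the class name is ours):

```java
import java.util.List;

// Plain-JDK sketch of the reduce step: the shuffle has already grouped
// the (word, 1) pairs by key, so the reducer only sums the values.
public class ReduceStepSketch {
    public static int reduce(Iterable<Integer> counts) {
        int sum = 0;
        for (int c : counts) {
            sum += c; // same accumulation as WordcountReducer
        }
        return sum;
    }

    public static void main(String[] args) {
        // "atguigu" appears twice in the sample input, so its grouped
        // values are [1, 1] and the reduce step yields 2
        System.out.println("atguigu\t" + reduce(List.of(1, 1)));
    }
}
```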

(3) Write the Driver class

package com.atguigu.mapreduce.wordcount;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordcountDriver {

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {

        // 1 Get the configuration and create the job
        Configuration configuration = new Configuration();
        Job job = Job.getInstance(configuration);

        // 2 Set the jar load path
        job.setJarByClass(WordcountDriver.class);

        // 3 Set the Mapper and Reducer classes
        job.setMapperClass(WordcountMapper.class);
        job.setReducerClass(WordcountReducer.class);

        // 4 Set the map output key/value types
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        // 5 Set the final output key/value types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // 6 Set the input and output paths
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // 7 Submit the job and wait for completion
        boolean result = job.waitForCompletion(true);

        System.exit(result ? 0 : 1);
    }
}

5. Local testing

(1) If your computer runs Windows 7, unpack the Windows 7 build of the Hadoop package to a path containing no Chinese characters and set the HADOOP_HOME environment variable in Windows. If it runs Windows 10, unpack the Windows 10 build and configure HADOOP_HOME the same way.

Note: Windows 8 and Windows 10 Home Edition may not work; you may need to recompile the Hadoop source or switch operating systems.

(2) Run the program in Eclipse/IDEA.

6. Testing on the cluster

(0) Package the project into a jar with Maven; this requires adding the packaging plugins below.

Note: replace the <mainClass> value with your own project's main class.

<build>
    <plugins>
        <plugin>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>2.3.2</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
            </configuration>
        </plugin>
        <plugin>
            <artifactId>maven-assembly-plugin</artifactId>
            <configuration>
                <descriptorRefs>
                    <descriptorRef>jar-with-dependencies</descriptorRef>
                </descriptorRefs>
                <archive>
                    <manifest>
                        <mainClass>com.atguigu.mapreduce.wordcount.WordcountDriver</mainClass>
                    </manifest>
                </archive>
            </configuration>
            <executions>
                <execution>
                    <id>make-assembly</id>
                    <phase>package</phase>
                    <goals>
                        <goal>single</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

Note: if the project shows a red error marker, right-click the project -> Maven -> Update Project.

(1) Package the program into a jar and copy it to the Hadoop cluster.

Steps: right-click the project -> Run As -> Maven install. When the build finishes, the jar appears in the project's target folder (if it is not visible, right-click the project -> Refresh). Rename the jar built without dependencies to wc.jar and copy it to the Hadoop cluster.

(2) Start the Hadoop cluster.

(3) Run the WordCount program:

[atguigu@hadoop102 software]$ hadoop jar wc.jar com.atguigu.mapreduce.wordcount.WordcountDriver /user/atguigu/input /user/atguigu/output
