[HADOOP] My MapReduce program produces zero output
There is a part-00000 file in the output folder, but it is empty! I don't see any exceptions here; this is the command trace:
[cloudera@localhost ~]$ hadoop jar testmr.jar TestMR /tmp/example.csv /user/cloudera/output
14/02/06 11:45:24 WARN conf.Configuration: session.id is deprecated. Instead, use dfs.metrics.session-id
14/02/06 11:45:24 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
14/02/06 11:45:24 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
14/02/06 11:45:25 INFO mapred.FileInputFormat: Total input paths to process : 1
14/02/06 11:45:25 INFO mapred.JobClient: Running job: job_local1238439569_0001
14/02/06 11:45:25 INFO mapred.LocalJobRunner: OutputCommitter set in config null
14/02/06 11:45:25 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapred.FileOutputCommitter
14/02/06 11:45:25 INFO mapred.LocalJobRunner: Waiting for map tasks
14/02/06 11:45:25 INFO mapred.LocalJobRunner: Starting task: attempt_local1238439569_0001_m_000000_0
14/02/06 11:45:26 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
14/02/06 11:45:26 INFO util.ProcessTree: setsid exited with exit code 0
14/02/06 11:45:26 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@44aea710
14/02/06 11:45:26 INFO mapred.MapTask: Processing split: hdfs://localhost.localdomain:8020/tmp/example.csv:0+2963382
14/02/06 11:45:26 WARN mapreduce.Counters: Counter name MAP_INPUT_BYTES is deprecated. Use FileInputFormatCounters as group name and BYTES_READ as counter name instead
14/02/06 11:45:26 INFO mapred.MapTask: numReduceTasks: 1
14/02/06 11:45:26 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
14/02/06 11:45:26 INFO mapred.MapTask: io.sort.mb = 50
14/02/06 11:45:26 INFO mapred.MapTask: data buffer = 39845888/49807360
14/02/06 11:45:26 INFO mapred.MapTask: record buffer = 131072/163840
14/02/06 11:45:26 INFO mapred.JobClient: map 0% reduce 0%
14/02/06 11:45:28 INFO mapred.MapTask: Starting flush of map output
14/02/06 11:45:28 INFO compress.CodecPool: Got brand-new compressor [.snappy]
14/02/06 11:45:28 INFO mapred.Task: Task:attempt_local1238439569_0001_m_000000_0 is done. And is in the process of commiting
14/02/06 11:45:28 INFO mapred.LocalJobRunner: hdfs://localhost.localdomain:8020/tmp/example.csv:0+2963382
14/02/06 11:45:28 INFO mapred.Task: Task 'attempt_local1238439569_0001_m_000000_0' done.
14/02/06 11:45:28 INFO mapred.LocalJobRunner: Finishing task: attempt_local1238439569_0001_m_000000_0
14/02/06 11:45:28 INFO mapred.LocalJobRunner: Map task executor complete.
14/02/06 11:45:28 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
14/02/06 11:45:28 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@1d382926
14/02/06 11:45:28 INFO mapred.LocalJobRunner:
14/02/06 11:45:28 INFO mapred.Merger: Merging 1 sorted segments
14/02/06 11:45:28 INFO compress.CodecPool: Got brand-new decompressor [.snappy]
14/02/06 11:45:28 INFO mapred.Merger: Down to the last merge-pass, with 0 segments left of total size: 0 bytes
14/02/06 11:45:28 INFO mapred.LocalJobRunner:
14/02/06 11:45:28 INFO mapred.Task: Task:attempt_local1238439569_0001_r_000000_0 is done. And is in the process of commiting
14/02/06 11:45:28 INFO mapred.LocalJobRunner:
14/02/06 11:45:28 INFO mapred.Task: Task attempt_local1238439569_0001_r_000000_0 is allowed to commit now
14/02/06 11:45:28 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local1238439569_0001_r_000000_0' to hdfs://localhost.localdomain:8020/user/cloudera/output
14/02/06 11:45:28 INFO mapred.LocalJobRunner: reduce > reduce
14/02/06 11:45:28 INFO mapred.Task: Task 'attempt_local1238439569_0001_r_000000_0' done.
14/02/06 11:45:28 INFO mapred.JobClient: map 100% reduce 100%
14/02/06 11:45:28 INFO mapred.JobClient: Job complete: job_local1238439569_0001
14/02/06 11:45:28 INFO mapred.JobClient: Counters: 26
14/02/06 11:45:28 INFO mapred.JobClient: File System Counters
14/02/06 11:45:28 INFO mapred.JobClient: FILE: Number of bytes read=7436
14/02/06 11:45:28 INFO mapred.JobClient: FILE: Number of bytes written=199328
14/02/06 11:45:28 INFO mapred.JobClient: FILE: Number of read operations=0
14/02/06 11:45:28 INFO mapred.JobClient: FILE: Number of large read operations=0
14/02/06 11:45:28 INFO mapred.JobClient: FILE: Number of write operations=0
14/02/06 11:45:28 INFO mapred.JobClient: HDFS: Number of bytes read=5926764
14/02/06 11:45:28 INFO mapred.JobClient: HDFS: Number of bytes written=0
14/02/06 11:45:28 INFO mapred.JobClient: HDFS: Number of read operations=10
14/02/06 11:45:28 INFO mapred.JobClient: HDFS: Number of large read operations=0
14/02/06 11:45:28 INFO mapred.JobClient: HDFS: Number of write operations=4
14/02/06 11:45:28 INFO mapred.JobClient: Map-Reduce Framework
14/02/06 11:45:28 INFO mapred.JobClient: Map input records=24518
14/02/06 11:45:28 INFO mapred.JobClient: Map output records=0
14/02/06 11:45:28 INFO mapred.JobClient: Map output bytes=0
14/02/06 11:45:28 INFO mapred.JobClient: Input split bytes=129
14/02/06 11:45:28 INFO mapred.JobClient: Combine input records=0
14/02/06 11:45:28 INFO mapred.JobClient: Combine output records=0
14/02/06 11:45:28 INFO mapred.JobClient: Reduce input groups=0
14/02/06 11:45:28 INFO mapred.JobClient: Reduce shuffle bytes=0
14/02/06 11:45:28 INFO mapred.JobClient: Reduce input records=0
14/02/06 11:45:28 INFO mapred.JobClient: Reduce output records=0
14/02/06 11:45:28 INFO mapred.JobClient: Spilled Records=0
14/02/06 11:45:28 INFO mapred.JobClient: CPU time spent (ms)=0
14/02/06 11:45:28 INFO mapred.JobClient: Physical memory (bytes) snapshot=0
14/02/06 11:45:28 INFO mapred.JobClient: Virtual memory (bytes) snapshot=0
14/02/06 11:45:28 INFO mapred.JobClient: Total committed heap usage (bytes)=221126656
14/02/06 11:45:28 INFO mapred.JobClient: org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter
14/02/06 11:45:28 INFO mapred.JobClient: BYTES_READ=2963382
[cloudera@localhost ~]$
Below is my MR code:
import java.io.IOException;
import java.util.*;
import java.text.SimpleDateFormat;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.util.*;

public class TestMR
{
    public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, Text>
    {
        public void map(LongWritable key, Text line, OutputCollector<Text, Text> output, Reporter reporter) throws IOException
        {
            final String[] split = line.toString().split(",");
            if (split[2].equals("Test"))
            {
                output.collect(new Text(split[0]), new Text(split[4] + "|" + split[7]));
            }
        }
    }

    public static class Reduce extends MapReduceBase implements Reducer<Text, Text, Text, DoubleWritable>
    {
        public void reduce(Text key, Iterator<Text> values, OutputCollector<Text, DoubleWritable> output, Reporter reporter) throws IOException
        {
            while (values.hasNext())
            {
                long t1 = 0, t2 = 0;
                SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
                String[] tmpBuf_1 = values.next().toString().split("|");
                String v1 = tmpBuf_1[0];
                try
                {
                    t1 = df.parse(tmpBuf_1[1]).getTime();
                }
                catch (java.text.ParseException e)
                {
                    System.out.println("Unable to parse date string: " + tmpBuf_1[1]);
                    continue;
                }
                if (!values.hasNext())
                    break;
                String[] tmpBuf_2 = values.next().toString().split("|");
                String v2 = tmpBuf_2[0];
                try
                {
                    t2 = df.parse(tmpBuf_2[1]).getTime();
                }
                catch (java.text.ParseException e)
                {
                    System.out.println("Unable to parse date string: " + tmpBuf_2[1]);
                    continue;
                }
                int vDiff = Integer.parseInt(v2) - Integer.parseInt(v1);
                long tDiff = (t2 - t1) / 1000;
                if (tDiff > 600)
                    break;
                double declineV = vDiff / tDiff;
                output.collect(key, new DoubleWritable(declineV));
            }
        }
    }

    public static void main(String[] args) throws Exception
    {
        JobConf conf = new JobConf(TestMR.class);
        conf.setJobName("TestMapReduce");
        conf.set("mapred.job.tracker", "local");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(DoubleWritable.class);

        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class);
        conf.setReducerClass(Reduce.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}
This is my first MapReduce program, and I cannot figure out why it produces no output! Please let me know if there is a problem in my code, or if there is a better way to run a MapReduce job that produces output.
FYI, the testmr.jar file is on the local filesystem, and the CSV file and output folder are on HDFS.
Solution
==============================
1. If you look at the logs, you can see that the map method is not producing any output:
14/02/06 11:45:28 INFO mapred.JobClient:     Map input records=24518
14/02/06 11:45:28 INFO mapred.JobClient:     Map output records=0
14/02/06 11:45:28 INFO mapred.JobClient:     Map output bytes=0
As you can see, the map method is receiving input records but producing zero output records, so there must be something wrong with the logic in the map method:
final String [] split = line.toString().split(",");
if(split[2].equals("Test"))
{
    output.collect(new Text(split[0]), new Text(split[4] + "|" + split[7]));
}
I suggest you test this logic with some sample input data in plain Java to check that it works, then edit the MapReduce code and run the job again.
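A minimal standalone sketch of that test is below. The sample CSV row is hypothetical (the real column layout of example.csv is not shown in the question), so substitute an actual line from your input file. If the value at index 2 is not exactly "Test" for real rows (for example, due to surrounding whitespace or quotes), the collect call is never reached, which would explain the zero map output records.

```java
public class MapLogicTest {
    public static void main(String[] args) {
        // Hypothetical sample row; replace with an actual line from example.csv.
        String line = "key1,a,Test,b,100,c,d,2014-02-06 11:45:00";

        // Same filtering logic as the map() method above.
        String[] split = line.split(",");
        if (split[2].equals("Test")) {
            // This is what the mapper would emit as (key, value).
            System.out.println(split[0] + "\t" + split[4] + "|" + split[7]);
        } else {
            // If real rows land here, the filter condition is the problem.
            System.out.println("Filtered out, split[2] was: '" + split[2] + "'");
        }
    }
}
```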
from https://stackoverflow.com/questions/21612783/my-mapreduce-program-produces-a-zero-output by cc-by-sa and MIT license