[HADOOP] Hadoop MapReduce custom Java counters: Exception in thread "main" java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
Exception in thread "main" java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
at org.apache.hadoop.mapreduce.Job.ensureState(Job.java:294)
at org.apache.hadoop.mapreduce.Job.getCounters(Job.java:762)
at com.aamend.hadoop.MapReduce.CountryIncomeConf.main(CountryIncomeConf.java:41)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
The error shows that the problem is on the line where I call getCounters (CountryIncomeConf.java:41).
I also have an enum named COUNTERS.
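The enum definition itself is not shown in the question; based on the two counter names referenced in the mapper and driver below, it presumably looks something like this (a hypothetical reconstruction):

```java
// Hypothetical reconstruction -- the question does not show the enum,
// but the code references exactly these two counter names.
public enum COUNTERS {
    ERROR_COUNT,
    MISSING_FIELDS_RECORD_COUNT
}
```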
import java.io.IOException;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.log4j.Logger;
public class CountryIncomeMapper extends Mapper<Object, Text, Text, DoubleWritable> {

  private Logger logger = Logger.getLogger("FilterMapper");

  private final int incomeIndex = 54;
  private final int countryIndex = 0;
  private final int lenIndex = 58;
  String seperator = ",";

  public void map(Object key, Text line, Context context) throws IOException,
      InterruptedException {
    if (line == null) {
      logger.info("null found.");
      context.getCounter(COUNTERS.ERROR_COUNT).increment(1);
      return;
    }
    if (line.toString().contains(
        "Adjusted net national income per capita (current US$)")) {
      String[] recordSplits = line.toString().split(seperator);
      logger.info("The data has been splitted.");
      if (recordSplits.length == lenIndex) {
        String countryName = recordSplits[countryIndex];
        try {
          double income = Double.parseDouble(recordSplits[incomeIndex]);
          context.write(new Text(countryName), new DoubleWritable(income));
        } catch (NumberFormatException nfe) {
          logger.info("The value of income is in wrong format." + countryName);
          context.getCounter(COUNTERS.MISSING_FIELDS_RECORD_COUNT).increment(1);
          return;
        }
      }
    }
  }
}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.mapred.Counters;
import org.apache.hadoop.mapreduce.Counter;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
public class CountryIncomeConf {

  public static void main(String[] args) throws IOException,
      InterruptedException, ClassNotFoundException {

    Path inputPath = new Path(args[0]);
    Path outputDir = new Path(args[1]);

    // Create configuration
    Configuration conf = new Configuration(true);

    // Create job
    Job job = new Job(conf, "CountryIncomeConf");
    job.setJarByClass(CountryIncomeConf.class);

    Counter counter =
        job.getCounters().findCounter(COUNTERS.MISSING_FIELDS_RECORD_COUNT);
    System.out.println("Error Counter = " + counter.getValue());

    // Setup MapReduce
    job.setMapperClass(CountryIncomeMapper.class);
    job.setNumReduceTasks(1);

    // Specify key / value
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(DoubleWritable.class);

    // Input
    FileInputFormat.addInputPath(job, inputPath);
    job.setInputFormatClass(TextInputFormat.class);

    // Output
    FileOutputFormat.setOutputPath(job, outputDir);
    job.setOutputFormatClass(TextOutputFormat.class);

    // Delete output if exists
    FileSystem hdfs = FileSystem.get(conf);
    if (hdfs.exists(outputDir))
      hdfs.delete(outputDir, true);

    // Execute job
    int code = job.waitForCompletion(true) ? 0 : 1;
    System.exit(code);
  }
}
Solution
==============================
1. It looks like you are trying to get the counters before the job has been submitted.
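A job's counters only exist once the job has actually run; calling job.getCounters() while the job is still in the DEFINE state throws exactly the IllegalStateException shown in the trace. A sketch of the corrected driver excerpt (a Hadoop fragment, not a standalone program) moves the counter lookup after waitForCompletion():

```java
// Submit the job and block until it finishes; only then is the
// job in a state where counters can be read.
boolean success = job.waitForCompletion(true);

// Now the counter lookup is legal.
Counter counter =
    job.getCounters().findCounter(COUNTERS.MISSING_FIELDS_RECORD_COUNT);
System.out.println("Error Counter = " + counter.getValue());

System.exit(success ? 0 : 1);
```

The rest of the driver (input/output paths, mapper class, key/value types) stays as in the question; only the order of the getCounters() call relative to waitForCompletion() changes.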
From https://stackoverflow.com/questions/31480400/hadoop-mapreduce-custom-java-counters-exception-in-thread-main-java-lang-ille (cc-by-sa and MIT license)