[HADOOP] Hadoop - Exception in thread "main" java.lang.NullPointerException
I am trying to use Apache Hadoop on the Windows platform by following this tutorial (the Eclipse part): http://www.codeproject.com/Articles/757934/Apache-Hadoop-for-Windows-Platform?fid=1858035. Everything goes well until the last step. When I run the program I get:
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.lang.NullPointerException
at java.lang.ProcessBuilder.start(Unknown Source)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:445)
at org.apache.hadoop.util.Shell.run(Shell.java:418)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:739)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:722)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:631)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:421)
at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:277)
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:125)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:348)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Unknown Source)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
at Recipe.main(Recipe.java:82)
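(The log4j:WARN lines are unrelated to the crash; they only mean that no log4j configuration was found on the classpath. As a sketch of one common way to silence them, not something from the tutorial, a single call at the start of main() installs a default console appender:)

// log4j 1.2 ships on the Hadoop classpath; this installs a default
// ConsoleAppender so the "no appenders" warning goes away.
org.apache.log4j.BasicConfigurator.configure();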
The code is as follows:
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

import com.google.gson.Gson;

public class Recipe {

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();
        Gson gson = new Gson();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            // Emit every whitespace-separated token with a count of 1.
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
            // Parse the same line as JSON and also emit its cookTime field.
            Roo roo = gson.fromJson(value.toString(), Roo.class);
            if (roo.cookTime != null) {
                word.set(roo.cookTime);
            } else {
                word.set("none");
            }
            context.write(word, one);
        }
    }

    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // Sum the counts emitted for each key.
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        @SuppressWarnings("deprecation")
        Job job = new Job(conf, "Recipe");
        job.setJarByClass(Recipe.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        //FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        //FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        FileInputFormat.addInputPath(job, new Path("hdfs://127.0.0.1:9000/in"));
        FileOutputFormat.setOutputPath(job, new Path("hdfs://127.0.0.1:9000/output"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
        // job.submit();
    }
}

class Id {
    public String oid;
}

class Ts {
    public long date;
}

class Roo {
    public Id _id;
    public String name;
    public String ingredients;
    public String url;
    public String image;
    public Ts ts;
    public String cookTime;
    public String source;
    public String recipeYield;
    public String datePublished;
    public String prepTime;
    public String description;
}
This only happens when I run it from Eclipse. From CMD it works fine:
javac -classpath C:\hadoop-2.3\share\hadoop\common\hadoop-common-2.3.0.jar;C:\hadoop-2.3\share\hadoop\common\lib\gson-2.2.4.jar;C:\hadoop-2.3\share\hadoop\common\lib\commons-cli-1.2.jar;C:\hadoop-2.3\share\hadoop\mapreduce\hadoop-mapreduce-client-core-2.3.0.jar Recipe.java
jar -cvf Recipe.jar *.class
hadoop jar c:\Hwork\Recipe.jar Recipe /in /out
How can I solve this problem?
Solution
==============================
1. I had the same problem and fixed it with the workaround described at http://qnalist.com/questions/4994960/run-spark-unit-test-on-windows-7.
The workaround is as follows:
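The NullPointerException at ProcessBuilder.start means Hadoop's Shell utility could not locate winutils.exe: the hadoop launcher script sets HADOOP_HOME when you run from CMD, but Eclipse does not. A minimal sketch of the fix, assuming a Hadoop install at C:\hadoop-2.3 whose bin folder contains a winutils.exe built for your Hadoop version (adjust the path to your machine):

// At the top of main(), before the Job is constructed.
// NOTE: "C:\\hadoop-2.3" is an example path; it must be a directory
// whose bin\ subfolder contains winutils.exe.
System.setProperty("hadoop.home.dir", "C:\\hadoop-2.3");

Equivalently, set the HADOOP_HOME environment variable (or pass -Dhadoop.home.dir=... as a VM argument in the Eclipse run configuration) and restart Eclipse so the new value is picked up.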
From https://stackoverflow.com/questions/27201505/hadoop-exception-in-thread-main-java-lang-nullpointerexception, licensed under cc-by-sa and MIT.